Amnesty urges Swedish authorities to stop use of discriminatory AI system

Amnesty International has called on Försäkringskassan, Sweden’s Social Insurance Agency, to immediately stop using an AI system that disproportionately flags certain groups for welfare fraud investigations.

An investigation by investigative newsroom Lighthouse Reports and Swedish newspaper Svenska Dagbladet found that the system disproportionately flagged people born overseas, low-income earners, and individuals without university degrees.

The system uses an algorithm to assign risk scores to social security applicants in order to detect benefits fraud. According to the human rights organisation, the machine learning system has been used by the Swedish Social Insurance Agency since at least 2013.

The investigation found that Försäkringskassan conducts two types of checks. The first is a standard investigation by case workers, which does not presume criminal intent and treats errors as honest mistakes.

The second type is conducted by a control department when criminal intent is suspected.

The study found that people assigned the highest risk scores by the algorithm have been automatically subjected to investigations by fraud controllers within the welfare agency, under an assumption of “criminal intent” right from the start.
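The reporting describes a threshold mechanism: the algorithm scores each applicant, and the highest scores trigger an automatic referral to the control department. The sketch below is a minimal, hypothetical illustration of that pattern in Python; every feature name, weight, and threshold value is an assumption made for the example, not a detail of the agency's real system.

```python
# Hypothetical sketch of threshold-based risk flagging, for illustration only.
# Feature names, weights, and the cutoff are assumptions made for this
# example; they do not describe Försäkringskassan's actual model.
from dataclasses import dataclass

@dataclass
class Applicant:
    applicant_id: str
    features: dict  # e.g. {"claims_last_year": 3.0, "income_band": 1.0}

def risk_score(applicant: Applicant, weights: dict) -> float:
    # A weighted sum stands in for whatever the real model computes.
    return sum(weights.get(name, 0.0) * value
               for name, value in applicant.features.items())

def refer_to_control(applicants, weights, threshold: float) -> list:
    # Applicants scoring at or above the threshold are routed straight to
    # the control department, mirroring the automatic referral reported above.
    return [a.applicant_id for a in applicants
            if risk_score(a, weights) >= threshold]

# Example usage with made-up data:
weights = {"claims_last_year": 0.5, "income_band": -0.2}
applicants = [
    Applicant("A-001", {"claims_last_year": 4.0, "income_band": 1.0}),
    Applicant("A-002", {"claims_last_year": 1.0, "income_band": 3.0}),
]
print(refer_to_control(applicants, weights, threshold=1.5))  # ['A-001']
```

The concern raised by the investigation is precisely that a hard cutoff like this removes human judgment from the decision to treat someone as a suspected criminal rather than as a claimant who made a mistake.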

Amnesty International said fraud investigators who handle files flagged by the system wield enormous power: according to the organisation, they can trawl through a person’s social media accounts, obtain data from institutions such as schools and banks, and even interview an individual’s neighbours as part of their investigations.

“The entire system is akin to a witch hunt against anyone who is flagged for social benefits fraud investigations,” said David Nolan, senior investigative researcher at Amnesty Tech. “This is a clear example of people’s right to social security, equality and non-discrimination, and privacy being violated by a system that is clearly biased.”

Amnesty International said there are longstanding concerns about embedded bias in systems used by Sweden’s Försäkringskassan.

In 2018, a similar investigation found that an algorithm used by the agency was also discriminatory.

The Swedish Social Insurance Agency argued that the analysis was flawed and rested on dubious grounds.


