Algorithms have controlled social systems for years. Now they are under fire for bias

A coalition of human rights groups launched legal action today against the French government over its use of algorithms to detect miscalculated welfare payments, claiming they discriminate against disabled people and single mothers.

The algorithm, in use since 2010, violates both European privacy rules and French anti-discrimination laws, say the 15 groups involved in the case, including the digital rights group La Quadrature du Net, Amnesty International, and Collectif Changer de Cap, a French group that campaigns against inequality.

“This is the first time that a public algorithm has been the subject of a legal challenge in France,” said Valérie Pras of Collectif Changer de Cap, adding that she wants this type of algorithm banned. “Other social organizations in France use scoring algorithms to target the poor. If we can get [this] algorithm banned, the same will apply to the others.”

France’s social welfare agency, CNAF, has analyzed the personal data of more than 30 million people – those claiming state support, as well as the people they live with and their family members, according to the lawsuit filed at the Supreme Administrative Court of France on October 15.

Using their personal information, the algorithm gives each person a score between 0 and 1 based on how likely they are to receive payments they’re not entitled to – either fraudulently or by mistake.
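To make the mechanism concrete, the sketch below shows how a generic risk scorer of this kind typically works: weighted personal attributes are combined and squashed into a value between 0 and 1. CNAF has not published its model, so the feature names, weights, and logistic form here are invented for illustration only and do not describe the agency's actual system.

```python
# Purely illustrative sketch of a risk-scoring model; NOT CNAF's actual
# algorithm, whose code and features have not been made public.
# All feature names and weights below are hypothetical.

from math import exp

# Hypothetical weights a logistic-regression-style scorer might use.
EXAMPLE_WEIGHTS = {
    "months_on_benefits": 0.02,
    "household_size": 0.10,
    "income_variability": 0.45,
    "bias": -2.0,  # intercept term
}

def risk_score(features: dict) -> float:
    """Map a claimant's attributes to a score between 0 and 1 via a sigmoid."""
    z = EXAMPLE_WEIGHTS["bias"] + sum(
        EXAMPLE_WEIGHTS[name] * value
        for name, value in features.items()
        if name in EXAMPLE_WEIGHTS
    )
    return 1.0 / (1.0 + exp(-z))  # higher value = flagged as higher risk

# Hypothetical claimant profile: higher-scoring cases would be
# prioritized for investigation.
print(risk_score({"months_on_benefits": 24,
                  "household_size": 3,
                  "income_variability": 1.5}))
```

The criticism in the lawsuit concerns exactly this kind of weighting: if attributes correlated with disability or single parenthood push the score upward, those groups are flagged more often.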

People with higher risk scores could then be subject to what welfare recipients across the bloc have described as stressful and intrusive investigations, which could also include having their welfare payments suspended.

“The processing carried out by the CNAF constitutes massive surveillance and a disproportionate attack on the right to privacy,” the legal documents filed in the case say. “The effects of this algorithmic processing particularly affect the most insecure people.”

CNAF has not publicly shared the source code of the model it currently uses to detect social payments made in error. But based on an analysis of older versions of the algorithm, reportedly in use until 2020, La Quadrature du Net claims the model discriminates against marginalized groups by rating disabled people, for example, as higher risk than others.
