The UK public feel strongly that the public sector should be transparent about its use of algorithms and how they work, according to a new study by the Centre for Data Ethics and Innovation (CDEI).
The CDEI was set up by the government in 2018 to advise on the governance of AI and data-driven technology.
The study's 36 participants, whom the CDEI said represented a diverse sample of the UK population, had almost no awareness of the use of algorithms in the public sector, with the exception of a few who remembered the use of an algorithm to award A-level results in 2020.
Once introduced to examples of potential public sector algorithm use, participants became more engaged and felt that, in principle, information about algorithmic decision-making should be made available to the public, including both citizens and experts.
The researchers focused on three use-cases to test a range of emotional responses: policing, parking, and recruitment.
The information participants wanted included a description of the algorithm, why an algorithm was being used, contact details for more information, the data used, human oversight, potential risks, and the technicalities of the algorithm.
For use-cases with lower potential risk and impact, participants said passively available transparency information, that is, information individuals can seek out if they want to, is acceptable on its own. But for use-cases with higher potential risk and impact, they also wanted basic information to be actively communicated upfront, notifying people that an algorithm is being used and why.
The topic of algorithmic bias is beginning to attract more mainstream attention at large organisations; in May, Twitter's own research found that its AI-powered image-cropping tool was biased towards excluding black people.