Algorithms that change people’s lives - for example when estimating students’ grades - should now meet strict standards of ethics and competence, according to a new report from BCS, the Chartered Institute for IT.
The professional body's new report calls for the government's use of data science to meet public service standards of openness, accountability and objectivity, to avoid another ‘computer says no’ moment in education and other fields.
It recommended that the government endorse and support the professionalisation of data science, in line with a plan already being developed collaboratively by the Royal Statistical Society, BCS, The Royal Society and others.
This would mean that algorithms whose creators did not meet a strict code of practice for ethics, professionalism and competence could not be used to decide issues such as exam grades or to estimate the outcomes of pandemics such as COVID-19.
The BCS study concluded that policy makers should ensure "the best possible ethical and professional practice in algorithm design, development and testing is ubiquitous at information system level across government and industry".
All algorithms and data with high-stakes consequences, such as estimating grades or triggering lockdowns, should be put through an impact assessment against widely recognised ethical standards and opened to public scrutiny before ‘going live’, the BCS report added.
Bill Mitchell, director of policy at BCS, the Chartered Institute for IT, said: “The exam crisis has given algorithms an undeserved reputation as ‘prejudice engines’ when in fact ethically designed algorithms fed on high quality data can result in massive benefit to our everyday lives.
“Lack of public confidence in data analysis will be very damaging in the long term - information systems that rely on algorithms can be a force for good but, as students found out to huge cost, we have been using them to make high-stakes judgements about individuals based on data that is subjective, uncertain and partial."
Mitchell concluded: “We need true oversight of the way algorithms are used, including identifying unintended consequences, and the capability at a system level to remedy harm that might be caused to an individual when something goes wrong."