All 193 member states of the UN Educational, Scientific and Cultural Organisation (UNESCO) have adopted an agreement defining common principles to address ethical issues related to AI.
A first draft of the 141 measures, developed by a group of 24 AI specialists known as the Ad Hoc Expert Group (AHEG), was published in May 2020.
The recommendation said it “approaches AI ethics as a systematic normative reflection, based on a holistic and evolving framework of interdependent values, principles and actions that can guide societies in dealing responsibly with the known and unknown impacts of AI technologies on human beings, societies, and the environment and ecosystems, and offers them a basis to accept or reject AI technologies”.
It said it addresses “ethical questions regarding AI systems pertaining to all stages of the AI system life cycle, ranging from research, design, and development to deployment and use, including maintenance, operation, trade, financing, monitoring and evaluation, validation, end-of-use, disassembly, and termination”.
The recommendation said it did not provide “one single definition of AI” as “such a definition would need to change over time, in accordance with technological developments”.
Instead, it said it was aimed at addressing “those features of AI systems that are of central ethical relevance and on which there is large international consensus”.
The recommendation provides guidance on ten policy areas: ethical impact assessment, ethical governance and stewardship, data policy, development and international cooperation, environment and ecosystems, gender, culture, education and research, economy and labour, and health and social well-being.
The recommendation said its objectives were:
1. to provide a universal framework of values, principles and actions to guide States in the formulation of their legislation, policies or other instruments regarding AI
2. to guide the actions of individuals, groups, communities, institutions and private sector companies to ensure the embedding of ethics in all stages of the AI system life cycle
3. to promote respect for human dignity and gender equality, to safeguard the interests of present and future generations, and to protect human rights, fundamental freedoms, and the environment and ecosystems in all stages of the AI system life cycle
4. to foster multi-stakeholder, multidisciplinary and pluralistic dialogue about ethical issues relating to AI systems
5. to promote equitable access to developments and knowledge in the field of AI and the sharing of benefits, with particular attention to the needs and contributions of low- and middle-income countries (LMICs), including least developed countries (LDCs), landlocked developing countries (LLDCs) and small island developing states (SIDS)
The news comes as the UK moves to limit the potential threats posed by AI; the UK government launched one of the world’s first national standards for algorithmic transparency at the end of last month.