Ethics needed in AI development, a parliamentary report states

According to the House of Lords, the UK is in a “strong position” to be a world leader in the development of artificial intelligence, potentially delivering a major boost to the economy.

However, the House of Lords Select Committee on Artificial Intelligence warns that there are ethical dangers in the advancement of AI, and wants a cross-sector code to be established that can be adopted both nationally and internationally. The Committee’s five suggested principles for such a code are:
• Artificial intelligence should be developed for the common good and benefit of humanity.
• Artificial intelligence should operate on principles of intelligibility and fairness.
• Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
• All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
• The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

In addition, it believes the Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK.

Accepting that AI will have a major impact on employment, the report notes that some jobs will be enhanced by AI, but that many will disappear and new, as yet unknown, jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI.

In light of recent scandals, the Select Committee has suggested that individuals will need to be given greater personal control over their data and the way it is collected and used. It also warns that the power of AI could become concentrated in the hands of too few companies.

Ominously, it notes that it is “not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed”. The Committee recommends that the Law Commission investigate this issue.

The Chairman of the Committee, Lord Clement-Jones, said: “AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”
