To curtail the “potential risks to society” posed by AI, a non-profit funded in part by entrepreneur Elon Musk is advocating for a pause of at least six months in the training of “powerful AI systems”.
A letter issued by the Future of Life Institute has garnered over 1,000 signatures and warns of the risk of economic and political disruption from “human-competitive” AI systems.
Apple co-founder Steve Wozniak, philosopher Yuval Noah Harari, and Emma Bluemke of the Centre for the Governance of AI, who holds a PhD in Engineering from the University of Oxford, are among the letter’s signatories.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter read. "AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."
The UK government recently decided against introducing formal AI legislation, instead publishing a set of oversight principles for existing regulators.
While some may welcome this lighter-touch approach, the announcement may irk experts who believe AI governance should be brought under the purview of a single regulator.