Italy’s parliament has passed a new law regulating the use of artificial intelligence (AI), aligning national policy with the European Union’s recently adopted AI Act.
The legislation introduces comprehensive rules for the deployment of AI across sectors including healthcare, employment, public administration, and the justice system, with the stated aims of ensuring safety, privacy, and security while fostering innovation.
A key provision restricts access to AI technology for young people, requiring parental consent for those under 14. The law designates the Agency for Digital Italy and the National Cybersecurity Agency as the primary national authorities overseeing AI development, while sectoral regulators such as the Bank of Italy and Consob retain their existing powers.
The legislation introduces criminal penalties for the creation and dissemination of harmful AI-generated content, such as deepfakes, with offenders facing prison sentences of up to five years. This aligns with broader European efforts to combat the misuse of generative AI and protect against digital disinformation.
To support domestic innovation, the government has allocated around €1 billion to a state-backed venture capital fund, targeting equity investments in small and medium-sized enterprises as well as larger companies active in AI, cybersecurity, quantum technologies, and telecommunications. This move is consistent with the EU’s strategy to bolster technological sovereignty and reduce reliance on non-European providers.
In healthcare, the law permits the use of AI to assist with diagnosis and patient care under specific conditions, but mandates that doctors retain ultimate decision-making authority. Patients must be informed whenever AI technology is used in their treatment, upholding transparency and patient rights.
Alessio Butti, undersecretary for digital transformation, stated that the law “brings innovation back within the perimeter of the public interest, steering AI toward growth, rights and full protection of citizens.”