India’s IT ministry has called on tech firms to seek its approval before publicly releasing “unreliable” artificial intelligence (AI) tools.
In an advisory, the IT ministry said that such tools’ "availability to the users on Indian Internet must be done so with explicit permission of the Government of India.”
Companies including Google and Microsoft-backed OpenAI have seen their generative AI (GenAI) products suffer major malfunctions in recent weeks, raising further concerns about the reliability of their outputs.
Google in particular stoked controversy in India, where a top minister last month criticised its Gemini AI chatbot for responding that Prime Minister Narendra Modi had been accused of implementing policies characterised as “fascist.”
The company said in a statement that the tool "may not always be reliable", prompting union minister of state for Electronics & Technology Rajeev Chandrasekhar to respond on X that "Safety and trust is platforms legal obligation. 'Sorry Unreliable' does not exempt from law."
The advisory applies to all AI models, including large language models (LLMs) such as Gemini, software built on GenAI, and any algorithms still in the testing or beta stage of development. Such platforms must seek explicit permission from the government before being made available to users on the Indian internet.
Chandrasekhar said that compliance with the advisory is currently voluntary, but warned that non-compliance may lead to further legislation.
India is set to hold its general elections this summer, and is one of several major jurisdictions, alongside the US and the European Union, where AI-generated disinformation is becoming a major concern for election integrity.