AI Crackdown: India Demands Approval Before Tech Firms Release New AI Tools

In a surprising move that has the tech world buzzing, the Indian government has issued a strict new advisory requiring major technology companies to obtain official approval before launching new artificial intelligence tools or services in the country. The advisory states that deploying under-trial or unreliable AI systems in India now requires “explicit permission” from the authorities.

The directive, issued on Friday by the Ministry of Electronics and IT, has raised eyebrows around the globe. Although the advisory is not legally binding, the government has made clear that it signals an intent to regulate the emerging AI sector much more tightly going forward.

“We are doing it as an advisory today asking you to comply with it,” stated Rajeev Chandrasekhar, India’s Deputy IT Minister. “This is signaling that this is the future of regulation.”

The sudden crackdown stems from a recent controversy over Google’s Gemini chatbot, which Indian officials criticized for an “unreliable” answer characterizing Prime Minister Narendra Modi’s policies as “fascist.” The new rules require tech companies to ensure their AI tools do not “threaten the integrity of the electoral process” ahead of India’s upcoming general elections.

But the broad scope of the advisory has many in the industry worried it could stifle innovation in artificial intelligence, an arena where India aims to be a major player on the global stage. Critics argue such heavy-handed regulation could hamper Indian startups and hinder the nation’s ability to compete with other AI powerhouses.

“This is terrible and demotivating,” lamented Pratik Desai, founder of agri-tech startup KisanAI. “We were so excited to bring AI to help Indian farmers, but now I’m not sure if we can move forward.”

The new policy represents a stark reversal from India’s previous stance welcoming AI development with relatively relaxed oversight. Now firms must certify their AI systems are free of bias or discrimination, and clearly label outputs that may be flawed or inaccurate.

Major tech leaders have strongly criticized the move, with some calling it disastrous for India’s AI ambitions. But government officials insist the regulations are critical to prevent misuse and ensure public trust.

“Safety and trust is platforms’ legal obligation,” Chandrasekhar asserted. “‘Sorry, unreliable’ does not exempt from law.”

As the global artificial intelligence race intensifies, all eyes are now on India to see whether this pioneering crackdown spurs other nations to adopt tougher rules for AI governance. The implications could be far-reaching for companies and consumers alike.

