Government Amends AI Advisory, Eases Regulations for Industry Players

In a landmark decision aimed at fostering innovation and growth in the artificial intelligence (AI) sector, the Indian government has revised its advisory concerning the release of GenAI-based tools and features into the market. This development comes as a welcome relief for companies, which will no longer be required to seek explicit government consent before launching their AI-driven products.

Industry Applauds Revisions

The amended advisory, issued on March 15, notably removes the requirement for companies to comply within a strict 15-day timeframe. The change has been applauded by industry experts, who had expressed concerns that the initial regulations could hinder innovation.

Rohit Kumar, founding partner at The Quantum Hub, a public policy consulting firm, commended the government’s responsiveness to industry feedback. He emphasized that the earlier advisory could have significantly impeded speed to market and stifled the innovation ecosystem. Kumar also pointed out that the requirement to submit an action-taken report, now removed, had indicated that the advisory was not merely suggestive but carried the weight of a directive.

Key Revisions and Continuity in Requirements

Under the revised advisory, platforms and intermediaries with AI and GenAI capabilities, such as Google and OpenAI, are still required to obtain government approval before offering services that enable the creation of deepfakes. They must also continue to label such offerings as ‘under testing’ and secure explicit consent from users, informing them about potential errors inherent in the technology.

The directive extends to all platforms and intermediaries that use large language models (LLMs) and foundational models. These services must also not generate content that compromises the integrity of the electoral process or violates Indian law, underscoring concerns that misinformation and deepfakes could influence election outcomes.

Emphasis on Procedural Safeguards

While the advisory revision is a positive step forward, some industry executives stress the importance of procedural safeguards in policymaking. They advocate for a consultative approach to prevent knee-jerk reactions to incidents and ensure the formulation of well-considered regulations.

Executives, speaking on condition of anonymity, highlighted the need for intermediaries to exercise caution during high-risk periods such as elections. They supported the government’s push for intermediaries to be vigilant before releasing untested models and to label outputs appropriately.

The original advisory was prompted by several controversies, including criticism of Google’s AI platform Gemini for generating answers about Prime Minister Modi that were deemed inappropriate. Instances of ‘hallucinations’ by GenAI models, such as those observed on Ola’s beta GenAI platform Krutrim, also drew regulatory attention.

By addressing the concerns of industry players and striking a balance between innovation and regulation, the Indian government’s revised advisory is expected to foster a conducive environment for AI development while ensuring user protection and maintaining social harmony.