India has recently announced a new set of rules for regulating artificial intelligence (AI) applications, which has raised concerns among the country's AI industry players and experts. The rules, proposed by the National Institution for Transforming India (NITI Aayog) and spearheaded by Rajeev Chandrasekhar, aim to ensure the safety, security, privacy, and ethics of AI systems. However, they also impose several restrictions and obligations on the developers and users of AI, which could hamper the innovation and growth of the sector.
One of the most controversial aspects of the rules is the requirement for platforms to seek government approval before deploying untested AI models, and to clearly label such models for users. This means that any new or experimental AI system that has not been validated by a third-party agency or a government body will have to undergo a lengthy and cumbersome process of scrutiny before it can reach the market. Moreover, platforms will have to inform users about the potential risks and limitations of such untested AI models, which could discourage adoption.
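To see what the labelling obligation might look like in practice, here is a minimal Python sketch of how a platform could tag every response produced by an unapproved model. All field and function names here are hypothetical illustrations; the rules themselves define no such schema.

```python
from dataclasses import dataclass

@dataclass
class ModelDeployment:
    """Metadata a platform might keep per deployed model (hypothetical schema)."""
    model_name: str
    government_approved: bool    # cleared by the proposed approval process
    third_party_validated: bool  # validated by an external agency

def labelled_response(deployment: ModelDeployment, output: str) -> dict:
    """Wrap a model's output with the disclosure the rules would require."""
    untested = not (deployment.government_approved or deployment.third_party_validated)
    return {
        "model": deployment.model_name,
        "output": output,
        # Untested models must be clearly labelled as such to the user.
        "label": "UNTESTED AI MODEL" if untested else "APPROVED AI MODEL",
        "notice": ("This output was generated by an under-tested AI model "
                   "and may be unreliable.") if untested else None,
    }
```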
The rules also mandate that platforms ensure their AI systems are fair, transparent, accountable, and non-discriminatory. They must provide explanations for the decisions and actions taken by their AI systems, as well as grievance redressal mechanisms in case of any harm or adverse impact. Additionally, they must ensure that their AI systems respect the privacy and consent of users, and do not collect or process any personal or sensitive data without permission.
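The consent requirement, at least, is straightforward to gate in code. A minimal sketch, assuming a hypothetical record schema in which each user carries a set of consented purposes:

```python
def process_with_consent(record: dict, purpose: str) -> dict | None:
    """Process a user's data only for purposes the user explicitly consented to.

    The schema (a "consents" set per user record) is hypothetical, for illustration.
    """
    if purpose not in record.get("consents", set()):
        return None  # no consent for this purpose, so no collection or processing
    return {"user_id": record["user_id"], "purpose": purpose, "status": "processed"}

# Example: a user who consented only to recommendations cannot be ad-targeted.
user = {"user_id": 42, "consents": {"recommendations"}}
assert process_with_consent(user, "ad_targeting") is None
```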
While these principles may sound reasonable and desirable, they also pose significant challenges for the implementation and enforcement of the rules. For instance, how will the platforms determine whether their AI systems are fair or biased? How will they provide meaningful explanations for complex and opaque AI models? How will they ensure that their AI systems do not violate the privacy or consent of the users, especially when they rely on large and diverse data sources? How will they handle the liability and accountability issues in case of any harm or damage caused by their AI systems?
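To illustrate just one of these difficulties: the rules prescribe no fairness metric, and even the simplest candidate, demographic parity, leaves the hard judgment open. The sketch below (with made-up data) computes the gap in positive-decision rates between two groups; whether any given gap counts as discriminatory is exactly what the rules do not say.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary model decisions (0/1), e.g. loan approvals
    group:  binary protected attribute (0/1) identifying the two groups
    """
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Made-up example: group 0 approved at 70%, group 1 at 50%.
y_pred = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0,   # group 0
                   1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # group 1
group = np.array([0] * 10 + [1] * 10)
print(demographic_parity_difference(y_pred, group))  # ~0.2
```

Even with a metric in hand, a platform must still decide which attributes are protected, what gap is acceptable, and whether parity is even the right notion of fairness for its use case.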
These questions are not easy to answer; they demand deep technical expertise, legal clarity, and ethical judgment. Platforms may have to invest substantial time, money, and resources to comply with these rules, which could affect their profitability and competitiveness. Moreover, they may face legal risks and penalties if they fail to adhere to the rules, exposing them to litigation and reputational damage.
The rules also create a distinction between large and small platforms, based on annual turnover and user base. Large platforms, defined as those with annual turnover above Rs. 50 crore or more than 50 lakh (5 million) users per month, will have to follow stricter norms and obligations than small platforms. They will also have to register with a designated authority and submit periodic reports on their AI activities.
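The classification itself is mechanical. A minimal sketch, assuming (as the text suggests) that crossing either threshold alone places a platform in the large category:

```python
def is_large_platform(annual_turnover_crore_inr: float, monthly_users: int) -> bool:
    """Classify a platform under the thresholds described above."""
    LARGE_TURNOVER_CRORE = 50          # Rs. 50 crore annual turnover
    LARGE_MONTHLY_USERS = 5_000_000    # 50 lakh users per month
    return (annual_turnover_crore_inr > LARGE_TURNOVER_CRORE
            or monthly_users > LARGE_MONTHLY_USERS)

# A startup with modest revenue but a viral app would still be "large":
assert is_large_platform(annual_turnover_crore_inr=10, monthly_users=6_000_000)
```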
This differentiation may create an uneven playing field for the industry, as it could favor the small platforms over the large ones. The small platforms may enjoy more flexibility and freedom in developing and deploying their AI systems, while the large platforms may face more restrictions and regulations. This could create a disincentive for the large platforms to innovate and experiment with new AI applications, as they may fear losing their market share or facing legal troubles.
The rules also overlook the fact that many AI applications are developed and used in collaboration with multiple stakeholders, such as researchers, developers, providers, users, and regulators. They do not specify how these stakeholders should coordinate and cooperate to ensure the compliance and governance of their AI systems. Nor do they address cross-border issues such as data transfer, jurisdiction, and sovereignty, which are crucial given the global nature of AI.
In conclusion, India's new AI rules may have the noble intention of ensuring the safety, security, privacy, and ethics of AI systems. However, they also have serious flaws and limitations that could undermine the innovation and growth of the industry. The rules may create more barriers than opportunities for platforms to develop and deploy their AI systems, and more confusion than clarity for stakeholders trying to understand their roles and responsibilities.