Carnegie India hosted a closed-door discussion on the regulation of A.I. The discussion sought to gather inputs from a select group of stakeholders in order to map the global discourse on A.I.; the benefits and challenges of “vertical” and “horizontal” approaches to regulating A.I.; and how countries could “future-proof” their A.I. regulatory regimes, given the rapid pace of development in the field. The discussion also touched upon the auditing of A.I. applications and the risk of global fragmentation in A.I. regulation.
Discussion Highlights
“Vertical” vs. “horizontal” approaches to A.I. regulation
At the discussion, the idea of “vertical” and “horizontal” approaches to A.I. regulation was explored. Under a “vertical” approach, there is no single overarching regulation; instead, regulators target specific A.I. applications or sets of applications. This contrasts with the “horizontal” approach, in which a generalized, singular approach applies to all end-uses of A.I. While the “vertical” approach may be more arduous for regulators, owing to the need to regulate each application separately, it is also more iterative and accordingly helps build a vast pool of bureaucratic know-how on regulating A.I. The “horizontal” approach, on the other hand, may be more flexible. However, neither approach can fully stand on its own. China’s A.I. algorithm registry, for example, serves as a more horizontal tool, given its overall focus on algorithms irrespective of sector. Similarly, while the EU has adopted a more general regulatory approach to A.I., it will eventually need to look at individual sectors and adopt more “vertical” tools for targeted regulation.
Whether A.I. laws can be “future-proofed”
While the field of A.I. is developing quickly, little has changed substantially enough to require rethinking the overall approach to regulation. Generative A.I. has certainly been a disruption, but the regulatory approach can be tailored to it: different tiers of risk can be attached to different types of A.I., and a process-based approach that continues to emphasize accountability and transparency measures will remain relevant.
The degree of auditing required prior to the release of an A.I. application
Regulators may not always be able to understand the implications of A.I. as well as engineers do. Accordingly, auditing is a significant part of the process. However, where does one draw the line? No matter how much auditing is done, there is always the real prospect of “jailbreaks” or bad actors in the system. There is no guarantee that an A.I. model will be completely safe and will never behave in unexpected ways. Trade-offs will have to be made: what is the benefit of the A.I. application in question; what is the risk involved; and how expensive will the audit be. For now, the capacity gap between technology providers and regulators is so large that it may be worthwhile to consider an intermediary body that is licensed by the government but more technically oriented.
A co-regulatory approach is better than a self-regulatory or a top-down approach
There was a discussion on whether the correct approach is a top-down regulatory approach by the government or a more self-regulatory approach by industry. It was pointed out that the five-point blueprint for A.I. governance espoused by Brad Smith, Vice Chair and President of Microsoft, was quite relevant. Serious consideration should go into exploring a framework that involves a combination of government, the private sector, civil society, and academia. It would also help to know beforehand the critical things that need to be governed; defining high-risk A.I. systems that control critical infrastructure would be a good start.
Risk of fragmentation and the possible consequences
A.I. regulation depends on local context. This is especially the case with generative A.I., which creates novel content and may be governed differently across jurisdictions. Accordingly, there is a risk of fragmentation across global markets. Fragmentation on issues like content moderation, however, does not appear to be a major problem, since developing a single set of content-moderation rules was always going to be difficult. Countries and companies will nonetheless look for harmonization of regulatory approaches when it comes to the existential or catastrophic risks posed by A.I. Despite fragmentation, there can be global cooperation on the level of acceptable risk and on identifying the industries that can adopt this approach. For example, given the transnational nature of aviation, agreement may be forthcoming in that area. That said, agreement will take time. Even within the EU, there is divergence between the approaches taken by the European Parliament, the European Commission, and the European Council. Accordingly, global harmonization will be an ongoing process.