How the Artificial Intelligence Act Could Kickstart a Regulation Revolution

No longer relegated to the world of science fiction, artificial intelligence (AI) has penetrated the mainstream. From Isaac Asimov’s influential 1950 story collection I, Robot to modern-day algorithmic decision-making, the rise of AI systems has been fueled by advancements in computer storage, processing speed, and access to big data. The value of AI is rooted in its potential to outperform humans, and proponents have lauded its benefits for automation, analysis, and accuracy. Consequently, AI applications have infiltrated all corners of the economy, from advertising to banking to healthcare.

But the growth of AI has also produced a proliferation of detrimental effects: infringements on human rights, privacy violations, bias and racial discrimination, and barriers to critical services like public benefits and financial services. Regulation has not kept pace with technological innovation, leaving limited means to mitigate these consequences. In April 2021, however, the European Union proposed landmark legislation, the Artificial Intelligence Act (AIA), the first comprehensive AI regulation from a major regulatory body. The act, which has not yet passed, has the potential to kickstart a wave of global AI regulation.

The AIA categorizes AI technology by level of potential harm, drawing on what’s known as the Pyramid of Criticality for AI Systems. It establishes four major categories of risk: unacceptable-risk, high-risk, limited-risk, and minimal-risk. The bill would impose new requirements and thresholds on AI technologies considered high-risk, such as AI used to verify travel documents, AI used in robot-assisted surgery, and credit-scoring AI used to determine loan eligibility. It would also prohibit certain AI applications that carry an “unacceptable” level of risk, including real-time remote biometric identification systems, such as facial recognition deployed in public spaces for law enforcement. As of April 2022, the European Parliament is also pushing for an amendment that would ban predictive policing systems, which use AI to forecast individuals’ or communities’ likelihood of criminal activity and direct policing and prosecution accordingly, a practice that is highly controversial and opens the door to racial discrimination. Proponents of the bill see this categorization as a first step toward establishing a global set of standards for AI regulation and design.

The bill’s new requirements for AI range from strict rules for training, validation, and testing data to new transparency thresholds. One key mechanism the bill employs to regulate AI is the conformity assessment requirement: companies must conduct a conformity assessment of “high-risk” AI to prove it meets legislative standards before it enters the EU market. The bill’s assessment criteria center on data quality, technical documentation, record keeping, transparency, human oversight, robustness, accuracy, and cybersecurity. These assessments can be conducted by AI creators, manufacturers or purchasers, or independent third parties. They are intended to be holistic and ongoing in nature: companies are required to incorporate these standards during early design stages and must implement a monitoring system after entering the EU market (or be subject to regular audits in the case of third-party assessors). Companies that fail to comply are subject to fines of up to 30 million euros or 6 percent of global annual turnover, whichever is higher.

Although the bill primarily focuses on categorizing, assessing, and restricting AI from the side of technology companies and manufacturers, it also contains some measures focused on end users. It aims to increase user awareness and transparency by requiring companies to notify users when they’re being exposed to deepfakes, biometric recognition systems (such as facial recognition), or AI systems that claim to be able to detect human emotions. Policymakers are still considering whether the bill should establish a process for individuals to submit grievances and pursue redress when they have been harmed by an AI system.

The AIA is the first piece of legislation that aims to curb the harmful effects of AI at scale. Some state and local governments in the U.S., such as the city of San Francisco and the state of Virginia, have introduced restrictions on facial recognition, but the AIA is a sweeping piece of legislation with the potential to affect the EU’s population of nearly 450 million. While proponents praise the bill as landmark legislation, critics argue that its broad nature will result in limited tangible effects. There are also concerns that the bill will stifle technological innovation, though the European Commission has partially addressed this concern by introducing regulatory sandboxes: controlled environments in which companies can test innovative products in a limited capacity under supervisory oversight rather than the full weight of the regulation. From a practical standpoint, there is also the question of whether companies can logistically comply with the bill. Some of the mandatory checks would require external access to source code, a practice technology companies are likely to be wary of.

Although the practicalities of compliance and enforcement remain to be seen, the very breadth that critics fault, combined with the size of the EU market, positions the bill to become the global standard for AI regulation. Once the bill goes into effect, foreign companies operating in the EU will have to comply with its regulations. As AI regulation is developed in other countries, companies already adhering to the AIA will be incentivized to lobby for similar legislation, and against additional regulation, in order to reduce the cost and labor of further altering their processes and products. Companies with global footprints beyond the EU, such as tech giants Google, Meta, and Microsoft, are especially likely to oppose additional regulation that conflicts with the AIA, further priming the AIA to become the global standard. This was the case with the General Data Protection Regulation (GDPR), which established the right to data protection in the EU and kickstarted a wave of data privacy regulation that is still ongoing in countries like Brazil, India, and Japan, as well as in the U.S. states of California, Connecticut, Colorado, Virginia, and Utah.

As countries like the U.S. and China, and bodies like the EU, race to advance their AI technologies, a parallel race to establish global influence through AI policy is emerging. Exerting leadership in AI regulation is seen not only as a moral imperative but also as a way to ensure a country’s competitiveness in AI. In the case of the U.S., officials from the National Institute of Standards and Technology (NIST) argued for increased American leadership in developing international standards for AI during a House committee hearing in September. NIST is developing an AI Risk Management Framework that aims to help government and private-sector entities safely navigate and use AI. The Biden-Harris Administration also recently announced a commitment to increasing AI transparency and limiting discriminatory algorithmic decision-making as part of its Blueprint for an AI Bill of Rights, released in October. Still, until a federal approach to AI regulation is codified, American technology companies stand to be subject to foreign regulation. In the meantime, if passed, the EU’s AIA, with its “first-to-market” position, is primed to start a global revolution in AI regulation.
