On June 14, the European Parliament voted to approve its own draft of the AI Act, legislation that has been in the works for two years with the goal of establishing common rules for the regulation of AI.
The bill is expected to pass before the end of the year, after a final round of negotiations to reconcile the separate drafts produced by the European Parliament, the Commission, and the Council. It will be the first law in the world to regulate artificial intelligence in practically all spheres of society, with the sole exception of defense.
It is noteworthy that, of all possible approaches to regulating AI, this legislation is built entirely around the idea of risk. What is regulated is not AI itself but the way it is applied in particular areas of society, each of which raises its own set of potential problems. The four risk categories, each subject to its own set of legal requirements, are: unacceptable, high, limited, and minimal.
Systems identified as “high risk” will be subject to disclosure requirements
Systems deemed to pose a threat to EU values or fundamental rights will be labeled an “unacceptable risk” and banned. One example is AI-based “predictive policing”: systems that use a person’s personal data to assess, through AI-driven risk scoring, how likely that person is to commit a crime.
A more contested case is the use of facial recognition on real-time street camera feeds. Parliament added it to the list of unacceptable risks, meaning it would be permitted only after a crime has been committed and with judicial authorization.
Systems identified as “high risk” will be subject to disclosure requirements and will have to be registered in a dedicated database. They will also be subject to various monitoring and auditing obligations.
Applications that can restrict access to services in essential sectors such as finance, healthcare, education, and employment will be labeled high risk. Using AI in these fields is not considered undesirable in itself, but oversight is necessary because of the potential for harm to safety or fundamental rights.
“Limited risk” AI systems will be subject to minimal transparency requirements
The idea is that, at least for those of us living in the EU, we should be able to trust that any software making decisions about our mortgage has been thoroughly checked for compliance with European law, ensuring we are not discriminated against on the basis of protected characteristics such as sex or ethnic background.
“Limited risk” AI systems will be subject to minimal transparency requirements. Operators of generative AI systems, such as bots that produce text or images, will also have to inform users that they are interacting with a machine.
The legislation’s long journey through the European institutions, which began in 2019, has made it increasingly explicit and detailed about the potential hazards of deploying AI in sensitive contexts, and about how those risks can be monitored and mitigated. The lesson is clear: to accomplish anything, we must be specific. Even so, much work remains to be done.
By contrast, we have recently seen petitions demanding that we mitigate a purported “risk of extinction” posed by AI, without providing any further detail. Many politicians have echoed these views. That generic, extremely long-term risk is quite different from the one that motivated the AI Act, because it offers no specifics about what we should watch for or what we could do right now to guard against it.
EU draft law to regulate AI: The act will go into effect in two or three years
If “risk” is defined as the “expected harm” that something could cause, then it makes sense to concentrate on scenarios that are both likely and damaging, because these pose the greatest risk. Events that are extremely improbable, such as an asteroid strike, should not take precedence over ones far more likely to occur, such as the effects of pollution.
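To make this concrete: under that definition, risk is simply the probability of an event multiplied by the harm it would cause. The figures below are invented purely for illustration, not drawn from the act itself:

\[ \text{risk} = p \times h \]
\[ \text{improbable catastrophe: } p = 10^{-7},\; h = 10^{9} \;\Rightarrow\; \text{risk} = 10^{2} \]
\[ \text{likely, moderate harm: } p = 0.9,\; h = 10^{6} \;\Rightarrow\; \text{risk} = 9 \times 10^{5} \]

On these hypothetical numbers, the likely moderate harm outweighs the improbable catastrophe by nearly four orders of magnitude, which is why a risk-based law concentrates on the former.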
Seen in this light, the draft regulation the EU Parliament has just approved has less flair but more substance than some recent AI warnings. It attempts to strike a delicate balance between protecting rights and values and fostering innovation, addressing both risks and remedies. While far from perfect, it at least prescribes concrete actions.
The next step in the legislative process is the trilogues: three-way negotiations in which the separate drafts of the Parliament, Commission, and Council are merged into a single final text. Compromises are expected at this stage. The resulting law will then be approved by a final vote, most likely before the end of 2023, before the next round of European election campaigning begins.
The act will then go into effect two or three years later, and any company doing business inside the EU will have to comply with it. This long timetable raises some problems of its own: we do not know what AI, or the world, will look like in 2027.
This legislation was initially suggested by Ursula von der Leyen, the president of the European Commission
This legislation was first proposed by Ursula von der Leyen, the president of the European Commission, in the summer of 2019, shortly before a pandemic, a war, and an energy crisis. It was also before ChatGPT made an existential threat from AI a regular topic of discussion among lawmakers and in the media.
Still, the act’s language is broad enough that it may remain applicable for some time, and it could influence how companies and researchers approach AI outside Europe as well.
Every technology carries hazards, however, and rather than waiting for something bad to happen, academic and policymaking institutions are trying to anticipate the consequences of research. This represents progress compared with how we adopted earlier technologies, such as fossil fuels.