On Wednesday, the European Union’s parliament approved the world’s first major set of legal regulations governing artificial intelligence, a technology at the vanguard of tech investment.
In early December, EU negotiators reached a provisional political accord, which the Parliament approved on Wednesday with 523 votes in favour, 46 against, and 49 abstentions.
“Europe is NOW a global standard-setter in AI,” wrote Thierry Breton, EU Commissioner for Internal Market, on X.
Roberta Metsola, President of the European Parliament, lauded the measure as groundbreaking, saying it will promote innovation while protecting fundamental rights.
“Artificial intelligence is already very much part of our daily lives. Now, it will be part of our legislation too,” she wrote in a social media post.
Dragos Tudorache, a lawmaker who led the European Parliament’s negotiations on the deal, praised it, but highlighted that the major challenge remains execution.
The EU AI Act, first proposed in 2021, sorts the technology into risk categories ranging from “unacceptable” (which would result in its ban) to high, medium, and low hazard. The legislation is expected to enter into force at the end of the legislative term in May, after clearing final checks and receiving endorsement from the European Council. Implementation will be staggered beginning in 2025.
Some EU countries had previously pushed for self-regulation rather than government-led restrictions, citing fears that heavy-handed rules could hamper Europe’s ability to compete with Chinese and American digital giants. Among them were Germany and France, home to some of Europe’s most promising AI startups.
EU AI Act sorts tech into risk tiers from “ban-worthy” to low-hazard
Last week, landmark EU competition rules aimed at reining in U.S. tech giants took effect. Under the Digital Markets Act, the EU can crack down on anti-competitive practices by giant digital corporations and force them to open up their services in sectors where their dominant position has stifled smaller players and limited user choice. Six companies have been designated as “gatekeepers”: Alphabet, Amazon, Apple, Meta, Microsoft, and China’s ByteDance.
Concerns have been growing about the potential for exploitation of artificial intelligence, even as heavyweight players like Microsoft, Amazon, Google and chipmaker Nvidia beat the drum for AI investment.
According to Morgan Stanley’s Lisa Shalett, AI investors should shift their focus to adopters outside of technology. Governments, meanwhile, are concerned that deepfakes, AI-generated images and videos depicting fabricated events, may be misused in the run-up to several crucial elections around the world this year.
Some AI players are already self-regulating to guard against deception. Google said on Tuesday that it will limit the kinds of election-related queries its Gemini chatbot can answer, adding that the changes have already been rolled out in the United States and India.
“The AI Act has pushed the development of AI in a direction where humans are in control of the technology, and where the technology will help us leverage new discoveries for economic growth, societal progress, and to unlock human potential,” Tudorache said on social media on March 12.
“The AI Act is not the end of the journey, but, rather, the starting point for a new model of governance built around technology. We must now focus our political energy in turning it from the law in the books to the reality on the ground,” he added.
Legal experts hail new AI law as global game-changer, predict worldwide ripple effect
Legal professionals described the act as a significant milestone in international AI regulation, with the potential for other countries to follow suit.
“Once again, it’s the EU that has moved first, developing a very comprehensive set of regulations,” said Steven Farmer, partner and AI specialist at international law firm Pillsbury.
“The bloc moved early in the rush to regulate data, giving us the GDPR, which we are seeing a global convergence towards,” he continued, referring to the EU’s General Data Protection Regulation law. “The AI Act seems to be a case of history repeating itself.”
Mark Ferguson, a public policy specialist at Pinsent Masons, added that the act’s approval was only the beginning and that businesses will need to work closely with lawmakers to understand how it will be implemented as the technology continues to evolve rapidly.