In a landmark move, the European Union has finalized an unprecedented regulatory accord governing the development of artificial intelligence (AI). The decision positions Europe at the forefront of efforts to regulate this rapidly advancing technology.
After an exhaustive 72-hour deliberation, European Union legislators reached a momentous consensus on the AI Act, an expansive safety and development bill that stands as the most comprehensive of its kind to date, according to reports from The Washington Post. Full details of the agreement were not immediately available.
Dr. Dragoș Tudorache, a Romanian lawmaker who co-led the negotiation on the AI Act, remarked to The Washington Post, "This legislation will set a standard, a paradigm, for numerous other jurisdictions out there. We must exercise extra care in its drafting, as it is poised to exert influence on many others."
The proposed regulations govern how forthcoming machine learning models may be developed and distributed within the trade bloc, affecting their use in areas ranging from education and employment to healthcare. AI development is categorized into four tiers based on the societal risk each potentially poses: minimal, limited, high, and prohibited.
Prohibited uses encompass anything contravening the user's consent, targeting protected social groups, or involving real-time biometric tracking such as facial recognition. High-risk applications include those "intended to be used as a safety component of a product" or designated for critical sectors like infrastructure, education, legal proceedings, and employee recruitment. Chatbots like ChatGPT, Bard, and Bing fall under the "limited risk" classification.
"The European Commission once again has taken a bold stance in addressing emerging technology, akin to its approach with data privacy through the GDPR," stated Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab at UC Berkeley, to Engadget in 2021. "The proposed regulation is intriguing as it tackles the issue from a risk-based perspective, akin to what has been proposed in Canada's AI regulatory framework."
Recent negotiations on the proposed rules faced disruptions from France, Germany, and Italy, which obstructed talks on the regulations governing how EU member nations could develop foundation models, general-purpose AIs that serve as bases for more specialized applications. OpenAI's GPT-4 is one such foundation model; ChatGPT, GPTs, and other third-party applications derive from its underlying functionality. The three nations feared that stringent EU regulation of generative AI models could hamper member states' efforts to develop them competitively.
The European Commission had previously addressed the challenges posed by emerging AI technologies through various initiatives. These include the first European Strategy on AI and Coordinated Plan on AI in 2018, followed by the Guidelines for Trustworthy AI in 2019. In the subsequent year, the Commission released a White Paper on AI and a Report on the safety and liability implications of Artificial Intelligence, the Internet of Things, and robotics.
The European Commission's draft AI regulations emphasize that artificial intelligence should be a tool serving people to enhance human well-being, rather than an end in itself. The rules aim to place people at the core, ensuring technology use is safe and compliant with the law, upholding fundamental rights.
Furthermore, the regulations advocate for a balanced, proportionate approach, avoiding unnecessary constraints on technological development. This is crucial because, despite AI's omnipresence in daily life, it's impossible to predict all potential future uses or applications.
More recently, the European Commission (EC) has begun collaborating with industry stakeholders on a voluntary basis to formulate internal rules. These rules aim to establish common ground so that companies and regulators can operate under mutually agreed-upon guidelines. EC industry chief Thierry Breton affirmed in a May statement, "[Google CEO Sundar Pichai] and I agreed that we cannot afford to wait until AI regulation becomes applicable. We must collaborate with all AI developers to develop an AI pact on a voluntary basis ahead of the legal deadline." Similar discussions have been undertaken with US-based corporations.
