The EU publishes the first draft of regulatory guidance for general purpose AI models

by Pelican Press


On Thursday, the European Union published the first draft of its Code of Practice for general purpose AI (GPAI) models. The document, which won't be finalized until May, lays out guidelines for managing risks and gives companies a blueprint for complying and avoiding hefty penalties. The EU's AI Act came into force on August 1, but it left the specifics of GPAI regulation to be nailed down later. This draft (via TechCrunch) is the first attempt to clarify what's expected of those more advanced models, and it gives stakeholders time to submit feedback and refine the guidelines before they take effect.

Under the Act, GPAI models trained with a total computing power of more than 10²⁵ FLOPs are presumed to carry systemic risk. Companies expected to fall under the EU's guidelines include OpenAI, Google, Meta, Anthropic and Mistral, and that list could grow.
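The compute threshold above amounts to a simple cutoff check. Here is a minimal sketch of that classification, assuming only the 10²⁵ FLOPs figure from the article; the function name and the example compute figures are hypothetical.

```python
# Systemic-risk compute threshold cited in the article: 10**25 FLOPs.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if cumulative training compute exceeds the threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical examples: one model below the cutoff, one above it.
print(presumed_systemic_risk(5e24))  # False
print(presumed_systemic_risk(3e25))  # True
```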

The document addresses several core areas for GPAI makers: transparency, copyright compliance, risk assessment, and technical and governance risk mitigation. The 36-page draft covers a lot of ground (and will likely grow considerably before it's finalized), but several highlights stand out.

The code emphasizes transparency in AI development and requires AI companies to provide information about the web crawlers they used to train their models — a key concern for copyright holders and creators. The risk assessment section aims to prevent cyber offenses, widespread discrimination and loss of control over AI (the “it’s gone rogue” sentient moment in a million bad sci-fi movies).

AI makers are expected to adopt a Safety and Security Framework (SSF) to break down their risk management policies and mitigate risks in proportion to their systemic severity. The rules also cover technical areas such as protecting model data, providing failsafe access controls and continually reassessing the effectiveness of those measures. Finally, the governance section strives for accountability within the companies themselves, requiring ongoing risk assessment and bringing in outside experts where needed.

As with the EU's other tech regulations, companies that run afoul of the AI Act face steep penalties. They can be fined up to €35 million (currently $36.8 million) or up to seven percent of their global annual turnover, whichever is higher.
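The "whichever is higher" rule means the penalty cap depends on company size. A minimal sketch of that calculation, assuming only the €35 million and seven percent figures from the article (the function name and turnover values are hypothetical):

```python
# Penalty caps cited in the article.
FIXED_CAP_EUR = 35_000_000
TURNOVER_PERCENT = 7

def max_fine(global_annual_turnover_eur: int) -> float:
    """Return the maximum fine: the higher of the fixed cap
    and 7% of global annual turnover."""
    return max(FIXED_CAP_EUR, global_annual_turnover_eur * TURNOVER_PERCENT / 100)

# Hypothetical large company: 7% of EUR 2 billion (EUR 140M) exceeds EUR 35M.
print(max_fine(2_000_000_000))  # 140000000.0
# Hypothetical smaller company: the fixed EUR 35M cap dominates.
print(max_fine(100_000_000))    # 35000000
```

For very large model makers, the percentage-based cap dominates, which is why the fixed euro figure mostly matters for smaller firms.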

Stakeholders are invited to submit feedback through the dedicated Futurium platform by November 28 to help refine the next draft. The rules are expected to be finalized by May 1, 2025.


