The EU's Action Plan to Curb the Power of AI Models Like ChatGPT
**EU Introduces Voluntary Code of Conduct for AI Models**
The European Union has unveiled a voluntary code of conduct for large AI models, known as the General-Purpose AI Code of Practice. This supportive framework, designed to ease compliance burdens and provide legal certainty, aims to help companies align with the EU AI Act's regulatory requirements, particularly for general-purpose and advanced AI systems.
The key provisions of the code focus on transparency, copyright, and security and safety. On transparency, AI model providers must clearly communicate how their systems operate, disclose information to users about data usage, model architecture, and intended applications, and develop tools that allow external scrutiny of model behavior.
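The code itself does not prescribe a technical format for these disclosures. Purely as an illustration, a provider might capture them as structured, machine-readable metadata; the sketch below uses hypothetical names (`ModelDisclosure`, `data_usage`, and so on) that are not taken from the code.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical structure for a transparency disclosure; the Code of Practice
# does not prescribe these field names or this format.
@dataclass
class ModelDisclosure:
    model_name: str
    provider: str
    intended_applications: list[str]
    architecture_summary: str  # e.g. "decoder-only transformer"
    data_usage: str            # high-level description of training data sources
    known_limitations: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the disclosure for publication or submission to authorities."""
        return json.dumps(asdict(self), indent=2)

disclosure = ModelDisclosure(
    model_name="example-llm-1",
    provider="Example AI GmbH",
    intended_applications=["text generation", "summarization"],
    architecture_summary="decoder-only transformer",
    data_usage="publicly available web text and licensed corpora",
    known_limitations=["may produce factually incorrect output"],
)
print(disclosure.to_json())
```

In practice, a provider would fill in the documentation form that accompanies the code (see below) rather than an ad-hoc schema like this one.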
Regarding copyright, providers must ensure that AI models respect copyright laws, involve rightsholders in the process, and, where feasible, enable attribution of generated content to original sources and provide mechanisms for redress in cases of copyright infringement.
On the matter of security and safety, companies must conduct regular risk assessments, implement robust security protocols, and manage systemic risks associated with high-impact AI models.
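For context, the AI Act presumes a general-purpose model to carry systemic risk once its cumulative training compute exceeds 10^25 floating-point operations (FLOP), alongside other designation criteria. The snippet below is a minimal, illustrative first-pass check against that threshold, not a compliance determination; the function name and usage are hypothetical.

```python
# The AI Act presumes systemic risk when cumulative training compute exceeds
# 1e25 FLOP; other designation routes exist, so this check is only a
# first-pass heuristic, not a legal assessment.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flop: float) -> bool:
    """Return True if the model meets the compute-based presumption of systemic risk."""
    return training_flop >= SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a model trained with roughly 5e25 FLOP would fall under the
# enhanced safety and security obligations described above.
print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(3e24))  # False
```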
While adherence to the code is voluntary, companies that sign on benefit from reduced regulatory scrutiny and clearer legal expectations. AI companies have up to two years to comply with the EU AI Act, and the most stringent obligations and enforcement for general-purpose AI with systemic risk will begin no earlier than August 2026.
The new code of conduct covers general-purpose AI models, that is, systems that can write texts, analyze language, or generate code, among other tasks. For existing models such as GPT-4, the new rules will apply from next year. The code also includes a documentation form to help providers record technical information in a clear manner for supervisory authorities and downstream AI developers.
The EU Commission sees the code as an important tool to help companies transition to the new European regulatory framework. Providers that do not adopt the code will have to develop their own approach to demonstrating legal compliance, which may involve greater effort. Enhanced requirements apply to particularly powerful AI models that pose systemic risks, such as facilitating the development of new chemical or biological weapons, or the risk of losing control over the technology.
The voluntary code is meant to provide a framework for providers to better fulfill their future obligations under the EU AI law. The code is still pending approval by the EU Commission and member states. The EU adopted a comprehensive AI law a year ago, with some regulations already in effect, and further rules set to come into force in early August for new AI models.
- The European Union's General-Purpose AI Code of Practice, a voluntary code of conduct for large AI models, includes provisions on transparency, copyright, and security and safety.
- Providers must ensure their AI models respect copyright laws, involve rightsholders in the process, and, where feasible, enable attribution of generated content to original sources under the code's copyright provisions.
- In the area of security and safety, companies are required to conduct regular risk assessments, implement robust security protocols, and manage systemic risks associated with high-impact AI models under the code.