AI model providers obliged to disclose information under fresh EU regulations
As of August 2, 2025, the European Union has implemented new rules for providers of General-Purpose AI (GPAI) models. These rules aim to ensure transparency, strengthen intellectual property protection, and mandate safety measures, with significant penalties for non-compliance.
Transparency
Under the new regulations, providers must disclose detailed information about their AI models, including how the models are trained, the datasets used, the model architecture, and potential risks. This information must be clear, accessible, and made available to regulators and downstream users.
Intellectual Property Protection
The Act strengthens copyright enforcement, obliging providers to maintain documentation showing respect for intellectual property rights associated with training data and model components. This aims to prevent unauthorized use of copyrighted content during both model training and deployment.
Safety Measures
Providers of the most advanced GPAI models (those trained with more than 10^25 FLOP of cumulative compute) must meet additional safety and security obligations, including systematic risk assessments, adversarial testing, cybersecurity protocols, and ensuring the safe and secure operation of AI models. These higher-risk models must also be notified to the European Commission.
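The 10^25 FLOP threshold refers to cumulative training compute. As a rough illustration, training compute is often approximated by the well-known 6 × parameters × tokens rule of thumb; the sketch below applies that approximation to entirely made-up model figures, not any real system.

```python
# Rough check against the EU AI Act's 10^25 FLOP threshold for GPAI
# models. Uses the common 6*N*D approximation
# (FLOP ~ 6 x parameters x training tokens); all figures are illustrative.

THRESHOLD_FLOP = 1e25

def estimated_training_flop(parameters: float, tokens: float) -> float:
    """Approximate total training compute in FLOP via 6*N*D."""
    return 6.0 * parameters * tokens

# Hypothetical model: 500 billion parameters, 10 trillion training tokens
flop = estimated_training_flop(500e9, 10e12)
print(f"{flop:.1e} FLOP -> "
      + ("additional obligations apply" if flop > THRESHOLD_FLOP
         else "below threshold"))
```

Under this approximation, the hypothetical model lands at 3 × 10^25 FLOP and would fall under the stricter regime; how regulators measure compute in practice may differ.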
Penalties
Violations of the EU AI Act may result in severe penalties, including fines of up to €35 million or 7% of a company’s global annual turnover, whichever is higher. Enforcement will be overseen by national authorities such as Germany’s Federal Network Agency, and there is no transition period after August 2, 2025, for the key transparency and due-diligence obligations.
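As a small arithmetic sketch of that headline penalty cap (€35 million or 7% of global annual turnover, with the higher amount applying), the hypothetical helper below shows how the cap scales with company size.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Headline penalty cap under the EU AI Act: up to EUR 35 million
    or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 2 billion turnover: 7% (EUR 140 million) exceeds
# the EUR 35 million floor.
print(max_fine_eur(2_000_000_000))   # prints 140000000.0
```

For smaller firms the flat €35 million figure dominates; past roughly €500 million in turnover, the 7% term takes over.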
Developers must now report which sources they used for their training data and state whether that data was automatically scraped from websites. Operators of these AI systems must disclose how their systems work and what data they were trained on. Particularly powerful AI models that could pose a risk to the public must also document their safety measures.
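To make the reporting duty concrete, here is a minimal sketch of what a machine-readable training-data disclosure might look like; the field names and structure are purely hypothetical assumptions for illustration, not an official EU template.

```python
import json

# Hypothetical training-data disclosure record; all field names are
# illustrative assumptions, not an official EU reporting format.
sources = [
    {"name": "licensed-news-corpus", "acquisition": "licensed"},
    {"name": "public-web-crawl", "acquisition": "web_scraping"},
]

disclosure = {
    "model_name": "example-gpai-1",   # made-up model name
    "data_sources": sources,
    # The rules require stating whether websites were scraped automatically
    "web_scraping_used": any(s["acquisition"] == "web_scraping"
                             for s in sources),
}

print(json.dumps(disclosure, indent=2))
```

A structured record like this could be handed both to regulators and to downstream users who build on the model.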
According to EU guidelines, companies must provide a contact point for rights holders and specify the measures they have taken to protect intellectual property. The European AI Office will enforce the AI Act's rules from August 2026 for new models, while models already on the market before August 2025 have until August 2027 to comply.
These rules apply specifically to General-Purpose AI systems, which can generate text, analyze language, or write code. Google has expressed concern about the new rules on intellectual property protection. The Initiative for Copyright, however, notes that the legislation imposes no obligation to name specific datasets, domains, or sources.
Providers adhering to the EU’s General-Purpose AI (GPAI) Code of Practice, a voluntary but authoritative framework developed by the European Commission, experts, and stakeholders, may benefit from reduced compliance burdens and increased legal certainty. In summary, the new EU AI Act imposes comprehensive transparency documentation, robust intellectual property compliance, and stringent safety and risk-management protocols on AI model providers, backed by significant fines for breaches.