How do you reconcile 1,000 stakeholder views on a fast-moving technology, with rules expected to be pinned down in a matter of weeks? Europe's AI Office, the body authorized to enforce the AI Act – the world's first comprehensive law on artificial intelligence – is struggling to find answers.
As deadlines loom, the new legislation, which aims to set a global standard for trustworthy AI, is generating serious conflict and strain. At stake are the legitimacy of the AI Act and the EU's ambition to be the "worldwide leader in safe AI."
Under the AI Act, providers of general-purpose AI models such as ChatGPT-4 must implement a range of risk-mitigation measures and ensure transparency and high-quality data documentation. The AI Office is drafting a Code of Practice (the "Code") that lays out practical guidelines for compliance. Because the obligations for general-purpose AI providers take effect in August 2025, a final Code of Practice is due in April. That is too little time, stakeholders say.
The AI Office is consulting around 1,000 stakeholders in writing the Code, including companies, national authorities, academic researchers, and civil society. It published a first draft in mid-November 2024, giving stakeholders only 10 days to provide feedback. Hundreds of written responses poured in. A second draft – one that acknowledged the "short time frame" – was presented on December 19, forcing stakeholders to submit feedback over the holiday season.
"The shortcomings of the AI Act, in particular the overly ambitious timeline for the application of its rules, are already becoming obvious," says Boniface de Champris, Senior Policy Manager at the Computer and Communications Industry Association (CCIA), an organization representing leading technology companies.
The third draft of the Code, expected on February 17, must grapple with deep divisions. A key area of dispute concerns the role of "external evaluators" in the AI training process. Should AI developers be required to open their models to third-party experts? Many academics and civil society organizations think so. Industry representatives fear that this degree of oversight is unjustified and technically unfeasible.
Another sticking point is training data and copyright. The latest draft of the Code states that AI developers must provide detailed information about their training data, including whether it was lawfully obtained. Companies fear these requirements will endanger trade secrets. Moreover, what counts as lawfully obtained data in the EU is already contested. Lawyers disagree over whether the text and data mining exceptions in the EU Copyright Directive – written before the rise of generative AI – permit commercial AI developers to scrape copyright-protected data.
If the Code is not completed by August, the AI Office will be forced to set the rules itself – a step that would damage the legitimacy of the AI Act. The drafting process "can either be an enormous success of participatory, co-regulatory rulemaking that could set an example for other areas, or it could fail, a major blow to the general credibility of the AI Act and, ultimately, of the European Union itself," says Laura Caroli, Senior Fellow at the Center for Strategic and International Studies.
Other parts of the AI Act face imminent deadlines. A ban on AI practices deemed to pose "unacceptable risks" applies from February 2, 2025. These include AI used for social scoring, emotion recognition in workplaces and education, and behavioral manipulation. Developers found to be noncompliant face fines of up to 7% of annual turnover.
Crucial details of which systems fall into this prohibited category remain unspecified. The AI Office aimed to publish guidelines on the bans "in due time for the entry into application of these provisions on February 2." But with only days to go, no details have been published.
Both civil society and industry are concerned. "With the AI Act set to come into force in two weeks, businesses remain unsure about critical issues," said the business group DigitalEurope in mid-January. Similarly, an open letter signed by 21 civil society organizations – including Amnesty International, European Digital Rights, and Access Now – warned that the timelines for drafting the bans were too short to allow "more targeted and useful feedback."
According to MEP Axel Voss, shadow rapporteur for the AI Act, the AI Office is currently "massively understaffed." Of its 85 total staff, only 30 work on implementing the AI Act. By contrast, the newly founded UK AI Safety Institute employs over 150 people.
It is not the first time the AI Act has faced pushback – not least from technology companies claiming it threatens innovation. Earlier this autumn, Anthropic, Apple, and Meta were among the companies that declined to sign the EU's voluntary AI Pact, which aimed to encourage early compliance with the AI Act. Donald Trump's return to the White House has further emboldened US technology companies to adopt a defiant stance toward EU legislation.
The Commission, by contrast, insists the AI Act will boost AI adoption and innovation in Europe. "You need the regulation to create trust, and this trust will encourage innovation," explains Lucilla Sioli, head of the AI Office. The idea that the law will kill innovation is an "absolute lie," according to Carme Artigas, co-chair of the United Nations AI Advisory Body, who led the negotiations on the AI Act.
Regulating a rapidly evolving, transformative technology was always going to be an ambitious goal. A transparent drafting process and clear guidance are essential. If the EU fails, skepticism of AI regulation will mount. The days of Europe's much-vaunted "Brussels Effect," setting global rules for technology, could be numbered.
Oona Lagercrantz is a Program Assistant with the Tech Policy Program at the Center for European Policy Analysis (CEPA) in Brussels. Before joining CEPA, Oona researched the ethics and governance of emerging technologies at the Centre for Climate Repair and the Cambridge Existential Risks Initiative.
Bandwidth is CEPA's online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.