Implementation of the AI Act will determine Europe's chances with AI

  • EU member states set to approve the AI Act today
  • Bitkom: "AI Act must not become an AI brake"

Berlin, 02 February 2024 - On the occasion of today's vote on the AI Act in the Committee of Permanent Representatives of the EU member states, the digital association Bitkom is calling for a legally secure and innovation-friendly implementation. The AI Act leaves several key issues to the member states. "In its current form, the AI Act will hardly provide more legal certainty for the development and use of AI. A practicable interpretation and application of its provisions in the EU member states is crucial," says Susanne Dehmel, member of the Bitkom Executive Board. "During implementation, the German government is called upon to focus on the opportunities artificial intelligence offers for business, society and public administration. We must not repeat the mistakes of the GDPR." Only then can the EU achieve its self-imposed goal of becoming a global leader in trustworthy AI. "The AI Act imposes requirements on companies in various areas that duplicate or even contradict existing rules, for example in the Medical Devices and Machinery Directives. If we want companies in Europe and Germany to continue to develop and use AI, the implementation of the AI Act must not create unnecessary bureaucratic hurdles and must be consistent with existing legislation," says Dehmel. "The AI Act and its implementation will determine whether European companies and start-ups can keep pace with the global innovation drivers of AI, or even take the lead in this era-defining technology."

Since the start of negotiations in April 2021, Bitkom has supported the fundamental aim of the AI Act: to strengthen trust in AI and thereby promote its use. This includes, on the one hand, the original core concept of a risk-based approach with a narrow and clearly defined high-risk category. On the other hand, the digital economy has also backed the proven model of the so-called "New Legislative Framework", in which the legislator defines protection goals and industry implements them in concrete terms through standards. The current compromise on the AI Act, however, falls short on both counts. For example, the rigid and far-reaching requirements for so-called "general purpose AI models" mark an unnecessary departure from the risk-based approach. In addition, the flexibility that mandatory regulated self-regulation offers has not been sufficiently exploited, and the opportunity to connect with international self-regulation approaches such as the G7 AI Code of Conduct has been missed.

Bitkom expressly warns against divergent interpretations of the AI Act within the EU. Start-ups and small and medium-sized enterprises in particular would face almost insurmountable problems if they had to align their offerings with 27 different national AI regimes. "The AI Act must not become an AI brake. Under no circumstances should Germany push the scope for market intervention to the limits of what is legally permissible, as we experienced with the GDPR. That would force companies into a regulatory corset that nips innovation in the bud," says Dehmel. The support measures for start-ups and SMEs envisaged by the German government to date will have to be expanded significantly, for example in view of the considerable effort expected to be needed to meet the requirements for high-risk AI systems.