InfoGuard Cyber Security and Cyber Defence Blog

EU AI Act: Clear Guardrails Are Lifting AI out of Its Blind Spot

Written by Estelle Ouhassi | 02 Jun 2025

The cyber threat landscape has changed fundamentally: ransomware attacks, targeted phishing campaigns and attacks on organizations of all sizes and industries are intensifying in step with advancing digitalization. Whether in everyday life, in business applications or in the hands of cyber attackers, the growing use of AI technologies is acting as an accelerator for new attack patterns.

The AI Act marks a strategic turning point in the tension between technological innovation and a growing threat landscape. For the first time, it creates a comprehensive legal framework at EU level with clear guidelines for the use of AI systems - with the aim of promoting trust in innovative technologies and limiting risks in a targeted manner.

What does the AI Act regulate and where does it apply?

The AI Act has been in force since August 1, 2024 and regulates the use, development and distribution of AI systems within the EU and beyond. The regulation follows a risk-based approach and pursues the following main objectives:

  • Protection of fundamental rights and security
  • Promotion of trustworthy innovation
  • Harmonization of the internal market in dealing with AI technologies

In contrast to data protection laws such as the EU GDPR, which focus on personal data, the AI Act addresses the technological functioning and the context of use of AI. In doing so, it interacts directly with adjacent regulations:

  • GDPR: data protection aspects of AI use
  • CRA (Cyber Resilience Act): cybersecurity requirements for products with digital elements
  • NIS2: resilience obligations for operators of critical infrastructures
  • Digital Services Act (DSA) / Product Liability Directive (PLD): digital services and product liability

The scope of the AI Act extends beyond the EU, particularly when AI systems from third countries such as Switzerland are used within Europe. The section "AI Act and Swiss law" below explains what the requirements mean for Swiss companies and which recommendations for action follow from them.

The 4 risk levels of AI systems

At the heart of the AI Act is the categorization of AI systems according to four risk levels:

  1. Unacceptable risk: AI systems that manipulate people, discriminate against them or carry out social scoring are strictly prohibited.
  2. High risk: AI systems with a significant impact on safety, health or fundamental rights are subject to strict requirements regarding security, traceability and human oversight.
  3. Limited risk: AI systems such as chatbots or automated tools in non-critical areas require clear user information and basic transparency measures.
  4. Minimal risk: Everyday AI applications such as spam filters or simple business tools are considered low-risk and are only subject to the requirements to a limited extent. Established information security practices are nevertheless still recommended.

This classification has a direct impact on the design, development, market access and operation of AI systems. Security-relevant applications are a particular focus.
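To make this tiering tangible in day-to-day compliance work, the following minimal sketch shows how an internal AI inventory could be tagged with the four risk levels as a first triage. The system names and their classifications are purely illustrative assumptions and do not replace a legal assessment.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"  # prohibited practices
    HIGH = "high risk"                  # strict requirements for safety-critical use
    LIMITED = "limited risk"            # transparency obligations
    MINIMAL = "minimal risk"            # largely unregulated, good practice still advised

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

# Purely illustrative entries - real classification always needs a case-by-case legal assessment.
inventory = [
    AISystem("cv-screening", "pre-selection of job applicants", RiskTier.HIGH),
    AISystem("support-chatbot", "customer service assistant", RiskTier.LIMITED),
    AISystem("spam-filter", "mail filtering", RiskTier.MINIMAL),
]

for system in inventory:
    print(f"{system.name}: {system.tier.value} ({system.purpose})")
```

Such an inventory is only a starting point, but it makes visible which systems fall under which set of obligations.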

In this context, the AI Act distinguishes between two central categories:

  • AI systems: machine-based systems with varying degrees of autonomy that generate decisions, recommendations or content and can thereby influence real or virtual environments.
  • GPAI models (General Purpose AI Models): Broadly applicable AI models, trained on large amounts of data, typically using self-supervision. They fulfill a wide variety of tasks and can be integrated in many different ways - with the exception of models used purely for research or prototyping.

This conceptual distinction is essential for risk assessment: the risk-based requirements attach primarily to specific AI systems, while GPAI models are subject to a separate set of transparency and documentation obligations for their providers.

Security requirements for high-risk AI in the cyber environment

In particular, AI systems classified as "high risk" that are used in security-critical environments have to meet strict requirements. These include, among other things:

  • A risk management system across the entire lifecycle
  • Data quality and data governance
  • Technical documentation and logging
  • Transparency and human oversight
  • Accuracy, robustness and cybersecurity

For those responsible for security, these requirements mean that existing AI systems must be carefully reviewed in terms of functionality, transparency and risk. At the same time, the organization's own security architecture needs to be developed strategically in order to meet both the regulatory requirements and the threat situation with appropriate technical and organizational measures.

Technologies such as XDR, SIEM or AI-supported attack detection are indispensable here. These solutions make a decisive contribution to the early detection of and defense against complex threats and support the implementation of the AI Act.

AI Act: roles, requirements and data protection implementation

The AI Act provides guidance in a previously unregulated field by regulating AI systems according to their level of risk. The more an AI system potentially interferes with fundamental rights or social processes, the stricter the requirements that apply. This approach promotes innovation without sacrificing the protection of fundamental rights and social responsibility.

Clear obligations apply to companies with AI systems on the EU market:

  • Providers: Place AI systems or AI products on the EU market and must ensure conformity, CE marking and risk management.
  • Deployers (users): Integrate AI into operational processes and are responsible for its legally compliant use.
  • Developers: Design AI systems, especially in safety-critical application areas, and must implement regulatory requirements at an early stage.

Conformity with the EU AI Act is checked by independent conformity assessment bodies (notified bodies), especially for high-risk AI systems. Companies should use existing compliance structures such as an information security management system (ISMS) or an internal control system (ICS) to systematically meet the requirements. These tools help to efficiently implement risk management, documentation obligations and technical protective measures. In this way, compliance with regulatory requirements can be established at an early stage and demonstrated to the authorities.

Avoid sanctions - ensure compliance with the AI Act

The AI Act provides for severe penalties in the event of violations:

  • Fines of up to EUR 35 million or 7% of annual global turnover, whichever is higher, for the most serious violations
  • Enforcement by national supervisory authorities and the EU AI Office
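As a rough arithmetic sketch of the financial exposure, the theoretical upper limit for the most serious violations is the higher of the two values. The turnover figure below is purely illustrative.

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Theoretical upper limit for the most serious violations:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_global_turnover_eur)

# Illustrative example: a company with EUR 2 billion in global turnover
print(f"Theoretical maximum fine: EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```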

Compliance officers must now check which AI systems in the company are affected and whether the requirements are already being met. An independent AI gap analysis can provide clarity, determine the maturity of AI use and identify the measures required.

AI Act and Swiss law

For Swiss companies with business relationships in the EU, the AI Act is also a signal to act, a source of clear guidance and an opportunity to rethink security and innovation. Whether exporting AI products, participating in international supply chains or operating critical infrastructures: the AI Act creates new requirements and clear rules of the game.

With its decision of February 12, 2025, the Federal Council sent a clear signal: Switzerland wants to ratify the Council of Europe's AI Convention and is adapting national law to this end. At the same time, sectoral regulatory activities - for example in healthcare or transport - are to be continued.

Even without EU membership, Switzerland is affected by the AI Act, because:

  • Swiss exporters must comply with the requirements if their AI products are used in the EU,
  • contracts and general terms and conditions must be reviewed and adapted if necessary, and
  • compliance can be positioned as a quality differentiator in the competitive landscape.

In addition, Switzerland is pursuing a technology-neutral approach with the revised Data Protection Act (in force since September 1, 2023), which also covers AI applications. The key aspects are:

  • Privacy by design & default: data protection must be an integral part of AI systems from the outset.
  • Profiling: The explicit consent of the data subjects is required for high-risk profiling.
  • Biometric data: Their processing is only permitted with clear justification or explicit consent - in the case of federal bodies, an additional legal basis is required.
  • Data protection impact assessment (DPIA): For high-risk AI applications, it supports the early identification and management of potential risks.
  • Responsibilities: Both data controllers and processors are responsible for complying with data protection regulations.

There is an increased focus on ethical requirements, and transparency, fairness and accountability are enshrined in law. Companies in critical sectors and/or the financial sector in particular should act quickly. FINMA Supervisory Communication 08/2024, "Governance and risk management in the use of artificial intelligence", outlines future regulatory requirements.

Timeline and transition phases: What applies when?

Officially in force since August 1, 2024, the AI Act takes effect in stages, with full applicability by August 2027. Now is the right time to prepare systematically for its implementation.

These stages are relevant:

  • From February 2025: Ban on certain AI systems with unacceptable risks, such as manipulative or discriminatory applications ("unacceptable risk").
  • From August 2025: Monetary and non-monetary enforcement measures and obligations for providers of so-called "General Purpose AI" (GPAI) - i.e. broadly applicable AI models such as language or image generators.
  • From August 2026: Requirements for high-risk AI ("high risk") in security-relevant sectors such as critical infrastructure, personnel management, public services, education or law enforcement.

In order to actively shape this transition, the European Commission has launched the AI Pact - a voluntary initiative that encourages developers to implement key obligations of the AI Act ahead of time.

Recommended next steps towards AI maturity

To determine the current situation and prepare for the AI Act requirements, a thorough AI gap analysis in accordance with ISO/IEC 42001:2023 is recommended. It assesses the maturity level of AI use in the company and identifies necessary measures.
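As a simplified illustration of how the result of such an analysis could be condensed into a maturity score, the sketch below uses hypothetical assessment dimensions and a 0-4 scale. The dimensions, weightings and thresholds are assumptions for illustration only and are not taken verbatim from ISO/IEC 42001.

```python
# Hypothetical gap-analysis dimensions - a real assessment follows the
# controls and management-system requirements of ISO/IEC 42001.
DIMENSIONS = ["governance", "risk_management", "data_quality",
              "transparency", "human_oversight", "security"]

def maturity_score(ratings: dict[str, int]) -> float:
    """Average maturity on a 0-4 scale across all assessed dimensions."""
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Illustrative self-assessment on a 0-4 scale
ratings = {"governance": 2, "risk_management": 1, "data_quality": 3,
           "transparency": 2, "human_oversight": 1, "security": 3}

print(f"Overall AI maturity: {maturity_score(ratings):.1f} / 4")
gaps = [d for d in DIMENSIONS if ratings[d] < 2]  # dimensions below the illustrative threshold
print("Priority gaps:", ", ".join(gaps))
```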

With the increasing regulation of artificial intelligence at European and international level, there is growing pressure on companies to prove their internal AI competence - particularly in the context of risk assessments, audits and responsibility for trustworthy systems.

A formal certificate can be helpful here, but is not strictly necessary. Competencies can also be demonstrated in other ways, for example through:

  • Traceable documentation of development and decision-making processes in AI projects
  • Internal training programs that provide basic and specialized knowledge on AI, data protection and ethics
  • Designated AI officers or interdisciplinary AI governance teams who monitor and control the relevant processes
  • Use of tried-and-tested tools for risk and compliance assessment (e.g. DPIA, GAIRA tool, certAInmity)
  • Participation in networks or specialist committees that promote the exchange of best practices

Even without formal certification, a comprehensible, systematically documented approach with clearly assigned responsibilities creates trust - with supervisory authorities as well as with customers and partner institutions.

Wake-up call for secure AI and recommendations for action

The AI Act is not an obstacle to innovation, but an opportunity. The EU regulation not only requires regulatory compliance, but also strengthens trust in technology and transparency in the digital market.

Companies that act in good time, review their systems now and align themselves with established standards such as ISO/IEC 42001:2023 will secure a decisive advantage - in terms of security, responsibility and competitiveness.

If you are wondering what to do now, this is the right time to gain clarity. An AI gap analysis shows you to what extent your AI stack is already compliant, exactly where action is still needed and which measures you should prioritize. Whether in preparation for regulatory audits or as a strategic management tool for establishing a trustworthy AI governance model: the AI gap analysis determines the maturity level of your AI systems and creates the basis for safe, ethical and legally compliant use.


 

Caption: Image generated with AI