The cyber threat landscape has changed fundamentally: ransomware attacks, targeted phishing campaigns and attacks on organizations of all sizes and industries are intensifying in step with advancing digitalization. Whether in everyday life, in business applications or in the hands of cyber attackers, the growing use of AI technologies acts as an accelerator for new attack patterns.
The AI Act marks a strategic turning point at the intersection of technological innovation and a growing threat landscape. For the first time, it creates a comprehensive legal framework at EU level with clear guidelines for the use of AI systems, with the aim of promoting trust in innovative technologies while limiting risks in a targeted manner.
The AI Act has been in force since August 1, 2024, and regulates the use, development and distribution of AI systems within the EU and beyond. The regulation follows a risk-based approach and pursues the following main objectives:
In contrast to data protection laws such as the EU GDPR, which focus on personal data, the AI Act addresses the technological functioning of AI and the context in which it is used. In doing so, it directly touches adjacent regulations:
The scope of the AI Act extends beyond the EU, in particular when AI systems from third countries such as Switzerland are used within Europe. The section "AI Act and Swiss law" explains what the requirements mean for Swiss companies and which recommendations for action follow from them.
At the heart of the AI Act is the categorization of AI systems into four risk levels: unacceptable risk (prohibited practices), high risk, limited risk (subject to transparency obligations) and minimal risk.
This system has a direct impact on the design, development, market access and operation of AI systems. Security-relevant applications are a particular focus.
In this context, the AI Act distinguishes between two central categories: AI systems developed for a specific purpose and context of use, and general-purpose AI (GPAI) models that can be integrated into many such systems.
This conceptual distinction is essential for risk assessment, as GPAI models are generally not covered by the regulation directly, but through their integration into specific AI systems.
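To make this classification logic tangible, the following minimal Python sketch models the risk levels and the system/GPAI distinction described above. The class and function names, the inventory fields and the simplified obligation mapping are illustrative assumptions for this article, not an official schema from the regulation.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    """The four risk levels of the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict requirements, conformity assessment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class AISystem:
    """Illustrative inventory entry for a concrete AI system."""
    name: str
    risk_level: RiskLevel
    uses_gpai_model: bool  # built on a general-purpose AI model?

def obligations(system: AISystem) -> list[str]:
    """Simplified, illustrative mapping from risk level to duties."""
    if system.risk_level is RiskLevel.UNACCEPTABLE:
        return ["prohibited - must not be placed on the EU market"]
    if system.risk_level is RiskLevel.HIGH:
        return [
            "risk management system",
            "technical documentation and logging",
            "human oversight",
            "conformity assessment before market access",
        ]
    if system.risk_level is RiskLevel.LIMITED:
        return ["transparency (e.g. disclose that users interact with AI)"]
    return []  # minimal risk: voluntary codes of conduct

# A chatbot built on a GPAI model is assessed as a system, not as a model
chatbot = AISystem("customer service chatbot", RiskLevel.LIMITED, uses_gpai_model=True)
print(obligations(chatbot))
```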
In particular, AI systems with a "high risk" rating that are used in safety-critical environments have to meet strict requirements: these include a risk management system, data governance, technical documentation and logging, transparency towards users, human oversight, and appropriate accuracy, robustness and cybersecurity.
For those responsible for security, these requirements mean that existing AI systems must be carefully reviewed in terms of functionality, transparency and risks. At the same time, the company's own security architecture must be developed strategically in order to meet both the regulatory requirements and the threat situation in an appropriate technical and organizational manner.
Technologies such as XDR, SIEM or AI-supported attack detection are indispensable here: they make a decisive contribution to the early detection of and defense against complex threats, and they support the implementation of the AI Act.
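As an illustration of what AI-supported attack detection means in practice, the following sketch trains an unsupervised anomaly detector on a few simple login features using scikit-learn's IsolationForest. The feature set, the synthetic data and the parameters are assumptions chosen for demonstration; production XDR/SIEM pipelines use far richer telemetry and models.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per login event:
# [hour of day, failed attempts in the last hour, MB transferred]
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(10, 2, 500),   # logins clustered around business hours
    rng.poisson(0.2, 500),    # failed attempts are rare
    rng.normal(50, 15, 500),  # typical data volume
])
suspicious = np.array([[3.0, 12.0, 900.0]])  # 3 a.m., many failures, bulk transfer

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# predict() returns -1 for anomalies and 1 for inliers
print(detector.predict(suspicious))        # expected: [-1]
print(detector.score_samples(suspicious))  # lower score = more anomalous
```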
The AI Act provides guidance in a previously unregulated field by regulating AI systems according to their level of risk: the more an AI system potentially interferes with fundamental rights or social processes, the stricter the requirements the AI Act imposes. This approach promotes innovation without sacrificing the protection of fundamental rights or social responsibility.
Clear obligations apply to companies with AI systems on the EU market:
Conformity with the EU AI Act is checked by independent conformity assessment bodies (notified bodies), especially for high-risk AI systems. Companies should use existing compliance structures such as an information security management system (ISMS) or an internal control system (ICS) to systematically meet the requirements. These tools help to efficiently implement risk management, documentation obligations and technical protective measures. In this way, compliance with regulatory requirements can be established at an early stage and demonstrated to the authorities.
The AI Act provides for severe penalties in the event of violations: fines of up to EUR 35 million or 7 percent of global annual turnover for prohibited practices, up to EUR 15 million or 3 percent for violations of other obligations, and up to EUR 7.5 million or 1 percent for supplying incorrect information.
Compliance officers must now check which AI systems in the company are affected and whether the requirements are already being met. An independent AI gap analysis can provide clarity by determining the maturity of AI use and the measures required.
For Swiss companies with business relationships in the EU, the AI Act is also a signal to act, a clear orientation and an opportunity to rethink security and innovation. Whether exporting AI products, participating in international supply chains or operating critical infrastructures: the AI Act creates new requirements and clear rules.
With its decision of February 12, 2025, the Federal Council sent a clear signal: Switzerland wants to ratify the Council of Europe's AI Convention and is adapting national law to this end. At the same time, sectoral regulatory activities - for example in healthcare or transport - are to be continued.
Even without EU membership, Switzerland is affected by the AI Act, because:
In addition, Switzerland is pursuing a technology-neutral approach with the revised Data Protection Act (in force since September 1, 2023), which also addresses AI applications. The key aspects of this are:
There is an increased focus on ethical requirements, and transparency, fairness and accountability are enshrined in law. Companies in critical sectors and/or the financial industry in particular should act quickly: FINMA Supervisory Communication 08/2024, "Governance and risk management in the use of artificial intelligence", outlines future regulatory requirements.
Officially in force since August 1, 2024, the AI Act becomes applicable in stages, with full applicability by August 2027. Now is the right time to systematically prepare for implementation.
These stages are relevant: since February 2, 2025, the prohibitions on unacceptable-risk practices and the AI literacy obligations have applied; since August 2, 2025, the governance rules and the obligations for GPAI models; from August 2, 2026, most of the remaining provisions, including those for high-risk AI systems; and from August 2, 2027, the rules for high-risk AI embedded in regulated products.
In order to actively shape this transition, the European Commission has launched the AI Pact - a voluntary initiative that encourages developers to implement key obligations of the AI Act ahead of time.
To determine the current situation and prepare for the AI Act requirements, a thorough AI gap analysis in accordance with ISO/IEC 42001:2023 is recommended. It assesses the maturity level of AI use in the company and identifies necessary measures.
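One simple way to make such a maturity assessment tangible is to score each management-system area and report where the score falls below a target level, as in the Python sketch below. The area names follow the generic clause structure of ISO-style management systems; the scores and the target level are placeholder assumptions, not results of a real assessment.

```python
# Illustrative maturity scoring: 0 = absent ... 5 = optimized.
TARGET = 3  # assumed minimum target level per area

assessment = {
    "Context of the organization": 2,
    "Leadership and AI policy": 3,
    "Risk and impact assessment": 1,
    "Resources and competence": 2,
    "Operational controls": 2,
    "Performance evaluation": 1,
    "Continual improvement": 2,
}

overall = sum(assessment.values()) / len(assessment)
gaps = {area: score for area, score in assessment.items() if score < TARGET}

print(f"Overall maturity: {overall:.1f} / 5")
for area, score in sorted(gaps.items(), key=lambda item: item[1]):
    print(f"Gap: {area} (level {score}, target {TARGET})")
```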
With the increasing regulation of artificial intelligence at European and international level, there is growing pressure on companies to prove their internal AI competence - particularly in the context of risk assessments, audits and responsibility for trustworthy systems.
A formal certificate can be helpful here, but it is not absolutely necessary. Instead, competencies can also be demonstrated in alternative ways, for example through:
Even without formal certification, a comprehensible, systematically documented approach with clearly assigned responsibilities creates trust - with supervisory authorities as well as with customers and partner institutions.
The AI Act is not an obstacle to innovation, but an opportunity. The EU regulation not only requires regulatory compliance, but also strengthens trust in technology and transparency in the digital market.
Companies that act in good time, review their systems now and align themselves with established standards such as ISO/IEC 42001:2023 will secure a decisive advantage - in terms of security, responsibility and competitiveness.
If you are wondering what to do now, this is the right time to gain clarity. Let an AI gap analysis show you to what extent your AI stack is already compliant, where exactly action is still needed and which measures you should prioritize. Whether in preparation for regulatory audits or as a strategic management tool for establishing a trustworthy AI governance model: the AI gap analysis determines the maturity level of your AI systems and creates the basis for safe, ethical and legally compliant use.