InfoGuard AG (Headquarter)
Lindenstrasse 10
6340 Baar
Switzerland
DevSecOps has made development processes more robust over the years: CI/CD pipelines are secured, infrastructures are more stable, and security checks are firmly integrated. However, the use of artificial intelligence is now fundamentally changing this picture.
Developers are integrating LLMs into existing applications, data analysts are training machine learning models, and AI tools are used productively across the company - meanwhile the attack surface is growing faster than existing security models can keep pace. Suddenly, gaps appear in processes that were considered tried and tested.
The hard truth is: conventional security measures protect AI systems inadequately. Vulnerability scanners overlook poisoned (manipulated) training data, classic firewalls do not detect prompt injection attacks on AI solutions, and even penetration tests miss model extraction risks. AI without specialized security measures means loss of control. Anyone who uses it without AI-specific protective measures is flying blind.
You know the DevSecOps philosophy: security is part of every process right from the start, is consistently automated and anchored as a shared responsibility. AI gives rise to completely new attack patterns that lie outside of previous experience and protection mechanisms.
The new cyber risks appear on three levels: in data science, in the day-to-day operation of AI systems, and in strategic decision-making processes.
The following new security risks can arise unintentionally in the area of "Data Analysts & Scientists":
Use of unverified training data that may have been manipulated by attackers or competitors.
Use of pre-trained models from public repositories without verifying their security or copyright status.
Exposing model APIs without rate limiting or extraction protection.
Deploying models without considering attacks by malicious actors.
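For the pre-trained-model risk in particular, a minimal mitigation is to pin and verify artifact checksums before loading anything from a public repository. The sketch below is illustrative: the artifact name and the approved-hash list are assumptions, standing in for hashes recorded when a file is first vetted and then kept under version control.

```python
import hashlib
from pathlib import Path

# Pinned SHA-256 checksums for vetted model artifacts (hypothetical
# entries; in practice, record the hash at vetting time and keep this
# allowlist under version control).
APPROVED_HASHES = {
    "sentiment-model.bin": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large models are not read into memory at once."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path) -> None:
    """Refuse to load a model file whose hash is unknown or has changed."""
    expected = APPROVED_HASHES.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name} is not on the approved artifact list")
    if sha256_of(path) != expected:
        raise RuntimeError(f"{path.name} does not match its pinned checksum")
```

This catches both silent tampering of a cached artifact and accidental use of an unvetted download; it does not, of course, say anything about whether the vetted model itself was trained on clean data.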
The operation of ML systems follows a different security logic than classic applications with concrete effects on architecture, supply chain and monitoring:
Application code and ML pipelines require different types of security.
The infrastructure for the provision of models requires special protection.
Continuous training of AI models poses new risks for the supply chain.
A lack of transparency about AI risks can lead to strategic decisions being made on an uncertain basis:
Which AI systems are the most dangerous for your company?
Are we compliant with the new AI regulations (https://www.infoguard.ch/en/blog/ai-and-cybersecurity-part-1-fine-tuning-between-potential-and-risk) (e.g. the EU AI Act, industry-specific regulations)?
How likely is it that our models will be stolen, our data will be leaked or our algorithms will be biased?
How much money should we spend on AI security and where?
An AI security assessment will help you answer all these questions before they become problems.
We've taken the DevSecOps model you already trust and added three frameworks that work well together:
What the NIST AI Risk Management Framework does:
The NIST AI Risk Management Framework translates AI risks into governance language that executives understand: security, privacy, fairness, transparency and regulatory compliance. It helps to systematically classify and prioritize the AI portfolio by risk level.
Why you need it:
Boards of directors are increasingly interested in how AI is used in the organization. Regulators are closely monitoring its use and customers expect transparency. The NIST AI RMF creates the necessary governance structure to address these requirements in a systematic and traceable way.
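As a simplified illustration of what such risk-based portfolio classification might look like in practice (the criteria and tier names below are assumptions made for this sketch, not definitions from the NIST AI RMF), each AI system can be scored against a few governance-relevant attributes:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    processes_personal_data: bool
    affects_finances_or_safety: bool
    regulated_domain: bool

def risk_tier(s: AISystem) -> str:
    """Crude tiering in the spirit of risk-based portfolio mapping:
    the more governance-relevant attributes apply, the higher the tier."""
    score = sum([
        s.processes_personal_data,
        s.affects_finances_or_safety,
        s.regulated_domain,
    ])
    return {0: "minimal", 1: "limited", 2: "high", 3: "high"}[score]

# Hypothetical portfolio entries:
chatbot = AISystem("support chatbot", True, False, False)
fraud = AISystem("fraud detection", True, True, True)
print(risk_tier(chatbot))  # limited
print(risk_tier(fraud))    # high
```

A real assessment uses far richer criteria, but even a coarse scheme like this makes the prioritization conversation with the board concrete.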
What MITRE ATLAS Threat Intelligence does for you:
MITRE ATLAS Threat Intelligence shows exactly how AI systems are attacked, using real techniques that competitors, states and cyber criminals have used in the past. Not hypothetical attacks, but real, verified ones: from model theft via API queries to training-data poisoning carried out over months.
Why you need it:
A red team knows how to penetrate classic applications, but typically not yet how models are extracted or ML pipelines poisoned. ATLAS fills this knowledge gap with attack patterns developed specifically for AI.
What the CRISP-ML(Q) Development Lifecycle does:
The CRISP-ML(Q) Development Lifecycle adds security controls at every step of ML development, from data collection to deployment to monitoring. It applies the "shift left" principle known from DevSecOps to the development cycle of AI systems.
Why you need it:
Competence in secure software development is well established; the secure construction of ML systems is often not yet comparably anchored. CRISP-ML(Q) closes this gap with a structured process that ensures security in practice, not just in theory.
Let's assume you are developing an AI system for fraud detection. What happens if security mechanisms are not integrated from the outset?
In the absence of integrated frameworks, the risk unfolds along the entire value chain:
Data scientists train on whatever data is available, making poisoning more likely to go unnoticed.
A DevOps team deploys the model like any other application, so no AI-specific controls are in place.
The security team tests it like normal software, so AI attack vectors go unconsidered.
Six months later, competitors steal the model through API queries, regulators fine for bias, and executives wonder why security "didn't notice."
Based on the NIST AI RMF criteria, this is categorized as a high-risk system subject to strict controls as it affects customers' finances and is under regulatory oversight.
MITRE ATLAS threat modeling shows that the biggest threats are model extraction (by competitors), data poisoning (by fraudsters seeking to avoid detection) and bias incidents (penalized by regulators).
The integration of CRISP-ML(Q) ensures that data validation prevents falsification, fairness tests detect relevant biases and continuous monitoring detects attacks in production.
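As one minimal, hedged example of the data-validation idea, a crude statistical filter can screen training samples before they reach the model. This is only a first line of defense, not a complete anti-poisoning control; the threshold is an assumption for the sketch.

```python
from statistics import mean, stdev

def filter_outliers(values, z_max=3.0):
    """Drop samples more than z_max standard deviations from the mean.
    Poisoned points injected to shift a model's decision boundary often
    (though not always) show up as statistical outliers."""
    if len(values) < 2:
        return list(values)
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return list(values)
    return [v for v in values if abs(v - mu) / sigma <= z_max]

clean = filter_outliers([1.0] * 50 + [500.0])  # the 500.0 is screened out
```

Sophisticated poisoning deliberately stays inside normal ranges, which is why such filters are combined with provenance checks and continuous monitoring rather than used alone.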
The result? Fraud detection goes live on schedule, passes the official inspection and works reliably, as the security mechanisms were integrated from the outset and vulnerabilities do not only become apparent during implementation.
"More AI, more risk. If you don't think about security along the ML lifecycle, you lose control."
An AI security assessment answers the most burning governance questions before they slow down your innovation.
Our AI security assessment provides DevSecOps teams with clarity on which measures to prioritize and what to focus on when using AI.
We create a map of all AI systems, including those that are known to IT and those that are "shadow AI". Systematic risk classification makes it transparent which AI projects pose an increased security risk and which are less critical.
What you will potentially see: "The data science team uses 12 different AI tools. Three of them process personal data from customers. One is connected to the database where the company stores production data. Two of them are high-risk according to the EU AI Act." Now you know where to start.
We use MITRE ATLAS to show the security team exactly how attackers will attack the AI systems. These are not general threats, but specific attack scenarios based on documented real-world methods.
What you will potentially see: "Prompt injection attacks can penetrate the customer service chatbot and steal customer data. A large number of API queries can be used to reconstruct how a fraud detection model works. An AI-powered recruitment solution can draw regulatory scrutiny if there is evidence of bias." Now your security team knows what to look out for.
We compare the current AI security situation with industry standards. What control mechanisms are in place? Where are there gaps? What is the risk rating for each gap?
What you will potentially see: "API security is robust, but lacks effective protection against model extraction. Quality checks exist for training data, but no defenses against data poisoning. Monitoring identifies performance issues but does not detect security-related attacks." Now you know exactly what you can improve.
We give you a plan with three implementation phases: Critical (months 1-2), High Priority (months 3-4) and Medium Priority (months 5-6). Each phase has its own controls as well as estimates of the amount of work required for implementation.
What you'll potentially see: "In Phase 1, you'll implement five key controls that reduce your biggest risks within eight weeks of work. Phase 2 adds defense in depth with a further eight weeks of expert engagement. Phase 3 reaches a high level of maturity." Now you can plan, budget and implement.
An AI security assessment uncovers shadow AI and attack scenarios and provides the five critical controls for immediate risk reduction in a 4-step plan.
The assessment is not just a report, but the basis for how teams can develop AI models securely in the future:
Actions to be taken immediately (first 90 days)
Establish centralized controls to reduce the most critical vulnerabilities
Implementation of defensive measures against data poisoning
Limiting and monitoring API calls to protect against model extraction
Checking and ensuring model integrity
Closing identified security gaps with immediate damage potential.
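The API-limiting control from the list above can be sketched as a per-client token bucket. The limits are illustrative, not recommendations; real deployments would also tie buckets to authenticated identities and alert on clients that repeatedly hit the cap.

```python
import time

class TokenBucket:
    """Per-client token bucket: caps the sustained query rate against a
    model API, raising the cost of extraction attacks that need many
    thousands of probes to reconstruct a model's behavior."""

    def __init__(self, rate_per_s: float, burst: int):
        self.rate = rate_per_s        # tokens refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True and consume a token if the request is within limits."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Throttling alone does not stop extraction, but combined with monitoring of unusual query patterns it turns a cheap overnight attack into a slow, detectable one.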
Laying the foundations (3-6 months)
Integration of AI-specific security controls into existing DevSecOps pipelines
Automated tests to detect potentially harmful inputs
Verification of model artifacts and integrity within CI/CD processes
Monitoring to identify unusual or security-relevant queries
Anchoring AI security as a continuous component of operations instead of a reactive measure in the event of an incident.
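A hedged sketch of what an automated check for potentially harmful inputs might look like in a CI/CD test suite: the deny-patterns below are illustrative assumptions, and simple pattern matching is only a weak baseline against prompt injection, not a complete defense.

```python
import re

# Illustrative deny-patterns for obvious prompt-injection attempts;
# a production filter would combine this with model-based classifiers.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
    re.compile(r"disregard .* guidelines", re.IGNORECASE),
]

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs matching known injection phrasings."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)
```

Run as part of the pipeline, such checks turn "we should test for malicious inputs" into an enforced gate: a release that lets a known injection pattern through simply fails the build.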
Maturity phase (6-12 months)
Development of organizational competence for the protection of AI systems
Establishment of secure development practices in the area of machine learning
Confident and controlled use of AI in operations
Specific defense mechanisms against AI-related threats
Anchoring AI security as part of the corporate culture
Continuous optimization
With appropriate frameworks, it is possible to react flexibly to changing threats. New MITRE ATLAS methods are integrated into existing defense measures. Adjustments in accordance with the NIST AI RMF strengthen governance structures, while further developments in the CRISP-ML(Q) process continuously improve secure AI development. Security thus remains adaptable without having to redefine the security strategy from scratch each time.
DevSecOps solved this problem once before: security became part of the process from the outset, automation let it scale, and a culture of shared responsibility took hold.
AI is the same story, just with different tools. The teams that apply DevSecOps ideas to AI today will prevail; companies that treat AI security as "something we'll worry about later" will lose out to competitors, regulators or attackers.
AI expands the attack surface - and requires a security logic that goes beyond traditional DevSecOps models. The decisive factor is not the number of individual controls, but a structured assessment of risks, clear prioritization and the consistent integration of security mechanisms along the entire ML lifecycle. This is the only way to make AI controllable, auditable and sustainably resilient. Isn't your innovation too valuable to be exposed to cyber attacks without protection?
Let us work together to ensure the security of your crown jewels and create a well-founded, comprehensible assessment.
Analysis of current AI initiatives
Identification of specific risk drivers
Definition of a targeted expansion of DevSecOps to include AI-specific security mechanisms
Together, we classify your AI security requirements and define the next steps for a customized package of measures.
List of sources
- NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
- MITRE ATLAS: https://atlas.mitre.org
- CRISP-ML(Q): https://arxiv.org/pdf/2003.05155.pdf
- Cloud Security Alliance AI Controls Matrix: https://cloudsecurityalliance.org/artifacts/ai-controls-matrix
- EU AI Act: Regulation 2024/1689
- ISO/IEC 23894:2023 - AI Risk Management
- ISO/IEC 42001:2023 - AI Management Systems
Caption: Image generated with AI