DevSecOps: 3 Frameworks and a 4-Step Plan to End Cyber Blindness

Author
Martin Hüsser
Published
30 March 2026


Artificial intelligence is unleashing innovation on an unprecedented scale, whilst established security principles come under increasing pressure. Although DevSecOps effectively safeguards traditional development processes, AI is constantly giving rise to new cyber risks that existing models cannot fully account for. The solution: extending DevSecOps with three targeted AI frameworks, along with a concrete four-step plan that prioritises risks and translates them into practical DevSecOps measures.

DevSecOps has made development processes more robust over the years: CI/CD pipelines are secure, infrastructures are more stable and security checks are firmly integrated. However, the use of artificial intelligence is now fundamentally shifting the starting position.

Developers are integrating LLMs into existing applications, data analysts are training machine learning models, AI tools are being used productively throughout the company - and the attack surface is growing faster than existing security models can keep up. Suddenly, gaps are appearing where processes were considered tried and tested.

Why scanners, firewalls and pentests fail with AI

The hard truth is: conventional security measures do not adequately protect AI systems. Vulnerability scanners overlook poisoned, i.e. deliberately manipulated, training data; classic firewalls fail to detect prompt injection attacks on AI solutions; and even penetration tests miss model extraction risks. Anyone who uses AI without AI-specific protective measures is flying blind.

AI security does not start in the code, but in management

You know the DevSecOps philosophy: security is part of every process right from the start, is consistently automated and anchored as a shared responsibility. AI gives rise to completely new attack patterns that lie outside of previous experience and protection mechanisms.

The new cyber risks are evident on three levels: in data science, in the operation of AI systems, and in strategic decision-making processes.

Data science and AI: how machine learning models create new areas of attack

The following security risks can arise unintentionally in the work of data analysts and data scientists:

  • Use of unverified training data, potentially manipulated by attackers or competitors.

  • Use of pre-trained models from public repositories without verifying their security or licensing.

  • Disclosure of model APIs without rate limiting or extraction protection.

  • Deploying models without considering attacks by malicious actors.
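One of these risks, pre-trained models pulled from public repositories without verification, can be reduced with a simple integrity check before loading. A minimal sketch in Python, assuming the model provider publishes a SHA-256 digest for the artifact (the function names are illustrative, not from any specific framework):

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to load a downloaded model whose digest does not match
    the value pinned from the provider's release notes."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(f"Model artifact {path} failed integrity check")
```

Pinning digests this way does not prove the model is safe, but it does guarantee you are running the artifact you reviewed, not one silently swapped in the repository or during download.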

MLOps: Why traditional protection fails for ML systems

The operation of ML systems follows a different security logic than classic applications with concrete effects on architecture, supply chain and monitoring:

  • Application code and ML pipelines require different types of security.

  • The infrastructure for the provision of models requires special protection.

  • Continuous training of AI models poses new risks for the supply chain.

  • Drift detection (monitoring data or model changes) is not only about performance, but also about security.
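Drift detection as a security signal can start with comparing the live input distribution against a training-time baseline. A minimal sketch using the Population Stability Index over categorical feature counts (the 0.25 alert threshold used in the test is a common rule of thumb, not a standard):

```python
import math


def psi(baseline: dict, live: dict, eps: float = 1e-6) -> float:
    """Population Stability Index between two categorical distributions.

    Values near 0 mean the live data looks like the baseline; large
    values can indicate natural drift, but also deliberate poisoning
    or probing traffic, and therefore warrant a security review.
    """
    categories = set(baseline) | set(live)
    b_total = sum(baseline.values()) or 1
    l_total = sum(live.values()) or 1
    score = 0.0
    for c in categories:
        b = baseline.get(c, 0) / b_total + eps  # smoothing avoids log(0)
        l = live.get(c, 0) / l_total + eps
        score += (l - b) * math.log(l / b)
    return score
```

The point of the example: the same metric teams already compute for model performance doubles as an attack indicator once someone routes its alerts to the security team, not only to the ML team.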

When a lack of risk analysis becomes a business risk

A lack of transparency about AI risks can lead to strategic decisions being made on an uncertain basis:

  • Which AI systems are the most dangerous for your company?

  • Are we compliant with the new AI regulations (https://www.infoguard.ch/en/blog/ai-and-cybersecurity-part-1-fine-tuning-between-potential-and-risk) (e.g. the EU AI Act, regulations for certain industries)?

  • How likely is it that our models will be stolen, our data will be leaked or our algorithms will be biased?

  • How much money should we spend on AI security and where?

An AI security assessment will help you answer all these questions before they become problems.

About the AI Safety Assessment

The three-framework model: what makes it so successful?

We've taken the DevSecOps model you already trust and added three frameworks that work well together:

Strategic Level: NIST AI Risk Management Framework

What the NIST AI Risk Management Framework does:

The NIST AI Risk Management Framework translates AI risks into governance language that executives can understand, such as security, privacy, fairness, transparency and regulatory compliance. It helps to systematically classify and prioritize the AI portfolio based on risk levels.

Why you need it:

Boards of directors are increasingly interested in how AI is used in the organization. Regulators are closely monitoring its use and customers expect transparency. The NIST AI RMF creates the necessary governance structure to address these requirements in a systematic and traceable way.

Tactical level: MITRE ATLAS Threat Intelligence

What MITRE ATLAS Threat Intelligence does for you:

MITRE ATLAS Threat Intelligence shows exactly how AI systems are attacked, using real techniques that competitors, states and cyber criminals have used in the past. Not just hypothetical attacks, but real and verified attacks: from model theft through API queries to poisoning training data over months.

Why you need it:

A red team knows how to penetrate classic applications, but typically not yet how to extract models or poison ML pipelines. ATLAS fills this knowledge gap with attack patterns documented specifically for AI.

Operational level: CRISP-ML(Q) Development Lifecycle

What the CRISP-ML(Q) Development Lifecycle does:

The CRISP-ML(Q) Development Lifecycle adds security controls at every step of ML development, from data collection to deployment to monitoring. It applies the "shift left" principle known from DevSecOps to the development cycle of AI systems.

Why you need it:

The competence for secure software development is established, while the secure construction of ML systems is often not yet comparably anchored. CRISP-ML(Q) closes this gap with a structured process that ensures security not only in theory but also in practice.
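Applied in code, "shift left" means each lifecycle stage refuses to proceed until its security gates pass. A minimal sketch of such gates along a hypothetical ML pipeline; the stage names follow CRISP-ML(Q) loosely, and the concrete checks are illustrative assumptions, not part of the standard:

```python
from typing import Callable

# A gate inspects the pipeline context and returns True when satisfied.
Gate = Callable[[dict], bool]

# Hypothetical security gates per lifecycle stage (illustrative only).
GATES: dict[str, list[Gate]] = {
    "data_collection": [lambda ctx: ctx.get("provenance_recorded", False)],
    "training":        [lambda ctx: ctx.get("data_validated", False)],
    "deployment":      [lambda ctx: ctx.get("artifact_signed", False)],
    "monitoring":      [lambda ctx: ctx.get("drift_alerts_enabled", False)],
}


def run_stage(stage: str, ctx: dict) -> bool:
    """Return True only if every security gate for the stage passes;
    a CI/CD system would fail the pipeline on False."""
    return all(gate(ctx) for gate in GATES.get(stage, []))
```

The structure matters more than the individual checks: because the gates live in the pipeline rather than in a review document, a model cannot reach the next stage with an unvalidated dataset or an unsigned artifact.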

Practical implementation: do's and don'ts when detecting fraud with AI

Let's assume you are developing an AI system for fraud detection. What happens if security mechanisms are not integrated from the outset?

Without integrated DevSecOps frameworks (worst-case scenario)

In the absence of integrated frameworks, the risk unfolds along the entire value chain:

  • Data scientists train on whatever data is available, without checks that would catch "poisoned" samples.

  • A DevOps team deploys it like any other app, which means there are no AI-specific controls.

  • The security team tests it like normal software, meaning there is no consideration of AI attack vectors.

  • Six months later, competitors steal the model through API queries, regulators impose fines for bias, and executives wonder why security "didn't notice".

AI with integrated AI safety assessment

Based on the NIST AI RMF criteria, this is categorized as a high-risk system subject to strict controls as it affects customers' finances and is under regulatory oversight.

MITRE ATLAS threat modeling shows that the biggest threats are model extraction (by competitors), data poisoning (by adversaries seeking to avoid detection) and bias incidents (by regulators).

The integration of CRISP-ML(Q) ensures that data validation prevents falsification, fairness tests detect relevant biases and continuous monitoring detects attacks in production.
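A fairness test of the kind mentioned here can start as small as comparing positive-prediction rates between groups. A minimal sketch (the "demographic parity" metric is one of several possible choices, and real bias testing needs far more context than a single number):

```python
def demographic_parity_gap(outcomes: dict[str, list[int]]) -> float:
    """Largest difference in positive-prediction rate between groups.

    `outcomes` maps a group label to a list of binary predictions
    (1 = flagged as fraud). A large gap is a signal to investigate,
    not proof of bias on its own.
    """
    rates = [sum(preds) / len(preds) for preds in outcomes.values() if preds]
    return max(rates) - min(rates)
```

Wired into the CI suite with an agreed threshold, a check like this turns "fairness" from a slide-deck promise into a test that can fail a release, which is exactly the DevSecOps pattern applied to an AI-specific risk.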

The result? Fraud detection goes live on schedule, passes the official inspection and works reliably, as the security mechanisms were integrated from the outset and vulnerabilities do not only become apparent during implementation.

"More AI, more risk. If you don't think about security along the ML lifecycle, you lose control."

An AI security assessment answers the most burning governance questions before they slow down your innovation.

From analysis to implementation: the 4-step plan for safe AI

Our AI security assessment provides DevSecOps teams with clarity on which measures to prioritize and what to focus on when using AI.

Step 1: Find out how high your AI risk is

We create a map of all AI systems, including those that are known to IT and those that are "shadow AI". Systematic risk classification makes it transparent which AI projects pose an increased security risk and which are less critical.

What you will potentially see: "The data science team uses 12 different AI tools. Three of them process personal data from customers. One is connected to the database where the company stores production data. Two of them are high-risk according to the EU AI Act." Now you know where to start.

Step 2: Know how attackers will proceed

We use MITRE ATLAS to show the security team exactly how attackers will attack the AI systems. These are not general threats, but specific attack scenarios based on documented real-world methods.

What you will potentially see: "Prompt injection attacks can penetrate the customer service chatbot and steal customer data. A large number of API queries can be used to reconstruct how a fraud detection model works. An AI-powered recruitment solution can come under regulatory scrutiny if there is evidence of bias." Now your security team knows what to look out for.

Step 3: Look for gaps in your security

We compare the current AI security situation with industry standards. What control mechanisms are in place? Where are there gaps? What is the risk rating for each gap?

What you will potentially see: "API security is robust, but lacks effective protection against model extraction. Quality checks exist for training data, but no defenses against data poisoning. Monitoring identifies performance issues but does not detect security-related attacks." Now you know exactly what you can improve.

Step 4: Establish your action plan as a priority

We give you a plan with three implementation phases: Critical (months 1-2), High Priority (months 3-4) and Medium Priority (months 5-6). Each phase has its own controls as well as estimates of the amount of work required for implementation.

What you'll potentially see: "In Phase 1, you'll work on five key controls that will reduce your biggest risks within eight weeks of work. Phase 2 adds defense in depth with a further eight weeks of expert engagement. Phase 3 reaches a high level of maturity." Now you can plan, budget and implement.

An AI security assessment uncovers shadow AI and attack scenarios and provides the five critical controls for immediate risk reduction in a 4-step plan.

About the AI Safety Assessment

AI security assessment completed: From report to operational implementation

The assessment is not just a report, but the basis for how teams can develop AI models securely in the future:

Actions to be taken immediately (first 90 days)

  • Establishing centralized controls to reduce the most critical vulnerabilities

  • Implementation of defensive measures against data poisoning

  • Limiting and monitoring API calls to protect against model extraction

  • Checking and ensuring model integrity

  • Closing identified security gaps with immediate damage potential.
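Limiting API calls against model extraction can begin with a classic token bucket per client. A minimal sketch, assuming per-key buckets held in memory (a production setup would use a shared store, apply limits per API key, and feed rejected bursts into anomaly monitoring):

```python
import time


class TokenBucket:
    """Simple rate limiter: high-volume querying is the main vehicle
    for model extraction, so slowing it down raises the attacker's cost."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; False means throttle the call."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Rate limiting alone does not stop extraction, it only slows it; combined with monitoring for systematic query patterns it becomes one layer of the defense described above.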

Laying the foundations (3-6 months)

  • Integration of AI-specific security controls into existing DevSecOps pipelines

  • Automated tests to detect potentially harmful inputs

  • Verification of model artifacts and integrity within CI/CD processes

  • Monitoring to identify unusual or security-relevant queries

  • Anchoring AI security as a continuous component of operations instead of a reactive measure in the event of an incident.
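Automated tests for potentially harmful inputs can start with simple pattern heuristics in the CI suite. A deliberately naive sketch (the patterns are illustrative assumptions, and pattern matching alone will not stop prompt injection; it only catches the crudest attempts and documents known-bad inputs as regression tests):

```python
import re

# Illustrative patterns for obvious injection attempts; a real defense
# needs layered controls (input isolation, output filtering, least
# privilege for the model's tools), not a blocklist.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"system prompt",
    r"reveal.*(password|secret|key)",
]


def looks_injected(text: str) -> bool:
    """Flag inputs matching known injection phrasings for review."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

The value is less the filter itself than the habit it anchors: every newly observed attack input becomes a test case, so the pipeline verifies on each build that old attacks still fail.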

Maturity phase (6-12 months)

  • Development of organizational competence for the protection of AI systems

  • Establishment of secure development practices in the area of machine learning

  • Confident and controlled use of AI in operations

  • Specific defense mechanisms against AI-related threats

  • Anchoring AI security as part of the corporate culture

Continuous optimization

With appropriate frameworks, it is possible to react flexibly to changing threats. New MITRE ATLAS methods are integrated into existing defense measures. Adjustments in accordance with the NIST AI RMF strengthen governance structures, while further developments in the CRISP-ML(Q) process continuously improve secure AI development. Security thus remains adaptable without having to redefine the security strategy from scratch each time.

From DevSecOps to MLSecOps: Why secure AI is a competitive advantage

DevSecOps solved the problem of security as an afterthought in classic software development: security became part of the process from the outset, automation allowed it to scale, and a culture developed in which everyone takes responsibility.

AI is the same, just with different tools. The teams that apply DevSecOps ideas to AI today will prevail. Companies that treat AI security as "something we'll worry about later" will lose out to competitors, regulators or attackers.

Act now: Add AI governance and ML security to DevSecOps

AI expands the attack surface - and requires a security logic that goes beyond traditional DevSecOps models. The decisive factor is not the number of individual controls, but a structured assessment of risks, clear prioritization and the consistent integration of security mechanisms along the entire ML lifecycle. This is the only way to make AI controllable, auditable and sustainably resilient. Isn't your innovation too valuable to be exposed to cyber attacks without protection?

Let us work together to ensure the security of your crown jewels and create a well-founded, comprehensible assessment.

  • Analysis of current AI initiatives

  • Identification of specific risk drivers

  • Definition of a targeted expansion of DevSecOps to include AI-specific security mechanisms

Together, we classify your AI security requirements and define the next steps for a customized package of measures.

About the AI Safety Assessment

 

List of sources
- NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
- MITRE ATLAS: https://atlas.mitre.org
- CRISP-ML(Q): https://arxiv.org/pdf/2003.05155.pdf
- Cloud Security Alliance AI Controls Matrix: https://cloudsecurityalliance.org/artifacts/ai-controls-matrix
- EU AI Act: Regulation 2024/1689
- ISO/IEC 23894:2023 - AI Risk Management
- ISO/IEC 42001:2023 - AI Management Systems

 

