InfoGuard AG (Headquarter)
Lindenstrasse 10
6340 Baar
Switzerland
InfoGuard AG
Stauffacherstrasse 141
3014 Bern
Switzerland
InfoGuard Deutschland GmbH
Frankfurter Straße 233
63263 Neu-Isenburg
Germany
InfoGuard Deutschland GmbH
Landsberger Straße 302
80687 Munich
Germany
InfoGuard Deutschland GmbH
Am Gierath 20A
40885 Ratingen
Germany
InfoGuard GmbH
Kohlmarkt 8-10
1010 Vienna
Austria
From privilege escalation and uncontrolled data exfiltration to systemic security vulnerabilities - Agentic AI not only expands opportunities, but also the attack surface. Recent incidents show: Agentic AI is not only an efficiency driver, but can also become a gateway for attacks and misuse. One example is the ServiceNow incident from January 2026 (CVE-2025-12420). A hardcoded, system-wide key enabled attackers to gain administrator rights via the Virtual Agent API.
However, this case is not an isolated incident - but an indication of a structural problem: if basic security principles are not implemented consistently, Agentic AI itself becomes a risk.
Current incidents show that Agentic AI opens up new attack paths - particularly through privilege escalation, manipulated decision logic and compromised agents:
Privilege escalation by agents
ServiceNow (January 2026, CVE-2025-12420): A hardcoded key in the virtual agent API allowed attackers to gain administrator privileges. A classic agent-to-agent attack (A2A).
Microsoft Copilot (EchoLeak, CVE-2025-32711): Manipulated prompts allowed attackers to extract sensitive data and escalate privileges.
Langflow (CVE-2025-3248): Code injection into AI agents led to complete compromise of AI infrastructure and connected systems.
2. Data exfiltration and manipulation
Reconciliation agent at a financial services provider (2024)
A seemingly normal business request caused a reconciliation agent to export 45,000 customer records. The AI did not recognize the manipulation because the request was formulated in "normal business terms".
Compromised open source frameworks
Modified agent frameworks installed backdoors that were only discovered months later - with a correspondingly long period of undetected data leakage.
3. Supply chain risks in the AI supply chain
Salt Typhoon (2024-2025)
State actors used the AI supply chain to compromise agent frameworks and gain persistent access to corporate networks.
4. Prompt injection and misuse of tools
Prompt injection on common platforms
Malicious prompts manipulated AI agents and led from data leakage to the execution of malicious scripts. Among others, GitHub Copilot, Salesforce Einstein and ChatGPT were affected.
Abuse of non-human identities (NHI)
Compromised agent credentials granted attackers undetected access to systems for weeks (NHI compromise).
Agentic AI opens up new attack vectors that are not covered by traditional security controls:
Autonomy
Agents act independently, often without ongoing human control or approval.
Complexity
Multi-agent systems, shared service accounts and dynamic workflows make it difficult to assign actions and responsibilities.
Dynamics
AI agents learn, combine tools and interact with each other; security gaps arise through unforeseen interactions - not just through traditional code.
Companies that treat AI agents like traditional software risk silent compromise, lateral movement and compliance violations.
Agentic AI not only changes technology, but also responsibility. If you want to use AI agents securely, you need clear governance, new roles and targeted awareness-raising.
The AI steward is the central point of contact for agentic AI security and governance, with the following core tasks
Risk assessment prior to the deployment of AI agents
Ongoing audits of agent activities and logs
Compliance checks (CH DSG, EU DSGVO, ISO 27001, NIST AI RMF)
Analysis of prompts and policies to detect data leaks and prompt injection
This role is the linchpin for Agentic AI Security Awareness in the company.
Agentic AI requires automated security controls that:
Monitor agent workflows and tool usage in real time
Detect compliance and policy violations
Check prompts and responses for sensitive content and anomalies
A zero-trust approach for AI agents is central to this: Every action must be authorized, logged and consistently restricted according to the principle of least privilege.
Training, cultural change and AI security awareness
Technical controls are not enough - security culture is crucial:
Sensitize employees: AI agents are not "black boxes". Teams need to understand what data flows into prompts, what risks agent tools have and how agentic AI can be misused.
Targeted AI security awareness: AI security assessments, training on topics such as prompt injection, data exfiltration, agent authorizations, non-human identities (NHI) and responsible use of AI tools.
Regular penetration tests for AI systems: Specialized AI pentests and red-teaming exercises to find vulnerabilities in agent workflows, tool integrations and security controls at an early stage.
Organizations that embed a culture of responsibility, AI stewardship and agentic AI security awareness not only reduce the risk of incidents - they create the foundation for trustworthy, scalable AI agents in the company.
We recommend the following technical measures for companies
Secrets management: no hardcoded secrets, consistent use of secure vaults.
MFA for all AI interactions: Especially for account linking, API access and agent configuration.
Automated code reviews: Security checks for agent code, policies and workflows.
Deprovisioning of inactive agents: consistently disable "zombie AI" to avoid hidden gateways.
Sandboxing: Isolated execution environments for AI agents to limit privilege escalation and lateral movement.
Organizational measures for companies:
Establish AI stewardship role: Clear accountability for agentic AI security and governance.
Transparency protocols: Decisions and actions of AI agents must be fully traceable.
Contingency plans for AI incidents: Defined playbooks if an agent is compromised or misused.
Regular audits: Use of frameworks such as AI CSA and OWASP Agentic AI Top 10 as guidelines.
Cultural measures:
Treat AI like "wetpaint": Basic principle "Don't trust, always verify " - even with own agents.
Regular awareness training: Agentic AI risks as an integral part of the security culture.
"Governance first" approach: guidelines, roles and control mechanisms must be in place before AI is scaled.
The key question is not whether AI is secure, but whether we can make it secure. Agentic AI offers enormous opportunities - provided that companies take the risks seriously and act now.
In concrete terms, this means
Define safety standards for AI agents
Orientation towards frameworks such as OWASP, NIST AI RMF, CSA Mythos Readiness Guidelines and the OECD AI Principles.
Clearly assign responsibility
Establish AI stewardship and involve compliance and security teams.
Establish transparency and control
Decisions and actions of AI agents must be traceable, verifiable and controllable.
This way, agentic AI does not become a risk, but a trustworthy strategic advantage.
Caption: Image generated with AI