The pressure is mounting: "We need more efficiency!" The mood in the office of MysecureKI AG (*name changed by the editor) is tense. Shadow AI is gradually infiltrating the company: AI tools establish themselves outside of defined processes and escape any oversight or audit. The result is cyber, data protection and liability risks that often only become apparent in an emergency.
Lena clicks on a legitimate-looking banner: "AI-supported contract analysis - ten times faster, free trial version." Two minutes later, she logs in with her work email address and uploads the first customer contract. The tool immediately delivers a precise summary.
Davide is impressed. "It reminds me of D***box back then," he says, recalling the phase when employees switched to external cloud services out of frustration with cumbersome internal systems. What began as a pragmatic shortcut ended in data leaks, compliance proceedings and considerable reputational damage.
"This time it's different," they both think. "It's just an AI tool." This is exactly where the fallacy lies.
Many users are unaware of what the temptingly simple use of AI systems in day-to-day business entails: their data ends up on servers in countries without GDPR-compliant contracts. The AI tool stores every uploaded document to train its models. Suddenly, confidential customer contracts, internal price lists and even personal data are exposed, without any control.
The IT department doesn't notice anything. But one day, a customer calls and asks why their data is appearing in a public AI forum.
The compliance department panics: Where are the audit trails? Who released the data? How will this be explained in the next ISO 27001 audit?
The management learns about the incident and realizes that quite a few employees and the management itself are using similar tools. Shadow AI has infiltrated the company.
"It's like the cloud! Only this time it's AI that we don't understand and can't control," says the IT manager at the crisis meeting.
Active, automated data utilization: data is not only stored, but actively used to train models, generate new content or even make decisions.
Errors are difficult to catch: an incorrect AI result can lead to incorrect contracts, discrimination or legal violations - without anyone noticing immediately.
CH AI Act, CH DDSG, GDPR, EU AI Act, ISO 42001: all demand transparency, documentation and control over AI systems. But how is this supposed to work if nobody knows which tools are being used - and there are no clear instructions on the desired behavior?
Audit trails are missing: Who entered which data into which AI tool, and when? Without logs, this is a real nightmare for any compliance department.
AI tools without a security check: AI systems can contain malware, generate phishing links or pass on data to third parties.
AI as an attack tool: Hackers use AI to write more realistic phishing emails or find vulnerabilities in systems - and we make it even easier for them with shadow AI.
A structured comparison with ISO 42001 helps to consistently align governance, processes and technical measures.
Live demo: An IT security expert shows how quickly an AI tool can reveal confidential data - and how easy it is to use it for targeted social engineering.
Message: "Do you remember the D***box incident in 2015? Back then, we learned that simply trying things out can be expensive. With AI, the risk is even more serious."
Tested and approved AI tools and apps: The company introduces company-wide licenses for secure AI platforms - with contracts compliant with the CH AI Act, CH DDSG, GDPR, EU AI Act and ISO 42001, and clear usage guidelines.
Fast approval processes: Employees can apply for new tools via a self-service portal - instead of using them in secret.
CASB (Cloud Access Security Broker) solutions monitor data traffic to AI tools and block unauthorized uploads.
AI-specific security tools such as Microsoft Purview analyze which data flows into AI systems - and issue alerts in the event of suspicious cases.
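The CASB idea of blocking unauthorized uploads can be sketched as a simple allow-list check at the network egress. This is a minimal illustration, not a real CASB product's logic; the domain names and the policy are assumptions invented for the example.

```python
# Hypothetical sketch of a CASB-style egress check: only uploads to AI
# platforms on a company-approved allow-list pass; everything else is
# blocked and can be alerted on. Domain names are illustrative, not real.

APPROVED_AI_DOMAINS = {
    "ai.internal.example.com",   # assumed company-hosted AI platform
    "approved-llm.example.com",  # assumed contracted external provider
}

def is_upload_allowed(destination_host: str) -> bool:
    """Return True only if the destination host is an approved AI platform."""
    return destination_host.lower().strip(".") in APPROVED_AI_DOMAINS

# An upload to an unknown "free trial" AI tool would be blocked:
print(is_upload_allowed("free-contract-ai.example.net"))  # False
print(is_upload_allowed("ai.internal.example.com"))       # True
```

In practice, such a check would sit in a forward proxy or CASB policy engine and trigger an alert to the security team instead of silently returning a boolean.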
The company implements an AI management system in accordance with ISO 42001:
Risk assessment: Which AI tools may be used? Which data is taboo?
Documentation: Every use of AI is logged - for audit security and transparency.
Regular training courses: Employees learn to recognize AI risks and use secure alternatives.
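The documentation step above - logging every use of AI for audit security - could look like the following sketch, assuming each request to an approved tool passes through a gateway that records who sent which category of data to which tool and when. The gateway, field names and tool identifiers are assumptions for illustration.

```python
# Minimal sketch of an AI usage audit trail. Every AI interaction produces
# one append-only JSON log line answering the compliance questions from the
# text: who, which tool, which data, when. All field names are illustrative.

import json
from datetime import datetime, timezone

def log_ai_usage(user: str, tool: str, data_category: str) -> str:
    """Build one JSON log line for a single AI interaction."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "data_category": data_category,  # e.g. "customer_contract"
    }
    return json.dumps(entry, sort_keys=True)

# One line per interaction gives auditors a traceable, searchable record:
print(log_ai_usage("lena", "approved-llm", "customer_contract"))
```

Stored in tamper-evident, append-only storage, such records answer exactly the questions an ISO 27001 or ISO 42001 auditor would ask.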
Shadow AI is not the result of malicious intent, but of frustration, time pressure and the desire for more efficient work processes. However, the consequences are real and costly: data loss, compliance violations, legal risks and reputational damage.
If you want to use AI productively, you need security - and a practicable solution. The right approach combines education, secure alternatives and clear guidelines for everyday use.
Three central pillars enable safe use:
Creating understanding: Why is shadow AI risky?
Offering secure alternatives: Fast, user-friendly AI tools that guarantee compliance and security.
Control & transparency: Make invisible risks visible with standards such as ISO 42001, CASB and monitoring.
Shadow AI is not a trend phenomenon - it has long been a reality in everyday business life. If you want to use AI productively and securely, you need clear guidelines, reliable processes and transparency regarding data flows. With a well-founded AI security strategy, tried-and-tested tools and a structured approach, companies not only achieve compliance, but also transform AI into a genuine value proposition for security, efficiency and innovation.
With a structured and secure AI strategy, you gain:
A resilient roadmap for the secure use of AI,
clarity about your maturity level and concrete weak points,
transparency about data and AI usage that is audit-proof and traceable,
and a sustainable reduction in shadow AI through clear rules, secure platforms and targeted controls.
Our experience from strategy and security projects shows: Effective AI governance is created where management, IT, compliance and security assume joint responsibility. Sustainable solutions combine regulatory requirements - for example from the EU AI Act or ISO 42001 - with existing processes and technical protective measures.
In this way, AI does not become a risk, but a controlled innovation factor.
