New Microsoft 365 E7 Frontier Suite: What CISOs Need to Know Now

Author
Martin Hüsser
Published
18 March 2026


On 9 March 2026, Microsoft announced the biggest license update in a decade with the Microsoft 365 E7 Frontier Suite. For the first time, a single package combines Copilot Cowork, autonomous AI agents and Anthropic's Claude models, with far-reaching consequences for data protection, information security and compliance. We assess the risks involved and outline three steps for targeted preparation.

Microsoft announces the E7 Frontier Suite, the biggest Microsoft 365 license update in ten years. Copilot, autonomous AI agents and Claude models are deeply integrated into business processes for the first time, with direct consequences for security and compliance.

Microsoft 365 E7 Frontier: Copilot, agents and security in one bundle

The new tier costs USD 99 per user per month and bundles all E5 functions with Microsoft 365 Copilot, the new agent control center Agent 365 and the Entra Suite. Particularly relevant for E5 customers: Security Copilot is already included there at no extra charge - with automatic provisioning since January 2026. From July 2026, the prices of existing tiers will also increase, which in combination with the elimination of automatic volume discounts means an effective cost increase of around 20 percent. (Sources: Microsoft 365 Blog; Samexpert, 2026)

For security managers, this means that it is essential to review the Microsoft security configuration before activating new functions. Our Microsoft 365 Security Assessment analyses your entire M365 tenant for vulnerabilities - from authorization structures and Defender configurations to Entra ID and Conditional Access - and provides prioritized recommendations for action.

Microsoft 365 Security Assessment

Copilot Cowork: When AI agents act autonomously

Copilot Cowork is an extension within Copilot Wave 3, developed in partnership with Anthropic. Unlike the classic chat assistant, Cowork performs multi-stage tasks autonomously: calendar optimization, meeting preparation, competitive analysis or pitch-deck creation across Outlook, Teams, Excel, Word and PowerPoint. Microsoft itself already operates over 500,000 agents internally, and IDC forecasts 1.3 billion AI agents by 2028. (Source: Microsoft 365 Blog, 09.03.2026)

Agent-based AI is fundamentally changing the threat landscape. In our blog article "Agentic AI on the attack", we show how human-AI teaming is realigning cyber defence - and why organizations need clear AI governance now.

Anthropic Claude in the EU: GDPR risk for Swiss companies

Anthropic has been an official sub-processor of Microsoft since January 2026. Claude models are deactivated by default for EU and EFTA tenants, as they are excluded from the EU Data Boundary. Functions such as the Researcher Agent, Agent Mode or Cowork are only available if administrators explicitly activate the Anthropic connection, in the knowledge that data is then processed outside the EU. (Sources: Microsoft Learn, 2026; Anthropic Privacy Center)

Anthropic is expanding in Europe (offices in Munich, Paris, Dublin) and has signed the EU Code of Practice for General-Purpose AI. However, data storage remains primarily in the USA. For Swiss companies under the revDSG and EU organizations under the GDPR, there is a concrete need for action that must be clarified before activation. (Sources: Anthropic, 11/2025; EU Commission).

Agentic AI and automation: the 3 biggest security risks

Prompt injection: According to OWASP, prompt injection is the No. 1 vulnerability for LLM applications. The documented zero-click attack EchoLeak (CVE-2025-32711) against Microsoft 365 Copilot proved that hidden instructions in emails can exfiltrate sensitive company data without any user interaction. (Source: Obsidian Security, 2025)
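Input-side filtering is no substitute for model-side controls, but a first screening layer is easy to illustrate. The following sketch (illustrative heuristics only, not drawn from any Microsoft product) flags two common injection tricks in inbound text before it reaches an AI assistant: hidden zero-width characters and typical override phrasings.

```python
import re
import unicodedata

# Illustrative patterns only; real attacks are far more varied, and a
# robust defense requires model-side and permission-side controls too.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"forward .* to http",
]

def injection_signals(text: str) -> list[str]:
    """Return heuristic findings for one inbound message."""
    findings = []
    # Zero-width and other format characters (Unicode category "Cf")
    # are a common way to hide instructions from human readers.
    if any(unicodedata.category(c) == "Cf" for c in text):
        findings.append("hidden-format-characters")
    lowered = text.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            findings.append(f"pattern:{pattern}")
    return findings
```

Such a filter only raises signals for triage; zero-click attacks like EchoLeak show why the underlying data access of the assistant must be restricted as well.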

Shadow AI: 57% of employees use private AI tools in their day-to-day work without IT's knowledge, and 33% enter sensitive data in the process. Our blog article on the risk of shadow AI shows the path to secure AI use in four steps in accordance with ISO 42001. (Source: Vectra AI, 2025)

Oversharing via SharePoint and OneDrive: Studies show that Copilot can access around three million confidential data records per organization - including orphaned sites and decades-old documents. (Source: Concentric AI / TechRadar, 2025)
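Before enabling Copilot, many teams triage an exported sharing report for org-wide permissions. The sketch below is a minimal example of that triage; the column names (`ItemPath`, `SharedWith`) are hypothetical and must be adapted to whatever your actual SharePoint or Purview export contains.

```python
import csv
import io

# Org-wide principals that typically indicate oversharing.
# "Everyone except external users" grants access to the whole tenant.
BROAD_PRINCIPALS = {"Everyone", "Everyone except external users"}

def flag_overshared(report_csv: str) -> list[str]:
    """Return paths of items shared with org-wide groups.

    Assumes a CSV export with hypothetical columns ItemPath and
    SharedWith; adjust the field names to your admin report.
    """
    reader = csv.DictReader(io.StringIO(report_csv))
    return [
        row["ItemPath"]
        for row in reader
        if row["SharedWith"] in BROAD_PRINCIPALS
    ]
```

Items flagged this way are exactly those Copilot can surface to any employee, which is why permission cleanup belongs before rollout, not after.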

Standards and AI governance: ISO 42001 as a foundation

A resilient AI security strategy is based on ISO/IEC 42001:2023 as an AI management system, the NIST AI RMF for risk assessment, the EU AI Act (enforcement from August 2026, with penalties of up to EUR 35 million) and the BSI guidelines on LLM risks. (Sources: ISO; NIST, 12/2025; EU Commission)

With our ISO/IEC 42001 AI gap analysis, we independently assess the maturity level of your AI management system. You receive a profile of strengths and weaknesses, a risk assessment and prioritized measures including quick wins. The analysis can be extended with the NIST AI RMF or the BSI AIC4 framework. We are also happy to support you in setting up such a management system, or in integrating it into an existing ISMS.

3 steps to the secure introduction of AI models

Structured preparation is required to ensure that AI systems such as Copilot or agent-based models can be used safely. Three steps help to identify risks at an early stage and set up governance and security controls in a targeted manner.

  1. Microsoft 365 Security & Copilot Readiness Assessment: Our M365 Security Assessment checks your tenant for authorization gaps, Defender and Purview configurations, sensitivity labels and DLP policies. The integrated Copilot Readiness Assessment also evaluates risks from prompt injection, oversharing and classification gaps.

  2. AI governance according to ISO 42001: Our AI gap analysis evaluates your AI policies, data handling and third-party relationships against the international standard - and provides a roadmap to certification readiness.

  3. Holistic cyber security: As a Microsoft Solutions Partner for 360° Cyber Security with threat protection specialization and an ISO 27001-certified Cyber Defence Center, we cover the entire security cycle - from penetration testing and incident response to security awareness programs.

Strengthening security and governance before AI takes over

The integration of AI agents in Microsoft 365 E7 Frontier changes the requirements for security, data protection and governance.

Organizations should review their configurations and governance structures before activating new features. Our experience from security projects shows that the decisive factor is not another tool, but transparency regarding data access, clear responsibilities and technical guardrails for the cyber-resilient use of AI.

AI Gap Analysis


List of sources
- Microsoft Official Blog: "Introducing the First Frontier Suite", 09.03.2026
- Microsoft 365 Blog: "Copilot Cowork: A new way of getting work done", 09.03.2026
- Microsoft Security Blog: "Security Copilot with M365 E5", 18.11.2025
- Microsoft Learn: "Anthropic as a subprocessor", 2026
- Anthropic: "New offices in Paris and Munich", 11/2025; "EU Code of Practice", 07/2025
- Anthropic Privacy Center: Server locations and EU data residency
- Samexpert: "Microsoft 365 Price Increases July 2026"
- Obsidian Security: "Prompt Injection Attacks", 2025
- Vectra AI: "Shadow AI: risks, costs, and governance", 2025
- Concentric AI / TechRadar: Copilot data access risks, 2025
- ISO/IEC 42001:2023; NIST AI RMF / AI 600-1; EU AI Act


Caption: Image generated with AI
