The Problem Nobody Wants to Talk About
Microsoft 365 Copilot is an amplifier. Whatever state your tenant permissions are in right now — the overshared SharePoint sites, the “Everyone Except External Users” grants from 2019, the shared mailboxes with fifteen people who left two years ago — Copilot will find all of it and cheerfully surface it to anyone who asks.
This isn’t a Copilot bug. It’s by design. Copilot respects your existing permissions model via Microsoft Graph — it only shows users data they already have access to. The problem is that your existing permissions model is almost certainly a mess. Every tenant I’ve audited has had at least one “oh no” moment during the oversharing assessment. Most have had several.
The readiness work isn’t optional. Skip it and you’re handing every licensed user a turbocharged search engine across every permission mistake you’ve ever made.
The good news: Microsoft released an open-source automated readiness assessment tool (announcement) that replaces hours of manual PowerShell work with an API-driven scan that completes in seconds. It evaluates 200+ feature configurations across six service areas with Copilot-specific risk context. This post walks you through how to use it, what the results mean, and the things the tool can’t check for you.
Why Copilot Changes the Security Conversation
Before you run any tool, you need to understand why Copilot makes existing permission problems worse. This isn’t theoretical — it’s the architectural reality of how the system works, and understanding it is the difference between blindly following a checklist and actually protecting your tenant.
How Copilot Accesses Data
Copilot coordinates between the LLM, your tenant’s content (emails, chats, documents, calendar), and the M365 apps the user is working in. All data access goes through Microsoft Graph, and the Semantic Index honours the user’s identity-based access boundary during grounding.
The critical implication:
Copilot only surfaces data that the current user has permission to access. But it makes all of that data discoverable.
Before Copilot, a user with overly broad SharePoint permissions technically had access to the HR site — but they’d never find it because they didn’t know it existed. With Copilot, they ask “what’s the company policy on redundancies?” and Copilot helpfully pulls the answer from the HR site they were never supposed to be reading.
Copilot doesn’t break your permissions. It makes your broken permissions visible.
What Copilot Can and Cannot Access
| Data Source | Copilot Access | Notes |
|---|---|---|
| SharePoint/OneDrive files | Yes — if user has permission | Including files shared via org-wide links the user never explicitly opened |
| Exchange mailboxes | Yes — user’s own + shared mailboxes with Full Access | Can summarise entire shared mailboxes |
| Teams messages | Yes — all Teams/channels the user is a member of | Standard channels expose everything to all members |
| Teams meeting transcripts | Yes — if transcription enabled | Can summarise meetings the user attended |
| Encrypted files (sensitivity labels) | Depends on usage rights | Requires EXTRACT + VIEW rights. Labels without EXTRACT intentionally block Copilot |
| Files with User-Defined Permissions | No | Copilot agents cannot read UDP-encrypted files |
| Data from Graph connectors | Yes — if user has permission | Third-party data ingested via connectors is also surfaced |
The Privacy Commitments Worth Knowing
These come up in every stakeholder conversation, so commit them to memory:
- Prompts, responses, and accessed data are not used to train foundation LLMs. This has been consistent since launch.
- All processing stays within the Microsoft 365 service boundary. Azure OpenAI doesn’t cache customer content.
- Copilot interaction history is stored (prompts + responses + citations) and is subject to eDiscovery, retention policies, and audit logging. Users can delete their own history via the My Account portal.
- EU Data Boundary is respected for EU users. Non-EU customers may have queries processed in the US, EU, or other regions.
The Zero Trust Framing That Actually Matters
Microsoft frames Copilot security through Zero Trust principles, and it maps directly to real risks:
Verify explicitly — A compromised account with Copilot can summarise and exfiltrate data at machine speed. Strong authentication (MFA via Conditional Access, device compliance) isn’t optional — it’s the front door.
Least-privileged access — This is the big one. If users have more access than they need, Copilot amplifies it. Every “Everyone Except External Users” permission, every public Team, every org-wide sharing link becomes a data exposure vector that Copilot will cheerfully exploit on behalf of whoever asks.
Assume breach — If an attacker gets in, how much can Copilot help them? Audit logging, DLP policies, and threat protection determine whether you’ll detect the compromise before they’ve summarised your entire financial reporting folder.
These aren’t just conceptual. They map directly to the six areas the automated tool checks.
The Automated Readiness Assessment
Microsoft’s automated readiness assessment is a Python-based tool that queries your tenant via the Microsoft Graph, Defender, Exchange, and Power Platform APIs and produces a prioritised CSV report of findings.
If you’re an MSP running assessments across multiple tenants, this is a game-changer. Run it on every tenant, aggregate the CSVs, and you’ll quickly see which gaps are universal (most are) and which are tenant-specific.
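The aggregation step can be a few lines of Python. This is a minimal sketch: the `Service Area`, `Feature`, and `Status` column names match the report schema, but the `assessments/` directory layout and filenames are illustrative assumptions, so adjust the glob to wherever you drop each tenant's CSV.

```python
# Aggregate readiness CSVs from multiple tenants to find universal gaps.
# Assumes one m365_recommendations_*.csv per tenant under ./assessments/
# (path layout is illustrative); column names follow the report schema.
import csv
import glob
from collections import Counter

def universal_gaps(csv_paths):
    """Count how many tenants flag each Feature as Warning/Not Configured."""
    gap_counts = Counter()
    for path in csv_paths:
        with open(path, newline="", encoding="utf-8-sig") as f:
            for row in csv.DictReader(f):
                if row["Status"] in ("Warning", "Not Configured"):
                    gap_counts[(row["Service Area"], row["Feature"])] += 1
    return gap_counts

if __name__ == "__main__":
    paths = glob.glob("assessments/*/m365_recommendations_*.csv")
    for (area, feature), n in universal_gaps(paths).most_common(10):
        print(f"{n} tenants: [{area}] {feature}")
```

Sorting by count puts the universal gaps at the top, which is usually where a standardised remediation runbook pays for itself.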
Setup
```bash
# Clone the repo
git clone https://github.com/microsoft/m365-copilot-automated-readiness-assessment.git
cd m365-copilot-automated-readiness-assessment

# Install Python dependencies
pip install -r requirements.txt

# Set up the service principal (first run only)
pwsh setup-service-principal.ps1

# Validate required PowerShell modules
pwsh Check-PSModules.ps1

# Run the assessment — edit params.py for your tenant, or use CLI args
python main.py
```
The tool needs a service principal with appropriate Graph API permissions (the setup script handles this). For Purview and Power Platform data, run the additional collectors (collect_purview_data.ps1, collect_power_platform_and_copilot_studio_data.ps1) — these gather data the Graph APIs don’t expose directly.
Reading the Report
The output is a timestamped CSV (m365_recommendations_[TIMESTAMP].csv) with these columns:
| Column | What It Tells You |
|---|---|
| Service Area | Which of the six areas (M365, Entra ID, Defender XDR, Purview, Power Platform, Copilot Studio) |
| Feature | Specific capability checked |
| Status | Compliant, Warning, or Not Configured |
| Priority | High, Medium, Low |
| Observation | What the tool found in your tenant |
| Recommendation | What to do about it |
Start with the High priority items with Not Configured or Warning status. These are your biggest risks.
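That triage filter is easy to script. A minimal sketch, assuming the column names from the table above (the filename in the commented usage line is illustrative, since the real report name is timestamped):

```python
# Pull the actionable items out of the assessment CSV: High-priority rows
# whose Status is Warning or Not Configured.
import csv

def high_priority_gaps(report_path):
    with open(report_path, newline="", encoding="utf-8-sig") as f:
        rows = list(csv.DictReader(f))
    return [r for r in rows
            if r["Priority"] == "High"
            and r["Status"] in ("Warning", "Not Configured")]

# Illustrative usage (real filenames are timestamped):
# for r in high_priority_gaps("m365_recommendations_20250101.csv"):
#     print(f"[{r['Service Area']}] {r['Feature']}: {r['Recommendation']}")
```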
Understanding What the Tool Checks (And Why You Should Care)
The tool covers six service areas. Here’s what each one means for Copilot security and what to do when things come back red.
Microsoft 365
The tool checks Copilot licence assignments, M365 Apps feature availability, and Teams Premium capabilities.
Why it matters: No qualifying base licence (E3, E5, Business Standard, etc.) means Copilot won’t work. Wrong update channel (Semi-Annual instead of Current/Monthly Enterprise) means Copilot features won’t appear in desktop apps. These are table-stakes prerequisites — get them sorted first because nothing else matters if users can’t actually launch Copilot.
Common finding: Users on Semi-Annual Channel who need to be moved to Current Channel or Monthly Enterprise Channel. This is often a change management conversation with desktop support teams.
Entra ID (Identity)
The tool checks risky user counts, MFA enforcement coverage, Conditional Access policies, B2B guest policies, and sign-in risk policies.
Why it matters: Identity is where Copilot risk starts. A compromised account without MFA gives an attacker a turbocharged search engine across everything that user can access. Conditional Access policies — requiring MFA, device compliance, and blocking risky locations — are the front door to your tenant. If they’re weak, Copilot multiplies the damage.
Common findings:
- MFA policy exists but not all users have actually registered methods (policy ≠ protection)
- Conditional Access policies don’t require device compliance
- Too many Global Admins (should be 2-4 max)
- Security Defaults still enabled instead of proper Conditional Access
- No break-glass accounts, or break-glass accounts not excluded from CA policies
What to prioritise: If MFA coverage isn’t 100%, fix that before enabling Copilot for anyone. An admin account without MFA plus Copilot is a full tenant breach waiting to happen.
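The "policy ≠ protection" gap is measurable: Microsoft Graph's authentication methods report exposes an `isMfaRegistered` flag per user. The sketch below works on the parsed `value` array of that report and leaves the HTTP call and auth out of scope:

```python
# Measure actual MFA registration (not just policy existence) using the
# Graph authentication methods registration report:
#   GET https://graph.microsoft.com/v1.0/reports/authenticationMethods/userRegistrationDetails
# Pass in the parsed 'value' array from that response.

def mfa_coverage(registration_details):
    """Summarise MFA registration from userRegistrationDetails records."""
    total = len(registration_details)
    registered = sum(1 for u in registration_details if u.get("isMfaRegistered"))
    unregistered = [u["userPrincipalName"] for u in registration_details
                    if not u.get("isMfaRegistered")]
    return {
        "total": total,
        "registered": registered,
        "coverage_pct": round(100 * registered / total, 1) if total else 0.0,
        "unregistered": unregistered,  # the list to chase before go-live
    }
```

If `coverage_pct` isn't 100, the `unregistered` list is your pre-Copilot chase list.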
Defender XDR (Security)
The tool checks XDR activation, endpoint coverage, security posture scores, exposure scores, critical vulnerabilities, compromised accounts, and OAuth application risks.
Why it matters: This is the “assume breach” layer. If an attacker gets past your identity controls, Defender XDR is what detects and responds to the compromise. Without endpoint coverage and active threat detection, you won’t know someone is using Copilot to exfiltrate data until it’s too late.
Common findings:
- Defender for Endpoint not deployed to all devices
- Low Secure Score indicating broad security gaps
- Risky OAuth applications with broad Graph permissions that widen the attack surface
What to prioritise: Compromised accounts and critical vulnerabilities are urgent regardless of Copilot. Risky OAuth apps are particularly important because apps with broad permissions (Mail.Read, Files.Read.All) can access the same data Copilot surfaces — a malicious app combined with Copilot amplifies exposure.
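Flagging broad grants can be scripted against an export of your delegated permission grants. In this sketch the record shape loosely mimics Graph's `oauth2PermissionGrants` (delegated grants carry a space-separated `scope` string), but `clientAppName` is a hypothetical convenience field you'd resolve from the client ID yourself, and the watch-list is illustrative, not exhaustive:

```python
# Flag OAuth grants carrying broad Microsoft Graph scopes.
# BROAD_SCOPES is an illustrative watch-list, not a complete one.
BROAD_SCOPES = {"Mail.Read", "Mail.ReadWrite", "Files.Read.All",
                "Files.ReadWrite.All", "Sites.Read.All", "Directory.Read.All"}

def risky_grants(grants):
    """grants: list of dicts with 'clientAppName' (hypothetical convenience
    field) and a space-separated 'scope' string, as in oauth2PermissionGrants."""
    findings = []
    for g in grants:
        broad = sorted(set(g.get("scope", "").split()) & BROAD_SCOPES)
        if broad:
            findings.append((g["clientAppName"], broad))
    return findings
```

Any hit here deserves a manual review: who consented, when, and whether the app still needs that reach.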
Purview (Compliance and Data Protection)
The tool checks DLP policy coverage, sensitivity label adoption, retention policy enforcement, data classification status, and information governance.
Why it matters: This is your runtime control layer. Sensitivity labels classify content; DLP policies enforce what happens when classified content interacts with Copilot. Without labels and DLP, you’re relying entirely on permissions — and we’ve already established those are a mess.
Key things to understand about how Copilot interacts with these controls:
- Sensitivity labels with encryption that grant EXTRACT + VIEW rights: Copilot can process these documents. Without EXTRACT: Copilot cannot — the document may appear in citations but content isn’t used.
- DLP policies for the Copilot location are separate from your existing Exchange/SharePoint DLP. You need rules specifically targeting the “Microsoft 365 Copilot and Copilot Chat” location. Label-based rules and sensitive information type rules must be in separate rules — they can’t be combined.
- Auto-labelling catches content your users forgot to label. Without it, unlabelled sensitive content is invisible to label-based DLP rules and Copilot will surface it freely.
Common findings:
- Sensitivity labels exist but adoption is low (<50% of documents labelled)
- No DLP policies targeting the Copilot location
- No auto-labelling policies (requires E5)
- Retention policies missing for Copilot interaction data
What to prioritise: If you don’t have sensitivity labels deployed at all, start with a simple taxonomy (Public, Internal, Confidential, Highly Confidential) and set a default label on document libraries. Something is dramatically better than nothing. Then add Copilot-specific DLP policies.
Power Platform
The tool checks environment governance, environment-level DLP policies, connector classification, and AI Builder readiness.
Why it matters: Power Platform environments can contain sensitive business data and custom AI models. Without DLP policies at the environment level, connectors can move data between services in ways that bypass your M365 controls. If you’re deploying Copilot Studio agents, this is where governance for custom agents lives.
Common finding: Default environment has no DLP policy — meaning any user can build flows that connect to any connector. Lock this down.
Copilot Studio
The tool checks agent licensing, custom agent deployment readiness, authentication configurations, conversation analytics, and transcript retention.
Why it matters: If your organisation is building custom Copilot agents, these agents inherit the same data access model as M365 Copilot — and they can also access external data via connectors. Authentication configuration determines who can use agents and what they can access.
What to prioritise: If you’re not deploying custom agents yet, this section is informational. If you are, ensure authentication is configured and transcript retention policies are set.
What the Tool Doesn’t Check
The automated assessment is excellent for configuration posture, but it has blind spots. It checks whether policies exist, not whether your environment has specific oversharing problems. These areas need manual investigation.
SharePoint Oversharing
This is still the number one risk, and no automated scan catches all of it.
What to look for:
- Sites where “Everyone Except External Users” (EEEU) has been granted permissions — this is the single highest-impact oversharing vector
- Active “Anyone” (anonymous) sharing links, especially on sensitive sites
- Sites with sharing settings more permissive than the tenant default
- Orphaned sites with no active owner that still have broad permissions
- OneDrive files shared with “People in your organisation” by default
How to find it: Activate SharePoint Advanced Management (included with Copilot licences) and run the Data Access Governance reports from the SharePoint Admin Centre. These reports surface EEEU permissions, “Anyone” links, and site-level sharing overrides.
Microsoft’s oversharing blueprint provides a phased remediation approach (Pilot → Deploy → Operate) — follow it.
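If you export the Data Access Governance reports to CSV, the triage can be scripted. The column names below (`Site URL`, `EEEU`, `Anyone links`) are hypothetical placeholders, so adjust them to whatever the real export uses:

```python
# Rank sites from a Data Access Governance export by oversharing exposure.
# Column names here are hypothetical; map them to the actual CSV export
# from the SharePoint Admin Centre.
import csv

def rank_overshared_sites(dag_csv_path):
    with open(dag_csv_path, newline="", encoding="utf-8-sig") as f:
        rows = list(csv.DictReader(f))
    flagged = [r for r in rows
               if r.get("EEEU") == "Yes" or int(r.get("Anyone links", 0)) > 0]
    # Worst first: EEEU grants outrank "Anyone" link counts
    return sorted(flagged,
                  key=lambda r: (r.get("EEEU") == "Yes",
                                 int(r.get("Anyone links", 0))),
                  reverse=True)
```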
Quick wins:
- Set tenant-level sharing to “New and existing guests” or more restrictive (disable “Anyone” links)
- Set OneDrive default sharing to “Specific people” with “View” permission
- Enable Restricted Content Discovery (RCD) on sensitive sites so Copilot only surfaces content to users who’ve previously visited the site
- Enable Restricted Access Control (RAC) on your most sensitive sites (HR, finance, legal) to override all other access
Exchange Online
The tool doesn’t deeply inspect mailbox permissions. Watch for:
- Shared mailboxes with stale Full Access grants from former employees or role changes
- External forwarding rules (admin-configured or user Inbox rules) that effectively leak Copilot-summarised content
- Transport rules that BCC or redirect to external addresses
Teams Governance
The tool doesn’t assess individual Teams or channels. Check for:
- Public Teams that should be Private
- Org-wide Teams containing anything beyond all-staff announcements
- Uncontrolled Teams creation leading to sprawl
- Standard channels being used for sensitive discussions (should be Private channels)
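The public-Teams check is scriptable against a Graph groups listing. A hedged sketch: `visibility` is the real group property, but `memberCount` is a convenience field you'd populate yourself (Graph returns member counts via a separate request), and the threshold is arbitrary:

```python
# Flag public Teams from a Graph groups listing. Teams-backed groups carry
# a 'visibility' of 'Public' or 'Private'; 'memberCount' is a convenience
# field assumed to be pre-populated from a separate count request.
def public_teams(groups, min_members=0):
    """Return names of Public teams with at least min_members members."""
    return [g["displayName"] for g in groups
            if g.get("visibility") == "Public"
            and g.get("memberCount", 0) >= min_members]
```

Raising `min_members` is a cheap way to review the biggest exposure first.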
Business Readiness
No tool checks this, and it’s where rollouts actually succeed or fail:
- AI Usage Policy — What data can be used in prompts, how to handle output, prohibited uses, escalation process. Not optional for regulated industries.
- Pilot Group — Start with 5-10% of users, mixed roles. Use the M365 Admin Centre readiness report (Reports > Usage > Microsoft 365 Copilot > Readiness) to identify strong candidates.
- Training — Role-specific training, prompt libraries, feedback channel. Users who get value in the first two weeks will champion it. Users who don’t will write it off permanently.
The Recommended Workflow
Here’s how to run this in practice:
Week 1: Assess
- Run the automated readiness assessment against the tenant
- Triage the CSV — High priority items first
- Activate SharePoint Advanced Management and run Data Access Governance reports
- Run the oversharing blueprint’s discovery steps
Week 2: Remediate
- Fix identity gaps (MFA, Conditional Access, admin role cleanup)
- Remediate SharePoint oversharing (EEEU cleanup, sharing settings, RCD/RAC on sensitive sites)
- Deploy or review sensitivity labels
- Create Copilot-specific DLP policies
Week 3: Pilot
- Define pilot group and review their access scope via SAM
- Deploy AI usage policy
- Enable Copilot for pilot users
- Enable DSPM for AI monitoring (E5) or configure audit log alerts
Ongoing: Operate
- Re-run the automated assessment periodically to track progress
- Monitor DSPM alerts and refine DLP policies
- Expand to broader user groups in waves
- Establish quarterly access reviews as operational hygiene
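Tracking progress between runs can be a simple diff of two report CSVs, keyed on Service Area and Feature (column names from the report schema above):

```python
# Diff two assessment runs (older vs newer CSV) to show which findings
# changed status, e.g. Not Configured -> Compliant after remediation.
import csv

def load_statuses(path):
    with open(path, newline="", encoding="utf-8-sig") as f:
        return {(r["Service Area"], r["Feature"]): r["Status"]
                for r in csv.DictReader(f)}

def status_changes(old_path, new_path):
    """Return {(area, feature): (old_status, new_status)} for changed items."""
    old, new = load_statuses(old_path), load_statuses(new_path)
    return {key: (old[key], new[key])
            for key in old.keys() & new.keys()
            if old[key] != new[key]}
```

A shrinking changes dict with everything moving toward Compliant is the trend you want to show stakeholders each quarter.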
Licence Matrix for Readiness Features
| Feature | E3 | E5 | Business Premium |
|---|---|---|---|
| Manual sensitivity labels | Yes | Yes | Yes |
| Auto-labelling | No | Yes | No |
| DLP (Copilot location) | Yes | Yes | Yes |
| Audit (Standard) | Yes | Yes | Yes |
| Audit (Premium) + Copilot events | No | Yes | No |
| SharePoint Advanced Management | Included with Copilot licence | Yes | Included with Copilot licence |
| Conditional Access | P1 incl. | P2 incl. | P1 incl. |
| PIM (just-in-time admin) | No | P2 incl. | No |
| DSPM for AI | No | Yes | No |
References
- Microsoft — Automated Readiness Assessment for M365 Copilot — The open-source assessment tool
- Microsoft Tech Community — Accelerating Copilot Adoption with Automated Readiness Assessment — Official announcement and partner context
- Microsoft Learn — Data, Privacy, and Security for Microsoft 365 Copilot — Official data flow and permissions documentation
- Microsoft Learn — Security for Microsoft 365 Copilot — Defence-in-depth approach and security architecture
- Microsoft Learn — Blueprint for Oversharing — Phased oversharing remediation guide
- Microsoft Learn — Zero Trust for Microsoft 365 Copilot — Seven-layer protection framework
- Microsoft Learn — DLP for Microsoft 365 Copilot — Copilot-specific DLP location and rules
- Microsoft Learn — SharePoint Advanced Management — SAM features including RCD, RAC, and data access governance