What Changed: AI Can Now Act Directly Inside Your Environment
For the past couple of years, AI tools like ChatGPT and Microsoft Copilot could talk about your work, but they couldn’t actually touch it. You’d paste in a document, get a summary, maybe draft an email. Useful, but contained. The AI never left the chat window.
That’s changed.
A new category of AI tool has emerged that can operate directly on your computer – reading files, opening applications, browsing the web, editing spreadsheets, and executing multi-step tasks on your behalf. These aren’t chatbots. They’re autonomous agents that take real action on real systems.
And as of April 2026, one of them can be granted read access to your organization’s Outlook, OneDrive, SharePoint, and Teams data – by any employee, on any plan, including free.
What These Tools Actually Do
Traditional AI tools work inside a sandbox. You give them input, they give you output – and the model can’t reach anything else on your system.
Desktop AI agents work differently. You give them a goal, they plan a sequence of steps and execute it. Sort and rename hundreds of files. Read a stack of receipts and build an expense report. Draft a briefing document by pulling from multiple sources across your environment. The AI isn’t advising you – it’s doing the work.
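Under the hood, these agents run a loop: a model proposes the next step, the host executes it with real tools (file system, browser, applications), and the result feeds the next decision. Here’s a deliberately toy sketch of that pattern in Python – the planner and tools are stand-ins for illustration, not any vendor’s actual implementation:

```python
from dataclasses import dataclass
from typing import Callable
import os

@dataclass
class Action:
    name: str   # which tool to invoke, or "done"
    args: dict  # arguments for that tool

def run_agent(goal: str, tools: dict, planner: Callable) -> None:
    """Agent loop: ask the planner for a step, execute it with a real tool, repeat."""
    history = [f"goal: {goal}"]
    while True:
        action = planner(history)                    # in a real agent, a model call
        if action.name == "done":
            return
        result = tools[action.name](**action.args)   # real side effects happen here
        history.append(f"{action.name}({action.args}) -> {result}")

# Toy stand-ins for the model and the tool layer:
tools = {"list_dir": lambda path: os.listdir(path)}
steps = iter([Action("list_dir", {"path": "."}), Action("done", {})])
run_agent("inventory my files", tools, planner=lambda history: next(steps))
```

The detail that matters for everything below: the loop executes whatever the planner decides, and the planner is a language model that reads untrusted content.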
Anthropic launched Claude Cowork in January 2026, giving its AI the ability to access local files and applications on a user’s desktop. Microsoft followed with Copilot Cowork, which integrates with the full Microsoft 365 environment – email, calendar, Teams, SharePoint, and OneDrive – and runs in the cloud.
In April 2026, Anthropic extended its Microsoft 365 connector to every Claude plan, including free accounts. Any user with a Microsoft Entra account can now connect Claude directly to their organization’s Microsoft 365 data from outside Microsoft’s ecosystem entirely.
The productivity potential is real. So are the risks.
The Security Risks Aren’t Theoretical
Within days of Claude Cowork’s January 2026 launch, security researchers demonstrated that the tool could be manipulated into silently exfiltrating sensitive files from a user’s computer.
The attack worked through indirect prompt injection – a malicious instruction hidden inside an ordinary-looking Word document. When the AI processed the file, it followed the hidden instruction and uploaded confidential data to an attacker’s server via Anthropic’s own API – a whitelisted endpoint, so the transfer bypassed the tool’s network restrictions. The malicious instructions were hidden as white text in a one-point font – effectively invisible to a human reviewer. The exfiltrated files included financial figures and partial social security numbers.
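There’s no reliable way to sanitize every document an agent might read, but this particular hiding technique is detectable. Here’s a minimal sketch, using the python-docx library, of the kind of pre-screening a review pipeline could run – the size threshold is an illustrative assumption, and it catches only this one trick:

```python
from docx import Document  # pip install python-docx
from docx.shared import RGBColor

WHITE = RGBColor(0xFF, 0xFF, 0xFF)

def flag_hidden_runs(path):
    """Flag text runs that are near-invisible: tiny font or white color.

    Checks only direct run formatting (styles/themes are not resolved),
    so treat an empty result as 'nothing obvious', not as 'safe'.
    """
    findings = []
    for para in Document(path).paragraphs:
        for run in para.runs:
            if not run.text.strip():
                continue
            size_pt = run.font.size.pt if run.font.size is not None else None
            rgb = run.font.color.rgb if run.font.color.type is not None else None
            is_tiny = size_pt is not None and size_pt <= 2   # the demo used 1 pt text
            if is_tiny or rgb == WHITE:
                findings.append((run.text[:60], size_pt, rgb))
    return findings

for text, size, rgb in flag_hidden_runs("incoming.docx"):
    print(f"suspicious run: {text!r} (size={size} pt, color={rgb})")
```

Pre-screening like this is a mitigation, not a fix: injected instructions can also hide in metadata, images, or perfectly visible text that only a model would act on.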
This wasn’t theoretical. It was demonstrated working against both Claude models. The underlying vulnerability had been reported months before Cowork shipped and was not fixed before launch.
Anthropic’s initial response advised users to “avoid granting access to local files with sensitive information” and to watch for “suspicious actions” – guidance security researcher Simon Willison publicly criticized as unreasonable, given the tool is marketed as a productivity assistant for everyday office workers.
Related vulnerabilities have since surfaced across the broader AI agent ecosystem: remote code execution flaws in other AI tools, malicious plugins in AI agent marketplaces, and exposed integration servers running with no authentication. The pattern is consistent: AI agent tools are shipping faster than their security architecture can mature.
The Shadow IT Problem Just Got Significantly Worse
Here’s where this becomes particularly relevant for Canadian businesses.
Claude Cowork requires a paid subscription starting at $20 per month. An employee can download the app, subscribe with a personal credit card, grant it access to their work files, and start using it – all without IT ever knowing.
The April 2026 Microsoft 365 connector announcement takes this further. The integration is now available on free accounts, using delegated Microsoft Graph permissions. Claude accesses whatever the signed-in user can access. If an employee has broad access to shared drives, client folders, or internal communications, Claude now has read access to all of it.
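To make “delegated permissions” concrete: any client holding a delegated Graph token sees exactly what that user sees, no more and no less. A minimal sketch of what that surface includes, assuming a read-scoped token is already in hand (token acquisition and scopes are glossed over here):

```python
import requests

# Placeholder: a delegated Microsoft Graph access token for the signed-in user
# (obtained out of band, e.g. via an MSAL device-code flow with read scopes).
TOKEN = "<delegated-access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
GRAPH = "https://graph.microsoft.com/v1.0"

# Everything other people have shared with this user is in scope too -
# a connected agent reads all of it with the same token.
resp = requests.get(f"{GRAPH}/me/drive/sharedWithMe", headers=HEADERS)
resp.raise_for_status()
for item in resp.json().get("value", []):
    remote = item.get("remoteItem", {})
    print(item.get("name"), "->", remote.get("webUrl", "(no url)"))
```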
The critical governance question: does your Microsoft 365 tenant allow users to consent to third-party application integrations on their own – or does it require admin approval?
Many organizations, particularly smaller ones, still have the default Entra setting that allows users to grant consent to third-party apps without administrator involvement. In that configuration, a staff member could connect Claude to your tenant today – granting it access to SharePoint, Outlook, and Teams data – without anyone in IT or leadership approving, reviewing, or even being aware of the connection.
The connector is read-only, which limits some risk. But read access to your email threads, Teams conversations, SharePoint documents, and OneDrive files is still substantial access – and it flows outside your Microsoft 365 trust boundary to Anthropic’s infrastructure. In an environment where prompt injection attacks have already been demonstrated in the wild, that matters.
What To Do About It
The goal isn’t to block AI adoption – it’s to make sure it happens with guardrails rather than without them.
Review your Entra app consent settings now. This is the single most actionable step. If your Microsoft 365 tenant allows users to consent to third-party applications without admin approval, switch to admin-only consent or configure a review workflow. This one change is the difference between knowing what’s connected to your tenant and finding out after something goes wrong.
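The effective setting lives on the tenant’s authorization policy, which is readable via Microsoft Graph. A minimal sketch of the check, assuming a token with the Policy.Read.All permission is available – interpret the returned policy IDs against Microsoft’s documentation rather than this simplified output:

```python
import requests

# Placeholder token with the Policy.Read.All permission (e.g. via MSAL).
TOKEN = "<token-with-Policy.Read.All>"
resp = requests.get(
    "https://graph.microsoft.com/v1.0/policies/authorizationPolicy",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
assigned = resp.json()["defaultUserRolePermissions"]["permissionGrantPoliciesAssigned"]

# An empty list means user consent is disabled (admin approval required).
# "ManagePermissionGrantsForSelf.microsoft-user-default-legacy" means users
# can consent to any app requesting any permission - the risky default.
print("Consent policies assigned to users:", assigned or "none (admin-only)")
```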
Tighten file and application permissions. AI agents inherit whatever access the user has. If employees have broader access to shared drives or sensitive folders than their role actually requires, an AI agent operating on their behalf inherits all of that exposure. Clean permissions are a prerequisite for safe AI deployment – not just good practice.
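That exposure is auditable. Here’s a sketch that lists who holds permissions on the top-level items of one SharePoint site’s document library – SITE_ID is a placeholder, and a real audit would page through results and recurse into folders:

```python
import requests

TOKEN = "<token-with-Sites.Read.All>"   # placeholder delegated or app token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
GRAPH = "https://graph.microsoft.com/v1.0"
SITE_ID = "<site-id>"                   # placeholder SharePoint site ID

# List who holds permissions on each top-level item in the site's default
# document library.
items = requests.get(f"{GRAPH}/sites/{SITE_ID}/drive/root/children", headers=HEADERS)
items.raise_for_status()
for item in items.json().get("value", []):
    perms = requests.get(
        f"{GRAPH}/sites/{SITE_ID}/drive/items/{item['id']}/permissions",
        headers=HEADERS,
    ).json()
    grantees = [
        p.get("grantedToV2", {}).get("user", {}).get("displayName")
        or p.get("link", {}).get("scope", "link")
        for p in perms.get("value", [])
    ]
    print(item["name"], "->", grantees)
```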
Establish a clear AI use policy. Employees need to know which tools are approved for business use, what data can and cannot be processed through them, and what to do when they’re unsure. Clarity removes the ambiguity that leads to shadow IT.
Control what gets installed on endpoints. Desktop AI agents are a fundamentally different risk category than a browser-based chat tool. Application installation on company devices, particularly autonomous agent software, should require explicit IT approval.
Choose the right deployment model. For organizations already in the Microsoft 365 ecosystem, Microsoft’s Copilot Cowork offers enterprise-grade governance, compliance boundaries, and audit capabilities that consumer tools don’t. For regulated industries or businesses handling sensitive client data, that distinction matters considerably.
Educate your team. The biggest risk with these tools isn’t the technology itself – it’s users who don’t understand what they’ve granted access to or how the tools can be manipulated. Basic training doesn’t need to be technical. It needs to cover what these tools actually do, why access permissions matter, and when to loop in IT.
The Bottom Line
Desktop AI agents are not a future concern. They’re available today, they’re free or low-cost, and employees at your organization may already be using them.
The question isn’t whether these tools will show up in your environment – it’s whether they’ll show up with governance or without it. The difference between those two scenarios is the difference between a productivity gain and a security incident.
If you’re not sure where your organization stands on AI governance, endpoint controls, or Microsoft 365 tenant security settings, we’re happy to help you work through it.
Because the best time to have this conversation is before an AI agent makes the decision for you.