1. What happened to all the data Copilot trained on that was confidential? How is that data separated and deleted from the model’s training? How can we be sure it’s gone?
2. This issue was found; unfortunately without a much better security posture from Microsoft, we have no way of knowing what issues are currently lurking that are as bad as, if not worse than, what happened here.
There’s a serious fundamental flaw in the thinking and misguided incentives that led to “sprinkle AI everywhere”. Instead of taking a step back and rethinking that approach, we’re going to get pieced-together fixes and still be left with the foundational problem that everyone’s data is just one prompt injection away from being taken, whether it’s labeled as “secure” or not.
I'd add (3) - a DLP policy is apparently ineffective at its purpose: monitoring data sharing between machines. (https://learn.microsoft.com/en-us/purview/dlp-learn-about-dl...).
Directly from the DLP feature page:
> DLP, with collection policies, monitors and protects against oversharing to Unmanaged cloud apps by targeting data transmitted on your network and in Microsoft Edge for Business. Create policies that target Inline web traffic (preview) and Network activity (preview) to cover locations like:
> OpenAI ChatGPT—for Edge for Business and Network options
> Google Gemini—for Edge for Business and Network options
> DeepSeek—for Edge for Business and Network options
> Microsoft Copilot—for Edge for Business and Network options
> Over 34,000 cloud apps in the Microsoft Defender for Cloud Apps cloud app catalog—Network option only
/Offtopic
Yes, MSFT's DLP/software malfunctioned, but getting users to MANUALLY classify things as confidential is already an uphill battle. Manual labels only work for the rare subset of people who are even aware of and compliant with NDAs/confidentiality agreements!
They have significant experience in this. Microsoft software since 2014 has, for the most part, also been paraphrased from other people's code they find lying around online.
It depends. E.g. OpenAI says: "By default, we do not train on any inputs or outputs from our products for business users, including ChatGPT Team, ChatGPT Enterprise, and the API."[0]
[0] https://openai.com/policies/how-your-data-is-used-to-improve...
That was pretty funny and explains a lot.
I wish I could do more :(
Instead I always break things when I paraphrase code without the GeniusParaphrasingTool
While I couldn’t have predicted the future, even classic data mining posed a risk.
It is just reality that if you give a third party access to your data, you should expect them to use it.
It is just too tempting a value stream, and legislation just isn’t there to avoid the EULA trap.
I was targeting a market where a fraction-of-a-percent advantage mattered, which did drive what was, at the time, labeled my paranoia.
Microsoft releasing overly ambitious features with disastrous consequences.
Apple releasing features so unambitious it's hard to remember they're there.
Big tech is reaping what they've sown in a very satisfying way.
Half of the time it's open user hostility and blatant incompetence. The other half it's just the incompetence. Ambition doesn't enter the picture at all.
How is having Copilot breach trust and privacy an “advisory”? Am I missing something?
Unfortunately "Advisory" is a report written about a security incident, like an official statement about the bug, it's impact, and how to fix it -- which differs from the english meaning... it's not meant to mean to "advise" people or to "take something" under "advisory" (which, is a very soft statement typically).
The basic distinction in the infosec industry is that advisories are what you publish to tell customers that you had a bug in your product that might have exposed them or their data to attacks, and you want them to take some specific action (e.g., upgrade a package, review logs); while an incident report is what you publish when you know that the damage happened, it involved your infrastructure, and you want to share some details about what happened and how you're going to prevent it from happening again.
Because the latter invites a lot more public attention and regulatory scrutiny, a company like Microsoft will go out of their way to stick to advisories whenever possible (or just keep incidents under wraps). It might have happened at some points in their history, but off the top of my head, I don't recall Microsoft ever publishing a first-party security incident report.
An advisory gives notice and/or warns about something, and may give recommendations on possible actions (but doesn’t have to).
So, yes, technically it's a de facto advisory to publish this information, but assigning "advisory" as a severity tag here is questionable.
What's the actual action needed here by a security team? None. You can hate it or not care, but at the end of the day there's no remediation or imminent harm, just a potential issue with DLP policies. Don't make it look like a 0-day that they actually have to deal with.
The bug is fixable, but the underlying tension—giving AI tools enough permissions to help while respecting confidentiality boundaries—will keep surfacing in different forms as these tools become more capable.
We're essentially retrofitting permission models designed for human users onto AI agents that operate very differently.
In traditional access control, the pattern is: user requests data -> permissions checked -> data returned or denied. The model never sees unauthorized data.
With Copilot and most LLM agents today, the pattern is: user asks question -> model retrieves broadly -> sensitivity label checked as a filter -> model generates answer. The label-checking happens after the data is already in the model's context.
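To make that concrete, here is a minimal sketch of that shape. It is illustrative only: `search_index` and `llm` are hypothetical stand-ins for whatever retrieval and model interfaces a stack provides, not Copilot's actual code.

    # Illustrative sketch of generation-time filtering.
    # `search_index` and `llm` are hypothetical interfaces.
    def answer_with_generation_time_filter(question, search_index, llm):
        # 1. Retrieve broadly -- no permission check at this stage.
        documents = search_index.search(question, top_k=20)

        # 2. Everything retrieved goes into the prompt, labels and all.
        context = "\n\n".join(
            f"[label: {doc.label}]\n{doc.text}" for doc in documents
        )

        # 3. The "filter" is an instruction applied at generation time.
        #    If it is ever ignored or mis-applied, the confidential text
        #    is already sitting in the model's context.
        prompt = (
            "Do not use or reveal any content labeled Confidential.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return llm.complete(prompt)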
That's the bug waiting to happen, label system or not. You can't reliably instruct a model to 'ignore what you just read.'
The pattern that actually works - and I've had to build this explicitly for agent pipelines - is pre-retrieval filtering. The model emits a structured query (what it needs), that query gets evaluated against a permission layer before anything comes back, and only permitted content enters the context window. The model architecturally can't see what it's not allowed to see.
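A rough sketch of the same flow with the check moved in front of retrieval, again with hypothetical `search_index`, `permissions`, and `llm` stand-ins rather than any particular vendor's API:

    # Illustrative sketch of pre-retrieval filtering: the permission layer
    # scopes retrieval before anything can reach the model's context window.
    def answer_with_pre_retrieval_filter(question, user, search_index, permissions, llm):
        # 1. The model emits a structured query describing what it needs.
        query = llm.complete(
            f"Rewrite this question as a short search query: {question}"
        )

        # 2. The query is evaluated against the permission layer up front;
        #    documents the user is not allowed to read are never returned.
        documents = search_index.search(
            query,
            top_k=20,
            filter=permissions.readable_by(user),  # e.g. an ACL / label predicate
        )

        # 3. Only permitted content ever enters the context window.
        context = "\n\n".join(doc.text for doc in documents)
        return llm.complete(f"Context:\n{context}\n\nQuestion: {question}")

The two sketches assume the same interfaces; the only thing that moves is where the deny decision happens.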
The DLP label approach is trying to solve a retrieval problem with a generation-time filter. It's a category error, and it'll keep producing bugs like this one regardless of how good the label detection gets.
I work in one of the special legal jurisdictions; such a fubar would normally mean banning the product from the company for good. It's micro$oft, so unfortunately that's not possible yet, but oh boy are they digging their grave with this kind of public incompetence, with horrible handling of the situation on top of it. For many companies, this is a top priority right behind ensuring enough cash flow, not some marginal regulatory topic. Dumb greedy amateurs.
Trusted operating system Mandatory Access Control where art thou?
100 nasty bugs in the code
100 bugs in the code
Take one down
Patch it around
-127 nasty bugs in the code
You guys need to read the actual manifestos these AI leaders have written. And if not them, then read the propagandist stories they have others write, like The Overstory by Richard Powers, which is an arrogant pile of trash that culminates in the moral:
humans are horrible and obsolete and all should die and leave the earth for our new AI child
Which is of course, horseshit. They just want most people to die off, not all. And certainly not themselves.
They don't care about your confidential information, or anything else about you.
I guess everyone just ended up agreeing with Cypher, after all...