Can You Trust Meta?
TLDR: no.
Meta is once again in the middle of a privacy scandal, and this time it is about its AI glasses. A recent investigation by Swedish newspapers, followed by a US class action lawsuit, alleges that workers at a Meta subcontractor in Kenya reviewed highly sensitive footage captured by users’ glasses, including nudity, sex, bathroom visits, and even visible financial information.
If you handle confidential work, that should get your attention fast.
What happened?
According to the joint investigation by Svenska Dagbladet and Göteborgs-Posten, workers at Sama, a Meta subcontractor in Nairobi, said they reviewed intimate images, videos, and transcripts connected to Meta’s AI glasses. The reporting describes workers seeing naked bodies, bathroom scenes, sexual activity, bank cards, and private conversations. The reporters also say faces were supposed to be blurred, but workers said that anonymization did not always work.
The Verge summarized the same investigation by saying Meta’s AI glasses could be sending sensitive footage to human reviewers in Kenya, and TechCrunch reported that a proposed class action lawsuit followed shortly after, accusing Meta of false advertising and privacy violations.
That is the scandal.
Why this matters more than “just smart glasses”
The deeper issue is not only the glasses. It is the trust model.
Meta’s own AI terms say that, in some cases, it may review your interactions with AIs, including the content of your conversations or messages, and that this review may be automated or manual, meaning human.
Meta’s own voice privacy notice for its AI glasses also says that text transcripts and audio recordings of your voice interactions are stored by default to help improve Meta’s products. Reporting on Meta’s 2025 policy changes says the company removed the option to prevent voice recordings from being stored in the cloud, and that recordings can be kept for up to a year to improve products, or 90 days if the interaction appears accidental.
That means the issue is bigger than one bad headline. The company’s own product design and policy language already point in the same direction: your content is not something you should assume stays private by default.
The part professionals should care about
If you are a professional, the practical lesson is brutal:
Do not assume content passing through Meta services is private enough for confidential work.
The Swedish investigation found that the glasses require an internet connection and send data to Meta's servers for AI features to work. The reporters also observed network traffic going to Meta's servers even after declining the optional data sharing for product improvement.
That does not mean every single photo, message, or voice clip is manually watched by a Meta employee. It does mean you should not build a professional privacy model around “I’m sure nobody sees this.” Meta’s own terms allow automated or human review in some cases, and its own notices say some voice data is stored by default to improve products.
If your work involves:
client documents
legal material
financials
HR files
strategy notes
personal archives
then “probably private” is not good enough.
So can you trust Meta?
If your standard is convenience, maybe.
If your standard is privacy, especially for professional work, then no. Not after repeated privacy controversies. Not after a fresh scandal involving intimate footage reviewed by offshore contractors. Not when the company’s own terms say interactions may be reviewed by humans. Not when voice recordings are stored by default to improve products.
That is not a privacy-first model. It is a data-hungry model with lots of caveats.
Privacy is freedom
The safest move is to reduce how much trust you need.
If you want AI help with your files, the better path is keeping that work on your own machine. On Mac, that is where Fenn fits.
Fenn is Private AI that finds any file on your Mac. It indexes locally, searches inside PDFs, docs, slides, screenshots, scans, audio, and video, and helps you work with your files without shipping your confidential corpus to Meta, OpenAI, Google, or Anthropic.
That is what privacy looks like in practice.
Not a promise. Control.
And you don't even have to trust us: once the AI models are downloaded to your Mac, you can use Fenn 100% offline.
