Can You Trust Anthropic (Claude)?

Mar 3, 2026


Claude is one of the best AI assistants for professional work.

But “great product” is not the same thing as “trusted infrastructure.”

If you are a professional handling confidential files, the trust question is simple: when you send your work to Claude, who is in control?

1) The anti-open-model stance, and the “we should decide” vibe

Anthropic’s CEO, Dario Amodei, has repeatedly dismissed the “open source” framing for AI models and argued that “open weights” are not the same as open source software. In one widely shared interview, he calls open source a “red herring,” basically arguing that you can’t “see inside” models the way you can inspect source code.

You can read that as a technical point. But it also lands as a philosophy: he believes powerful models should not be put in everyone’s hands, and that the people building them should have the power to decide how they are used.

That is where trust gets uncomfortable, because it implies a world where a few companies decide what you are allowed to run, and what you are allowed to ask.

2) Why this is anti-science

Here’s the part that feels backwards.

Without open research culture, open tooling, and scientists sharing breakthroughs, there is no modern AI boom. There is no Anthropic. There is no OpenAI.

The AI world is built on decades of shared academic progress and open publication norms. So when leaders argue that models should stay centralized “for safety,” it can read as rewriting history: benefiting from openness, then closing the doors once you’re on top.

3) Why this looks like regulatory capture

Regulatory capture is when an industry pushes rules that conveniently entrench incumbents and make competition harder.

That accusation is not coming out of nowhere. In the public feud between Nvidia and Anthropic, Nvidia’s side explicitly used “regulatory capture” language to describe the idea of restricting open models, and argued it would stifle open-source collaboration and competition.

Claude is a top-tier commercial model. The rise of free and open models that people can run themselves is an obvious threat to any cloud subscription business.

So when you hear “open models enable bioweapons” and “open models enable cyberattacks,” remember that the same argument also happens to defend Anthropic’s business model.

4) The distillation accusations, and naming DeepSeek, Moonshot, MiniMax

Last week, Anthropic published a post claiming it detected “industrial-scale” distillation campaigns and explicitly named DeepSeek, Moonshot, and MiniMax. Anthropic says these campaigns generated over 16 million exchanges via around 24,000 fraudulent accounts, aiming to extract Claude’s capabilities to improve other models.

In other words: Anthropic can see who is using its models and for what purposes, and it can review the data closely enough to make these accusations.

5) Why this is cheeky, given Anthropic’s own training-data controversy

Here’s where trust gets messy.

Anthropic agreed to a $1.5B settlement in a major copyright case where authors alleged Anthropic used pirated books for training, with reporting citing hundreds of thousands of books involved (figures vary by outlet and court coverage).

You do not need to litigate every detail to see the optics problem:

Anthropic is loudly condemning “stealing model capability,” while it has faced credible legal claims about how its own training data was acquired, and chose to settle at a huge number.

That is not a trust killer by itself, but it should make you skeptical of moral grandstanding.

6) “How did they know?” and the privacy implication

Which raises the key question: how did Anthropic feel comfortable naming specific labs?

Anthropic’s post describes identifying patterns across fraudulent accounts and large-scale usage.

Attribution like that can come from many signals, not necessarily “reading your prompts.” It can involve account networks, payment trails, IP behavior, usage fingerprints, and enforcement telemetry.
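
To make that concrete, here is a toy sketch of what metadata-only attribution could look like: clustering accounts by shared infrastructure signals. Every field name, the signals chosen, and the threshold are hypothetical; this illustrates the concept, not Anthropic’s actual enforcement pipeline.

```python
# Toy sketch of metadata-only attribution. NOT Anthropic's method;
# all field names and the threshold are hypothetical.
from collections import defaultdict

def find_coordinated_clusters(events, min_accounts=100):
    """Group accounts that share infrastructure fingerprints.

    `events` is an iterable of dicts such as:
      {"account": "acct_123",
       "ip_prefix": "203.0.113",
       "payment_hash": "p9",
       "client_fingerprint": "sdk-1.4/linux"}
    """
    clusters = defaultdict(set)
    for e in events:
        # Accounts sharing the same IP prefix, payment instrument, and
        # client fingerprint look coordinated -- no prompt content needed.
        key = (e["ip_prefix"], e["payment_hash"], e["client_fingerprint"])
        clusters[key].add(e["account"])
    # Flag fingerprints shared by suspiciously many accounts.
    return {k: v for k, v in clusters.items() if len(v) >= min_accounts}
```

The point of the sketch is simply that a provider can build a credible attribution case from account and billing metadata alone, whether or not it ever reads your conversations.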

But here’s the practical privacy takeaway that matters for professionals:

When you use a cloud AI service, the provider has visibility into your usage at least at the metadata level, and often retains conversation data for some period depending on product and settings. Anthropic’s own privacy documentation describes data retention windows and how some data may be retained longer in de-identified form.

So if you are putting confidential work into Claude, you are trusting:

  • their retention rules,

  • their internal access controls,

  • their security,

  • their legal exposure,

  • and their incentives tomorrow.

That is a lot of trust to require from a single vendor.

Privacy is freedom

You do not need to believe Anthropic is evil to decide the cloud is the wrong place for sensitive work.

The sane default for professionals is reducing how much trust you need.

That means keeping your confidential corpus on your own machine whenever possible.

On Mac, that is exactly what Fenn is for: Private AI that finds any file on your Mac. It indexes locally, searches inside your PDFs, docs, slides, screenshots, scans, audio, and video, and opens the exact page or timestamp, without shipping your documents to Claude, OpenAI, or Google.

You don't even need to trust us: once the AI models are downloaded, you can cut your Wi-Fi and keep working.

Privacy is freedom because it keeps your workflow working, even when vendors change their policies, incentives, or priorities.