ChatGPT is not private, Fenn is

Dec 9, 2025

A US court recently ordered OpenAI to hand over millions of anonymized ChatGPT conversations in a copyright case with the New York Times. OpenAI pushed back and said this would expose users’ private chats. The Times replied that no one’s privacy was at risk because logs would be anonymized and under a legal protective order.

Both things can be true at the same time.

  • Your ChatGPT chats are stored on someone else’s servers

  • In legal disputes, those logs can be requested

  • You probably click “I agree” and forget this part

For casual use, that risk is acceptable for many people. For confidential work, it is not.

With Fenn, it is different. Fenn chat runs on your Mac, uses AI models on Apple silicon, and keeps your conversations and files on device. No transcripts are stored on a vendor server. There is nothing for another company to hand over.

Use ChatGPT, Gemini, or Claude for the open web. Use Fenn when you need to move fast on confidential files with zero exposure.

What that ChatGPT court fight really means

The legal details are complex, but the practical point is simple.

  • ChatGPT conversations are stored so providers can improve systems and debug issues

  • In a lawsuit, a court can order samples of those logs

  • Even if names and emails are removed, you rarely control what is pulled or how broad it is

In the OpenAI case:

  • News outlets asked for chat logs to see when their content was reproduced

  • OpenAI argued this would expose private user conversations

  • The court said logs should be anonymized and protected, but still produced

So if you used ChatGPT in the last few years, you now see what “stored in the cloud” really means. Your chats exist as data that can be argued over, requested, and inspected under legal orders.

Again, for many use cases this is fine. For contracts, board material, internal finance, or sensitive client work, you probably want a different setup.

Cloud AI chat vs private AI chat on your Mac

When you type into a browser chatbot, a few things are always true:

  • Your text goes to a server you do not control

  • The provider can log prompts and responses

  • Those logs can be accessed internally, and sometimes by third parties under strict conditions

  • In legal disputes, courts can ask for those logs, subject to privacy rules and redaction

That does not mean companies are reckless. It means the architecture is built around “your content lives on our systems.”

With Fenn on your Mac:

  • Your files stay on your disk

  • Fenn indexes them on device

  • Chat mode uses models running on Apple silicon

  • There is no central server holding your conversations

If someone sues a vendor, there is no Fenn server with your chats in it, because your Mac never uploaded them.

What Fenn chat actually does

Fenn started as a file search engine for macOS:

  • Indexes PDFs, Word files, spreadsheets, and long reports

  • Reads text inside images and screenshots

  • Searches Apple Mail messages stored on your Mac

  • Works with notes, internal docs, audio, and video with useful timestamps

  • Runs on device by default for privacy

Chat mode sits on top of that index.

You:

  1. Choose which folders and app libraries Fenn can see

  2. Let Fenn index them on your Mac

  3. Open Chat and ask questions in plain language

  4. Fenn finds relevant passages in your files

  5. Local models on Apple silicon read those passages and answer

  6. You follow up, refine, and open the original files yourself

No raw archive is streamed to a cloud API. The work happens on your machine.
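As a toy sketch of the retrieval step in that pipeline (illustrative Python under my own assumptions, not Fenn's actual code): passages are indexed as token sets, scored against a question by overlap, and the best matches would then be handed to a local model to read.

```python
# Hypothetical illustration of on-device retrieval. Nothing here leaves
# the machine: the "index" is just an in-memory list of token sets.

def tokenize(text):
    """Lowercase the text and strip trailing punctuation from each word."""
    return {word.strip(".,?") for word in text.lower().split()}

def build_index(passages):
    """Pair each passage with its token set (a stand-in for a real index)."""
    return [(passage, tokenize(passage)) for passage in passages]

def retrieve(index, question, k=2):
    """Return the k passages with the largest token overlap with the question."""
    q = tokenize(question)
    ranked = sorted(index, key=lambda entry: len(q & entry[1]), reverse=True)
    return [passage for passage, _ in ranked[:k]]

passages = [
    "The NDA with Acme auto renews with 90 days notice.",
    "The office lease expires in June 2026.",
    "The NDA with Beta Corp requires 30 days notice.",
]
index = build_index(passages)
print(retrieve(index, "which NDA requires notice"))
```

Real systems use semantic embeddings rather than token overlap, but the shape is the same: find relevant passages locally, then let a local model answer from them.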

Fenn also keeps working when you are offline, because the models run locally and the index lives on your SSD.

When you should never use browser chatbots

There are clear cases where you should think twice before pasting into a web chatbot:

  • Customer contracts, NDAs, master service agreements

  • Internal board decks, investor updates, and financing documents

  • Detailed financial statements, pricing models, payroll data

  • Legal strategy notes or draft filings

  • Sensitive research, security documents, or internal roadmaps

  • Anything under regulatory or confidentiality constraints

In these cases, the question is not “Is this provider trustworthy?” It is “Do I want this text on someone else’s servers at all?”

If the answer is no, you need a local alternative.

How the same work looks with Fenn chat

Here is how you can handle those same tasks with Fenn on a Mac.

Contracts and legal

Instead of pasting clauses into a browser:

  • Ask Fenn chat, “Which NDAs auto-renew and require more than 60 days’ notice? List counterparties and clauses.”

  • Ask, “Summarize the liability caps in our active customer contracts. Highlight any that differ from the standard template.”

Fenn searches your contract folders and Mail on your Mac, uses local models to answer, and you open the source documents to verify.

Board and investors

Instead of uploading decks:

  • Ask, “What did we tell the board about runway in the last three meetings? Pull the key numbers and dates.”

  • Ask, “Show where we discussed macOS Tahoe upgrades and performance risk in board material and investor updates.”

Fenn combines board PDFs, internal docs, and email threads, then responds in a single chat.

Finance and operations

Instead of pushing exports to a remote tool:

  • Ask, “Invoices from vendor X above 500 dollars in 2024, grouped by month and project.”

  • Ask, “Where in our PDFs and screenshots do we show Stripe revenue by month? Summarize the trend.”

All of this stays on your Mac. You move faster without paying in data risk.

Why Apple silicon and RAM matter here

Private AI chat works best when your Mac has room to breathe.

  • Fenn uses the M series chip and unified memory to run local models efficiently

  • 16 GB of RAM works for focused workloads

  • 32 GB or more feels better if you index a large archive and ask heavy questions often
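Those RAM figures follow from simple arithmetic. As a back-of-envelope rule of thumb (my own assumption, not Fenn's published numbers), a model's weight footprint is roughly parameter count times bits per parameter:

```python
def weights_gb(params_billion, bits_per_param=4):
    """Approximate weight size in GB for a quantized model."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# An 8B-parameter model at 4-bit quantization needs roughly 4 GB for
# weights alone, before the search index, the OS, and your other apps.
print(weights_gb(8))                      # 4-bit quantized
print(weights_gb(8, bits_per_param=16))   # full 16-bit precision
```

That gap between 4 GB quantized and 16 GB at full precision is why 16 GB of unified memory works for focused workloads while 32 GB breathes easier.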

The important part is that you are using the hardware you paid for. Instead of renting compute in the cloud, your Mac becomes the engine for AI that only knows your files.

If you are not sure whether your current Mac is enough, you can start small. Index a few folders, try realistic questions, then adjust from there.

How to switch sensitive work from browser chat to Fenn

You do not have to stop using ChatGPT or Gemini completely. Just set a simple rule.

  • Use browser chat for public knowledge, code snippets, and ideation

  • Use Fenn chat for anything that reveals confidential details about your company or clients

To make that work:

  1. Install Fenn on an Apple silicon Mac
    Sonoma 14 or later is recommended.

  2. Pick high risk, high value folders
    Contracts, Legal, Finance, Board, key project docs, research, screenshots, and the folder where Apple Mail stores messages.

  3. Add them as sources in Fenn
    You control what goes in.

  4. Let Fenn index on device
    It uses your Mac’s CPU and GPU, not a remote cluster.

  5. Open Fenn chat instead of a browser tab
    When you reach for a cloud chatbot with sensitive material, stop and ask, “Can I ask Fenn this instead?”

  6. Open and verify source files
    Use answers as navigation, not as a black box. You always stay grounded in your own documents.

This one habit change shifts your risk profile a lot without slowing you down.

Pricing

If your work involves anything you would hesitate to paste into a browser chatbot, Fenn is a small price compared to that risk.

  • Local, 9 USD per month, billed annually
    On device indexing. Semantic and keyword search. Chat with your own files on 1 Mac. Updates. Founder support.

  • Lifetime, 199 USD one time
    On device indexing. Semantic and keyword search. Chat mode on your Mac. 1 year of updates. 1 Mac. Founder support.

You can keep treating confidential files like any other prompt, or you can let your Mac handle them locally.

Download Fenn for Mac. Private on device. Find the moment, not the file.