Siri, Gemini, and the Privacy Question Apple Has Not Answered
Feb 5, 2026
If Siri is about to get a major upgrade powered by Google’s Gemini, there is one detail that matters more than any demo.
Where will it run?
Apple’s messaging strongly implies “on-device” and “Private Cloud Compute.” Google executives are now using language that points in a different direction, including calling Google Apple’s “preferred cloud provider.”
For anyone who works with confidential files, client data, contracts, financials, medical info, or internal strategy, this is not a nerdy infrastructure debate. It is the privacy boundary.
The new Siri has a privacy problem, and it’s mostly a clarity problem
Here’s why the conversation is messy.
Apple’s line: privacy, on-device, and Private Cloud Compute
In recent remarks, Tim Cook reiterated that Apple’s AI features will “continue to run on the device” and in “Private Cloud Compute” while maintaining Apple’s privacy standards; he declined to share details of the arrangement.
Apple’s general pitch for Private Cloud Compute is that requests are handled on-device when possible, while heavier requests run on Apple-operated servers designed so that data is processed statelessly, is not retained after the request, and the server software can be publicly inspected.
Google’s line: “preferred cloud provider”
During Alphabet’s Q4 FY2025 earnings call, Sundar Pichai and Philipp Schindler used nearly identical language: Google is “collaborating with Apple as their preferred cloud provider” and helping develop “the next generation of Apple Foundation Models, based on Gemini technology.”
That phrasing is what made people pause. “Preferred cloud provider” sounds like infrastructure, not just licensing.
Why this is confusing, even for smart people following closely
There are at least three plausible explanations, and Apple and Google have not clearly confirmed which one is true.
Scenario 1: Siri runs on Apple infrastructure, Gemini is “under the hood”
This would mean Gemini technology powers models that still run on Apple hardware, either on-device or in Private Cloud Compute. Some reporting has described a “custom” Gemini model running on Apple’s PCC.
Scenario 2: Siri runs on Google servers, at least for some requests
Bloomberg reporting has suggested that Apple and Google have discussed hosting the chatbot directly on Google servers (TPUs), while an earlier Siri update might run on Apple’s Private Cloud Compute.
Scenario 3: It’s a split rollout, and both statements are “true”
This is the “phased” possibility. Some Siri intents might be processed on-device or via Apple PCC, while more advanced Gemini-driven interactions use Google infrastructure. That would explain why the language is careful and why the two companies avoid a simple answer.
Right now, the key point is not which scenario is correct. It’s that Apple has not clearly answered the question in a way professionals can trust for sensitive work.
The scale argument: why Google hosting Siri sounds logical
Apple has a huge active device base, well over two billion devices. If the upgraded Siri becomes genuinely capable, usage will spike, and so will inference demand.
Running advanced AI at consumer scale is an infrastructure problem first. Google already runs planet-scale AI workloads, and TPUs are part of that story. So on pure logistics, it’s easy to see why a “Google hosts it” path could happen, even if Apple keeps tight control over the user experience.
But “logical at scale” and “acceptable for privacy” are not the same thing.
Why teams with confidential files should care
If Siri’s smartest mode runs on servers outside Apple’s Private Cloud Compute, the privacy questions get sharper:
- What data is sent off-device, and what is never sent
- What gets logged, retained, or used for diagnostics
- What identity and account metadata is attached
- What auditability exists for enterprise compliance
- What “opt-in” actually means in day-to-day use
Even if Apple and Google implement strong safeguards, vague messaging is a problem by itself. Teams cannot build workflow policy around vibes.
If you are in legal, finance, healthcare, consulting, HR, or any client-facing business, “we think it’s private” is not good enough.
The practical takeaway: separate “writing help” from “file intelligence”
This rumor cycle highlights a broader point that matters on Mac.
There is a difference between:
- AI that helps you write, rephrase, and summarize in apps
- AI that can search across your real files and open the exact moment inside them
Even if Apple nails the first category, the second category is where daily productivity is won or lost, because work knowledge lives in PDFs, decks, screenshots, scans, recordings, and exported reports.
That’s also where privacy matters most, because those files are often the confidential stuff.
If you need private file search today on Mac
If your goal is to search inside your files across formats and jump directly to the relevant page, slide, frame, or timestamp, you can add a dedicated file intelligence layer that runs locally.
That’s exactly what Fenn is for: “Private AI that finds any file on your Mac.” It indexes on-device and is built to open the precise moment inside your files, not just return a filename.
If you work in a team, this matters even more. Policies are easier when the default is simple: keep data on your Macs.

Example of search inside a PDF; runs 100% on-device.

Chat mode in Fenn; runs 100% locally.
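To make the “local file intelligence” idea concrete, here is a toy sketch of an on-device index that maps words to exact (file, line) positions, so a search can jump to the precise spot inside a file instead of just returning a filename. This is purely an illustration of the concept, not how Fenn or any shipping product is implemented; the file types and helper names are arbitrary choices for the example.

```python
# Toy sketch: an in-memory inverted index built entirely on-device.
# Nothing here leaves the machine; the "index" is just a Python dict.
# NOT a real product implementation; an illustration of the idea only.
from collections import defaultdict
from pathlib import Path
import re

def build_index(root: str) -> dict:
    """Index every .txt/.md file under `root`: word -> list of (path, line_no)."""
    index = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.suffix not in {".txt", ".md"}:
            continue  # a real tool would also parse PDFs, decks, images, etc.
        text = path.read_text(errors="ignore")
        for line_no, line in enumerate(text.splitlines(), 1):
            for word in set(re.findall(r"[a-z0-9]+", line.lower())):
                index[word].append((str(path), line_no))
    return index

def search(index: dict, query: str) -> list:
    """Return (path, line_no) hits where every query word appears on that line."""
    words = re.findall(r"[a-z0-9]+", query.lower())
    if not words:
        return []
    hits = set(index.get(words[0], []))
    for w in words[1:]:
        hits &= set(index.get(w, []))
    return sorted(hits)
```

The point of the sketch is the privacy boundary, not the algorithm: because both indexing and querying happen locally, there is no off-device data flow to audit, which is what makes team policy simple.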
What Apple should say next
This whole story calms down the moment Apple answers three questions clearly:
1. For Gemini-powered Siri, which requests run on-device, which run on Apple Private Cloud Compute, and which (if any) run on Google infrastructure?
2. What data is sent, what is retained, and what is never stored?
3. How can users and teams verify what happened for a given request?
Until then, the privacy question is not answered, and professionals should assume it matters.
