
Launch HN: Tinfoil (YC X25): Verifiable Privacy for Cloud AI

138 points by FrasiertheLion 1 day ago | 99 comments

Hello HN! We’re Tanya, Sacha, Jules and Nate from Tinfoil: https://tinfoil.sh. We host models and AI workloads in the cloud while guaranteeing zero data access and retention. This lets us run open-source LLMs like Llama or DeepSeek R1 on cloud GPUs without you having to trust us—or any cloud provider—with private data.

Since AI performs better the more context you give it, we think solving AI privacy will unlock more valuable AI applications, just as TLS enabled e-commerce to flourish because people knew their credit card info wouldn't be stolen by someone sniffing internet packets.

We come from backgrounds in cryptography, security, and infrastructure. Jules did his PhD in trusted hardware and confidential computing at MIT, and worked with NVIDIA and Microsoft Research on the same, Sacha did his PhD in privacy-preserving cryptography at MIT, Nate worked on privacy tech like Tor, and I (Tanya) was on Cloudflare's cryptography team. We were unsatisfied with band-aid techniques like PII redaction (which is actually undesirable in some cases like AI personal assistants) or “pinky promise” security through legal contracts like DPAs. We wanted a real solution that replaced trust with provable security.

Running models locally or on-prem is an option, but can be expensive and inconvenient. Fully Homomorphic Encryption (FHE) is not practical for LLM inference for the foreseeable future. The next best option is using secure enclaves: a secure environment on the chip that no other software running on the host machine can access. This lets us perform LLM inference in the cloud while being able to prove that no one, not even Tinfoil or the cloud provider, can access the data. And because these security mechanisms are implemented in hardware, there is minimal performance overhead.

Even though we (Tinfoil) control the host machine, we do not have any visibility into the data processed inside the enclave. At a high level, a secure enclave is a set of cores that are reserved, isolated, and locked down to create a sectioned-off area. Everything that leaves the enclave is encrypted: memory and network traffic, but also peripheral (PCIe) traffic to other devices such as the GPU. This encryption uses secret keys that are generated inside the enclave during setup and never leave its boundaries. Additionally, a “hardware root of trust” baked into the chip lets clients check security claims and verify that all security mechanisms are in place.

Up until recently, secure enclaves were only available on CPUs. But NVIDIA recently added these hardware-based confidential computing capabilities to their latest GPUs, making it possible to run GPU-based workloads inside a secure enclave.

Here’s how it works in a nutshell:

1. We publish the code that should run inside the secure enclave to GitHub, as well as a hash of the compiled binary to a transparency log called Sigstore.

2. Before sending data to the enclave, the client fetches a signed attestation document from the enclave, which includes a hash of the running code signed by the CPU manufacturer. It verifies the signature against the hardware manufacturer's keys to prove the hardware is genuine. Then the client fetches the hash of the source code from the transparency log (Sigstore) and checks that it equals the hash it got from the enclave. This gives the client verifiable proof that the enclave is running the exact code we claim.

3. With the assurance that the enclave environment is what we expect, the client sends its data to the enclave, which travels encrypted (TLS) and is only decrypted inside the enclave.

4. Processing happens entirely within this protected environment. Even an attacker that controls the host machine can’t access this data. We believe making end-to-end verifiability a “first class citizen” is key. Secure enclaves have traditionally been used to remove trust from the cloud provider, not necessarily from the application provider. This is evidenced by confidential VM technologies such as Azure Confidential VM allowing SSH access by the host into the confidential VM. Our goal is to provably remove trust both from ourselves, the application provider, and from the cloud provider.

We encourage you to be skeptical of our privacy claims. Verifiability is our answer. It’s not just us saying it’s private; the hardware and cryptography let you check. Here’s a guide that walks you through the verification process: https://docs.tinfoil.sh/verification/attestation-architectur....
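To make this concrete, here's a minimal Python sketch of what such a client-side check could look like. The endpoint path, JSON field names, and the transparency-log lookup are hypothetical placeholders, not Tinfoil's actual API, and the hardware vendor's signature verification is only stubbed out; the attestation guide above describes the real procedure.

```python
import hmac
import requests

ENCLAVE_URL = "https://enclave.example.tinfoil.sh"       # hypothetical enclave endpoint
SIGSTORE_LOOKUP = "https://sigstore.example/measurement"  # hypothetical transparency-log lookup


def verify_enclave(repo: str) -> bool:
    # 1. Fetch the attestation document produced inside the enclave. It contains
    #    a measurement (hash) of the running code, signed by the hardware vendor.
    att = requests.get(f"{ENCLAVE_URL}/.well-known/attestation", timeout=10).json()
    running = att["measurement"]  # hypothetical field name

    # 2. Verify the vendor's signature over the document (omitted here: this
    #    requires walking the manufacturer's certificate chain, e.g. AMD SEV-SNP
    #    or NVIDIA CC attestation verification).
    assert att.get("signature"), "attestation document must be signed"

    # 3. Fetch the expected measurement for the published source code from the
    #    transparency log (Sigstore).
    expected = requests.get(SIGSTORE_LOOKUP, params={"repo": repo}, timeout=10).json()["measurement"]

    # 4. Only trust the enclave if it is running exactly the published code.
    return hmac.compare_digest(running, expected)
```

Only if the two measurements match does the client open the TLS session that terminates inside the enclave and send its data.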

People are using us for analyzing sensitive docs, building copilots for proprietary code, and processing user data in agentic AI applications without the privacy risks that previously blocked cloud AI adoption.

We’re excited to share Tinfoil with HN!

* Try the chat (https://tinfoil.sh/chat): It verifies attestation with an in-browser check. Free with limited messages; $20/month for unlimited messages and additional models.

* Use the API (https://tinfoil.sh/inference): OpenAI API-compatible interface, $2 / 1M tokens. A quick usage sketch follows this list.

* Take your existing Docker image and make it end-to-end confidential by deploying it on Tinfoil. Here's a demo of using Tinfoil to run a deepfake detection service securely on people's private videos: https://www.youtube.com/watch?v=_8hLmqoutyk. Note: this feature is not currently self-serve.

* Reach out to us at contact@tinfoil.sh if you want to run a different model or want to deploy a custom application, or if you just want to learn more!
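Since the interface is OpenAI-compatible, existing clients should work once pointed at Tinfoil's endpoint. Here's a minimal sketch using the official openai Python SDK; the base URL and model name below are illustrative placeholders, not confirmed values, so check the inference docs for the real ones.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.tinfoil.sh/v1",  # hypothetical endpoint, see docs
    api_key="YOUR_TINFOIL_API_KEY",
)

resp = client.chat.completions.create(
    model="deepseek-r1",  # hypothetical model identifier
    messages=[{"role": "user", "content": "Summarize this sensitive document..."}],
)
print(resp.choices[0].message.content)
```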

Let us know what you think, we’d love to hear about your experiences and ideas in this space!

sigmaisaletter 1 day ago

Looks great. Not sure how big the market is between "need max privacy, need on-prem" and "don't care, just use what is cheap/popular" tho.

Can you talk about how this relates to / is different / is differentiated from what Apple claimed to do during their last WWDC? They called it "private cloud compute". (To be clear, after 11 months, this is still "announced", with no implementation anywhere, as far as I can see.)

Here is their blog post on Apple Security, dated June 10: https://security.apple.com/blog/private-cloud-compute/

EDIT: JUST found the tinfoil blog post on exactly this topic. https://tinfoil.sh/blog/2025-01-30-how-do-we-compare

_heimdall 1 day ago

Anecdotal, but I work at a company offering an SMB product with LLM features. One of the first questions asked on any demo or sales call is what the privacy model for the LLM is, how the data is used, who has access to it, and whether those features can be disabled.

trebligdivad 14 hours ago

There are a few stories on the 'max privacy' stuff. One goes that you have two companies, each with something private, that need to combine their stuff without letting the other see it: for example, a bank with customer transactions and a company with analytics software they don't want to share. A system like this lets the bank put their transaction data through that analytics software without anyone being able to see either the transaction data or the software. The next level is where two banks need to combine their transaction data to spot fraud, so you've now got three parties involved on one server.

davidczech 1 day ago

Private Cloud Compute has been in use since iOS 18 was released.

sigmaisaletter 1 day ago

It seems that PCC indeed went live with 18.1 - tho not in Europe (which is where I am located). Thanks for the heads up, I will look into this further.

ts6000 1 day ago

Companies like Edgeless Systems have been building open-source confidential computing for cloud and AI for years, and in 2024 they published how they compare to Apple Private Cloud Compute. https://www.edgeless.systems/blog/apple-private-cloud-comput...

Etheryte 1 day ago

How large do you wager your moat to be? Confidential computing is something all major cloud providers either have or are about to have, and from there it's a very small step to offer LLMs under the same umbrella. First mover advantage is of course considerable, but I can't help but feel that this market will very quickly be swallowed by the hyperscalers.

threeseed 1 day ago

Cloud providers aren't going to care too much about this.

I have worked for many enterprise companies, e.g. banks, who are trialling AI, and none of them have any use for something like this. The entire foundation of the IT industry is based on trusting the privacy and security policies of Azure, AWS and GCP, and in the decades since they've been around, I've not heard of a single example of them breaking this.

The proposition here is to tell a company that they can trust Azure with their banking websites, identity services and data engineering workloads but not for their model services. It just doesn't make any sense. And instead I should trust a YC startup who statistically is going to be gone in a year and will likely have their own unique set of security and privacy issues.

Also you have the issue of smaller open-source models, e.g. DeepSeek R1, lagging far behind the bigger ones, so you're giving me an unnecessary privacy attestation at the expense of a model that would give me far better accuracy and performance.

Terretta 14 hours ago

> Cloud providers aren't going to care too much about this. ... [E]nterprise companies e.g. banks ... and none of them have any use for something like this.

As former CTO of the world's largest bank and cloud architect at the world's largest hedge fund, this is exactly the opposite of my experience with both regulated finance enterprises and the CSPs vying to serve them.

> The entire foundation of the IT industry is based on trusting the privacy and security policies of Azure, AWS and GCP, and in the decades since they've been around, I've not heard of a single example of them breaking this.

On the contrary, many global banks design for the assumption that the "CSP is hostile". What happened to Coinbase's customers over the past few months shows why your vendor's insider threat is your threat and your customers' threat.

Granted, this annoys CSPs who wish regulators would just let banks "adopt" the CSP's controls and call it a day.

Unfortunately for CSP sales teams — certainly this could change with recent regulator policy changes — the regulator wins. Until very recently, only one CSP offered controls sufficient to assure your own data privacy beyond a CSP's pinky-swears. AWS Nitro Enclaves can provide a key component in that assurance, using deployment models such as Tinfoil's.

itsafarqueue 1 day ago

Being gobbled by the hyperscalers may well be the plan. Reasonable bet.

kevinis 1 day ago

GCP has confidential VMs with H100 GPUs; I'm not sure if Google would be interested. And they get huge discounts buying GPUs in bulk. The trade-off between cost and privacy is obvious for most users imo.

trebligdivad 14 hours ago

I suspect Nvidia have done a lot of the heavy lifting to make this work; but it's not that trivial to wire the CPU and GPU confidential compute together.

ATechGuy 1 day ago

This. Big tech providers already offer confidential inference today.

julesdrean 1 day ago

Yes Azure has! They have very different trust assumptions though. We wrote about this here https://tinfoil.sh/blog/2025-01-30-how-do-we-compare

mnahkies 1 day ago

Last I checked, only Azure offered the NVIDIA-specific confidential compute extensions, but I'm likely out of date; a quick Google was inconclusive.

Have GCP and AWS started offering this for GPUs?

julesdrean 1 day ago

Azure and GCP offer Confidential VMs, which remove trust from the cloud provider. We’re trying to also remove trust from the service provider (aka ourselves). One example: when you use Azure or GCP, by default the service operator can SSH into the VM. We cannot SSH into our inference server, and you can check that’s true.

threeseed 1 day ago

But nobody wants you as a service provider. Everyone wants to have Gemini, OpenAI etc which are significantly better than the far smaller and less capable model you will be able to afford to host.

And you make this claim that the cloud provider can SSH into the VM, but (a) nobody serious exposes SSH ports in production and (b) there is no documented evidence of this ever happening.

FrasiertheLion 1 day ago

We're not competing with Gemini or OpenAI or the big cloud providers. For instance, Google is partnering with NVIDIA to ship Gemini on-prem to regulated industries in a CC environment to protect their model weights as well as for additional data privacy on-prem: https://blogs.nvidia.com/blog/google-cloud-next-agentic-ai-r...

We're simply trying to bring similar capabilities to other companies. Inference is just our first product.

>cloud provider can SSH into the VM

The point we were making was that CC was traditionally used to remove trust from cloud providers, but not the application provider. We are further removing trust from ourselves (as the application provider), and we can enable our customers (who could be other startups or neoclouds) to remove trust from themselves and prove that to their customers.

threeseed 1 day ago

You are providing the illusion of trust though.

There are a multitude of components between my app and your service. You have secured one of them, arguably the least important. But you can't provide any guarantees over, say, your API server that my requests are going through, or your networking stack, which someone, e.g. a government, could MITM.

osigurdson 1 day ago

I don't know anything about "secure enclaves" but I assume that this part is sorted out. It should be possible to use http with it I imagine. If not, yeah it is totally dumb from a conceptual standpoint.

amanda99 1 day ago

Does this not require one to trust the hardware? I'm not an expert in hardware root of trust, etc., but if Intel (or whatever chip maker) decides to just sign code that doesn't do what they say it does (coerced or otherwise), or someone finds a vuln, would that not defeat the whole purpose?

I'm not entirely sure this is different than "security by contract", except the contracts get bigger and have more technology around them?

natesales 1 day ago

We have to trust that the hardware manufacturer (Intel/AMD/NVIDIA) designed their chips to execute the instructions we inspect, so we're assuming trust in vendor silicon either way.

The real benefit of confidential computing is to extend that trust to the source code too (the inference server, OS, firmware).

Maybe one day we’ll have truly open hardware ;)

perching_aix 10 hours ago

Isn't this not the case for FHE? (I understand that FHE is not practically viable as you guys mention in the OP.)

FrasiertheLion 9 hours ago

Yeah not the case for FHE. But yes, not practically viable. We would be happy to switch as soon as it is.

ignoramous 1 day ago

Hi Nate. I routinely use your various networking-related FOSS tools. Surprising to see you now work in the AI infrastructure space, let alone co-founding a startup funded by YC! Tinfoil looks über neat. All the best (:

> Maybe one day we'll have truly open hardware

At least the RoT/SE if nothing else: https://opentitan.org/

julesdrean 1 day ago

Love OpenTitan! RISC-V all the way, babe! The team is stacked: several of my labmates now work there.

rkagerer 1 day ago

I agree, it's lifting trust to the manufacturer (which could still be an improvement over the cloud status quo).

Another (IMO more likely) scenario is someone finds a hardware vulnerability (or leaked signing keys) that lets them achieve a similar outcome.

max_ 1 day ago

The only way to guarantee privacy in cloud computing is via homomorphic encryption.

This approach relies too much on trust.

If you have data you are seriously sensitive about, it's better for you to run models locally on air-gapped instances.

If you think this is overkill, just see what happened to Coinbase recently. [0]

[0]: https://www.cnbc.com/2025/05/15/coinbase-says-hackers-bribed...

FrasiertheLion 1 day ago

Yeah, totally agree with you. We would love to use FHE as soon as it's practical. And if you have the money and infra expertise to deploy air-gapped LLMs locally, you should absolutely do that. We're trying to do the best we can with today's technology, in a way that is cheap and accessible to most people.

threeseed 1 day ago

> The only way to guarantee privacy in cloud computing is via homomorphic encryption

No. The only way is to not use cloud computing at all and go on-premise.

Which is what companies around the world do today for security or privacy critical workloads.

Terretta 13 hours ago

> The only way is to not use cloud computing at all and go on-premise.

This point of view may be based on a lack of information about how global finance handles security and privacy critical workloads in high-end cloud.

Global banks and the CSPs that serve them had, by and large, solved this problem by the late 2010s to early 2020s.

While much of the work is not published, you can look for presentations at AWS re:Invent from e.g. Goldman Sachs or others willing to share, talking about cryptographic methods, enclaves, formal reasoning over not just code but things like reachability, and so on, to see the edges of what's being done in this space.