#1125: Probabilistic Reveal Tokens (for IP Protection)
Discussions
Discussed
Aug 4, 2025 (See Github)
Martin: This doesn't seem feasible. Seems like a proxy, but not really. It makes me question what the exact threat model is.
Yoav: My understanding is that we want to mask user's IP addresses for realtime traffic but still want DDoS prevention and anti-fraud measures in the longer term.
Martin: To exercise those measures, you need the IP address of the attacker, and you're not getting that.
Yoav: a website that collects the data is now aware of... how can they act on that data from the info coming from the proxy?
Martin: IP addresses delayed by 24 hrs
Yoav: That sounds like good feedback that they could put into the explainer.
Martin: And also, what is the user need? Don't think this is the right solution. Need to understand the threat model first.
Ehsan: Concerned how trustworthy the issuer is. Core cryptographic solution is vulnerable to an active attacker.
Yoav: With my API owner hat on, I've put this back on blink-dev to get feedback. if your feedback is "this is going to get shut down..." If you can recommend they send this to MASQUE, that would be useful.
Proposed feedback:
First, the explainer doesn't address the end-user benefits that might come from a solution like this. For those users who reveal their IP address, this doesn't seem like a great deal, but maybe there is some indirect benefit you can point to.
Mostly, though, we need to understand better what sort of operational model might apply here. That starts with what sort of expectations end users have with respect to the system you are deploying. Ordinarily, a user who engages a proxy does so to ensure that their activity cannot be traced back to their IP address. This upends that, with a 10% chance of leak after a 24h delay.
Then there is the question of how a site might use that information. If an IP is acting poorly, it learns about this 24h after that abuse starts. None of the requests that pass the proxy have an IP that can be used at the time of the request.
For this to work for DoS, or any other form of abuse where the reaction needs to occur in real time, there has to be some visibility of IP addresses at the time of a request. That rules this out for many forms of abuse mitigation.
The explainer says that this is for managing ad fraud, but we can't see how this works for many workflows related to ads. Given the 10% reveal rate, this might give a probabilistic read on fraud rates, but the 24h delay is going to be very limited in its applicability, even to this narrow case, because many of those cases need real time action as well. For example, it doesn't look like sites using the attribution API will be able to take advantage of this sort of information.
Finally, we recommend that you take this to the MASQUE working group in the IETF to collect feedback from the experts there.
Discussed
Aug 11, 2025 (See Github)
Martin: messaged privately someone working on this API; got a response, and it wasn't good. It seems they are not aware of our negative view, and they did not address the questions we had. Their architecture is not ideal. I will talk to David so he can put me in contact with someone with more knowledge.
Hadley: if you have trouble accessing
Martin: I think there are many other options out there, so there is no need to reveal IP addresses. I need to have more discussion with someone
Hadley: do we need to do anything to the issue?
Martin: I am happy to involve others, but I can do it. If necessary, I'll ask him to come to a TAG meeting.
Hadley: okay, we'll leave this with you
Ehsan: would asking them directly "what is your threat model?" help here?
Martin: I did ask more direct questions in the follow-up. we'll see how it goes.
Comment by @martinthomson Aug 12, 2025 (See Github)
Hey, thanks for sharing this. We have some thoughts, but thought we'd start with some high-level questions to clarify first.
First, the explainer doesn't address the end-user benefits that might come from a solution like this. For those users who reveal their IP address, this doesn't seem like a great deal, but maybe there is some indirect benefit you can point to. Privacy loss is forever.
Mostly, though, we need to understand better what sort of operational model might apply here. That starts with what sort of expectations end users have with respect to the system you are deploying. Ordinarily, a user who engages a proxy does so to ensure that their activity cannot be traced back to their IP address. This upends that, with a 10% chance of leak after a 24h delay.
Then there is the question of how a site might use what it learns. If an IP is acting poorly, sites learn about who was responsible 24h after that abuse starts. Critically, none of the requests that pass the proxy have an IP that can be used at the time of the request, so real-time use is right out.
For this to work for DoS, or any other form of abuse where the reaction needs to occur in real time, there has to be some visibility of IP addresses at the time of a request. That rules this out for many forms of abuse mitigation.
The explainer says that this is for managing ad fraud, but we can't see how this works for many workflows related to ads. Given the 10% reveal rate, this might give a probabilistic read on fraud rates -- if IP is a significant factor -- but the 24h delay is going to make this very limited in its applicability. That's true even in this narrow case, because many ad fraud management cases need real-time action as well. For example, it doesn't look like sites using the Attribution API will be able to take advantage of this sort of information.
Finally, we recommend that you take this to the MASQUE working group in the IETF to collect feedback from the experts there.
Comment by @etrouton Aug 14, 2025 (See Github)
Hi Martin and TAG reviewers,
Thanks for taking a look; hopefully we can help answer some of your questions. It seems we can carve this into two high-level concerns:
- What utility does this actually provide to users and the web ecosystem more broadly?
- Why is this a reasonable privacy trade off?
For the first point, the main user benefit is derived from our free and cross-platform private proxy service that limits visibility from 3rd party trackers. This proxy provides tangible user benefits like limiting advertisers and adtechs from using a user’s IP address for ad targeting and personalization. However, offering an open relay on the web is potentially very risky and dangerous, so the free access is a package deal with tools like PRTs that help keep the proxy free of abuse.
While PRTs do not detect abuse in real time, the benefits they provide to sites and third parties are:
- Allowing websites to monitor abuse and measure fraud in aggregate after the fact, which is helpful for things like make-goods/clawbacks for excessive ad fraud
- Providing organizations the ability to update models with sampled event-level datasets, which is useful for fraud detection in non-proxied traffic and potentially in the future for real-time detection of proxied traffic, which we hope to discuss as a future initiative
- Lastly, giving Chrome some ability to respond to reports of abuse of the proxy, where we have actionable levers like adjusting quotas to limit the future abuse sites see
You might imagine, if a proxy service had low reputation through a suspected (or measured) high proportion of unmitigated abuse, sites would simply block those proxy IPs - which would be very bad for users who want more privacy provided by that service. Tor users often face this exact conundrum, where there is not enough signal for sites to differentiate good and bad users, so it’s easier for many sites to simply block access for all Tor users or known onion router egress IPs. Other proxy services maintain higher reputations by charging a high amount for access or requiring proprietary hardware, which raises the cost of investment for potential abuse, and therefore yields less abuse. As a free service, we are striving to have it all: widely accessible, greatly improved user privacy, and universally accepted by websites.
If there are better ways to do this, we would love to do them! In fact, we are actively investing in private feedback loops and hope Privacy Pass will take on Anonymous Credit Tokens, which can contribute to a private long-term solution. However, it does not currently solve the challenge of updating models and detecting novel attacks, which today often requires manual analysis and supervised learning. We think that PRTs are a necessary component to launching IP Protection responsibly today, fully acknowledging that we might discover better ways to do this in the future.
For the second issue: is this a reasonable tradeoff? It is very tricky to balance access, openness, privacy, and web safety, but in my opinion this offers a great tradeoff, particularly considering it is a free service. If some users have more sophisticated threat models, it's possible there are other proxy services they may prefer, but those come with other drawbacks like being a paid service or having low reputation / site-access issues.
10%/24hr are the starting parameters, and it’s possible (likely even) that we may adjust them up or down based on feedback. Our goal is to reduce the reveal rate while minimizing the prevalence of fraud, so hopefully the tradeoff for most users looks even better over time.
Hopefully that helps clarify our intent and address your questions. We have also already pinged the MASQUE listserv. Looking forward to hearing your thoughts and feedback,
Thanks, Eric
Comment by @martinthomson Aug 14, 2025 (See Github)
Just a quick question for you to contemplate:
How can you promise that the proxy protects people from profiling (targeting and personalization) based on IP address if you give them the IP address 1 out of 10 times?
The delays do nothing to prevent the information from being usable, unless you have a reasonable expectation that the IP address that is revealed can never be used by the same person in either:
- another context where there is a direct connection, or
- via the same sort of proxy where the IP address might be revealed using PRT.
Otherwise, the site just has to be patient, harvest the IP addresses they can, and do whatever activity linking they see fit for those that are revealed.
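The "be patient and harvest" point above can be quantified with a short sketch. Assuming an independent 10% reveal chance per epoch (the rate discussed in this thread; the independence assumption and all names below are illustrative, not part of the proposal), the probability that a patient site eventually obtains a returning user's IP grows geometrically with the number of epochs:

```python
# Illustrative only: cumulative chance that a patient site eventually
# harvests a returning user's IP, assuming each epoch is an independent
# 10% coin flip. Both the rate and the independence are assumptions
# for this sketch, not a statement about the actual design.

REVEAL_RATE = 0.10

def cumulative_reveal_probability(epochs: int, rate: float = REVEAL_RATE) -> float:
    """P(at least one reveal) after `epochs` independent epochs."""
    return 1.0 - (1.0 - rate) ** epochs

if __name__ == "__main__":
    for n in (1, 7, 30, 90):
        print(f"{n:3d} epochs -> {cumulative_reveal_probability(n):.1%}")
```

Under these assumptions the harvest probability passes 50% after seven epochs and 95% after thirty, which is the sense in which patience defeats the per-request protection.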
Discussed
Aug 18, 2025 (See Github)
Ehsan: Agree with Martin's point, at least the response wasn't convincing for me. Still waiting for them to respond to Martin's latest comment. Suspect it to be the same response as before.
Martin: Problem here is that we asked a question regarding the requirements, but the discussion devolved into details.
… Had a conversation with someone on Google's Networking team that is aware of the proposal. Expects to get a negative response.
… Could be perceived as a thing that Google is doing in their browser.
… Think the implicit question is, is this good for the web? Think we can categorically say no.
… It's not correct to advertise a tool that is claiming to hide IP addresses, if it's not really doing that job.
Hadley: Think there is TAG consensus. Can you write a closing comment?
Martin: We should have the conversation, but I think the response will likely still be no.
Hadley: So we wait for a response?
Martin: Yes.
Comment by @etrouton Aug 19, 2025 (See Github)
The goal of IP protection in Chrome’s Incognito mode is to limit the availability of a user’s original IP address, enhancing Incognito's protections against cross-site tracking. A 90% reduction in the availability of IP addresses to proxied domains, one which is freely available across a wide range of devices & platforms, is a large privacy win.
When the IP is revealed, it is indeed linkable to any other use of that IP - that is by design - it is what enables PRTs to most effectively help address fraud and abuse.
However, regarding the point around sites being able to eventually harvest IP addresses by waiting for a sufficient number of requests, we think it's strongly mitigated by the fact that PRTs (and IP Protection) are deployed in Incognito mode, where cookies and local storage are partitioned by site and then cleared at the end of each user session.
As identified in the explainer, because the epoch period is much longer than the 95th percentile Incognito session, the probability of IP disclosure to a single site will not increase over time, and other implementations should follow the same precautions.
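The epoch argument in this comment can be illustrated with a sketch: if the reveal decision is drawn once per client per epoch rather than per request, additional requests within the same epoch do not raise a site's chance of learning the IP. Everything here (the names, the 10% rate, the seeding scheme) is a hypothetical illustration, not the actual implementation:

```python
# Sketch of per-epoch sampling: one coin flip per (client, epoch),
# shared by every request in that epoch. All names and parameters are
# assumptions for illustration only.
import random

REVEAL_RATE = 0.10  # assumed starting parameter from the discussion

def epoch_reveal_decision(epoch_seed: int) -> bool:
    """Decide once per epoch; deterministic for a given epoch seed."""
    return random.Random(epoch_seed).random() < REVEAL_RATE

def requests_revealed(epoch_seed: int, num_requests: int) -> int:
    # Every request in the epoch shares the same single decision,
    # so the disclosure probability does not depend on num_requests.
    return num_requests if epoch_reveal_decision(epoch_seed) else 0
```

Whether a site sees the IP in a given epoch depends only on the epoch-level flip, which is the property the explainer relies on; Martin's reply below argues that this does not help once the adversary accumulates epochs.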
Comment by @martinthomson Aug 20, 2025 (See Github)
The "limits the availability" framing is not one that I would have used. The goal that this might work towards is denying trackers the ability to track... in this case, using IP addresses.
I personally find this "privacy at scale" argument pretty unconvincing. It's not the first time I've heard it used to justify degrading what might otherwise be a pretty significant privacy advance.
The fact that these are leaked in Incognito makes the problem worse. Because cookies are not useful for tracking, the IP addresses are already the only hook that many trackers have. So those 10% will be gathered and used. It's not the Incognito sessions that matter there, but the non-Incognito sessions, to which the trackers will be able to add all the information about "engagement ring shopping" sessions they are able to obtain IP addresses for.
The duration of sessions and delays is irrelevant if your goal is to assemble a profile. Sure, you can be reasonably sure that the session is not live when the IP address is revealed, but the IP address still is.
To elevate the discourse to a higher level, my reason for pushing back was not to fixate on the particulars of the design. I hope that it's clear that I think it's unacceptable in the general case. However, if this is about what you promise Chrome users, that's a different story. You already let people track them with cookies, so this would not be inconsistent with that. And ultimately, what you promise people from your Incognito mode seems like a pure product decision. But you decided to ask the TAG what we think, so I have to assume the question is: is this design good for the web?
To answer that question, we need to get at the actual needs that underlie the design. I do not accept that the actual requirement is access to IP addresses. Not without significantly more evidence to support the claim. I understand that IP is a cornerstone of a lot of fraud and abuse mitigation, to the point that it could genuinely be too hard to shift the ecosystem onto something else. However, that still requires that you present far more evidence than you have done so far.
That means establishing alternatives as being truly non-viable. I can think of about 3 or 4 alternatives that don't involve leaking IP addresses, or that only involve leaking IP addresses in specific narrow conditions (like establishing the existence of abuse). To get to those, you need a clearer articulation of the threat and trust models that apply.
Comment by @etrouton Aug 25, 2025 (See Github)
Thank you for your review and I appreciate your perspective. Your review has made it clearer that this design might be specific to Chrome's product-specific needs for the way we're incrementally improving privacy in our Incognito Mode, and as such, it may have been a mistake to even ask for this review given that the TAG declined the product-specific IP Protection review. We hope that having some shipping experience in one product will help inform the discussion about how any future standards around IP protection can be reconciled with anti-abuse systems.
In that spirit, we'll close this review, but we're always happy to hear any other feedback you have, and we'll take it into account as we iterate on the feature.
OpenedJul 30, 2025
Explainer
https://github.com/GoogleChrome/ip-protection/blob/main/prt_explainer.md
Where and by whom is the work being done?
Feedback so far
Multi-stakeholder feedback:
Major unresolved issues with or opposition to this design: None
You should also know that...
This has already come across the desk of the TAG as part of the IP Protection review request (noting some PRT questions asked + answered as part of that).
We're back to give the TAG the opportunity to consider this work separately from the broader IP work (as encouraged by some conversation on the associated blink-dev thread).
The most "separated" view of this work is as an approach to enabling random sampling over an otherwise protected signal. In that vein, we're looking at continuing conversations in an IETF forum based on the IETF draft.
Track conversations at https://tag-github-bot.w3.org/gh/w3ctag/design-reviews/1125