However, the spec doesn't propose a mitigation for it. I'm afraid many new security policy mechanisms can actually be used to track users or devices this way, because you can experiment to see whether the browser has heard about a particular security policy by observing its behavior when you ask it to violate the policy. If you tell different devices about different policies, their behavior will be different (as if you told different kids who were going to visit a park about different rules for how to behave in the park, and then observed who obeyed and who violated which rules as a way of identifying individual kids).
For example, you can also get tracking out of public key pinning, by selectively pinning certs for some subdomains and not others, and then seeing which subresources are successfully loaded when you present a huge number of pin violations. (I think that's also documented in the HPKP spec.)
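The read-back side of that experiment can be sketched in a few lines of Python (the probe layout and function name are my own illustration, not from any spec): each probe subdomain the browser was previously told about contributes one bit, and the observed upgrade/no-upgrade behavior reassembles the identifier.

```python
# Illustrative sketch only: reassembling a device ID from observed
# per-subdomain policy behavior. observed_upgrades[i] is True if the
# browser silently upgraded probe i to HTTPS (i.e. it already knew a
# policy for that host), False if the plain-HTTP request went through.
def id_from_observed_behavior(observed_upgrades):
    device_id = 0
    for i, upgraded in enumerate(observed_upgrades):
        if upgraded:
            device_id |= 1 << i  # probe i carries bit i of the ID
    return device_id
```

The same read-back works whether the per-host policy is HSTS or a pin; only the way a "violation" manifests differs.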
Good grief. They've replaced a SQLite database with a text file that's loaded into an in-memory hash table because "adoption of HSTS is not very widespread yet" so "using any kind of off-the-shelf database to store this would be inefficient and overly complex." This required a patch that took over a year to review, during which time issues were raised with the text file parser they had to write from scratch. Loading the whole table into memory has clear DoS implications, so they're limiting the table to 1024 entries (making the use of a hash table rather silly), with an eviction strategy[1] that is going to favor older entries and effectively prevent newer entries from being added.
> Among the entries was one named track.nextuser.com
In order to be a supercookie they need more than one entry (since each entry only stores 1 bit of information). Do you see any other entries that look like they could be associated with this one?
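To make the one-entry-is-one-bit point concrete: a write phase would have to set the HSTS flag on a separate probe hostname for each 1-bit of the identifier. A rough sketch, with a made-up hostname scheme (not from the article):

```python
# Sketch: which probe hosts need their HSTS flag set to encode device_id.
# "bitN.track.example" is a hypothetical naming scheme for illustration.
def hosts_to_mark(device_id, n_bits, domain="track.example"):
    return [f"bit{i}.{domain}"
            for i in range(n_bits)
            if (device_id >> i) & 1]
```

So a single entry like track.nextuser.com could only ever distinguish two populations; a usable identifier needs a whole family of sibling hosts.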
> The impact is that it's possible for a site to track you even if you choose to use "incognito" or "private" browsing features in an effort to avoid such tracking.
I've always thought that (despite user hopes) the point of 'private' browsing was only ever to avoid leaving traces on the user's own computer. (For example, I used it when shopping for Christmas presents.) Firefox's new private window has a warning to this effect:
> While this computer won't have a record of your browsing history, your employer or internet service provider can still track the pages you visit.
Many people use it to spawn what is effectively a "guest session" in which they can log into some site with account B while they're already logged in with account A, without having to log out of account A. Or, similarly, to temporarily disable things like Google's per-user search result personalization. Sites that leak identifying state into such "guest sessions" break them for those purposes. (In fact, since Chrome's incognito windows are just a special case of user profile switching, HSTS probably leaks state between user profiles as well, completely defeating the point of them.)
This has been known for quite a while. I managed to find a case where HSTS allowed information leakage between private/non-private frames within the same browser in Firefox, but I think that's been fixed.
In general, the browser vendors seem to think that HSTS is worth the potential privacy leak. I've also heard some people say they're monitoring to see if anyone does it and will respond if it becomes a problem.
I'm actually pretty irritated that this researcher makes it out as an iOS thing only, it feels like he/she just didn't care to try on anything other than the device they had in front of them.
Chrome on Android behaves the same way the researcher described (fingerprinting works in Incognito tabs), but Chrome, Opera, Firefox, and IE on Windows all get different IDs.
FTA: "A website can encode a globally unique pseudonymous device identifier into any stateful web technology so long as it persists at least log2 n bits, where n is the number of Internet-connected devices (presently roughly 5 billion, requiring 33 bits)."
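The arithmetic in that quote checks out; a quick sketch of the calculation:

```python
import math

# Minimum number of 1-bit entries needed to give each of n devices a
# globally distinct identifier: ceil(log2(n)).
def id_bits_needed(n_devices):
    return math.ceil(math.log2(n_devices))

print(id_bits_needed(5_000_000_000))  # 33, since 2**32 < 5e9 <= 2**33
```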
The basic background problem is that in the "normal" case, HSTS is a security and privacy protection rather than a tracking mechanism, which is why you'd ordinarily want it to persist as long as possible. But it has tracking potential too (as this project demonstrates). I guess the current browser behavior is an attempt to guess at user intent and strike a balance between those two concerns.
Chrome does SSL validation differently on each operating system: on Linux it uses its own NSS instance, while on OS X and Windows it uses the OS-level certificate validation routines. Not sure how HSTS plays into that.
I'm not sure I wouldn't _expect_ incognito mode to respect HSTS. I'd think you would use incognito mode precisely for ~sensitive~ tasks.
Defaulting to https based on a known HSTS flag seems good in this case; otherwise every incognito session would start out blank, right? (I'm ignoring the preload list that ships with the browser.)
It sounds like HTTPS Everywhere is overlapping functionality with HSTS. Is there some way that HTTPS Everywhere could just inject HSTS rules rather than looking up every URL and rewriting it before sending a request?
HSTS can only change the protocol, while HTTPS Everywhere can do more complex rewrites (different hosts, different paths). So what you propose would only cover a very limited subset of the cases the extension handles.
It seems like the main use case for HSTS is with the site being requested by the user in the URI bar, for protecting cookies and login credentials associated with that domain.
There doesn't seem to be a major use case for secondary resources (images, CSS, JavaScript, etc. loaded by the page itself), which serve as the vector in this attack. Such resources must be requested via https on an https page anyway.
So, wouldn't it be better to just restrict the usage of HSTS protocol overrides to just the main domain being requested by the user in the URI bar?
Am I right to understand that this can even be a server side cookie, i.e. that it can't even be killed by disabling javascript (since the server can tell if there was a redirect)?
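Right — the read-out needs no JavaScript. The page can embed plain-HTTP `<img>` probes, and on the server, whether each probe request arrives over http or https reveals the bit. A hypothetical handler (hostname scheme invented for illustration):

```python
# Sketch: server-side bit collection, no client script required.
# The page embeds <img src="http://bit7.tracker.example/p.gif"> etc.;
# a browser holding an HSTS entry for that host fetches it over https.
def record_probe(seen_bits, host, scheme):
    index = int(host.split(".")[0][3:])     # "bit7.tracker..." -> 7
    seen_bits[index] = (scheme == "https")  # https arrival => bit was set
    return seen_bits
```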
Wow, can't get it to go away on Firefox 33.0 Ubuntu. I was able to clear it by manually deleting the info from the permissions.sqlite3 database as described by agwa.
IMHO it's not reasonable to perform dozens of HTTP requests just to create a device fingerprint; especially on mobile networks, it will take a long time for all the requests to complete.
If it happens out of band of the page doing anything else, the time required won't matter much, especially given the value of a truly persistent tracking cookie.
https://www.rfc-editor.org/rfc/rfc6797.txt