Let’s say I visit an HTTPS website. There is a decent chance my browser will use TLS 1.3 to connect to the server. It will verify the certificate, establish an ephemeral symmetric key, transmit an encrypted request, and decrypt the response. It would immediately solve the needs of Uniquonym if I could then generate a zero-knowledge proof that the certificate was signed by a particular CA, and that the transcript of the connection (request and response) met certain parameters.
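For concreteness, here is roughly what that connection looks like using Python's standard `ssl` module (purely illustrative - the hostname and request are placeholders, not anything Uniquonym-specific). Everything a proof would need to bind - the certificate chain, the request, and the response - flows through this client code:

```python
import socket
import ssl

# Illustrative only: open a TLS connection, check the negotiated version,
# and exchange one HTTP request/response.
context = ssl.create_default_context()  # verifies the server certificate against system CAs

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())   # e.g. 'TLSv1.3'
        print(tls.cipher())    # negotiated cipher suite
        # The request and response are protected by symmetric keys that
        # *both* endpoints - including this client - know.
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        response = b""
        while chunk := tls.recv(4096):
            response += chunk

print(response[:200])
```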

However, there is a huge gap here. Since my browser, which I can control entirely, negotiated the symmetric key, I can:

  1. Encrypt an entirely different request to what I actually sent to the server, and/or,
  2. Encrypt an entirely different response from what I actually received from the server.

Such a forged request or response will look just as authentic as the real one - nothing in the TLS 1.3 protocol provides non-repudiation to stop me from doing this, because the protocol simply isn’t designed for that.
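A minimal sketch of the problem, assuming the record layer uses AES-GCM (as most TLS 1.3 connections do): anyone who holds the negotiated traffic key - which includes the client - can produce a ciphertext that decrypts and authenticates exactly like a genuine server record. The key, nonce, and plaintexts below are made up for illustration; in real TLS 1.3 the nonce is derived from the record sequence number, so a forged record can even occupy the same position in the transcript.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Pretend this is the server-to-client traffic key negotiated in the handshake.
# The client knows it, so the client can use it too.
traffic_key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(traffic_key)
nonce = os.urandom(12)  # in TLS 1.3 this would come from the static IV and sequence number

genuine = aead.encrypt(nonce, b"HTTP/1.1 200 OK\r\n\r\nname=Alice", b"")
forged  = aead.encrypt(nonce, b"HTTP/1.1 200 OK\r\n\r\nname=Mallory", b"")

# Both records decrypt and pass authentication under the same key;
# nothing distinguishes the forged record from the genuine one.
assert aead.decrypt(nonce, genuine, b"") == b"HTTP/1.1 200 OK\r\n\r\nname=Alice"
assert aead.decrypt(nonce, forged, b"")  == b"HTTP/1.1 200 OK\r\n\r\nname=Mallory"
```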

There are proposals for extensions that, with the cooperation of the server, provide TLS non-repudiation (e.g. TLS-N). If every government in the world could be convinced to adopt these on the servers that prove user identities through their responses, that would enable Uniquonym. However, TLS-N has seen very limited adoption, and governments looking to monetise their identity systems are unlikely to go out of their way to help provide a free alternative. So relying on this option is probably unrealistic!

There are solutions that use multi-party computation to run the TLS client, interactively convincing a group of users that a transcript is valid. However, they are relatively slow, don’t scale to large groups, and only convince the group members present at the time of the original request - they don’t produce a non-interactively verifiable proof. Turning this into a non-interactive protocol is beyond what currently known cryptographic primitives allow, and unlikely to be feasible. For the request side it would be impossible outright: arbitrary data could be encrypted, the state rolled back, the correct data encrypted and sent on the wire, and the arbitrary data substituted into the claimed transcript later.

One alternative is to lean on so-called “trusted computing” - i.e. specialised hardware that can perform operations and produce signed attestations about the process. This is similar to introducing a trusted third party, except that it is owned by the person who wants to verify their identity (but designed so that person doesn’t have access to the private keys to create false attestations). Data stays physically local, but there is still trust in the manufacturer of the trusted computing module to ensure keys are adequately protected, and not to sign certificates for fake modules (allowing fake transcripts to be attested to). This puts them in a similar position to governments - they can’t unmask a Uniquonym, but they can create multiple fake ones. Since we don’t have a clearly better option, this is the current area of research for Uniquonym.

Trusted computing is a controversial choice because its most common application is to reduce users’ freedom - the FSF calls it Treacherous Computing. Using it instead to create pseudonyms that resist censorship and the astroturfed manufacturing of consent takes a technology built mainly for bad purposes and turns it to good use.

There are multiple types of trusted computing.

One of the easiest options might be to use a Trusted Execution Environment (TEE) such as AMD Secure Encrypted Virtualization (SEV) or Intel SGX - these leverage the existing CPU, letting code run in a secure mode with access to protected secrets. However, they are somewhat finicky to use, and more significantly, many versions have been compromised through undervolting attacks that glitch the CPU. A compromise would allow fake identities to be issued at will, and because Uniquonyms are pseudonymous and based on zero-knowledge proofs, it would be nearly impossible to respond to.

A more secure option would be to leverage physical TPM 2.0 chips. These are designed to be much more robust against various physical attacks, since they are separate from the main CPU, and typically have features like protective wiring set up to destroy the key if the chip is decapped and analysed with probes. Thanks to the efforts of Microsoft, who made it mandatory for Windows 11, TPM 2.0 modules are now fairly common. Note, however, that some TPM 2.0 implementations are actually virtual TPMs on top of AMD SEV / Intel SGX, so secure discrete TPM 2.0 chips suitable for this application might be less common.
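As a rough illustration of how one might check what kind of TPM a machine actually has, the sketch below shells out to `tpm2_getcap` from the tpm2-tools package and looks at the reported manufacturer ID. This is a heuristic only: the output format varies between tpm2-tools versions, and the vendor groupings are assumptions for illustration, not an authoritative classification.

```python
import subprocess

# Heuristic groupings (assumptions, not an authoritative list).
DISCRETE_VENDORS = {"IFX", "NTC", "STM"}          # e.g. Infineon, Nuvoton, STMicro
FIRMWARE_OR_VIRTUAL_VENDORS = {"AMD", "INTC", "MSFT"}  # fTPM / Intel PTT / Hyper-V vTPM

def tpm_manufacturer() -> str:
    """Return the TPM manufacturer ID string reported by tpm2_getcap."""
    out = subprocess.run(
        ["tpm2_getcap", "properties-fixed"],
        capture_output=True, text=True, check=True,
    ).stdout
    lines = out.splitlines()
    # Look for the value line under TPM2_PT_MANUFACTURER
    # (exact formatting may differ between tpm2-tools versions).
    for i, line in enumerate(lines):
        if "TPM2_PT_MANUFACTURER" in line:
            for follow in lines[i + 1:i + 4]:
                if "value:" in follow:
                    return follow.split("value:")[1].strip().strip('"')
    raise RuntimeError("Could not find TPM2_PT_MANUFACTURER in tpm2_getcap output")

vendor = tpm_manufacturer()
if vendor in DISCRETE_VENDORS:
    print(f"{vendor}: likely a discrete TPM chip")
elif vendor in FIRMWARE_OR_VIRTUAL_VENDORS:
    print(f"{vendor}: likely a firmware or virtual TPM")
else:
    print(f"{vendor}: unknown - check manually")
```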

One difficulty with using TPM 2.0 chips is that they provide only a limited range of cryptographic operations. An open research question for Project Uniquonym is whether this is enough to produce an attested TLS 1.3 transcript, using a cipher suite compatible with enough servers to be useful, that can then be used from within a zero-knowledge proof.
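To give a sense of what “enough” would have to mean, here is a minimal sketch of the HKDF-based key schedule TLS 1.3 uses to derive its traffic keys (per RFC 8446). Any scheme that attests a transcript presumably has to cover operations of roughly this shape, whether performed inside the TPM or bound to it; this is illustrative code, not a proposed Uniquonym design.

```python
import hashlib
import hmac

HASH = hashlib.sha256
HASH_LEN = HASH().digest_size  # 32 bytes for SHA-256 cipher suites

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869): a single HMAC invocation."""
    return hmac.new(salt, ikm, HASH).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand (RFC 5869): a chain of HMAC invocations."""
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), HASH).digest()
        out += block
        counter += 1
    return out[:length]

def hkdf_expand_label(secret: bytes, label: str, context: bytes, length: int) -> bytes:
    """TLS 1.3 HKDF-Expand-Label (RFC 8446, section 7.1)."""
    full_label = b"tls13 " + label.encode()
    hkdf_label = (
        length.to_bytes(2, "big")
        + bytes([len(full_label)]) + full_label
        + bytes([len(context)]) + context
    )
    return hkdf_expand(secret, hkdf_label, length)

# Toy walk-through of the start of the key schedule, with a made-up DH secret.
early_secret = hkdf_extract(b"\x00" * HASH_LEN, b"\x00" * HASH_LEN)  # no PSK
derived = hkdf_expand_label(early_secret, "derived", HASH(b"").digest(), HASH_LEN)
fake_ecdhe_secret = b"\x11" * 32  # placeholder for the real (EC)DHE shared secret
handshake_secret = hkdf_extract(derived, fake_ecdhe_secret)
print(handshake_secret.hex())
```

Whether a TPM 2.0 chip’s HMAC and key-management primitives can be combined to perform (or verifiably constrain) this kind of derivation, and then attest the result in a form a zero-knowledge proof can consume, is exactly the open question.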