Since Project Uniquonym is aiming to use HTTPS / TLS transcripts from government sites as the starting point to establish unique identities, figuring out how to create a proof of the transcript is a key goal.

I had originally hoped that it would be possible to use individually owned TPM2.0 devices to create this proof for TLS 1.3. However, this has hit some snags:

  • Implementing GCM is problematic on TPM2.0 (its symmetric encryption commands offer classic block-cipher modes, not authenticated modes like GCM), and that rules out TLS_AES_128_GCM_SHA256 - the only ciphersuite TLS 1.3 makes mandatory to implement.
  • There are two other ‘SHOULD implement’ ciphersuites - another GCM one (TLS_AES_256_GCM_SHA384), and one using ChaCha20 (TLS_CHACHA20_POLY1305_SHA256). ChaCha20 is not supported on TPM2.0 at all.
  • In practice, major implementations of TLS 1.3 do not support any ciphersuites beyond those three (the snippet below shows what a typical OpenSSL build enables).
  • There are TLS 1.2 ciphersuites that might be more feasible to implement. However, it’s over 6 years since TLS 1.3 was released, and browser adoption is now high. I expect it is only a matter of time before TLS 1.2 support becomes rare, so it seems like a dead-end approach to invest in.
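
As a sanity check on that last point, you can ask an OpenSSL-backed TLS stack which TLS 1.3 suites it enables. A minimal sketch using Python’s ssl module (the exact output depends on the local OpenSSL build, but common builds enable exactly the three suites above):

```python
import ssl

# Default client security settings, backed by the system's OpenSSL.
ctx = ssl.create_default_context()

# get_ciphers() lists every enabled suite; keep only the TLS 1.3 ones.
for cipher in ctx.get_ciphers():
    if cipher["protocol"] == "TLSv1.3":
        print(cipher["name"])

# Typical output (OpenSSL 1.1.1 or later):
#   TLS_AES_256_GCM_SHA384
#   TLS_CHACHA20_POLY1305_SHA256
#   TLS_AES_128_GCM_SHA256
```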

This means it is time to pivot to an alternative approach to creating the transcripts.

Here are some options:

Use some other integrity mechanism than plain TLS

If every government issued a cryptographically signed certificate to citizens, or implemented TLS-N, this would be a viable option.

In reality, I think very few people would be able to use this, because so few governments support either mechanism, so it would be unlikely to allow an effective rollout of uniquonyms any time soon, unfortunately.

Use an alternative locally hosted trusted element rather than TPM2.0

One option is to use CPU “Trusted Execution Environments” (TEEs). There are a few candidates here: some AMD CPUs support SEV for protected execution, ARM has TrustZone, and Intel has SGX.

Unfortunately (or perhaps fortunately, given that most applications of these technologies are actually user-hostile), all of these solutions have known flaws - such as voltage glitching, where a carefully timed application of the wrong voltage to the CPU causes a predictable logic fault and circumvents the security. Even attacks with a relatively low success rate make this technology unusable as a private input into zero-knowledge proofs: the leak of the attestation key from a single device would allow someone to create millions of fake identities.

There are other types of trusted elements (for example, various smart cards), but there would be significant distribution barriers to getting them rolled out, and it isn’t clear that they offer more primitives than a TPM2.0 anyway, or how attestation would work with them.

One day, the security of TEEs against attackers with physical access might improve, and it is possible that Uniquonyms could use these devices in the future. However, as of now, I think the locally hosted trusted element approach is, unfortunately, ruled out.

Rely on trusted third parties

The previous attempt was to use TPM 2.0 devices as a local equivalent to a trusted third party.

The alternative is to have the hardware be run by actual third parties that we would need to put trust in. The most viable approach for this is for the trusted parties to be major public cloud hosting providers - see below for a discussion of why this is feasible.

In relation to any one country, the cloud providers would have a similar level of trust to the governments in the system: they could theoretically use their position of trust to enable the creation of fake uniquonyms (i.e. allow one person in a country to have an arbitrary number of uniquonyms). They would not be able to unmask which individual a uniquonym belongs to. One point of difference is that the same cloud provider might be able to fake identities across multiple countries (versus a single government being able to fake identities only in its own country) - which could increase the risk of a cloud provider being targeted by a government that has jurisdiction over it. Where this is a concern, an option would be to trust different cloud providers for different countries.

There is a question of how trustworthy the big providers actually are. I would not trust Amazon not to union-bust, Google not to use dark UI patterns to trick people into opting in to giving them more data, or Microsoft not to enshittify a product to squeeze consumers. However, their public cloud offerings, and particularly the confidential computing parts of them, are a bit different. They all promise their customers (and enter into contractual agreements) not to use customer data outside very limited circumstances; they all make a lot of money from customers trusting that they won’t break those promises; and they are all externally audited on the security controls that prevent staff from misusing customer data. So I’d say it is easier to trust them not to do dodgy things with regard to these cloud computing products.

It is worth noting that this trust model - placing some trust in a few big providers - is how the public key infrastructure for X.509 certificates works. Any Certificate Authority trusted by browsers could theoretically start issuing bogus certificates (and there have been cases in the past where this happened). Trust in this system is established by browsers (on behalf of their users) only trusting Certificate Authorities that comply with rigorous policies - including audits, commitments to prompt disclosure and revocation of misissued certificates, and so on - with bodies like the CA/Browser Forum establishing standards. All of the big public cloud providers are, in fact, also Certificate Authorities. And generally speaking, the root of trust for verifying an identity to support uniquonyms is a CA root certificate for a government website.

As such, a solution which requires the use of a cloud provider service (either paid for by the end user, or by someone else offering a service to them) once per renewal of a uniquonym isn’t so bad.

How to establish non-interactive trust in code running on a cloud provider

Amazon AWS, Google GCP and Microsoft Azure all support running cloud compute with a vTPM (virtual TPM) in measured boot mode: each component, starting from the cloud provider’s virtual firmware, creates a hash of the next component to execute and extends a ‘Platform Configuration Register’ (PCR) with it in an irreversible way. Most PCRs cannot be reset, only extended with new hashes, meaning that an attestation of the PCR state is an attestation of the exact code running in the instance (short of the cloud provider itself providing an exploit). The vTPM can produce a certificate chain, linking back to a root from the cloud provider, attesting that a particular key is resident in the vTPM (with a flag prohibiting it from leaving the vTPM), and that access to the key is contingent on the PCRs having particular values. This means that only the expected code can ask the vTPM to sign a particular attestation. That expected code can then do things like attest that a TLS 1.3 transcript is genuine - creating a chain that requires no trust in anyone except the cloud provider.
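
To make the irreversibility concrete, here is a minimal model of PCR extension (the real command is TPM2_PCR_Extend; this sketch assumes a SHA-256 PCR bank and uses made-up component names). Because each new value folds the previous one in, measurements can only be appended - never removed or reordered - so the final PCR value commits to the entire boot chain:

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # Model of TPM2_PCR_Extend: new PCR = H(old PCR || H(component)).
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

# A SHA-256 PCR starts as 32 zero bytes; each boot stage measures the next.
pcr = bytes(32)
for component in [b"virtual firmware", b"bootloader", b"kernel", b"app image"]:
    pcr = extend(pcr, component)

print(pcr.hex())  # changing any component, or their order, changes this value
```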

This proof can then be used as a private input to construct a zero-knowledge proof (anywhere, not necessarily in the cloud).

Do we have to trust the cloud provider with the contents of the TLS 1.3 transcript?

The TLS 1.3 transcript likely contains sensitive information - such as a cookie for a logged in session to a government website.

It would technically be possible not to expose the sensitive data to the cloud provider at all; there is work such as TLSNotary, which uses multi-party computation (MPC) to spread a TLS 1.2 implementation across two nodes, so that both nodes are confident of the transcript but neither can see the other’s confidential data.

However, this produces highly inefficient transcript proofs, and is slow enough that it might result in timeouts. It could theoretically be updated to TLS 1.3, but the existing implementation covers only TLS 1.2.

A middle ground is to have the instance attest to an encryption public key resident in the vTPM, and have the client encrypt the sensitive data to that key, so that it can only be decrypted by the instance running the correct software (short of the cloud provider granting access to the key against the policy, or signing a false attestation). If the software has been checked to be correct, this is probably a very low risk for users.
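
As an illustration of the client side, here is a minimal sketch using the pyca/cryptography library. It assumes - purely for illustration - that the attested key is an X25519 key (real vTPM keys are typically RSA or ECC P-256), elides the attestation-verification step, and uses a hypothetical context label:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def encrypt_to_attested_key(vtpm_pub: X25519PublicKey, transcript: bytes):
    """Encrypt data so that only the holder of the attested key can read it."""
    # Ephemeral ECDH against the vTPM-resident public key, then HKDF into
    # a one-off AES-256-GCM key (an HPKE-style hybrid construction).
    eph = X25519PrivateKey.generate()
    shared = eph.exchange(vtpm_pub)
    key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"uniquonym-transcript-v1",  # hypothetical protocol label
    ).derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, transcript, None)
    # The ephemeral public key travels with the ciphertext; it is not secret.
    eph_pub = eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    return eph_pub, nonce, ciphertext
```

The client would first verify the attestation over vtpm_pub (the certificate chain and PCR policy described above) before calling this; the decryption key then never exists outside the measured instance.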

Does the cloud instance need to be centrally maintained by one party, or can anyone run it?

Since the zero-knowledge proof would only check that the correct software is running in the cloud instance, and that there is a valid certificate chain back to one of the expected cloud providers, anyone could run the instance. We could give people a choice of which cloud provider to use.
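
To illustrate why the operator drops out of the trust equation, here is a hypothetical, heavily simplified verifier check - a real verifier would parse and verify a signed TPM quote and an X.509 chain, rather than consuming pre-digested fields as this sketch does:

```python
import hmac
from dataclasses import dataclass


@dataclass
class Attestation:
    root_fingerprint: bytes  # fingerprint of the root anchoring the quote's chain
    pcr_digest: bytes        # composite PCR digest from the signed quote
    chain_ok: bool           # whether the quote's certificate chain verified


def accept(att: Attestation, trusted_cloud_roots: set[bytes],
           expected_pcr_digest: bytes) -> bool:
    # Nothing here identifies who operates the instance: any deployment of
    # the published build, on any supported cloud, passes the same checks.
    return (
        att.chain_ok
        and att.root_fingerprint in trusted_cloud_roots
        and hmac.compare_digest(att.pcr_digest, expected_pcr_digest)
    )
```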

Does every end user need to spin up their own instance on the cloud?

Not necessarily - this would be closer to a federated model: anyone who wanted to could spin up a TLS transcript verification service on one of the supported public clouds, but it would also be possible to use one hosted by someone else (without even needing to trust that someone, only the cloud provider).

Summary

Overall, the current approach of using attestation by public cloud providers is not as decentralised as I’d originally hoped would be possible. However, due to the obstacles hit with other options, I think it is the most realistic path forward, and I think it is still acceptable enough to be worth proceeding with.