• A1kmmA · 8 months ago

    What if you do end up accidentally or negligently sharing this never-to-be shared identity?

    It’s equivalent to leaking your entire history up until the identity can next be rotated (which might be annually), so that would be very bad. Hardware security devices that perform only the cryptographic operations, and are hardened so that even someone with physical possession cannot extract the keys / IDs, would be one way to reduce the likelihood.
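
    To make that concrete, here is a minimal Python sketch of one way rotation could work (the HMAC-based derivation and all names are illustrative assumptions, not a specific deployed scheme): a master secret that never leaves the device derives a per-epoch pseudonymous ID, so a leaked ID only links activity within its epoch.

    ```python
    import hashlib
    import hmac

    def epoch_pseudonym(master_secret: bytes, epoch: str) -> str:
        """Derive a per-epoch pseudonymous ID from a never-shared master secret.

        Only the derived ID is ever shared; the master secret stays inside
        the (ideally hardware-backed) device that does the crypto.
        """
        return hmac.new(master_secret, epoch.encode(), hashlib.sha256).hexdigest()

    # Annual rotation: a leak of the 2024 ID links only activity signed
    # under it, and the 2025 ID is unlinkable to the 2024 one.
    master = b"\x13" * 32  # in practice, generated inside a hardware token
    print(epoch_pseudonym(master, "2024"))
    print(epoch_pseudonym(master, "2025"))
    ```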

    What if you’re unlucky enough to live somewhere where the government is one of your principal adversaries, like a Palestinian in Israel or a gay person in any number of jurisdictions?

    For applications where that is a problem, there is an alternative approach where you generate a zero-knowledge proof that a value derived from your private key in a particular way exists in a published tree of existing users. Assuming the government doesn’t have your private key, even the government that issued the certificate of your identity can’t link your pseudonymous identity back to your real one - but you still can’t generate a second pseudonymous identity for the same real identity.
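
    As a rough illustration of the data structures involved (the zero-knowledge proof itself is elided - real systems built on this idea, such as Semaphore, use zk-SNARKs for that step, and everything named here is a hypothetical sketch):

    ```python
    import hashlib

    def H(*parts: bytes) -> bytes:
        return hashlib.sha256(b"".join(parts)).digest()

    # The issuer publishes a Merkle tree of identity commitments, one per
    # certified person. Each commitment is derived from the user's private
    # key, which the issuer never learns.
    def merkle_root(leaves: list[bytes]) -> bytes:
        level = list(leaves)
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate last node on odd levels
            level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    # User side: the commitment goes into the public tree; the "nullifier"
    # serves as the pseudonymous ID. It is deterministic, so one private key
    # yields exactly one pseudonym per service - but without the key, nobody
    # can link the nullifier back to the commitment.
    private_key = b"\x42" * 32
    commitment = H(b"commit", private_key)
    nullifier = H(b"nullify", private_key, b"service-id")

    other_users = [H(b"commit", bytes([i]) * 32) for i in range(7)]
    root = merkle_root(other_users + [commitment])

    # A zk-SNARK (omitted) would prove: "I know a private key whose
    # commitment is in the tree with this root, and my nullifier derives
    # from that same key" - revealing only the root and the nullifier.
    print(root.hex(), nullifier.hex())
    ```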

    However, the major drawback in that case is that if you lose your private key, you are locked out of the service (at least until some built-in refresh interval), and you wouldn’t be able to re-establish that you are the same person, or to signal that messages from the previous key should no longer be trusted.

    There is not going to be any technical scheme that trusts the government to re-link a new private key to your identity but isn’t vulnerable to a problem similar to the original scheme’s - if the government can re-link keys, then a low-tech attack for them is simply to certify that a government agent’s public key is actually yours.

    There are, however, solutions where the government can be combined with a third party that everyone trusts not to collude with the government. You prove your government identity to that trusted third party, and the third party issues a certificate with a different ID - guaranteeing it will only issue one of its IDs per incoming government ID. Then sites would need to trust that third party instead.
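
    A toy sketch of that issuance rule, assuming the government-identity check itself happens out of band (the class and all names are made up for illustration):

    ```python
    import hashlib
    import secrets

    class TrustedIssuer:
        """Issues exactly one pseudonymous ID per verified government ID."""

        def __init__(self) -> None:
            self._issued: dict[str, str] = {}  # gov-ID digest -> issued ID

        def issue(self, government_id: str) -> str:
            digest = hashlib.sha256(government_id.encode()).hexdigest()
            if digest not in self._issued:
                # A fresh random ID: nothing about it is derived from the
                # government ID, so sites that see it learn nothing.
                self._issued[digest] = secrets.token_hex(16)
            return self._issued[digest]

    issuer = TrustedIssuer()
    a = issuer.issue("passport:X1234567")
    b = issuer.issue("passport:X1234567")
    assert a == b  # the same person can never obtain a second pseudonym
    ```

    The unlinkability here rests entirely on the issuer not colluding with the government; blind-signature schemes can go further by preventing the issuer from even seeing which ID it hands out.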

    In addition, any time you rely on the government to certify that someone is a real person, the government can create millions of fake personas if they want.

    However, governments can (and probably do) attack systems with no real identity protection too, in different ways. For example, they can create hundreds of fake identities (backed either by intelligence agents or by AI) for every real one, to drown out and disrupt the real conversation (e.g. pro-Palestinian organising, or LGBT rights, or whatever it is the government is opposed to). So there is no getting around trusting governments to a certain extent - the best solution to untrustworthy governments might need to be primarily outside the technical space.

    And how would you prevent the proliferation of plain ol’ unsigned data?

    The point of such systems would be to help recover signal when adversaries are trying to drown it out with noise. So as a user you choose to see a filtered view that only shows messages signed by people who have proven they hold at most n pseudonyms, and whose real identity is certified by a government you trust enough not to create lots of fake people.

    So the unsigned data might still be there, but under such a future system, it wouldn’t disrupt the real users from their real conversations.
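
    A minimal sketch of what such a client-side filter could look like (the message fields and trust list are hypothetical placeholders):

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Message:
        body: str
        signed: bool
        max_pseudonyms: Optional[int]         # attested cap; None if unsigned
        certifying_government: Optional[str]  # who certified the real identity

    TRUSTED_GOVERNMENTS = {"NZ", "IS"}  # whichever issuers this user trusts
    MAX_PSEUDONYMS = 3                  # the user-chosen threshold n

    def visible(msg: Message) -> bool:
        """Show only messages whose signer has proven a pseudonym cap of
        at most n, certified by a government the user trusts."""
        return (
            msg.signed
            and msg.max_pseudonyms is not None
            and msg.max_pseudonyms <= MAX_PSEUDONYMS
            and msg.certifying_government in TRUSTED_GOVERNMENTS
        )

    inbox = [
        Message("real discussion", True, 1, "NZ"),
        Message("astroturf flood", False, None, None),
    ]
    print([m.body for m in inbox if visible(m)])  # ['real discussion']
    ```

    The unsigned flood is still in the inbox; it just never reaches the filtered view.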