An essay on digital identity and democracy

This is not my first post on this topic, but it is probably the longest. It doesn’t contain any source code nor details of protocol exchanges, and it’s written in something vaguely approaching plain English. If you know me quite well, none of this is new, but it’s possibly the first time I’ve strung it all together into something attempting to be coherent.

First, a gentle introduction to public-key cryptography

I promise this won’t be as scary as it sounds.

Public-key cryptography was one of the more important developments of the twentieth century. It underpins most kinds of secure communications around the world, to the extent that problems with specific implementations get geeks even more nervous than reports of vulnerabilities in Adobe Flash and Oracle Java.

The principle is quite straightforward: instead of having the same key which both encrypts and decrypts data, there are two keys which have a particular mathematical relationship. If you encrypt with one, you can only decrypt with the other, and vice versa. What usually happens is that one key is designated the “public key” and distributed to anybody who might need it, while the other is the “private key” and is kept as secret as is humanly possible.

The mathematical relationship is such that although it’s quite easy to generate the pair of keys, it’s computationally extremely difficult to calculate the private key if all you have is the public key.

(If you’re interested, it’s because determining the prime factors for extraordinarily large numbers is not something even a huge fleet of computers can do quickly, even though your smartphone can do the opposite—that is, find some large prime numbers and multiply them together—without much effort).

The nature of this public-private “keypair” means that you can generate your own keys at will, and distribute the public key far and wide. Indeed, there are servers on the Internet whose sole purpose is to make it easy to redistribute public keys. Then, anybody who can find your public key can encrypt a message that only you can decrypt (or, if you’re careless, somebody else with your private key).
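
To make that concrete, here is a minimal sketch using Python's pyca/cryptography library; the key size and padding are just commonplace choices, not a recommendation:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generating the keypair is easy; deriving the private key from the
# public key is the part that's computationally infeasible
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # this half can be published anywhere

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anybody who finds your public key can encrypt a message...
ciphertext = public_key.encrypt(b"for your eyes only", oaep)

# ...which only the holder of the private key can decrypt
assert private_key.decrypt(ciphertext, oaep) == b"for your eyes only"
```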

Although this all represented a revolution in cryptography, it’s not actually the most interesting thing that it made possible. The most interesting thing is digital signatures.

Digital signatures are a way of using public-key cryptography to create a kind of message which can only have been generated by a particular private key, but can be independently verified by anybody with access to the corresponding public key—in principle, verifiable by everybody.

They work by employing the public and private keys the opposite way around to when you want to encrypt something normally. First, you take the message you want to “sign” and generate a cryptographic hash of it: that is, you (or rather, your computer) perform a specialised mathematical operation which generates a fixed-length code based upon the contents of the message.

The key properties (no pun intended) of the hash function are that changing the message will change the hash value, and that it’s practically impossible to figure out how to change a particular part of a message in a way which would result in a predictable hash value.

In other words, if you have a copy of both the message and what the hash value should be, and you independently calculate the hash value yourself and find it doesn’t match what you received, it means the message has been tampered with.
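
You can see both properties with nothing more than Python's standard library; note how a one-character change produces an entirely unrelated digest:

```python
import hashlib

print(hashlib.sha256(b"Pay R. Jones 10 pounds").hexdigest())
print(hashlib.sha256(b"Pay R. Jones 90 pounds").hexdigest())
# The two digests bear no predictable relationship to one another, so
# you can't nudge a message towards a hash value of your choosing.
```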

Once you have the hash value, you encrypt it with the private key to generate a digital signature. This means that everybody with your public key is able to decrypt it and check that it matches the hash value they calculate for the same message. If it can’t be decrypted with your public key, it means you didn’t use your private key to create it, and if the hash values don’t match, then the message has been tampered with in transit. If all goes well, we can say that the signature was verified.
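
In practice, cryptographic libraries roll the hash-then-encrypt steps into a single signing operation. A sketch, again with pyca/cryptography (the padding and hash choices are illustrative):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

message = b"I agree to the terms."

# Signing hashes the message, then transforms the hash with the private key
signature = private_key.sign(message, pss, hashes.SHA256())

# Anybody with the public key can check the result; verify() raises if the
# message was altered or a different private key made the signature
try:
    private_key.public_key().verify(signature, message, pss, hashes.SHA256())
    print("signature verified")
except InvalidSignature:
    print("tampered with, or signed by somebody else")
```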

In case you’re wondering, this isn’t some bleeding-edge Tomorrow’s World kind of thing: cryptographic hashes and digital signatures are used extensively by the military and security services, by your phone when it talks to the network, by the chip in your bank card, whenever you press the “Connect to Facebook” button in an app, by your web browser whenever you visit a secure website, and much more besides.

If you didn’t quite follow all of that, here is a nice video that explains it (with a bit more maths):—

Assuming you don’t do anything crazy and keep your private keys safe, properly-implemented digital signatures are “strong” enough for use in legal contexts, and are much less prone to attack than a hand-written signature on a piece of paper which can be copied and faxed (yes, faxed—which is what people often have to do with hand-written signatures…)

Cryptography and you

You can use a digital signature to identify yourself to certain services, similar to a user-name and password. This is also a technology which is in common use, although the implementation in web browsers is very ugly to the point that normal humans tend to need a step-by-step guide with screenshots to be able to make use of it properly.

The exchange goes a little like this under the hood:
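
A sketch of the idea in Python; the framing is invented for illustration and glosses over plenty of real-world detail:

```python
import secrets
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Hypothetical setup: you generated a keypair when you created your
# account, and the server kept the public half on file
your_private_key = rsa.generate_private_key(public_exponent=65537,
                                            key_size=2048)
key_on_file = your_private_key.public_key()

# 1. The server issues a fresh, randomly-generated challenge
challenge = secrets.token_bytes(32)

# 2. You sign the challenge with your private key and send it back
response = your_private_key.sign(challenge, pss, hashes.SHA256())

# 3. The server checks the response against the key on file; a snooper
#    can't reuse it later, because each login gets a new challenge
key_on_file.verify(response, challenge, pss, hashes.SHA256())
```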

If somebody else comes along, then they won’t be able to complete this exchange: the randomly-generated number (which in reality would usually be a bit longer) means that if somebody’s snooping on the exchange, they can’t go back to the server and pretend to hold the private key by sending the same message that you did—this is called a replay attack.

If they’re not trying to perform a replay attack, but simply don’t have your private key, they won’t be able to generate a signature which can be decrypted with your public key, and the signature won’t be verifiable.

You can think of the public key itself as being the username in a traditional username/password setup, with “your ability to demonstrate possession of the corresponding private key” being the password.

A brief discourse on identification versus assurance

Like a username and password, public-key cryptography and digital signatures allow you to verify that the person who created an account was the same person trying to log in this time. What it won’t tell you—on its own—is what the person’s name is, or whether they have a bank account with the TSB, or where they live.

In other words, it allows a person to identify themselves, but it doesn’t provide any assurance about any claims that they make about who they actually are. The holder of a key can say all kinds of things, but there are only some circumstances where it makes sense to simply take their word for it.

In a traditional paper-and-ink world, identity and assurance were kept at a distance from one another. This worked quite well, and is in line with the “data minimisation” principles of data protection: only store what you actually need to.

For example, if you open a bank account, you provide a signature sample as your identification, along with the “one from column A, one from column B” forms of assurance, which are pieces of information about you that are provided by a third party your bank trusts to not lie about it (although slightly confusingly they are often termed “forms of ID”).

In the digital world, identification and assurance have tended to be mixed up together, making it very difficult to perform any kind of meaningful assurance process: for each organisation which might issue you some piece of assurance data, you have a separate username and password, and nothing tangible which you can present to somebody else if they need it.

It gets really tricky in the many circumstances where you might need to provide multiple sets of assurance data, and at the moment the only practical way to achieve it is to outsource the whole process to one of a small group of “assurance brokers” who are able to establish relationships with everyone who might need to provide assurance data. Alternatively, they can perform “proxy assurance”, which involves going through a traditional paper-based process with them periodically, after which they essentially assert “we saw evidence on the 28th May 2014 that Rachel Jones has a bank account at the TSB”.

The model used to be one where the individual physically controls access to the flow of information, is able to provide it only where it’s needed, and only if they believe the entity they’re handing it over to is trustworthy and will provide something worthwhile in exchange. As more services move online, we’ve shifted to a model where the individual has essentially no control over the flow of information about them and is told who they must trust.

It doesn’t matter especially whether you get instinctively twitchy at the loss of control, because this shift has plenty of other downsides that widespread Internet access was supposed to obviate. Creating a market for “identity assurance providers” (i.e., the middle-men), many of whom are profit-making companies, turns what used to be a straightforward exchange of information for a service into an opaque black box with strings and costs attached—and that applies whether you’re the service-provider at either end of the chain, or the individual in the middle.

For example, GOV.UK Verify is such an “assurance broker” scheme. Those participating in the scheme bid on quite large contracts put out to tender in order to provide their brokerage services to government departments and executive agencies who need assurance data. They had to be “certified” (that is, audited to help ensure they wouldn’t accidentally leave your assurance data on a USB stick on a train), and those contracts only cover the exchange of assurance information for government services. If you’re a service provider of some other kind and need some assurance data, you’ll need to contract with one or more of them yourself—and even then, it’s pretty likely that as an end-user you’d have to proffer that data multiple times for multiple kinds of service.

In other words, in the process of transitioning services online, the taxpayer has had to pay a small number of companies to perform a job that an ordinary human being could do (and, in fact, still has to do) on paper, and the world has become a more confusing and murky place into the bargain.

This isn’t to pick on GDS: they’ve done what they needed to in order to get the job done, but it’s very definitely a retrograde step in the grand scheme of things.

Chains of trust

Meanwhile, on your computers, phones and tablets live the certificates of these things called “certification authorities”. These contain the public keys of organisations around the world who, for a (variable) fee, perform a kind of assurance service.

The way that it works is this: you need to run a secure website, and so you need to get a digital certificate, which is a standard way of representing a piece of assurance information online. It contains your public key (to identify you, the subject of the assurance statement), along with a signature and associated public key from the certification authority who issues it.

Anybody receiving the certificate can unwrap the signature and confirm that it really was issued by the certification authority, and that it really does contain your key as the subject. The same signature-verification identification process described earlier plays out in reverse when you connect to a secure website: your device asks the site to prove that it holds the private key, and cross-checks the result against the certificate to determine whether it’s trustworthy.

In practice you actually end up with chains of these certificates, with the website you’re visiting at the bottom, and what’s known as a root certification authority at the top. Your browser verifies each one in turn, going up the chain until it reaches the root, or encounters something which couldn’t be verified—resulting in a horrid and purposefully scary warning message.

Unlike the others below it in the chain, the root authority’s certificate isn’t issued by anybody, which is why a copy of it is stored on your device. Any certificates in the “root CA list” on your device are automatically deemed to be valid and verified, and as a consequence so are the active certificates issued by them.
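
A sketch of the verification walk your browser performs, assuming RSA-signed certificates and ignoring real-world details such as expiry dates, name checks and revocation:

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def verify_chain(chain: list[x509.Certificate],
                 root_ca_list: list[x509.Certificate]) -> bool:
    """chain is ordered leaf-first, ending with the root certificate.
    Raises InvalidSignature if any link in the chain fails to verify."""
    for cert, issuer in zip(chain, chain[1:]):
        issuer.public_key().verify(
            cert.signature,
            cert.tbs_certificate_bytes,
            padding.PKCS1v15(),              # assumes RSA-signed certificates
            cert.signature_hash_algorithm,
        )
    # The chain is only trusted if it ends at a certificate which is
    # already present in the device's bundled root CA list
    return chain[-1] in root_ca_list
```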

This system was born out of a pre-Web era in which the national telecoms operator was king and it seemed like a good idea for them not only to be the gatekeeper for your telephone and data lines, but also for the services that you accessed. This wasn’t quite as crazy as it sounds: prior to widespread de-regulation, there was often only one national operator, and because they installed your lines and had a billing relationship with you, they at least knew who you were and where you lived or worked. They were, in effect, the first digital assurance brokers.

Prior to the emergence of the Web, the thinking was very much one of a federated model whereby services like Prestel constituted the online world, and so anybody wishing to provide online services also had to go through the same telecoms operators—meaning the assurance brokerage actually worked in both directions.

As the Web exploded, these same technologies were employed to ensure that secure websites could be verified, except that the “national telecoms operator” approach didn’t really work anymore.

Instead, organisations set themselves up as certification authorities—some of them independent, some part of larger corporations, and some of them governments—and did deals with browser makers to ensure their certificates were included in the bundled “root CA list” (if they’re not in that list, it’s not possible for your browser to verify the certificates issued to websites, because the root certificate at the top of the chain would be unknown).

This system persists more-or-less intact today, but is horribly broken (that link is not by any means the only example, but is possibly the most well-known). For example, your computer probably trusts the China Internet Network Information Centre, GoDaddy, Swisscom, Visa, Wells Fargo Bank, the US government, the Taiwanese government and a whole heap of people you will never have heard of.

All of these entities, as well as the many more in the middle of these chains of trust, are trusted by your computer to perform assurance of web sites and services on your behalf.

In days gone by, this didn’t really matter too much: the stuff you did online wasn’t ever going to be that interesting to anybody (except perhaps your immediate friends and family), and you could always cancel your credit cards if the worst happened.

Nowadays, “the stuff you do online” encompasses so much more, and the list is growing all the time: banking, health, interactions with government. We actually do important things online now, and people haven’t yet stopped talking about trying to do really important things like voting in elections online. The ramifications in the event of a screw-up grow in significance every day, far beyond the mere “minor inconvenience” they once were.

So, voting…

Voting in a modern free and fair election has a number of necessary constraints placed upon it which mean it’s not remotely as straightforward to shift online as, say, voting in The X Factor or Strictly.

Every stage of the process must be verifiable—by both the candidates (and their representatives), and by observers who keep an eye on things on behalf of the electorate and anybody else impacted by the election. In other words, just about everyone.

The votes themselves must only be cast by those who are actually eligible to vote in the first place, and must be cast in secret and anonymously. This is so that duress cannot be applied either before or after the election has taken place.

Once cast, the votes must remain sealed (and disassociated from any information about who cast them) until the ballots close, to prevent undue influence being exerted over those who are yet to vote based upon information about votes which have already been cast.

Finally, there must not be any impediment to somebody who is entitled to vote actually doing so in practice.

All of this is quite tricky to replicate online without compromising some aspect of it.

The practical effect is that the process can’t readily rely upon voters’ digital identities being managed by third-party corporations such as Google, Facebook or Experian. All of the software involved needs to be open source and open to inspection by anybody, as do the network protocols and data flows, as well as any hardware that’s been installed, and the only “terms of use” which can apply must be electoral law.

You can’t get into situations where somebody can’t vote because their account has been suspended, or an election has to be declared void because a system was compromised by a rogue employee, or because a dubious implementation meant that information about the votes themselves leaked.

On the plus side, we can already do online voter registration without huge problems, which means that the guts of the “eligibility” part of the puzzle has already been solved.

Putting it all together

Let’s start with the end result, because it’s useful to design things from the top down, defining requirements as we go: once the polls close, anybody can verify the count of anonymously-cast votes, each of which was cast by somebody eligible to vote.

To do this, votes must be encrypted when they are cast in a way which means they can only be decrypted for counting when the polls close—that is, the encryption and decryption keys must be different—a fairly clear-cut use-case for public-key cryptography.

The knotty part of this is actually keeping the decryption key secret until the right time, which isn’t how public-key cryptography is usually deployed. Somebody could make and sell a black box which generates a keypair and hands over the public key immediately, but holds back the private key until a certain time and date; it would lack flexibility, though, and could well struggle to be sufficiently verifiable. Let’s put that in the “nice idea in theory” bucket.

Instead, we can rely upon three things: the fact that in a given constituency, the candidates are all competing with one another, that the returning officer is a person who exists, and that the law doesn’t cease to apply or be implemented once you do some things electronically.

Encryption keys are bits of data. If you plug them into some software that does encryption or decryption then they become useful, but until then they’re just lumps of opaque binary goo like any other. You can copy them around, split them into chunks, get them printed on t-shirts, or turn them into abstract digital art.

The solution to the key hold-back problem is to generate and issue the public and private keys at the same time, but for the private key to be broken into portions and itself encrypted with the public keys corresponding to each of the candidates and the returning officer.

That means that you can only get at the private key for decrypting the votes when all of the candidates and the returning officer get together and combine their decrypted portions of the private key into a single “constituency private key” which can unlock the votes—and they won’t do that until after the polls close, or they’ll find themselves getting arrested.

Of course, unfortunate events can occur, and so you could perform that process several times over, cutting the key different ways, so that only a certain proportion of the group is required for it to be quorate (but you need to take care that you don’t end up in a situation where a small subset can get together and derive the private key from their collected chunks).

This does require that each candidate and returning officer has their own keypair, but we’ll come to that.
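
Here is a sketch of that splitting for the all-or-nothing case. A real system would use a proper threshold scheme (such as Shamir’s secret sharing) for the quorum variant described above, and would wrap the constituency private key under a short symmetric key first, since RSA can only seal short messages directly; the function names are invented for illustration:

```python
import secrets
from functools import reduce
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_and_seal(secret: bytes, holder_public_keys):
    """Split `secret` so that every holder must cooperate to recover it."""
    shares = [secrets.token_bytes(len(secret))
              for _ in holder_public_keys[:-1]]
    shares.append(reduce(xor, shares, secret))  # XOR of all shares == secret
    # Seal one share to each candidate and the returning officer
    return [pk.encrypt(share, OAEP)
            for pk, share in zip(holder_public_keys, shares)]

def recombine(decrypted_shares) -> bytes:
    # Only works once *every* holder has decrypted their own share
    return reduce(xor, decrypted_shares)
```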

So, we now have a set of encrypted individual anonymous votes, encrypted with the “constituency public key”, and a way to decrypt them for counting when the time comes.

The actual vote-casting itself can work very similarly to postal voting today: an inner envelope containing the anonymous vote cast, inside another envelope which confirms your identity.

We have each voter generate their actual vote, which is simply a piece of information formatted in a particular way so that it can be counted—the digital equivalent of “place a cross inside exactly one box”. If it’s incorrectly-formatted or contains something else, the ballot can be considered spoiled. The piece of software used by the constituent and responsible for actually generating this data can make sure that it happens consistently, so spoiled votes only occur intentionally.

Once it’s generated—whether a valid vote or a spoil—the voter can encrypt it with the constituency public key. This means that unless they choose to share their vote with somebody else, it’s only readable once the polls close and the constituency private key needed to decrypt it has been reconstituted (but once that happens, the vote itself can be read by anybody).

Having created an encrypted (sealed) vote, they can then sign the vote using their own private key. This is the direct equivalent of putting an envelope containing a ballot paper inside another envelope containing information that identifies the voter: except for the fact that it’s now really impossible to open the inner envelope until the polls close and the constituency private key is released.
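
A sketch of the double envelope, with the ballot format and names invented for illustration (a real ballot format would be precisely specified):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def cast_vote(candidate_id: str, constituency_public_key, voter_private_key):
    # The "ballot paper": a small, rigidly-formatted piece of data;
    # anything else counts as an intentional spoil
    ballot = f"ballot-v1:{candidate_id}".encode()

    # Inner envelope: sealed with the constituency public key, so it can
    # only be opened once the constituency private key is reconstituted
    sealed = constituency_public_key.encrypt(ballot, OAEP)

    # Outer envelope: sign the sealed ballot with the voter's own key,
    # identifying the voter without revealing the vote inside
    signature = voter_private_key.sign(sealed, PSS, hashes.SHA256())
    return sealed, signature
```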

For their vote to be valid, their public key must appear on the Electoral Roll, having been placed there when they performed voter registration.

This of course requires two things: that everyone has their own keypair that they control themselves, and that voter registration be extended to use the sort of key-verification process described earlier in order to place public keys on the Electoral Roll.

It does make voter-eligibility verification quite straightforward, but it is also the area which is likely to require the greatest degree of scrutiny; this is in contrast to the paper-based system, where counting is where nearly all of the auditing happens.

When voter registration closes prior to an election, a list of all of the registered public keys for a constituency can be generated. In fact, it only needs to contain the public keys themselves—there’s no need for other details about the electorate. The list can be digitally signed by the voter registration system generating it, so that it can itself be verified by each constituency’s voting system upon receipt. These lists can be published openly, allowing people to check that their own keys are actually on the list and that the numbers aren’t out of step with the actual population, as well as distributed to each of the constituencies themselves.
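
A sketch of generating and signing that list, with the serialisation details invented for illustration:

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def publish_roll(voter_public_keys, registration_private_key):
    # The published list contains nothing but the public keys themselves
    roll = b"".join(
        pk.public_bytes(encoding=serialization.Encoding.PEM,
                        format=serialization.PublicFormat.SubjectPublicKeyInfo)
        for pk in voter_public_keys)
    # Signed by the registration system, so each constituency's voting
    # system (and anybody else) can verify the list on receipt
    signature = registration_private_key.sign(roll, PSS, hashes.SHA256())
    return roll, signature
```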

At this point, we have a set of sealed and signed votes ready to submit, and a set of valid public keys for the electorate in that constituency. Conceivably, the votes could just be sent via e-mail to a specific address, but a somewhat more robust system that can provide instant feedback would be more sensible. For the sake of argument, let’s say that it’s web-based, but it needn’t necessarily be.

Upon receipt of a vote, the system can verify its signature against the keys on the Electoral Roll. If successfully verified, it can strip the signature off and store the encrypted (and now anonymous) ballot somewhere secure. Because the ballot is encrypted, it doesn’t need to be secret, just secured against data loss. In fact, it would be sensible for a copy of every vote cast to be forwarded to every voting system in every constituency—that way, the local counts in a General Election (for example) could be cross-checked 650 times once the constituency’s private key is released, which would also serve as a sanity check against the counting system.
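
A sketch of that receipt step, continuing the illustrative names from the earlier sketches (a real system would index the roll by key fingerprint rather than trying every key, and would record which keys have already voted):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def receive_vote(sealed, signature, electoral_roll, ballot_store):
    """electoral_roll: public keys from the signed list;
    ballot_store: durable storage for the anonymous sealed ballots."""
    for public_key in electoral_roll:
        try:
            public_key.verify(signature, sealed, PSS, hashes.SHA256())
        except InvalidSignature:
            continue
        # An eligible voter: strip the signature away and keep only
        # the sealed (now anonymous) ballot
        ballot_store.append(sealed)
        return True
    return False  # not signed by any key on the Electoral Roll
```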

So, end-to-end:—

1. Each voter registers to vote, placing their public key on the Electoral Roll.
2. When registration closes, the signed list of public keys for each constituency is published and distributed.
3. A constituency keypair is generated; the private key is split into portions, each encrypted for one of the candidates or the returning officer.
4. Each voter seals their ballot with the constituency public key, signs it with their own private key, and submits it.
5. The voting system verifies each signature against the Electoral Roll, strips it, and stores the anonymous sealed ballot.
6. When the polls close, the candidates and the returning officer combine their portions to reconstitute the constituency private key.
7. The ballots are decrypted and counted—and the count can be cross-checked by anybody who cares to do so.

There’s a “but”…

This is not magic, and I have not described the precise details to the extent that you could confidently say “if you did it like this, then it would all go swimmingly”: there are attack vectors and things left undecided to a sufficient degree that you still need to think carefully about implementation (and consult widely with real experts) before pushing ahead with it.

To name a few: you need to think about how to prevent denial-of-service attacks; what the protocols actually look like (taking care to prevent replay attacks and the like); how to ensure that private keys are actually kept secure; and what “quorate” actually means for a group of candidates and returning officers (with delegates, presumably).

We also need to ensure that all candidates and constituents are able to generate and manage their own keypairs which only they (or their legal proxy, where needed) actually control.

However, doing all of these things, and implementing it as a well-documented, open-source system which is open to inspection throughout its development and operation—and, in particular, going to the trouble of inviting people to do so—is merely in the realms of “challenges” rather than really difficult system design: all of the technologies required to actually do it have been invented, tried and tested already.

By way of example, the devices that issue the constituency keys could be purchased as commodity PCs from a range of suppliers, all set up together in a controlled environment and running a particular auditable stack, and then physically secured to prevent tampering ahead of distribution to each constituency. It’s not a great leap to think about the security of these devices in similar terms to those we’re used to in thinking about the security of tens of thousands of ballot boxes on election night.

What you really can’t do is compromise on the basic principles. What difference might it make, for example, to swap the signature-based voter-eligibility checks for one of a group of outsourced identity providers?

Well, as much as their CEOs might be well-intentioned, you’ve just created a system where a group of private corporations literally control access to democracy, and introduced a whole suite of potential failure points which don’t exist in a decentralised world: private corporations tend to be resistant to forensic audit by members of the public, and their staff now control en masse something which wouldn’t otherwise be collected together, making them targets for criminals.

The approach set out above seeks to minimise the volume and scale of new things which could go wrong compared to paper voting, while also taking advantage of the efficiency gains that technology has brought.

However, there is one big challenge which remains, and needs serious effort: that of user experience. Public-key cryptography is extremely widespread, but individuals using it to identify themselves online tend to be confined to security experts and corporate users. For any of this to become a reality, operating systems and browsers need to get to a place where ordinary human beings can willingly use the technologies and understand what’s happening on their behalf. It needs to be both as easy and transparent as ticking a box and putting it into an envelope—and that’s no mean feat.

May cause side-effects

If we manage to get all of this in place, there are other things that we can do.

With everyone having their own keypairs, people can sign up for and into services without having to remember usernames and passwords, and without having to delegate the function to third parties who might suspend your account for spurious reasons.

We can break the link between the process of identification and the function of assurance, meaning individuals can control all of their identity, not just the bit of it they use to identify themselves as returning service-users. Making use of public-key cryptography and digital signatures means that assurance data becomes tangible again—something which can be kept safe by the individual until it’s needed, and passed on when required. Moreover, because it’s based on digital signatures, it can’t be forged or tampered with.

We can apply that to servers and services, too. Instead of relying on a shaky model of a near-endless list of certification authorities to tell us both that our connection to a website is secure and that it’s trustworthy enough for entering credit card details into, we can split that up and make it more granular, and more useful. With relatively little in the way of protocol changes, we could replace a blanket approach to trust with a model where we trust the authorities which are actually meaningful for particular categories of transaction.

For example, if I’m visiting a site and all I’m doing is providing some personal details so that they can send me stuff, I don’t need to know whether their handling of money is up to scratch—but I do want to know that their registration with the Office of the Information Commissioner is in good standing, so I want a piece of assurance data sent by their web server, issued by the ICO, containing a copy of their Data Protection Registration certificate. If I do get as far as giving them some money, I want to know that their bank hasn’t frozen their accounts, that their annual return to Companies House is up to date, and that HM Revenue and Customs wasn’t at last check in the process of taking them to court over unpaid VAT.

Of course, people in different countries will want to trust different bodies to make assurances about these kinds of information, and the beauty of making the whole thing more granular is that this becomes possible to do in a sensible way (currently, everybody everywhere has to trust every authority equivalently, which is nonsensical).

It also means that a certificate doesn’t have to be issued for sites which don’t collect any data—in other words, the barriers to the whole of the web being accessed securely (which also means free from tampering by intermediaries such as mobile phone and hotel WiFi operators) are vastly reduced because it now costs nothing more than a little configuration to make traffic to and from your web server secure.

Finally, we can simultaneously enhance privacy and strengthen the reputational impact of actions. Because keys are generated and controlled by the individuals to whom they pertain, there’s nothing stopping you generating a new key and using that for certain services: but with the caveat that the new identity would have no particular history associated with it, and would only have meaningful assurance data if you made it the key you use for accessing those assured services.

If you only ever used that key for writing abusive comments underneath YouTube videos, it would be trivial for it to be summarily and automatically ignored by everybody who came into contact with it, because its entire associated history would consist of abusive comments underneath YouTube videos.

In contrast, if you do the same thing with a key that you use elsewhere, you run the risk of the link between your unpleasant activities and those conducted more normally being made quite easily.

Us geeks figured out a long time ago that public-key cryptography was pretty monumental, and it changed the way that we approached system design and security—and along with it, how a great many things that you use every day work behind the scenes. What we haven’t been able to do so far is put the technologies to good use in fixing some of the horrid technical hacks that have prevailed over the last couple of decades and in putting it in the hands of ordinary people so that we can all do important stuff better.

It’s time to change that. It’s time that technology stopped being an excuse to give people less control over their own lives instead of more, and it’s time that it lived up to its potential for making democratic expression more accessible and efficient instead of merely enabling a chaotic echo-chamber.