Thursday, June 7, 2018

EJBCA and Agile PKI

So in case anybody is wondering what the buzzword for 2018 is, it's quite obviously Post Quantum Cryptography. Besides entire conferences on the subject, large tracks of security conferences such as RSA and ICMC have been dedicated to it this year, not to mention a plethora of blog posts on other security and product blogs with titles similar to this one.

Anybody looking for a business idea? Harness the buzz of the last five years by implementing Silo Breaking Post Quantum Blockchain Microservices in the Cloud As A Service. No need to thank me, just remember where you heard it first.

No, but seriously...

In spite of the buzz, though, Agile PKI is a thing, and a thing we should be taking very seriously. There are several reasons why we should be homing in on this matter more than we are:
  • Everybody is talking algorithms and variants. While those discussions are interesting in themselves, unless we have a method to perform a worldwide catastrophic migration, the discussion is moot. 
  • Quantum computers are far from the only threat to the PKI infrastructure:
    •  MD5 and SHA1 were at the time believed to be cryptographically sound, until the day they were proven not to be. How long will we be able to trust SHA2, even without passing the quantum singularity?
    • There may be flaws in the implementations (and by that I mean that there are, just that they haven't been found or disclosed yet); Heartbleed and ROCA come to mind. 
    • PKI infrastructures may be knocked out through human mishandling, requiring mass migrations between CAs. 

Where are we now?

Depending on your outlook, developments in Quantum Computing are progressing either at a snail's pace or at a monster pace. To compare: in the 90s a qubit was still a largely theoretical concept, while during the early 2000s large strides were made in small-scale computations. In 2001 the number 15 was successfully factored using Shor's algorithm, and it took until 2012 to factor the then-record number 143; two years later it was shown that 56153 had been factored as a by-product of that same computation. The currently largest known quantum computer is Google's Bristlecone, which has a whopping 72 qubits. So, have we passed the quantum singularity? Well, not quite. 
Image courtesy of the Google Quantum AI Lab.
In the above graph, the white line between the purple and blue fields is what's called "quantum supremacy": the threshold where a quantum computer solves a problem faster than its digital counterpart. The jury is still out on whether we've passed that level or not, but what is indisputable is that we are still very far from a workable quantum computer of millions or tens of millions of qubits working in concert. The term Error Correction Threshold refers to the fact that quantum computations are probabilistic: while they are more likely than not to end up at the correct result, they are not guaranteed to, and all derived answers must be verifiable. 

It's also not simply a question of when:
  • qubits are fickle things, and a quantum computer is only usable while all its qubits are part of a single coherent state. The more qubits added and the longer the computation, the greater the chance that coherence is broken before the computation has finished. 
  • adding more qubits is not trivial (as the meager gains of the last 20 years have shown), and moving from <100 qubits to millions requires overcoming several engineering challenges which haven't yet been solved. 

So what's the hurry?

As mentioned above, there are plenty of reasons to be pursuing PKI agility that aren't part of sci-fi scenarios, but with regards to quantum computing there is good cause to start getting organized. As a reference, ETSI have published an IPR pertaining to post-quantum computing, in which they define the following doomsday equation:
  • X = the number of years the public-key cryptography needs to remain unbroken. 
  • Y = the number of years it will take to replace the current system with one that is quantum-safe. 
  • Z = the number of years it will take to break the current tools, using quantum computers or other means. 
  • T = the number of years it will take to develop trust in quantum-safe algorithms.
Where a state of X + Y + T > Z implies that everything has gone pear-shaped. I've taken the liberty of translating this into PKI-friendly terms, where:
  • X = the longest validity of a certificate issued today in a PKI. Unless actively threatened, end users are extremely unlikely to migrate their certificates to agile variants when prompted. Even then, look at how prevalent Triple-DES and SHA1 (and dare I say it, even MD5) still are. DV/EV certificates can have a validity of as long as two years, but many intermediate and root CAs have validities of 5-10 years, as do many eID and passport certificates. 
  • Y = the number of years it takes to establish an Agile PKI standard and implement that standard universally. Off the top of my head, I would say that this would take 1-3 years from inception to complete rollout. 
  • Z = the number of years until a quantum computer can solve factorization/discrete logarithm problems + computation time. The lowest estimate I've heard for this is 4 years (though unlikely) while there still are theories that quantum computing is unfeasible. Counting on the q-bomb being dropped within 10-20 years is not unrealistic though.
  • T = the number of years it will take for a PQ algorithm to be adopted. Certifying organizations such as NIST are notoriously slow in adopting new standards, and even though the PQ algorithm competitions are in full swing, we are still 3-4 years from having at least one algorithm be considered trusted. A worst case scenario is that this could take up to 20 years though.
So the main factor that PrimeKey, as a software developer, can influence is Y: encouraging the adoption of open Agile PKI standards, implementing those standards and encouraging the rollout of agile systems.
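To make the inequality concrete, here is a quick back-of-the-envelope check in Python, plugging in rough midpoints of the estimates from the list above (the numbers are illustrative, not predictions):

```python
# ETSI-style doomsday check. Danger condition: X + Y + T > Z
# All values in years; midpoints of the estimates discussed above.
X = 10  # longest validity of certificates issued today (root CAs, eID, passports)
Y = 3   # time to establish an Agile PKI standard and roll it out universally
T = 4   # time until at least one PQ algorithm is considered trusted
Z = 15  # time until current tools can be broken (quantum or otherwise)

at_risk = X + Y + T > Z
print(X + Y + T, ">", Z, "->", at_risk)  # 17 > 15 -> True
```

With these midpoints the danger condition already holds, which illustrates why shrinking Y, the one term a software vendor can actually influence, matters so much.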

So where do we go from here?

From our point of view, which algorithm is chosen by the certifying bodies doesn't matter greatly, though some are by their very nature inappropriate for PKI use due to statefulness or egregious key sizes. If you're interested in reading more about which implementations currently exist and how they behave when implemented in EJBCA, you're welcome to read the excellent thesis which was hosted by PrimeKey last year.

Our Requirements

From our point of view, the requirements for an Agile PKI implementation are that:
  1. all issued non-PQ certificates for our customers' PKIs must be able to safely and instantly migrate to post-quantum certificates, or already be in possession of one.
  2. all issued non-PQ keys (for PKIs using client-side authentication) must have a migration strategy to equivalent post-quantum key pairs. 
There are currently two transition mechanisms described, the first of which is laid out in this paper hosted by IACR:

Hybrid Certificates

This is the most straightforward idea and the simplest to understand: it means rewriting RFC 5280 to include a second tbsCertificate within the certificate body.
While conceptually simple, this approach is very unattractive from our point of view. Besides requiring a huge rewrite of existing RFCs, it also poses the challenge of how to declare a second tbsCertificate payload in a certificate as non-critical (or face backwards-compatibility issues). 

Post Quantum Certificate Extensions

This solution exists in two variants, and is proposed in this draft written by our friends at Entrust, ISARA and Cisco. It proposes instead to create an intermediate certificate by adding post-quantum elements as non-critical extensions: either the full tbsCertificate or just partial elements (public key/signature).
This approach is far more flexible, as it allows for full backwards compatibility, even allowing for cross-signing by several PQ keys in the same certificate in case there are trust issues prior to rollout. 
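To illustrate the shape of the extension-based approach, here is a minimal conceptual sketch in Python. The structures, field names and OID are invented for illustration; a real implementation (EJBCA builds on Bouncy Castle) would work with proper ASN.1/X.509 encodings:

```python
from dataclasses import dataclass, field

# Conceptual model only: a certificate carrying a classical key, with a
# post-quantum public key attached as a NON-critical extension so that
# legacy verifiers, which ignore unknown non-critical extensions, still
# accept the certificate unchanged.

@dataclass
class Extension:
    oid: str        # object identifier of the extension
    critical: bool  # non-critical extensions may be ignored by old clients
    value: bytes    # DER-encoded payload in a real certificate

@dataclass
class Certificate:
    subject: str
    public_key: bytes                  # classical (RSA/EC) public key
    extensions: list = field(default_factory=list)

# Hypothetical OID, invented for this sketch.
ALT_PUBLIC_KEY_OID = "1.3.6.1.4.1.99999.1"

def add_pq_extension(cert: Certificate, pq_public_key: bytes) -> Certificate:
    """Attach a PQ public key as a non-critical extension."""
    cert.extensions.append(Extension(ALT_PUBLIC_KEY_OID, False, pq_public_key))
    return cert

cert = Certificate(subject="CN=Example", public_key=b"\x01" * 32)
# Hash-based PQ keys can be large, which is the size problem noted below.
cert = add_pq_extension(cert, pq_public_key=b"\x02" * 1024)
print(len(cert.extensions), cert.extensions[0].critical)  # 1 False
```

The design point is entirely in the `critical=False` flag: an old client sees a perfectly ordinary certificate, while a PQ-aware client can pull the alternative key out of the extension and verify against it.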

Both of these variants suffer from the same problem: potentially greatly inflated certificate sizes depending on the chosen algorithm, as XMSS, SPHINCS and similar hash-based algorithms can have very large public keys. 

In summary, what does the future bring?

If one only knew, right? In terms of just Agile PKI, I believe that it's on the near horizon, both for the post-quantum use case and because there is general value in codifying the functionality. 

What a post-quantum world will look like I can only speculate. Based on current progress, I don't believe that practical quantum computing will turn up overnight, but rather that it will be a very gradual process as qubit clusters grow in scale and the hurdles of size and coherence are gradually overcome, though there could well be sudden breakthroughs that prove this wrong. 

We also only know about publicly disclosed research in the matter. The 2014 Snowden files showed that the NSA has an interest in quantum computing, but it's pure speculation how much progress they or any other intelligence organization have made, if any at all. Should a breakthrough be made, it's unlikely to be announced matter-of-factly, as possession of such capabilities is best exploited in secret, and a sudden announcement would cause a catastrophic breakdown in trade and infrastructure. The NSA also has a mandate to secure domestic communications, so long before any quantum breakthrough, no matter how secret, we would expect US communications policies to deprecate RSA and EC for commercial use. 

From our end, our aim is to continue researching the subject and to proactively encourage all PKI-using bodies to adopt Agile PKI methods within the next few years.

Mike Agrenius Kushner
Product Owner EJBCA
