## Technique allows attackers to passively decrypt Diffie-Hellman protected data.

Last week researchers revealed how undetectable backdoors could be placed in the cryptographic keys protecting many websites, virtual private networks, and Internet servers. The technique they devised, a feat in and of itself, allows attackers to passively decrypt hundreds of millions of encrypted communications and to cryptographically impersonate key owners.

While all this may sound unnerving, or at best like technical jargon, it is… **NO JOKE.**

The technique is notable because it puts a backdoor—or in the parlance of cryptographers, a “trapdoor”—in 1,024-bit keys used in the Diffie-Hellman key exchange. Diffie-Hellman significantly raises the burden on eavesdroppers because it regularly changes the encryption key protecting an ongoing communication. Attackers who are aware of the trapdoor have everything they need to decrypt Diffie-Hellman-protected communications over extended periods of time, often measured in years. Knowledgeable attackers can also forge cryptographic signatures that are based on the widely used Digital Signature Algorithm (DSA).

As with all public key encryption, the security of the Diffie-Hellman protocol is based on number-theoretic computations involving prime numbers so large that the problems are prohibitively hard for attackers to solve. The parties are able to conceal secrets within the results of these computations. A special prime devised by the researchers, however, contains certain invisible properties that make the secret parameters unusually susceptible to discovery. The researchers were able to break one of these weakened 1,024-bit primes in slightly more than two months using an academic computing cluster of 2,000 to 3,000 CPUs.
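The exchange the researchers attacked can be sketched in a few lines of Python. This toy version uses a deliberately tiny modulus (the Mersenne prime 2^127 − 1) standing in for a real 1,024- or 2,048-bit prime, and shows exactly where the discrete logarithm problem enters: an eavesdropper who can recover either private exponent from the public values recovers the shared secret.

```python
# Toy Diffie-Hellman exchange. The prime here is far too small and far too
# "special" for real use; it merely stands in for the 1,024-bit primes
# discussed above. Real deployments should use vetted libraries and groups.
import secrets

p = 2**127 - 1   # toy group modulus (a Mersenne prime)
g = 3            # toy generator

# Each party keeps a private exponent and publishes g^x mod p.
alice_secret = secrets.randbelow(p - 2) + 1
bob_secret = secrets.randbelow(p - 2) + 1
A = pow(g, alice_secret, p)   # Alice's public value
B = pow(g, bob_secret, p)     # Bob's public value

# Both sides derive the same shared secret, g^(ab) mod p. An attacker who
# can take discrete logs modulo p (e.g. because p is trapdoored) learns the
# private exponents from A and B, and with them the shared secret.
shared_alice = pow(B, alice_secret, p)
shared_bob = pow(A, bob_secret, p)
assert shared_alice == shared_bob
```

The security of the scheme rests entirely on the difficulty of recovering `alice_secret` from `A`; a trapdoored `p` quietly collapses that difficulty for whoever holds the trapdoor.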

## Backdooring crypto standards—“completely feasible”

To the holder, a key with a trapdoored prime looks like any other 1,024-bit key. To attackers with knowledge of the weakness, however, the discrete logarithm problem that underpins its security is about 10,000 times easier to solve. This efficiency makes keys with a trapdoored prime ideal for the type of campaign former National Security Agency contractor Edward Snowden exposed in 2013, which aims to decode vast swaths of the encrypted Internet.

“The Snowden documents have raised some serious questions about backdoors in public key cryptography standards,” said Nadia Heninger, one of the University of Pennsylvania researchers who participated in the project. “We are showing that trapdoored primes that would allow an adversary to efficiently break 1,024-bit keys are completely feasible.”

While NIST—short for the National Institute of Standards and Technology—has recommended minimum key sizes of 2,048 bits since 2010, keys of half that size remain abundant on the Internet. As of last month, a survey performed by the SSL Pulse service found that 22 percent of the top 200,000 HTTPS-protected websites performed key exchanges with 1,024-bit keys. A belief that 1,024-bit keys can only be broken at great cost by nation-sponsored adversaries is one reason for the wide use. Other reasons include implementation and compatibility difficulties. Java version 8, released in 2014, for instance, didn’t support Diffie-Hellman or DSA keys larger than 1,024 bits. And, to this day, the DNSSEC specification for securing the Internet’s domain name system limits Digital Signature Algorithm keys to a maximum of 1,024 bits.

## Poisoning The Well

If the NSA or another adversary succeeded in getting one or more trapdoored primes adopted as a mainstream specification, the agency would have a way to eavesdrop on the encrypted communications of millions, possibly hundreds of millions or billions, of end users over the life of the primes. So far, the researchers have found no evidence of *trapdoored* primes in widely used applications. But that doesn’t mean such primes haven’t managed to slip by unnoticed.

In 2008, the Internet Engineering Task Force published a series of recommended prime numbers for use in a variety of highly sensitive applications, including the transport layer security protocol protecting websites and e-mail servers, the secure shell protocol for remotely administering servers, the Internet key exchange for securing connections, and the secure/multipurpose Internet mail extensions standard for e-mail. Had the primes contained the type of trapdoor the researchers created, there would be virtually no way for outsiders to know, short of solving mathematical problems that would take centuries of processor time.

Similarly, Heninger said, there’s no way for the world at large to know that crucial 1,024-bit primes used by the Apache Web server aren’t similarly *backdoored*. In an e-mail, she wrote:

We show that we are never going to be able to detect primes that have been properly trapdoored. But we know exactly how the trapdoor works, and [we] can quantify the massive advantage it gives to the attacker. So people should start asking pointed questions about how the opaque primes in some implementations and standards were generated. Why should the primes in RFC 5114 be trusted without proof that they have not been trapdoored? How were they generated in the first place? Why were they standardized and pretty widely implemented by VPNs without proof that they were generated with verifiable randomness?

Unlike prime numbers in RSA keys, which are always supposed to be unique, certain Diffie-Hellman primes are extremely common. If the NSA or another adversary managed to get a trapdoored prime adopted as a real or de facto standard, it would be a coup. From then on, the adversary would have possession of the shared secret that two parties used to generate ephemeral keys during a Diffie-Hellman-encrypted conversation.

## Remember Dual_EC_DRBG?

Such a scenario, assuming it happened, wouldn’t be the first time the NSA intentionally weakened standards so it could more easily defeat cryptographic protections. In 2007, for example, NIST backed an NSA-developed algorithm for generating random numbers. Almost from the start, the so-called Dual_EC_DRBG was suspected of containing a deliberately designed weakness that allowed the agency to quickly derive the cryptographic keys that relied on the algorithm for crucial randomness. In 2013, some six years later, Snowden-leaked documents all but confirmed the suspicions.

RSA Security, at the time owned by the publicly traded corporation EMC, responded by first denying claims that it had cooperated with the NSA to weaken its products, as reported here on Luvatfirstbyte, and then by warning customers to stop using Dual_EC_DRBG. At the time, Dual_EC_DRBG was the default random number generator in RSA’s BSAFE and Data Protection Manager programs.

Early this year, Juniper Networks also removed the NSA-developed number generator from its NetScreen line of firewalls after researchers determined it was one of two backdoors allowing attackers to surreptitiously decrypt VPN traffic.

In contrast to 1,024-bit keys, keys with a trapdoored prime of 2,048 bits take 16 million times longer to crack—about 6.4 × 10⁹ core-years, compared with the 400 core-years it took the researchers to crack their trapdoored 1,024-bit prime. While some security experts consider even the 6.4 × 10⁹ core-year threshold too low, the researchers—from the University of Pennsylvania and France’s National Institute for Research in Computer Science and Control at the University of Lorraine—said their research still underscores the importance of retiring 1,024-bit keys as soon as possible.

**“The discrete logarithm computation for our backdoored prime was only feasible because of the 1,024-bit size, and the most effective protection against any backdoor of this type has always been to use key sizes for which any computation is infeasible,” they wrote in a research paper published last week. “NIST recommended transitioning away from 1,024-bit key sizes for DSA, RSA, and Diffie-Hellman in 2010. Unfortunately, such key sizes remain in wide use in practice.”**

In addition to using sizes of 2,048 bits or bigger, the researchers said, keys must also be generated in a way that holders can verify the randomness of the underlying primes. One way to do this is to generate primes where most of the bits come from what cryptographers call “a ‘nothing up my sleeve’ number such as pi or *e*.” Another method is for standardized primes to include the seed values used to ensure their randomness. Sadly, such verifications are missing from a wide range of regularly used 1,024-bit primes. While the Federal Information Processing Standards imposed on US government agencies and contractors recommend that a seed be published along with the primes generated from it, the recommendation is marked as optional.

The only widely used primes the researchers have seen come with such assurances are those generated using the Oakley key determination protocol, the negotiated Finite Field Diffie-Hellman Ephemeral Parameters for TLS version 1.3, and the Java Development Kit.

Cracking crypto keys most often involves the use of what’s known as the number field sieve algorithm to solve, depending on the key type, either its discrete logarithm or factorization problem. To date, the biggest prime known to have had its discrete logarithm problem solved was a 768-bit prime, broken last year. The feat took about 5,000 core-years. By contrast, solving the discrete logarithm problem for the researchers’ 1,024-bit key with the trapdoored prime required about a tenth of that computation.

## “More Distressing”

Since the early 1990s, researchers have known that certain composite integers are especially susceptible to being factored with the number field sieve. They also know that primes with certain properties allow for easier computation of discrete logarithms. This special set of primes can be broken much more quickly than regular primes using a variant known as the special number field sieve (SNFS). For some 25 years, researchers believed trapdoored primes weren’t a threat because they were easy to spot. The new research provided novel insights into the special number field sieve that proved that assumption wrong.

Heninger wrote:

The condition for being able to use the faster form of the algorithm (the “special” in the special number field sieve) is that the prime has a particular property. For some primes that’s easy to see, for example if a prime is very close to a power of 2. We found some implementations using primes like this, which are clearly vulnerable. We did discrete log computations for a couple of them, described in Section 6.2 of the paper.

But there are also primes for which this is impossible to detect. (Or, more precisely, would be as much work to detect as it is to just do the discrete log computation the hard way.) This is more distressing, since there’s no way for any user to tell that a prime someone gives them has this special property or not, since it just looks like a large prime. We discuss in the paper how to construct primes that have this special property but the property is undetectable unless you know the trapdoor secret.

It’s possible to give assurance that a prime does *not* contain a trapdoor like this. One way is to generate primes where most of the bits come from a “nothing up my sleeve” number like *e* or *pi*. Some standards do this. Another way is to give the seeds used for a verifiable random generation algorithm.
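The easy-to-spot case Heninger mentions—a prime very close to a power of 2—can be checked mechanically. The sketch below flags only that transparently special form; properly hidden trapdoors, by construction, sail through this kind of test, which is exactly the paper’s point.

```python
# Hedged sketch: flag primes of the obviously SNFS-friendly form 2^k + c
# for tiny |c|. This catches only the transparent cases; a deliberately
# hidden trapdoor is designed to be undetectable by any such check.
def near_power_of_two(p, max_offset=2**16):
    """Return (k, c) if p == 2**k + c with |c| <= max_offset, else None."""
    k = p.bit_length()
    for exp in (k - 1, k):
        c = p - 2**exp
        if abs(c) <= max_offset:
            return exp, c
    return None

# The Mersenne prime 2^127 - 1 is transparently special:
print(near_power_of_two(2**127 - 1))   # -> (127, -1)
# A value with no such structure comes back clean:
print(near_power_of_two(2**100 // 3))  # -> None
```

A `None` result here means nothing reassuring on its own—it only rules out the crudest special forms, which is why the researchers argue for verifiable generation rather than after-the-fact inspection.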

With the current batch of existing 1,024-bit primes already well past their, well, prime, the time has come to retire them to make way for 2,048-bit or even 4,096-bit replacements. Those 1,024-bit primes that can’t be verified as truly random should be viewed with special suspicion and banished from accepted standards as soon as possible.

Buckle up & get ready for things to get nasty… Luvatfirstbyte will keep you posted.