In the most recent RISKS mailing list digest, Peter Neumann includes a brief article by Adi Shamir describing how a minor fault in a processor's arithmetic logic can be exploited to break encryption keys on any system built around the flawed chip.
Titled “Microprocessor Bugs Can Be Security Disasters,” the article makes an interesting argument. In fairly concise terms, Shamir outlines an approach that circumvents much of the hard work of breaking private keys, no matter how long those keys are. He uses the RSA algorithm in his example, probably out of humility: he is, after all, the ‘S’ in RSA. With even my limited knowledge of mathematics, I was able to follow the broad strokes of the approach.
Put most simply, if you know there is a math flaw in a particular kind of processor, then you can exploit it by injecting ‘poisoned’ values into the key decryption process. By watching what the flawed hardware does to that known value, you can infer enough about the key itself that, with a little more math, you can quickly recover the private key.
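Shamir’s note is math rather than code, but the gcd trick at the heart of this family of attacks (first described by Boneh, DeMillo and Lipton for faulty RSA implementations that use the Chinese Remainder Theorem) is easy to simulate. Here is a toy Python sketch; the key sizes, the simulated bug, and the variable names are my own illustration, not Shamir’s exact construction:

```python
# Toy simulation of a fault attack on RSA-CRT (a sketch, not
# Shamir's exact construction; parameters and names are mine).
from math import gcd

# Tiny RSA key -- a real key would be 1024+ bits.
p, q = 1009, 1013
N = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

m = 123456                          # the attacker's chosen message

# RSA-CRT signing computes m^d mod p and mod q separately, then
# recombines.  A multiplier bug corrupting only the mod-q half
# stands in for the flawed hardware.
sp = pow(m, d % (p - 1), p)
sq = pow(m, d % (q - 1), q)

def crt(ap, aq):
    """Recombine the two half-results into a single value mod N."""
    h = (pow(q, -1, p) * (ap - aq)) % p
    return (aq + h * q) % N

s_good = crt(sp, sq)            # correct signature
s_bad = crt(sp, (sq + 1) % q)   # signature with a faulty mod-q half

# The faulty signature is still correct mod p but wrong mod q, so
# s_bad^e - m is divisible by p and not by q: the gcd leaks a factor.
leaked = gcd((pow(s_bad, e, N) - m) % N, N)
assert pow(s_good, e, N) == m and leaked == p
print("recovered secret factor:", leaked)
```

The same arithmetic works whether the bad multiplication comes from a glitched smart card or, as Shamir argues, from a bug baked into the chip itself: one faulty signature on a chosen message is enough to leak a factor of N.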
And of course, once you’ve got someone’s private key, you can read anything that’s been encrypted to it.
This is in some ways a new twist on a very old kind of attack. Code breakers have always exploited mechanical weaknesses in encryption and communications technology. During the Second World War, code breakers in the UK learned to identify Morse code transmissions through the radio operator’s ‘hand’ – the particular rhythm and cadence that he used. This sometimes gave them more information than the contents of the communications themselves. Flaws in the Enigma coding machines allowed the Allies to break its ciphers some time before Alan Turing’s electromechanical ‘Bombe’ machines were working efficiently:
One mode of attack on the Enigma relied on the fact that the reflector (a patented feature of the Enigma machines) guaranteed that no letter could be enciphered as itself, so an A could not be sent as an A. Another technique counted on common German phrases, such as “Heil Hitler” or “please respond,” which were likely to occur in a given plaintext; a successful guess as to a plaintext was known at Bletchley as a crib. With a probable plaintext fragment and the knowledge that no letter could be enciphered as itself, a corresponding ciphertext fragment could often be identified. This provided a clue to message keys.
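The ‘no letter could be enciphered as itself’ property is concrete enough to demonstrate. Here is a small Python sketch of dragging a crib along a ciphertext and discarding impossible alignments; the intercept is invented for the example:

```python
def possible_alignments(ciphertext: str, crib: str) -> list[int]:
    """Offsets where the crib could line up with the ciphertext.

    Enigma never enciphered a letter as itself, so any offset where
    the crib and the ciphertext share a letter in the same position
    is impossible and can be discarded.
    """
    offsets = []
    for i in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[i:i + len(crib)]
        if all(c != w for c, w in zip(crib, window)):
            offsets.append(i)
    return offsets

# A made-up intercept; real traffic was grouped into five-letter blocks.
intercept = "QBLTODYSZHWKEILHRTLERQNW"
print(possible_alignments(intercept, "HEILHITLER"))
```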
These days, processors and encryption are embedded in almost every aspect of our lives. The risks presented by this new class of attack are outlined in fairly plain English by Shamir:
How easy is it to verify that such a single multiplication bug does not exist in a modern microprocessor, when its exact design is kept as a trade secret? There are 2^128 pairs of inputs in a 64×64 bit multiplier, so we cannot try them all in an exhaustive search. Even if we assume that Intel had learned its lesson and meticulously verified the correctness of its multipliers, there are many smaller manufacturers of microprocessors who may be less careful with their design. In addition, the problem is not limited to microprocessors: Many cellular telephones are running RSA or elliptic curve computations on signal processors made by TI and others, FPGA or ASIC devices can embed in their design flawed multipliers from popular libraries of standard cell designs, and many security programs use optimized “bignum packages” written by others without being able to fully verify their correctness. As we have demonstrated in this note, even a single (innocent or intentional) bug in any one of these multipliers can lead to a huge security disaster, which can be secretly exploited in an essentially undetectable way by a sophisticated intelligence organization.
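His point about testing is easy to see in miniature: a multiplier that fails on exactly one input pair is, for all practical purposes, invisible to random testing. A quick Python sketch, where the flawed pair is an arbitrary choice of mine:

```python
import random

# Hypothetical 64x64-bit multiplier that is wrong on exactly one
# input pair.  The pair itself is arbitrary; in Shamir's scenario
# it might be an innocent bug, or one secretly planted.
BAD_PAIR = (0xDEADBEEFCAFEF00D, 0x0123456789ABCDEF)

def flawed_mul(a: int, b: int) -> int:
    if (a, b) == BAD_PAIR:
        return (a * b) ^ 1    # single low-order-bit error
    return a * b

# Each random trial has a 1 in 2^128 chance of hitting the bad
# pair, so even billions of tests will almost certainly pass.
found = False
for _ in range(1_000_000):
    a, b = random.getrandbits(64), random.getrandbits(64)
    if flawed_mul(a, b) != a * b:
        found = True
        break
print("random testing found the bug:", found)
```

An attacker who knows the pair, by contrast, only needs to steer those two numbers into a single multiplication inside the victim’s decryption.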
I’m surprised that I haven’t seen much concern voiced about this class of attacks. Maybe I just hang out with an insufficiently paranoid crowd….