It’s been a brutal month for crypto.
Starting with the Black Hat conference, researchers, engineers and hackers have been unveiling new weaknesses and attacks in different cryptographic implementations that threaten the security of communication and commerce on the Web.
The topper, however, could be a paper released by a team of scientists from MIT and the National University of Ireland, Maynooth, who may just have flipped the study of crypto on its head with their findings. They conclude that the mathematical fundamentals by which cryptographic security is measured may be, well, misapplied.
The discipline in question is information theory, in particular a common assumption on which most work analyzing secure cryptographic schemes has depended, and which, according to the scientists, is the wrong approach.
In an article released by MIT, the scientists said that in information theory, information is tied to entropy, a measure of the uncertainty in a random variable. The term usually refers to Shannon entropy, developed by MIT professor Claude Shannon. Shannon entropy, as it turns out, is flawed for secure implementations because it is based on the average probability that a string of bits will occur, according to the MIT article.
But since an attacker needs to make only one reliable correlation in order to continue guessing a password or private key, for example, the average probability is the wrong measure. Giving more weight to improbable outcomes yields a more accurate assessment of how hard a given target is to break. The article said that a computer set loose to guess correlations between encrypted and unencrypted versions of a file would learn the answer much more quickly than expected.
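The gap between average-case entropy and actual guessing effort can be sketched with a toy example (the distribution below is my own illustration, not a figure from the MIT/NUI paper): a skewed set of candidate passwords can carry nearly two bits of Shannon entropy, yet fall to an attacker who tries the most likely candidates first in fewer than two guesses on average.

```python
import math

# Hypothetical, highly skewed distribution over six candidate passwords.
# Illustrative numbers only; not taken from the MIT/NUI research.
probs = [0.5, 0.25, 0.125, 0.0625, 0.03125, 0.03125]

# Shannon entropy: the *average* uncertainty, in bits.
shannon = -sum(p * math.log2(p) for p in probs)  # 1.9375 bits

# Expected guesswork: an attacker tries candidates in order of
# decreasing probability, so the (i+1)-th guess succeeds with
# probability ordered[i].
ordered = sorted(probs, reverse=True)
guesswork = sum((i + 1) * p for i, p in enumerate(ordered))  # 1.96875 guesses

# Entropy suggests roughly 2**1.94 (about 3.8) equally likely outcomes,
# yet the attacker needs fewer than 2 guesses on average.
print(f"Shannon entropy:  {shannon:.4f} bits")
print(f"Expected guesses: {guesswork:.4f}")
```

The skew does little damage to the entropy figure, which averages over all outcomes, but it collapses the guessing effort, which is dominated by the few most probable candidates. That asymmetry is the crux of the researchers' complaint.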
“It’s still exponentially hard, but it’s exponentially easier than we thought,” said Ken Duffy, a NUI researcher. “Attackers often use graphics processors to distribute the problem. You’d be surprised at how quickly you can guess stuff.”
Muriel Medard of the Research Laboratory of Electronics at MIT told Threatpost that Shannon entropy works just fine for measuring the efficiency of communication. Until now, the randomness of the data underlying that theory was thought sufficient to protect it, but she said their work proves that data isn't so random, especially if an attacker knows enough about a target who is, for example, entering passwords.
“When doing a guess of some kind, whether it’s a password or verifying a hash, that makes a huge difference in the amount of time it takes to arrive at the guess,” she said. “There are small variations which are small enough not to worry about for network performance, but which have a significant impact on the security of the network when looking at guessing attacks.”
When compression, or any other kind of outside noise, is introduced, it alters how an attacker would go about guessing a secret because it changes the randomness of the data.
“It’s like when you play 20 Questions with someone you know, you’re likely to guess quickly versus someone you don’t know at all,” Medard said. “Theoretically, people could choose anything in the universe, but you may have knowledge about their preferences that now allow you to guess more quickly.
“That means that small variations from the uniform, whether as a result of compression or noise that are not completely uniformly distributed, you can use these small differences to hear very stark differentials of what happened under an idealized assumption and what might happen under slight non-uniformity.”
Medard said she’s not sure yet about the ultimate impact the team’s work would have on cryptography or security in general.
“We’re still trying to figure that out ourselves,” she said. “Probably in some domains where you’re just trying to hide information and are not so worried about guessing, there would be a limited impact.”