Traps are constantly set on the Internet to snare hackers in order to research their behavior and tactics. Many of these traps are honeypots or honeynets that take the form of deliberately unpatched computers or infrastructure exposed to the Internet that lure attackers to break in while their actions are recorded.

In very few instances are decoys built into security processes. However, two experts are in the research phase of building a tool that they say will do just that.

The project is called Honey Encryption, and it will be formally rolled out at the Eurocrypt conference in Copenhagen this spring by former RSA Security chief scientist Ari Juels and Thomas Ristenpart of the University of Wisconsin. The concept involves a bit of deception aimed at an attacker who has stolen a set of data encrypted with Honey Encryption. The tool produces a ciphertext which, when decrypted with an incorrect key guessed by the attacker, presents a plausible-looking yet incorrect plaintext password or encryption key.

With traditional encryption, an attacker making an incorrect guess gets gibberish in response. “With Honey Encryption,” Juels told Threatpost, “he gets something that looks like real content.” An attacker would have no way of knowing which plausible-looking value is the correct one.
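The idea can be illustrated with a toy sketch (not the authors' actual construction): encode each message into a seed via a distribution-transforming encoder (DTE) so that *every* possible seed decodes to some valid-looking message, then encrypt the seed under the password. Everything below — the PIN message space, the SHA-256 keystream — is an illustrative assumption; a real scheme would use a proper KDF and cipher.

```python
import hashlib
import secrets

PIN_SPACE = 10_000  # message space: 4-digit PINs


def keystream(password: str, n: int) -> bytes:
    """Toy password-derived keystream (stand-in for a real KDF + cipher)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(password.encode() + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]


def encode(pin: int) -> int:
    """DTE encode: lift a PIN to a near-uniform 16-bit seed.
    Crucially, every 16-bit seed decodes back to some valid PIN."""
    max_k = (2**16 - 1 - pin) // PIN_SPACE
    return pin + PIN_SPACE * secrets.randbelow(max_k + 1)


def decode(seed: int) -> int:
    return seed % PIN_SPACE


def he_encrypt(pin: int, password: str) -> bytes:
    seed = encode(pin).to_bytes(2, "big")
    return bytes(a ^ b for a, b in zip(seed, keystream(password, 2)))


def he_decrypt(ct: bytes, password: str) -> int:
    seed = bytes(a ^ b for a, b in zip(ct, keystream(password, 2)))
    return decode(int.from_bytes(seed, "big"))


ct = he_encrypt(4271, "correct-master-password")
assert he_decrypt(ct, "correct-master-password") == 4271
# A wrong guess still yields a well-formed 4-digit PIN, not gibberish:
assert 0 <= he_decrypt(ct, "wrong-guess") < PIN_SPACE
```

Because any decrypted seed decodes to a syntactically valid PIN, an attacker brute-forcing passwords cannot tell a correct decryption from an incorrect one by inspection alone.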

Juels said the initial motivation behind the project was the security of password vaults. Services such as LastPass, which was breached in 2011, let users secure a number of passwords with a single master password; these vaults are often synchronized in the cloud. If one of these providers is breached, an attacker who cracks the master password associated with a vault can extract all of the passwords inside it.

“We had the idea of exploring the possibility of encrypting a vault in such a way where if it were decrypted using the wrong master password, it would decrypt to something that looks plausible,” Juels said.

The trick is to build into Honey Encryption the capability to understand the structure of the messages an encryption system would try to recover.

“With credit card numbers, we understand them well. For all intents and purposes, they look like a uniformly random number,” Juels said. “You can construct a tight model for that. With a vault, for example, that’s trickier. You need to model how passwords are selected and stored for the particular vault.

“You need a good understanding of message-specific construction; encryption keys and credit card numbers are different than password vaults,” Juels said. “If you use ordinary encryption, it’s agnostic to the distribution of messages. You need to know what it means for a message to be plausible, and that’s application dependent.”

Luckily, research exists on password selection, and researchers can also learn from breaches such as the 2009 break-in at game developer RockYou, in which 32 million cleartext passwords were stolen. That kind of sample gives researchers a fairly accurate understanding of how users compose secrets used as passwords, including how often words or phrases are reused or appended depending on the particular account.
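In code, such a model can be approximated with inverse-transform sampling: each candidate password gets an interval of the seed space proportional to its observed frequency, so decryption under a wrong key yields a password with realistic likelihood. The tiny frequency table below is made up for illustration; a real model would be fit to a corpus like the RockYou dump.

```python
import bisect
import secrets

# Hypothetical frequency model; real weights would come from leaked corpora.
MODEL = [("123456", 0.40), ("password", 0.30), ("iloveyou", 0.20), ("qwerty", 0.10)]

SEED_SPACE = 2**32

# Cumulative interval boundaries over the seed space.
_cum, _total = [], 0.0
for _, p in MODEL:
    _total += p
    _cum.append(min(int(_total * SEED_SPACE), SEED_SPACE))
_cum[-1] = SEED_SPACE  # guard against float rounding at the top end


def decode(seed: int) -> str:
    """Map a uniform seed to a password with probability equal to its weight."""
    return MODEL[bisect.bisect_right(_cum, seed)][0]


def encode(pw: str) -> int:
    """Pick a uniform seed inside the interval assigned to pw."""
    i = [w for w, _ in MODEL].index(pw)
    lo = _cum[i - 1] if i else 0
    return lo + secrets.randbelow(_cum[i] - lo)


# Round trip: encoding then decoding recovers the original password.
for pw, _ in MODEL:
    assert decode(encode(pw)) == pw
```

This is what makes the model-quality remark below concrete: the decoy distribution only needs to be close enough to real user behavior that decoys are not obviously distinguishable from genuine passwords.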

“The model doesn’t have to be perfect to be good,” Juels said. “If just half of the decryption attempts yield something plausible, you still achieve the desired bafflement of the attacker.”

Categories: Cryptography

Comments (2)

  1. Tom

    This sounds like quite an interesting method to make brute-forcing keys derived from low-entropy secrets (such as passwords) considerably more difficult.

    A problem that would arise, though, is that software can no longer detect when a legitimate user accidentally makes a typo in their passphrase, which is a commonly occurring scenario.

    A possible method to get around this is to let the system generate a random easy-to-remember passphrase itself, present that to the user the first time, and then encrypt this secret with the user’s passphrase using this honey encryption scheme. Now, every time after a user inputs their passphrase, the system decrypts this secret, presents it and asks them whether it is correct. When they say no, they must’ve made a typo.

    A system in which a user has to confirm the computer’s password after providing their own may feel a bit odd at first, but is probably not that counterintuitive when presented well.

    • Anonymous

      An unauthorized user might grow suspicious upon being prompted to confirm whether the displayed secret is correct.

Comments are closed.