Researchers at UCLA said they’ve developed a game-changing obfuscation mechanism that will put a dent in hackers’ efforts to reverse engineer patches and understand how an underlying piece of software works.

“You write your software in a nice, reasonable, human-understandable way and then feed that software to our system,” UCLA computer science professor Amit Sahai said in a university release. “It will output this mathematically transformed piece of software that would be equivalent in functionality, but when you look at it, you would have no idea what it’s doing.”

Sahai and his fellow researchers, Sanjam Garg, Craig Gentry, Shai Halevi and Mariana Raykova of IBM Research, and Brent Waters of the University of Texas, said this is the first time software obfuscation has been accomplished, and that it could be an important tool for protecting intellectual property, for example. Sahai said that previous obfuscation attempts could be broken in days, whereas this new method would require a hacker to spend hundreds of years to break the cryptography they’ve put in play.

“The real innovation that we have here is a way of transforming software into a kind of mathematical jigsaw puzzle,” Sahai said. “What we’re giving you is just math, just numbers, or a sequence of numbers. But it lives in this mathematical structure so that these individual pieces, these sequences of numbers, can only be combined with other numbers in very specified ways.

“You can inspect everything, you can turn it upside-down, you can look at it from different angles and you still won’t have any idea what it’s doing,” Sahai said. “The only thing you can do with it is put it together the way that it was meant to interlock. If you tried to do anything else — like if you tried to bash this piece and put it in some other way — you’d just end up with garbage.”

The team’s paper, “Candidate Indistinguishability Obfuscation and Functional Encryption for All Circuits,” will be presented at the IEEE Symposium on Foundations of Computer Science in October. It also covers functional encryption, a method that encrypts information on the fly so that, depending on identity characteristics of the recipient, only certain pieces of the information can be decrypted. Sahai offered the example of a hospital sharing treatment outcomes with a researcher without sharing patient information.
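To make the idea concrete, here is a toy sketch of the kind of interface functional encryption exposes. It is not the researchers’ construction: the ToyFunctionalEncryption class and its method names are purely illustrative, and the “ciphertext” is simulated rather than cryptographically protected.

```python
# Toy illustration of a functional-encryption-style interface (not a real scheme).
# A key issued for a function f lets its holder learn only f(record), never the
# record itself.

class ToyFunctionalEncryption:
    def __init__(self):
        self.records = {}   # stands in for ciphertexts held by the data owner
        self.next_id = 0

    def encrypt(self, record):
        """'Encrypt' a record; returns an opaque handle standing in for a ciphertext."""
        handle = self.next_id
        self.records[handle] = record
        self.next_id += 1
        return handle

    def keygen(self, f):
        """Issue a decryption key tied to a single function f."""
        def functional_key(handle):
            return f(self.records[handle])   # reveals f(record) only
        return functional_key

# Example: a hospital shares treatment outcomes without exposing patient identity.
fe = ToyFunctionalEncryption()
ct = fe.encrypt({"patient": "Jane Doe", "treatment": "drug A", "outcome": "recovered"})
outcome_key = fe.keygen(lambda r: r["outcome"])
print(outcome_key(ct))   # -> "recovered"; the patient field is never disclosed
```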

The secret sauce, however, is in the jigsaw puzzle analogy.

“In Multilinear Jigsaw Puzzle we view group elements as the puzzle pieces. The intuitive analogy to jigsaw puzzles is that these group elements can only be combined in very structured ways—like jigsaw puzzle pieces, different puzzle pieces either fit together or, if they do not fit, then they cannot be combined in any meaningful way,” the researchers wrote in their paper. “We view a valid multilinear form in these elements as a suggested solution to this jigsaw puzzle: a valid multilinear form suggests ways to interlock the pieces together.”

The paper said that the jigsaw puzzle scheme consists of two algorithms, a Jigsaw Generator and a Jigsaw Verifier. The generator produces the system parameters and the group elements that serve as puzzle pieces, while the verifier checks mathematically whether a proposed combination of those pieces is a correct solution.
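A deliberately insecure sketch of that two-algorithm split is below. The real scheme is built from graded encodings; here the pieces are just modular exponentiations, the jigsaw_generator and jigsaw_verifier names are illustrative, and the parameters are toy values.

```python
import random

# Toy stand-in for the Jigsaw Generator / Jigsaw Verifier interface.
# Pieces are group elements g^a_i mod p; only combining *all* of them the
# intended way reproduces the published target element.

def jigsaw_generator(secrets, p=2**61 - 1, g=3):
    """Jigsaw Generator: output system parameters and the puzzle pieces."""
    pieces = [pow(g, a, p) for a in secrets]
    target = pow(g, sum(secrets), p)          # what a correct solution must equal
    return {"p": p, "target": target}, pieces

def jigsaw_verifier(params, pieces, combination):
    """Jigsaw Verifier: check whether a proposed way of interlocking the pieces is valid."""
    product = 1
    for idx in combination:
        product = (product * pieces[idx]) % params["p"]
    return product == params["target"]

secrets = [random.randrange(1, 2**32) for _ in range(4)]
params, pieces = jigsaw_generator(secrets)
print(jigsaw_verifier(params, pieces, [0, 1, 2, 3]))  # True: the pieces interlock
print(jigsaw_verifier(params, pieces, [0, 1, 2]))     # False: leaving one out gives garbage
```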

By obfuscating software patches, for example, the vulnerabilities being repaired would be hidden from attackers, the paper said, giving IT teams time to test and deploy patches without fear of the patch being reverse engineered in the meantime. The same goes for cases where intellectual property is shared and legal protections alone would not be enough to keep the IP from being reverse engineered into new software.


Comments (7)

  1. Paul T. Lambert

    The article doesn’t explain how obfuscation will stop us from reversing patches, given that what’s crucial is the diff between the pre- and post-patch binaries. The pre-patch binary obviously can’t be obfuscated retroactively, so the diff shouldn’t change much due to obfuscation. In fact, the obfuscation should be very helpful in locating the vuln (see the sketch at the end of this comment).

    People have tried obfuscation to hide patches, including attempts to make the diff extremely large, but those have all failed, given the realities of production software. These days no one really cares, since the vuln-patch-exploit (or vuln-exploit-patch for 0-days) cycle has been pretty well established as the norm.

    I also don’t see how the authors can honestly claim to be the “first” in obfuscation, given that similar theoretical papers have appeared over the years, some written in part by the same authors. Years ago, I hoped that work of this kind would help us against crackers and reversers, so I read some of those papers in painstaking detail. Fundamentally, they’re actually somewhat shallow, unlike quantum physics and other areas that build upon layers and layers of prerequisites. However, none of that work has had any real impact on reality. I wonder what’s different about this paper and whether I should bother to try understanding it as before.
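    A rough sketch of the kind of comparison I mean, in Python (the file names are placeholders, and real patch diffing works on disassembly and functions rather than raw bytes, but the idea is the same):

    ```python
    # Compare pre- and post-patch binaries block by block: whatever changed is
    # where the fix (and therefore the vuln) lives, however obfuscated the new
    # build is. File names below are placeholders.

    def changed_regions(old_path, new_path, block=16):
        """Return offsets of blocks that differ between two binaries."""
        with open(old_path, "rb") as f:
            old = f.read()
        with open(new_path, "rb") as f:
            new = f.read()
        offsets = []
        for off in range(0, max(len(old), len(new)), block):
            if old[off:off + block] != new[off:off + block]:
                offsets.append(off)
        return offsets

    # Usage: the changed offsets are the first places to look for the fixed vuln.
    # print(changed_regions("app_v1.exe", "app_v2.exe")[:10])
    ```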

  2. napoleon41

    I’m so glad that researchers are working so hard to protect intellectual property! What a relief that they aren’t trying to find ways to make code run faster, with fewer resources, in a more secure way, on more diverse systems, with more stability than before. You know, the stuff end users keep clamoring for. Apparently those demands aren’t sexy enough for researchers to spend time on.

    Instead, let’s pour time and money into helping developers make more money, and into making malware harder to diagnose.

  3. Krautmick

    And why would this possibly be a good idea? What stops the same technology being used to further obfuscate malware?

    It’s a flawed approach to the problem, same as proprietary, secret encryption has always been. Make it open and transparent, so you can validate benign code quickly.

    • Stormchaser64

      +100 for this guy. I was thinking the same exact thing. This will just provide a great tool for malware writers to protect THEIR OWN malicious code. Even if they can’t figure it out right away, everything gets leaked/cracked eventually. Remember when they thought DVDs were secure? Pffft.

  4. Valery Boronin

    I am not sure how this could be a real game-changer. Even if decompilers and other traditional code analysis methods became ineffective due to the proposed functional encryption effort, we would still be able to dissect the obfuscated software’s behavior under a debugger in a VM (black-box analysis), watch variables and data change in memory (which in certain cases makes it possible to reconstruct the algorithm from those changes), observe API calls with monitoring tools, etc.

    For example, malware’s requests to C&C servers are visible regardless of the state of the code itself (obfuscated or not, encrypted or not). Btw, is there any serious malware w/o such a feature nowadays? These tools (obfuscators, cryptors, protectors) have been around for decades. Did they significantly change the way reverse engineers do their job? I am not sure. (A rough sketch of the black-box idea follows at the end of this comment.)

    There is no transformation that will keep a determined hacker from understanding your program.

    This is not even a problem that can be solved with a serious encryption scheme, because somewhere that binary has to be decrypted into ordinary machine code for the CPU to be able to read it.

    The only way to keep someone from decompiling your program is to keep it off of their system entirely, for example using a web service.

    Obfuscation has done nothing for security in the long run. Therefore, I don’t think the use of cryptography changes the state of things here, although it really does raise the bar for reversers. But not as much as this article claims ;-)

    PS I’ve heard propositions many times along the lines of “I’d force every developer who wants to secure his code to crack a moderate-strength copy protection using a debugger. The experience might reduce the number of crackpot protection attempts.” I tend to agree; in certain cases it has a strong practical side ;-)
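    A minimal sketch of that black-box approach, assuming a Linux box with strace installed; “./suspect_binary” and the 60-second timeout are placeholders:

    ```python
    import subprocess

    # Even a perfectly obfuscated binary still reveals behavior through the
    # system calls it makes. Run it under strace and keep only the network
    # activity (e.g. connections to C&C servers).

    def trace_network_calls(binary="./suspect_binary", log="trace.log"):
        """Run the binary under strace, recording network-related syscalls."""
        subprocess.run(
            ["strace", "-f", "-e", "trace=network", "-o", log, binary],
            timeout=60,
            check=False,
        )
        with open(log) as f:
            return [line for line in f if "connect(" in line]

    # for call in trace_network_calls():
    #     print(call.strip())
    ```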

  5. Valery Boronin

    Ok, that was the technical reply, now the business part :)

    1) The way the solution is presented (“previous obfuscation attempts could be broken in days; this new method would require a hacker to spend hundreds of years to break the cryptography they’ve put in play” and other references in the context of executable-code IP protection) is questionable.

    Nobody in their right mind will start by breaking encryption algorithms like AES. There are always cleverer / cheaper ways to recover the algorithm; see my technical reply above.

    Executable code protection is basically equivalent to copy protection, and it’s trying to solve a problem that’s known to be impossible.

    2) The adequacy of obfuscation / encryption as a recommended measure for the mentioned business scenarios (depersonalization in hospitals, protecting IP from being reverse engineered into new software, obfuscating software patches) is doubtful as well.

    To an unprepared reader it might sound like “Hurrah! The copy protection problem is solved at last!” In reality, it only promises to reduce certain risks a bit, in exchange for drawbacks of obfuscation (and therefore related risks for the business) that the article doesn’t mention, especially with strong encryption onboard.

    And they forgot to mention what is probably the most common use case: malware obfuscation ;-)
