As this chapter has demonstrated, searching for back doors is a practical method for attacking cryptographically secured hardware. The relatively high success rate in finding back doors in the Xbox stems partially from the fact that the Xbox represents the first significant attempt by a vendor to cryptographically secure a PC. Despite the lessons learned from the Xbox experience, future secure PC implementations are still at risk of having hardware security weaknesses, since the legacy of the PC is an open and unsecured hardware architecture.
Franz Lehner, 29, lives in Austria with his girlfriend. He studied Electrical Engineering for 5 years. Now, he programs “automated solutions” while running an ISP. In his spare time, he searches for projects that are fun and educational.
After finding bunnie’s Xbox hacking document, he met the Xbox-Linux team on sourceforge.net. He joined the Xbox-Linux project to learn about team programming, Linux kernel hacking and debugging, and cryptographic systems. He also joined the Xbox-Linux project to develop a better understanding of related systems, such as Palladium.
PC hardware is complex yet fragile, and this brittleness makes it difficult to build a chain of trust out of it. Fundamentally, each component in a PC is designed to be “trusting” of its physical environment. The specifications for any commercial integrated circuit clearly state that the IC is guaranteed to operate only over a bounded range of temperatures, voltages, frequencies, and other conditions. If these maximum ratings are violated, the behavior of the device is “undefined,” and all bets are off. Most chip engineers do not even consider trying to make their circuits recover gracefully from an out-of-range condition, since it is already hard enough to get a chip to work under the specified operating conditions. Furthermore, most consumer applications are very cost-sensitive, and the overhead of building in robust fault-tolerance measures results in a product that is not price-competitive.
Thus, chips are typically implemented with no internal error checking. If, for some reason, the Arithmetic Logic Unit (ALU, the computational “brains” of a CPU) adds two numbers incorrectly, the problem manifests itself only symptomatically; you can observe only the effects of such an error, sometimes long after the error-causing event. Attacks that take advantage of faults induced by out-of-range conditions can be thought of as the hardware analog of buffer overruns in the software world.
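To make the analogy concrete, the following C sketch shows how a single induced fault can defeat a security check. The routine name verify_signature(), the hash length, and the structure of the check are illustrative assumptions, not actual Xbox boot code.

/*
 * Illustrative sketch only: a simple hash comparison in the style of a
 * boot loader's signature check (hypothetical names and layout).
 */
#include <stdint.h>

#define HASH_LEN 16

static int verify_signature(const uint8_t *computed, const uint8_t *expected)
{
    int mismatch = 0;                       /* accumulates any differing bits */
    for (int i = 0; i < HASH_LEN; i++)
        mismatch |= computed[i] ^ expected[i];

    /*
     * A voltage or clock glitch that corrupts "mismatch" (or the branch
     * that tests it) at just this instant makes the check report success
     * for an image that never matched. The fault itself is invisible; only
     * its effect -- unsigned code being accepted -- is observable, and
     * possibly much later.
     */
    return mismatch == 0;                   /* nonzero result means "accepted" */
}

Just as a buffer overrun corrupts state that the software never expected to change, the glitch corrupts state that the hardware designer never expected to change, and neither layer checks for it.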
Another problem with the PC architecture is that the processor is too trusting of its code environment. The Pentium processor architecture has no provision in hardware for discriminating between secure and insecure code. If the instruction pointer happens to find its way into an insecure code segment through a bug or an induced failure, the processor will happily execute that code.
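A minimal sketch of this trust, assuming a classic x86 machine with no NX/DEP-style protection (as on the Xbox’s Pentium III-class CPU): the processor will execute bytes sitting in an ordinary data buffer if the instruction pointer is steered there. The payload byte and the function-pointer cast are purely illustrative.

#include <stdio.h>

/* 0xC3 is the x86 opcode for "ret" -- a harmless stand-in for a payload. */
static unsigned char payload[] = { 0xC3 };

int main(void)
{
    void (*entry)(void) = (void (*)(void))payload;  /* point into a data buffer */
    entry();   /* the CPU executes the bytes without questioning their origin */
    printf("returned from code that lived in a data array\n");
    return 0;
}

Hardware that tracked a trust level for each code region could refuse the jump; the Pentium simply fetches and runs whatever the instruction pointer reaches.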
Code compartmentalization based on hardware security levels is a different technique from sandboxing. Sandboxing does not provide an adequate solution for situations where a user program requires direction from, or interaction with, secret or protected code or data. Recently, new processor architectures have been proposed that can solve this problem through the use of data tags that embed a sort of security audit log.5
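As a rough illustration of the idea (a toy software model, not the tagged architecture proposed in the ARIES memo), each word can carry a security label that the ALU propagates, so that data derived from a secret can be recognized before it flows to untrusted code:

#include <stdint.h>
#include <stdio.h>

typedef enum { TAG_PUBLIC = 0, TAG_SECRET = 1 } tag_t;

typedef struct {
    uint32_t value;
    tag_t    tag;          /* security label carried alongside the data */
} tagged_word;

/* A tagged add: the result is at least as restricted as either operand. */
static tagged_word tagged_add(tagged_word a, tagged_word b)
{
    tagged_word r;
    r.value = a.value + b.value;
    r.tag   = (a.tag > b.tag) ? a.tag : b.tag;
    return r;
}

int main(void)
{
    tagged_word key    = { 0xDEADBEEF, TAG_SECRET };
    tagged_word offset = { 16,         TAG_PUBLIC };
    tagged_word sum    = tagged_add(key, offset);

    if (sum.tag == TAG_SECRET)
        printf("result derives from secret data; do not hand it to untrusted code\n");
    return 0;
}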
Another source of back doors is the design bugs that exist in every complex chip. It is common practice to ship chips with plenty of known bugs, also known as errata. For example, the Intel i860 XP processor (first released in 1991, not to be confused with the recently released i860 chipset for the Pentium 4 processor) shipped with a book of errata comparable in size to the processor’s data sheet. Another example closer to home is the bug in the nVidia MCPX’s address space decoder that made the MIST Premature Unmap attack possible. Most of these errata have simple workarounds or have minor implications for the functionality of the chip under nominal conditions. However, some errata, such as those dealing with cache coherence, address decoding, and memory management, can result in major software security holes.
In the case of the Xbox, the business impact of a hardware back door is probably small. Perhaps Microsoft loses some small fraction of game sales revenue, but the losses due to piracy are dwarfed by the losses Microsoft takes on hardware sales. Also, the Xbox is just a game console; grandma’s bank account is not being tapped dry, nor are her credit card numbers being stolen, as a result of security weaknesses in the Xbox. However, more than game revenue will be at risk with the trusted PC. Unless the trusted PC architecture is a fundamental departure from legacy PCs, people will be blindly entrusting financial secrets and personal data to untrustworthy hardware.
As with most things in life, the first step is education. The more we learn about hardware security, even if it involves poking around a game console, the better our security systems will be tomorrow. Now, on with the lesson . . .
1 From Andy Green’s 19th Annual Chaos Communication Congress presentation on Xbox security hacking.
2 From an article by the Inquirer, http://www.theinquirer.net/?article=4735
3 From Andy Green’s 19th Annual Chaos Communication Congress presentation on Xbox security hacking.
4 Posting from www.xboxhacker.net under Xbox Hacker BBS -> Xbox Hacking (TECHNICAL) -> BIOS/Flash ROM/Firmware -> News from the Xbox Linux Team, MS ‘made a hash of it,’ guts exposed.
5 “A Minimal Trusted Computing Base for Dynamically Ensuring Secure Information Flow,” by Tom Knight and Jeremy Brown, http://www.ai.mit.edu/projects/aries/Documents/Memos/ARIES-15.pdf