how we can know that the hardware the software we secured is running on is doing what we expect it to?
Bunnie's experience has made him very skeptical of the integrity of the hardware supply chain:
In the process of making chips, I’ve also edited masks for chips; chips are surprisingly malleable, even post tape-out. I’ve also spent a decade wrangling supply chains, dealing with fakes, shoddy workmanship, undisclosed part substitutions – there are so many opportunities and motivations to swap out “good” chips for “bad” ones. Even if a factory could push out a perfectly vetted computer, you’ve got couriers, customs officials, and warehouse workers who can tamper the machine before it reaches the user.
Below the fold, some discussion of Bunnie's current project.
One way to make life harder for an attacker is to use randomization. In each cycle of the LOCKSS system, a peer compares content units it holds with the copies held by a random selection from the population of peers it believes hold the same content unit. This makes it hard for an attacker to decide how the peers it controls should vote in the comparisons to which they are invited. Similarly, operating systems use Address Space Layout Randomization (ASLR) to confuse an attacker as to where in memory vulnerable code is located.
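As a toy illustration (my sketch, not LOCKSS code), the defensive value of the sampling comes from the attacker's inability to predict which of its peers will be invited into any given comparison:

```python
import random

# Toy sketch, not LOCKSS code: each comparison invites a random sample of the
# peers believed to hold the content unit, so an attacker controlling some
# fraction of the population cannot know in advance which comparisons its
# peers will be invited into, or how many of them will be present.
def invite(peers, sample_size, rng=random):
    return rng.sample(peers, sample_size)

peers = [f"peer-{i}" for i in range(100)]
attacker_controlled = set(peers[:20])          # suppose 20% are malicious

invited = invite(peers, 10)
bad = sum(1 for p in invited if p in attacker_controlled)
print(f"{bad} of {len(invited)} invited peers are attacker-controlled")
```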
Randomizing hardware would certainly make life for hardware supply chain attackers pretty miserable. At first glance it seems completely infeasible, but Bunnie shows how something rather like it can be achieved by using a Field Programmable Gate Array (FPGA):
FPGAs contain an array of programmable logic blocks, and a hierarchy of "reconfigurable interconnects" that allow the blocks to be "wired together", like many logic gates that can be inter-wired in different configurations. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software.
In effect, an FPGA is generic hardware that can be configured into, for example, a CPU by downloading into it "software" written in a hardware description language (HDL).
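To make the idea of generic, configurable hardware concrete, here is a toy model (my illustration, not Betrusted or Xilinx code) of the look-up table (LUT) at the heart of a logic block: the same block computes AND, XOR, or any other small function depending only on the configuration bits loaded into it from the bitstream.

```python
# Toy model of an FPGA logic block: a k-input look-up table (LUT) whose truth
# table is set by configuration bits from the bitstream. The same hardware
# implements AND, XOR, or any other k-input function, depending only on the
# bits loaded into it. (Illustration only; real logic blocks also contain
# carry chains, flip-flops, etc.)
class LUT:
    def __init__(self, config_bits):
        self.table = list(config_bits)       # 2**k truth-table entries

    def eval(self, *inputs):
        index = sum(bit << i for i, bit in enumerate(inputs))
        return self.table[index]

and2 = LUT([0, 0, 0, 1])   # configured as a 2-input AND
xor2 = LUT([0, 1, 1, 0])   # the same block type, configured as XOR

assert and2.eval(1, 1) == 1 and and2.eval(1, 0) == 0
assert xor2.eval(1, 0) == 1 and xor2.eval(1, 1) == 0
```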
Bunnie's insight was to:
rely on logic placement randomization to mitigate the threat of fixed silicon backdoors, and ... rely on bitstream introspection to facilitate trust transfer from designers to user.
By "bitstream introspection" Bunnie means the techniques (reproducible builds and certificate transparency) that I discussed in Securing The Software Supply Chain. By "logic placement randomization" he means the equivalent for HDLs of ASLR for operating systems; randomizing the location in the underlying hardware of each function, just as ASLR randomizes the locations of functions in memory (a toy sketch of the idea follows the list below). Thus:
- A supply chain attack on the FPGA hardware cannot know where to implant a back door, because the logic it would need to connect to is at a different place each time the HDL is loaded.
- A supply chain attack on the HDL software would be detected by the "bitstream introspection".
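Here is a minimal sketch of what logic placement randomization means (my own toy model, not Bunnie's actual tool flow): a fresh random seed is folded into each build, so each compiled bitstream places the design's logic at different physical sites in the FPGA fabric, and a mask-level implant has no fixed target to connect to.

```python
import random

# Toy sketch of logic placement randomization (not Betrusted's actual tooling).
# A fresh random seed is folded into each build, so every compiled bitstream
# places the design's logic elements at different physical sites in the fabric.
def place(logic_elements, fabric_sites, seed):
    rng = random.Random(seed)
    sites = list(fabric_sites)
    rng.shuffle(sites)
    # Map each logic element to a pseudo-randomly chosen physical site.
    return dict(zip(logic_elements, sites))

design = ["key_schedule", "aes_round", "uart_tx", "uart_rx"]
fabric = [(x, y) for x in range(8) for y in range(8)]   # an 8x8 toy fabric

build_a = place(design, fabric, seed=1)
build_b = place(design, fabric, seed=2)
# The sensitive logic lands at (almost certainly) different sites per build:
print(build_a["key_schedule"], build_b["key_schedule"])
```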
One may argue that in fact, FPGAs may be the gold standard for verifiable and trustworthy hardware until a viable non-destructive method is developed for the verification of custom silicon. After all, even if the mask-level design for a chip is open sourced, how is one to divine that the chip in their possession faithfully implements every design feature?
FPGAs use area on the die much less efficiently than custom logic, which is why the CPUs in your phones and computers use custom logic. So there are limits to how much functionality Bunnie can protect in this way. But in a sense this is a feature not a bug. As he writes:
Since their inception, computer makers have been in an arms race to pack more features and more complexity into ever smaller packages. As a result, it is practically impossible to verify modern hardware, whether open or closed source. Instead, if trustworthiness is the top priority, one must pick a limited set of functions, and design the minimum viable verifiable product around that.
Bunnie lays out three principles for doing so:
- Complexity is the enemy of verification
- Verify entire systems, not just components
- Empower end-users to verify and seal their hardware
Prototype
In order to ground the conversation in something concrete, we (Sean ‘xobs’ Cross, Tom Marble, and I) have started a project called “Betrusted” that aims to translate these principles into a practically verifiable, and thus trustable, device. In line with the first principle, we simplify the device by limiting its function to secure text and voice chat, second-factor authentication, and the storage of digital currency.
The peripherals are the keyboard and the display:
...
In line with the second principle, we have curated a set of peripherals for Betrusted that extend the perimeter of trust to the user’s eyes and fingertips. This sets Betrusted apart from open source chip-only secure enclave projects.
Betrusted’s keyboard is designed to be pulled out and inspected by simply holding it up to a light, and we support different languages by allowing users to change out the keyboard membrane.
The output surface for Betrusted is a black and white LCD with a high pixel density of 200ppi, approaching the performance of ePaper or print media, and is likely sufficient for most text chat, authentication, and banking applications. This display’s on-glass circuits are entirely constructed of transistors large enough to be 100% inspected using a bright light and a USB microscope.
Betrusted's FPGA is a 7-series Xilinx chip:
The system described so far touches upon the first principle of simplicity, and the second principle of UI-to-silicon verification. It turns out that the 7-Series FPGA may also be able to meet the third principle, user-sealing of devices after inspection and acceptance.
To satisfy the third principle:
users also need to be able to seal the hardware to protect their secrets. In an ideal work flow, users would:
- Receive a Betrusted device
- Confirm its correct construction through a combination of visual inspection and FPGA bitstream randomization and introspection, and
- Provision their Betrusted device with secret keys and seal it.
Ideally, the keys are generated entirely within the Betrusted device itself, and once sealed it should be “difficult” for an adversary with direct physical possession of the device to extract or tamper with these keys.
The reason Bunnie is hopeful is that Xilinx equips the 7-series FPGAs with (a) encryption hardware and (b) write-once read-many (WORM) memory in the form of fuse-programmable ROM. The details are complex; you should read the whole post to understand them. One thing I don't understand from reading it is that I assume both the encryption function and the fuses on the chip are at fixed, known locations, which could make them possible targets for a mask-based attack.
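To see what the sealing step amounts to, here is a minimal sketch (my toy model, not the Xilinx eFuse interface): the key is generated on the device itself and then burned into write-once memory, after which it cannot be rewritten from outside.

```python
import secrets

# Toy model of the sealing step (not the Xilinx eFuse interface): the key is
# generated on the device itself and burned into write-once memory.
class FuseBank:
    def __init__(self, size_bytes):
        self.size = size_bytes
        self.value = None                      # unprogrammed

    def burn(self, data):
        if self.value is not None:
            raise RuntimeError("fuses already programmed; write-once only")
        if len(data) != self.size:
            raise ValueError("wrong key length")
        self.value = bytes(data)

def provision(fuses):
    key = secrets.token_bytes(fuses.size)      # key generated on-device
    fuses.burn(key)                            # sealed: one-time programming
    return key

fuses = FuseBank(32)                           # e.g. a 256-bit key
provision(fuses)
# A second provisioning attempt fails, modelling the WORM property:
# provision(fuses)  -> RuntimeError
```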
As usual, Bunnie is realistic about the limits of what he is doing:
I personally regard Betrusted as more of an evolution toward — rather than an end to — the quest for verifiable, trustworthy hardware. I’ve struggled for years to distill the reasons why openness is insufficient to solve trust problems in hardware into a succinct set of principles. I’m also sure these principles will continue to evolve as we develop a better and more sophisticated understanding of the use cases, their threat models, and the tools available to address them.
You really need to read the whole post to appreciate the care with which the team is addressing an extremely difficult problem.
Kudos to the NLnet Foundation's project on Privacy & Trust Enhancing Technologies for sponsoring the project.