Over the past few weeks we have seen numerous disclosures from former NSA contractor Edward Snowden regarding the massive surveillance apparatus that the United States government has brought to bear against civilians, foreign governments and even corporations. We have also heard allegations that the NSA has deliberately weakened open cryptographic standards. Perhaps the most worrying piece of information to come out of these disclosures is the NSA's program to systematically infect hardware.
The network appliances that route the majority of Internet traffic run closed-source embedded router operating systems like Cisco's IOS. If I were a government-level adversary I would start with the switching fabric and routers rather than waste my time on the endpoints, especially considering we now know that through National Security Letters (NSLs) the government was able to compel organizations to secretly disclose their SSL keys and enable surveillance (as an aside, the government seems to have angered Google, or at least affected its bottom line, as Google is now talking about implementing PFS).
If we go further, can we really trust any hardware? Certainly not hardware produced from, say, 2000 onwards. I don't state that as some magic number or line in the sand; rather, it is an educated guess based on both the political climate (things didn't start getting super crazy until post-9/11) and the technological capabilities of the time. Perhaps we are dead wrong in this regard too. After all, they have been trying to destroy civilian privacy online for about as long as the Internet has been accessible to the average Joe. Everyone no doubt remembers the Clipper chip of the 1990s. Well, the NSA clearly realized that key escrow just wasn't going to stand up to public scrutiny. I wonder how the boffins within the US intelligence community will justify their actions given the fallout from the Snowden saga.
No doubt many US-based IT companies will be reassessing whether it is appropriate for them to continue conducting their business from within the United States or whether a move overseas may better suit them operationally. The damage that this could do to the IT industry in the US is immeasurable. While many in the industry are in damage control, some (like Lavabit and, as of today, CryptoSeal Privacy) are shutting up shop, refusing to supply the public with a product that they may be forced (via a secret FISA court hearing or an NSL) to backdoor or otherwise modify to bypass the very anonymizing features the customer is paying good money for. This is a very bad time for the US's image abroad, and it is entirely of the government's own making.
Whilst software will always be buggy in non-trivial cases, and even people outside the industry accept that (usually without comment), hardware for some reason is "assumed" to be different, and saying otherwise is treated almost as heresy.
If people studied the history of hardware design they would see not only where the myth arose but also where things changed and hardware became as unreliable as software.
In times past electronics was comparatively simple: active devices were mechanically difficult to manufacture, and thus expensive, as well as being temperamental both in design and operation.
Thus complexity was "designed out" as much as possible and the number of active devices kept to a minimum. The transistor had been invented in the late 1940s, in part because of the temperamental and unreliable manufacturing processes for the crystals used in diodes, and by the early 1960s transistors had all but replaced glass-envelope valves in electronics that did not involve power or high frequency. Driven by "industrial control" requirements, "logic circuits" moved from relay "ladder logic" to circuits made from diodes and resistors, and then to transistors as well. Some manufacturers built "logic gates" from Resistor Transistor Logic (RTL) into small packages around an inch wide, two inches long and about three quarters of an inch high, one of the more famous being the Mullard Electronics "NORBIT" range from the early 1960s. Although packaged as gates, internally the NORBITs were still discrete components.

However, once it became possible to reliably fabricate many transistors on a single "chip" of semiconductor material, Transistor Transistor Logic (TTL) and Emitter Coupled Logic (ECL) soon followed. These chips, in their Dual In-Line (DIL) packages, quickly replaced discrete logic such as RTL and went from just three or four gates to tens and hundreds of gates in Medium Scale Integration (MSI) parts. But heat and power problems limited how far MSI chips could scale. The advent of the Field Effect Transistor (FET) then gave rise to Complementary Metal Oxide Semiconductor (CMOS) chips built from low-power N-FET and P-FET devices, which by the mid-1970s enabled hundreds to thousands of gates in Large Scale Integration (LSI) and put both memory and CPU devices on single chips.
It was at this point that chip designs stopped being done by hand, as time to market started to matter. New chips were designed not at the device or gate level but from pre-designed blocks of logic functions. To do this, "hardware description languages" were developed, the two major ones being VHDL and Verilog.
VHDL is still "gate" oriented in its design, whereas Verilog is much more "function" oriented and is similar to a number of programming languages. Whilst VHDL designs are lengthy, consist of thousands of macros and implement the Register Transfer Level (RTL) description almost directly, the result usually works when implemented in gates, such as in FPGAs. Verilog, however, is markedly different: whilst functions can be small, they can just as easily be impossible to realise in logic gates.
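To make that contrast concrete, here is a minimal sketch in Verilog (the module and signal names are my own, purely illustrative) of the same 2-to-1 multiplexer written two ways: structurally, with every gate and wire spelled out much as a gate-oriented flow would have it, and behaviourally, where the function is stated like a line of software and the tools are left to infer the gates.

```verilog
// Structural, gate-level description: every primitive and every
// internal wire is written out by hand.
module mux2_structural (
    input  wire a,
    input  wire b,
    input  wire sel,
    output wire y
);
    wire sel_n, a_gated, b_gated;

    not g0 (sel_n,   sel);
    and g1 (a_gated, a, sel_n);
    and g2 (b_gated, b, sel);
    or  g3 (y,       a_gated, b_gated);
endmodule

// Behavioural description of the same multiplexer: one line that reads
// like software, leaving the synthesis tools to work out the gates.
module mux2_behavioural (
    input  wire a,
    input  wire b,
    input  wire sel,
    output wire y
);
    assign y = sel ? b : a;
endmodule
```

Both describe the same hardware; the difference is how much of the mapping to gates is left to the tools, which is precisely where software-style mistakes start to creep back in.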
The result is the usual "market drivers" causing almost as many bugs in Verilog designs as in conventional programming languages. The consequence is usually a whole bunch of seemingly ad-hoc design rules that get embedded into verification tools that take the language output and check it. And as with all rules and enforcement procedures, it often pays to be able to circumvent them if you know how to do so properly; however, as is normally the case, people have rather more faith in their capabilities than reality warrants, so errors abound.
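As a small, hypothetical illustration of what those checker rules exist to catch, consider the classic "inferred latch" mistake below: it simulates without complaint, but because the output is not assigned on every path the synthesiser must create a storage element rather than the purely combinational logic the designer probably intended. Most lint and verification tools warn about exactly this construct, and waving such warnings through is one of the ways errors make it into silicon.

```verilog
// Classic pitfall: q is only assigned when en is high, so the circuit
// has to remember its previous value when en is low. Simulation looks
// fine, but synthesis infers a latch instead of combinational logic,
// and most lint tools will flag the missing "else" branch.
module latch_pitfall (
    input  wire en,
    input  wire d,
    output reg  q
);
    always @(*) begin
        if (en)
            q = d;
        // no "else": q holds its old value -> an unintended latch is inferred
    end
endmodule
```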
Yes, it's interesting that hardware is somehow regarded as sacrosanct. One only has to look at the errata for any of the modern Intel processors to see just how many bugs exist, and more importantly, that the bugs not only exist but somehow make it through testing and into the final product.
A great example would be the Pentium FDIV floating-point bug from 1994. I remember that Intel initially wouldn't even replace the affected processors unless the customer could somehow demonstrate that they had been adversely affected!
Now of course, with the advent of microcode, most stuff-ups can simply be fixed by rolling out a microcode update, which brings us to yet another vector where subversion can occur.
I don’t know if there are any easy answers here.