Secure Hardware Design The Black Hat Briefings July 26-27, 2000 Brian Oblivion, Kingpin [oblivion, kingpin]@atstake.com
Why Secure Hardware? • Embedded systems are now common in the industry • Hardware tokens, smartcards, crypto accelerators, internet appliances • Detailed analysis & reverse engineering techniques are available to all • The means of attack exist, so the goal is to increase the difficulty of attack
Solid Development Process • Clearly identified design requirements • Identify risks in the life-cycle • Secure build environment • Hardware/Software Revision control • Verbose design documentation • Secure assembly and initialization facility • End-of-life recommendations • Identify single points of failure • Security fault analysis • Third-party design review
Sources of Attack • Attacker resources and methods vary greatly • Source: Cryptography Research, Inc., 1999, “Crypto Due Diligence”
Attack Scenarios • System • Initial experimentation & probing • Viewed as a “black box” • Can be performed remotely • Bootstrapping attacks
Attack Scenarios • Enclosure • Gaining access to product internals • Probing (X-ray, thermal imaging, optical) • Bypassing tamper-proofing mechanisms
Attack Scenarios • Circuit • PCB design & parts placement analysis • Component substitution • Active bus and device probing • Fault induction attacks1 • Timing attacks2 • Integrated circuit die analysis3
Attack Scenarios • Firmware • Low-level understanding of the product • Obtain & modify intellectual property • Bypass system security mechanisms • Ability to mask failure detection
Attack Scenarios • Strictly Firmware - no product needed! • Obtain firmware from the vendor’s public-facing web site • Can be analyzed and disassembled without detection
What Needs To Be Protected? • Firmware binaries • Boot sequence • Cryptographic functionality (offloaded to coprocessor) • Secret storage and management • Configuration and management communication channels
Trusted Base • Minimal functionality • Trusted base verifies the integrity of firmware and/or Operating System • Secure store for secrets • Secrets never leave the base unencrypted • Security Kernel • Examples of a Trusted Base • A single IC (some provide secure store for secrets) • May be purchased or custom built (Secure Coprocessor) • All internals - circuit boards, components, etc. • Entire trusted base resides within tamper envelope • Firmware • Security Kernel
Security Kernel • Better when implemented in Trusted Base, but can function in OS • Enforces the security policy • Ability to decouple secrets from OS • Example: Cryptlib4
Failure Modes • Determine how the product handles failures • Fail-open or fail-closed? • Response depends on failure type • Halt system • Set failure flags and continue • Zeroization of critical areas
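A minimal sketch of a fail-closed fault handler mapping each failure type to one of the responses above; the fault categories, key_store, and fault_flags names are illustrative assumptions, not from the original talk.

```c
#include <stdint.h>

typedef enum { FAULT_TRANSIENT, FAULT_DIAGNOSTIC, FAULT_TAMPER } fault_t;

extern volatile uint8_t  key_store[64];  /* critical secrets (illustrative) */
extern volatile uint32_t fault_flags;    /* sticky failure flags */

/* Overwrite secrets; volatile writes keep the compiler from eliding them. */
static void zeroize(volatile uint8_t *p, uint32_t len)
{
    while (len--)
        *p++ = 0;
}

void handle_fault(fault_t f)
{
    switch (f) {
    case FAULT_TRANSIENT:          /* set failure flags and continue */
        fault_flags |= 1u;
        break;
    case FAULT_DIAGNOSTIC:         /* halt the system, secrets intact */
        for (;;) { }
    case FAULT_TAMPER:             /* zeroize critical areas, then halt */
    default:
        zeroize(key_store, sizeof key_store);
        for (;;) { }
    }
}
```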
Management Interfaces • Do not include service backdoors! • Utilize Access Control • Encrypt all management sessions • SSH for shell administration • SSL for web administration
Secure Programming Practice • Code obfuscation & symbol stripping • Use compiler optimizations • Remove functionality not needed in production • Two versions of firmware: Development, Production • Remove symbol tables, debug info.
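One common way to keep the two images separate is a build-time switch; a sketch below, assuming a hypothetical PRODUCTION_BUILD macro defined by the build system.

```c
#include <stdio.h>

/* PRODUCTION_BUILD is a hypothetical macro set by the build system,
 * e.g. cc -DPRODUCTION_BUILD -O2; strip symbols separately (cc -s). */
#ifdef PRODUCTION_BUILD
#define DEBUG_LOG(msg)     ((void)0)  /* no-op: no debug strings in binary */
#define DEBUG_CMDS_ENABLED 0
#else
#define DEBUG_LOG(msg)     puts(msg)
#define DEBUG_CMDS_ENABLED 1
#endif

int main(void)
{
    DEBUG_LOG("entering main loop");  /* compiled out of production image */
    if (DEBUG_CMDS_ENABLED) {
        /* register diagnostic commands in development builds only */
    }
    return 0;
}
```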
Secure Programming Practice • Buffer overflows5 • Highly publicized and frequently attempted • If interfacing to a PC, an overflow in the host driver code could lead to compromise
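For illustration, a length-checked read of attacker-supplied input; read_command and CMD_MAX are hypothetical names, and the point is simply that every externally supplied length is validated before any copy.

```c
#include <stdint.h>
#include <string.h>

#define CMD_MAX 64  /* fixed command buffer size (illustrative) */

/* Copy attacker-supplied input only after validating its length.
 * Returns 0 on success, -1 if the input would not fit. */
int read_command(const uint8_t *in, size_t in_len, char out[CMD_MAX])
{
    if (in == NULL || in_len >= CMD_MAX)  /* reject, don't truncate silently */
        return -1;
    memcpy(out, in, in_len);
    out[in_len] = '\0';                   /* always NUL-terminate */
    return 0;
}
```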
Boot Sequence • Trusted boot sequence (diagram)
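A minimal sketch of the verification step in such a trusted boot sequence, assuming a hash routine (sha1), a reference digest held in the trusted base's secure store, and a fail-closed halt; all names are illustrative.

```c
#include <stddef.h>
#include <stdint.h>

#define DIGEST_LEN 20  /* e.g. SHA-1 output size */

/* Assumed hooks provided by the trusted base (names are illustrative). */
extern void sha1(const uint8_t *data, size_t len, uint8_t out[DIGEST_LEN]);
extern void read_reference_digest(uint8_t out[DIGEST_LEN]); /* secure store */
extern void halt_and_zeroize(void);                         /* fail closed */

/* Constant-time compare: no data-dependent early exit to leak timing. */
static int digest_equal(const uint8_t *a, const uint8_t *b)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < DIGEST_LEN; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;
}

void boot_verify(const uint8_t *image, size_t image_len)
{
    uint8_t computed[DIGEST_LEN], reference[DIGEST_LEN];

    sha1(image, image_len, computed);
    read_reference_digest(reference);

    if (!digest_equal(computed, reference))
        halt_and_zeroize();  /* never fall through to unverified code */

    /* digest matches: transfer control to the verified firmware */
}
```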
Run-Time Diagnostics • Verify the device is fully operational at all times • Periodic system checks • A failing device may result in compromise
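A sketch of how periodic checks might be scheduled from the main loop; the individual tests and the tick source are assumed hooks, not part of the original talk.

```c
#include <stdint.h>

/* Assumed hooks; each test returns nonzero on failure. */
extern int  test_ram(void);
extern int  test_crypto_known_answer(void);
extern int  test_tamper_sensors(void);
extern void enter_failure_state(void);  /* see failure-mode handler */
extern uint32_t ticks(void);            /* monotonic tick counter */

#define CHECK_INTERVAL 1000u  /* ticks between check passes (illustrative) */

/* Called from the main loop: runs the self-tests when a pass is due. */
void run_time_diagnostics(void)
{
    static uint32_t next_check;

    if (ticks() < next_check)
        return;
    next_check = ticks() + CHECK_INTERVAL;

    if (test_ram() || test_crypto_known_answer() || test_tamper_sensors())
        enter_failure_state();  /* a failing device may leak secrets */
}
```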
Secret Management • Never leak unencrypted secrets out • Escrow mechanisms are a security hazard • If required, perform at key generation, in the physical presence of humans • Physically export Key Encryption Key and protect • Export other keys encrypted with Key Encryption Key
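The export rule can be enforced at the API boundary: the only export path accepts a key and returns it wrapped under the Key Encryption Key. kek_encrypt is an assumed primitive living inside the trusted base.

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed primitive inside the trusted base: encrypts under the Key
 * Encryption Key, which itself never leaves the base. Output length is
 * treated as equal to input length for simplicity; real key wrapping
 * adds padding and an integrity check. Returns 0 on success. */
extern int kek_encrypt(const uint8_t *in, size_t len, uint8_t *out);

/* The only export path: keys leave the device wrapped, never raw. */
int export_key(const uint8_t *key, size_t key_len,
               uint8_t *wrapped_out, size_t out_cap)
{
    if (key == NULL || wrapped_out == NULL || out_cap < key_len)
        return -1;
    return kek_encrypt(key, key_len, wrapped_out);
}
```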
Cryptographic Functions • If possible, move out of firmware • …into ASIC • Difficult to modify algorithm • Cannot be upgraded easily • Increased performance • …into commercial CSOC or FPGA • Can reconfigure for other algorithms • May also provide key management • Increased Performance • Reconfiguration via signed download procedure (CSOC only)
Field Programmability • Is your firmware accessible to everyone from your product support web page? • Encryption • Compressing the image is not secure • Encrypting code will limit exposure of intellectual property • Code signing • Reduce possibility of loading unauthorized code
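A sketch of the load-time gate implied by code signing: nothing reaches flash unless a signature over the image verifies against a key embedded in the device. signature_valid and flash_write are assumed hooks.

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed hooks (names are illustrative): signature_valid() checks a
 * detached signature over the image against a public key embedded in
 * the device; flash_write() programs the new image. */
extern int signature_valid(const uint8_t *image, size_t len,
                           const uint8_t *sig, size_t sig_len);
extern int flash_write(const uint8_t *image, size_t len);

/* Field update gate: unsigned or tampered images never reach flash. */
int firmware_update(const uint8_t *image, size_t len,
                    const uint8_t *sig, size_t sig_len)
{
    if (!signature_valid(image, len, sig, sig_len))
        return -1;                   /* refuse to load unauthorized code */
    return flash_write(image, len);
}
```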
PCB Design • Remove unnecessary test points • Traces as short as possible • Differential lines parallel (even if on separate layers) • Separate analog, digital & power GND planes • Alternate power and GND planes
Parts Placement • Difficult access to critical components • Proper power filtering circuit as close to input as possible • Noisy circuitry (e.g. inductors) compartmentalized
Physical Access to Components • Epoxy encapsulation of critical components • Include detection mechanisms in and under epoxy boundary
Power Supply & Clock Protection • Set min. & max. operating limits • Protect against intentional voltage variation • Watchdogs (ex: Maxim, Dallas Semi.) • DC-DC converters, regulators, diodes • Monitor clock signals to detect variations
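A sketch of a supply-rail check against those operating limits; adc_read_vcc is an assumed hook and the ±10% window around a 3.3 V nominal rail is illustrative only.

```c
#include <stdint.h>

/* Assumed hook: adc_read_vcc() returns the supply rail in millivolts. */
extern uint16_t adc_read_vcc(void);
extern void enter_failure_state(void);

#define VCC_MIN_MV 2970u  /* 3.3 V nominal, -10% (illustrative) */
#define VCC_MAX_MV 3630u  /* 3.3 V nominal, +10% (illustrative) */

/* Called periodically: an out-of-range supply suggests a glitch attack. */
void check_supply_voltage(void)
{
    uint16_t mv = adc_read_vcc();
    if (mv < VCC_MIN_MV || mv > VCC_MAX_MV)
        enter_failure_state();
}
```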
I/O Port Properties • Use unused pins to detect probing or tampering (esp. for FPGAs) - Digital Honeypot • Disable all unused I/O pins
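One way to read the "digital honeypot" idea in firmware, sketched below; the memory-mapped GPIO address and bit mask are hypothetical.

```c
#include <stdint.h>

/* Hypothetical memory-mapped GPIO input register for an unused pin;
 * the address and bit mask are illustrative only. */
#define UNUSED_PIN_IN   (*(volatile uint32_t *)0x40001000u)
#define UNUSED_PIN_MASK 0x01u

extern void tamper_response(void);  /* zeroize and halt */

/* The board ties this unused pin low; a high reading implies a probe
 * or lifted trace is driving it, so treat it as tampering. */
void check_honeypot_pin(void)
{
    if (UNUSED_PIN_IN & UNUSED_PIN_MASK)
        tamper_response();
}
```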
Programmable Logic & Memory • Make use of on-chip security features • FPGA design • Make sure all conditions are covered • State machines should have default states in place • Be aware of what information is being stored in memory at all times6 (e.g. passwords, private keys) • Prevent back-powering of non-volatile memory devices
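The default-state rule is HDL practice, but the same pattern applies to firmware state machines; a C analogue is sketched below, with an explicit default arm that fails closed on a corrupted state value. States and transitions are illustrative.

```c
typedef enum { ST_IDLE, ST_AUTH, ST_ACTIVE } state_t;

extern void enter_failure_state(void);

/* Every state value is covered and the default arm fails closed, so a
 * glitched or corrupted state variable cannot select undefined behavior. */
state_t step(state_t s, int event_ok)
{
    switch (s) {
    case ST_IDLE:   return event_ok ? ST_AUTH   : ST_IDLE;
    case ST_AUTH:   return event_ok ? ST_ACTIVE : ST_IDLE;
    case ST_ACTIVE: return ST_ACTIVE;
    default:        /* unreachable unless state was corrupted */
        enter_failure_state();
        return ST_IDLE;
    }
}
```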
Advanced Memory Management • Often implemented in small FPGA • Bounds checking in hardware • Execution, R/W restricted to defined memory • DMA restricted to specified areas only • Trigger response based on detection of “code probing” or error condition
Bus Management • COMSEC Requirements • Keep black (encrypted) and red (in-the-clear) buses separate • Data leaving the device should always be black • Be aware of data on shared buses
Tamper Proofing • Resistance, Evidence, Detection, Response • Most effective when layered • Possibly bypassed with knowledge of method
Tamper Proofing • Tamper Resistance • Hardened steel enclosures • Locks • Encapsulation, potting • Security screws • Tight airflow channels, 90° bends to prevent optical probing • Side effect: also tamper evident
Tamper Proofing • Tamper Evidence • Major deterrent for minimal risk takers • Passive detectors - seals, tapes, cables • Special enclosure finishes • Most can be bypassed7
Tamper Proofing • Tamper Detection • Senses a tampering attempt while it is in progress • Ex: switches, sensors, and detection circuitry within the enclosure
Tamper Proofing • Tamper Response • Result of tampering being detected • Zeroization of critical memory areas • Provide audit information
RF, ESD Emissions & Immunity • Clean, properly filtered power supply • EMI shielding • Coatings, sprays, housings • Electrostatic discharge protection • Could be injected by an attacker to cause failures • Diodes, Transient Voltage Suppressor devices (e.g. Semtech)
External Interfaces • Use caution if connecting to “outside world” • Protect against malformed, intentionally bad packets • Encrypt or (at least) obfuscate traffic • Be aware if interfaces provide access to internal bus • Control bus activity through transceivers • Attenuate signals which leak through transceivers with exposed buses (token interfaces) • Disable JTAG and diagnostic functionality in operational modes
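A sketch of defensive parsing for external input; the length-prefixed packet layout, field names, and size limit are hypothetical, and the point is rejecting malformed packets before the payload is touched.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical length-prefixed management packet: 1-byte type followed
 * by a 2-byte little-endian payload length. Layout is illustrative. */
typedef struct {
    uint8_t  type;
    uint16_t len;
} pkt_header_t;

#define HDR_LEN 3u
#define PKT_MAX 512u  /* maximum payload the device will accept */

/* Reject malformed packets before touching the payload. Returns 0 on
 * success, -1 for truncated headers or lying length fields. */
int parse_packet(const uint8_t *buf, size_t buf_len, pkt_header_t *hdr)
{
    if (buf == NULL || hdr == NULL || buf_len < HDR_LEN)
        return -1;
    hdr->type = buf[0];
    hdr->len  = (uint16_t)(buf[1] | ((uint16_t)buf[2] << 8));
    if (hdr->len > PKT_MAX || (size_t)hdr->len + HDR_LEN > buf_len)
        return -1;
    return 0;
}
```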
In Conclusion… As a designer: • Think as an attacker would • As design is in progress, allocate time to analyze and break product • Peer review • Third-party analysis • Be aware of latest attack methodologies & trends
References 1. Maher, David P., "Fault Induction Attacks, Tamper Resistance, and Hostile Reverse Engineering in Perspective," Financial Cryptography, February 1997, pp. 109-121 2. Timing Attacks, Cryptography Research, Inc., http://www.cryptography.com/timingattack/ 3. Beck, F., "Integrated Circuit Failure Analysis: A Guide to Preparation Techniques," John Wiley & Sons, Ltd., 1998 4. Gutmann, P., Cryptlib: "The Design of a Cryptographic Security Architecture," Usenix Security Symposium 1999, http://www.cs.auckland.ac.nz/~pgut001/cryptlib.html 5. Mudge, "Compromised Buffer Overflows, from Intel to SPARC version 8," http://www.L0pht.com/advisories/bufitos.pdf 6. Gutmann, P., "Secure Deletion from Magnetic and Solid-State Memory Devices," http://www.cs.auckland.ac.nz/~pgut001/secure_del.html 7. "Physical Security and Tamper-Indicating Devices," http://www.asis.org/midyear-97/Proceedings/johnstons.html
Additional Reading • DoD Trusted Computer System Evaluation Criteria (Orange Book), 5200.28-STD, December 1985, http://www.radium.ncsc.mil/tpep/library/rainbow/5200.28-STD.html • Clark, Andrew J., "Physical Protection of Cryptographic Devices," Advances in Cryptology - Eurocrypt, April 1987, pp. 83-93 • Chaum, D., "Design Concepts for Tamper Responding Systems," Crypto 1983, pp. 387-392 • Weingart, S.H., White, S.R., Arnold, W.C., Double, G.P., "An Evaluation System for the Physical Security of Computing Systems," Sixth Annual Computer Security Applications Conference 1990, pp. 232-243 • Differential Power Analysis, Cryptography Research, Inc., http://www.cryptography.com/dpa/ • The Complete, Unofficial TEMPEST Information Page, http://www.eskimo.com/~joelm/tempest.html