I'm fairly new to the security sector, so I've been brushing up on the fundamentals. In this post, I talk about some of the most interesting chip-level security systems I've come across. I'd love to hear about any clever security measures that you've dealt with!
Image Courtesy of the US Army
There’s no doubt that we are living in troubled times. Every day, there seems to be another news report about the latest security breach, the newest terrorist threat, or the next major international crisis. At the heart of many of these problems lies security. “Security” is a broad term that can refer to a lot of different issues and circumstances, so for now, let’s focus on access and system-level security. This refers to the steps taken to ensure that only people with the proper credentials are given access to specific data or resources. When access security is breached, an unauthorized person has gained access to data or resources that they should not have. That data might then be traded, sold, or otherwise used to further criminal activity or other forms of malfeasance. Ideally, we want to prevent these breaches at both the physical and the technological level.
My recent element14 video tutorial on RFID tag reading got me thinking about how we secure our electronics and our data. There are obvious physical security layers such as guards, locked doors, and biometric scanners, but the layer most commonly attacked is the system level. System security measures range from good coding practices to microchips that are capable of destroying themselves when they detect unauthorized access.
Let’s start by looking at my favorite form of electronic security: self-destruction. Some integrated circuits feature a self-destruct mechanism that fries the chip and any sensitive memory when tampering is detected. The MAXQ1740 (an IC for reading magnetic stripe cards), for example, will destroy itself when it detects attempted hardware tampering. Thankfully, that is a last-ditch measure – the chip also uses AES encryption and data scrambling to keep sensitive data, like credit card numbers, from being read by an unauthorized party.
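To make the pattern concrete, here is a rough Python sketch of the general idea, not the MAXQ1740’s actual interface (its tamper response lives in silicon): sensitive data is only ever stored encrypted under a key held in volatile memory, and a tamper event wipes that key, which renders everything else unreadable. The SecureStore class and its tamper_response hook are purely illustrative names.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

class SecureStore:
    """Keeps card data encrypted under a key that exists only in RAM."""

    def __init__(self) -> None:
        # On a real secure IC the key would sit in battery-backed SRAM
        self._key = AESGCM.generate_key(bit_length=128)

    def encrypt_track_data(self, track: bytes) -> tuple[bytes, bytes]:
        nonce = os.urandom(12)                        # unique per message
        return nonce, AESGCM(self._key).encrypt(nonce, track, None)

    def decrypt_track_data(self, nonce: bytes, ciphertext: bytes) -> bytes:
        return AESGCM(self._key).decrypt(nonce, ciphertext, None)

    def tamper_response(self) -> None:
        # "Self-destruct": overwrite the key, leaving stored ciphertext useless.
        # (Python can't guarantee the old bytes are gone from memory; real
        # parts clear the key with dedicated hardware.)
        self._key = bytes(len(self._key))

store = SecureStore()
nonce, ct = store.encrypt_track_data(b"4111111111111111")  # dummy card number
store.tamper_response()   # tamper event fires
# Any further decrypt_track_data() call now fails: the real key is gone.
```

On an actual secure IC the equivalent wipe is triggered by dedicated tamper-detection circuitry rather than application code, so it happens even if the main CPU never gets a chance to run.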
Importantly, hackers aren’t always after the EEPROM or other non-volatile memory where things like password hashes might be stored. A common hardware attack involves finding a way to dump the program firmware from a microcontroller. Once that’s been done, the firmware can be manipulated to run additional routines, access secure memory, or report sensitive data back to the attacker. Sometimes competitors will try to lift firmware in order to reverse-engineer the proprietary algorithms used in another manufacturer’s chip. Physical die coatings, bus scrambling, and on-chip encryption are generally used to prevent physical probing and analysis of a chip.
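Bus scrambling is easier to picture with a toy example. The sketch below is purely illustrative (a made-up per-chip bit permutation, not how any particular part does it, and on its own it is obfuscation rather than strong security): data is permuted before it crosses a probe-able bus, so a raw dump of bus traffic doesn’t directly reveal the firmware image.

```python
import random

def make_scrambler(seed: int):
    """Build a pair of functions that permute the 8 bits of a byte."""
    bit_order = list(range(8))
    random.Random(seed).shuffle(bit_order)   # the "per-chip" secret permutation

    def scramble(byte: int) -> int:
        return sum(((byte >> i) & 1) << bit_order[i] for i in range(8))

    def unscramble(byte: int) -> int:
        return sum(((byte >> bit_order[i]) & 1) << i for i in range(8))

    return scramble, unscramble

scramble, unscramble = make_scrambler(seed=0xC0FFEE)   # arbitrary seed for the demo
firmware = bytes([0x12, 0x34, 0x56, 0x78])
on_the_bus = bytes(scramble(b) for b in firmware)      # what a probe would observe
assert bytes(unscramble(b) for b in on_the_bus) == firmware
print(on_the_bus.hex())   # looks nothing like the original firmware bytes
```

Real parts implement this in dedicated logic and combine it with the other protections mentioned above, but the goal is the same: make what appears on the physical pins or traces useless without the chip’s own descrambling.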
Various additional forms of security are applied at the software, system, and network levels. At the system level, some form of encryption is generally employed, where only the end nodes of a communication link hold the key needed to unscramble transmitted data. Since the data travels in a scrambled state and requires the key to decode, an eavesdropper can’t recover anything useful. A hacker can attempt to acquire the secret key, but that would require gaining access to the firmware as described previously.
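Here is a short sketch of what that looks like in practice, using Python’s cryptography package and its Fernet recipe purely for brevity (an embedded link would more likely use AES directly, and how the shared key gets provisioned into the end nodes is its own problem):

```python
from cryptography.fernet import Fernet, InvalidToken

shared_key = Fernet.generate_key()        # provisioned into the two end nodes only

# Node A encrypts before transmitting
sender = Fernet(shared_key)
packet = sender.encrypt(b"meter reading: 42.7 kWh")

# Anyone sniffing the link sees only scrambled bytes
print(packet[:24], b"...")

# Node B, holding the same key, recovers the original message
receiver = Fernet(shared_key)
assert receiver.decrypt(packet) == b"meter reading: 42.7 kWh"

# Without the shared key, the captured traffic is useless
try:
    Fernet(Fernet.generate_key()).decrypt(packet)
except InvalidToken:
    print("eavesdropper cannot decode the captured packet")
```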
Network security can be particularly daunting, because it’s difficult to identify the “weakest link” that could lead to a breach. For this reason, organizations often employ the “honey pot” approach, where an intentionally crippled system is left partially exposed to lure in hackers. System administrators can then often trace individuals trying to break into the dummy system.
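A minimal decoy can be surprisingly simple. The toy Python sketch below just listens on a port, presents a fake login banner, and logs the address of whoever connects, which is roughly the starting point an administrator would use to trace a probe. The port number and banner are arbitrary choices for this sketch; a real honeypot emulates far more of a genuine service.

```python
import socket
from datetime import datetime, timezone

def run_decoy(port: int = 2323) -> None:
    # 2323 is an arbitrary "fake telnet" port chosen for this sketch
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (addr, src_port) = srv.accept()
            with conn:
                stamp = datetime.now(timezone.utc).isoformat()
                # In a real deployment this would go to a log server, not stdout
                print(f"{stamp}  probe from {addr}:{src_port}")
                conn.sendall(b"login: ")  # bait banner, then drop the connection

if __name__ == "__main__":
    run_decoy()
```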
How does this all come back to electrical engineering? While software security is critical to building safe systems, security always eventually comes back to the chips themselves. If the hardware running the system has bugs that allow data to be read off directly, then all software defenses become useless. There’s an interesting tension between creating hardware that is easy to understand and hardware that is secure: it’s often necessary to obfuscate an integrated circuit design in order to improve its physical security. Where to strike that balance depends on the scope of the project; the design should remain understandable to potential new developers while effectively stumping would-be attackers. What are your thoughts on system security? Do you stick to good coding practices, or do you implement protection in your hardware as well? How much security is too much, and in what scenarios can you ignore it altogether?