Back in August 2014, we wrote about BadUSB.
That was a paper about USB firmware hacking written by a pair of researchers from Germany and presented at the BlackHat 2014 conference.
Many firmware hacks, of course, are benign or even beneficial.
But the BadUSB paper wasn’t about how to unlock some hidden features in your latest digital camera.
And it didn’t look at the usefulness of getting rid of those bloatware apps your phone vendor decided to pack onto your latest handset.
Indeed, the full title makes the slant of the paper clear: BadUSB – On Accessories that Turn Evil.
If you’ve ever tried to tweak the firmware on your phone, even if it’s one of those Android devices that was deliberately made to be hackable, like Google’s own Nexus range, you’ll know that it’s unlikely to happen without you noticing.
And if you have a mobile device that the supplier, the vendor or the carrier has locked down “for your own convenience and safety” (or some other unstated reason), you’ll know that it can require a great deal of careful and deliberate effort to reflash it.
That’s after the jailbreakers or rooters have spent months to make the hack possible in the first place.
Reflashing the firmware
But as researchers Nohl and Lell pointed out, there are many devices where this sort of effort might be sidestepped.
Many, if not most, USB consumer devices, such as USB flash drives, can only interact with the outside world via their USB interface.
That includes not only formatting, writing and reading the data storage areas in regular use, but also writing out the firmware in the first place.
After all, even a humble USB stick needs some embedded software, or firmware, on it so that it can respond to the commands of the computers you plug it into.
That firmware is usually uploaded to the USB stick at the factory, before the device is shipped to the supplier.
The supplier might then write some data to the device, such as free software, instruction manuals, advertising materials or, sadly, malware.
Fortunately, on most operating systems at least, it’s possible to plug in a new USB device without activating any of the software stored on it, so you can scan it, wipe it or otherwise take control of it before using it in earnest.
That wasn’t always so easy. Microsoft Windows, at least, used to have a feature commonly known as AutoRun activated by default: specially-named files would run automatically as soon as you inserted a USB device. After several years during which this was a major vehicle for malware activation, Microsoft finally capitulated and turned the feature off by default, at least for rewritable devices.
Two big deals
As Nohl and Lell discovered, the same isn’t true for the firmware part of many USB devices.
Very simply put, there are two big deals here:
- The firmware on the USB device has to run when you plug it in, in order to make the device work. So you can’t turn the firmware off.
- The firmware isn’t write-protected after a device is plugged in. So you can’t automatically trust it.
Catch 0x16! (That’s Catch 22 in hexadecimal.)
In theory, your PC could be infected with malware that would reflash any USB device you plugged in, so that when you used it to transfer data to another PC, the malware would go with it, invisibly to the recipient.
And in practice, as Nohl and Lell reported in their paper, that really is possible, if you know what you are doing.
But Nohl and Lell didn’t tell the world how to do it, figuring it would be irresponsible to release working proof-of-concept (PoC) code right away.
Most people would probably agree with their decision. Showing others how to carry out a dangerous hack only if you think they have a need to know, and might actually be able to fix the problem, is generally known as responsible disclosure.
That’s not good enough for everyone, of course, and the contrary approach of full disclosure involves telling and showing everyone at the same time, as a way of forcing the good guys not to sit on their hands doing nothing.
Full disclosure has now happened, with hackers named Adam Caudill and Brandon Wilson publishing PoC code for maliciously attacking USB flash firmware.
The PoC code was announced at a recent security conference in Kentucky, USA.
Solving the BadUSB problem
A common “solution” proposed to this problem – a “solution” apparently endorsed by Nohl himself – is to hard-wire USB devices so they will only accept firmware that is digitally signed by the original manufacturer.
In my opinion, that’s exactly the wrong way forward, because it leaves the devices “vendor locked” – a configuration on many mobile phones right now that consumers in the USA are fighting for the right to work around.
If only the manufacturer can issue firmware upgrades, then:
- No open source or third-party commercial alternatives are ever possible.
- You are entirely reliant on the manufacturer for updates. (As vendor-locked Android devices remind us, that can be a fruitless reliance.)
- If ever you think that the manufacturer’s private key has been compromised, you’re back where you started.
- You still aren’t safe against firmware updates to which you haven’t given consent, informed or otherwise.
An alternative solution
As I wrote last time this came up, I’m keen on a solution based on a hardware interlock, whether you choose to use digital signatures elsewhere in the process or not.
In short, a button or switch on the device needs pressing or toggling to permit firmware updates.
That way, there is a completely different physical workflow between plugging in the device to use it, and plugging in the device to update or reprogram it.
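To make the idea concrete, here’s a minimal Python sketch of an interlocked controller. This is purely illustrative (real controllers run firmware in C on a microcontroller); the command codes and the pin-reading function are hypothetical, and the physical switch is simulated by a variable:

```python
# Illustrative sketch of a hardware-interlocked USB controller.
# read_write_enable_pin() stands in for a real GPIO read wired to
# the physical switch; here it is simulated with a simple variable.

FIRMWARE_UPDATE = 0xF1   # hypothetical vendor command code
READ_DATA = 0x01

class InterlockedController:
    def __init__(self):
        self.switch_pressed = False            # simulated physical switch
        self.firmware = b"factory-firmware-v1"

    def read_write_enable_pin(self):
        # On real hardware this samples a pin the firmware itself
        # cannot change; only a human pressing the switch can.
        return self.switch_pressed

    def handle_command(self, cmd, payload=b""):
        if cmd == FIRMWARE_UPDATE:
            if not self.read_write_enable_pin():
                return "REJECTED: write-enable switch not pressed"
            self.firmware = payload
            return "FLASHED"
        elif cmd == READ_DATA:
            return "OK"
        return "UNKNOWN COMMAND"

dev = InterlockedController()
print(dev.handle_command(FIRMWARE_UPDATE, b"evil"))  # rejected, switch off
dev.switch_pressed = True                            # user presses the button
print(dev.handle_command(FIRMWARE_UPDATE, b"patched-firmware-v2"))  # flashed
```

The point of the sketch is that the reflash path is gated by something malware on the host can’t reach: no button press, no firmware write, no matter what commands arrive over USB.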
That’s my two cents’ worth.
Where do you sit on this issue?
Image of tangled USB leads courtesy of Shutterstock.
53 comments on “BadUSB – now with Do-It-Yourself instructions”
Thanks for the heads up and the options. Seems to me we need both solutions. My guess is that 98% of all USB drives are used for dumb file transfer and temp storage. They can be hardwired (and cheap). If you want to use USBs more seriously in an enterprise/work environment, then I think your hardware interlock recommendation should be mandated.
I think I would agree with the power being left with the user and owner of the USB and a button or switch on the USB would see to this.
Yes, I totally agree, sadly however, as we all know the “user” really doesn’t care much about reading anything that doesn’t include the word “free” or coupon, so as long as the thing works they will click on everything they see that says “Click Here”… and then deny they ever did anything wrong when it all blows up in their faces.
So true, it reminds me of the automatic updates for WinDocs. My idea starts with ‘normalizing’ the new technology of USB’s evolution. If there were a norm where ALL USBs produced had the new physical security features and tamperproof firmware (who needs to modify the UI for their USB?), while allowing advanced users to switch between FAT32 etc., why couldn’t the uninvolved consumer learn to use technology again? Make it so that there are no boxes to check or windows asking for permission, when in reality it’s just a hindrance to the user. I agree with more security, but watch what is put on the USB. Firmware updates only go so far until some bottom-bucket torrent is spreading the Black Plague across flash drives.
First of all, if it’s a device that’s simple enough that it’s unlikely ever to need a firmware upgrade, like a USB drive, then the firmware SHOULD be write-protected. If it’s more complicated than that, then the device SHOULD have a reset-to-manufacturer-settings button that reloads the stock firmware on demand. That just leaves the question of what to do when complicated devices are plugged into multiple computers. There should be some sort of special processing the first time a device is plugged into a new computer that verifies the firmware with the manufacturer or warns the user of the risks and asks for permission to trust it. Not foolproof, but it would help control the spread of any firmware-based malware.
Yes, I do agree with your proposal, for a switch which would require the user to slide one way or another to enable firmware writing on the device.
This I think will do the trick, but … what’s gonna happen to the world today?
The problem is the meanwhile, I mean: what should users do with the millions and millions of USB devices around the world today?
Should they be dumped, in a kind of sanitization campaign? Who would pay for that?
I guess we’re in deeper trouble every day.
“The problem is the meanwhile, I mean: what should users do with the millions and millions of USB devices around the world today?”
For the foreseeable future, nothing: keep using them and discard them as people see fit. Personally I like the idea of just locking up the USB firmware, since something as disposable as a USB drive is not worth worrying about. Less and less I find myself using USB for anything in daily life as well as work.
The only problem with just a physical interface being the key to changing the firmware on a USB device is that it does nothing to prevent me from rewriting the firmware on a flash drive and then dropping it in the parking lot of some company I want access to hoping some naive user finds it and tries to be helpful by plugging it in to see who it belongs to. It may prevent a mass-deployment of malware, but not the targeted strikes that have become almost commonplace in corporate environments. By requiring firmware to be signed you limit the third-party malicious use unless, as you mentioned, the private key of the manufacturer gets nabbed.
The problem I see with certificates is that if you require all the manufacturers to get certificates, it would become very easy for a motivated person to purchase a valid certificate and use it however they wanted.
For flash drives, how about a hardware switch to write protect the drive? Like floppies had in the bad old days.
I have an SD card with just such a switch. What I can’t tell you is whether it write protects the firmware (which is, I assume, just stored in a special part of the same flash RAM that is used for your data). In any case, I’d prefer to be able to control them separately…
I’ve seen sdcards like this as well. I could be wrong, but I don’t think there is any firmware in an sdcard. I think it would be in the sdcard reader or adapter.
That’s something that is a good idea, but there are a few problems. The first is that write protecting the drive doesn’t touch the firmware, which is where this particular malware hangs out. The second is that with many of the USB flash drives that currently have a write protect switch (I have a couple in front of me), the switch connects to a logic gate — which can be programmed around by the firmware as it doesn’t physically do anything, it’s just a flag the firmware checks to see if it should allow writes to the flash. The hardware switch on the floppies was a physical thing; the write head couldn’t approach the disk if the write protect tab was in the open position. USB is purely digital, so there’s no room for such a thing (nor does flash memory work in this way).
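In code terms, the difference is stark: if the “switch” is merely a flag the firmware consults, then reflashed firmware can simply skip the check. A hypothetical Python sketch (real controllers run C; this just models the logic):

```python
# The "switch" here is only a flag the firmware chooses to consult;
# it does not physically interrupt the write path.

def honest_firmware_write(flash, data, switch_flag):
    # Well-behaved firmware respects the flag...
    if switch_flag == "locked":
        return flash            # refuse the write, keep old contents
    return data                 # write allowed

def malicious_firmware_write(flash, data, switch_flag):
    # ...but reflashed firmware can simply ignore it.
    return data

flash = b"original"
print(honest_firmware_write(flash, b"new", "locked"))     # stays original
print(malicious_firmware_write(flash, b"new", "locked"))  # overwritten anyway
```

Only a switch that physically interrupts the write circuitry, rather than setting a flag for software to inspect, takes the decision out of the firmware’s hands.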
Actually, floppy write protection on 5.25″ and 3.5″ floppies was also just a logic gate, in the form of an electrical or mechanical switch that the controller interrogated.
Generally speaking, the switches were optical on the older 5.25″ drives (you cut a notch in the diskette cover to let the light through to write-enable, and stuck a black tag over the notch to write protect the diskette again). If the light or the sensor failed, the diskettes were protected. Notchless diskettes could be written (e.g. for software distribution) simply by unscrewing the switch and taping it to the side of the drive, so light always fell on the sensor.
On the 3.5″ drives there was a tiny plastic arm that either dropped into a hole in the disk and left the disk protected, or that was deflected if there was no hole for it to drop into, operating a microswitch to turn on protection. A plastic slide in the diskette allowed the “write protect hole” to be alternately closed or opened.
the best security is to have no needs for security.
Does not compute!
So true just don’t use the product but that can be said for everything even your own life.
IMO there should be both a hardware button to write protect the firmware and also a button to reset the firmware to original manufacturer version which would be written as non-writable (mask ROM, read-only) no matter the switch setting so resetting to manufacturer firmware would be guaranteed.
At the factory the mask ROM firmware gets copied into the writable firmware area and be write-protected by the button or switch. Some clever designer could incorporate both the firmware write protection and the reset to manufacturer firmware into a single switch or button.
Also, authors and vendors of virus and malware scan software should include scanning support for plug-in memory cards and USB memory sticks for the data content of the plug-in memory, which some already do, but it would be even slicker to support downloading a copy of the USB or other plug-in memory firmware and then scan and analyze it.
I’d prefer to have the firmware burned in at the factory with no way to upgrade.
1) lower manufacturing cost than adding a switch
2) no added complexity for users
3) reduces risk of prankster infecting a usb stick and leaving it for some hapless user
Ironically, the manufacturing cost might (anyone care to comment?) be higher for building a memory stick with some PROM (write-once memory for a one-off factory burn) and some RAM (rewritable memory)…
But for simplicity of simple devices, like low cost memory sticks, your way sounds pretty good. If the device turns out to have a bug…hit it with a hammer and be done with it 🙂
For more expensive devices, field updatable firmware is surprisingly handy.
Where’s Me Jumper?
It does raise a point though; it would probably be a lot more secure if devices had JTAG-style pins that needed to be used to re-flash, instead of re-flashing over USB. That way, a user would have to intentionally attempt to reprogram instead of it being up to the interface and the driver software.
This would be just as easy on the vendor, as they could quickly reflash all their stock as needed, but not expose end-users to unsafe computing. For in-the-field flash updates, they could provide an inexpensive JTAG-to-USB adapter that would let their customers plug in the programming port via USB to update firmware. Run a simple piece of software, and you’re done.
The way most flash updates work these days is that a specially crafted file is uploaded to a special folder on the flash device, and then the internal system reads that the next time it powers up, and reflashes the firmware from the file-on-flash. The pitfalls with this method are obvious.
Engineers solved this problem years ago, shortly after they started using embedded microcontrollers in everything from toasters to nuclear power stations. Since they needed to protect the IPR of their firmware, microcontroller manufacturers provided a one-time code protection switch in most if not all microcontroller families. That can make it impossible to reverse engineer the firmware, and if there isn’t a one-time write disable switch as well it’s obviously a conceptually trivial feature for a microcontroller manufacturer to add. Why even think about a 1024 or 2048 bit digital signature on the firmware (and we know that code signing keys have been stolen in the past) when a 1 bit lock will do the job more securely? And who wants to reflash their USB flash memory anyway, once branded and customised, except for nefarious purposes?
One reason for wanting to reflash is to fix bugs. There’s a lot of functionality in firmware even on simple, single-function devices (think: support for multiple file system types, wear levelling, data compression), a lot of competition on features in the marketplace, and increasing pressure to be able to patch known risky bugs. Now go beyond, say, a data storage device to products like printers, cameras, routers, scanners, modems, headsets, etc. There’s a full-on operating system plus numerous applications in the firmware…
Yes, Paul, but when did you last need to fix a firmware bug in a USB memory stick? You threw it away and got a new one! Manufacturers need to flash the firmware in memory sticks for different customisations, but you don’t need to do that. When it comes to high functionality devices such as routers and printers, the USB stack is in a separate chip (like on my Raspberry Pi), which may even be mask-programmed. We accept that the USB stack is sufficiently mature that it’s not likely to contain critical bugs (though I squirmed in my seat very slightly as I wrote that). And anyway, I haven’t heard of anyone deliberately “loosing” a printer in a company car park in the hope that an employee picks it up and plugs it into his company laptop! And the guy wandering around your office in a white coat and carrying a couple of fluorescent tubes can probably find easier exploits.
So I don’t see any reason why the microcontroller containing the USB stack can’t be set in concrete with a lock bit. But even if it is, I’m sure it’d still be possible to get an unlocked memory stick from Shenzhen, and it’d be no barrier to any reasonably competent national security agency. It just means that petty criminals and mischievous schoolboys can’t simply amble over to Github for an off-the-shelf exploit.
I never just “throw USB memory sticks” away 🙂 They go through a shredder.
(For low-cost devices like memory sticks I will accept your variant of my suggestion, which is like a pushbutton write-enable feature where the button just happens to be missing 🙂
“Loosing”? Presumably you mean “losing”!
No, he meant “loosing” as in “Letting something loose.”
I thought the same then realized he did mean loosing.
I am guessing all mobile phones have these in as well? So no one can trust anyone’s mobile phone that they “just want to charge” on your PC…
Just get a “charge only” cable, or make your own by cutting or covering up the data contacts, leaving only the power ones. Fairly easy to do with the current generation of cables (the ones with the flat A connector on one end).
20 years or more ago I was telling colleagues that I thought there should be a physical switch on the write line for the BIOS on PCs and this is no different. 99% of the time or more you don’t want the BIOS or any firmware on anything to be writable.
You weren’t wrong.
In fact, in the late 1990s, when the CIH (aka Chernobyl) BIOS-blasting virus came out, back in the days when many BIOSes were still in socketed dual-inline EEPROMS, we fitted the SophosLabs research computers with toggle switches on the BIOS chip write-enable pin…
Any option, physical switch or otherwise, can be tricked, so putting the onus onto the user is false security; it is software you are manipulating at the end of the day.
99% of devices built probably never need their firmware updating, and in fact I have never needed to, or been asked to, update a USB device’s firmware, whether it’s a keyboard, mouse, flash drive or anything else.
So I would write protect all devices for the general public, with developer devices available rewritable, and documented with the fact that they are rewritable.
It’s possible to make a switch “untrickable,” for example if it physically controls the write circuitry. Like the main breaker on your mains supply at home: when it’s tripped, it’s not an advisory to the switchboard not to supply 230V AC past the switch. It actually isolates the supply from the household.
This frightens me as a computer tech at a university. It’s not just usb flash drives that you have to worry about. Many of the devices that you use every day could be at risk, even things as simple as keyboards and mice. I troubleshoot problems for a living, so picking up a computer peripheral that hasn’t been in my possession and plugging it in elsewhere to test it is part of my job.
Now, imagine someone picking up devices at the store, worming the cord out just enough to infect the device, and then returning it. On the right device, it might look as if the package had never been opened and go straight back to the shelf.
“This frightens me as a computer tech at a university.” Ahhh, “the Petri dish”, I remember my first job in IT in a computer lab on campus… could make for a real horror movie… good luck guy.
The problem with a switch is that someone else getting their hands on the device could flip it. For flash drives, locked firmware is the only way to go IMNSHO.
If I can get my hands on the device, I can replace it with a facsimile which will do whatever I want (subject to technical limitations that are less harsh than if I am merely reprogramming the original device).
Necessitating a manual step in the malware life-cycle neutralises the threat almost completely, ITYA?
The host OS needs to be able to verify for itself that the device is not malware. So it must be able to compute the hash of the USB controller firmware, and then compare that hash to a known list of trusted firmware checksums. Putting pushbuttons on a USB drive will not work – the bad guys will push that button.
But at least they have to be there to press the button. So they can’t do it remotely with malware…
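The hash-comparison part of that idea is easy to express; the hard part is getting a firmware dump the firmware itself can’t falsify. A minimal Python sketch, with made-up firmware images standing in for a vendor’s published known-good hashes:

```python
import hashlib

# Hypothetical whitelist of trusted firmware hashes (hex SHA-256).
# In practice a vendor would publish these for each firmware release.
TRUSTED_HASHES = {
    hashlib.sha256(b"vendor-firmware-v1").hexdigest(),
    hashlib.sha256(b"vendor-firmware-v2").hexdigest(),
}

def firmware_is_trusted(firmware_image: bytes) -> bool:
    """Return True if the dumped firmware matches a known-good hash."""
    return hashlib.sha256(firmware_image).hexdigest() in TRUSTED_HASHES

print(firmware_is_trusted(b"vendor-firmware-v1"))  # True
print(firmware_is_trusted(b"reflashed-evil"))      # False
```

The catch, of course, is that malicious firmware can lie when asked to dump its own contents, so the image being hashed has to come from a read path the firmware can’t subvert – which most current USB controllers don’t provide.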
Discussing how future devices might change is one thing. But is there any defence against this at all, that could be implemented by a software installation, to protect against all devices currently in existence? The equivalent of anti-virus?
As I read through these, in my humble opinion we need ROM-type storage. Maybe like we had for write-once memory: burn the firmware, then blow the fuse that makes it writable. Since the price of these things is really pretty cheap, why not just make them ROM-type? Also, for a device to operate like a disk drive there can be a pretty complex OS on the USB device, so the chance of bugs is very high. When you have a two-wire serial interface there has to be some smarts behind it to make it work properly. Herein lies the rub. Just make it a ROM device, or burn it and then blow the fuse so it can’t be written to again.
Is somebody going to lose data? Probably. But since these devices are ‘use at your own risk’ anyway, why should the manufacturer care? Shipping them with non-modifiable code is the only way to be certain. It’s up to the manufacturer to make sure the code is not buggy when they place it, and if they fail, they won’t sell many USB devices after it gets out.
Are we looking at two fairly separate problems?
a) The target is the host, not the USB peripheral. Ordinarily, the host has no need to run software that’s presented by the peripheral. (An exception is the 3G/4G modem dongle that presents your laptop with a driver to circumvent the Catch 22 of needing to get on line to download the driver to allow you to use the modem to connect to the internet.) Thus, providing the relevant parts of the host’s OS are free of the infamous buffer overrun bugs, the host can safely accept any USB device (except one that delivers abnormally-high voltages).
b) The target is a ’phone / tablet (etc.) that features a swings-both-ways micro USB socket and has been designed to accept firmware/software updates over it. In this case, I’d want a physical interlock. I’ve never intentionally updated the firmware of my ’phone and I can’t believe being required to remove the back to do this would be a cause of insufferable frustration.
I have a few questions regarding this issue, so I’ll let you guys respond to these items:
1. If we want to inject malware into a specific brand of device, do we need that specific device’s firmware code, so we can add the malware to it and inject it into the USB device?
2. As there are so many types of USB devices and firmware, would a threat agent have to get the source code of every targeted firmware, add malware to it and then inject it into each USB device?
3. Based on your experience and knowledge, are the little Cisco USB extension devices we plug into a computer for a projector writable?
1) If you wish to achieve a specific behaviour of a specific device, of course your (malevolent) code has to be compatible with that device. However, in all likelihood, several other devices from the same manufacturer, and even different manufacturers, will behave as you wish when given the code you’ve crafted.
2) That would depend upon what the agent is aiming to achieve, and the available paths to his target. In the case of Stuxnet, the target was a Siemens Programmable Logic Controller and the attack route ran via a USB flash memory drive and networked x86 and/or x86_64 Windows computers. In that case, the USB memory drive’s behaviour was not changed; its role was simply to store files that were perfectly within its specification, but were toxic for buggy Windows. I doubt anyone will create a piece of malware that can infect every type of device that has a vulnerability, or that can take a route through all possible device types to reach its ultimate target.
3) I’ve no idea, sorry!
Having to press or slide a hardware switch to make firmware upgrades on USB devices is a fantastic idea, but ultimately it’s still only part one of a two-part solution. Because the USB device could easily be infected while the switch is activated, there should still be some scheme for restricting how firmware gets onto the device from the PC, so that not just anything can jump across when the switch is activated.
Are just flash drives affected by BadUSB, or also external/portable USB 3 hard drives?
The problem exists because of the USB firmware – the computer code inside the USB device that makes it respond to requests from your computer. In theory, that means this sort of attack could apply to almost any sort of USB device, not just flash drives or removable disks. Audio recorders, mice, keyboards, modems, GPS receivers, even USB-controlled NERF guns. (Such a product does exist!)
While your idea about a physical button or switch to update the firmware is probably the better one, the problem I see is that it could only be applied to chipsets going forward, whereas a firmware signature requirement is something that could be applied retroactively to the millions of already-manufactured chipsets in the wild. Of course, it seems something of a pipe dream to imagine that all the manufacturers would release firmware updates for all those chipsets when the vast majority of the public aren’t even aware of this threat.
I think the firmware should be taken off the USB drive, standardized and made part of the adapter’s firmware. I assumed it was already done this way and am a little surprised to hear about this.
Why not just have a prebuilt database of trusted firmware hashes, to which the end user IS allowed to add, if he/she trusts their custom firmware hash?
Since the adoption of USB as the de facto standard interface for absolutely everything, how many firmware updates have there been for thumb drives, keyboards and mice? Seriously, when was the last time you downloaded and flashed the firmware in any USB device that wasn’t a phone? I’ve never even seen a firmware update for a mouse, keyboard or thumb drive. So why is flashable firmware in these low-cost devices even a thing? Once they leave the factory, nobody should be able to alter the way they operate without physically replacing their components.