PrivacyCon was organised by the US Federal Trade Commission (FTC), America’s consumer watchdog, and aimed to bring together experts to focus on “research and trends related to consumer privacy and data security.”
Two Princeton students presented a paper entitled The Internet of Unpatched Things, which is probably (if sadly) what you were expecting to hear: that many IoT devices are still being designed and programmed for what they can be made to do, without consideration for how they do it, or even whether they ought to.
The researchers assembled an eclectic set of IoT devices and took a look at how they behaved while online:
They didn’t say how they chose the devices, but we’re guessing they simply did a whip-around amongst their friends for devices that had already passed the “we bought this because it seemed cool” test.
There was a home thermostat (yes, from Nest), a “digital speaker”, a security camera, a digital photoframe, and a SmartThings IoT hub that was itself a gateway between the internet and a door sensor and a smart switch.
(The smart switch is basically a power socket that you can turn on and off over the internet, for example to manage electronics or to take advantage of cheaper night-time electricity.)
Rather than trying to decompile the firmware and decide how the devices might behave using analytical techniques, the researchers took a practical, empirical approach: they plugged them in, set them up, and watched what they did.
Here are some of the highlights of what they found:
- The picture frame used unencrypted HTTP for its traffic, including identifying the device and leaking your email address.
- The security camera transmitted its images in cleartext over HTTP, so that as well as helping you to keep an eye out for crooks, it helps crooks to keep their eyes out for you.
- The speaker streamed its data unencrypted.
- The Nest thermostat used HTTPS, except that incoming weather updates included your ZIP code in plaintext. (This behaviour was reported to Nest and is now fixed.)
- The SmartThings hub appeared to use HTTPS throughout, although some traffic analysis was possible, for example to differentiate between activity caused by the door sensor and by the smart switch.
To explain: traffic analysis is where you watch to see if something was said, without worrying about exactly what it was; then you infer what was said from details such as where the message came from, what time it was sent, and how long it was.
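To make that concrete, here’s a toy sketch in Python. The packet sizes below are invented for illustration – they aren’t taken from the researchers’ captures – but they show how an eavesdropper who sees only encrypted blobs can still guess what’s going on from metadata alone:

```python
# Toy traffic analysis: the payloads are encrypted, but the *lengths*
# of the messages differ by device, and lengths are visible to anyone
# on the network path.

# Hypothetical observation: the door sensor sends short status pings,
# the smart switch sends longer command acknowledgements. (Assumed
# size ranges, purely for demonstration.)
DOOR_SENSOR_SIZES = range(60, 100)     # bytes (assumed)
SMART_SWITCH_SIZES = range(140, 200)   # bytes (assumed)

def guess_device(packet_length):
    """Infer the likely source device from an encrypted packet's length."""
    if packet_length in DOOR_SENSOR_SIZES:
        return "door sensor"
    if packet_length in SMART_SWITCH_SIZES:
        return "smart switch"
    return "unknown"

# The eavesdropper sees only (timestamp, length) pairs, yet can still
# reconstruct a plausible activity log for the household:
captured = [(1.0, 72), (4.5, 168), (9.2, 80)]
for ts, length in captured:
    print(f"t={ts}s: {guess_device(length)} activity")
```

Note that no decryption happens anywhere in that sketch – which is exactly why traffic analysis is so hard to defend against with encryption alone (padding messages to a uniform size is one common countermeasure).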
The good news is that two of the vendors were at least trying, apparently with some success, to build security into their devices.
But the bad news is that the others seemed to have ignored security altogether – like the surveillance camera that lets other people keep tabs on you as easily as you keep tabs on them.
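If you’re wondering what “ignored security altogether” looks like on the wire, here’s a minimal sketch. The hostname, path and parameter names are made up (the paper doesn’t publish the actual requests), but the shape is what an unencrypted registration request like the photoframe’s might look like – readable by anyone on the network path:

```python
# A hypothetical plain-HTTP request of the kind the photoframe might
# send when registering itself. Everything here travels as-is: no
# decryption is needed to read it off the wire.
request = (
    "GET /register?device=photoframe&email=alice%40example.com HTTP/1.1\r\n"
    "Host: frames.example.com\r\n"
    "\r\n"
).encode("ascii")

# The user's email address sits right there in the raw bytes:
print(b"alice%40example.com" in request)  # True
```

Over HTTPS, those same bytes would be wrapped in an encrypted TLS record before leaving the device; over plain HTTP, every router, Wi-Fi snooper and ISP along the way gets the lot.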
The paper didn’t go as far as looking at how reliably the Nest and SmartThings devices managed their HTTPS sessions.
That’s something the researchers might want to investigate in a follow-up.
If you intercept those HTTPS connections – what’s called a Man-in-the-Middle (MitM) attack, where the traffic is protected by your own home-made HTTPS certificate instead of an official certificate from the vendor – will the device issue some sort of timely warning?
HTTPS that provides only encryption, without making sure that you are talking to the right person and that your traffic hasn’t been tampered with, isn’t a whole lot of use, and typically gives a false sense of security.
Secure communications require a three-way nexus of confidentiality (encrypted so others can’t eavesdrop), integrity (no-one else modified the message along the way) and authenticity (you are talking to the right person, not to a crook).
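The distinction the researchers would be probing can be sketched with Python’s standard `ssl` module (a minimal illustration of client-side TLS configuration, not anything from the paper):

```python
import ssl

# What a careful IoT device should do: the default client context
# enables certificate validation and hostname checking, so a handshake
# against a home-made MitM certificate fails immediately.
strict = ssl.create_default_context()
print(strict.verify_mode == ssl.CERT_REQUIRED)  # True
print(strict.check_hostname)                    # True

# The dangerous alternative: encryption with no authentication. The
# channel is still encrypted, but the handshake will also succeed
# against an attacker's self-signed certificate.
lax = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
lax.check_hostname = False   # must be disabled before verify_mode below
lax.verify_mode = ssl.CERT_NONE
print(lax.verify_mode == ssl.CERT_NONE)         # True
```

The `lax` context delivers confidentiality against a passive eavesdropper, but neither integrity nor authenticity against an active attacker – which is precisely the false sense of security described above.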