For once, this isn’t an Internet of Things (IoT) story about an egregious security blunder in a webcam, or a printer, or a light bulb, or a talking doll, or a home router.
Quite the opposite, in fact.
It’s a story about a proposal by the US Congress to introduce a law called The Internet of Things (IoT) Cybersecurity Improvement Act of 2017.
In an intriguing choice of words, the bill aims to specify what the regulators are calling “minimal cybersecurity operational standards” for IoT devices.
We’re not sure if American English uses the words minimal standards where British English would prefer minimum standards (meaning the standards below which you may not go, even if those standards are quite high)…
…or if the US legislators are quite literally admitting that we are living in such an insecure IoT world that mandating even the most modest security standards would be an effective start.
We suspect that both these meanings apply.
We need minimum standards (i.e. ones that everyone is required to meet), but we might as well start with a minimal minimum (i.e. one that, although unimpressive, is unarguably achievable by everyone).
This is an interesting contrast to our law-makers’ story from yesterday in which we reported that UK Home Secretary Amber Rudd wanted to attack encryption in the other direction.
Rudd as good as said that she wants the UK to legislate for minimal maximum standards for cryptographic products (i.e. to weaken them on purpose).
Rudd argued that “real people” don’t care much about security, so it would be acceptable to regulate it away in order to fight terrorism and hate crime.
The US IoT Improvement Act, fortunately, as good as states that whether “real people” care about security or not, the vendors who sell them internet devices jolly well ought to care on their behalf.
Among the proposals in the US bill:
- Fix firmware vulnerabilities in a reasonable time.
- Provide a mechanism for authenticated firmware upgrades, so that fixes can actually be deployed.
- Or, if the firmware can’t be updated, send your customers a replacement device with the new firmware burned in.
- Don’t use hardcoded passwords or credentials that can’t be changed.
- Stick to trusted and approved encryption – no outdated or home-made algorithms.
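The "authenticated firmware upgrades" item boils down to this: the device must refuse any image it can't cryptographically verify before flashing it. The bill doesn't prescribe a mechanism, so here's a minimal sketch of the idea in Python, using only the standard library. It authenticates a firmware blob with an HMAC-SHA256 tag; real devices would more likely verify a public-key signature (so the signing key never lives on the device), but an HMAC keeps the illustration self-contained. All the names and values below are made up for the example.

```python
import hmac
import hashlib

def verify_firmware(image: bytes, tag: bytes, key: bytes) -> bool:
    """Accept a firmware image only if its HMAC-SHA256 tag checks out.

    A production updater would typically verify a vendor signature
    (e.g. Ed25519) instead of a shared-key MAC; this is just a sketch.
    """
    expected = hmac.new(key, image, hashlib.sha256).digest()
    # compare_digest does a constant-time comparison, avoiding a
    # timing side channel on the tag check
    return hmac.compare_digest(expected, tag)

# Hypothetical flow: the vendor tags the image at build time...
key = b"device-provisioning-key"      # provisioned at manufacture
image = b"\x7fFIRMWARE-v2.1-blob"     # stand-in for the real binary
tag = hmac.new(key, image, hashlib.sha256).digest()

# ...and the device checks it before flashing.
print(verify_firmware(image, tag, key))            # genuine image
print(verify_firmware(image + b"X", tag, key))     # tampered image
```

The point of the check isn't the particular algorithm; it's that a tampered or corrupted image fails verification and is never flashed, which is what turns "provide firmware updates" into "provide *authenticated* firmware updates".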
Will it work?
Even if you are generally an opponent of government intervention in IT and the internet, on the grounds that the more you meddle, the muddier it all gets, and therefore the less innovation there will be…
…it’s hard to oppose a minimal minimum law of this sort.
After all, we already have billions of IoT devices in use and on sale, and security seems to take second place, tenth place, or even no place at all in many of them.
Sure, vendors with strong technical ability and decent business ethics are already at or above these proposed minimal minima, but an awful lot of vendors aren’t, and don’t have any incentive to change their approach.
If you want to stop a race to the bottom, a good way is to make the ocean shallower, and to put a bunch of spikes on the sea bed to prevent laggards from settling there in comfort.
17 comments on “Can US senators secure the Internet of Things?”
Short answer: No
Longer answer: Since legislation hasn’t been able to fix network security anywhere else, why should it work with IoT?
Proper answer: The IoT works off of the same flawed communication and networking protocols that the rest of the internet does. Singling out the IoT (a term that doesn’t make much sense, anyway) changes nothing and legislation can’t fix these flaws.
Yes, IoT devices use TCP/IP et cetera, but in general they tend to scrape the bottom of the security barrel with hardcoded root passwords, plentiful vulnerabilities, and overly permissive features that are never considered from a security viewpoint. Occasional home router vulns aside, teakettles and “security” cameras seem to be racing to see which can let attackers breach your home the quickest.
The legislation should just drop “IoT” and raise the bottom for everyone. Most good products are already operating well above these levels anyway.
I’m sorry, but this is defeatist nonsense.
To say that IoT devices (see the link in the article to our definition – a perfectly serviceable one, if I do say so myself) aren’t securable because nothing is securable because all internet protocols are already flawed is IMO like saying you should never go outside because not everything you will encounter is 100% safe.
You can keep your laptop surprisingly secure these days if you keep it patched, cut down your browser surface area, use only HTTPS sites, turn on 2FA where you can, avoid using passwords sloppily, and so on. Many OS vendors, browser makers and service providers are starting to take security really seriously and have closed off hundreds or thousands of holes that used to exist because of historical carelessness and a sense in years gone by that security wasn’t really that important.
But much of the IoT world just isn’t doing that yet – it isn’t even taking the sort of security precautions that we were taking for granted more than a decade ago elsewhere. So a law with “minimal minima” might very well be what we need to get things changing – or at least to shake out those race-to-the-bottom vendors who intend to milk the market with casually insecure tat for as long as they can.
Here’s the blunt version: lots of IoT vendors are taking the [ding] right now, so they could do with a kick up the [dong] to force them to change.
I don’t like that last point.It could hinder the next maverick that wants to develop a truly strong encryption method. It should read instead “Use encryption that achieves the minimum recommended industry standard for security, and fix or replace if it is found to be lacking.” That wording allows for new and novel encryption methods to be used in the market, but requires their replacement if they are not good enough. Those who want to try out new encryption can, but it does not come without risk, as a failure could be costly to remedy. That is far better than banning innovation from the get-go.
I disagree about novel encryption. I don’t think they should be allowed.
The think about ciphers (encryption algorithms) is that it is very hard to tell if they have weaknesses. The history of encryption is littered with new ciphers that the authors claimed where super secure, and where later found to have fundamental weaknesses.
In contrast, a good new cipher will go through several years of analysis and study by experts, before being adopted by a standards body such as NIST. Once a new algorithm has been through that level of scrutiny I would trust it, but not before. In other words approval by a standards body is a necessary sign that a cipher is strong enough to be trustworthy.
David, I don’t think you’re approval will count. Given the source of the laws, “Stick to trusted and approved encryption” will probably determined by Homeland Security and/or the NSA.
I think that the OP meant that if you want to innovate in the field of encryption then you can still do by getting your fantastic new encryption officially approved and recognised by whatever means are appropriate…
…and *then* put it in real-world products.
It doesn’t stop innovation. There’s nothing in it preventing people from developing new encryption and getting it approved.
The problem I see is that the security landscape changes so quickly that any law will be obsolete before the ink dries.
Instead, perhaps they should just disallow license agreements that remove customer rights of implied merchantability. In other words, require fixes to be issued within a short time window, and then (if they don’t issue proper fixes) let customers first demand payment, and then sue them if they don’t pay.
Customer A: My baby monitor swore at my baby.
Customer B: My baby monitor swore at my baby.
Customer C: (Three months later) My baby monitor swore at my baby. Pay me.
Company: But, but, but … the license says we don’t warrant it against security breaches.
Customer C: Congress says you do. And, if you refuse, I can sue for treble damages. So, either pay me or pay me more.
So you prefer a society in which lawsuits and class actions against IoT vendors are a better use of everyone’s time than putting a bit of pressure on IoT vendors up front so they start taking security a bit more seriously and we don’t need all those pesky lawsuits?
How very sensible and er, European!
I fear though that the Anglo-American view of deregulate, self-certify and then sue on failure is going to win through!
Good for lawyers and bottom feeding manufacturers
Poor for consumers who have any expectation of security or privacy
Actually, I would like to see both. And, perhaps there is a minimum standard that won’t ever change. But, this industry changes SO fast that it seems unlikely the law can keep up for the long term.
My thoughts were to get the courts to do the details work over time, rather than Congress (where getting laws passed in a timely fashion seems unrealistic).
But, you make a good point. Perhaps they need both?
I think the article should point out the bill applies to devices sold to the US Federal government (or presumably, paid for by them, say, through grants). It does not yet apply to the consumer market. I imagine the hope is that companies will incorporate the same security standards in their consumer products, but will at least provide some buyer-protection for the government agencies who get stuck with junk they paid for with taxpayer’s dollars. The US has previously used this approach of mandating standards for items sold to the government as a way for improvements to “trickle down” into the consumer market.
I was going to mention that, but although the law will specifically apply – if passed – to devices bought with federal funds, the law rather neatly mentions that it is for “federal agencies, and for other purposes”, thus letting it paint with a broader brush than just “the public service”. So I decided to leave the issue of “where does this apply and when” open. Those who want the details on where this would apply, how long it would take to become effective, and so on – if passed – should look at the actual text of the bill, which we linked to in the article.
But you make a good point – the idea here is clearly to start that “trickle down” effect you mention and not to regulate all consumer electronics right away…
In this video, we discuss the issue a bit more, and we do mention the extent of this Bill:
Good intentions, but if these were consumer laws it would stifle business for short lifespan/budget devices.
A self rating system might serve consumers and businesses better. With hefty fines for lying.
Not all hard set passwords are bad, if they require physical access.
Not all permanent firmware is bad, it can keep people from altering a product for illegal use (recent drone mods article comes to mind)
Custom encryption – yeah that would piss off the CIA-NSA and Amber Rudd.
No, custom encryption would delight Amber Rudd as the chance of a hand-knitted crypto algorithm being crackable is approximately 100%.
Side question (sorta): I’m uncertain how requiring physical access would be possible for anything with a hardcoded passwd. Security through obscurity needs other layers to even resemble effective, and I can’t envision IoT requiring physical access–most are connected by design and definition.
Anything without an onboard screen must be accessible through some sort of CLI/HTTP (probably not HTTPS or a bad implementation). Maybe type the password with one hand while holding a reset button with the other? Apologies if your example isn’t related to IoT and I just assumed. 🙂
+1 for the drone example though–good point.