The number of data breaches reported by the British healthcare sector has doubled since 2013, according to the Information Commissioner’s Office (ICO).
Figures obtained from the ICO, via a Freedom of Information (FOI) request submitted by encryption firm Egress Software, reveal that health organisations experienced 183 data breaches between April and June 2014, a rise of 101% compared with the 91 data leaks recorded during the same period in 2013.
While the healthcare sector showed the largest number of breaches, other sectors saw equally alarming increases: breaches within the education sector climbed by 56%, data leaks within the insurance industry leaped by 200%, lenders also saw a 200% surge, and the number of incidents across business in general rose by 143%.
As a result, it’s hardly surprising to learn that the ICO has fined organisations a total of £6.7 million (about $10 million) for violations under the Data Protection Act since 2010, with the public sector responsible for two thirds of that figure.
Egress Software CEO Tony Pepper said:
To date, the ICO has levied in excess of £6.7 million in fines. It is alarming to see that well over half of that, indeed £4.5m, is coming from the public sector alone. In particular, local government have contributed over one-third to this total. Not only are these organisations and bodies responsible for handling citizens’ data, their malpractice is being paid for by the public pocket.
The firm calculated that the vast majority of breaches (93%) were the result of human error, poor systems and processes, and a lack of care while handling data. It is significant that no fines were levied in respect of technical failings, which accounted for just 7% of all data leaks.
Despite many firms investing in security training and awareness courses for their staff, Pepper claims that isn’t enough and firms need to look again at technical controls:
...we will never be able to completely rule out people making mistakes but clearly safeguards are urgently needed. What these statistics demonstrate is that training alone is not the answer…
Indeed it’s not, but good training can certainly help.
Data breach notification
The UK’s Data Protection Act, which is designed to protect consumers’ data, does not require organisations to make any form of formal disclosure in the event of a data breach (so it’s possible that some firms have not reported incidents) – rather the Information Commissioner ‘believes serious breaches should be brought to the attention of his Office’.
Where there is significant actual or potential detriment as a result of the breach, whether because of the volume of data, its sensitivity or a combination of the two, there should be a presumption to report.
However, an update to the EU General Data Protection Regulation will have a significant impact on British organisations, as it will require any entity with European associations to inform regulators and adversely affected individuals “without undue delay” in the event of a data breach.
Also, the arguably low fines levied by the ICO are set to become a distant memory, as the new regulation states that the maximum penalty for a data breach will be a fine of €100 million or 5% of a company’s annual turnover, whichever is greater.
As Naked Security author John Hawes commented in his recent article on the epidemic of medical data breaches:
Whether this [higher fines] encourages better security practices, or simply drains more cash from budgets which are often already tight, is hard to say...
We need to cure the tendency of the healthcare industry to be sloppy with encryption, access control and web security. We need to ensure privacy and security are given top priority...
It is interesting to hear about the UK, and data protection may well be a global effort, but in reading your article I am wondering how the US compares. Especially for medical data, where I find ground-level staff have both access to records and the ability to change them.
What we really need are security controls that are easy to use. People tend to take the path of least resistance, especially when they have lots of other things to do. Until we have security systems that are easier to set up correctly and robustly, and even easier for end users to use, we’re going to keep seeing these issues. I don’t think it’s human error if the humans are being asked to deal with unreasonably complicated systems.
I work with a group of highly educated, very intelligent, computer-literate people. Our department folders each have a consistent subset of folders in them for private files, cross-department shared files, and published files (where it’s read-only for anyone not in the department). Trying to get these users to make good decisions about what goes where is difficult enough, and this is a relatively simple system they deal with every day. On the back end, the script IT uses to actually set those permissions is a horrific nightmare of complexity that takes its own manual to try to add onto.
Every layer of every security system I’ve ever had to deal with is like this. It really doesn’t have to be. Mechanical locks, for example, are now very complicated machines, but they’re activated in the same way they always have been. Cars, similarly, are much more complicated than they were even a decade ago, but the same basic controls make them run. And no one expects the ordinary user to know how to dismantle the engine block in order to make the car function without crashing. So yes, humans make stupid mistakes. They always have, and they always will. Why are our systems letting them?