Researchers warn of invisible attacks on electrical sensors

Are the humble analogue transducers embedded in vast numbers of sensors the next low-level technology in need of a security rethink?

A new research note on what its authors term “transduction attacks” argues that the security of these components is being taken for granted, and shouldn’t be.

To simplify, transducers are electronic components that turn analogue signals such as radio, sound or light waves, or the physical movement of something like a gyroscope, into an electrical signal that can be digitised by a computer.
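To make the idea concrete, here is a minimal sketch (not from the research note; the sample rate and bit depth are invented for illustration) of the analogue-to-digital step a sensor performs, with a 1 kHz sine wave standing in for the physical signal a transducer might produce:

```python
import math

SAMPLE_RATE = 8000      # assumed ADC sampling rate (samples per second)
BITS = 8                # assumed ADC resolution
LEVELS = 2 ** BITS      # 256 quantisation levels

def analogue_signal(t):
    """A 1 kHz tone in the range [-1.0, 1.0], standing in for the
    voltage a microphone-style transducer might output."""
    return math.sin(2 * math.pi * 1000 * t)

def digitise(duration_s):
    """Sample and quantise the analogue signal, as an ADC would."""
    samples = []
    n = int(duration_s * SAMPLE_RATE)
    for i in range(n):
        v = analogue_signal(i / SAMPLE_RATE)
        # Map [-1, 1] onto the available integer quantisation levels.
        q = round((v + 1) / 2 * (LEVELS - 1))
        samples.append(q)
    return samples

digital = digitise(0.001)   # 1 ms of signal -> 8 integer samples
```

Everything downstream of this step (the operating system, the app, the cloud service) sees only the integers, and has to trust that they faithfully reflect the physical world.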

Under our noses, these are becoming ubiquitous, with more appearing every day in voice-activated devices, drones, motor cars, and other IoT systems.

According to the authors:

A transduction attack exploits a vulnerability in the physics of a sensor to manipulate its output or induce intentional errors.

An attacker targeting a sensor is, in effect, conducting a sort of spoofing attack, tricking the sensor into responding to a rogue input as if it were a legitimate one.

For example, the recent DolphinAttack proof-of-concept demo used inaudible ultrasonic commands to show how voice-activated systems used by cars, smartphones and devices such as Amazon’s Alexa, Apple’s Siri and Google Now could be made to dial phone numbers or visit websites.

Researchers have even demonstrated how something as simple as the sound from a YouTube video could be used to control the behaviour of a smartphone’s MEMS accelerometer.
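Both examples depend on the digitising step mis-reporting energy from outside the band the sensor was designed for. One simple effect that can cause this (the real attacks also exploit hardware non-linearities and mechanical resonance) is aliasing: a tone above the sampler’s Nyquist limit produces exactly the same samples as a legitimate in-band tone. A toy illustration, assuming a hypothetical 8 kHz ADC:

```python
import math

SAMPLE_RATE = 8000           # assumed ADC sampling rate
NYQUIST = SAMPLE_RATE / 2    # 4 kHz: the highest frequency the ADC can represent

def sample_tone(freq_hz, n_samples):
    """Digitise a pure cosine tone at the given frequency."""
    return [math.cos(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n_samples)]

# A legitimate 1 kHz tone and a 7 kHz tone (above Nyquist) yield
# *identical* samples: 7 kHz folds back to 8 kHz - 7 kHz = 1 kHz.
# Software downstream of the ADC cannot tell the two apart.
legitimate = sample_tone(1000, 16)
aliased = sample_tone(7000, 16)
```

Once the samples are identical, no amount of software cleverness after the fact can recover which physical signal actually arrived.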

In theory, the same basic principle might be used to disrupt all manner of devices: from interfering with heart pacemakers to making self-driving cars blind to obstacles.

It needs pointing out that these vulnerabilities aren’t caused by a design problem in software; they exploit the basic physics of the transducer itself.

How did it come to this?

Most likely, the sensors were designed before the community understood the security risks.

One challenge is that while the principles of this kind of attack are now in the public domain, detecting real-world examples is likely to be very difficult.

The messy solution is twofold: build software integrity checking into devices that use these components, and manufacture the components so they respond to a narrower range of inputs (e.g. stop the transducers used by voice-activated devices from being able to “hear” ultrasonic sound).
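A hypothetical sketch of the software side of that idea: before trusting a block of microphone samples, reject it if too much of its energy sits above the band legitimate speech occupies. The cutoff frequency and rejection threshold below are invented for illustration, and a real implementation would use an optimised FFT rather than this naive DFT:

```python
import cmath

SAMPLE_RATE = 48000        # assumed microphone sampling rate
VOICE_CUTOFF_HZ = 8000     # assumed: little legitimate speech energy above this
MAX_HF_FRACTION = 0.1      # invented rejection threshold

def band_energy(samples):
    """Naive DFT; returns (in-band, out-of-band) signal energy."""
    n = len(samples)
    low = high = 0.0
    for k in range(n // 2):
        bin_freq = k * SAMPLE_RATE / n
        coeff = sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, s in enumerate(samples))
        if bin_freq < VOICE_CUTOFF_HZ:
            low += abs(coeff) ** 2
        else:
            high += abs(coeff) ** 2
    return low, high

def looks_legitimate(samples):
    """Reject input whose energy is suspiciously high-frequency."""
    low, high = band_energy(samples)
    total = low + high
    return total == 0 or high / total <= MAX_HF_FRACTION
```

This kind of check only helps against signals the digitised samples still distinguish; inputs that alias or demodulate into the legitimate band before digitisation have to be filtered out in the analogue hardware itself, which is why the researchers point to manufacturing changes as well.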

Given the continued failure of large parts of the IoT industry to embrace even basic software security, this does not bode well.

For those who are prepared to address the problem, this research implies the need for a new generation of transducers, which in turn will need the old-fashioned skills of electrical engineers.

Intriguingly, the authors predict a role for engineers who can approach this problem in an interdisciplinary way; arguably, it was the lack of such thinking that allowed the problem to develop in the first place.