A group of budding security researchers at the University of New Haven (UNH) in Connecticut, USA, recently taught themselves a handy lesson about the difference between liking something and trusting it.
The starting point of this story is a public admission, by students in the UNH Cyber Forensics Research & Education Group, that they “think WhatsApp is a great application.”
WhatsApp, in case you aren’t a fan yourself, is an online instant messaging service for phones and tablets whose primary selling point is that it lets you exchange messages without having to pay for SMSes.
If you’re hooked on SMSing, and you send 1000 messages a month at 10 cents each – that’s $100 a month – it is, indeed, quite a selling point.
So you’d imagine that an app of that sort might end up being fairly popular, but WhatsApp went way past that point, and was acquired recently by Facebook for an astonishing $19,000,000,000.
That sort of popularity and financial power means that WhatsApp handles a lot – an awful lot! – of personally identifiable information (PII) from and about its users, who in turn have to trust that the company does the right thing when it comes to guarding their privacy.
History suggests, however, that such trust is misplaced.
WhatsApp’s chequered security history
WhatsApp, indeed, has made various worrying privacy blunders in its brief history.
One blunder involved using non-secret information to construct secret encryption keys, which is a bit like using your pet’s name as a login password.
Another blunder involved the two-time use of a one-time pad – a cryptographic technique requiring, as its name suggests, that you never re-use its key material.
And Jan Koum, CEO of WhatsApp, went public recently to assert that “[r]espect for your privacy is coded into our DNA,” even though little more than a year had passed since the company was censured by Canadian and Dutch privacy authorities for violating privacy rules in both countries.
So it’s not surprising that our New Haven researchers decided to put WhatsApp’s latest smartphone software to the test.
Does the WhatsApp app really care about your privacy as much as you’d hope by now?
More trouble at t’mill
Sadly, the students found yet another badly-implemented aspect of WhatsApp’s code.
Simply put, they noticed that when they shared their location, the WhatsApp software “called out” to Google Maps …
…without using Secure HTTP, better known as HTTPS.
What that means is that attackers who can sniff network traffic between your phone and Google’s servers can pinpoint you as soon as you share your location with other WhatsApp users.
The attackers don’t even have to be WhatsApp users themselves.
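To see why plain HTTP is so leaky, consider what actually travels over the network. The sketch below builds an illustrative Google Static Maps request by hand (the URL parameters are hypothetical, not WhatsApp’s actual request) to show that with HTTP, the full path – coordinates and all – crosses the network as readable cleartext, whereas HTTPS would encrypt everything after the connection is set up:

```python
# Illustrative sketch: what a plain-HTTP map request looks like on the wire.
# The endpoint and parameters below are assumptions for demonstration only,
# not the actual request WhatsApp makes.
lat, lon = 41.2920, -72.9600  # roughly New Haven, CT

path = f"/maps/api/staticmap?center={lat},{lon}&zoom=15&size=300x300"
request = (
    f"GET {path} HTTP/1.1\r\n"
    "Host: maps.googleapis.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# Over plain HTTP, these exact bytes travel unencrypted; anyone sniffing
# the path (open Wi-Fi, a rogue router, an ISP) reads the coordinates
# straight out of the stream.
wire_bytes = request.encode("ascii")
print("41.292" in wire_bytes.decode("ascii"))  # → True: latitude visible in cleartext
```

With HTTPS, an eavesdropper would see only the server’s hostname (via the TLS handshake) and the encrypted payload – not the query string carrying the coordinates, and not the map image coming back.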
The New Haven students demonstrated this in fine style in a video in which they used the network sniffing tool NetworkMiner, running on Windows, to capture WhatsApp traffic to and from an Android phone.
NetworkMiner didn’t just intercept the geolocation co-ordinates on their way to Google, but also sniffed, recorded and handily popped up on screen the Google Maps image that came back.
In other words, the flaw didn’t just tell the researchers where their phone was located, it handily showed them on a map, pinpointed with one of those little red “golf tees” with which Google denotes locations.
What to do?
We’ve written before on Naked Security about one group of “attackers” who happily make hay while mobile apps shine forth their data, namely the intelligence services.
And we’ve written about how hard it is to judge whether special-purpose mobile apps – such as those for banking – should be considered safe to use at all.
WhatsApp, sadly, yet again joins the list of mobile apps that simply didn’t get it right.
The good news is that WhatsApp responded positively to the New Haveners’ report, and has claimed that the flaw will be fixed in the next release of the software.
Until then, our researchers warn, don’t share your location with your friends on WhatsApp.