A 41-year-old resident of Houston, Texas has been arrested after Google tipped off police that it had spotted child abuse images in his emails.
John Henry Skillern, a registered sex offender with a sexual assault charge dating back to 1994, was picked up on Tuesday, July 29th and later charged with one count of possession of child pornography and one count of promotion of child pornography.
A search of his home and equipment uncovered further images of child abuse, emails and text messages discussing his pedophilic tendencies, and even cell-phone videos of children visiting the branch of Denny’s in Pasadena, Texas, where Skillern worked as a cook.
He is now being held in custody on a $200,000 bond.
The investigation was apparently sparked by a tip-off sent by Google to the National Center for Missing and Exploited Children, after explicit images of a child were detected in an email he was sending.
The story seems like a simple one with a happy outcome – a bad man did a crime and got caught.
However, there will of course be some who see it as yet another sign of how the twin Big Brothers of state agencies and corporate behemoths have nothing better to do than delve into the private lives of all and sundry, looking for dirt.
Google has a long-running and often controversial relationship with the privacy of its users, in the past receiving criticism for its unclear and confusing privacy policies, data-slurping StreetView cars and leaky Google Drive services.
It’s been ordered to give people the option to have references to them ignored by its search engine, and most recently the Italian government demanded more openness about what will be done with people’s data once they’ve given it up to Google’s multi-tentacled services.
Email services in particular are seen as a sensitive area where people might expect some privacy, but Google’s business model relies on crunching everything we do in order to push the right advertising in our direction, and email is a great source of personal info on those who use it.
Google’s been sued for probing intrusively into student Gmail accounts, but at the same time US law has found Gmail accounts to be fair game for police investigation, provided a warrant is granted, and the FBI can get their hands on much more data on Google’s users should they want to.
A year ago, Google’s attitude to the privacy of email users everywhere was called into question.
We all like to get things for free, and like other “free” online providers, Google takes advantage of that, giving us all sorts of services in return for all the personal info we care to hand over. Exactly what is then done with that data is something we have very little control over.
So whenever we hear of something being spotted in (what we think is) our private stuff and reported, we find it worrying, even if it’s something entirely proper and innocuous.
On the other hand though, at least some people expect the Googles of the world to be stepping in more and more to prevent any kind of nastiness, impropriety or fraud on the internet.
This is particularly the case in the area of child abuse, exemplified by UK Prime Minister David Cameron’s crusade to persuade Google to implement more filters to remove child sex images from searches.
At the time this proposal was widely ridiculed, mainly on the grounds that pedophiles operate in sophisticated and well-hidden gangs who would never consider using Google.
In this case, where a man seems to have been caught simply because he was using Gmail, that claim has proven to be not entirely accurate. Not everyone involved in this sort of nastiness is a criminal genius, fortunately.
When it comes down to it, we just need to be more aware of how the world works, and to act accordingly.
If you want your information kept private, just don’t hand it over to a free service which makes money leveraging that kind of info.
If you want to use free services, just don’t use them for anything you wouldn’t want the entire world knowing about.
If you feel you want to commit crimes against children, just don’t – get some psychological help instead.
Image of email and inbox courtesy of Shutterstock.
34 comments on “Google tips off cops after spotting child abuse images in email”
I wonder if this would have happened if he used a Google Apps account?
What do you think – do they only spy on personal accounts, or on corporate ones too?
They record EVERYTHING. There are certain “key words” – and likely other disguised words – that are caught by Google’s computers and flagged. They take a quick look. MOST of their flagged content is nothing. But this time they caught child abuse, which is a wonderful thing! We should all be so vigilant and care for each other’s kids. On the flip side, though, Google has no right, and the NSA has no right, but they still take what they want whenever they want.
You have the option to not use Google if you don’t want to. They provide a free email service and should have the right to ensure their service isn’t being used inappropriately.
Actually, the rest of us don’t. Google’s terms of service are appalling and frequently get worse, but apparently they believe I somehow consented when some clueless sucker, whom they have won over with apparently free services, sends me a message.
I am now at the point where I think I will need to ask such people to use another, less risky service to talk to me.
The interesting thing here is how Google determined that there were child porn images in his email. Either they had a human look at images in his account, or the government has given them a stash of child porn images and told Google to run a search. Both are questionable at best from a legal standpoint.
I’m fairly certain neither of these scenarios are the case.
Google already has fairly beefy algorithms to classify images (as used by Google Images, for example). I imagine this is just a case of turning the same algorithms to images in emails.
Though I am wondering how, or if, the algorithm is capable of distinguishing perfectly legal pornography from the illegal forms.
Microsoft’s PhotoDNA, probably.
Either way… to bring charges a human still has to confirm it… at least I hope that’s the case.
They’re unlikely to be using Microsoft’s solution, they’ll be using their in house developed service. They’ve been talking about their image classification stuff for a while now, for which they hold various patents.
Somewhere on Naked Security there is another similar article about files uploaded to dropbox being hashed, and compared to hashes of copyrighted files to stop illegal sharing.
Google could very easily be doing the same thing.
Or, the email could have been flagged due to a word in the email or picture metadata, rather than the picture itself.
Or, the other option, they are making use of some sort of web crawling service (no idea where they would get that from though…) and comparing to images on known ‘bad’ sites. That’s pure speculation though.
Police have hashes of known CP they might supply to Google, so all of this could happen in the background.
It’s definitely neither of the above. They probably checksum the images (using something like sha256) and compare the results to a database of known child pornography checksums (which already exists, since the government has a huge database to track victims). If the checksum of the image happens to match something in the database (which has a *very* low false positive rate), they probably send a message to a law enforcement agency.
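For the curious, the exact-match lookup this commenter describes can be sketched in a few lines of Python. This is purely illustrative: the single “database” entry below is just the well-known SHA-256 digest of the bytes `b"test"`, standing in for a real hash database.

```python
import hashlib

# Stand-in "database" of known-bad digests. The single entry here is
# just the SHA-256 of b"test", used purely for illustration.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_flagged(attachment: bytes) -> bool:
    """Exact-match lookup of an attachment's SHA-256 digest."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_BAD_HASHES

print(is_flagged(b"test"))   # True: digest matches the database entry
print(is_flagged(b"test!"))  # False: a single changed byte changes the digest
```

Note that exact cryptographic hashes are brittle: re-saving, resizing or recompressing an image produces a completely different digest, which is one reason perceptual schemes such as PhotoDNA exist alongside this approach.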
So that would be a warrant-less search.
By Google, on data stored on Google’s servers, that users have given them permission to do.
The police are only coming into it when they have been contacted, much the same as if you call 999/911 because someone is breaking into a neighbor’s house.
I’m not saying what Google is doing is right, but once they have found CP, notifying the police is exactly the correct thing to do. I view non-action as nearly as bad as committing the act.
You’ve just described exactly my second option.
“David Cameron’s crusade to persuade Google to implement more filters to remove child sex images from searches”
I’m disappointed (but not surprised) that the author, like so many other journalists, is conflating “censoring perfectly legal adult content so that children can’t see it” with “filtering child abuse images”.
There is a *huge* difference between preventing children from seeing legal pornography, and preventing people from seeing child abuse images. But because both aims include the words “pornography” and “child”, lazy journalists tend to assume that they’re the same thing.
This then leads to people opposed to censorship of legal pornography being accused of sympathising with child abusers, which is absolutely absurd.
I don’t think John is conflating those two things at all. You’re right that there is lazy journalism on this subject but I reckon there’s lazy thinking all around, and not where you think.
In the speech where he announced that the country’s largest ISPs (not Google) would be required to implement on-by-default filters to protect children from legal, adult pornography, David Cameron also talked about a bunch of other things.
Those things included acknowledging the fallibility of the filters, education programs to help children and parents deal with a situation where kids can treat themselves to almost saturation coverage of adult pornography if they want to, demanding changes to the way searches that are clearly designed to find child pornography were handled and generally taking search companies to task for not doing enough on the subject of child pornography.
You could argue that by including those distinct subjects in the same speech that Cameron was conflating them, or that journalists who reported that speech were conflating them, but if you actually read the speech you won’t make that mistake.
Personally I think they were in the same speech because they both deal with protecting children from different things that happen on the internet.
The censorship (the filters) has never been mentioned as a way of stopping child pornography.
Also I’ve not seen anyone – not the government, not the opposition and not journalists – confusing child pornography with adult pornography or accusing anyone of siding with paedophiles if they object to censorship.
The reporting has been lazy (appalling actually) but only because it’s focussed entirely on the filters and the fact that they don’t work that well.
The rest of the speech has been all but ignored because it shows the kind of nuance and humanity that has to be ignored in order to report it, or object to it, in a shrill, black and white manner.
That lazy journalism has been egged on by a loud chorus of people alarmed at the idea of censorship by degrees and I think it’s that chorus of people who are creating a straw man – “don’t call us paedophiles!” – to push back.
There was a study (I think it was posted here) saying that a quarter of the sites the UK filters block weren’t porn related. Most of them were either health or equality related.
One of the sites blocked was a blog by a gay teenager who talks about what it’s like to be openly gay in high school. Most of it had nothing to do with being gay, just what normal teenagers go through in high school.
I wasn’t advocating for the filters, or for on-by-default filters, but you’ve illustrated my point I think.
Ignore the rhetoric for a second and look at the process that users have to go through (I’ve done this).
The sequence of events is this: the ISP has to show you a screen that says “these are your filtering options” or words to that effect. The ‘adult filter’ is ticked.
If you want it, you click next. If you don’t want it you untick it and click next.
That step where you are asked *has to happen* and you can return to it whenever and as often as you like.
If you leave it ticked and then subsequently decide that it doesn’t work for you because there are too many false positives you go back and untick it.
The philosophical argument boils down to “should you be shown a tick box that is empty or ticked by default”.
Everything else is Fear, Uncertainty and Doubt.
Personally I think there are interesting arguments on both sides and I’d be happy to join a civilised debate about that but I’ve not found one yet.
The media can (and do) write breathlessly indignant stories about the quality of the filters but that’s just a fact of life. It’s off-the-shelf web filtering and that’s just how good or bad web filtering is in 2014.
If it helps your parenting, use it. If it doesn’t, don’t. But don’t use it as a substitute for parenting and don’t say the government said you could either, because they didn’t.
Bad filtering isn’t an argument for or against ticking that ‘adult filter’ box by default, it’s an argument for having an approach that doesn’t only rely on filtering and that is, in fact, the government’s position.
Straw man arguments are indeed poor form, but I think you miss the mark asserting the various forms of content are clearly separate.
In many countries, including the UK, young people taking so-called sexy selfies are at risk of being tagged as creating child porn. They are at risk of being prosecuted and put on child abuser registers. Sounds silly, but it has happened enough times to be real, and to be a threat.
Sometimes the black and white treatment, often meaning well, has unintended collateral impacts and risks for the intended beneficiaries.
For instance, there is also evidence that sysops are hesitant to report or remove child porn urgently because of the hyper criminalization and the potential risk they commit offences in the course of dealing with it, and it stays up longer than say copyright or defamation material.
Good intentions pave a well known road…
Proponents of grand schemes need to move beyond good intentions and explore whether stated aims are well understood, and are safely met by the given scheme.
In this case, are we accepting that Google has appointed itself the mass-surveillance inspector of everyone’s messages, including non-customers like me, and the first-line reporting agent of law enforcement, using Big Data tools to analyze the legality of all the email image and text content it can get its voracious hands on?
Who needs a police state when you have a cute, ‘free’ foreign entity freelancing, helpfully based in another country, and doing its best to evade tax and other responsibilities of respectable business?
WRT Google, their business is scanning and categorising content. They scan and categorise your email to show you ads, and they scan and categorise the web to provide web search.
You might take issue with that but until you or anyone else convinces them to stop it’s a fact that it goes on.
Google is not a phone company – their business model is to make themselves aware of what’s being sent across their network – so they cannot claim to be a mere carrier.
If, during the course of that activity, they find something that they suspect is child pornography do you think they *shouldn’t* report it to the police?
It’s already well known that Google digests emails automatically for use in targeting advertisements. I’m imagining this has arisen as a side effect of the image classification algorithm run on all emails.
There’s a bit more to it than this. The National Center for Missing and Exploited Children (The Center) provides a list of MD5 hashes of known child porn images. This list is available to law enforcement and all ISPs. This has been going on for more than a decade; it’s hardly new. Every ISP is doing the exact same thing – this is not unique to Google. Among CP collectors, this technique has been known, and it’s what has driven CP to .onion sites and in-game CP.
When an ISP detects a known CP hash, the information is forwarded to The Center, which then reaches out to an investigating law enforcement body. No one at Google is manually scanning or reviewing images.
The same hash and scan technique is also used to stop the spread of malware attachments.
I hope they are better at hash scanning for CP than they are at hash scanning for malware, viruses and like.
The Center is good at getting the hashes distributed based on its research and the outcome of criminal cases. Any suspect images pulled from a suspect’s PC are hashed; if a court determines them to be CP, the hashes are sent to The Center and the database grows at a rapid rate. This is a bit different from malware and the motives behind malware distribution.
If this CP collector was using email, he’s an idiot too. The hashing techniques are, and have long been, well known, and it’s what’s driven the perverts to other venues.
How about encrypted email?
Depends what you mean by encrypted. If you mean in-transit encryption, such as HTTPS, then it will have no effect: by the time the mail is scanned it has already reached its destination and sits unencrypted on the Google server.
End-to-end encryption (like PGP/GPG) could be useful, depending on how and when Google applies its algorithms. Google can’t do anything with the mail until the user has decrypted it, so it’s opaque while sitting on the server. However, you’re probably using the Gmail interface to read the message, so they could then read the decrypted contents in your own browser.
If you used end-to-end encryption and read/decrypted the email using external tools, then you might be able to defeat Google’s scanning. You’d essentially be using Gmail as a mail server rather than a webmail client.
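To see why end-to-end encryption defeats server-side hash matching, here is a toy sketch in Python. The XOR one-time pad below is a stand-in for a real cipher like PGP, and the message bytes are invented; it is for illustration only.

```python
import hashlib
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad stand-in for real end-to-end encryption (e.g. PGP)."""
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"attachment bytes that a server-side scanner would hash"
key = secrets.token_bytes(len(plaintext))   # shared only by sender and recipient
ciphertext = xor_cipher(plaintext, key)

# The server only ever stores the ciphertext, whose digest bears no
# relation to the plaintext's digest, so a hash-database lookup finds nothing.
print(hashlib.sha256(ciphertext).hexdigest())
print(hashlib.sha256(plaintext).hexdigest())

# The recipient decrypts locally and recovers the original bytes exactly.
print(xor_cipher(ciphertext, key) == plaintext)  # True
```

The scanning then has to happen at an endpoint that holds the plaintext – which is exactly why reading the decrypted mail inside the provider’s own web interface gives the game away again.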
Another reason why using a cloud-based service for your data without adequate encryption and/or other security safeguards is just asking for trouble. Regardless of guarantees of safety and confidentiality by vendors, there is always the human factor which can bring the most secure behemoth to its knees.
The deal offered is free email and Google can scan your email and place ads using that information. Are they supposed to ignore evidence of a crime if they come across it?
If you don’t want Google scanning the evidence of your crimes as you send it around to your friends, don’t use their free email service.
It seems to me that the myth of hidden advanced technologies, kept from the view of mere mortals like ourselves, persists, and that this is seen as evidence of such, when it is far more likely that this was good old-fashioned policing of a known offender. I therefore doubt very much that the people at Google, or the CIA for that matter, spend their time looking at our emails, or use supercomputers to unravel everyone’s love life.
A few issues I have with this:
1) This moron was using email to communicate over a massive corporation’s network – one that is most likely in cahoots with law enforcement anyway. He gave reasonable cause and was raided. I’m OK with that.
2) The authorities found that he had a CP image or two and was arrested on those charges; I’m OK with that too.
3) Creepy as it may be, some of the images he possessed were (again, creepy) but not against the law. Those should not be brought up in a case like this.
I think you are right on all points Mike. Point number three is just sensationalising and is building up a picture of the person that is not relevant in any way to the subject in hand – which I thought was Google’s ability to read and recognise the content of images attached to or embedded in IMAP emails.
Just so nobody misses the point here: Google is watching pictures in our mail. That is kind of creepy if you ask me; it’s like Facebook’s twin. My computer is my phone, and I’ve taken everything attached to Google or Facebook off my Android. It’s like having a stalker that won’t go away even though you know they’re not watching you.
If Google is scanning emails and reporting them to the government, it means they can be compelled to do the same by any government. If you are a reporter in China whose job it is to bring human rights violations to international attention, Google will tell the Chinese government. In some countries the government controls what you can buy, and people have been trying to circumvent this by using cryptocurrency so they could better their lives and engage in the free market. Google will probably have already reported them. In other countries being critical of the state religion is a capital offense.
I honestly wouldn’t be surprised if people have literally died already as a result of this. I mean, Google has demonstrated the ability to scan emails for any content they want, and if they are committed to cooperating with law enforcement that means cooperating with any unethical law a given country has.
It also means we are back to the days when photo centers called the police on parents who had pictures of their children taking a bath. There are far too many of those types of cases to count. Even cases that seemed obvious were sometimes just of young girls measuring their own development, and then they had to go through hell trying to get their pictures away from prosecutors. It turns out it’s really hard for an underage girl to make the government destroy her pictures once the government has them, even if no crime has actually taken place.
One thing’s for sure. Google is not the company you want to do business with if you have any need for any level of privacy.
Not that I could really use Google for most useful things anyway. Ever try sending source code via email? Even in a plain text file it’s impossible.