A few years ago, Google simplified its prove-you’re-a-human reCAPTCHA test. To prove we’re not automated bots, it gave us a single, hopefully quivery “I’m not a robot” click to replace the previous deciphering of blobby melted characters and mathematical problems that made our brains hurt.
Google: So much simpler!
Our readers: MAKE IT STOP!!!
Carol February 12, 2016 at 10:02 pm
for days now every post I share on Facebook has that stupid test, and i’m ready to delete Facebook all together. I’ve had it, I shouldn’t have to prove i’m not a robot!!!!! over and over and over again, I’m at #4 test so far today and have over 15 yesterday and not sure how many the day before, I’M SICK OF IT!!! HOW DO I STOP IT!!!!!
nope September 24, 2016 at 9:10 pm
they can f off and die with this cr@p tired of I the user having to prove anything
Now, Google is making it stop.
Coming “soon” (hopefully) to a now-annoying site near you is Google’s Invisible reCAPTCHA: a free service designed to protect sites and apps from spam and abuse without any need for users to click in a quivery human fashion, select all the kitten pictures or whatever other thing developers have had us do to prove we’re real.
Google says on its developer signup page that Invisible reCAPTCHA will use advanced risk analysis technology to separate humans from bots: no need for us to click on anything at all (or to select images associated with a clue image, as mobile users have been doing).
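For the curious, Google's published reCAPTCHA developer documentation shows what an invisible integration looks like on a page: the widget is bound to a button instead of rendering a checkbox, and a callback receives the response token. This is a sketch based on those docs; `YOUR_SITE_KEY` is a placeholder, and the token would still need server-side verification.

```html
<!-- Load the reCAPTCHA API -->
<script src="https://www.google.com/recaptcha/api.js" async defer></script>

<form action="/submit" method="POST">
  <!-- data-size="invisible" is what hides the challenge from the user -->
  <button class="g-recaptcha"
          data-sitekey="YOUR_SITE_KEY"
          data-callback="onSubmit"
          data-size="invisible">
    Submit
  </button>
</form>

<script>
  // Called by reCAPTCHA once it has (invisibly) scored the visitor.
  function onSubmit(token) {
    // The token travels with the form as g-recaptcha-response,
    // to be verified server-side against Google's API.
    document.querySelector('form').submit();
  }
</script>
```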
Proving you’re human can be tiresome, but it’s certainly worth it to block bots, which never get bored and never tire of running through their automated nastiness.
Bots harvest email addresses from contact or guestbook pages, scrape sites and reuse the content without permission on automatically generated doorway pages, take part in Distributed Denial of Service (DDoS) attacks, and automatically try to log into sites with reused passwords ripped off from breaches.
One recent example: the password-reuse attack on the UK’s National Lottery.
Google’s been using risk analysis to fend off bots for years. In 2013, it revealed what it called its Advanced Risk Analysis backend for reCAPTCHA.
That back end doesn’t just look at whatever gobbledygook we type into a box or how human-like our mouse clicks are. Rather, it observes our entire engagement with a CAPTCHA, from start to finish – before, during, and after we click anything – to determine whether we’re carbon-based.
Specifically, the difference between bot and human can be revealed in clues as subtle as how a user (or a bot) moves a mouse in the brief moments before clicking the “I am not a robot” button, according to Vinay Shet, the product manager for Google’s Captcha team.
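To make that concrete, here's a toy illustration of the idea, and only the idea: this is not Google's algorithm, just one imaginable signal. A scripted bot often drives the pointer in a perfectly straight line to its target, while a human trace wobbles, so the ratio of straight-line distance to total path length can hint at which is which.

```javascript
// Toy heuristic (not Google's actual risk analysis): score how straight
// a pointer trace is on its way to the "I'm not a robot" checkbox.
function straightness(points) {
  // Ratio of direct distance to total path length: 1.0 means dead straight.
  if (points.length < 2) return 1;
  let pathLen = 0;
  for (let i = 1; i < points.length; i++) {
    pathLen += Math.hypot(points[i].x - points[i - 1].x,
                          points[i].y - points[i - 1].y);
  }
  const direct = Math.hypot(points[points.length - 1].x - points[0].x,
                            points[points.length - 1].y - points[0].y);
  return pathLen === 0 ? 1 : direct / pathLen;
}

function looksScripted(points, threshold = 0.999) {
  // Suspiciously perfect lines score at (or very near) 1.0.
  return straightness(points) >= threshold;
}

// A robotic, perfectly linear approach to the checkbox:
const botTrace = [{x: 0, y: 0}, {x: 50, y: 50}, {x: 100, y: 100}];
// A wobblier, human-looking approach:
const humanTrace = [{x: 0, y: 0}, {x: 40, y: 62}, {x: 55, y: 48}, {x: 100, y: 100}];
```

A real system would weigh many such signals together (timing, IP reputation, cookies); a lone threshold like this would be trivial for a bot author to fake by adding jitter.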
Without realizing it, humans also drop clues that can establish whether we’re automated or not: IP addresses and cookies show our movements elsewhere on the web and can help prove that we’re not a bad actor.
Regardless, reCAPTCHA hasn’t proved infallible, at least as far as the test for image selection goes.
In April, a trio of Columbia University researchers managed to figure out how to speed up the rate at which they could try new CAPTCHAs, guess how the CAPTCHA process works behind the scenes in order to game the system, and use other online services to automatically solve CAPTCHAs faster than the designers thought possible.
The new Invisible reCAPTCHA will mean that Google’s submerging its bot/human detection technology completely. Bots presumably won’t be able to game the system by doing what those researchers did: to (ironically enough!) use Google image search to come up with a list of words associated with given images and thus automate the click-all-the-kitties tests.
For those of us who’ve been tearing our hair out over, say, Craigslist’s overuse of reCAPTCHA, invisible culling of humans-vs-bots is likely going to be very welcome indeed.
For those who loathe and distrust the notion of Google silently lurking beneath the surface, tracking our mouse clicks and cursor movements to see which ads we like best, much as Facebook has mulled doing, no, this won’t seem like a good move at all.
There’s nothing unique about Google’s ability to track our mouse movements and analyze our behavior on web sites, of course.
Facebook, Twitter, or any webpage can track everything you do on it, and could be logging your every pointer movement or keystroke.
Logging keystrokes is no super-secret, privacy-sucking trick. It’s plain old Web 1.0. This isn’t news, but it’s certainly worth repeating: anybody with a website can capture what you type, as you type it, if they want to.
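The mechanics really are that plain. A minimal sketch, assuming nothing beyond standard browser events (the buffering and "phone home" step here are made up for illustration):

```javascript
// Plain, Web 1.0-era tracking: just event listeners and a buffer.
const buffer = [];

function record(kind, detail) {
  buffer.push({ kind, detail, t: Date.now() });
}

function drain() {
  // On a real page this batch would be POSTed "home", e.g. via fetch();
  // here we simply hand it back and empty the buffer.
  return buffer.splice(0, buffer.length);
}

// Attach to the page only when a DOM exists (skipped when run under Node).
if (typeof document !== 'undefined') {
  document.addEventListener('keydown', e => record('key', e.key));
  document.addEventListener('mousemove',
    e => record('move', { x: e.clientX, y: e.clientY }));
}

// Simulate two keystrokes being captured as they're typed:
record('key', 'h');
record('key', 'i');
```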
If you want to see that tracking in live action as you visit a site, ClickClickClick is a lot of fun. The site doesn’t just track you, it gives you a running commentary about its tracking of you, replete with amusing analysis of why you’re doing what you’re doing.
The tool that makes all this possible is JavaScript: a fully featured programming language that can be embedded in web pages and is supported by every browser. It can track cursor position and keystrokes, and call “home” without refreshing the page or giving any visual sign.
Those aren’t intrinsically bad things. In fact, they’re enormously useful. Without those sorts of capabilities, sites like Facebook and Gmail would be almost unusable, searches wouldn’t auto-suggest, and Google Docs wouldn’t save our bacon in the background.
In the case of Google’s continuing advances with reCAPTCHA, such an ability can stop a lot of bad bots from doing things that can be worse than the annoyance of having to endure typing in text from a blobby image.
But does it have to be Google who’s doing it? Some people on Twitter would prefer it wasn’t.
Anonymity and privacy researcher Sarah Jamie Lewis noted that bots aren’t all bad. Some can be useful, crawling the web for information – the type that isn’t used for nefarious purposes.
But she wishes that Google weren’t the one calling the shots:
I’d ❤ a future where smart bots crawl for useful info & report back – I’d really like those bots to not to have to be licensed by Google.
— Sarah Jamie Lewis (@SarahJamieLewis) December 5, 2016