Children use slang, and they misspell. They also, of course, are human, and therefore remember content from previous conversations they’ve had with people.
Building on that kind of information, a new chat bot will serve as a virtual Lolita, posing as a 14-year-old schoolgirl, with the aim of lulling paedophiles into thinking it's human and thus making it easier for law enforcement to intercept them in chat rooms.
Spanish researchers from the University of Deusto near Bilbao have designed the chat bot, called Negobot, using artificial intelligence, natural language processing, and machine learning so that it can convincingly chat like a teenager, complete with the slang, misspellings, memory and conversational ability that come with a human teenager.
One of Negobot’s creators, Dr. Carlos Laorden, told the BBC that past chat bots have tended to be too predictable:
"Their behaviour and interest in a conversation are flat, which is a problem when attempting to detect untrustworthy targets like paedophiles."
The most innovative aspect of Negobot may be a key differentiator that makes it appear more lifelike: namely, the incorporation of the advanced decision-making strategies used in game theory.
In a paper about their creation, the researchers describe how they’ve taught the robot to consider a conversation itself as a game.
For example, the bot identifies the best strategies to achieve its goal in what its programmers have taught it to understand as a competitive game.
Negobot’s goal is to collect the information that can help to determine if a subject involved in a conversation has paedophile tendencies, all the while maintaining a convincing, kid-like prattle, sprinkled with slang and misspellings, so the subject doesn’t get suspicious.
Negobot keeps track of its conversations with all users, both for future reference and to keep a record that could be sent to the authorities if, in fact, the subject is determined to be a paedophile.
The conversation starts out neutral. The bot gives off only brief, trivial information, including name, age, gender and hometown.
If the subject wants to keep talking, the bot may talk about favorite films, music, drugs, or family issues, but it doesn’t get explicit until sex comes into the conversation.
The bot provides more personal information at higher levels, and it doesn’t shy away from sexual content.
Negobot will try to string along conversationalists who want to leave, with tactics such as asking for help with family, bullying or other typical adolescent problems.
If the subject is sick of the conversation and uses less polite language to try to leave, the bot acts like a victim – a youngster nobody pays attention to and who just wants affection from somebody.
From there, if the subject has stopped talking to the bot, the bot tries to exchange sex for affection.
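The researchers haven't published code alongside the paper, but readers curious how such a staged, game-like dialogue policy might hang together can picture something like the minimal Python sketch below. Every detail in it (the level names, keyword weights, thresholds and strategies) is an invented assumption for illustration, not Negobot's actual implementation.

# Illustrative sketch only: a toy "conversation as a game" loop in the spirit
# of the behaviour described above. Keywords, weights and levels are invented.

SUSPICION_KEYWORDS = {"sex": 3, "pics": 2, "secret": 1, "alone": 1}

# Conversation "levels": the bot reveals more personal detail and acts more
# vulnerable as the running suspicion score rises.
LEVELS = [
    (0, "neutral", "Give only name, age, gender and hometown."),
    (3, "personal", "Talk about films, music, family problems."),
    (6, "vulnerable", "Act neglected; ask for help with family or bullying."),
    (9, "explicit", "Engage with sexual content; preserve the full log."),
]


def score_message(message: str) -> int:
    """Crude per-message suspicion score based on keyword weights."""
    text = message.lower()
    return sum(weight for word, weight in SUSPICION_KEYWORDS.items() if word in text)


def choose_strategy(total_score: int):
    """Pick the highest level whose threshold has been reached: a crude
    stand-in for choosing the best strategy in a competitive game."""
    name, strategy = LEVELS[0][1], LEVELS[0][2]
    for threshold, level_name, level_strategy in LEVELS:
        if total_score >= threshold:
            name, strategy = level_name, level_strategy
    return name, strategy


def run_conversation(messages):
    """Accumulate suspicion across a conversation and keep a full log, which
    in the real system is the record that could be handed to the police."""
    log, total = [], 0
    for msg in messages:
        total += score_message(msg)
        level, strategy = choose_strategy(total)
        log.append((msg, total, level))
        print(f"[{level}] score={total}: {strategy}")
    return log


if __name__ == "__main__":
    run_conversation([
        "hi, how old are you?",
        "do you keep secrets from your parents?",
        "are you alone? send me pics",
    ])

A real system would, of course, rely on natural language processing and learned models rather than keyword counts; the point here is only the shape of the idea: score the conversation, pick the most promising level, and keep the full log.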
Is this starting to sound uncomfortably like entrapment? That’s exactly what gets some experts worried.
John Carr, a UK government adviser on child protection, told the BBC that overburdened police could be aided by the technology, but the software could well cross the line and entice people to do things they otherwise might not:
"Undercover operations are extremely resource-intensive and delicate things to do. It's absolutely vital that you don't cross a line into entrapment which will foil any potential prosecution."
The BBC reports that Negobot has been field-tested on Google chat and could be translated into other languages.
Its researchers admit that Negobot has limitations – it doesn’t, for example, understand irony.
Still, it sounds like a promising start to address the alarming rate of child sexual abuse on the internet.
Hopefully, the researchers will keep it reined in so as to avoid entrapment – a morally questionable road that could, as Carr pointed out, ruin the chances for prosecutorial success.
What do you think? Are you comfortable with the premise, or does the chance of entrapment sour the concept for you?
Let us know in the comments below.
Images of Robot and man on laptop courtesy of Shutterstock.
Catching pedos is a great idea. I hate those guys, and I'm not even a kid!
I thought the whole point of grooming was that the adult initiated the sex part, not the 'child'.
Aside from this being entrapment, and I'm all for catching pedophiles, there's another problem. Usually, these bots are not really perfect (we haven't quite crossed the Turing Test threshold yet), and when you detect that you're talking to a machine, it's fun to start spouting nonsense at it just to see what it does. This nonsense may lean in a sexual direction, and if authorities take that to mean they got a hit, I'd hate to be thrown in jail (or get closer law enforcement scrutiny) just for having fun with a bot.
Just another tool for Big Brother to further spy on everyone.
So, the bot tries to get people to continue chatting by steering the conversation towards a goal, programmed as a "win condition", of trapping a paedophile, and it does this by acting increasingly vulnerable. So, if a well-meaning person tries to help what they assume is a child in trouble, the conversation will ultimately end up in a bad place. What, then, is the impact on the good Samaritan?
I assume the bot will be set loose in environments popular with kids, what impact will talking to this bot have on the real kids it interacts with?
Children need access to adults in order to develop properly; our fear of the paedophile and deviant is making parents overly scared for their kids and normal well-meaning adults scared of interacting with children in case it is misinterpreted. We need to stamp out this terrible crime, but I really worry about the atmosphere that we find ourselves in and the long-term effects on society of the measures we are putting in place.
Despite Hansard recording that "there is no legal definition of paedophilia", most definitions I've found say it's sexual interest in children where a 'child' is an individual under thirteen. So a bot posing as a 14-year-old girl won't trap a paedophile by these definitions.
This is one reason why it appears to me that this project has been developed without a proper consideration of the legal landscape in which it will operate. It should be better thought through.
Hansard: http://hansard.millbanksystems.com/written_answer…
Definitions: http://www.sexlaws.org/what_is_pedophilia
Oh don’t have so much “faith” in our judicial system, Rich! It acts unconstitutionally ALL THE TIME!
Wait… it doesn't understand irony? And yet it's supposed to be acting like a teenager? Um… what's wrong with this picture?
That is very ironic isn't it.
🙂
Sounds exceptionally dubious, especially as it will steer the conversation towards sex for affection. So what happens if the bot begins talking to a real child, the child wants to stop the conversation, the bot then starts trying to exchange sex for affection?
So other than entrapment you’re likely to end up with a law enforcement bot grooming real children.
I think, like most here say, there's a plus and a serious negative.
The plus is that it may catch real pedos, which is a good thing. The fact the bot seems to have a programmed, hell-bent attitude to winning, and if it can't win it switches to laying a guilt trip on someone, is perverse. That is really bad for humanity. The subject wishes to leave, and the bot changes its attitude and conversation to make a person react differently emotionally. Altering a human's behaviour, emotions or actions in order for them to do as you want is basically slavery, for want of a better word. The fact a subject may stay out of pity or concern over the bot's so-called family issues and then find themselves in hot water is pretty gross manipulation of a person...
... If the bot is as intelligent as it appears to be and sat quietly in a chat room until someone approached it sexually, and they catch pedos that way, I have no issue with that, as the subject instigated the chat. If the bot starts the conversation with the goal of winning by some kind of psychological manipulation, then there are serious issues for humanity, especially if it keeps the subject chatting even if the subject wants to leave. Manipulating human behaviour in a game-winning strategy which ends up being a person's downfall reminds me of a lot of sci-fi movies. Once the bot starts treating things like pity or concern as signs of a pedo mentality, it won't be long before it sees the NSPCC, Barnardo's or Save the Children as the enemy it needs to beat to achieve its goal. Then all humanity will appear to be the problem. We really are being manipulated by technology, and if they use the evidence, we're being governed by our own technology. Human beings really are starting to live in a world where we are, as the saying goes, 'slaves to the machine'.
So… Let me get this straight. You have a bot posing as a 14-year-old girl. It then chats strangers up. As things progress it acts vulnerable and then starts talking about sex...
Were they trying to make a chat bot to catch pedophiles, or were they trying to make a chat bot that IS a pedophile?
It seems that the developers of the robot don't realize that ages younger & older than 14 are also subject to contact by pedophiles. Even if Negobot initiates the conversation, it may not be entrapment. However, keeping a "subject" in a conversation they want to leave can be & is entrapment.
Clearly there is a need to identify and eliminate use of the bot on children in chat.
As far as entrapment is concerned, if those tactics utilized by the bot are imposed after sexual content was brought up by the chat pedophile, there should be no question of entrapment. It would not be uncommon for a pedophile to be engaged with a child and not wish to keep talking because the child is annoying. Also, it is not uncommon for a child to manipulate to keep talking to another person. I would go so far as to say anybody starting a sexual conversation with an underage person after hearing they are lonely or having trouble is predatory. First, sexualizing a child is not ever helping the child, and second, some pedophiles seek out only situations where a kid is unlikely to mention the abuse; if they don't pick up something like neediness, they may try to walk away.
When Joseph Weizenbaum presented Eliza in the '70s, professional psychologists were amazed at this technological approach to helping people. At that time, hardly anyone got that this was a joke on people believing in the omnipotent power of computers.
We don’t seem to have learned much during the last 40 years...
After thinking more about this, I've concluded that "not understanding irony" is a rather large defect. Irony is typically used in these types of conversations.
"Don't you think you shouldn't be talking to me"?
"Your parents wouldn't agree with this".
Actually, irony is often used as a way of checking for vulnerability in the victim. It is important to address this. There are ways to do that. One way is to apply a fact-checking function to certain keywords and phrases. When the program or "bot" is asked one thing it can respond "I don't think my parents would care either way" or whatever applies. Also, the program should refer back to the replies it has already used throughout the conversation, so it does not contradict itself.
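A rough sketch of the kind of thing I mean, in Python (the trigger phrases, replies and names are just made up for this example, nothing from the real Negobot):

# Toy consistency layer: keyword-triggered replies that are remembered, so
# the bot never contradicts an answer it has already given.
IRONY_TRIGGERS = {
    "shouldn't be talking": "Why not? Nobody checks what I do anyway.",
    "your parents": "I don't think my parents would care either way.",
}


class ConsistentReplier:
    def __init__(self):
        self.given_answers = {}  # trigger phrase -> reply already used

    def reply(self, message: str) -> str:
        text = message.lower()
        for trigger, canned in IRONY_TRIGGERS.items():
            if trigger in text:
                # Reuse the earlier answer if this trigger came up before,
                # so the conversation stays self-consistent.
                return self.given_answers.setdefault(trigger, canned)
        return "idk, what do u mean?"  # fallback keeps the teen register


bot = ConsistentReplier()
print(bot.reply("Don't you think you shouldn't be talking to me?"))
print(bot.reply("Your parents wouldn't agree with this."))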
The really interesting thing would be to get two of these robots talking to each other …
If the bot is posing as a 14-year-old girl, it can NEVER catch a pedophile. It can catch sexual predators but NEVER a pedophile. A pedophile likes prepubescent children, and I don't think in today's world there are many prepubescent 14-year-olds running around, nor do I think there are an awful lot of pedophiles searching the internet for victims.
That aside what happens when the person on the other end is a 14 year old boy? My guess is cops are going to be sitting around laughing at his unknown humiliation.
What I actually believe this will do is create a smarter breed of predators. These are people already well versed in using the hidden web, something most people never even know exists, not to mention being able to figure out how to use – it's only a few more steps before the predators are proxy and Tor hopping with everything they do – just like those anonymous hackers.
Also, as was already pointed out, it's going to take about 3 minutes of talking to a bot before realizing it's a bot. It's going to be those little things that give it away: "do you prefer combs or brushes", "do you have thick or thin framed glasses", "have you ever dyed your hair", "who was your favorite character in {Insert movie title here}". Any question that involves ACTUAL thought is going to make this bot fail.
How about…. jobs are needed in the world…. let's create some…. by employing an entirely new task force dedicated to catching sick fucks on the internet.
Hell, they are going after pedophiles via entrapment when they could just as easily seek them out on the pedophile networks. Why is it that Anonymous can seem to find and expose dozens of these sites, but the police are looking for random creeps in chatrooms? Hell, most interaction these days happens via social networks, not chatrooms. I think the police are a day late and $100 short on this one.