When Twitter user @jeffrybooks tweeted “I seriously want to kill people” in reference to an upcoming event in Amsterdam, police decided to pay the account’s owner a visit.
However, when officers questioned the owner, Jeffry van der Goot, they discovered that things were not quite as they seemed.
The 28-year-old Dutch software developer hadn’t actually typed the words himself.
Instead he was running a bot, developed by technology student ‘Wxcafe’, that takes random words from his Twitter archive and recombines them algorithmically in an attempt to tweet coherent sentences.
In this instance the result proved to be a fatal mistake for the bot, which had been conversing with one of Twitter’s estimated 23 million other bots. At the request of the Dutch authorities, it has now been terminated.
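The article doesn’t describe Wxcafe’s actual implementation, but ‘ebooks’-style bots of this kind are very commonly simple Markov chain generators: record which words tend to follow which in the source archive, then take a random walk through that table. Here is a minimal, purely illustrative sketch in Python – every name in it, including the load_tweet_archive helper, is an assumption rather than the bot’s real code:

```python
import random
from collections import defaultdict

def build_chain(tweets, order=2):
    """Map each run of `order` consecutive words to the words seen after it."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, max_words=20):
    """Random-walk the chain from a random starting state. The output is
    mostly grammatical but the generator has no idea what the words mean."""
    state = random.choice(list(chain))
    words = list(state)
    while len(words) < max_words and state in chain:
        words.append(random.choice(chain[state]))
        state = tuple(words[-len(state):])
    return " ".join(words)

# archive = load_tweet_archive("jeffry.json")  # hypothetical helper
# tweet_text = generate(build_chain(archive))
```

The story turns on a property that is visible right in the sketch: generate() picks each next word by observed frequency, not by meaning, so nothing prevents a threatening combination from coming out.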
Speaking through his own Twitter account, van der Goot said:
I just got visited by the police because of a death threat my Twitter bot made.
So I had to explain Twitter bots to the police. And I can't really blame them for having to take it seriously.
I'm going to delete my bot for now, because that's what they want.
Speaking to The Guardian, van der Goot appeared a little confused over where legal responsibility for the bot’s actions should lie. While he admitted starting the bot and running it under his own ‘name’, it was, he said,
A random generator, so yes it is possible that something bad can come out of it, but to treat it as if I made that threat does not make sense to me. I feel very conflicted about it, I can see their point but it does not feel right to me that the random output of a program can be considered something I said.
Likewise, the bot’s developer Wxcafe said via Twitter that police involvement was scary, adding:
Of course since I don't have any legal knowledge I don't know who is/should be held responsible (if anyone) but like. kinda scared right now.
This isn’t the first time we’ve written about a robot getting its owners into trouble. In January, we told how a bot went on a drug-buying spree and ended up getting its stash, and itself, seized by police.
In that case, the bot’s owners believed their freedom of expression rights would protect them from prosecution.
I would imagine it’s likely in any case like this that legal responsibility lies with whoever programmed the bot – arguing artistic license and freedom of speech probably isn’t going to get you off the hook.
Whether that holds true in all cases remains to be seen, and will likely only be settled if and when a case featuring the actions of a bot comes before a judge.
One question remains, however: how did the police find out about the tweet in the first place? According to CBR, the offending message was not reported to them, meaning they must have discovered it some other way – perhaps internet surveillance does have some legitimate value after all?
Composite image of robot and hashtag courtesy of Shutterstock.
Skynet is upon us!
There. Got that obligatory reference out of the way.
Err, when did saying something become a crime? Now, if the bot *actually* killed someone, that’s different.
Err, living in the UK, we have no freedom of speech
Err, did anyone stop you saying that, Anonymous? It doesn’t look like it.
Can you give a single, genuine example of when you were prevented from saying something? I rather doubt it.
Having the freedom to say something doesn’t mean you are not responsible for what you say. If you slander a person by saying something untrue about him, claiming freedom of speech will not protect you from legal action.
Here in the UK, as in most of the so-called Western countries, we have far more freedom than the rest of the world, but still we get people whining about freedom of speech, government surveillance, etc, etc. Perhaps the anti-government brigade would prefer to move to China, or Russia, or North Korea, or somewhere similar, and see how their claims to freedom of speech are received there. No? Somehow I didn’t think you would jump at the chance.
Mr Jeffry van der Goot doesn’t know who, if anyone, should be held responsible for what HIS bot did. Presumably the bot didn’t connect itself to Twitter, so whoever did connect it must be responsible. Looks like it’s down to you, Mr Jeffry van der Goot. That wasn’t really difficult to work out, was it?
Freedom of speech has nothing to do with your actual ability to say something, but everything to do with the legal consequences thereof. And probably more importantly, the chilling effects that the threat of legal consequences has on one’s day-to-day speech.
Speech in the UK is heavily curtailed relative to other western democracies:
http://en.wikipedia.org/wiki/Freedom_of_speech_by_country#United_Kingdom
Telling people to move to North Korea is idiotic. We should strive towards a higher ideal, rather than be content that there exist worse places to live.
Threats are a crime. I don’t know about the legality of bots, but if a human threatens bodily harm or worse, that is definitely illegal.
Consider this scenario:
A person legally buys a gun and fires it into the sky as part of a celebration. Ten miles away the bullet comes down and kills a person. Who is responsible?
Is the gun maker responsible or the person who fired it or no one?
I would argue the person firing it, even though they had no intention of hurting anyone and were not doing anything illegal.
Same scenario here. The bot software’s maker can’t be responsible if someone sets up a system to tweet text on their behalf, in the same way a gun manufacturer is not responsible if someone shoots somebody. The software maker explains clearly what the bot will do; the individual chooses to install and use it. The law is pretty clear here – you are responsible for your actions.
If you set something like this up, then no matter what your intentions were, you are responsible for the fallout if something goes wrong, IMO.
>A person legally buys a gun and fires it into the sky as part of a celebration.
But that’s illegal in the UK. In what country is it legal to do that?
“Never point your weapon at anything unless you are willing to destroy it.”
If you deliberately pop a cap into the air for fun, and it comes down and kills or injures someone, you should expect to be in deep and serious trouble. Even in jurisdictions where laws about firearm ownership and registration are quite liberal, e.g. some states in the USA, there are laws against damnfool reckless firearm stupidity…
And they want to have robots driving cars soon! If they can go so wrong, they can’t be trusted with a lethal weapon like a car – or a gun. And there is far more to go wrong in a car, and far too many variables in the circumstances it may meet that might well not be programmed for.
Is it 1984 already?
In this case the bot didn’t actually go wrong, LindaB.
According to the original story, the bot is designed so that it “takes random words from his Twitter archive and, through the use of an algorithm, attempts to tweet coherent sentences.”
The bot did exactly what it was supposed to do. The problem is that although the bot no doubt recognises the difference between nouns, verbs, and other types of words, it has no understanding of what those words actually mean. To a machine, the word ‘kill’ is no different from other verbs such as ‘look’, ‘walk’, or ‘smile’.
The words that the tweet consisted of were all in Mr van der Goot’s Twitter archive, so presumably he must have, at some point, typed them himself.
On your other point, I must admit that like you I also have doubts about cars driving themselves.
Car bots are not programmed to be random, though. They’re programmed to be very timid and deliberate, and to communicate with the humans and bots around them as clearly as possible. They are still vulnerable to bugs and tampering, but that’s different from the question raised here.
The ‘infinite monkeys’ thought experiment says that if you had an infinite number of monkeys typing random things on typewriters for an infinite amount of time, they’d eventually write the complete works of Shakespeare. You’d probably also get The Anarchist Cookbook and several other texts that could land you in trouble. What’s interesting about this case is that bots can spew randomness much faster than typewriter monkeys, and the thought experiment has just manifested itself in reality. One could equally ask whether the bot’s owner is legally responsible for the beautiful things it said. Or whether it’s possible to train a bot not to say random things that are bad (or at least to filter them out).
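The filtering idea at the end of that comment is easy to sketch naively: re-roll the generator until the output clears a blocklist. This is only an illustration under assumptions – the blocklist and the generate callable it wraps are made up here, and real content moderation is far harder than keyword matching:

```python
BLOCKLIST = {"kill", "bomb", "shoot"}  # illustrative; a real list would be far longer

def safe_generate(generate, max_attempts=100):
    """Re-run the generator until its output contains no blocked word."""
    for _ in range(max_attempts):
        text = generate()
        if not BLOCKLIST & set(text.lower().split()):
            return text
    return None  # better to post nothing than something questionable
```

Even this toy version shows the limits: it would have caught ‘kill’, but would happily pass a threat phrased without any listed word.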
Prematurely holding up the investigation of a man who made no actual threat, and whose legal case is still in dispute, as evidence of the “legitimate value” of overbearing (and inaccurate) surveillance is exactly the sort of weak justification that spying always attracts.
I for one welcome our new genocidal robot overlords !