Is artificial intelligence as big a threat as nuclear weapons?

He brainstormed the Hyperloop, a near-supersonic (roughly 760 mph) air travel machine made of friction-foiling aluminum pods, provided the concept behind what’s now the second largest provider of solar power systems in the US, invested $100 million of his own money into putting people on Mars, and open-sourced electric car company Tesla’s patents for the betterment of mankind – or, well, at least, to jump-start development of electric cars.

In short, Mr. $1/year Tesla CEO Elon Musk knows a thing or two about the cutting edge, so when he says that artificial intelligence machines are “potentially more dangerous than nukes,” people are going to pay attention.

Here’s what he tweeted on Saturday:

@elonmusk

Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.

In Musk’s view, then, the prospect of melting flesh from bones, plunging the Earth into a 20-year-long winter and worldwide famine is slightly less horrible than Arnold Schwarzenegger promising us that he shall return.

As if an atomic holocaust weren’t prickling enough neck hairs, Musk went on to suggest that we humans might turn out to be just a bunch of flesh-lackeys that escort our digital overlords into their thrones:

@elonmusk

Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable

This isn’t the first time that Musk has expressed these concerns.

Appearing on CNBC in June, he told host Kelly Evans that he’s invested in more than one AI research company.

Profits aren’t motivating Musk. Rather, he wants to keep an eye on new advancements, he said:

It's really, I like to just keep an eye on what's going on with artificial intelligence. I think there is a potential dangerous outcome there.

He’s not really sure what AI could lead to, but The Terminator is definitely the sort of situation we want to watch out for:

I mean, there have been movies about this, you know, like The Terminator. I don't think - in the movie The Terminator, they didn't create AI to, they didn't expect, you know, some sort of The Terminator-like outcome. It is sort of like the Monty Python thing. Nobody expects the Spanish Inquisition. It's just, you know, but you have to be careful.

Whatever the future holds, the AI of the present is still easily defeated.

In June, the Eugene Goostman chatbot was ballyhooed for supposedly passing the Turing test, which gauges whether a computer can fool a human into believing she’s conversing with another human.

But that supposed milestone wasn’t really a milestone at all, some said, including Naked Security’s own Paul Ducklin who declined to herald the dawn of our robot overlords.

Here’s what Jamie Bartlett, the Director of the Centre for the Analysis of Social Media at the think tank Demos, had to say about Eugene in an article he wrote for The Telegraph:

Frankly, [Eugene's] not very believable. ... Eugene sounds exactly like a robot should: repetitive, nonsensical, and littered with non-sequiturs.

This is because of how artificial intelligence works. Eugene would have been fed with hundreds of thousands, perhaps millions, of examples of conversations and been programmed or trained by a human analyst to look for grammatical, linguistic, word-based and other patterns in them.

He would then imitate and simulate a conversation based on the examples he's been given. That's why it's called "artificial" intelligence: it's mimicking the real thing.
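To get a feel for the pattern-matching approach Bartlett describes, here’s a toy sketch of an ELIZA-style chatbot. (This is an illustrative simplification, not Eugene’s actual code: a handful of hand-written rules stand in for the huge trove of mined conversation examples, and the `reply` function and its patterns are made up for this example.)

```python
import re

# Toy ELIZA-style chatbot: each rule pairs a regex pattern with a canned
# response template. A real system would derive many thousands of such
# patterns from example conversations; we hard-code three.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
FALLBACK = "Tell me more."  # non-sequitur used when nothing matches

def reply(message: str) -> str:
    """Return the response for the first matching pattern, else a fallback."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Echo the user's own words back, minus trailing punctuation
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(reply("I feel tired today"))    # -> Why do you feel tired today?
print(reply("What is the weather?"))  # -> Tell me more.
```

Note how the fallback produces exactly the repetitive, non-sequitur-laden style Bartlett complains about: any input the rules don’t cover gets the same stock deflection, because the program has no understanding to fall back on, only patterns.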

If you want to understand for yourself why people weren’t fooled by Eugene, this conversation’s a lot of fun.

So is Eugene a sign that AI is nothing to worry about or an early, uncouth iteration of our soon-to-be rulers and masters? Is Musk misguided or do you think he’s on the money?

Tell us what you think in the comments and we’ll try to guess if you’re a bot.