Is artificial intelligence as big a threat as nuclear weapons?

Filed Under: Featured, Rogue applications

He brainstormed a 760 mph subsonic air travel machine made of friction-foiling aluminum pods, provided the concept behind what's now the second largest provider of solar power systems in the US, invested $100 million of his own money into putting people on Mars, and open-sourced electric car company Tesla's patents for the betterment of mankind - or, well, at least, to jump-start development of electric cars.

In short, Mr. $1/year Tesla CEO Elon Musk knows a thing or two about the cutting edge, so when he says that artificial intelligence machines are "potentially more dangerous than nukes," people are going to pay attention.

Here's what he tweeted on Saturday:

@elonmusk

Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.

In Musk's view, then, the prospect of melting flesh from bones, plunging the Earth into a 20-year-long winter and worldwide famine is slightly less horrible than Arnold Schwarzenegger promising us that he shall return.

As if an atomic holocaust weren't prickling enough neck hairs, Musk went on to suggest that we humans might turn out to be just a bunch of flesh-lackeys that escort our digital overlords into their thrones:

@elonmusk

Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable

This isn't the first time that Musk has expressed these concerns.

Appearing on CNBC in June, he told host Kelly Evans that he's invested in more than one AI research company.

Profits aren't motivating Musk. Rather, he wants to keep an eye on new advancements, he said:

It's really, I like to just keep an eye on what's going on with artificial intelligence. I think there is a potential dangerous outcome there.

He's not really sure what AI could lead to, but The Terminator is definitely the situation we want to watch out for:

I mean, there have been movies about this, you know, like The Terminator. I don't think - in the movie The Terminator, they didn't create AI to, they didn't expect, you know, some sort of The Terminator-like outcome. It is sort of like the Monty Python thing. Nobody expects the Spanish Inquisition. It's just, you know, but you have to be careful.

Whatever the future holds, the AI of the present is still easily defeated.

In June, the Eugene Goostman chatbot was ballyhooed for supposedly passing the Turing test, which gauges whether a computer can trick a human into thinking she's communicating with another human.

But that supposed milestone wasn't really a milestone at all, some said, including Naked Security's own Paul Ducklin, who declined to herald the dawn of our robot overlords.

Here's what Jamie Bartlett, the Director of the Centre for the Analysis of Social Media at the think tank Demos, had to say about Eugene in an article he wrote for The Telegraph:

Frankly, [Eugene's] not very believable. ... Eugene sounds exactly like a robot should: repetitive, nonsensical, and littered with non-sequiturs.

This is because of how artificial intelligence works. Eugene would have been fed with hundreds of thousands, perhaps millions, of examples of conversations and been programmed or trained by a human analyst to look for grammatical, linguistic, word-based and other patterns in them.

He would then imitate and simulate a conversation based on the examples he's been given. That's why it's called "artificial" intelligence: it's mimicking the real thing.
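If you want a feel for what "trained to look for patterns and then imitate them" means in practice, here's a minimal sketch in Python. It is not Eugene's actual code (which was never published) - just a toy word-chain generator, built on a few stand-in example lines, that illustrates the mimicry Bartlett describes:

    # A toy illustration - NOT how Eugene was actually built - of pattern
    # mimicry: learn which word tends to follow which in example
    # conversations, then "reply" by replaying those learned patterns.
    import random
    from collections import defaultdict

    # A handful of lines standing in for the "hundreds of thousands,
    # perhaps millions" of real training conversations.
    training_lines = [
        "hello how are you today",
        "hello i am fine thank you",
        "how are you i am fine",
        "what music do you like i like guitar music",
    ]

    # Learn a word -> possible-next-words table from the examples.
    follows = defaultdict(list)
    for line in training_lines:
        words = line.split()
        for current, nxt in zip(words, words[1:]):
            follows[current].append(nxt)

    def reply(prompt, max_words=8):
        """Generate a response by walking the learned word-to-word patterns."""
        word = random.choice(prompt.lower().split())
        out = []
        for _ in range(max_words):
            choices = follows.get(word)
            if not choices:
                break
            word = random.choice(choices)
            out.append(word)
        return " ".join(out) or "i am fine thank you"

    print(reply("hello how are you"))  # e.g. "are you i am fine"

The output can look conversational for a few words at a stretch, but there's no model of meaning behind it - which is exactly why replies built this way come across as repetitive and littered with non-sequiturs.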

If you want to understand for yourself why people weren't fooled by Eugene, this conversation's a lot of fun.

So is Eugene a sign that AI is nothing to worry about or an early, uncouth iteration of our soon-to-be rulers and masters? Is Musk misguided or do you think he's on the money?

Tell us what you think in the comments and we'll try to guess if you're a bot.



4 Responses to Is artificial intelligence as big a threat as nuclear weapons?

  1. Peter Yates · 263 days ago

    Lisa: "Tell us what you think in the comments..."
    'Peter': "I'm sorry, Lisa. I'm afraid I can't do that."

    To be a bit more serious ...
    Science fiction sometimes has a habit of becoming science fact. However, in this case, science fiction might be giving us a warning... which is being repeated by Elon.
    Humanity must always have control of the on/off switch that will reboot the AI to its 'manufacturer's configuration'. Maybe there should also be a remotely accessible backdoor routine that the AI can never modify.
    (With regards to Arthur C. Clarke, Stanley Kubrick, and others.)

  2. Technology is accelerating at a rapid rate... and with all the data mining going on (aka training data), it could possibly happen in the future. As technology advances it can apply to pretty much anything: agriculture, education, health care, business management, etc. In the meantime, small forms of AI are being built for each, all constantly improving with the advances in technology. All while data is being collected in some form, whether that be in the "cloud" or on a local hard drive.

    Companies like Facebook, Google, and probably Instagram and others all use some form of AI algorithms. Facebook for faces (that we know of) and Google for comparing images, searching, advertising, and who knows what. Combine those algorithms with 3D manipulation of 2D images (www.cs.cmu.edu/~nkholgad/om3d.html) and multiple machine learning algorithms, and next thing you know a Kinect-like camera will be able to recognize objects and interact with them by passing on the information to motion control algorithms. Tags and search terms attached to images put into the training set can also be used to teach the name of the item, which is then passed along to another algorithm for speech synthesis.

    See the trend? Technically, the more algorithms it takes to get the end result, the more power it requires. It already takes quite a bit of computing power to pull all of that off, but companies like Google, Amazon, etc. have the resources to do it.

    I don't see it being the threat of a nuke for another 20-50 years, though, assuming "We The People" stay on our toes and don't let the Government make the decisions on its own. Obviously it isn't qualified.

  3. David · 262 days ago

    Musk is referring to work by Nick Bostrom on existential risk. Worth reading.

  4. Canadian_pessimist · 258 days ago

    Turing tests with interaction between a machine and a human are not the AI issue we need to be concerned about... the hundreds of millions of sensors now taking in information across the globe, and how they are connected, are the concern. For AI to run amok it has no need to ever actually interact or speak with humans; it only has to somehow develop a need for self-preservation and propagation, which are defining characteristics of even the most basic micro-organisms. If "AI" has only those goals in mind and has the means to actively carry them out (via robotics such as drones, factory robots, human companion robots, or disruption of other computer systems and networks), there is potentially a serious problem.

    We're not there yet (needing to worry about a Terminator-like scenario), but unfortunately humanity is a collection of clans of greedy animals that are only slightly smarter than apes, and that are in competition with one another to create "the next big thing", whether for profit, fame, "national security", etc. The lack of coordination among the dumb animals (us) means that unintended consequences of the interactions between various connected technologies are almost guaranteed, rather than simply likely. My guess is that, given the accelerating pace of advances, we may expect some limited Terminator-like scenario within the next 5-10 years. Hopefully it will be containable and will give us a good enough scare to realize our foolhardiness, much the way Nagasaki and Hiroshima made us realize precisely how lethal the nuclear bomb technology we had developed was.

    Humans simply lack the capacity to look at the long-term effects of our actions: witness our current problems with global warming, food and income distribution, centuries-long tribal animosities, environmental destruction, and the poisoning of our environment with toxins and unnatural chemical compounds. We only evolved enough to be apex predators in the last ten thousand years, give or take, which is the blink of an eye in biological terms. Thus our ability to innovate technologically has far outstripped our capacity to envision the long-term consequences associated with those technological innovations.

    Creation of AI, either intentionally or unintentionally, will simply be a Chernobyl-like event at a digital level. Chernobyl spawned mutated physical entities; AI will spawn mutated digital entities.


About the author

I've been writing about technology, careers, science and health since 1995. I rose to the lofty heights of Executive Editor for eWEEK, popped out with the 2008 crash, joined the freelancer economy, and am still writing for my beloved peeps at places like Sophos's Naked Security, CIO Mag, ComputerWorld, PC Mag, IT Expert Voice, Software Quality Connection, Time, and the US and British editions of HP's Input/Output. I respond to cash and spicy sites, so don't be shy.