This time last year Steve Wozniak was sounding a cautionary note about the future of Artificial Intelligence (AI), warning that computers would one day take over from humans and joking that we might even end up as their pets.
In a recent interview with Australia’s ABC TV programme Lateline, the engineering genius appeared more sanguine about the future of self-aware, super-intelligent Artificial Intelligence and much more concerned with the real-world killer robots that are all but upon us: Lethal Autonomous Weapon Systems (LAWS).
The Apple co-founder maintains that human-level Artificial Intelligence won’t happen for “a very long time”:
It might take 200 years before they are really fully able to operate all of their needs in the world, until then they’re going to need human beings … I’m not really worried at all.
Starting the interview with a discussion on LAWS, the Apple co-founder stressed the dangers of deploying autonomous weapons:
It’s very scary to make autonomous weapons that are just following some programmed set of instructions … even when you’re driving a car there is no one set of rules … if a lane is closed off you have to do something against the rules … I don’t think it’s a good idea at all. I don’t think we can really stop it. I think it’s just too easy, low cost, economic.
… For the forces in the world that want to be combative you are going to have situations where they’ll say … our weapons are going to have to be competing with other people’s weapons …
The topic of autonomous weapons isn’t just on Woz’s mind. The third Convention on Conventional Weapons (CCW) meeting on lethal autonomous weapons systems was held last week at the United Nations (UN) Palais des Nations in Geneva.
The states attending the meeting agreed to continue their deliberations and the process to explore “possible recommendations on options” seems likely to continue on into 2018, a pace of action described by the Campaign to Stop Killer Robots as “lacklustre”.
During the meeting, five countries added themselves to the list of states seeking a pre-emptive ban on the use of autonomous weapons, bringing the total to fourteen. (The countries seeking a pre-emptive ban are Algeria, Bolivia, Chile, Costa Rica, Cuba, Ecuador, Egypt, Ghana, Holy See, Mexico, Nicaragua, Pakistan, State of Palestine, and Zimbabwe.)
Autonomous weapons aside, Mr Wozniak believes that, for the foreseeable future at least, machines will simply be helping us to go further. For AI to reach parity with humans, it doesn’t just need intelligence, he says; it needs life experience too:
A lot of what makes a human too is how humans live … a machine … hasn’t lived and walked around and gone to beaches and had nice days and fun days and fallen in love and had their own families and children.
Sadly it seems that while robots might be launching lethal weapons in the very near future they could be a couple of centuries from enjoying a trip to the beach with loved ones.
Image of X-45A courtesy of DARPA
5 comments on “Woz on autonomous weapons: “I don’t think it’s a good idea. I don’t think we can stop it.””
I like how the drone in the image looks similar to a Cylon Raider. Can we stop calling them drones and call them Cylons, or do I have to wait a few more years?
It says a lot about the state of the world and priorities of humans that only 14 countries so far want autonomous weapons banned.
Talking about autonomous weapons reminds me of a ‘Star Trek: Voyager’ episode called ‘Dreadnought’ … A space weapon locates a planet that it ‘thinks’ is a target. It is incorrect because the weapon is in a different part of the galaxy, and the planet just happens to be very similar to the intended target.
Today, a software bug can potentially delete your files…. tomorrow it will be able to shot you on sight.
Or delete an o.