Artificial intelligence will only be a risk to our existence if we don't change how we approach it. As the computing industry stands, the discussion surrounding AI is quite narrow: on one side are the people who don't think it should be pursued at all, and on the other are the people who think only good can come of it. The correct stance here is, as with most things, somewhere in the middle.
Moderation is the key to getting this right, because if we get it wrong, we may only be able to get it wrong once.
On our current course, AI is an inevitability. Only an event of enormous and devastating scale could permanently stop this process, because it would have to end our ability to improve our machines not just for decades or centuries, but forever. Nothing short of the end of our species would stop us from developing AI.
We continue to improve our computers year after year, and eventually we are going to create machines that are more intelligent than we are. In some respects this has already happened: the best chess player in the world is now a computer, and machines can recognize faces faster and more accurately than humans can, even when they have been programmed to track only a small number of facial features. Eventually these machines will become intelligent enough to improve themselves, producing what the mathematician I. J. Good called an 'intelligence explosion.'
This is where modern society's failure to address the inherent risk is most evident. Movies, TV shows, books, and other popular media caricature this event, telling stories of malicious robots enslaving or exterminating the human race simply because they are better than us.
This isn't what concerns the vast majority of computer scientists, though. The problem is not that these machines will become malevolent; it is that they will be so much more competent and intelligent than we are that any divergence between our goals and theirs could destroy us.
The easiest way to analogize this is by thinking about how we relate to ants. You never step on an ant because you really hate it and want to inflict as much pain and suffering as possible on this tiny creature. You usually step on an ant by accident, or because it is being a nuisance: it and its nestmates decided to build their colony right on your front doorstep, for instance.
The eventual gap between an AI's intelligence and our own will be so large that this is how it will view us.
As a quick side point, this is one of the main worries that prominent cosmologists like Neil deGrasse Tyson have about encountering alien life. Consider that the genetic difference between us and a chimpanzee is roughly 1%, and that this difference separates digging in the ground with a stick from building a global civilization and travelling to the moon. Now imagine how easy it would be for an alien civilization with that much more intelligence than us to either wipe us out or simply overlook us.
Getting back to AI: this looks as though it is going to happen, and it will happen as quickly as we can make it happen.
For example, almost every country with a sizable scientific budget is already racing toward the goal of building an AI, and whichever country manages to build it first will control the most powerful object in the world. Connect such an AI to the internet and it could penetrate any country's secure government databases and expose whatever information its controllers would like. As this goal draws near, it isn't hard to imagine the world entering a kind of cold-war state, with each country cutting safety corners to reach AI first. And when one country does eventually win, how will the others react? Are they going to simply roll over and accept Sweden or Russia as the new overlord of planet Earth?
There needs to be a slow and methodical way of achieving AI, one that involves the cooperation of every country that wants to achieve it. That is not going to happen in the current political climate, and it isn't going to happen if the threat of AI isn't taken seriously.
The neuroscientist Sam Harris has observed that, in discussions of AI, you can describe something that is both terrible and likely to happen, and because it involves super-intelligent robots, people will find it cool. This is a problem.
One of the main responses to AI doomsayers like myself is: "Don't worry about it. This is 50 to 100 years away; it's not our problem."
Fifty years is not much time when we are talking about meeting one of the greatest challenges the human race will ever encounter.
The computer scientist Stuart Russell has a very good analogy to counter this response. He asks us to imagine that we receive a message from outer space, from an alien civilization, and it simply reads, "People of Earth, we will arrive on your planet in 50 years. Get ready."
Our reaction to a message like that should be exactly the reaction we have to AI development.