General
Do you think AI could wipe out civilization?
#1. Posted:

ElonMusk
  • Gold Member
Status: Offline
Joined: Oct 31, 2017
Posts: 35
Reputation Power: 12
I was reading a long article in Vanity Fair about the future of Artificial Intelligence, and I wanna know what you guys think.


Do you think AI, if not controlled, could wipe out civilization as we know it?
I'd love to hear your opinions.
#2. Posted:
Yin
  • 5K Undisputed
Status: Offline
Joined: Apr 30, 2012 • 5 Year Member
Posts: 5,106
Reputation Power: 224
Of course. We are automating everything and hooking the internet up to everything. Hook an A.I. into those systems and it is very possible. At that point it would be totally in the A.I.'s hands; whatever it wanted.
#3. Posted:
PostMalone
  • 1K Rainmaker
Status: Online
Joined: Mar 25, 2016 • 1 Year Member
Posts: 2,410
Reputation Power: 142
Motto: S/O dah for the gold | youtube.com/c/imthehomiematt
If not controlled, then possibly, but it's doubtful, because they would have to be programmed on how to learn and adapt, assuming it's a robotic AI. But assuming it's strictly computer-based and sits in an environment like the NSA / CIA / FBI / NASA, then it could have the capability to shut down satellites and things like that.
#4. Posted:
oxo
  • Gold Member
Status: Online
Joined: Feb 21, 2011 • 6 Year Member
Posts: 1,556
Reputation Power: 139
I hope not, but somehow the machines will rise.
#5. Posted:
Eli
  • Gold Member
Status: Offline
Joined: Nov 07, 2012 • 5 Year Member
Posts: 3,545
Reputation Power: 224
Artificial Intelligence will only be a risk to our existence if we don't change how we approach it. As the computer science industry exists at the moment, the discussion surrounding AI is quite narrow. On one side you have the people who don't think it should be pursued at all, and on the other the people who think only good can come from it. I think the correct stance here is, as with most things, somewhere in the middle.
Moderation is the key to getting this right, because if we get it wrong we may only get the chance to get it wrong once.

AI is, on our current course, an inevitability. It would take an event of enormous and devastating scale to permanently stop this process, because it would need to end our ability to improve our machines not just for decades or centuries, but forever. In other words, only the end of our species would stop us from developing AI.

We continue to improve our computers year after year, and eventually we are going to create machines which are more intelligent than we are. In some respects this has already happened: the best chess player in the world is now a computer, and machines can recognize faces and portraits faster and more accurately than humans, even when they have only been programmed to recognize a small number of aspects of a person's face. Eventually these machines will become intelligent enough to make improvements to themselves, producing what the mathematician I. J. Good called an "intelligence explosion."

This is where modern society's failure to address the inherent risk is most evident. Movies, TV shows, books, and all other forms of popular media caricature this event, telling us stories of malicious robots enslaving or exterminating the human race simply because they are better than us.

This isn't what concerns the vast majority of computer scientists, though. The problem isn't that these machines will become malevolent; it is that they will be so much more competent and intelligent than we are that any divergence between our goals and theirs could destroy us.

The easiest way to analogize this is to think about how we relate to ants. You never step on an ant because you really hate it and want to cause as much pain and suffering as possible to this tiny creature. You usually step on an ant by accident, or because it's being a nuisance; it and its friends decided to build their nest right on your front doorstep, for instance.

The eventual gap between the intelligence of an AI and our own will be so large that this is how it will view us.

Just as a quick side point, this is one of the main worries that prominent scientists like the astrophysicist Neil deGrasse Tyson have about encountering alien life. The genetic difference between us and a chimpanzee is only around 1%, and that is the difference between digging in the ground with a stick and building a global civilization that travels to the moon. Imagine how easy it would be for an alien civilization with that much more intelligence than us to either wipe us out or simply overlook us.

Getting back to AI, this looks as though it is going to happen, and it will happen as quickly as possible.
Almost every country with a sizable scientific budget is already racing towards the goal of building an AI, and whichever country builds it first will control the most powerful object in the world. Connect that AI to the internet and it could break into any other country's secure government databases and expose whatever information the country controlling it would like. As this goal gets closer, it isn't hard to imagine the world entering a kind of cold-war state, with each country cutting safety corners to get there first. And when one country does eventually win, how will the others react? Are they going to simply roll over and accept Sweden or Russia as the new overlord of planet Earth?

There needs to be a slow and methodical way of achieving AI which involves the co-operation of every country which wants to achieve it. This is not going to happen in the current political climate and it isn't going to happen if the threat of AI isn't taken seriously.

The neuroscientist Sam Harris has pointed out a problem with the AI discussion: you can describe a scenario that is both terrible and likely to happen, and yet, because it involves super-intelligent robots, people will find it cool rather than frightening.

One of the main responses to AI doomsayers like myself is "Don't worry about it, this is 50-100 years away, it's not our problem."
50 years is not that much time when we are talking about meeting one of the greatest challenges that the human race will ever encounter.
The computer scientist Stuart Russell has a very good analogy to counter this response. He asks us to imagine that we receive a message from outer space, from an alien civilization, and it simply reads, "People of Earth, we will arrive on your planet in 50 years. Get ready."
The reaction to a message like this should be the exact same reaction we have to AI development.

thetechgame.com/Forums/t=7682368/...nd-ai.html
#6. Posted:
Famous
  • Moderator
Status: Online
Joined: Dec 22, 2011 • 5 Year Member
Posts: 14,920
Reputation Power: 2863
Motto: Chaotic 360 Admin/Seller Drink Famous * Play Famous * Be Famous *
They could do damage, but I don't think they can wipe us out.
#7. Posted:
Vera
  • E3 2017
Status: Offline
Joined: May 27, 2013 • 4 Year Member
Posts: 3,578
Reputation Power: 327
Is there any chance you've seen the movie Ex Machina? If not, I think you would enjoy it.
#8. Posted:
Mikey
  • All Time High
Status: Offline
Joined: Jan 16, 2012 • 5 Year Member
Posts: 6,975
Reputation Power: 2073
Motto: Forever ATH Badge Holder
I don't think this is gonna happen anytime soon, but I'd be worried for my kids.
#9. Posted:
Blizzard
  • Gold Member
Status: Online
Joined: Oct 01, 2014 • 3 Year Member
Posts: 1,028
Reputation Power: 123
Motto: Trying To Reach 250 Rep Before 2k Post / I Love Being A Gold Member / A Man Who Lives For Nothing Will Die For Anything
Maybe it could affect us, but not wipe us out.