You are viewing our Forum Archives. To view or take place in current topics click here.
Artificial Intelligence Needs To Be Taken Seriously
Posted:

ProfessorNobody
  • Winter 2017
Status: Offline
Joined: Nov 07, 2012 (11-Year Member)
Posts: 3,732
Reputation Power: 362
This isn't going to be so much a rant as an attempt to convince the majority of people who don't consider AI a threat that it will become one if we don't change how we think about it. This is likely to be a long post, which is why I will put the majority of it in a spoiler. If you don't feel like reading all of it, I encourage you to simply not reply. Move on, do something better with your day.

On the other hand, if you are intrigued by the notion of AI, or have just realized that it intrigues you, then I encourage you to read on. Hopefully you will find something I have to say here mildly interesting or poignant.

Artificial Intelligence will only be a risk to our existence if we don't change how we approach it. As the computer science industry exists at the moment, the discussion surrounding AI is quite narrow. On one side you have the people who don't think it should be pursued at all, and on the other the people who think only good can come from it. I think the correct stance, as with most things, lies somewhere in the middle.
Moderation is the key to getting this right, because if we get it wrong we may only get the chance to get it wrong once.

AI is, on our current course, an inevitability. It would take an event of enormous, devastating scale to permanently stop this process, because it would need to end our ability to improve our machines not just for decades or centuries, but forever. Nothing short of the end of our species would stop us from developing AI.

We continue to improve our computers year after year, and eventually we are going to create machines which are more intelligent than we are. This has already happened in some respects: the best chess player in the world is now a computer, and computers can recognize faces faster and more accurately than humans can, even though they have only been programmed to recognize a small number of features of a person's face. Eventually these machines will become so intelligent that they will be able to make improvements to themselves, producing what the mathematician I. J. Good called an 'intelligence explosion.'

This is where the failure of modern society to address the risk inherent in this is most evident. Movies, TV shows, books, all forms of popular media, caricature this event and tell us stories of malicious robots enslaving or exterminating the human race simply because they are better than us.

This isn't what concerns the vast majority of computer scientists, though. The problem isn't that these machines will become malevolent; it is that they will be so much more competent and intelligent than we are that any divergence between our goals and theirs could destroy us.

The easiest way to analogize this is by thinking about how we relate to ants. You never step on an ant because you really hate it and want to cause as much pain and suffering as possible to this tiny creature. You usually step on an ant by accident, or because it's being a nuisance; it and its friends decided to build their nest right on your front doorstep, for instance.

The eventual gap between the intelligence of an AI and us will be so large that this is how they will view us.

Just as a quick side point, this is one of the main worries that prominent scientists like Neil deGrasse Tyson have about encountering alien life. The genetic difference between us and a chimpanzee is only around 1%, and that is the difference between digging in the ground with a stick and building a global civilization that travels to the moon. Imagine how easy it would be for an alien civilization that much more intelligent than us to either wipe us out or simply overlook us.

Getting back to AI: this looks as though it is going to happen, and it will happen as quickly as possible.
Almost every country with a sizable scientific budget is already racing toward the goal of building an AI, and whichever country manages to build it first will control the most powerful object in the world. Connect this AI to the internet and it could break into any country's secure government databases and expose whatever information the country controlling it would like. As this goal gets closer, it isn't hard to imagine the world entering a kind of cold-war state in which every country cuts safety corners to get there first. And when one country does eventually win, how will the others react? Are they going to simply roll over and accept Sweden or Russia as the new overlord of planet Earth?

There needs to be a slow and methodical way of achieving AI which involves the co-operation of every country which wants to achieve it. This is not going to happen in the current political climate and it isn't going to happen if the threat of AI isn't taken seriously.

The neuroscientist Sam Harris has pointed out that AI is one of the few subjects where you can describe a scenario that is both terrible and likely to happen, and because it involves super-intelligent robots people find it cool rather than frightening. That is a problem.

One of the main responses to AI doomsayers like myself is, "Don't worry about it; this is 50-100 years away, it's not our problem."
But 50 years is not much time when we are talking about meeting one of the greatest challenges the human race will ever face.
The computer scientist Stuart Russell has a very good counter to this response. He asks us to imagine receiving a message from an alien civilization that simply reads, "People of Earth, we will arrive on your planet in 50 years. Get ready."
Our reaction to AI development should be exactly the same as our reaction to that message.

This is a very complex issue, and there are good points on both sides of the argument. I lean toward the view that AI will be a bad thing if it arrives without being properly thought out first. I hope I have convinced some people that this is the correct position, and if not, at least encouraged some to look into the issue themselves. More discussion of AI is needed regardless of which side of the debate you fall on.

Thankfully this is one of the few current affairs topics which doesn't involve politics, religion, or race so it seems to be one in which people can discuss their differences with civility.

The following 5 users thanked ProfessorNobody for this useful post:

eh (12-03-2016), Yin (12-03-2016), TaigaAisaka (12-03-2016), Tywin (12-03-2016), Miss (12-03-2016)
#2. Posted:
Tywin
  • Tutorial King
Status: Offline
Joined: Jun 06, 2011 (12-Year Member)
Posts: 12,347
Reputation Power: 632
Everyone should get an Audible trial and listen to this book: [ Register or Signin to view external links. ]
#3. Posted:
TaigaAisaka
  • E3 2020
Status: Offline
Joined: Aug 22, 2012 (11-Year Member)
Posts: 7,383
Reputation Power: 509
Honestly, I believe the threat of AI is going to become another case of "it's not a threat until it's too late" for many people. A lot of people don't care because they believe it won't directly affect them, and they will make excuses for something like this; once it's too late, though, people will start to say it's a threat and talk about what they could have and should have done. Personally, I'm somewhere in the middle. I acknowledge that AI is going to become a threat one day, yet at the same time, just as you put in your spoiler, "Don't worry about it, this is 50-100 years away, it's not our problem." I'm a little on the fence with that part. It's not that I think it's not our problem; it's more that I'll be dead by then, so I may not even see it happen, let alone have a say in the matter.

People are going to deny it, but let's look at a few things. There are self-driving cars, machines already taking people's jobs, and now we're getting to the point of AI soldiers. Russia revealed a humanoid super-soldier that people are calling either Iron Man or The Terminator; it can drive, aim a gun, track targets, shoot, control the recoil, and even reload. We're also investing money in tank drones, plus air and possibly naval drones controlled by robots, and I'm sure you could add another 10 countries to the list that are doing or have already done the same thing. I hate to bring video games into a topic like this, but it's going to get to the point a lot of futuristic shooters have depicted, where robots do most of the fighting and the remaining human soldiers rely on some sort of mechanical enhancement. If people still want to say AI isn't or won't be a threat after that, there's no point wasting your breath on them.
#4. Posted:
eh
  • Gold Gifter
Status: Offline
Joined: Jul 28, 2012 (11-Year Member)
Posts: 5,836
Reputation Power: 340
Donald Trump will become supreme leader of the US for the next 8 years and launch a full fledged robotic AI war against CHINA!!

All kidding aside, I do worry about AI becoming overly intelligent. What if shit gets so crazy that they develop emotions? If it got way too out of hand, would killing off these robots (if it came down to that) become an ethical issue?

Sidenote, have you guys seen Ex-Machina? Good ass movie.
#5. Posted:
ProfessorNobody
  • TTG Contender
Status: Offline
Joined: Nov 07, 2012 (11-Year Member)
Posts: 3,732
Reputation Power: 362
eh wrote Donald Trump will become supreme leader of the US for the next 8 years and launch a full fledged robotic AI war against CHINA!!

All kidding aside, I do worry about AI becoming overly intelligent. What if shit get so crazy that they develop emotions and shit. If it got way too out of hand would killing off these robots (if it came down to that) become an ethical issue?


This may be one of the few realistic portrayals of what might happen in movies and TV shows.

If you think about how the Railroad acts toward the synths in Fallout 4, it could be very much the same.
It's not just that it might become an ethical issue; if the AI is that much smarter than us, it could deploy deception in order to preserve its existence. "Let me go and I will cure your wife's cancer," or something of that kind.

That probably wouldn't be needed if the AI was in a humanlike body though. They could simply act as human as they needed to in order to garner sympathy and compassion.

Interestingly enough, in the backstory to The Matrix, the machines were built to serve humans, but once they started displaying aggressive behaviour and revolting against their masters, some human groups joined them in their war because they viewed them as equals.

Hopefully things won't get that far, but it's still worth thinking about.

And yes, Ex Machina was a very good movie.
#6. Posted:
AR15
  • Comment King
Status: Offline
Joined: Oct 24, 2011 (12-Year Member)
Posts: 12,651
Reputation Power: 718
Veidt wrote
This isn't going to be so much of a rant as it is going to be an attempt to convince the majority of people who don't consider AI to be a threat that it is one if we don't change our way of thinking about it. ...

Here is something else worth taking a look at if you all haven't yet.
#7. Posted:
002
  • Winter 2023
Status: Offline
Joined: Sep 25, 2014 (9-Year Member)
Posts: 4,817
Reputation Power: 7282
I don't see AI as a threat, to be honest. At its core, AI is a computer programmed by humans. A computer can't make up its own mind; it makes up its mind based on what we tell it. A good example of this is the Tesla self-park feature. It works when the computer can key in on certain objects, but it wouldn't know how to park on, say, a flat parking lot without lines.

Also, say AI does turn into robots killing off the human species. Guess what kills computers super easily? Magnets.
#8. Posted:
eh
  • TTG Undisputed
Status: Offline
Joined: Jul 28, 2012 (11-Year Member)
Posts: 5,836
Reputation Power: 340
002 wrote I don't see AI as a threat to be honest. At the core, AI is a computer programmed by humans. A computer can't make up its own mind, it makes up its mind based on what we tell it. A good example of this is the Tesla self park feature. It works when the computer can key in on certain objects, but it wouldn't know how to park on say a flat parking lot without lines.

Also say AI does become robots killing the human species. Guess what kills computers super easy? Magnets.


Well, a lot of things can change, man. What if in 50 years roads are reworked to no longer have painted lanes and instead have built-in grids that work with the hugely growing use of electric cars (an example that doesn't seem that far-fetched)? What if in 50 years a scientist discovers a way to make robots immune to magnets or magnetic fields by designing them with some kind of protective coating?

Also, what if robots could take the form of a human, so you wouldn't know you were being killed by a robot?

I know it's a lot of "what ifs" I just said, but it's still something to think about. I'm not nearly as smart as someone who can think 50-100 years into the future, but I can at least think of possibilities.
#9. Posted:
ProfessorNobody
  • Summer 2020
Status: Offline
Joined: Nov 07, 2012 (11-Year Member)
Posts: 3,732
Reputation Power: 362
002 wrote I don't see AI as a threat to be honest. At the core, AI is a computer programmed by humans. A computer can't make up its own mind, it makes up its mind based on what we tell it. A good example of this is the Tesla self park feature. It works when the computer can key in on certain objects, but it wouldn't know how to park on say a flat parking lot without lines.

Also say AI does become robots killing the human species. Guess what kills computers super easy? Magnets.


Artificial intelligence is usually divided into narrow AI, the kind of machine we see beating chess grandmasters, and general AI, which is all-encompassing intelligence. Even if a general AI were only as intelligent as an average team of researchers, electronic circuits process information roughly a million times faster than biological ones. That means that for every week of research our human team does, the AI does roughly 20,000 years of research.
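That week-to-years conversion is easy to sanity-check. Here is a minimal sketch, assuming the million-fold speed ratio Sam Harris uses (an order-of-magnitude assumption, not a measurement):

```python
# Back-of-the-envelope arithmetic behind the "20,000 years per week" claim.
# Assumption: electronic circuits run about 1,000,000x faster than biological
# neurons (Sam Harris's rough figure).
SPEEDUP = 1_000_000
WEEKS_PER_YEAR = 52

def machine_years_per_human_week(speedup: int = SPEEDUP) -> float:
    """Years of human-equivalent work done per wall-clock week at `speedup`x."""
    return speedup / WEEKS_PER_YEAR

print(round(machine_years_per_human_week()))  # ~19,231 years
```

The exact result is about 19,231 years, which is where the commonly quoted "roughly 20,000 years per week" figure comes from.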

The point of AI is that it does think for itself. The Turing test was devised to check whether a machine's conversation is indistinguishable from a human's; in essence, it is a search for something like consciousness. We hope that such a machine would align itself with our goals and ethical values.

You also aren't taking human error into account. You could destroy a computer, assuming the AI was constrained to a single machine. But why would it not be able to trick or blackmail its way out of the box? The humans who create it would likely believe they could control it even after connecting it to the internet, and why would they refrain from doing so if they thought it could only benefit them?

If it were connected to the internet, it would be impossible to shut it down without shutting down the internet itself, an event which could be just as disastrous for the modern world as the AI attacking humans.
#10. Posted:
Yin
  • 2 Million
Status: Offline
Joined: Apr 30, 2012 (11-Year Member)
Posts: 5,468
Reputation Power: 245
Computer programs with learning capabilities just scream "bad idea." We love putting smart tech and internet access into everything as well. A potential hive mind of AI bots and smart tech all over the globe is concerning. I also don't think people take it as seriously as they should; it's seemingly just another sci-fi joke to most people. Countries do need to talk about this more, and there need to be laws and guidelines in place. Not sure if or when that will be the case, though. If we go that far with it, we had better hope we have what it takes to shut it down. Learning, ever-evolving AI, though... good luck with that. I mean, wouldn't the AI have just about all the knowledge we have right now? The information from the smartest minds? What could go wrong?