Superintelligence and AI
#1. Posted:
ProfessorNobody
  • Blind Luck
Status: Offline
Joined: Nov 07, 2012 (11 Year Member)
Posts: 3,732
Reputation Power: 362
I'm aware that I am using the conspiracy forum very loosely here, but I can't think of anywhere else this would belong.
It's a controversial issue which requires debate and is theoretical in nature.

As such, The Rant forum doesn't exactly seem to fit.

This topic is just going to be for a general discussion about Artificial Intelligence.
Is it possible? Will it be a good thing or a bad thing? How far away are we from achieving it?

Most people don't seem to think about AI very much and still relegate it to simple science fiction, but as we advance technologically I think it is going to become a much more mainstream concern.

For those who want an introduction to AI, here is a panel of some of the world's best minds discussing it.

[embedded video: panel discussion on AI]

I have made my concerns about AI known on this website before and I will put them in a spoiler below, but I want to know what you guys think about it.

Artificial Intelligence will only be a risk to our existence if we don't change how we approach it. As the computer and science industries exist at the moment, the discussion surrounding AI is quite narrow. On one side you have the people who don't think it should be pursued at all, and on the other the people who think that only good can come from it. I think the correct stance here is, like with most things, somewhere in the middle.
Moderation is the key to getting this right because if we get it wrong we might only be able to get it wrong once.

AI is, on our current course, an inevitability. Only an event of enormous and devastating scale could permanently stop this process, because it would need to end our ability to improve our machines not just for some decades or centuries, but forever. The end of our species is what it would take to stop us from developing AI.

We continue to improve our computers year after year, and eventually we are going to create machines which are more intelligent than us. This has already happened in some respects. The best chess player in the world is now a computer, and machines can recognize faces faster and more accurately than humans can, even though they have only been programmed to recognize a small number of aspects of a person's face. Eventually these machines will become so intelligent that they will be able to make improvements to themselves, and there will be what the mathematician I. J. Good called an 'intelligence explosion.'
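
To make the idea concrete, here is a toy model of that feedback loop. Every number and growth rate in it is made up purely to show the shape of the curve, not to predict anything:

    # Toy model of I. J. Good's "intelligence explosion".
    # All constants are invented for illustration only.
    def intelligence_explosion(human_level=100.0, generations=30):
        ability = 1.0
        for g in range(generations):
            if ability < human_level:
                # Below human level, progress comes from human engineers
                # at a roughly steady rate.
                ability += 10.0
            else:
                # Past human level, the machine improves its own design,
                # so each generation's gain scales with current ability.
                ability *= 2.0
            print(f"generation {g:2d}: design ability = {ability:,.0f}")

    intelligence_explosion()

Progress is steady while humans do the designing; the curve turns vertical once the machine's output feeds back into its own design. That is the 'explosion.'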

This is where the failure of modern society to address the risk inherent in this is most evident. Movies, TV shows, books, all forms of popular media, caricature this event and tell us stories of malicious robots enslaving or exterminating the human race simply because they are better than us.

This isn't what concerns the vast majority of computer scientists though. The problem isn't that these machines will become malevolent; it's that they will be so much more competent and intelligent than we are that any divergence between our goals and theirs could destroy us.

The easiest way to analogize this is by thinking about how we relate to ants. You never step on an ant because you really hate it and want to cause as much pain and suffering as possible to this tiny creature. You usually step on an ant by accident, or because it's being a nuisance; it and its friends decided to build their nest right on your front doorstep, for instance.

The eventual gap between an AI's intelligence and ours will be so large that this is how it will view us.
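
To put the same point in programming terms, here is a deliberately silly sketch, with all names and numbers hypothetical, of an optimizer that tramples something its objective simply never mentions:

    # Goal divergence in miniature: the loop maximizes its stated
    # objective, and anything the objective doesn't model (the "ants")
    # gets trampled as a side effect. No malice appears anywhere.
    def optimize(steps=10):
        objective = 0    # the only thing the machine is told to care about
        ants = 100       # something real that the objective never mentions
        for _ in range(steps):
            objective += 10
            ants -= 15   # resources diverted, habitat paved over, etc.
        return objective, max(ants, 0)

    print(optimize())  # (100, 0): objective met, ants gone

No line of that loop hates the ants; they simply aren't in the objective.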

Just as a quick side point, this is one of the main worries that prominent scientists like Neil deGrasse Tyson have about encountering alien life. The genetic difference between us and a chimpanzee is only about 1%, and that is the difference between digging around in the ground with a stick and building a global civilization that travels to the moon. Imagine how easy it would be for an alien civilization that much more intelligent than us to either wipe us out or simply overlook us.

Getting back to AI, this looks as though it is going to happen and it will happen as quickly as possible.
For example, almost every country with a sizable scientific budget is already racing towards the goal of building an AI, and whichever country manages to build it first will control the most powerful object in the world. Connect that AI to the internet and it will be able to break into any other country's secure government databases and expose whatever information the controlling country likes. As this gets close to being achieved, it isn't hard to imagine the world entering a kind of cold-war state in which each country cuts safety corners to reach AI first. And when one country does eventually win, how will the others react? Are they going to simply roll over and accept Sweden or Russia as the new overlord of planet Earth?

There needs to be a slow and methodical way of achieving AI which involves the co-operation of every country which wants to achieve it. This is not going to happen in the current political climate and it isn't going to happen if the threat of AI isn't taken seriously.

Neuroscientist Sam Harris has pointed out a problem with the AI discussion: you can describe a scenario which is both terrible and likely to happen, and because it involves super-intelligent robots, people will find it cool rather than alarming.

One of the main responses to AI doomsayers like myself is "Don't worry about it, this is 50-100 years away, it's not our problem."
50 years is not that much time when we are talking about meeting one of the greatest challenges that the human race will ever encounter.
The computer scientist Stuart Russell has a very good analogy to counter this response. He asks us to imagine that we receive a message from outer space, from an alien civilization, and it simply reads, "People of Earth, we will arrive on your planet in 50 years. Get ready."
The reaction to a message like this should be the exact same reaction we have to AI development.

The following 4 users thanked ProfessorNobody for this useful post:

BJP (11-12-2017), ElonMusk (11-12-2017), Skates (02-13-2017), Yin (02-13-2017)
#2. Posted:
Z06
  • Winter 2017
Status: Offline
Joined: Feb 27, 2012 (12 Year Member)
Posts: 2,819
Reputation Power: 357
Some of this seems extreme
#3. Posted:
ProfessorNobody
  • TTG Contender
Status: Offline
Joined: Nov 07, 2012 (11 Year Member)
Posts: 3,732
Reputation Power: 362
GT40 wrote Some of this seems extreme


Extreme in what way? That this isn't really a possibility?

To say that this isn't a possibility is to simply say that we will stop progressing technologically.
Every year we become more advanced. Our computers have more processing power, and their circuits become faster and more intricate year after year.
Eventually they will reach a point where an input system allows them to take in information from the environment, process it, catalog it, and solve problems all by themselves.
This might not happen in the next 5 years, but a lot of these computer scientists put it at 50-100 years, which is within a lot of our lifetimes.
#4. Posted:
Yin
  • E3 2017
Status: Offline
Joined: Apr 30, 2012 (11 Year Member)
Posts: 5,468
Reputation Power: 245
I don't know a whole lot about A.I., but just by using basic reasoning I felt it could be very disastrous. It doesn't help that people who do know about A.I. share what I felt were some very real concerns. I thought I was just kind of paranoid.

I feel it could basically be an atomic bomb type of creation, where whoever has it has the power to destroy in a way that we have never seen. Thing is, containing it will probably be a lot harder. Not only is it about the countries involved, the machine with the A.I. could be an issue depending on what type of "body" it has or what it has access to through the internet or other connections. Good luck containing it then. I mean, I feel it'd be like putting a regular dog leash on a dragon. Better hope it accepts your boundaries or at least peacefully declines them. What happens when humans basically create a god?

A lot of my worry comes from the fact that I know the militaries of the world will want to use it. Like, are we going to see drones with it? Ships? Planes? Humanoid robot soldiers? Seems like we are heading toward creating the Transformers at that point, lol. Truthfully though, am I crazy to be worried by stuff like that? I know scientists have some major concerns, but I still feel what I am saying is far-fetched.

I admit that I don't know many people in that video. I feel a little bad for Elon Musk though, lol. It seemed like he had some problems with public speaking. I can totally relate. I wish they had really gotten to talk though. There were so many on stage with a smallish amount of time, and it just didn't feel as fleshed out as it could have been.
#5. Posted:
ProfessorNobody
  • Blind Luck
Status: Offline
Joined: Nov 07, 2012 (11 Year Member)
Posts: 3,732
Reputation Power: 362
Yin wrote I don't know a whole lot about A.I., but just by using basic reasoning I felt it could be very disastrous. It doesn't help that people who do know about A.I. share what I felt were some very real concerns. I thought I was just kind of paranoid.

I feel it could basically be an atomic bomb type of creation, where whoever has it has the power to destroy in a way that we have never seen. Thing is, containing it will probably be a lot harder. Not only is it about the countries involved, the machine with the A.I. could be an issue depending on what type of "body" it has or what it has access to through the internet or other connections. Good luck containing it then. I mean, I feel it'd be like putting a regular dog leash on a dragon. Better hope it accepts your boundaries or at least peacefully declines them. What happens when humans basically create a god?

A lot of my worry comes from the fact that I know the militaries of the world will want to use it. Like, are we going to see drones with it? Ships? Planes? Humanoid robot soldiers? Seems like we are heading toward creating the Transformers at that point, lol. Truthfully though, am I crazy to be worried by stuff like that? I know scientists have some major concerns, but I still feel what I am saying is far-fetched.

I admit that I don't know many people in that video. I feel a little bad for Elon Musk though, lol. It seemed like he had some problems with public speaking. I can totally relate. I wish they had really gotten to talk though. There were so many on stage with a smallish amount of time, and it just didn't feel as fleshed out as it could have been.


You touched on one of the main problems that I have with AI when you mentioned the countries involved.
Everyone seems to be assuming that the US is going to be the first to achieve it but that isn't necessarily going to be the case.
If a country like Sweden managed to build it first I don't think major countries would have much of a problem. They might pressure them into sharing the knowledge but they wouldn't go to war over it.
I can't confidently say the same thing would happen if Russia or China were to build it first.
Whichever country builds it would only have to threaten to connect it to the internet and they would hold the entire world hostage.

Your analogy to an atomic bomb type of creation is very accurate in my view. This could be used as a deterrent or to launch a full-scale war.
Perhaps I'm just broadcasting my cynical view of humanity, but given what humanity has done with great amounts of power in the past, I don't think it is out of the realm of possibility.
Sam Harris, one of the speakers on the panel, says that we need a Manhattan Project involving all countries currently working towards AI for it to even have the slightest chance of yielding fruitful results.

I agree about the problems with that panel though. Max Tegmark, the moderator and host, is quite a terrible host. He doesn't really give the panelists a chance to speak, and he rambles on himself for quite a while.
I might be misremembering, but I'm pretty sure you once said that you watch Joe Rogan?
If that is so, you might have already seen this, but here is Sam Harris and Joe talking about AI at length; it has much more information than the panel I posted in the topic, if you're interested.

[embedded videos: Sam Harris discussing AI on the Joe Rogan Experience]

If you're crazy to be worried about this kind of stuff, then so am I, and so are a lot of computer scientists.
However, there are some prominent scientists who don't share these concerns, like Neil deGrasse Tyson, who I'm sure needs no introduction.

[embedded video: Neil deGrasse Tyson on AI]

It really is a completely mixed bag and that's why I find it such an interesting topic to think about.
#6. Posted:
Yin
  • E3 2017
Status: Offline
Joined: Apr 30, 2012 (11 Year Member)
Posts: 5,468
Reputation Power: 245
I have seen some of Rogan's stuff, including one with Sam Harris, but I don't remember which episodes they were. I don't remember an A.I. discussion, but that doesn't mean I haven't listened to one before. I do agree more with Sam Harris on this subject though. I don't really see how this ends well for us, though bad people getting there first is also bad. It's almost as if our choice is between playing Russian roulette and trusting someone else who's pointing a gun at us.

I love Neil, but I just don't think he has really thought this subject through. I just don't think it is reasonable to believe we can control something that is smarter, faster, and just all-around better than us. I mean, what would the fail-safe look like? Would we need hidden EMP caches? Could they even be hidden? How do you outsmart that? Man, I'm someone that feels we should just stop reproducing and go quietly, but I don't want to see us be squashed by robots.

I was thinking about comparing this to opening Pandora's box, but I see Joe beat me to it near the end of the second video. A.I. does seem like it is our end goal. Whether it benefits us, destroys us, or simply leaves remains to be seen. It's very unsettling.
#7. Posted:
ProfessorNobody
  • Shoutbox Hero
Status: Offline
Joined: Nov 07, 2012 (11 Year Member)
Posts: 3,732
Reputation Power: 362
Yin wrote I have seen some of Rogan's stuff, including one with Sam Harris, but I don't remember which episodes they were. I don't remember an A.I. discussion, but that doesn't mean I haven't listened to one before. I do agree more with Sam Harris on this subject though. I don't really see how this ends well for us, though bad people getting there first is also bad. It's almost as if our choice is between playing Russian roulette and trusting someone else who's pointing a gun at us.

I love Neil, but I just don't think he has really thought this subject through. I just don't think it is reasonable to believe we can control something that is smarter, faster, and just all-around better than us. I mean, what would the fail-safe look like? Would we need hidden EMP caches? Could they even be hidden? How do you outsmart that? Man, I'm someone that feels we should just stop reproducing and go quietly, but I don't want to see us be squashed by robots.

I was thinking about comparing this to opening Pandora's box, but I see Joe beat me to it near the end of the second video. A.I. does seem like it is our end goal. Whether it benefits us, destroys us, or simply leaves remains to be seen. It's very unsettling.


Here are a couple more points that I think you might find interesting:

Isaac Asimov wrote the "Three Laws of Robotics" back in the 1940s. You might remember them from films like I, Robot.
The laws are:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Initially he intended these laws only as a literary device, but he later came to believe they could be used to safely construct AI, and they have been picked up by a lot of AI proponents as a go-to recipe for building AI safely.

The problem is that Asimov only envisioned robots with human-level intelligence existing in his stories. He didn't foresee superhuman intelligence, or the idea that an AI with enough intelligence would figure out a way to alter its core programming.

The human-level robots in his stories found their way around these laws by examining the definitions of words like "Human" and "Robot" and finding little distinction between them, which leads nicely onto the second point (after a quick sketch of that loophole below).
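
Here is a toy, entirely hypothetical encoding of the laws as a priority-ordered check; every function and field name is invented, and real systems are nothing like an if-chain, but it shows how the whole scheme quietly hinges on a single definition:

    # Toy sketch only: all names here are hypothetical.
    def is_human(entity):
        # The entire safety scheme hinges on this one definition.
        return entity.get("biological", False)

    def action_permitted(action):
        # First Law: no injuring a human, or allowing harm by inaction.
        if any(is_human(e) for e in action["harms"]):
            return False
        # Second Law: obey orders from humans (First Law checked above).
        order_giver = action["ignored_order_from"]
        if order_giver is not None and is_human(order_giver):
            return False
        # Third Law: self-preservation, lowest priority.
        if action["self_destructive"]:
            return False
        return True

    # Blur the human/robot distinction and the First Law protects no one,
    # even though the code still runs "correctly":
    cyborg = {"biological": False}
    print(action_permitted({"harms": [cyborg],
                            "ignored_order_from": None,
                            "self_destructive": False}))  # prints True

A sufficiently intelligent reasoner doesn't need to break the rules; it only needs to notice what the rules fail to define.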

How are we supposed to treat these robots? At what point does it become unethical or immoral to harm one? Can you harm one?

Humans generally consider pain a good thing in evolutionary terms because it helps us recognize and avoid danger, but would it be unethical to program a robot to feel pain, whether physical or emotional, if we expect robots to exist in the real world and walk among us?

Would we want one of these robots to have emotions in the same way that we do? Would that make it more empathetic towards us as a species, or just more likely to act irrationally in the same ways that we do?

Man, I'm someone that feels we should just stop reproducing and go quietly


You'll hear no criticism from me.

"The pessimists credo, or one of them, is that nonexistence never hurt anyone and existence hurts everyone." - Ligotti
#8. Posted:
Yin
  • 2 Million
Status: Offline
Joined: Apr 30, 2012 (11 Year Member)
Posts: 5,468
Reputation Power: 245
I guess it goes back to what Sam Harris said about whether or not we can consider them sentient. The creation of artificial intelligence has always torn me. I feel that type of technological advancement is needed (well, needed as much as a society that has no idea when to quit needs things to keep going), but I don't want more sentient beings being created.

This is hard to explain, but I will try. We humans have major struggles with our emotions and with our lives. Some of us hit our breaking point, and some of those find a way out. Well, some of these machines may not have that way out. I am assuming some machines with A.I. won't even be able to move on their own. It'd be like a conscious human who has lost all bodily function and can't move at all. I couldn't imagine that. I feel odd writing all of that though, like I am missing something.

The video game Halo has A.I. constructs that, over time, go through rampancy: the A.I. basically thinks until it goes insane and then dies. That may not be a thing that would realistically happen, but I can see an A.I. losing itself if it has emotions, which could be highly dangerous. Without emotions, though, they won't ever have empathy for people. If they are that intelligent, could they just hit a kill switch themselves? I mean, I'd still find that depressing, but not as much as a mind that has to live on past its desire to.

What do we even do with sentient robots? Are we really going to use them for labor? Are we going to create slavery all over again?

This is actually a lot more complicated than I thought just yesterday, lol. I've always thought that if a robot could feel like we do, it should be respected and treated the same as a person. I worry that treating human-like robots differently could become an argument for treating other humans differently. If we treat human-like robots differently just because they are robots, even though they aren't terrible creatures, that would be bigoted, yes?

I do not envy the people making these things. I don't envy the potential sentient machines that come out of this. I just don't see where this ends well. Neil, and those that feel the same, had better be right. They had better just be super smart machines that listen to everything we tell them to do. I'd hate to see one with a bug in its programming. I'm sure making mistakes in that type of coding would be easy. I guess it doesn't really matter anyway if the robot can just rewrite its own code. It would probably have excellent coding knowledge.

I said quite a lot here but feel like I didn't say much. I just don't know. That is all this comes down to. I want us to avoid pain and suffering as much as possible. We suffer due to lack of knowledge, and we may create creatures that can suffer in order to fix our suffering. We would be forcing our burdens onto them if they felt like us. Only if the machines don't feel and we remain in control does suffering actually decrease, or at least it should. And if we don't remain in control, I hope the aliens have better luck stopping them. Who knows? Maybe there is a super intelligent, murderous robot army out there from a similar failed society, lol. I'm just tired and rambling now. I'll stop.
#9. Posted:
DutyManager
  • Winter 2022
Status: Offline
Joined: Dec 21, 2015 (8 Year Member)
Posts: 735
Reputation Power: 115
Seems extreme
#10. Posted:
PuristGlitcher
  • TTG Natural
Status: Offline
Joined: Dec 09, 2012 (11 Year Member)
Posts: 996
Reputation Power: 40
If you want a great TV show that deals with superintelligence, watch "Limitless" on Netflix. It is a phenomenal series that I am currently watching.