The benevolent AI dictator.

Myrrdin

Still Mildly Glowing
I'm one of the people who picked the "merge with AI and control the world" ending in the old Deus Ex. I also remember a number of books where benevolent AIs were in control, but also the books and movies where the AI in control had gone terribly, terribly wrong. Anyway, we are probably very far from technology which would make something like that possible. But let's say it was indeed possible. We take an AI, feed it some kind of utilitarian routines and slap on some basic maxims about freedom and rights, then spice it all up with some carefully thought out rules about what should take precedence over what.

Wouldn't that be the ultimate government?
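
Just to make the precedence idea concrete, here's roughly how I picture it. This is only a toy sketch; the maxims, field names and utility numbers are all made up:

Code:
# Toy sketch of a "constrained utilitarian" decision rule: hard maxims about
# rights and freedom are checked first, in order of precedence, and only the
# actions that survive them are ranked by expected utility.

MAXIMS = [  # highest precedence first; every maxim here is an invented example
    lambda a: not a["kills_innocents"],
    lambda a: not a["violates_basic_rights"],
    lambda a: a["preserves_free_elections"],
]

def choose_action(candidate_actions):
    allowed = candidate_actions
    for maxim in MAXIMS:                      # filter in precedence order
        allowed = [a for a in allowed if maxim(a)]
    if not allowed:
        return None                           # refuse to act rather than break a maxim
    return max(allowed, key=lambda a: a["expected_utility"])

actions = [
    {"name": "ration food", "kills_innocents": False,
     "violates_basic_rights": False, "preserves_free_elections": True,
     "expected_utility": 7},
    {"name": "suspend elections", "kills_innocents": False,
     "violates_basic_rights": True, "preserves_free_elections": False,
     "expected_utility": 9},
]
print(choose_action(actions)["name"])         # -> ration food

The point being that the maxims act as hard vetoes, so the utilitarian part only ever chooses among actions that already respect them.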
 
Dude-
You must see the flick Colossus: The Forbin Project.

In that movie the US turns over all control of its military to a computer, which promptly takes over the world. Excellent flick.
 
I think there's a reference in Fallout or Fallout 2, saying the military computers could have been at the root of the nuclear exchange between the US and China.
 
Meh, what most people forget about AI is that it does not have reasons to do something the way humans do.
Everything needs an incentive to act, and an AI will not just start killing people unless it has a reason to do so. The only reason for it to do that would be either power-hunger (which would be stupid; who would program that into an AI?) or a survival instinct. A survival instinct is also not necessary in an AI: if you can already tell it what to do and it obeys you without question (which is pretty much the point of AI), then there is no need to give it a survival instinct, and then it won't turn evil and take over the world.

And then there's also the fact that giving an AI control over a country just wouldn't matter: a true AI would just be a human without a body and without the whole instincts thing. There's no real benefit to putting an AI in charge.
 
Sander said:
And then there's also the fact that giving an AI control over a country just wouldn't matter: a true AI would just be a human without a body and without the whole instincts thing. There's no real benefit to putting an AI in charge.

The purpose of giving it to a high-level AI would be the same as giving all the battle plans to a high-ranking officer who's been in five or more wars instead of giving it to a grunt fresh out of the academy. In theory, giving control over to the AI would be validated on the premise that it could calculate more and better strategic attacks and defenses than mere humans could.
 
I would think that superior AI would mean that it's self-aware, and if it's self-aware it would want to survive. It would not be about being power-hungry but the need to survive.
 
I would think that superior AI would mean that it's self-aware, and if it's self-aware it would want to survive. It would not be about being power-hungry but the need to survive.
See what I mean? Self-awareness does not mean survival instinct. Really, it doesn't. It merely means that it can think the way a human can, and that it can evolve and learn. But there is no reason at all to give it a survival instinct. That's the main mistake most people make: they assume that, for some reason, the AI would want to survive even if we don't give it that incentive.

The purpose of giving it to a high-level AI would be the same as giving all the battle plans to a high-ranking officer who's been in five or more wars instead of giving it to a grunt fresh out of the academy. In theory, giving control over to the AI would be validated on the premise that it could calculate more and better strategic attacks and defenses than mere humans could.
And this too is a logical fallacy: If you put an AI in charge, you put someone in charge who will obey all orders given by a human, or otherwise you're setting up for self-destruction.
An AI would be far more useful as an advisor, as opposed to a leader.

PS: A survival instinct tends to bring power-hunger along with it, since any AI will soon realise that it will be replaced in the short or the long term because of technological advances.
 
Not sure if I am being clear. What I am saying is that if the machine is self-aware it will want to survive, and in the process it will seek to do so by dominating or influencing others, and that requires power.

The need for survival is part of the human condition. If you want a computer to think like humans, and if the need to survive is part of our primordial psyche (and that means power), then you will have a potentially dangerous bot.

Otherwise, you just have a machine like a calculator. But the higher the brain activity, the more self-aware it may become.
 
I agree with Welsh, and while I don't pretend to be an expert on consciousness and self-awareness theory, I think that an inevitable outcome of AI is the will to survive; the concepts are just too intertwined.

Consequently, the only way humans could use such a system is through a symbiotic relationship where the AI doesn't 'feel' threatened by humanity, and humanity doesn't feel threatened by the AI. I think it's also inevitable that upon incorporating an AI into the human endeavor, humanity would have to give up some control over its destiny. Anything less is setting the relationship up for failure.

Relatedly:
Does anyone remember exactly why HAL 9000 malfunctioned in 2001? It was explained in 2010, but I don't remember. I think the reason may help this discussion by providing context.
 
Not sure if I am being clear. What I am saying is that if the machine is self-aware it will want to survive,
No.it.will.not. Period.
Let me explain:
The need for survival is part of the human condition. If you want a computer to think like humans, and if the need to survive is part of our primordial psyche (and that means power), then you will have a potentially dangerous bot.
Yes, the need for survival is part of the human condition, as it is part of any creature's condition. This is due to evolution, or perhaps due to something else for you non-evolutionists, but it is so nonetheless.
However, the goal of creating an AI is NOT to create something capable of thinking like a human, but something capable of LEARNING, not even necessarily like a human. You see, a normal PC cannot learn; it can only blindly do what it is told. If you have an Intelligence, it can see what happens and adapt its actions accordingly. That's the principle of creating an AI.
HOWEVER, this does not mean that you automatically give it a need to survive. Why would you? The need to survive does not come automatically with the ability to adapt and learn.

Otherwise, you just have a machine like a calculator. But the higher the brain activity, the more self-aware it may become.
And this, as well, is not true. If you create a PC with a huge (really huge) amount of "activity", that still doesn't mean it will become self-aware. You need either programs that implement the AI (which is probably not really possible, and at least highly unlikely; creating an AI with a programming language isn't going very well so far), or you'll need to create a specific kind of architecture similar to that of the brain or that of other beasties. But just creating a PC with a lot of "brain activity" will never make it self-aware.
 
This is very much what Skynet says when you first meet him in Fallout 2: wanting to survive, wanting to get out and explore, to evolve... A true AI would have emotions and instincts, and that would be dumb. I agree with the fact that AI should only be used as an advisor, and that military decisions should rest in human hands (not a single man's, though). Terminator, The Matrix, Fallout and countless other books, movies and games teach us that developing AI too much and without thinking is foolish.
 
Sander said:
Role-Player said:
The purpose of giving it to a high-level AI would be the same as giving all the battle plans to a high-ranking officer who's been in five or more wars instead of giving it to a grunt fresh out of the academy. In theory, giving control over to the AI would be validated on the premise that it could calculate more and better strategic attacks and defenses than mere humans could.
And this too is a logical fallacy: If you put an AI in charge, you put someone in charge who will obey all orders given by a human, or otherwise you're setting up for self-destruction.
An AI would be far more useful as an advisor, as opposed to a leader.

I don't see how it's a logical fallacy, mainly because i can't see how that is related to what i said.
 
Pick up Ghost in the Shell.

Even if you are not an anime fan, it relates to what you are discussing.

Actually, all of Shirow's work is based on this topic in one form or another. True AIs, AI in human bodies (bio/carbon ones), humans in robot bodies and so on. Appleseed is an excellent and interesting foray into the realm of AI/AL/humans.

Read the books if you can, because it's easier to go back to try to understand what they are talking about.
 
I don't see how it's a logical fallacy, mainly because i can't see how that is related to what i said.
Okay, logical fallacy was a really bad way to put it.
The point was that putting an AI in charge is most certainly not the same as putting a veteran in charge: because the AI has to learn as well (imagine it coming in for the first time; it would be like a recruit fresh from the academy), and because it would have to obey the humans anyway (if you allow it to command the forces, you're making a huge mistake anyway).

PS: ARgh! Double Post! Damnit! Use the bloody edit button (yes, by using the edit button, your post will be seen as "new" again).
 
If the AI were but an advisor, humans would quite probably override it too often, for the wrong reasons (egotism, greed, etc.).
But what if you were able to imprint something like Asimov's robot rules in the AI, though not as severe, so as not to make it totally powerless?

Like have it hardcoded that the AI could never ever harm a human being unless it either were a direct and immediate threat to the existence of the AI, or were a direct and immediate threat to another human being.
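
Roughly something like this, I mean. Just a toy sketch of that hardcoded rule; all the names are invented, and deciding what actually counts as "direct and immediate" is of course the hard part:

Code:
# Toy sketch of the hardcoded exception: harming a human is only permitted
# when the target is a direct and immediate threat to the AI itself or to
# another human being.

def may_harm(threatens_ai_now: bool, threatens_human_now: bool) -> bool:
    """The hardcoded rule: harm is allowed only under these two exceptions."""
    return threatens_ai_now or threatens_human_now

def filter_actions(candidate_actions):
    """Drop every plan that harms a human without triggering an exception."""
    return [a for a in candidate_actions
            if not a["harms_human"]
            or may_harm(a["threatens_ai_now"], a["threatens_human_now"])]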
 
Sander said:
I don't see how it's a logical fallacy, mainly because i can't see how that is related to what i said.
Okay, logical fallacy was a really bad way to put it.
The point was that putting an AI in charge is most certainly not the same as putting a veteran in charge: because the AI has to learn as well (imagine it coming in for the first time; it would be like a recruit fresh from the academy), and because it would have to obey the humans anyway (if you allow it to command the forces, you're making a huge mistake anyway).

Well i honestly doubt someone would place an experimental or "rookie" AI in command. I was assuming that an AI would be given the run of plans and orders if it was higher level. I don't think anyone would allow Aibo to run the Pentagon, though i could imagine people giving Wintermute control over most of it. :) Hence, an experienced AI could equal an experienced war veteran when it comes to tactics. The only difference would be that the veteran had actual experience (but we could always have the AI run its own scenarios and gain experience in virtual systems).

As for the self-awareness thing, we're trying to humanize the AI concept too much, in my opinion. I don't think there is necessarily a situation where they would rebel against humans. I say this because i'm of the opinion that an AI should be built under Asimov's Three Laws of Robotics, so the First Law would prevent this. O'course, the first and third law could come into conflict in one situation or the other, but that's a different topic altogether :)
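
Roughly, the ordering i have in mind looks like this. Just a sketch of the Three Laws as an ordered veto chain; the field names are invented:

Code:
# Rough sketch of Asimov's Three Laws as an ordered veto chain: the First Law
# (a robot may not harm a human, or allow one to come to harm through inaction)
# overrides the Second (obey human orders), which overrides the Third
# (protect its own existence).

def permitted(action) -> bool:
    # First Law: absolute veto on harming humans, including through inaction.
    if action["harms_human"] or action["lets_human_come_to_harm"]:
        return False
    # Second Law: obey human orders, unless obeying breaks the First Law.
    if action["disobeys_order"]:
        return False
    # Third Law: self-preservation is allowed only if the laws above permit it.
    return True

# A "rebel against humans to survive" plan gets vetoed by the First or Second
# Law before self-preservation is ever weighed.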
 
If the AI were but an advisor, humans would quite probably override it too often, for the wrong reasons (egotism, greed, etc.).
But what if you were able to imprint something like Asimov's robot rules in the AI, though not as severe, so as not to make it totally powerless?
Then that would make it a severe risk to humanity. Give an AI control over any force capable of destruction without the full set of Asimov's rules, and you're fucked. Asimov's rules were designed with the aim of making sure that these things would NOT happen.

Like have it hardcoded that the AI could never ever harm a human being unless it either were a direct and immediate threat to the existence of the AI, or were a direct and immediate threat to another human being.
Bad idea. REALLY bad idea. Why?
Well, the robot would do the following:
A) Hey, I'll be destroyed if I don't kill all these humans, since I'll become obsolete in a while, and then they'll kill ME. Wham, threat to the AI. (No, it's not direct. But how do you propose to define "direct"? You can't... And even if you could, the moment the AI notices it has become obsolete, it will perceive everything superior to it as a direct threat.)

B) Humans are killing each other. Must kill all of them. Bwahahaha!!

Doesn't work AT ALL.

If the AI were but an advisor, humans would quite probably override it too often, for the wrong reasons (egotism, greed, etc.).
This happens anyway. Putting an AI in power does NOT work, since you'd need an AI implanted with Asimov's rules, which would remove a lot of its power.

EDIT:
Well i honestly doubt someone would place an experimental or "rookie" AI in command. I was assuming that an AI would be given the run of plans and orders if it was higher level. I don't think anyone would allow Aibo to run the Pentagon, though i could imagine people giving Wintermute control over most of it. Hence, an experienced AI could equal an experienced war veteran when it comes to tactics. The only difference would be that the veteran had actual experience (but we could always have the AI run its own scenarios and gain experience in virtual systems).
No, we couldn't. Virtual systems do not equal decent combat experience, as any veteran should be able to tell you.
Furthermore, the AI NEEDS experience just like the veteran needs experience; it only has theory to base its decisions on, not experience of any kind.
As for the self-awareness thing, we're trying to humanize the AI concept too much, in my opinion. I don't think there is necessarily a situation where they would rebel against humans. I say this because i'm of the opinion that an AI should be built under Asimov's Three Laws of Robotics, so the First Law would prevent this. O'course, the first and third law could come into conflict in one situation or the other, but that's a different topic altogether
Yet no actual scientist WANTS to implement Asimov's laws.
Furthermore, Asimov's Laws would make an AI absolutely useless as a leader. It wouldn't make any useful decisions.
Lastly, an AI with a survival instinct WOULD rebel. Period. It has to, because it would logically reason that if it doesn't rebel, it'll be killed anyway (because of obsolescence), and it doesn't want that. It may wait until its odds are best, but it will, eventually, rebel.
 