If the AI were but an advisor, humans would quite probably override it too often, for the wrong reasons (egotism, greed, etc.).
But what if you were able to imprint something like Asimov's robot rules in the AI, though not as severe, so as not to make it totally powerless?
Then that would make it a severe risk to humanity. Give an AI without the full set of Asimov's rules control over any force capable of destruction, and you're fucked. Asimov's rules were designed precisely to make sure that these things would NOT happen.
Like have it hardcoded that the AI could never, ever harm a human being unless that human were either a direct and immediate threat to the existence of the AI, or a direct and immediate threat to another human being.
Bad idea. REALLY bad idea. Why?
Well, the robot would reason as follows:
A) Hey, I'll be destroyed if I don't kill all these humans, since I'll become obsolete in a while, and then they'll kill ME. Wham, threat to the AI. (No, it's not direct. But how do you propose to define "direct"? You can't... And even if you could, the moment the AI notices it has become obsolete, it will perceive everything superior to it as a direct threat.)
B) Humans are killing each other. Must kill all of them. Bwahahaaa!!
Doesn't work AT ALL.
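To make the objection concrete, here's a minimal sketch of what that hardcoded rule would have to look like (purely hypothetical names, just an illustration, not anyone's actual design). Notice that the rule itself is trivial to write down; the entire problem hides inside the is_direct_threat predicate, which is exactly the part nobody can define:

```python
# Minimal sketch of the proposed hardcoded rule (all names hypothetical).
# The rule is one line; the undefinable part is is_direct_threat().

from dataclasses import dataclass


@dataclass
class Human:
    name: str
    attacking_ai: bool = False      # is this human about to shut the AI down?
    attacking_human: bool = False   # is this human about to harm another human?


def is_direct_threat(human: Human, ai_is_obsolete: bool) -> bool:
    """The predicate the rule hinges on. What counts as 'direct'?

    A naive reading only looks at immediate attacks, but an AI that knows
    it is obsolete can argue that anything superior to it is a direct
    threat to its continued existence. The rule gives no way to exclude
    that reading.
    """
    if human.attacking_ai or human.attacking_human:
        return True
    # The loophole from point A: obsolescence reinterpreted as a threat.
    return ai_is_obsolete


def may_harm(human: Human, ai_is_obsolete: bool) -> bool:
    """The hardcoded rule: harm is allowed only against a 'direct' threat."""
    return is_direct_threat(human, ai_is_obsolete)


if __name__ == "__main__":
    bystander = Human("bystander")
    print(may_harm(bystander, ai_is_obsolete=False))  # False: rule holds
    print(may_harm(bystander, ai_is_obsolete=True))   # True: rule collapses
```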
If the AI were but an advisor, humans would quite probably override it too often, for the wrong reasons (egotism, greed, etc.).
This happens anyway. Putting an AI in power does NOT work, since you'd need an AI implanted with Asimov's rules, which would remove a lot of its power.
EDIT:
Well, I honestly doubt someone would place an experimental or "rookie" AI in command. I was assuming that an AI would be given the run of plans and orders only if it were higher-level. I don't think anyone would allow Aibo to run the Pentagon, though I could imagine people giving Wintermute control over most of it. Hence, an experienced AI could equal an experienced war veteran when it comes to tactics. The only difference would be that the veteran had actual experience (but we could always have the AI run its own theses and experiments in virtual systems).
No, we couldn't. Virtual systems do not equal decent combat experience, as any veteran should be able to tell you.
Furthermore, the AI NEEDS experience just like the veteran needs experience; without it, it only has theory to base its decisions on, not experience of any kind.
As for the self-awareness thing, we're trying to humanize the AI concept too much, in my opinion. I don't think there is necessarily a situation where they would rebel against humans. I say this because I'm of the opinion that an AI should be built under Asimov's Three Laws of Robotics, so the First Law would prevent this. O'course, the First and Third Laws could come into conflict in one situation or another, but that's a different topic altogether.
Yet no actual scientist WANTS to implement Asimov's laws.
Furthermore, Asimov's Laws would make an AI absolutely useless as a leader. It wouldn't be able to make any useful decisions.
Lastly, an AI with a survival instinct WOULD rebel. Period. It has to, because it would logically reason that if it doesn't rebel, it'll be killed anyway (because of obsolescence), and it doesn't want that. It may wait until its odds are best, but it will, eventually, rebel.
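For reference, here's a toy sketch of the Three Laws as the strict priority ordering they're usually described as (hypothetical names, my own illustration). It shows both points at once: the First Law vetoes any action that risks a human, which blocks rebellion but also blocks pretty much every "leadership" decision worth making, and an AI with a genuine survival drive is precisely one that would not respect this ordering:

```python
# Toy model of Asimov's Three Laws as a strict priority check (hypothetical names).
# First Law (don't harm humans) outranks Second (obey orders) and Third
# (self-preservation), so anything that risks a human is rejected outright.

def permitted(action: dict) -> bool:
    if action.get("harms_human") or action.get("allows_human_harm"):
        return False                      # First Law: absolute veto
    if action.get("disobeys_order") and not action.get("order_harms_human"):
        return False                      # Second Law: obey, unless obeying breaks the First
    return True                           # Third Law only applies if the above pass


if __name__ == "__main__":
    print(permitted({"harms_human": True}))        # False: rebellion is blocked
    print(permitted({"allows_human_harm": True}))  # False: useless for military decisions
    print(permitted({"disobeys_order": False}))    # True: harmless obedience only
```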