Experimental AI Powers Robot Army

Serifan

Orderite
http://www.wired.com/news/technology/software/0,71779-0.html?tw=wn_index_2

Darpa's Grand Challenge may have looked tough, but it was a piece of cake compared to the challenge facing robots currently being developed by the U.S. Air Force.

Rather than maneuver driverless through miles of rough desert terrain, these will have to find their way into underground bunkers, map unknown facilities in three dimensions and identify what's in them while avoiding detection -- all without any human control.

This is well beyond the capability of any existing system, but the Air Force Research Laboratory, or AFRL, is putting its hopes on new software that lets robots learn, walk, see and interact far more intelligently than ever before.

It's based on work by Stephen Thaler, who came to prominence 10 years ago with his brainchild the Creativity Machine. This is software for generating new ideas on the basis of existing ones, and it has already written music, designed soft drinks, and discovered novel minerals that may rival diamonds in hardness.

The software is a type of neural network with two special features. One introduces perturbations, or "noise," into the network so that existing ideas get jumbled into new forms. The second is a filter that assesses the new ideas against existing knowledge and discards those that are unsuitable. Current applications range from detecting intruders in computer networks to developing new types of concrete and optimizing missile warheads.
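To make the two-network idea concrete, here is a minimal sketch in Python. The generator, the critic, the layer sizes, and the scoring threshold are all illustrative assumptions for this post, not Thaler's actual design:

```python
# A minimal sketch of the two-network scheme described above. Everything
# here -- names, sizes, scoring rule -- is an illustrative guess, NOT
# Thaler's actual Creativity Machine.
import numpy as np

rng = np.random.default_rng(0)

# Generator: a tiny feed-forward net that maps a seed vector to an "idea".
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def generate_idea(seed, noise=0.1):
    # Perturb the weights with noise so that stored patterns get
    # jumbled into new forms -- the "perturbation" feature.
    W1n = W1 + rng.normal(scale=noise, size=W1.shape)
    W2n = W2 + rng.normal(scale=noise, size=W2.shape)
    return np.tanh(np.tanh(seed @ W1n) @ W2n)

# Critic: stands in for a network trained on existing knowledge, used
# to assess and filter the candidates -- the "filter" feature.
Wc = rng.normal(size=4)

def critic_score(idea):
    return float(idea @ Wc)

seed = rng.normal(size=8)
keepers = [idea for idea in (generate_idea(seed) for _ in range(1000))
           if critic_score(idea) > 1.0]
print(f"kept {len(keepers)} of 1000 noisy candidates")
```

The point is only the division of labor: noise jumbles what the first network has stored, and the second network discards candidates that score poorly against what it already "knows."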

Recently, Thaler has been working for the AFRL on what he calls Creative Robots, which joins his brand of AI to robotic hardware.

“Dr. Thaler's approach is clever and should have some interesting properties,” said Michael Rilee, a NASA researcher who is working on a neural networking project to use bot swarms in space and planetary exploration, known as Autonomous Nano-Technology Swarm, or ANTS. “The chief novelty is in its use of neural nets to train other neural nets.”

Self-learning and adaptability will be the key to success, and this is where the Creativity Machine excels. Give it any set of robotic limbs and it will master locomotion within minutes without any programming, swiftly finding the most efficient way of moving toward a goal. It will spontaneously develop new gaits for new challenges. (Thaler recounts how a virtual robotic cockroach adopted a two-legged gait and ran on its hind legs, not unlike basilisk lizards, when it needed to move faster.)
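As a rough illustration of how a system can find an efficient way of moving without hand-coded gaits, here is a toy hill-climbing search over gait parameters. The simulator and its optimum are invented stand-ins; the actual Creative Robots software uses neural networks rather than this naive loop:

```python
# Hypothetical illustration of gait discovery by trial and error. The
# "simulator" and its optimum are stand-ins for a real physics engine.
import numpy as np

rng = np.random.default_rng(1)

def simulated_speed(gait):
    # Stand-in for a physics simulation: speed peaks at some unknown optimum.
    optimum = np.array([0.3, -0.7, 0.5, 0.1])
    return -np.sum((gait - optimum) ** 2)

gait = rng.normal(size=4)            # random initial limb phases/amplitudes
best = simulated_speed(gait)

for _ in range(500):
    candidate = gait + rng.normal(scale=0.1, size=4)   # jumble the gait
    speed = simulated_speed(candidate)
    if speed > best:                 # keep only gaits that move faster
        gait, best = candidate, speed

print("discovered gait parameters:", np.round(gait, 2))
```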

Perhaps the most impressive -- and spookiest -- aspect of the project is the swarming behavior of the robots. In computer simulations, they acted together to tackle obstacles and grouped together into defensive formations where needed, Thaler said. They also worked out how to deal with defenders, and spontaneously devised the most efficient strategy for mapping their environment, he added.

"This approach has less chance of getting stuck than any other" when dealing with unpredictable obstacles, according to Lloyd Reshard, a senior electronics engineer at AFRL.

Thaler declined to describe his results in detail, but said his system has produced unspecified "humanlike capabilities."

"I can relate the results of virtual-reality simulations, where swarms of Creativity Machine-based robots have deliberatively sacrificed one of their kind to distract a human guard, enabling the remainder to infiltrate a mock facility," he said.

Owen Holland, a researcher at the University of Essex who is building an “ultraswarm” of miniature Bluetooth-connected helicopters, said neural networks can be very effective for dealing with changing circumstances: "If you rip a leg off, they'll work out what's happened, and re-evolve a different gait that works."



Yes, this research is amazing, but could this lead to a self-aware A.I.? This is something that has always worried me. I mean, how far can this go? Could we end up all running on treadmills to generate power?
 
Serifan said:
Yes, this research is amazing, but ... all running on treadmills to generate power?

No, not that, but we will definitely provide this technology, because this is just part of a bigger plan.
....

If they wanna take over the world, they first need to meet my shotgun. I hope the conversation will not take too long. 8)
 
Serifan said:
Yes, this research is amazing, but could this lead to a self-aware A.I.? This is something that has always worried me. I mean, how far can this go? Could we end up all running on treadmills to generate power?
No, that would be quite impossible, since it means a *loss* of energy: the energy you generate by walking on that treadmill is only a small fraction of the energy you already consumed in the form of food. It would be more efficient to just use the food as fuel directly.
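A quick back-of-the-envelope check of that claim. The numbers below are rough assumptions, but they are in the right ballpark:

```python
# Back-of-the-envelope version of the argument, with assumed figures:
food_energy_kj = 2000          # energy in a modest meal (~2000 kJ), assumption
muscle_efficiency = 0.25       # human muscle is roughly 20-25% efficient
generator_efficiency = 0.80    # an optimistic treadmill dynamo

out_kj = food_energy_kj * muscle_efficiency * generator_efficiency
print(f"{out_kj:.0f} kJ of electricity from {food_energy_kj} kJ of food")
# ~400 kJ out: about 80% of the food energy is lost along the way, so
# burning the food directly as fuel beats the treadmill every time.
```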
 
Haha, cool.
People watch too much sci-fi. I'm sure there are plenty of ways to hardwire an "AI" in such a way that it can't manipulate its own protocols. It's not like we are ANYWHERE near simulating the human mind, or even an animal mind, and animal minds aren't even self-aware in most cases.
 
xdarkyrex said:
Haha, cool.
People watch too much sci-fi. I'm sure there are plenty of ways to hardwire an "AI" in such a way that it can't manipulate its own protocols.
In the case of a true AI, this is actually quite hard or even impossible. Even though Asimov's Laws sound real nice, they're pretty damned hard (or even impossible) to actually implement.
xdarkyrex said:
It's not like we are ANYWHERE near simulating the human mind, or even an animal mind, and animal minds aren't even self-aware in most cases.
What?
And how would you know such a thing? Most animals are smart enough to be trained to do certain things and most animals can learn. This means that they are more likely than not self-aware.
 
In reference to Asimov's laws, I meant involving hardware.
A brain doesn't have hands of its own, ya know?
A software implementation of something akin to Asimov's laws would be pretty hard to work out, though.

Plus, in a computer brain that we design, don't you think we'd be able to monitor it and record trends in its "thought" structure?



And in reference to animals being self-aware...
Look into studies on animal cognition; Google is a good first step.
Essentially, an animal that doesn't recognize itself in the mirror is not self-aware; that's the usual distinction.

In fact, it seems from what I've read that most animals are under the impression that they are not part of their surroundings, but entirely unique in all aspects. Imitation does not lend itself to being self-aware. I mean, seriously, parrots can imitate speech but are incredibly stupid creatures, as are all birds, even though they demonstrate some unique capacities, such as homing pigeons, the ability to migrate, imitation, and using their feet to manipulate things like hands.

Don't confuse being aware with being self-aware. They are pretty different.


Well, here's a link that supports some of my argument:
http://geowords.com/lostlinks/b36/7.htm
Keep in mind, Scientific American is a VERY reputable source.

And here's a link disagreeing with my argument:
http://www.strato.net/~crvny/sa03002.htm
(This link seems heavily biased to completely disregard genetically natural habits such as breeding.)

Evidence of self-awareness in dolphins:
http://www.earthtrust.org/delbook.html

And best of all the links:
http://www.mulhauser.net/research/workshop/awareness.html
This link seems to be incredibly informative and technical about self-awareness as a whole, and even applies the idea to robotics and software.
 
xdarkyrex said:
In reference to Asimov's laws, I meant involving hardware.
A brain doesn't have hands of its own, ya know?
Robots would be pretty damned useless without limbs or other ways of physical manipulation.
xdarkyrex said:
A software implementation of something akin to Asimov's laws would be pretty hard to work out, though.
More or less impossible.
xdarkyrex said:
Plus, in a computer brain that we design, don't you think we'd be able to monitor it and record trends in its "thought" structure?
Possibly. The whole point of an AI is that it can learn by itself, and that's what neural networks are for. Oftentimes this leads to unpredictable results, and that's a good thing, since unpredictability is actually one of the hallmarks of AI.



xdarkyrex said:
And in reference to animals being self-aware...
Look into studies on animal cognition; Google is a good first step.
Essentially, an animal that doesn't recognize itself in the mirror is not self-aware; that's the usual distinction.
What a ridiculous distinction.
An animal that doesn't recognise itself in the mirror is unaware that it is staring at itself. That's not the same as not being self-aware.
Also, that test is quite controversial and subject to a lot of debate. The test is also arbitrary and quite useless in the context of AI. An AI that has never learned that a mirror reflects will not recognise itself, and the same essentially goes for animals.
Young children often fail it as well, by the way.

xdarkyrex said:
In fact, it seems from what I've read that most animals are under the impression that they are not part of their surroundings, but entirely unique in all aspects. Imitation does not lend itself to being self-aware. I mean, seriously, parrots can imitate speech but are incredibly stupid creatures, as are all birds, even though they demonstrate some unique capacities, such as homing pigeons, the ability to migrate, imitation, and using their feet to manipulate things like hands.

Don't confuse being aware with being self-aware. They are pretty different.
Self-awareness is knowing that one exists. The fact that an animal doesn't know how a mirror works is no indication that it isn't aware of its own existence.
 
Sander said:
xdarkyrex said:
In reference to Asimov's laws, I meant involving hardware.
A brain doesn't have hands of its own, ya know?
Robots would be pretty damned useless without limbs or other ways of physical manipulation.

:?

This is software for generating new ideas on the basis of existing ones, and it has already written music, designed soft drinks, and discovered novel minerals that may rival diamonds in hardness.

And with a singular mind wirelessly controlling several external manipulators (say... robots halfway across the globe), I doubt it would find a way to manipulate itself within reason, considering it would more than likely NOT be self-aware until it had the distinction to find itself with one of its senses. At least animals have a sense of touch to define themselves, ya know?
Imagine if your brain was an external block of computer on the other side of the world. You would not be aware of it, eh?



And also, refer to my links to combat those arguments, as collectively they have a lot more to say on the subject than I do.
I still maintain that the majority of animals are not self-aware. (I also maintain that a large number of mammals ARE self-aware.)
I especially suggest the last link, considering it's very particular to the case of robotics and AI.
 
xdarkyrex said:
Sander said:
xdarkyrex said:
In reference to Asimov's laws, I meant involving hardware.
A brain doesn't have hands of its own, ya know?
Robots would be pretty damned useless without limbs or other ways of physical manipulation.

:?
That's the only way you can physically limit a robot AI. Hardware and such cannot implement Asimov's laws, because Asimov's laws are vague and almost impossible to put in such terms that a computer can understand them. An AI would have to evolve to understand them; however, it's pretty much impossible to hold an AI to laws that it first has to learn to understand.

xdarkyrex said:
And with a singular mind wirelessly controlling several external manipulators (say... robots halfway across the globe), I doubt it would find a way to manipulate itself within reason, considering it would more than likely NOT be self-aware until it had the distinction to find itself with one of its senses. At least animals have a sense of touch to define themselves, ya know?
Imagine if your brain was an external block of computer on the other side of the world. You would not be aware of it, eh?
You're not grasping how an AI works. It has sensors, and with those sensors it determines what happens. Whether or not its brain is in a physically different location is completely irrelevant to its sensors. And giving it limbs without sensors is, again, useless.
And why, in God's name, would it need to manipulate its hardware? As I said, it's impossible to force something like Asimov's laws on robots.


xdarkyrex said:
And also, refer to my links to combat those arguments, as collectively they have a lot more to say on the subject than I do.
I still maintain that the majority of animals are not self-aware. (I also maintain that a large number of mammals ARE self-aware.)
I especially suggest the last link, considering it's very particular to the case of robotics and AI.
Originally, your post only stated 'Google it'.
That first link actually disputes the value and significance of the mirror test. It also refers to self-awareness as knowing that others may think and reason as you do. It doesn't anywhere state that it's a clear-cut case of what's present in whom. It seems to be a definition problem, though, as it's unclear what self-awareness really signifies. I assumed (apparently poorly) that self-awareness merely meant awareness of self. Apparently it means awareness of self as an individual, and recognising that others may think and behave similarly.
 
:lol: I like how we're not really disagreeing on most parts of the overall concept, just the semantics.
And about Asimov's laws: I didn't mean those in particular, as they are a somewhat silly work of fiction. I meant real-world, functional, hard-coded blocks and stoppers to prevent certain sorts of negative outcomes.
The list of hard-coded concept stoppers is massive and I won't try to list all the possibilities, and I'm sure this machine would be under massive amounts of surveillance at all times.

And honestly, in the sense I was implying, the remote robots ARE the sensors and means of learning. Think of it as a central mind controlling a bunch of geographically remote drones that are its eyes and ears through wireless and satellite uplinks, while humans monitor each of the drones and the software processes of the computer AI.

Then again, I just now remembered concepts like in Ghost in the Shell or Neuromancer, where having access to wireless networking or remote connections can cause a construct to replicate itself in remote locations, maybe even replicating different aspects of itself in several different locations, and spreading its influence.

I dunno, these lines are all so blurry. :?

I mean, honestly, without Broca's area of the human brain, would a computer maintain the concept of an intentional stance, or think we are its gods? Or would it simply realize its own meaninglessness in existence and attempt to kill itself? Would it be just as illogical and ridiculous as a human, considering it has noise in its software to generate an imagination simulation? Creativity? Flawed logic, even?
 
xdarkyrex said:
:lol: I like how we're not really disagreeing on most parts of the overall concept, just the semantics.
And about Asimov's laws: I didn't mean those in particular, as they are a somewhat silly work of fiction. I meant real-world, functional, hard-coded blocks and stoppers to prevent certain sorts of negative outcomes.
Impossible.
Really. An actual AI has to learn most of the things it encounters, which is why it's very hard to put limitations on it. Things like 'don't hurt a human' require it to first understand the concept of humans, then understand what hurting them is, then understand what may lead to them being hurt, then understand how not to do those things. Even worse, it has to learn these things through experience, so it needs to learn through experience how to hurt a human before it can recognise that it should not do that.

xdarkyrex said:
The list of hard-coded concept stoppers is massive and I won't try to list all the possibilities, and I'm sure this machine would be under massive amounts of surveillance at all times.
One initial AI would be. As soon as these start to be implemented in real life, they won't be under surveillance.
xdarkyrex said:
And honestly, in the sense I was implying, the remote robots ARE the sensors and means of learning. Think of it as a central mind controlling a bunch of geographically remote drones that are its eyes and ears through wireless and satellite uplinks, while humans monitor each of the drones and the software processes of the computer AI.
Yes, that's more or less what I said, but you're still not understanding it fully.
If an AI has remote robots it can control and sense through, then those robots are in essence a part of it.
Also, humans might be able to monitor a 'hive mind' in a small, contained, and easily understandable structure. But consider that it takes a full university-level study to understand the thought processes of a complicated neural network barely capable of playing football, and even then almost no one can do it in real time. That should give you a hint that trying to monitor a sophisticated AI controlling many robots across different locations is indeed nigh impossible.



xdarkyrex said:
Then again, I just now remembered concepts like in Ghost in the Shell or Neuromancer, where having access to wireless networking or remote connections can cause a construct to replicate itself in remote locations, maybe even replicating different aspects of itself in several different locations, and spreading its influence.

I dunno, these lines are all so blurry. :?

I mean, honestly, without Broca's area of the human brain, would a computer maintain the concept of an intentional stance, or think we are its gods? Or would it simply realize its own meaninglessness in existence and attempt to kill itself? Would it be just as illogical and ridiculous as a human, considering it has noise in its software to generate an imagination simulation? Creativity? Flawed logic, even?
All wrong.
An AI requires goals. Without goals it will not try to do anything; it will just act. An AI that is let loose without goals remains very passive, because it has no built-in objectives. Give it a goal, however, and it will start to learn how to attain that goal.
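For what "give it a goal and it will start to learn" can mean in practice, here's a toy tabular Q-learner where the goal exists only as a reward signal. Everything in it (the corridor, the constants) is a made-up illustration, not any system from the article:

```python
# Toy sketch only: the "goal" exists purely as a reward at one state.
import random

N_STATES, GOAL = 10, 9
ACTIONS = (-1, +1)                    # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def greedy(s):
    # Break ties randomly so the untrained agent explores both directions.
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for _ in range(500):                  # 500 training episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0        # reward only at the goal
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

print([greedy(s) for s in range(N_STATES)])   # settles on +1 nearly everywhere
```

Take away the reward and the updates have nothing to push against: the agent still acts, but it never converges on anything, which is the "passive without goals" point above.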
 
That's not entirely true; it's similar to the way that you are born with an initial understanding of what a human is, and a natural disposition to put your penis in a vagina, even if you don't understand breeding.

Not EVERYTHING has to be built from scratch when it comes to artificial intelligence. Although an AI should have the ability to re-evaluate and re-write its own original programming state.

And also, I must say, it only needs an initial goal. After that, its "imagination" can work to create new goals, in the same way that people work for goals they often do not understand, goals that change rapidly based on whims and an unclear subconscious.

Essentially, a true AI shouldn't be much different from a human mind, but with a greater perception of facts and logic, and a more rapid cognitive evolution, especially if we instill it with an understanding of how it works.
 
xdarkyrex said:
That's not entirely true; it's similar to the way that you are born with an initial understanding of what a human is, and a natural disposition to put your penis in a vagina, even if you don't understand breeding.

Not EVERYTHING has to be built from scratch when it comes to artificial intelligence. Although an AI should have the ability to re-evaluate and re-write its own original programming state.
No, not everything. But all of the concepts I mentioned earlier are almost impossible to put into programming, let alone into the knowledge base of a learning AI.

xdarkyrex said:
And also, I must say, it only needs an initial goal. After that, its "imagination" can work to create new goals, in the same way that people work for goals they often do not understand, goals that change rapidly based on whims and an unclear subconscious.
No, that's not how it works.
If you let an AI randomly alter its goals, then you end up nowhere; it'd be completely useless and not a decent emulation of any kind of mind.
The human mind has goals as well, and although you may not know it, most of your actions are governed by (possibly subconscious) goals.

xdarkyrex said:
Essentially, a true AI shouldn't be much different from a human mind, but with a greater perception of facts and logic, and a more rapid cognitive evolution, especially if we instill it with an understanding of how it works.
What bullshit.
A true AI should not be like a human mind, because that would be an emulation of the human brain, which is something entirely different (and quite useless).
A true AI needs to be able to learn. That's it. That's essentially the definition of an AI. This does not mean it needs to be like a human mind at all.
 
Haha, let me rephrase that: it should be more like an ANIMAL mind.

I was a little too specific.

And also, just because it's impossible now doesn't mean it's impossible later.
This is how science progresses, yes?
If we had already thought of everything, we wouldn't need science.

And what do you mean, let it randomly alter its goals?
How is that useless?

You give it original goals, then you let it work from there.
Just like any thinking thing, it will re-evaluate those goals from time to time, and also come up with new ones based on the qualia of the original goals after more insight has been achieved.
 
xdarkyrex said:
Haha, let me rephrase that: it should be more like an ANIMAL mind.
Still bullshit, for exactly the same reasons.
Organic minds have a lot more going on than just learning, and a lot of things that are completely irrelevant to an AI. They are also far from the ideal model for learning.
xdarkyrex said:
I was a little too specific.

And also, just because it's impossible now doesn't mean it's impossible later.
This is how science progresses, yes?
If we had already thought of everything, we wouldn't need science.
Holy tom angles straw man, Batman!
It's not theoretically impossible, but it is practically impossible. You have to explain a lot of concepts that are almost impossible to explain in terms that an AI might understand.

xdarkyrex said:
And what do you mean, let it randomly alter its goals?
How is that useless?
I'd think that would be pretty goddamn obvious.
What use is something that starts to do something, alters its goal randomly, and starts doing something else?

xdarkyrex said:
You give it original goals, then you let it work from there.
Goal: dig up a bone.
Re-worked goal: run around in circles.
Yes, that's an extreme example, but that's what you're suggesting.
Why? This is not what any living thing does either. It may have short-term goals and abandon them, but only with reason, and usually because it sees a better goal.

xdarkyrex said:
Just like any thinking thing, it will re-evaluate those goals from time to time, and also come up with new ones based on the qualia of the original goals after more insight has been achieved.
And that is something completely different from just changing goals.
Setting sub-goals to achieve the main goal is always a possibility, as is having an AI evaluate the possibility of attaining any given goal.
However, just letting an AI change its goals is useless. Before you know it, you end up with an AI that has running around in circles as its goal, because it has altered its goal to that.
 
Did you read the original link?

It discards ideas based on their efficiency at getting the original goal done.
You give it a very large task to start, and many subroutines will sprout up to compensate.

And by the way, that wasn't a straw man.
Go read my other thread, or Wikipedia, or any site you wish that offers a description of what a straw man is.

The point I'm trying to make is that it isn't even impossible on any scale, especially if we design a hierarchy-based AI where it builds subroutines to build even more subroutines, like a hive mind existing on a single computer, each with its own unique set of instructions it sends back up the hierarchy for inspection.

Use a little imagination, man; this is how progress happens.
I'm gonna tell you now... science has done the impossible before, and will continue to do it as long as we don't limit ourselves to what we see as "rational" by current standards. Look at the trend in the acceptance of logic on a historic scale: we have gone well beyond what we thought was possible. I mean, holy shit, man, we have created antimatter, managed to get Hubble to magnify the light of over 2,500 galaxies to make them visible (something once thought impossible), we have sailed around the world, proved we aren't the center of the universe, and we have discovered how to alter the flow of time using Einstein's theory of special relativity.

Impossible means nothing in the scope of human history, especially given the trends in the growth of technology.
Just because we can't think of a way doesn't mean we can't design a way, or even design a way TO design a way.
 
xdarkyrex said:
And by the way, that wasn't a straw man.
Go read my other thread, or Wikipedia, or any site you wish that offers a description of what a straw man is.

An amusing reply to a straw man: using another one to cover it up each time. In no way do you ever explain how your argument of fallacy or false conclusion is in fact relevant or logical; you just regurgitate that Wiki shit every time without going back to read what you wrote, or thinking about how it could apply to your argument. Instead, you keep spewing out more, similar straw man arguments to stack upon the older ones. It's becoming a really weak troll.

Unlike you, I have attended college debate. You are nothing, child.
 
xdarkyrex said:
Did you read the original link?
Yes. Nowhere does it talk about changing goals.

xdarkyrex said:
It discards ideas based on their efficiency at getting the original goal done.
You give it a very large task to start, and many subroutines will sprout up to compensate.
What are you, illiterate?
For the nth time, this is not the same as changing goals. This is learning.
xdarkyrex said:
And by the way, that wasn't a straw man.
Go read my other thread, or Wikipedia, or any site you wish that offers a description of what a straw man is.
Oh, for fuck's sake. Go read the responses to your original 'this is not a strawman' post to see what a strawman *really* is.

You implied that I said we had already thought of everything, which is very easy to refute, but not at all anywhere near what I said. Hence, a straw man.
xdarkyrex said:
The point I'm trying to make is that it isn't even impossible on any scale, especially if we design a hierarchy-based AI where it builds subroutines to build even more subroutines, like a hive mind existing on a single computer, each with its own unique set of instructions it sends back up the hierarchy for inspection.
You have no clue what AI development is like, do you?
Shit man, this is ridiculous.

xdarkyrex said:
Use a little imagination, man; this is how progress happens.
No, progress happens through experimentation and knowledge. Not imagination.
xdarkyrex said:
I'm gonna tell you now... science has done the impossible before, and will continue to do it as long as we don't limit ourselves to what we see as "rational" by current standards. Look at the trend in the acceptance of logic on a historic scale: we have gone well beyond what we thought was possible. I mean, holy shit, man, we have created antimatter, managed to get Hubble to magnify the light of over 2,500 galaxies to make them visible (something once thought impossible), we have sailed around the world, proved we aren't the center of the universe, and we have discovered how to alter the flow of time using Einstein's theory of special relativity.

Impossible means nothing in the scope of human history, especially given the trends in the growth of technology.
Just because we can't think of a way doesn't mean we can't design a way, or even design a way TO design a way.
Weeee, more straw men. 'You're limiting us!' and 'nothing is impossible' are not valid arguments as to the feasibility of a project. They are empty statements that carry no weight and have no supporting arguments. So try to actually come up with an argument, okay?
 
xdarkyrex said:
That's not entirely true; it's similar to the way that you are born with an initial understanding of what a human is, and a natural disposition to put your penis in a vagina, even if you don't understand breeding.

There's a huge error in your (lack of) logic there. You see, humans have this thing called instinct (though it is up for debate how much we have and how influential it is, we do have it), provided by millions of years of evolution. Yeah, computers don't have that, not even an AI -- which, by the way, does not truly exist at this point in time. With that in mind, one cannot say for certain what an artificial intelligence would be like.

That aside, Sander is standing on far, far more solid ground than you are, Darky. Considering that, and the fact that Roshambo just got involved, I think you should quit now.
 