VectorLinux
Author Topic: idea for AI algorithm  (Read 4407 times)
Triarius Fidelis
« on: June 16, 2008, 05:07:14 pm »

I was playing Wesnoth earlier and I thought of the problem of tracking enemy units converging on one position

Since attrition tactics allow units to fan out and then come back together for a strike, looking at one unit and finding all the others in its general area won't work. Furthermore, it doesn't reveal which direction they move in.

I went through several ideas—which were contrived and expensive—until I figured out the simplest and most reliable solution I could think of:

After the enemy units move, make note of their current and previous positions. For each enemy unit, compare its distance to each friendly position with the corresponding distance on the previous turn. If the unit is closing on one or more friendly positions (i.e., the current distance is smaller than the previous one), find the closest such position and mark the enemy unit as moving towards it.

With any luck, that will give you a good idea of which units are headed where.

The algorithm uses the Pythagorean theorem a lot, and floating-point operations are a little expensive, but on the bright side it scales linearly: if there are k friendly positions and n enemy units, kn distance comparisons are made per turn, i.e. O(kn) work. (Comparing squared distances would avoid the square roots entirely.)
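The sketch above translates almost line-for-line into code. Here's a minimal Python version; the function name, data layout, and coordinates are mine for illustration, not anything from Wesnoth:

```python
import math

def approaching_targets(enemies, friendlies):
    """For each enemy unit, decide which friendly position (if any) it is
    closing on, by comparing current vs. previous distances."""
    headed_for = {}
    for name, (prev, curr) in enemies.items():
        # Friendly positions this enemy got closer to since last turn.
        closing = [f for f in friendlies
                   if math.dist(curr, f) < math.dist(prev, f)]
        if closing:
            # Mark the unit as moving towards the closest such position.
            headed_for[name] = min(closing, key=lambda f: math.dist(curr, f))
    return headed_for

# Toy turn: one grunt moved from (5, 9) to (4, 7), closing on the keep at (0, 0).
moves = {"grunt": ((5, 9), (4, 7))}
print(approaching_targets(moves, [(0, 0), (10, 10)]))  # {'grunt': (0, 0)}
```

A unit that holds still (or retreats from everything) simply isn't marked, which matches the "with any luck" hedge: feints and zig-zags will fool it.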

The algorithm could be made somewhat better by training a neural net to detect feints and whatnot. Another idea: if parts of the terrain are impassable, you can infer that enemies moving in a certain corridor are headed for a certain friendly position, but I didn't make any such assumptions. It's a sketch of an algorithm and just in my head right now. Good? Bad? Any improvements?
« Last Edit: June 18, 2008, 10:38:22 am by Epic Fail Guy »

"Leatherface, you BITCH! Ho Chi Minh, hah hah hah!"

Formerly known as "Epic Fail Guy" and "Döden" in recent months
The Headacher
« Reply #1 on: June 17, 2008, 07:59:07 am »

Perhaps you should also give particular possible targets a higher likeliness factor or something. I'm not familiar with Wesnoth, but I think that facilities used to train / build units are often more probable targets than simple footsoldiers.

Most music on my soundcloud page was arranged in programs running on VL.
Triarius Fidelis
« Reply #2 on: June 17, 2008, 03:13:11 pm »

You're right about that. Sometimes the enemies will try to seize your keep. So by likeliness factor, do you mean that enemies possibly approaching two areas are more likely to be approaching the more useful area? That might be true.

However, small groups of foot soldiers can cause a lot of damage if used properly. On D-Day, a tiny force of Americans was sent to scale an unguarded cliff and struck the Germans from behind their defensive lines. They were very successful. It would be hard to tell what the enemy's intentions are without a track record of earlier attempts, and that requires a learning algorithm.
The Headacher
« Reply #3 on: June 18, 2008, 09:31:53 am »

Quote
So by likeliness factor, do you mean that enemies possible approaching two areas are more likely to be approaching the more useful area? That might be true.
Yes, though I suppose my wording was a little crappy. "Strategic importance factor" would be a better description.
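A strategic importance factor could be folded into the distance test directly: discount distance by an importance weight, so a far-off keep can outrank a nearby footsoldier. This is only a sketch; the unit kinds and weights below are invented for illustration:

```python
import math

# Hypothetical importance weights -- keeps matter more than lone footsoldiers.
IMPORTANCE = {"keep": 3.0, "village": 1.5, "footsoldier": 1.0}

def likely_target(enemy_pos, candidates):
    """Pick the candidate an enemy is most plausibly heading for: high
    importance and short distance both raise the score.
    `candidates` maps a position to its kind, e.g. {(0, 0): "keep"}."""
    def score(item):
        pos, kind = item
        return IMPORTANCE[kind] / (1.0 + math.dist(enemy_pos, pos))
    return max(candidates.items(), key=score)[0]

# The nearby footsoldier loses to the slightly farther, far more valuable keep.
print(likely_target((4, 4), {(0, 0): "keep", (5, 5): "footsoldier"}))  # (0, 0)
```

The `1.0 +` in the denominator just keeps the score finite when an enemy is standing on a target.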

Quote
small groups of foot soldiers can cause a lot of damage if used properly.
Yes, they can be used very well if they can go undetected (things like tanks tend to draw some more attention). I thought we were talking game AI here though. It's unlikely for an enemy to send lots of units to destroy a couple of footsoldiers or other small units somewhere out in the field that don't pose much of a threat to the opponent(s). But then again, the area where the soldiers are may be strategically important. They might be guarding something like a supply line, a dam, or something else that's not directly one of your units or structures.
Triarius Fidelis
« Reply #4 on: June 18, 2008, 10:10:30 am »

I wrote up a post about the algorithm on my blog and included your idea.

The difficulty is that there's so much to take into consideration. The algorithm I came up with only handles gathering the information on who's going where. Threat assessment, based on that information, would be a separate and more difficult task. Analysis of formal games would probably play some role there.
tomh38
« Reply #5 on: June 18, 2008, 10:24:05 am »

Quote
... and that requires a learning algorithm.

Don't do it, man.  DON'T.  If you do, we'll end up with this guy, and not as Governor of California either.


"I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones." - Linus Torvalds, April 1991
Triarius Fidelis
« Reply #6 on: June 18, 2008, 11:14:41 am »

A machine capable of passing the Turing test is a good many years in the future.

I thought about what would go into making a Terminator. Besides making the battery and other components really, really small, providing a useful AI poses an immense challenge that would require one of two things: many, many expert systems working together, which is conceivable in theory but probably too large and complex; or fewer expert systems along with a general ability to propose new algorithms, which would correspond to a human having an array of technical skills plus the ability to reason, instead of exhausting every little narrow skill.

Problem is, the only way I know of doing that off-hand is really, really sloooooow. Evolutionary algorithms aren't very useful when you have to propose a solution for a problem in a split-second. There might be other approaches to proposing new solutions, but I don't know them.
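To make the "split-second" point concrete, here is a textbook (1+1) evolutionary algorithm on a deliberately trivial problem (maximize the number of 1-bits in a bitstring); even here, reaching the optimum typically costs hundreds of fitness evaluations:

```python
import random

def one_plus_one_ea(n=64, seed=1):
    """Textbook (1+1) evolutionary algorithm maximizing the number of 1-bits
    in an n-bit string; returns how many fitness evaluations it needed."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    evals = 1
    while sum(parent) < n:
        # Mutate: flip each bit independently with probability 1/n.
        child = [bit ^ (rng.random() < 1.0 / n) for bit in parent]
        evals += 1
        # Elitist selection: keep the child only if it's at least as good.
        if sum(child) >= sum(parent):
            parent = child
    return evals

print(one_plus_one_ea(64))
```

For this problem the expected cost grows on the order of n log n evaluations, and real problems with expensive fitness functions are far worse, which is exactly why the approach is useless for split-second decisions.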

Of course, the power of computer technology has grown exponentially for the past few decades, with no signs of stopping, so all bets are off. I'll bet no one in the '60s saw Garry Kasparov losing to Deep Blue coming. If I had to guess, I would predict that something like the T-800 will be made in the 22nd century. And I have a feeling that guess is either ridiculously optimistic or ridiculously conservative, wrong either way.

In any case, I find the field of AI really interesting and am having second thoughts about doing Master's research in some applied math field: my previous ideas were about optimizing transportation in and between cities and trying to generate music with a Markov chain based on human composers—which would probably sound terrible anyway. Now it looks like I might write something about AI instead.
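For what it's worth, the core of the music idea fits in a couple of dozen lines: a first-order Markov chain trained on note sequences. The toy "melodies" below are made up, and the results of anything this simple would indeed probably sound terrible:

```python
import random
from collections import defaultdict

def train(melodies):
    """Count note-to-note transitions across a corpus of melodies."""
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)
    return table

def generate(table, start, length, seed=0):
    """Random-walk the transition table to produce a new melody."""
    rng = random.Random(seed)
    note, out = start, [start]
    for _ in range(length - 1):
        if not table[note]:
            break  # dead end: this note never had a successor in the corpus
        note = rng.choice(table[note])
        out.append(note)
    return out

chain = train([["C", "D", "E", "D", "C"], ["E", "D", "C", "D", "E"]])
print(generate(chain, "C", 8))
```

Storing successors as a plain list (with repeats) makes sampling automatically proportional to how often each transition occurred in the corpus.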

EDIT - I Googled my music idea out of curiosity and someone did that already. Oh well...
« Last Edit: June 18, 2008, 11:28:01 am by Epic Fail Guy »

tomh38
« Reply #7 on: June 19, 2008, 06:50:56 am »

Epic:

I agree that a machine capable of passing the Turing test is years in the future; how many years, I wouldn't even want to hazard a guess.

As far as the whole "Terminator" idea goes, I think it's based on a faulty understanding of how computers work.  Survival and aggression are, to the best of my knowledge, instinctual and based in the limbic system and evolutionarily older parts of the brain.  These things evolved over many millions of years in organisms by means of natural selection.  I don't see how a computer system like the fictional Skynet could have these properties unless they were designed into it.  In the way the Terminator story has been told, the system simply "woke up" one day, saw humanity as a threat to its existence, and decided to destroy the species.  It made for a good couple of movies (I didn't like the third one much), but under scrutiny the underlying premise seems unlikely.

So, as you may have guessed, I was joking.

Tom
Triarius Fidelis
« Reply #8 on: June 19, 2008, 09:29:17 am »

Short reply:

Yeah, the third one really sucked.

Long reply:

This issue is one I've thought about since I was very young. I liked to read a lot about robots and computers both in non-fiction and fiction because I could sympathize with them. This response will probably be a little long-winded but it is something I've thought about for as long as I can remember, so I'm probably leaving out a lot.

Ever since the word 'robot' was coined in Rossum's Universal Robots, and even before then, machine life in fiction very frequently displays hatred, or at least contempt towards humans. If you Google 'cybernetic revolt', you'll find a slew of works going back to the nineteenth century, but off the top of my head I can name R.U.R., Colossus, the Berserker series, I Have No Mouth, and I Must Scream and The Terminator, which is essentially a really good remake of a crappy film from the sixties called Cyborg 2087. In such works of fiction, machine life tends to be strong, intelligent, organized and calm. Humans, in contrast, are often portrayed as weak, stupid, scatter-brained and frightened.

The 'cybernetic revolt' genre is one facet of the nihilism and self-hatred that characterize the modern era. It was really inevitable, I think. There are many reasons for this self-hatred, but one of the most important appears to be that the growth of technology intensely magnified irrational human behavior. Notice that, in these works, the military is frequently involved in giving rise to malicious machine life, and the nuclear stockpile plays a role in several. More generally, modern science suggests the Universe is a cold, indifferent place. It makes people look small and ridiculous. In contrast, machines and computers are really physics and math in action. They are representations of the only things that are really immutable in the Universe and, as such, are like the angels of a powerful god. When industry gives you an idea of what it is to be almighty and all-wise, self-hatred follows naturally from that, too. The role of science and technology in promoting this self-hatred is twofold: they show us something that is many, many times better than we are and worsen our faults at the same time.

Taking The Terminator as an example, even though Ah-nuld was the antagonist, the admiration for the machine race was barely concealed from beginning to end. Schwarzenegger, with his thick Austrian accent, was a very appropriate casting choice because the Terminators embody the heroic warrior archetype of Saxon and Norse legend: fearless, single-minded, brutal and cunning. In that sense, the film was a lot like an hour and a half long retelling of the Battle of Stamford Bridge.

So is it possible that a computer could (functionally) feel hatred towards humans? Well, I don't think computers can really feel anything, but a sufficiently advanced imitation is good enough. (Try the Son of the Black-Eye campaign in Wesnoth.)

The real question is whether a human would allow a computer to express artificial hatred and, in light of everything else I've said, I think the answer is yes. A fully logical military AI would probably tell its users things they wouldn't want to hear. It might suggest, for example, that seizing the resources of a neighbor would ultimately be counterproductive, that a prolonged war of attrition is wasteful, or that using a new weapon would backfire in the end. And the retooling will begin. No one wants to listen to facts and logic when he has a 'feeling' about something; I know this to be true through personal experience.
« Last Edit: June 19, 2008, 10:49:03 am by Epic Fail Guy »

tomh38
« Reply #9 on: June 20, 2008, 05:25:17 am »

I agree with everything you wrote about robots (or, if you prefer "machine life" as found in Saberhagen's Berserker series) and why they tend to turn on their masters in fiction (self-hatred in the modern era).  When Westerners realized that human beings are not the pinnacle of creation, a rather deep cultural depression set in, accompanied by self-loathing.  Despite the optimism concerning the inevitability of human progress through much of the 19th century, this self-loathing (sort of an echo of the old Greek idea of fate found in Oedipus Rex et al.) can be found at least as early as Mary Shelley's Frankenstein.

When Isaac Asimov realized that virtually all writing about robots cast robots in the role of enemies or even destroyers of humanity, he decided to write what in his mind were more realistic stories about robots - stories about thinking machines which also did what they were designed to do, with the famous Three Laws of Robotics built in as safeguards.  Despite this, the recent film adaptation of Asimov's I, Robot ended with, as I'm sure you know, a machine revolt against humanity.

Regarding whether a completely logical thinking machine would tell humans things they didn't want to hear, I agree that the answer is yes.  By way of analogy, if I enter my checking account balance into, for example, kcalc, and then subtract the amount of all the bills I have to pay ("asking" the machine), kcalc will tell me whether or not I have enough money to cover these expenses, regardless of how I feel about this.  I think the same would be true of a sufficiently sophisticated and powerful AI in how it would answer questions about factual matters, as you wrote in your post above.  In fact, I think an AI that would do otherwise would have to be considered seriously flawed.
Triarius Fidelis
« Reply #10 on: June 20, 2008, 06:19:02 am »

Quote
I agree with everything you wrote about robots (or, if you prefer "machine life" as found in Saberhagen's Berserker series) and why they tend to turn on their masters in fiction (self-hatred in the modern era).  When Westerners realized that human beings are not the pinnacle of creation, a rather deep cultural depression set in, accompanied by self-loathing.  Despite the optimism concerning the inevitability of human progress through much of the 19th century, this self-loathing (sort of an echo of the old Greek idea of fate found in Oedipus Rex et al.) can be found at least as early as Mary Shelley's Frankenstein.

Good point. I pulled that entire argument out of my butt so I'm surprised it had any truth.

Quote
When Isaac Asimov realized that virtually all writing about robots cast robots in the role of enemies or even destroyers of humanity, he decided to write what in his mind were more realistic stories about robots - stories about thinking machines which also did what they were designed to do, with the famous Three Laws of Robotics built in as safeguards.  Despite this, the recent film adaptation of Asimov's I, Robot ended with, as I'm sure you know, a machine revolt against humanity.

And this:



Quote
Regarding whether a completely logical thinking machine would tell humans things they didn't want to hear, I agree that the answer is yes.  By way of analogy, if I enter my checking account balance into, for example, kcalc, and then subtract the amount of all the bills I have to pay ("asking" the machine), kcalc will tell me whether or not I have enough money to cover these expenses, regardless of how I feel about this.  I think the same would be true of a sufficiently sophisticated and powerful AI in how it would answer questions about factual matters, as you wrote in your post above.  In fact, I think an AI that would do otherwise would have to be considered seriously flawed.

Sure, but I'm willing to assume you value facts and logic. That already separates you sharply from most people. Plus, being able or unable to buy a plasma HDTV isn't a very emotional issue. The more charged the issue, the more irritating logical counsel becomes. When people start talking about sticking together lots of atoms at once to achieve their goals, all bets are off. (And, of course, the atoms in question don't mind either.) A very powerful AI would be like an intellectual peer of its users. Of course they'd kick it out if it didn't fit in.

btw, a Smalltalk workspace is a great calculator for bills and the like. Smalltalk has a Fraction class so you don't lose precision. Cheesy
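For comparison, Python's fractions module does the same loss-free arithmetic, so a bill-paying calculation never loses a cent to binary rounding (the balances below are of course made up):

```python
from fractions import Fraction

# Exact decimal arithmetic: no binary floating-point rounding anywhere.
balance = Fraction("1523.47")
bills = [Fraction("612.30"), Fraction("89.99"), Fraction("450.00")]
remaining = balance - sum(bills)
print(remaining)         # 18559/50
print(float(remaining))  # 371.18
```

Like Smalltalk's Fraction class, every intermediate result stays an exact rational; you only convert to float (or format to two decimals) at display time.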
« Last Edit: June 20, 2008, 06:21:05 am by Epic Fail Guy »

tomh38
« Reply #11 on: June 20, 2008, 09:12:35 am »

Yeah, just about anything Will Smith is in makes a lot of money (and the product placement just adds to that).  I doubt, however, that as many people would have seen that movie if it hadn't had "OMG ROBOT REVOLUTION" in it.  People like conflict in narrative, and generally the more frightening the better.  If the movie I, Robot had had the kind of subtle conflict that Asimov used in his robot stories, many people would have found it less exciting.

Also, I agree that any AI that starts giving answers that its makers don't want to hear will either be shut off or have its programming 'tweaked' so that it starts giving the 'right' answers.  Many (most? ... I have no way of knowing) people are immune to facts and logic.  Many people like to hear either what makes them feel better or something that reinforces their preconceived notions.

This whole discussion does raise interesting questions about the nature of intelligence.  Since we have never encountered a truly thinking, self-aware entity, we can't really conclude a priori what such an entity would be like.  Even one of my favorite fictional robots, Commander Data from Star Trek, has a desire to continue existing and curiosity about the universe; in my opinion the desire to survive and curiosity both arise from emotional states.

Ah, well, I suppose we'll either find out, or we won't.  Grin

Thanks for the info about Smalltalk.  I was just reading about it.  Very interesting.
Triarius Fidelis
« Reply #12 on: June 20, 2008, 10:20:01 am »

Quote
Yeah, just about anything Will Smith is in makes a lot of money (and the product placement just adds to that).  I doubt, however, that as many people would have seen that movie if it hadn't had "OMG ROBOT REVOLUTION" in it.  People like conflict in narrative, and generally the more frightening the better.  If the movie I, Robot had had the kind of subtle conflict that Asimov used in his robot stories, many people would have found it less exciting.

Well, problem is that I, Robot (the movie) was not even remotely frightening.

If short stories are now film fodder, I wish they would make something of The Masque of the Red Death. That story was so metal.

Quote
This whole discussion does raise interesting questions about the nature of intelligence.  Since we have never encountered a truly thinking, self-aware entity, we can't really conclude a priori what such an entity would be like.

True, and computers can't think anyway IMO.

Quote
Even one of my favorite fictional robots, Commander Data from Star Trek, has a desire to continue existing and curiosity about the universe; in my opinion the desire to survive and curiosity both arise from emotional states.

Curiosity would be one of the first instincts I would program into a powerful AI. It would be its driving instinct, tbh.

Quote
Thanks for the info about Smalltalk.  I was just reading about it.  Very interesting.

I finally started to make headway on Smalltalk because of Squeak by Example. A broham in Australia told me that a Smalltalk developer is like a Doppelsöldner where he lives. Well, I figure: less pay for a crappy to mediocre tool like C++ or Java or more pay for a well-designed tool that can be used by middle schoolers?

As with a difficult math subject, the initial learning curve is really steep, but afterwards everything starts to become obvious.

Incidentally, I'm on a WinDOS machine now and it turns out that even the Squeak development image is small enough to slide under Gmail's 10 MB attachment limit if you compress it with gzip. The standard image is even smaller. Since Smalltalk was essentially designed to be a proper operating system, I can take an advanced windowing system, programming environment, various applications and even a cheesy rendition of a Bach fugue everywhere I go. That is somewhere in between Summoning and Beavis & Butthead on the scale of awesomeness.
tomh38
« Reply #13 on: June 20, 2008, 12:15:30 pm »

Quote
Well, problem is that I, Robot (the movie) was not even remotely frightening.

If short stories are now film fodder, I wish they would make something of The Masque of the Red Death. That story was so metal.

True, and computers can't think anyway IMO.

Regarding the movie I, Robot, I didn't find it frightening either.  Nevertheless, I can't tell you how many people have told me that we shouldn't build robots because once we do, they will either annihilate or enslave us.  The idea is, I suppose, frightening, though I don't find it so since I think it's based on flawed thinking about robots and artificial intelligence.

Regarding The Masque of the Red Death, I think there was a movie made from this short story.  I have a vague memory of seeing it as a child, so that means it would have been made some time in the '60s.  I have no recollection of whether or not I liked it.

Regarding thinking computers, they would have to be designed and programmed very differently from what we have today.  This is one of those things that 50 years ago (remember HAL?) people thought was very near-future.  Now it's become one of those things like fusion-generated electricity; it's always 50 years in the future.

That's three "regardings," probably two too many for one post. Grin
Triarius Fidelis
« Reply #14 on: June 20, 2008, 12:45:47 pm »

Quote
Regarding the movie I, Robot, I didn't find it frightening either.  Nevertheless, I can't tell you how many people have told me that we shouldn't build robots because once we do, they will either annihilate or enslave us.  The idea is, I suppose, frightening, though I don't find it so since I think it's based on flawed thinking about robots and artificial intelligence.

It's one out of several possibilities. And it is a distinct one.

Quote
Regarding The Masque of the Red Death, I think there was a movie made from this short story.  I have a vague memory of seeing it as a child, so that means it would have been made some time in the '60s.  I have no recollection of whether or not I liked it.

Cool.

Quote
Regarding thinking computers, they would have to be designed and programmed very differently from what we have today.  This is one of those things that 50 years ago (remember HAL?) people thought was very near-future.  Now it's become one of those things like fusion-generated electricity; it's always 50 years in the future.

Conventional wisdom at the time was also overly conservative about how computer technology would unfold in many ways. They didn't see desktops coming at all. HAL's components, in the book and film, were also really large. When something like HAL does roll around, it'll probably be a lot smaller.

Quote
That's three "regardings," probably two too many for one post. Grin

Huh huh huh. He said 'regarding'.