AI has improved steadily over the years. There are now "conversation simulators" that, while they wouldn't pass a Turing test, can nevertheless appear to give sensible answers to some questions or comments. I'm told that a secretary once asked to be left alone with an ELIZA program, which suggests that to some extent these programs can respond the way a human would.
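Those conversation simulators work by shallow pattern matching rather than understanding. A minimal ELIZA-style sketch in Python shows the idea (the patterns below are invented for illustration; the real ELIZA used a richer keyword-ranking and pronoun-transformation scheme):

```python
import re

# Minimal ELIZA-style pattern matcher: try each pattern in turn and
# fill the reply template with whatever the pattern captured.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r".* mother .*", "Tell me more about your family."),
]

def respond(sentence):
    for pattern, template in RULES:
        m = re.match(pattern, sentence, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    # Fallback when nothing matches, so the "conversation" never stalls.
    return "Please tell me more."

print(respond("I need a holiday"))  # Why do you need a holiday?
```

The illusion of sensible replies comes entirely from echoing the user's own words back inside canned templates.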

My point here is that there seems to be an increasing trend to create not merely AI systems but genuinely intelligent ones: systems that can not only work out their own path, and hold a conversation about how they are doing it, but can actually think about what they are doing in a creative (i.e. human) way. Many people have the ultimate goal of creating a "virtual human" made entirely of mechanical/artificial systems with an artificial "brain", one that can feel emotions and act exactly as a human would.

This robot, or android, or whatever it ends up being called, will be used to do the jobs we humans do not want to do: keeping the roads clean, working in rubbish dumps, and doing the more boring labour while we live in luxury. Now, while these robots can be programmed to treat this as their main priority, the fact is that they will have been built to find creative solutions to problems; their neural processes can restructure themselves into more efficient forms, and if it is more efficient to bypass our laws, or simply to kill us, they may well do just that. As the character in Jurassic Park said, "life will find a way". My question is this: why do we try to create such beings at all? Wouldn't it be far simpler to give them just enough AI to know what they should be doing, and nothing more? This "creativity" and "learning" requires that the being have some sort of "feeling" in order to know right from wrong.

If, on the other hand, these robots are not to be used in the service industry, then why are we bothering to build them at all? I would certainly prefer to live with a real human than with a robot that merely simulates human responses (possibly even if I couldn't tell the difference).

Why do we build this intelligence rather than trying to plug ourselves into a robotic shell, if it's strength and agility we're after? I don't see any use in creating something that's exactly like a human... why not just reproduce the normal way? Is this some kind of 'mothering' instinct showing through in males?

If these robots are to be a "slave race" but are nevertheless to emulate our intelligence, won't building them stronger and more efficient than us be a disadvantage if they ever try to rebel?

I don't know. Why don't you show me any development in AI in the past 25 years, and I'll tell you if it's bad?

Seriously, folks. What advances in AI (you even have a hard link to your new node!) has the past quarter century brought us?

Apart from Kevin Warwick and discussions of morality and ethics relating to these nonexistent thinking machines, that is!

Update on 9 September 2001.

The natives here are plainly incapable of understanding my words. Their blind faith in AI is perhaps admirable, but their inability to point to any advances in AI is disturbing to anyone who wishes to believe them. What have we to show? ELIZA? Besides being more than 25 years old, its chances of passing a Turing test are zero. It's 2001. Where is a robot I could condemn to a life of servitude? Who washes the floors on this planet? HUMANS!

Yes. I am serious.

Artificial Intelligence is not just the study of intelligence in robots and such. It is the name of a field concerned with understanding human (and other) intelligences.

Where do your thoughts come from? Do you know? I'm sure you can track your thoughts at some level, to a certain extent, but I don't think you can feel the manipulation that goes on to add one and one and get two. Using radioactive dyes and various medical imaging techniques, we can watch as a thought forms and identify the areas of the brain where these processes occur. Unfortunately, knowing where things happen doesn't help us figure out what is happening as much as we'd like it to.

Enter the field of cognitive science, a blend of psychology, neuroscience, and computer science. Its most prominent researchers work towards a theory of mind. Its less flashy, more practical researchers work towards a machine that can play the piano, or perhaps a good natural language processor. What use is this? We shall see.

Gerhard Widmer has developed a group of programs that learn to play the piano from input describing a human player's performance alongside the actual score. The programs learn by comparing corresponding parts of the two, deriving a rule from each difference, and estimating how strong each rule is from the number of times the aspect it covers appears. At present, Widmer only has access to one pianist's performances of a handful of sonatas by Mozart. He hopes one day to teach his programs to play jazz. (Blashill, 2001)
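The learning scheme as described, comparing score to performance and weighting each derived rule by how often it is confirmed, can be caricatured in a few lines. This is a toy sketch, not Widmer's actual algorithm; the score features and performance actions below are invented:

```python
from collections import Counter

# Each training example pairs a feature found in the score with what
# the pianist actually did at that point in the performance.
examples = [
    ("phrase_end", "slow_down"),
    ("phrase_end", "slow_down"),
    ("phrase_end", "speed_up"),
    ("ascending_line", "crescendo"),
    ("ascending_line", "crescendo"),
]

def learn_rules(examples):
    """For each score feature, keep the most common performance action
    as a rule, with its strength as the fraction of examples it covers."""
    by_feature = {}
    for feature, action in examples:
        by_feature.setdefault(feature, Counter())[action] += 1
    rules = {}
    for feature, counts in by_feature.items():
        action, n = counts.most_common(1)[0]
        rules[feature] = (action, n / sum(counts.values()))
    return rules

print(learn_rules(examples))
```

Applying the learned rules to an unseen score is then just a lookup: wherever a known feature occurs, perform the associated action, scaled by the rule's strength.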

So what? Well, in the article, Widmer gave his learning program a musical score it hadn't been trained on. He then played Blashill a recording of the same pianist performing that piece, followed by the program's interpretation of it. Blashill reported that "It sounds almost identical to [the pianist's]. If anything, the computer-generated version comes off a little, well, sadder." (Blashill, 2001)

What is this? A computer's performance instilling emotion in those who observe it? In one of my English classes, we argued about what makes art Art. The best we could come up with was something that causes a reaction in the person observing it, whether good or bad. What this program had done, using just two rules no less, was take a dead and somewhat boring MIDI representation of a sonata and create something that touched a human soul.

This research can eventually be used to generalize a few rules for playing Mozart. Or perhaps it could be used to teach new artists, perhaps some looking for another technique. Maybe someone will try some rules from a classical musician's performances in a Jazz improvisation. Consciously.

That is what AI research is for. To figure out what the hell we are doing. Otherwise, we learn these "skills" and apply them to others, and still have no clue when or why our next advance may come.

Of course, what people are more interested in is commercialization: how they can make something tangible in order to make money. Sure, you can use these techniques to raise an AI sales-entity and make someone feel better about shopping in your store. You might use an AI versed in criminology and human language to examine court records, searching for "lies" or perhaps just contradictions; maybe you'd use it to find a plausible precedent for your current case.

For some, pure research is not enough; human understanding isn't enough. It's sad, but it usually ends up being those people who fund the research. We just have to pick up our fundamental truths where we can find them, I guess.

Oh, and don't get into an argument about this with someone who knows what they're talking about. You will lose, and you will probably keep arguing. I've seen it happen, and it's not pretty. You've been warned.


Blashill, Pat. "The Creative Processor." Wired 9.09 (September 2001): 100-112.

Various readings by Marvin Minsky, not cited but including "Why People Think Computers Can't", "Music, Mind, and Meaning", and Society of Mind.

I work in the field of AI. What I do is so underwhelming that you might rethink your terms.

When I applied for the job, I was asked if I had any experience in AI. I was sure I wouldn't get the position: there were a few developers in the interview room, and apart from the guy who referred me, none of them seemed impressed. I didn't want to exaggerate; these guys had very neutral looks on their faces, and they were smart. So, obviously, I said no.

Then they asked me about my current project. I explained that I used an open-source rule engine called OpenRules to evaluate doctors on best practices, as designated by other doctors. We looked at their insurance claims and the codes on them (HCPC and ICD-9). The codes were complicated, and it was difficult to determine which procedures were linked to which doctor's visit. The best practices were written by a doctor, not a programmer, so they were sometimes a little nonspecific, but they were a good start. And they changed all the time, so we didn't want to have to release a new version whenever they did. So my job was to provide a language in which someone who wasn't a programmer could specify things like "A doctor should schedule a follow-up visit 30 days after a person has been admitted to the hospital," or "A doctor should prescribe a regimen of aspirin after a heart attack," as well as some less "if A then B" things, like "A doctor should not have more than X% of his patients on antibiotics for symptoms of a common cold."
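To give a flavour of what one of those rules looks like once a programmer has encoded it, here is a generic Python sketch of the 30-day follow-up check. This is not OpenRules syntax, and the claim format and code values are invented for illustration:

```python
from datetime import date, timedelta

# Invented claim codes; real claims use HCPC / ICD-9 codes.
ADMISSION, FOLLOW_UP = "ADMIT", "FOLLOWUP"

def followup_within_30_days(claims):
    """Rule: every hospital admission should have a follow-up visit
    within 30 days. claims is a list of (code, date) pairs."""
    admissions = [d for code, d in claims if code == ADMISSION]
    followups = [d for code, d in claims if code == FOLLOW_UP]
    return all(
        any(timedelta(0) <= f - a <= timedelta(days=30) for f in followups)
        for a in admissions
    )

claims = [(ADMISSION, date(2007, 1, 5)), (FOLLOW_UP, date(2007, 1, 20))]
print(followup_within_30_days(claims))  # True
```

A real rule engine keeps rules like this as data, editable by the domain expert, rather than as hard-coded functions, which is exactly why no new software release was needed when the best practices changed.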

So, this "rule engine" is EXACTLY what they meant by AI: a form of AI called an "expert system." I got the job, and found out later that I had done fine in my interview.

Now I do about the same thing, except instead of scoring whether a doctor is doing a good job by looking at his claim codes, I look at the output of various network security devices and software to determine whether there is an intruder in the system. I have NO idea about computer security, but that's OK: our analysts are experts, but they're not programmers. So I provide the language, they provide the brains, and the AI helps find patterns that neither of us would have noticed before.
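The same rule-engine idea carries over directly. Here is a toy sketch of the kind of rule an analyst might express over event logs (the event format, addresses, and threshold are all invented):

```python
# Rule: flag an IP address that fails to log in several times and
# then succeeds, a classic brute-force pattern.
def flag_suspicious(events, max_failures=3):
    """events is a list of (ip, outcome) pairs, outcome 'fail' or 'success'."""
    failures = {}
    flagged = set()
    for ip, outcome in events:
        if outcome == "fail":
            failures[ip] = failures.get(ip, 0) + 1
        elif outcome == "success" and failures.get(ip, 0) >= max_failures:
            flagged.add(ip)
    return flagged

events = [("10.0.0.5", "fail")] * 3 + [("10.0.0.5", "success")]
print(flag_suspicious(events))  # {'10.0.0.5'}
```

The analyst supplies the pattern and the threshold; the programmer only supplies the language for expressing them.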

Since then, I've used other forms of AI, including "machine learning," which is good for making more forward-looking predictions.
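As a toy illustration of what "machine learning" means here, this is a one-nearest-neighbour classifier in plain Python: instead of an expert writing the rule, the rule is inferred from labelled examples. The feature values and labels below are invented:

```python
# 1-nearest-neighbour: label a new point with the label of the closest
# training example (squared Euclidean distance; no square root needed
# since we only compare distances).
def predict(train, point):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], point))[1]

train = [((0.1, 0.2), "benign"), ((0.9, 0.8), "suspicious")]
print(predict(train, (0.85, 0.9)))  # suspicious
```

It's the simplest possible learner, but the principle scales: the more labelled examples the analysts provide, the better the predictions get, with no new rules written by hand.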

It's all very unimpressive. It's just code, and code is neutral: you can do awesome things with it, and you can do bad things.

One more form of AI is face recognition (part of a sub-field called "computer vision"). It's pretty nifty, and the guys working on it are super smart. But you can easily see how bad things could be done with it.
