The essential question of Computer Science is whether it is possible to duplicate the process of some specific type of thinking. Its most primitive form is simply: is it possible to mechanize process X? The question is usually answered by decomposing the process into component parts. For example, multiplying numbers can be performed by repeated addition, and a search through a text file can be done as a series of comparisons until a match is found. By this method, many previously "intelligent" actions are now routinely performed mechanically by computers: pattern recognition, mathematical proofs, even playing chess.
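The two reductions mentioned above can be sketched in a few lines of Python. This is only an illustration of the decomposition idea (the function names and details are my own, not anything from standard practice):

```python
def multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only repeated addition."""
    total = 0
    for _ in range(b):   # add `a` to the running total, `b` times
        total += a
    return total

def find(text: str, pattern: str) -> int:
    """Naive substring search: nothing but a series of character comparisons.

    Returns the index of the first match, or -1 if the pattern is absent.
    """
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        # compare the pattern against the text, one character at a time
        if all(text[i + j] == pattern[j] for j in range(m)):
            return i
    return -1

print(multiply(6, 7))             # 42
print(find("artificial", "fic"))  # 4
```

Neither function involves anything we would call understanding; each "higher" operation is rebuilt entirely from more primitive steps, which is exactly the kind of mechanization the question asks about.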

Artificial Intelligence is one aspect of this essential question: is what we call intelligence something that can be reduced to constituent pieces? This question, however, is not nearly as simple as those previously attacked in computer science, for a simple reason: intelligence seems not only undefined, but undefinable. Even if "I know it when I see it," it remains scientifically undefined (see debates ranging from when death occurs to whether animals have rights). Intuitively, the questions are: does the Turing test test the program, or the user? Does writing require intelligence? Is free will an aspect of what we call intelligence?

Depending on the test used, computers are already intelligent (since they play chess and pass Turing tests) or can never be intelligent (since they lack a soul and are programmed, unlike "true" intelligence). This central ambiguity in Artificial Intelligence leads to many questions. For a decision to be "intelligent," it may need to be understandable. Or perhaps the exact opposite: any fully understood decision is de facto not a product of intelligence, but rather mechanized pattern following, even if done by a human. This lack of a true goal has led many AI researchers to make strange and divergent statements about what research in the field intends to do.

At the base of all the confusion is the fact that the question is not a scientific one, but a psychological and/or philosophical one. Whether intelligence can be emulated can only be answered by research in those areas. The goal of "Artificial Intelligence," then, is merely to simulate intelligence, settling for seeming intelligence. This, of course, it has already done in many areas, such as game playing, problem solving, and computational tasks. In other areas it has failed, miserably when not spectacularly: writing, autonomous thought, and self-directed advances in any field, particularly mathematics or art.

The basis of all success and failure is simple: any process we actually understand, we can reproduce in computers. If we can identify what a "good chess move" is, then a computer can produce it. Since, however, we seem unable to identify precisely what "good character development," "plot structure," or "autonomous thought" is, we cannot expect the computer to do it for us. It seems straightforward, then, that the real failure of Artificial Intelligence is twofold: first, it is merely an application of previously understood ideas from general algorithmic computer science, and has done nothing truly new; second, in a very real sense, it is completely goal-less, and therefore unable to succeed at defining the phantoms it chases.

Sorry if I seem whiny, but I really enjoy written feedback, even if it accompanies a downvote. Commenting through softlinks is kind of annoying, hurts the nodegel, and doesn't let me know which point or points people disagree with so that I can, possibly, respond to them.