Artificial general intelligence (AGI) refers to an artificial intelligence that can function intelligently across many domains. The most common benchmark is that AGI is achieved when an AI can perform any intellectual task that a human can.

AGI is important because it is the point where AIs move into a new level of usefulness. We already have a number of computer programs that are very useful, many of which meet simple definitions of AI. These programs are sometimes called applied AI or narrow AI; they may technically be artificial intelligence, but they are targeted at specific tasks (canonically, playing chess). AGI is the point where an AI can do things at least as well as we can -- and in many cases, that will mean better than we can -- including selecting which problems to work on, identifying good enough solutions, and implementing them.

As an example, we currently have programs that can play chess better than a human player, but which do not know to question what's happening if we set up a game in which they are missing a bishop. An AGI should know to question such oddities, and moreover should be able to learn to play chess without first being programmed specifically for it, design other games approximately as fun as chess, and determine when it is appropriate to stop playing games and start working on a cure for cancer.

AGI is not the same as being able to pass the Turing test, although any AGI would probably be able to pass the Turing test (because that is something that humans can do). There is some debate as to what abilities are required for an AGI, but they include the ability to use reason to solve problems, make decisions under uncertainty, and represent knowledge. As a matter of practicality, an AGI should also communicate in natural language at human levels and understand human values -- as much as one can. It is also generally presumed that an AGI would have sensory systems paralleling those of humans, although what that means in terms of qualia is an open question.

Of course, being an intelligent agent with good problem-solving abilities doesn't guarantee that an AI will be anything like a human; even with the additional qualities of natural language, morals, and traditional sensory inputs tacked on, there is a good chance that an AGI will act more like an alien than like a human. This gives rise to the problem of making sure that the AGI is friendly to humans.

Additionally, it is often held that any AI approximating this level of intelligence would be likely to undergo a hard takeoff. In a strong sense, AGI is the same as superintelligence. Even current AIs can have calculation skills, memory capacity, focus, and concentration greater than a human's. Whatever might be missing from machine intelligence, as soon as it is developed the machine will in fact be able to solve more problems than humans can -- unless humans intentionally hobble it.

On the other hand, AGI is also not a prerequisite for significant AI risk; an advanced AI might be able to program grey goo or code a virus without the ability to communicate fluently in natural language or visually decode captchas... or even understand chess.

Despite these issues, AI research is moving slowly towards AGI, and is likely to reach it eventually. Predictions for the completion of an AGI usually range from about 2045 to 2150. Whatever happens after AGI arrives, it is likely to be very different from what passed before.
