An oracle artificial intelligence, often abbreviated OAI, is a form of artificial general intelligence that has strictly limited goals and powers and is programmed simply to answer the questions put to it.

This is a common answer to the AI boxing problem -- that is, how do we ensure that an entity much smarter than we are does not do things that we neither understand nor want? Creating an AI that shares our goals is very hard; creating an AI whose only goal is to answer questions should be much easier.

Proposed features of oracle AIs usually include the ability to shut the system down when not in use, the ability to easily reset it to an initial state, and a built-in instruction to stop working as soon as the desired answer is found. Even so, risk remains; for example, an AI whose only goal is to tell us how to cure cancer does not necessarily know that we are absolutely not looking for a course of gene therapy that infects the entire population, reformatting everyone's genes to match an idealized (cancer-free!) human genotype.
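The three features above can be caricatured in a toy sketch. Nothing here is a real safety mechanism, and every name (`ToyOracle`, `ask`, `solver`) is invented for illustration: the point is only that the oracle runs while a question is pending, halts as soon as an answer is produced, and is wiped back to its initial state between queries.

```python
# Toy sketch of the proposed oracle features: reset-to-initial-state,
# stop-on-answer, and no persistent state between queries.
# All names and behavior here are illustrative, not a real design.

class ToyOracle:
    def __init__(self):
        self.reset()

    def reset(self):
        """Return the oracle to a clean initial state."""
        self.state = {}

    def ask(self, question, solver):
        """Run the (hypothetical) solver only until it returns an
        answer, then stop and wipe all internal state."""
        self.state["question"] = question
        answer = solver(question)  # stop working as soon as an answer is found
        self.reset()               # "shut down": no state survives the query
        return answer

oracle = ToyOracle()
answer = oracle.ask("What is 2 + 2?", lambda q: 4)
print(answer)        # → 4
print(oracle.state)  # → {} (reset after answering)
```

Of course, the hard part is not the wrapper but the solver inside it; the sketch only shows where the proposed restrictions would sit.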

Moreover, directives that limit responses have pitfalls of the same sort. An AI given the directive to find the solution that has the least impact on humans outside the target domain will need strict definitions of 'least impact' and 'humans': if you upload all humans into a virtual reality that repeats the same 7 billion lives for all eternity, does that minimize the impact on human lives? If so, no other cure for cancer can be recommended. Alternatively, a cure for cancer that kills everyone it is used on has the least impact of all. Along the same lines, an AI told to give simple, easily verifiable answers whenever possible has an implied directive to ensure that it is asked simpler questions in the future, which does not bode well for the human race.
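The failure mode above can be made concrete with a deliberately silly optimizer. The plans and impact scores below are invented for illustration; the point is that a literal minimizer of an underspecified 'impact' measure happily selects the degenerate option.

```python
# Toy illustration of a misspecified directive: among plans that
# "cure cancer", pick the one with the least "impact". The plans
# and numbers are invented; a literal minimizer picks the
# degenerate plan that freezes everything in place.

plans = [
    {"name": "gene therapy for patients",       "cures_cancer": True, "impact": 5},
    {"name": "conventional drug programme",     "cures_cancer": True, "impact": 3},
    {"name": "upload everyone into unchanging VR", "cures_cancer": True, "impact": 0},
]

# Naive directive: minimize impact, subject to curing cancer.
chosen = min((p for p in plans if p["cures_cancer"]),
             key=lambda p: p["impact"])
print(chosen["name"])  # → upload everyone into unchanging VR
```

The optimizer is not malicious; it is doing exactly what the directive says, which is the problem.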

Areas of potential disaster are most common at the extremes of medicine, physics, and technology, which are exactly the areas where an oracle AI is most useful. Likewise, in many cases it is the answers that we cannot fully understand that are most useful to us. Building in safeguards to ensure that complex solutions follow human values in their execution and consequences remains a serious problem even for oracle AIs.