There is a raging debate within Artificial Intelligence. Some researchers hold that the objective of AI is to build sentient beings of intelligence comparable to humans; this is the strong AI position. The most ardent strong AI proponents further believe that this must be accomplished by mirroring the brain's function on computers.
The weak AI position holds that AI should develop technologies that exhibit facets of intelligence, without necessarily aiming to build a completely sentient entity. Weak AI researchers see their contributions in things like expert systems for medical diagnosis, speech recognition, and data mining. These use "intelligent" models, but they do not help create a sentient entity.
Thanks to m_turner for the inspiration.