
The strong AI postulate is the assumption that artificial intelligence (AI) is possible: true, complete artificial intelligence that is equivalent, or even superior, to the intelligence demonstrated by human beings. In essence, it says that intelligence is not a function of the material it is made of: something built from silicon and semiconductors can process information just as well as something built from organic materials, and there is nothing inherently unique about the human brain that cannot be reproduced. This rules out any soul or similar "spirit" being necessary for intelligence.

Whether this is actually true has yet to be determined, and it will likely remain unknown for many years. The strong AI postulate is nonetheless often invoked in futurist, transhumanist, and extropian discussions, since many proposed technologies depend on it. For example, uploading a consciousness would require it to be true: moving your consciousness from an organic body to a machine presupposes that a "machine" can be intelligent.

The first part is rather straightforward, even given the difficulty of defining "intelligent": can a device, assembled out of matter, be intelligent in the same sense that a human being is intelligent?

See exhibit A, a typical human being: (s)he is made entirely of matter, and is by definition "intelligent in the same sense that a human being is intelligent". Therefore it is possible, at least in principle.

If you claim that "awareness", not intelligence, is the key, this changes nothing: just replace "intelligent" with "aware" throughout.

For the second part: Could intelligence be implemented using other materials? Or to put it differently, is there anything innately special about the materials and structures used to build human beings? Can it be done in silico?

I'm inclined to believe that no, there isn't: our design just happens to be an adequate solution to the problem of consciousness that nature hit upon, and there are many other designs as good or better.

And if there is something special about our design, sooner or later we will work out what that special thing is, and replicate it.

But would a consciousness implemented in different materials be indistinguishable from ours? I would think that this is a difficult, maybe even impossible, and in any case useless goal. Why reimplement all the quirks and failings of old hardware on a new platform?

In short, the postulate says that intelligence is not an emergent property of carbon specifically. I think this is so, mostly because I can't see any good reason why carbon would be the only material intelligent beings can be made from. Silicon should eventually be usable for this, or perhaps other materials; nanotechnology will help a lot here.

What we are talking about here isn't intelligence, but the cause of intelligence, which is awareness. We KNOW we can create machines that can solve problems. But are those machines actually aware? Do they actually know what they are doing? Or are they like sand falling from one's hands, which may make lovely patterns on the floor, and may move, but is dead, cold, and devoid of any intelligence?

To put it another way, can a machine observe?

If we can create a machine that can do that (not just record), then the rest of AI is a piece of cake in comparison.
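The earlier point that problem solving alone need not imply awareness can be made concrete with a trivial sketch (a hypothetical illustration, not from the original writeups): a breadth-first search finds its way through a maze purely mechanically, with no notion of what a maze, a wall, or a goal is.

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a grid of 0 (open) and 1 (wall).
    The procedure 'solves the problem' by blindly expanding cells
    until the goal turns up; nothing here observes or understands."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path  # shortest path as a list of cells
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

maze = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(solve_maze(maze, (0, 0), (0, 2)))
```

The solver succeeds at its task every time, which is exactly the sand-patterns point: competence at a problem tells us nothing about whether anything is aware of solving it.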
