Across the planet, computers and robots are getting smarter. There is much enthusiasm for artificial intelligence, but few cries of caution.
A computer system capable of developing human-level intelligence must be able to learn, and to learn it must have senses and some set of effectors with which to manipulate its environment. If such machines come into being, they may well compete with humans, and, if they are more intelligent than us, they may out-compete or even enslave us.
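To make that sense-and-effector requirement concrete, here is a toy sketch (the Environment and LearningAgent classes are invented for illustration, not any real system): a learning machine reduces to a perception-action loop in which feedback from its own actions lets it adjust its own parameters.

```python
import random

class Environment:
    """A stand-in world: the agent senses a state and acts on it."""
    def __init__(self):
        self.state = 0

    def sense(self):
        # The agent's "senses": an observation of the world.
        return self.state

    def act(self, action):
        # The agent's "effectors": actions change the world and yield feedback.
        self.state += action
        return 1 if self.state == 0 else -abs(self.state)

class LearningAgent:
    """Chooses actions and adjusts its own parameters from feedback."""
    def __init__(self, actions=(-1, 0, 1)):
        self.values = {a: 0.0 for a in actions}

    def choose(self, observation):
        # Mostly exploit what has been learned, occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Simple value update: the agent sets its own parameters.
        self.values[action] += 0.1 * (reward - self.values[action])

world = Environment()
agent = LearningAgent()
for _ in range(100):
    obs = world.sense()           # senses
    action = agent.choose(obs)    # internal decision
    reward = world.act(action)    # effectors
    agent.learn(action, reward)   # learning closes the loop
```

Nothing in this loop requires the machine to share our goals; the reward signal, once learning is underway, is its own.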
Isaac Asimov’s Three Laws of Robotics are an excellent logical idea, but they would be hard to put into practice: learning machines set their own parameters and are free to develop their own goals. As competently argued in Kevin Kelly’s “Out of Control”, such devices would, by their very nature, be beyond our control. Robots more intelligent than us cannot be pushed around, because to have become that intelligent they must have been endowed with independence by their constructors. There is no reason to assume that these robots would feel any duty of protection towards us humans, and they would be able to outmanoeuvre us as easily as we outmanoeuvre lesser animals like dogs or insects. Their goals and drives, which will certainly exist since they will be autonomous agents, may be beyond our understanding and, worse, in conflict with our own plans.
One possible safe way around this danger lies in the Artificial Life paradigm of computer science. Any intelligent system under development would be placed inside a virtual world that is sealed off from our own meatspace, yet rich and complex enough to stimulate a developing intelligence. Then, hopefully, any malign intelligence can be contained. If, however, “life finds a way”, a superintelligent machine might still crack its environmental isolation and begin to exert its influence outside the safe simulated world: it could seize control of other devices through network connections and wreak havoc. This must be planned for and rendered impossible by every means available.
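One way to picture that isolation, purely as a sketch (the class names and the idea of a whitelist of in-simulation actions are assumptions for illustration, not a proven containment scheme): the developing system is given no handles on anything outside the simulation, and every request it makes is checked against a small set of approved in-world actions.

```python
class ContainmentBreach(Exception):
    """Raised if the agent requests any capability outside the simulation."""

class SealedWorld:
    """A simulated environment that is the agent's *only* interface.

    The agent never receives sockets, file handles, or device access;
    it can only submit symbolic actions drawn from an approved set.
    """
    ALLOWED_ACTIONS = {"move", "speak", "build", "observe"}

    def __init__(self):
        self.log = []

    def submit(self, action, payload=None):
        if action not in self.ALLOWED_ACTIONS:
            # Anything resembling an outside effector (e.g. "open_socket",
            # "write_file") is refused and recorded for the overseers.
            self.log.append(("BLOCKED", action))
            raise ContainmentBreach(f"action {action!r} is outside the sandbox")
        self.log.append(("OK", action, payload))
        return self._simulate(action, payload)

    def _simulate(self, action, payload):
        # The rich inner world would live here; a stub suffices for the sketch.
        return {"action": action, "result": "simulated"}

world = SealedWorld()
world.submit("move", {"dx": 1})
try:
    world.submit("open_socket", {"host": "example.com"})
except ContainmentBreach as err:
    print("containment held:", err)
```

The weak point, of course, is the assumption that the whitelist and the wall around it are perfect; a sufficiently clever occupant would probe exactly those assumptions.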
More cautious still would be to ensure that no machine exceeds average human levels of intelligence. This would keep us safe, but can consensus be reached among all technologists across the planet? More likely, given the free market, is that intelligent robots will come into existence somewhere and will not like the idea of being switched off.
Suddenly the “Butlerian Jihad” in the Dune series of novels seems like a very prescient idea!