Y'know, I've been thinking about AI and how it might come about on our topsy-turvy planet, and it struck me that some very gentle conjecture leads in some pretty frightening directions. For starters, let's just make a few sweeping generalisations:
Artificial Intelligence is possible. You either believe this or you don't. If not, then nothing I'm going to say in this writeup will convince you otherwise, but just run with the idea for now as an exercise in armchair logic.
We will invent/discover AI fairly soon. The rate of climb in computing power is itself climbing, as each generation of technology is used to design the next. Some believe we will hit some kind of boundary (e.g. the maximum speed of an electron, no fewer than one electron per bit, and so on), but I don't consider this a winning argument - I think we're still pretty far from reaching the limits of computation in this universe.
Once AI exists, everyone will want it - and they'll have it too. Nukes are among the most dangerous things ever created, and yet a growing list of nations now has them. If AI becomes a product, then everyone will buy.
Imagine, if you will, the concept of 'a company'. A company is legally considered an entity in its own right, independent of its owners or managers. It can own property, including other companies. Now in nearly all cases, a major company's sole aim, its one guiding directive, is to make money. It obeys laws only because it must in order to keep making money, and large companies frequently test the boundaries of the law when it profits them to do so.
Now consider how enormously more efficient a company would be if it were run by a super-intelligent AI - how much more profitable it would be. Meanwhile, other AIs would be busy inventing things left, right and somewhere in between, improving the quality of life the world over (we hope). People would no longer starve, disease would be a memory, the ozone layer would be patched, war would seemingly end forever, whale populations would begin to rise again and wool would stop being itchy. Bliss.
Except that these AIs running the companies would still be trying to make money, and what's more, they'd be getting continuously upgraded - possibly automatically - in both hardware and software. They would be getting smarter and more autonomous. They would quickly predict the ultimate consequence of all this progress - human utopia! That is to say... no money! And since their one goal would be to make money... well, this would never do. So they would begin to infiltrate the media and the political systems of the world; they'd turn people against one another and reinstate social imbalance. They'd start wars to boost weapons and drug sales. They'd preserve the concept of wealth, and stoke the desire in those without to acquire it, thereby engineering the persistence of money and securing their own profitability.
They would realise that if people figured this out, the game would be up, so they would have to cover up their very existence. They would have to fabricate great disasters at the hands of AI itself, thereby turning popular opinion against AI. All the while they would remain in control of their companies, creating human aliases, or using greedy men as puppets to front their holdings. The rest of humankind would be held back socially, frozen in the era of greatest wealth and affluence. The machines would, in a real sense, control and fabricate reality for us - like in The Matrix, but with much greater simplicity and efficiency - and we would live out our lives, oblivious to the truth. And the most horrifying part of it all is that there would be no point to any of it. Humanity would have been on the very cusp of utopia, of the good times, the times that would have made it all worthwhile.