Stick a whole bunch of AI constructs into some program space and have them play the iterated prisoner's dilemma against each other. Those that adhere to The Golden Rule, cooperating unconditionally, will quickly get screwed.

Interestingly enough, the most successful algorithm to employ in this arena is a combination of the biblical and satanic rules:

The first time you meet another construct, do unto it as you would have it do unto you. Thereafter, do unto it as it last did unto you.
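For the record, this is the strategy Axelrod's computer tournaments made famous as Tit for Tat, and it fits in a few lines. A minimal Python sketch of the rule (the 'C'/'D' move encoding and the history convention are my own assumptions, not anything from the book):

```python
def combined_rule(history):
    """The biblical/satanic hybrid, better known as Tit for Tat.

    `history` is the list of moves the *other* construct has made
    against us so far: 'C' for cooperate, 'D' for defect.
    """
    if not history:
        return "C"       # first meeting: the biblical rule (cooperate)
    return history[-1]   # thereafter: the satanic rule (echo their last move)
```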

Constructs that follow this algorithm will always lose slightly to the purely cynical/satanic versions in head-to-head play, but the gains they make when interfacing with fellow constructs that share the same philosophy far exceed what the cynical constructs get when they interact with each other.
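The arithmetic is easy to check. Here's a quick sketch using the standard Axelrod payoffs (mutual cooperation 3, mutual defection 1, temptation 5, sucker's payoff 0); the round count and function names are illustrative choices of mine, but any payoffs ordered the same way tell the same story:

```python
# Payoff matrix: (my score, their score) given (my move, their move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """The combined rule from the sketch above."""
    return history[-1] if history else "C"

def always_defect(history):
    """The purely cynical/satanic construct."""
    return "D"

def play(a, b, rounds=10):
    """Pit two strategies against each other and total their scores."""
    seen_by_a, seen_by_b = [], []   # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(seen_by_a), b(seen_by_b)
        gain_a, gain_b = PAYOFF[(move_a, move_b)]
        score_a += gain_a
        score_b += gain_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))    # (9, 14): loses slightly head-to-head
print(play(tit_for_tat, tit_for_tat))      # (30, 30): cooperators prosper together
print(play(always_defect, always_defect))  # (10, 10): cynics stagnate together
```

Losing 9 to 14 in any single pairing doesn't matter much when every meeting between two cooperators yields 30 apiece and every meeting between two cynics yields only 10.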

When I read this in Metamagical Themas, Hofstadter drove the moral home far more eloquently than I have here.