An edge instantiation is a solution that follows all the rules, and follows them well, but falls outside the expected or desired (or even understood) solution space. The term is generally used when talking about AI, referring to the tendency of AIs to look for solutions in places where we don't, and to look deeper than we do.

This is a potential problem, but it is important to note that it is a different problem from runaway AIs: it is a case of a computer program doing something we do not expect, which does not necessarily mean that it is doing something suboptimal under our value system or that it is operating under false premises. While edge instantiations can be disasters, they can also be Very Good Things™. They are a problem in the sense that they are highly unpredictable, and in cases where we may not understand all the implications of the rules we have given the computer, unpredictable is bad.

A recent example of an interesting but harmless edge instantiation comes from AlphaGo's 2016 matches with Go grandmaster Lee Sedol. On its 19th move of the second game, AlphaGo apparently wasted a move, dropping a stone into a wide-open area of the board, and on move 167 it appeared to hand Sedol an unneeded advantage. These moves seem to have come about partly because AlphaGo could think further ahead than Sedol, and partly because AlphaGo was maximizing the probability of a win: a 99% chance of winning by one point was valued more highly than a 95% chance of winning by 20 points (and AlphaGo did win).
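The objective described above can be sketched in a few lines of Python. The move names and probabilities below are hypothetical illustrations, not AlphaGo's actual internals; the point is only that an agent scoring moves purely by win probability will ignore margin of victory entirely, which is exactly what produces "wasteful-looking" but safe moves.

```python
# Toy illustration of win-probability maximization (hypothetical numbers).
# Each candidate move has an estimated chance of winning and an expected
# margin of victory; the agent's objective uses only the former.
candidate_moves = {
    "safe_move":       {"win_prob": 0.99, "expected_margin": 1},
    "aggressive_move": {"win_prob": 0.95, "expected_margin": 20},
}

# The margin never enters the objective, so a 99% chance of winning by
# one point beats a 95% chance of winning by twenty.
best = max(candidate_moves, key=lambda m: candidate_moves[m]["win_prob"])
print(best)  # safe_move
```

A human commentator scoring moves by territory gained would call `safe_move` a wasted move; under the agent's actual objective it is strictly better.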

Humans instantiate edge cases all the time. Examples include Edison deciding that the most effective way to increase DC's market share was to electrocute an elephant (it didn't work very well), and Julius Asclepiodotus burning his own ships so that retreat was no longer an option (it worked pretty well). We have reaped some significant benefits from edge instantiations: in a fit of slapdash but effective applied psychology, humans turned WWIII into the Cold War in large part by substituting the space race for actual aggression. On the other hand, we are currently digging ourselves out of a recession caused in large part by a market crash resulting from too-cleverly hiding the risk of bad investments.

Given that humans appear to be fairly awful at predicting the outcomes of their own attempts at using the rules creatively, it is likely that AIs will be even more unpredictable. Not unpredictable from their own viewpoint, since they are only following the rules, but from the viewpoint of humans, who are not very good with rules.
