The Minimax theorem is a central result in Game Theory, and one of the earliest, providing the foundation for John von Neumann's creation of the discipline in the 1920s.

(Minimax Theorem). Any finite two-player zero-sum game in strategic form has
a value V such that Player 1 has a mixed strategy ensuring an average payoff of at least V, regardless of
Player 2’s strategy; and Player 2 has a mixed strategy which ensures an average payoff of at least -V
(that is, an average loss of at most V), regardless of Player 1’s strategy.

It therefore follows that the optimal strategy for Player 1 is one that ensures a payoff equal to
the value. Were they to attempt to secure a larger return, they would fail if Player 2 followed the
strategy prescribed by the theorem: a payoff greater than V for Player 1 would force a payoff of
less than -V to Player 2, which contradicts the Minimax result.
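As a concrete illustration (an example supplied here, not taken from the source): in Matching Pennies, where each player secretly chooses heads or tails and Player 1 wins when the choices match, the payoff matrix for Player 1 is

```latex
A = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}
```

Playing heads and tails with probability 1/2 each guarantees Player 1 an average payoff of 0 whatever Player 2 does, and symmetrically for Player 2, so the value of this game is V = 0.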

Von Neumann’s proof of the Minimax theorem, advanced in his 1928 paper *Zur Theorie der Gesellschaftsspiele*, depended upon topological notions such as Brouwer’s fixed point theorem. With the post-war emergence of Operational Research, a simpler proof of this two-player, zero-sum case became accessible. Thus the details of the original proof shall be omitted, and it shall instead be demonstrated that finding an optimal strategy is a linear programming problem, and hence that specific instances are solvable through application of (for instance) the Simplex algorithm.

For many motivational examples, such an approach is overkill. Some methods, such as identifying dominated strategies or saddle points, are both simpler to compute and more enlightening, but they do not generalise. There are also many special cases which exploit structural features of the payoff matrix in question rather than properties of game-theoretic interest: for instance, matrices where one of the dimensions is 2 can be treated graphically. To resolve the general (finite) case of an m × n payoff matrix, it suffices to show that finding an optimal strategy for Player 1 is a linear programming problem.
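For instance (an illustrative example, not from the source), in the payoff matrix

```latex
A = \begin{pmatrix} 4 & 1 & 3 \\ 2 & 3 & 2 \end{pmatrix}
```

every entry of the third column is at most the corresponding entry of the first, so Player 2, who pays these amounts, loses nothing by abandoning column 1 in favour of column 3. Deleting the dominated column reduces the game to a 2 × 2 matrix, which can then be handled graphically or directly.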

It is clear that if a solution exists, it is finite: the best return Player 1 can guarantee is bounded above by the maximum of the a_{ij}. Further, the dual problem is to minimise Player 2’s losses, and so by the duality theorem the objectives of the two problems coincide at some value V, which is the value of the game.
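For concreteness, the dual problem alluded to here can be sketched as follows (this is the standard formulation, not spelled out in the source): Player 2 chooses a mixed strategy Q = (q_{1}, . . . , q_{n}) so as to minimise the worst row payoff,

```latex
\begin{aligned}
\text{minimise } \; & w \\
\text{subject to } \; & \sum_{j=1}^{n} a_{ij} q_j \le w \quad \forall\, i \in \{1,\dots,m\}, \\
& q_1 + \dots + q_n = 1, \qquad q_j \ge 0.
\end{aligned}
```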

To formulate this as a linear programming problem, we require an objective function and a set of
constraints. If Player 1 is to employ a mixed strategy **P** = (p_{1}, . . . , p_{m}) in a game with payoff
matrix A = (a_{ij}), then the objective is to maximise

z = min_{j ∈ {1,...,n}} Σ_{i=1}^{m} p_{i} a_{ij}

However, this is not a linear objective. This can be resolved as follows: introduce a variable x with
the constraint x ≤ z, and maximise x instead. Unravelling the definition of z, this gives n linear
conditions on x to be satisfied simultaneously, namely

x ≤ Σ_{i=1}^{m} p_{i} a_{i1} ,   x ≤ Σ_{i=1}^{m} p_{i} a_{i2} ,   . . . ,   x ≤ Σ_{i=1}^{m} p_{i} a_{in}

i.e., written closer to canonical form,

Σ_{i=1}^{m} p_{i} a_{ij} - x ≥ 0   ∀ j ∈ {1, . . . , n}

Extra constraints on the {p_{i}} also arise. Since **P** is a probability distribution, we require

p_{1} + . . . + p_{m} = 1

Further, for each i, p_{i} ≥ 0 and p_{i} ≤ 1, since each p_{i} is to be a probability; this gives another 2m conditions. (The upper bounds p_{i} ≤ 1 are in fact implied by non-negativity and the equality above, but including them does no harm.)

Thus the set of 2m + n + 1 constraints on the variables p_{1}, . . . , p_{m}, x, together with the linear objective x, captures the search for Player 1’s optimal strategy, and hence the content of the Minimax theorem, as a linear programming problem.
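As a sketch of how this formulation looks in practice (an illustration supplied here, not part of the source; the function and variable names are made up), the constraints above can be assembled into the standard "maximise c·y subject to A_ub·y ≤ b_ub, A_eq·y = b_eq" arrays over the variable vector y = (p_1, . . . , p_m, x), using Matching Pennies as the example game:

```python
def minimax_lp(a):
    """Build Player 1's LP for payoff matrix a (a list of m rows of n entries).

    Variables are y = (p_1, ..., p_m, x); returns the pieces of the
    standard form: objective c, inequality system A_ub y <= b_ub,
    equality system A_eq y = b_eq, and per-variable bounds.
    """
    m, n = len(a), len(a[0])
    # Objective: maximise x, the last of the m + 1 variables.
    c = [0.0] * m + [1.0]
    # One row per column j of A:  x - sum_i p_i a_ij <= 0.
    A_ub = [[-a[i][j] for i in range(m)] + [1.0] for j in range(n)]
    b_ub = [0.0] * n
    # The p_i form a probability distribution: p_1 + ... + p_m = 1.
    A_eq = [[1.0] * m + [0.0]]
    b_eq = [1.0]
    # 0 <= p_i <= 1 for each i; x itself is a free variable.
    bounds = [(0.0, 1.0)] * m + [(None, None)]
    return c, A_ub, b_ub, A_eq, b_eq, bounds

matching_pennies = [[1, -1], [-1, 1]]
c, A_ub, b_ub, A_eq, b_eq, bounds = minimax_lp(matching_pennies)
# Any LP solver accepting this form (e.g. a Simplex implementation,
# which typically minimises, so negate c) should recover the optimal
# strategy p = (1/2, 1/2) with value x = 0 for this game.
```

Note that the n inequality rows, the single equality, and the 2m variable bounds together realise exactly the 2m + n + 1 constraints counted above.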

Part of *A survey of game theory*; see the project homenode for details and links to the print version.