A *doubly stochastic* matrix M is a square matrix with non-negative entries in which every row and every column sums to 1. Equivalently, it's a stochastic matrix whose transpose is also a stochastic matrix. Consequently, the row vector (1,...,1) is a left eigenvector of M, and the column vector (1,...,1)^{t} is a right eigenvector -- both corresponding to the eigenvalue 1.
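A quick sketch of these two eigenvector facts, using a small matrix chosen purely for illustration:

```python
import numpy as np

# A hypothetical 3x3 doubly stochastic matrix: all entries non-negative,
# every row and every column sums to 1.
M = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.3, 0.2, 0.5],
])

ones = np.ones(3)
print(M @ ones)   # rows sum to 1, so (1,1,1)^t is a right eigenvector: [1. 1. 1.]
print(ones @ M)   # columns sum to 1, so (1,1,1) is a left eigenvector:  [1. 1. 1.]
```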

Stochastic matrices are chiefly interesting as representations of Markov chains, and for the convergence theorems we have about them. A doubly stochastic matrix describes a Markov chain that remains a Markov chain when we reverse all the arrows -- that is, when we transpose the transition matrix. (Note that this is not usually the case!)
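To see why arrow-reversal usually fails, here is a small sketch (matrices made up for illustration): the transpose of a stochastic matrix need not be stochastic, unless the matrix is doubly stochastic.

```python
import numpy as np

# A (row-)stochastic matrix whose transpose is NOT stochastic:
A = np.array([
    [1.0, 0.0],
    [0.5, 0.5],
])
print(A.sum(axis=1))     # row sums [1. 1.]  -> stochastic
print(A.T.sum(axis=1))   # row sums of A^T: [1.5 0.5] -> not stochastic

# A doubly stochastic matrix: its transpose is again stochastic.
M = np.array([
    [0.5, 0.5],
    [0.5, 0.5],
])
print(M.T.sum(axis=1))   # [1. 1.]
```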

Moreover, the Markov chain has the uniform distribution u=(1/n,...,1/n)^{t} as a stable distribution (a steady state) -- by what we said above, Mu=u. If the chain is ergodic, this is its only stable distribution, so for every initial distribution d we have the limiting distribution M^{k}d → u as k → ∞. So a doubly stochastic matrix, repeatedly multiplied, does an *excellent* job of mixing its input!
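A minimal sketch of this mixing behaviour, reusing an illustrative doubly stochastic matrix with all entries positive (hence ergodic): a distribution concentrated on one state flattens out to uniform under repeated multiplication.

```python
import numpy as np

# An ergodic doubly stochastic matrix (all entries positive), for illustration.
M = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.3, 0.2, 0.5],
])

d = np.array([1.0, 0.0, 0.0])  # start concentrated on a single state
for _ in range(50):
    d = M @ d                  # one step of the chain
print(d)                       # ≈ [1/3, 1/3, 1/3], the uniform distribution
```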

Doubly stochastic matrices are also interesting combinatorial objects. For a start, the set of all n×n doubly stochastic matrices is a convex polytope (the Birkhoff polytope) in M_{n}(**R**), the n^{2}-dimensional vector space of all n×n matrices. The vertices of the polytope are precisely the permutation matrices -- this is the Birkhoff-von Neumann theorem. While somewhat intuitive, the proof is tricky. Enough so that most analysts prefer to invoke the Krein-Milman theorem rather than prove it "by hand".
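The easy direction of this picture can be checked directly: any convex combination of permutation matrices lands inside the polytope, i.e. is doubly stochastic. A sketch, with an arbitrarily chosen combination:

```python
import numpy as np

# Two permutation matrices of size 3.
P1 = np.eye(3)                      # the identity permutation
P2 = np.roll(np.eye(3), 1, axis=1)  # a cyclic permutation

# A convex combination (weights 0.4 and 0.6, chosen arbitrarily).
M = 0.4 * P1 + 0.6 * P2

print(M.sum(axis=1))  # every row sums to 1:    [1. 1. 1.]
print(M.sum(axis=0))  # every column sums to 1: [1. 1. 1.]
```

The hard direction -- that *every* doubly stochastic matrix is such a combination -- is exactly what the vertex characterization gives.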