A symmetric function is a polynomial or rational function (quotient of polynomials) in n variables which remains invariant no matter how you permute the variables (e.g. swap x_{1} with x_{2}). They feature prominently in Galois theory. The elementary symmetric functions appear, up to sign, as the coefficients of the monic polynomial whose roots are the n variables (i.e. the coefficients of f(t) = (t − x_{1})···(t − x_{n})), and the fundamental theorem of symmetric functions says that any symmetric function can be expressed as a polynomial or rational function of the elementary symmetric functions. When the original function isn't symmetric, we can still say something interesting.
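As a quick numerical illustration (my addition, not part of the original text), here is a small Python snippet confirming that expanding (t − x_{1})(t − x_{2})(t − x_{3}) produces the elementary symmetric functions, up to alternating sign, as coefficients:

```python
# Sanity check: the coefficients of prod (t - x_i), highest degree
# first, are 1, -s1, s2, -s3 for the sample roots chosen below.

def poly_from_roots(roots):
    """Coefficients of prod (t - r) for r in roots, highest degree first."""
    coeffs = [1]
    for r in roots:
        coeffs = coeffs + [0]          # multiply by t
        for i in range(len(coeffs) - 1, 0, -1):
            coeffs[i] -= r * coeffs[i - 1]   # subtract r * (old polynomial)
    return coeffs

x = [2, 3, 5]                                 # arbitrary sample values
s1 = sum(x)                                   # x1 + x2 + x3
s2 = x[0]*x[1] + x[0]*x[2] + x[1]*x[2]        # sum of pairwise products
s3 = x[0]*x[1]*x[2]                           # product of all three

assert poly_from_roots(x) == [1, -s1, s2, -s3]
```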
Theorem: Let g(x) be any polynomial where x = (x_{1}, ..., x_{n}) are n variables, and let s_{1}, ..., s_{n} be the elementary symmetric functions in n variables. Then g(x) can be written as a linear combination of monomials
x_{1}^{ν_{1}} x_{2}^{ν_{2}} ··· x_{n}^{ν_{n}}
such that ν_{i} ≤ i − 1, where the coefficients of the monomials are polynomials in the s_{i}.
This theorem, seemingly due to Emil Artin, is a slight generalisation of the fundamental theorem of symmetric functions. It gives the closest possible expression of any polynomial in terms of symmetric functions, whether or not the original polynomial is symmetric. Or if you prefer, the fundamental theorem of symmetric functions comes as an easy corollary to this theorem.
The corollary is immediate. Observe that no nonconstant monomial obeying the constraint can be symmetrised: the bound ν_{i} ≤ i − 1 forces x_{1} to appear with exponent 0, x_{2} with exponent at most 1, and so on, so permuting the variables of a nonconstant monomial always produces some monomial that violates the bound. Thus, if the original polynomial g(x) was symmetric, then the only way it can still be symmetric after being written in this form is if the only monomial with nonzero coefficient is the one for which all the ν_{i} are zero, i.e. the constant term. But then the constant term is a polynomial in the elementary symmetric functions, proving the corollary.
The proof is an algorithm for putting g(x) in
the desired form.
Proof: Let f_{n}(t) := (t − x_{1})(t − x_{2})···(t − x_{n}) = t^{n} − s_{1}t^{n−1} + ··· + (−1)^{n}s_{n} and define recursively
f_{i−1}(t) := f_{i}(t)/(t − x_{i}).
Three things are immediately clear:

- The polynomial f_{i}(t) has x_{i} as a root, the other roots being the x_{j} with j < i, because it is just f_{n}(t) with the last n − i linear factors divided away.

- By synthetic division and by the recursive definition, the coefficients of f_{i}(t) are polynomials in the elementary symmetric functions and the x_{j} with j > i.

- The degree of f_{i}(t) is i.
Now for the algorithm to put g(x) in the desired form. Since x_{1} is a root of f_{1}(t), it is possible to express x_{1} in terms of the symmetric functions s_{i} and the rest of the x_{i} with i > 1. Substitute this expression for x_{1} into g(x) and expand out the result, which now contains no term with x_{1}.
We proceed recursively as follows. Since x_{2} is a root of f_{2}(t), it is possible to express x_{2}^{2}, or any higher power, in terms of the symmetric functions s_{i} and the rest of the x_{i} with i > 2, plus perhaps a few terms in x_{2} of degree less than 2. Substitute this expression for x_{2}^{2} (or higher) into g(x) and expand out the result, which no longer contains any term with x_{2}^{2} or higher degree.
Continuing this process, eliminating all third or higher powers of x_{3} with f_{3}(t), all fourth or higher powers of x_{4} with f_{4}(t), and so on up to f_{n}(t), we obtain the desired form for g(x).
QED.
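The elimination steps in this proof are mechanical, so they can be handed to a computer algebra system. Below is a sketch in Python using sympy (an assumed dependency; the example later in this post uses Maxima instead), specialised to n = 3 for concreteness. Each step is a polynomial remainder: reducing modulo f_{i}, viewed as a polynomial in x_{i}, removes every power x_{i}^{i} and higher.

```python
import sympy as sp

x1, x2, x3, s1, s2, s3 = sp.symbols('x1 x2 x3 s1 s2 s3')

# The f_i(t) from the proof, specialised to n = 3 and written as
# polynomials in x_i, so that reducing modulo f_i eliminates x_i^i
# and higher powers.
f1 = x1 - s1 + x2 + x3
f2 = x2**2 + (x3 - s1)*x2 + (s2 - s1*x3 + x3**2)
f3 = x3**3 - s1*x3**2 + s2*x3 - s3

def artin_form(g):
    """Rewrite g with x1 to power 0, x2 to power <= 1, x3 to power <= 2,
    the coefficients being polynomials in s1, s2, s3."""
    g = sp.expand(sp.rem(g, f1, x1))  # eliminate x1 entirely
    g = sp.expand(sp.rem(g, f2, x2))  # eliminate x2^2 and higher
    g = sp.expand(sp.rem(g, f3, x3))  # eliminate x3^3 and higher
    return g

# A non-symmetric test polynomial (chosen arbitrarily).
g = x1**2 + x2*x3
h = artin_form(g)
assert not h.has(x1)
assert sp.degree(h, x2) <= 1 and sp.degree(h, x3) <= 2

# Sanity check: substituting the s_i back in terms of the x_i
# must recover g, since every elimination step is an identity.
back = h.subs([(s1, x1 + x2 + x3),
               (s2, x1*x2 + x1*x3 + x2*x3),
               (s3, x1*x2*x3)])
assert sp.expand(back - g) == 0
```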
Let's work out an example. Unfortunately, the only way to make an example interesting enough involves heavy computation. I will work out some steps of the example, but I will leave most of the tedious manipulations to Maxima or to a diligent reader.
Let us consider the symmetric polynomial in 3 variables

g(x) = x_{1}^{2}x_{2} + x_{1}^{2}x_{3} + x_{2}^{2}x_{1} + x_{2}^{2}x_{3} + x_{3}^{2}x_{1} + x_{3}^{2}x_{2}.
Now, in 3 variables, the f_{i}(t) from the proof above are

f_{3}(t) = t^{3} − s_{1}t^{2} + s_{2}t − s_{3},
f_{2}(t) = t^{2} + (x_{3} − s_{1})t + (s_{2} − s_{1}x_{3} + x_{3}^{2}),
f_{1}(t) = t − s_{1} + x_{2} + x_{3}.
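These formulas can be spot-checked numerically (this check is my addition, at the hypothetical sample point (x_{1}, x_{2}, x_{3}) = (2, 3, 5)): each f_{i} vanishes at x_{i}, and each f_{i−1} is f_{i} with one linear factor divided away.

```python
x1, x2, x3 = 2, 3, 5          # arbitrary sample point
s1 = x1 + x2 + x3
s2 = x1*x2 + x1*x3 + x2*x3
s3 = x1*x2*x3

def f3(t): return t**3 - s1*t**2 + s2*t - s3
def f2(t): return t**2 + (x3 - s1)*t + (s2 - s1*x3 + x3**2)
def f1(t): return t - s1 + x2 + x3

# Each f_i has x_i as a root ...
assert f3(x3) == 0 and f2(x2) == 0 and f1(x1) == 0
# ... and f_{i-1}(t) * (t - x_i) = f_i(t) as polynomials in t.
for t in range(10):
    assert f3(t) == f2(t) * (t - x3)
    assert f2(t) == f1(t) * (t - x2)
```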
Recall that f_{2} and f_{1} are obtained by symbolic synthetic division of the polynomial above them and that the remainders are zero. Also, recall at this point that the elementary symmetric functions in three variables are

s_{1} = x_{1} + x_{2} + x_{3},
s_{2} = x_{1}x_{2} + x_{1}x_{3} + x_{2}x_{3},
s_{3} = x_{1}x_{2}x_{3}.
Since f_{1}(x_{1}) = 0, f_{2}(x_{2}) = 0 and f_{3}(x_{3}) = 0, we obtain that

x_{1} = s_{1} − x_{2} − x_{3},
x_{2}^{2} = s_{1}x_{2} + s_{1}x_{3} − s_{2} − x_{2}x_{3} − x_{3}^{2},
x_{3}^{3} = s_{1}x_{3}^{2} − s_{2}x_{3} + s_{3}.
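(These three identities are easy to verify numerically; the quick check below, my addition, uses the hypothetical sample point (2, 3, 5).)

```python
x1, x2, x3 = 2, 3, 5          # arbitrary sample point
s1 = x1 + x2 + x3
s2 = x1*x2 + x1*x3 + x2*x3
s3 = x1*x2*x3

# The three substitution identities from f_1, f_2, f_3 above.
assert x1 == s1 - x2 - x3
assert x2**2 == s1*x2 + s1*x3 - s2 - x2*x3 - x3**2
assert x3**3 == s1*x3**2 - s2*x3 + s3
```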
So, the algorithm now says to substitute this expression for x_{1} into g(x), which after expanding everything out becomes

3x_{2}x_{3}^{2} − s_{1}x_{3}^{2} + 3x_{2}^{2}x_{3} − 4s_{1}x_{2}x_{3} + s_{1}^{2}x_{3} − s_{1}x_{2}^{2} + s_{1}^{2}x_{2}.
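Expansions like this are easy to get wrong by hand, so here is a numerical spot check (my addition) that the expression still agrees with g at the hypothetical sample point (2, 3, 5):

```python
x1, x2, x3 = 2, 3, 5          # arbitrary sample point
s1 = x1 + x2 + x3

g = (x1**2*x2 + x1**2*x3 + x2**2*x1
     + x2**2*x3 + x3**2*x1 + x3**2*x2)

# The expression obtained after eliminating x1.
step1 = (3*x2*x3**2 - s1*x3**2 + 3*x2**2*x3
         - 4*s1*x2*x3 + s1**2*x3 - s1*x2**2 + s1**2*x2)
assert step1 == g  # both equal 220 here
```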
Note that we have succeeded in eliminating x_{1} from this expression. Now we do the same with x_{2}^{2}, to obtain

−3x_{3}^{3} + 3s_{1}x_{3}^{2} − 3s_{2}x_{3} + s_{1}s_{2}.
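(Again a numerical spot check, my addition, at the hypothetical sample point (2, 3, 5):)

```python
x1, x2, x3 = 2, 3, 5          # arbitrary sample point
s1 = x1 + x2 + x3
s2 = x1*x2 + x1*x3 + x2*x3

g = (x1**2*x2 + x1**2*x3 + x2**2*x1
     + x2**2*x3 + x3**2*x1 + x3**2*x2)

# The expression obtained after also eliminating x2^2.
step2 = -3*x3**3 + 3*s1*x3**2 - 3*s2*x3 + s1*s2
assert step2 == g  # both equal 220 here
```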
Finally, we replace x_{3}^{3} by its own expression to conclude that

g(x) = s_{1}s_{2} − 3s_{3},

which is the expression of g(x) in terms of elementary symmetric functions that we sought.
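As a final sanity check (my addition, at a few arbitrarily chosen integer points), the identity g(x) = s_{1}s_{2} − 3s_{3} holds numerically, and g really is symmetric:

```python
from itertools import permutations

def g(x1, x2, x3):
    return (x1**2*x2 + x1**2*x3 + x2**2*x1
            + x2**2*x3 + x3**2*x1 + x3**2*x2)

for x1, x2, x3 in [(2, 3, 5), (1, -1, 4), (0, 7, 7)]:
    s1 = x1 + x2 + x3
    s2 = x1*x2 + x1*x3 + x2*x3
    s3 = x1*x2*x3
    # The identity derived above.
    assert g(x1, x2, x3) == s1*s2 - 3*s3
    # And g is symmetric: every ordering of the arguments agrees.
    assert len({g(*p) for p in permutations((x1, x2, x3))}) == 1
```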