To actually do this:

1. Calculate the Surface Normal to the polygon. One way to do this is to take the cross product of two of the polygon's edges, as in ymelup's writeup above, e.g. A->B and A->C:

vector U = (Bx-Ax, By-Ay, Bz-Az)
vector V = (Cx-Ax, Cy-Ay, Cz-Az)
vector W = U x V
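The three lines above translate almost directly into code. Here is a minimal sketch in Python (the helper names are mine, not from the writeup):

```python
# Compute a polygon's surface normal from three vertices A, B, C
# via the cross product of two edge vectors.
def subtract(p, q):
    """Component-wise vector difference p - q."""
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def cross(u, v):
    """Cross product U x V of two 3D vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def surface_normal(a, b, c):
    """Normal of the triangle A, B, C (not normalised)."""
    u = subtract(b, a)  # vector U = B - A
    v = subtract(c, a)  # vector V = C - A
    return cross(u, v)  # vector W = U x V

# A triangle in the xy plane gets a normal along z:
print(surface_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0, 0, 1)
```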

There are two directions at right angles to both U and V: W and -W. Order is important with the cross product: U x V = W, but V x U = -W. Since we obtained U and V from the triangle A, B, C above, you can see that the direction in which the vertices are ordered matters. This follows the right hand rule: if you put the base of your palm at the first vertex and curl your fingers around the vertices in ascending order, your thumb points along the vector perpendicular to the polygon.

As we want the normal facing "outwards" from the invisible space "inside" the polygons, we order the vertices in an anti-clockwise order as seen from the outward side.
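The effect of vertex ordering is easy to check directly: listing the same triangle clockwise instead of anti-clockwise negates the normal. A small sketch (the helper is just the obvious cross-product code, not from the writeup):

```python
# Swapping two vertices reverses the winding order and so negates
# the normal, since U x V = -(V x U).
def normal(a, b, c):
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

A, B, C = (0, 0, 0), (1, 0, 0), (0, 1, 0)
print(normal(A, B, C))  # anti-clockwise seen from +z: (0, 0, 1)
print(normal(A, C, B))  # clockwise: (0, 0, -1), the flipped normal
```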

2. Calculate the dot product between the surface normal and the vector from the polygon back toward the camera. This is the opposite of the sight vector, the direction the camera looks along, which in camera co-ordinates is (0,0,-1), or nothing but Z; so the vector toward the camera is (0,0,1). You can read about what the inequalities of the dot product mean in its node, but here is what they mean to us:

< 0 : The polygon is facing away from us, so it is effectively invisible and we can cull it from further rendering.
= 0 : The polygon is side on to us, and since it is 2D and infinitely thin, it is effectively invisible.
> 0 : The polygon is visible.

It is possible for part of a model to be just one polygon thick, with no internal invisible space, so that both sides are always visible. In this case a flag should be set and backface culling skipped.
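Step 2 and the two-sided flag combine into a single visibility test. A minimal sketch, assuming camera co-ordinates and taking the dot product with the vector from the polygon toward the camera, so a positive result means the face points at us (the `double_sided` flag name is made up):

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def is_visible(normal, to_camera, double_sided=False):
    """Back-face test.

    normal       -- outward surface normal of the polygon
    to_camera    -- vector from the polygon toward the camera
    double_sided -- skip culling for one-polygon-thick geometry
    """
    if double_sided:
        return True  # both sides are always drawn
    d = dot(normal, to_camera)
    # > 0: faces us, visible; = 0: edge-on; < 0: faces away, cull
    return d > 0

# A normal pointing straight at the camera is visible...
print(is_visible((0, 0, 1), (0, 0, 1)))         # True
# ...one pointing away is culled, unless the polygon is two-sided.
print(is_visible((0, 0, -1), (0, 0, 1)))        # False
print(is_visible((0, 0, -1), (0, 0, 1), True))  # True
```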

discofever brought to my attention the optimisation of pre-calculating the normals and then rotating them along with the polygon, to avoid recomputing them from the vertices each frame. Another trick is to calculate the magnitude (or vector length) of the normal once up front, since it does not change under rotation, only under scaling.
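That optimisation can be sketched as follows: store the normal and its magnitude with the polygon, and when the polygon rotates, rotate the stored normal rather than re-deriving it; the stored magnitude stays valid because rotation preserves length (a rotation about the z axis is used here just for illustration):

```python
import math

def rotate_z(v, angle):
    """Rotate a 3D vector about the z axis by angle radians."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1], v[2])

def magnitude(v):
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

# Pre-computed normal for some polygon, stored alongside it.
n = (3.0, 4.0, 0.0)
mag = magnitude(n)  # pre-computed once: 5.0

# After rotating the polygon, rotate the stored normal too,
# instead of re-deriving it from the rotated vertices...
n_rot = rotate_z(n, math.pi / 3)

# ...and the pre-computed magnitude is still valid: rotation
# preserves vector length (only scaling would change it).
print(abs(magnitude(n_rot) - mag) < 1e-9)  # True
```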