I have to jump in here, since nobody has mentioned what is probably the most pressing reason for continued nuclear testing. A nuclear weapon is not a stable, happy, put-me-on-a-shelf type of object. It is a fairly complex machine, with some dangerous and unstable stuff at the core which puts out radiation and can affect the rest of the machine. As a result, nobody really knows what the shelf life of these things is.

A deterrent depends on you (and the rest of the world) being damn sure that the things will actually go boom when the button is pressed. The U.S. has already seen demonstrated failures across entire design classes of its atomic weapons. Just as one example, a service warhead for the Polaris missile, when first actually detonated in a test, was found to give a drastically reduced yield because the plutonium core had degraded through oxidation. Another example: neutron flux is known to have a deleterious effect on electronics, particularly electronics sitting close to the neutron source.
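
To get a feel for how fast even a tiny, undetected degradation rate erodes confidence, here is a toy back-of-envelope model in Python. The annual failure probability, the time span, and the stockpile size are all invented numbers for illustration, and the year-to-year independence assumption is a gross simplification:

    # Toy stockpile-reliability model. Every number is an invented
    # illustration -- real degradation rates are unknown (that's the point).
    ANNUAL_SILENT_FAILURE_PROB = 0.005  # assumed chance per year that some
                                        # component quietly drifts out of spec
    YEARS_WITHOUT_A_TEST = 30

    # Probability that one weapon still works, assuming degradation events
    # are independent from year to year (a big simplification).
    p_one = (1 - ANNUAL_SILENT_FAILURE_PROB) ** YEARS_WITHOUT_A_TEST
    print(f"P(one weapon still works) = {p_one:.3f}")        # ~0.86

    # Expected fraction of a 1,000-weapon stockpile still functional:
    stockpile = 1000
    print(f"expected working weapons  = {stockpile * p_one:.0f} of {stockpile}")

Even a half-percent-per-year silent failure rate leaves roughly one weapon in seven suspect after thirty years, and that is exactly the sort of number you cannot pin down without a test.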

In any case, while the second type of failure can be discovered and fixed through regular dismantling and inspection of the weapons, and by bench-testing the electronic components on their own, the first cannot. If, for example, neutron flux actually degraded the high-explosive 'trigger' for the device, or perhaps the initiator, probably the only way to find out for sure would be to pop one off.

The present reliance on fusion-boosted and fission-fusion weapons (hydrogen bombs) in the U.S. arsenal actually makes things much trickier. In these weapons there are whole additional layers of interaction that must function properly, and some of those layers likely depend on the precise shape of the weapon (precise as in micrometers), so simply inspecting dismantled components wouldn't help. The U.S. is spending an incredible amount of money on being able to simulate, in software, the first few microseconds of an atomic detonation, in order to avoid having to explosively test the weapons. The problem is so computationally huge that a single 'run' requires thousands of processors at multiple sites, and run times are measured in months or even years.
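
To see why a single run stretches into months, here is a rough Fermi estimate of the arithmetic involved. The grid size, step count, per-cell cost, processor count, and per-processor speed below are all assumed round numbers chosen only to show how the factors multiply, not published figures:

    # Back-of-envelope cost of a 3-D weapon-physics simulation.
    # All figures are illustrative assumptions.
    grid_cells = 1000 ** 3           # assume a 1000 x 1000 x 1000 spatial grid
    time_steps = 1_000_000           # assume a million steps to resolve the event
    flops_per_cell_step = 10_000     # assume ~1e4 floating-point ops per cell
                                     # per step (hydro + radiation + EOS lookups)

    total_flops = grid_cells * time_steps * flops_per_cell_step
    print(f"total work: {total_flops:.2e} FLOPs")            # ~1e19 FLOPs

    # Spread the work over several thousand processors:
    processors = 5000
    flops_per_proc = 1e8             # assume ~100 MFLOPS sustained apiece
    seconds = total_flops / (processors * flops_per_proc)
    print(f"wall-clock time: {seconds / 86400 / 30:.1f} months")  # ~7.7 months

The particular numbers don't matter; the point is that spatial resolution, time resolution, and physics-per-cell are three multiplicative factors that each want to be large, and they compound.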

Finally, impressive though the simulation effort is, it is very hard to be convinced that it is sufficient. After all, the entire purpose of testing is to find out what you didn't know. If some effect exists that we don't know about (like the core oxidation in the Polaris example above), then it isn't in the simulation, and the simulation won't do us any good.

So, to conclude: it's not necessarily true that the U.S. wants to continue testing in order to build new weapons or to grow its arsenal. In fact, much of recent weapon design has focused not on building bigger bombs, but on building more stable, more reliable, and smaller weapons, precisely to reduce the need for future maintenance and testing. It's fairly easy to build an atomic weapon if you have access to large amounts of fissionable material; it's quite a challenge to do it with a very small amount. The U.S. has spent a great deal of resources designing and producing weapons that don't require enormous quantities of fissionables, which means production of those fissionables can be minimized or halted; and weapons that are harder to make and detonate are also more secure.