A nanocomputer is a computer that either uses nanoscale elements in its design or has a total size measured in nanometers. The first definition is pretty shaky, in that many modern CPUs use transistors with feature sizes in the 65 to 130 nanometer range, yet require not only several cubic centimeters of space but also the help of memory controllers, RAM, and external I/O devices that are not included on-chip. Nanoscale computing, in this sense, is making good progress (more or less as scheduled by Moore's Law), and memory devices built from nanometer-sized elements are currently in research labs and will probably be available commercially within five years. However, a true nanocomputer, meaning a full computer designed with nanoengineering that fits into a few thousand square nanometers, is many years out. No doubt the first devices of this size will have a minuscule amount of computational power, and while a single one might be useless, their incredibly small size will allow applications that can only be dreamed of now.
There are numerous proposed methods of constructing a nanocomputer. Molecular computing (sometimes also called DNA computing) performs complex computations using chemical reactions on large molecular chains (DNA, being relatively well understood, is the chain most commonly used in molecular computing research). Interestingly, both Ralph Merkle, one of the creators of public key cryptography, and Leonard Adleman (the 'A' in RSA) have worked in this field. Some promising results have been found, particularly with regard to massively parallel algorithms such as brute-force searches of cryptographic keys (sketched below). Rod logic designs seek to create the mechanical equivalent of a Babbage machine with trillions of times more elements than Babbage could ever have managed. Merkle and Drexler's helical logic is, at least theoretically, far more efficient in power and space than even rod logic, but the design is mostly untested, and even if feasible it may not scale down sufficiently to build a nanocomputer.
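To make the massively parallel angle concrete, here is a minimal sketch in Python of why brute-force key search parallelizes so well: each worker tests a disjoint slice of the keyspace and never needs to coordinate with the others. The toy XOR "cipher" and all names here are illustrative, not any actual molecular-computing protocol; a molecular computer would simply push the worker count to chemical scales.

```python
from concurrent.futures import ProcessPoolExecutor

def toy_encrypt(key: int, message: bytes) -> bytes:
    """Toy stand-in for a real cipher: XOR every byte with a 1-byte key."""
    return bytes(b ^ key for b in message)

def search_slice(args):
    """Test one disjoint slice of the keyspace; no coordination needed."""
    lo, hi, plaintext, ciphertext = args
    for key in range(lo, hi):
        if toy_encrypt(key, plaintext) == ciphertext:
            return key
    return None

if __name__ == "__main__":
    plaintext = b"attack at dawn"
    ciphertext = toy_encrypt(0x5A, plaintext)  # pretend 0x5A is unknown

    # Split the 256-key space across 4 workers; a molecular computer
    # would make the same split across moles of reacting molecules.
    slices = [(i * 64, (i + 1) * 64, plaintext, ciphertext) for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        for hit in pool.map(search_slice, slices):
            if hit is not None:
                print(f"recovered key: {hit:#04x}")
```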
It bears mentioning that while quantum computers also rely on elements that are nanoscale (or, usually, much smaller), nanocomputers do not exploit quantum effects the way quantum computers do. A nanocomputer is thus, in all respects, an ordinary Turing-equivalent machine with all of the usual limits; a quantum computer can compute nothing a Turing machine cannot, but is believed (at least in theory) to solve certain problems, such as factoring large integers, far faster than any classical machine.
The obvious use for a nanocomputer is controlling a nanobot/nanite, though an alternative design would be to have the nanobots be "dumb" and take all their commands from a central, regular-sized computer, which takes in whatever sensory data they can produce and sends back commands such as "go left", "go right", and "build car" (a toy sketch of such a split appears below). Since the power consumption of any individual nanocomputer will be very small, another option would be to pack billions of them together to form a massively parallel computer.
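Here is a hypothetical sketch of what that split might look like, in Python. Everything in it, from the Command names (which echo the examples above) to the controller's policy and sensor format, is made up for illustration; it only shows where the intelligence would live.

```python
from enum import Enum, auto

class Command(Enum):
    # Command vocabulary echoing the examples in the text; a real
    # protocol would be a compact bit-level encoding, not an enum.
    GO_LEFT = auto()
    GO_RIGHT = auto()
    BUILD_CAR = auto()

class CentralController:
    """All the intelligence lives here, in a regular-sized computer;
    the nanobots just report sensor readings and obey commands."""

    def decide(self, left_signal: float, right_signal: float) -> Command:
        # Placeholder policy: steer each bot toward the stronger signal.
        return Command.GO_LEFT if left_signal > right_signal else Command.GO_RIGHT

# One control cycle: gather readings from the swarm, broadcast commands.
controller = CentralController()
swarm_readings = [(0.8, 0.3), (0.1, 0.9)]
commands = [controller.decide(left, right) for left, right in swarm_readings]
print(commands)  # [<Command.GO_LEFT: 1>, <Command.GO_RIGHT: 2>]
```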
This has some precedent: IBM's Blue Gene, the fastest supercomputer in the world, is composed of 32,768 quite modest 700 MHz PowerPC processors. In fact, IBM researchers cite the low heat and power consumption of those processors as one of the key factors that allowed them to significantly reduce space and power usage compared to the previous generation of supercomputers while maintaining performance several times that of their closest competitors. Expanding further, with many more (but much less capable) processors, it might be possible to build the equivalent of a current supercomputer in no more space, and on no more power, than an Athlon. However, applications would have to be written specifically for it, much as they have to be for today's supercomputers; taking full advantage of a massively parallel machine with tens of thousands of CPUs requires a completely different design than one that uses only one or two. In addition, many algorithms do not parallelize well, and any bottleneck in an algorithm will reduce the effectiveness of such a computer dramatically. This effect gets worse as the number of units increases: if a two-way SMP system is bottlenecked to a single CPU, it runs at half speed; if a machine with a billion compute units is bottlenecked the same way, it runs at a billionth of its capacity.
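The arithmetic above is exactly what Amdahl's law captures: if a fraction p of the work parallelizes and the rest is inherently serial, the best possible speedup on N units is 1 / ((1 - p) + p/N). A quick Python sketch of how brutally even a tiny serial fraction caps a billion-unit machine:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Best-case speedup on n units when a fraction p of the work
    can be parallelized and the rest is inherently serial."""
    return 1.0 / ((1.0 - p) + p / n)

# Even if only one millionth of the work is serial (p = 0.999999),
# a billion units deliver a speedup near 10**6, not 10**9.
for n in (2, 32_768, 1_000_000_000):
    print(f"{n:>13,} units -> speedup {amdahl_speedup(0.999999, n):,.0f}")
```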
Learning to build and effectively use nanocomputers may well be the next big challenge facing computer science. The ramifications not only for computing, but for human life in general, are staggering.