In the transhumanist definition, a singularity occurs when computers begin designing progressively more intelligent computers, eventually surpassing human intelligence. Then, as Vinge famously put it, "the human era comes to an end."
Folks like Eliezer Yudkowsky have written lots about this concept, and it seems as though they're right. I haven't thought about it too much, mainly because I get really shaken whenever I do. Maybe that's what fundamentalists go through - I don't know, and I'm not sure I want to.
What Yudkowsky, and many others, are aiming for is a friendly singularity - one in which the AI acts in a manner similar to how humanity would prefer it to act. An unfriendly AI could be worse than any dictator, or use nanotechnology to turn the solar system into a number-crunching machine for someone's thesis - it's impossible to be sure. The consensus among transhumanists is that a singularity is more or less inevitable, but that we need to be careful about how and when it occurs.