The first important part of recursion is that with each new level, the complexity of the problem should be reduced.

The most intuitive example of recursion I've seen is the factorial problem.
n! = n * (n-1) * (n-2) * ... * 2 * 1
The obvious way to solve this would be an iterative loop, but consider solving it with recursion. If f(n) = n!, then to solve n! we simply need to multiply n by (n-1)!. In other words,
f(n) = n * f(n-1)
This is good, because solving f(n-1) is one step easier than solving f(n). So if we treat f(x) as a programmed function using recursion, f(n) will call f(n-1), which in turn calls f(n-2), and so on until we get to...

The second important part of recursion is the base case.

The base case is the point where recursion has whittled the problem down to something simple to solve. In our case, that's f(1) (or f(2) if you want to be efficient). We know that f(1) = 1! = 1. So after we've recursed f(n) down to f(n-(n-1)), AKA f(1), we have a solution for that, and we can work our way back out of the recursion-hole, because each recursive call to f(x) can now return its result to the next level up.
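Put together, the recursive case and the base case might look like this in Python (a sketch; the function name f just mirrors the notation above):

```python
def f(n):
    # Base case: 1! = 1, so stop recursing here.
    if n == 1:
        return 1
    # Recursive case: n! = n * (n-1)!
    return n * f(n - 1)

print(f(4))  # 24
```

Each call waits on the one below it, so the multiplications actually happen on the way back up, once f(1) has returned.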

The third important part of recursion is checking your input.

As with any loop, it's easy to get aimed at infinity if you're not careful. This is extra nasty with recursion because it doesn't just use up CPU cycles, but also stack memory for each nested call, so a runaway recursion eventually overflows the stack. Trying to get f(-1) this way is a bad thing.
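One way to guard against that is to validate the input before recursing (a sketch; the exact error type is a matter of taste):

```python
def f(n):
    # Guard: a negative or zero n would never reach the base case,
    # so the recursion would run until the stack overflows.
    if n < 1:
        raise ValueError("n must be a positive integer")
    if n == 1:
        return 1
    return n * f(n - 1)
```

The check only needs to catch bad input once, at the top call; every deeper call is guaranteed to be one step closer to f(1).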

Example: 4!

f(4) = 4 * f(3)
     f(3) = 3 * f(2)
          f(2) = 2 * f(1)
               f(1) = 1 //base case
          f(2) = 2 * 1 = 2
     f(3) = 3 * 2 = 6
f(4) = 4 * 6 = 24
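You can watch this happen by adding a couple of print statements (a sketch; the depth parameter exists only to indent the output like the trace above):

```python
def f(n, depth=0):
    indent = "     " * depth
    print(f"{indent}f({n}) called")
    if n == 1:
        print(f"{indent}f(1) = 1  (base case)")
        return 1
    result = n * f(n - 1, depth + 1)
    print(f"{indent}f({n}) = {n} * {result // n} = {result}")
    return result

f(4)  # returns 24
```

The indented lines on the way down show the calls stacking up, and the lines on the way back up show each level multiplying and returning, exactly as in the trace.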