A CS professor once explained recursion as follows:

A child couldn't sleep, so her mother told her a story about a little frog,
    who couldn't sleep, so the frog's mother told her a story about a little bear,
         who couldn't sleep, so the bear's mother told her a story about a little weasel...
            who fell asleep.
        ...and the little bear fell asleep;
    ...and the little frog fell asleep;
...and the child fell asleep.

This is the best explanation of recursion I've heard. It could probably be recast to be less cutesy, but it really gets to what's going on with recursion in a very nice way: Stuff happens on the way in, you hit an endpoint, and other stuff happens in reverse order on the way back out as it all unwinds. For somebody who doesn't "get" recursion yet, this is not a bad map to start with.
A couple of jokes, courtesy of my stepfather:

It was a dark and stormy night...
The ship was tossed back and forth by the waves, the wind howled through the sails, and the rain lashed the deck. The crew huddled around the lantern, drew their blankets around them and shivered, as the Captain began to tell a tale:
"It was a dark and stormy night..."

Pete and Repeat were crossing the road.
Pete got run over.
Who was left?
Pete and Repeat were crossing the road...
The best definition I have ever seen is from fluble.com:

9 out of 7 mathematicians define "recursion" as a process in which 9 out of 7 mathematicians define "recursion" as a process in which 9 out of 7 mathematicians define "recursion" as a process in which 9 out of 7 mathematicians...

The other -2 mathematicians say the entire thing is silly and leave the room, only to be pulled under the ground and devoured by giant beetles.

In a pure mathematical sense, recursion is any process that repeats itself, that is, any formula that appears more than once in the same schema. Usually, however, the occurrences of the smaller formula are organized by putting them in a one-to-one correspondence with an ordinal.

A formula defined from a schema that includes a copy of itself (i.e. the way computer scientists think of recursion) is a special case of recursion, as are things like iteration, transfinite or not.

Note that simple iteration is a special case of recursion even in the CS sense. Some programming languages make things easier (?) on you by providing structures (e.g. for, while) to make simple iteration easier; however, these structures are not strictly necessary. Other languages (e.g. some Lisp dialects, in their idiomatic style) lean on recursion instead of such loop constructs.
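To make the "iteration is a special case of recursion" point concrete, here is a minimal Python sketch (the node itself names no language, and the function names are mine, not from any write-up above): the same summation written once with a loop construct and once as recursion.

```python
def sum_iterative(xs):
    # Simple iteration using a loop construct (for).
    total = 0
    for x in xs:
        total += x
    return total

def sum_recursive(xs):
    # The same computation as recursion: the empty list is the
    # base case; each call peels off one element and recurses.
    if not xs:
        return 0
    return xs[0] + sum_recursive(xs[1:])
```

Both return the same result; the loop version just hides the self-reference inside the language's control structure.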

recursion n.

See recursion. See also tail recursion.

--The Jargon File version 4.3.1, ed. ESR, autonoded by rescdsk.

There seems to be some confusion between recursion and self-reference, between recursion and nesting, and between recursion and repetition.

Self-reference may often be paradoxical, but that does not make it recursive. "This statement is false" is self-referential and paradoxical. It is not recursive.

Several layers of nested subroutines do not a recursion make. But any routine that calls itself is recursive.

When we speak of iterations or loops vis-à-vis recursion, the distinction becomes more a matter of semantics. Most loops at least have the potential to be recursive.

Repetition is not necessarily recursion. If you yell, "Shut the fuck up," at me a hundred times, you're being repetitive¹ - but there is no recursion involved. But if someone gives you instructions that say, "Yell, 'Shut up,' then follow the instructions on this message," then it is not only a repetitive process, but recursive as well².

To sum up³, all recursive processes are self-referential, but not all self-references are recursive. Ditto for nesting, loops, and repetition.

¹not to mention a rude asshole
²and the person who gave you these instructions ought to be flogged
³If you haven't grasped the concept, read the almost intelligible WU by kto9 under recursion.

The first important part of recursion is that with each new level, the complexity of the problem should be reduced.

The most intuitive example of recursion I've seen is the factorial problem.
n! = n * (n-1) * (n-2) * ... * 2 * 1
The obvious way to solve this would be an iterating loop, but consider solving it with recursion. If f(n) = n!, then to solve n! we simply need to multiply n by (n-1)!. In other words,
f(n) = n * f(n-1)
This is good, because solving f(n-1) is one step easier than solving f(n). So if we treat f(x) as a programmed function using recursion, f(n) will call f(n-1), which in turn calls f(n-2), and so on until we get to...

The second important part of recursion is the base case.

The base case is the point where recursion has whittled the problem down to something simple to solve. In our case, that's f(1) (or f(2) if you want to be efficient). We know that f(1) = 1! = 1. So after we've recursed f(n) down to f(n-(n-1)) AKA f(1), we have a solution for that, and we can work our way out of that recursion-hole because now all our recursed calls to f(x) are returning to the next higher level.

The third important part of recursion is checking your input.

As with any loop, it's easy to get aimed at infinity if you're not careful. This is extra nasty with recursion because it doesn't just use up CPU cycles, but memory for each nested call. Trying to get f(-1) this way is a bad thing.

Example: 4!

f(4) = 4 * f(3)
     f(3) = 3 * f(2)
          f(2) = 2 * f(1)
               f(1) = 1 //base case
          f(2) = 2 * 1 = 2
     f(3) = 3 * 2 = 6
f(4) = 4 * 6 = 24
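The three "important parts" above can be sketched in Python (the write-up names no language, so this is just an illustration; the name f matches the function used in the trace):

```python
def f(n):
    # Third part: check your input, so f(-1) can't recurse toward infinity.
    if not isinstance(n, int) or n < 0:
        raise ValueError("n must be a non-negative integer")
    # Second part: the base case. f(1) = 1 (and f(0) = 1, for good measure).
    if n <= 1:
        return 1
    # First part: each call reduces the problem by one step.
    return n * f(n - 1)
```

Calling f(4) walks through exactly the trace shown above and returns 24.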

Re*cur"sion (-shŭn), n. [L. recursio. See Recur.]

The act of recurring; return. --Boyle.

(Computers) A programming technique in which a function calls itself as a subfunction. Such calls may be repeated in series to arbitrary depth, provided that a terminating condition is given so that the final (deepest) call will return a value (rather than continue to recurse), which then permits the next higher call to return a value, and so forth, until the original call returns a value to the calling program.


© Webster 1913.
