Also known as Leibniz Integral Rule.
The story behind the rule is interesting. American physicist Richard P. Feynman (what a droll-sounding phrase) taught himself calculus when he was in high school. The textbook he used, authored by Woods, covered an interesting technique which didn't seem to crop up in many other calculus textbooks. It was called "differentiating under the integral sign," and Feynman recalls in Surely You're Joking, Mr. Feynman! that he seemed to have a different "box of tools": "I used that one damn tool again and again."

Okay. So, with that said, I will explain the rule.



So, what is this wonderful set of tools which Feynman had? Here's how it goes.
You have a function, h(x), but it has a parameter in it, s. Let's say you can express this function as the partial derivative of another function, f(x,s), taken with respect to the parameter s: h(x) = ∂/∂s f(x,s). To find the integral of h(x), you can use that function f in the following way:

     ⌠b(s)         ⌠b(s)
∂/∂s | f(x,s) dx = | ∂/∂s[f(x,s)] dx + f(b(s), s) ∂b/∂s - f(a(s), s) ∂a/∂s
     ⌡a(s)         ⌡a(s)
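As a sanity check, the full rule (variable bounds and all) can be verified symbolically. Here's a sketch using Python's sympy library, with f(x,s) = e^(sx) and bounds a(s) = s, b(s) = s² chosen purely as an example:

```python
import sympy as sp

x = sp.symbols('x')
s = sp.symbols('s', positive=True)  # positive s keeps sympy from returning piecewise results

f = sp.exp(s * x)          # f(x, s)
a, b = s, s**2             # example bounds a(s) = s, b(s) = s^2

# Left side: differentiate the definite integral with respect to s
lhs = sp.diff(sp.integrate(f, (x, a, b)), s)

# Right side: integral of the partial derivative, plus the two boundary terms
rhs = (sp.integrate(sp.diff(f, s), (x, a, b))
       + f.subs(x, b) * sp.diff(b, s)
       - f.subs(x, a) * sp.diff(a, s))

print(sp.simplify(lhs - rhs))  # 0
```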

That's what the rule says formally. If the bounds of the definite integral are constant, those ugly last two terms drop out. Here's a less garbled form, applicable to indefinite integrals, and easily modified for definite integrals (by plopping on the bounds):

∂/∂s ∫ f(x,s) dx = ∫ ∂/∂s[f(x,s)] dx

N.B. That ∂ symbol denotes a partial derivative. It's calculated the same way as an ordinary derivative. All it means is "find the derivative of the function where s is the variable, and act as if x were constant, even though it's another variable."
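That "treat x as constant" rule is exactly what a computer algebra system does. A quick sketch with sympy:

```python
import sympy as sp

x, s = sp.symbols('x s')

# Partial derivative with respect to s: x is treated as a constant
expr = sp.exp(s * x)
print(sp.diff(expr, s))  # x*exp(s*x)
```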

Using this technique is rather ingenious, and quite peculiar. In the end, what you wind up doing is evaluating an integral by taking a derivative (or an nth derivative, in some cases).
Let's say you wanted to evaluate the following integral:

∫ x e^(2x) dx

It could be done using integration by parts, but we'll use this technique anyway, because it demonstrates the technique adequately. And because I can't think of a better example.
What we do is make a function, f, which is e^(sx).

f(s) = e^(sx)
f'(s) = x e^(sx)

(notice how the variable of f is actually s, not x).

This is the same thing as saying:

f(x,s) = e^(sx)
∂/∂s[f(x,s)] = x e^(sx)

That first derivative looks ALMOST like the function we want to find (except there's an s instead of a 2). Using the technique:

∂/∂s[∫ f(x,s) dx] = ∫ ∂/∂s[f(x,s)] dx

Now substitute in the function for f, and solve.

∂/∂s[∫ e^(sx) dx] = ∫ ∂/∂s[e^(sx)] dx
= ∫ x e^(sx) dx

So now, on the left side, we have a simpler integral to calculate (∫ e^(sx) dx), and all we need to do is find the derivative of that integral with respect to the other variable, s.

∫ e^(sx) dx = e^(sx)/s
∫ x e^(sx) dx = ∂/∂s[e^(sx)/s] = (s x e^(sx) - e^(sx))/s^2

And then we plug in 2 for s, which gives ∫ x e^(2x) dx = (2x e^(2x) - e^(2x))/4, plus a constant of integration.
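The whole worked example can be checked with sympy. Since an antiderivative is only determined up to a constant, the sketch below verifies the result by differentiating it back:

```python
import sympy as sp

x, s = sp.symbols('x s')

# The easy integral: ∫ e^(sx) dx = e^(sx)/s  (constant of integration omitted)
easy = sp.exp(s * x) / s

# Differentiate under the integral sign, then plug in s = 2
F = sp.diff(easy, s).subs(s, 2)

# F should be an antiderivative of x*e^(2x): differentiating recovers the integrand
print(sp.simplify(sp.diff(F, x) - x * sp.exp(2 * x)))  # 0
```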
Okay, so you say you could have done that easier with integration by parts? Well, the beauty of this technique shines when you have an integral like:

∫ x^5 e^(3x) dx

You could evaluate this through repeated integration by parts. Of course, this is a pain, and keeping track of the variables gets really annoying really quickly. The neat thing about this technique is that you only need to take derivatives, which can be slightly more straightforward and less of a pain. Consider the general integral:

∫ x^n e^(sx) dx

Which is:

∫ x^n e^(sx) dx = ∂^n/∂s^n [∫ e^(sx) dx]

While finding that nth derivative may be a pain if the quotient rule is involved, it's sometimes easier than using multiple integration by parts.
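The nth-derivative case can be checked the same way. A sympy sketch for ∫ x^5 e^(3x) dx, taking five s-derivatives of e^(sx)/s and then setting s = 3:

```python
import sympy as sp

x, s = sp.symbols('x s')

# ∫ x^5 e^(sx) dx = ∂^5/∂s^5 [e^(sx)/s]; take the fifth derivative with respect to s
F = sp.diff(sp.exp(s * x) / s, s, 5).subs(s, 3)

# Differentiating back should recover the integrand x^5 * e^(3x)
print(sp.simplify(sp.diff(F, x) - x**5 * sp.exp(3 * x)))  # 0
```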

There are probably many other, much more creative ways to use this technique which I haven't explained here. Please, elaborate on your own here.

-Calculus&Mathematica, Calculus II