The most straightforward root-finding method. It is also the easiest to program and the easiest to guarantee convergence for. If `f` is a continuous function and you wish to find a root, first find two points `a_1, b_1` between which `f` changes sign (`f(a_1) <= 0 <= f(b_1)`).

Now just do a binary search for the root: at each iteration, pick `m_n=(a_n+b_n)/2`, and take `a_{n+1}=m_n, b_{n+1}=b_n` if `f(m_n) <= 0` or `a_{n+1}=a_n, b_{n+1}=m_n` otherwise.
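The iteration above can be sketched in a few lines of Python (the function name `bisect` and the tolerance parameter are illustrative choices, not part of the original description):

```python
def bisect(f, a, b, tol=1e-10):
    """Find a root of f in [a, b], assuming f(a) <= 0 <= f(b)."""
    assert f(a) <= 0 <= f(b)
    while b - a > tol:
        m = (a + b) / 2
        if f(m) <= 0:
            a = m  # sign change is in [m, b]
        else:
            b = m  # sign change is in [a, m]
    return (a + b) / 2
```

For example, `bisect(lambda x: x*x - 2, 0.0, 2.0)` approximates `sqrt(2)`; the interval halves each iteration, so reaching a tolerance of `1e-10` on a width-2 bracket takes about 35 iterations.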

It's easy to see that the method must converge to a root (unlike the Newton-Raphson method or the "regula falsi" method, which can fail to converge); however, each iteration adds just one bit of accuracy (it's a binary search, after all), so convergence is much slower than with those methods when they do converge.

For unknown functions, it is usually best to combine iterations of a fast method with a bisection-based safeguard, so that convergence is still guaranteed.
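One possible shape for such a hybrid, sketched here with Newton's method as the fast component (the name `safeguarded_newton` and all parameters are hypothetical; production libraries typically use more refined variants such as Brent's method):

```python
def safeguarded_newton(f, df, a, b, tol=1e-12, max_iter=100):
    """Newton steps inside a shrinking bracket [a, b] with f(a) <= 0 <= f(b).

    Whenever a Newton step would leave the bracket (or the derivative
    vanishes), fall back to a bisection step, so convergence is guaranteed.
    """
    x = (a + b) / 2
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # Shrink the bracket using the sign of f(x), as in plain bisection.
        if fx <= 0:
            a = x
        else:
            b = x
        # Try a Newton step; reject it if it escapes the bracket.
        d = df(x)
        if d != 0:
            x_new = x - fx / d
        else:
            x_new = (a + b) / 2  # flat slope: bisect instead
        if not (a < x_new < b):
            x_new = (a + b) / 2
        x = x_new
    return x
```

When the Newton steps behave, this converges quadratically; when they misbehave, every rejected step still halves the bracket, so the worst case is ordinary bisection.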