# Numerical root finding algorithms

A root-finding algorithm is a numerical method for finding a value x such that f(x) = 0, for a given function f.

Two of the best-known root-finding algorithms are the bisection method and Newton's method, named after the eminent 17th-century mathematician and scientist Isaac Newton. Newton's method uses calculus to compute the slopes of tangent lines. While it is more efficient, it can fail when the initial estimate is too far from the root. The bisection method does not use any calculus and usually takes longer to converge.

One-dimensional root-finding algorithms can be divided into two classes: **root bracketing** and **root polishing**.

## Root bracketing methods

A root is bracketed in the interval (a, b) if f(a) and f(b) have
opposite signs. If the function is continuous, then at least one root must lie in
that interval. Algorithms that proceed by bracketing a root are guaranteed to
converge. Bracketing algorithms begin with a bounded region known
to contain a root. The size of this bounded region is reduced,
iteratively, until it encloses the root to a desired tolerance.
This provides rigorous error estimation for the location of the
root.
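As a minimal sketch of the bracketing idea, here is the bisection method in Python: each iteration halves the interval, keeping the half whose endpoints still have opposite signs, so the interval width itself is a rigorous error bound. The function names and tolerances are illustrative, not from the original text.

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Bisection: repeatedly halve a sign-changing interval (a, b)
    until it is narrower than tol. Converges for any continuous f."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm   # root lies in the left half
        else:
            a, fa = m, fm   # root lies in the right half
    return 0.5 * (a + b)

# Example: find sqrt(2) as the positive root of x^2 - 2 on [1, 2].
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Note that bisection gains exactly one bit of accuracy per iteration, which is why it is reliable but slow compared with polishing methods.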

## Root polishing methods

The technique of root polishing attempts to improve an initial
guess to the root. These algorithms converge only if started
"close enough" to a root, and sacrifice a rigorous error
bound for speed. By approximating the behavior of a function near a
root, they attempt to find a higher order improvement of an initial
guess. When the behavior of the function is compatible with the
algorithm and a good initial guess is available, a polishing
algorithm can provide rapid convergence.
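A short sketch of a polishing algorithm, using Newton's method as described earlier: each step follows the tangent line at the current guess down to the x-axis. The stopping criterion and iteration cap below are illustrative choices, and the method is assumed to be given the derivative explicitly.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: improve the guess x by following the
    tangent line at x to where it crosses the x-axis."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # heuristic stop; no rigorous error bound
            return x
    raise RuntimeError("did not converge; try a closer initial guess")

# Example: polish a guess of 1.0 toward sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Started close to the root, Newton's method roughly doubles the number of correct digits per iteration, but a poor starting guess can send it far from any root, which is the trade-off described above.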


DeadLine OnLine - free equation solver. Copyright 2003-2007 Ionut Alex. Chitu.