07. Solution of Nonlinear Equations (Secant Method)
Dated: 16-08-2025
The secant method is a modified version of the Newton–Raphson method in which we replace \(f^\prime(x_n)\) by the difference ratio
\[
f^\prime(x_n) \approx \frac{f(x_n)-f(x_{n-1})}{x_n-x_{n-1}}
\]
where \(x_n\) and \(x_{n - 1}\) are two successive approximations to the root. Substituting this into the Newton–Raphson formula, we get
\[
x_{n+1} = x_n - \frac{f(x_n)(x_n - x_{n-1})}{f(x_n) - f(x_{n-1})}
\]
\[
= \frac{x_n f(x_n) - x_n f(x_{n-1}) - f(x_n)(x_n - x_{n-1})}{f(x_n) - f(x_{n-1})}
\]
\[
= \frac{x_{n-1} f(x_n) - x_n f(x_{n-1})}{f(x_n) - f(x_{n-1})}
\]
Provided \(f(x_n) \ne f(x_{n - 1})\).
This method requires two starting values \(x_0\) and \(x_1\).
Geometrical Interpretation
Geometrically, the secant method corresponds to drawing secants rather than tangents to obtain successive approximations to the root \(\alpha\). Given two points \((x_0, f(x_0))\) and \((x_1, f(x_1))\), \(x_2\) is obtained as the intersection of the secant line through these two points with the \(x\)-axis. There is a possibility of divergence if the two approximations lie on the same side of the root.
The order of convergence of the secant method is \(\frac{1 + \sqrt 5}{2} \approx 1.618\).
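This value can be motivated as follows (a sketch, assuming \(f\) is twice continuously differentiable and \(f^\prime(\alpha) \ne 0\)). Writing \(e_n = |x_n - \alpha|\), the secant method satisfies the error relation
\[
e_{n+1} \approx \left| \frac{f^{\prime\prime}(\alpha)}{2 f^\prime(\alpha)} \right| e_n e_{n-1}
\]
Assuming \(e_{n+1} \approx A e_n^p\), so that \(e_{n-1} \approx (e_n/A)^{1/p}\), and equating the exponents of \(e_n\) gives
\[
p = 1 + \frac{1}{p} \implies p^2 - p - 1 = 0 \implies p = \frac{1 + \sqrt 5}{2}
\]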
The equation of the chord is
\[
y - f(x_0) = \frac{f(x_1) - f(x_0)}{x_1 - x_0}(x-x_0)
\]
Putting \(y = 0\), we have
\[
x_2 = x = \frac{x_0 f(x_1) - x_1 f(x_0)}{f(x_1) - f(x_0)}
\]
Convergence of Secant Method
\[
x_{n+1} = \frac{x_{n-1} f(x_n) - x_n f(x_{n-1})}{f(x_n) - f(x_{n-1})}
\]
Starting with \(x_0\) and \(x_1\), the method generates a sequence \(\{x_0, x_1, x_2, \ldots\}\) that converges to the root \(\alpha\), that is, \(f(\alpha) = 0\).
The secant method converges faster than a linearly convergent method but slower than Newton's quadratically convergent method.
Example
Do three iterations of the secant method to find a root of
\[f(x)=x^3-3x+1=0\]
Taking
\[x_0 = 1, \quad x_1 = 0.5\]
\[n = 1\]
\[f(x_0) = f(1) = 1^3 - 3(1) + 1 = -1\]
\[f(x_1) = f(0.5) = 0.5^3 - 3(0.5) + 1 = 0.125 - 1.5 + 1 = -0.375\]
\[x_2 = \frac{x_0 f(x_1) - x_1 f(x_0)}{f(x_1) - f(x_0)} = \frac{(1)(-0.375) - (0.5)(-1)}{-0.375 - (-1)} = 0.2\]
\[n = 2\]
\[f(x_2) = f(0.2) = 0.2^3 - 3(0.2) + 1 = 0.008 - 0.6 + 1 = 0.408\]
\[x_3 = \frac{x_1 f(x_2) - x_2 f(x_1)}{f(x_2) - f(x_1)} = \frac{(0.5)(0.408) - 0.2(-0.375)}{0.408 - (-0.375)} = 0.3563\]
\[n = 3\]
\[f(x_3) = f(0.3563) = 0.3563^3 - 3(0.3563) + 1 = 0.04523 - 1.0689 + 1 = -0.02367\]
\[x_4 = \frac{x_2 f(x_3) - x_3 f(x_2)}{f(x_3) - f(x_2)} = \frac{(0.2)(-0.02367) - (0.3563)(0.408)}{-0.02367 - 0.408} = 0.3477\]
One further iteration gives
\[x_5 = 0.3473, \quad f(x_5) = -0.0000096\]
\[|x_5 - x_4| = 0.0004\]
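The hand computation above can be reproduced with a short script; a minimal sketch in Python (the function names and tolerance are illustrative):

```python
def f(x):
    return x**3 - 3 * x + 1

def secant(f, x0, x1, tol=1e-6, max_iter=20):
    """Iterate x_{n+1} = (x_{n-1} f(x_n) - x_n f(x_{n-1})) / (f(x_n) - f(x_{n-1}))."""
    xs = [x0, x1]
    for _ in range(max_iter):
        f0, f1 = f(xs[-2]), f(xs[-1])
        if f1 == f0:                    # the formula requires f(x_n) != f(x_{n-1})
            break
        xs.append((xs[-2] * f1 - xs[-1] * f0) / (f1 - f0))
        if abs(xs[-1] - xs[-2]) < tol:  # stop when successive iterates agree
            break
    return xs

for n, x in enumerate(secant(f, 1.0, 0.5)):
    print(f"x_{n} = {x:.5f}")   # x_2 = 0.20000, x_3 = 0.35632, x_4 = 0.34773, ...
```

The iterates match the example (\(x_2 = 0.2\), \(x_3 \approx 0.3563\), \(x_4 \approx 0.3477\)) and settle at the root \(\alpha \approx 0.34730\).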
Muller’s Method
In this method, \(f(x) = 0\) is approximated by a second-degree polynomial, that is, a quadratic that fits through three points in the vicinity of a root. The roots of this quadratic are then taken as approximations to the roots of the equation \(f(x) = 0\). The method can be used to determine both real and complex roots.
Let \(x_{i - 2}, x_{i - 1}, x_i\) be any three distinct approximations to a root of \(f(x) = 0\), and write
\[f(x_{i - 2}) = f_{i - 2}\]
\[f(x_{i - 1}) = f_{i - 1}\]
\[f(x_{i}) = f_{i}\]
A second-degree polynomial will pass through the points
- \((x_{i}, f(x_{i}))\)
- \((x_{i - 1}, f(x_{i - 1}))\)
- \((x_{i - 2}, f(x_{i - 2}))\)
Then the following equations will be satisfied
\[ax_{i-2}^2 + bx_{i-2} + c = f_{i-2}\]
\[ax_{i-1}^2 + bx_{i-1} + c = f_{i-1}\]
\[ax_i^2 + bx_i + c = f_i\]
Eliminating \(a, b, c\), we obtain
\[
\begin{vmatrix}
x^2 & x & 1 & f \\
x_{i-2}^2 & x_{i-2} & 1 & f_{i-2} \\
x_{i-1}^2 & x_{i-1} & 1 & f_{i-1} \\
x_i^2 & x_i & 1 & f_i
\end{vmatrix} = 0
\]
Equivalently, by Lagrange interpolation, the quadratic through the three points is
\[f = \frac{(x - x_{i-1})(x - x_i)}{(x_{i-2} - x_{i-1})(x_{i-2} - x_i)} f_{i-2} + \frac{(x - x_{i-2})(x - x_i)}{(x_{i-1} - x_{i-2})(x_{i-1} - x_i)} f_{i-1} + \frac{(x - x_{i-2})(x - x_{i-1})}{(x_i - x_{i-2})(x_i - x_{i-1})} f_i\]
Introduce \(h = x - x_i\), \(h_i = x_i - x_{i-1}\), \(h_{i-1} = x_{i-1} - x_{i-2}\), and set
\[\lambda = \frac{h}{h_i} = \frac{x - x_i}{x_i - x_{i-1}}, \qquad \lambda_i = \frac{h_i}{h_{i-1}}, \qquad \delta_i = 1 + \lambda_i\]
Then the interpolating quadratic becomes
\[f = \frac{1}{\delta_i} \left( \lambda(\lambda + 1)\lambda_i^2 f_{i-2} - \lambda(\lambda + 1 + \lambda_i^{-1})\lambda_i \delta_i f_{i-1} + (\lambda + 1)(\lambda + 1 + \lambda_i^{-1})\lambda_i f_i \right)\]
or
\[f = \lambda^2 \left( f_{i-2}\lambda_i^2 - f_{i-1}\lambda_i\delta_i + f_i\lambda_i \right)\delta_i^{-1} + \lambda \left( f_{i-2}\lambda_i^2 - f_{i-1}\delta_i^2 + f_i(\lambda_i + \delta_i) \right)\delta_i^{-1} + f_i\]
To compute \(\lambda\), we set \(f = 0\), which gives
\[\lambda_i(f_{i-2}\lambda_i - f_{i-1}\delta_i + f_i)\lambda^2 + g_i\lambda + \delta_i f_i = 0\]
where
\[g_i = f_{i-2}\lambda_i^2 - f_{i-1}\delta_i^2 + f_i(\lambda_i + \delta_i)\]
Dividing by \(\lambda^2\),
\[\frac{f_i\delta_i}{\lambda^2} + \frac{g_i}{\lambda} + \lambda_i(f_{i-2}\lambda_i - f_{i-1}\delta_i + f_i) = 0\]
\[\frac{1}{\lambda} = \frac{-g_i \pm \left( g_i^2 - 4f_i \delta_i \lambda_i (f_{i-2}\lambda_i - f_{i-1}\delta_i + f_i) \right)^{1/2}}{2f_i\delta_i}\]
\[\implies \lambda = \frac{-2f_i\delta_i}{g_i \pm \left(g_i^2 - 4f_i\delta_i\lambda_i(f_{i-2}\lambda_i - f_{i-1}\delta_i + f_i)\right)^{1/2}}\]
The sign in the denominator is chosen to make its magnitude as large as possible, which selects the root of the quadratic nearest to \(x_i\).
We then obtain a better approximation to the root by using
\[x_{i+1} = x_i + h_i\lambda\]
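The iteration can be sketched in Python using the same \(\lambda\), \(\lambda_i\), \(\delta_i\) notation (the function name and tolerance are illustrative; `cmath.sqrt` is used so complex roots can also be found):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, max_iter=50):
    """Muller's method; x0, x1, x2 play the roles of x_{i-2}, x_{i-1}, x_i."""
    for _ in range(max_iter):
        h1, h2 = x1 - x0, x2 - x1              # h_{i-1}, h_i
        lam_i = h2 / h1                        # lambda_i = h_i / h_{i-1}
        delta_i = 1 + lam_i                    # delta_i = 1 + lambda_i
        f0, f1, f2 = f(x0), f(x1), f(x2)
        g = lam_i**2 * f0 - delta_i**2 * f1 + (lam_i + delta_i) * f2
        c = lam_i * (lam_i * f0 - delta_i * f1 + f2)
        disc = cmath.sqrt(g * g - 4 * f2 * delta_i * c)   # may be complex
        # take the sign that maximises |denominator|: root nearest x_i
        denom = g + disc if abs(g + disc) >= abs(g - disc) else g - disc
        lam = -2 * f2 * delta_i / denom
        x3 = x2 + h2 * lam                     # x_{i+1} = x_i + h_i * lambda
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

root = muller(lambda x: x**3 - 3 * x + 1, 1.0, 0.5, 0.2)
print(root)
```

Started from the same three values as the secant example, this converges to the root near \(0.34730\) of \(x^3 - 3x + 1 = 0\).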