often faster than naive direct methods in such cases. Iterative methods start with an initial guess and repeatedly iterate through the constraints of a linear specification, refining the solution until sufficient precision is reached.
Over the years, various methods have been designed to solve non-linear systems of equations. Some of the iterative methods are as follows:
1. Bisection Method
2. Newton-Raphson Method
3. Secant Method
4. False Position Method
5. Gauss Seidel Method
One limitation of Newton's Method is that it is impractical for large-scale problems because of its large memory requirements.
1.1 Bisection Method
The Bisection Method [15] is the most primitive method for finding real roots of f(x) = 0, where f is a continuous function. This method is also known as the Binary-Search Method and the Bolzano Method. Two initial guesses are required to start the procedure. The method is based on the Intermediate Value Theorem: if f is continuous on [l, u] and f(l) and f(u) have opposite signs, so that f(l)f(u) < 0, then there is at least one root in (l, u).
We first bracket the root. The method then divides the interval in two at the midpoint x_m = (x_l + x_u)/2 and evaluates the sign of the product f(x_l)f(x_m) to decide which half contains the root. It is illustrated in Figure 1.
An advantage of the Bisection Method [14] is that it always converges, which makes it very useful for computer-based solvers. As iterations are conducted, the interval is halved, so the error can be controlled. Since the method brackets the root, convergence is guaranteed. In contrast, there are some pitfalls as well: the Bisection Method is very slow because it converges only linearly, and if the initial guesses are not close to the root, it takes a larger number of iterations to find it.
1.1.1 Algorithm
Choose x_l and x_u as the initial guesses such that
f(x_l)f(x_u) < 0
Compute the midpoint
x_m = (x_l + x_u)/2
Evaluate the product f(x_l)f(x_m).
If f(x_l)f(x_m) < 0, the root lies between x_l and x_m, so set x_u = x_m.
If f(x_l)f(x_m) > 0, the root lies between x_m and x_u, so set x_l = x_m.
If f(x_l)f(x_m) = 0, then x_m is the exact root.
Compute the error |ε_x^(i)| = |x_i − x_{i+1}| and check if |ε_x^(i)| ≤ ε.

Figure 1: Bisection Method [15]
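The steps above can be sketched in Python; the test function, tolerance, and iteration cap below are illustrative choices, not part of the original algorithm.

```python
def bisection(f, xl, xu, eps=1e-6, max_iter=100):
    """Find a root of f in [xl, xu] by repeatedly halving the bracket."""
    if f(xl) * f(xu) >= 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    for _ in range(max_iter):
        xm = (xl + xu) / 2.0          # midpoint of the current bracket
        if f(xl) * f(xm) < 0:         # root lies in the lower half
            xu = xm
        elif f(xl) * f(xm) > 0:       # root lies in the upper half
            xl = xm
        else:                         # f(xm) == 0: exact root found
            return xm
        if abs(xu - xl) <= eps:       # stop when the bracket is small enough
            break
    return (xl + xu) / 2.0

# Example: the root of x^2 - 2 on [1, 2] is sqrt(2)
root = bisection(lambda x: x * x - 2.0, 1.0, 2.0)
```

Because the bracket width is exactly halved each step, the number of iterations needed for a given tolerance is predictable in advance.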
1.2 Newton-Raphson Method
The Newton-Raphson Method [15] is the most popular method for finding the roots of non-linear equations. The function is approximated by its tangent line, as shown in Figure 2. The method starts with an initial guess that is close to the root. The basic difference between Newton's Method and the other methods is that only one initial guess is required. Newton's Method is efficient if the guess is close to the root; on the other hand, it works slowly or may diverge if the initial guess is not very close to the root. One advantage of Newton's Method is that it converges fast, if it converges at all. Its best feature is that it needs less time and fewer iterations to find the root than the Bisection and False Position Methods, though it has the drawback of more complicated calculations. It resembles the Bisection Method in refining the approximation in a consistent fashion, so the user can roughly predict how many iterations are required to arrive at a root.
Figure 2: Newton Raphson Method [15]
Newton's Method [5] needs both the function and the derivative of that function to approximate the root. The process starts with an initial guess close to the root. The function is then approximated by its tangent line at that point, and the x-intercept of the tangent line is computed. This x-intercept is usually a better approximation than the original guess, and the process is repeated until the desired root is found.
Error calculation is another interesting possible action with Newton's Method. The error is simply the difference between the approximated value and the actual value. The Newton-Raphson Method is faster than the Bisection Method; however, it requires the derivative of the function, which is sometimes a difficult and laborious task. Some functions cannot be solved by the Newton-Raphson Method, and for such functions the False Position Method is the better choice. To meet the convergence criterion, the function should satisfy
|f(x) f″(x)| < [f′(x)]²
However, there is no guarantee of convergence: if the derivative is too small or zero, the method will diverge.
1.2.1 Algorithm
Write out f(x)
Calculate f′(x)
Choose an initial guess
Compute
x_{i+1} = x_i − f(x_i)/f′(x_i)
Continue iterating, computing the error |ε_x^(i)| = |x_i − x_{i+1}|
Check if |ε_x^(i)| ≤ ε
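The iteration above can be sketched in Python; the test function, its derivative, and the starting point are illustrative choices.

```python
def newton_raphson(f, fprime, x0, eps=1e-10, max_iter=50):
    """Iterate x_{i+1} = x_i - f(x_i)/f'(x_i) until |x_{i+1} - x_i| <= eps."""
    x = x0
    for _ in range(max_iter):
        d = fprime(x)
        if d == 0:                 # zero derivative: the step is undefined
            raise ZeroDivisionError("f'(x) is zero; Newton step undefined")
        x_next = x - f(x) / d
        if abs(x_next - x) <= eps: # error estimate |eps_x| = |x_i - x_{i+1}|
            return x_next
        x = x_next
    return x

# Example: root of x^2 - 2 starting from x0 = 1.5
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Note the explicit guard for a vanishing derivative, the failure mode discussed above.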
The Secant Method [15] is a root-finding algorithm. It does not need the derivative of the function like the Newton-Raphson Method; instead, the derivative is replaced by a finite-difference formula. Two starting values are needed for the Secant Method to proceed. The values of the function at these starting values give two points on the curve, and unlike the Bisection Method, the initial values do not need to have opposite signs. A new point is obtained where the straight line through the two points crosses the x-axis, as illustrated geometrically in Figure 3. This new point then replaces the oldest point used in the calculation, and the process is continued to obtain an approximate root. A good point is that we do not need to bracket the root; only two guesses are required. This is why its convergence is fast compared to the Bisection and False Position Methods, if it converges.
In standard computer programs for computing the zeros of continuous functions [1], this method is combined with some method whose convergence is assured, for example the False Position Method. On the contrary, the Secant Method can fail if the function is flat: the slope of the secant line can become very small, which can cause the iterates to move far away from the points at hand.
For the convergence of the Secant Method, the initial guesses should be close to the root. The order of convergence is α = (1 + √5)/2 ≈ 1.618.
1.3.1 Algorithm
From the Newton-Raphson Method, we have
x_{n+1} = x_n − f(x_n)/f′(x_n)
We replace the derivative with the finite-difference formula
f′(x_n) ≈ (f(x_n) − f(x_{n−1}))/(x_n − x_{n−1})
After substituting this approximation into Newton's formula,
we obtain
Figure 3: Secant Method [15]
x_{n+1} = x_n − f(x_n) · (x_n − x_{n−1})/(f(x_n) − f(x_{n−1}))
x_{n+1} = (x_{n−1} f(x_n) − x_n f(x_{n−1}))/(f(x_n) − f(x_{n−1}))
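The resulting iteration can be sketched in Python; the test function and starting values are illustrative choices.

```python
def secant(f, x0, x1, eps=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)*(x_n - x_{n-1})/(f(x_n) - f(x_{n-1}))."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:               # flat secant line: slope is zero, method fails
            raise ZeroDivisionError("secant slope is zero")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) <= eps:
            return x2
        x0, x1 = x1, x2            # the new point replaces the oldest point
    return x1

# Example: root of x^2 - 2 from the two starting values 1 and 2
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```

Unlike the Bisection example, the two starting values here need not bracket the root.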
1.4 False Position Method
The method [15] which combines the features of the Bisection Method and the Secant Method is known as the False Position Method. It is also called the Regula Falsi and Interpolation Method. Two initial approximations are needed to start this method, chosen so that the product of the function values at these two points is less than zero; that is, the function values must have opposite signs. The new value is found as the intersection of the chord joining the function values at the initial approximations with the x-axis. It is illustrated in Figure 4. The formula used for the False Position Method is as follows:
x_{n+1} = (x_{n−1} f(x_n) − x_n f(x_{n−1}))/(f(x_n) − f(x_{n−1}))    (1)
The reason behind the discovery of this method [5] was that the Bisection Method converges slowly; the False Position Method is therefore more efficient than the Bisection Method.
1.4.1 Algorithm
For the interval [l, u], the first value is calculated using the formula
Figure 4: False Position Method [15]
x_{n+1} = (x_{n−1} f(x_n) − x_n f(x_{n−1}))/(f(x_n) − f(x_{n−1}))
Evaluate the product f(x_l)f(x_{n+1}).
If f(x_l)f(x_{n+1}) < 0, the root lies in the first part of the interval, so set x_u = x_{n+1}.
If f(x_l)f(x_{n+1}) > 0, the root lies in the second part of the interval, so set x_l = x_{n+1}.
Compute the error |ε_x^(i)| = |x_i − x_{i+1}| and check if |ε_x^(i)| ≤ ε.
Figure 5: Gauss Seidel Method [6]
Repeat the process until the root is found.
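A Python sketch of the procedure, keeping the root bracketed at every step; the test function and interval are illustrative choices.

```python
def false_position(f, xl, xu, eps=1e-10, max_iter=200):
    """Root finding by the chord (regula falsi) step, keeping the bracket."""
    fl, fu = f(xl), f(xu)
    if fl * fu >= 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    x_old = xl
    for _ in range(max_iter):
        # intersection of the chord through (xl, fl), (xu, fu) with the x-axis
        x_new = (xl * fu - xu * fl) / (fu - fl)
        fn = f(x_new)
        if fn == 0 or abs(x_new - x_old) <= eps:
            return x_new
        if fl * fn < 0:            # root lies in [xl, x_new]
            xu, fu = x_new, fn
        else:                      # root lies in [x_new, xu]
            xl, fl = x_new, fn
        x_old = x_new
    return x_new

# Example: root of x^2 - 2 on the bracketing interval [1, 2]
root = false_position(lambda x: x * x - 2.0, 1.0, 2.0)
```

Each new point replaces whichever endpoint has the same sign of f, so, unlike the Secant Method, the root never escapes the bracket.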
1.5 Gauss-Seidel Method
The Gauss-Seidel Method [15] is an iterative method designed to find the solution of non-linear systems of equations by a series of iterations. A graphical illustration of the Gauss-Seidel Method is shown in Figure 5.
First, we assume an initial guess to start the Gauss-Seidel Method. We then substitute the solution into each equation in turn, using the most recent values of the previously computed components (x_1^(k+1), x_2^(k+1), ..., x_i^(k+1)) in the update of x_{i+1}, such that
x_{i+1}^(k+1) = g_{i+1}(x_1^(k+1), x_2^(k+1), ..., x_i^(k+1), x_{i+1}^(k), ..., x_n^(k))
Iteration is continued until the relative approximate error is less than a pre-specified tolerance. The Gauss-Seidel Method has linear convergence. It is simple to program, sparsity is exploited easily, and each iteration requires little execution time. It is a robust method compared to the Newton-Raphson Method. A hard-coded example is shown below.
1.5.1 PROGRAM
% This script applies up to N iterations of the Gauss-Seidel method
% to a hard-coded system of non-linear equations.
% x0 is the initial value.
tol = 0.01;
x0 = [0 0 0];
N = 100; n = length(x0);
x = x0; X = [x0];
for k = 1:N                 % for N iterations
    % Consider the non-linear system of equations as follows.
    x(1) = 1/4*(11 - x0(2)^2 - x0(3));
    x(2) = 1/4*(18 - x(1) - x0(3)^2);
    x(3) = 1/4*(15 - x(2) - x(1)^2);
    % break the loop if the tolerance condition is met
    X = [X; x];
    if norm(x - x0, inf) < tol
        disp('Tolerance met in k iterations, where')
        k = k
        break;
    end
    x0 = x;
end
if (k == N)
    disp('Tolerance not met in maximum number of iterations')
end
The above system has the solution (0.999, 1.999, 2.999).
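For experimenting outside MATLAB, the same Gauss-Seidel iteration can be sketched in Python; the starting guess and tolerance here are illustrative choices and differ from the MATLAB program above.

```python
def gauss_seidel(x0, tol=1e-8, max_iter=500):
    """Gauss-Seidel iteration for the hard-coded non-linear system
    4*x1 + x2^2 + x3 = 11, x1 + 4*x2 + x3^2 = 18, x1^2 + x2 + 4*x3 = 15.
    Each component update immediately reuses the newest values."""
    x_old = list(x0)
    for k in range(1, max_iter + 1):
        x = [0.0, 0.0, 0.0]
        x[0] = (11 - x_old[1] ** 2 - x_old[2]) / 4   # uses old x2, x3
        x[1] = (18 - x[0] - x_old[2] ** 2) / 4       # uses new x1, old x3
        x[2] = (15 - x[1] - x[0] ** 2) / 4           # uses new x1, x2
        if max(abs(a - b) for a, b in zip(x, x_old)) < tol:
            return x, k                              # tolerance met after k steps
        x_old = x
    return x_old, max_iter

# A starting guess near the root, as the convergence discussion above requires
x, k = gauss_seidel([0.5, 1.5, 2.5])
```

Starting reasonably close to the root matters here: the iteration is only locally convergent, which is why the guess above is not the zero vector.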
1.6 Related work
One of the most difficult problems [5] in numerical analysis is to find solutions of non-linear systems of equations. The methods used for solving non-linear systems of equations are iterative in nature because they [14] guarantee a solution with predetermined accuracy.
Specific methods are discussed in [1], and a comparison of the methods in [1] and [8] with several others has been carried out in [3]. Significant success in solving some of the nonlinear equations that relate to space trajectory optimization has been achieved with one of the quasi-Newton methods in [4]. Methods that do not require derivatives, i.e., that do not require evaluation of the Jacobian matrix [8] at each iteration, have received significant attention in the past eight years. There, two new iterative methods for solving non-linear systems of equations have been proposed and an analysis between the new and existing approaches has been carried out. The authors showed that the new iterative methods are more efficient than existing methods such as the Newton-Raphson Method, the Method of Cordero and Torregrosa [9], and the Method of Darvishi and Barati [11].
A Bisection Method in higher dimensions has been devised by [16]. A generalized Bisection Method [2] has been proposed for the constrained minimization problem when the feasible region is an n-dimensional simplex. This was then extended [10] to a multidimensional Bisection Method for solving the unconstrained minimization problem.
Various computational methods have been proposed for solving non-linear systems of equations, one of which is the Gauss-Seidel Method. The Gauss-Seidel Method for non-linear systems of equations has been presented by [15], and it has been discussed in multiple dimensions in [13]. A non-linear Gauss-Seidel Method for network problems was presented in [12]; the authors compared the Jacobi and Gauss-Seidel Methods for these problems and showed that the non-linear Gauss-Seidel Method is more efficient than the Jacobi Method. Motivated and inspired by the on-going research in this area, we suggest the Gauss-Seidel Method for solving non-linear systems of equations.
1.7 Comparison between Non-Linear Solvers
Newton’s Method is popular method to find solution for systems of nonlinear equations. But it requires
initial estimates close to the solution and derivative is required at each iteration, there can be some non-
linear problems where derivative of function is not known. In that case it will be worst to use Newton’s
Method. There is no doubt that Newton’s Method is efficient but it is not stable method. If we look at
Secant Method it is also modified form of Newton’s Method, it does not need derivative of the function.
But one of the drawbacks of Secant Method is that its convergence is not guaranteed, it can fail if function
is flat. Bisection and False position are simpler methods to find the roots of non-linear equations in one
dimension. But it is quite complicated process for these methods to solve multiple non-linear equations in
several dimensions. One of the best method is Gauss Seidel for solving non-linear systems of equations.
It is simple to use, efficient and its convergence is linear. On contrary, there is no need of derivative like
Newton’s Method. We do not have to bracket the root as well like the Bisection Method.
1.8 Conclusion
It is worth mentioning that many methods have been proposed for solving non-linear equations, and it has been shown that these methods can also be extended to multiple non-linear equations. It depends on the problem which particular method will provide the most suitable solution for that specific case. From the above discussion it is concluded that the Gauss-Seidel algorithm is the best choice here because of its convergence and simplicity. That is why it can be said that the Gauss-Seidel Method is a suitable method for solving non-linear systems of equations.
References
[1] J. G. P. Barnes. An algorithm for solving non-linear equations based on the secant method. 8(1):66–
72, 1965.
[2] M. E. Baushev A.N. A multidimensional bisection method for minimizing function over simplex.
pages 801–803, 2007.
[3] M. J. Box. A comparison of several current optimization methods, and the use of transformations in
constrained problems. 9(1):67–77, 1966.
[4] M. J. Box. Parameter hunting techniques. 1966.
[5] H. M. Antia. Numerical Methods for Scientists and Engineers. Birkhäuser Verlag, 2002.
[6] Hornberger and Wiberg. Numerical Methods in the Hydrological Sciences. 2005.
[7] S. Kunis and H. Rauhut. Random sampling of sparse trigonometric polynomials, ii. orthogonal match-
ing pursuit versus basis pursuit. Journal Foundations of Computational Mathematics, 8(6):737–763,
Nov. 2008.
[8] M. Kuo. Solution of nonlinear equations. IEEE Transactions on Computers, C-17(9):897–898, Sep. 1968.
[9] A. Cordero and J. R. Torregrosa. Variants of Newton's method using fifth-order quadrature formulas. pages 686–698, 2007.
[10] E. Morozova. A multidimensional bisection method for unconstrained minimization problem. 2008.
[11] M. T. Darvishi and A. Barati. A third-order Newton-type method to solve systems of nonlinear equations. pages 630–635, 2007.
[12] T. A. Porsching. Jacobi and Gauss-Seidel methods for nonlinear network problems. 6(3), 1969.
[13] R. L. Burden and J. D. Faires. Numerical Analysis. 8th edition, 2005.
[14] W. H. Robert. Numerical Methods. Quantum, 1975.
[15] N. A. Saeed and A. Bhatti. Numerical Analysis. Shahryar, 2008.
[16] G. Wood. The bisection method in higher dimensions. 1989.