Hindawi Publishing Corporation
The Scientific World Journal
Volume 2015, Article ID 934260, 12 pages
http://dx.doi.org/10.1155/2015/934260

Research Article

A General Class of Derivative Free Optimal Root Finding Methods Based on Rational Interpolation
Fiza Zafar, Nusrat Yasmin, Saima Akram, and Moin-ud-Din Junjua
Centre for Advanced Studies in Pure and Applied Mathematics, Bahauddin Zakariya University, Multan 60800, Pakistan
Correspondence should be addressed to Fiza Zafar; [email protected]
Received 22 May 2014; Accepted 18 August 2014
Academic Editor: S. A. Mohiuddine
Copyright © 2015 Fiza Zafar et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We construct a new general class of derivative free $n$-point iterative methods of optimal order of convergence $2^{n-1}$ using rational interpolants. The special cases of this class are obtained. These methods do not need Newton's iterate in the first step of their iterative schemes. Numerical computations are presented to show that the new methods are efficient and can be seen as better alternatives.
1. Introduction
The problem of root finding has been addressed extensively in the last few decades. In 1685, the first scheme for finding the roots of nonlinear equations was published by John Wallis. A simplified description was published in 1690 by Joseph Raphson and became known as the Newton-Raphson method. In 1740, Thomas Simpson was the first to present Newton's method as an iterative method for solving nonlinear equations. The method is quadratically convergent, but it may fail to converge to a real root if the initial guess does not lie in the vicinity of the root or if $f'(x)$ vanishes in the neighborhood of the real root. This method is a without-memory method. Later, many derivative free methods were proposed, for example, the secant and Steffensen methods. However, most derivative free iterative methods are with-memory methods; that is, they require old and new information to calculate the next approximation. In spite of the drawbacks of Newton's method, many multipoint methods for finding a simple root of a nonlinear equation have been developed in the recent past using Newton's method as the first step. More recently, many higher order convergent derivative free iterative methods have also appeared that take a Steffensen-type method as the first step. A large number of optimal higher order convergent iterative methods, up to order sixteen, have been investigated recently [1-3]. These methods use different interpolation techniques to approximate the first derivative.
In the era of 1960-1965, many authors used rational function approximation for finding the root of a nonlinear equation, for example, Tornheim [4] and Jarratt and Nudds [5]. In 1967, Jarratt [6] effectively used a rational interpolant of the form

$$y = \frac{x - a}{b x^2 + c x + d} \tag{1}$$

to approximate $f(x)$ and constructed a with-memory scheme involving the first derivative. The order of the scheme was 2.732 and its efficiency was 0.2182.
In 1987, Cuyt and Wuytack [7] described a with-memory iterative method involving the first derivative based on rational interpolation, and they also discussed two special cases of their scheme, one of order 1.84 with efficiency 0.1324 and one of order 2.41 with efficiency 0.1910.
In 1990, Field [8] used a rational function to approximate the root of a nonlinear equation as follows:

$$x_{i+1} = x_i + d_i, \tag{2}$$

where $d_i$, the correction at each iterate, is the root of the numerator of the Pad&eacute; approximant to the Taylor series

$$f(x) = \sum_{j=0}^{\infty} \frac{f^{(j)}(x_i)}{j!}\,(x - x_i)^j. \tag{3}$$

He proved that $x_i$ converges to the root $\xi$ with order $m + n + 1$, where $m$ and $n$ are the degrees of the denominator and numerator of the Pad&eacute; approximant.
In 1974, Kung and Traub [9] conjectured that a multipoint iterative scheme without memory for finding a simple root of a nonlinear equation, requiring $n$ functional evaluations for one complete cycle, can have maximum order of convergence $2^{n-1}$ with maximal efficiency index $2^{(n-1)/n}$. Multipoint methods with this property are usually called optimal methods. Several researchers [1-3, 10] have developed optimal multipoint iterative methods based on this conjecture.
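For reference (not part of the original discussion), the conjectured optimal order and efficiency index are easy to tabulate for small $n$; a minimal Python sketch:

```python
# Optimal order 2^(n-1) and efficiency index 2^((n-1)/n) conjectured by
# Kung and Traub for a without-memory scheme using n function evaluations.
for n in range(2, 6):
    order = 2 ** (n - 1)
    efficiency_index = 2 ** ((n - 1) / n)
    print(n, order, round(efficiency_index, 3))
# n = 2, 3, 4, 5 gives orders 2, 4, 8, 16 and indices about 1.414, 1.587, 1.68, 1.74.
```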
In 2011, Soleymani and Sharifi [10] developed a four-step without-memory fourteenth order convergent iterative method involving the first derivative and having an efficiency index of 1.6952. The first derivative at the fourth step is approximated using rational interpolation as follows:

$$w_n = \phi_8(x_n, y_n, z_n),$$
$$x_{n+1} = w_n - \frac{(1 + b_4(w_n - x_n))^2 f(w_n)}{f'(x_n) + b_3(w_n - x_n)\,(2 + b_4(w_n - x_n))}, \tag{4}$$

where $\phi_8$ is an optimal eighth order convergent method and the rational interpolant is given as

$$m(t) = \frac{b_1 + b_2(t - x) + b_3(t - x)^2}{1 + b_4(t - x)}. \tag{5}$$

In 2012, Soleymani et al. [3] developed a three-step derivative free eighth order method using rational interpolation as follows:

$$z_n = \phi_4(x_n, y_n, w_n),$$
$$x_{n+1} = z_n - \frac{(1 + a_3(z_n - x_n))^2}{a_1 - a_0 a_3 + 2 a_2(z_n - x_n) + a_2 a_3(z_n - x_n)^2}\, f(z_n), \tag{6}$$

where $\phi_4$ is any two-step fourth order convergent derivative free iterative method. They used the same rational interpolant as given in (5), with the constants determined from the interpolating conditions.

In 2012, Soleymani et al. [2] added their contribution by developing a sixteenth order four-point scheme using Pad&eacute; approximation. The scheme requires four function evaluations and one evaluation of the first derivative and achieves the optimal order sixteen with an efficiency index of 1.741. The scheme was of the form

$$w_n = \phi_8(x_n, y_n, z_n),$$
$$x_{n+1} = w_n - (1 + b_5(w_n - x_n))^2 f(w_n) \times \bigl(f'(x_n) + 2 b_3(w_n - x_n) + (3 b_4 + b_3 b_5)(w_n - x_n)^2 + 2 b_4 b_5 (w_n - x_n)^3\bigr)^{-1}, \tag{7}$$

where $\phi_8$ is an eighth order optimal method. They used the rational interpolant of the following form:

$$p(t) = \frac{b_1 + b_2(t - x) + b_3(t - x)^2 + b_4(t - x)^3}{1 + b_5(t - x)}. \tag{8}$$

Recently, in 2013, Sharma et al. [1] developed a three-step eighth order method and its extension to a four-step sixteenth order method using rational interpolation. In the scheme, the first three steps are any arbitrary eighth order convergent method, and the fourth step is the root of the numerator of the rational interpolant, which is given as

$$t_n = \phi_8(x_n, z_n, w_n),$$
$$x_{n+1} = x_n - \frac{P_1 f[z_n, w_n] + Q_1 f[x_n, w_n] + R f[t_n, w_n]}{P_1 L + Q_1 M + R N}\, f(x_n), \tag{9}$$

where

$$L = \frac{f(w_n) f[x_n, z_n] - f(z_n) f[x_n, w_n]}{w_n - z_n}, \qquad M = \frac{f(w_n) f'(x_n) - f(x_n) f[x_n, w_n]}{w_n - x_n},$$
$$N = \frac{f(w_n) f[x_n, t_n] - f(t_n) f[x_n, w_n]}{w_n - t_n},$$
$$P_1 = (x_n - t_n) f(x_n) f(t_n), \qquad Q_1 = (t_n - z_n) f(t_n) f(z_n), \qquad R = (z_n - x_n) f(z_n) f(x_n), \tag{10}$$

and a rational polynomial of the following form:

$$p_4(x) = \frac{(x - x_i)^3 + \lambda}{\mu(x - x_i)^3 + \nu(x - x_i)^2 + \xi(x - x_i) + \eta}. \tag{11}$$

The efficiency index of the above sixteenth order method is 1.741, and the method involves one derivative evaluation.
In this paper, we present a general class of derivative free $n$-point iterative methods which satisfy the hypothesis of Kung and Traub [9]. The proposed schemes require $n$ functional evaluations to attain convergence order $2^{n-1}$ and hence have efficiency index $2^{(n-1)/n}$. The contents of the paper are summarized as follows. In Section 2, we present a general class of $n$-point iterative schemes and its special cases with second, fourth, eighth, and sixteenth order convergence. Section 3 consists of the convergence analysis of the iterative methods discussed in Section 2. In the last sections of the paper, we give concluding remarks and some numerical results to show the effectiveness of the proposed methods.
2. Higher Order Derivative Free Optimal Methods
In this section, we give a general class of $n$-point iterative methods involving $n$ functional evaluations and having order of convergence $2^{n-1}$. Thus, the scheme is optimal in the sense of the conjecture of Kung and Traub [9].

Consider a rational polynomial of degree $n - 1$ as follows:

$$r_{n-1}(t) = \frac{p_1(t)}{q_{n-2}(t)}, \tag{12}$$

where

$$p_1(t) = a_0 + a_1(t - x), \qquad q_{n-2}(t) = 1 + b_1(t - x) + \cdots + b_{n-2}(t - x)^{n-2}, \quad n \ge 2, \quad q_0 \equiv 1. \tag{13}$$

We approximate $f(x)$ by the rational function given by (12) to construct a general class of $n$-point iterative schemes. Then, the root of the nonlinear equation $f(x) = 0$ is approximated by the root of the numerator of the rational interpolant of degree $n - 1$ for the $n$-point method. The unknowns $a_0, a_1, b_1, \ldots, b_{n-2}$ are determined by the following interpolating conditions:

$$r_{n-1}(x) = f(x), \qquad r_{n-1}(w_k) = f(w_k), \quad k = 1, \ldots, n-1, \ n \ge 2. \tag{14}$$

Then, the general $n$-point iterative method is given by

$$w_1 = x + \beta f(x),$$
$$w_2 = x - \frac{f(x)}{f[w_1, x]},$$
$$w_3 = x - \frac{f(x)\,(f(w_2) - f(w_1))}{f(w_2) f[w_1, x] - f(w_1) f[w_2, x]},$$
$$\vdots$$
$$w_n = x - \frac{a_0}{a_1}. \tag{15}$$

Now, we derive its special cases. For $n = 2$ in (14)-(15),

$$r_1(t) = a_0 + a_1(t - x). \tag{16}$$

We find $a_0$ and $a_1$ such that

$$r_1(x) = f(x), \qquad r_1(w_1) = f(w_1). \tag{17}$$

So, the two-point iterative scheme becomes

$$w_1 = x + \beta f(x), \qquad w_2 = x - \frac{f(x)}{f[w_1, x]}. \tag{18}$$

The iterative scheme (18) is the same as that given by Steffensen [11] for $\beta = 1$; thus, it is a particular case of our scheme given by (14)-(15).

For $n = 3$, we have

$$r_2(t) = \frac{a_0 + a_1(t - x)}{1 + b_1(t - x)}. \tag{19}$$

We find $a_0$, $a_1$, and $b_1$ using the following conditions:

$$r_2(x) = f(x), \qquad r_2(w_1) = f(w_1), \qquad r_2(w_2) = f(w_2). \tag{20}$$

By using conditions (20), we have

$$a_0 = f(x), \qquad a_1 = f[w_2, x] + b_1 f(w_2), \qquad b_1 = \frac{f[w_1, x] - f[w_2, x]}{f(w_2) - f(w_1)}. \tag{21}$$

Now, using (21), we have the following three-point iterative scheme:

$$w_1 = x + \beta f(x), \qquad w_2 = x - \frac{f(x)}{f[w_1, x]}, \qquad w_3 = x - \frac{f(x)\,(f(w_2) - f(w_1))}{f(w_2) f[w_1, x] - f(w_1) f[w_2, x]}. \tag{22}$$

For $n = 4$, we have the following rational interpolant:

$$r_3(t) = \frac{a_0 + a_1(t - x)}{1 + b_1(t - x) + b_2(t - x)^2}, \tag{23}$$

such that

$$r_3(x) = f(x), \qquad r_3(w_1) = f(w_1), \qquad r_3(w_2) = f(w_2), \qquad r_3(w_3) = f(w_3). \tag{24}$$

The conditions (24) are used to determine the unknowns $a_0$, $a_1$, $b_1$, and $b_2$. Thus, we attain a four-point iterative method as follows:

$$w_1 = x + \beta f(x),$$
$$w_2 = x - \frac{f(x)}{f[w_1, x]},$$
$$w_3 = x - \frac{f(x)\,(f(w_2) - f(w_1))}{f(w_2) f[w_1, x] - f(w_1) f[w_2, x]},$$
$$w_4 = x - \frac{f(x)\,(h_1 + h_2 + h_3)}{h_1 f[w_1, x] + h_2 f[w_2, x] + h_3 f[w_3, x]}, \tag{25}$$

where

$$h_1 = f(w_2) f(w_3)(w_3 - w_2), \qquad h_2 = f(w_1) f(w_3)(w_1 - w_3), \qquad h_3 = f(w_1) f(w_2)(w_2 - w_1). \tag{26}$$
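A minimal Python sketch of one cycle of the schemes (18), (22), and (25) exactly as written above, assuming a scalar function in double precision and the choice $\beta = 1$; all helper names are ours:

```python
def dd(f, a, b):
    # First order divided difference f[a, b] = (f(a) - f(b)) / (a - b).
    return (f(a) - f(b)) / (a - b)

def npoint_cycle(f, x, beta=1.0):
    """One cycle of (18), (22), (25): returns the second, fourth, and
    eighth order approximations w2, w3, w4 produced from x."""
    fx = f(x)
    w1 = x + beta * fx
    fw1, d1 = f(w1), dd(f, w1, x)                        # f(w1), f[w1, x]
    w2 = x - fx / d1                                     # two-point step (18)
    fw2, d2 = f(w2), dd(f, w2, x)                        # f(w2), f[w2, x]
    w3 = x - fx * (fw2 - fw1) / (fw2 * d1 - fw1 * d2)    # three-point step (22)
    fw3, d3 = f(w3), dd(f, w3, x)                        # f(w3), f[w3, x]
    h1 = fw2 * fw3 * (w3 - w2)                           # weights from (26)
    h2 = fw1 * fw3 * (w1 - w3)
    h3 = fw1 * fw2 * (w2 - w1)
    w4 = x - fx * (h1 + h2 + h3) / (h1 * d1 + h2 * d2 + h3 * d3)  # four-point step (25)
    return w2, w3, w4

# Example with f8(x) = x^3 + 4x^2 - 15 from Table 1, starting at x0 = 1.
f8 = lambda x: x**3 + 4 * x**2 - 15
x = 1.0
for _ in range(3):
    if f8(x) == 0.0:
        break
    x = npoint_cycle(f8, x)[-1]     # iterate with the eighth order step
print(x)                            # approaches 1.63198...
```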
For $n = 5$, we have the following five-point iterative scheme:

$$w_1 = x + \beta f(x),$$
$$w_2 = x - \frac{f(x)}{f[w_1, x]},$$
$$w_3 = x - \frac{f(x)\,(f(w_2) - f(w_1))}{f(w_2) f[w_1, x] - f(w_1) f[w_2, x]},$$
$$w_4 = x - \frac{f(x)\,(h_1 + h_2 + h_3)}{h_1 f[w_1, x] + h_2 f[w_2, x] + h_3 f[w_3, x]},$$
$$w_5 = x - \frac{a_0}{a_1}, \tag{27}$$

where

$$r_4(t) = \frac{a_0 + a_1(t - x)}{1 + b_1(t - x) + b_2(t - x)^2 + b_3(t - x)^3}, \tag{28}$$

such that

$$r_4(x) = f(x), \quad r_4(w_1) = f(w_1), \quad r_4(w_2) = f(w_2), \quad r_4(w_3) = f(w_3), \quad r_4(w_4) = f(w_4). \tag{29}$$

The interpolating conditions (29) yield

$$a_0 = f(x),$$
$$a_1 = \bigl(m_1 f[w_1, x] + m_2 f[w_2, x] + m_3 f[w_3, x] + m_4 f[w_4, x]\bigr)\bigl(m_1 + m_2 + m_3 + m_4\bigr)^{-1}, \tag{30}$$

where

$$m_1 = f(w_2) f(w_3) f(w_4)\{-(w_3 - x)(w_4 - x)(w_4 - w_3) + (w_2 - x)(w_4 - x)(w_4 - w_2)\} - (w_2 - x)(w_3 - x)\, h_1 f(w_4),$$
$$m_2 = f(w_1) f(w_3) f(w_4)\{(w_3 - x)(w_4 - x)(w_4 - w_3) - (w_1 - x)(w_4 - x)(w_4 - w_1)\} - (w_1 - x)(w_3 - x)\, h_2 f(w_4),$$
$$m_3 = f(w_1) f(w_2) f(w_4)\{-(w_2 - x)(w_4 - x)(w_4 - w_2) + (w_1 - x)(w_4 - x)(w_4 - w_1)\} - (w_1 - x)(w_2 - x)\, h_3 f(w_4),$$
$$m_4 = f(w_1) f(w_2) f(w_3)\{(w_2 - x)(w_3 - x)(w_3 - w_2) - (w_1 - x)(w_3 - x)(w_3 - w_1)\} + (w_1 - x)(w_2 - x)\, h_3 f(w_3), \tag{31}$$

and $h_1$, $h_2$, $h_3$ are given as in (26). Hence, we obtain the following iterative method:

$$w_1 = x + \beta f(x),$$
$$w_2 = x - \frac{f(x)}{f[w_1, x]},$$
$$w_3 = x - \frac{f(x)\,(f(w_2) - f(w_1))}{f(w_2) f[w_1, x] - f(w_1) f[w_2, x]},$$
$$w_4 = x - \frac{f(x)\,(h_1 + h_2 + h_3)}{h_1 f[w_1, x] + h_2 f[w_2, x] + h_3 f[w_3, x]},$$
$$w_5 = x - f(x)\bigl(m_1 + m_2 + m_3 + m_4\bigr) \times \bigl(m_1 f[w_1, x] + m_2 f[w_2, x] + m_3 f[w_3, x] + m_4 f[w_4, x]\bigr)^{-1}. \tag{32}$$
We now give the convergence analysis of the proposed iterative methods (18), (22), (25), and (32).

3. Convergence Analysis

Theorem 1. Let $\omega \in I$ be a simple root of a sufficiently differentiable function $f : I \subseteq \mathbb{R} \to \mathbb{R}$ in a neighborhood of the root, $I$ being an interval. If $x$ is sufficiently close to $\omega$, then, for every $\beta \in \mathbb{R} \setminus \{-1\}$, the iterative methods defined by (18) and (22) are second and fourth order convergent, respectively, with the error equations given by

$$e_{n+1} = c_2(1 + \beta) e_n^2 + O(e_n^3),$$
$$e_{n+1} = \bigl(c_2^3 - c_2 c_3 + 2 c_2^3 \beta + c_2^3 \beta^2 - 2 c_2 c_3 \beta - c_2 c_3 \beta^2\bigr) e_n^4 + O(e_n^5), \tag{33}$$
respectively, where

$$c_k = \frac{1}{k!}\,\frac{f^{(k)}(\omega)}{f'(\omega)}, \quad k = 2, 3, \ldots. \tag{34}$$

Proof. Let $x = \omega + e_n$, where $\omega$ is the root of $f$ and $e_n$ is the error at the $n$th step. Using the Taylor expansion of $f(x)$ about the root $\omega$, we have

$$f(x) = f'(\omega)\bigl[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + \cdots + c_8 e_n^8 + O(e_n^9)\bigr], \tag{35}$$

where $c_k$ is defined by (34). The Taylor expansions for $w_1$ and $f(w_1)$ are

$$w_1 = e_n + \omega + \beta\bigl(e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5\bigr) + O(e_n^6), \tag{36}$$

$$f(w_1) = f'(\omega)\bigl[(1 + \beta) e_n + (3\beta c_2 + c_2 + c_2\beta^2) e_n^2 + (2c_2^2\beta + 2c_2^2\beta^2 + c_3 + 4\beta c_3 + 3c_3\beta^2 + c_3\beta^3) e_n^3 + (5c_2\beta c_3 + 8c_2\beta^2 c_3 + 3c_3\beta^3 + c_4 + 5\beta c_4 + 6\beta^2 c_4 + 4c_4\beta^3 + c_4\beta^4 + c_2^3\beta^2) e_n^4 + (c_5 + 6\beta c_5 + 10c_5\beta^2 + 10c_5\beta^3 + 5c_5\beta^4 + c_5\beta^5 + 6c_2\beta c_4 + 14c_2 c_4\beta^2 + 12c_4 c_2\beta^3 + 4c_4 c_2\beta^4 + 5c_2^2 c_3\beta^2 + 3c_3^2\beta + 6c_3^2\beta^2 + 3c_3\beta^3 c_2^2 + 3c_3^2\beta^3) e_n^5 + O(e_n^6)\bigr]. \tag{37}$$

Using (35), (36), and (37), we have

$$w_2 = \omega + c_2(1 + \beta) e_n^2 + (-2c_2^2 - 2c_2^2\beta + 2c_3 + 3\beta c_3 + c_3\beta^2 - c_2^2\beta^2) e_n^3 + (4c_2^3 + 5c_2^3\beta - 7c_2 c_3 - 10c_2\beta c_3 - 7c_2\beta^2 c_3 + 3c_2^3\beta^2 - 2c_3\beta^3 c_2 + 3c_4 + 6\beta c_4 + 4c_4\beta^2 + c_4\beta^3 + c_2^3\beta^3) e_n^4 + O(e_n^5), \tag{38}$$

which shows that the method (18) is quadratically convergent for all $\beta \in \mathbb{R} \setminus \{-1\}$. Again, using the Taylor expansion of $f(w_2)$, we have

$$f(w_2) = f'(\omega)\bigl[c_2(1 + \beta) e_n^2 + (-2c_2^2 - 2c_2^2\beta + 2c_3 + 3\beta c_3 + c_3\beta^2 - c_2^2\beta^2) e_n^3 + (5c_2^3 + 7c_2^3\beta - 7c_2 c_3 - 10c_2\beta c_3 - 7c_2\beta^2 c_3 + 4c_2^3\beta^2 - 2c_3\beta^3 c_2 + 3c_4 + 6\beta c_4 + 4c_4\beta^2 + c_4\beta^3 + c_2^3\beta^3) e_n^4 + O(e_n^5)\bigr]. \tag{39}$$

Now, using (35)-(39), we see that the order of convergence of the method (22) is four and the error equation is given by

$$e_{n+1} = \bigl(c_2^3 + 2c_2^3\beta - c_2 c_3 - 2c_2 c_3\beta - c_2 c_3\beta^2 + c_2^3\beta^2\bigr) e_n^4 + O(e_n^5). \tag{40}$$

Theorem 2. Let $\omega \in I$ be a simple root of a sufficiently differentiable function $f : I \subseteq \mathbb{R} \to \mathbb{R}$ in a neighborhood of the root, $I$ being an interval. If $x$ is sufficiently close to $\omega$, then, for all $\beta \in \mathbb{R} \setminus \{-1\}$, the iterative methods defined by (25) and (32) are eighth and sixteenth order convergent, respectively, with the error equations given by

$$e_{n+1} = c_2^2\bigl(c_2^5\beta^4 - 3c_3 c_2^3\beta^4 + 4c_2^5\beta^3 + c_4 c_2^2\beta^4 + 2c_3^2 c_2\beta^4 - 12c_3 c_2^3\beta^3 + 6c_2^5\beta^2 - c_4 c_3\beta^4 + 4c_4 c_2^2\beta^3 + 8c_3^2 c_2\beta^3 - 18c_3 c_2^3\beta^2 + 4c_2^5\beta - 4c_4 c_3\beta^3 + 6c_4 c_2^2\beta^2 + 12c_3^2 c_2\beta^2 - 12c_3 c_2^3\beta + c_2^5 - 6c_4 c_3\beta^2 + 4c_4 c_2^2\beta + 8\beta c_2 c_3^2 - 3c_2^3 c_3 - 4\beta c_3 c_4 + c_2^2 c_4 + 2c_2 c_3^2 - c_3 c_4\bigr) e_n^8 + O(e_n^9),$$

$$e_{n+1} = \bigl(\bigl(4c_2^5 c_3 c_5 - c_3^2 c_4 c_5 + 2c_2 c_3^2 c_4^2 + 2c_2 c_3^3 c_5 - 4c_2^3 c_3 c_4^2 - 5c_2^3 c_3^2 c_5 - c_2^4 c_4 c_5 + 18c_2^4 c_3^2 c_4 - 13c_2^6 c_3 c_4 - 9c_2^2 c_3^3 c_4 + 2c_2^5 c_4^2 + c_3^4 c_4 - 2c_2 c_3^5 - c_2^7 c_5 - 7c_2^9 c_3 + 3c_2^8 c_4 + 11c_2^3 c_3^4 - 21c_2^5 c_3^3 + 18c_2^7 c_3^2 + 2c_2^2 c_3 c_4 c_5 + c_2^{11}\bigr) c_2^4 \beta^8 + \cdots + \bigl(4c_2^5 c_3 c_5 - c_3^2 c_4 c_5 + 2c_2 c_3^2 c_4^2 + 2c_2 c_3^3 c_5 - 4c_2^3 c_3 c_4^2 - 5c_2^3 c_3^2 c_5 - c_2^4 c_4 c_5 + 18c_2^4 c_3^2 c_4 - 13c_2^6 c_3 c_4 - 9c_2^2 c_3^3 c_4 + 2c_2^5 c_4^2 + c_3^4 c_4 - 2c_2 c_3^5 - c_2^7 c_5 - 7c_2^9 c_3 + 3c_2^8 c_4 + 11c_2^3 c_3^4 - 21c_2^5 c_3^3 + 18c_2^7 c_3^2 + 2c_2^2 c_3 c_4 c_5 + c_2^{11}\bigr) c_2^4\bigr) e_n^{16} + O(e_n^{17}), \tag{41}$$

where

$$c_k = \frac{1}{k!}\,\frac{f^{(k)}(\omega)}{f'(\omega)}, \quad k = 2, 3, \ldots. \tag{42}$$

Proof. Let $x = \omega + e_n$, where $\omega$ is the root of $f$ and $e_n$ is the error in the approximation at the $n$th iteration. We use (35), (37), (39), and (40) up to $O(e_n^{30})$ in this result and set

$$w_3 = \omega + \bigl(c_2^3 + 2c_2^3\beta - c_2 c_3 - 2c_2 c_3\beta - c_2 c_3\beta^2 + c_2^3\beta^2\bigr) e_n^4 + \bigl(4c_3\beta^3 c_2^2 - 2c_2^4\beta^3 - c_3^2\beta^3 - c_4\beta^3 c_2 - 4c_2\beta^2 c_4 + 14c_2^2\beta^2 c_3 - 6c_2^4\beta^2 - 4c_3^2\beta^2 + 18c_2^2\beta c_3 - 5c_2\beta c_4 - 8c_2^4\beta - 5c_3^2\beta - 4c_2^4 - 2c_2 c_4 + 8c_2^2 c_3 - 2c_3^2\bigr) e_n^5 + \cdots + O(e_n^{30}). \tag{43}$$

Again, using the Taylor expansion of $f(w_3)$, we have

$$f(w_3) = f'(\omega)\bigl[\bigl(c_2^3 + 2c_2^3\beta - c_2 c_3 - 2c_2 c_3\beta - c_2 c_3\beta^2 + c_2^3\beta^2\bigr) e_n^4 + \bigl(4c_3\beta^3 c_2^2 - 2c_2^4\beta^3 - c_3^2\beta^3 - c_4\beta^3 c_2 - 4c_2\beta^2 c_4 + 14c_2^2\beta^2 c_3 - 6c_2^4\beta^2 - 4c_3^2\beta^2 + 18c_2^2\beta c_3 - 5c_2\beta c_4 - 8c_2^4\beta - 5c_3^2\beta - 4c_2^4 - 2c_2 c_4 + 8c_2^2 c_3 - 2c_3^2\bigr) e_n^5 + \cdots + O(e_n^{30})\bigr]. \tag{44}$$

Now, using (35)-(39), (43), and (44), we see that the method (25) has eighth order convergence with the error equation

$$e_{n+1} = c_2^2\bigl(c_2^5\beta^4 - 3c_3 c_2^3\beta^4 + 4c_2^5\beta^3 + c_4 c_2^2\beta^4 + 2c_3^2 c_2\beta^4 - 12c_3 c_2^3\beta^3 + 6c_2^5\beta^2 - c_4 c_3\beta^4 + 4c_4 c_2^2\beta^3 + 8c_3^2 c_2\beta^3 - 18c_3 c_2^3\beta^2 + 4c_2^5\beta - 4c_4 c_3\beta^3 + 6c_4 c_2^2\beta^2 + 12c_3^2 c_2\beta^2 - 12c_3 c_2^3\beta + c_2^5 - 6c_4 c_3\beta^2 + 4c_4 c_2^2\beta + 8\beta c_2 c_3^2 - 3c_2^3 c_3 - 4\beta c_3 c_4 + c_2^2 c_4 + 2c_2 c_3^2 - c_3 c_4\bigr) e_n^8 + O(e_n^9). \tag{45}$$

To find the error equation of (32), we use (45) up to $O(e_n^{30})$ and set

$$w_4 = \omega + c_2^2\bigl(c_2^5\beta^4 - 3c_3 c_2^3\beta^4 + 4c_2^5\beta^3 + c_4 c_2^2\beta^4 + 2c_3^2 c_2\beta^4 - 12c_3 c_2^3\beta^3 + 6c_2^5\beta^2 - c_4 c_3\beta^4 + 4c_4 c_2^2\beta^3 + 8c_3^2 c_2\beta^3 - 18c_3 c_2^3\beta^2 + 4c_2^5\beta - 4c_4 c_3\beta^3 + 6c_4 c_2^2\beta^2 + 12c_3^2 c_2\beta^2 - 12c_3 c_2^3\beta + c_2^5 - 6c_4 c_3\beta^2 + 4c_4 c_2^2\beta + 8\beta c_2 c_3^2 - 3c_2^3 c_3 - 4\beta c_3 c_4 + c_2^2 c_4 + 2c_2 c_3^2 - c_3 c_4\bigr) e_n^8 + \cdots + O(e_n^{30}). \tag{46}$$

The Taylor expansion of $f(w_4)$ is

$$f(w_4) = f'(\omega)\bigl[\bigl(c_2^7\beta^4 + c_2^4 c_4 + c_2^7 + c_4\beta^4 c_2^4 - c_2^2 c_4 c_3 - c_4 c_3\beta^4 c_2^2 + 8c_2^3 c_3^2\beta^3 + 2\beta^4 c_3^2 c_2^3 + 8c_2^3\beta c_3^2 + 12c_2^3\beta^2 c_3^2 - 18c_2^5 c_3\beta^2 - 12c_2^5 c_3\beta - 3\beta^4 c_2^5 c_3 + 6c_2^4 c_4\beta^2 + 4c_2^4 c_4\beta^3 + 4c_2^4 c_4\beta - 12c_2^5 c_3\beta^3 + 6c_2^7\beta^2 + 4c_2^7\beta^3 + 4c_2^7\beta - 3c_2^5 c_3 + 2c_2^3 c_3^2 - 6c_3\beta^2 c_4 c_2^2 - 4c_2^2 c_4\beta c_3 - 4c_4 c_2^2 c_3\beta^3\bigr) e_n^8 + \cdots + O(e_n^{30})\bigr]. \tag{47}$$
Table 1: Test functions and exact roots.

Numerical example                                                                     Exact root
$f_1(x) = \sqrt{x^4 + 8}\,\sin(\pi/(x^2 + 2)) + x^3/(x^4 + 1) - \sqrt{6} + 8/17$      $\omega_1 = -2$
$f_2(x) = \sin(x) - x/100$                                                            $\omega_2 = 0$
$f_3(x) = (1/3)x^4 - x^2 - (1/3)x + 1$                                                $\omega_3 = 1$
$f_4(x) = e^{\sin(x)} - 1 - x/5$                                                      $\omega_4 = 0$
$f_5(x) = x e^{x^2} - \sin^2(x) + 3\cos(x) + 5$                                       $\omega_5 \approx -1.207647827130919$
$f_6(x) = e^{-x} + \cos(x)$                                                           $\omega_6 \approx 1.746139530408013$
$f_7(x) = 10 x e^{-x^2} - 1$                                                          $\omega_7 \approx 1.679630610428450$
$f_8(x) = x^3 + 4x^2 - 15$                                                            $\omega_8 \approx 1.631980805566063$
Hence, using (35)-(39), (43), (44), (46), and (47), we see that the iterative method (32) is sixteenth order convergent with the error equation given by

$$e_{n+1} = \bigl(\bigl(4c_2^5 c_3 c_5 - c_3^2 c_4 c_5 + 2c_2 c_3^2 c_4^2 + 2c_2 c_3^3 c_5 - 4c_2^3 c_3 c_4^2 - 5c_2^3 c_3^2 c_5 - c_2^4 c_4 c_5 + 18c_2^4 c_3^2 c_4 - 13c_2^6 c_3 c_4 - 9c_2^2 c_3^3 c_4 + 2c_2^5 c_4^2 + c_3^4 c_4 - 2c_2 c_3^5 - c_2^7 c_5 - 7c_2^9 c_3 + 3c_2^8 c_4 + 11c_2^3 c_3^4 - 21c_2^5 c_3^3 + 18c_2^7 c_3^2 + 2c_2^2 c_3 c_4 c_5 + c_2^{11}\bigr) c_2^4 \beta^8 + \cdots + \bigl(4c_2^5 c_3 c_5 - c_3^2 c_4 c_5 + 2c_2 c_3^2 c_4^2 + 2c_2 c_3^3 c_5 - 4c_2^3 c_3 c_4^2 - 5c_2^3 c_3^2 c_5 - c_2^4 c_4 c_5 + 18c_2^4 c_3^2 c_4 - 13c_2^6 c_3 c_4 - 9c_2^2 c_3^3 c_4 + 2c_2^5 c_4^2 + c_3^4 c_4 - 2c_2 c_3^5 - c_2^7 c_5 - 7c_2^9 c_3 + 3c_2^8 c_4 + 11c_2^3 c_3^4 - 21c_2^5 c_3^3 + 18c_2^7 c_3^2 + 2c_2^2 c_3 c_4 c_5 + c_2^{11}\bigr) c_2^4\bigr) e_n^{16} + O(e_n^{17}). \tag{48}$$

Remark 3. From Theorems 1 and 2, it can be seen that the iterative schemes (18), (22), (25), and (32) are second, fourth, eighth, and sixteenth order convergent, requiring two, three, four, and five functional evaluations, respectively. Hence, the proposed iterative schemes (18), (22), (25), and (32) are optimal in the sense of the hypothesis of Kung and Traub [9], with efficiency indices 1.414, 1.587, 1.681, and 1.741, respectively. Also, it is clear that (14)-(15) is a general $n$-point scheme with optimal order of convergence $2^{n-1}$; the efficiency index of this scheme is $2^{(n-1)/n}$.

4. Numerical Results

In this section, we present some test functions to demonstrate the performance of the newly developed sixteenth order scheme (32) (FNMS-16). For the sake of comparison, we consider existing higher order convergent methods based on rational interpolation: the fourteenth order method of Soleymani and Sharifi (4) (SS-14), the sixteenth order method of Soleymani et al. (7) (SSS-16), and the sixteenth order method of Sharma et al. (9) (SGG-16). All computations for the above-mentioned methods are performed using Maple 16 with 4000 decimal digits precision. The test functions given in Table 1 are taken from [1, 2, 10]. We used almost all types of nonlinear functions, polynomials, and transcendental functions to test the new methods. Table 2 shows that the newly developed sixteenth order method is comparable with the existing methods of this domain in terms of significant digits and number of function evaluations per iteration. In many examples, the newly developed method performs better than the existing methods. It can also be seen from the table that, whether the initial guess is near to the exact root or far from it, the performance of the new method remains good.

5. Attraction Basins

Let $\omega_i$, $i = 1, 2, 3, \ldots, n$, be the roots of a complex polynomial $p_n(x)$, $n \ge 1$, $x \in \mathbb{C}$. We use two different techniques to generate basins of attraction in MATLAB. In the first technique, we take a square box $[-2, 2] \times [-2, 2] \subset \mathbb{C}$. Every initial guess $x_0$ is assigned a specific color according to the exact root to which the method converges, and dark blue is assigned when the method diverges. We use $|f(x_k)| < 10^{-5}$ as the stopping criterion for convergence, and the maximum number of iterations is 30. "Jet" is chosen as the colormap here. In the second technique, the same scale is taken, but each initial guess is assigned a color depending on the number of iterations the method needs to converge to any of the roots of the given function. We use 25 as the maximum number of iterations; the stopping criterion is the same as above, and the colormap is "hot." The method is considered divergent for an initial guess if it does not converge within the maximum number of iterations, and this case is displayed in black.

We take three test examples to obtain basins of attraction, namely, $p_3(x) = x^3 - 1$, $p_4(x) = x^4 - 10x^2 + 9$, and $p_5(x) = x^5 - 1$.
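The MATLAB scripts behind Figures 1-9 are not included in the paper; the following Python sketch illustrates the first technique for $p_3(x) = x^3 - 1$, with a Newton step standing in for the iteration function (the figures themselves use the derivative free methods (7), (9), and (32)). The box, tolerance, and iteration cap follow the description above; the grid size and library choices are our assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

def basin_plot(f, step, roots, n=200, box=2.0, tol=1e-5, max_iter=30):
    """First technique: color each starting point by the root it reaches;
    points that do not converge within max_iter keep the value -1 (dark)."""
    xs = np.linspace(-box, box, n)
    ys = np.linspace(-box, box, n)
    image = np.full((n, n), -1)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            z = complex(x, y)
            for _ in range(max_iter):
                try:
                    z = step(z)
                except ZeroDivisionError:
                    break
                if abs(f(z)) < tol:                      # stopping criterion |f| < 1e-5
                    image[i, j] = min(range(len(roots)), key=lambda k: abs(z - roots[k]))
                    break
    plt.imshow(image, extent=(-box, box, -box, box), origin="lower", cmap="jet")
    plt.show()

# p3(z) = z^3 - 1; a Newton step is used here only for simplicity.
p3 = lambda z: z**3 - 1
p3_roots = [1.0, -0.5 + 0.86605j, -0.5 - 0.86605j]
newton_step = lambda z: z - p3(z) / (3 * z**2)
basin_plot(p3, newton_step, p3_roots)
```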
Table 2: Comparison of various iterative methods.

f(x), x0                        SS-14         SGG-16        SSS-16        FNMS-16
f1, x0 = -1.2
  |f1(x1)|                      .3e-13        .1e-15        .1e-13        .1e-15
  |f1(x2)|                      .1e-181       .6e-248       .1e-211       .5e-245
  |f1(x3)|                      .6e-2538      .6e-3751      .7e-3379      .6e-3916
f1, x0 = -3
  |f1(x1)|                      .1e-5         .3e-6         .3e-7         .2e-8
  |f1(x2)|                      .4e-75        .1e-96        .2e-113       .1e-132
  |f1(x3)|                      .1e-1050      .3e-1545      .2e-1812      .5e-2123
f2, x0 = 1.5
  |f2(x1)|                      .95           .95           .64           .3e-4
  |f2(x2)|                      .1e-10        .1e-4         .3e-6         .5e-100
  |f2(x3)|                      .2e-88        .5e-90        .1e-111       .4e-1632
f2, x0 = 3
  |f2(x1)|                      .2e-24        .1e-25        .3e-27        .2e-44
  |f2(x2)|                      .8e-358       .8e-425       .2e-452       .9e-743
  |f2(x3)|                      0             0             0             0
f3, x0 = 0.5
  |f3(x1)|                      .5e-8         .6e-9         .3e-9         .9e-10
  |f3(x2)|                      .8e-114       .3e-144       .4e-149       .6e-238
  |f3(x3)|                      .2e-1595      .1e-2307      .1e-2386      0
f3, x0 = 1.5
  |f3(x1)|                      .2e-11        .2e-13        .2e-11        .1e-11
  |f3(x2)|                      .2e-158       .6e-214       .1e-179       .6e-183
  |f3(x3)|                      .3e-2218      .8e-3423      .2e-2869      .6e-2935
f4, x0 = 5
  |f4(x1)|                      .8e-1         .25           .8e-1         .1e-2
  |f4(x2)|                      .4e-15        .1e-6         .3e-16        .3e-56
  |f4(x3)|                      .1e-213       .7e-109       .9e-262       .1e-914
f4, x0 = 4
  |f4(x1)|                      .92           .9e-1         2.51          .1e-17
  |f4(x2)|                      .4e-5         .1e-25        .6e-3         .3e-295
  |f4(x3)|                      .1e-73        .1e-423       .6e-49        0
f5, x0 = -1
  |f5(x1)|                      .1e-3         .1e-7         .8e-4         .7e-5
  |f5(x2)|                      .2e-66        .5e-144       .9e-80        .4e-92
  |f5(x3)|                      .4e-946       .8e-2327      .1e-1294      .6e-1488
f5, x0 = -0.6
  |f5(x1)|                      .2e44770      .4e-1         .2e44770      .1
  |f5(x2)|                      .4e44769      .4e-40        .7e44768      .3e-23
  |f5(x3)|                      .5e44767      .4e-664       .2e44767      .8e-386
f6, x0 = 0.5
  |f6(x1)|                      .6e-7         .9e-9         .3e-8         .1e-14
  |f6(x2)|                      .8e-116       .1e-152       .7e-145       .1e-254
  |f6(x3)|                      .1e-1748      .7e-2456      .9e-2332      .1e-3999
f6, x0 = 3
  |f6(x1)|                      .7            .53           .79           .7e-8
  |f6(x2)|                      .9e-5         .4e-16        .2e-5         .2e-145
  |f6(x3)|                      .7e-102       .1e-269       .5e-125       .3e-2345
f7, x0 = 0
  |f7(x1)|                      .1e-6         .2e-9         .1e-5         .8e-12
  |f7(x2)|                      .5e-99        .3e-161       .6e-98        .1e-199
  |f7(x3)|                      .3e-1394      .6e-2562      .5e-1575      .3e-3205
f7, x0 = 2.2
  |f7(x1)|                      1             .9e-2         1             .8e-8
  |f7(x2)|                      1             .6e-40        D             .9e-136
  |f7(x3)|                      D             .5e-651       D             .4e-2183
f8, x0 = 0.5
  |f8(x1)|                      .1e7          .97           .1e7          1.0
  |f8(x2)|                      16856.81      .2e-25        16856.81      .8e-15
  |f8(x3)|                      183.46        .3e-435       183.46        .1e-256
f8, x0 = 1
  |f8(x1)|                      .2e-2         .5e-5         .2e-2         .6e-3
  |f8(x2)|                      .2e-65        .1e-109       .2e-65        .7e-67
  |f8(x3)|                      .6e-1073      .1e-1782      .6e-1073      .1e-1089

D stands for divergence.
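For readers who wish to experiment, the test functions of Table 1 can be written directly in code; a small double-precision Python sketch (the results in Table 2 were instead computed in Maple 16 with 4000-digit precision):

```python
import math

# Test functions of Table 1, in plain double precision.
f1 = lambda x: (math.sqrt(x**4 + 8) * math.sin(math.pi / (x**2 + 2))
                + x**3 / (x**4 + 1) - math.sqrt(6) + 8 / 17)
f2 = lambda x: math.sin(x) - x / 100
f3 = lambda x: x**4 / 3 - x**2 - x / 3 + 1
f4 = lambda x: math.exp(math.sin(x)) - 1 - x / 5
f5 = lambda x: x * math.exp(x**2) - math.sin(x)**2 + 3 * math.cos(x) + 5
f6 = lambda x: math.exp(-x) + math.cos(x)
f7 = lambda x: 10 * x * math.exp(-x**2) - 1
f8 = lambda x: x**3 + 4 * x**2 - 15

# Quick check against the exact roots listed in Table 1.
print(abs(f1(-2.0)), abs(f8(1.631980805566063)))   # both values should be tiny
```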
Figure 1: Basins of attraction of method (7) for $p_3(x)$.
Figure 2: Basins of attraction of method (9) for $p_3(x)$.
Figure 3: Basins of attraction of method (32) for $p_3(x)$.
Figure 4: Basins of attraction of method (7) for $p_4(x)$.
Figure 5: Basins of attraction of method (9) for $p_4(x)$.
Figure 6: Basins of attraction of method (32) for $p_4(x)$.
Figure 7: Basins of attraction of method (7) for $p_5(x)$.
Figure 8: Basins of attraction of method (9) for $p_5(x)$.
Figure 9: Basins of attraction of method (32) for $p_5(x)$.
The roots of $p_3(x)$ are $1.0$, $-0.5000 + 0.86605I$, and $-0.5000 - 0.86605I$; the roots of $p_4(x)$ are $-3$, $3$, $-1$, and $1$; and the roots of $p_5(x)$ are $1.0$, $0.3090 + 0.95105I$, $-0.8090 + 0.58778I$, $-0.8090 - 0.58778I$, and $0.30902 - 0.95105I$.

We compare the results of our newly constructed method (32) with the existing methods (7) and (9) given in Section 1. Figures 1, 2, 3, 4, 5, 6, 7, 8, and 9 show the dynamics of the methods (7), (9), and (32) for the polynomials $p_3(x)$, $p_4(x)$, and $p_5(x)$. Two types of attraction basins are given in the figures. A darker region indicates that the method requires fewer iterations there. Color maps of both types are given with each figure, showing the root to which an initial guess converges and the number of iterations in which the convergence occurs.
Conflict of Interests
The authors declare that there is no conflict of interests
regarding the publication of this paper.
References
[1] J. R. Sharma, R. K. Guha, and P. Gupta, β€œImproved King’s
methods with optimal order of convergence based on rational
approximations,” Applied Mathematics Letters, vol. 26, no. 4, pp.
473–480, 2013.
[2] F. Soleymani, S. Shateyi, and H. Salmani, β€œComputing simple
roots by an optimal sixteenth-order class,” Journal of Applied
Mathematics, vol. 2012, Article ID 958020, 13 pages, 2012.
[3] F. Soleymani, S. Karimi Vanani, and M. J. Paghaleh, β€œA class
of three-step derivative-free root solvers with optimal convergence order,” Journal of Applied Mathematics, vol. 2012, Article
ID 568740, 15 pages, 2012.
[4] L. Tornheim, β€œConvergence of multipoint iterative methods,”
Journal of the Association for Computing Machinery, vol. 11, pp.
210–220, 1964.
[5] P. Jarratt and D. Nudds, β€œThe use of rational functions in
the iterative solution of equations on a digital computer,” The
Computer Journal, vol. 8, pp. 62–65, 1965.
[6] P. Jarratt, β€œA rational iteration function for solving equations,”
The Computer Journal, vol. 9, pp. 304–307, 1966.
[7] A. Cuyt and L. Wuytack, Nonlinear Methods in Numerical
Analysis, Elsevier Science Publishers, 1987.
[8] D. A. Field, β€œConvergence rates for Padé-based iterative solutions of equations,” Journal of Computational and Applied
Mathematics, vol. 32, no. 1-2, pp. 69–75, 1990.
[9] H. T. Kung and J. F. Traub, β€œOptimal order of one-point and
multipoint iteration,” Journal of the Association for Computing
Machinery, vol. 21, no. 4, pp. 643–651, 1974.
[10] F. Soleymani and M. Sharifi, β€œOn a general efficient class
of four-step root-finding methods,” International Journal of
Mathematics and Computers in Simulation, vol. 5, pp. 181–189,
2011.
[11] J. F. Steffensen, β€œRemarks on iterations,” Scandinavian Actuarial
Journal, vol. 16, no. 1, pp. 64–72, 1933.