Available online at www.tjnsa.com
J. Nonlinear Sci. Appl. 5 (2012), 475–494
Research Article
Approximation algorithm for fixed points of
nonlinear operators and solutions of mixed
equilibrium problems and variational inclusion
problems with applications
Uamporn Witthayarat(a), Yeol Je Cho(b,∗), Poom Kumam(a,∗)
(a) Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi, KMUTT, Bangkok 10140, Thailand.
(b) Department of Mathematics Education and the RINS, Gyeongsang National University, Chinju 660-701, Korea.
Dedicated to George A Anastassiou on the occasion of his sixtieth birthday
Communicated by Professor R. Saadati
Abstract
The purpose of this paper is to introduce an iterative algorithm for finding a common element of the set
of fixed points of nonexpansive mappings, the set of solutions of a mixed equilibrium problem and the set
of solutions of variational inclusions in a real Hilbert space. We prove that the sequence {xn} generated by
the proposed iterative algorithm converges strongly to a common element of these sets. Furthermore, we
give an application to optimization and some numerical examples which support our main theorem in the
last part. Our results extend and improve the corresponding results of Yao et al. [19] and the references
therein.
Keywords: Common fixed point; Equilibrium problem; Iterative algorithm; Nonexpansive mapping;
Variational inequality.
∗Corresponding author
Email addresses: [email protected] (Uamporn Witthayarat), [email protected] (Yeol Je Cho),
[email protected] (Poom Kumam)
Received 2011-9-11
1. Introduction
Throughout this paper, we assume that H is a real Hilbert space with inner product ⟨·, ·⟩ and norm
‖·‖ and that C is a nonempty closed convex subset of H. Let F : C → H be a nonlinear mapping, ϕ : C → R
be a function and Θ : C × C → R be a bifunction.
First, we consider the following mixed equilibrium problem: find x∗ ∈ C such that
Θ(x∗, y) + ϕ(y) − ϕ(x∗) + ⟨F x∗, y − x∗⟩ ≥ 0, ∀y ∈ C. (1.1)
If F = 0, then the mixed equilibrium problem (1.1) becomes the following mixed equilibrium problem,
which was studied by Ceng and Yao [2]: find x∗ ∈ C such that
Θ(x∗, y) + ϕ(y) − ϕ(x∗) ≥ 0, ∀y ∈ C. (1.2)
If ϕ = 0, then the mixed equilibrium problem (1.1) becomes the following equilibrium problem, which
was considered by Takahashi and Takahashi [16]: find x∗ ∈ C such that
Θ(x∗, y) + ⟨F x∗, y − x∗⟩ ≥ 0, ∀y ∈ C. (1.3)
If ϕ = 0 and F = 0, then the mixed equilibrium problem (1.1) becomes the following equilibrium
problem: find x∗ ∈ C such that
Θ(x∗, y) ≥ 0, ∀y ∈ C. (1.4)
If Θ(x, y) = 0 for all x, y ∈ C, then the mixed equilibrium problem (1.1) becomes the following variational
inequality problem: find x∗ ∈ C such that
ϕ(y) − ϕ(x∗) + ⟨F x∗, y − x∗⟩ ≥ 0, ∀y ∈ C. (1.5)
We denote the sets of solutions of the problems (1.1)–(1.5) by EP(1)–EP(5), respectively. Equilibrium
problem theory is one of the most interesting and useful branches of the study of the existence of solutions
to many problems arising in economics, physics, operations research and other fields. Mixed equilibrium
problems include fixed point problems, optimization problems, variational inequality problems and Nash
equilibrium problems as special cases, and many authors have introduced various methods to solve these
problems.
In 2005, Combettes and Hirstoaga [4] introduced an iterative method for finding the best approximation
to the initial data and proved some strong convergence theorems for the proposed method. Subsequently,
Takahashi and Takahashi [17] presented another iterative scheme for finding a common element of the set
of solutions of the equilibrium problem (1.2) and the set of fixed points of a nonexpansive mapping.
Moreover, Yao, Liou and Yao [20], [21] also introduced new iterative schemes for finding a common element
of the set of solutions of the equilibrium problem and the set of common fixed points of finitely (or
infinitely) many nonexpansive mappings.
Recently, Ceng and Yao [2] introduced a new iterative scheme for finding a common element of the set
of solutions of the mixed equilibrium problem and the set of common fixed points of infinitely many
nonexpansive mappings. Furthermore, Peng and Yao [12] applied the CQ method to solve the mixed
equilibrium problem and the variational inequality problem and also obtained strong convergence results.
Their results extended and improved the corresponding results in [3], [9], [16] and [21].
Recall that a mapping f : C → C is called a ρ-contraction if there exists a constant ρ ∈ [0, 1) such
that
‖f(x) − f(y)‖ ≤ ρ‖x − y‖, ∀x, y ∈ C.
A mapping T : C → C is said to be nonexpansive if
‖T x − T y‖ ≤ ‖x − y‖, ∀x, y ∈ C.
A mapping B : C → C is said to be β-inverse strongly monotone if there exists a constant β > 0 such
that
⟨Bx − By, x − y⟩ ≥ β‖Bx − By‖², ∀x, y ∈ C.
A mapping G is said to be strongly positive on H if there exists a constant µ > 0 such that
⟨Gx, x⟩ ≥ µ‖x‖², ∀x ∈ H. (1.6)
Let B : H → H be a single-valued nonlinear mapping and R : H → 2H be a set-valued mapping. The
variational inclusion problem is as follows: find x ∈ H such that
θ ∈ B(x) + R(x), (1.7)
where θ is the zero vector in H. We denote the set of solutions of this problem by I(B, R).
Remark 1.1. (1) If R = ∂φ : H → 2H in (1.7), where φ : H → R is a proper convex lower semi-continuous
function and ∂φ is the sub-differential of φ, then the variational inclusion (1.7) is equivalent to the following
problem: find x ∈ H such that
⟨Bx, v − x⟩ + φ(v) − φ(x) ≥ 0, ∀v ∈ H,
which is called the mixed quasi-variational inequality (see Noor [10]).
(2) If R = ∂δC in (1.7), where C is a nonempty closed convex subset of H and δC : H → [0, ∞] is the
indicator function of C, i.e.,
δC(x) = 0 if x ∈ C and δC(x) = +∞ otherwise,
then the variational inclusion (1.7) is equivalent to the following problem: find x ∈ C such that
⟨Bx, v − x⟩ ≥ 0, ∀v ∈ C,
which is called Hartman–Stampacchia’s variational inequality.
Remark 1.2. (1) If H = Rm , then the problem (1.7) becomes the generalized equation introduced by
Robinson [13].
(2) If B = 0, then the problem (1.7) becomes the inclusion problem introduced by Rockafellar [14].
The problem (1.7) is widely used in the study of optimal solutions in many related areas, including
mathematical programming, complementarity problems, variational inequalities, optimal control and many
other fields. Many kinds of variational inclusion problems have been improved, extended and generalized
in recent years by many authors.
In 2008, Zhang et al. [22] introduced an algorithm for finding common solutions of quasi-variational
inclusion and fixed point problems, and they also proved a strong convergence theorem for approximating
these common elements under suitable conditions. Moreover, Kocourek et al. [7] presented a new iterative
scheme for finding a common element of the set of solutions of the problem (1.7) and the set of fixed points
of nonexpansive, nonspreading and hybrid mappings in Hilbert spaces. Peng et al. [11] introduced another
iterative algorithm, based on the viscosity approximation method, for finding a common element of the set
of solutions of a variational inclusion with a set-valued maximal monotone mapping and an inverse strongly
monotone mapping, the set of solutions of an equilibrium problem and the set of fixed points of a
nonexpansive mapping.
Very recently, Yao et al. [19] proposed the following iterative algorithm:

Θ(un, y) + ϕ(y) − ϕ(un) + (1/r)⟨y − un, un − (xn − rF xn)⟩ ≥ 0,
xn+1 = αn(u + γf(xn)) + βn xn + [(1 − βn)I − αn(I + νA)]Wn JR,λ(zn − rAzn), (1.8)
where {αn}, {βn} are two real sequences in [0, 1]. Furthermore, they also proved that the above algorithm
converges strongly to a common element of the set of solutions of a mixed equilibrium problem, the set of
fixed points of a nonexpansive mapping and the set of solutions of a variational inclusion in a real Hilbert
space.
Motivated and inspired by the above works, in this paper we propose an iterative algorithm which
extends that of Yao et al. [19] as follows:

Θ(un, y) + ϕ(y) − ϕ(un) + (1/r)⟨y − un, un − (xn − rF xn)⟩ ≥ 0,
zn = JR,s2 (un − s2 Bun),
yn = JR,s1 (zn − s1 Azn),
xn+1 = αn(u + γf(xn)) + βn xn + [(1 − βn)I − αn(I + νG)]Wn yn, (1.9)
where {αn} and {βn} are two real sequences in [0, 1]. Furthermore, we prove a strong convergence theorem
and give an illustrative example to support our main theorem.
2. Preliminaries
Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Recall that
‖x − y‖² = ‖x‖² + ‖y‖² − 2⟨x, y⟩, ∀x, y ∈ H. (2.1)
Recall that the nearest point projection PC from H onto C assigns to each x ∈ H the unique point
PC x ∈ C satisfying the property
‖x − PC x‖ = min y∈C ‖x − y‖,
which is equivalent to the following inequality:
⟨x − PC x, PC x − y⟩ ≥ 0, ∀y ∈ C.
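As a quick numerical illustration (with C = [0, 1] ⊂ R as our assumed example set), the characterizing inequality ⟨x − PC x, PC x − y⟩ ≥ 0 for all y ∈ C can be checked directly:

```python
# Sketch: projection onto the interval C = [0, 1] (an illustrative choice of C)
# and a direct check of the variational characterization
# <x - P_C x, P_C x - y> >= 0 for all y in C.

def project(x, lo=0.0, hi=1.0):
    # nearest point of [lo, hi] to x
    return min(max(x, lo), hi)

x = 2.7
px = project(x)                      # P_C x = 1.0
for k in range(101):
    y = k / 100.0                    # sample points of C
    assert (x - px) * (px - y) >= 0  # the characterizing inequality
print(px)
```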
A set-valued mapping T : H → 2H is called monotone if, for all x, y ∈ H, f ∈ T x and g ∈ T y imply
⟨x − y, f − g⟩ ≥ 0. A monotone mapping T : H → 2H is said to be maximal if its graph G(T) is not properly
contained in the graph of any other monotone mapping. It is well known that a monotone mapping T is
maximal if and only if, for all (x, f) ∈ H × H, ⟨x − y, f − g⟩ ≥ 0 for all (y, g) ∈ G(T) implies f ∈ T x.
Let a set-valued mapping R : H → 2H be maximal monotone. We define the resolvent operator JR,λ
generated by R and λ > 0 by
JR,λ(x) = (I + λR)−1(x), ∀x ∈ H.
It is easy to see that the resolvent operator JR,λ is single-valued, nonexpansive and 1-inverse strongly
monotone and, moreover, a solution of the problem (1.7) is a fixed point of the operator JR,λ(I − λB) for
all λ > 0 (see, for example, [8]).
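For a concrete instance (our choice, not the paper's): with H = R and R = ∂|·|, the resolvent JR,λ = (I + λ∂|·|)−1 is the soft-thresholding map, and iterating JR,λ(I − λB) with B(x) = x − 1 (which is 1-inverse strongly monotone) drives the iterate to the solution of 0 ∈ B(x) + R(x); here that solution is x = 0, since 1 ∈ ∂|0| = [−1, 1].

```python
# Sketch with illustrative choices: H = R, R = subdifferential of |.|,
# B(x) = x - 1. The resolvent J_{R,lam} is then soft-thresholding, and a fixed
# point of J_{R,lam}(I - lam*B) solves the inclusion 0 in B(x) + R(x).

def resolvent(x, lam):
    # J_{R,lam}(x) = (I + lam*R)^{-1}(x) = soft-thresholding for R = d|.|
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def solve_inclusion(x, lam=0.5, n_steps=100):
    for _ in range(n_steps):
        x = resolvent(x - lam * (x - 1.0), lam)  # x <- J_{R,lam}(I - lam*B)x
    return x

print(solve_inclusion(5.0))  # converges to 0, the solution of the inclusion
```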
In this paper, we assume that a bifunction Θ : H × H → R and a convex function ϕ : H → R satisfy
the following conditions:
(H1) Θ(x, x) = 0 for all x ∈ H;
(H2) Θ is monotone, i.e., Θ(x, y) + Θ(y, x) ≤ 0 for all x, y ∈ H;
(H3) for any y ∈ H, x ↦ Θ(x, y) is weakly upper semi-continuous;
(H4) for any x ∈ H, y ↦ Θ(x, y) is convex and lower semi-continuous;
(H5) for any x ∈ H and r > 0, there exist a bounded subset Dx ⊂ H and yx ∈ H such that, for any
z ∈ H \ Dx,
Θ(z, yx) + ϕ(yx) − ϕ(z) + (1/r)⟨yx − z, z − x⟩ < 0.
Next, we recall some lemmas which will be needed in the rest of this paper.
Lemma 2.1. [12] Let H be a real Hilbert space and Θ : H × H → R be a bifunction. Let ϕ : H → R be a
proper lower semicontinuous and convex function. For any r > 0, define a mapping Sr : H → H as follows:
for all x ∈ H,
Sr(x) = {z ∈ H : Θ(z, y) + ϕ(y) − ϕ(z) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ H}.
Assume that the conditions (H1)–(H5) hold. Then we have the following:
(1) for each x ∈ H, Sr(x) ≠ ∅ and Sr is single-valued;
(2) Sr is firmly nonexpansive, i.e., for any x, y ∈ H,
‖Sr x − Sr y‖² ≤ ⟨Sr x − Sr y, x − y⟩;
(3) Fix(Sr) = EP(1);
(4) EP(1) is closed and convex.
Lemma 2.2. [15] Let {xn} and {zn} be bounded sequences in a Banach space E and {βn} be a sequence in
[0, 1] satisfying the following condition:
0 < lim inf n→∞ βn ≤ lim supn→∞ βn < 1.
Suppose that xn+1 = βn xn + (1 − βn)zn for all n ≥ 0 and lim supn→∞ (‖zn+1 − zn‖ − ‖xn+1 − xn‖) ≤ 0. Then
limn→∞ ‖zn − xn‖ = 0.
Now, we define the mapping Wn by
Un,n+1 = I,
Un,n = λn Tn Un,n+1 + (1 − λn)I,
Un,n−1 = λn−1 Tn−1 Un,n + (1 − λn−1)I,
···
Un,k = λk Tk Un,k+1 + (1 − λk)I,
Un,k−1 = λk−1 Tk−1 Un,k + (1 − λk−1)I,
···
Un,2 = λ2 T2 Un,3 + (1 − λ2)I,
Wn = Un,1 = λ1 T1 Un,2 + (1 − λ1)I, (2.2)
where λ1, λ2, · · · are real numbers such that 0 ≤ λn ≤ 1 for all n ≥ 1 and {Tn}∞n=1 is an infinite family of
nonexpansive mappings Tn : H → H.
Note that Wn is usually called the W-mapping generated by Tn, Tn−1, · · · , T1 and λn, λn−1, · · · , λ1. It
can easily be seen that Wn is also a nonexpansive mapping.
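The recursion (2.2) is straightforward to implement by unwinding Un,k from k = n + 1 down to k = 1. The sketch below uses our own illustrative choices Tn(x) = x/(n + 1) (nonexpansive on R, with Fix(Tn) = {0}) and λn = 1/2, which satisfy 0 < λn ≤ b < 1:

```python
# Sketch of the W-mapping recursion (2.2) in H = R, with illustrative choices
# T_n(x) = x/(n+1) and lambda_n = 1/2 (see lead-in).

def T(n, x):
    return x / (n + 1)

def W(n, x, lam=0.5):
    # U_{n,n+1} = I; U_{n,k} = lam*T_k(U_{n,k+1}) + (1 - lam)*I; W_n = U_{n,1}
    u = x                      # U_{n,n+1} x
    for k in range(n, 0, -1):  # k = n, n-1, ..., 1
        u = lam * T(k, u) + (1 - lam) * x
    return u

# for this example, successive values W_n x differ by at most prod_{i<=n} lambda_i,
# so the limit W x = lim_n W_n x exists (cf. Lemma 2.3 below)
print(abs(W(20, 1.0) - W(19, 1.0)))
```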
Lemma 2.3. [15] Let C be a nonempty closed convex subset of a real Hilbert space H. Let {Tn}∞n=1 be an
infinite family of nonexpansive mappings Tn : H → H such that ∩∞n=1 Fix(Tn) ≠ ∅ and let λ1, λ2, · · · be real
numbers such that 0 < λn ≤ b < 1 for all n ∈ N. Then the following statements hold:
(1) for all x ∈ H and k ∈ N, the limit limn→∞ Un,k x exists;
(2) Fix(W) = ∩∞n=1 Fix(Tn), where W x = limn→∞ Wn x = limn→∞ Un,1 x for all x ∈ C;
(3) for any bounded sequence {xn} in H,
limn→∞ ‖W xn − Wn xn‖ = 0.
Lemma 2.4. [1] Let R : H → 2H be a maximal monotone mapping and B : H → H be a Lipschitz
continuous monotone mapping. Then the mapping R + B : H → 2H is maximal monotone.
Lemma 2.5. [5],[6] Let {an} be a sequence of non-negative real numbers satisfying
an+1 ≤ (1 − γn)an + δn, n ≥ 1,
where {γn} is a sequence in (0, 1) and {δn} is a sequence satisfying the following conditions:
(i) Σ∞n=1 γn = ∞;
(ii) lim supn→∞ δn/γn ≤ 0 or Σ∞n=1 |δn| < ∞.
Then limn→∞ an = 0.
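A quick numerical check of Lemma 2.5, with the illustrative choices γn = 1/(n + 1) (so Σγn = ∞) and δn = 1/(n + 1)² (so δn/γn → 0):

```python
# Sketch: run the recursion a_{n+1} <= (1 - gamma_n) a_n + delta_n of Lemma 2.5
# as an equality, with gamma_n = 1/(n+1) (divergent sum) and
# delta_n = 1/(n+1)**2 (so delta_n / gamma_n -> 0); a_n should tend to 0.

a = 1.0
for n in range(1, 10001):
    gamma = 1.0 / (n + 1)
    delta = 1.0 / (n + 1) ** 2
    a = (1 - gamma) * a + delta

print(a)  # roughly (a_1 + log n) / n for this particular choice, hence small
```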
Lemma 2.6. [5],[6] Let C be a nonempty closed convex subset of a real Hilbert space H and g : C → R∪{∞}
be a proper lower-semicontinuous differentiable convex function. If x∗ is a solution to the minimization
problem
g(x∗) = inf x∈C g(x),
then
⟨g′(x), x − x∗⟩ ≥ 0, ∀x ∈ C.
In particular, if x∗ solves the optimization problem
min x∈C (ν/2)⟨Ax, x⟩ + (1/2)‖x − u‖² − h(x),
then
⟨u + (γf − (I + νA))x∗, x − x∗⟩ ≤ 0, ∀x ∈ C.
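The first-order condition in Lemma 2.6 is easy to verify numerically on a small example of our own choosing: take g(x) = (x − 2)² on C = [0, 1], whose minimizer is x∗ = 1, and check ⟨g′(x), x − x∗⟩ ≥ 0 over a grid of C:

```python
# Sketch: Lemma 2.6's first-order condition on the illustrative problem
# g(x) = (x - 2)**2 over C = [0, 1]; the minimizer is x* = 1, and
# g'(x) * (x - x*) >= 0 must hold for every x in C.

def g(x):
    return (x - 2.0) ** 2

def g_prime(x):
    return 2.0 * (x - 2.0)

x_star = min((k / 100.0 for k in range(101)), key=g)  # grid minimizer of g on C
for k in range(101):
    x = k / 100.0
    assert g_prime(x) * (x - x_star) >= -1e-12        # the variational inequality
print(x_star)
```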
3. Main Results
In this section, we prove a strong convergence theorem for finding a common element of the set of
solutions of a mixed equilibrium problem, the set of solutions of variational inclusions and the set of fixed
points of nonexpansive mappings.
Theorem 3.1. Let H be a real Hilbert space and {Tn}∞n=1 be an infinite family of nonexpansive mappings
Tn : H → H. Let ϕ : H → R be a lower semicontinuous and convex function and Θ : H × H → R
be a bifunction satisfying the conditions (H1)–(H5). Let G be a strongly positive bounded linear operator
with coefficient µ > 0 and R : H → 2H be a maximal monotone mapping. Let A, B, F : H → H be
α-, β-, η-inverse strongly monotone mappings, respectively. Let f : H → H be a ρ-contraction and let
r > 0, s1 > 0, s2 > 0, γ > 0 be constants such that s1 < 2α, s2 < 2β, r < 2ρ and γ < (1 + ν)µ/ρ. Assume that
Ω := ∩∞n=1 Fix(Tn) ∩ EP(1) ∩ I(A, R) ∩ I(B, R) ≠ ∅ and let Wn be the mapping defined by (2.2). If {xn} is
the sequence generated by x1 ∈ H and

Θ(un, y) + ϕ(y) − ϕ(un) + (1/r)⟨y − un, un − (xn − rF xn)⟩ ≥ 0,
zn = JR,s2 (un − s2 Bun),
yn = JR,s1 (zn − s1 Azn),
xn+1 = αn(u + γf(xn)) + βn xn + [(1 − βn)I − αn(I + νG)]Wn yn, (3.1)
where {αn} and {βn} are two real sequences in [0, 1] satisfying the following conditions:
(C1) limn→∞ αn = 0 and Σ∞n=1 αn = ∞;
(C2) 0 < lim inf n→∞ βn < lim supn→∞ βn < 1.
Then the sequence {xn} converges strongly to a point x∗ ∈ Ω, which solves the following optimization problem:
min x∈Ω (ν/2)⟨Gx, x⟩ + (1/2)‖x − u‖² − h(x), (3.2)
where h is a potential function for γf.
Proof. We divide the proof of Theorem 3.1 into several steps.
Step 1. We show that {xn} is bounded. First, we note that I − s1 A and I − s2 B are nonexpansive.
Indeed, for all x, y ∈ H,
‖(I − s1 A)x − (I − s1 A)y‖² = ‖x − y‖² − 2s1⟨x − y, Ax − Ay⟩ + s1²‖Ax − Ay‖²
≤ ‖x − y‖² − 2s1 α‖Ax − Ay‖² + s1²‖Ax − Ay‖²
= ‖x − y‖² − s1(2α − s1)‖Ax − Ay‖²
≤ ‖x − y‖².
Thus I − s1 A is nonexpansive and so are I − s2 B and I − rF. Let p ∈ Ω. From (C1) and (C2), we may
assume that αn ≤ (1 − βn)(1 + ν‖G‖)−1 for all n ≥ 1. Since G is a bounded linear self-adjoint operator on
H, we have
‖G‖ = sup{|⟨Gx, x⟩| : x ∈ H, ‖x‖ = 1}
and, for all x ∈ H with ‖x‖ = 1,
⟨((1 − βn)I − αn(I + νG))x, x⟩ = ⟨(1 − βn)x − αn x − αn νGx, x⟩
= 1 − βn − αn − αn ν⟨Gx, x⟩
≥ 1 − βn − αn − αn ν‖G‖
≥ 0.
Thus we see that (1 − βn)I − αn(I + νG) is positive. Further, it follows that
‖(1 − βn)I − αn(I + νG)‖ = sup{⟨((1 − βn)I − αn(I + νG))x, x⟩ : x ∈ H, ‖x‖ = 1}
= sup{1 − βn − αn − αn ν⟨Gx, x⟩ : x ∈ H, ‖x‖ = 1}
≤ 1 − βn − αn(1 + νµ).
From zn = JR,s2 (un − s2 Bun) for all n ≥ 0, we can compute
‖zn − p‖ = ‖JR,s2 (un − s2 Bun) − JR,s2 (p − s2 Bp)‖
≤ ‖(un − p) − s2(Bun − Bp)‖
≤ ‖un − p‖
and
‖yn − p‖ = ‖JR,s1 (zn − s1 Azn) − JR,s1 (p − s1 Ap)‖
≤ ‖(zn − p) − s1(Azn − Ap)‖
≤ ‖zn − p‖
≤ ‖un − p‖.
Letting un = Sr(xn − rF xn) for all n ≥ 0, by Lemma 2.1, we have
‖un − p‖² = ‖Sr(xn − rF xn) − Sr(p − rF p)‖²
≤ ‖xn − p‖² − 2r⟨xn − p, F xn − F p⟩ + r²‖F xn − F p‖²
≤ ‖xn − p‖² − r(2ρ − r)‖F xn − F p‖² (3.3)
≤ ‖xn − p‖²
and so ‖zn − p‖ ≤ ‖xn − p‖ and ‖yn − p‖ ≤ ‖xn − p‖. Thus we have
‖xn+1 − p‖ = ‖αn u + αn(γf(xn) − (I + νG)p) + βn(xn − p) + [(1 − βn)I − αn(I + νG)](Wn yn − p)‖
≤ (1 − βn − αn(1 + νµ))‖yn − p‖ + βn‖xn − p‖ + αn‖u‖ + αn‖γf(xn) − (I + νG)p‖
≤ (1 − αn(1 + νµ))‖xn − p‖ + αn‖u‖ + αn γρ‖xn − p‖ + αn‖γf(p) − (I + νG)p‖
= [1 − αn(1 + νµ − γρ)]‖xn − p‖ + αn[‖γf(p) − (I + νG)p‖ + ‖u‖]
≤ max{‖x0 − p‖, (‖γf(p) − (I + νG)p‖ + ‖u‖)/(1 + νµ − γρ)}.
Hence {xn} is bounded and so are {un}, {zn}, {yn}, {Wn yn}, {f(xn)} and {GWn yn}.
Step 2. We show that limn→∞ ‖xn+1 − xn‖ = 0. Define xn+1 = βn xn + (1 − βn)vn for all n ≥ 0, that is,
vn = (xn+1 − βn xn)/(1 − βn), ∀n ≥ 0.
It follows that
vn+1 − vn = (xn+2 − βn+1 xn+1)/(1 − βn+1) − (xn+1 − βn xn)/(1 − βn)
= αn+1/(1 − βn+1)[u + γf(xn+1)] − αn/(1 − βn)[u + γf(xn)] + Wn+1 yn+1 − Wn yn
− αn+1/(1 − βn+1)(I + νG)Wn+1 yn+1 + αn/(1 − βn)(I + νG)Wn yn
= αn+1/(1 − βn+1)[u + γf(xn+1) − (I + νG)Wn+1 yn+1]
− αn/(1 − βn)[u + γf(xn) − (I + νG)Wn yn]
+ Wn+1 yn+1 − Wn+1 yn + Wn+1 yn − Wn yn
and so
‖vn+1 − vn‖ − ‖xn+1 − xn‖ ≤ αn+1/(1 − βn+1)[‖u‖ + ‖γf(xn+1)‖ + ‖(I + νG)Wn+1 yn+1‖]
+ αn/(1 − βn)[‖u‖ + ‖γf(xn)‖ + ‖(I + νG)Wn yn‖]
+ ‖yn+1 − yn‖ + ‖Wn+1 yn − Wn yn‖ − ‖xn+1 − xn‖.
Since Ti and Un,i are nonexpansive, it follows that
‖Wn+1 yn − Wn yn‖ = ‖λ1 T1 Un+1,2 yn − λ1 T1 Un,2 yn‖
≤ λ1‖Un+1,2 yn − Un,2 yn‖
= λ1‖λ2 T2 Un+1,3 yn − λ2 T2 Un,3 yn‖
≤ λ1 λ2‖Un+1,3 yn − Un,3 yn‖
...
≤ λ1 λ2 · · · λn‖Un+1,n+1 yn − Un,n+1 yn‖
≤ M Πni=1 λi,
where M > 0 is a constant such that sup{‖Un+1,n+1 yn − Un,n+1 yn‖ : n ≥ 0} ≤ M.
Observe that
‖yn+1 − yn‖ = ‖JR,s1 (zn+1 − s1 Azn+1) − JR,s1 (zn − s1 Azn)‖
≤ ‖(I − s1 A)zn+1 − (I − s1 A)zn‖
≤ ‖zn+1 − zn‖
= ‖JR,s2 (un+1 − s2 Bun+1) − JR,s2 (un − s2 Bun)‖
≤ ‖un+1 − un‖
= ‖Sr(xn+1 − rF xn+1) − Sr(xn − rF xn)‖
≤ ‖xn+1 − xn‖.
Therefore, we have
‖vn+1 − vn‖ − ‖xn+1 − xn‖ ≤ αn+1/(1 − βn+1)[‖u‖ + ‖γf(xn+1)‖ + ‖(I + νG)Wn+1 yn+1‖]
+ αn/(1 − βn)[‖u‖ + ‖γf(xn)‖ + ‖(I + νG)Wn yn‖] + M Πni=1 λi,
which implies that
lim supn→∞ (‖vn+1 − vn‖ − ‖xn+1 − xn‖) ≤ 0.
Hence, by Lemma 2.2, it follows that limn→∞ ‖vn − xn‖ = 0 and
limn→∞ ‖vn − xn‖ = limn→∞ ‖(xn+1 − βn xn)/(1 − βn) − xn‖ = limn→∞ ‖xn+1 − xn‖/(1 − βn),
and so
limn→∞ ‖xn+1 − xn‖ = 0. (3.4)
Step 3. We show that
limn→∞ ‖F xn − F p‖ = 0, limn→∞ ‖Bun − Bp‖ = 0, limn→∞ ‖Azn − Ap‖ = 0.
From (3.1), it follows that
xn+1 = αn(u + γf(xn)) + βn xn + [(1 − βn)I − αn(I + νG)]Wn yn,
which can be rewritten as
xn+1 = αn(u + γf(xn) − (I + νG)Wn yn) + βn(xn − Wn yn) + Wn yn.
Observe that
‖xn − Wn yn‖ ≤ ‖xn − xn+1‖ + ‖xn+1 − Wn yn‖
≤ ‖xn − xn+1‖ + αn‖u + γf(xn) − (I + νG)Wn yn‖ + βn‖xn − Wn yn‖
and so
‖xn − Wn yn‖ ≤ 1/(1 − βn)‖xn − xn+1‖ + αn/(1 − βn)‖u + γf(xn) − (I + νG)Wn yn‖.
Thus, from αn → 0 and limn→∞ ‖xn+1 − xn‖ = 0, it follows that
limn→∞ ‖xn − Wn yn‖ = 0. (3.5)
Since I − s1 A, I − s2 B and I − rF are nonexpansive, it follows from (3.3) that
‖yn − p‖² = ‖JR,s1 (zn − s1 Azn) − JR,s1 (p − s1 Ap)‖²
≤ ‖zn − p‖² + s1(s1 − 2α)‖Azn − Ap‖²
≤ ‖JR,s2 (un − s2 Bun) − JR,s2 (p − s2 Bp)‖² + s1(s1 − 2α)‖Azn − Ap‖²
≤ ‖un − p‖² + s2(s2 − 2β)‖Bun − Bp‖² + s1(s1 − 2α)‖Azn − Ap‖²
≤ ‖xn − p‖² + r(r − 2ρ)‖F xn − F p‖² + s2(s2 − 2β)‖Bun − Bp‖²
+ s1(s1 − 2α)‖Azn − Ap‖².
Thus we obtain
‖xn+1 − p‖² = ‖αn(u + γf(xn) − (I + νG)p) + βn(xn − Wn yn) + (I − αn(I + νG))(Wn yn − p)‖²
≤ ‖(I − αn(I + νG))(Wn yn − p) + βn(xn − Wn yn)‖²
+ 2αn⟨u + γf(xn) − (I + νG)p, xn+1 − p⟩
≤ [‖I − αn(I + νG)‖‖yn − p‖ + βn‖xn − Wn yn‖]²
+ 2αn‖u + γf(xn) − (I + νG)p‖‖xn+1 − p‖
≤ (1 − αn(1 + νµ))²‖yn − p‖² + βn²‖xn − Wn yn‖²
+ 2(1 − αn(1 + νµ))βn‖yn − p‖‖xn − Wn yn‖
+ 2αn‖u + γf(xn) − (I + νG)p‖‖xn+1 − p‖.
Since (1 − αn(1 + νµ))² < 1, it follows that
‖xn+1 − p‖² ≤ ‖yn − p‖² + βn²‖xn − Wn yn‖² + 2(1 − αn(1 + νµ))βn‖yn − p‖‖xn − Wn yn‖
+ 2αn‖u + γf(xn) − (I + νG)p‖‖xn+1 − p‖
≤ ‖xn − p‖² + r(r − 2ρ)‖F xn − F p‖² + s2(s2 − 2β)‖Bun − Bp‖²
+ s1(s1 − 2α)‖Azn − Ap‖² + βn²‖xn − Wn yn‖²
+ 2(1 − αn(1 + νµ))βn‖yn − p‖‖xn − Wn yn‖
+ 2αn‖u + γf(xn) − (I + νG)p‖‖xn+1 − p‖, (3.6)
and so
r(2ρ − r)‖F xn − F p‖² + s2(2β − s2)‖Bun − Bp‖² + s1(2α − s1)‖Azn − Ap‖²
≤ (‖xn − p‖ + ‖xn+1 − p‖)‖xn+1 − xn‖ + βn²‖xn − Wn yn‖²
+ 2(1 − αn(1 + νµ))βn‖yn − p‖‖xn − Wn yn‖
+ 2αn‖u + γf(xn) − (I + νG)p‖‖xn+1 − p‖. (3.7)
Therefore, from (3.4), (3.5) and (3.7), it follows that
limn→∞ ‖F xn − F p‖ = 0, limn→∞ ‖Bun − Bp‖ = 0, limn→∞ ‖Azn − Ap‖ = 0.
Step 4. We show that ‖xn − W xn‖ → 0. Since Sr is firmly nonexpansive, we have
‖un − p‖² = ‖Sr(xn − rF xn) − Sr(p − rF p)‖²
≤ ⟨xn − rF xn − (p − rF p), un − p⟩
≤ (1/2)(‖xn − p‖² + ‖un − p‖² − ‖xn − un − r(F xn − F p)‖²)
= (1/2)(‖xn − p‖² + ‖un − p‖² − ‖xn − un‖² + 2r⟨F xn − F p, xn − un⟩
− r²‖F xn − F p‖²),
which implies that
‖un − p‖² ≤ ‖xn − p‖² − ‖xn − un‖² + 2r⟨F xn − F p, xn − un⟩ − r²‖F xn − F p‖²
≤ ‖xn − p‖² − ‖xn − un‖² + 2r‖F xn − F p‖‖xn − un‖ − r²‖F xn − F p‖²
≤ ‖xn − p‖² − ‖xn − un‖² + 2r‖F xn − F p‖‖xn − un‖.
Since JR,s2 is 1-inverse strongly monotone, we have
‖zn − p‖² = ‖JR,s2 (un − s2 Bun) − JR,s2 (p − s2 Bp)‖²
≤ ⟨(un − s2 Bun) − (p − s2 Bp), zn − p⟩
= (1/2)(‖(un − s2 Bun) − (p − s2 Bp)‖² + ‖zn − p‖²
− ‖(un − s2 Bun) − (p − s2 Bp) − (zn − p)‖²)
≤ (1/2)(‖un − p‖² + ‖zn − p‖² − ‖un − zn − s2(Bun − Bp)‖²)
= (1/2)(‖un − p‖² + ‖zn − p‖² − ‖un − zn‖² + 2s2⟨Bun − Bp, un − zn⟩
− s2²‖Bun − Bp‖²),
which implies that
‖zn − p‖² ≤ ‖un − p‖² − ‖un − zn‖² + 2s2‖Bun − Bp‖‖un − zn‖.
Now, we observe that
‖yn − p‖² = ‖JR,s1 (zn − s1 Azn) − JR,s1 (p − s1 Ap)‖²
≤ ⟨zn − s1 Azn − (p − s1 Ap), yn − p⟩
≤ (1/2)(‖zn − p‖² + ‖yn − p‖² − ‖zn − yn − s1(Azn − Ap)‖²)
= (1/2)(‖zn − p‖² + ‖yn − p‖² − ‖zn − yn‖² + 2s1⟨Azn − Ap, zn − yn⟩
− s1²‖Azn − Ap‖²),
that is,
‖yn − p‖² ≤ ‖zn − p‖² − ‖zn − yn‖² + 2s1‖Azn − Ap‖‖zn − yn‖
≤ ‖un − p‖² − ‖un − zn‖² + 2s2‖Bun − Bp‖‖un − zn‖ − ‖zn − yn‖²
+ 2s1‖Azn − Ap‖‖zn − yn‖
≤ ‖xn − p‖² − ‖xn − un‖² + 2r‖F xn − F p‖‖xn − un‖ − ‖un − zn‖²
+ 2s2‖Bun − Bp‖‖un − zn‖ − ‖zn − yn‖² + 2s1‖Azn − Ap‖‖zn − yn‖. (3.8)
Substituting (3.8) into (3.6), we have
‖xn+1 − p‖² ≤ (1 − αn(1 + νµ))²{‖xn − p‖² − ‖xn − un‖² − ‖un − zn‖² − ‖zn − yn‖²
+ 2r‖F xn − F p‖‖xn − un‖ + 2s2‖Bun − Bp‖‖un − zn‖
+ 2s1‖Azn − Ap‖‖zn − yn‖}
+ βn²‖xn − Wn yn‖² + 2(1 − αn(1 + νµ))βn‖yn − p‖‖xn − Wn yn‖
+ 2αn‖u + γf(xn) − (I + νG)p‖‖xn+1 − p‖.
It follows that
(1 − αn(1 + νµ))²{‖xn − un‖² + ‖un − zn‖² + ‖zn − yn‖²}
≤ (‖xn − p‖ + ‖xn+1 − p‖)‖xn+1 − xn‖ + 2r‖F xn − F p‖‖xn − un‖
+ 2s2‖Bun − Bp‖‖un − zn‖ + 2s1‖Azn − Ap‖‖zn − yn‖
+ βn²‖xn − Wn yn‖² + 2(1 − αn(1 + νµ))βn‖yn − p‖‖xn − Wn yn‖
+ 2αn‖u + γf(xn) − (I + νG)p‖‖xn+1 − p‖.
Thus, from (3.4), (3.5) and Step 3, we have
limn→∞ ‖xn − un‖ = 0, limn→∞ ‖un − zn‖ = 0, limn→∞ ‖zn − yn‖ = 0.
Note that
‖W yn − yn‖ ≤ ‖W yn − Wn yn‖ + ‖Wn yn − yn‖
and so, from Lemma 2.3 and ‖Wn yn − yn‖ ≤ ‖Wn yn − xn‖ + ‖xn − yn‖ → 0, it follows that
limn→∞ ‖W yn − yn‖ = 0. Since ‖xn − yn‖ ≤ ‖xn − un‖ + ‖un − zn‖ + ‖zn − yn‖ → 0, we therefore have
limn→∞ ‖xn − W xn‖ = 0.
Step 5. We show that
lim supn→∞ ⟨u + γf(x∗) − (I + νG)x∗, xn − x∗⟩ ≤ 0,
where x∗ is a solution of the optimization problem (3.2). First, we note that there exists a subsequence
{xnj} of {xn} such that
lim supn→∞ ⟨u + (γf − (I + νG))x∗, xn − x∗⟩ = limj→∞ ⟨u + (γf − (I + νG))x∗, xnj − x∗⟩.
Since {xnj} is bounded, there exists a subsequence of {xnj} which converges weakly to a point ω. Without
loss of generality, we can assume that {xnj} itself converges weakly to ω. Since ‖W xn − xn‖ → 0, it
follows from the demiclosedness principle for nonexpansive mappings that ω ∈ Fix(W).
Now, we show that ω ∈ EP(1). By un = Sr(xn − rF xn), it follows that
Θ(un, y) + ϕ(y) − ϕ(un) + (1/r)⟨y − un, un − (xn − rF xn)⟩ ≥ 0, ∀y ∈ H.
From (H2), we have
ϕ(y) − ϕ(un) + (1/r)⟨y − un, un − (xn − rF xn)⟩ ≥ Θ(y, un), ∀y ∈ H,
and hence
ϕ(y) − ϕ(uni) + (1/r)⟨y − uni, uni − (xni − rF xni)⟩ ≥ Θ(y, uni), ∀y ∈ H. (3.9)
For any t ∈ (0, 1] and y ∈ H, let yt = ty + (1 − t)ω. From (3.9), we have
ϕ(yt) − ϕ(uni) + (1/r)⟨yt − uni, uni − (xni − rF xni)⟩ ≥ Θ(yt, uni),
that is,
⟨yt − uni, (uni − xni + rF xni)/r⟩ ≥ Θ(yt, uni) − ϕ(yt) + ϕ(uni).
It follows that
⟨yt − uni, F yt⟩ ≥ ⟨yt − uni, F yt⟩ − ⟨yt − uni, (uni − xni + rF xni)/r⟩
+ Θ(yt, uni) − ϕ(yt) + ϕ(uni)
= ⟨yt − uni, F yt − F uni⟩ + ⟨yt − uni, F uni − F xni⟩ − ⟨yt − uni, (uni − xni)/r⟩
+ ϕ(uni) − ϕ(yt) + Θ(yt, uni).
Since ‖uni − xni‖ → 0, we have ‖F uni − F xni‖ → 0. Further, from the inverse strong monotonicity of
F, it follows that ⟨yt − uni, F yt − F uni⟩ ≥ 0 and so, from (H4), the weak lower semi-continuity of ϕ,
(uni − xni)/r → 0 and uni → ω weakly, it follows that
⟨yt − ω, F yt⟩ ≥ −ϕ(yt) + ϕ(ω) + Θ(yt, ω). (3.10)
Thus, from (H1), (H4) and (3.10), we have
0 = Θ(yt, yt) − ϕ(yt) + ϕ(yt)
= Θ(yt, ty + (1 − t)ω) − ϕ(yt) + ϕ(ty + (1 − t)ω)
≤ t[Θ(yt, y) + ϕ(y) − ϕ(yt)] + (1 − t)[Θ(yt, ω) + ϕ(ω) − ϕ(yt)]
≤ t[Θ(yt, y) + ϕ(y) − ϕ(yt)] + (1 − t)⟨yt − ω, F yt⟩
= t[Θ(yt, y) + ϕ(y) − ϕ(yt)] + (1 − t)t⟨y − ω, F yt⟩.
Letting t → 0, we have, for any y ∈ H,
0 ≤ Θ(ω, y) + ϕ(y) − ϕ(ω) + ⟨y − ω, F ω⟩,
which implies that ω ∈ EP(1).
Next, we show that ω ∈ I(A, R). Since A is α-inverse strongly monotone, A is a Lipschitz continuous
monotone mapping. It follows from Lemma 2.4 that R + A is maximal monotone. Let (v, g) ∈ G(R + A),
i.e., g − Av ∈ R(v). Since yni = JR,s1 (zni − s1 Azni), we have zni − s1 Azni ∈ (I + s1 R)yni, i.e.,
(1/s1)(zni − yni − s1 Azni) ∈ R(yni). By the monotonicity of R + A, we have
⟨v − yni, g − Ayni − (1/s1)(zni − yni − s1 Azni)⟩ ≥ 0
and so
⟨v − yni, g⟩ ≥ ⟨v − yni, Ayni + (1/s1)(zni − yni − s1 Azni)⟩
= ⟨v − yni, Ayni − Azni⟩ + (1/s1)⟨v − yni, zni − yni⟩.
It follows from ‖zn − yn‖ → 0, ‖Azn − Ayn‖ → 0 and yni → ω weakly that
limi→∞ ⟨v − yni, g⟩ = ⟨v − ω, g⟩ ≥ 0.
Since R + A is maximal monotone, this implies that θ ∈ (R + A)(ω), i.e., ω ∈ I(A, R). By the same
argument, we can also show that ω ∈ I(B, R). Therefore, ω ∈ Ω and
lim supn→∞ ⟨u + (γf − (I + νG))x∗, xn − x∗⟩ = limj→∞ ⟨u + (γf − (I + νG))x∗, xnj − x∗⟩
= ⟨u + (γf − (I + νG))x∗, ω − x∗⟩
≤ 0.
Step 6. We show that the sequence {xn} converges strongly to the point x∗. Observe that
‖xn+1 − x∗‖² = ‖αn(u + γf(xn)) + βn xn + [(1 − βn)I − αn(I + νG)]Wn yn − x∗‖²
= ‖αn(u + γf(xn) − (I + νG)x∗) + βn(xn − x∗)
+ ((1 − βn)I − αn(I + νG))(Wn yn − x∗)‖²
≤ ‖βn(xn − x∗) + ((1 − βn)I − αn(I + νG))(Wn yn − x∗)‖²
+ 2αn⟨u + γf(xn) − (I + νG)x∗, xn+1 − x∗⟩
≤ [‖(1 − βn)I − αn(I + νG)‖‖yn − x∗‖ + βn‖xn − x∗‖]²
+ 2αn γ⟨f(xn) − f(x∗), xn+1 − x∗⟩
+ 2αn⟨u + γf(x∗) − (I + νG)x∗, xn+1 − x∗⟩
≤ [(1 − βn − αn(1 + ν)µ)‖yn − x∗‖ + βn‖xn − x∗‖]²
+ 2αn γρ‖xn − x∗‖‖xn+1 − x∗‖
+ 2αn⟨u + γf(x∗) − (I + νG)x∗, xn+1 − x∗⟩
≤ (1 − αn(1 + ν)µ)²‖xn − x∗‖² + αn γρ{‖xn − x∗‖² + ‖xn+1 − x∗‖²}
+ 2αn⟨u + γf(x∗) − (I + νG)x∗, xn+1 − x∗⟩
and so
‖xn+1 − x∗‖² ≤ (1 − αn(1 + ν)µ)²/(1 − αn γρ)‖xn − x∗‖² + αn γρ/(1 − αn γρ)‖xn − x∗‖²
+ 2αn/(1 − αn γρ)⟨u + γf(x∗) − (I + νG)x∗, xn+1 − x∗⟩
= [1 − 2αn((1 + ν)µ − γρ)/(1 − αn γρ)]‖xn − x∗‖² + (αn(1 + ν)µ)²/(1 − αn γρ)‖xn − x∗‖²
+ 2αn/(1 − αn γρ)⟨u + γf(x∗) − (I + νG)x∗, xn+1 − x∗⟩
= [1 − 2αn((1 + ν)µ − γρ)/(1 − αn γρ)]‖xn − x∗‖² + 2αn((1 + ν)µ − γρ)/(1 − αn γρ)
× {αn((1 + ν)µ)²/(2((1 + ν)µ − γρ))‖xn − x∗‖²
+ 1/((1 + ν)µ − γρ)⟨u + γf(x∗) − (I + νG)x∗, xn+1 − x∗⟩}
≤ (1 − δn)‖xn − x∗‖² + δn σn,
where
M1 = sup{‖xn − x∗‖² : n ≥ 1}, δn = 2αn((1 + ν)µ − γρ)/(1 − αn γρ),
σn = αn((1 + ν)µ)²/(2((1 + ν)µ − γρ))M1 + 1/((1 + ν)µ − γρ)⟨u + γf(x∗) − (I + νG)x∗, xn+1 − x∗⟩.
It can easily be seen that Σ∞n=1 δn = ∞ and lim supn→∞ σn ≤ 0. Hence, by Lemma 2.5, we conclude that
the sequence {xn} converges strongly to the point x∗ ∈ Ω. This completes the proof.
Using Theorem 3.1, we can obtain the following corollaries:
Corollary 3.2. For any x0 ∈ H, let {xn} be the sequence in H generated by the following iterative algorithm:
for any y ∈ H and n ≥ 1,

Θ(un, y) + ϕ(y) − ϕ(un) + (1/r)⟨y − un, un − (xn − rF xn)⟩ ≥ 0,
zn = JR,s2 (un − s2 Bun),
yn = JR,s1 (zn − s1 Azn),
xn+1 = αn(u + γf(xn)) + βn xn + (1 − βn − αn)Wn yn, (3.11)

where {αn} and {βn} are two real sequences in [0, 1] satisfying the following conditions:
(C1) limn→∞ αn = 0 and Σ∞n=1 αn = ∞;
(C2) 0 < lim inf n→∞ βn < lim supn→∞ βn < 1.
Suppose that Ω := ∩∞n=1 Fix(Tn) ∩ EP(1) ∩ I(A, R) ∩ I(B, R) ≠ ∅ and the mapping Wn is defined by
(2.2). Then the sequence {xn} converges strongly to a point x∗ ∈ Ω, which solves the following variational
inequality:
⟨u + γf(x∗) − x∗, y − x∗⟩ ≤ 0, ∀y ∈ Ω.
Corollary 3.3. For any x0 ∈ H, let {xn} be the sequence in H generated by the following iterative algorithm:
for any y ∈ H and n ≥ 1,

Θ(un, y) + ϕ(y) − ϕ(un) + (1/r)⟨y − un, un − xn⟩ ≥ 0,
zn = JR,s2 (un − s2 Bun),
yn = JR,s1 (zn − s1 Azn),
xn+1 = αn γf(xn) + βn xn + [(1 − βn)I − αn G]Wn yn, (3.12)

where {αn} and {βn} are two real sequences in [0, 1] satisfying the following conditions:
(C1) limn→∞ αn = 0 and Σ∞n=1 αn = ∞;
(C2) 0 < lim inf n→∞ βn < lim supn→∞ βn < 1.
Suppose that Ω := ∩∞n=1 Fix(Tn) ∩ EP(2) ∩ I(A, R) ∩ I(B, R) ≠ ∅ and the mapping Wn is defined by (2.2).
Then the sequence {xn} converges strongly to x∗ ∈ Ω, where x∗ = PΩ(γf(x∗) + (I − G)x∗), which solves the
following variational inequality:
⟨γf(x∗) − Gx∗, y − x∗⟩ ≤ 0, ∀y ∈ Ω.
Corollary 3.4. For any x0 ∈ H, let {xn} be the sequence in H generated by the following iterative algorithm:
for any y ∈ H and n ≥ 1,

Θ(un, y) + ϕ(y) − ϕ(un) + (1/r)⟨y − un, un − (xn − rF xn)⟩ ≥ 0,
zn = JR,s2 (un − s2 Bun),
yn = JR,s1 (zn − s1 Azn),
xn+1 = αn γf(xn) + βn xn + [(1 − βn)I − αn G]yn, (3.13)

where {αn} and {βn} are two real sequences in [0, 1] satisfying the following conditions:
(C1) limn→∞ αn = 0 and Σ∞n=1 αn = ∞;
(C2) 0 < lim inf n→∞ βn < lim supn→∞ βn < 1.
Suppose that Ω := EP(1) ∩ I(A, R) ∩ I(B, R) ≠ ∅. Then the sequence {xn} converges strongly to x∗ ∈ Ω,
where x∗ = PΩ(γf(x∗) + (I − G)x∗), which solves the following variational inequality:
⟨γf(x∗) − Gx∗, y − x∗⟩ ≤ 0, ∀y ∈ Ω.
Corollary 3.5. For any x0 ∈ H, let {xn } be the sequence in H generated by the following iterative algorithm:
for any y ∈ H and n ≥ 1

1


Θ(un , y) + hy − un , un − (xn − rF xn )i ≥ 0,


r


zn = JR,s2 (un − s2 Bun ),
(3.14)


yn = JR,s1 (zn − s1 Azn ),




xn+1 = αn γf (xn ) + βn xn + [(1 − βn )I − αn G]Wn yn ,
where {αn } and {βn } are two real sequences in [0, 1] satisfying the following conditions:
(C1) limn→∞ αn = 0 and Σ∞n=1 αn = ∞;
(C2) 0 < lim inf n→∞ βn < lim supn→∞ βn < 1.
Suppose that Ω := ∩∞n=1 Fix(Tn) ∩ EP(3) ∩ I(A, R) ∩ I(B, R) ≠ ∅ and the mapping Wn is defined by (2.2).
Then the sequence {xn } converges strongly to x∗ ∈ Ω, where x∗ = PΩ (γf (x∗ ) + (I − G)x∗ ), which solves the
following variational inequality:
⟨γf(x) − Gx, y − x⟩ ≤ 0, ∀y ∈ Ω.
Corollary 3.6. For any x0 ∈ H, let {xn } be the sequence in H generated by the following iterative algorithm:
for any y ∈ H and n ≥ 1,

Θ(un, y) + (1/r)⟨y − un, un − (xn − rF xn)⟩ ≥ 0,
zn = JR,s2(un − s2 Bun),    (3.15)
yn = JR,s1(zn − s1 Azn),
xn+1 = αn γf(xn) + βn xn + [(1 − βn)I − αn G]yn,
where {αn } and {βn } are two real sequences in [0, 1] satisfying the following conditions:
(C1) limn→∞ αn = 0 and Σ∞n=1 αn = ∞;
(C2) 0 < lim inf n→∞ βn < lim supn→∞ βn < 1.
Suppose that Ω := EP(3) ∩ I(A, R) ∩ I(B, R) ≠ ∅ and the mapping Wn is defined by (2.2). Then the sequence {xn} converges strongly to x∗ ∈ Ω, where x∗ = PΩ(γf(x∗) + (I − G)x∗), which solves the following variational inequality:
⟨γf(x) − Gx, y − x⟩ ≤ 0, ∀y ∈ Ω.
Corollary 3.7. For any x0 ∈ H, let {xn } be the sequence in H generated by the following iterative algorithm:
for any y ∈ H and n ≥ 1,

Θ(un, y) + (1/r)⟨y − un, un − (xn − rF xn)⟩ ≥ 0,
zn = JR,s2(un − s2 Bun),    (3.16)
yn = JR,s1(zn − s1 Azn),
xn+1 = αn f(xn) + βn xn + (1 − βn − αn)yn,
where {αn } and {βn } are two real sequences in [0, 1] satisfying the following conditions:
(C1) limn→∞ αn = 0 and Σ∞n=1 αn = ∞;
(C2) 0 < lim inf n→∞ βn < lim supn→∞ βn < 1.
Suppose that Ω := EP(3) ∩ I(A, R) ∩ I(B, R) ≠ ∅ and the mapping Wn is defined by (2.2). Then the sequence {xn} converges strongly to x∗ ∈ Ω, where x∗ = PΩ f(x∗), which solves the following variational inequality:
⟨f(x) − x, y − x⟩ ≤ 0, ∀y ∈ Ω.
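To make the shape of these resolvent schemes concrete, the following is a toy sketch of iteration (3.16) on H = R under simplifying assumptions of our own (not data from the paper): Θ = 0 and F = 0, so the equilibrium step reduces to un = xn; A = B = 0; and Rx = x, whose resolvent is JR,s(v) = v/(1 + s), so that I(A, R) = I(B, R) = {0} and the iterates should tend to 0.

```python
# Toy sketch of iteration (3.16) on H = R (our illustrative assumptions, not the
# paper's setup): Theta = 0 and F = 0 make the equilibrium step the identity,
# A = B = 0, and R(x) = x has resolvent J_{R,s}(v) = v / (1 + s).

def resolvent(v, s):
    # J_{R,s} = (I + s*R)^(-1) for R(x) = x
    return v / (1.0 + s)

def iterate_316(x0, steps, s1=0.5, s2=0.5, f=lambda x: 0.5 * x):
    x = x0
    for n in range(1, steps + 1):
        alpha = 1.0 / (10 * n)   # satisfies (C1): alpha_n -> 0, sum alpha_n = infinity
        beta = 0.5               # satisfies (C2)
        u = x                    # equilibrium step with Theta = 0, F = 0
        z = resolvent(u, s2)     # z_n = J_{R,s2}(u_n - s2*B*u_n), B = 0
        y = resolvent(z, s1)     # y_n = J_{R,s1}(z_n - s1*A*z_n), A = 0
        x = alpha * f(x) + beta * x + (1 - beta - alpha) * y
    return x

print(abs(iterate_316(1.0, 200)))  # small: the iterates approach 0
```

Here Ω = {0}, since 0 ∈ Rx forces x = 0, and the limit is consistent with x∗ = PΩ f(x∗) = 0.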
4. Numerical Examples
In this section, we give a numerical example that supports our main theorem.
Example 4.1. For simplicity, we assume H = R and C = [−1, 1]. Let Θ(z, y) = −7z² + zy + 6y², F = I and φ(x) = x². Find z ∈ [−1, 1] such that

Θ(z, y) + φ(y) − φ(z) + (1/r)⟨y − z, z − (x − rx)⟩ ≥ 0, ∀y ∈ [−1, 1].
Solution. It can easily be seen that Θ, φ and F satisfy the conditions in Theorem 3.1. For any r > 0 and x ∈ [−1, 1], by Lemma 2.1, we can see that there exists z ∈ [−1, 1] such that, for any y ∈ [−1, 1],

Θ(z, y) + φ(y) − φ(z) + (1/r)⟨y − z, z − (x − rx)⟩ ≥ 0,
−7z² + 6y² + zy + y² − z² + (1/r)⟨y − z, z − (x − rx)⟩ ≥ 0,
7ry² + (rz + z − x + rx)y + (−8rz² − z² + xz − rxz) ≥ 0.
Let H(y) = 7ry² + (rz + z − x + rx)y + (−8rz² − z² + xz − rxz). Then H is a quadratic function of y with coefficients a = 7r, b = rz + z − x + rx and c = −8rz² − z² + xz − rxz. Therefore, we can compute the discriminant ∆ of H as follows:

∆ = b² − 4ac
= [rz + z − x + rx]² − 4(7r)(−8rz² − z² + xz − rxz)
= 225r²z² + 30rz² − 28rxz + 30r²xz + z² − 2zx − 2rx² + r²x² + x²
= (225r² + 30r + 1)z² + (30r² − 28r − 2)xz + (r² − 2r + 1)x²
= (15r + 1)²z² + 2(15r + 1)(r − 1)xz + (r − 1)²x²
= [(15r + 1)z + (r − 1)x]².
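The factorization of ∆ above, together with the nonnegativity of H at the choice z = ((1 − r)/(15r + 1))x derived next, can be verified numerically; the following sketch is our own check in plain Python, not code from the paper:

```python
# Check Example 4.1 numerically: with z = (1 - r)x / (15r + 1), the discriminant
# Delta = b^2 - 4ac of H(y) = a*y^2 + b*y + c vanishes, so H(y) >= 0 for all y.

def coefficients(x, z, r):
    a = 7 * r
    b = r * z + z - x + r * x
    c = -8 * r * z**2 - z**2 + x * z - r * x * z
    return a, b, c

def H(y, x, z, r):
    a, b, c = coefficients(x, z, r)
    return a * y**2 + b * y + c

def discriminant(x, z, r):
    a, b, c = coefficients(x, z, r)
    return b * b - 4 * a * c

for r in (0.25, 0.5, 2.0):
    for x in (-1.0, -0.3, 0.8):
        z = (1 - r) * x / (15 * r + 1)
        # Delta = [(15r + 1)z + (r - 1)x]^2, which is exactly 0 at this z
        assert abs(discriminant(x, z, r)) < 1e-12
        # a quadratic with a > 0 and zero discriminant is nonnegative
        assert all(H(-1 + 0.01 * k, x, z, r) >= -1e-12 for k in range(201))
print("checks passed")
```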
We know that H(y) ≥ 0 for all y ∈ [−1, 1] if it has at most one solution in [−1, 1]. Thus ∆ ≤ 0 and hence z = ((1 − r)/(15r + 1))x. Then we have z = Sr xn = ((1 − r)/(15r + 1))xn. Let {xn} be the sequence generated by
x1 = x ∈ [−1, 1] and

Θ(un, y) + ϕ(y) − ϕ(un) + (1/r)⟨y − un, un − (xn − rF xn)⟩ ≥ 0,
zn = JR,s2(un − s2 Bun),    (4.1)
yn = JR,s1(zn − s1 Azn),
xn+1 = αn(u + γf(xn)) + βn xn + [(1 − βn)I − αn(I + νG)]Wn yn.
Assume that JR,s1 = JR,s2 = I, G = I, Ax = x/3, Bx = x/2 and Wn x = x/2.
Algorithm 1. Choose r = 0.5, αn = 1/(10n), βn = n/(2n + 1), f(xn) = (1/10)xn, s1 = s2 = 1/3, γ = 1, ν = 1/2 and u = 0. Then the algorithm (4.1) becomes

un = (3/17)xn,
zn = (1/3)un,    (4.2)
yn = (2/9)zn,
xn+1 = (1/(100n))xn + (n/(2n + 1))xn + ((n + 1)/(2n + 1))(yn/2) − (1/(20n))yn, ∀n ≥ 1.
We can see that xn → 0 as n → ∞, where 0 is the unique solution of the optimization problem:

min (7/10)x² + C.    (4.3)
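Iteration (4.2) is simple enough to reproduce directly. The sketch below is our plain-Python re-implementation of Algorithm 1 (the paper's computations were done in Matlab 7.11.0); it regenerates the reported iterates, e.g. x2 ≈ 0.173518518518519, and confirms xn → 0:

```python
# Plain-Python re-implementation of Algorithm 1, i.e. iteration (4.2);
# the paper's own runs were done in Matlab 7.11.0.

def algorithm1(x1=0.5, steps=49):
    xs = [x1]
    x = x1
    for n in range(1, steps):
        u = 3.0 * x / 17.0          # u_n = (3/17) x_n
        z = u / 3.0                 # z_n = (1/3) u_n
        y = 2.0 * z / 9.0           # y_n = (2/9) z_n
        x = (x / (100.0 * n) + n * x / (2.0 * n + 1.0)
             + (n + 1.0) / (2.0 * n + 1.0) * (y / 2.0) - y / (20.0 * n))
        xs.append(x)
    return xs

xs = algorithm1()
print(xs[1])        # ≈ 0.173518518518519, the reported second iterate
print(abs(xs[-1]))  # essentially zero by n = 49
```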
Algorithm 2. Choose r = 0.5, αn = 1/(50n) and βn = n/(20n + 5). Then the algorithm (4.1) becomes

un = (3/17)xn,
zn = (1/3)un,    (4.4)
yn = (2/9)zn,
xn+1 = (1/(500n))xn + (n/(20n + 5))xn + ((19n + 5)/(20n + 5))(yn/2) − (1/(20n))yn, ∀n ≥ 1.
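The same check applies to iteration (4.4); only the coefficient sequences change, and the decay is visibly faster (x2 ≈ 0.023810457516340). Again, this is our own plain-Python sketch rather than the original Matlab code:

```python
# Plain-Python re-implementation of Algorithm 2, i.e. iteration (4.4);
# only alpha_n and beta_n differ from Algorithm 1.

def algorithm2(x1=0.5, steps=13):
    xs = [x1]
    x = x1
    for n in range(1, steps):
        u = 3.0 * x / 17.0          # u_n = (3/17) x_n
        z = u / 3.0                 # z_n = (1/3) u_n
        y = 2.0 * z / 9.0           # y_n = (2/9) z_n
        x = (x / (500.0 * n) + n * x / (20.0 * n + 5.0)
             + (19.0 * n + 5.0) / (20.0 * n + 5.0) * (y / 2.0) - y / (20.0 * n))
        xs.append(x)
    return xs

xs = algorithm2()
print(xs[1])        # ≈ 0.023810457516340, the reported second iterate
print(abs(xs[-1]))  # essentially zero already by n = 13
```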
Numerical results. In this part, we give the numerical results that support our main theorem, computed and plotted using Matlab 7.11.0.
Algorithm 1.

n     xn
1     0.500000000000000
2     0.173518518518519
3     0.070898759380295
4     0.030870860056218
5     0.013904609765372
6     0.006395839590864
...   ...
47    0.000000000000001
48    0.000000000000001
49    0.000000000000000
Figure 1. This table shows the values of the sequence {xn} at each iteration step.
Figure 2. This figure plots the above table; we can see that xn converges to zero.
Algorithm 2.

n     xn
1     0.500000000000000
2     0.023810457516340
3     0.001222979105852
4     0.000064618469337
5     0.000003465087782
6     0.000000187506436
...   ...
11    0.000000000000093
12    0.000000000000005
13    0.000000000000000
Figure 3. This table shows the values of the sequence {xn} at each iteration step.
Figure 4. This figure plots the above table; we can see that xn converges to zero.
5. Acknowledgements
We would like to thank the Faculty of Science, King Mongkut’s University of Technology Thonburi
(KMUTT) and the National Research Council of Thailand (NRCT-2555).
This research was completed at the Department of Mathematics Education, Gyeongsang National University. We are very grateful to Professor Yeol Je Cho for his kindness and most helpful comments.
References
[1] H. Brezis, Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert, North-Holland Mathematics Studies, 1973.
[2] L.C. Ceng, J.C. Yao, A hybrid iterative scheme for mixed equilibrium problems and fixed point problems, Journal of Computational and Applied Mathematics, 214 (2008), 186–201.
[3] P.L. Combettes, S.A. Hirstoaga, Equilibrium programming in Hilbert spaces, Journal of Nonlinear and Convex Analysis, 6 (2005), 117–136.
[4] P.L. Combettes, S.A. Hirstoaga, Equilibrium programming using proximal-like algorithms, Mathematical Programming, 78 (1996), 29–41.
[5] J.S. Jung, Iterative algorithms with some control conditions for quadratic optimizations, Panamerican Mathematical Journal, 16 (2006), 13–25.
[6] J.S. Jung, A general iterative scheme for k-strictly pseudo-contractive mappings and optimization problems, Applied Mathematics and Computation, 217 (2011), 5581–5588.
[7] P. Kocourek, W. Takahashi, J.C. Yao, Fixed point theorems and weak convergence theorems for generalized hybrid mappings in Hilbert spaces, Taiwanese Journal of Mathematics, 14 (2010), 2497–2511.
[8] B. Lemaire, Which fixed point does the iteration method select?, Recent Advances in Optimization (Trier, 1996), Lecture Notes in Economics and Mathematical Systems, 452, Springer, Berlin, (1997), 154–167.
[9] A. Moudafi, Viscosity approximation methods for fixed-point problems, Journal of Mathematical Analysis and Applications, 241 (2000), 46–55.
[10] M.A. Noor, Generalized set-valued variational inclusions and resolvent equations, Journal of Mathematical Analysis and Applications, 228 (1998), 206–220.
[11] J.W. Peng, Y. Wang, D.S. Shyu, J.C. Yao, Common solutions of an iterative scheme for variational inclusions, equilibrium problems and fixed point problems, Journal of Inequalities and Applications, vol. 2008, Article ID 720371, 15 pages.
[12] J.W. Peng, J.C. Yao, A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems, Taiwanese Journal of Mathematics, 12 (2008), 1401–1432.
[13] S.M. Robinson, Generalized equations and their solutions, part I: basic theory, Mathematical Programming Studies, 10 (1976), 128–141.
[14] R.T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM Journal on Control and Optimization, 14 (1976), 877–898.
[15] T. Suzuki, Strong convergence of Krasnoselskii and Mann's type sequences for one-parameter nonexpansive semigroups without Bochner integrals, Journal of Mathematical Analysis and Applications, 305 (2005), 227–239.
[16] S. Takahashi, W. Takahashi, Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space, Nonlinear Analysis: Theory, Methods and Applications, 69 (2008), 1025–1033.
[17] S. Takahashi, W. Takahashi, Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces, Journal of Mathematical Analysis and Applications, 331 (2007), 506–515.
[18] H.K. Xu, Iterative algorithms for nonlinear operators, Journal of the London Mathematical Society, 66 (2002), 240–256.
[19] Y. Yao, Y.J. Cho, Y.C. Liou, Iterative algorithms for variational inclusions, mixed equilibrium and fixed point problems with application to optimization problems, Central European Journal of Mathematics, 9 (2011), 640–656.
[20] Y. Yao, Y.C. Liou, J.C. Yao, Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings, Fixed Point Theory and Applications, vol. 2007, Article ID 64363, 12 pages.
[21] Y. Yao, Y.C. Liou, C. Lee, M.M. Wong, Convergence theorem for equilibrium problems and fixed point problems, Fixed Point Theory, 10 (2009), 347–363.
[22] S.S. Zhang, J.H.W. Lee, C.K. Chan, Algorithms of common solutions for quasi variational inclusion and fixed point problems, Applied Mathematics and Mechanics, 29 (2008), 571–581.