Mathematical analysis of a wave energy converter model
Arnaud Rougirel
To cite this version:
Arnaud Rougirel. Mathematical analysis of a wave energy converter model. 2012. <hal-00678055>
HAL Id: hal-00678055
https://hal.archives-ouvertes.fr/hal-00678055
Submitted on 12 Mar 2012
MATHEMATICAL ANALYSIS OF A WAVE ENERGY CONVERTER
MODEL
ARNAUD ROUGIREL
Abstract. In a context where sustainable development should be a priority, Orazov
et al. proposed in 2010 an excitation scheme for buoy-type ocean wave energy
converters. The simplest model for this scheme is a non-autonomous piecewise linear
second-order differential equation. The goal of this paper is to give a mathematical
framework for this model and to highlight some properties of its solutions. In
particular, we will look at bounded and periodic solutions, and compare the energy
performance of this novel WEC with that of a wave energy converter without mass
modulation.
1. Introduction
In recent years, sustainable development has become a great challenge. Because
of the crisis of fossil fuels, alternative sources of energy must be found. Moreover,
decades of non-sustainable development have produced pollution, climate disturbance
and loss of biodiversity. Thus it is preferable to seek novel sources of energy with
low impact on the environment.
In this respect, ocean waves provide an important source of renewable energy. Mechanical devices that harvest the energy stored in ocean waves are called wave energy
converters (WECs): see [OOS10].
Basically, a WEC is a floating body with a power takeoff system. It uses the vertical
oscillations induced by waves to produce electrical current. In order to optimize the
harvesting capabilities of a WEC, Orazov et al. proposed in 2010 a novel excitation method based on state-dependent mass modulation. We refer to [OOS10] for
detailed explanations and to
http://me.berkeley.edu/~bayram/wec/water intake animation.html
for an animation of this mechanism. Let us briefly explain how it works. The WEC can change
its mass by holding back or ejecting water from its floats: this is mass modulation.
The new idea of Orazov et al. is that the mass change occurs when the system is in a
precise state: this is state-dependent mass modulation.
If the square of the vertical position starts to increase – i.e. ½ (d/dt) x² = xẋ > 0 – then
mass is added. When xẋ becomes negative, the water is ejected. This gives rise to a
quasilinear equation. Orazov et al. have proposed the following simple model:

ẍ + 2ωδẋ + ωx = ωg   if xẋ > 0
ẍ + 2δẋ + x = g      if xẋ < 0.      (1.1)

Date: March 12, 2012.
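Before turning to the analysis, it may help to see the model in action. The sketch below integrates (1.1) with a classical fourth-order Runge–Kutta scheme; the pointwise treatment of the switching set xẋ = 0 is a numerical shortcut (not the careful definition of the flow given in Section 2), and the function names and step size are illustrative assumptions.

```python
import math

def simulate(delta, omega, g, x0, v0, t0, t1, dt=1e-3):
    """Integrate (1.1), i.e. xdd = c * (g(t) - 2*delta*xd - x), where
    c = omega on {x*xd > 0} and c = 1 elsewhere (choosing c = 1 on the
    switching set x*xd = 0 is a crude shortcut)."""
    def f(t, x, v):
        c = omega if x * v > 0 else 1.0
        return v, c * (g(t) - 2.0 * delta * v - x)

    n = int(round((t1 - t0) / dt))
    t, x, v = t0, x0, v0
    for _ in range(n):  # classical RK4 step for the first-order system (x, xd)
        k1 = f(t, x, v)
        k2 = f(t + dt / 2, x + dt / 2 * k1[0], v + dt / 2 * k1[1])
        k3 = f(t + dt / 2, x + dt / 2 * k2[0], v + dt / 2 * k2[1])
        k4 = f(t + dt, x + dt * k3[0], v + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return x, v
```

For ω = 1 the model is linear and the scheme can be checked against the exact damped oscillator x(t) = e^{−δt} sin(Ω1 t)/Ω1 started from (0, 1).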
Here x is the rescaled vertical position, ẋ = (d/dt) x, and g : R → R is a forcing term
accounting for wave excitation. The positive constant δ is a damping coefficient and
the parameter ω represents the mass added, namely
ω = M/(M + ∆M),
where M is the mass of the WEC and ∆M the mass modulation.
If ∆M = 0, i.e. ω = 1, then there is no mass modulation and the model is linear.
Otherwise, (1.1) is a piecewise linear system. The goal of this paper is to give a
mathematical framework for equation (1.1) and to highlight some properties of its
solutions. In particular, we will look at bounded and periodic solutions, and compare
the energy performance of this novel WEC with that of a WEC without mass modulation.
In the sequel, we introduce our main results. As a matter of fact, solutions
to (1.1) are C¹ functions, thus the well-posedness of (1.1) will be achieved with a
continuous dynamical system approach. Remark that it should be possible to use,
alternatively, the less standard theory of (discontinuous) piecewise linear systems (see
for instance [dBBCK08]), which models, for example, bilinear or impact oscillators.
Regarding well-posedness, the main point is to define the vector field on xẋ = 0.
That is to say, at each point M where xẋ = 0, we have to decide whether the solution will
enter xẋ > 0 or xẋ < 0. In the former case, we set at the point M the
vector field defined by the first equation of (1.1). Since we are looking for oscillating
solutions, we will eliminate solutions that remain on xẋ = 0. However, since the system
is non-autonomous, it may happen that the trajectory leaves xẋ > 0 at time t0 but
enters xẋ > 0 again for t > t0, t close to t0. This occurs for instance if 0 is a
local minimum of xẋ. Under non-degeneracy conditions on g, we are able to give,
in Theorem 3.1, an existence and uniqueness result. The key point is that the flows
defined by the two equations of (1.1) are consistent on xẋ = 0.
Concerning the properties of the solutions, following [OOS10], we address the issue
of bounded-input bounded-output stability. That is to say, if g is bounded, is x also
bounded? Surprisingly, this is not always the case, even if g = 0 (see [OOS10] or
Corollary 4.1). However, Theorem 4.2 states that solutions are bounded in C¹-norm
if the damping terms δ² and ωδ² are not too small. Notice that this result holds for
fairly general forcing terms g. In particular, we do not suppose g to be periodic.
Periodic solutions are investigated in Section 5. If ω = 1 then (1.1) reduces to a
linear equation which admits a periodic solution. Hence we will use a perturbation
argument to prove that, for ω ≃ 1, (1.1) possesses periodic solutions. This argument
is based on the implicit function theorem. The problem is reduced to an algebraic
equation in R¹² parametrized by ω. However, g must be a single sinusoidal wave.
Once the existence of periodic solutions is shown, we may ask whether this new WEC is
more efficient than a WEC without mass modulation. From a physical point of view,
we obtain the following energy balance for Equation (1.1):
EW + ED = Eg + Em
where EW is the energy of the WEC and Eg the energy brought by ocean waves. ED is the
sum of the energy harvested by the power takeoff system and the energy dissipated by
hydrodynamic damping. It would be useful to distinguish between these two energies
but this is not possible with the model (1.1). The new term Em is the energy generated
by mass modulation. If the variation ∆Em is positive then mass modulation increases
the energy of the system. If there is no mass modulation, i.e. ω = 1, then ∆Em = 0.
Remark 6.1 states that, for a periodic solution x, the variation of Em satisfies
∆Em := ½ (1/ω − 1)(ẋ²(t0) + x²(t2)).
Thus, if ω < 1 then
∆ED = ∆Eg + ∆Em > ∆Eg .
In this sense, the new WEC proposed by Orazov et al. is more efficient than standard
oscillators.
2. Mathematical settings
In this section, we will define solutions for Equation (1.1). We choose a continuous
dynamical system approach. We will make use of the following assumptions.
g ∈ C¹(R, R)      (2.1)
g ∈ C²(R, R)      (2.2)
if ġ(t0) = 0 for some t0 ∈ R then g̈(t0) ≠ 0      (2.3)
δ ∈ R, ω > 0.     (2.4)
A generic vector in R³ will be denoted by (x, ẋ, t)ᵗ. Setting
A+ = {(x, ẋ, t)ᵗ ∈ R³ | xẋ > 0}
A− = {(x, ẋ, t)ᵗ ∈ R³ | xẋ < 0}
A0 = {(x, ẋ, t)ᵗ ∈ R³ | xẋ = 0},
we have a partition of R³, namely
R³ = A+ ∪̇ A− ∪̇ A0.
In the sequel, we will use the following subsets of A0:
A0−+ = {(0, ẋ, t)ᵗ ∈ R³} ∪ {(x, 0, t)ᵗ ∈ R³ | x(−x + g(t)) > 0 or {−x + g(t) = 0 and ġ(t) = 0 and xg̈(t) > 0}}
A0+− = {(x, 0, t)ᵗ ∈ R³ | x(−x + g(t)) < 0 or {−x + g(t) = 0 and ġ(t) = 0 and xg̈(t) < 0}}
A0++ = {(x, 0, t)ᵗ ∈ R³ | x ≠ 0, −x + g(t) = 0, xġ(t) > 0}
A0−− = {(x, 0, t)ᵗ ∈ R³ | x ≠ 0, −x + g(t) = 0, xġ(t) < 0}.
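The case analysis behind these four subsets can be transcribed literally into code, which gives a quick way to test one's reading of the definitions. The sketch below is an illustration only: the labels and the signature are ad hoc assumptions, the point is passed together with the values g(t), ġ(t), g̈(t), and the non-degeneracy condition (2.3) is taken for granted.

```python
def classify(x, v, g, gd, gdd):
    """Locate the point (x, v, t)^t relative to A+, A- and the four
    subsets of A0, given the values g = g(t), gd = g'(t), gdd = g''(t).
    Assumes (2.3): gdd != 0 whenever gd == 0."""
    if x * v > 0:
        return "A+"
    if x * v < 0:
        return "A-"
    if x == 0:                  # the whole plane {x = 0} lies in A0-+
        return "A0-+"
    r = -x + g                  # here v == 0 and x != 0
    if x * r > 0:
        return "A0-+"
    if x * r < 0:
        return "A0+-"
    if gd != 0:                 # r == 0: the sign of x*g'(t) decides
        return "A0++" if x * gd > 0 else "A0--"
    return "A0-+" if x * gdd > 0 else "A0+-"  # gd == 0, so gdd != 0 by (2.3)
```

For instance, with g(t) = sin t at t = 0 (so g = 0, ġ = 1, g̈ = 0), the resting point (1, 0, 0)ᵗ satisfies x(−x + g(t)) = −1 < 0 and therefore lies in A0+−.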
Roughly speaking, if a solution x of (1.1) satisfies (x(t0), ẋ(t0), t0)ᵗ ∈ A0−+ for a certain time
t0 then the curve t ↦ (x(t), ẋ(t), t)ᵗ leaves A− and enters A+ at time t0. With the above
notation, we have the following results. Since their proofs are obvious, Proposition 2.1
and Lemma 2.2 are stated without proof.
Proposition 2.1. If (2.2) and (2.3) hold then A0−+ , A0+− , A0++ and A0−− form a
partition of A0 .
Next, we compute a Taylor expansion of xẋ.
Lemma 2.2. If x ∈ C⁴(R, R) then for t ∈ R and h ≃ 0, we have
xẋ(t + h) = xẋ(t) + (xẍ + ẋ²)(t)h + (½ x x⁽³⁾ + (3/2) ẋẍ)(t)h²
+ ((1/6) x x⁽⁴⁾ + (2/3) ẋ x⁽³⁾ + ½ ẍ²)(t)h³ + O(h⁴),
where x⁽³⁾ and x⁽⁴⁾ denote the third and fourth derivatives of x.
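The expansion can be checked numerically on any smooth test function. The sketch below (an illustrative assumption, not part of the paper) uses x(t) = sin t, whose derivatives are explicit, and verifies that the remainder of the third-order polynomial scales like h⁴.

```python
import math

def series(t, h):
    """Third-order Taylor polynomial of (x*xd)(t + h) from Lemma 2.2,
    specialized to x(t) = sin t."""
    x, xd, xdd = math.sin(t), math.cos(t), -math.sin(t)
    x3, x4 = -math.cos(t), math.sin(t)   # third and fourth derivatives
    return (x * xd
            + (x * xdd + xd ** 2) * h
            + (0.5 * x * x3 + 1.5 * xd * xdd) * h ** 2
            + (x * x4 / 6.0 + 2.0 * xd * x3 / 3.0 + 0.5 * xdd ** 2) * h ** 3)

def remainder(t, h):
    """|(x*xd)(t + h) - third-order polynomial|; should be O(h^4)."""
    return abs(math.sin(t + h) * math.cos(t + h) - series(t, h))
```

Dividing h by 10 should divide the remainder by roughly 10⁴.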
Lemma 2.3. Assume (2.2)–(2.4). Let (x0, ẋ0, t0)ᵗ ∈ R³ and let x be the solution of
ẍ + ω(2δẋ + x − g) = 0 on R
x(t0) = x0, ẋ(t0) = ẋ0.
For all t ∈ R, we put V(t) = (x(t), ẋ(t), t)ᵗ. Then, for some ε > 0,
(1) V (t0 ) ∈ A0−+ if and only if V ((t0 − ε, t0 )) ⊂ A− and V ((t0 , t0 + ε)) ⊂ A+ ;
(2) V (t0 ) ∈ A0+− if and only if V ((t0 − ε, t0 )) ⊂ A+ and V ((t0 , t0 + ε)) ⊂ A− ;
(3) V (t0 ) ∈ A+ ∪ A0++ if and only if V ((t0 − ε, t0 + ε) \ {t0 }) ⊂ A+ ;
(4) V (t0 ) ∈ A− ∪ A0−− if and only if V ((t0 − ε, t0 + ε) \ {t0 }) ⊂ A− .
Proof. We will first prove the "if" part of each assertion. If V(t0) belongs to A+ ∪ A−,
the result follows from the continuity of xẋ. If V(t0) ∈ A0++ then ẋ(t0) = 0 and
(−x + g)(t0) = 0, hence
ẍ(t0) = ω(−2δẋ − x + g)(t0) = 0
x⁽³⁾(t0) = ω(−2δẍ − ẋ + ġ)(t0) = ωġ(t0).
With Lemma 2.2,
xẋ(t0 + h) = ½ x x⁽³⁾(t0)h² + O(h³) = (ω/2) xġ(t0)h² + O(h³).
From the definition of A0++, we have xġ(t0) > 0, thus V((t0 − ε, t0 + ε) \ {t0}) ⊂ A+.
The proof for V(t0) ∈ A0−− is analogous. If V(t0) ∈ A0−+ then one of the four cases
below holds.
1) x(t0) = 0, ẋ(t0) ≠ 0. By Lemma 2.2,
xẋ(t0 + h) = ẋ(t0)² h + O(h²).
Hence, there exists some positive ε such that V((t0 − ε, t0)) ⊂ A− and V((t0, t0 + ε)) ⊂ A+.
2) x(t0) ≠ 0, ẋ(t0) = 0 and x(−x + g)(t0) > 0. Then
ẍ(t0) = ω(−x + g)(t0)
xẋ(t0 + h) = ωx(−x + g)(t0)h + O(h²),
and we obtain the same conclusion as in the previous case since ω > 0.
3) x(t0) ≠ 0, ẋ(t0) = 0, (−x + g)(t0) = ġ(t0) = 0 and xg̈(t0) > 0. Then
ẍ(t0) = x⁽³⁾(t0) = 0
x⁽⁴⁾(t0) = ω(−2δx⁽³⁾ − ẍ + g̈)(t0) = ωg̈(t0).
With Lemma 2.2,
xẋ(t0 + h) = (1/6) x x⁽⁴⁾(t0)h³ + O(h⁴) = (ω/6) xg̈(t0)h³ + O(h⁴),
and the same conclusion holds.
4) x(t0) = 0, ẋ(t0) = 0. Then
ẍ(t0) = ωg(t0)
xẋ(t0 + h) = ½ ẍ²(t0)h³ + O(h⁴) = ½ ω²g²(t0)h³ + O(h⁴).
If g(t0) ≠ 0 then we are done. If g(t0) = 0 then
x⁽³⁾(t0) = ω(−2δẍ − ẋ + ġ)(t0) = ωġ(t0).
If ġ(t0) > 0 then x⁽³⁾(t0) > 0 and we can deduce the sign of xẋ from the variations of x
and ẋ. Indeed, since ẍ(t0) = 0, ẍ is negative on some interval of the form (t0 − ε, t0)
and positive on (t0, t0 + ε). Hence ẋ is positive in a neighborhood of t0 except at t0,
where it vanishes. Finally, we obtain that x is negative on some interval of the form
(t0 − ε, t0) and positive on (t0, t0 + ε). If ġ(t0) < 0 then all signs are reversed and we
have the same conclusion.
If ġ(t0) = 0 then g̈(t0) ≠ 0 according to (2.3). Then
x = ẋ = ẍ = x⁽³⁾ = 0 at t0,
x⁽⁴⁾(t0) = ωg̈(t0) ≠ 0.
As in the case ġ(t0) ≠ 0, we compute the sign of xẋ by studying the variations of x
and ẋ. However, we consider x⁽⁴⁾ first.
The four cases above prove the "if" part of assertion (1) in Lemma 2.3, namely
V(t0) ∈ A0−+ =⇒ V((t0 − ε, t0)) ⊂ A−, V((t0, t0 + ε)) ⊂ A+.
The case where V(t0) ∈ A0+− is proved in the same way, which concludes the "if" part
of the proof.
Conversely, assume that V((t0 − ε, t0)) ⊂ A− and V((t0, t0 + ε)) ⊂ A+ for some ε > 0.
Then V(t0) cannot belong to A0+− ∪ A+ ∪ A0++ ∪ A− ∪ A0−− according to the "if" part
of the proof. Hence V(t0) is in A0−+ by Proposition 2.1. This proves equivalence
(1) of the lemma. The other cases are proved in the same way, which completes the
proof of the lemma.
2.1. Solutions of (1.1) and the corresponding Cauchy Problem.
Definition 2.1 (Local solutions). Let I be an interval of R. Under assumptions (2.1)
and (2.4), we say that x is a local solution to (1.1) on I if
• x ∈ C 1 (I, R) and I has positive (Lebesgue) measure;
• one of the three conditions below holds.
(i) x ∈ C 2 (I, R), V (t) := (x(t), ẋ(t), t)t belongs to A+ ∪ A0++ for all t ∈ I and
ẍ + 2ωδ ẋ + ωx = ωg
on I.
(ii) x ∈ C 2 (I, R), V (t) belongs to A− ∪ A0−− for all t ∈ I and
ẍ + 2δ ẋ + x = g
on I.
(iii) There is a time t0 in I such that, if I1 := I ∩ (−∞, t0) and I2 := I ∩ (t0, ∞), then
one of the two following conditions holds.
(iii-a) V(I1) ⊂ A−, V(I2) ⊂ A+ and
ẍ + 2δẋ + x = g on I1
ẍ + 2ωδẋ + ωx = ωg on I2.      (2.5)
(iii-b) V(I1) ⊂ A+, V(I2) ⊂ A− and
ẍ + 2ωδẋ + ωx = ωg on I1
ẍ + 2δẋ + x = g on I2.      (2.6)
Remark 2.1. This definition excludes oscillatory solutions x such that xẋ has infinitely
many sign changes on some compact interval. For instance, solutions of the form
x(t) = t⁵ sin(1/t) are excluded.
If x is a local solution on I then xẋ changes sign at most once (case (iii)).
However, xẋ may have many zeros on I. Indeed, in case (i), if V(t) belongs to A0++
then xẋ(t) = 0.
If (2.2)–(2.4) hold and I is an open interval then, in case (iii-a), V (t0 ) lies in A0−+
and in case (iii-b), V (t0 ) lies in A0+− (see Lemma 2.3).
If g = 0 then x = 0 is not a local solution since 0 ∈ A0−+ .
Definition 2.2. Let J be an interval of R. We say that x is a (global ) solution of (1.1)
on J if
• x ∈ C 1 (J, R);
• for all t0 ∈ J, there exists an interval I ⊂ J, relatively open in J and containing t0,
such that x is a local solution on I.
Definition 2.3. Let J be an interval of R, t0 ∈ J and x0 , ẋ0 be two real numbers.
Under assumptions (2.1) and (2.4), we say that x is a solution to the Cauchy problem
(corresponding to (1.1)) with initial condition (x0 , ẋ0 ) at time t0 if
• x is a global solution of (1.1) on J;
• x(t0 ) = x0 and ẋ(t0 ) = ẋ0 .
3. Existence and uniqueness of the Cauchy problem
In this section, we will consider separately the case where the forcing term g is
nontrivial and the case where it is identically zero.
Theorem 3.1. Assume (2.2)–(2.4) and
g(t) = 0 =⇒ ġ(t) ≠ 0.      (3.1)
Then, for any t0 , x0 and ẋ0 in R, the Cauchy problem corresponding to (1.1) with
initial condition (x0 , ẋ0 ) at time t0 admits a unique solution on R.
[Figure 1: trajectory in the phase space (x, ẋ) for a positive cycle, in the sense of
Definition 6.1; the switching times t0, . . . , t4 are marked on the trajectory.]
Proof. Uniqueness. Let x and y be two solutions of the above Cauchy problem and suppose
that (x, ẋ)(t) ≠ (y, ẏ)(t) for some time t > t0. If t1 denotes the infimum of these t then
(x, ẋ)(t1) = (y, ẏ)(t1) and there exists a decreasing sequence (τn)n≥0 ⊂ R such that
τn → t1 as n → ∞ and (x, ẋ)(τn) ≠ (y, ẏ)(τn), ∀n ≥ 0.      (3.2)
Since x, y are global solutions, there exists some ε > 0 such that x and y are local
solutions on I := (t1 − ε, t1 + ε); here we have used the fact that, in Definition 2.2, I
is relatively open in J = R. According to (3) of Lemma 2.3, if V(t1) ∈ A+ ∪ A0++ then
x and y satisfy condition (i) of Definition 2.1 on (t1 − ε′, t1 + ε′), for some ε′ ∈ (0, ε].
Hence x ≡ y on I by Cauchy's Theorem. We then get a contradiction with (3.2).
Cases (1), (2) and (4) of Lemma 2.3 also lead to a contradiction. Then Proposition
2.1 warrants that there is no other possibility, which completes the proof of uniqueness
of the solution to the Cauchy problem.
Existence. We will prove that there exists a solution on [t0 , ∞). Performing the
same construction on (−∞, t0 ), we will obtain a global solution on R.
If V(t0) ∈ A+ ∪ A0−+ ∪ A0++ then let x be the solution of
ẍ + ω(2δẋ + x − g) = 0 on [t0, ∞)
x(t0) = x0, ẋ(t0) = ẋ0.      (3.3)
By Lemma 2.3, xẋ > 0 on (t0, t0 + ε) for some positive ε. If xẋ ≥ 0 on [t0, ∞) then we
have a solution on [t0, ∞). Otherwise, there is a time t1 > t0 such that
xẋ > 0 on [t0, t1), (x(t1), ẋ(t1), t1)ᵗ ∈ A0+−.
Then x restricted to [t0, t1) is a local solution on [t0, t1).
If V(t0) ∈ A− ∪ A0+− ∪ A0−− then we consider the solution x of
ẍ + 2δẋ + x − g = 0 on [t0, ∞)
x(t0) = x0, ẋ(t0) = ẋ0.
We argue as in the previous case to get a local solution on [t0, t1) for some t1 ∈ (t0, ∞].
Without loss of generality, we may assume V(t0) ∈ A+ ∪ A0−+ ∪ A0++ and t1 < ∞.
Then, by induction, we construct an increasing sequence of times t1, . . . , tN, tN+1 with
N ≥ 1, tN ∈ R and tN+1 ∈ (tN, ∞] so that we have a global solution on [t0, tN+1). If
tN+1 = ∞ then we have a solution on [t0, ∞). Otherwise, there exists an increasing
sequence (tn)n≥0 and x in C¹(∪n≥0 [tn, tn+1], R) such that
V(t2n) ∈ A0−+, V(t2n−1) ∈ A0+− ∀n ≥ 1      (3.4)
x ∈ C²((tn, tn+1), R)
ẍ + ω(2δẋ + x − g) = 0, xẋ ≥ 0 on (t2n, t2n+1)      (3.5)
ẍ + 2δẋ + x − g = 0, xẋ ≤ 0 on (t2n+1, t2n+2), ∀n ≥ 0.      (3.6)
Then the theorem will be proved if tn → ∞. Let us argue by contradiction, assuming
that tn → T ∈ R. Then we denote by a the function of L∞(R) defined by
a(x) = ω if x > 0, a(x) = 1 if x < 0.
Since ẋ is continuous on [t0, T) and xẋ vanishes on a set of measure zero (see Lemma
2.3), we have for all t ∈ [t0, T),
ẋ(t) − ẋ(t0) + ∫_{t0}^{t} a(xẋ)(2δẋ + x − g)(s) ds = 0.      (3.7)
Then
|ẋ(t)| ≤ |ẋ(t0)| + max(1, ω) ∫_{t0}^{t} (2|δ||ẋ| + |x| + |g|)(s) ds
|x(t)| ≤ |x(t0)| + ∫_{t0}^{t} |ẋ(s)| ds.
Hence, y(t) := |x(t)| + |ẋ(t)| satisfies
y(t) ≤ y(t0) + C(T − t0) sup_{[t0,T)} |g| + C ∫_{t0}^{t} y(s) ds
where C is a constant depending only on δ and ω. By Gronwall's Lemma, y is bounded
on [t0, T). Thus, using also (3.7) with t0 = t′, we obtain, for another constant still
labelled C,
|x(t) − x(t′)| ≤ C|t − t′|, |ẋ(t) − ẋ(t′)| ≤ C|t − t′| ∀t, t′ ∈ [t0, T).
Hence, there exist a, b in R such that
x(t) → a, ẋ(t) → b as t → T.
By (3.4), V(t2n+1) ∈ A0+−, thus ẋ(t2n+1) = 0 and b = 0.
If a ≠ g(T) then, for all t ∈ ∪n≥0 (t2n, t2n+1),
ẍ(t) = ω(−2δẋ − x + g)(t) → ω(−a + g(T)) ≠ 0 as t → T.
Since ω > 0, we deduce that ẍ has a sign on ∪n≥n1 (tn, tn+1) for some large index n1.
Then ẋ is monotone (since it is continuous) on [T − ε, T). Thus xẋ has a sign on
[T − ε′, T), which contradicts (3.5) and (3.6).
Then a = g(T). From (3.5), (3.6), we deduce
lim_{t→T, t∈∪n≥0(tn,tn+1)} ẍ(t) = 0.      (3.8)
Differentiating (3.5) and (3.6) with respect to time, we get
lim_{t→T, t∈∪n≥0(t2n,t2n+1)} x⁽³⁾(t) = ωġ(T),
lim_{t→T, t∈∪n≥0(t2n+1,t2n+2)} x⁽³⁾(t) = ġ(T).      (3.9)
If ġ(T) ≠ 0 then x⁽³⁾ has a sign on ∪n≥n0 (tn, tn+1) for some large index n0. Consequently, ẍ is monotone on ∪n≥n0 (tn, tn+1). Remark that, since ẍ is not continuous, we
cannot deduce, at this stage, that ẍ has a sign in a left neighborhood of T. However,
we may obtain the sign of xẋ with Lemma 3.2 below. Now we will check that the last
assumption of Lemma 3.2 holds.
By (3.5), (3.6),
ẍ(t2n+1⁻) = ωẍ(t2n+1⁺), ẍ(t2n⁺) = ωẍ(t2n⁻)      (3.10)
where
ẍ(t2n+1⁻) := lim_{t→t2n+1, t<t2n+1} ẍ(t).
Since ω > 0, we have ẍ(tn⁻)ẍ(tn⁺) ≥ 0. Then Lemma 3.2 leads to a contradiction with
(3.5) and (3.6).
Lemma 3.2. Let t0, T be real numbers such that t0 < T. Assume that (tn)n≥0 ⊂ R
increases toward T. Besides, let x ∈ W^{2,∞}(t0, T) be such that
• ẍ restricted to (tn, tn+1) has a continuous extension on [tn, tn+1] for all n ≥ 0;
• one of the two conditions holds:
(i) ẍ is increasing on (tn, tn+1), for all n ≥ 0; or
(ii) ẍ is decreasing on (tn, tn+1), for all n ≥ 0;
• ẍ(tn⁻)ẍ(tn⁺) ≥ 0 for all n ≥ 1.
Then xẋ has a sign on (T − ε, T) for some positive ε, i.e. xẋ > 0 on (T − ε, T)
or xẋ < 0 on (T − ε, T).
Proof. If ẍ is increasing on (tn, tn+1) for all n ≥ 0 then one of the two following cases
holds.
1) ẍ(tn1⁻) > 0 for some index n1 ≥ 1. Then ẍ(tn1⁺) ≥ 0 since ẍ(tn⁻)ẍ(tn⁺) ≥ 0 by
assumption. Thus ẍ(tn1+1⁻) > 0 since ẍ is increasing on (tn1, tn1+1). By induction,
ẍ(tn⁻) > 0 for all n ≥ n1. Thus, for all n ≥ n1, ẍ(tn⁺) ≥ 0 (since ẍ(tn⁻)ẍ(tn⁺) ≥ 0) and,
then, ẍ > 0 on (tn, tn+1). Hence ẋ is increasing on (tn1, T) and has a sign since it is
continuous. We then deduce that xẋ has a sign on (T − ε, T).
2) ẍ(tn⁻) ≤ 0 for all n ≥ 1. Hence ẍ < 0 on (tn, tn+1) for all n ≥ 0, since ẍ is
increasing. Thus ẍ < 0 a.e. on (t0, T). Then ẋ is decreasing. As above, we conclude
that xẋ has a sign on (T − ε, T).
The case where ẍ is decreasing is analogous.
Let us continue the proof of Theorem 3.1. It remains to consider the case where
ġ(T) = 0. Then g̈(T) ≠ 0 by (2.3). We obtain from (3.5), (3.6) and (3.9),
x⁽⁴⁾(t) = ω(−2δx⁽³⁾ − ẍ + g̈)(t) → ωg̈(T) if t ∈ ∪n≥0 (t2n, t2n+1)
x⁽⁴⁾(t) = (−2δx⁽³⁾ − ẍ + g̈)(t) → g̈(T) if t ∈ ∪n≥0 (t2n+1, t2n+2).      (3.11)
If g̈(T) > 0 (the case g̈(T) < 0 can be done in the same way) then x⁽³⁾ is increasing on
∪n≥n0 (tn, tn+1) by (3.11). Let us show that x⁽³⁾ < 0 on J := ∪n≥n1 (tn, tn+1) for some
large index n1. For this, we will first show that x⁽³⁾(t2n+1⁻) < 0. Indeed,
x⁽³⁾(t2n+1⁻) = ω(−2δẍ(t2n+1⁻) + (−ẋ + ġ)(t2n+1))      (3.12)
ẍ(t2n+1⁻) = ω(−x + g)(t2n+1),      (3.13)
since ẋ(t2n+1) = 0. On J, we have, when t → T,
−x + g → 0, −ẋ + ġ → 0, −ẍ + g̈ → g̈(T) > 0,
by (3.8). Thus −ẍ + g̈ > 0 a.e. on (T − ε, T) for some ε > 0. Hence,
−ẋ + ġ < 0, −x + g > 0 on (T − ε, T).      (3.14)
Thus, for large n, ẍ(t2n+1⁻) > 0 (see (3.13)) and (−ẋ + ġ)(t2n+1) < 0. With (3.12),
x⁽³⁾(t2n+1⁻) < 0.      (3.15)
Next, let us show that x⁽³⁾(t2n⁻) < 0. We have
x⁽³⁾(t2n⁻) = −2δẍ(t2n⁻) + (−ẋ + ġ)(t2n)      (3.16)
ẍ(t2n⁻) = (−2δẋ − x + g)(t2n).      (3.17)
If there exists a subsequence (t2nk)k≥0 of (t2n)n≥0 such that ẋ(t2nk) > 0 for all k then,
since V(t2nk) ∈ A0−+ (by (3.4)), we have x(t2nk) = 0. Therefore x(t) → 0 and g(T) = 0.
Since it is assumed here that ġ(T) = 0, there is a contradiction with (3.1).
Thus ẋ(t2n) ≤ 0 for n large enough. Then (3.17), (3.14) imply that ẍ(t2n⁻) > 0 and,
with (3.14), (3.16), we obtain x⁽³⁾(t2n⁻) < 0; with (3.15), x⁽³⁾(tn⁻) < 0 for n large
enough. Since x⁽³⁾ is increasing on J, we have x⁽³⁾ < 0 on J. It follows from Lemma 3.2
that xẋ has a sign on (T − ε, T). This contradicts again (3.5), (3.6) and completes the
proof of the theorem.
3.1. The homogeneous problem. In this subsection, we will assume that
g = 0 on R.      (3.18)
Then the subsets of A0 reduce to
A0−+ = {(0, ẋ, t)ᵗ ∈ R³}, A0+− = {(x, 0, t)ᵗ ∈ R³ | x ≠ 0}, A0++ = A0−− = ∅.
Moreover, we have the following partition of R³:
R³ = A+ ∪̇ A− ∪̇ A0−+ ∪̇ A0+−.
Proposition 3.3. Under assumptions (2.4), (3.18), let x be a global solution of (1.1)
on R. Then
(x(t), ẋ(t)) ≠ (0, 0) ∀t ∈ R.
Proof. Arguing by contradiction, we suppose that there exists some t0 such that
V(t0) = (0, 0, t0)ᵗ.
It follows that V(t0) belongs to A0−+. Thus case (iii) of Definition 2.1 holds.
If (iii-b) is true then, for some positive ε, x solves the problem
ẍ + 2δẋ + x = 0 on (t0, t0 + ε)
x(t0) = 0, ẋ(t0) = 0.
Hence x = 0 on (t0, t0 + ε). But V((t0, t0 + ε)) ⊂ A− by Definition 2.1. We get a
contradiction. The case (iii-a) is analogous; accordingly
(x(t), ẋ(t)) ≠ (0, 0) ∀t ∈ R.
Theorem 3.4. Under assumptions (2.4), (3.18), let t0 ∈ R and (x0 , ẋ0 ) ∈ R2 \{(0, 0)}.
Then the Cauchy problem corresponding to (1.1) with initial condition (x0 , ẋ0 ) at time
t0 admits a unique solution.
Proof. The proof is similar to, and even simpler than, that of Theorem 3.1, so we skip
the details.
Uniqueness. We consider two solutions x and y and a time t1 satisfying (x, ẋ)(t1) =
(y, ẏ)(t1) and (3.2). Lemma 2.3 holds except the first equivalence, which becomes
V(t0) ∈ A0−+ \ {(0, 0, t0)ᵗ} ⇔ V((t0 − ε, t0)) ⊂ A− and V((t0, t0 + ε)) ⊂ A+.
By Proposition 3.3,
(x(t1), ẋ(t1), t1)ᵗ ≠ (0, 0, t1)ᵗ,
so that uniqueness follows as in Theorem 3.1.
Existence. For the homogeneous equation, explicit solutions on A+ and A− are available. We then construct a solution using the method of Theorem 3.1.
We will now investigate some properties of oscillating solutions of (1.1) when g = 0.
They occur when δ and ωδ² range over (0, 1).
Proposition 3.5. Let us assume that δ and ωδ² belong to (0, 1) and g = 0. If x
denotes the solution given by Theorem 3.4 then there exists a time τ0 ∈ R such that
x(τ0) = 0 and ẋ(τ0) > 0.
Proof. Arguing by contradiction, we assume that
x(t) ≠ 0 or ẋ(t) ≤ 0, ∀t ∈ R.      (3.19)
Then there exists a time t0 such that
x < 0 in (t0, ∞) or x > 0 in (t0, ∞).      (3.20)
Indeed, if x(t0) = 0 then (3.19) and Proposition 3.3 imply x < 0 in (t0, ∞); hence
(3.20) follows.
Next, we set
y = x if x > 0 in (t0, ∞), y = −x if x < 0 in (t0, ∞).
Then y solves (1.1) and y > 0 in (t0, ∞). We claim that, for some t1 ≥ t0,
ẏ < 0 in (t1, ∞) or ẏ ≥ 0 in (t1, ∞).      (3.21)
Indeed, if ẏ(t1) < 0 and there exists t2 > t1 such that
ẏ(t2) = 0, ẏ < 0 on [t1, t2),
then ÿ(t2⁻) ≥ 0; recall that ÿ(t2⁻) := lim_{t→t2, t<t2} ÿ(t). However,
ÿ(t2⁻) + 2ωδẏ(t2) + y(t2) = 0,
where ÿ(t2⁻) ≥ 0, ẏ(t2) = 0 and y(t2) > 0. We get a contradiction; therefore (3.21)
follows.
Using y > 0 in (t0 , ∞) and (3.21), we deduce that y is monotone in (t1 , ∞) and
satisfies
ÿ + 2δ ẏ + y = 0 in (t1 , ∞)
or
ÿ + 2ωδ ẏ + ωy = 0 in (t1 , ∞).
By computing the explicit solution, we obtain a contradiction since δ and ωδ 2 belong
to (0, 1). This completes the proof of the proposition.
The next result is due to Orazov et al.
Proposition 3.6 ([OOS10]). Let us assume that δ and ωδ² belong to (0, 1) and g = 0.
For every global solution x of (1.1), there exists an increasing sequence of times (tn)n≥0
such that the following properties hold for all n ≥ 0.
(1) If u := (x, ẋ)ᵗ then
u(t4n) ∈ {0} × (0, ∞),   u(t4n+1) ∈ (0, ∞) × {0},
u(t4n+2) ∈ {0} × (−∞, 0),   u(t4n+3) ∈ (−∞, 0) × {0}.
(2) xẋ > 0 in (t2n, t2n+1) and xẋ < 0 in (t2n+1, t2n+2).
(3) With the notation
Ωω := √ω (1 − ωδ²)^{1/2} (so that Ω1 = (1 − δ²)^{1/2})      (3.22)
Tω := (1/Ωω) arctan(Ωω/(ωδ))      (3.23)
T− := π/Ω1 − (1/Ω1) arctan(Ω1/δ),      (3.24)
we have
tn+2 − tn = t2 − t0 = Tω + T−.      (3.25)
(4) Finally, by setting
k(ω) := exp(−ωδTω − δT−) (1/√ω),      (3.26)
there holds
ẋ(t4n+2) = −k(ω)ẋ(t4n).
Proof. By Proposition 3.3, V(0) ≠ 0. By Proposition 3.5, we may assume, without loss
of generality, that u(0) lies in {0} × (0, ∞); so we set t0 := 0.
• A function x0 ∈ C²(R, R) is a solution of
ẍ + 2ωδẋ + ωx = 0, x(0) = 0, ẋ(0) = ẋ0      (3.27)
if and only if
x0(t) = e^{−ωδt} (ẋ0/Ωω) sin Ωω t.
Moreover,
ẋ0(T) = 0 ⇔ tan Ωω T = Ωω/(ωδ).      (3.28)
The smallest positive T satisfying (3.28) gives Tω in (3.23). As a consequence,
x0(Tω) = e^{−ωδTω} (ẋ0/Ωω) sin Ωω Tω      (3.29)
and, since Tω is minimal, we have Ωω Tω ∈ (0, π/2).
• Let x1 be the solution of
ẍ + 2δẋ + x = 0, x(0) = x0(Tω), ẋ(0) = 0.
If we write x1 in the form
x1(t) = e^{−δt} (A cos Ω1 t + B sin Ω1 t),
we have
A = x0(Tω), A/B = Ω1/δ.
Besides,
x1(T) = 0 ⇔ tan Ω1 T = −Ω1/δ.      (3.30)
The smallest positive T satisfying this equation gives T− in (3.24). We have also
Ω1 T− ∈ (π/2, π). From (3.29) and (3.26), we deduce that
ẋ1(T−) = −k(ω)ẋ0
where
k(ω) := exp(−ωδTω − δT−) sin(Ωω Tω) sin(Ω1 T−) (δ² + Ω1²)/(Ω1 Ωω).
Since Ωω Tω ∈ (0, π/2) and Ω1 T− ∈ (π/2, π), k(ω) is positive. Moreover, with (3.28)
and (3.30), we infer
sin²(Ωω Tω) = Ωω²/(ω²δ² + Ωω²),   sin²(Ω1 T−) = Ω1²/(δ² + Ω1²).
Thus
k(ω)² = exp(−2ωδTω − 2δT−) (1/ω),
and (3.26) follows. Since the equation is autonomous, we obtain the existence of a
sequence (tn)n≥0 satisfying the four properties of the proposition.
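The algebra above is easy to double-check numerically: the intermediate expression for k(ω) involving the sines must coincide with the closed form (3.26). The sketch below (function and variable names are ad hoc assumptions) evaluates both.

```python
import math

def k_both(delta, omega):
    """Return (product form, closed form (3.26)) of k(omega), assuming
    delta and omega*delta**2 lie in (0, 1)."""
    Om_w = math.sqrt(omega * (1.0 - omega * delta ** 2))   # Omega_omega
    Om_1 = math.sqrt(1.0 - delta ** 2)                     # Omega_1
    T_w = math.atan(Om_w / (omega * delta)) / Om_w         # T_omega, (3.23)
    T_m = (math.pi - math.atan(Om_1 / delta)) / Om_1       # T_-,     (3.24)
    damp = math.exp(-omega * delta * T_w - delta * T_m)
    product = (damp * math.sin(Om_w * T_w) * math.sin(Om_1 * T_m)
               * (delta ** 2 + Om_1 ** 2) / (Om_1 * Om_w))
    return product, damp / math.sqrt(omega)                # (3.26)
```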
Next, we will investigate some properties of k(ω).
Proposition 3.7. Let δ ∈ (0, 1) and k : (0, 1/δ²) → (0, ∞) be the function defined by
(3.26). Then k is decreasing on (0, 1/δ²) from ∞ to k((1/δ²)⁻). Besides, k((1/δ²)⁻) < δ/e < 1
and
k(ω) ≃ e^{−δT−}/√ω when ω → 0.
As a consequence, there exists a unique number, denoted by ω(δ), satisfying
k(ω(δ)) = 1.
Moreover, ω(δ) < 1, ω(0⁺) = 1, ω(1⁻) = 0, and, setting X1 := (1/δ² − 1)^{1/2}, there holds
(see Figure 2)
exp(−2π/X1) ≤ ω(δ) ≤ exp(−2π/X1 + (2/X1) arctan X1).      (3.31)
[Figure 2: lower and upper bounds for ω(δ) given by (3.31), plotted against δ ∈ (0, 1),
where ω(δ) is the unique value of ω for which the homogeneous equation (1.1) has a
periodic solution.]
From (3.31), we deduce the less sharp but simpler estimates for ω(δ):
exp(−2π/X1) ≤ ω(δ) ≤ exp(−π/X1).      (3.32)
Proof. Setting
Xω := Ωω/(ωδ) = (1/(ωδ²) − 1)^{1/2},
we obtain with (3.23),
ωδTω = (1/Xω) arctan Xω.      (3.33)
By a concavity argument, y ↦ (arctan y)/y is decreasing on (0, ∞). Thus
ω ↦ exp(−ωδTω)
is decreasing on (0, 1/δ²), and so is k. By (3.33),
ωδTω → 1 as ω → 1/δ².
Thus
k((1/δ²)⁻) = δ exp(−1 − δT−) < δ/e.
Let us now look at k(ω) for ω close to 0. When ω → 0, Xω → ∞. Thus (see (3.33)),
ωδTω → 0.
We conclude with (3.26).
Let us now consider the solution ω(δ) of k(ω) = 1. By (3.26), (3.24),
ω(δ) ≤ exp(−2δT−) = exp(−2π/X1 + (2/X1) arctan X1).
Regarding the lower bound, since k is decreasing and k(1) < 1, we have ω(δ) < 1. Hence,
ω(δ)δT_{ω(δ)} = (1/X_{ω(δ)}) arctan X_{ω(δ)} ≤ (1/X1) arctan X1.
Thus, with (3.24),
√(ω(δ)) ≥ exp(−(1/X1) arctan X1 − δT−) = exp(−π/X1).
This proves (3.31).
Finally, ω(0⁺) = 1 and ω(1⁻) = 0 follow from (3.32).
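Since k is decreasing, the number ω(δ) is easy to compute by bisection, which gives a direct numerical check of the bounds (3.31). The sketch below is illustrative only (ad hoc names and tolerances).

```python
import math

def k(delta, omega):
    """k(omega) from (3.26), for 0 < omega < 1/delta**2."""
    Om_w = math.sqrt(omega * (1.0 - omega * delta ** 2))
    Om_1 = math.sqrt(1.0 - delta ** 2)
    T_w = math.atan(Om_w / (omega * delta)) / Om_w
    T_m = (math.pi - math.atan(Om_1 / delta)) / Om_1
    return math.exp(-omega * delta * T_w - delta * T_m) / math.sqrt(omega)

def omega_of(delta):
    """Unique root of k = 1 (k decreases from +inf to a value < 1)."""
    lo, hi = 1e-9, (1.0 - 1e-9) / delta ** 2
    for _ in range(200):            # plain bisection
        mid = 0.5 * (lo + hi)
        if k(delta, mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For δ = 0.5 this gives ω(δ) ≈ 0.06, between the bounds exp(−2π/X1) ≈ 0.027 and exp(−2π/X1 + (2/X1) arctan X1) ≈ 0.089 from (3.31).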
4. Asymptotic properties of the solutions to (1.1)
We start with the homogeneous case. In this situation, the long-time behavior of the
system is completely known and depends on the value of k(ω). More precisely, we have
the following statement.
Corollary 4.1. Under the assumptions and notation of Propositions 3.6 and 3.7:
• if k(ω) < 1, i.e. ω > ω(δ), then |x(t)| + |ẋ(t)| → 0 as t → ∞;
• if k(ω) = 1, i.e. ω = ω(δ), then x is a 2(Tω + T−)-periodic function;
• if k(ω) > 1, i.e. ω < ω(δ), then |x(t)| + |ẋ(t)| → ∞ as t → ∞.
Remark 4.1. The case k(ω) = 1 is surprising since, from the point of view of applications,
it implies that power can be extracted without waves! By Proposition 3.7, for any
admissible δ, we can choose an appropriate ω so that k(ω) = 1. This suggests that the
model (1.1) should be modified in order to avoid such periodic solutions.
It follows from the proof below that the decay (respectively, growth) rate toward
zero (respectively, infinity) is exponential.
Proof. According to Proposition 3.6 and (3.29), for all n ≥ 0, we have
x(t4n) = 0,   ẋ(t4n) = k(ω)^{2n} ẋ(t0)
x(t4n+1) = e^{−ωδTω} (sin(Ωω Tω)/Ωω) ẋ(t4n),   ẋ(t4n+1) = 0
x(t4n+2) = 0,   ẋ(t4n+2) = −k(ω)ẋ(t4n)
x(t4n+3) = e^{−ωδTω} (sin(Ωω Tω)/Ωω) ẋ(t4n+2),   ẋ(t4n+3) = 0.
• If k(ω) < 1 then the above equalities imply
|x(tn)| + |ẋ(tn)| → 0 as n → ∞.      (4.1)
Let n ∈ N and t ∈ (t2n, t2n+1). By (2) of Proposition 3.6, V(t) := (x(t), ẋ(t), t)ᵗ ∈ A+.
Thus, if we multiply the first equation of (1.1) by ẋ and integrate between t2n and t,
we get
(½ ẋ² + (ω/2) x²)(t) + 2ωδ ∫_{t2n}^{t} ẋ²(s) ds = (½ ẋ² + (ω/2) x²)(t2n).      (4.2)
In the same way, if t ∈ (t2n+1, t2n+2) then V(t) ∈ A− and
(½ ẋ² + ½ x²)(t) + 2δ ∫_{t2n+1}^{t} ẋ²(s) ds = (½ ẋ² + ½ x²)(t2n+1).      (4.3)
Combining (4.1)–(4.3), we obtain |x(t)| + |ẋ(t)| → 0.
• If k(ω) > 1 then, since ẋ(t0) ≠ 0 according to Proposition 3.6, we have
|x(tn)| + |ẋ(tn)| → ∞ as n → ∞.      (4.4)
For all t ∈ (t2n, t2n+1), there holds (see (4.2))
(½ ẋ² + (ω/2) x²)(t2n+1) + 2ωδ ∫_{t}^{t2n+1} ẋ²(s) ds = (½ ẋ² + (ω/2) x²)(t).
Hence,
lim_{t→∞, t∈∪n≥0 [t2n,t2n+1]} |x(t)| + |ẋ(t)| = ∞.
Arguing in the same way on A−, we obtain the convergence toward ∞.
• If k(ω) = 1 then
(x(t4), ẋ(t4)) = (x(t0), ẋ(t0)).
Since the equation is autonomous and Theorem 3.4 provides a uniqueness result, x is
periodic.
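The dichotomy of Corollary 4.1 can be observed numerically. The sketch below re-integrates the homogeneous system with a crude RK4 scheme (the pointwise handling of the switching set is an assumption, adequate only for this qualitative check); for δ = 0.3, the bounds (3.31) place ω(0.3) roughly between 0.14 and 0.31, so ω = 0.9 should give decay and ω = 0.05 growth.

```python
import math

def run(delta, omega, t1, dt=1e-3):
    """Crude RK4 integration of the homogeneous system (g = 0) from
    (x, xd) = (0, 1); the coefficient switch on x*xd = 0 is handled
    pointwise, which is enough for the qualitative dichotomy.
    Returns |x(t1)| + |xd(t1)|."""
    def f(x, v):
        c = omega if x * v > 0 else 1.0
        return v, -c * (2.0 * delta * v + x)
    x, v = 0.0, 1.0
    for _ in range(int(round(t1 / dt))):
        k1 = f(x, v)
        k2 = f(x + dt / 2 * k1[0], v + dt / 2 * k1[1])
        k3 = f(x + dt / 2 * k2[0], v + dt / 2 * k2[1])
        k4 = f(x + dt * k3[0], v + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return abs(x) + abs(v)
```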
In the sequel, we give a bounded-input-bounded-output stability result. The main
tools are a rescaling argument and an energy method. Fairly general forcing terms g
are allowed. In particular, g is not assumed to be periodic.
Theorem 4.2. Under assumption (2.1), let us suppose, in addition, that
ωδ² > 1/2      (4.5)
δ > 1/√2      (4.6)
g is bounded on [0, ∞).      (4.7)
Then Equation (1.1) admits a bounded absorbing set in the phase space R². In particular, for all positive ε and any global solution x of (1.1) on [0, ∞), there exists a time
T > 0 (depending on ε, x(0) and ẋ(0)) such that for t > T,
|x(t)| ≤ √(8δ³ C(δ, ω)) ‖g‖∞ + ε
|ẋ(t)| ≤ 2√(δ C(δ, ω)) ‖g‖∞ + ε
where
‖g‖∞ := sup_{t≥0} |g(t)|      (4.8)
C(δ, ω) := (δ/4) max(ω²/(2ωδ² − 1), 1/(2δ² − 1)).      (4.9)
Proof. Let x be a global solution of (1.1) in the sense of Definition 2.2. Without loss
of generality, we may assume that, for some positive ε,
V (0, ε) ⊂ A+ ∪ A0++ .
Next, we claim that for all T > 0, there exists a nonnegative integer N and a sequence 0 = t₀ < t₁ < · · · < t_N < t_{N+1} := T such that for n ≥ 0,
ẍ + 2ωδẋ + ωx = ωg on (t_{2n}, t_{2n+1})    (4.10)
ẍ + 2δẋ + x = g on (t_{2n+1}, t_{2n+2}).    (4.11)
Since the assumptions on g are weaker than the ones in Theorem 3.1, this claim has
to be justified. If V ((0, T )) ⊂ A+ ∪ A0++ then we put N := 0, t1 := T so that, by
Definition 2.1, (4.10) holds on (0, T ) = (t0 , t1 ). Otherwise, V leaves A+ ∪ A0++ at some
time t1 > 0. Thus since x is a local solution, there exists ε > 0 such that
V (t1 , t1 + ε) ⊂ A−
and x solves (4.11) on (t1 , t1 + ε). Then the claim follows easily by induction.
Next we set
β := 1/(2δ)    (4.12)
y(t) := e^{βt} x(t)
E(t) := (1/2)ẏ²(t) + (1/(8δ²))y²(t), ∀t ≥ 0.
Then for t ∈ (0, t₁),
ÿ + 2(ωδ − β)ẏ + (1/(4δ²))y = e^{βt} ω g(t)
and
E(t) + 2(ωδ − β) ∫₀ᵗ ẏ²(s) ds = ∫₀ᵗ e^{βs} ω g(s) ẏ(s) ds + E(0).    (4.13)
Due to (4.5),
ωδ − β = ωδ − 1/(2δ) > 0.
So, using Young’s inequality
ab ≤ 2(ωδ − β)a² + (1/(8(ωδ − β)))b²,
we derive from (4.13)
E(t) ≤ (ω²/(8(ωδ − β))) ∫₀ᵗ e^{2βs} g²(s) ds + E(0).    (4.14)
In particular, since E is continuous on R,
E(t₁) ≤ (ω²/(8(ωδ − β))) ∫₀^{t₁} e^{2βs} g²(s) ds + E(0).    (4.15)
If N ≥ 1 then, noticing that E is independent of ω – this fact is the key point of the proof – we have
E(t₂) ≤ (1/(8(δ − β))) ∫_{t₁}^{t₂} e^{2βs} g²(s) ds + E(t₁).    (4.16)
Hence, (4.15), (4.16) and the notation (4.9) imply
E(t₂) ≤ C(δ, ω) ∫₀^{t₂} e^{2βs} g²(s) ds + E(0).
By induction, there results
E(T) ≤ C(δ, ω) (e^{2βT}/(2β)) ‖g‖²∞ + E(0).
Let us go back to the function x. For all t ≥ 0, we have
2E(t) = e^{2βt}((ẋ + βx)² + β²x²)(t).
Thus
(ẋ + βx)²(t) + β²x²(t) ≤ (C(δ, ω)/β)‖g‖²∞ + 2e^{−2βt}E(0).
Using
(1/2)a² ≤ (a + b)² + b²,
we derive
ẋ(t)² ≤ 4δ C(δ, ω)‖g‖²∞ + 4e^{−t/δ}E(0)
x(t)² ≤ 8δ³ C(δ, ω)‖g‖²∞ + 8δ² e^{−t/δ}E(0).
This completes the proof of the theorem.
Remark 4.2. An immediate consequence of Theorem 4.2 is that any periodic solution x and, a fortiori, any limit cycle of (1.1) satisfy
|x(t)| ≤ (8δ³ C(δ, ω))^{1/2} ‖g‖∞
|ẋ(t)| ≤ 2(δ C(δ, ω))^{1/2} ‖g‖∞, ∀t ∈ R.
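The absorbing-set bounds of Theorem 4.2 can be exercised numerically. A minimal sketch, assuming illustrative parameters satisfying (4.5)–(4.7) and the constant C(δ, ω) of (4.9); the integrator, the forcing g(t) = sin t, and the helper `simulate` are choices of this sketch, not from the paper:

```python
import math

def simulate(omega, delta, g, x0, v0, t_end, h=1e-3):
    # Forced piecewise-linear model (1.1):
    #   x'' + 2*omega*delta*x' + omega*x = omega*g(t)  while x*x' > 0,
    #   x'' + 2*delta*x' + x = g(t)                    while x*x' < 0.
    def accel(t, x, v):
        if x * v > 0:
            return -2.0 * omega * delta * v - omega * x + omega * g(t)
        return -2.0 * delta * v - x + g(t)
    t, x, v = 0.0, x0, v0
    x_tail, v_tail = [], []
    while t < t_end:
        # one classical Runge-Kutta step
        k1x, k1v = v, accel(t, x, v)
        k2x, k2v = v + 0.5*h*k1v, accel(t + 0.5*h, x + 0.5*h*k1x, v + 0.5*h*k1v)
        k3x, k3v = v + 0.5*h*k2v, accel(t + 0.5*h, x + 0.5*h*k2x, v + 0.5*h*k2v)
        k4x, k4v = v + h*k3v, accel(t + h, x + h*k3x, v + h*k3v)
        x += h * (k1x + 2*k2x + 2*k3x + k4x) / 6.0
        v += h * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        t += h
        if t > t_end / 2:        # keep only the tail, past the transient
            x_tail.append(abs(x))
            v_tail.append(abs(v))
    return max(x_tail), max(v_tail)

# Illustrative parameters with omega*delta**2 > 1/2 and delta > 1/sqrt(2):
omega, delta, g_inf = 0.9, 0.8, 1.0
C = (delta / 4.0) * max(omega**2 / (2*omega*delta**2 - 1), 1.0 / (2*delta**2 - 1))
x_bound = math.sqrt(8 * delta**3 * C) * g_inf
v_bound = 2 * math.sqrt(delta * C) * g_inf
x_max, v_max = simulate(omega, delta, lambda t: math.sin(t),
                        x0=2.0, v0=0.0, t_end=60.0)
```

After the transient, the long-time amplitudes should lie below the explicit bounds of the theorem (with margin, since the bounds are not claimed to be sharp).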
5. Periodic Solutions
If ω = 1 and g is periodic then (1.1) reduces to a linear equation which admits a periodic solution. Hence we will use a perturbation argument to prove that, for ω ≃ 1, (1.1) possesses periodic solutions. This argument is based on the implicit function theorem. However, due to the discontinuity of ẍ, standard results on the persistence of periodic solutions do not apply here. Accordingly, we will use the fact that explicit solutions are available for suitable forcing g. We then perform a perturbation argument on the coefficients of these explicit solutions. We get an algebraic equation in R¹² parametrized by ω.
Theorem 5.1. Let us assume that
δ ∈ (0, 1), ωδ² ∈ (0, 1)    (5.1)
g(t) = g₀ sin(αt) ∀t ∈ R    (5.2)
where
α > 0, g₀ ∈ R*.    (5.3)
Then for ω close enough to 1, (1.1) admits a (2π/α)-periodic solution.
Proof. Without loss of generality, we may assume g₀ = 1. For ω = 1, (1.1) becomes
ẍ + 2δẋ + x = sin(αt).    (5.4)
Let the function x_{p,1} be defined by
x_{p,1}(t) = X₁ sin(αt − θ₁), ∀t ∈ R,
with
X₁ = 1/((1 − α²)² + 4δ²α²)^{1/2},
θ₁ = arctan(2δα/(1 − α²)) if 0 < α < 1, θ₁ = π/2 if α = 1, θ₁ = π + arctan(2δα/(1 − α²)) if 1 < α.
Remark that θ₁ belongs to (0, π) and that x_{p,1} is a (2π/α)-periodic solution to (5.4).
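The formulas for X₁ and θ₁ can be verified directly: with the exact derivatives of x_{p,1}, the residual in (5.4) vanishes on every branch of θ₁. A sketch with an illustrative δ and sample points (the helper names are ours):

```python
import math

def theta1(alpha, delta):
    # Phase lag of the particular solution, with the three branches above.
    if alpha < 1.0:
        return math.atan(2 * delta * alpha / (1 - alpha**2))
    if alpha == 1.0:
        return math.pi / 2
    return math.pi + math.atan(2 * delta * alpha / (1 - alpha**2))

def residual(alpha, delta, t):
    # Residual of x_{p,1}(t) = X1*sin(alpha*t - theta1) in equation (5.4):
    #   x'' + 2*delta*x' + x - sin(alpha*t)
    X1 = 1.0 / math.sqrt((1 - alpha**2) ** 2 + 4 * delta**2 * alpha**2)
    th = theta1(alpha, delta)
    x = X1 * math.sin(alpha * t - th)
    xd = alpha * X1 * math.cos(alpha * t - th)
    xdd = -alpha**2 * X1 * math.sin(alpha * t - th)
    return xdd + 2 * delta * xd + x - math.sin(alpha * t)

checks = [abs(residual(a, 0.3, t))
          for a in (0.5, 1.0, 2.0)      # one sample per branch of theta1
          for t in (0.0, 0.7, 1.9, 3.1)]
```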
Moreover, if we set
t_k := θ₁/α + kπ/(2α), ∀k = 0, 1, 2, 3, 4
V(t) := (x_{p,1}(t), ẋ_{p,1}(t), t)ᵗ
then for i = 0, 1,
V(t_{2i}, t_{2i+1}) ⊂ A+, V(t_{2i+1}, t_{2i+2}) ⊂ A−.    (5.5)
For ω ≃ 1, we define x_{p,ω} by
x_{p,ω}(t) = X_ω sin(αt − θ_ω), ∀t ∈ R,
with
X_ω = ω/((ω − α²)² + 4ω²δ²α²)^{1/2},
θ_ω = arctan(2ωδα/(ω − α²)) if α² < ω, θ_ω = π/2 if ω = α², θ_ω = π + arctan(2ωδα/(ω − α²)) if ω < α².
Then, for ω ≃ 1, a function x ∈ C¹(R, R) is a (2π/α)-periodic solution to (1.1) on R in the sense of Definition 2.2, if there exist 12 real numbers A₀, B₀, . . . , A₃, B₃ and t₀(ω) < t₁(ω) < t₂(ω) < t₃(ω) such that the four following conditions hold true.
• For all t ∈ (t₀(ω), t₁(ω)),
x(t) = x₀(t) := e^{−ωδt}(A₀ cos Ω_ω t + B₀ sin Ω_ω t) + x_{p,ω}(t)
(recall that Ω_ω := √ω (1 − ωδ²)^{1/2}, see (3.22)) and
x₀(t₀(ω)) = 0, ẋ₀(t₁(ω)) = 0    (5.6)
V(t₀(ω), t₁(ω)) ⊂ A+.
• For all t ∈ (t₁(ω), t₂(ω)),
x(t) = x₁(t) := e^{−δt}(A₁ cos Ω₁ t + B₁ sin Ω₁ t) + x_{p,1}(t)
and
x₁(t₁(ω)) = x₀(t₁(ω)), ẋ₁(t₁(ω)) = 0, x₁(t₂(ω)) = 0    (5.7)
V(t₁(ω), t₂(ω)) ⊂ A−.
• For all t ∈ (t₂(ω), t₃(ω)),
x(t) = x₂(t) := e^{−ωδt}(A₂ cos Ω_ω t + B₂ sin Ω_ω t) + x_{p,ω}(t)
and
ẋ₂(t₂(ω)) = ẋ₁(t₂(ω)), x₂(t₂(ω)) = 0, ẋ₂(t₃(ω)) = 0    (5.8)
V(t₂(ω), t₃(ω)) ⊂ A+.
• For all t ∈ (t₃(ω), t₄(ω)) where t₄(ω) := t₀(ω) + 2π/α,
x(t) = x₃(t) := e^{−δt}(A₃ cos Ω₁ t + B₃ sin Ω₁ t) + x_{p,1}(t)
and
x₃(t₃(ω)) = x₂(t₃(ω)), ẋ₃(t₃(ω)) = 0, x₃(t₄(ω)) = 0, ẋ₃(t₄(ω)) = ẋ₀(t₀(ω))    (5.9)
V(t₃(ω), t₄(ω)) ⊂ A−.
Since t₄(ω) = t₀(ω) + 2π/α, (5.6)–(5.9) contain 12 scalar equations with 12 unknowns A₀, . . . , B₃, t₀(ω), . . . , t₃(ω) and one parameter, namely ω.
For ω = 1, x_{p,1} gives a solution to these equations. Indeed,
A₀ = · · · = B₃ = 0
t_k(1) := t_k = θ₁/α + kπ/(2α), ∀k = 0, 1, 2, 3    (5.10)
solve (5.6)–(5.9) for ω = 1.
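That (5.10) solves (5.6)–(5.9) at ω = 1 amounts to the boundary conditions x_{p,1}(t_k) = 0 for even k, ẋ_{p,1}(t_k) = 0 for odd k, and periodicity of ẋ_{p,1}. A quick check with illustrative α and δ (branch 1 < α of θ₁):

```python
import math

# At omega = 1, the pure particular solution x_{p,1} with A0 = ... = B3 = 0
# and t_k = theta1/alpha + k*pi/(2*alpha) must satisfy the boundary
# conditions of (5.6)-(5.9).  alpha and delta below are illustrative.
alpha, delta = 1.3, 0.4
X1 = 1.0 / math.sqrt((1 - alpha**2) ** 2 + 4 * delta**2 * alpha**2)
th1 = math.pi + math.atan(2 * delta * alpha / (1 - alpha**2))  # branch 1 < alpha

def x(t):
    return X1 * math.sin(alpha * t - th1)

def xdot(t):
    return alpha * X1 * math.cos(alpha * t - th1)

tk = [th1 / alpha + k * math.pi / (2 * alpha) for k in range(5)]
conditions = [
    x(tk[0]),                    # x0(t0) = 0
    xdot(tk[1]),                 # x0'(t1) = 0 = x1'(t1)
    x(tk[2]),                    # x1(t2) = 0 = x2(t2)
    xdot(tk[3]),                 # x2'(t3) = 0 = x3'(t3)
    x(tk[4]),                    # x3(t4) = 0
    xdot(tk[4]) - xdot(tk[0]),   # periodicity of the derivative
]
```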
Next, we use the implicit function theorem to show the persistence of the solution to (5.6)–(5.9). For this, we have to compute the Jacobian matrix of the underlying function at the point given by (5.10) and for ω = 1. Afterwards, we will prove that this matrix is invertible.
Its first line is computed by differentiating the first equation of (5.6), namely x₀(t₀(ω)) = 0. For simplicity, we define
u(t) := e^{−δt} cos Ω₁t, v(t) := e^{−δt} sin Ω₁t, ∀t ∈ R.
Then we get
(∂/∂A₀) x₀(t₀(ω)) |_{(A₀,...,B₃,t₀(ω),...,t₃(ω),ω) = (0,...,0,t₀,...,t₃,1)} = u(t₀) = u(θ₁/α).
At the same point, we have also
(∂/∂B₀) x₀(t₀(ω)) = v(t₀), (∂/∂t₀) x₀(t₀(ω)) = ẋ_{p,1}(t₀) = αX₁.
The derivatives with respect to the other variables vanish. Hence there are only 3 nontrivial entries in the first line of the Jacobian matrix. Taking into account the other equations in (5.6)–(5.9), the Jacobian matrix corresponding to (5.6)–(5.9) reads as follows (columns: A₀, B₀, A₁, B₁, A₂, B₂, A₃, B₃, t₀, t₁, t₂, t₃; zero entries are omitted):
line 1:  u(t₀), v(t₀) in columns A₀, B₀; αX₁ in column t₀
line 2:  u̇(t₁), v̇(t₁) in columns A₀, B₀; −α²X₁ in column t₁
line 3:  u(t₁), v(t₁), −u(t₁), −v(t₁) in columns A₀, B₀, A₁, B₁
line 4:  u̇(t₁), v̇(t₁) in columns A₁, B₁; −α²X₁ in column t₁
line 5:  u(t₂), v(t₂) in columns A₁, B₁; −αX₁ in column t₂
line 6:  u̇(t₂), v̇(t₂), −u̇(t₂), −v̇(t₂) in columns A₁, B₁, A₂, B₂
line 7:  u(t₂), v(t₂) in columns A₂, B₂; −αX₁ in column t₂
line 8:  u̇(t₃), v̇(t₃) in columns A₂, B₂; α²X₁ in column t₃
line 9:  u(t₃), v(t₃), −u(t₃), −v(t₃) in columns A₂, B₂, A₃, B₃
line 10: u̇(t₃), v̇(t₃) in columns A₃, B₃; α²X₁ in column t₃
line 11: u(t₀ + 2π/α), v(t₀ + 2π/α) in columns A₃, B₃; αX₁ in column t₀
line 12: −u̇(t₀), −v̇(t₀) in columns A₀, B₀; u̇(t₀ + 2π/α), v̇(t₀ + 2π/α) in columns A₃, B₃
Now, we will apply elementary transformations to this matrix, which do not change its invertibility. First, we replace the
• 11th line by the difference of the 11th line and the 1st line;
• 8th line by the difference of the 8th line and the 10th line;
• 5th line by the difference of the 5th line and the 7th line;
• 2nd line by the difference of the 2nd line and the 4th line.
Thus the 4 last columns contain only one nonzero entry each. This leads us to the following 8 × 8 matrix (zero entries are omitted):
⎡ u̇(t₁)   v̇(t₁)  −u̇(t₁)  −v̇(t₁)                                        ⎤
⎢ u(t₁)   v(t₁)  −u(t₁)  −v(t₁)                                        ⎥
⎢                 u(t₂)   v(t₂)  −u(t₂)  −v(t₂)                        ⎥
⎢                 u̇(t₂)   v̇(t₂)  −u̇(t₂)  −v̇(t₂)                        ⎥    (5.11)
⎢                                 u̇(t₃)   v̇(t₃)  −u̇(t₃)  −v̇(t₃)        ⎥
⎢                                 u(t₃)   v(t₃)  −u(t₃)  −v(t₃)        ⎥
⎢ −u(t₀) −v(t₀)                                   u(t₄)   v(t₄)        ⎥
⎣ −u̇(t₀) −v̇(t₀)                                   u̇(t₄)   v̇(t₄)        ⎦
We write this matrix under the form
⎡ A  −A        ⎤
⎢     B  −B    ⎥
⎢         C  −C⎥
⎣ D          E ⎦
where A, . . . , E are 2 × 2 matrices defined in an obvious way according to (5.11). Remark that A, . . . , E are invertible since their determinants are (up to a sign) a Wronskian of u and v.
In the above matrix, we replace the
• 2nd column by the sum of the 2nd and the 1st column;
• 3rd column by the sum of the 3rd and the 2nd column;
• 4th column by the sum of the 4th and the 3rd column.
We get
⎡ A              ⎤
⎢     B          ⎥
⎢         C      ⎥
⎣ D   D   D  D+E ⎦
By multiplying this matrix on the left by the invertible matrix
⎡   I               ⎤
⎢       I           ⎥
⎢           I       ⎥
⎣−DA⁻¹ −DB⁻¹ −DC⁻¹ I⎦
(where I denotes the identity matrix of R²), we get
⎡ A           ⎤
⎢    B        ⎥
⎢       C     ⎥
⎣         D+E ⎦.
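The reduction to the block-diagonal form, hence to det A · det B · det C · det(D + E), can be sanity-checked numerically with random 2 × 2 blocks. This is a sketch; the helpers `det` and `block8` are ours, not from the paper:

```python
import random

def det(M):
    # Determinant by Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(M[i][j]))
        if M[p][j] == 0:
            return 0.0
        if p != j:
            M[j], M[p] = M[p], M[j]
            d = -d
        d *= M[j][j]
        for i in range(j + 1, n):
            f = M[i][j] / M[j][j]
            for k in range(j, n):
                M[i][k] -= f * M[j][k]
    return d

def block8(A, B, C, D, E):
    # Assemble the 8x8 matrix [[A,-A,0,0],[0,B,-B,0],[0,0,C,-C],[D,0,0,E]].
    Z = [[0.0, 0.0], [0.0, 0.0]]
    neg = lambda M: [[-m for m in row] for row in M]
    rows = [(A, neg(A), Z, Z), (Z, B, neg(B), Z), (Z, Z, C, neg(C)), (D, Z, Z, E)]
    out = []
    for blocks in rows:
        for r in range(2):
            out.append([blocks[c][r][k] for c in range(4) for k in range(2)])
    return out

random.seed(1)
rnd = lambda: [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
A, B, C, D, E = (rnd() for _ in range(5))
DpE = [[D[i][j] + E[i][j] for j in range(2)] for i in range(2)]
lhs = det(block8(A, B, C, D, E))
rhs = det(A) * det(B) * det(C) * det(DpE)
```

The column additions preserve the determinant, and the multiplying block matrix is unit lower triangular, so the identity lhs = rhs holds exactly for any blocks.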
Since, as explained above, A, . . . , E are invertible, there remains to compute the determinant of D + E. Setting T := 2π/α, this determinant is the Wronskian of u(· + T) − u and v(· + T) − v at time t₀. Thus it is enough to compute this Wronskian for t = 0. This gives
Ω₁(e^{−2δT} + 1 − 2e^{−δT} cos Ω₁T).
Thus, if the determinant of D + E vanishes then cos Ω₁T = cosh δT and δT = 0. Our assumptions on δ imply that det(D + E) ≠ 0. Then, for ω ≃ 1, (5.6)–(5.9) has a solution that we denote by
A₀(ω), . . . , B₃(ω), t₀(ω), . . . , t₃(ω).
By choosing ω close enough to 1, we have in addition
t₀(ω) < t₁(ω) < t₂(ω) < t₃(ω).
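The closed form of the Wronskian above is easy to confirm numerically. The following sketch (with illustrative values of δ and α, and helper names of our choosing) evaluates both sides at t = 0 and checks positivity:

```python
import math

# Wronskian of u(.+T) - u and v(.+T) - v at t = 0, where
# u(t) = exp(-delta*t)*cos(Omega1*t), v(t) = exp(-delta*t)*sin(Omega1*t),
# Omega1 = sqrt(1 - delta**2) and T = 2*pi/alpha.
def wronskian_at_zero(delta, alpha):
    O1 = math.sqrt(1 - delta**2)
    T = 2 * math.pi / alpha
    u = lambda t: math.exp(-delta * t) * math.cos(O1 * t)
    v = lambda t: math.exp(-delta * t) * math.sin(O1 * t)
    ud = lambda t: math.exp(-delta * t) * (-delta * math.cos(O1 * t) - O1 * math.sin(O1 * t))
    vd = lambda t: math.exp(-delta * t) * (-delta * math.sin(O1 * t) + O1 * math.cos(O1 * t))
    f0, g0 = u(T) - u(0), v(T) - v(0)
    fd0, gd0 = ud(T) - ud(0), vd(T) - vd(0)
    return f0 * gd0 - fd0 * g0

def closed_form(delta, alpha):
    # Omega1*(exp(-2*delta*T) + 1 - 2*exp(-delta*T)*cos(Omega1*T))
    O1 = math.sqrt(1 - delta**2)
    T = 2 * math.pi / alpha
    return O1 * (math.exp(-2 * delta * T) + 1 - 2 * math.exp(-delta * T) * math.cos(O1 * T))

pairs = [(0.2, 1.0), (0.5, 0.7), (0.9, 2.3)]   # illustrative (delta, alpha)
```

Positivity for δ ∈ (0, 1) follows since e^{−2δT} + 1 − 2e^{−δT} cos Ω₁T ≥ (e^{−δT} − 1)² > 0 when δT > 0, which is the invertibility argument of the proof.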
Then we define x : R → R in the following way. For k = 0, 1,
x(t) := e^{−ωδt}(A_{2k}(ω) cos Ω_ω t + B_{2k}(ω) sin Ω_ω t) + x_{p,ω}(t) if t ∈ [t_{2k}(ω), t_{2k+1}(ω)),
x(t) := e^{−δt}(A_{2k+1}(ω) cos Ω₁ t + B_{2k+1}(ω) sin Ω₁ t) + x_{p,1}(t) if t ∈ [t_{2k+1}(ω), t_{2k+2}(ω)).
Outside of [t₀(ω), t₄(ω)), we extend x by periodicity. From (5.6)–(5.9), there results that x is a C¹ function and is T-periodic. Thus there remains to check that xẋ is positive on (t_{2k}(ω), t_{2k+1}(ω)) and negative on (t_{2k+1}(ω), t_{2k+2}(ω)). We will only prove that xẋ is positive on (t₀(ω), t₁(ω)). For this, we set
y(t, ω) := x₀(t + t₀(ω)) ẋ₀(t + t₀(ω)), ∀t ∈ R
where
x₀(t) := e^{−ωδt}(A₀(ω) cos Ω_ω t + B₀(ω) sin Ω_ω t) + x_{p,ω}(t).
Since ẏ(0, 1) = α²X₁², and y is of class C², there exists ε ∈ (0, 1) such that for all ω ∈ (1 − ε, 1 + ε),
ẏ(0, ω) ≥ (1/2) ẏ(0, 1)    (5.12)
M := sup{|ÿ(t, ω)| : ω ∈ (1 − ε, 1 + ε), t ∈ (0, t₁(ω) − t₀(ω))} < ∞.    (5.13)
Thus for all ω ∈ (1 − ε, 1 + ε),
y(t, ω) = t ẏ(0, ω) + t² ∫₀¹ (1 − s) ÿ(ts, ω) ds
≥ (t/2) ẏ(0, 1) − (t²/2) M
≥ (1/4) ẏ(0, 1) t
provided that t < ẏ(0, 1)/(2M). Arguing in the same way in a neighborhood of t₁(ω) − t₀(ω), we obtain that, for some τ > 0 independent of ω,
y(·, ω) > 0 on (0, τ] ∪ [t₁(ω) − t₀(ω) − τ, t₁(ω) − t₀(ω)).    (5.14)
Since τ is independent of ω, we may reduce ε if necessary, so that
t1 (ω) − t0 (ω) − τ ≤ t1 − t0 − τ /2.
(5.15)
Now
y(·, ω) → y(·, 1) as ω → 1
uniformly on compact sets and y(·, 1) is positive on (0, t₁ − t₀), thus for ω close enough to 1,
y(·, ω) > 0 on [τ, t₁ − t₀ − τ/2].    (5.16)
Combining (5.14)–(5.16), we derive
y(·, ω) > 0 on (0, t₁(ω) − t₀(ω)),
i.e.
xẋ > 0 on (t₀(ω), t₁(ω)).
This completes the proof of the theorem.
6. Energy balance
If there is no mass modulation, i.e. ω = 1, then the energy balance for Equation (1.1) takes the form
E_W + E_D = E_g
where E_W is the energy of the WEC, namely
E_W(t) = (1/2)(ẋ² + x²)(t).    (6.1)
The damping energy E_D is the sum of the energy harvested by the power takeoff system and the energy dissipated by hydrodynamic damping. According to [OOS10],
(d/dt)E_D(t) = 2δẋ²(t).    (6.2)
Finally, E_g, the energy brought by ocean waves, has the density
(d/dt)E_g(t) = g(t)ẋ(t).    (6.3)
In the case of mass modulation (i.e. ω ≠ 1), we rewrite (1.1) under the form
(1/ω)ẍ + 2δẋ + x = g if xẋ > 0
ẍ + 2δẋ + x = g if xẋ < 0.    (6.4)
Now, we propose the energy balance
E_W + E_D = E_g + E_m    (6.5)
where the extra term E_m is the energy generated by mass modulation. By (6.4), E_g and E_D still satisfy (6.2) and (6.3). However, roughly speaking,
E_W(t) = E_ω(t) := (1/(2ω))ẋ²(t) + (1/2)x²(t) if xẋ > 0
E_W(t) = E₁(t) := (1/2)ẋ²(t) + (1/2)x²(t) if xẋ < 0.
More precisely, assuming (2.1), (2.4), if x denotes a global solution to (1.1) on R, we set for all t ∈ R (see Section 2 for the notation),
E_W(t) := E_ω(t) if V(t) ∈ A+ ∪ A0++, E_W(t) := E₁(t) if V(t) ∈ A− ∪ A0−−.    (6.6)
If V(t) ∉ A+ ∪ A0++ ∪ A− ∪ A0−−, then by Definition 2.1, there exists ε > 0 such that
V(t, t + ε) ⊂ A− or V(t, t + ε) ⊂ A+.    (6.7)
Hence, we may extend E_W by setting
E_W(t) := lim_{s→t, s>t} E_W(s).
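The rates entering the balance can be spot-checked: multiplying the first equation of (6.4) by ẋ gives, on {xẋ > 0}, (d/dt)E_ω = gẋ − 2δẋ², i.e. the rates (6.2)–(6.3) with E_ω in place of E_W. A minimal sketch with illustrative ω and δ:

```python
import random

# On {x*x' > 0}, the first equation of (6.4) multiplied by x' yields
#   d/dt E_omega = g*x' - 2*delta*(x')**2,
# where E_omega = (x')**2/(2*omega) + x**2/2.  Check at random states.
random.seed(0)
omega, delta = 0.7, 0.3        # illustrative parameters
devs = []
for _ in range(100):
    x = random.uniform(0.1, 2.0)
    v = random.uniform(0.1, 2.0)          # x*v > 0: mass-modulated phase
    g = random.uniform(-1.0, 1.0)
    a = omega * (g - 2 * delta * v - x)   # acceleration x'' from (6.4)
    dE = v * a / omega + x * v            # chain rule applied to E_omega
    devs.append(abs(dE - (g * v - 2 * delta * v * v)))
max_dev = max(devs)
```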
We will compute the variation of energy during one cycle, where a cycle is illustrated
by Figures 1, 3, 4 and 5.
Figure 3. The graph of t ↦ x(t) corresponding to the cycle in Fig. 1.
The formal definition of a cycle is the following.
Definition 6.1. Assume (2.1), (2.4) and that x is a solution to (1.1) on R in the sense
of Definition 2.2.
• A list (t₀, . . . , t₄) of 5 real numbers is a positive cycle if t₀ < · · · < t₄ and there exists ε > 0 such that for k = 0, 1,
V(t_{2k}, t_{2k+1}) ⊂ A+ ∪ A0++, V(t_{2k+1}, t_{2k+2}) ⊂ A− ∪ A0−−    (6.8)
V(t₀ − ε, t₀) ⊂ A−, V(t₄, t₄ + ε) ⊂ A+.
• The definition of a negative cycle is obtained by exchanging the signs + and − in
the definition of a positive cycle.
• (t0 , . . . , t4 ) is a cycle if it is a negative or a positive cycle.
Figure 4. Trajectory in the phase space (x, ẋ) where x(t) > 0 for a positive cycle.
Figure 5. The graph of x corresponding to Fig. 4.
Theorem 6.1. Under assumptions (2.1), (2.4) and the above notation, let E_m be the energy generated by mass modulation and defined according to the energy balance (6.5), i.e.
E_m := E_W + E_D − E_g.
Then the variation of E_m during a cycle (t₀, . . . , t₄), namely
∆E_m := E_m(t₄) − E_m(t₀),
satisfies
∆E_m = (1/2)(1/ω − 1)(ẋ²(t₂) + ẋ²(t₄)).    (6.9)
Proof. Let (t₀, . . . , t₄) be a positive cycle. For all δt > 0,
∆E_g := E_g(t₄ + δt) − E_g(t₀) = ∫_{t₀}^{t₄+δt} ẋg dt.
By Definition 6.1, this latter integral is equal to
∫_{(t₀,t₁)∪(t₂,t₃)∪(t₄,t₄+δt)} ẋ((1/ω)ẍ + 2δẋ + x) dt + ∫_{(t₁,t₂)∪(t₃,t₄)} ẋ(ẍ + 2δẋ + x) dt
= ∫_{t₀}^{t₄+δt} 2δẋ² dt + E_ω(t₁) − E_ω(t₀) + E_ω(t₃) − E_ω(t₂) + E_ω(t₄ + δt) − E_ω(t₄) + E₁(t₂) − E₁(t₁) + E₁(t₄) − E₁(t₃).
We have xẋ(t₁) = 0. If ẋ(t₁) ≠ 0 then xẋ > 0 on (t₁, t₁ + ε). This contradicts (6.8) since (t₀, . . . , t₄) is a positive cycle. Thus ẋ(t₁) = 0 and E₁(t₁) = E_ω(t₁). In the same way, E₁(t₃) = E_ω(t₃). Thus
E_g(t₄ + δt) − E_g(t₀) = ∫_{t₀}^{t₄+δt} 2δẋ² dt + E_ω(t₄ + δt) − E_ω(t₀) + (E₁ − E_ω)(t₂) + (E₁ − E_ω)(t₄).
Letting δt → 0+ and using (6.2), the definition of E_W and (6.5), we get (6.9). If the cycle is negative, we obtain (6.9) as well.
Remark 6.1. From (6.9) and the energy balance (6.5), it is clear that when ω < 1, mass modulation increases the energy of the system. Moreover, if x is periodic and the period is equal to the length of a cycle then
∆E_D = ∆E_g + ∆E_m = ∆E_g + (1/2)(1/ω − 1)(ẋ²(t₂) + ẋ²(t₀)).
So the additional energy ∆E_m brought by mass modulation is transferred to the damping part of the system. This damping part consists of the damping of the power takeoff device and the hydrodynamic damping. In this sense, the new WEC proposed by Orazov et al. is more efficient than standard oscillators.
References
[dBBCK08] M. di Bernardo, C. J. Budd, A. R. Champneys, and P. Kowalczyk. Piecewise-smooth Dynamical Systems: Theory and Applications, volume 163 of Applied Mathematical Sciences. Springer-Verlag London Ltd., London, 2008.
[OOS10] B. Orazov, O. M. O’Reilly, and Ö. Savaş. On the dynamics of a novel ocean wave energy converter. Journal of Sound and Vibration, 329(24):5058–5069, 2010.
Laboratoire de Mathématiques et Applications
UMR 7348
Université de Poitiers & CNRS
SP2MI - BP 30179
86 962 Futuroscope Chasseneuil Cedex - France
E-mail address: [email protected]