16th GAMM-IMACS International Symposium on
Scientific Computing, Computer Arithmetic and Validated Numerics
SCAN 2014
Book of Abstracts
Department of Computer Science
University of Würzburg
Germany
September 21-26, 2014
Editor: Marco Nehmeier
Cover design: Marco Nehmeier
Cover photo: Marco Nehmeier
Scientific Committee
• G. Alefeld (Karlsruhe, Germany)
• J.-M. Chesneaux (Paris, France)
• G.F. Corliss (Milwaukee, USA)
• T. Csendes (Szeged, Hungary)
• A. Frommer (Wuppertal, Germany)
• R.B. Kearfott (Lafayette, USA)
• W. Kraemer (Wuppertal, Germany)
• V. Kreinovich (El Paso, USA)
• U. Kulisch (Karlsruhe, Germany)
• W. Luther (Duisburg, Germany)
• G. Mayer (Rostock, Germany)
• S. Markov (Sofia, Bulgaria)
• J.-M. Muller (Lyon, France)
• M. Nakao (Fukuoka, Japan)
• M. Plum (Karlsruhe, Germany)
• N. Revol (Lyon, France)
• J. Rohn (Prague, Czech Republic)
• S. Rump (Hamburg, Germany)
• S. Shary (Novosibirsk, Russia)
• Yu. Shokin (Novosibirsk, Russia)
• W. Walter (Dresden, Germany)
• J. Wolff von Gudenberg (Würzburg, Germany)
• N. Yamamoto (Tokyo, Japan)
Organizing Committee
• J. Wolff von Gudenberg (Chair)
• Alexander Dallmann
• Fritz Kleemann
• Marco Nehmeier
• Anika Schwind
• Susanne Stenglin
Preface
SCAN 2014 will certainly become another flagship conference in the
research areas of
• reliable computer arithmetic
• enclosure methods
• self-validating algorithms
and will provide many useful ideas for trustworthy applications and further research. This booklet contains the abstracts of
the invited and contributed talks.
The conference starts with awarding the Moore Prize for the best
application of interval methods to Prof. Kenta Kobayashi for his
Computer-Assisted Uniqueness Proof for Stokes’ Wave of Extreme
Form (p. 83).
Every morning and every afternoon starts with an invited keynote,
and I think we can be proud that the following distinguished experts
have accepted our invitation:
• Ekaterina Auer (University of Duisburg-Essen, Germany)
• Andrej Bauer (University of Ljubljana, Slovenia)
• Sylvie Boldo (Inria, France)
• Jack Dongarra (University of Tennessee and ORNL, USA; University of Manchester, UK)
• John Gustafson (Ceranovo Inc., USA)
• Bartlomiej Jacek Kubica (Warsaw University of Technology, Poland)
• John Pryce (Cardiff University, UK)
SCAN 2014 in Würzburg
The relations between Würzburg and SCAN 2014 are surprisingly close. The University of Würzburg was founded in 1402. Its logo is
a left bracket symbol expressing the ongoing, open-ended progress of
knowledge. If we permute the digits of the foundation year and close
the left bracket with a right one, we obtain our particular interval for the
logo of the SCAN 2014 conference.
Organizing SCAN 2014 in Würzburg
I want to thank all our sponsors, whose donations made the conference
possible.
I further want to thank the members of the scientific committee,
and in particular the organizers of the last three meetings: V. Kreinovich,
N. Revol, and S. Shary.
Organizing this conference was a pleasure for me, because of the
tremendous assistance I got from the organizing committee:
Alexander Dallmann, Fritz Kleemann, Marco Nehmeier, Anika
Schwind and Susanne Stenglin.
Würzburg, September 2014
Jürgen Wolff von Gudenberg
Last but not least I wish all of us a
SCAN
Successful Conference and Attractive Night program
Schedule
Sunday, September 21, 2014
18:00 – 20:00 Get-together and Registration (GHOTEL)
Monday, September 22, 2014
8:00 – 9:00 Registration (Conference Office)
9:00 – 9:30 Opening Session (Turing, Chair: J. Wolff von Gudenberg)
9:30 – 10:30 R. E. Moore Prize Awarding Ceremony (Turing, Chair:
V. Kreinovich)
Kenta Kobayashi
Computer-Assisted Uniqueness Proof for Stokes’ Wave of Extreme
Form (p. 83)
10:30 – 11:00 Coffee Break
11:00 – 12:00 Plenary Talk (Turing, Chair: B. Kearfott)
John Pryce
The architecture of the IEEE P1788 draft standard for interval
arithmetic (p. 135)
12:00 – 13:20 Lunch
13:20 – 14:20 Plenary Talk (Turing, Chair: W. Luther)
Ekaterina Auer
Result Verification and Uncertainty Management in Engineering
Applications (p. 30)
14:20 – 16:00 Parallel Sessions
Session A1: Solution Sets (Turing, Chair: J. Garloff)
(1) Shinya Miyajima
Verified solutions of saddle point linear systems (p. 114)
(2) Takeshi Ogita
Iterative Refinement for Symmetric Eigenvalue Problems (p. 127)
(3) Sergey P. Shary
Maximum Consistency Method for Data Fitting under Interval Uncertainty (p. 147)
(4) Günter Mayer
A short description of the symmetric solution set (p. 107)
Session A2: Non Standard Interval Libraries (Zuse, Chair: C.P. Jeannerod)
(1) Abdelrahman Elskhawy, Kareem Ismail and Maha Zohdy
Modal Interval Floating Point Unit with Decorations (p. 49)
(2) Jordan Ninin and Nathalie Revol
Accurate and efficient implementation of affine arithmetic using
floating-point arithmetic (p. 125)
(3) Philippe Théveny
Choice of metrics in interval arithmetic (p. 157)
(4) Olga Kupriianova and Christoph Lauter
Replacing branches by polynomials in vectorizable elementary functions (p. 92)
Session A3: ODE – Orbits (Moore, Chair: A. Rauh)
(1) Tomohirio Hiwaki and Nobito Yamamoto
A numerical verification method for a basin of a limit cycle (p. 66)
(2) Christoph Spandl
True orbit simulation of dynamical systems and its computational
complexity (p. 153)
(3) M. Konečný, W. Taha, J. Duracz and A. Farjudian
Implementing the Interval Picard Operator (p. 87)
(4) Luc Jaulin, Jordan Ninin, Gilles Chabert, Stéphane Le Menec,
Mohamed Saad, Vincent Le Doze and Alexandru Stancu
Computing capture tubes (p. 72)
19:00 – 21:00 Reception (Town Hall Würzburg)
Tuesday, September 23, 2014
9:00 – 10:00 Plenary Talk (Turing, Chair: U. Kulisch)
John Gustafson
An Energy-Efficient and Massively Parallel Approach to Valid Numerics (p. 62)
10:00 – 10:30 Coffee Break
10:30 – 11:45 Parallel Sessions
Session B1: Linear Systems (Turing, Chair: G. Mayer)
(1) Evgenija D. Popova
Improved Enclosure for Parametric Solution Sets with Linear Shape
(p. 134)
(2) Irene A. Sharaya and Sergey P. Shary
Reserve as recognizing functional for AE-solutions to interval system of linear equations (p. 145)
(3) Katsuhisa Ozaki, Takeshi Ogita and Shin’ichi Oishi
Automatic Verified Numerical Computations for Linear Systems
(p. 129)
Session B2: Matrix Arithmetic (Zuse, Chair: L. Jaulin)
(1) Shinya Miyajima
Fast inclusion for the matrix inverse square root (p. 111)
(2) Stef Graillat, Christoph Lauter, Ping Tak Peter Tang, Naoya Yamanka and Shin’ichi Oishi
A method of calculating faithful rounding of l2-norm for n-vectors
(p. 60)
(3) Lars Balzer
SONIC – a nonlinear solver (p. 35)
Session B3: Dynamic Systems (Moore, Chair: W. Tucker)
(1) Balázs Bánhelyi, Tibor Csendes, Tibor Krisztin and Arnold Neumaier
On a conjecture of Wright (p. 31)
(2) S. I. Kumkov
Applied techniques of interval analysis for estimation of experimental data (p. 90)
(3) Kenta Kobayashi and Takuya Tsuchiya
Error Estimations of Interpolations on Triangular Elements (p. 85)
11:45 – 13:00 Lunch
13:00 – 14:00 Plenary Talk (Turing, Chair: S. Shary)
Andrej Bauer
Programming techniques for exact real arithmetic (p. 37)
14:00 – 15:15 Parallel Sessions
Session C1: Elliptic PDE (Turing, Chair: M. Nakao)
(1) Tomoki Uda
Numerical Verification for Elliptic Boundary Value Problem with
Nonconforming P1 Finite Elements (p. 159)
(2) Henning Behnke
Curve Veering for the Parameter-Dependent Clamped Plate (p. 38)
(3) Takehiko Kinoshita, Yoshitaka Watanabe and Mitsuhiro T. Nakao
Some remarks on the rigorous estimation of inverse linear elliptic
operators (p. 81)
Session C2: Modeling and Uncertainty (Zuse, Chair: C. Spandl)
(1) Rene Alt, Svetoslav Markov, Margarita Kambourova, Nadja Radchenkova and Spasen Vassilev
On the mathematical modelling of a batch fermentation process using interval data and verification methods (p. 26)
(2) Andreas Rauh, Ramona Westphal, Harald Aschemann and Ekaterina Auer
Exponential Enclosure Techniques for Initial Value Problems with
Multiple Conjugate Complex Eigenvalues (p. 141)
(3) Andreas Rauh, Luise Senkel and Harald Aschemann
Computation of Confidence Regions in Reliable, Variable-Structure
State and Parameter Estimation (p. 139)
Session C3: Global Optimization (Moore, Chair: M. Konečný)
(1) Charlie Vanaret, Jean-Baptiste Gotteland, Nicolas Durand and
Jean-Marc Alliot
Combining Interval Methods with Evolutionary Algorithms for Global
Optimization (p. 162)
(2) Yao Zhao, Gang Xu and Mark Stadtherr
Dynamic Load Balancing for Rigorous Global Optimization of Dynamic Systems (p. 164)
(3) Ralph Baker Kearfott
Some Observations on Exclusion Regions in Interval Branch and
Bound Algorithms (p. 78)
15:15 – 15:45 Coffee Break
15:45 – 16:35 Parallel Sessions
Session D1: Interval Matrices (Turing, Chair: J. Horáček)
(1) Mihály Csaba Markót and Zoltán Horváth
Finding positively invariant sets of ordinary differential equations
using interval global optimization methods (p. 105)
(2) Jürgen Garloff and Mohammad Adm
Sign Regular Matrices Having the Interval Property (p. 53)
Session D2: Non Linear Systems (Zuse, Chair: N. Yamamoto)
(1) Makoto Mizuguchi, Akitoshi Takayasu, Takayuki Kubo and Shin’ichi
Oishi
A sharper error estimate of verified computations for nonlinear heat
equations. (p. 119)
(2) Makoto Mizuguchi, Akitoshi Takayasu, Takayuki Kubo and Shin’ichi
Oishi
A method of verified computations for nonlinear parabolic equations
(p. 117)
Session D3: Tools and Workflows (Moore, Chair: E. Popova)
(1) Pacôme Eberhart, Julien Brajard, Pierre Fortin and Fabienne Jézéquel
Towards High Performance Stochastic Arithmetic (p. 47)
(2) Wolfram Luther
A workflow for modeling, visualizing, and querying uncertain (GPS) localization using interval arithmetic (p. 100)
16:45 – 17:30 Meeting of the Scientific Committee and Editorial
Board (Moore)
19:30 – 23:00 Conference Dinner (Festung Marienberg)
Wednesday, September 24, 2014
9:00 – 10:00 Plenary Talk (Turing, Chair: G. Alefeld)
Jack Dongarra
Algorithmic and Software Challenges at Extreme Scales (p. 46)
10:00 – 10:30 Coffee Break
10:30 – 11:45 Parallel Sessions
Session E1: HPC (Turing, Chair: G. Bohlender)
(1) Sylvain Collange, David Defour, Stef Graillat and Roman Iakymchuk
Reproducible and Accurate Matrix Multiplication for High-Performance
Computing (p. 42)
(2) Chemseddine Chohra, Philippe Langlois and David Parello
Level 1 Parallel RTN-BLAS: Implementation and Efficiency Analysis (p. 40)
(3) Hao Jiang, Feng Wang, Yunfei Du and Lin Peng
Fast Implementation of Quad-Precision GEMM on ARMv8 64-bit
Multi-Core Processor (p. 76)
Session E2: Parametric Linear Systems (Zuse, Chair: K. Ozaki)
(1) Milan Hladı́k
Optimal preconditioning for the interval parametric Gauss–Seidel
method (p. 68)
(2) Atsushi Minamihata, Kouta Sekine, Takeshi Ogita, Siegfried M.
Rump and Shin’ichi Oishi
A Simple Modified Verification Method for Linear Systems (p. 109)
(3) Andreas Rauh, Luise Senkel and Harald Aschemann
Verified Parameter Identification for Dynamic Systems with Non-Smooth Right-Hand Sides (p. 137)
Session E3: Uncertainty (Moore, Chair: N. Louvet)
(1) Igor Sokolov
Non-arithmetic approach to dealing with uncertainty in fuzzy arithmetic (p. 151)
(2) Joe Lorkowski and Vladik Kreinovich
How much for an interval? a set? a twin set? a p-box? a Kaucher
interval? An economics-motivated approach to decision making under uncertainty (p. 98)
(3) Boris S. Dobronets and Olga A. Popova
Numerical probabilistic approach for optimization problems (p. 44)
11:45 – 13:00 Lunch
13:00 – 22:00 Excursion to Bamberg
Thursday, September 25, 2014
9:00 – 10:00 Plenary Talk (Turing, Chair: N. Revol)
Sylvie Boldo
Formal verification of tricky numerical computations (p. 39)
10:00 – 10:30 Coffee Break
10:30 – 11:45 Parallel Sessions
Session F1: Miscellaneous (Turing, Chair: B. Bánhelyi)
(1) Roumen Anguelov and Svetoslav Markov
On the sets of H- and D-continuous interval functions (p. 28)
(2) Luise Senkel, Andreas Rauh and Harald Aschemann
Numerical Validation of Sliding Mode Approaches with Uncertainty
(p. 143)
(3) Amin Maher and Hossam A. H. Fahmy
Using range arithmetic in evaluation of compact models (p. 103)
Session F2: Floating Point Operations (Zuse, Chair: P. Langlois)
(1) Hong Diep Nguyen and James Demmel
Toward hardware support for Reproducible Floating-Point Computation (p. 123)
(2) Claude-Pierre Jeannerod and Siegfried M. Rump
On relative errors of floating-point operations: optimal bounds and
applications (p. 75)
(3) Stefan Siegel
An Implementation of Complete Arithmetic (p. 149)
Session F3: Solvability and Singularity (Moore, Chair: T. Ogita)
(1) Jaroslav Horáček and Milan Hladı́k
On Unsolvability of Overdetermined Interval Linear Systems (p. 70)
(2) Luc Longpré and Vladik Kreinovich
Towards the possibility of objective interval uncertainty in physics.
II (p. 96)
(3) David Hartman and Milan Hladı́k
Towards tight bounds on the radius of nonsingularity (p. 64)
11:45 – 13:00 Lunch
13:00 – 14:00 Plenary Talk (Turing, Chair: T. Csendes)
Bartlomiej Jacek Kubica
Interval methods for solving various kinds of quantified nonlinear
problems (p. 89)
14:00 – 15:15 Parallel Sessions
Session G1: Verification Methods (Turing, Chair: H. Behnke)
(1) Balázs Bánhelyi and Balázs László Lévai
Verified localization of trajectories with prescribed behaviour in the
forced damped pendulum (p. 33)
(2) Xuefeng Liu and Shin’ichi Oishi
Verified lower eigenvalue bounds for self-adjoint differential operators (p. 94)
(3) Kazuaki Tanaka and Shin’ichi Oishi
Numerical verification for periodic stationary solutions to the Allen-Cahn equation (p. 155)
Session G2: Bernstein Branch and Bound (Zuse, Chair: M. Stadtherr)
(1) Jürgen Garloff and Tareq Hamadneh
Convergence of the Rational Bernstein Form (p. 56)
(2) Bhagyesh V. Patil and P. S. V. Nataraj
Bernstein branch-and-bound algorithm for unconstrained global optimization of multivariate polynomial MINLPs (p. 131)
Session G3: Stochastic Intervals (Moore, Chair: F. Jézéquel)
(1) Ronald van Nooijen and Alla Kolechkina
Two applications of interval analysis to parameter estimation in
hydrology. (p. 161)
(2) Tiago Montanher and Walter Mascarenhas
An Interval arithmetic algorithm for global estimation of hidden
Markov model parameters (p. 121)
(3) Valentin Golodov
Interval regularization approach to the Firordt method of the spectroscopic analysis of the nonseparated mixtures (p. 58)
15:20 – 15:35 Closing Session (Turing, Chair: J. Wolff von Gudenberg)
15:40 – 16:10 Coffee Break
19:00 – 22:30 Wine Tasting Party (Staatlicher Hofkeller)
Friday, September 26, 2014
10:00 – 13:00 IEEE P1788 Meeting (Moore)
Contents
On the mathematical modelling of a batch fermentation process using interval data and verification methods
26
Rene Alt, Svetoslav Markov, Margarita Kambourova, Nadja Radchenkova and Spasen Vassilev
On the sets of H- and D-continuous interval functions
Roumen Anguelov and Svetoslav Markov
28
Result Verification and Uncertainty Management in Engineering
Applications
30
Ekaterina Auer
On a conjecture of Wright
31
Balázs Bánhelyi, Tibor Csendes, Tibor Krisztin and Arnold Neumaier
Verified localization of trajectories with prescribed behaviour in the
forced damped pendulum
33
Balázs Bánhelyi and Balázs László Lévai
SONIC – a nonlinear solver
Lars Balzer
35
Programming techniques for exact real arithmetic
Andrej Bauer
37
Curve Veering for the Parameter-Dependent Clamped Plate
Henning Behnke
38
Formal verification of tricky numerical computations
Sylvie Boldo
39
Level 1 Parallel RTN-BLAS: Implementation and Efficiency Analysis
40
Chemseddine Chohra, Philippe Langlois and David Parello
Reproducible and Accurate Matrix Multiplication for High-Performance
Computing
42
Sylvain Collange, David Defour, Stef Graillat and Roman Iakymchuk
Numerical probabilistic approach for optimization problems
Boris S. Dobronets and Olga A. Popova
44
Algorithmic and Software Challenges at Extreme Scales
Jack Dongarra
46
Towards High Performance Stochastic Arithmetic
47
Pacôme Eberhart, Julien Brajard, Pierre Fortin and Fabienne
Jézéquel
Modal Interval Floating Point Unit with Decorations
Abdelrahman Elskhawy, Kareem Ismail and Maha Zohdy
49
Sign Regular Matrices Having the Interval Property
Jürgen Garloff and Mohammad Adm
53
Convergence of the Rational Bernstein Form
Jürgen Garloff and Tareq Hamadneh
56
Interval regularization approach to the Firordt method of the spectroscopic analysis of the nonseparated mixtures
58
Valentin Golodov
A method of calculating faithful rounding of l2-norm for n-vectors 60
Stef Graillat, Christoph Lauter, Ping Tak Peter Tang, Naoya
Yamanka and Shin’ichi Oishi
An Energy-Efficient and Massively Parallel Approach to Valid Numerics
62
John Gustafson
Towards tight bounds on the radius of nonsingularity
David Hartman and Milan Hladı́k
64
A numerical verification method for a basin of a limit cycle
Tomohirio Hiwaki and Nobito Yamamoto
66
Optimal preconditioning for the interval parametric Gauss–Seidel
method
68
Milan Hladı́k
On Unsolvability of Overdetermined Interval Linear Systems
Jaroslav Horáček and Milan Hladı́k
70
Computing capture tubes
72
Luc Jaulin, Jordan Ninin, Gilles Chabert, Stéphane Le Menec,
Mohamed Saad, Vincent Le Doze and Alexandru Stancu
On relative errors of floating-point operations: optimal bounds and
applications
75
Claude-Pierre Jeannerod and Siegfried M. Rump
Fast Implementation of Quad-Precision GEMM on ARMv8 64-bit
Multi-Core Processor
76
Hao Jiang, Feng Wang, Yunfei Du and Lin Peng
Some Observations on Exclusion Regions in Interval Branch and
Bound Algorithms
78
Ralph Baker Kearfott
Some remarks on the rigorous estimation of inverse linear elliptic
operators
81
Takehiko Kinoshita, Yoshitaka Watanabe and Mitsuhiro T. Nakao
Computer-Assisted Uniqueness Proof for Stokes’ Wave of Extreme
Form
83
Kenta Kobayashi
Error Estimations of Interpolations on Triangular Elements
Kenta Kobayashi and Takuya Tsuchiya
85
Implementing the Interval Picard Operator
M. Konečný, W. Taha, J. Duracz and A. Farjudian
87
Interval methods for solving various kinds of quantified nonlinear
problems
89
Bartlomiej Jacek Kubica
Applied techniques of interval analysis for estimation of experimental data
90
S. I. Kumkov
Replacing branches by polynomials in vectorizable elementary functions
92
Olga Kupriianova and Christoph Lauter
Verified lower eigenvalue bounds for self-adjoint differential operators
94
Xuefeng Liu and Shin’ichi Oishi
Towards the possibility of objective interval uncertainty in physics.
II
96
Luc Longpré and Vladik Kreinovich
How much for an interval? a set? a twin set? a p-box? a Kaucher
interval? An economics-motivated approach to decision making under uncertainty
98
Joe Lorkowski and Vladik Kreinovich
A workflow for modeling, visualizing, and querying uncertain (GPS) localization using interval arithmetic
100
Wolfram Luther
Using range arithmetic in evaluation of compact models
Amin Maher and Hossam A. H. Fahmy
103
Finding positively invariant sets of ordinary differential equations
using interval global optimization methods
105
Mihály Csaba Markót and Zoltán Horváth
A short description of the symmetric solution set
Günter Mayer
107
A Simple Modified Verification Method for Linear Systems
109
Atsushi Minamihata, Kouta Sekine, Takeshi Ogita, Siegfried M.
Rump and Shin’ichi Oishi
Fast inclusion for the matrix inverse square root
Shinya Miyajima
111
Verified solutions of saddle point linear systems
Shinya Miyajima
114
A method of verified computations for nonlinear parabolic equations 117
Makoto Mizuguchi, Akitoshi Takayasu, Takayuki Kubo and Shin’ichi
Oishi
A sharper error estimate of verified computations for nonlinear heat
equations.
119
Makoto Mizuguchi, Akitoshi Takayasu, Takayuki Kubo and Shin’ichi
Oishi
An Interval arithmetic algorithm for global estimation of hidden
Markov model parameters
121
Tiago Montanher and Walter Mascarenhas
Toward hardware support for Reproducible Floating-Point Computation
123
Hong Diep Nguyen and James Demmel
Accurate and efficient implementation of affine arithmetic using
floating-point arithmetic
125
Jordan Ninin and Nathalie Revol
Iterative Refinement for Symmetric Eigenvalue Problems
Takeshi Ogita
127
Automatic Verified Numerical Computations for Linear Systems 129
Katsuhisa Ozaki, Takeshi Ogita and Shin’ichi Oishi
Bernstein branch-and-bound algorithm for unconstrained global optimization of multivariate polynomial MINLPs
131
Bhagyesh V. Patil and P. S. V. Nataraj
Improved Enclosure for Parametric Solution Sets with Linear Shape 134
Evgenija D. Popova
The architecture of the IEEE P1788 draft standard for interval
arithmetic
135
John Pryce
Verified Parameter Identification for Dynamic Systems with Non-Smooth Right-Hand Sides
137
Andreas Rauh, Luise Senkel and Harald Aschemann
Computation of Confidence Regions in Reliable, Variable-Structure
State and Parameter Estimation
139
Andreas Rauh, Luise Senkel and Harald Aschemann
Exponential Enclosure Techniques for Initial Value Problems with
Multiple Conjugate Complex Eigenvalues
141
Andreas Rauh, Ramona Westphal, Harald Aschemann and Ekaterina Auer
Numerical Validation of Sliding Mode Approaches with Uncertainty 143
Luise Senkel, Andreas Rauh and Harald Aschemann
Reserve as recognizing functional for AE-solutions to interval system of linear equations
145
Irene A. Sharaya and Sergey P. Shary
Maximum Consistency Method for Data Fitting under Interval Uncertainty
147
Sergey P. Shary
An Implementation of Complete Arithmetic
Stefan Siegel
149
Non-arithmetic approach to dealing with uncertainty in fuzzy arithmetic
151
Igor Sokolov
True orbit simulation of dynamical systems and its computational
complexity
153
Christoph Spandl
Numerical verification for periodic stationary solutions to the Allen-Cahn equation
155
Kazuaki Tanaka and Shin’ichi Oishi
Choice of metrics in interval arithmetic
Philippe Théveny
157
Numerical Verification for Elliptic Boundary Value Problem with Nonconforming P1 Finite Elements
159
Tomoki Uda
Two applications of interval analysis to parameter estimation in
hydrology.
161
Ronald van Nooijen and Alla Kolechkina
Combining Interval Methods with Evolutionary Algorithms for Global
Optimization
162
Charlie Vanaret, Jean-Baptiste Gotteland, Nicolas Durand and
Jean-Marc Alliot
Dynamic Load Balancing for Rigorous Global Optimization of Dynamic Systems
164
Yao Zhao, Gang Xu and Mark Stadtherr
Abstracts
On the mathematical modelling of a
batch fermentation process using interval
data and verification methods
Rene Alt1, Svetoslav Markov2, Margarita Kambourova3,
Nadja Radchenkova3 and Spasen Vassilev3
2
1
Sorbonne Universites, LIP6, UPMC, CNRS UMR7606
Institute of Mathematics and Informatics, Bulgarian Academy of
Sciences
3
Institute of Microbiology, Bulgarian Academy of Sciences
1
Boite courrier 169 4 place Jussieu 75252 Paris Cedex 0,
2
“Akad. G. Bonchev” st., bl. 8, 1113 Sofia, Bulgaria
3
“Akad. G. Bonchev” st., bl. 26, 1113 Sofia, Bulgaria
[email protected]
Keywords: batch fermentation processes, reaction schemes, dynamic
models, numerical simulations, verification methods
An experiment in a batch laboratory bioreactor for EPS production by Aeribacillus pallidus 418 bacteria is performed, and interval
experimental data for the biomass-product dynamics are obtained [1].
The dynamics of microbial growth and product synthesis are described
by means of several bio-chemical reaction schemes, aiming at an understanding of the underlying biochemical/metabolic mechanisms [2,3].
The proposed reaction schemes lead to systems of ordinary differential
equations whose solutions are fitted to the observed interval experimental data. Suitable parameter identification of the mathematical
models is performed so that the numerically computed results are
included in the interval experimental data, following a verification
approach [4,5]. The proposed models reflect specific features of the
mechanism of the fermentation process, which may suggest further experimental and theoretical work. We believe that with the proposed
methodology one can study the basic mechanisms underlying the dynamics of cell growth, substrate uptake and product synthesis.
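The inclusion requirement described above — a candidate model trajectory is accepted only if it passes through every interval datum — can be sketched in a few lines of plain (non-verified) Python. The logistic model and the interval measurements below are hypothetical stand-ins, not the authors' reaction schemes or data:

```python
# Sketch of the inclusion check: accept a parameter choice only if the
# simulated trajectory lies inside every interval measurement.
# Model and data are illustrative placeholders.

def logistic_trajectory(r, K, x0, times, h=0.01):
    """Integrate x' = r*x*(1 - x/K) with Euler steps; sample at `times`."""
    samples, x, t = [], x0, 0.0
    for t_target in times:
        while t < t_target - 1e-12:
            x += h * r * x * (1.0 - x / K)
            t += h
        samples.append(x)
    return samples

# Hypothetical interval measurements (time, (lower, upper)) of biomass.
data = [(1.0, (0.55, 0.75)), (2.0, (0.9, 1.2)), (4.0, (1.6, 2.0))]

times = [t for t, _ in data]
traj = logistic_trajectory(r=0.7, K=2.2, x0=0.4, times=times)
ok = all(lo <= x <= hi for x, (_, (lo, hi)) in zip(traj, data))
print(ok)  # True: this trajectory lies inside every interval datum
```

A verified variant would replace the Euler steps with guaranteed enclosures of the solution, so that the inclusion test becomes a rigorous statement rather than a floating-point one.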
References:
[1] Radchenkova, N., M. Kambourova, S. Vassilev, R. Alt,
S. Markov, On the mathematical modelling of EPS production
by a thermophilic bacterium, submitted to BIOMATH.
[2] Alt, R., S. Markov, Theoretical and computational studies of
some bioreactor models, Computers and Mathematics with Applications 64 (2012), 350–360.
http://dx.doi.org/10.1016/J.Camwa.2012.02.046
[3] Markov, S., Cell Growth Models Using Reaction Schemes: Batch
Cultivation, Biomath 2/2 (2013), 1312301.
http://dx.doi.org/10.11145/j.biomath.2013.12.301
[4] Wolff v. Gudenberg, J., Proceedings of the Conference Interval96, Reliable Computing 3 (3).
[5] Krämer, W., J. Wolff v. Gudenberg, Scientific Computing,
Validated Numerics, Interval Methods, Proceedings of the conference Scan-2000/Interval-2000, Kluwer/Plenum, 2001.
On the sets of H- and D-continuous
interval functions
1
Roumen Anguelov1 and Svetoslav Markov2
Department of Mathematics and Applied Mathematics, University
of Pretoria, Pretoria 0002, South Africa
2
Institute of Mathematics and Informatics, Bulgarian Academy of
Sciences, “Akad. G. Bonchev” st., bl. 8, 1113 Sofia, Bulgaria
[email protected]
Keywords: real functions, interval functions, H-continuous functions,
D-continuous functions, tight enclosures.
It has been shown that the space of Hausdorff continuous (H-continuous) functions is the largest linear space of interval functions
[1]. This space has important applications in the theory of PDEs and
real analysis [2]. Moreover, the space of H-continuous functions has
a very special place in interval analysis as well, more specifically in
the analysis of interval-valued functions. It has also been shown that
the practically relevant set, in terms of providing tight enclosures of
sets of real functions, is the set of the so-called Dilworth continuous (D-continuous) interval-valued functions [1]. Here we apply the concept of
quasivector space as defined in [3], which captures and preserves the essential properties of computations with interval-valued functions while
also providing a relatively simple structure for computing. Indeed, a
quasivector space is a direct sum of a vector (linear) space and a symmetric quasivector space, which makes the computations essentially as
easy as computations in a linear space. In the considered setting we
prove that the space of H-continuous functions is precisely the linear
space in the direct sum decomposition of the respective quasivector
space.
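As a loose numerical analogy to the direct-sum idea above — for plain intervals rather than interval functions, and purely our illustration, not the paper's construction — an interval splits uniquely into a midpoint part, which adds like a vector, and a symmetric radius part, which never cancels under addition:

```python
# Midpoint-radius decomposition of an interval [lo, hi]:
#   [lo, hi] = m + [-r, r],  m = (lo+hi)/2 (the "linear" part),
#                            r = (hi-lo)/2 (the symmetric part).
# Illustrative sketch only; the paper works with interval FUNCTIONS.

def decompose(lo, hi):
    """Split [lo, hi] into midpoint m and radius r of its symmetric part."""
    return (lo + hi) / 2.0, (hi - lo) / 2.0

def recompose(m, r):
    """Rebuild the interval from its two components."""
    return (m - r, m + r)

# Midpoints add like vectors; radii only accumulate (quasivector behaviour).
m1, r1 = decompose(1.0, 3.0)    # [1, 3] -> m=2, r=1
m2, r2 = decompose(-2.0, 0.0)   # [-2, 0] -> m=-1, r=1
print(recompose(m1 + m2, r1 + r2))  # (-1.0, 3.0) = [1,3] + [-2,0]
```

The fact that the radius components can grow but never cancel is exactly why the symmetric part forms a quasivector rather than a vector space.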
References:
[1] R. Anguelov, S. Markov, B. Sendov, The set of Hausdorff
continuous functions — the largest linear space of interval functions, Reliable Computing 12, 337–363 (2006).
http://dx.doi.org/10.1007/s11155-006-9006-5
[2] J. H. van der Walt, The Linear Space of Hausdorff Continuous
Interval Functions, Biomath 2 (2013), 1311261.
http://dx.doi.org/10.11145/j.biomath.2013.11.261
[3] S. Markov, On Quasilinear Spaces of Convex Bodies and Intervals, Journal of Computational and Applied Mathematics 162 (1),
93–112, 2004.
http://dx.doi.org/10.1016/j.cam.2003.08.016
Result Verification and Uncertainty
Management in Engineering Applications
Ekaterina Auer
University of Duisburg-Essen
47048 Duisburg, Germany
[email protected]
Verified methods can have different uses in engineering applications.
On the one hand, they are able to demonstrate the correctness of results
obtained on a computer using a certain model of the considered system
or process. On the other hand, we can represent bounded parameter
uncertainty in a natural way with their help. This allows us to study
models and make statements about them over whole parameter ranges
instead of their (possibly incidental) point values. However, whole processes cannot be feasibly verified in all cases, the reasons ranging from
inherent difficulties (e.g., for chaotic models) to problems caused by
dependency and wrapping to drawbacks arising simply from choosing
the wrong verified technique. In this talk, we give a detailed overview
of how verified methods – sometimes in combination with other techniques – improve the quality of simulations in engineering. We start
by providing a general view on the role of verified methods in the
verification/validation systematics and the modeling and simulation
cycle for a given process. After that, we point out what concepts and
tools are necessary to successfully apply verified methods, including
not yet (fully) implemented ones that would be nonetheless advantageous. Finally, we consider several applications from (bio)mechanics
and systems control which exemplify the general approach and focus
on different usage areas for verified methods.
Invited talk
On a conjecture of Wright
Balázs Bánhelyi, Tibor Csendes, Tibor Krisztin, and Arnold
Neumaier
University of Szeged
6720 Szeged, Hungary
[email protected]
Keywords: delayed logistic equation, Wright’s equation, Wright’s
conjecture, slowly oscillating periodic solution, discrete Lyapunov functional, Poincaré–Bendixson theorem, verified computational techniques,
computer-assisted proof, interval arithmetic
In 1955 E.M. Wright proved that all solutions of the delay differential equation ẋ(t) = −α(e^{x(t−1)} − 1) converge to zero as t → ∞ for
α ∈ (0, 3/2], and conjectured that this is even true for α ∈ (0, π/2).
The present paper proves the conjecture for α ∈ [1.5, 1.5706] (compare
with π/2 = 1.570796...). The proof is based on the proven fact that
it is sufficient to guarantee the nonexistence of slowly oscillating periodic solutions, and that slowly oscillating periodic solutions with small
amplitudes cannot exist. The talk will give details on the computer-assisted proof part that excludes slowly oscillating periodic solutions
with large amplitudes.
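For illustration only, Wright's equation can be simulated with plain floating-point arithmetic — a naive Euler scheme with a ring buffer supplying the unit delay. This sketch carries none of the rigour of the computer-assisted proof; it merely shows the convergence behaviour for an α well inside (0, 3/2]:

```python
# Naive (non-verified) Euler simulation of Wright's equation
#   x'(t) = -alpha * (exp(x(t - 1)) - 1)
# illustrating convergence to zero for alpha in (0, 3/2].
import math

def simulate_wright(alpha, x0=0.5, t_end=60.0, h=0.001):
    """Euler steps; a ring buffer holds the value one time unit ago."""
    delay_steps = int(round(1.0 / h))
    buf = [x0] * (delay_steps + 1)  # constant initial function on [-1, 0]
    x = x0
    for n in range(int(t_end / h)):
        x_delayed = buf[n % len(buf)]   # this slot holds x(t_n - 1)
        x += h * (-alpha * (math.exp(x_delayed) - 1.0))
        buf[n % len(buf)] = x           # reread delay_steps+1 steps later
    return x

# For alpha = 0.5 the solution has decayed to (numerically) zero by t = 60.
print(abs(simulate_wright(0.5)))
```

Near α = π/2 the decay becomes arbitrarily slow and rounding errors dominate, which is precisely why the conjecture requires interval-based, verified techniques rather than simulations like this one.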
References:
[1] B. Bánhelyi, Discussion of a delayed differential equation with
verified computing technique (in Hungarian), Alkalmazott Matematikai Lapok 24 (2007) 131–150.
[2] B. Bánhelyi, T. Csendes, T. Krisztin, and A. Neumaier,
Global attractivity of the zero solution for Wright’s equation. Accepted for publication in the SIAM J. on Applied Dynamical Systems.
[3] T. Csendes, B.M. Garay, and B. Bánhelyi, A verified optimization technique to locate chaotic regions of a Hénon system, J.
of Global Optimization 35 (2006) 145–160.
[4] E.M. Wright, A non-linear difference-differential equation, J. für
die Reine und Angewandte Mathematik 194 (1955) 66–87.
Verified localization of trajectories with
prescribed behaviour in the forced
damped pendulum
Balázs Bánhelyi and Balázs László Lévai
University of Szeged
6720 Szeged, Hungary
[email protected]
Keywords: localization, forced damped pendulum, chaos
In mathematics, it is quite difficult to define exactly what chaos
really means. Indeed, it is easier to list properties that describe a
so-called chaotic system than to give a precise definition.
A dynamic system is generally classified as chaotic if it is sensitive to
its initial conditions. Chaos can also be characterized by dense periodic
orbits and topological transitivity.
While studying computational approximations of solutions of differential equations, an important question is whether the given
equation has chaotic solutions. The nature of chaos implies that the
numerical simulation must be carried out carefully, taking suitable
measures against distortion due to accumulated rounding errors. Unfortunately, except for a few cases, the recognition of chaos has
remained a hard task that is usually handled by theoretical means
[Hubbard1988].
In our present studies, we investigate a simple mechanical system,
Hubbard's sinusoidally forced damped pendulum [Hubbard1988]. Applying rigorous computations, Bánhelyi et al. [Bánhelyi2008] proved
his 1999 conjecture on the existence of chaos in 2008, but the
problem of finding chaotic trajectories remained entirely open. Here
we present a verified numerical technique capable of locating finite
trajectory segments with, in principle, arbitrary prescribed
qualitative behaviour, and thus of shadowing different types of chaotic
trajectories with high precision. For example, we can make the
pendulum go through any specified finite sequence of gyrations by
choosing the initial conditions appropriately.
To be able to provide solutions with mathematical rigour, the computation of trajectories has to be executed rigorously. With this in mind, we calculated an enclosure of the solution of the differential equation with the VNODE algorithm [Nedialkov2001], which is based on the PROFIL/BIAS interval environment [Knüppel1993]. The search for a solution point is a global optimization problem, to which we applied the C version of the GLOBAL algorithm, a clustering stochastic global optimization technique [Csendes1988]. This method is capable of finding the global optimizer points of moderate-dimensional global optimization problems, provided the relative size of the regions of attraction of the global minimizer points is not very small.
References:
[1] Bánhelyi, B., T. Csendes, B.M. Garay, and L. Hatvani, A computer-assisted proof for Σ3-chaos in the forced damped pendulum equation, SIAM J. Appl. Dyn. Syst., 7, 843–867 (2008).
[2] Csendes, T., Nonlinear parameter estimation by global optimization – efficiency and reliability, Acta Cybernetica, 8, 361–370 (1988).
[3] Hubbard, J.H., The forced damped pendulum: chaos, complication and control, Amer. Math. Monthly, 106 (1999), No. 8, pp. 741–758.
[4] Knüppel, O., PROFIL – Programmer's Runtime Optimized Fast Interval Library, Bericht 93.4, Technische Universität Hamburg-Harburg (1993).
[5] Nedialkov, N.S., VNODE – A validated solver for initial value problems for ordinary differential equations, available at www.cas.mcmaster.ca/~nedialk/Software/VNODE/VNODE.shtml (2001).
SONIC – a nonlinear solver
Lars Balzer
University of Wuppertal
42097 Wuppertal, Germany
[email protected]
Keywords: verified computing, nonlinear systems, SONIC, C-XSC,
filib++
This talk presents the program SONIC – a Solver and Optimizer for Nonlinear Problems based on Interval Computation. It solves nonlinear systems of equations and yields a list of boxes containing all solutions within an initial box. The solver is written in C++ and uses either C-XSC or filib++ as its interval library. The code can be parallelized with MPI or OpenMP. Members of the Applied Computer Science Group of the University of Wuppertal are working on the development of the solver.
SONIC uses a branch-and-bound approach to find and discard subboxes of the initial box that do not contain a solution of the nonlinear system. To speed up the algorithm, the branch-and-bound method is combined with further components; the goal is to reduce both the computation time and the number of boxes that have to be considered.
One component is constraint propagation, whose idea is to spread the known enclosures of the variables over the term net that represents the nonlinear system. After several sweeps over the term net, the considered box is contracted.
Another key element of the algorithm is the interval Newton method with Gauss-Seidel iteration, complemented by different choices for the preconditioning matrix.
The solver generates a list of small boxes that cover all solutions of the system. To verify the existence of a solution in these boxes, a final verification step is applied; for this task SONIC implements several verification methods.
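The core branch-and-bound loop can be conveyed with a toy sketch (Python here for brevity; SONIC itself is C++ on C-XSC or filib++). All names and the example system f(x) = x² − 2 are ours, and the interval evaluation pads bounds by an epsilon instead of using true outward rounding, so this is an illustration, not a rigorous solver.

```python
# Toy interval branch-and-bound in the spirit of SONIC (illustrative only).

def f_range(lo, hi):
    """Crude interval enclosure of f(x) = x*x - 2 on [lo, hi]."""
    products = [lo * lo, lo * hi, hi * hi]
    return min(products) - 2.0 - 1e-12, max(products) - 2.0 + 1e-12

def solve(lo, hi, tol=1e-8):
    """Return small boxes that may contain a zero of f in [lo, hi]."""
    work, results = [(lo, hi)], []
    while work:
        a, b = work.pop()
        flo, fhi = f_range(a, b)
        if flo > 0.0 or fhi < 0.0:   # 0 not in the enclosure: discard box
            continue
        if b - a < tol:              # box small enough: keep as candidate
            results.append((a, b))
            continue
        m = 0.5 * (a + b)            # bisect and examine both halves
        work.extend([(a, m), (m, b)])
    return results

boxes = solve(-3.0, 3.0)             # zeros of x^2 - 2 are +/- sqrt(2)
```

All surviving boxes cluster around ±√2; in SONIC, constraint propagation and the interval Newton step would additionally contract each box before bisection.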
References:
[1] T. Beelitz, C.H. Bischof, B. Lang, P. Willems, SONIC – A framework for the rigorous solution of nonlinear problems, Report BUW-SC 04/7, University of Wuppertal, 2004.
[2] W. Hofschuster, W. Krämer, C-XSC 2.0: A C++ Library for Extended Scientific Computing, Numerical Software with Result Verification, Volume 2991/2004, Springer-Verlag, Heidelberg, pp. 15–35, 2004.
[3] M. Lerch et al., filib++, a Fast Interval Library Supporting Containment Computations, ACM TOMS, Volume 32, Number 2, pp. 299–324, 2006.
[4] L. Balzer, Untersuchung des Einsatzes von Taylor-Modellen bei der Lösung nichtlinearer Gleichungssysteme, Master's thesis, University of Wuppertal, 2013.
Programming techniques for exact real
arithmetic
Andrej Bauer
Faculty of Mathematics and Physics
University of Ljubljana
Ljubljana, Slovenia
[email protected]
There are several strategies for implementing exact computation
with real numbers. Two common ones are based on interval arithmetic
with forward or backward propagation of errors. A less common way
of computing with exact reals is to use Dedekind’s construction of reals
as cuts. In such a setup a real number is defined by two predicates
that describe its lower and upper bounds. We can extract efficient
evaluation strategies from such declarative descriptions by using an interval Newton method. From the point of view of programming language design it is desirable to express the mathematical content of a computation in a direct and abstract way, while still retaining flexibility and control over the evaluation strategy. We shall discuss how such a goal might be achieved using techniques that modern programming languages have to offer.
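As an illustration of the cut-based view (our own toy sketch, not code from the talk): a real number is given by two predicates, and an enclosure is extracted by querying them, with plain bisection standing in for the interval Newton refinement mentioned above.

```python
# A real number presented as a Dedekind cut: one predicate for numbers
# below it, one for numbers above it.  sqrt(2) is described purely
# declaratively; refine() then extracts an enclosure by bisection.

def below(q):
    """Is q a lower bound of sqrt(2)?"""
    return q < 0 or q * q < 2

def above(q):
    """Is q an upper bound of sqrt(2)?"""
    return q > 0 and q * q > 2

def refine(below, above, lo, hi, steps=60):
    """Shrink the enclosure [lo, hi] of the cut by repeated bisection."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if below(mid):
            lo = mid
        elif above(mid):
            hi = mid
        else:                 # mid is neither below nor above: cut found
            return mid, mid
    return lo, hi

lo, hi = refine(below, above, 0.0, 2.0)
```

Note that the evaluation strategy (bisection, number of steps) is entirely separate from the declarative description of the number, which is exactly the separation the abstract argues for.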
Invited talk
Curve Veering for the
Parameter-Dependent Clamped Plate
Henning Behnke
TU Clausthal
38678 Clausthal-Zellerfeld, Germany
[email protected]
Keywords: partial differential equations, eigenvalue problem, eigenvalue enclosures
The computation of vibrations of a thin rectangular clamped plate results in an eigenvalue problem with a partial differential equation of fourth order,
$$\frac{\partial^4 \varphi}{\partial x^4} + P\,\frac{\partial^4 \varphi}{\partial x^2 \partial y^2} + Q\,\frac{\partial^4 \varphi}{\partial y^4} = \lambda \varphi \quad \text{in } \Omega,$$
$$\varphi = 0 \ \text{ and } \ \frac{\partial \varphi}{\partial n} = 0 \quad \text{on } \partial\Omega,$$
for $P, Q \in \mathbb{R}$, $P > 0$, $Q > 0$, and $\Omega = (-\frac{a}{2}, \frac{a}{2}) \times (-\frac{b}{2}, \frac{b}{2}) \subseteq \mathbb{R}^2$.
If we change the geometry of the plate for fixed area, this results in a
parameter-dependent eigenvalue problem. For certain parameters, the
eigenvalue curves seem to cross. We give a numerically rigorous proof
of curve veering, which is based on the Lehmann-Goerisch inclusion
theorems and the Rayleigh-Ritz procedure.
References:
[1] H. Behnke, A Numerically Rigorous Proof of Curve Veering in an Eigenvalue Problem for Differential Equations, Z. Anal. Anwendungen, 15 (1996), pp. 181–200.
Formal verification of tricky numerical
computations
Sylvie Boldo
Inria
LRI, CNRS UMR 8623, Université Paris-Sud
PCRI - Bâtiment 650
Université Paris-Sud
91405 ORSAY Cedex
FRANCE
[email protected]
Keywords: Floating-point, formal proof, deductive verification
Computer arithmetic has applied formal methods and formal proofs
for years. As the systems may be critical and as the properties may
be complex to prove (many sub-cases, error-prone computations), a
formal guarantee of correctness is a wish that can now be fulfilled.
This talk will present a chain of tools to formally verify numerical programs. The idea is to precisely specify what the program requires and ensures. Then, using deductive verification, the tools produce proof obligations that may be proved either automatically or interactively in order to guarantee the correctness of the specifications. Many examples of programs from the literature will be specified and formally verified.
Invited talk
Level 1 Parallel RTN-BLAS:
Implementation and Efficiency Analysis
Chemseddine Chohra,
Philippe Langlois and David Parello
Univ. Perpignan Via Domitia, Digits, Architectures et Logiciels
Informatiques, F-66860, Perpignan. Univ. Montpellier II, Laboratoire
d’Informatique Robotique et de Microélectronique de Montpellier,
UMR 5506, F-34095, Montpellier. CNRS, Laboratoire d’Informatique
Robotique et de Microélectronique de Montpellier, UMR 5506,
F-34095, Montpellier.
[email protected]
Keywords: Floating point arithmetic, numerical reproducibility,
Round-To-Nearest BLAS, parallelism, summation algorithms.
Modern high performance computing (HPC) performs a huge amount of floating-point operations on massively multi-threaded systems. Those systems interleave operations and include both dynamic scheduling and non-deterministic reductions that prevent numerical reproducibility, i.e. getting identical results from multiple runs, even on one given machine: floating-point addition is non-associative, and the results depend on the computation order. Of course, numerical reproducibility is important to debug and check the correctness of programs and to validate the results. Some solutions have been proposed, like the parallel tree scheme [1] or Demmel and Nguyen's recent reproducible sums [2].
Reproducibility, however, is not equivalent to accuracy: a reproducible result may be far away from the exact result. Another way to guarantee numerical reproducibility is to compute the correctly rounded value of the exact result, i.e. to extend the IEEE-754 rounding properties to larger computing sequences. Such a computation, when possible, is certainly more costly. But is it unacceptably so in practice?
We are motivated by round-to-nearest parallel BLAS. Such RTN-BLAS can be implemented thanks to recent algorithms that compute correctly rounded sums. This work is a first step covering the level 1 BLAS routines. We study the efficiency of computing parallel RTN sums compared to reproducible or classic ones – MKL for instance. We focus on HybridSum and OnlineExact, two algorithms that smooth out the extra-cost effect of the condition number for large sums [3,4]. We start with sequential implementations: we describe and analyze some hand-made optimizations that benefit from instruction-level parallelism and pipelining and reduce the memory latency. The optimized extra cost is reduced by at least 25% in the sequential case. Then we propose parallel RTN versions of these algorithms for shared-memory systems. We analyze the efficiency of OpenMP implementations and exhibit both good scaling properties and fewer memory-effect limitations than existing solutions. These preliminary results justify continuing towards the next levels of parallel RTN-BLAS.
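Correctly rounded summation algorithms such as HybridSum and OnlineExact build on error-free transformations. The sketch below is ours (the real algorithms add exponent-indexed accumulators and vectorization, which is where their performance comes from): Knuth's TwoSum plus a distillation-style sum that keeps every rounding error as an extra partial.

```python
# Error-free transformation at the heart of correctly rounded summation.

def two_sum(a, b):
    """Knuth's branch-free TwoSum: (s, e) with s = fl(a + b) and
    a + b = s + e exactly."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def distill(values):
    """Sum a list while preserving every rounding error as a partial."""
    partials = []
    for x in values:
        fresh = []
        for p in partials:
            x, e = two_sum(x, p)
            if e != 0.0:
                fresh.append(e)      # keep the exact residual
        fresh.append(x)
        partials = fresh
    return sum(sorted(partials, key=abs))  # add residuals smallest first

ill = [1e16, 1.0, -1e16]             # plain summation loses the 1.0
```

On this ill-conditioned example a left-to-right sum returns 0.0, while distillation recovers the exact result 1.0.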
References:
[1] O. Villa, D. G. Chavarría-Miranda, V. Gurumoorthi, A. Márquez, and S. Krishnamoorthy, Effects of floating-point non-associativity on numerical computations on massively multithreaded systems, in CUG Proceedings (2009), pp. 1–11.
[2] James Demmel and Hong Diep Nguyen, Fast Reproducible Floating-Point Summation, in 21st IEEE Symposium on Computer Arithmetic, Austin, TX, USA, April 7-10 (2013), pp. 163–172.
[3] Yong-Kang Zhu and Wayne B. Hayes, Correct rounding and a hybrid approach to exact floating-point summation, SIAM J. Sci. Comput. (2009), Vol. 31, No. 4, pp. 2981–3001.
[4] Yong-Kang Zhu and Wayne B. Hayes, Algorithm 908: Online exact summation of floating-point streams, ACM Trans. Math. Software (2010), 37:1–37:13.
Reproducible and Accurate
Matrix Multiplication for
High-Performance Computing
Sylvain Collange, David Defour, Stef Graillat, and
Roman Iakymchuk
INRIA – Centre de recherche Rennes – Bretagne Atlantique
Campus de Beaulieu, F-35042 Rennes Cedex, France
[email protected]
DALI–LIRMM, Université de Perpignan
52 avenue Paul Alduy, F-66860 Perpignan, France
[email protected]
Sorbonne Universités, UPMC Univ Paris 06, UMR 7606, LIP6
F-75005 Paris, France
CNRS, UMR 7606, LIP6, F-75005 Paris, France
Sorbonne Universités, UPMC Univ Paris 06, ICS
F-75005 Paris, France
{stef.graillat, roman.iakymchuk}@lip6.fr
Keywords: Matrix multiplication, reproducibility, accuracy, long accumulator, multi-precision, multi- and many-core architectures.
The increasing power of current computers enables one to solve more and more complex problems. This, in turn, requires performing a huge number of floating-point operations, each one leading to a round-off error. Because of round-off error propagation, some problems must be solved with a longer floating-point format.
As Exascale computing (10^18 operations per second) is likely to be reached within a decade, getting accurate results in floating-point arithmetic on such computers will be a challenge. Another challenge will be the reproducibility of the results – meaning getting a bitwise identical floating-point result from multiple runs of the same code – in the presence of non-associative floating-point operations and dynamic scheduling on parallel computers.
Reproducibility is becoming so important that Intel introduced "Conditional Numerical Reproducibility" (CNR) in its MKL (Math Kernel Library). However, CNR is slow and does not give any guarantee concerning the accuracy of the result. Recently, Demmel and Nguyen [1] proposed an algorithm for reproducible summation. Even though their algorithm is fast, no information is given on the accuracy.
More recently, we introduced [2] an approach to compute deterministic sums of floating-point numbers efficiently and with the best possible accuracy. Our multi-level algorithm consists of two main stages: filtering, which relies upon fast vectorized floating-point expansions, and accumulation, which is based on superaccumulators in a high-radix carry-save representation. We presented implementations on recent Intel desktop and server processors, on the Intel Xeon Phi accelerator, and on both AMD and NVIDIA GPUs. We showed that numerical reproducibility and bit-perfect accuracy can be achieved at no additional cost for large sums that have dynamic ranges of up to 90 orders of magnitude, by leveraging arithmetic units that are left underused by standard reduction algorithms.
In this talk, we will present a reproducible and accurate (rounded to nearest) algorithm for the product of two floating-point matrices in parallel environments like GPUs and the Xeon Phi. This algorithm is based on the DGEMM implementation. We will show that the performance of our algorithm is comparable with the classic DGEMM.
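The superaccumulator idea can be mimicked in a few lines using arbitrary-precision integers: every finite double maps exactly into one wide fixed-point register, so the accumulated value, and hence the single final rounding, cannot depend on the order of the additions. This is a concept sketch only; the paper's contribution is doing this fast with carry-save words, expansions and SIMD.

```python
# Order-independent summation via an integer fixed-point superaccumulator.

SHIFT = 1100   # fractional bits; enough to represent any double exactly

def to_fixed(x):
    """Exact fixed-point image of a finite double."""
    num, den = x.as_integer_ratio()     # x == num / den, den a power of 2
    return (num << SHIFT) // den        # exact: den divides 2**SHIFT

def super_sum(values):
    acc = 0
    for x in values:
        acc += to_fixed(x)              # exact integer addition, any order
    return acc / (1 << SHIFT)           # one correctly rounded conversion
```

Because the accumulation is exact, permuting the input (as a non-deterministic parallel reduction would) cannot change the returned double.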
References:
[1] J. Demmel, H. D. Nguyen, Fast Reproducible Floating-Point Summation, Proceedings of the 21st IEEE Symposium on Computer Arithmetic, Austin, Texas, USA (2013), pp. 163–172.
[2] S. Collange, D. Defour, S. Graillat, R. Iakymchuk, Full-Speed Deterministic Bit-Accurate Parallel Floating-Point Summation on Multi- and Many-Core Architectures, Research Report, HAL ID: hal-00949355, February 2014.
Numerical probabilistic approach for
optimization problems
Boris S. Dobronets and Olga A. Popova
Siberian Federal University
79, Svobodny Prospect.
660041 Krasnoyarsk, Russia
[email protected], [email protected]
Keywords: Random programming, numerical probabilistic analysis,
mathematical programming.
Methods and approaches for solving optimization problems under different types of uncertainty are currently being developed [1]. Most uncertain-programming algorithms use the expectation operator and an averaging procedure. We consider a new approach to optimization problems with uncertain input data. This approach uses numerical probabilistic analysis and allows us to construct the joint probability density function of the optimal solutions.
We refer to methods that construct the solution set of an optimization problem with random input parameters as Random Programming [2,3].
Let us formulate the problem of random programming as follows:
$$f(x, \xi) \to \min, \qquad (1)$$
$$g_i(x, \xi) \le 0, \quad i = 1, \ldots, m, \qquad (2)$$
where $x$ is the solution vector, $\xi$ is a random vector of parameters, $f(x, \xi)$ is the objective function, and the $g_i(x, \xi)$ are the constraint functions.
The vector $x^*$ is a solution of problem (1)–(2) if
$$f(x^*, \xi) = \inf_{U} f(x, \xi), \quad \text{where} \quad U = \{x \mid g_i(x, \xi) \le 0, \ i = 1, \ldots, m\}.$$
The solution set of (1)–(2) is defined as follows:
$$X = \{x \mid f(x, \xi) \to \min, \ g_i(x, \xi) \le 0, \ i = 1, \ldots, m, \ \xi \in \boldsymbol{\xi}\}.$$
Note that $x^*$ is a random vector. So, in contrast to the deterministic problem, for $x^*$ it is necessary to determine not only the probability density function of each component $x^*_i$ but the joint probability density function.
Unlike most methods of stochastic programming, in Random Programming we can construct the joint probability density function $P_x$ of the random vector $x^*$. To construct $P_x$ for problem (1)–(2) we use probabilistic extensions and the quasi-Monte Carlo method [2]. This allows us to construct procedures for solving systems of linear algebraic equations and nonlinear equations with random coefficients.
Relying on numerical examples, we show that the random programming procedures are an effective method for linear and nonlinear optimization problems.
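The flavour of the approach can be conveyed by a plain Monte Carlo stand-in (the toy problem and all names below are ours; the authors use probabilistic extensions and quasi-Monte Carlo rather than naive sampling): sample the random parameters, solve each realization, and estimate the density of the optimal solution.

```python
# Monte Carlo stand-in for "random programming" on a toy problem with a
# closed-form solution:  min (x - xi1)^2  s.t.  x <= xi2.

import random

def optimal_x(xi1, xi2):
    """Optimal solution of the toy problem, i.e. min(xi1, xi2)."""
    return min(xi1, xi2)

def solution_density(n=20000, bins=20, seed=1):
    """Histogram estimate of the density of x* for xi1, xi2 ~ U(0, 1)."""
    rng = random.Random(seed)
    hist = [0] * bins
    for _ in range(n):
        x_star = optimal_x(rng.random(), rng.random())
        hist[min(int(x_star * bins), bins - 1)] += 1
    return [bins * h / n for h in hist]    # normalized bin heights

density = solution_density()   # true density of min(U, U) is 2(1 - x)
```

The estimated density is decreasing on [0, 1], matching the analytic density 2(1 − x) of the minimum of two independent uniforms.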
References:
[1] B. Liu, Theory and Practice of Uncertain Programming (2nd Edition), Springer-Verlag, Berlin, 2009.
[2] O.A. Popova, Optimization Problems with Random Data, Journal of Siberian Federal University, Mathematics & Physics, 6 (2013), No. 4, pp. 506–515.
[3] B. Dobronets, O. Popova, Linear optimization problems with random data, VII Moscow International Conference on Operations Research (ORM 2013), Moscow, October 15–19, 2013, Proceedings Vol. 1, Eds. P.S. Krasnoschekov, A.A. Vasin, A.F. Izmailov, MAKS Press, Moscow, 2013, pp. 15–18.
Algorithmic and Software Challenges at
Extreme Scales
Jack Dongarra
University of Tennessee, USA
Oak Ridge National Laboratory, USA
University of Manchester, UK
[email protected]
In this talk we examine how high performance computing has changed over the last ten years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our software. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will look at five areas of research that will have an important impact on the development of software and algorithms.
We will focus on the following themes:
• Redesign of software to fit multicore and hybrid architectures
• Automatically tuned application software
• Exploiting mixed precision for performance
• The importance of fault tolerance
• Communication-avoiding algorithms
Invited talk
Towards High Performance Stochastic
Arithmetic
Pacôme Eberhart 1), Julien Brajard 2), Pierre Fortin 1) and Fabienne Jézéquel 3)
1) Sorbonne Universités, UPMC Univ Paris 06, CNRS, UMR 7606, LIP6, F-75005, Paris, France
2) Sorbonne Universités (UPMC, Univ Paris 06), CNRS, IRD, MNHN, LOCEAN Laboratory, 4 place Jussieu, F-75005, Paris, France
3) Sorbonne Universités, UPMC Univ Paris 06, CNRS, UMR 7606, LIP6, F-75005, Paris, France, and Université Paris 2, France
[email protected]
Keywords: numerical validation, stochastic arithmetic, high performance computing, SIMD processing
Because of the finite representation of floating-point numbers in computers, the results of arithmetic operations need to be rounded. The CADNA library [1], based on discrete stochastic arithmetic [2], can be used to estimate the propagation of rounding errors in scientific codes. By synchronously computing each operation three times with a randomly chosen rounding mode, CADNA estimates the number of exact significant digits of the result within a 95% confidence interval. To ensure the validity of the method and allow a better analysis of the program, several types of anomalies are checked at execution time.
However, the overhead on computation time can be up to a factor of 80, depending on the program and on the level of anomaly detection [3]. Two main factors explain this: the cost of anomaly detection and that of the stochastic operations. Firstly, the detection of cancellations (sudden losses of accuracy in a single operation) is based on the computation of the number of exact significant digits, which relies on a logarithmic evaluation. This mathematical function is much more costly than floating-point arithmetic operations. Secondly, the
stochastic operators are currently implemented through the overloading of arithmetic operators and the change of the rounding mode of the FPU (Floating Point Unit). This method makes vectorization impossible, as each vector lane would need a different rounding mode; moreover, it causes performance overhead due to the function calls and to the flushing of the FPU pipelines. This implies an even greater performance drop for HPC applications that rely on SIMD (Single Instruction Multiple Data) processing and on pipeline filling for better efficiency.
To bypass these overheads and allow the use of vector instructions for SIMD parallelism, we propose several improvements to the CADNA library. Since only the integer part of the number of exact significant digits is required, we can use the exponent of a floating-point value as an approximation of the logarithm evaluation, which removes the logarithm function call. To avoid the cost of function calls, we propose to inline the stochastic operators. Finally, rather than depending on the rounding modes of the FPU, we compute the randomly rounded arithmetic operations by handling the sign bit of the operands through masks. These contributions provide a speedup factor of up to 2.5 on scalar code. They also enable the use of CADNA with vectorized code: SIMD performance results on high-end CPUs and on an Intel Xeon Phi are presented.
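A drastically simplified model of the idea (ours, not CADNA's implementation, which switches the actual FPU rounding mode): run a cancellation-prone computation several times with one-ulp perturbations of different signs and count the decimal digits the samples agree on.

```python
# Toy model of discrete stochastic arithmetic: perturb, rerun, compare.

import math

ULP = 2.0 ** -52     # about one unit in the last place at magnitude 1

def cancellation_demo(s1, s2):
    """Compute ((1 + 1e-12) - 1) / 1e-12 with perturbed operands."""
    a = (1.0 + 1e-12) * (1.0 + s1 * ULP)
    b = 1.0 * (1.0 + s2 * ULP)
    return (a - b) / 1e-12          # severe cancellation in a - b

def common_digits(samples):
    """Rough count of decimal digits shared by all samples."""
    mean = sum(samples) / len(samples)
    spread = max(abs(s - mean) for s in samples)
    if spread == 0.0:
        return 15                   # all samples identical: full precision
    return max(0, int(-math.log10(spread / abs(mean))))

digits = common_digits([cancellation_demo(1, -1),
                        cancellation_demo(-1, 1),
                        cancellation_demo(1, 1)])
```

The samples disagree from roughly the fourth digit on, revealing the accuracy lost to cancellation; CADNA reports exactly this kind of estimate, with statistical guarantees this toy lacks.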
References:
[1] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Num. Algo., 37 (2004), No. 1–4, pp. 377–390.
[2] Université Pierre et Marie Curie, Paris, CADNA: Control of Accuracy and Debugging for Numerical Applications, http://www.lip6.fr/cadna
[3] F. Jézéquel, J.-L. Lamotte, and O. Chubach, Parallelization of discrete stochastic arithmetic on multicore architectures, 2013 Tenth International Conference on Information Technology: New Generations (ITNG), pp. 160–166, Apr. 2013.
Modal Interval Floating Point Unit with
Decorations
Abdelrahman Elskhawy, Kareem Ismail and Maha Zohdy
Electronics and Communications Department, Faculty of Engineering,
Cairo University, Egypt
[email protected]
1  Introduction
Rounding errors in digital computations with floating-point numbers may lead to totally inaccurate results. One mathematical approach to monitoring and controlling rounding errors is interval computation, popularized as classical interval arithmetic by Ramon E. Moore in 1966 [1]. Results obtained via interval arithmetic operations are mathematically proven to bound the correct result of the computation.
2  Modal Intervals
A generalized extension of the classical intervals, the modal intervals, was presented in 1980. Modal Interval Arithmetic (MIA) has some advantages over classical intervals: it solves some of the problems of the latter, such as the existence of additive and multiplicative inverses, a stronger sub-distributive law, and the ability to solve interval equations that classical intervals fail to solve while still obtaining meaningful interval results [2]. This helps solve serious problems in applications like control and computer graphics. Due to the poor performance of software implementations of the basic interval operations, research has turned to hardware implementations of MIA.
3  Previous Work
Only one hardware implementation of a modal interval adder/subtractor has been published [3]. It provides a hardware implementation of modal interval double-precision floating-point adder/subtractor and multiplier units and proposes two different hardware implementation approaches (serial and parallel) for each of these units.
4  Decorations
A decoration, a mechanism for handling exceptions, is information attached to an interval; the combination is called a decorated interval. A decoration describes a property not of the interval it is attached to, but of the function evaluated on the input. It is worth mentioning that decorations are part of the draft standard P1788 for interval floating-point arithmetic, and that there is no previous hardware implementation of them [4]. This standard's decoration model, in contrast with IEEE 754's, has no status flags [5].
The set D of decorations has five members, shown in Table 1. Since a decoration may take one of five values, it is implemented in 3 bits, giving different decorated output intervals according to the input intervals and the operation evaluated.
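A possible software model of decoration propagation (our simplified reading of the P1788 draft, with encodings taken from Table 1; the standard's full rules are richer): decorations are ordered from worst to best, and a result carries the worst decoration among its inputs and the operation's own local decoration.

```python
# Simplified decoration propagation model for P1788-style decorations.

ORDER = {"ill": 0, "trv": 1, "def": 2, "dac": 3, "com": 4}
ENCODING = {"com": 0b000, "dac": 0b001, "def": 0b010,
            "trv": 0b011, "ill": 0b100}

def propagate(dec_x, dec_y, local="com"):
    """Decoration of f(x, y): the worst of both inputs and the
    operation's own local decoration (e.g. 'trv' if f was evaluated
    outside its domain)."""
    return min(dec_x, dec_y, local, key=ORDER.get)
```

In the hardware unit this reduces to a small comparator tree over the 3-bit encodings.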
5  Proposed Implementation
The work presented here adopts the double-path floating-point adder design, which speculatively performs the addition on two distinct low-latency paths (the CLOSE and FAR paths) depending on the exponent difference and the effective operation [6]. The correct result is then selected at the end of the computation. The double-path unit is built on the following assumptions:
1) FAR path: if the exponent difference is > 1, then for effective subtraction the maximum number of leading zeros is one, and only a one-bit left shift
Decoration | Logical value | Short description | Definition
com | 000 | Common | x is a bounded, nonempty subset of Dom(f); f is continuous at each point of x, and the computed interval f(x) is bounded.
dac | 001 | Defined & continuous | x is a nonempty subset of Dom(f), and the restriction of f to x is continuous.
def | 010 | Defined | x is a nonempty subset of Dom(f).
trv | 011 | Trivial | Always true (so gives no information).
ill | 100 | Ill-formed | Not an interval; formally Dom(f) = ø.
Table 1: Decoration values
might be required for normalization, with no need for leading-zero detection. For effective addition there is no possibility of leading zeros appearing, but a full-length right shifter is required to align the two mantissas.
2) NEAR path, used for effective subtraction only: if the exponent difference is 0 or 1, then only a one-bit right shift might be needed. Counting the possible leading zeros is performed in parallel with the operation.
In the NEAR path, a compound adder is used to calculate all the possible results and reduce the conversion step to a simple selection [7], while in the FAR path the mantissas are swapped based on the exponent difference to produce only positive results [7]. Thus, the conversion step (two's complementing) is no longer needed.
The proposed design supports all addition/subtraction operations, the special cases resulting from infinities, denormalized numbers, and NaN inputs, as well as decorations according to the IEEE P1788 draft standard.
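The path-selection logic described by the two assumptions above can be modelled in a few lines (an abstract software model with names of our choosing, not the Verilog implementation):

```python
# Abstract model of the double-path split in a floating-point adder.

def effective_op(op, sign_a, sign_b):
    """Fold the operand signs into the effective operation
    (signs: 0 = positive, 1 = negative)."""
    eff_sub = (op == "sub") == (sign_a == sign_b)
    return "sub" if eff_sub else "add"

def choose_path(exp_a, exp_b, eff):
    """CLOSE path handles possible massive cancellation (effective
    subtraction with exponent difference 0 or 1); FAR path the rest."""
    if eff == "sub" and abs(exp_a - exp_b) <= 1:
        return "close"
    return "far"
```

Both paths are evaluated speculatively in the hardware; this selection merely picks which speculative result is kept.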
6  Results & Testing
After implementation in Verilog and simulation using Quartus and DC Compiler, we obtained a maximum frequency of 283 MHz on device EP3SL50F780C2 of the Stratix III family, and 1.126 GHz in 65 nm technology for the ASIC simulation.
The design was tested using a C++ program to generate test vectors and a file-compare tool to check the outputs of the design against them. Since it is impossible to cover all possible input combinations, the numbers were divided into different ranges that were covered independently.
References:
[1] R.E. Moore, Interval Analysis, Prentice-Hall, Englewood Cliffs, New Jersey, 1966.
[2] M.A.E. Gardenes, Modal Intervals, Reliable Computing, August 2008.
[3] A.A.B. Omar, Hardware Implementation of Modal Interval Adder/Subtractor and Multiplier, MSc thesis, Electronics and Communications Department, Faculty of Engineering, Cairo University, 2012.
[4] IEEE P1788, Draft Standard for Interval Floating Point Arithmetic.
[5] IEEE Std 754-2008, IEEE Standard for Floating-Point Arithmetic, August 2008.
[6] G. Even and P.M. Seidel, Delay-Optimized Implementation of IEEE Floating Point Addition, IEEE Transactions on Computers, Vol. 53, No. 2, pp. 97–113, 2004.
[7] S. Oberman, Design Issues in High Performance Floating Point Arithmetic Units, PhD thesis, Stanford University, 1996.
Sign Regular Matrices Having the
Interval Property
Jürgen Garloff 1) and Mohammad Adm 2)
1)
University of Applied Sciences / HTWG Konstanz
Faculty for Computer Science
D-78405 Konstanz, Germany
P. O. Box 100543
[email protected]
and
2)
University of Konstanz
Department of Mathematics and Statistics
D-78464 Konstanz, Germany
[email protected]
We say that a class C of n-by-n matrices possesses the interval property if for any n-by-n interval matrix $[A] = [\underline{A}, \overline{A}] = ([\underline{a}_{ij}, \overline{a}_{ij}])_{i,j=1,\ldots,n}$ the membership $[A] \subset C$ can be inferred from the membership to C of a specified set of its vertex matrices, where a vertex matrix of $[A]$ is a matrix $A = (a_{ij})$ with $a_{ij} \in \{\underline{a}_{ij}, \overline{a}_{ij}\}$, $i, j = 1, \ldots, n$. Examples of such classes include the
• M-matrices or, more generally, the inverse-nonnegative matrices [8], where only the bound matrices $\underline{A}$ and $\overline{A}$ are required to be in the class;
• inverse M-matrices [7], where all vertex matrices are needed;
• positive definite matrices [3], [11], where a subset of cardinality $2^{n-1}$ is required (here only the symmetric matrices in $[A]$ are considered).
A class of matrices which, in the nonsingular case, is somewhat related to the inverse-nonnegative matrices is the class of totally nonnegative matrices. A real matrix is called totally nonnegative if all its minors are nonnegative. Such matrices arise in a variety of ways in mathematics and its applications, e.g., in differential and integral equations, numerical mathematics, combinatorics, statistics, and computer-aided geometric design. For background information we refer to the recently published monographs [4], [10]. In 1982 the speaker posed the conjecture that the set of nonsingular totally nonnegative matrices possesses the interval property, where only two vertex matrices are involved [5]; see also [4, Section 3.2] and [10, Section 3.2]. The two vertex matrices are the bound matrices with respect to the checkerboard ordering, which is obtained from the usual entry-wise ordering on the set of square matrices of fixed order by reversing the inequality sign for each entry in a checkerboard fashion. This conjecture originated in the interpolation of interval-valued data by B-splines. During the last three decades many attempts have been made to settle the conjecture. Some subclasses of the totally nonnegative matrices have been identified for which the interval property holds; however, the general problem remained open.
In our talk we apply the Cauchon algorithm (also called the deleting derivations algorithm [6] and the Cauchon reduction algorithm [9]) to settle the conjecture. We further report on some other recent results, viz. we
• give, for each entry of a nonsingular totally nonnegative matrix, the largest amount by which this entry can be perturbed without losing the property of total nonnegativity;
• identify other subclasses of the sign regular matrices exhibiting the interval property, i.e., of matrices with the property that all their minors of fixed order have one specified sign or are also allowed to vanish. This leads us to a new open problem.
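For small matrices, total nonnegativity can be checked directly from the definition, which is a useful reference point even though it enumerates exponentially many minors (the Cauchon algorithm discussed in the talk is the efficient route). A sketch with exact rational arithmetic:

```python
# Brute-force check of total nonnegativity: every minor must be >= 0.

from fractions import Fraction
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion along the first row (exact)."""
    if len(m) == 1:
        return m[0][0]
    total = Fraction(0)
    for j, a in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * a * det(minor)
    return total

def totally_nonnegative(matrix):
    """True if all minors of the square matrix are nonnegative."""
    n = len(matrix)
    a = [[Fraction(x) for x in row] for row in matrix]
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                sub = [[a[i][j] for j in cols] for i in rows]
                if det(sub) < 0:
                    return False
    return True
```

The interval property then says that, for an interval matrix, such a check need only be run on the two checkerboard bound matrices rather than on all vertex matrices.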
References:
[1] M. Adm and J. Garloff, Invariance of total nonnegativity of a tridiagonal matrix under element-wise perturbation, Oper. Matrices, in press.
[2] M. Adm and J. Garloff, Intervals of totally nonnegative matrices, Linear Algebra Appl., 439 (2013), No. 12, pp. 3796–3806.
[3] S. Bialas and J. Garloff, Intervals of P-matrices and related matrices, Linear Algebra Appl., 58 (1984), pp. 33–41.
[4] S. M. Fallat and C. R. Johnson, Totally Nonnegative Matrices, Princeton Series in Applied Mathematics, Princeton University Press, Princeton and Oxford, 2011.
[5] J. Garloff, Criteria for sign regularity of sets of matrices, Linear Algebra Appl., 44 (1982), pp. 153–160.
[6] K. R. Goodearl, S. Launois and T. H. Lenagan, Totally nonnegative cells and matrix Poisson varieties, Adv. Math., 226 (2011), pp. 779–826.
[7] C. R. Johnson and R. S. Smith, Intervals of inverse M-matrices, Reliab. Comput., 8 (2002), pp. 239–243.
[8] J.R. Kuttler, A fourth-order finite-difference approximation for the fixed membrane eigenproblem, Math. Comp., 25 (1971), pp. 237–256.
[9] S. Launois and T. H. Lenagan, Efficient recognition of totally nonnegative matrix cells, Found. Comput. Math., in press.
[10] A. Pinkus, Totally Positive Matrices, Cambridge Tracts in Mathematics 181, Cambridge Univ. Press, Cambridge, UK, 2010.
[11] J. Rohn, Positive definiteness and stability of interval matrices, SIAM J. Matrix Anal. Appl., 15 (1994), pp. 175–184.
Convergence of the Rational Bernstein
Form
Jürgen Garloff1) and Tareq Hamadneh2)
1) University of Applied Sciences / HTWG Konstanz
Faculty of Computer Science
P. O. Box 100543
D-78405 Konstanz, Germany
[email protected]
and
2) University of Konstanz
Department of Mathematics and Statistics
D-78464 Konstanz, Germany
[email protected]
A well-established tool for finding tight bounds on the range of a
multivariate polynomial
$$p(x) = \sum_{i=0}^{l} a_i x^i, \qquad x = (x_1, \dots, x_n), \quad i = (i_1, \dots, i_n),$$
over a box X is the (polynomial) Bernstein form [1,2,4-6]. This is
obtained by expanding p in Bernstein polynomials. Then the minimum
and maximum of the coefficients of this expansion, the so-called
Bernstein coefficients
$$b_i(p) = \sum_{j=0}^{i} \frac{\binom{i}{j}}{\binom{l}{j}}\, a_j, \qquad i = 0, \dots, l,$$
provide lower and upper bounds for the range of p over X. It is known
that the bounds converge to the range
• linearly if the degree of the Bernstein polynomials is elevated,
• quadratically with respect to the width of X,
• quadratically with respect to the width of subboxes if subdivision
is applied.
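For n = 1 and X = [0, 1] the Bernstein coefficients and their enclosure property can be sketched in a few lines; this is an illustrative example only, with the polynomial and degree chosen for the demonstration:

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients of p(x) = sum_i a[i] x^i on [0, 1]."""
    l = len(a) - 1
    return [sum(comb(i, j) / comb(l, j) * a[j] for j in range(i + 1))
            for i in range(l + 1)]

# p(x) = x^2 - x has range [-1/4, 0] on [0, 1]
b = bernstein_coeffs([0.0, -1.0, 1.0])    # -> [0.0, -0.5, 0.0]
assert min(b) <= -0.25 and max(b) >= 0.0  # coefficients enclose the range
```

Elevating the degree tightens min(b) toward the true minimum -1/4, in line with the linear convergence stated above.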
In [3] the rational Bernstein form for bounding the range of the
rational function f = p/q over X is presented, viz.
$$\min_{i=0,\dots,l} \frac{b_i(p)}{b_i(q)} \;\le\; f(x) \;\le\; \max_{i=0,\dots,l} \frac{b_i(p)}{b_i(q)}, \qquad x \in X.$$
It turned out that some important properties of the polynomial
Bernstein form do not carry over to the rational Bernstein form, e.g.,
the convex hull property and the monotonic convergence of the bounds.
In our talk we show that the convergence properties listed
above nevertheless remain in force for the rational Bernstein form. Similar results
hold for the rational Bernstein form over triangles.
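In the univariate case on [0, 1] the rational bound can be sketched directly from the Bernstein coefficients of numerator and denominator; the sketch below is illustrative only and assumes all Bernstein coefficients of q have one sign:

```python
from math import comb

def bernstein_coeffs(a):
    l = len(a) - 1
    return [sum(comb(i, j) / comb(l, j) * a[j] for j in range(i + 1))
            for i in range(l + 1)]

def rational_bernstein_bounds(p, q):
    """Range bounds for f = p/q on [0, 1]; p, q are coefficient lists,
    padded with zeros to a common degree l."""
    l = max(len(p), len(q)) - 1
    bp = bernstein_coeffs(p + [0.0] * (l + 1 - len(p)))
    bq = bernstein_coeffs(q + [0.0] * (l + 1 - len(q)))
    assert all(c > 0 for c in bq) or all(c < 0 for c in bq)
    ratios = [x / y for x, y in zip(bp, bq)]
    return min(ratios), max(ratios)

# f(x) = x / (1 + x) has range [0, 1/2] on [0, 1]
lo, hi = rational_bernstein_bounds([0.0, 1.0], [1.0, 1.0])
assert lo <= 0.0 and hi >= 0.5
```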
References:
[1] G. T. Cargo and O. Shisha, The Bernstein form of a polynomial, J. Res. Nat. Bur. Stand., 70B (1966), pp.79-81.
[2] J. Garloff, Convergent bounds for the range of multivariate polynomials, in Interval Mathematics 1985, K. Nickel, Ed.,
Lecture Notes in Computer Science, 212 (1986), Springer-Verlag,
Berlin, Heidelberg, New York, pp. 37-56.
[3] A. Narkawicz, J. Garloff, A. P. Smith and C. Muñoz,
Bounding the range of a multivariate rational function over a box,
Reliab. Comput., 17 (2012), pp. 34-39.
[4] T. J. Rivlin, Bounds on a polynomial, J. Res. Nat. Bur. Stand.,
74B (1970), pp. 47-54.
[5] V. Stahl, Interval methods for bounding the range of polynomials
and solving systems of nonlinear equations, dissertation, Johannes
Kepler Universität Linz (1995).
[6] M. Zettler and J. Garloff, Robustness analysis of polynomials with polynomial parameter dependency, IEEE Trans. Automat. Contr., 43 (1998), pp. 425-431.
Interval regularization approach to the
Firordt method of the spectroscopic
analysis of the nonseparated mixtures
Valentin Golodov
South Ural State University
454080 Chelyabinsk, Russia
[email protected]
Keywords: system of linear equations, interval uncertainty, interval
regularization, Firordt method, exact computations
The Firordt method is one of the methods for the analysis of nonseparated mixtures [1]. According to the Firordt method, the concentrations $c_j$ of the components of an m-component mixture are determined by solving a system of equations of the form
$$b_i = \sum_{j=1}^{m} a_{ij} \cdot l \cdot c_j, \qquad (1)$$
where $b_i$ is the absorbance of the analyzed mixture at the i-th analytical wavelength (AWL), $a_{ij}$ is the molar absorption coefficient of the j-th component at the i-th AWL, and l is a constant. The number k of AWLs (the number of equations) is usually equal to the number m of components in the mixture. Overdetermined systems with k > m are used for enhanced accuracy.
The results of the spectroscopy may be imprecise, so we have to analyze an imprecise system of linear algebraic equations with equations of the form (1):
$$b_i = \sum_{j=1}^{m} a_{ij} \cdot l \cdot c_j. \qquad (2)$$
We consider interval linear algebraic systems of equations Ax = b,
with an interval matrix A and interval right-hand side vector b, as a
model of imprecise systems of linear algebraic equations of the same
form.
We use a new regularization procedure proposed in [3] that reduces
the solution of the imprecise linear system to computing a point from
the tolerable solution set for the interval linear system with a widened
right-hand side. The tolerable solution set is, among all the solution sets [2], the least sensitive to changes in the interval matrix of the system Ax = b. We exploit this idea, which may be called interval regularization, for the system of equations of the Firordt method of the form (2).
With regard to the system of equations of the Firordt method (especially an overdetermined one), this interval regularization technique provides enhanced accuracy. Our computing technique uses exact rational computations, which allows us to solve sensitive and ill-conditioned problems [4].
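The role of exact rational arithmetic can be sketched on a tiny instance of the Firordt system (1); the data below are hypothetical, and the overdetermined system is solved by exact normal equations rather than by the tolerable-solution-set machinery of [3]:

```python
from fractions import Fraction as F

# Hypothetical 2-component mixture measured at k = 3 AWLs (k > m = 2),
# Beer-Lambert model b_i = sum_j a_ij * l * c_j with path length l = 1.
A = [[F(2), F(1)],
     [F(1), F(3)],
     [F(1), F(1)]]
c_true = [F(1, 2), F(1, 4)]
b = [sum(aij * cj for aij, cj in zip(row, c_true)) for row in A]

# Exact rational normal-equations solution of the overdetermined system.
AtA = [[sum(A[i][r] * A[i][s] for i in range(3)) for s in range(2)]
       for r in range(2)]
Atb = [sum(A[i][r] * b[i] for i in range(3)) for r in range(2)]
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
c = [(Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det,
     (AtA[0][0] * Atb[1] - AtA[1][0] * Atb[0]) / det]
assert c == c_true   # consistent data: recovered exactly, no rounding error
```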
References:
[1] Vlasova I.V., Vershinin V.I., Determination of binary mixture components by the Firordt method with errors below the
specified limit, Journal of Analytical Chemistry, vol. 64(2009),
No. 6, pp. 553–558.
[2] S.P. Shary, A new technique in systems analysis under interval uncertainty and ambiguity, Reliable Computing, vol. 8 (2002),
No. 5, pp. 321–418.
[3] Anatoly V. Panyukov, Valentin A. Golodov, Computing Best Possible Pseudo-Solutions to Interval Linear Systems of Equations,
Reliable Computing, Volume 19 (2013), Issue 2, pp. 215-228.
[4] V.A. Golodov and A.V. Panyukov, Library of classes “Exact
Computation 2.0”. State. reg. 201361818, March 14, 2013. Official
Bulletin of Russian Agency for Patents and Trademarks, Federal
Service for Intellectual Property, 2013, No. 2. Series “Programs
for Computers, Databases, Topology of VLSI”. (in Russian)
A method of calculating faithful rounding
of l2-norm for n-vectors
Stef Graillat1, Christoph Lauter1, Ping Tak Peter Tang2,
Naoya Yamanaka3 and Shin’ichi Oishi3
1 Sorbonne Universités, UPMC Univ Paris 06, UMR 7606, LIP6, 4 place Jussieu, F-75005 Paris, France
2 Intel Corporation, 2200 Mission College Blvd, Santa Clara, CA 95054, USA
3 Faculty of Science and Engineering, Waseda University, 3-4-1 Okubo, Tokyo 169-8555, Japan
[email protected], [email protected], [email protected], [email protected], [email protected]
Keywords: Floating-point arithmetic, error-free transformations, faithful rounding, 2-norm, underflow, overflow
In this paper, we present an efficient algorithm to compute the
faithful rounding of the l2-norm, $\sqrt{\sum_{j=1}^{n} x_j^2}$, of a floating-point vector
$[x_1, x_2, \dots, x_n]^T$. This means that the result is accurate to within one
bit of the underlying floating-point type. The algorithm is also faithful
in exception generation: an overflow or underflow exception is generated if and only if the input data calls for this event. This new
algorithm is also well suited for parallel and vectorized implementations. In contrast to other algorithms, the expensive floating-point
division operation is not used. We demonstrate our algorithm with
an implementation that runs about 4.5 times faster than the netlib
version [1].
There are three novel aspects to our algorithm for l2-norms:
First, for an arbitrary real value $\sigma$, we establish an accuracy condition on a floating-point approximation S to $\sigma$ that guarantees the
correct rounding of the square root, $\sqrt{S}$, to be a faithful rounding of $\sqrt{\sigma}$.
Second, we propose a way of computing an approximation S to the
sum $\sigma = \sum_j x_j^2$ that satisfies the accuracy condition. This summation
algorithm makes use of error-free transformations [4] at crucial steps.
Our error-free transformation is custom designed for l2-norm computation and thus requires fewer renormalization steps than a more general
error-free transformation needs. We show that the approximation S is
accurate up to a relative error bound of $\gamma_\ell(3\varepsilon^2)$, where $\varepsilon$ is the machine epsilon and $\gamma_\ell(\delta) = \ell\delta/(1 - \ell\delta)$ bounds the accumulated error
over $\ell$ summation steps [3] for an underlying addition operation with a
relative error bound of $\delta$. Our derivation of $\delta = 3\varepsilon^2$ is an enhancement;
the standard bounds on $\delta$ in the literature are strictly greater than $3\varepsilon^2$.
Third, in order to avoid spurious overflow and underflow in the intermediate computations, our algorithm extends the previous work by
Blue [2]: the input data $x_j$ are appropriately scaled into “bins” such
that computing and accumulating their squares $x_j^2$ is guaranteed exception free. While Blue uses three bins and the division operation,
our algorithm uses only two and is division free. These properties economize register usage and improve performance. The claim of faithful rounding and exception generation is supported by mathematical
proofs. The proof of faithful overflow generation is relatively straightforward, but that for faithful underflow generation requires considerably greater care.
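The flavor of the error-free summation step can be sketched in a few lines; the code below is the generic compensated sum of squares in the style of [4], not the authors' custom binned transformation, and it ignores overflow and underflow:

```python
from fractions import Fraction

def two_sum(a, b):
    # Knuth's error-free transformation: a + b = s + e exactly
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def two_prod(a, b):
    # Dekker/Veltkamp splitting: a * b = p + e exactly (no overflow assumed)
    p = a * b
    C = 134217729.0                  # 2**27 + 1
    ah = (C * a) - (C * a - a); al = a - ah
    bh = (C * b) - (C * b - b); bl = b - bh
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def sum_of_squares_dd(xs):
    # compensated accumulation of sum x_j^2: result + correction
    s = c = 0.0
    for x in xs:
        p, e = two_prod(x, x)
        s, e2 = two_sum(s, p)
        c += e + e2
    return s, c

xs = [1.0 + i / 7 for i in range(100)]
s, c = sum_of_squares_dd(xs)
exact = sum(Fraction(x) ** 2 for x in xs)
assert abs(Fraction(s) + Fraction(c) - exact) < Fraction(1, 2 ** 40) * exact
```

TwoSum and TwoProd return the rounded result together with its exact rounding error, so the accumulated correction c recovers almost all of the information lost in working precision.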
References:
[1] Anderson, Bai, Bischof, Blackford, Demmel, Dongarra,
Du Croz, Hammarling, Greenbaum, McKenney, and
Sorensen, LAPACK Users’ guide (third ed.), Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1999.
[2] Blue, A portable Fortran program to find the Euclidean norm of
a vector, ACM Trans. Math. Softw., 4 (1978), No. 1, pp. 15–23.
[3] Higham, Accuracy and stability of numerical algorithms (second
ed.), Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2002.
[4] Ogita, Rump, and Oishi, Accurate Sum And Dot Product,
SIAM J. Sci. Comput., 26 (2005), No. 6, pp. 1955–1988.
An Energy-Efficient and Massively
Parallel Approach to Valid Numerics
John Gustafson
Ceranovo Inc.
Palo Alto, CA USA
[email protected]
Computer hardware manufacturers have shown little interest in improving the validity of their numerics, and have accepted the hazards of
floating point arithmetic. However, they have a very strong and growing interest in energy efficiency as they compete for the battery life of
mobile devices as well as the amount of capability they can achieve in
a large data center with strict megawatt-level power budgets. They
are also concerned that multicore parallelism is growing much faster
than algorithms can exploit. It may be possible to persuade manufacturers to embrace valid numerics not because of validity concerns but
because having valid numerics can solve energy/power and parallelism
concerns.
A new universal number format, the “unum”, allows valid (provably bounded) arithmetic with about half the bits of conventional IEEE
floating point on average; the bit reduction saves energy and power by
reducing the bandwidth and storage demands on the processor. It
also relieves the programmer from being an expert in numerical analysis, by automatically tracking the exact or ULP-wide inexact state of
each value and by promoting and demoting dynamic range and fraction precision automatically. Unums pass the difficult validity tests
published by Kahan, Rump, and Bailey. When used to solve physics
problems, such as nonlinear ordinary differential equations, they also
expose a new source of massive parallelism in what were thought to be
highly serial time-dependent problems. Furthermore, because unum
arithmetic obeys associative and distributive laws, parallelization of
Invited talk
algorithms does not produce changes in the answer from rounding errors that unsophisticated programmers mistake for logic errors; this
further facilitates the use of parallel architectures.
The new format and the new algorithms that go with it have the
potential to completely disrupt the way computers are designed and
used for technical computing.
Towards tight bounds on the radius of
nonsingularity
David Hartman, Milan Hladı́k
Department of Applied Mathematics, Charles University
11800 Prague, Czech Republic
{hartman,hladik}@kam.mff.cuni.cz
Institute of Computer Science, Academy of Sciences
18207 Prague 8, Czech Republic
[email protected]
Keywords: radius of nonsingularity, semidefinite programming, approximation algorithm
The radius of nonsingularity of a square matrix is the minimal distance
to a singular matrix in the Chebyshev norm. More formally, for a
matrix $A \in \mathbb{R}^{n \times n}$, the radius of nonsingularity [1,2] is defined by
$$d(A) := \inf\{\varepsilon > 0;\ \exists \text{ singular } B : |a_{ij} - b_{ij}| \le \varepsilon\ \ \forall i, j\}.$$
It has been shown [2,3] that this characteristic can be computed as
$$d(A) = \frac{1}{\|A^{-1}\|_{\infty,1}}, \qquad (1)$$
where $\|\cdot\|_{\infty,1}$ is the matrix norm defined as
$$\|M\|_{\infty,1} := \max\{\|Mx\|_1;\ \|x\|_\infty = 1\} = \max\{\|Mz\|_1;\ z \in \{\pm 1\}^n\}.$$
Unfortunately, computing $\|\cdot\|_{\infty,1}$ is an NP-hard problem [2]. In fact,
provided P $\ne$ NP, no polynomial-time algorithm for approximating
$d(A)$ with a relative error at most $\frac{1}{4n^2}$ exists [3]. That is why
various lower and upper bounds have been investigated. Rohn [3] provided
the following bounds:
$$\frac{1}{\rho(|A^{-1}|E)} \;\le\; d(A) \;\le\; \frac{1}{\max_{i=1,\dots,n} (E|A^{-1}|)_{ii}}.$$
On the other hand, Rump [4,5] developed the estimates
$$\frac{1}{\rho(|A^{-1}|E)} \;\le\; d(A) \;\le\; \frac{6n}{\rho(|A^{-1}|E)}.$$
We provide better bounds based on approximation of $\|\cdot\|_{\infty,1}$. More
concretely, we propose a randomized approximation method with expected error 0.7834. The algorithm is based on a semidefinite relaxation of the original problem [6]. This relaxation gives the best
known approximation algorithm for the Max-Cut problem, and we utilize a
similar principle to derive tight bounds on the radius of nonsingularity.
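For tiny matrices, d(A) can be computed exactly by enumerating the 2^n sign vectors in the definition of the (∞,1)-norm; the NP-hardness above shows why this does not scale. The sketch below uses a hypothetical 2x2 example:

```python
import itertools

def inv2(A):
    # explicit inverse of a 2x2 matrix
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def norm_inf_1(M):
    # ||M||_{inf,1} = max over z in {-1,+1}^n of ||M z||_1
    # (exponential enumeration -- only feasible for tiny n)
    n = len(M[0])
    return max(sum(abs(sum(row[j] * z[j] for j in range(n))) for row in M)
               for z in itertools.product((-1.0, 1.0), repeat=n))

A = [[2.0, 1.0], [1.0, 2.0]]
d = 1.0 / norm_inf_1(inv2(A))        # d(A) = 1 / ||A^{-1}||_{inf,1}
assert abs(d - 0.5) < 1e-12
# sanity check: perturbing every entry by d = 0.5 reaches a singular matrix
B = [[1.5, 1.5], [1.5, 1.5]]
assert B[0][0] * B[1][1] - B[0][1] * B[1][0] == 0.0
```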
Supported by grants 13-17187S and 13-10660S of the Czech Science Foundation.
References:
[1] S. Poljak, J. Rohn, Radius of Nonsingularity, Technical report
KAM Series, Department of Applied Mathematics, Charles University, 117 (1988), pp. 1–11.
[2] S. Poljak, J. Rohn, Checking robust nonsingularity is NP-hard,
Math. Control Signals Syst., 6 (1993), No. 1, pp. 1–9.
[3] J. Rohn, Checking properties of interval matrices, Technical Report, Institute of Computer Science, Academy of Sciences of the
Czech Republic, Prague, 686 (1996).
[3] V. Kreinovich and A. Lakeyev and J. Rohn and P. Kahl,
Computational Complexity and Feasibility of Data Processing and
Interval Computations, Kluwer, 1998.
[4] S. M. Rump, Almost sharp bounds for the componentwise distance to the nearest singular matrix, Linear Multilinear Algebra,
42 (1997), No. 2, pp.93–107.
[5] S. M. Rump, Bounds for the componentwise distance to the nearest singular matrix, SIAM J. Matrix Anal. Appl., 18 (1997), No. 1,
pp.83–103.
[6] B. Gärtner and J. Matoušek, Approximation Algorithms and
Semidefinite Programming, Springer, 2012.
A numerical verification method for a
basin of a limit cycle
Tomohiro Hiwaki and Nobito Yamamoto
The University of Electro-Communications
Chofugaoka 1-5-1, Chofu, Tokyo, Japan
[email protected]
Keywords: numerical verification, dynamical system, basin of limit
cycle
1 Introduction
We propose a method of validated computation to verify a domain
within the basin of a closed orbit which is asymptotically stable in a dynamical system described by ODEs. The method proves contractivity
of the Poincaré map.
2 Problem
We treat ordinary differential equations
$$\frac{du}{dt} = f(u), \qquad 0 < t < \infty, \qquad u \in D \subset \mathbb{R}^n,\ f : D \to \mathbb{R}^n, \qquad (1)$$
where $f(u)$ is continuously differentiable with respect to u.
In order to specify a time period T explicitly, we apply the variable
transformation $s = t/T$ and $v(s) = u(Ts)$, and obtain the expression
$$\begin{cases} \dfrac{dv}{ds} = T f(v), & 0 < s < 1, \\ v(0) = v(1), \end{cases} \qquad (2)$$
for a closed orbit. We define a unit vector $n_\Sigma$ together with a
Poincaré section $\Sigma$, which is a plane perpendicular to $n_\Sigma$. Additionally,
$\varphi(T, w)$ is defined as the point of the trajectory at s = 1 with an initial
value w and a period T.
3 Our idea for verification of a basin
Using a projection P of $(T, v(1))$ to $(T, \Sigma)$, which is defined by
$$P = I - \frac{1}{n_\Sigma^T n_\Sigma} \begin{pmatrix} 0 \\ n_\Sigma \end{pmatrix} \begin{pmatrix} 0 & n_\Sigma^T \end{pmatrix},$$
we prove the contractivity of the Poincaré map on [W], which is a set
of points on $\Sigma$. In the verification process we compute the 2-norm of a certain
matrix which comes from the operator $P\varphi(T, w)$ and verify that the norm
is less than 1 by validated computation. Consequently, we prove that
[W] is contained in the basin of a limit cycle.
In actual calculations, we use numerical verification techniques, e.g.,
the Lohner method, the mean value form and so on. In particular, we adopt an efficient
method developed by P. Zgliczyński, the so-called C1-Lohner method.
We will present numerical examples in our talk.
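The contraction idea can be illustrated, without any validation, on the hypothetical planar system $\dot r = r(1-r^2)$, $\dot\theta = 2\pi$, whose Poincaré map on $\{\theta = 0\}$ reduces to the time-1 flow of the radial equation; rigorous enclosures via the C1-Lohner method are the subject of the talk, not this floating-point sketch:

```python
def flow_r(r0, t=1.0, steps=1000):
    # RK4 on dr/dt = r (1 - r^2); theta decouples, so this is the
    # return map to the section {theta = 0} after one period.
    h = t / steps
    f = lambda r: r * (1.0 - r * r)
    r = r0
    for _ in range(steps):
        k1 = f(r); k2 = f(r + 0.5 * h * k1)
        k3 = f(r + 0.5 * h * k2); k4 = f(r + h * k3)
        r += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return r

# Non-validated contraction check of the Poincare map near the limit
# cycle r = 1: |P'(r)| < 1 on [0.5, 1.5] by finite differences.
eps = 1e-6
slopes = [(flow_r(r + eps) - flow_r(r)) / eps
          for r in [0.5 + 0.1 * i for i in range(11)]]
assert max(abs(s) for s in slopes) < 1.0   # contraction on the section
```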
References:
[1] R. Rihm, Interval methods for initial value problems in ODEs,
in Topics in Validated Computation (ed. by J. Herzberger),
Elsevier (North-Holland), 1994.
[2] P. Zgliczyński, C1-Lohner algorithm, Found. Comput. Math.,
2 (2002), pp. 429-465.
Optimal preconditioning for the interval
parametric Gauss–Seidel method
Milan Hladı́k
Charles University, Faculty of Mathematics and Physics, Department
of Applied Mathematics
Malostranské nám. 25, 11800 Prague, Czech Republic
[email protected]
Keywords: interval computation, interval parametric system, preconditioner, linear programming
Consider an interval parametric system of linear equations
$$A(p)x = b(p), \qquad p \in \mathbf{p},$$
where the constraint matrix and the right-hand side vector depend
linearly on parameters $p_1, \dots, p_K$ as follows:
$$A(p) = \sum_{k=1}^{K} A_k p_k, \qquad b(p) = \sum_{k=1}^{K} b_k p_k.$$
Herein, $A_1, \dots, A_K \in \mathbb{R}^{n \times n}$ are given matrices, $b_1, \dots, b_K \in \mathbb{R}^n$ are
given vectors, and $\mathbf{p} = (\mathbf{p}_1, \dots, \mathbf{p}_K)$ is a given interval vector. The
corresponding solution set is defined as
$$\{x \in \mathbb{R}^n;\ \exists p \in \mathbf{p} : A(p)x = b(p)\}.$$
Various methods for computing an enclosure to the solution set exist
[1]. In our contribution, we focus on the interval Gauss–Seidel iteration
[4]. In particular, we will be concerned with the problem of determining an optimal preconditioner that minimizes either the widths of the
resulting intervals, or their upper/lower bounds. A preconditioner is a
matrix $C \in \mathbb{R}^{n \times n}$ by which the system is pre-multiplied:
$$C A(p) x = C b(p), \qquad p \in \mathbf{p}.$$
Usually, the midpoint inverse $A(p^c)^{-1}$ is chosen since it performs well
in practice, but it need not be the optimal choice.
The problem of computing an optimal preconditioner for linear
non-parametric interval systems was studied, e.g., in [2, 3]. In our
paper, we will extend some of their results to parametric systems.
We will show that optimal preconditioners can be computed by solving
suitable linear programs with approximately Kn variables and a similar
number of constraints, which means that the problem is polynomially
solvable.
We also show by several examples that, in some cases, such optimal
preconditioners are able to significantly decrease overestimation of the
enclosures computed by common methods.
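A minimal non-parametric sketch (K = 1, outward rounding omitted, so not rigorous) shows midpoint-inverse preconditioning followed by interval Gauss-Seidel sweeps; the matrix data are hypothetical:

```python
# Toy interval helpers; intervals are (lo, hi) pairs.  Directed rounding
# is omitted for brevity, so this sketch is illustrative, not rigorous.
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))
def idiv(a, b):
    assert b[0] > 0 or b[1] < 0            # 0 not in the divisor
    return imul(a, (1.0 / b[1], 1.0 / b[0]))
def intersect(a, b): return (max(a[0], b[0]), min(a[1], b[1]))

def gauss_seidel(A, b, x, sweeps=10):
    """Interval Gauss-Seidel iteration contracting the enclosure x."""
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            s = b[i]
            for j in range(n):
                if j != i:
                    s = isub(s, imul(A[i][j], x[j]))
            x[i] = intersect(x[i], idiv(s, A[i][i]))
    return x

# Precondition by the midpoint inverse: mid(A) = 4*I, so C = I/4.
C = (0.25, 0.25)
A = [[imul(C, a) for a in row] for row in
     [[(3.9, 4.1), (-1.0, 1.0)], [(-1.0, 1.0), (3.9, 4.1)]]]
b = [imul(C, bi) for bi in [(3.9, 4.1), (-1.0, 1.0)]]
x = gauss_seidel(A, b, [(-10.0, 10.0), (-10.0, 10.0)])
# the midpoint solution (1, 0) belongs to the solution set, so it must
# still be inside the contracted enclosure
assert x[0][0] <= 1.0 <= x[0][1] and x[1][0] <= 0.0 <= x[1][1]
```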
References:
[1] M. Hladı́k. Enclosures for the solution set of parametric interval
linear systems. Int. J. Appl. Math. Comput. Sci., 22(3):561–574,
2012.
[2] R. B. Kearfott. Preconditioners for the interval Gauss–Seidel
method. SIAM J. Numer. Anal., 27(3):804–822, 1990.
[3] R. B. Kearfott, C. Hu, and M. Novoa III. A review of preconditioners for the interval Gauss–Seidel method. Interval Comput.,
1991(1):59–85, 1991.
[4] E. D. Popova. On the solution of parametrised linear systems. In
W. Krämer and J. W. von Gudenberg, editors, Scientific Computing, Validated Numerics, Interval Methods, pages 127–138. Kluwer,
2001.
On Unsolvability of Overdetermined
Interval Linear Systems
Jaroslav Horáček and Milan Hladı́k
Charles University, Faculty of Mathematics and Physics, Department
of Applied Mathematics
Malostranské nám. 25, 118 00, Prague, Czech Republic
[email protected], [email protected]
Keywords: interval linear systems, overdetermined systems, unsolvability conditions
By an overdetermined interval linear system (OILS) we mean an interval linear system with more equations than variables. By a solution
set of an interval linear system $\mathbf{A}x = \mathbf{b}$ we mean
$$\Sigma = \{x \mid Ax = b \text{ for some } A \in \mathbf{A},\ b \in \mathbf{b}\},$$
where $\mathbf{A}$ is an interval matrix and $\mathbf{b}$ is an interval vector. If $\Sigma$ is the
empty set, we call the system unsolvable. It is appropriate to point
out that this approach is different from the least squares method.
The set $\Sigma$ is usually hard to describe. That is why it is often enclosed (among other possibilities) by some n-dimensional box.
Computing the tightest possible box (the interval hull) containing $\Sigma$ is
NP-hard. Therefore, we usually compute in polynomial time a slightly
bigger box containing the interval hull (interval enclosure). For more
see [2].
There exist many methods for computing interval enclosures of
OILS, see, e.g., [1]. Nevertheless, many of them return a nonempty enclosure even if the OILS has no solution (e.g., if we use interval
least squares as an enclosure method). In some applications we do
care whether systems are solvable or unsolvable (e.g., system validation, technical computing).
Unfortunately, deciding whether an interval system is solvable is
an NP-hard problem. There exist some results for square systems (i.e.,
systems where A in Ax = b is a square matrix) like [3]. In our talk we
would like to address the solvability and unsolvability of OILS. There
is a lack of necessary and sufficient conditions for detecting solvability
and unsolvability of OILS. We would like to present some newly developed conditions and algorithms concerning these problems. We will
test the strength of the various conditions numerically and visualize
the results.
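One simple sufficient test can be sketched in the single-variable case, where each interval equation yields an interval of candidate solutions and an empty intersection certifies unsolvability (for more variables, intersecting enclosures of square subsystems gives only a sufficient condition); the data are hypothetical:

```python
def solution_interval(a, b):
    # all x with alpha * x = beta for some alpha in a, beta in b;
    # assumes 0 is not contained in a
    assert a[0] > 0 or a[1] < 0
    qs = [b[0] / a[0], b[0] / a[1], b[1] / a[0], b[1] / a[1]]
    return (min(qs), max(qs))

# Overdetermined system in one unknown: three interval equations.
eqs = [((1.9, 2.1), (3.8, 4.2)),   # "2x = 4"  -> x in ~[1.81, 2.21]
       ((0.9, 1.1), (1.8, 2.2)),   # "1x = 2"  -> x in ~[1.64, 2.44]
       ((1.0, 1.0), (3.0, 3.1))]   # "1x = 3"  -> x in [3.0, 3.1]
lo = max(solution_interval(a, b)[0] for a, b in eqs)
hi = min(solution_interval(a, b)[1] for a, b in eqs)
assert lo > hi   # empty intersection => the OILS is unsolvable
```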
References:
[1] J. Horáček, M. Hladı́k, Computing enclosures of overdetermined interval linear systems, Reliable Computing, 19 (2013), No. 2,
pp. 142–155.
[2] A. Neumaier, Interval methods for systems of equations, Cambridge University Press, Cambridge, 1990.
[3] J. Rohn, Solvability of systems of interval linear equations and inequalities, Linear optimization problems with inexact data, (2006),
pp. 35–77.
Computing capture tubes
Luc Jaulin1, Jordan Ninin1, Gilles Chabert3,
Stéphane Le Menec2, Mohamed Saad1, Vincent Le Doze2,
Alexandru Stancu4
1 Labsticc, IHSEV, OSM, ENSTA-Bretagne
2 EADS/MBDA, Paris, France
3 Ecole des Mines de Nantes
4 Aerospace Research Institute, University of Manchester, UK
Keywords: capture tube, contractors, interval arithmetic, robotics,
stability.
1 Introduction
A dynamic system can often be described by a state equation $\dot{x} = h(x, u, t)$, where $x \in \mathbb{R}^n$ is the state vector, $u \in \mathbb{R}^m$ is the control
vector and $h : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \to \mathbb{R}^n$ is the evolution function. Assuming that the control law $u = g(x, t)$ is known (it can be obtained
using control theory), the system becomes autonomous. If we define
$f(x, t) = h(x, g(x, t), t)$, we get the following equation:
$$\dot{x} = f(x, t).$$
The validation of some stability properties of this system is an important and difficult problem [2] which can be transformed into proving
the inconsistency of a constraint satisfaction problem. For some particular properties and for time-invariant systems (i.e., f does not depend on
t), it has been shown [1] that the V-stability approach combined with interval analysis [3] can solve the problem efficiently. Here, we extend this
work to systems where f depends on time.
2 Problem statement
Consider an autonomous system described by a state equation $\dot{x} = f(x, t)$. A tube G(t) is a function which associates to each $t \in \mathbb{R}$ a
subset of $\mathbb{R}^n$. A tube G(t) is said to be a capture tube if the fact that
$x(t) \in G(t)$ implies that $x(t + t_1) \in G(t + t_1)$ for all $t_1 > 0$. Consider
the tube
$$G(t) = \{x,\ g(x, t) \le 0\} \qquad (1)$$
where $g : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^m$. The following theorem, introduced recently
[4], shows that the problem of proving that G(t) is a capture tube can
be cast into solving a set of inequalities.
Theorem. If the system of constraints
$$\begin{cases} \text{(i)} & \dfrac{\partial g_i}{\partial x}(x, t) \cdot f(x, t) + \dfrac{\partial g_i}{\partial t}(x, t) \ge 0 \\ \text{(ii)} & g_i(x, t) = 0 \\ \text{(iii)} & g(x, t) \le 0 \end{cases} \qquad (2)$$
is inconsistent for all x, all $t \ge 0$ and all $i \in \{1, \dots, m\}$, then $G(t) =
\{x,\ g(x, t) \le 0\}$ is a capture tube.
3 Computing capture tubes
If a candidate G(t) for a capture tube is available, we can check that
G(t) is a capture tube by checking the inconsistency of a set of nonlinear equations (see the previous section). This inconsistency can then
easily be checked using interval analysis [3]. Now, for many systems,
such as nonholonomic systems, we rarely have a candidate for
a capture tube and we need to find one. Our main contribution is to
provide a method that can help us to find such a capture tube. The
idea is to start from a non-capture tube G(t) and to try to characterize
the smallest capture tube $G^+(t)$ which encloses G(t). To do this, we
predict for all (x, t) that are solutions of (2) a guaranteed envelope for the
trajectory within a finite time-horizon window $[t, t + t_2]$ (where $t_2 > 0$ is
fixed). If all corresponding $x(t + t_2)$ belong to $G(t + t_2)$, then the union
of all trajectories and the initial G(t) (in the (x, t) space) corresponds
to the smallest capture tube enclosing G(t).
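Checking the inconsistency of (2) can be sketched with naive interval evaluation on a hypothetical one-dimensional example, $\dot x = -x$ with $g(x) = x^2 - 1$; a serious implementation would use outward-rounded interval arithmetic and contractors:

```python
# Naive interval check that G = {x : g(x) <= 0}, g(x) = x^2 - 1, is a
# capture tube for xdot = f(x) = -x: the constraint system
#   gdot(x) = 2 x f(x) >= 0,   g(x) = 0,   g(x) <= 0
# must be inconsistent everywhere.  (Rounding is ignored for brevity.)
def g_rng(lo, hi):
    # range of x^2 - 1 over [lo, hi]
    c = [lo * lo, hi * hi]
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(c)
    return (sq_lo - 1.0, max(c) - 1.0)

def gdot_rng(lo, hi):
    # range of 2 x * (-x) = -2 x^2 over [lo, hi], via the range of g
    glo, ghi = g_rng(lo, hi)
    return (-2.0 * (ghi + 1.0), -2.0 * (glo + 1.0))

N = 3000
consistent_boxes = 0
for k in range(N):                       # subdivide [-1.5, 1.5]
    lo = -1.5 + 3.0 * k / N
    hi = -1.5 + 3.0 * (k + 1) / N
    glo, ghi = g_rng(lo, hi)
    dlo, dhi = gdot_rng(lo, hi)
    # can this box satisfy (g == 0) and (gdot >= 0) simultaneously?
    if glo <= 0.0 <= ghi and dhi >= 0.0:
        consistent_boxes += 1
assert consistent_boxes == 0   # inconsistent everywhere => capture tube
```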
References:
[1] L. Jaulin, F. Le Bars, An interval approach for stability analysis; application to sailboat robotics, IEEE Transactions on Robotics,
2012.
[2] S. Le Menec, Linear Differential Game with Two Pursuers and
One Evader, Advances in Dynamic Games, 2011.
[3] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to
Interval Analysis, SIAM, Philadelphia, 2009.
[4] A. Stancu, L. Jaulin, A. Bethencourt, Set-membership tracking using capture tubes, to be submitted.
On relative errors of floating-point
operations: optimal bounds and
applications
Claude-Pierre Jeannerod1 and Siegfried M. Rump2,3
1 Inria, laboratoire LIP (CNRS, ENS de Lyon, Inria, UCBL), Université de Lyon, France.
2 Institute for Reliable Computing, Hamburg University of Technology, Germany.
3 Faculty of Science and Engineering, Waseda University, Tokyo, Japan.
[email protected], [email protected]
Keywords: floating-point arithmetic, rounding error analysis
Rounding error analyses of numerical algorithms are most often
carried out via repeated applications of the so-called standard models
of floating-point arithmetic. Given a round-to-nearest function fl and
barring underflow and overflow, such models bound the relative errors
$E_1(t) = |t - \mathrm{fl}(t)|/|t|$ and $E_2(t) = |t - \mathrm{fl}(t)|/|\mathrm{fl}(t)|$ by the unit roundoff u.
This talk will investigate the possibility of refining these bounds,
both in the case of an arbitrary real t and in the case where t is the
exact result of an arithmetic operation on some floating-point numbers.
Specifically, we shall provide explicit and attainable bounds on $E_1(t)$,
which are all less than or equal to $u/(1+u)$ and, therefore, smaller
than u. For $E_2(t)$ we will see that the situation is different and that
optimal bounds can or cannot equal u, depending on the operation and
the floating-point radix.
Then we will show how to apply this set of sharp bounds to the
rounding error analysis of various numerical algorithms, including summation, dot products, matrix factorizations, and complex arithmetic:
in all cases, we obtain much shorter proofs of the best-known error
bounds for such algorithms and/or improvements on these bounds
themselves.
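The bound $E_1(t) \le u/(1+u)$ is easy to probe experimentally with exact rational arithmetic; the script below is a random test, not a proof, and relies on Python converting a Fraction to the nearest double:

```python
import random
from fractions import Fraction

u = Fraction(1, 2 ** 53)     # unit roundoff of binary64
bound = u / (1 + u)          # claimed attainable bound on E1(t)

random.seed(1)
worst = Fraction(0)
for _ in range(10000):
    # an exact real t given as a rational; float(t) rounds to nearest
    t = Fraction(random.getrandbits(80) + 1, random.getrandbits(40) + 1)
    ft = Fraction(float(t))  # fl(t), recovered exactly
    worst = max(worst, abs(t - ft) / t)
assert 0 < worst <= bound    # E1(t) <= u/(1+u) < u on every sample
```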
Fast Implementation of Quad-Precision
GEMM on ARMv8 64-bit Multi-Core
Processor
Hao Jiang, Feng Wang, Yunfei Du and Lin Peng
National University of Defence Technology
410072 Changsha, China
[email protected], [email protected]
Keywords: quad-precision GEMM, error-free transformation, ARMv8
64-bit multi-core processor
In recent years, ARM-based SoCs have evolved rapidly. Their
promising qualities, such as competitive performance and energy efficiency, make ARM-based SoCs candidates for the next generation of
High Performance Computing (HPC) systems [1]. For instance, supported by the Mont-Blanc project, the Barcelona Supercomputing Center
built the world's first ARM-based HPC cluster, Tibidabo. Recently,
the new 64-bit ARMv8 instruction set architecture (ISA) introduced
some important features, including 64-bit addresses,
double-precision floating point in the NEON vector unit, an increased
number of registers, support for the fused multiply-add (FMA) instruction,
etc. Hence, there is increasing interest in building HPC systems with
ARMv8-based SoCs.
In HPC, large-scale and long-running numerical calculations often produce inaccurate results owing to cancellation and
round-off errors. In such cases double-precision accuracy is not
sufficient, and higher precision is required. Some high-precision emulation libraries, such as MPFR, GMP and the QD library, perform well
for some applications. BLAS is the fundamental math library. To
improve its accuracy, M. Nakata designed MBLAS [2] based on the
three high-precision libraries above, and other researchers have done
similar work on GPUs. All the high-precision BLAS libraries
above are independent of the computer architecture. Hence, on some
platforms, they cannot achieve the optimal performance of the processors.
Matrix-matrix multiplication (GEMM) is the basic function in the
level-3 BLAS. In this paper, we present the first implementation of
quad-precision GEMM (QGEMM) on an ARMv8 64-bit multi-core processor. We utilize the double-double format to store quadruple-precision floating-point values. We adopt the blocking and packing algorithms and the parallelization method from GotoBLAS [3]. We propose an
optimization model whose purpose is to maximize the compute-to-memory access ratio in the inner kernel of QGEMM. Considering the double-double format and the 128-bit vector registers of the
ARMv8 64-bit processor, we let one vector register store one double-double floating-point value to save memory space. As each ARMv8
64-bit processor core contains 32 vector registers, we choose 4x2
register blocking. The basic segment of the inner kernel is the product of two double-double values added to a double-double value. With
error-free transformations, we implement this segment in assembly language, using ARM 64-bit memory access instructions, cache
pre-fetching instructions and the FMA instruction. Considering the data
dependencies, we reorder the instructions and unroll the loops to perform
calculations in parallel. The numerical results show that our implementation performs better than MBLAS on an ARMv8 64-bit
processor.
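The "basic segment" (double-double multiply, then double-double add) can be sketched in scalar form with classical error-free transformations; the assembly kernel described above maps each of these operations onto NEON/FMA instructions, which this portable sketch does not attempt:

```python
def two_sum(a, b):
    # a + b = s + e exactly (Knuth)
    s = a + b
    t = s - a
    return s, (a - (s - t)) + (b - t)

def split(a):
    # Veltkamp splitting into high and low halves
    t = 134217729.0 * a          # 2**27 + 1
    hi = t - (t - a)
    return hi, a - hi

def two_prod(a, b):
    # a * b = p + e exactly (Dekker; FMA would do this in one step)
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    return p, ((ah * bh - p) + ah * bl + al * bh) + al * bl

def dd_mul(x, y):
    # (xh, xl) * (yh, yl), renormalized double-double result
    p, e = two_prod(x[0], y[0])
    e += x[0] * y[1] + x[1] * y[0]
    return two_sum(p, e)

def dd_add(x, y):
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return two_sum(s, e)

# basic QGEMM segment: acc <- acc + a * b, all in double-double
a, b, acc = (1.0, 2 ** -70), (3.0, 0.0), (0.0, 0.0)
acc = dd_add(acc, dd_mul(a, b))
assert acc == (3.0, 3 * 2 ** -70)   # low word preserved beyond 53 bits
```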
References:
[1] N. Rajovic, P.M. Carpenter, I. Gelado, N. Puzovic, A.
Ramirez, M. Valero, Supercomputing with Commodity CPUs:
Are Mobile SoCs Ready for HPC?, SC’13.
[2] M. Nakata, The MPACK(MBLAS/MLAPACK):A multiple precision arithmetic version of BLAS and LAPACK, version 0.8.0,
2012, http://mplapack.sourceforge.net.
[3] K. Goto, R. A. v. d. Geijn, Anatomy of high-performance matrix multiplication, ACM Transactions on Mathematical Software,
34 (2008), No. 12, pp. 1-25.
Some Observations on Exclusion Regions
in Interval Branch and Bound Algorithms
Ralph Baker Kearfott
University of Louisiana at Lafayette
Department of Mathematics, U.L. Box 4-1010
Lafayette, Louisiana 70504-1010 USA
[email protected]
Keywords: cluster problem, backboxing, epsilon-inflation, complete
search, branch and bound, interval computations
In branch and bound algorithms for constrained global optimization, an acceleration technique is to construct regions x around local
optimizing points x̌, then delete these regions from further search. The
result of the algorithm is then a list of these constructed small regions
in which all globally optimizing points must lie. If the constructed
regions are too small, the algorithm will not be able to easily reject
adjacent regions in the search, while, if the constructed regions are too
large, the set of optimizing points is not known accurately. We briefly
review previous methods of constructing boxes about approximate optimizing points. We then derive a formula for determining the size of
a constructed solution-containing region, proportional to the smallest
radius ε of any box generated in the branch and bound algorithm. We
prove that, if a box of this size is constructed, adjacent regions of radius ε on qualifying faces will necessarily be rejected, without the need
to actually process them in the branch and bound algorithm. Based on
this, we propose a class of algorithms that construct exclusion boxes
from concentric shells of small boxes of increasing size surrounding the
initial exclusion box x. The behavior of such algorithms would be more
predictable and controllable than use of branch and bound algorithms
without such auxiliary constructions.
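As an illustration of how such an exclusion box interacts with the rest of the search, consider the following toy one-dimensional branch and bound loop (our own minimal sketch, not the algorithm of the talk; the objective, domain, tolerances, and the exclusion box itself are made up, and outward rounding is omitted):

```python
# Toy 1-D interval branch and bound with an exclusion box around an
# approximate minimizer of f(x) = (x^2 - 2)^2, whose global minimizers
# are x = +/- sqrt(2).  Illustrative sketch only.

def f_range(lo, hi):
    """Naive interval enclosure of f(x) = (x*x - 2)**2 on [lo, hi]."""
    cands = [lo * lo, hi * hi]
    t_lo = (0.0 if lo <= 0.0 <= hi else min(cands)) - 2.0   # t = x^2 - 2
    t_hi = max(cands) - 2.0
    cands2 = [t_lo * t_lo, t_hi * t_hi]                     # f = t^2
    f_lo = 0.0 if t_lo <= 0.0 <= t_hi else min(cands2)
    return f_lo, max(cands2)

def branch_and_bound(lo, hi, exclusion, f_best, eps=1e-3):
    """Return undiscarded small boxes lying outside the exclusion box."""
    work, kept = [(lo, hi)], []
    while work:
        a, b = work.pop()
        xl, xu = exclusion
        if xl <= a and b <= xu:        # box inside the exclusion box: delete
            continue
        flo, _ = f_range(a, b)
        if flo > f_best:               # cannot contain a global minimizer
            continue
        if b - a < 2 * eps:
            kept.append((a, b))
        else:
            m = 0.5 * (a + b)
            work += [(a, m), (m, b)]
    return kept

# Exclusion box constructed around the approximate minimizer near sqrt(2).
boxes = branch_and_bound(-4.0, 4.0, (1.40, 1.43), f_best=1e-6)
```

With the exclusion box in place, every box around the right-hand minimizer is deleted either by containment or by the objective bound, so the returned list clusters around the remaining minimizer near −√2 only.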
References:
[1] Ferenc Domes and Arnold Neumaier. Rigorous verification of feasibility, 2014. Submitted; preprint at http://www.mat.univie.ac.at/~neum/ms/feas.pdf.
[2] Jürgen Herzberger, editor. Topics in Validated Computations: proceedings of IMACS-GAMM International Workshop on Validated
Computation, Oldenburg, Germany, 30 August–3 September 1993,
volume 5 of Studies in Computational Mathematics, Amsterdam,
The Netherlands, 1994. Elsevier.
[3] Ralph Baker Kearfott. Abstract generalized bisection and a cost
bound. Mathematics of Computation, 49(179):187–202, July 1987.
[4] Ralph Baker Kearfott. Rigorous Global Search: Continuous Problems. Number 13 in Nonconvex Optimization and its Applications.
Kluwer Academic Publishers, Dordrecht, Netherlands, 1996.
[5] Ralph Baker Kearfott and Kaisheng Du. The cluster problem in
multivariate global optimization. Journal of Global Optimization,
5:253–265, 1994.
[6] Günter Mayer. Epsilon-inflation in verification algorithms. J.
Comput. Appl. Math., 60(1-2):147–169, 1995.
[7] Siegfried M. Rump. Verification methods for dense and sparse
systems of equations. In Herzberger [2], pages 63–136.
[8] Hermann Schichl, Mihály Csaba Markót, and Arnold Neumaier. Exclusion regions for optimization problems, 2013. Accepted for publication; preprint available at http://www.mat.univie.ac.at/~neum/ms/exclopt.pdf.
[9] Hermann Schichl and Arnold Neumaier. Exclusion regions for
systems of equations. SIAM Journal on Numerical Analysis,
42(1):383–408, 2004.
[10] R. J. Van Iwaarden. An Improved Unconstrained Global Optimization Algorithm. PhD thesis, University of Colorado at Denver,
1996.
Some remarks on the rigorous estimation
of inverse linear elliptic operators
Takehiko Kinoshita1,2, Yoshitaka Watanabe3, Mitsuhiro T.
Nakao4
1
Center for the Promotion of Interdisciplinary Education and
Research, Kyoto University, Kyoto 606-8501, Japan
2
Research Institute for Mathematical Sciences, Kyoto University,
Kyoto 606-8502, Japan
3
Research Institute for Information Technology, Kyushu University,
Fukuoka 812-8581, Japan
4
Sasebo National College of Technology, Nagasaki 857-1193, Japan
[email protected]
Keywords: Linear elliptic PDEs, inverse operators, validated computations
In this talk, we consider several kinds of constructive a posteriori estimates for the inverse linear elliptic operator. We show that the computational costs depend on the elliptic problems concerned as well as on the approximation properties of the finite element subspaces used, e.g., the mesh size. We also propose a new estimate which is effective for an intermediate mesh size. Moreover, we describe some results on the asymptotic behaviour of the approximate inverse estimates. Numerical examples which confirm these facts are presented.
References:
[1] T. Kinoshita, Y. Watanabe, and M. T. Nakao, An improvement of the theorem of a posteriori estimates for inverse elliptic
operators, Nonlinear Theory and Its Applications, 5 (2014), no. 1,
pp. 47–52.
[2] M. T. Nakao, K. Hashimoto, and Y. Watanabe, A numerical method to verify the invertibility of linear elliptic operators
with applications to nonlinear problems, Computing, 75 (2005),
pp. 1–14.
[3] Y. Watanabe, T. Kinoshita, and M. T. Nakao, A posteriori estimates of inverse operators for boundary value problems in linear elliptic partial differential equations, Mathematics of Computation, 82 (2013), pp. 1543–1557.
Computer-Assisted Uniqueness Proof
for Stokes’ Wave of Extreme Form
Kenta Kobayashi
Hitotsubashi University
2-1 Naka, Kunitachi-City, Tokyo 186-8601, Japan
[email protected]
Keywords: Stokes’ wave, global uniqueness, Stokes conjecture
We present a computer-assisted proof for the global uniqueness of Stokes' wave of extreme form. Gravity and surface tension have much influence on the form of water waves. Assuming that the flow is infinitely deep, that gravitational acceleration is the unique external force of the system, and that the wave profile is stationary, we obtain Nekrasov's equation [1]. In particular, a positive solution of Nekrasov's equation corresponds to a water wave which has just one peak and one trough per period. Stokes' wave of extreme form has a sharp crest and is considered to be the limit of the positive solution of Nekrasov's equation with respect to parameters such as gravity, wave length, and wave velocity.
Stokes' wave of extreme form is obtained from the following nonlinear integral equation for the unknown θ : (0, π] → ℝ:

\[
\begin{cases}
\displaystyle \theta(s) = \frac{1}{3\pi} \int_0^{\pi} \log\left|\frac{\sin\frac{s+t}{2}}{\sin\frac{s-t}{2}}\right| \cdot \frac{\sin\theta(t)}{\int_0^t \sin\theta(w)\,dw}\, dt,\\[1ex]
0 < \theta(s) < \dfrac{\pi}{2}, \qquad s \in (0,\pi),\\[1ex]
\theta(\pi) = 0.
\end{cases}
\]
The wave profile of Stokes' wave of extreme form is represented as (x(s), y(s)) (0 < s < 2π), where x and y are determined by

\[
\frac{dx}{ds} = \frac{L}{2\pi}\, e^{-H\theta(s)} \cos\theta(s), \qquad
\frac{dy}{ds} = \frac{L}{2\pi}\, e^{-H\theta(s)} \sin\theta(s).
\]
Invited talk
Here, L is the wavelength and H is the Hilbert transform.
Although the existence of Stokes' wave of extreme form has been proved [2], the global uniqueness had not been proved for a long time: almost 30 years during which a lot of effort was devoted to solving the problem. Finally, we proved the global uniqueness in 2010 [3] and published a summarized version in 2013 [4].
The uniqueness of Stokes' wave concerns the second Stokes conjecture [5], which had been an open problem for 130 years. The second conjecture states that the profile of Stokes' wave between two consecutive crests should be downward convex. The existence of a Stokes wave of extreme form whose profile is downward convex was proved in 2004 [6]. Therefore, the complete settlement of the second Stokes conjecture was brought about by our result.
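The flavor of such a computer-assisted uniqueness argument can be conveyed by a toy interval computation (our own generic Krawczyk-operator example for the scalar equation x = cos x, not the technique of the talk; outward rounding is omitted for brevity): if the Krawczyk image of an interval lands strictly inside it, the interval contains exactly one solution.

```python
# Toy existence-and-uniqueness certificate via a Krawczyk-operator check
# for f(x) = x - cos x on an interval [a, b] inside [0, pi/2].
# Our own generic illustration; outward rounding is omitted.
import math

def krawczyk_contains_unique_zero(a, b):
    """Check that K([a,b]) lies strictly inside [a,b] for f(x) = x - cos x."""
    m = 0.5 * (a + b)
    fm = m - math.cos(m)
    # f'(x) = 1 + sin x is increasing on [0, pi/2], so f'([a,b]) = [dlo, dhi]
    dlo, dhi = 1.0 + math.sin(a), 1.0 + math.sin(b)
    y = 1.0 / (1.0 + math.sin(m))             # preconditioner 1 / f'(m)
    # slope interval 1 - y * f'([a,b])
    slo, shi = 1.0 - y * dhi, 1.0 - y * dlo
    r = max(abs(a - m), abs(b - m))
    half = max(abs(slo), abs(shi)) * r
    k_lo = m - y * fm - half
    k_hi = m - y * fm + half
    return a < k_lo and k_hi < b              # strict inclusion

# If the check succeeds, [0.7, 0.8] contains exactly one solution of
# x = cos x (the fixed point is approximately 0.739085).
unique = krawczyk_contains_unique_zero(0.7, 0.8)
```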
References:
[1] A. I. Nekrasov, On waves of permanent type I, Izv. IvanovoVoznesensk. Polite. Inst., 3 (1921), pp. 52–56. (in Russian)
[2] J. F. Toland, On the existence of waves of greatest height and
Stokes’ conjecture, Proc. R. Soc. Lond., A 363 (1978), pp. 469–
485.
[3] K. Kobayashi, On the global uniqueness of Stokes’ wave of extreme form, IMA J. Appl. Math., 75 (2010), pp. 647–675.
[4] K. Kobayashi, Computer-assisted uniqueness proof for Stokes’
wave of extreme form, Nankai Series in Pure, Applied Mathematics
and Theoretical Physics, 10 (2013), pp. 54–67.
[5] G. G. Stokes, On the theory of oscillatory waves, Appendix B: Considerations relative to the greatest height of oscillatory irrotational waves which can be propagated without change of form, Math. Phys. Papers, 1 (1880), pp. 225–228.
[6] J. F. Toland and P. I. Plotnikov, Convexity of Stokes waves of extreme form, Arch. Ration. Mech. Anal., 171 (2004), pp. 349–416.
Error Estimations of Interpolations
on Triangular Elements
Kenta Kobayashi and Takuya Tsuchiya
Graduate School of Commerce and Management, Hitotsubashi University, Japan
Graduate School of Science and Engineering, Ehime University, Japan
[email protected], [email protected]
Keywords: the circumradius condition, interpolation, finite element
methods
Interpolations and their error estimations are important fundamentals for, in particular, finite element error analysis. Let K ⊂ ℝ² be a triangle with apices xᵢ, i = 1, 2, 3. Let P₁ be the set of all polynomials whose degree is at most 1. For a continuous function v ∈ C⁰(K), the linear interpolation I_K^1 v ∈ P₁ is defined by

\[
(I_K^1 v)(x_i) = v(x_i), \qquad i = 1, 2, 3.
\]

It has been known that we need to impose a geometric condition on K to obtain an error estimation of ‖v − I_K^1 v‖_{1,2,K}. One of the well-known such conditions is the following. Let h_K be the diameter of K.

• The maximum angle condition, Babuška–Aziz [1], Jamet [2] (1976). Let θ₁, 2π/3 ≤ θ₁ < π, be a constant. If any angle θ of K satisfies θ ≤ θ₁ and h_K ≤ 1, then there exists a constant C = C(θ₁) independent of h_K such that

\[
\|v - I_K^1 v\|_{1,2,K} \le C h_K |v|_{2,2,K}, \qquad \forall v \in H^2(K).
\]
Since its discovery, the maximum angle condition was believed to be
the most essential condition for convergence of solutions of the finite
element method.
Recently, we obtained the following error estimate, which is more essential than the maximum angle condition. Let R_K be the circumradius of K.

• The circumradius condition, Kobayashi–Tsuchiya [3] (2014). For an arbitrary triangle K with R_K ≤ 1, there exists a constant C_p independent of K such that the following estimate holds:

\[
\|v - I_K^1 v\|_{1,p,K} \le C_p R_K |v|_{2,p,K}, \qquad \forall v \in W^{2,p}(K),\ 1 \le p \le \infty.
\]
We have also pointed out that the circumradius condition is closely
related to the definition of surface area [4]. In this talk we will explain
the circumradius condition and the related topics. We will also mention
recent developments on the subject.
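The quantity R_K in the condition is elementary to compute; the following small utility (our own code, not from the talk) evaluates the classical formula R = abc / (4|K|) and shows that for a flattening triangle whose largest angle approaches π the circumradius blows up, which is precisely the degeneration both conditions exclude.

```python
# Circumradius of a triangle from its three apices (our own small utility,
# illustrating the quantity R_K appearing in the circumradius condition).
import math

def circumradius(p1, p2, p3):
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # area |K| via the shoelace formula
    area = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    return a * b * c / (4.0 * area)      # classical formula R = abc / (4|K|)

# A right isosceles reference triangle has R = sqrt(2) / 2 ...
r_good = circumradius((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
# ... while a flattening triangle (largest angle tending to pi) has R -> inf.
r_flat = circumradius((0.0, 0.0), (1.0, 0.0), (0.5, 1e-3))
```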
References:
[1] I. Babuška, A. K. Aziz, On the angle condition in the finite element method, SIAM J. Numer. Anal., 13 (1976), pp. 214–226.
[2] P. Jamet, Estimations d'erreur pour des éléments finis droits presque dégénérés, R.A.I.R.O. Anal. Numér., 10 (1976), pp. 43–61.
[3] K. Kobayashi, T. Tsuchiya, A Babuška–Aziz type proof of the circumradius condition, Japan J. Indus. Appl. Math., 31 (2014), pp. 193–210.
[4] K. Kobayashi, T. Tsuchiya, On the circumradius condition for piecewise linear triangular elements, submitted, arXiv:1308.2113.
[5] X. Liu, F. Kikuchi, Analysis and estimation of error constants for P0 and P1 interpolations over triangular finite elements, J. Math. Sci. Univ. Tokyo, 17 (2010), pp. 27–78.
Implementing
the Interval Picard Operator
M. Konečný, W. Taha, J. Duracz and A. Farjudian
Aston University (first author only)
Aston Triangle
B4 7ET, Birmingham, UK
[email protected]
Keywords: ODE, interval Picard operator, function arithmetic
Edalat & Pattinson give an elegant constructive description of the
exact solutions of Lipschitz ODE IVPs based on an interval Picard
operator [1]. We build on this theoretical work and propose a verifiable
and practically useful method for validated ODE solving. In particular,
this method
• is very simple;
• is correct by construction in a strong sense;
• produces arbitrarily precise results;
• works for problems with uncertain initial values;
• produces tight enclosures for non-trivial problems.
Simplicity, correctness by construction and arbitrary precision are properties that our method inherits from Edalat & Pattinson’s work.
We employ a number of ideas from established validated ODE solving approaches. Most importantly, we employ a function arithmetic
similar to Taylor Models (TMs) [2]. In our arithmetic, an enclosure
is formed by two independent polynomials, which makes it possible
to closely approximate interval functions of non-constant width. Such
functions arise naturally from the interval Picard operator.
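A crude constant-enclosure version of the interval Picard iteration can be sketched as follows (our own toy, without outward rounding; the talk's method replaces the constant enclosures by pairs of polynomials):

```python
# Toy interval Picard iteration for y' = -y, y(0) = 1, on [0, h]:
# iterate Y <- y0 + [0, h] * f(Y) on a constant interval enclosure.
# Our own minimal sketch, not the polynomial function arithmetic of the talk.
import math

def picard_enclosure(y0, h, n_iter=50):
    """Enclosure of y(t) = y0 * exp(-t) valid for all t in [0, h]."""
    lo, hi = y0 - 1.0, y0 + 1.0            # a priori guess (assumed valid)
    for _ in range(n_iter):
        # f(Y) = -Y  =>  f([lo, hi]) = [-hi, -lo]
        flo, fhi = -hi, -lo
        # interval product [0, h] * [flo, fhi], any signs
        prods = [0.0, h * flo, h * fhi]
        new_lo, new_hi = y0 + min(prods), y0 + max(prods)
        # intersect with the previous enclosure (iteration contracts here)
        lo, hi = max(lo, new_lo), min(hi, new_hi)
    return lo, hi

lo, hi = picard_enclosure(1.0, 0.1)
# The exact solution exp(-t) stays inside [lo, hi] on all of [0, 0.1].
```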
To support problems with uncertain initial value, we borrow two
further techniques from TM methods. First, we enclose the (n + 1)-ary
ODE flow instead of enclosing the union of the graphs of the unary
ODE solutions over all initial values. Second, we use a version of
shrink-wrapping [3] to minimize the loss of information between steps.
We demonstrate that our implementation of the method is capable
of enclosing solutions of non-smooth ODEs and classical examples of
non-linear systems, including the Van der Pol system and the Lorenz
system with uncertain initial conditions. While our method is not attempting to compete with mature systems such as COSY and VNODE
in terms of speed and power, we believe it is a theoretically pleasing and
easily verifiable alternative worth exploring and testing to its limits.
References:
[1] A. Edalat, D. Pattinson, A Domain-Theoretic Account of Picard’s Theorem, LMS Journal of Computation and Mathematics,
10 (2007), pp. 83–118.
[2] K. Makino, M. Berz, Taylor Models and Other Validated Functional Inclusion Methods, International Journal of Pure and Applied Mathematics, 4 (2003), No. 4, pp. 379–456.
[3] M. Berz, K. Makino, Taylor Models and Other Validated Functional Inclusion Methods, International Journal of Differential Equations and Applications, 10 (2005), No. 4, pp. 385–403.
Interval methods for solving various kinds
of quantified nonlinear problems
Bartlomiej Jacek Kubica
Warsaw University of Technology
Warsaw, Poland
[email protected]
Interval branch-and-bound type methods can be used to solve various problems, in particular: systems of equations, constraint satisfaction problems, global optimization, seeking Pareto sets, seeking Nash points and other game equilibria, and other problems, e.g., seeking all local (but non-global) optima of a function.
We show that each of these problems can be expressed by a specific kind of first-order logic formula and investigate how this affects the structure of the algorithm and the tools used. In particular, we discuss several aspects of parallelization of these algorithms.
The focus is on seeking game equilibria, which is a relatively novel application of interval methods.
Invited talk
Applied techniques of interval
analysis for estimation
of experimental data
S. I. Kumkov1
Institute of Mathematics and Mechanics UrB RAS
16, S.Kovalevskaya str., 620990, Ekaterinburg, Russia
Ural Federal University, Ekaterinburg, Russia
[email protected]
Keywords: interval analysis, estimation, experimental data
In the practice of processing experimental data with bounded measuring errors and unknown probabilistic characteristics, interval analysis methods [1,2] are successfully applied in contrast to statistical ones. Moreover, using the concrete properties of the process, it becomes possible to elaborate procedures more effective than ones based on the box techniques [1,2]. The paper deals with adjusting interval analysis methods to the practical processing of experimental chemical processes [3,4]. Here, two versions of describing the processes are considered. In the first version, an analytical function with parameters to be estimated is used; in the second one, a kinetic system of ordinary differential equations with parameters is used. Also, short samples of measurements with bounded errors are given.
The problem of estimation is formulated as follows: it is necessary to estimate the set of admissible values of the process parameters consistent with the given process description and input data.
In the first version, the experimental process is described by the function S(t, V, α, BG) = V exp(αt) + BG, where t is the process time and α, V, BG are parameters. Here, the sought-for information set I(α, ln V, BG) is constructed (Fig. 1a) as a collection of two-dimensional cross-sections {I(α, ln V, Bₙ)}, n = 1, …, 101, on the grid {Bₙ}. The cross-sections are convex polygons with exact linear boundaries. An example of a middle cross-section is marked by the thick boundary.
¹ The work was supported by the RFBR Grants, nos. 12-01-00537 and 13-01-96055.
[Fig. 1. a) The information set I(α, ln V, BG) with 101 cross-sections {I(α, ln V, BGₙ)}, from I(α, ln V, BG_min) to I(α, ln V, BG_max), plotted in the (α, ln V) plane. b) The information set I(K₁, K₂, K₃): cross-sections in (K₂, K₃) for K₁ from 0 to its maximal value 0.00133, with the outer minimal box-approximation.]
In the second practical example, the process is described by the kinetic system of ordinary differential equations

\[
\dot{x}_1 = -K_1 x_1 x_2 - K_2 x_1, \quad
\dot{x}_2 = K_1 x_1 x_2 - K_3 x_2 x_3, \quad
\dot{x}_3 = K_2 x_1 - K_3 x_2 x_3,
\]

where x₁, x₂, x₃ is the phase vector and K₁, K₂, K₃ are parameters to be estimated. To construct the information set, a three-dimensional (in the parameters K₁, K₂, K₃) grid-wise approach was used. The information set I(K₁, K₂, K₃) is represented (Fig. 1b) by a collection of two-dimensional cross-sections {I(K₁,ₖ, K₂,ₘ, K₃,ₙ)}, k = 1, …, 51, m = 1, …, 101, n = 1, …, 101, on the three-dimensional grid. The cross-sections are non-convex polygons with approximate grid-wise boundaries. Note that the grid technique allows one to construct this collection as a "practically maximal" internal approximation of the set I(K₁, K₂, K₃).
Note that both approaches work significantly faster and give more accurate results than those based on the application of usual box-parallelotopes [1,2].
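The grid-wise construction of a cross-section of the information set can be sketched as follows (our own toy data and grid for the first, analytical-function version, with BG held fixed):

```python
# Grid sketch of one cross-section of the information set for the model
# S(t) = V * exp(alpha * t) + BG.  Our own toy data; BG is fixed here, so
# this builds a single (alpha, V) cross-section on a grid.
import math

BG = 1.0
T = [0.0, 1.0, 2.0, 3.0, 4.0]
ERR = 0.1                                # bounded measurement error
# synthetic interval measurements from "true" parameters alpha=-0.5, V=2
Y = [(2.0 * math.exp(-0.5 * t) + BG - ERR,
      2.0 * math.exp(-0.5 * t) + BG + ERR) for t in T]

def consistent(alpha, v):
    """A parameter pair is admissible if the model meets every interval."""
    return all(lo <= v * math.exp(alpha * t) + BG <= hi
               for t, (lo, hi) in zip(T, Y))

# cross-section on a 41 x 41 grid over the (alpha, V) search box
alphas = [-1.0 + 0.025 * i for i in range(41)]
vs = [1.0 + 0.05 * j for j in range(41)]
section = [(a, v) for a in alphas for v in vs if consistent(a, v)]
```

The set `section` is the grid approximation of one cross-section; the true parameter pair is always among the admissible grid points, since the data intervals were generated around it.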
References:
[1] E. Hansen, G. W. Walster, Global Optimization Using Interval Analysis, Marcel Dekker Inc., New York, USA, 2004.
[2] S. P. Shary, Finite-Dimensional Interval Analysis, Electronic Book, 2013, http://www.nsc.ru/interval/Library/InteBooks
[3] S. I. Kumkov, Yu. V. Mikushina, Interval Approach to Identification of Catalytic Process Parameters, Reliable Computing, 19 (2014), No. 2, pp. 197–214.
[4] S. I. Kumkov, Processing of experimental data on ionic conductivity of molten electrolyte by the interval analysis methods, Rasplavy, (2010), No. 3, pp. 86–96.
Replacing branches by polynomials in
vectorizable elementary functions
Olga Kupriianova, Christoph Lauter
Sorbonne Universités, UPMC Univ Paris 06, UMR 7606, LIP6,
F-75005 Paris, France
4 place Jussieu 75252 PARIS CEDEX 05
{olga.kupriianova, christoph.lauter}@lip6.fr
Keywords: vectorizable code, interpolation polynomial, elementary
functions, linear tolerance problem
The collection of codes that compute the values of elementary mathematical functions (e.g. sin, exp) is called a mathematical library. There are several existing examples of such libraries, libms, but they all contain only manually implemented codes, i.e., code written a long time ago and not adapted to particular tasks [1]. The existing implementations are a compromise between speed, accuracy and portability [2]. In order to produce different flavors of implementation for each elementary function (e.g. fast or precise), we use Metalibm¹, an academic prototype for a parametrized code generator of mathematical functions.
Metalibm splits the specified domain I for the function implementation in order to reduce the argument range [3]; hence we get the splitting {I_k}, 0 ≤ k ≤ n − 1. Then, on each of the subdomains, Metalibm approximates the given function with a minimax polynomial. Thus, in order to get the value f(x) for a particular input x, one has to get the corresponding polynomial coefficients. This means first determining the index k ∈ ℤ of the subinterval I_k ∋ x. Typically this is done with if-statements. To avoid branches at this step, we propose to build a mapping function P(x) that returns the needed interval index k.
We propose to build a continuous function p(x) such that ⌊p(x)⌋ = P(x). The splitting intervals can be represented as I_k = [a_k, a_{k+1}]. As
¹ http://lipforge.ens-lyon.fr/www/metalibm/
we want the function to verify ⌊p(x)⌋ = P(x), we get the following conditions for its values:

\[
p(x) \in [k, k+1) \quad \text{when } a_k \le x \le a_{k+1}, \quad 0 \le k \le n-1. \tag{1}
\]
Among the continuous functions we choose the interpolation polynomials which pass through the split points and take the values p(a_k) = k. However, classical interpolation theory does not allow us to take the conditions (1) into account, so they are checked a posteriori. This verification can reliably be done using interval arithmetic implemented in Sollya².
First testing results of the proposed method show that it is possible to build the mentioned mapping function with an interpolation polynomial and an a posteriori condition check. However, sometimes the values of the polynomial exceed the required bounds.
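The construction can be prototyped in a few lines (our own sketch with made-up split points; the talk's reliable a posteriori check uses interval arithmetic in Sollya rather than sampling):

```python
# Branch-free subdomain indexing: build the interpolation polynomial p with
# p(a_k) = k and recover the index as floor(p(x)).  Our own sketch with
# made-up split points; a reliable a posteriori check of condition (1)
# would use interval arithmetic instead of sampling.
import math

a = [1.0, 2.0, 4.0, 8.0]                 # split points (hypothetical)

def p(x):
    """Lagrange interpolation through the points (a[k], k)."""
    total = 0.0
    for k, ak in enumerate(a):
        term = float(k)
        for j, aj in enumerate(a):
            if j != k:
                term *= (x - aj) / (ak - aj)
        total += term
    return total

def index_of(x):
    """Subdomain index of x in [a[0], a[-1]] via the mapping polynomial."""
    return min(int(math.floor(p(x))), len(a) - 2)

def check_condition_a_posteriori(samples=1000):
    """Sample-based check of p(x) in [k, k+1) on each [a_k, a_{k+1})."""
    for k in range(len(a) - 1):
        for i in range(samples):
            x = a[k] + (a[k + 1] - a[k]) * i / samples
            if not (k <= p(x) < k + 1):
                return False
    return True
```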
In order to take the conditions (1) into account a priori, we could also find the tolerable solution set [3] of the interval system of linear algebraic equations Ac = k, where A is the Vandermonde matrix composed of the splitting intervals and k is an interval vector of allowable values for the function. The system solution c will give the set of possible polynomial coefficients. Constructing polynomials by solving the linear tolerance problem [3] is left for future work.
References:
[1] Jean-Michel Muller, Elementary Functions: Algorithms and
Implementation, Birkhäuser, Boston, 1997.
[2] Christoph Lauter, Arrondi correct de fonctions mathématiques.
Fonctions univariés et bivariés, certification et automatisation, phD
thesis, ENS de Lyon, 2008.
[3] Sergey P. Shary, Solving the linear interval tolerance problem, Mathematics and Computers in Simulation, 39 (1995), pp. 53–85.
² http://sollya.gforge.inria.fr/
Verified lower eigenvalue bounds for
self-adjoint di↵erential operators
Xuefeng Liu and Shin’ichi Oishi
Research Institute of Science and Engineering, Waseda University
3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan
[email protected]
Keywords: Eigenvalue bounds, finite element method, self-adjoint
di↵erential operators
By using finite element methods (FEM), we develop a new theorem to give verified eigenvalue bounds for generally defined self-adjoint differential operators, which include the Laplace operator, the biharmonic operator and so on. Explicit a priori error estimations for conforming and non-conforming FEMs play an important role in constructing explicit lower eigenvalue bounds. As a feature of the proposed theorem, it can even give bounds for eigenvalues whose corresponding eigenfunctions may have singularities.
We consider the eigenvalue problem in an abstract form. Let V be a Hilbert function space and V^h a finite dimensional space, Dim(V^h) = n. Here, V^h may not be a subspace of V. Suppose M(·,·), N(·,·) are semi-positive symmetric bilinear forms on both V and V^h. Moreover, for any u ∈ V or V^h, N(u, u) = 0 implies u = 0. Define the norm |·|_N and semi-norm |·|_M by |·|_M := √(M(·,·)), |·|_N := √(N(·,·)). We consider an eigenvalue problem defined by the bilinear forms M(·,·) and N(·,·): Find u ∈ V and λ ∈ ℝ such that

\[
M(u, v) = \lambda N(u, v) \qquad \forall v \in V. \tag{1}
\]
With proper settings of M and N, the main theorem providing lower eigenvalue bounds is given as below.

Theorem 1. Let P_h : V → V^h be a projection such that

\[
M(u - P_h u, v_h) = 0 \qquad \forall v_h \in V^h, \tag{2}
\]

along with an error estimation

\[
|u - P_h u|_N \le C_h |u - P_h u|_M. \tag{3}
\]

Then we have lower bounds for the λ_k's:

\[
\frac{\lambda_{h,k}}{1 + \lambda_{h,k} C_h^2} \le \lambda_k \qquad (k = 1, 2, \cdots, n). \tag{4}
\]
The concrete form of the projection P_h and the value of the constant C_h, which tends to 0 as the mesh size h → 0, depend on the FEM spaces in use. In the case of the eigenvalue problems of the Laplacian, a new method based on the hypercircle equation for conforming FEM spaces is developed to give an explicit bound for the constant C_h; see [1]. Generally, by using proper non-conforming FEMs, the projection P_h is just an interpolation operator and the constant C_h can easily be obtained by considering the interpolation error estimation on local elements.
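For the one-dimensional Dirichlet Laplacian on (0, 1) with P1 elements on a uniform mesh, a bound of the form (4) can be evaluated in closed form; the following sketch is our own illustration, and the constant C_h = h/π is the classical one-dimensional interpolation constant, assumed here for this example only.

```python
# Illustration of a lower bound of the form (4) for the 1-D Dirichlet
# Laplacian -u'' = lambda u on (0, 1) with P1 elements on a uniform mesh
# of size h.  Our own sketch: the discrete eigenvalue formula and the
# constant C_h = h / pi are classical 1-D values, assumed for illustration.
import math

def p1_eigenvalue(k, h):
    """k-th generalized eigenvalue of the uniform-mesh P1 stiffness/mass pair."""
    c = math.cos(k * math.pi * h)
    return (6.0 / h**2) * (1.0 - c) / (2.0 + c)

def lower_bound(lam_h, c_h):
    """Bound (4): lambda_h / (1 + lambda_h * C_h^2) <= lambda."""
    return lam_h / (1.0 + lam_h * c_h**2)

h = 0.1
lam_h = p1_eigenvalue(1, h)              # upper approximation of pi^2
lo = lower_bound(lam_h, h / math.pi)     # guaranteed-style lower bound
# The exact eigenvalue pi^2 = 9.8696... is sandwiched: lo <= pi^2 <= lam_h.
```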
To improve the precision of the eigenvalue bounds, we combine the lower eigenvalue bounds of Theorem 1 and Lehmann-Goerisch's theorem to give sharp bounds; see [2]. Also, the explicit a priori error estimation for FEMs has been successfully applied to verify the existence of solutions of semi-linear elliptic partial differential equations [3].
References:
[1] Xuefeng Liu and Shin'ichi Oishi, Verified eigenvalue evaluation for the Laplacian over polygonal domains of arbitrary shape, SIAM J. Numer. Anal., 51 (2013), No. 3, pp. 1634–1654.
[2] Xuefeng Liu and Shin'ichi Oishi, Guaranteed high-precision estimation for P0 interpolation constants on triangular finite elements, Japan Journal of Industrial and Applied Mathematics, 30 (2013), No. 3, pp. 635–652.
[3] Akitoshi Takayasu, Xuefeng Liu and Shin'ichi Oishi, Verified computations to semilinear elliptic boundary value problems on arbitrary polygonal domains, NOLTA, IEICE, E96-N (2013), No. 1, pp. 34–61.
Towards the possibility of objective
interval uncertainty in physics. II
Luc Longpré and Vladik Kreinovich
University of Texas at El Paso
El Paso, TX 79968, USA
[email protected], [email protected]
Keywords: algorithmic randomness, interval uncertainty, quantum
physics
Applications of interval computations usually assume that while we only know an interval [x, x̄] containing the actual (unknown) value of a physical quantity x, there is an exact value x of this quantity, and that, in principle, we can get more and more accurate estimates of this value. This assumption is in line with the usual formulations of physical theories, as partial differential equations relating exact values of different physical quantities, fields, etc., at different spatial locations and moments of time; see, e.g., [2]. Physicists know, however, that due, e.g., to Heisenberg's uncertainty principle, there are fundamental limitations on how accurately we can determine the values of physical quantities [2, 5].
One of the important principles of modern physics is operationalism: a physical theory should only use observable quantities. This principle is behind most successes of 20th century physics, starting with relativity theory (vs. the unobservable aether) and quantum mechanics. From this viewpoint, it is desirable to avoid using unmeasurable exact values and to modify physical theories so that they explicitly take objective uncertainty into account.
According to quantum physics, we can only predict probabilities of different events. Thus, uncertainty means that instead of exact values of these probabilities, we can only determine intervals; see, e.g., [3].
From the observational viewpoint, a probability measure means that we observe a sequence which is random (in the Kolmogorov–Martin-Löf (KML) sense) relative to this measure. What we thus need is
the ability to describe a sequence which is random relative to a set of possible probability measures. This is not easy: in [1, 4], we have shown that in seemingly reasonable formalizations, every random sequence is actually random relative to one of the original measures. Now we know how to overcome this problem: for example, for a sequence of events ω₁ω₂… occurring with the interval probability [p, p̄], we require that this sequence is random relative to a product measure corresponding to some sequence of values pᵢ ∈ [p, p̄], and that it is not random in this sense for any narrower interval. We show that this can be achieved when lim inf pᵢ = p and lim sup pᵢ = p̄.
We also analyze what happens if we take into account that in physics, not only events with probability 0 are physically impossible (this is the basis of the KML definition), but also events with very small probability are impossible (e.g., it is not possible that all gas molecules would concentrate, by themselves, in one side of a vessel).
References:
[1] D. Cheu, L. Longpré, Towards the possibility of objective interval uncertainty in physics, Reliable Computing, 15(1) (2011),
pp. 43–49.
[2] R. Feynman, R. Leighton, M. Sands, Feynman Lectures on
Physics, Basic Books, New York, 2005.
[3] I.I. Gorban, Theory of Hyper-Random Phenomena, Ukrainian
National Academy of Sciences Publ., Kyiv, 2007 (in Russian).
[4] V. Kreinovich, L. Longpré, Pure quantum states are fundamental, mixtures (composite states) are mathematical constructions: an argument using algorithmic information theory, International Journal on Theoretical Physics, 36(1) (1997) pp. 167–176.
[5] L. Longpré, V. Kreinovich, When are two wave functions distinguishable: a new answer to Pauli’s question, with potential application to quantum cosmology, International Journal of Theoretical Physics, 47(3) (2008), pp. 814–831.
How much for an interval? a set?
a twin set? a p-box? a Kaucher interval?
An economics-motivated approach to
decision making under uncertainty
Joe Lorkowski and Vladik Kreinovich
University of Texas at El Paso
El Paso, TX 79968, USA
[email protected], [email protected]
Keywords: decision making, interval uncertainty, set uncertainty,
p-boxes
There are two main reasons why decision making is difficult. First, we need to take into account many different factors; there is usually a trade-off. For example, shall we stay in a slightly better hotel or in a reasonably good cheaper one?
But even when we know how to combine different factors into a single objective function, decision making is still difficult because of uncertainty. For example, when deciding on the best way to invest money, the problem is that we are not certain which financial instrument will lead to higher returns.
Let us use economic ideas to solve such economic problems: namely,
let us assign a fair price to each case of uncertainty.
What does "fair price" mean? One of the reasonable properties is that if v is a fair price for an instrument x and v′ is a fair price for an instrument x′, then the fair price for a combination x + x′ of these two instruments should be equal to the sum of the prices.
In [3], this idea was applied to interval uncertainty [4], for which this requirement takes the form v([x, x̄] + [x′, x̄′]) = v([x, x̄]) + v([x′, x̄′]). Under reasonable monotonicity conditions, all such functions have the form v([x, x̄]) = α·x̄ + (1 − α)·x for some α ∈ [0, 1]; this is the well-known Hurwicz criterion.
In this talk, we show that for sets S, we similarly get v(S) = α · sup S + (1 − α) · inf S.
For probabilistic uncertainty, for large N, buying N copies of a random instrument is equivalent to buying a sample of N values coming from the corresponding probability distribution. One can show that for this type of uncertainty, additivity implies that the fair price should be equal to the expected value µ.
A similar idea can be applied to finding the price of a p-box (see, e.g., [1, 2]), a situation when, for each x, we only know an interval [F(x), F̄(x)] containing the actual (unknown) value F(x) = Prob(η ≤ x) of the cumulative distribution function. In this case, additivity leads to the fair price α · µ̄ + (1 − α) · µ, where [µ, µ̄] is the range of possible values of the mean µ.
We also come up with formulas describing the fair price of twins (intervals whose bounds are only known with interval uncertainty) and of Kaucher (improper) intervals [x, x̄] for which x > x̄.
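The Hurwicz-type prices above are straightforward to implement, and the additivity requirement that motivates them can be checked numerically (our own sketch of the formulas, with made-up numbers):

```python
# Fair prices under interval and set uncertainty: Hurwicz combination of
# the best and worst cases.  Our own sketch of the formulas in the talk.

def price_interval(lo, hi, alpha):
    """v([lo, hi]) = alpha * hi + (1 - alpha) * lo, alpha in [0, 1]."""
    return alpha * hi + (1.0 - alpha) * lo

def price_set(s, alpha):
    """v(S) = alpha * sup S + (1 - alpha) * inf S for a finite set S."""
    return alpha * max(s) + (1.0 - alpha) * min(s)

# Additivity on intervals: [a,b] + [c,d] = [a+c, b+d] has the summed price.
alpha = 0.3
v1 = price_interval(1.0, 2.0, alpha)
v2 = price_interval(-0.5, 4.0, alpha)
v_sum = price_interval(1.0 - 0.5, 2.0 + 4.0, alpha)
```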
References:
[1] S. Ferson, Risk Assessment with Uncertainty Numbers: RiskCalc,
CRC Press, Boca Raton, Florida, 2002.
[2] S. Ferson et al., Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty, Sandia National Laboratories, Report SAND2007-0939, May 2007; available at http://www.ramas.com/intstats.pdf
[3] J. McKee, J. Lorkowski, T. Ngamsantivong, Note on Fair Price under Interval Uncertainty, Journal of Uncertain Systems, 8 (2014), to appear.
[4] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to
Interval Analysis, SIAM, Philadelphia, 2009.
A workflow for modeling, visualizing, and
querying uncertain (GPS-)localization
using interval arithmetic
Wolfram Luther
University of Duisburg-Essen
Lotharstrasse 63, 47057 Duisburg, Germany
[email protected]
Keywords: GPS-localization, Dempster-Shafer theory, query language,
3D visualization
A number of applications use GPS-based localization for a variety of purposes, such as the navigation of cars or robots, or to localize images. Global Positioning System receivers usually report an error magnification factor, as a ratio of output variables and input parameters, as Geometric/Positional/Horizontal/Vertical/Time Dilution of Precision (XDOP) using one- to four-dimensional coordinate systems. Over time, various algorithmic approaches have been developed to compensate for errors due to environmental disturbances and uncertain parameters occurring in GPS signal measurement, such as an adaptive Kalman filter-based approach that was implemented using a fuzzy logic inference system, or by combining GPS measurements with further sensory data. In terms of querying GPS-based data, less work can be found that takes the uncertain characteristics of GPS data into account, especially if semantic querying mechanisms are involved.
In this talk we highlight a new verified method for uncertain (GPS-)localization based on Dempster-Shafer theory (DST) [1], with multidimensional and interval-valued basic probability assignments (nDIBPA) and masses estimated via statistical observations and/or expert knowledge. In order to define and work with these focal elements, we extended the Dempster-Shafer with Intervals (DSI) toolbox by adding functions that provide the capability to compute imprecise (GPS-)localization.
Besides aggregation and normalization methods on 2DIBPAs as well as plausibility (PL) and belief (BEL) interval functions, the DSI toolbox provides discrete interval generalizations of the normal and Weibull distributions with compact support and interval-valued parameters to model the error of a GPS measurement (http://udue.de/DSIexamples). Thus, an adequate error distribution measured in radians, (x, y)- or (x, y, z)-coordinates can be assumed via nDIBPAs, n = 1, 2, 3, using expert knowledge based on long-time measurements or results reported in the literature, e.g., the Weibull distribution for the radial GPS position error.
In [2], we provide an extended example concerning localization and
alignment of a truck equipped with two GPS sensors at a given distance. Roughly speaking, the grid approach using 2DIBPAs is similar
to the Riemann integral concept using upper and lower sums. We ask
for inner and outer domains I and O, which contain all possible localizations of the object with a given high probability, and alignments
enclosed by a contour C situated in the shape S := O\I. The thickness of the shape S depends on the sample size and on the computing errors when rounding to ±∞. By summing up the lower mass bounds of all focal elements in I and the upper bounds of the focal elements constituting O, we acquire an enclosure for BEL(Y) and PL(Y).
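As a plain illustration of this summation step, the following sketch (the data and function name are hypothetical, and this is not part of the DSI toolbox) encloses BEL(Y) and PL(Y) from interval-valued masses of rectangular 2D focal elements:

```python
# Enclose belief and plausibility of a query box Y from interval-valued
# masses of 2D focal elements (toy data; not the DSI toolbox).

def bel_pl_enclosure(focal_elements, region):
    """focal_elements: list of (box, (m_lo, m_hi)), box = (x_lo, x_hi, y_lo, y_hi).
    region: query box Y in the same format.
    Returns ([bel_lo, bel_hi], [pl_lo, pl_hi])."""
    def subset(b, r):      # b entirely contained in r
        return r[0] <= b[0] and b[1] <= r[1] and r[2] <= b[2] and b[3] <= r[3]
    def intersects(b, r):  # b and r overlap
        return b[0] <= r[1] and r[0] <= b[1] and b[2] <= r[3] and r[2] <= b[3]
    bel, pl = [0.0, 0.0], [0.0, 0.0]
    for box, (m_lo, m_hi) in focal_elements:
        if subset(box, region):      # contributes to belief
            bel[0] += m_lo
            bel[1] += m_hi
        if intersects(box, region):  # contributes to plausibility
            pl[0] += m_lo
            pl[1] += m_hi
    return bel, pl

# One focal element inside Y, one overlapping its boundary.
elements = [((0.0, 1.0, 0.0, 1.0), (0.4, 0.5)),
            ((0.5, 2.0, 0.5, 2.0), (0.5, 0.6))]
bel, pl = bel_pl_enclosure(elements, (0.0, 1.5, 0.0, 1.5))
print(bel, pl)
```

Only the element contained in Y contributes to BEL, while both intersecting elements contribute to PL, giving the desired two-sided enclosure.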
To dynamically generate X3D scenes, we use the Replicave framework [3], a Java-based X3D and X3DOM toolkit for the modeling of 3D scenes that can be interfaced from C and C++ via the Java Native Interface (JNI), together with a layered visualization approach for visualizing map and terrain data and multiple 2D/3D overlays and grids for visualizing information, as demonstrated in [4].
To support queries of the type Are objects A and B in space C at time T with a plausibility of 50 percent?, we extended the GeoSPARQL standard, based on the ability to introduce DST models into the ontology as features of spatial objects and on a set of custom SPARQL and/or GeoSPARQL function extensions that are able to evaluate queries with uncertainty [5].
The main benefit our approach offers for GIS applications is a
workflow concept using DST-based models that are embedded into
an ontology-based semantic querying mechanism accompanied by 3D
visualization techniques. This workflow provides an interactive way
of semantically querying uncertain GIS models and providing visual
feedback.
Our future work consists in the final implementation and extension of the different components of the workflow. We thank Nelson Baloian, Gabor Rebner, Daniel Sacher, and Benjamin Weyers for preparing the material during a stay at the University of Chile, and the DFG and the DAAD for funding this cooperation within the SADUE13 and PRASEDEC projects.
References:
[1] G. Rebner, D. Sacher, W. Luther, Verified stochastic methods: The evolution of the Dempster-Shafer with intervals (DSI)
toolbox, Taylor & Francis, London, 2013, pp. 541–548.
[2] G. Rebner, D. Sacher, B. Weyers, W. Luther, Verified
stochastic methods in geographic information system applications
with uncertainty, to appear in Structural Safety.
[3] D. Biella, W. Luther, D. Sacher, Schema migration into
a web-based framework for generating virtual museums and laboratories, 18th International Conference on Virtual Systems and
Multimedia (VSMM) (2012), pp. 307–314.
[4] F. Calabrese, C. Ratti, Real Time Rome, Networks and Communication Studies, (2006), No. 3 & 4, pp. 247–258.
[5] Open Geospatial Consortium, OGC GeoSPARQL – A Geographic Query Language for RDF Data,
http://www.opengis.net/doc/IS/geosparql/1.0
Using range arithmetic in evaluation of
compact models
Amin Maher1 and Hossam A. H. Fahmy2
1
Deep Submicron Division, Mentor Graphics Corporation
Cairo, Egypt, 11361
amin [email protected]
2
Electronics and Communications Engineering, Cairo University
Giza, Egypt, 12613
[email protected]
Keywords: range arithmetic, interval arithmetic, affine arithmetic,
compact models, circuit simulation, Monte Carlo simulation, design
variability
In today's semiconductor technologies, the performance of electronic circuits is affected by process variation. To accommodate these variations at the design stage, it is common to simulate the design several times for several process corners, as well as to run Monte Carlo simulations that take random parameter variation into consideration. A large number of runs is needed to obtain good results with Monte Carlo simulation. As an alternative to these simulations, range arithmetic may be used to simulate variations in parameters.
Approaches that use range arithmetic in circuit simulation show good results [1], [2]. To complete the simulation flow, a set of device models should be available. These models should be able to work with a range arithmetic based simulator. For high-level compact models, like BSIM4, replacing the floating-point calculations with interval ones is not enough; rewriting parts of the model is necessary to obtain correct results with the interval calculations.
In this work we evaluate compact models using range arithmetic. We compare the results for accuracy, efficiency and reliability when using different representations of range arithmetic, namely interval and affine arithmetic. Results are tested against point-interval data and Monte Carlo simulations.
References:
[1] D. Grabowski, M. Olbrich, E. Barke, Analog circuit simulation using range arithmetics, In: Design Automation Conference, ASP-DAC 2008, Asia and South Pacific, pp. 762–767, IEEE, 2008.
[2] Q.Y. Tang, C.J. Spanos, Interval-value based circuit simulation for statistical circuit design, In: SPIE Advanced Lithography, pp. 72750J, International Society for Optics and Photonics, 2009.
Finding positively invariant sets of
ordinary differential equations using
interval global optimization methods
Mihály Csaba Markót and Zoltán Horváth
University of Vienna
Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria
Széchenyi István University
Egyetem tér 1., H-9026 Győr, Hungary
[email protected], [email protected]
Keywords: global optimization, interval branch-and-bound, ODE,
positive invariance
Let us consider the initial value problem y′(t) = f(y(t)), t ≥ 0, y(0) = y_0, where f : V ⊆ R^N → R^N is continuously differentiable, and assume that a unique solution exists for all u_0 ∈ V. A set C ⊆ V is called positively invariant w.r.t. f, if for all u_0 ∈ C the solution u(t) stays in C for all t ≥ 0. If C is convex and closed, then a sufficient condition for C being positively invariant is the existence of a positive real constant ε such that for all v ∈ C the containment relation v + εf(v) ∈ C holds.
For the discretized case, one can introduce the concept of discrete positive invariance: let us be given f, y_0, and a stepsize τ > 0, and denote a numerical integration scheme (e.g., one from the family of Runge–Kutta methods) by Φ, i.e., y(i+1) = Φ(y(i), τ, f), i = 0, .... A set C ⊆ V is discrete positively invariant w.r.t. Φ with stepsize constant τ* > 0, if for all τ ∈ (0, τ*] and for all y(0) ∈ C the relation y(i+1) ∈ C holds for i = 0, .... Finding discrete positively invariant sets and large stepsize constants may be essential for carrying out long-term integration in an efficient way, especially for stiff problems.
In both the continuous and the discrete cases, the related basic problem is the following: given F : V ⊆ R^N → R^N, C ⊆ V, and a constant ε > 0, decide whether v + εF(v) ∈ C for all v ∈ C. That is,
we have to verify a mathematical property at all points of a given set; hence, for tackling the problem on a computer we need interval-based reliable numerical algorithms. In the talk we show how to translate the above decision problem into a set of global optimization problems when C is box-shaped, and solve them with an interval branch-and-bound algorithm. Furthermore, we introduce a reliable method for finding boxes that are positively invariant for given F and ε, and an algorithm to find the maximal ε for which a given set C is positively invariant.
The applicability and efficiency of the methods are demonstrated on (low-dimensional) stiff chemical reaction models.
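The basic decision problem for a box-shaped C can be sketched with naive interval evaluation plus bisection (a toy stand-in, with names of our own choosing, for the interval branch-and-bound algorithm of the talk):

```python
# Verify v + eps*F(v) in C for all v in the box C by interval evaluation
# on a subdivision of C (toy version; floating-point rounding is ignored).

def check_invariant(F_interval, C, eps, depth=6):
    """F_interval maps a list of intervals to intervals enclosing F.
    C: list of (lo, hi) pairs.  True if the containment is verified."""
    def inside(box):
        return all(c[0] <= b[0] and b[1] <= c[1] for b, c in zip(box, C))
    def image(box):
        Fv = F_interval(box)
        return [(b[0] + eps * f[0], b[1] + eps * f[1]) for b, f in zip(box, Fv)]
    stack = [(C, 0)]
    while stack:
        box, d = stack.pop()
        if inside(image(box)):
            continue                      # verified on this subbox
        if d >= depth:
            return False                  # could not verify
        i = max(range(len(box)), key=lambda j: box[j][1] - box[j][0])
        lo, hi = box[i]
        mid = 0.5 * (lo + hi)             # bisect the widest coordinate
        for half in ((lo, mid), (mid, hi)):
            sub = list(box)
            sub[i] = half
            stack.append((sub, d + 1))
    return True

# y' = 1 - y on C = [0, 2]: v + eps*(1 - v) stays in C for eps = 0.1.
ok = check_invariant(lambda box: [(1.0 - box[0][1], 1.0 - box[0][0])],
                     [(0.0, 2.0)], 0.1)
print(ok)
```

A rigorous version would additionally use directed rounding, which is precisely why interval-based algorithms are needed here.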
A short description of the symmetric
solution set
Günter Mayer
University of Rostock
18051 Rostock, Germany
[email protected]
Keywords: interval linear systems, symmetric solution set, Oettli–Prager-like theorem
Given a regular real n×n interval matrix [A] and an interval vector [b] with n components, the well-known Oettli–Prager theorem describes the solution set S = { x ∈ R^n | Ax = b, A ∈ [A], b ∈ [b] } by means of the vector inequality

|b̌ − Ǎx| ≤ rad([A]) · |x| + rad([b]),   (1)

i.e., the statements ‘x ∈ S’ and ‘(1) holds for x ∈ R^n’ are equivalent. Here, Ǎ, b̌ denote the midpoints of [A] and [b], respectively, and rad([A]), rad([b]) denote their radii. Restricting A and [A] in S to be symmetric leads to the symmetric solution set

S_sym = { x ∈ R^n | Ax = b, A = Aᵀ ∈ [A] = [A]ᵀ, b ∈ [b] } ⊆ S,

which is more complicated to describe. Starting in 1995, several attempts were made to find such a description. They essentially represented a way to end up with a set of inequalities extending that in (1); cf. for instance [1], [2], and [6]. It was not until 2008 that Hladík presented in [3] an extension comparable with (1). In [4] the following more compact reformulation of [3] was stated:

x ∈ S_sym if and only if

|b̌ − Ǎx| ≤ rad([A]) · |x| + rad([b])   as in (1)

and

|xᵀ(D_p − D_q)(b̌ − Ǎx)| ≤ |x|ᵀ · |D_p rad([A]) − rad([A]) D_q| · |x| + |x|ᵀ · |D_p − D_q| · rad([b])   (2)

for all vectors p, q ∈ {0, 1}^n \ {0, (1, ..., 1)ᵀ} such that p <_lex q and pᵀq = 0. Here D_v denotes the diagonal matrix D_v = diag(v) for v = (v_i) ∈ R^n, and ‘<_lex’ denotes the strict lexicographic ordering of vectors, i.e., u <_lex v if for some index k we have u_i = v_i, i < k, and u_k < v_k. At most (3^n − 2^{n+1} + 1)/2 inequalities are needed in (2).
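The membership test (1) is straightforward to evaluate; the following floating-point sketch (illustrative only, since a rigorous check would need directed rounding) tests the Oettli–Prager inequality componentwise:

```python
# Test x in S via the Oettli-Prager inequality (1), in plain floating point.
import numpy as np

def in_solution_set(A_mid, A_rad, b_mid, b_rad, x):
    """True if |b_mid - A_mid @ x| <= A_rad @ |x| + b_rad componentwise."""
    return bool(np.all(np.abs(b_mid - A_mid @ x) <= A_rad @ np.abs(x) + b_rad))

A_mid = np.array([[2.0, 0.0], [0.0, 2.0]])
A_rad = np.full((2, 2), 0.1)            # [A] = A_mid +/- 0.1
b_mid = np.array([2.0, 2.0])
b_rad = np.array([0.1, 0.1])
print(in_solution_set(A_mid, A_rad, b_mid, b_rad, np.array([1.0, 1.0])))  # True
print(in_solution_set(A_mid, A_rad, b_mid, b_rad, np.array([5.0, 5.0])))  # False
```

The symmetric test adds the inequalities (2) for the admissible pairs (p, q) in the same componentwise fashion.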
In our talk we review these inequalities, outline a proof in [4] different from that in [3], and indicate some ways from various authors how to enclose S_sym; cf. the survey [5].
References:
[1] G. Alefeld, G. Mayer, On the symmetric and unsymmetric solution set of interval systems, SIAM J. Matrix Anal. Appl., 16 (1995), pp. 1223–1240.
[2] G. Alefeld, V. Kreinovich, G. Mayer, On the shape of the symmetric, persymmetric and skew–symmetric solution set, SIAM J. Matrix Anal. Appl., 18 (1997), pp. 693–705.
[3] M. Hladík, Description of symmetric and skew–symmetric solution set, SIAM J. Matrix Anal. Appl., 30 (2008), No. 2, pp. 509–521.
[4] G. Mayer, An Oettli–Prager–like theorem for the symmetric solution set and for related solution sets, SIAM J. Matrix Anal. Appl.,
33 (2012), No. 3, pp. 979–999.
[5] G. Mayer, A survey on properties and algorithms for the symmetric solution set, Preprint 12/2, Universität Rostock, Preprints aus
dem Institut für Mathematik, ISSN 0948–1028, Rostock, 2012.
[6] E. Popova, Explicit description of 2D Parametric solution sets,
BIT Numerical Mathematics, 52 (2012), No. 1, pp. 179–200.
A Simple Modified Verification Method
for Linear Systems
Atsushi Minamihata, Kouta Sekine, Takeshi Ogita, Siegfried
M. Rump and Shin’ichi Oishi
Graduate School of Fundamental Science and Engineering,
Waseda University
3–4–1 Okubo, Shinjuku-ku, Tokyo 169–8555, Japan
[email protected]
Keywords: verified numerical computations, componentwise error bound, INTLAB
This talk is concerned with the problem of verifying the accuracy of approximate solutions of linear systems. We propose a simple modified method of calculating a componentwise error bound of the computed solution, which is based on the following theorem by Rump:
Theorem 1 (Rump [1, Theorem 2.1]) Let A ∈ R^{n×n} and b, x̃ ∈ R^n be given. Assume v ∈ R^n with v > 0 satisfies u := ⟨A⟩v > 0. Let ⟨A⟩ = D − E denote the splitting of ⟨A⟩ into the diagonal part D and the off-diagonal part −E, and define w ∈ R^n by

w_k := max_{1≤i≤n} G_{ik}/u_i   for 1 ≤ k ≤ n,

where G := I − ⟨A⟩D⁻¹ = ED⁻¹ ≥ O. Then A is nonsingular, and

|A⁻¹b − x̃| ≤ (D⁻¹ + vwᵀ)|b − Ax̃|.   (1)
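The quantities of Theorem 1 can be sketched in plain floating point (illustration only; a verified implementation, as in INTLAB's verifylss, must bound all rounding errors with directed rounding):

```python
# Componentwise error bound (1) of Theorem 1, evaluated without rounding control.
import numpy as np

def rump_bound(A, b, x_tilde, v):
    cA = -np.abs(A)                         # Ostrowski comparison matrix <A>
    np.fill_diagonal(cA, np.abs(np.diag(A)))
    u = cA @ v
    assert np.all(u > 0)                    # hypothesis u = <A>v > 0
    D = np.diag(np.diag(cA))
    E = D - cA                              # splitting <A> = D - E
    G = E @ np.linalg.inv(D)                # G = E D^{-1}
    w = np.max(G / u[:, None], axis=0)      # w_k = max_i G_ik / u_i
    res = np.abs(b - A @ x_tilde)
    return (np.linalg.inv(D) + np.outer(v, w)) @ res

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
b = np.array([1.0, 2.0])
x_tilde = np.linalg.solve(A, b) + 1e-8      # slightly perturbed solution
err = rump_bound(A, b, x_tilde, np.ones(2))
print(np.all(np.abs(np.linalg.solve(A, b) - x_tilde) <= err))
```

For this diagonally dominant example the componentwise bound safely covers the injected 1e-8 perturbation.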
In particular, the method based on Theorem 1 is implemented in the
routine verifylss in INTLAB Version 7 [2].
We modify Theorem 1 as follows:

Theorem 2 Let A, b, x̃, u, v, w be defined as in Theorem 1. Define c := |b − Ax̃| and D_s := diag(s), where s ∈ R^n with

s_k := u_k w_k   for 1 ≤ k ≤ n.

Then,

|A⁻¹b − x̃| ≤ (D⁻¹ + vwᵀ)(I + D_s)⁻¹ c.   (2)

Moreover,

|A⁻¹b − x̃| ≤ γv + (D⁻¹ + vwᵀ)(I + D_s)⁻¹(c − γu),   (3)

where γ := min_{1≤i≤n} c_i/u_i.
Theorem 3 Let A, b, x̃, u, v, w be defined as in Theorem 1. Define Θ := uwᵀ − ED⁻¹ and c := |b − Ax̃|. Then,

|A⁻¹b − x̃| ≤ (D⁻¹ + vwᵀ)(c − Θ(I − Θ)c).   (4)

Moreover,

|A⁻¹b − x̃| ≤ γv + (D⁻¹ + vwᵀ) min(c − γu, (c − γu) − Θ(I − Θ)(c − γu)),   (5)

where γ := min_{1≤i≤n} c_i/u_i.
(3) and (5) always give better bounds than Theorem 1. Detailed
proofs will be presented. Numerical results will be shown to illustrate
the efficiency of the proposed theorems.
References:
[1] S. M. Rump, Accurate solution of dense linear systems, Part II:
Algorithms using directed rounding, J. Comp. Appl. Math., 242
(2013), 185–212.
[2] S. M. Rump, INTLAB - INTerval LABoratory, Developments in Reliable Computing, T. Csendes, ed., 77–104, Kluwer, Dordrecht,
1999. http://www.ti3.tuhh.de/rump/
Fast inclusion for the matrix inverse
square root
Shinya Miyajima
Faculty of Engineering, Gifu University
1-1 Yanagido, Gifu-shi, Gifu 501-1193, Japan
[email protected]
Keywords: numerical inclusion, matrix inverse square root, Newton
operator
Given a nonsingular matrix A ∈ C^{n×n}, a matrix X such that

AX² = I,

where I is the n×n identity matrix, is called an inverse square root of A.
The matrix inverse square root always exists for nonsingular A, and appears in important problems in science and technology, e.g., the
optimal symmetric orthogonalization of a set of vectors [1] and the
generalized eigenvalue problem [2]. Several numerical algorithms for
computing the matrix inverse square root have been proposed (see [1–
5], e.g.). It is known that the inverse square root is not unique (see [6],
in which the matrix square root is treated, but all considerations there
immediately carry over to the matrix inverse square root). If A has no
nonpositive real eigenvalues, the principal inverse square root (see [1])
can be defined by requiring that all the eigenvalues of X have positive
real parts. The principal inverse square root is of particular interest,
since this has important applications such as the matrix sign function,
the unitary polar factor and the geometric mean of two positive definite
matrices (see [7], e.g.).
In this talk, we consider enclosing the matrix inverse square root, specifically, computing an interval matrix containing the inverse square root using floating point computations. Frommer, Hashemi and Sablik [8] were the first to propose two such algorithms. In these algorithms, the numerical spectral decomposition of A is effectively utilized, so that
they require only O(n³) operations. The first algorithm computes the interval matrix containing the inverse square root by enclosing a solution of the matrix equation F(X) = 0, where F(X) := XAX − I, via
the Krawczyk operator, and guarantees the uniqueness of the inverse
square root contained in the computed interval matrix. In the second
algorithm, an affine transformation of F (X) = 0 making use of the
numerical results for the spectral decomposition is adopted. Although
this algorithm does not verify the uniqueness of the contained inverse
square root, the numerical results in [8] show that this algorithm usually computes narrower interval matrices and is successful for larger
dimensions than the first algorithm. As an application of these two
algorithms, the algorithms for enclosing the matrix sign function have
also been developed in [8].
The purpose of this talk is to propose an algorithm for enclosing the matrix inverse square root. The proposed algorithm also utilizes the spectral decomposition of A and requires only O(n³) operations. In this algorithm, the affine transformation of F(X) = 0 and the Newton operator are adopted in order to verify the existence of the inverse square root in a candidate interval matrix, and the uniqueness is also verified using the nontransformed equation. This algorithm moreover verifies the principal property of the inverse square root uniquely contained in the computed interval matrix by utilizing the theory in [9], which enables us to enclose all eigenvalues of a matrix. We finally report numerical results to observe the properties of the proposed algorithm.
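The common starting point of these algorithms, a numerical spectral decomposition of A and the principal branch of the inverse square root, can be sketched as follows (approximate computation only; no enclosure or verification is performed in this sketch):

```python
# Approximate principal inverse square root X of A via eigendecomposition,
# so that A @ X @ X is close to I (no interval verification here).
import numpy as np

def inv_sqrt(A):
    lam, V = np.linalg.eig(A)               # A = V diag(lam) V^{-1}
    if np.any((lam.real <= 0) & (lam.imag == 0)):
        raise ValueError("A has a nonpositive real eigenvalue")
    return V @ np.diag(lam ** -0.5) @ np.linalg.inv(V)   # principal branch

A = np.array([[4.0, 1.0], [0.0, 9.0]])
X = inv_sqrt(A)
print(np.linalg.norm(A @ X @ X - np.eye(2)))   # small residual
```

The verified algorithms start from such a numerical decomposition and then rigorously enclose a solution of F(X) = XAX − I = 0.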
References:
[1] N. Sherif, On the computation of a matrix inverse square root,
Computing, 46 (1991), pp. 295–305.
[2] P. Laasonen, On the iterative solution of the matrix equation AX² − I = 0, Math. Tables Other Aids Comput., 12 (1958), pp. 109–116.
[3] A. Boriçi, A Lanczos approach to the inverse square root of a
large and sparse matrix, J. Comput. Phys., 162 (2000), pp. 123–
131.
[4] C.H. Guo, N.J. Higham, A Schur-Newton method for the matrix
pth root and its inverse, SIAM J. Matrix Anal. Appl., 28 (2006),
pp. 788–804.
[5] S. Lakić, A one parameter method for the matrix inverse square
root, Appl. Math., 42 (1997), No. 6, pp. 401–410.
[6] N.J. Higham, Functions of Matrices: Theory and Computation,
SIAM, Philadelphia, 2008.
[7] B. Iannazzo, B. Meini, Palindromic matrix polynomials, matrix
functions and integral representations, Linear Algebra Appl., 434
(2011), pp. 174–184.
[8] A. Frommer, B. Hashemi, T. Sablik, Computing enclosures for the inverse square root and the sign function of a matrix, Linear Algebra Appl., (2014), http://dx.doi.org/10.1016/j.laa.2013.11.047
[9] S. Miyajima, Numerical enclosure for each eigenvalue in generalized eigenvalue problem, J. Comp. Appl. Math., 236 (2012),
pp. 2545–2552.
Verified solutions of saddle point linear
systems
Shinya Miyajima
Faculty of Engineering, Gifu University
1-1 Yanagido, Gifu-shi, Gifu 501-1193, Japan
[email protected]
Keywords: verified computation, saddle point linear systems, error
estimation
In this talk, we are concerned with the accuracy of numerically computed solutions of saddle point linear systems

Hu = b,   H := [ A  Bᵀ ; B  −C ],   u := [ x ; y ],   b := [ f ; g ],   (1)

where A ∈ R^{n×n}, B ∈ R^{m×n}, f ∈ R^n and g ∈ R^m are given, x ∈ R^n and y ∈ R^m are to be solved for, n ≥ m, A is symmetric positive definite (SPD), B has full rank, and C is symmetric positive semi-definite, which implies that H is nonsingular. The systems (1) arise in a variety of science and engineering applications, including partial differential equations and optimization problems.
Let u* = (x*ᵀ, y*ᵀ)ᵀ and ũ = (x̃ᵀ, ỹᵀ)ᵀ denote the exact and numerical solutions of (1), respectively. We consider in this talk verified computation of u*, specifically, computing rigorous upper bounds for ‖ũ − u*‖_∞ using floating point operations.
The pioneering work has been given by Chen and Hashimoto [1]. They skillfully exploited the special structure of (1). Their result enables us to avoid computing an approximation of H⁻¹. Let

[ r_f ; r_g ] := [ f ; g ] − [ A  Bᵀ ; B  −C ][ x̃ ; ỹ ]

be residual vectors. They presented the error estimation

‖x̃ − x*‖_2 ≤ ‖A⁻¹‖_2(‖r_f‖_2 + ‖Bᵀ‖_2 ‖ỹ − y*‖_2),   (2)
‖ỹ − y*‖_2 ≤ ζ(‖BA⁻¹‖_2 ‖r_f‖_2 + ‖r_g‖_2),   (3)

where

ζ := ‖A‖_2 ‖(BBᵀ)⁻¹‖_2 / (1 + ‖A‖_2 ‖(BBᵀ)⁻¹‖_2 σ_min(C))

and σ_min(C) denotes the smallest singular value of C. From (2) and (3), we can obtain the upper bound for ‖ũ − u*‖_∞, since ‖ũ − u*‖_∞ = max(‖x̃ − x*‖_∞, ‖ỹ − y*‖_∞) ≤ max(‖x̃ − x*‖_2, ‖ỹ − y*‖_2). Substituting (3) into (2), we obtain

‖x̃ − x*‖_2 ≤ ‖A⁻¹‖_2((1 + ζ‖Bᵀ‖_2 ‖BA⁻¹‖_2)‖r_f‖_2 + ζ‖Bᵀ‖_2 ‖r_g‖_2).   (4)
The important special case is when C = 0. We then have σ_min(C) = 0, so that (3) and (4) give

‖x̃ − x*‖_2 ≤ ‖A⁻¹‖_2(1 + ‖A‖_2 ‖Bᵀ‖_2 ‖(BBᵀ)⁻¹‖_2 ‖BA⁻¹‖_2)‖r_f‖_2 + κ(A)‖Bᵀ‖_2 ‖(BBᵀ)⁻¹‖_2 ‖r_g‖_2,   (5)
‖ỹ − y*‖_2 ≤ ‖A‖_2 ‖(BBᵀ)⁻¹‖_2(‖BA⁻¹‖_2 ‖r_f‖_2 + ‖r_g‖_2),   (6)

where κ(A) := ‖A‖_2 ‖A⁻¹‖_2. They proposed verification algorithms based on (5) and (6).
Hashimoto [2] has also treated the case C = 0 and improved (5) and (6). Since B has full rank, there exists a nonsingular m×m matrix L_B such that L_B L_Bᵀ = BBᵀ. He presented the error estimation

‖x̃ − x*‖_2 ≤ ‖A⁻¹‖_2 ‖r_f‖_2 + κ(A)‖L_B⁻¹ r_g‖_2,   (7)
‖L_B(ỹ − y*)‖_2 ≤ κ(A)‖r_f‖_2 + ‖A‖_2 ‖L_B⁻¹ r_g‖_2.   (8)

From (8), we have

‖ỹ − y*‖_2 ≤ ‖L_B⁻¹‖_2 ‖L_B(ỹ − y*)‖_2 ≤ ‖L_B⁻¹‖_2(κ(A)‖r_f‖_2 + ‖A‖_2 ‖L_B⁻¹ r_g‖_2).   (9)
The purpose of this talk is to present new error estimations for (1) and to propose corresponding verification algorithms. Since A is SPD, there exists a nonsingular n×n matrix L_A such that L_A L_Aᵀ = A. Let L_A⁻¹Bᵀ = QR be the thin QR factorization of L_A⁻¹Bᵀ, where Q ∈ R^{n×m} is column-orthogonal and R ∈ R^{m×m} is upper triangular. Since L_A is nonsingular and B has full rank, R is also nonsingular. We then derive the error estimation

‖x̃ − x*‖_2 ≤ ‖L_A⁻¹‖_2 ‖L_A⁻¹ r_f‖_2 + η‖A⁻¹Bᵀ‖_2 ‖r_g‖_2,   (10)
‖ỹ − y*‖_2 ≤ η(‖BA⁻¹ r_f‖_2 + ‖r_g‖_2),   (11)

where

η := ‖R⁻¹‖_2² / (1 + ‖R⁻¹‖_2² σ_min(C)).

Let R⁻ᵀ := (R⁻¹)ᵀ. When C = 0, we present the error estimation

‖x̃ − x*‖_2 ≤ ‖L_A⁻¹‖_2(‖L_A⁻¹ r_f‖_2 + ‖R⁻ᵀ r_g‖_2),   (12)
‖ỹ − y*‖_2 ≤ ‖R⁻ᵀ‖_2 ‖L_A⁻¹ r_f‖_2 + ‖R⁻¹R⁻ᵀ r_g‖_2.   (13)
Let ε_CH, δ_CH, δ_CH0, ε_CH0, δ_H, ε_H, δ_M, ε_M, δ_M0 and ε_M0 denote the right hand sides of (3), (4), (5), (6), (7), (9), (10), (11), (12) and (13), respectively. We prove δ_M ≤ δ_CH, ε_M ≤ ε_CH, δ_M0 ≤ δ_H ≤ δ_CH0 and ε_M0 ≤ ε_H ≤ ε_CH0, and propose verification algorithms based on (12) and (13). These algorithms do not assume but prove that A is SPD and B has full rank. Numerical results are finally reported to show the properties of the proposed algorithms.
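Estimates (12) and (13) for C = 0 can be evaluated in plain floating point as follows (illustration only; the proposed verification algorithms bound all rounding errors rigorously and, unlike this sketch, prove that A is SPD and B has full rank):

```python
# Evaluate the error estimates (12) and (13) for a saddle point system, C = 0.
import numpy as np

def saddle_point_bounds(A, B, f, g, xt, yt):
    rf = f - A @ xt - B.T @ yt              # residuals of (1) with C = 0
    rg = g - B @ xt
    LA = np.linalg.cholesky(A)              # A = LA @ LA.T
    LAinv = np.linalg.inv(LA)
    Q, R = np.linalg.qr(LAinv @ B.T)        # thin QR factorization
    Rinv = np.linalg.inv(R)
    t1 = np.linalg.norm(LAinv @ rf)         # ||LA^{-1} r_f||_2
    t2 = np.linalg.norm(Rinv.T @ rg)        # ||R^{-T} r_g||_2
    ex = np.linalg.norm(LAinv, 2) * (t1 + t2)                               # (12)
    ey = np.linalg.norm(Rinv.T, 2) * t1 + np.linalg.norm(Rinv @ (Rinv.T @ rg))  # (13)
    return ex, ey

A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0, 1.0]])
f = np.array([1.0, 0.0])
g = np.array([1.0])
H = np.block([[A, B.T], [B, np.zeros((1, 1))]])
u = np.linalg.solve(H, np.concatenate([f, g]))
xt, yt = u[:2] + 1e-8, u[2:] - 1e-8         # perturbed approximate solution
ex, ey = saddle_point_bounds(A, B, f, g, xt, yt)
print(np.linalg.norm(xt - u[:2]) <= ex, np.linalg.norm(yt - u[2:]) <= ey)
```

Both bounds cover the injected perturbation with some slack, reflecting the norm inequalities used in their derivation.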
References:
[1] X. Chen, K. Hashimoto, Numerical validation of solutions of
saddle point matrix equations, Numer. Linear Algebra Appl., 10
(2003), pp. 661–672.
[2] K. Hashimoto, A preconditioned method for saddle point problems, MHF Preprint Series, MHF 2007-6 (2007), http://www2.math.kyushu-u.ac.jp/coe/report/pdf/2007-6.pdf
A method of verified computations for
nonlinear parabolic equations
Makoto Mizuguchi1, Akitoshi Takayasu1, Takayuki Kubo2,
and Shin’ichi Oishi1,3
1
Waseda University, 2 University of Tsukuba, 3 CREST JST
169-8555 Tokyo, Japan;
305-0006 Ibaraki, Japan
[email protected]
Keywords: parabolic initial-boundary value problems, verified computations, existence, error bounds
Let Ω be a bounded polygonal or polyhedral domain in R^d (d = 1, 2, 3). Let V := H^1_0(Ω), X := L²(Ω) and V* := H⁻¹(Ω). A dual product between V and V* is defined by ⟨·, ·⟩. The inner product in X is denoted by (·, ·)_X. In this talk, we consider the initial-boundary value problem of heat equations:

∂_t u − Δu = f(u)  in (0, ∞) × Ω,
u(t, x) = 0  on (0, ∞) × ∂Ω,   (1)
u(0, x) = u_0(x)  in Ω,

where ∂_t u := du/dt, f : V → X is a Fréchet differentiable nonlinear function, and u_0 ∈ X is a given initial function. A : V → V* is defined by ⟨Au, v⟩ := (∇u, ∇v)_X for all v ∈ V. Here, −A generates an analytic semigroup {e^{−tA}}_{t≥0}. Letting n ∈ N be a fixed natural number, we divide the time interval: 0 = t_0 < t_1 < ··· < t_n < ∞. For k = 1, 2, ..., n, we define T_k = (t_{k−1}, t_k] and T = ∪_k T_k. The main aim of this paper is to present a computer-assisted method of verifying local existence and uniqueness of the exact solution of (1) in the function space

L^∞(T; V) := { u : ess sup_{t ∈ T} ‖u(t)‖_V < ∞ }.
Let û_k ≈ u(t_k) be the approximate solution obtained by the finite element method and the backward Euler method [1]. We construct an approximate solution ω defined by

ω(t) := Σ_{k=1}^{n} û_k φ_k(t),  t ∈ T,

where φ_k(t) is a piecewise linear Lagrange basis in T_k. We have the following result based on semigroup theory and Banach's fixed-point theorem.
Theorem 1 (Verification principle) For t ∈ T_k and v(t) ∈ V, let B_k(v, ξ) be a ball centered at v with radius ξ in the norm ‖·‖_{L^∞(T_k;V)}. Suppose that we have constants L_k(v, ξ), δ_k and ε_k, defined by

L_k(v, ξ) := sup_{y ∈ B_k(v,ξ), w ∈ V, ‖w‖_V = 1} ‖f′[y]w‖_{L^∞(T_k;X)},

δ_k := ‖(û_k − û_{k−1})/τ_k + Aû_k − f(û_k)‖_{V*},  and  ε_k := ‖û_k − û_{k−1}‖_V.

Also let us assume that λ_min > 0 is the minimal eigenvalue of A. We denote η > 0 by

η := 2√(τ_k/e) L_k(û_k, ε_k) ε_k + δ_k + (1 + (1 − e^{−τ_k λ_min})/(τ_k λ_min)) ε_k + ρ_{k−1},

where e is Napier's constant and ρ_0 = 0. If ρ_k > 0 satisfies 2√(τ_k/e) L_k(ω, ρ_k) ρ_k + η < ρ_k, then the solution u(t) of (1) uniquely exists in the ball B_k(ω, ρ_k).
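When L_k(ω, ρ) can be bounded by a constant L̄ (an assumption made only for this sketch), the radius condition of the theorem becomes a linear inequality in ρ_k and is easy to check:

```python
# Find a radius rho with 2*sqrt(tau_k/e)*Lbar*rho + eta < rho, if one exists.
import math

def admissible_radius(tau_k, Lbar, eta):
    """Smallest rho (with a 1% margin) satisfying the contraction condition,
    or None when the slope 2*sqrt(tau_k/e)*Lbar is not below 1."""
    slope = 2.0 * math.sqrt(tau_k / math.e) * Lbar
    if slope >= 1.0:
        return None
    rho = 1.01 * eta / (1.0 - slope)
    assert slope * rho + eta < rho
    return rho

rho = admissible_radius(tau_k=1e-2, Lbar=1.0, eta=1e-3)
print(rho)
```

For genuinely nonlinear f, L_k(ω, ρ_k) itself grows with ρ_k, so the actual verification iterates this test.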
References:
[1] V. Thomée, Galerkin Finite Element Methods for Parabolic Problems, Springer, Berlin, 1997.
[2] A. Pazy, Semigroups of linear operators and applications to partial
di↵erential equations, Springer, New York, 1983.
A sharper error estimate of verified
computations for nonlinear heat
equations
Makoto Mizuguchi1, Akitoshi Takayasu1, Takayuki Kubo2,
and Shin’ichi Oishi1,3
1
Waseda University, 2 University of Tsukuba, 3 CREST JST
169-8555 Tokyo, Japan;
305-0006 Ibaraki, Japan
[email protected]
Keywords: parabolic initial-boundary value problems, computer-assisted proof, rigorous error estimate
Let Ω be a bounded polygonal or polyhedral domain in R^d (d = 1, 2, 3). Let V := H^1_0(Ω), X := L²(Ω) and V* := H⁻¹(Ω). A dual product between V and V* is defined by ⟨·, ·⟩. The inner product in X is denoted by (·, ·)_X. In this talk, we consider the initial-boundary value problem of heat equations:

∂_t u − Δu = f(u)  in (0, ∞) × Ω,
u(t, x) = 0  on (0, ∞) × ∂Ω,   (1)
u(0, x) = u_0(x)  in Ω,

where ∂_t u := du/dt, f : V → X is a Fréchet differentiable nonlinear function, and u_0 ∈ X is a given initial function. A : V → V* is defined by ⟨Au, v⟩ := (∇u, ∇v)_X for all v ∈ V. Letting n ∈ N be a fixed natural number, we divide the time interval: 0 = t_0 < t_1 < ··· < t_n < ∞. For k = 1, 2, ..., n, we define T_k = (t_{k−1}, t_k] and T = ∪_k T_k. Let û_k ≈ u(t_k) be a fully discretized approximate solution obtained by the finite element method and the backward Euler method [1]. We define an approximate solution ω by

ω(t) := Σ_{k=1}^{n} û_k φ_k(t),  t ∈ T,
where φ_k(t) is a piecewise linear Lagrange basis in T_k. By using ω(t), we have established a computer-assisted proof of local existence and uniqueness of u(t). In this method, however, the precision of the error estimate is somewhat rough.
The topic of this talk is to derive a sharper error estimate by adding the assumptions that the initial data satisfy u_0 ∈ V and that f′, the Fréchet derivative of f, is locally continuous. The key point of this talk is to use an ideal approximation ū(t) defined by

ū(t) := Σ_{k=1}^{n} u_k φ_k(t),  t ∈ T,
where u_k ∈ V satisfies the elliptic equation

((u_k − u_{k−1})/τ_k, v)_X + (∇u_k, ∇v) = (f(u_k), v)_X,  ∀v ∈ V.
Then, we divide the error estimate into the following two parts:

‖u − ω‖_{L^∞(T_k;V)} ≤ ‖u − ū‖_{L^∞(T_k;V)} + ‖ū − ω‖_{L^∞(T_k;V)}.

First, we rigorously construct the ideal approximation ū(t) using the framework of verified computations for elliptic equations, e.g., M. Plum [3]. Next, by using ū, local existence and uniqueness of u(t) is validated by a computer-assisted method based on Banach's fixed-point theorem and the semigroup theory [2]. Then, the sharper error estimate is provided.
References:
[1] V. Thomée, Galerkin Finite Element Methods for Parabolic Problems, Springer, Berlin, 1997.
[2] A. Pazy, Semigroups of linear operators and applications to partial
di↵erential equations, Springer, New York, 1983.
[3] M. Plum, Existence and multiplicity proofs for semilinear elliptic boundary value problems by computer assistance, Jahresber.
Dtsch. Math.-Ver., 110 (2008), pp. 19–54.
An interval arithmetic algorithm for
global estimation of hidden Markov
model parameters
Tiago Montanher and Walter Mascarenhas
University of São Paulo
1010 Matão St. São Paulo, SP. Brazil
[email protected]
Keywords: Hidden Markov models, interval arithmetic, linear programming bounds
Hidden Markov Models are important tools in statistics and applied mathematics, with applications in speech recognition, physics, mathematical finance and biology. The Hidden Markov Models we consider here are formed by two discrete-time, finite-state stochastic processes. The first process is a Markov chain (A, π) and is not observable directly. Instead, we observe a second process B which is driven by the hidden process. For instance, a Markov chain is a simple Hidden Markov Model in which the observed process and the hidden process are the same. These models have received much attention in the literature in the past forty years, and the book by Cappé [2] presents a good didactic overview of the topic. From a historical perspective, the seminal paper by Rabiner [1] provides a good motivation for this subject.
In order to extract conclusions from a Hidden Markov Model we must estimate the parameters defining the hidden process (A, π) and the observed process B. In this article we present efficient global optimization techniques to estimate these parameters by maximum likelihood and compare our estimates with the ones obtained by the local likelihood maximization methods already described in the literature. Usually, this estimation problem is solved by local methods, like the Baum-Welch algorithm. These methods are efficient; however, they only find local maximizers and do not estimate the distance from the resulting parameters to global optima. Our work aims to improve this situation in practice.
We develop a global optimization algorithm based on the classical interval branch-and-bound framework described by Kearfott [3]. In a successful execution, the algorithm is able to find a box of prescribed width which rigorously contains at least one feasible point x* for the problem and such that x* is an ε-global maximum. The objective function and its derivatives are evaluated by the so-called backward recursion presented in Rabiner's work. In order to obtain sharper estimates of the functions we do not evaluate them using the natural interval extension. Instead, at each evaluation we solve a set of small linear programs given by the backward recursion. We also try to improve the lower bound for the maximum by implementing a multi-start Baum-Welch procedure. To handle the underflow problems which arise frequently in the estimation problem for Hidden Markov Models, we derive a new scaling scheme based on the C++ functions scalbln and frexp. This approach is significantly different from the literature, where authors suggest taking the log of the objective function. We present numerical experiments illustrating the effectiveness of our method.
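The scaling idea can be illustrated with Python's analogue of the C++ frexp (the pair (mantissa, exponent) keeps likelihood products far below the underflow threshold without switching to logarithms):

```python
# Keep a product of many small probabilities as m * 2**e with m in [0.5, 1).
import math

def scaled_product(factors):
    m, e = 1.0, 0
    for p in factors:
        m *= p
        frac, exp = math.frexp(m)   # m == frac * 2**exp with frac in [0.5, 1)
        m, e = frac, e + exp
    return m, e

# 10000 factors of 1e-3: the plain float product underflows to 0.0,
# while the scaled pair still represents the value 1e-30000.
m, e = scaled_product([1e-3] * 10000)
print(m, e)
```

In the C++ implementation, frexp extracts the exponent and scalbln applies the accumulated rescaling, which keeps every intermediate quantity in the representable range.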
References:
[1] L.R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE, 1989, pp. 257–286.
[2] O. Cappé, E. Moulines, T. Ryden, Inference in Hidden Markov Models (Springer Series in Statistics), Springer-Verlag, New York, 2005.
[3] R.B. Kearfott, Rigorous Global Search: Continuous Problems, Kluwer Academic, Dordrecht, 1996.
Toward hardware support for
Reproducible Floating-Point
Computation
Hong Diep Nguyen, James Demmel
University of California, Berkeley
Berkeley, CA 94720, USA
{hdnguyen,demmel}@eecs.berkeley.edu
Keywords: Reproducibility, Floating-point arithmetic, Hardware implementation, Parallel Computation
Reproducibility is the ability to compute bit-wise identical results from different runs of the same program on the same input data. This is very important for debugging and for understanding the reliability of the program. In a parallel computing environment, especially on very large-scale systems, it is usually not possible to control the available computing resources such as the processor count and the reduction tree shape. Therefore the order of evaluation differs from one run to another run of the same program, which leads to different computed results due to the non-associativity of floating-point addition and multiplication.
In a previous paper [1] we proposed the pre-rounding technique
for reproducible summation regardless of the available computing resources such as processor count, reduction tree shape, data partition,
multimedia instruction set (SSE, AVX), etc. This technique has been
implemented in a production library, ReproBLAS [2], which currently
supports SSE instructions and the MPI computing environment. Experimental results showed that on large-scale systems, for example on
a Cray XC30 machine with more than 1024 processors, the reproducible sum runs only 20% slower than the performance-optimized
nonreproducible sum. On a single processor, however, the reproducible
sum can be up to 8 times slower than the performance-optimized nonreproducible sum. That is because the pre-rounding technique is implemented using extra floating-point operations for multiple passes of
error-free vector transformation.
Hardware support would help to reduce the additional cost of these extra floating-point operations. In this paper, we propose a new instruction for reproducible addition which ideally can be issued at every clock cycle, and which
will reduce the cost of reproducible summation to almost that of
a normal nonreproducible sum on a single processor. In comparison
with using a long accumulator, which can also provide reproducibility
by computing exact dot product and summation, our new instruction
exhibits the following advantages:
• the new instruction operates on the existing register file instead of
having to implement a special accumulator unit,
• it does not require drastic changes to the existing scheduling system,
• it requires less memory space, hardware area, and energy
consumption,
• the new instruction can be well pipelined and multi-threaded.
In this talk, I will present some preliminary results of this ongoing
work on hardware implementation. First, I will present a sketch
of the hardware layout used to implement the reproducible add
instruction. Then I will show some experimental results to demonstrate
that the chosen hardware configuration is sufficient to obtain good
accuracy. Finally I will discuss some possible future work.
References:
[1] J. Demmel, H.D. Nguyen, Fast Reproducible Floating-Point
Summation, ARITH 21, Austin, Texas, April 7-10, 2013.
[2] ReproBLAS, Reproducible Basic Linear Algebra Subprograms,
http://bebop.cs.berkeley.edu/reproblas.
Accurate and efficient implementation
of affine arithmetic
using floating-point arithmetic
Jordan Ninin and Nathalie Revol
J. Ninin : IHSEV team, LAB-STICC, ENSTA-Bretagne
2 rue François Verny, 29806 Brest, France
N. Revol: INRIA - Université de Lyon - AriC team
LIP (UMR 5668 CNRS - ENS de Lyon - INRIA - UCBL)
ENS de Lyon, 46 allée d’Italie, 69007 Lyon, France
[email protected]
Keywords: interval arithmetic, affine arithmetic, floating-point arithmetic, roundoff error
Affine arithmetic is one of the extensions of interval arithmetic
that aim at counteracting the variable dependency problem. With
affine arithmetic, defined in [5] by Stolfi and Figueiredo, variables are
represented as affine combinations of symbolic noises. It differs from
the generalized interval arithmetic, defined by Hansen in [1], where
variables are represented as affine combinations of intervals. Non-affine
operations are realized through the introduction of a new noise symbol
that accounts for nonlinear terms. Variants of affine arithmetic have been
proposed that aim at limiting the number of noise symbols; let us
mention [4] by Messine and [6] by Vu, Sam-Haroud and Faltings, to
quote only a few.
The focus here is on the implementation of affine arithmetic using
floating-point arithmetic, specified in [2]. With floating-point arithmetic, an issue is to handle roundoff errors and to incorporate them in
the final result, so as to satisfy the inclusion property, which is the fundamental property of interval arithmetic. In [4], [5] and [6], roundoff
errors are accounted for in a manner that implies frequent switches of
the rounding mode; this incurs a severe time penalty. Implementations
of these variants are available in YalAA, developed by Kiel [3].
We propose an implementation that uses one dedicated noise symbol for accumulated roundoff errors. For accuracy purposes, the roundoff error ε of each arithmetic operation is computed exactly via EFT
(Error-Free Transforms). For efficiency purposes, the rounding mode
is never switched. Instead, a brute-force bound on the roundoff error
incurred by the accumulation of the εs mentioned above is used.
Experimental results are presented. The proposed implementation
is one of the most accurate, and its execution time is significantly reduced; it can be up to 50% faster than other implementations. Furthermore, the use of an FMA (Fused Multiply-and-Add) reduces the
cost of the EFT and the overall performance is even better.
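The EFT ingredient can be shown in full. Knuth's TwoSum, the classical error-free transform for addition, needs only round-to-nearest operations and never touches the rounding mode (a generic textbook version; the abstract does not specify which EFTs are used, so this is merely representative):

```python
def two_sum(a, b):
    """Error-free transform (EFT) for addition: returns (s, err) with
    s = fl(a + b) and a + b = s + err exactly, using only round-to-nearest
    floating-point operations (Knuth's TwoSum, 6 flops)."""
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    err = (a - a_virtual) + (b - b_virtual)
    return s, err
```

For example, two_sum(1.0, 1e-16) returns (1.0, 1e-16): the low-order part that ordinary addition discards is recovered exactly and can be swept into the dedicated roundoff noise symbol.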
References:
[1] E.R. Hansen, A generalized interval arithmetic, Lecture Notes in
Computer Science, No. 29, pp. 7–18, 1975.
[2] American National Standards Institute and Institute
of Electrical and Electronic Engineers, IEEE standard
for binary floating-point arithmetic. Std 754-2008. ANSI/IEEE
Standard, 2008.
[3] S. Kiel, YalAA: yet another library for affine arithmetic, Reliable
Computing, Vol. 16, pp. 114–129, 2012.
[4] F. Messine, Extensions of affine arithmetic: Application to unconstrained global optimization, Journal of Universal Computer
Science, Vol. 8, No. 11, pp. 992–1015, 2002.
[5] J. Stolfi and L. de Figueiredo, Self-Validated Numerical Methods and Applications, Monograph for 21st Brazilian Mathematics
Colloquium, Rio de Janeiro, Brazil, 1997.
[6] X.-H. Vu, D. Sam-Haroud, and B. Faltings, Combining multiple inclusion representations in numerical constraint propagation,
in IEEE Int. Conf. on Tools with Artificial Intelligence, pp. 458–
467, IEEE Computer Society, 2004.
Iterative Refinement for Symmetric
Eigenvalue Problems
Takeshi Ogita
Tokyo Woman’s Christian University
Tokyo 167-8585, Japan
[email protected]
Keywords: eigenvalue problem, iterative refinement, accurate numerical algorithms
Let us consider a standard eigenvalue problem

Ax = λx,    (1)

where A = A^T ∈ R^{n×n}. To solve (1) is ubiquitous, since it is one of the
significant tasks in scientific computing. The purpose of this talk is to
compute an arbitrarily accurate eigenvalue decomposition:

A = X̂ D̂ X̂^{-1} = X̂ D̂ X̂^T,

where X̂ ∈ R^{n×n} is orthogonal and D̂ ∈ R^{n×n} is diagonal.
Most of the existing refinement algorithms are based on Newton’s
method for nonlinear equations, e.g. [1,2]. These methods can improve
eigenpairs one-by-one. On the other hand, we develop a method of
improving all eigenvalues and eigenvectors at the same time.
Let X̂ ∈ R^{n×n} be an orthogonal matrix consisting of all exact eigenvectors of A. Let X₀ ∈ R^{n×n} be an initial approximation of X̂. Here
we assume that X₀ satisfies

‖X̂ − X₀‖ =: ε₀ < 1/2.
In this talk we propose a simple iterative refinement algorithm for
calculating X_k, k = 1, 2, . . . , such that
• X_k := X_{k−1} + W_k for k = 1, 2, . . .
• ‖X̂ − X_k‖ =: ε_k ≈ ε_{k−1}² ≈ ε₀^{2^k} (quadratic convergence)
which implies

max_i |λ_i − λ̃_i^{(k)}| ≈ ε_k · max_i |λ_i| = ε_k ‖A‖,   where λ̃_i^{(k)} := (X_k^T A X_k)_{ii}.
The idea is as follows: for an approximation X ∈ R^{n×n} of X̂, define
E ∈ R^{n×n} such that X̂ = X(I + E). Then we try to compute a good
approximation Ẽ of E by utilizing the following two relations:

X̂^{-1} X̂ = X̂^T X̂ = I    (orthogonality)
X̂^{-1} A X̂ = X̂^T A X̂ = D̂    (diagonalizability)

After obtaining Ẽ, we can update X by X(I + Ẽ). In general, we can
iteratively update X_k by

X_{k+1} := X_k (I + Ẽ_k) = X_k + X_k Ẽ_k.
Detailed discussions and numerical results will be presented in the
talk.
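To make the update X(I + Ẽ) concrete, here is a toy dense-matrix sketch of one simultaneous refinement step in pure Python. The concrete formula for Ẽ below is a published Newton-type variant built on exactly the orthogonality and diagonalizability relations; it is assumed here for illustration, and the algorithm presented in the talk may differ in details:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def refine(A, X, gap_tol=1e-8):
    """One step refining ALL approximate eigenvectors X (columns) of the
    symmetric matrix A at once, via X <- X(I + E).  G ~ I exploits the
    orthogonality relation, S ~ D the diagonalizability relation."""
    n = len(A)
    Xt = transpose(X)
    G = matmul(Xt, X)                 # should be close to I
    S = matmul(Xt, matmul(A, X))      # should be close to diag(lambda)
    R = [[(i == j) - G[i][j] for j in range(n)] for i in range(n)]
    lam = [S[i][i] / (1.0 - R[i][i]) for i in range(n)]  # Rayleigh quotients
    E = [[R[i][j] / 2.0 if i == j or abs(lam[j] - lam[i]) < gap_tol
          else (S[i][j] + lam[j] * R[i][j]) / (lam[j] - lam[i])
          for j in range(n)] for i in range(n)]
    XE = matmul(X, E)
    return [[X[i][j] + XE[i][j] for j in range(n)] for i in range(n)], lam
```

Starting from a slightly perturbed eigenvector matrix of A = [[2, 1], [1, 2]] (exact eigenvalues 3 and 1), the Rayleigh quotients lam converge roughly quadratically over successive calls.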
References:
[1] J.J. Dongarra, C.B. Moler, J.H. Wilkinson, Improving the
accuracy of computed eigenvalues and eigenvectors, SIAM J. Numer. Anal., 20:1 (1983), 23–45.
[2] F. Tisseur, Newton’s method in floating point arithmetic and
iterative refinement of generalized eigenvalue problems, SIAM. J.
Matrix Anal. Appl., 22:4 (2001), 1038–1057.
Automatic Verified Numerical
Computations for Linear Systems
Katsuhisa Ozaki, Takeshi Ogita and Shin’ichi Oishi
Shibaura Institute of Technology
307 Fukasaku, Minuma-ku, Saitama-shi, Saitama 337-8570, Japan
[email protected]
Keywords: Verified Numerical Computations, Floating-Point Arithmetic, Numerical Linear Algebra
This talk is concerned with verified numerical computations for
linear systems. Our aim is to improve an automatic verified method
for linear systems which is discussed in [2]. Let F be the set of floating-point numbers as defined by IEEE 754, and let I be the identity matrix
of suitable size. For A ∈ F^{n×n}, if R exists such that

‖RA − I‖ ≤ α < 1,    (1)

then A is non-singular. Checking (1) constitutes the dominant computation in verified
numerical methods for linear systems, and the discussion is how to
obtain α in (1) as fast as possible. The notations fl(·) and fl△(·) mean
that each operation in the parentheses is evaluated by floating-point
arithmetic as defined by IEEE 754 with rounding to nearest and rounding upwards, respectively. Let e = (1, 1, . . . , 1)^T ∈ F^n. A constant u
denotes the roundoff unit, for example, u = 2^{-53} for binary64. Assume
that neither overflow nor underflow occurs in fl(·). If we apply a priori
error analysis (for example [1]) to fl(RA − I), then an upper bound of
‖RA − I‖∞ can be computed by

‖RA − I‖∞ ≤ fl△( ‖ fl(|RA − I|e) + d·u·(|R|(|A|e) + e) ‖∞ ),    (2)

where d is a constant with log₂ n ≲ d ≲ n depending on the order of
the evaluation. For (2), the following relation often holds [2]:

fl△( fl(|RA − I|e) ) ≪ fl△( d·u·(|R|(|A|e) + e) ).
Our idea is as follows: after obtaining R ≈ A^{-1}, we first evaluate
fl△(u·(|R|(|A|e) + e)). Then, the constant d can be controlled by the
following block computations in order to prove ‖RA − I‖∞ < 1. Assume that s = ⌈n/n₀⌉ for a block size n₀. We use block notation for
C = AB (A, B, C ∈ F^{n×n}) as follows:

( C_11 · · · C_1s )   ( A_11 · · · A_1s ) ( B_11 · · · B_1s )
(  ...  ...  ...  ) = (  ...  ...  ...  ) (  ...  ...  ...  )
( C_s1 · · · C_ss )   ( A_s1 · · · A_ss ) ( B_s1 · · · B_ss )

We introduce a variant of block matrix computations with α ∈ N,
w = ⌈s/α⌉ and 1 ≤ q ≤ w − 1 as follows:

C_ij = fl( Σ_{k=1}^{w} T_k ),   T_q = fl( Σ_{l=α(q−1)+1}^{αq} A_il B_lj ),   T_w = fl( Σ_{l=α(w−1)+1}^{s} A_il B_lj ).

Then |C − AB| ≤ γ·u·|A||B| with γ = n₀ + α + w − 2. The minimal γ for
n = r³ (r ∈ N) is identical with some bounds in [3]. Our algorithm
automatically defines suitable n₀ and α, and obtains α in (1) as fast as possible.
Similar discussion can be applied to other methods in [2]. As a result,
the computing time of the proposed algorithm is much smaller than
that of the algorithm in [2], which will be shown in the presentation.
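What inequality (1) buys can be illustrated independently of directed rounding: for tiny systems, ‖RA − I‖∞ < 1 can be checked in exact rational arithmetic (a didactic sketch only; the whole point of the proposed algorithm is to obtain the bound with fast floating-point operations instead):

```python
from fractions import Fraction

def verify_nonsingular(A, R):
    """Prove A non-singular by checking ||R*A - I||_inf < 1 exactly.
    Fraction(x) converts each binary64 entry to an exact rational, so
    the residual and its norm are computed without any rounding error.
    Rigorous, but far slower than the floating-point bound (2)."""
    n = len(A)
    AF = [[Fraction(x) for x in row] for row in A]
    RF = [[Fraction(x) for x in row] for row in R]
    norm = Fraction(0)
    for i in range(n):
        row_sum = Fraction(0)
        for j in range(n):
            s = sum(RF[i][k] * AF[k][j] for k in range(n))
            if i == j:
                s -= 1
            row_sum += abs(s)
        norm = max(norm, row_sum)
    return norm < 1   # True proves that A is non-singular
```

With A = [[2, 1], [1, 3]] and the approximate inverse R = [[0.6, -0.2], [-0.2, 0.4]], the residual norm is tiny and the check succeeds; for a singular A no R can make it pass.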
References:
[1] N. J. Higham, Accuracy and Stability of Numerical Algorithms,
Second Edition, SIAM, Philadelphia, 2002.
[2] K. Ozaki, T. Ogita, S. Oishi, An Algorithm for Automatically
Selecting a Suitable Verification Method for Linear Systems, Numerical Algorithms, 56 (2011), No. 3, pp. 363-382.
[3] A. M. Castaldo, R. C. Whaley, A. T. Chronopoulos, Reducing Floating Point Error in Dot Product using the Superblock
Family of Algorithms, SIAM Journal on Scientific Computing, 31
(2009), No. 2, pp. 1156-1174.
Bernstein branch-and-bound algorithm
for unconstrained global optimization
of multivariate polynomial MINLPs
Bhagyesh V. Patil1 and P. S. V. Nataraj2
1
Laboratoire d’Informatique de Nantes Atlantique
2, rue de la Houssinière
BP 92208, Nantes
44322, France
[email protected]
2
Systems and Control Engineering Group
Indian Institute of Technology Bombay
Powai-400076, Mumbai
[email protected]
Keywords: Branch-and-bound, Bernstein polynomials, Global optimization, Mixed-integer nonlinear programming.
Optimization of mixed-integer nonlinear programming (MINLP)
problems constitutes an active area of research. A standard strategy
to solve MINLP problems is to use a branch-and-bound (BB) framework [1]. Specifically, a relaxed NLP is solved at each node of the
branch-and-bound tree. Different variants of the BB approach have
been reported in the literature [2] and are widely adopted by several
state-of-the-art MINLP solvers (cf. BARON, Bonmin, SBB). Despite
the widespread interest enjoyed by the BB approach, the type of
NLP solver used has been found to limit its performance in practice. To
solve polynomial nonlinear programming (NLP) problems, an alternative approach is provided by the Bernstein algorithms. The Bernstein
algorithms are similar in philosophy to interval branch-and-bound procedures. Several variants of the Bernstein algorithms to solve unconstrained polynomial NLPs have been reported in the literature (see,
for instance, work by Nataraj and co-workers). However, no work has
yet been reported in the literature for global optimization of unconstrained polynomial MINLP problems using the Bernstein polynomial
approach.
In this paper, we propose a Bernstein algorithm for unconstrained
global optimization of multivariate polynomial MINLPs. The proposed
algorithm is similar to the classical Bernstein algorithm for the global
optimization of unconstrained NLPs, but with several modifications
listed as follows. It uses tools, namely monotonicity and concavity
tests, a modified subdivision procedure (to handle integer decision variables in the given MINLPs), and the Bernstein box and Bernstein hull
consistency techniques to contract the search domain. The Bernstein
box and Bernstein hull consistency techniques are applied to constraints
based on the gradient and an upper bound on the global minimum of the
objective polynomial, to delete nonoptimal points from the given search
domain of interest.
The performance of the proposed algorithm is numerically tested
on a collection of 10 test problems (three to nine dimensional, with
one to six integer variables) taken from [3]. These problems are constructed as MINLPs, and the test results are compared with those of
the classical Bernstein algorithm for solving polynomial NLPs¹. For these
test problems, we first compare the performance of the proposed algorithm with and without accelerating devices (namely the cut-off, the
monotonicity, and the concavity tests), with combinations of the different
accelerating devices, and with combinations of the Bernstein box and
the Bernstein hull consistency techniques with the three accelerating
devices. Based on our findings, the proposed algorithm is found to
be considerably more efficient than the classical Bernstein algorithm,
giving an average reduction of 50% in the number of boxes processed and
in the computational time, depending on the tools used in the proposed
algorithm.
¹ It may be noted that the classical Bernstein algorithm is extended (based on simple rounding
heuristics borrowed from the MINLP literature) in this case to handle the integer variables of the
MINLPs.
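The range-enclosure property underlying every Bernstein branch-and-bound method fits in a few lines. A univariate sketch on [0, 1] (the proposed algorithm is multivariate and adds subdivision, consistency techniques and accelerating devices on top of this basic fact):

```python
from math import comb

def bernstein_bounds(coeffs):
    """Range enclosure of a polynomial on [0, 1] from its Bernstein
    coefficients.  coeffs are power-basis coefficients a_0..a_n of
    p(x) = sum_i a_i x^i; the Bernstein coefficients are
        b_j = sum_{i<=j} C(j,i)/C(n,i) * a_i,
    and min/max of the b_j enclose the range of p on [0, 1]."""
    n = len(coeffs) - 1
    b = [sum(comb(j, i) / comb(n, i) * coeffs[i] for i in range(j + 1))
         for j in range(n + 1)]
    return min(b), max(b)
```

For p(x) = x − x², whose true range on [0, 1] is [0, 0.25], the coefficients give the valid (if crude) enclosure [0, 0.5]; subdividing the box tightens the bounds, which is exactly what the branch-and-bound loop exploits.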
References:
[1] C. A. Floudas, Nonlinear and mixed-integer optimization: Fundamentals and applications, New York, U.S.A: Oxford University
Press, 1995.
[2] GAMS- The solver manuals, GAMS Development Corp., Washington DC, 2003.
[3] J. Verschelde, PHC pack, the database of polynomial systems,
Technical report, Mathematics Department, University of Illinois,
Chicago, USA, 2001.
Improved Enclosure for Parametric
Solution Sets with Linear Shape
Evgenija D. Popova
Inst. of Mathematics and Informatics, Bulgarian Academy of Sciences
Acad. G. Bonchev str., block 8, 1113 Sofia, Bulgaria
[email protected]
Keywords: linear systems, dependent data, solution set enclosure
Consider linear algebraic systems involving linear dependencies between interval parameters. The interval method of [1] for enclosing the
parametric united solution set is regarded as the best one. Its efficient
application requires a particular structure of the dependencies which is
representative of finite element models of truss structures. It is not
known which parametric systems (in general form) have the required
representation.
We generalize the method of [1] to systems involving dependencies
between the matrix and the right-hand side vector. Some sufficient
conditions for a parametric solution set to have a linear boundary will
be presented. Via these conditions, any parametric system that satisfies
them is transformed into the form required by the method of Neumaier
and Pownuk and its generalization. Thus, an expanded scope of applicability is achieved. Examples will demonstrate parametric solution
sets with linear boundary that appear in various application domains.
The linear boundary of any parametric solution set (AE-solution set in general)
with respect to a given parameter, which is proven by the above sufficient conditions, can be utilized to further improve an enclosure of
the solution set.
References:
[1] A. Neumaier, A. Pownuk, Linear systems with large uncertainties, with applications to truss structures, Reliable Computing, 13
(2007), pp. 149–172.
The architecture of the IEEE P1788 draft
standard for interval arithmetic
John Pryce
Cardiff University, UK
[email protected]
Keywords: IEEE standard, Interval arithmetic, Interval exception
handling, Interval flavors
Interval arithmetic (IA) is the most used way of producing rigorously proven results in problems of continuous mathematics, usually in
the form of real intervals that (even in the presence of rounding error) are
guaranteed to enclose a value of interest, such as a solution of a differential equation at some point. The basics of IA are generally agreed –
e.g., to add two intervals x, y, find an interval containing all x + y for
x in x and y in y.
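For instance, a minimal floating-point realization of that addition achieves the inclusion property by nudging each bound one ulp outward, a common workaround when directed rounding modes are not accessible (illustrative only; this is not text from the draft standard):

```python
import math

def interval_add(x, y):
    """Outward-rounded interval addition: [x1,x2] + [y1,y2].  Each bound
    is moved one ulp outward with math.nextafter, so the result encloses
    all real sums even though the two additions round to nearest."""
    lo = math.nextafter(x[0] + y[0], -math.inf)
    hi = math.nextafter(x[1] + y[1], math.inf)
    return (lo, hi)
```

The cost of one spurious ulp per bound is the price paid for guaranteed enclosure without switching the rounding mode.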
Many versions of IA theory exist, individually consistent but mutually incompatible. They differ especially in how to handle operations
not everywhere defined on their inputs, such as division by an interval
containing zero. In this situation a standard is called for, which not
all will love but which is usable and practical in most IA applications.
The IEEE working group P1788 [1], begun in 2008, has produced a
draft standard for interval arithmetic, currently undergoing the IEEE
approval process. The talk will concentrate on aspects of its architecture, especially:
• the levels structure, with a mathematical, a datum and an implementation level;
• the decoration system, which notes when a library operation is
applied to input where it is discontinuous or undefined.
Time permitting, I may outline the P1788 flavor concept, by which
implementations based on other versions of IA theory may be included
into the standard in a consistent way.
Invited talk
References:
[1] IEEE Interval Standard Working Group - P1788,
http://grouper.ieee.org/groups/1788/.
Verified Parameter Identification for
Dynamic Systems with Non-Smooth
Right-Hand Sides
Andreas Rauh, Luise Senkel and Harald Aschemann
Chair of Mechatronics
University of Rostock
D-18059 Rostock, Germany
{Andreas.Rauh,Luise.Senkel,Harald.Aschemann}@uni-rostock.de
Keywords: Non-smooth ordinary differential equations, parameter
identification, mechanical systems, friction
Dynamic system models given by ordinary differential equations
(ODEs) with non-smooth right-hand sides are widely used in engineering. They can, for example, be employed to model transitions
between static and sliding friction in mechanical systems and to represent variable degrees of freedom for dynamic applications in robotics
with contacts between at least two (rigid) bodies.
The verified simulation of such systems has to detect those points
of time at which either one of the discrete model states (in a representation of the ODEs by means of a state-transition diagram) becomes
active or at which one of the discrete states is deactivated [1,2]. As
long as mechanical systems are taken into consideration that are described by position and velocity as corresponding state variables, it is
guaranteed that the state trajectories (i.e., the solutions of the ODE)
remain continuous at the aforementioned switching points.
For practical applications, however, it is not only necessary to derive verified simulation techniques and to compute the state variables that
can be reached within a given time horizon under consideration of a
predefined control law. Such a control law is usually given by the
actuator signal (e.g. a force) acting on the (mechanical) system [3,4].
In addition, a system identification is necessary to determine parameter values that comply with the non-smooth system model on the one
hand and the measured data on the other hand. In engineering applications, these measurements are usually subject to uncertainty that is
often of the same order of magnitude as the measured data themselves.
For that reason, it is in general not reliable to determine point values
for the system parameters. Instead, confidence intervals have to be
computed which satisfy both the constraints imposed by the dynamic
system model (the ODEs with the variable-structure behavior) and the
measurements with their corresponding uncertainty.
In this contribution, an interval-based offline system identification
routine is presented and compared to a guaranteed stabilizing sliding
mode state and parameter estimator. This estimator is proven to be
asymptotically stable within a desired operating range and may be
employed in real time within engineering applications. However, it has
the drawback that it does not directly produce the confidence intervals
that are required for a guaranteed identification. Simulations and experimental results are shown for a laboratory test rig representing the
uncertain longitudinal dynamics of a vehicle.
References:
[1] E. Auer, S. Kiel, and A. Rauh, A Verified Method for Solving
Piecewise Smooth Initial Value Problems, Intl. J. of Applied Mathematics and Computer Science, 23 (2013). No. 4, pp. 731–747.
[2] N.S. Nedialkov and M. v. Mohrenschildt, Rigorous Simulation of Hybrid Dynamic Systems with Symbolic and Interval
Methods, Proc. of American Control Conference ACC, Anchorage, USA, (2002). pp. 140–147.
[3] A. Rauh, M. Kletting, H. Aschemann, and E.P. Hofer,
Interval Methods for Simulation of Dynamical Systems with State-Dependent Switching Characteristics, Proc. of the IEEE Intl. Conf.
on Control Applications, Munich, Germany, (2006). pp. 355–360.
[4] A. Rauh, Ch. Siebert, H. Aschemann, Verified Simulation
and Optimization of Dynamic Systems with Friction and Hysteresis, Proc. of ENOC 2011, Rome, Italy, (2011).
Computation of Confidence Regions in
Reliable, Variable-Structure State and
Parameter Estimation
Andreas Rauh, Luise Senkel and Harald Aschemann
Chair of Mechatronics
University of Rostock
D-18059 Rostock, Germany
{Andreas.Rauh,Luise.Senkel,Harald.Aschemann}@uni-rostock.de
Keywords: Ordinary differential equations, sliding mode techniques,
interval arithmetic, cooperativity of dynamic systems
Interval-based sliding mode controllers and estimators provide a
possibility to stabilize the error dynamics despite inaccurately known
parameters and bounded measurement uncertainty [2]. However, current implementations of both types of approaches are commonly characterized by the fact that they only provide point-valued estimates
without any explicit computation of confidence intervals [4]. Therefore, this contribution aims at developing fundamental techniques for
an extension towards the computation of guaranteed confidence intervals.
The corresponding procedure is based on the use of symbolic formula manipulation and interval arithmetic for the computation of those
sets of state variables (and estimated states, respectively) that can be
reached in a finite time horizon. For that purpose, the nonlinear system is embedded into a (quasi-)linear state-space representation with
piecewise constant input signals for which the sets of reachable states
can be computed by using Müller's theorem or, more generally, by
exploiting the system property of cooperativity for linear parameter-varying finite-dimensional state equations [1,3]. The necessary proof
of cooperativity is performed by verified range computation procedures that rely on interval arithmetic software libraries. Basic building blocks of these procedures are presented for the use of the Matlab
toolbox IntLab.
After a summary of the aforementioned fundamental procedures,
extensions are presented which show how these techniques can be employed in a framework for designing sliding mode estimators. These
estimators, extended by the use of interval arithmetic, determine the
sets of state variables and parameters that are consistent with both
a given dynamic system model and information about bounded measurement uncertainty.
Finally, necessary extensions are highlighted which allow the implementation to be adapted in such a manner that the corresponding
estimation schemes can make use of interval arithmetic in real time.
Numerical results for estimation tasks related to the longitudinal dynamics of a vehicle conclude this contribution.
References:
[1] M. Müller, Über die Eindeutigkeit der Integrale eines Systems
gewöhnlicher Differenzialgleichungen und die Konvergenz einer Gattung von Verfahren zur Approximation dieser Integrale, Sitzungsbericht Heidelberger Akademie der Wissenschaften, (1927). In German.
[2] L. Senkel, A. Rauh, H. Aschemann, Interval-Based Sliding
Mode Observer Design for Nonlinear Systems with Bounded Measurement and Parameter Uncertainty, In Proc. of IEEE Intl. Conference on Methods and Models in Automation and Robotics, Miedzyzdroje, Poland, 2013.
[3] H.L. Smith, Monotone Dynamical Systems; An Introduction to
the Theory of Competitive and Cooperative Systems, Mathematical Surveys and Monographs. American Mathematical Society,
Providence, USA, vol. 41 (1995).
[4] V. Utkin, Sliding Modes in Control and Optimization, Springer-Verlag, Berlin, Heidelberg, 1992.
Exponential Enclosure Techniques for
Initial Value Problems with Multiple
Conjugate Complex Eigenvalues
Andreas Rauh1, Ramona Westphal1, Harald Aschemann1 and
Ekaterina Auer2
1
Chair of Mechatronics
University of Rostock
D-18059 Rostock, Germany
2
Faculty of Engineering, INKO
University of Duisburg-Essen
D-47048 Duisburg, Germany
{Andreas.Rauh,Ramona.Westphal,Harald.Aschemann}@uni-rostock.de,
[email protected]
Keywords: Ordinary differential equations, initial value problems,
complex interval arithmetic, ValEncIA-IVP
ValEncIA-IVP is a verified solver providing guaranteed enclosures of the solution to initial value problems (IVPs) for sets of ordinary differential equations (ODEs). In the basic version of this solver,
the verified solution was computed as the sum of a non-verified approximate solution (computed, for example, by Euler's method) and
additive guaranteed error bounds determined using a simple iteration
scheme [1,2].
The disadvantage of this iteration scheme, however, is that the
widths of the resulting state enclosures might grow even for asymptotically stable ODEs [2,3]. This phenomenon is caused by the so-called
wrapping effect, which arises if non-axis-parallel state enclosures are
described by axis-aligned interval boxes in a state space of dimension
n > 1. To avoid the resulting overestimation, it is useful to transform the ODEs into a suitable canonical form. For the case of linear
ODEs with real eigenvalues of multiplicity one, the canonical form is
given by the Jordan normal form. It results in a decoupling of the
vector-valued set of state equations. A solution of this transformed
IVP can then be determined by an exponential enclosure technique
which guarantees that asymptotically stable solutions are represented
by contracting interval bounds if a suitable time discretization step
size is chosen. For real eigenvalues, this property holds as long as the
value zero is not included in any vector component of the solution interval. This advantageous property can be preserved for linear ODEs
with conjugate complex eigenvalues if a transformation into the complex Jordan normal form is employed. Then, a complex-valued interval
iteration scheme is used to determine state enclosures [4].
This contribution extends the solution procedure, described in [4]
for eigenvalues of multiplicity one, to more general situations with
several multiple real and complex eigenvalues. Simulation results for
technical system models from control engineering, containing bounded
uncertainty in initial values and parameters, conclude this contribution.
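The flavor of the scalar building block can be sketched as follows, under strong simplifying assumptions (a stable ODE x' = lam*x with an uncertain but strictly negative lam and a positive state; the actual solver operates on coupled systems transformed into complex Jordan normal form):

```python
import math

def exp_enclosure_step(x_lo, x_hi, lam_lo, lam_hi, h):
    """One step of a scalar exponential enclosure x(t+h) = x(t)*exp(lam*h)
    for x' = lam*x with lam in [lam_lo, lam_hi] < 0 and 0 < x_lo <= x_hi.
    With x > 0, x*exp(lam*h) is increasing in both x and lam, so the
    extremes occur at the interval endpoints; one outward ulp per bound
    compensates the rounding of the floating-point operations."""
    lo = math.nextafter(x_lo * math.exp(lam_lo * h), -math.inf)
    hi = math.nextafter(x_hi * math.exp(lam_hi * h), math.inf)
    return lo, hi
```

Because the enclosure decays with exp(lam*h) at every step, the bounds contract for asymptotically stable dynamics instead of blowing up, which is the key advantage over the basic additive iteration scheme.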
References:
[1] E. Auer, A. Rauh, E.P. Hofer, and W. Luther, Validated
Modeling of Mechanical Systems with SmartMOBILE: Improvement of Performance by ValEncIA-IVP, Lecture Notes in Computer Science 5045, (2008), Springer, pp. 1–27.
[2] A. Rauh and E. Auer, Verified Simulation of ODEs and DAEs in
ValEncIA-IVP, Reliable Computing, 15 (2011). No. 4, pp. 370–
381.
[3] A. Rauh, M. Brill, C. Günther, A Novel Interval Arithmetic Approach for Solving Di↵erential-Algebraic Equations with
ValEncIA-IVP, International Journal of Applied Mathematics
and Computer Science, 19 (2009). No. 3, pp. 381–397.
[4] A. Rauh, R. Westphal, E. Auer, and H. Aschemann, Exponential Enclosure Techniques for the Computation of Guaranteed State Enclosures in ValEncIA-IVP, Reliable Computing, 19
(2013). No. 1, pp. 66–90.
Numerical Validation of Sliding Mode
Approaches with Uncertainty
Luise Senkel, Andreas Rauh and Harald Aschemann
Chair of Mechatronics, University of Rostock
18059 Rostock, Germany
{Luise.Senkel,Andreas.Rauh,Harald.Aschemann}@uni-rostock.de
Keywords: Sliding Mode Techniques, Stochastic and bounded uncertainty, Interval Arithmetic
Many technical systems are affected by bounded and stochastic
disturbances, which are usually summarized as uncertainty in general.
Bounded uncertainty comprises, for example, lack of knowledge about
specific parameters as well as manufacturing tolerances. In contrast,
stochastic disturbances have to be taken into consideration in control
and estimation tasks if only inaccurate sensor measurements are available and if random effects influencing the stability of the system are to
be modeled, for example friction. To cope with these phenomena, a
sliding mode approach is derived in this presentation that takes these
types of uncertainty into account and stabilizes the error dynamics
even if system parameters are not exactly known and measurements
are affected by noise processes.
The sliding mode approach consists of two parts: a quasi-linear and
a variable-structure part. This provides the possibility to take not only
linear but also nonlinear systems into account, because the first part
stabilizes the linear dynamics and the second one counteracts nonlinear
influences on the system. In contrast to some known sliding mode
approaches, restrictive matching conditions are avoided. Additionally,
intervals for uncertain parameters as well as control, estimation and
measurement errors are used for the calculation of the switching amplitudes based on the Itô differential operator in combination with a
suitable candidate for a Lyapunov function.
To show the applicability, consider a dynamic system affected by
stochastic disturbance inputs dw that act on the system dynamics as
a standard Brownian motion. Then, the system can be described by
the stochastic differential equation

dx = f(x(t), p, u(x(t))) dt + g(x(t), p) dw.    (1)

Applying the Itô differential operator L(V(x̄, p)), cf. [3], to the Lyapunov function candidate V(x̄) = ½ · x̄^T · P · x̄ with P = P^T ≻ 0, its
time derivative becomes

L(V(x̄, p)) = ∂V/∂t + (∂V/∂x̄)^T · f(x̄, p) + ½ · trace{ g^T(x̄, p) · (∂²V/∂x̄²) · g(x̄, p) }    (2)

with the vector of interval parameters p ∈ [p̲; p̄], where p̲ᵢ < pᵢ < p̄ᵢ
holds for each parameter pᵢ. In (2), x∞ is the equilibrium state with
x̄ = x − x∞. If the condition L(V(x̄, p)) < 0 holds in the interior of a
domain V(x̄) < c, c > 0, containing x∞, the equilibrium can be proven
asymptotically stable in a stochastic sense.
This procedure can be exploited for both control as well as state and
parameter estimation tasks. Numerical results conclude this presentation and show the practical applicability of the sliding mode approach
considering bounded and stochastic disturbances.
References:
[1] L. Senkel, A. Rauh, H. Aschemann: Interval-Based Sliding Mode
Observer Design for Nonlinear Systems with Bounded Measurement and Parameter Uncertainty, In Proc. of IEEE Intl. Conference on Methods and Models in Automation and Robotics,
Miedzyzdroje, Poland, 2013.
[2] V. Utkin, Sliding Modes in Control and Optimization, Springer-Verlag, Berlin, Heidelberg, 1992.
[3] H. Kushner: Stochastic Stability and Control (Academic Press,
New York, 1967).
Reserve as recognizing functional for
AE-solutions to interval system of linear
equations
Irene A. Sharaya and Sergey P. Shary
Institute of Computational Technologies SD RAS
6, Lavrentiev ave., 630090 Novosibirsk, Russia
{sharaya,shary}@ict.nsc.ru
Keywords: interval linear systems, AE-solutions, reserve
It is shown in [1] that, for interval systems (A^∀ + A^∃)x = b^∀ + b^∃ with A^∀, A^∃ ∈ IR^(m×n), b^∀, b^∃ ∈ IR^m such that A^∀_ij · A^∃_ij = 0 and b^∀_i · b^∃_i = 0 for every i, j, any AE-solution set

Ξ = { x ∈ R^n | ∀A′ ∈ A^∀, ∀b′ ∈ b^∀, ∃A″ ∈ A^∃, ∃b″ ∈ b^∃ : (A′ + A″)x = b′ + b″ }

can be characterized in Kaucher interval arithmetic by the inclusion

Cx ⊆ d,  where C = A^∀ + dual(A^∃), d = dual(b^∀) + b^∃.    (1)
Definition of the reserve z. We call the reserve of the inclusion (1) the maximal real number z such that Cx + e·[−z, z] ⊆ d for the m-vector e = (1, 1, . . . , 1)ᵀ.
Formulas for z. From the above definition, we get

z = min_i min{ C̲_i: x⁺ − C̄_i: x⁻ − d̲_i ,  −C̄_i: x⁺ + C̲_i: x⁻ + d̄_i }

  = min_i min{ Σ_{j=1}^{n} C̲_ij^{sgn(x_j)} x_j − d̲_i ,  −Σ_{j=1}^{n} C̄_ij^{sgn(x_j)} x_j + d̄_i }

  = min_i ( rad d_i − rad C_i: |x| − | mid d_i − mid C_i: x | )

  = min_i ( rad(A^∃_i:)|x| + rad(b^∃_i) − | A^∀_i: x − b^∀_i + mid(A^∃_i:)x − mid(b^∃_i) | )
using the notation x⁺, x⁻ ∈ R^n_+, x⁺ = max{0, x}, x⁻ = max{0, −x}, and

C̲_ij^{sgn(x_j)} = C̲_ij if x_j ≥ 0, C̄_ij otherwise;    C̄_ij^{sgn(x_j)} = C̄_ij if x_j ≥ 0, C̲_ij otherwise.
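As a small computational sketch (function names are ours), the last formula specializes nicely: for the tolerable solution set one has A^∃ = 0 and b^∀ = 0, so C = A and d = b are ordinary intervals, and the reserve reduces to z(x) = min_i ( rad b_i − Σ_j rad a_ij·|x_j| − | mid b_i − Σ_j mid a_ij·x_j | ).

```python
def mid(iv):
    return 0.5 * (iv[0] + iv[1])

def rad(iv):
    return 0.5 * (iv[1] - iv[0])

def reserve_tol(A, b, x):
    """Reserve z(x) of the inclusion Ax subset-of b (tolerable case),
    with intervals given as (lower, upper) pairs."""
    zs = []
    for Ai, bi in zip(A, b):
        spread = sum(rad(a) * abs(xj) for a, xj in zip(Ai, x))
        center = abs(mid(bi) - sum(mid(a) * xj for a, xj in zip(Ai, x)))
        zs.append(rad(bi) - spread - center)
    return min(zs)
```

A positive value certifies that x lies in the interior of the tolerable solution set, with z(x) quantifying how far the inclusion is from being violated.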
Geometrical properties of the functional z(x). The functional z(x) is defined on the entire R^n, is continuous and piecewise linear, and is concave in each orthant of R^n.

For C = A^∀ (in particular, for the tolerable solution set and the set of strong solutions), the functional z(x) is concave on the whole of R^n and bounded from above by the real number min_i rad(b^∃_i).
Recognizing properties of the functional z(x). Judging by the value of z(x) at a point x, we can decide whether the point x belongs to the solution set Ξ, its topological interior int Ξ, or the boundary ∂Ξ:

z(x) ≥ 0 ⇔ x ∈ Ξ,    z(x) > 0 ⇒ x ∈ int Ξ,    z(x) = 0 ⇐ x ∈ ∂Ξ.    (2)

Examining the value of max_{x∈R^n} z(x), we can recognize whether the sets Ξ and int Ξ are empty or not:

max_{x∈R^n} z(x) ≥ 0 ⇔ Ξ ≠ ∅,    max_{x∈R^n} z(x) > 0 ⇒ int Ξ ≠ ∅.    (3)
We have derived necessary and sufficient conditions on C, A^∀, A^∃, b^∀, b^∃ for changing “⇒” and “⇐” in (2) and (3) to “⇔”.

The set Arg max_{x∈R^n} z(x) turns out to be quite useful too:

1) if max_{x∈R^n} z(x) ≥ 0, then Arg max_{x∈R^n} z(x) consists of the ‘best’ points of Ξ, at which the inclusion (1) holds with maximum reserve;

2) if max_{x∈R^n} z(x) > 0, then Arg max_{x∈R^n} z(x) ⊆ int Ξ;

3) if max_{x∈R^n} z(x) < 0, then the set Arg max_{x∈R^n} z(x) consists of the points where the inclusion (1) is violated by the minimum amount. The latter enables one to use such points as ‘type-Ξ pseudosolutions’.
References:
[1] S.P. Shary, A new technique in systems analysis under interval
uncertainty and ambiguity, Reliable Computing, 8 (2002), No. 5,
pp. 321–418. — Electronic version is downloadable from
http://interval.ict.nsc.ru/shary/Papers/ANewTech.pdf
Maximum Consistency Method
for Data Fitting
under Interval Uncertainty
Sergey P. Shary
Institute of Computational Technologies
and Novosibirsk State University
Novosibirsk, Russia
[email protected]
Keywords: interval uncertainty, data fitting, interval linear systems,
solution set, recognizing functional, maximum consistency method
For the linear regression model b = a₁x₁ + a₂x₂ + . . . + aₙxₙ, we consider the problem of data fitting under interval uncertainty. Let an interval m×n-matrix A = (a_ij) and an interval m-vector b = (b_i) represent, respectively, the input data and output responses of the model, such that a₁ ∈ a_i1, a₂ ∈ a_i2, . . . , aₙ ∈ a_in, b ∈ b_i in the i-th experiment, i = 1, 2, . . . , m. It is necessary to find the coefficients that best fit the above linear relation for the given data.
A family of values of the parameters is called consistent with the interval data (a_i1, a_i2, . . . , a_in), b_i, i = 1, 2, . . . , m, if, for every index i, there exist point representatives a_i1 ∈ a_i1, a_i2 ∈ a_i2, . . . , a_in ∈ a_in, b_i ∈ b_i such that a_i1x₁ + a_i2x₂ + . . . + a_inxₙ = b_i. The set of all parameter values consistent with the given data forms the parameter uncertainty set. As an estimate of the parameters, it makes sense to take a point from the parameter uncertainty set, provided that it is nonempty. Otherwise, if the parameter uncertainty set is empty, the estimate should be a point where maximal “consistency” (in a prescribed sense) with the data is achieved.
The parameter uncertainty set, as defined above, is nothing but the solution set Ξ(A, b) to the interval system of linear equations Ax = b introduced in interval analysis:

Ξ(A, b) = { x ∈ R^n | Ax = b for some A ∈ A and b ∈ b }.
For the data fitting problem under interval uncertainty, we propose, as the consistency measure, the values of the recognizing functional of the solution set Ξ(A, b), which is defined as
Uss(x, A, b) = min_{1≤i≤m} { rad b_i + Σ_{j=1}^{n} (rad a_ij)·|x_j| − | mid b_i − Σ_{j=1}^{n} (mid a_ij)·x_j | },
where “mid” and “rad” mean the midpoint and radius of an interval.
The functional Uss “recognizes” the points of Ξ(A, b) by the sign of its values: x ∈ Ξ(A, b) if and only if Uss(x, A, b) ≥ 0. Additionally, Uss has reasonably good properties as a function of x and A, b.
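A direct evaluation of this formula is straightforward; the sketch below (helper names are ours) recognizes membership in Ξ(A, b) by the sign of Uss.

```python
def mid(iv):
    return 0.5 * (iv[0] + iv[1])

def rad(iv):
    return 0.5 * (iv[1] - iv[0])

def uss(x, A, b):
    """Recognizing functional of the united solution set:
    x belongs to Xi(A, b) iff uss(x, A, b) >= 0."""
    vals = []
    for Ai, bi in zip(A, b):
        vals.append(rad(bi)
                    + sum(rad(a) * abs(xj) for a, xj in zip(Ai, x))
                    - abs(mid(bi) - sum(mid(a) * xj for a, xj in zip(Ai, x))))
    return min(vals)
```

For the 1-by-1 system [2, 3]·x = [3, 4], the point x = 1.5 gives Uss = 1 ≥ 0 (indeed 2.5·1.5 = 3.75 ∈ [3, 4] with 2.5 ∈ [2, 3]), while x = 10 gives a negative value.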
As an estimate of the parameters in the data fitting problem, we take the value of x = (x₁, x₂, . . . , xₙ) that maximizes the recognizing functional Uss (the Maximum Consistency Method). Then,
• if the parameter uncertainty set is nonempty, we get its point,
• if the parameter uncertainty set is empty, we get a point that still
has maximum possible consistency (determined by values of the
functional Uss) with the data given.
In our work, we discuss properties of the recognizing functional Uss as well as the interpretation and features of the estimates obtained by the Maximum Consistency Method. We also discuss connections with other approaches to data fitting under interval uncertainty.
References:
[1] S.P. Shary, Solvability of interval linear equations and data analysis under uncertainty, Automation and Remote Control, 73 (2012),
No. 2, pp. 310–322.
[2] S.P. Shary and I.A. Sharaya, Recognizing solvability of interval equations and its application to data analysis, Computational
Technologies, 18 (2013), No. 3, pp. 80–109. (in Russian)
An Implementation of Complete
Arithmetic
Stefan Siegel
University of Würzburg
97074 Würzburg, Germany
[email protected]
Keywords: Exact dot product, Correctly rounded sum, Long accumulator
To enlarge the acceptance of interval arithmetic, the IEEE interval standard working group P1788 [1] was founded in 2008.
For this upcoming standard, complete arithmetic as described in [2] should be provided by implementing libraries in order to obtain (bit-)lossless arithmetic. The so-called complete format C(F), with F describing the number format, uses a long accumulator to ensure exact results.

As opposed to the more commonly used fixed (double-)precision interval arithmetic, this dynamic-precision arithmetic is able to provide
• conversion of several number formats to and from the accumulator,
e.g. from floating-point format to complete format, vice versa or
from one complete format to another,
• addition and subtraction, e.g. add or subtract two complete or
floating-point formats, of which at least one is complete,
• multiply-add, to compute a = x·y + z with x, y ∈ F and a, z ∈ C(F), and

• a dot product, to calculate the exact result for two vectors a, b ∈ F of length n: the exact value of Σ_{k=1}^{n} a_k·b_k is available from the accumulator.
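The effect of exact accumulation can be emulated in a few lines (this sketch of ours uses rationals in place of the fixed-point long accumulator that an actual C(F) implementation would use):

```python
from fractions import Fraction

def exact_dot(a, b):
    """Dot product with a single final rounding: every product and every
    addition is performed exactly (Fraction emulates the long accumulator),
    and only the conversion back to float rounds."""
    acc = Fraction(0)
    for ak, bk in zip(a, b):
        acc += Fraction(ak) * Fraction(bk)   # exact product of two floats
    return float(acc)                        # one correctly rounded result

a = [1e16, 1.0, -1e16]
b = [1.0, 1.0, 1.0]
naive = sum(x * y for x, y in zip(a, b))     # cancellation destroys the 1.0
exact = exact_dot(a, b)
```

The naive float summation returns 0.0 because 1e16 + 1.0 rounds back to 1e16, whereas the exact accumulator recovers 1.0.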
In this talk we will give a short overview of complete arithmetic as suggested by the upcoming P1788 standard and present our implementation, which provides the claimed functionality. Furthermore, we will talk about our testing environment.
References:
[1] IEEE Interval Standard Working Group - P1788, April 2014, http:
//grouper.ieee.org/groups/1788/.
[2] Kulisch, Ulrich and Snyder, Van, The exact dot product as basic tool for long interval arithmetic, position paper, P1788 Working Group, version 11, July 2009.
Non-arithmetic approach to dealing with
uncertainty in fuzzy arithmetic
Igor Sokolov
Moscow State University
124482 Moscow, Russia
[email protected]
Keywords: Fuzzy Arithmetic, Interval Arithmetic, Fuzzy Sets
Interval arithmetic is one of the most common ways to deal with fuzzy numbers. In interval arithmetic, each fuzzy number is represented as a set of intervals, one for each α-level, where the α-levels are chosen based on the required precision. Such a representation of fuzzy numbers allows the use of all interval arithmetic tools, including the basic arithmetic operations.
But such a representation of fuzzy numbers forms neither a field nor even a group under the basic arithmetic operations: it lacks both transitivity and inverse elements.

Some researchers try to construct unnatural and counter-intuitive operations to deal with these problems; for example, one can find operations like “inverse addition” and “inverse multiplication”. Such operations make it possible to use inverse elements for addition and multiplication, respectively, but have very strict limits of usage. In this paper it is proposed that seeking an inverse element is unnecessary, because it ruins the intuition of fuzzy arithmetic.
The following intuition is used as the basic idea of this paper. Suppose A and B are fuzzy numbers such that A = B. We declare that:

1. A − B = 0 if both A and B reflect the same measurement of the same object or event;

2. A − B ≠ 0 if A and B do not reflect the same measurement of the same object or event.
So, before using interval arithmetic to deal with a problem on fuzzy numbers, it is proposed to start with an analysis of the variables in terms of their relations. Such an analysis allows one to dramatically reduce the uncertainty caused by the lack of transitivity and inverse elements.
Let us consider a brief example. Numbers are provided only for α₀ for simplicity, but the reasoning is valid for any α-level:

X = [100, 150], the profit of a company;
Y = 0.3·X = [30, 45], the 30% tax this company has to pay;
Z = X − Y, the profit left after taxes.

While calculating Z we face an issue: there are two different ways to compute it:

1. Z = X − Y = [100, 150] − [30, 45] = [55, 120]
2. Z = X − Y = X − 0.3·X = 0.7·X = 0.7·[100, 150] = [70, 105]
Thinking about this example leads to the conclusion that the first way is wrong and the second way is right.
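The conflict is easy to reproduce with plain interval operations (a minimal sketch; the helper names are ours):

```python
def isub(x, y):
    """Standard interval subtraction, which treats x and y as independent."""
    return (x[0] - y[1], x[1] - y[0])

def iscale(c, x):
    """Scaling by a nonnegative constant c."""
    return (c * x[0], c * x[1])

X = (100.0, 150.0)        # profit
Y = iscale(0.3, X)        # tax: fully dependent on X
naive = isub(X, Y)        # way 1: ignores the dependency -> about [55, 120]
aware = iscale(0.7, X)    # way 2: substitutes Y = 0.3*X first -> about [70, 105]
```

Interval subtraction cannot see that Y is determined by X, so it widens the result; analyzing the relation first and rewriting Z = 0.7·X removes the spurious uncertainty.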
This paper suggests that, by applying the proposed analysis of the relationships between variables, we can reduce the uncertainty caused by choosing an improper way to resolve such conflicts. Several common rules and more complex examples are provided.
References:
[1] M. Hanss, Applied Fuzzy Arithmetic, 2004.
[2] R. Boukezzoula, S. Galichet, L. Foulloy, Inverse arithmetic operators for fuzzy intervals, LISTIC, Université de Savoie, Domaine Universitaire, 2007.
True orbit simulation of dynamical
systems and its computational complexity
Christoph Spandl
Computer Science Department, Universität der Bundeswehr München
85577 Neubiberg, Germany
[email protected]
Keywords: Dynamical systems, Lyapunov exponents, computational
complexity
Due to the considerable power of today's computers, molecular dynamics simulation is now state-of-the-art practice in academic and industrial research. The complexity of some classes of systems to simulate, e.g. proteins, has led to the development of simulation software packages. This situation in turn raises the question of validation [1].
One aspect of validation concerns the implementation of the numerical integration scheme. A widespread algorithm in use is the Verlet method, a discretized version of Newton's second law. Despite the fact that the Verlet method only approximates the true solution of the ODE, it has some pleasant properties inherited from the original equations of motion. Discretization is one source of error, but the true problem in simulating orbits in molecular dynamics is another [2]. As examined in the one-dimensional case in [3], chaotic behavior leads, when iterating the (discrete) equations of motion, asymptotically to a constant loss of significant bits per iteration step in the state-space variable. Thus, using standard IEEE 754 floating-point arithmetic for the iteration, as is typically done in molecular dynamics, rounding errors overwhelm the dynamics even after iteration times orders of magnitude below the times required for obtaining a reasonable simulation.
In this contribution, the results obtained in [3] are generalized to the multidimensional case. The starting point is a discrete dynamical system (M, f), where M ⊆ R^n is a box and f : M → M a C² diffeomorphism. The time evolution is governed by the iteration equation

x^(k+1) = f(x^(k)),    x^(0) ∈ M.
To handle numerical errors, the iteration is reformulated on boxes instead of points, and f is replaced by an appropriate centered form in the sense of [4]. The candidate chosen for a centered form is obtained by using techniques coming from ergodic theory [5]. Specifically, methods used for proving the existence of Lyapunov exponents for dynamical systems are applied. As the main result, an algorithm for computing true orbits (x^(k))_{0≤k≤N} of length N of the dynamical system with predefined accuracy is obtained, provided that a computable expression for f exists. Moreover, the asymptotic loss of precision in each iteration step is shown to be given by the Lyapunov exponents. These results may form the starting point for developing more accurate integration schemes.
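The announced loss of precision at the rate of the Lyapunov exponent can be seen on the simplest chaotic map (an illustration of ours, not the algorithm of the talk): the doubling map t ↦ 2t mod 1 has Lyapunov exponent ln 2, and in binary floating point, where each step below happens to be exact, the 53-bit mantissa is consumed at one bit per iteration until the computed orbit collapses.

```python
# The doubling map t -> 2t mod 1 consumes one bit of the initial condition
# per step (Lyapunov exponent ln 2). Each step below is exact in binary
# floating point (doubling shifts bits; mod 1 drops the leading one), yet
# after roughly the mantissa length the computed orbit collapses to the
# fixed point 0, while the true orbit of an irrational starting point
# would remain aperiodic forever.

t = 0.123456789
steps = 0
while t != 0.0 and steps < 200:
    t = (2.0 * t) % 1.0
    steps += 1
```

Each individual step is exact, so the collapse is not a rounding artifact: the finite mantissa simply runs out of information about the initial condition.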
References:
[1] W.F. van Gunsteren, A.E. Mark, Validation of molecular
dynamics simulation, Journal of Chemical Physics, 108 (1998),
pp. 6109–6116.
[2] R.D. Skeel, What makes molecular dynamics work? SIAM Journal on Scientific Computing, 31 (2009), pp. 1363–1378.
[3] Ch. Spandl, Computational complexity of iterated maps on the
interval, Mathematics and Computers in Simulation, 82 (2012),
pp. 1459–1477.
[4] R. Krawczyk, Centered forms and interval operators, Computing,
34 (1985), pp. 243–259.
[5] L. Barreira, Y.B. Pesin, Introduction to Smooth Ergodic Theory, AMS, Providence, Rhode Island, 2013.
Numerical verification for periodic
stationary solutions to the Allen-Cahn
equation
Kazuaki Tanaka¹ and Shin’ichi Oishi²,³
¹ Graduate School of Faculty of Science and Engineering, Waseda University
² Faculty of Science and Engineering, Waseda University
³ CREST, JST
¹,² Building 63, Room 419, Okubo 3-4-1, Shinjuku, Tokyo 169-8555, Japan
[email protected]
Keywords: Allen-Cahn equation, Numerical verification, Periodic solutions
The main purpose of this talk is to propose and discuss the results
of numerical verification for some periodic stationary solutions to the
Allen-Cahn equation, which form attractive patterns like a kaleidoscope.
To derive stationary solutions to the Allen-Cahn equation, we try to solve the following equation:

−ε²Δu = u − u³  in Ω,
∂u/∂n = 0       on ∂Ω,    (1)

where ε is a given positive number and Ω is the square domain (0, 1)².
Here, we set V = H¹(Ω) and denote the dual space of V by V*. Defining the operator F : V → V* by

⟨F(u), v⟩ := ε²(∇u, ∇v)_{L²} − (u − u³, v)_{L²},    ∀v ∈ V,

the equation (1) can be transformed into the equation

F(u) = 0 in V*.
We derived approximate solutions to this equation with a spectral method and verified these solutions using the Newton-Kantorovich theorem (the verification method based on this theorem is summarized in [1]). One of the most important steps in the verification is to estimate the norm of the inverse of the linearized operator, ‖F′[û]⁻¹‖, where û ∈ V is an approximate solution and F′[û] is the Fréchet derivative of F at û. We estimated this operator norm using the theorem in [2], which is based on Liu-Oishi's theorem [3], an effective theorem for evaluating eigenvalues of the Laplace operator on arbitrary polygonal domains.
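In one dimension the Newton-Kantorovich test takes only a few lines; the sketch below (a common finite-dimensional textbook form of the theorem, not the operator setting of the talk) certifies an enclosure of √2 around the approximate solution x̂ = 1.4.

```python
import math

def newton_kantorovich_1d(F, dF, K, xhat):
    """If K bounds |dF(xhat)**-1 * (dF(x) - dF(y))| / |x - y| and
    h = K * |dF(xhat)**-1 * F(xhat)| <= 1/2, then a true zero of F lies
    within the returned radius of xhat."""
    eta = abs(F(xhat) / dF(xhat))        # size of the Newton correction
    h = K * eta
    if h > 0.5:
        return False, math.inf           # test failed, nothing certified
    return True, (1.0 - math.sqrt(1.0 - 2.0 * h)) / K

F = lambda x: x * x - 2.0
dF = lambda x: 2.0 * x
xhat = 1.4
K = 2.0 / abs(dF(xhat))                  # |F''| = 2 globally, scaled by dF(xhat)
ok, radius = newton_kantorovich_1d(F, dF, K, xhat)
```

For the PDE setting the same scheme applies with the scalar 1/|dF(x̂)| replaced by a verified bound on ‖F′[û]⁻¹‖, which is exactly why estimating that operator norm is the crucial step.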
The Allen-Cahn equation has various solutions, which constitute interesting patterns, especially when ε is small. Since a small ε makes solutions to (1) singular, a more accurate basis becomes necessary to obtain an appropriate approximate solution for small ε. Of course, numerical verification also becomes difficult when ε is small.
Solutions of this type often have periodicity, and therefore they can be composed from solutions on some small domain. In this talk, we would like to show the verification results using periodicity and discuss their effectiveness. The behavior of solutions to (1) with respect to ε will also be considered.
References:
[1] A. Takayasu and S. Oishi, A Method of Computer Assisted
Proof for Nonlinear Two-point Boundary Value Problems Using
Higher Order Finite Elements, NOLTA IEICE, 2 (2011), No. 1,
pp. 74–89.
[2] K. Tanaka, A. Takayasu, X. Liu, and S. Oishi, Verified norm
estimation for the inverse of linear elliptic operators using eigenvalue evaluation, submitted in 2012. (http://oishi.info.waseda.ac.
jp/˜takayasu/preprints/LinearizedInverse.pdf)
[3] X. Liu and S. Oishi, Verified eigenvalue evaluation for Laplacian
over polygonal domain of arbitrary shape, SIAM J. Numer. Anal,
51 (2013), No. 3, pp. 1634–1654.
Choice of metrics in interval arithmetic
Philippe Théveny
École Normale Supérieure de Lyon – Université de Lyon
LIP (UMR 5668 CNRS - ENS de Lyon - INRIA - Université Claude
Bernard), Université de Lyon
ENS de Lyon, 46 allée d’Italie 69007 Lyon, France
[email protected]
Keywords: Interval analysis, error analysis, matrix multiplication
The correctness of algorithms computing with intervals depends on respecting the inclusion principle. So, for a given problem, different algorithms may give different solutions, as long as each output contains the mathematically exact result. This raises the problem of comparing the computed approximations. When the exact solution is a real point, several measures of the distance to the exact result have been proposed: for instance, Kulpa and Markov define the relative extent [KM03], and Rump defines the relative precision [Rum99]. When the exact solution itself is an interval, the ratio of radii is often used.
In this work, we discuss the possible choices for such metrics. We
introduce the notion of relative accuracy for quantifying the amount
of information that an interval conveys with respect to an unknown
exact value it encloses. This measure is similar, yet not equivalent, to
the relative precision, the relative extent, or the relative approximation
error proposed by Kreinovich [Kre13].
We then advocate the use of the Hausdorff metric for measuring the absolute error as well as the relative error between two intervals. We show how ratios of radii simplify the analysis with the Hausdorff distance in the case of nested intervals. We also point out that this simpler approach may overlook some important phenomena, and we illustrate this shortcoming on the example of the interval matrix product.
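For intervals the Hausdorff distance has a closed form, the maximum of the two endpoint distances, which makes the comparison with the ratio of radii concrete (a small sketch of ours):

```python
def rad(x):
    return 0.5 * (x[1] - x[0])

def hausdorff(x, y):
    """Hausdorff distance between intervals x = [x1, x2] and y = [y1, y2]:
    the maximum of the two endpoint distances."""
    return max(abs(x[0] - y[0]), abs(x[1] - y[1]))
```

For nested intervals with a common midpoint the distance degenerates to the difference of the radii, which is why ratios of radii suffice in that special case yet can hide what happens for non-nested enclosures.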
References:
[Rum99] Siegfried M. Rump. Fast and parallel interval arithmetic.
BIT Numerical Mathematics, 39:534–554, 1999.
[KM03] Zenon Kulpa and Svetoslav Markov. On the inclusion properties of interval multiplication: A diagrammatic study. BIT Numerical Mathematics, 43:791–810, 2003.
[Kre13] Vladik Kreinovich. How to define relative approximation
error of an interval estimate: A proposal. Applied Mathematical
Sciences, 7(5):211–216, 2013.
Numerical Verification for Elliptic
Boundary Value Problem with
Nonconforming P 1 Finite Elements
Tomoki Uda
Department of Mathematics, Kyoto University
Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto, 606-8502, Japan
[email protected]
Keywords: Nakao’s method, numerical verification, elliptic boundary
value problem, nonconforming P 1 finite element
In 1988, M. T. Nakao [1] developed a method to verify the existence of solutions to an elliptic boundary value problem. Nakao's method, which is based on a finite element method (FEM), implicitly assumes the finite element space to be conforming. We generalize Nakao's method to the nonconforming P1 FEM.
Let us consider the following boundary value problem:

−Δu = f(u)  in Ω,
u = 0       on ∂Ω,    (1)

where Δ denotes the Laplace operator, Ω is a bounded convex polygon, and f : H²(Ω) → L²(Ω) is a continuous map. If we use the conforming P1 FEM for the problem (1), a finite element basis function ψ_i belongs to H¹₀(Ω), that is, ψ_i ∈ H¹(Ω) and ψ_i|_{∂Ω} = 0. Thus, for any v ∈ H²(Ω) ∩ H¹₀(Ω), we get the following formula by integration by parts:

∫_Ω ∇ψ_i · ∇v dx dy = −∫_Ω ψ_i Δv dx dy.    (2)
In the original (classical) Nakao's method, this formula (2) plays an important role. On the other hand, if we use the nonconforming P1 FEM, a finite element basis function φ_i does not vanish on the boundary of K_i = supp φ_i. Therefore, by integration by parts, we get the following formula:

∫_{K_i} ∇φ_i · ∇v dx dy = ∫_{∂K_i} φ_i ∂_n v ds − ∫_{K_i} φ_i Δv dx dy,    (3)

where n denotes the outward unit normal vector on the boundary ∂K_i. In order to deduce one of the sufficient conditions for verification,
we apply v = Δ⁻¹φ_j to the formula (3). Hence, it is difficult to calculate the boundary integral accurately by numerical means. We here use an upper and lower estimate of width O(h) for the boundary integral instead, where h denotes the mesh size. That is to say, we get constructive inequalities C̲(K_i)h ≤ | ∫_{∂K_i} φ_i ∂_n(Δ⁻¹φ_j) ds | ≤ C̄(K_i)h. For this purpose, we apply an analysis similar to the error estimate for the nonconforming P1 FEM [2].
Finally, we show some numerical results of our method. In practice, a naive implementation of our method tends to fail verification, or to make the candidate set too large even when the verification succeeds. This is because the interval vector derived from the boundary integrals is enlarged by the wrapping effect. We also propose a device to avoid this problem, which improves the numerical results.
References:
[1] Mitsuhiro T. Nakao, A Numerical Approach to the Proof of
Existence of Solutions for Elliptic Problems, Japan Journal of Applied Mathematics, 5 (1988), Issue. 2, pp. 313–332.
[2] Philippe G. Ciarlet, The Finite Element Method for Elliptic
Problems, SIAM, Philadelphia, 2002.
Two applications of interval analysis to
parameter estimation in hydrology.
Ronald van Nooijen1 and Alla Kolechkina2
1
Delft University of Technology
Stevinweg 1, 2628 CN, Delft, Netherlands
[email protected]
2
Aronwis, Den Hoorn Z.H., Netherlands
Keywords: Hydrology, parameter estimation, Gamma distribution,
interval analysis
This paper concerns two applications of interval analysis to parameter estimation in hydrology: an effort to develop an interval-analysis-based optimization code for parameter estimation related to groundwater tracer experiments, and a code for parameter estimation for probability distributions in a hydrological context by maximizing the likelihood [1]. At the time the work started, in 2008, there was no interval function library that contained a specialized interval extension for the PDF of the Gamma distribution with the distribution parameters as variables, or even an easily available version of the digamma function.
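The role of these special functions is visible already in the point-valued likelihood (a sketch of ours using only the standard library; an interval version would need verified enclosures of lgamma and digamma):

```python
import math

def gamma_neg_loglik(k, theta, data):
    """Negative log-likelihood of Gamma(shape k, scale theta) with density
    f(x) = x**(k-1) * exp(-x/theta) / (Gamma(k) * theta**k), x > 0.
    Setting the derivative with respect to k to zero brings in digamma(k),
    the function that was missing from interval libraries at the time."""
    n = len(data)
    return (n * (math.lgamma(k) + k * math.log(theta))
            - (k - 1.0) * sum(math.log(x) for x in data)
            + sum(data) / theta)
```

For k = 1 and θ = 1 the Gamma density is the unit exponential, so the negative log-likelihood is just the sum of the observations.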
References:
[1] R. van Nooijen, T. Gubareva, A. Kolechkina, and B. Gartsman.
Interval analysis and the search for local maxima of the log likelihood for the Pearson III distribution. In Geophysical Research
Abstracts, volume 10, pages EGU2008–A–05006, 2008.
Combining Interval Methods with
Evolutionary Algorithms for Global
Optimization
Charlie Vanaret, Jean-Baptiste Gotteland, Nicolas Durand
and Jean-Marc Alliot
Institut de Recherche en Informatique de Toulouse
31000 Toulouse, France
[email protected]
Keywords: global optimization, interval analysis, evolutionary algorithms
Reliable global optimization is dedicated to solving problems to optimality in the presence of rounding errors. The most successful approaches for achieving a numerical proof of optimality in global optimization are mainly exhaustive interval-based algorithms [1] that interleave pruning and branching steps. The Interval Branch & Prune (IBP) framework has been widely studied [2] and has benefited from the development of refutation methods and filtering algorithms stemming from the Interval Analysis and Interval Constraint Programming communities.
In a minimization problem, refutation consists in discarding subdomains of the search space where a lower bound of the objective function is worse than the best known solution. It is therefore crucial: (i) to compute a sharp lower bound of the objective function on a given subdomain; (ii) to find a good approximation (an upper bound) of the global minimum. Many techniques aim at narrowing the pessimistic enclosures of interval arithmetic (centered forms, convex relaxation, local monotonicity, etc.) and will not be discussed in further detail.
State-of-the-art solvers are generally integrative methods; that is, they embed local optimization algorithms (BFGS, LP, interior point methods) to compute an upper bound of the global minimum over each subspace. In this presentation, we propose a cooperative approach in which interval methods collaborate with Evolutionary Algorithms (EA) [3] on
a global scale. EAs are stochastic algorithms in which a population of individuals (candidate solutions) iteratively evolves in the search space to reach satisfactory solutions. They make no assumption on the objective function and are equipped with nature-inspired operators that help individuals escape from local minima. EAs are thus particularly suitable for highly multimodal nonconvex problems. In our approach [4], the EA and the IBP algorithm run in parallel and exchange bounds and solutions through shared memory: the EA updates the best known upper bound of the global minimum to enhance the pruning, while the IBP updates the population of the EA when a better solution is found. We show that this cooperation scheme prevents premature convergence toward local minima and accelerates the convergence of the interval method. Our hybrid algorithm also exploits a geometric heuristic to select the next subdomain to be processed, which compares well with standard heuristics (best first, largest first).
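A minimal sequential caricature of the cooperation (a toy problem and a crude random search of ours, standing in for the EA; not the authors' parallel implementation) shows the mechanism: the stochastic search seeds the incumbent upper bound, and the interval branch-and-prune uses it for refutation while refining its own rigorous bounds.

```python
import random

def f(x):
    return (x - 1.0) ** 2

def f_interval(lo, hi):
    """Interval extension of f on [lo, hi] (exact here, since f is a
    shifted square: the minimum is 0 when the box straddles x = 1)."""
    a, b = lo - 1.0, hi - 1.0
    if a <= 0.0 <= b:
        return 0.0, max(a * a, b * b)
    return min(a * a, b * b), max(a * a, b * b)

def ea_upper_bound(lo, hi, n=50, seed=0):
    """Stand-in for the EA: best objective value among random individuals."""
    rng = random.Random(seed)
    return min(f(rng.uniform(lo, hi)) for _ in range(n))

def branch_and_prune(lo, hi, ub, tol=1e-6):
    """Rigorous enclosure [lb, ub] of the global minimum of f on [lo, hi]."""
    boxes, best_lb = [(lo, hi)], None
    while boxes:
        lo, hi = boxes.pop()
        flb, fub = f_interval(lo, hi)
        if flb > ub:              # refutation: the box cannot hold the minimum
            continue
        ub = min(ub, fub)         # the box's own upper bound may improve ub
        if hi - lo < tol:
            best_lb = flb if best_lb is None else min(best_lb, flb)
            continue
        m = 0.5 * (lo + hi)
        boxes += [(lo, m), (m, hi)]
    return best_lb, ub

ub0 = ea_upper_bound(-4.0, 5.0)          # bound passed "EA -> IBP"
lb, ub = branch_and_prune(-4.0, 5.0, ub0)
```

In the actual cooperative scheme both processes run concurrently and the exchange goes in both directions through shared memory.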
We provide new optimality results for a benchmark of difficult multimodal problems (Michalewicz, Egg Holder, Rana, and Keane functions). We also certify the global minimum of the (open) Lennard-Jones cluster problem for 5 atoms. Finally, we present an aeronautical application to solving conflicts between aircraft.
References:
[1] Moore, R. E. (1966). Interval Analysis. Prentice-Hall.
[2] Hansen, E. and Walster, G. W.(2003). Global optimization using
interval analysis: revised and expanded. Dekker.
[3] Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Addison-Wesley Reading Menlo Park.
[4] Vanaret, C., Gotteland, J.-B., Durand, N., and Alliot, J.-M. (2013).
Preventing premature convergence and proving the optimality in
evolutionary algorithms. In International Conference on Artificial
Evolution (EA-2013), pages 84–94.
Dynamic Load Balancing for Rigorous
Global Optimization of Dynamic Systems
Yao Zhao*, Gang Xu** and Mark Stadtherr*
*University of Notre Dame
Notre Dame, Indiana, USA
[email protected]
**Schneider Electric/SimSci
Lake Forest, California, USA
Keywords: Dynamic Load Balancing, Global Optimization, Interval
Methods, Dynamic Systems, Parallel Computing
Interval methods [e.g., 1, 2] that provide rigorous, verified enclosures of all solutions to parametric ODEs serve as powerful bounding
tools for use in branch-and-bound methods for the deterministic global
optimization of dynamic systems [e.g., 3, 4]. In practice, however, the
number of decision variables that can be handled by this approach is
often severely limited by considerations of computation time, especially
in real-time or near real-time applications.
Since the early 2000s, parallel computing (multiprocessing), generally in the form of multicore processors, has replaced frequency scaling
to become the dominant cause of processor performance gains. Today,
parallel computing hardware is ubiquitous, but the extent to which it
is well exploited depends significantly on the application. There are
many opportunities to exploit fine-grained parallelism in applications
of interval methods—for example, in basic interval arithmetic [5]. We
will focus here on the coarse-grained parallelism that arises naturally in
the interval branch-and-bound procedure for global optimization. It is
well known that this provides the opportunity for superlinear speedups,
and so has been a topic of continuing interest [e.g., 6, 7]. A key issue is the ability to dynamically balance workload, thus avoiding idle
computational resources. We describe here a dynamic load balancing (DLB) framework and implement two versions of it, one using MPI (Message Passing Interface) and one using POSIX threads, for solving global dynamic optimization problems on a multicore, multiprocessor server. We will use computational results to compare the two implementations and to evaluate several DLB design factors, including the communication scheme and the virtual network used. Through this framework it is possible to significantly reduce problem solution times (wall-clock) and to increase the number of decision variables that can be considered within reasonable computation times.
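The essence of DLB can be caricatured in a few lines with one shared task queue (Python threads of ours, standing in for the MPI and POSIX-thread variants of the paper): workers pull subproblems of very uneven cost dynamically, so none sits idle while work remains.

```python
import queue
import threading

def run_workers(tasks, work, n_workers=4):
    """Process tasks of uneven cost with a shared queue: each worker pulls
    the next task as soon as it is free (dynamic load balancing)."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                t = q.get_nowait()
            except queue.Empty:
                return            # no work left: the worker retires
            r = work(t)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

# subproblem "costs" differing by four orders of magnitude
tasks = [10, 1000, 10, 100000, 10, 1000]
out = run_workers(tasks, lambda n: sum(range(n)))
```

With a static split of the task list, the worker handed the single large task would dominate the wall-clock time; the shared queue removes that imbalance. A branch-and-bound search adds the twist that workers also generate new tasks, which is where the communication-scheme choices evaluated in the talk come in.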
References:
[1] Y. Lin, M.A. Stadtherr, Validated solutions of initial value
problems for parametric ODEs, Appl. Numer. Math., 57 (2007),
pp. 1145–1162.
[2] A.M. Sahlodin, B. Chachuat, Discretize-then-relax approach
for convex/concave relaxations of the solutions of parametric ODEs,
Appl. Num. Math., 61 (2011) pp. 803–820.
[3] Y. Zhao, M.A. Stadtherr, Rigorous global optimization for
dynamic systems subject to inequality path constraints, Ind. Eng.
Chem. Res., 50 (2011), pp. 12678–12693.
[4] B. Houska, B. Chachuat B, Branch-and-lift algorithm for deterministic global optimization in nonlinear optimal control, J. Optimiz. Theory App., in press (2014).
[5] S.M. Rump, Fast and parallel interval arithmetic, BIT, 39 (1999),
pp. 534–554.
[6] J.F. Sanjuan-Estrada, L.G. Casado, I. Garcı́a, Adaptive
parallel interval branch and bound algorithms based on their performance for multicore architectures, J. Supercomput., 58 (2011),
pp. 376–384.
[7] J.L. Berenguel, L.G. Casado, I. Garcı́a, E.M.T. Hendrix,
On estimating workload in interval branch-and-bound global optimization algorithms, J. Glob. Optim., 56 (2013), pp. 821–844.
Author index
Adm, Mohammad, 53
Alliot, Jean-Marc, 162
Alt, Rene, 26
Anguelov, Roumen, 28
Aschemann, Harald, 137, 139, 141,
143
Auer, Ekaterina, 30, 141
Bánhelyi, Balázs, 31, 33
Balzer, Lars, 35
Bauer, Andrej, 37
Behnke, Henning, 38
Boldo, Sylvie, 39
Brajard, Julien, 47
Chabert, Gilles, 72
Chohra, Chemseddine, 40
Collange, Sylvain, 42
Csendes, Tibor, 31
Defour, David, 42
Demmel, James, 123
Dobronets, Boris S., 44
Dongarra, Jack, 46
Du, Yunfei, 76
Duracz, J., 87
Durand, Nicolas, 162
Eberhart, Pacôme, 47
Elskhawy, Abdelrahman, 49
Fahmy, Hossam A. H., 103
Farjudian, A., 87
Fortin, Pierre, 47
Garloff, Jürgen, 53, 56
Golodov, Valentin, 58
Gotteland, Jean-Baptiste, 162
Graillat, Stef, 42, 60
Gustafson, John, 62
Hamadneh, Tareq, 56
Hartman, David, 64
Hiwaki, Tomohiro, 66
Hladı́k, Milan, 64, 68, 70
Horáček, Jaroslav, 70
Horváth, Zoltán, 105
Iakymchuk, Roman, 42
Ismail, Kareem, 49
Jézéquel, Fabienne, 47
Jaulin, Luc, 72
Jeannerod, Claude-Pierre, 75
Jiang, Hao, 76
Kambourova, Margarita, 26
Kearfott, Ralph Baker, 78
Kinoshita, Takehiko, 81
Kobayashi, Kenta, 83, 85
Kolechkina, Alla, 161
Konečný, M., 87
Kreinovich, Vladik, 96, 98
Krisztin, Tibor, 31
Kubica, Bartlomiej Jacek, 89
Kubo, Takayuki, 117, 119
Kumkov, S. I., 90
Kupriianova, Olga, 92
Lévai, Balázs László, 33
Langlois, Philippe, 40
Lauter, Christoph, 60, 92
Le Doze, Vincent, 72
Le Menec, Stéphane, 72
Liu, Xuefeng, 94
Longpré, Luc, 96
Lorkowski, Joe, 98
Luther, Wolfram, 100
Maher, Amin, 103
Markót, Mihály Csaba, 105
Markov, Svetoslav, 26, 28
Mascarenhas, Walter, 121
Mayer, Günter, 107
Minamihata, Atsushi, 109
Miyajima, Shinya, 111, 114
Mizuguchi, Makoto, 117, 119
Montanher, Tiago, 121
Nakao, Mitsuhiro T., 81
Nataraj, P. S. V., 131
Neumaier, Arnold, 31
Nguyen, Hong Diep, 123
Ninin, Jordan, 72, 125
Ogita, Takeshi, 109, 127, 129
Oishi, Shin'ichi, 60, 94, 109, 117, 119, 129, 155
Ozaki, Katsuhisa, 129
Parello, David, 40
Patil, Bhagyesh V., 131
Peng, Lin, 76
Popova, Evgenija D., 134
Popova, Olga A., 44
Pryce, John, 135
Radchenkova, Nadja, 26
Rauh, Andreas, 137, 139, 141, 143
Revol, Nathalie, 125
Rump, Siegfried M., 75, 109
Saad, Mohamed, 72
Sekine, Kouta, 109
Senkel, Luise, 137, 139, 143
Sharaya, Irene A., 145
Shary, Sergey P., 145, 147
Siegel, Stefan, 149
Sokolov, Igor, 151
Spandl, Christoph, 153
Stadtherr, Mark, 164
Stancu, Alexandru, 72
Taha, W., 87
Takayasu, Akitoshi, 117, 119
Tanaka, Kazuaki, 155
Tang, Ping Tak Peter, 60
Théveny, Philippe, 157
Tsuchiya, Takuya, 85
Uda, Tomoki, 159
van Nooijen, Ronald, 161
Vanaret, Charlie, 162
Vassilev, Spasen, 26
Wang, Feng, 76
Watanabe, Yoshitaka, 81
Westphal, Ramona, 141
Xu, Gang, 164
Yamamoto, Nobito, 66
Yamanaka, Naoya, 60
Zhao, Yao, 164
Zohdy, Maha, 49
List of participants
Country            Participants
Germany            18
Japan              17
France             16
United States      7
Russia             6
Czech Republic     3
Bulgaria           2
Egypt              2
Hungary            2
United Kingdom     2
Austria            1
Brazil             1
China              1
India              1
Netherlands        1
Poland             1
Slovenia           1
Sweden             1
Last Name               First Name       Country
Alefeld                 Götz             Germany
Auer                    Ekaterina        Germany
Balzer                  Lars             Germany
Bánhelyi                Balázs           Hungary
Bauer                   Andrej           Slovenia
Behnke                  Henning          Germany
Bohlender               Gerd             Germany
Boldo                   Sylvie           France
Ceberio                 Martine          United States
Chohra                  Chemseddine      France
Csendes                 Tibor            Hungary
Dallmann                Alexander        Germany
Dobronets               Boris            Russia
Dongarra                Jack             United States
Eberhart                Pacôme           France
Elskhawy                Abdelrahman      Egypt
Garloff                 Jürgen           Germany
Golodov                 Valentin         Russia
Graillat                Stef             France
Gustafson               John             United States
Hartman                 David            Czech Republic
Hiwaki                  Tomohiro         Japan
Hladík                  Milan            Czech Republic
Horacek                 Jaroslav         Czech Republic
Iakymchuk               Roman            France
Jaulin                  Luc              France
Jeannerod               Claude-Pierre    France
Jezequel                Fabienne         France
Jiang                   Hao              China
Kearfott                Ralph Baker      United States
Kimura                  Takuma           Japan
Kinoshita               Takehiko         Japan
Kobayashi               Kenta            Japan
Konecny                 Michal           United Kingdom
Kreinovich              Vladik           United States
Kubica                  Bartlomiej       Poland
Kulisch                 Ulrich           Germany
Kumkov                  Sergey I.        Russia
Kupriianova             Olga             France
Langlois                Philippe         France
Liu                     Xuefeng          Japan
Louvet                  Nicolas          France
Luther                  Wolfram          Germany
Maher                   Amin             Egypt
Markót                  Mihály Csaba     Austria
Markov                  Svetoslav        Bulgaria
Mayer                   Günter           Germany
Melquiond               Guillaume        France
Mezzarobba              Marc             France
Minamihata              Atsushi          Japan
Miyajima                Shinya           Japan
Mizuguchi               Makoto           Japan
Montanher               Tiago            Brazil
Nakao                   Mitsuhiro        Japan
Nehmeier                Marco            Germany
Nguyen                  Hong Diep        United States
Ogita                   Takeshi          Japan
Otsokov                 Shamil           Russia
Ozaki                   Katsuhisa        Japan
Paluri                  Nataraj          India
Popova                  Evgenija         Bulgaria
Pryce                   John             United Kingdom
Rauh                    Andreas          Germany
Revol                   Nathalie         France
Schulze                 Friederike       Germany
Senkel                  Luise            Germany
Shary                   Sergey           Russia
Siegel                  Stefan           Germany
Sokolov                 Igor             Russia
Spandl                  Christoph        Germany
Stadtherr               Mark             United States
Takayasu                Akitoshi         Japan
Tanaka                  Kazuaki          Japan
Theveny                 Philippe         France
Tsuchiya                Takuya           Japan
Tucker                  Warwick          Sweden
Uda                     Tomoki           Japan
van Nooijen             Ronald           Netherlands
Vanaret                 Charlie          France
Watanabe                Yoshitaka        Japan
Wolff von Gudenberg     Jürgen           Germany
Yamamoto                Nobito           Japan
Ziegler                 Martin           Germany