Subdivision methods for solving polynomial equations

B. Mourrain, J.P. Pavone, 2009
1
ABSTRACT
The purpose of the talk today is to present a new algorithm for
solving a system of polynomials in a domain of ℝⁿ. It uses:
• a powerful reduction strategy
• based on a univariate root finder
• using the Bernstein basis representation
• and Descartes' rule of signs.
2
MOTIVATION
Solving a system of polynomials is a subject that has been studied
extensively throughout history and has practical applications in many
different fields: physics, computer graphics, mechanics, the film and
game industry, robotics, financial information processing,
bioinformatics, coding, signal and image processing, computer vision,
dynamics and flow, and many, many more.
3
MOTIVATION
We will briefly review two examples of such uses to
emphasize the importance of solving these
problems quickly and precisely.
4
EXAMPLE #1 - GPS
Our GPS device receives from each satellite a signal with the following
known information packet:
• the satellite identity (serial number k)
• the satellite coordinates in space (x_k, y_k, z_k), relative to the
center of the earth
• the time t_k at which the information was transmitted, relative to
an agreed global clock
5
EXAMPLE #1 - GPS
We want to find our position in space (x, y, z). In addition, suppose that the
point in time at which we receive the information from all the satellites is
unknown and is denoted by t. Equating the distance between satellite k and
the device computed from the coordinates with the distance computed from
the time difference between transmission and reception (c being the speed
of light), we get:

$\sqrt{(x - x_k)^2 + (y - y_k)^2 + (z - z_k)^2} = c\,(t - t_k)$
6
EXAMPLE #1 - GPS
Each satellite will give us one equation in four unknowns. If we subtract the
first satellite's equation from the rest of the equations, the quadratic terms
cancel and we obtain a simple linear system in the four unknowns; from five
satellites we can therefore compute a solution of a simple system. Of course, our
location changes in a fraction of a second and in practice relies on a larger
number of satellites. Hence, we must be able to solve large equation systems quickly.
7
EXAMPLE #2 - COMPUTER GRAPHICS
Computer graphics is an area that deals with the study of methods for
creating and processing digital visual content by computer. In order to create
a "smooth" view of different objects, a display with a high refresh rate is
required, meaning that any change of the scene and/or the viewer's
perspective should be displayed within a fraction of a second. A refresh rate of
50 or 60 frames per second creates a display that looks completely
smooth.
8
EXAMPLE #2 - COMPUTER GRAPHICS
A common method for representing objects in computer graphics is to
assemble them from different polygons. Each polygon is represented by an
equation, and we need to be able to find their intersections (solving a system
of equations), for example for the purpose of emphasizing certain parts of an
object.
9
INTRODUCTION
As stated, solving systems of polynomials is the basis of
many geometric problems. In the solution discussed
today, we exploit the properties of polynomial
representations in the Bernstein basis to easily deduce
information on the corresponding real functions in a
domain of ℝⁿ.
10
INTRODUCTION
• In previous lectures we dealt extensively
with the Bernstein representation.
• It is known to be more numerically stable.
• Extensive use of these curves was made following the work of
the French engineer Pierre Bézier, who in 1962 used them to
design a Renault vehicle body.
• Their properties (control points, convex hull, etc.), combined
with reduction methods, explain the wide variety of
algorithms proposed to solve univariate equations.
11
The situation in the multivariate case has not been
studied so extensively. Two main subfamilies coexist:
a family based on subdivision techniques, and
a family based on reduction approaches.
12
SUBDIVISION TECHNIQUES
• The subdivision approaches use an exclusion test.
• The result of the test is: either no solution exists, or there may be a solution.
• If the result is that there is no solution, we reject the given
domain; otherwise, we subdivide the domain.
• We continue this process until a certain criterion is
met (the size of the domain, or something much more complex).
• The method yields algorithms with many iterations, especially in
cases of multiple roots. However, the cost per iteration is significantly
lower than in the second approach.
13
REDUCTION APPROACHES
• The power of the method is based on the ability to focus on
the parts of the domain where the roots are located.
• A reduction cannot completely replace subdivision, because it
is not always possible to reduce the given domain.
• Reduction significantly reduces the number of iterations, which
of course has drastic effects on performance.
14
BERNSTEIN POLYNOMIAL REPRESENTATION
So far, we have dealt with Bernstein polynomials on the restricted
interval [0, 1]. Let us now look at a general representation of a
polynomial on an arbitrary interval [a, b]. Every univariate polynomial
f(x) ∈ 𝕂[x] of degree d can be represented as follows:
for every a < b ∈ ℝ,

$f(x) = \sum_{i=0}^{d} b_i \binom{d}{i} \frac{1}{(b-a)^d}\,(x - a)^i (b - x)^{d-i}$

We denote the Bernstein polynomials by

$B_i^d(x; a, b) = \binom{d}{i} \frac{1}{(b-a)^d}\,(x - a)^i (b - x)^{d-i}$

and then $f(x) = \sum_{i=0}^{d} b_i\, B_i^d(x; a, b)$.
15
LEMMA #1 - DESCARTES' RULE OF SIGNS
The number of real roots of $f(x) = \sum_{i=0}^{d} b_i B_i^d(x; a, b)$
in [a, b] is bounded by the number V(b) of sign changes of
b = (b_i)_{i=0,…,d}, and is equal to it modulo 2.
As a consequence, if V(b) = 0 there is no root in [a, b],
and if V(b) = 1 there is exactly one root in [a, b].
16
AN EXAMPLE OF THE LEMMA IN THE STANDARD
MONOMIAL REPRESENTATION
f(x) = x³ + x² − x − 1
The sequence of coefficient signs is [+ + − −], hence
there is one positive root.
For negative roots we look at
f(−x) = −x³ + x² + x − 1, whose sequence of
coefficient signs is [− + + −], so there are two
negative roots (counted with multiplicity).
Indeed, f(x) = (x + 1)²(x − 1), and the roots
are −1, of multiplicity 2, and 1.
17
If we extend the representation to the
multivariate case: any polynomial f(x₁, …, xₙ) ∈ 𝕂[x₁, …, xₙ],
of degree d_i in x_i, can be represented as follows:

$f(x_1, \dots, x_n) = \sum_{i_1=0}^{d_1} \cdots \sum_{i_n=0}^{d_n} b_{i_1,\dots,i_n}\, B_{i_1}^{d_1}(x_1; a_1, b_1) \cdots B_{i_n}^{d_n}(x_n; a_n, b_n)$
18
DEFINITION
For any polynomial f(x₁, …, xₙ) ∈ 𝕂[x₁, …, xₙ]
and j = 1, 2, …, n:

$m_j(f; x_j) = \sum_{i_j=0}^{d_j} \Big( \min_{0 \le i_k \le d_k,\ k \ne j} b_{i_1,\dots,i_n} \Big)\, B_{i_j}^{d_j}(x_j; a_j, b_j)$

$M_j(f; x_j) = \sum_{i_j=0}^{d_j} \Big( \max_{0 \le i_k \le d_k,\ k \ne j} b_{i_1,\dots,i_n} \Big)\, B_{i_j}^{d_j}(x_j; a_j, b_j)$
19
The picture illustrates projection of the control
points and the enveloping univariate polynomials
20
PROJECTION LEMMA
For any u = (u₁, …, uₙ) ∈ D and j = 1, 2, …, n, we have

$m_j(f; u_j) \le f(u) \le M_j(f; u_j)$
21
PROJECTION LEMMA - PROOF
First, recall that we have previously shown that for
any k = 1, …, n,

$\sum_{i_k=0}^{d_k} B_{i_k}^{d_k}(u_k; a_k, b_k) = 1$.

And then:

$f(u) = \sum_{i_1=0}^{d_1} \cdots \sum_{i_n=0}^{d_n} b_{i_1,\dots,i_n}\, B_{i_1}^{d_1}(u_1; a_1, b_1) \cdots B_{i_n}^{d_n}(u_n; a_n, b_n)$

$\le \sum_{i_j=0}^{d_j} \Big( \max_{0 \le i_k \le d_k,\ k \ne j} b_{i_1,\dots,i_n} \Big)\, B_{i_j}^{d_j}(u_j; a_j, b_j) \prod_{k \ne j} \Big( \sum_{i_k=0}^{d_k} B_{i_k}^{d_k}(u_k; a_k, b_k) \Big) = M_j(f; u_j)$

since each inner sum equals 1 and the Bernstein polynomials are
non-negative on [a_k, b_k]. The lower bound by the minimum
polynomial m_j(f; u_j) is shown in the same way.
22
COROLLARY
For any root ξ = (ξ₁, …, ξₙ) ∈ ℝⁿ of the equation
f(x) = 0 in the domain D, the projection lemma gives
m_j(f; ξ_j) ≤ 0 ≤ M_j(f; ξ_j). Hence μ_j ≤ ξ_j ≤ μ̄_j, where:
• μ_j (resp. μ̄_j) is a root of m_j(f; x_j) = 0 or of
M_j(f; x_j) = 0 in [a_j, b_j]
• μ_j = a_j (resp. μ̄_j = b_j) if m_j(f; x_j) ≤ 0 ≤ M_j(f; x_j)
already holds there
• there is no root of f in D if m_j(f; x_j) > 0 or
M_j(f; x_j) < 0 on all of [a_j, b_j]
23
UNIVARIATE ROOT SOLVER
• Our approach is based on an efficient solver for
finding univariate roots.
• Common methods to approximate a root are
based on bisection. We perform these methods by:
▪ splitting the interval into two sub-intervals
▪ selecting the sub-interval containing the root.
• The subdivision point can be chosen in several ways:
24
METHOD #1- IN BERNSTEIN BASIS USING
DE CASTELJAU ALGORITHM
This is a recursive algorithm that allows us to evaluate
a polynomial in the Bernstein basis. The algorithm is based on the
formula:

$b_i^0 = b_i, \quad i = 0, \dots, d$
$b_i^r = (1 - t)\, b_i^{r-1} + t\, b_{i+1}^{r-1}, \quad i = 0, \dots, d - r$

The coefficients b_i^r at a given level are obtained as the (1 − t, t)
barycenter of two consecutive coefficients b_i^{r−1}, b_{i+1}^{r−1}
of the previous level.
We repeat the process until we reach a single point, b_0^d, which is
the value of the polynomial at t.
25
METHOD #1- IN BERNSTEIN BASIS USING
DE CASTELJAU ALGORITHM
C# implementation:
The algorithm requires O(d²) arithmetic operations
26
METHOD #2- IN MONOMIAL BASIS USING
HORNER METHODS
For the polynomial $p(x) = \sum_{i=0}^{n} a_i x^i$ and some point
x₀, we perform the following steps:

$b_n = a_n$
$b_{n-1} = a_{n-1} + b_n x_0$
⋮
$b_0 = a_0 + b_1 x_0$

and get p(x₀) = b₀.
27
HORNER METHODS - EXAMPLE
For the polynomial f(x) = 2x³ − 6x² + 2x − 1 and the
point x₀ = 3:
b₃ = 2, b₂ = −6 + 2·3 = 0, b₁ = 2 + 0·3 = 2, b₀ = −1 + 2·3 = 5.
Indeed, f(3) = 5.
28
HORNER METHODS - EXAMPLE
From the algorithm we obtain additional important
information: if we divide f(x) by (x − x₀), the
remainder of the division is the last computed
coefficient b₀, and the other coefficients b₁, …, bₙ
form a polynomial of one degree less, which is the
quotient of the division.
In our example we get:

$2x^3 - 6x^2 + 2x - 1 = (2x^2 + 2)(x - 3) + 5$
29
HORNER METHODS
C implementation:
The algorithm requires O(d) arithmetic operations
30
HOW TO USE EVALUATION
METHODS FOR ROOT FINDING
For a polynomial pₙ(x) with roots zₙ < ⋯ < z₁, we
run the following algorithm:
• find z₁ by another method, starting from an initial guess
(e.g. Newton–Raphson)
• use Horner's method to compute $p_{n-1} = \dfrac{p_n}{x - z_1}$
• go back to the first step with p_{n−1}
Repeat the steps until all the roots are found.
31
METHOD #3- A SECANT-LIKE METHOD
A two-point method in which we draw a line between
two points on the graph; the position where this secant
intersects the x-axis becomes our new point. We
repeat this step iteratively.
C implementation:
32
METHOD #4
Compute iteratively the first intersection of the
convex hull of the control polygon with the
x-axis, and subdivide the polynomial
representation at this point.
33
METHOD #5 - NEWTON-LIKE METHOD
IN MONOMIAL BASIS
A single-point method where we split the domain
at the point where the tangent cuts the x-axis. If
there is no such point, we split at the
middle of the interval.
34
• Experiments on polynomials with random roots
show the superiority of the Horner and Newton iterations
(methods 2 and 5).
• These methods allow us to solve more than 10⁶
equations with a precision of ε = 10⁻¹² (experiments
were performed on an Intel Pentium 4 2.0 GHz processor).
• The Newton method outperforms the Horner-based method in "simple"
situations.
35
MULTIVARIATE ROOT FINDING
Now we will discuss a system of s polynomial
equations in n variables, with coefficients in ℝ:
$f_1(x_1, \dots, x_n) = 0, \ \dots, \ f_s(x_1, \dots, x_n) = 0.$
We are looking for an approximation of the real
roots of f(x) = 0 in the domain
D = [a₁, b₁] × ⋯ × [aₙ, bₙ], with precision ε.
36
GENERAL SCHEMA OF THE
SUBDIVISION ALGORITHM
Step #1 - applying a preconditioning step to the equations
Step #2 - reducing the domain
Step #3 - if the reduction ratio is too small, splitting the domain
37
• We start from polynomials with exact rational
coefficients.
• We convert them into the Bernstein basis using exact
arithmetic.
• We round their coefficients up and down to the closest
machine-precision floating-point numbers.
• Preconditioning, reduction and subdivision steps are then
performed on the enveloping polynomials obtained this way.
• All along the algorithm, the enveloping polynomials
remain upper and lower bounds of the actual
polynomials.
38
GENERAL SCHEMA OF THE
SUBDIVISION ALGORITHM
Step #1 - applying a preconditioning step to the equations
Step #2 - reducing the domain
Step #3 - if the reduction ratio is too small, splitting the domain
39
PRECONDITIONER
First, we multiply the equation system by a matrix of size s × s.
• Such a transformation may increase the degree of some
equations.
• The system itself may be sparse, and a global transformation
destroys this sparsity; to avoid that, we might prefer a partial
preconditioner applied to subsets of the equations sharing a
subset of variables.
For simplicity, let us suppose that the polynomials f₁, …, f_s are
already expressed in the Bernstein basis; we now discuss
two types of transformations:
40
Step #1 - applying a preconditioning step to the equations:
• global transformation
• local straightening
41
GLOBAL TRANSFORMATION
A typical difficult situation appears when two of the functions
have similar graphs in domain D. A way to avoid such a situation
is to transform these equations in order to increase the
differences between them.
42
GLOBAL TRANSFORMATION
For f, g ∈ ℝ[x], let

$\mathrm{dist}(f, g) = \|f - g\|.$

In order to use the above formula, we define a polynomial norm
on the Bernstein polynomial basis B in the following way:

$\|f\|^2 = \sum_{0 \le i_1 \le d_1, \dots, 0 \le i_n \le d_n} b_{i_1,\dots,i_n}(f)^2$

where (b_{i₁,…,iₙ}(f)) is the vector of the control coefficients of the function f.
This norm on the vector space of polynomials generated by the
basis B is associated to a scalar product that we denote by ⟨ | ⟩.
43
GLOBAL TRANSFORMATION
Goal: improve the angles between the equations by
creating a system that is orthogonal for the scalar product ⟨ | ⟩.
44
PROPOSITION
Let Q = (⟨f_i | f_j⟩)_{1≤i,j≤s} and let E be a
matrix of unitary eigenvectors of Q.
Then f̃ = Eᵗ f is a system of polynomials
which are orthogonal for the scalar
product ⟨ | ⟩.
45
PROOF
Let f̃ = Eᵗ f = (f̃₁, …, f̃_s). Then the matrix
of scalar products (⟨f̃_i | f̃_j⟩) is

$\tilde f\, \tilde f^{\,t} = (E^t f)(E^t f)^t = E^t f f^t E = E^t Q E = \mathrm{diag}(\sigma_1, \dots, \sigma_s)$

where σ₁, …, σ_s are the non-negative
eigenvalues of Q.
This shows that the system f̃ is
orthogonal for the scalar product ⟨ | ⟩.
46
The picture illustrates the impact of the global preconditioner on
two bivariate functions whose graphs are very close to each other
before the preconditioning, and well separated after this
preconditioning step.
47
LOCAL STRAIGHTENING
We consider square systems, for which s = n.
Since we are going to use the projection lemma, the interesting
situation for reduction steps is when the zero-levels of the
functions f_i are orthogonal to the x_i-directions.
We illustrate this remark:
48
LOCAL STRAIGHTENING
• In case (a), the reduction based on the corollary of the
projection lemma is of no use:
➢ the projections of the graphs cover the whole intervals.
• In case (b), a good reduction strategy will yield a good
approximation of the roots.
49
• The idea of this preconditioner is to transform the
system so that it is close to case (b).
• We transform the system f locally into the system J_f⁻¹(u₀) f,
where
J_f(u₀) = (∂_{x_i} f_j(u₀))_{1≤i,j≤s}
is the Jacobian matrix of f at the point u₀ ∈ D.
• A direct computation shows that locally (in a
neighborhood of u₀), the level-sets of the transformed f_i (i = 1, …, n)
are orthogonal to the x_i-axes.
50
GENERAL SCHEMA OF THE
SUBDIVISION ALGORITHM
Step #1 - applying a preconditioning step to the equations
Step #2 - reducing the domain
Step #3 - if the reduction ratio is too small, splitting the domain
51
REDUCING THE DOMAIN
A possible reduction strategy is:
• find the first root of the polynomial m_j(f_k; u_j)
• find the last root of the polynomial M_j(f_k; u_j)
➢ both in the given interval [a_j, b_j]
• reduce the domain to the interval [μ_j, μ̄_j] spanned by these roots
➢ as justified by the corollary of the projection lemma.
➢ An effective search for these roots results in a fast and
efficient process.
52
REDUCING THE DOMAIN
• Another approach is IPP, the "Interval Projected Polyhedron" method.
• This method is based on the convex-hull property.
• We reduce the search domain for the root to the interval
where the convex hull of the control polygon cuts the axis.
53
The following picture shows the improvement of the
first approach compared to IPP, which can lead to a
significant change in the behavior of the algorithm.
54
GENERAL SCHEMA OF THE
SUBDIVISION ALGORITHM
Step #1 - applying a preconditioning step to the equations
Step #2 - reducing the domain
Step #3 - if the reduction ratio is too small, splitting the domain
55
SUBDIVISION STRATEGY
• Check the signs of the control coefficients of the f_i on the domain.
• If, for one of the nonzero polynomials f_k, the vector of control
coefficients has no sign change:
➢ then D does not contain any root and should be excluded.
• Otherwise, split the domain in half in the direction j where
|b_j − a_j| is maximal and larger than a given size.
56
SUBDIVISION STRATEGY
Another variant of the method splits the domain in a direction j
only where b_j − a_j > ε, and only if the coefficients of
M_j(f_k; u_j) are not all negative and those of m_j(f_k; u_j) are
not all positive (otherwise f_k cannot vanish there).
This method produces domains that are better suited to the
geometry of the roots, but a post-processing step for gluing together
connected domains may be required.
57
GOOD TO KNOW:
The algorithm we presented was implemented in the
C++ library "synaps":
http://www-sop.inria.fr/teams/galaad-static/
58
EXPERIMENTS
Our objective in these experiments is to evaluate the
impact of reduction approaches compared with subdivision
techniques, with and without preconditioning.
The different methods that we compare are the following:
β€’ sbd stands for subdivision
β€’ rd stands for reduction.
β€’ sbds stands for subdivision using the global preconditioner
β€’ rds stands for reduction using the global preconditioner
β€’ rdl stands for reduction using the local preconditioner.
59
For each method and example, we present the following data:
• number of iterations in the process
• number of subdivision steps
• the time in milliseconds on an Intel Pentium 4 2.0 GHz with 512 MB RAM
• number of domains computed
• result size
60
Details:
• The most interesting characteristic for us is the
number of iterations.
• Simple roots are found using bisection.
➢ Changing this part of the algorithm can improve the running time.
➢ It will not change the number of iterations.
• The subdivision rules are based on splitting along the
variable with the largest interval.
• The calculations are done with an accuracy of 10⁻⁶.
61
RESULTS
Implicit curve intersection problems
defined by bi-homogeneous polynomials (Case #1, Case #2)
62
RESULTS
Intersection of two curves (Case #3, Case #4)
63
RESULTS
Self-intersection (Case #5); rational curves (Case #6)
64
ANALYSIS OF RESULTS
• The superiority of a reduction approach was observed,
➢ mainly reduction with the local preconditioner (rdl), which converged in
the lowest number of iterations.
• In complex cases (case 4, for example), reduction and subdivision methods
with preconditioning (sbds, rdl, rds) perform better than
classical reduction or subdivision (rd, sbd).
65
ANALYSIS OF RESULTS
• In most examples, only rds and rdl provide a good answer.
• In these examples, the maximal difference between the
coefficients of the upper and lower enveloping polynomials
(which indicates potential instability during the computations)
remains much smaller than the precision we expect on the
roots.
• Since our reduction principle is based on projection,
difficulties may arise when we increase the number of
variables in our systems.
66
• Another experiment treats problems in 6 variables of
degree ≤ 2 that come from the robotics community.
• In this example, we keep the three reduction
methods rd, rds, rdl and add combinations of them that use
projections:
➢ before and after local straightening (rd + rdl)
➢ before and after global preconditioning (rd + rds)
➢ all three techniques (rd + rdl + rds).
67
• The combination of projections is an improvement.
• The global preconditioner tends to be better than local straightening.
• Looking at the first table, the numbers of iterations of rdl and
rd are close, but this is not the case for rds.
The reason is that rdl uses local information (a Jacobian evaluation), while
rds uses global information computed from all the coefficients of the
system.
68
Our first experimental finding is that both subdivision and reduction
methods are poor solutions if they are not preconditioned.
With the same preconditioner, reduction will, most of
the time, beat subdivision.
69
CONCLUSION
We presented algorithms for solving polynomial equations
based on subdivision methods. The innovation in this approach
consists of two main additions:
• the integration of preconditioning steps, transformations of the
system that achieve convergence in fewer iterations;
• an improved use of reduction strategies, building on efficient
univariate root-finding methods applied to the enveloping
polynomials of the multivariate system.
70
CONCLUSION
In the past, some articles in the field argued that
the reduction strategy is not interesting because these
methods do not prevent many subdivisions.
In our experiments it can clearly be seen that reduction may
save many subdivisions. We emphasize that, despite the many
advantages of the reduction strategy, the experiments
demonstrate that it is better to use a preconditioned subdivision
than a pure reduction.
71
In conclusion, we need to
understand what kind of problem
we are dealing with and adapt the
best strategies to solve it with the
tools we have acquired today.
72