Self-Optimization as a Framework for Advanced Control Systems
Joachim Böcker, Bernd Schulz, Tobias Knoke and Norbert Fröhleke
Paderborn University
Faculty of Computer Science, Electrical Engineering and Mathematics
Institute of Power Electronics and Electrical Drives
D-33095 Paderborn, Germany
[email protected]
Abstract— Optimization is a standard step of control design. To do so, clear design goals have to be defined and sufficient system information must be given as prerequisites. However, data are often inaccurate or missing, and design goals may change during operation. That is why a concept of self-optimizing systems is proposed, which is able to optimize the system even during operation. The proposed concept should be understood as a framework to incorporate various control and optimization methods. A key element of the proposal is the Operator Controller Module, which consists of a cognitive part for planning tasks with lower real-time requirements and a reflective part for the execution level. A particular focus is given to how to handle multi-objective optimization with on-line adaptation of the objectives depending on internal and external design goals. Examples of how to employ the concept in practice are given.
I. INTRODUCTION
The concept of Self-Optimization presented in this contribution should not be understood as a novel control design method in addition to other well-established approaches, but rather as a framework which allows various control and optimization methods to be integrated. Before explaining the proposal, a glance at today's control engineering practice is given first. Today's practice includes a number of powerful methods, an extract of which will be mentioned in the following section. Thanks to computer-aided tools, even more sophisticated control design methods can be applied today with moderate manual effort. To do so, however, decisions on the design goals have to be taken, which are often not based on solid ground at the beginning of a project, and sometimes not even during later phases. Furthermore, sufficient knowledge of the system to be controlled, mostly reflected by a model, and of the influences from the environment as well as the operating conditions must be available. The design work, even with sophisticated methods, will inevitably suffer if these prerequisites are not sufficiently met. A striking comment was
given in [1], characterizing controllers as "overdesigned and underperforming". Despite the fact that there exist singular applications with thoroughly designed control, e.g. (hopefully) space mission systems, many other control applications are far from being optimally designed. This is due to various economic or technical reasons:
- A series product is employed by various customers in rather different environments, which the product designer does not know in detail; and even if he did, an optimization could not be run for each single case.
- The given data are not sufficient to perform an optimal
design, e.g. unknown system or disturbance parameters.
- The design goals are not clear, perhaps conflicting, or
may change during design, even later during operation.
- An optimal design process could be performed, but would
be too extensive in terms of time or cost or would exceed
the skills of the development staff.
Before explaining the concept of Self-Optimization, a short summary of up-to-date advanced control methods is given in Section II.

II. ADVANCED CONTROL TECHNIQUES

Today, the basics of control theory of linear single-input
single-output systems are general knowledge of most electrical
and mechanical engineers. However, in the last decades a wide
variety of control methods has been developed and elaborated,
which largely exceeds the horizon of the very basics. The
developments have gone in quite different directions, and at the current stage it is difficult to say which approaches should be included in today's standard toolkit for a control engineer. In any case, a short assessment of some of the most relevant methods is presented in the following.
State-space methods such as state-feedback control or state observers, though well known for linear systems for decades and often included in the standard academic curriculum, at first suffered from being rarely applied. It was argued that this was due to the restricted computational power in the early days of digital control, but seemingly there were also educational reasons. It took a generation until this methodology found its road to broader application in practice. It is wise to keep this development in mind when introducing more sophisticated methods.
Adaptive control was a main research focus in control theory up to the 1980s. Important concepts such as Self-Tuning Adaptive Control and Model-Reference Adaptive Control were developed and the mathematical background was formulated, e.g. [2], [3]. However, the former expectation that adaptive controllers would be able to cope with a large class of a-priori unknown or time-varying systems has cooled down. Many of today's adaptive controllers found in practice are of the simplest but most robust type, i.e. Gain-Scheduling
Controllers, where controller parameters are scheduled by a
supervisory level depending on operating conditions.
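To illustrate the gain-scheduling principle, the following minimal sketch selects a proportional gain from a supervisory schedule according to the operating point. The schedule values and the nearest-neighbor selection rule are purely illustrative assumptions, not taken from any particular application.

```python
# Illustrative sketch (hypothetical values): a gain-scheduled P controller whose
# gain is selected by a supervisory level depending on the operating condition.

def select_gain(operating_point, schedule):
    """Pick the gain tuned for the closest scheduled operating point."""
    return min(schedule, key=lambda entry: abs(entry[0] - operating_point))[1]

def control(error, operating_point, schedule):
    """P control action with the scheduled gain."""
    kp = select_gain(operating_point, schedule)
    return kp * error

# Gains tuned off-line for three operating points (hypothetical numbers).
schedule = [(0.0, 2.0), (5.0, 1.5), (10.0, 0.8)]
u = control(0.5, 4.2, schedule)  # operating point 4.2 uses the gain scheduled at 5.0
```

In practice, the schedule is tuned off-line for several operating points, and the supervisory level switches or interpolates between the stored gains at run-time.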
Fuzzy Control was in vogue in the 1990s, e.g. [4]. The method was enthusiastically welcomed as a linguistic control design method that could be easily handled even by non-control engineers, but it was highly overestimated. It was often claimed that Fuzzy Control would per se result in good robustness, while it seems to be rather the other way round: good-natured systems can easily be fuzzy-controlled. Today, Fuzzy Control has acquired a legitimate position for the easy design of nonlinear maps and for supervisory functions.
Neural Networks are nowadays esteemed in control engineering, either as a black-box model of a system with unknown internal structure, or as a nonlinear controller. However, the tuning of neural networks has to undergo learning procedures, so that the neural network itself will not contribute more to the performance than what has been learned. In addition, combinations of fuzzy and neural methods have often been proposed to the engineering community.
A lot has been accomplished in understanding and design
of nonlinear control systems, particularly by concepts of
differential geometry [5], [3], [6], [7]. The methods of Exact
Linearization and Flatness-Based Control are some of the
outcomes. However, these methods are restricted to certain types of nonlinearities, and robustness to uncertainties is also a crucial issue.
The H2 and H∞ designs are meanwhile established as methods for optimal controller design, particularly to ensure robustness specifications. Though the underlying mathematics is of a high level, design tools are available for easy use.
Optimization algorithms are often used in combination with
the above mentioned methods. Meanwhile both the algorithms
and the computing power are able to cope with rather complicated optimization problems. Evolutionary algorithms [8]
have proved their capability, particularly for highly nonlinear
problems with a huge number of design parameters.
An interesting approach that gains more and more popularity is Model Predictive Control [9], [10]. During run-time, the controller actions are subject to an online optimization based on predictions of the future behavior. Of course, this also requires a system model to carry out the prediction. Unlike the principle of adaptive control, model predictive control is less sensitive to the accuracy of the system model, partly postponing the optimization usually done during design to the run-time phase. Thus an on-line optimization is implemented. This latter design procedure will be extended by the self-optimizing framework outlined below.
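The receding-horizon idea can be illustrated by a deliberately minimal sketch: at each step, a set of candidate inputs is evaluated against a one-step model prediction, and the best input is applied. The scalar model, the cost weights and the candidate grid are illustrative assumptions; practical MPC uses longer horizons and proper numerical optimizers.

```python
# Toy one-step-ahead MPC for the scalar model x+ = a*x + b*u (hypothetical
# parameters): choose the input minimizing tracking error plus input cost.

def mpc_step(x, ref, a=0.9, b=0.5, r=0.01, candidates=None):
    """Evaluate candidate inputs on a model prediction; return the best one."""
    if candidates is None:
        candidates = [i / 10 for i in range(-20, 21)]  # u in [-2.0, 2.0]
    def cost(u):
        x_pred = a * x + b * u                 # model-based prediction
        return (x_pred - ref) ** 2 + r * u ** 2
    return min(candidates, key=cost)

x, ref = 0.0, 1.0
for _ in range(5):                             # receding horizon: re-optimize each step
    u = mpc_step(x, ref)
    x = 0.9 * x + 0.5 * u                      # "true" plant, here equal to the model
```

Because the optimization is repeated at every step with the latest measurement, a moderate model mismatch is partly compensated by feedback.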
III. SELF-OPTIMIZING CONTROL SYSTEMS
A. Definition
A self-optimizing system is, according to the definition within the Collaborative Research Center 614 "Self-Optimizing Concepts and Structures in Mechanical Engineering" (SFB 614) [11], able to optimize its behavior by adapting the structure of the utilized mechanical components, controllers, actuators and/or sensors when the system faces disturbances from the environment, from the system itself due to wear, or from altered user requirements. Adaptation within this wide-spanned horizon of course implies an endogenous variation of goals, implemented via a self-optimizing data processing unit, the so-called Operator Controller Module
(OCM). The basic idea of realizing self-optimizing systems is not particularly new. Already in 1958, a self-optimizing machine was proposed by Kalman [12]. This machine does not only use fixed strategies to adapt itself automatically; furthermore, it could independently recognize the requirements for a control system within a changing environment. Kalman identified three steps which his machine had to execute to realize a self-optimizing control:
1) Measure the dynamic characteristics of the system
2) Specify the desired characteristics of the controller
3) Put together a controller using standard elements which has the required dynamic characteristics
"... By contrast, the machine can repeat steps (1-3) continually and thereby detect and make correction in accordance with any change in the dynamic characteristics of a process which it controls ... It may be said that the machine adapts itself to changes in its surrounding ... The author prefers to call this property of the machine 'Self-Optimization' ..."
Independent of Kalman, the SFB 614 also defined three steps [13] to be executed by a self-optimizing system by definition. The recurring execution of these actions is described as the Self-Optimization process:
1) Analysis of the current situation
2) Determination of the system objectives
3) Adaptation of the system behavior
These individual steps are very similar to the sequence given
by Kalman. The definition of the SFB 614, however, is not
limited to control systems, but also applicable to general
technical systems.
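The recurring three-step process can be sketched schematically as follows; all functions and data are placeholders chosen for illustration, not part of the SFB 614 definition.

```python
# Schematic sketch of one pass of the Self-Optimization process
# (analysis, goal determination, adaptation); everything here is a placeholder.

def analyze_situation(measurements, goals):
    """Step 1: assess influences and the degree of goal fulfillment."""
    return {"error": abs(measurements["output"] - goals["target"])}

def determine_objectives(analysis, goals):
    """Step 2: confirm old goals or derive new ones from the analysis."""
    if analysis["error"] > goals["tolerance"]:
        return {**goals, "priority": "accuracy"}
    return {**goals, "priority": "efficiency"}

def adapt_behavior(controller, objectives):
    """Step 3: adapt parameters (or structure) to the current objectives."""
    controller["gain"] = 2.0 if objectives["priority"] == "accuracy" else 1.0
    return controller

controller = {"gain": 1.0}
goals = {"target": 1.0, "tolerance": 0.05}
measurements = {"output": 0.8}
analysis = analyze_situation(measurements, goals)
objectives = determine_objectives(analysis, goals)
controller = adapt_behavior(controller, objectives)  # error 0.2 -> prioritize accuracy
```

In a real system, this loop would run continually, so that changed influences or goals are reflected in the adapted behavior at the next pass.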
Within step 1, "analysis of the current situation", influences affecting the system as well as its internal states are detected. This can be carried out by measurements, identification procedures, and by communication with other systems. Influences may include not only disturbances but also parameters which only indirectly affect the system behavior. For example, the efficiency or the lifetime of a component may change without affecting the main function of the system. A last but substantial aspect of the analysis is the evaluation of the degree to which the goals are fulfilled.
In step 2, "determination of objectives", new goals are ascertained based on the recorded influences, or the old goals are confirmed. The potential of automatically generating new objective functions for an optimization represents one major feature of self-optimizing systems.
In step 3, "adaptation of the system behavior", the system behavior is adapted by new parameter settings and/or structural changes with respect to controller, sensor and/or actuator selection. The adaptation of the system behavior shall always be carried out automatically with regard to changes of external objectives (see Fig. 1). An optimization can be realized in steps
2 and 3. Self-optimization can be considered as an extension of advanced control engineering.

Fig. 1. Self-Optimization as enhancement of adaptive control systems (block diagram relating classic control, adaptive control and self-optimizing control; labels include inherent, internal and external goals, decision making, and Pareto optimization over several objectives)
B. Structuring of Self-Optimizing Systems
A self-optimizing system which is capable of autonomous situation analysis, determination of objectives, optimization, and adaptation of the system behavior is of a highly complex nature. Hence, a clear structuring is required, as developed within the SFB 614 [14]. While Fig. 1 points out some elementary functions of self-optimizing systems without considering timing requirements, Fig. 2 presents the proposed general structure of self-optimizing systems, called the Operator Controller Module (OCM), which is able to cope with real-time requirements.
The bottom level of the OCM includes the controller interfacing with the process to be controlled by actuators and
sensors. It forms the control loop. This level of the OCM has
to fulfill hard real-time requirements.
The intermediate reflective operator represents the interface
between controller and the superimposed cognitive operator.
It governs the controller by providing new set values and
supervises process and control in order to ensure safety
requirements. These tasks of the reflective operator have to be
carried out also in hard real-time, perhaps in an event-oriented
manner. The reflective operator takes care of adaptation of controller parameters subject to weaker real-time requirements.
Goal determination and optimization algorithms as parts
of planning procedures run on the cognitive operator level
utilizing model-based or behavior-based approaches. There is
no need to realize these planning procedures in the same hard
real-time environment like the controller. Access to actuators
is only allowed via the hierarchical order of the OCM. So
reflective and cognitive operators do not have direct access to
the actuators. For complex mechatronic systems consisting of several functional modules, each module should be equipped with its own OCM.
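The hierarchical ordering described above can be sketched structurally as follows. The class and method names are illustrative, not the paper's API; the point of the sketch is merely that the cognitive operator reaches the plant only through the reflective operator and the controller.

```python
# Structural sketch of the OCM hierarchy (illustrative names): the cognitive
# operator never touches actuators directly; all influence passes down through
# the reflective operator to the controller.

class Controller:
    """Execution level: closes the control loop (hard real-time in practice)."""
    def __init__(self):
        self.gain, self.setpoint = 1.0, 0.0
    def step(self, measurement):
        return self.gain * (self.setpoint - measurement)   # actuator command

class ReflectiveOperator:
    """Interface level: supervises the controller and applies new parameters."""
    def __init__(self, controller):
        self.controller = controller
    def apply(self, parameters):
        self.controller.gain = parameters.get("gain", self.controller.gain)
        self.controller.setpoint = parameters.get("setpoint", self.controller.setpoint)

class CognitiveOperator:
    """Planning level: runs (soft real-time) optimization, hands results down."""
    def __init__(self, reflective):
        self.reflective = reflective          # no direct actuator access
    def plan(self):
        parameters = {"gain": 1.5, "setpoint": 1.0}   # hypothetical planning result
        self.reflective.apply(parameters)

controller = Controller()
cognitive = CognitiveOperator(ReflectiveOperator(controller))
cognitive.plan()
u = controller.step(measurement=0.4)          # 1.5 * (1.0 - 0.4)
```

The reflective operator would additionally enforce safety limits on the parameters it accepts, which is omitted here.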
Fig. 2. Self-optimizing system structured by the Operator Controller Module (OCM) (block diagram of the three levels: cognitive operator with cognitive information processing, model-based self-optimization and planning in soft real-time; reflective operator with supervision, emergency routines, sequencer and adaptation in hard real-time; controller with the control loop of the linear motor)

C. Optimizing Algorithms for Self-Optimizing Systems

As stated above, an optimization algorithm is usually required in steps 2 and 3 of the Self-Optimization process. These algorithms are to be implemented in the cognitive operator.
From classical single-objective optimization, the following fundamental procedures are well known.
Gradient Methods
Gradient methods use information about the slope of a function to determine a direction of search in which the minimum of the function is assumed to lie. The simplest gradient method, using only the first derivative, is the method of steepest descent, in which a search is performed in the direction of the negative gradient of the objective function. Possible constraints can be taken into account using Lagrange multipliers. The second derivative, the Hessian, is additionally utilized by the group of Newton-type methods. If an analytical description of the optimization problem can be formulated, gradient methods are fast. In the context of Self-Optimization, however, the regarded systems are often too complex to write down an analytical description of the optimization problem. The necessary derivatives must then be determined numerically, whereby the advantage of low computation time is lost. Depending on the starting point, often only a local minimum is found. An overview of gradient methods is given in [15].
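A minimal sketch of the method of steepest descent on a simple quadratic test function; the function and the step size are illustrative choices.

```python
# Steepest descent on f(x, y) = (x - 1)^2 + 10*y^2 (illustrative test function):
# repeatedly step against the gradient with a fixed step size.

def grad(x, y):
    """Analytical gradient of the test function."""
    return (2 * (x - 1), 20 * y)

x, y, step = 0.0, 1.0, 0.04
for _ in range(200):
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy   # move in the negative gradient direction

# (x, y) approaches the minimum at (1, 0)
```

Note that a fixed step size already has to be matched to the curvature (here, step < 0.1 is needed for the stiff y-direction); Newton-type methods remove this sensitivity by using the Hessian.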
Simplex Methods
A simplex is a polytope of n + 1 vertices in n dimensions: a
triangle in a plane, a tetrahedron in three-dimensional space
and so on. Contrary to gradient methods, simplex methods
do not need information about gradients or the Hessian. The
optimization is done on the basis of direct comparison of the
simplex vertices. In each step, the worst vertex is replaced by a better one. Constraints can be handled by the use of so-called penalty functions. Simplex methods are robust, but not fast. Depending on the starting point, usually a local minimum is found. An often-used algorithm of the simplex group is the Nelder-Mead algorithm [16].
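The replace-the-worst-vertex idea can be sketched as follows. This is a strongly simplified variant using only reflection and shrinking; the full Nelder-Mead algorithm adds expansion and contraction steps.

```python
# Simplified simplex search in two dimensions (illustrative test function):
# reflect the worst vertex through the centroid of the others, or shrink.

def f(p):
    x, y = p
    return (x - 1) ** 2 + (y + 2) ** 2     # minimum at (1, -2)

simplex = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
for _ in range(100):
    simplex.sort(key=f)                    # best first, worst last
    best, second, worst = simplex
    cx = ((best[0] + second[0]) / 2, (best[1] + second[1]) / 2)  # centroid
    reflected = (2 * cx[0] - worst[0], 2 * cx[1] - worst[1])
    if f(reflected) < f(worst):
        simplex[2] = reflected             # replace the worst vertex
    else:
        # shrink toward the best vertex to guarantee progress
        simplex = [best,
                   ((best[0] + second[0]) / 2, (best[1] + second[1]) / 2),
                   ((best[0] + worst[0]) / 2, (best[1] + worst[1]) / 2)]
best = min(simplex, key=f)
```

Only function values are compared; no derivative information is needed, which is what makes simplex methods robust on noisy or non-smooth problems.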
Evolutionary Algorithms
Evolutionary algorithms are based on principles copied from
nature. In an initialization phase, a set of possible solutions
of the optimization problem is randomly generated. For each solution, the objective function value is evaluated, and the best solutions are used to create new solutions (survival of the fittest). Evolutionary algorithms are not deterministic, but they have the great advantage of finding the global optimum with a certain probability, and they do not need information about derivatives. For complex systems with a great number of degrees of freedom, evolutionary algorithms are fast compared to simplex methods. More details are found in [17].
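A toy sketch of the evolutionary principle (random initialization, fitness evaluation, selection, mutation) on a one-dimensional problem; population size, selection pressure and mutation spread are arbitrary illustrative choices.

```python
import random

# Toy evolutionary search (illustrative parameters): keep the fittest
# individuals and create offspring by Gaussian mutation.

random.seed(0)                              # fixed seed for reproducibility

def fitness(x):
    return -(x - 3.0) ** 2                  # maximum at x = 3

population = [random.uniform(-10, 10) for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                               # survival of the fittest
    offspring = [p + random.gauss(0, 0.5) for p in parents for _ in range(3)]
    population = parents + offspring                       # elitism: parents survive
best = max(population, key=fitness)
```

Because the best individuals are carried over unchanged (elitism), the best fitness found never degrades from one generation to the next.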
So far, the optimization methods assume only a single
objective. In the context of Self-Optimization and very often in
engineering science, a single-objective optimization is a rare
exception while in most cases several objectives have to be
considered in parallel. A multi-objective optimization could be reduced to a single-objective problem by weighted summation. If the weights have to be varied due to changing goals, however, this method proves unwieldy; moreover, it does not provide information about the trade-offs, i.e. the interdependencies between the various objectives. A favored method to consider several objectives simultaneously without prior weighting is the concept of Pareto optimization [18]. A point is called Pareto optimal if an improvement of one objective is only possible at the expense of at least one other. Usually, the solution of a multi-objective optimization leads to a set of such Pareto points, called the Pareto set. In the end, a decision has to be taken as to which point of the Pareto set is selected for the current design, depending on the actual goal settings.
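The notion of Pareto optimality can be made concrete with a small sketch that filters the non-dominated points out of a sampled candidate set (two objectives to be minimized; the candidate values are illustrative).

```python
# Extract the Pareto set from sampled candidate solutions; each candidate is a
# pair (objective1, objective2), both to be minimized. Values are illustrative.

def dominates(a, b):
    """a dominates b if a is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(points):
    """Keep every point that is dominated by no other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

points = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
front = pareto_set(points)    # (3,3) and (5,5) are dominated and drop out
```

A weighted sum would collapse this front into a single point per weight choice, whereas the full Pareto set exposes the trade-off between the objectives.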
Usually optimization methods are applied during the design phase without real-time requirements. An optimization
within the cognitive operator, however, has to be carried
out under at least soft real-time requirements. Particularly,
Pareto optimization in real-time is a huge challenge. If not only the goals are changing, but also the system itself, a recalculation of the Pareto set becomes necessary under real-time conditions. This requires high computational power and efficient algorithms. However, it has been shown that this problem can be solved with powerful numerical algorithms, partly incorporating evolutionary methods [19], [20], [21], [22].
IV. EXAMPLES OF SELF-OPTIMIZING SYSTEMS
Example 1: Operating Point Assignment for a Doubly-fed
Linear Motor
As a particular example, Self-Optimization utilizing the
concept of the Operator Controller Module (OCM) has been
applied to a novel, sophisticated railway transportation system (RailCab) [23]. These railcars are propelled by a doubly-fed linear motor allowing operation of several vehicles, even grouped as a convoy, on the same stator segments. Unlike other types of linear motors, both primary and secondary of the motor (track and vehicle parts) are actively fed with electrical currents. Simplifying the interrelationships, the thrust force depends mainly on the product of the primary and secondary currents. Thus, there are degrees of freedom to assign the
operation point for a demanded thrust force, which are subject
to an optimization. The optimization goals concern efficiency,
losses, allowed temperatures of primary and secondary, controllability of the operation point, state of charge (SOC) of the
vehicle's batteries, etc. Obviously, the optimization cannot be completely accomplished as a design task due to varying conditions and the varying importance of the goals. For example, with a low SOC of the vehicle's battery, minimization of the primary (stator) losses will be of minor importance. The computation of the Pareto set, including decision making, was implemented within the cognitive part of the OCM under soft real-time requirements. Additionally, a tracking procedure for the current Pareto point was employed in order to react in hard real-time to fast changes of requirements until the next complete computation of the Pareto set is finished. More details have been reported in [19], [20].
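A strongly simplified toy version of such an operating point assignment may look as follows. The motor model (thrust proportional to the current product), the resistances and the SOC-dependent weighting are hypothetical stand-ins for the actual multi-objective optimization reported in [19], [20].

```python
# Toy operating point assignment (all numbers hypothetical): thrust is taken
# proportional to the product of primary and secondary currents, and the
# current split is chosen to trade off primary (stator) against secondary
# (vehicle) losses depending on the battery state of charge (SOC).

K, R1, R2 = 1.0, 0.2, 0.5          # thrust constant and winding resistances

def losses(i1, i2, w_primary):
    """Weighted sum of primary and secondary ohmic losses."""
    return w_primary * R1 * i1 ** 2 + (1 - w_primary) * R2 * i2 ** 2

def assign_operating_point(thrust, soc, steps=2000):
    """Scan i1, set i2 = thrust/(K*i1); weight stator losses less at low SOC."""
    w_primary = 0.2 if soc < 0.3 else 0.8
    best = None
    for n in range(1, steps + 1):
        i1 = 0.01 * n
        i2 = thrust / (K * i1)              # enforce the demanded thrust
        cost = losses(i1, i2, w_primary)
        if best is None or cost < best[0]:
            best = (cost, i1, i2)
    return best[1], best[2]

i1_low, i2_low = assign_operating_point(thrust=10.0, soc=0.2)    # low SOC
i1_high, i2_high = assign_operating_point(thrust=10.0, soc=0.9)  # high SOC
```

With a low SOC, stator losses are weighted less, so a larger share of the current is shifted to the primary; the demanded thrust is met in both cases.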
Example 2: Energy Management for an Onboard Storage
System Based on Multi-Objective Optimization
Design of an energy management for a tram can be seen
as a multi-objective optimization problem with the two objectives “minimize line peak power” and “minimize energy
consumption”. Solutions of the multi-objective optimization
problem are a set of "optimal compromises" between these two objectives (the Pareto set). The calculation of the Pareto set is based on a-priori knowledge of the exact system parameters and the complete drive cycle. In real systems like a tram grid, there are influences like changing passenger load or traffic, which lead to variations of the system parameters and drive cycles. These variations change the Pareto set, so that a new Pareto set must be calculated. With today's computing power, this calculation is not possible in real-time. To solve this problem, the drive cycle is divided into short sections. For each section, a number of Pareto sets for different possible sets of system parameters is calculated off-line and stored in a database. Based on the current system parameters, a suitable Pareto set is selected from the database on-line for each section. In order to select a Pareto set, an OCM
is used. The cognitive operator implements the identification
of new system parameters by calculations, knowledge bases
and/or smart deductions. Newly determined system parameters
are forwarded to the reflective operator, which realizes the
selection of a suitable Pareto set and a Pareto point based on
the system parameters transferred by the cognitive operator.
The selected energy management related to the Pareto point is
passed to the controller. Additionally, preset safety-measures
can be performed in case of unexpected power demands.
Finally, the controller executes the energy management chosen
by the reflective operator. More details are found in [24].
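The on-line selection step can be sketched as follows; the database layout, parameter names and distance measure are illustrative assumptions, not the implementation of [24].

```python
# Sketch of the on-line Pareto set selection (illustrative data): Pareto sets
# are precomputed off-line per drive-cycle section and per parameter set, and
# at run-time the entry with the closest parameters is picked.

# database[section] = list of (parameters, pareto_set) entries, computed off-line
database = {
    "section_1": [
        ({"passenger_load": 0.3}, [(10.0, 5.0), (12.0, 4.0)]),
        ({"passenger_load": 0.9}, [(14.0, 7.0), (16.0, 6.0)]),
    ],
}

def select_pareto_set(section, identified_parameters):
    """Pick the precomputed Pareto set with the closest parameter values."""
    entries = database[section]
    def distance(entry):
        params, _ = entry
        return sum((params[k] - identified_parameters[k]) ** 2 for k in params)
    return min(entries, key=distance)[1]

pareto = select_pareto_set("section_1", {"passenger_load": 0.8})  # closest: 0.9
```

In the OCM, the parameter identification would run in the cognitive operator, while this selection and the subsequent choice of a Pareto point belong to the reflective operator.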
V. CONCLUSION
A framework for self-optimizing control systems has been outlined in this contribution with reference to advanced control techniques. The Operator Controller Module is proposed as a key element, able to incorporate various methods of control, system identification, supervision and multi-objective optimization. Two examples from the field of mechatronics, an operating point assignment for a linear motor and an energy management of an onboard storage system for vehicles, have been presented. As the work on self-optimizing control systems continues, its potential and benefits with respect to reactions to failures, to alterations of system complexity on different levels, to the utilization of computing power and, e.g., to reactions to eigenfrequency variations of a vehicle structure will be presented.
ACKNOWLEDGMENTS
This work was partly developed in the course of the Collaborative Research Center 614 – Self-Optimizing Concepts
and Structures in Mechanical Engineering – University of
Paderborn, and was published on its behalf and funded by
the Deutsche Forschungsgemeinschaft.
REFERENCES
[1] T. Samad and G. Balas, Software-Enabled Control. John Wiley & Sons, 2002.
[2] K. J. Åström and B. Wittenmark, Adaptive Control. Addison-Wesley, 1995.
[3] M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic, Nonlinear and Adaptive Control Design. Wiley-Interscience, 1995.
[4] M. Johansson, A Primer on Fuzzy Control. Department of Automatic Control, Lund Institute of Technology, Sweden, 1996.
[5] A. Isidori, Nonlinear Control Systems 1: An Introduction. Springer, Berlin, 1995.
[6] R. Sepulchre, M. Jankovic, and P. V. Kokotovic, Constructive Nonlinear Control. Springer, Berlin, 1997.
[7] J.-J. Slotine and W. Li, Applied Nonlinear Control. Pearson Education, 1990.
[8] D. C. Dracopoulos, Evolutionary Learning Algorithms for Neural Adaptive Control. Springer, Berlin, 1997.
[9] F. Allgöwer and A. Zheng, Nonlinear Model Predictive Control. Birkhäuser, 2000.
[10] E. F. Camacho and C. Bordons, Model Predictive Control. Springer, Berlin, 2004.
[11] http://www.sfb614.de/en, homepage of the Collaborative Research Center 614.
[12] R. E. Kalman, "Design of a self-optimizing control system," Transactions of the American Society of Mechanical Engineers, vol. 80, pp. 468–478, 1958.
[13] U. Frank, H. Giese, F. Klein, O. Oberschelp, A. Schmidt, B. Schulz, H. Vöcking, and K. Witting, Selbstoptimierende Systeme des Maschinenbaus - Definitionen und Konzepte, ser. HNI-Verlagsschriftenreihe, vol. 155. Paderborn: Heinz Nixdorf Institut, Universität Paderborn, 2004.
[14] T. Hestermeyer, O. Oberschelp, and H. Giese, "Structured information processing for self-optimizing mechatronic systems," in Proc. 1st International Conference on Informatics in Control, Automation and Robotics (ICINCO 2004), Setubal, Portugal, H. Araujo, A. Vieira, J. Braz, B. Encarnacao, and M. Carvalho, Eds. INSTICC Press, Aug. 2004, pp. 230–237.
[15] P. Spellucci, Numerische Verfahren der nichtlinearen Optimierung. Birkhäuser Verlag, Basel, 1993.
[16] J. A. Nelder and R. Mead, "A simplex method for function minimization," Computer Journal, vol. 7, pp. 308–313, 1965.
[17] T. Bäck, Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. Oxford University Press, USA, 1996.
[18] K. M. Miettinen, Nonlinear Multiobjective Optimization. Kluwer Academic Publishers, Dordrecht, 1998.
[19] A. Pottharst, K. Baptist, O. Schütze, J. Böcker, N. Fröhleke, and M. Dellnitz, "Operating point assignment of a linear motor driven vehicle using multiobjective optimization methods," in Proc. 11th International Conference EPE-PEMC 2004, Riga, Latvia, Sept. 2004.
[20] K. Witting, B. Schulz, M. Dellnitz, J. Böcker, and N. Fröhleke, "A new approach for online multiobjective optimization of mechatronical systems," accepted for Int. J. on Software Tools for Technology Transfer (STTT), Special Issue on Self-Optimizing Mechatronic Systems, 2006.
[21] R. Li, A. Pottharst, N. Fröhleke, J. Böcker, K. Witting, M. Dellnitz, O. Znamenshchykov, and R. Feldmann, "Design and implementation of a hybrid energy supply system for railway vehicles," in Proc. APEC 2005, IEEE Applied Power Electronics Conference, Austin, USA, 2005.
[22] M. Dellnitz, O. Schütze, and T. Hestermeyer, "Covering Pareto sets by multilevel subdivision techniques," Journal of Optimization Theory and Applications, vol. 124, no. 1, pp. 113–136, 2005.
[23] http://www.railcab.de/en, homepage of the RailCab research project of the University of Paderborn.
[24] T. Knoke, C. Romaus, J. Böcker, A. Dell'Aere, and K. Witting, "Energy management for an onboard storage system based on multi-objective optimization," in Proc. 32nd Annual Conference of the IEEE Industrial Electronics Society (IECON 2006), Paris, France, 2006.