Applied Mathematics and Computation 204 (2008) 671–679
Contents lists available at ScienceDirect
Applied Mathematics and Computation
journal homepage: www.elsevier.com/locate/amc
Solution of generalized matrix Riccati differential equation for
indefinite stochastic linear quadratic singular system using
neural networks
P. Balasubramaniam *, N. Kumaresan
Department of Mathematics, Gandhigram Rural University, Gandhigram 624 302, Tamilnadu, India
Keywords:
Generalized matrix Riccati differential
equation
Indefinite stochastic linear singular system
Neural networks
Optimal control and Runge Kutta method
Abstract
In this paper, the solution of the generalized matrix Riccati differential equation (GMRDE) for an indefinite stochastic linear quadratic singular system is obtained using neural networks. The goal is to provide optimal control with reduced computational effort by comparing the solutions of the GMRDE obtained from the well known traditional Runge Kutta (RK) method and the nontraditional neural network method. To obtain the optimal control, the solution of the GMRDE is computed by a feedforward neural network (FFNN). The accuracy of the neural network solution to the problem is qualitatively better. The neural network solution is also compared with the solution of ode45, a standard solver available in MATLAB which implements an RK method with variable step size. The advantage of the proposed approach is that, once the network is trained, it allows instantaneous evaluation of the solution at any desired number of points, spending negligible computing time and memory. The computation time of the proposed method is shorter than that of the traditional RK method. An illustrative numerical example is presented for the proposed method.
© 2008 Elsevier Inc. All rights reserved.
1. Introduction
Stochastic linear quadratic regulator (LQR) problems have been studied by many researchers [2,6,7,13,25]. Indefinite stochastic linear quadratic theory has been extensively developed and has found interesting applications in mathematical finance. Chen et al. [10] have shown that the stochastic LQR problem is well posed if there are solutions to the Riccati equation, and an optimal feedback control can then be obtained. For LQR problems, it is natural to study an associated Riccati equation. However, the existence and uniqueness of the solution of the Riccati equation in general seem to be very difficult problems due to the presence of the complicated nonlinear term. Zhu and Li [26] used an iterative method for solving Riccati equations for stochastic LQR problems. There are several numerical methods to solve the conventional Riccati equation; since the solution process is nonlinear, essential error accumulation may occur. In order to minimize the error, the conventional Riccati equation has recently been analyzed using a neural network approach (see [3-5]). This paper extends the neural network approach to solving the stochastic Riccati equation.
Neural networks, or simply neural nets, are computing systems which can be trained to learn a complex relationship between two or more variables or data sets. Having structures similar to their biological counterparts, neural networks are representational and computational models that process information in a parallel, distributed fashion through interconnected simple processing nodes [24]. Neural net techniques have been successfully applied in various fields such
* Corresponding author.
E-mail addresses: [email protected] (P. Balasubramaniam), [email protected] (N. Kumaresan).
0096-3003/$ - see front matter © 2008 Elsevier Inc. All rights reserved.
doi:10.1016/j.amc.2008.04.023
as function approximation, signal processing and adaptive (or learning) control for nonlinear systems. Using neural networks, a variety of off-line learning control algorithms have been developed for nonlinear systems [17,20]. A variety of numerical algorithms (see [11]) have been developed for solving the algebraic Riccati equation. In recent years, numerical aspects of algebraic Riccati equations have attracted the considerable attention of many researchers in the neural network field (see [14,16,23]).
Singular systems contain a mixture of algebraic and differential equations. In that sense, the algebraic equations represent constraints on the solution of the differential part. The complex nature of singular systems causes many difficulties in their analytical and numerical treatment, particularly when there is a need for their control. Such systems arise naturally as linear approximations of system models in many applications such as electrical networks, aircraft dynamics, neutral delay systems, chemical, thermal and diffusion processes, large scale systems, robotics, biology, etc.; see [8,9,19]. As the theory of optimal control of linear systems with quadratic performance criteria is well developed, the optimal feedback control with minimum cost has been characterized by the solution of a Riccati equation. Da Prato and Ichikawa [12] showed that the optimal feedback control and the minimum cost are characterized by the solution of a Riccati equation. Solving the matrix Riccati differential equation (MRDE) is the central issue in optimal control theory.
Although parallel algorithms can compute solutions faster than sequential algorithms, there have been no reports of neural network solutions of the MRDE compared with RK method solutions. This paper focuses on the implementation of a neurocomputing approach for solving the GMRDE in order to obtain the optimal solution. The solution is found with uniform accuracy, and the trained neural network provides a compact expression for the analytical solution over the entire finite domain. An example is given which illustrates the advantage of fast and accurate solutions compared with the RK method.
This paper is organized as follows. In Section 2, the statement of the problem is given. In Section 3, the solution of the GMRDE is presented. In Section 4, a numerical example is discussed. The final conclusion section demonstrates the efficiency of the method.
2. Statement of the problem
Consider the indefinite stochastic linear singular system that can be expressed in the form

F dx(t) = [A x(t) + B u(t)] dt + D u(t) dW(t),   x(0) = x0,   t ∈ [0, tf],   (1)

where the matrix F is possibly singular, x(t) ∈ R^n is a generalized state space vector, u(t) ∈ R^m is a control variable taking values in some Euclidean space, W(t) is a Brownian motion, A ∈ R^{n×n}, B ∈ R^{n×m} and D ∈ R^{n×m} are known coefficient matrices associated with x(t) and u(t), respectively, x0 is the given initial state vector, and m ≤ n.
In order to minimize both the state and control signals of the feedback control system, a quadratic performance index is usually minimized:

J = E[ (1/2) x^T(tf) F^T S F x(tf) + (1/2) ∫_0^{tf} ( x^T(t) Q x(t) + u^T(t) R u(t) ) dt ],

where the superscript T denotes the transpose operator, S ∈ R^{n×n} and Q ∈ R^{n×n} are symmetric and positive definite (or semidefinite) weighting matrices for x(t), and R ∈ R^{m×m} is a symmetric and indefinite (in particular negative definite) weighting matrix for u(t). It will be assumed that |sF - A| ≠ 0 for some s. This assumption guarantees that any input u(t) will generate one and only one state trajectory x(t).
If all state variables are measurable, then a linear state feedback control law

u(t) = -(R + D^T K(t) D)^+ B^T k(t)

can be obtained for the system described by Eq. (1), where (R + D^T K(t) D)^+ is the Moore–Penrose pseudoinverse of (R + D^T K(t) D) and

k(t) = K(t) F x(t),

K(t) ∈ R^{n×n} being a symmetric matrix and the solution of the GMRDE.
The GMRDE (see [1]) associated with the indefinite stochastic linear singular system (1) is

F^T K'(t) F + F^T K(t) A + A^T K(t) F + Q - F^T K(t) B (R + D^T K(t) D)^+ B^T K(t) F = 0   (2)

with terminal condition (TC) K(tf) = F^T S F, R < 0 and (R + D^T K(t) D) = 0 (singular). After substituting the appropriate matrices in the above equation, it is transformed into a singular system/differential algebraic system of index one. The system can be transformed into a system of nonlinear differential equations by differentiating the algebraic equation one time. Therefore solving the GMRDE is equivalent to solving the system of nonlinear differential equations.
3. Solution of GMRDE
Consider the system of nonlinear differential equations corresponding to (2):

k'_ij(t) = φ_ij(k_ij(t)),   k_ij(tf) = A_ij   (i, j = 1, 2, ..., n).   (3)
3.1. Runge Kutta solution
Numerical integration is one of the oldest and most fascinating topics in numerical analysis. It is the process of producing a numerical value for the integral of a function over a set. Numerical integration is usually utilized when analytic techniques fail. Even if the indefinite integral of the function is available in closed form, it may involve special functions which cannot be computed easily. In such cases, numerical integration can also be used for computation. RK algorithms have always been considered among the best tools for the numerical integration of ODEs. The system (3) contains n^2 first order ODEs in n^2 variables. In particular, for n = 2 the system contains four equations. Since the matrix K(t) is symmetric and the system is singular, k12 = k21 and k22 is free (let k22 = 0). Finally the system has two equations in two variables. Hence the RK method is explained for a system of two first order ODEs in two variables:
k11(i+1) = k11(i) + (1/6)(k1 + 2k2 + 2k3 + k4),
k12(i+1) = k12(i) + (1/6)(l1 + 2l2 + 2l3 + l4),

where

k1 = h φ11(k11, k12),                     l1 = h φ12(k11, k12),
k2 = h φ11(k11 + k1/2, k12 + l1/2),       l2 = h φ12(k11 + k1/2, k12 + l1/2),
k3 = h φ11(k11 + k2/2, k12 + l2/2),       l3 = h φ12(k11 + k2/2, k12 + l2/2),
k4 = h φ11(k11 + k3, k12 + l3),           l4 = h φ12(k11 + k3, k12 + l3).
In a similar way, the original system (3) can be solved for n^2 first order ODEs.
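As a sketch, the two-variable RK scheme above can be coded directly; the right-hand sides phi11 and phi12 below are placeholders with a known exact solution, not the functions arising from the GMRDE.

```python
# Classical RK4 for two coupled first order ODEs, following the k/l
# scheme above.  phi11 and phi12 are placeholder right-hand sides:
#   k11' = k12, k12' = -k11, so from (k11, k12)(0) = (0, 1)
#   the exact solution is (sin t, cos t).
def phi11(k11, k12):
    return k12

def phi12(k11, k12):
    return -k11

def rk4_pair(k11, k12, h):
    k1 = h * phi11(k11, k12)
    l1 = h * phi12(k11, k12)
    k2 = h * phi11(k11 + k1 / 2, k12 + l1 / 2)
    l2 = h * phi12(k11 + k1 / 2, k12 + l1 / 2)
    k3 = h * phi11(k11 + k2 / 2, k12 + l2 / 2)
    l3 = h * phi12(k11 + k2 / 2, k12 + l2 / 2)
    k4 = h * phi11(k11 + k3, k12 + l3)
    l4 = h * phi12(k11 + k3, k12 + l3)
    return (k11 + (k1 + 2 * k2 + 2 * k3 + k4) / 6,
            k12 + (l1 + 2 * l2 + 2 * l3 + l4) / 6)

k11, k12, h = 0.0, 1.0, 0.01
for _ in range(100):          # 100 steps of size 0.01: t = 0 -> 1
    k11, k12 = rk4_pair(k11, k12, h)
print(k11, k12)               # close to sin(1), cos(1)
```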
3.2. Ode45 solver solution
The function ode45 is a standard solver for ordinary differential equations in MATLAB. It implements a Runge Kutta method with a variable time step for efficient computation. The solver can be used to find the solution of first order equations, second order equations and systems of coupled first order equations. Hence it is used to find the solution of Eq. (3), and the corresponding numerical results are displayed in Section 4.
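The essential idea behind a variable-step solver such as ode45 is local error control. A minimal pure-Python sketch of that idea, using step doubling with the classical RK4 rule rather than the embedded Dormand–Prince pair that ode45 actually uses:

```python
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def adaptive_rk4(f, t0, y0, t_end, tol=1e-8, h=0.1):
    """Variable-step integration of a scalar ODE y' = f(t, y):
    compare one step of size h with two steps of size h/2 and
    shrink or grow h according to the estimated local error."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        big = rk4_step(f, t, y, h)
        half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(big - half)
        if err <= tol:            # accept the (more accurate) two-step value
            t, y = t + h, half
            if err < tol / 10:    # error comfortably small: enlarge the step
                h *= 1.5
        else:                     # reject and retry with a smaller step
            h /= 2
    return y

# Test problem y' = -y, y(0) = 1, whose exact solution is exp(-t).
y1 = adaptive_rk4(lambda t, y: -y, 0.0, 1.0, 1.0)
print(y1)                         # close to exp(-1)
```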
3.3. Neural network solution
In this approach, a feedforward neural network is used to convert the trial solution of Eq. (3) into the neural network solution of (3). The trial solution is expressed as the difference of two terms (see [18]):

(k_ij)_a(t) = A_ij - t N_ij(t_j, w_ij).   (4)

The first term satisfies the TCs and contains no adjustable parameters. The second term employs a feedforward neural network whose parameters w_ij correspond to the weights of the neural architecture.
Consider a multilayer perceptron with n input units, one hidden layer with n sigmoidal units and a linear output unit. The extension to the case of more than one hidden layer can be obtained accordingly. For a given input vector, the output of the network is

N_ij = Σ_{i=1}^{n} v_i σ(z_i),   (5)

where

z_i = Σ_{j=1}^{n} w_ij t_j + u_i,

w_ij denotes the weight from input unit j to hidden unit i, v_i denotes the weight from hidden unit i to the output, u_i denotes the bias of hidden unit i, and σ(z) is the sigmoidal transfer function.
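Eq. (5) together with the expression for z_i describes an ordinary single-hidden-layer forward pass. A small sketch (all weight values below are arbitrary examples):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(t, w, u, v):
    """N = sum_i v[i] * sigmoid(z_i) with z_i = sum_j w[i][j]*t[j] + u[i],
    i.e. Eq. (5) with a linear (purelin) output unit."""
    z = [sum(wij * tj for wij, tj in zip(row, t)) + ui
         for row, ui in zip(w, u)]
    return sum(vi * sigmoid(zi) for vi, zi in zip(v, z))

# Example with 2 inputs and 3 hidden units (all values arbitrary).
w = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]   # w[i][j]: input j -> hidden i
u = [0.0, 0.1, -0.1]                          # hidden biases
v = [1.0, -0.5, 0.25]                         # hidden -> output weights
print(forward([0.3, 0.7], w, u, v))
```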
The order of differentiability of (5) is the same as that of the activation function σ(·). Since the chosen sigmoid functions are infinitely differentiable, the derivative of the network output with respect to its inputs is

∂N_ij/∂t_j = Σ_{i=1}^{n} (∂N_ij/∂z_i)(∂z_i/∂t_j) = Σ_{i=1}^{n} v_i σ'(z_i) w_ij,   (6)

where σ'(·) denotes the derivative of the sigmoid function with respect to its scalar input. Eqs. (5) and (6) constitute the network's output and gradient equations, respectively.
The error quantity to be minimized is given by

Er = Σ_{i,j=1}^{n} [ (k_ij)_a' - φ_ij((k_ij)_a) ]^2.   (7)

The neural network is trained until the error function (7) tends to zero. Whenever Er tends to zero, the trial solution (4) becomes the neural network solution of Eq. (3).
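For a single scalar equation, the trial form (4) and the discretized error (7) can be sketched as follows; the toy equation k' = -k, the weight values and the finite-difference derivative are illustrative choices, not the paper's setup:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def net(t, w, u, v):
    # single-input network: Eq. (5) with one input unit
    return sum(vi * sigmoid(wi * t + ui) for wi, ui, vi in zip(w, u, v))

def trial(t, A, w, u, v):
    # trial form of Eq. (4): the first term carries the fixed value A,
    # the second term carries the adjustable network parameters
    return A - t * net(t, w, u, v)

def error_fn(phi, A, w, u, v, ts, eps=1e-6):
    """Discretized Eq. (7): sum over collocation points ts of
    (d/dt trial - phi(trial))^2, derivative by central differences."""
    total = 0.0
    for t in ts:
        dk = (trial(t + eps, A, w, u, v)
              - trial(t - eps, A, w, u, v)) / (2 * eps)
        total += (dk - phi(trial(t, A, w, u, v))) ** 2
    return total

# Toy equation k' = -k (phi(k) = -k) with A = 1 and arbitrary weights.
w, u, v = [0.5, -0.5], [0.1, -0.1], [0.2, 0.3]
ts = [i / 10 for i in range(11)]
print(error_fn(lambda k: -k, 1.0, w, u, v, ts))
```

Training then amounts to adjusting w, u, v until this quantity tends to zero.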
3.4. Structure of the FFNN
The architecture consists of n input units, one hidden layer with n sigmoidal units and a linear output. Each neuron produces its output by computing the inner product of its input vector and its weight vector. The neural network architecture for computing N_ij is given in Fig. 1.
The weights and biases of the network are initialized by the Nguyen and Widrow rule [21,22]. The rule speeds up the training process by setting the initial weights of the hidden layer so that each hidden node is assigned its own interval at the start of training. The network is trained with each hidden node still having the freedom to adjust its interval size and location during training. The function k_ij(t) is to be approximated by the neural network over the region (0, 1), which has length one. There are n hidden units, so each hidden unit is responsible for an interval of length 1/n on the average. Since σ(w_ij t_j + u_i) is approximately linear over 0 < w_ij t_j + u_i < 1, this yields the interval -u_i/w_ij < t_j < (1 - u_i)/w_ij, which has length 1/w_ij. Therefore

1/w_ij = 1/n,   i.e.   w_ij = n.

However, it is preferable to have the intervals overlap slightly, and so it is taken as w_ij = 0.7n. Next u_i is picked so that the intervals are located randomly in the region 0 < t_j < 1. The center of an interval is located at

t_j = -u_i/w_ij = uniform random value between 0 and 1.
This good initialization can significantly speed up the learning process.
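A simplified, single-input variant of this initialization can be sketched as follows (the full Nguyen and Widrow rule also rescales for multi-dimensional inputs; the helper name is ours):

```python
import random

def init_hidden_layer(n, seed=0):
    """Simplified single-input Nguyen-Widrow-style initialization:
    hidden weights of magnitude 0.7*n (slightly overlapping intervals
    of length about 1/n), biases chosen so that the centers of the
    active intervals fall uniformly in (0, 1)."""
    rng = random.Random(seed)
    w = [0.7 * n * rng.choice([-1.0, 1.0]) for _ in range(n)]
    u = [-wi * rng.random() for wi in w]   # center: t = -u_i / w_i
    return w, u

w, u = init_hidden_layer(10)
centers = [-ui / wi for wi, ui in zip(w, u)]
print(centers)    # approximately uniform over (0, 1)
```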
The weights and biases are iteratively updated by the Levenberg–Marquardt (LM) algorithm [15] until the error function tends to zero. The LM algorithm is a variation of Newton's method designed for minimizing functions that are sums of squares of other nonlinear functions, which makes it very well suited to neural network training.

Let us begin by considering the form of Newton's method when the error function is a sum of squares. Suppose the error function Er is to be minimized with respect to the parameter vector k_ij; then Newton's method would be
Fig. 1. Neural network architecture.
(k_ij)_{i+1} = (k_ij)_i + Δk_ij,   (8)
Δk_ij = -[∇^2 Er(k_ij)]^{-1} ∇Er(k_ij),
where ∇^2 Er(k_ij) is the Hessian matrix and ∇Er(k_ij) is the gradient. If Er is assumed to be a sum of squares function

Er(k_ij) = Σ_{i=1}^{N} e_i^2(k_ij),

then it can be shown that

∇Er(k_ij) = J^T(k_ij) e(k_ij),
∇^2 Er(k_ij) = J^T(k_ij) J(k_ij) + S(k_ij),
where J(k_ij) is the Jacobian matrix

J(k_ij) = [ ∂e_1/∂(k_ij)_1   ∂e_1/∂(k_ij)_2   ...   ∂e_1/∂(k_ij)_n
            ∂e_2/∂(k_ij)_1   ∂e_2/∂(k_ij)_2   ...   ∂e_2/∂(k_ij)_n
            ...
            ∂e_N/∂(k_ij)_1   ∂e_N/∂(k_ij)_2   ...   ∂e_N/∂(k_ij)_n ]
Fig. 2. Flow chart of the neural network training: feed the input vector; initialize the weight matrix and bias using the Nguyen and Widrow rule; compute z_i = Σ_j w_ij t_j + u_i and σ(z_i); initialize the weight vector v_i; calculate N_ij = Σ_i v_i σ(z_i) and purelin(N_ij); compute the error function Er; if Er ≈ 0, stop training, otherwise update the weight matrix using the LM algorithm.
and

S(k_ij) = Σ_{i=1}^{N} e_i(k_ij) ∇^2 e_i(k_ij).

For the Gauss–Newton method, it is assumed that S(k_ij) ≈ 0 and the update (8) becomes

Δk_ij = -[J^T(k_ij) J(k_ij)]^{-1} J^T(k_ij) e(k_ij).
The Levenberg–Marquardt modification of the Gauss–Newton method is

(k_ij)_{i+1} = (k_ij)_i - [J^T(k_ij) J(k_ij) + μI]^{-1} J^T(k_ij) e(k_ij).

The parameter μ is multiplied by some factor β whenever a step would result in an increased Er(k_ij). When a step reduces Er(k_ij), μ is divided by β. Notice that when μ is large the algorithm becomes steepest descent, while for small μ it becomes Gauss–Newton. The LM algorithm is among the fastest methods to train moderate-sized feedforward neural networks. The neural network algorithm was implemented in MATLAB on a PC (CPU 1.7 GHz) for the neurocomputing approach to solve the GMRDE (2) for the stochastic linear singular system (1).
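For a single scalar parameter the LM update collapses to Δ = -J^T e/(J^T J + μ), which makes the μ-adaptation easy to see. A self-contained sketch on a toy one-parameter curve fit (the problem and constants are illustrative):

```python
import math

# Fit the scalar parameter a in y = exp(a*x) to synthetic data with
# true value a = 0.5, using scalar Levenberg-Marquardt:
#     delta = -(J'J + mu)^(-1) J'e
xs = [i / 10 for i in range(11)]
ys = [math.exp(0.5 * x) for x in xs]

def residuals(a):
    return [math.exp(a * x) - y for x, y in zip(xs, ys)]

def sum_sq(e):
    return sum(ei * ei for ei in e)

a, mu, beta = 0.0, 1.0, 10.0
for _ in range(50):
    e = residuals(a)
    J = [x * math.exp(a * x) for x in xs]      # J_i = de_i / da
    JtJ = sum(j * j for j in J)
    Jte = sum(j * ei for j, ei in zip(J, e))
    delta = -Jte / (JtJ + mu)
    if sum_sq(residuals(a + delta)) < sum_sq(e):
        a, mu = a + delta, mu / beta           # success: toward Gauss-Newton
    else:
        mu *= beta                             # failure: toward steepest descent
print(a)                                       # converges to 0.5
```

Large μ damps the step toward steepest descent; small μ recovers the Gauss–Newton step, exactly as in the matrix case above.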
Algorithm. Neural network algorithm
Step 1. Feed the input vector t_j.
Step 2. Initialize the randomized weight matrix w_ij and bias u_i using the Nguyen and Widrow rule.
Step 3. Compute z_i = Σ_{j=1}^{n} w_ij t_j + u_i.
Step 4. Pass z_i into the n sigmoidal functions.
Step 5. Initialize the weight vector v_i from the hidden units to the output unit.
Step 6. Calculate N_ij = Σ_{i=1}^{n} v_i σ(z_i).
Step 7. Compute the purelin function of N_ij.
Step 8. Compute the value of the error function Er.
Step 9. If Er is zero, stop training; otherwise update the weight matrix using the LM algorithm.
Step 10. Repeat the neural network training until the error function

Er = Σ_{i,j=1}^{n} [ (k_ij)_a' - φ_ij((k_ij)_a) ]^2 → 0.
The scheme of neural network training is also given in Fig. 2.
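Putting the steps together, a compact end-to-end sketch: a tiny network trained through an initial-condition variant of the trial form to solve the toy equation y' = -y, y(0) = 1. Plain gradient descent with step halving stands in for the LM update to keep the sketch short; all names and constants are illustrative.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 5                                       # hidden units
rng = random.Random(1)
p = [rng.uniform(-0.5, 0.5) for _ in range(3 * H)]   # packed [w | u | v]

def net(t, p):
    w, u, v = p[:H], p[H:2 * H], p[2 * H:]
    return sum(vi * sigmoid(wi * t + ui) for wi, ui, vi in zip(w, u, v))

def trial(t, p):                            # satisfies y(0) = 1 by construction
    return 1.0 + t * net(t, p)

TS = [i / 10 for i in range(11)]            # collocation points in [0, 1]

def Er(p, eps=1e-5):
    # squared residual of y' = -y at the collocation points
    total = 0.0
    for t in TS:
        dy = (trial(t + eps, p) - trial(t - eps, p)) / (2 * eps)
        total += (dy + trial(t, p)) ** 2
    return total

def grad(p, eps=1e-6):                      # finite-difference gradient of Er
    g = []
    for i in range(len(p)):
        up = list(p); up[i] += eps
        dn = list(p); dn[i] -= eps
        g.append((Er(up) - Er(dn)) / (2 * eps))
    return g

e0, lr = Er(p), 0.1
for _ in range(100):                        # descent with step halving
    g = grad(p)
    ep = Er(p)
    for _ in range(40):
        q = [pi - lr * gi for pi, gi in zip(p, g)]
        if Er(q) <= ep:                     # accept only improving steps
            p, lr = q, lr * 1.2
            break
        lr /= 2
print(e0, Er(p))                            # training error decreases
```

With more iterations, or with the LM update of Section 3.4 in place of plain descent, the trial solution should approach exp(-t) on [0, 1].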
Table 1
Solutions of MRDE

t          Runge Kutta solution         Neural network solution
           k11         k12              k11         k12
0.000000   0.527475    -0.736263        0.519429    -0.683389
0.100000   0.533558    -0.733221        0.524146    -0.681212
0.200000   0.540988    -0.729506        0.530007    -0.678507
0.300000   0.550063    -0.724969        0.537291    -0.675145
0.400000   0.561146    -0.719427        0.546343    -0.670966
0.500000   0.574684    -0.712658        0.557592    -0.665774
0.600000   0.591219    -0.704390        0.571572    -0.659321
0.700000   0.611415    -0.694293        0.588946    -0.651302
0.800000   0.636082    -0.681959        0.610537    -0.641336
0.900000   0.666210    -0.666895        0.637370    -0.628951
1.000000   0.703009    -0.648495        0.670716    -0.613560
1.100000   0.747955    -0.626022        0.712156    -0.594432
1.200000   0.802852    -0.598574        0.763655    -0.570661
1.300000   0.869904    -0.565048        0.827656    -0.541120
1.400000   0.951800    -0.524100        0.907193    -0.504408
1.500000   1.051828    -0.474086        1.006037    -0.458785
1.600000   1.174002    -0.412999        1.128874    -0.402087
1.700000   1.323225    -0.338387        1.281530    -0.331625
1.800000   1.505486    -0.247257        1.471242    -0.244059
1.900000   1.728100    -0.135950        1.707006    -0.135238
2.000000   2.000000     0.000000        2.000000     0.000000
4. Numerical example
Consider the optimal control problem: minimize

J = E[ (1/2) x^T(tf) F^T S F x(tf) + (1/2) ∫_0^{tf} ( x^T(t) Q x(t) + u^T(t) R u(t) ) dt ]

subject to the stochastic linear singular system

F dx(t) = [A x(t) + B u(t)] dt + D u(t) dW(t),   x(0) = x0,   D ≠ 0,

where

S = [ 2 0 ; 0 0 ],   F = [ 1 0 ; 0 0 ],   A = [ 1 2 ; 0 4 ],   B = [ 0 ; 1 ],   R = -1,   Q = [ 1 0 ; 0 0 ],   (R + D^T K(t) D) = 0.
Table 2
Solutions of MRDE

t          Neural network solution      Ode45 solver solution
           k11         k12              k11         k12
0.000000   0.527475    -0.736263        0.527473    -0.736263
0.100000   0.533558    -0.733221        0.533556    -0.733221
0.200000   0.540988    -0.729506        0.540985    -0.729507
0.300000   0.550063    -0.724969        0.550060    -0.724969
0.400000   0.561146    -0.719427        0.561143    -0.719428
0.500000   0.574684    -0.712658        0.574681    -0.712659
0.600000   0.591219    -0.704390        0.591215    -0.704392
0.700000   0.611415    -0.694293        0.611411    -0.694294
0.800000   0.636082    -0.681959        0.636077    -0.681961
0.900000   0.666210    -0.666895        0.666206    -0.666896
1.000000   0.703009    -0.648495        0.703002    -0.648498
1.100000   0.747955    -0.626022        0.747950    -0.626024
1.200000   0.802852    -0.598574        0.802843    -0.598578
1.300000   0.869904    -0.565048        0.869897    -0.565051
1.400000   0.951800    -0.524100        0.951789    -0.524105
1.500000   1.051828    -0.474086        1.051821    -0.474089
1.600000   1.174002    -0.412999        1.173988    -0.413005
1.700000   1.323225    -0.338387        1.323219    -0.338390
1.800000   1.505486    -0.247257        1.505470    -0.247264
1.900000   1.728100    -0.135950        1.728096    -0.135951
2.000000   2.000000     0.000000        2.000000     0.000000
Fig. 3. Solution curve for k11 (RK and NN solutions, t from 0 to 2).
Fig. 4. Solution curve for k12 (RK and NN solutions, t from 0 to 2).
The numerical implementation is carried out by taking tf = 2 for the GMRDE associated with the above indefinite stochastic linear singular system. When the appropriate matrices are substituted in Eq. (2), the GMRDE is transformed into a system of differential equations in k11 and k12. In this problem, the value k22 of the symmetric matrix K(t) is free, and we let k22 = 0. Then the optimal control of the system can be found from the solution of the GMRDE. The numerical solutions of the GMRDE calculated using the RK method and the neural network approach are displayed in Table 1. A multilayer perceptron having one hidden layer with 10 hidden units and one linear output unit is used. The sigmoid activation function of each hidden unit is σ(t) = 1/(1 + e^{-t}). The numerical solutions of the GMRDE calculated using the ode45 solver and the neural network approach are displayed in Table 2. The neural network solution is very close to the ode45 solver solution.
4.1. Solution curves using neural networks
The GMRDE solutions obtained by the neural network and by the traditional RK method are displayed in Figs. 3 and 4. The numerical values of the required solution are listed in Table 1. The computation time for the neural network solution is 1.6 s, whereas that of the RK method is 2.2 s. Hence the neural solution is faster than the RK method.
5. Conclusion
The solution of the generalized matrix Riccati differential equation for the indefinite stochastic linear quadratic singular system is obtained by a neural network approach. The neurocomputing approach can yield a solution of the GMRDE significantly faster than standard solution techniques such as the RK method. The neural network solution is very close to the ode45 solver solution and qualitatively better than the solution of the fourth order Runge Kutta method. A numerical example is given to illustrate the derived results. The long computation time of finding the solution of the GMRDE is avoided by this neuro approach. The efficient approximations of the solution of the GMRDE are carried out in MATLAB on a PC (CPU 1.7 GHz).
Acknowledgements
The authors are very thankful to the referees for their valuable comments and suggestions for improving this manuscript. The work of the authors was supported by UGC-SAP Grant No. F.510/6/DRS/2004(SAP-1), New Delhi, India.
References
[1] M. Ait Rami, J.B. Moore, Xun Yu Zhou, Indefinite stochastic linear quadratic control and generalized differential Riccati equation, SIAM J. Control Optim. 40 (4) (2001) 1296–1311.
[2] M. Athans, Special issue on the linear quadratic Gaussian problem, IEEE Trans. Automat. Control AC-16 (1971) 527–869.
[3] P. Balasubramaniam, J. Abdul Samath, N. Kumaresan, A. Vincent Antony Kumar, Solution of matrix Riccati differential equation for the linear quadratic
singular system using neural networks, Appl. Math. Comput. 182 (2006) 1832–1839.
[4] P. Balasubramaniam, J. Abdul Samath, N. Kumaresan, Optimal control for nonlinear singular systems with quadratic performance using neural
networks, Appl. Math. Comput. 187 (2007) 1535–1543.
[5] P. Balasubramaniam, J. Abdul Samath, N. Kumaresan, A. Vincent Antony Kumar, Neuro approach for solving matrix Riccati differential equation, Neural
Parallel Sci. Comput. 15 (2007) 125–135.
[6] A. Bensoussan, Lecture on stochastic control part I, in: Nonlinear and Stochastic Control, Lecture Notes in Math, 972, Springer-Verlag, Berlin, 1983, pp.
1–39.
[7] F. Bucci, L. Pandolfi, The regulator problem with indefinite quadratic cost for boundary control systems: the finite horizon case, Syst. Control Lett. 39
(2000) 79–86.
[8] S.L. Campbell, Singular Systems of Differential Equations, Pitman, Marshfield, 1980.
[9] S.L. Campbell, Singular Systems of Differential Equations II, Pitman, Marshfield, 1982.
[10] S.P. Chen, X.J. Li, X.Y. Zhou, Stochastic linear quadratic regulators with indefinite control weight costs, SIAM J. Control Optim. 36 (5) (1998) 1685–1702.
[11] Chiu H. Choi, A survey of numerical methods for solving matrix Riccati differential equation, IEEE Proc. Southeastcon (1990) 696–700.
[12] G. Da Prato, A. Ichikawa, Quadratic control for linear periodic systems, Appl. Math. Optim. 18 (1988) 39–66.
[13] M.H.A. Davis, Linear Estimation and Stochastic Control, Chapman and Hall, London, 1977.
[14] S.W. Ellacott, Aspects of the numerical analysis of neural networks, Acta Numer. 5 (1994) 145–202.
[15] M.T. Hagan, M. Menhaj, Training feedforward networks with the Marquardt algorithm, IEEE Trans. Neural Networ. 5 (6) (1994) 989–993.
[16] F.M. Ham, E.G. Collins, A neurocomputing approach for solving the algebraic matrix Riccati equation, Proc. IEEE Int. Conf. Neural Networ. 1 (1996) 617–
622.
[17] A. Karakasoglu, S.L. Sudharsanan, M.K. Sundareshan, Identification and decentralized adaptive control using neural networks with application to
robotic manipulators, IEEE Trans. Neural Networ. 4 (1993) 919–930.
[18] I.E. Lagaris, A. Likas, D.I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Trans. Neural Networ. 9 (1998)
987–1000.
[19] F.L. Lewis, A survey of linear singular systems, Circ. Syst. Sig. Proc. 5 (1) (1986) 3–36.
[20] K.S. Narendra, K. Parathasarathy, Identification and control of dynamical systems using neural networks, IEEE Trans. Neural Networ. 1 (1990) 4–27.
[21] D. Nguyen, B. Widrow, Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights, Proc. Int. Joint Conf.
Neural Networ. III (1990) 21–26.
[22] A.P. Paplinski, Lecture Notes on Feedforward Multilayer Neural Networks, NNet (L.5) 2004.
[23] J. Wang, G. Wu, A multilayer recurrent neural network for solving continuous time algebraic Riccati equations, Neural Networ. 11 (1998) 939–950.
[24] P. De Wilde, Neural Network Models, second ed., Springer-Verlag, London, 1997.
[25] W.M. Wonham, On a matrix Riccati equation of stochastic control, SIAM J. Control Optim. 6 (1968) 681–697.
[26] J. Zhu, K. Li, An iterative method for solving stochastic Riccati differential equations for the stochastic LQR problem, Optim. Methods Softw. 18 (2003) 721–732.