Optimization techniques for autonomous underwater vehicles: a practical point of view

M. Chyba, T. Haberkorn
Department of Mathematics, University of Hawaii, Honolulu, HI 96822
Email: [email protected], [email protected]
Research supported in part by NSF grant DMS-030641.

Abstract— The main focus of this paper is to design time efficient trajectories for underwater vehicles. The equations of motion are derived from Lagrangian mechanics and we assume bounds on the controls. We develop a numerical algorithm based on the time optimal trajectories. Our goal is to design controls that are piecewise constant with a small number of switchings and such that the duration for the vehicle to achieve the trajectory differs from the time optimal one by an amount that is negligible in terms of the application.

I. INTRODUCTION

In the exploration of our world's oceans, there comes a time when human controlled vehicles are not up to the task. Whether it be deep ocean survey or long term monitoring projects, Autonomous Underwater Vehicles (AUVs) are the natural alternative. One of the essential components for an underwater vehicle to be autonomous is the ability to design its own trajectories; this is known as the motion planning problem. Moreover, the goal is to find trajectories that are efficient with respect to some criterion. Indeed, we may want our vehicle to stay down longer for monitoring, or finish a search faster to save money. All of these questions can be addressed through optimal control.

Much research has been devoted to AUV propulsion, robustness, sensors, and power sources. With the plethora of sizes and shapes, combined with the youth of the AUV field and the complexity of hydrodynamics, accurate models of AUVs are hard to come by. Thus, among all AUV research areas, the area of optimal control is one of the least developed. Almost all existing controllers have the disadvantage that they do not take into account minimization criteria such as time and energy.

In this paper, we address the problem of finding the time minimal trajectories for underwater vehicles; it is an optimal control problem (OCP). Previous results were obtained in [2], [3], [4], [5], [6], [7] about the structure of time minimal trajectories and the role of the singular trajectories. However, due to the complexity of the equations we need to supplement our theoretical study with numerical computations. This is the subject of this paper.

A first way to obtain the desired trajectories numerically is to discretize our OCP and to solve the resulting optimization problem, whose unknowns are the discretized state and control of the system. However, the obtained solutions are usually not feasible in terms of practical implementation since their control structures have a large number of switchings. Another drawback of this method is that the resulting optimization problem is time consuming to solve, since it needs a large number of unknowns in order to reach the desired accuracy. For these reasons, we propose another approach that consists of looking for control strategies that are time optimal with respect to a given switching structure. To accomplish that goal, we fix the number of switching times and rewrite the OCP as an optimization problem whose unknowns are the switching times and the values of the constant arcs for each component of the control. This method has two main advantages.
The first one is that such an optimization problem has a small dimension and that we obtain a much better accuracy, since we can use a high order integration scheme to integrate the dynamical system of the model. The second, and main, advantage is that the obtained control strategies are easier to implement on a real vehicle than the ones resulting from solving the discretized OCP. Indeed, the former are just concatenations of constant arcs for the control, and we can even fix a relatively small number of switching times so that real thrusters can follow these strategies more efficiently. Furthermore, it is also possible to add some smoothing at the switching times so that the junctions between constant thrust arcs become more regular.

II. MODEL

In this section, using Lagrangian mechanics, we derive the equations of motion for marine vehicles in 6 degrees of freedom in the body-fixed frame. Let us first introduce some notation. We denote by $\eta = (x, y, z, \phi, \theta, \psi)^t$ the position and orientation of the vehicle with respect to the earth-fixed reference frame, the coordinates $\phi, \theta, \psi$ being the Euler angles for the body frame. The coordinates corresponding to the translational and rotational velocities in the body frame are $\nu = (u, v, w, p, q, r)^t$. See Figure 1. We define $\eta_1 = (x, y, z)^t$, $\eta_2 = (\phi, \theta, \psi)^t$ and $\nu_1 = (u, v, w)^t$, $\nu_2 = (p, q, r)^t$. We then have the following relation
\[
\dot\eta = J(\eta)\nu \tag{1}
\]
where $J$ represents the linear and angular velocity transformations.

Fig. 1. Earth-fixed and body-fixed reference frames.

Let $L = T - V$ be the Lagrangian, with $T = T_{RB} + T_A$, respectively the rigid-body kinetic energy and the fluid kinetic energy. The potential $V$ is defined implicitly by
\[
\frac{\partial V}{\partial \eta} = J^{-1}(\eta)\, g(\eta) \tag{2}
\]
where $g(\eta)$ represents the restoring forces (gravitational forces and moments). Let us recall first the quasi-Lagrange equations (a generalization of Kirchhoff's equations when considering a non zero potential):
\[
\frac{d}{dt}\Big(\frac{\partial L}{\partial \nu_1}\Big) + \nu_2 \times \frac{\partial L}{\partial \nu_1} - J_1^T(\eta_2)\frac{\partial L}{\partial \eta_1} = \tau_1 \tag{3}
\]
\[
\frac{d}{dt}\Big(\frac{\partial L}{\partial \nu_2}\Big) + \nu_2 \times \frac{\partial L}{\partial \nu_2} + \nu_1 \times \frac{\partial L}{\partial \nu_1} - J_2^T(\eta_2)\frac{\partial L}{\partial \eta_2} = \tau_2 \tag{4}
\]
where $\tau = (\tau_1, \tau_2)$ is the vector of control inputs. Let us determine these equations more precisely. Since $\partial T/\partial \eta = 0$, we have by construction that $-J(\eta)\,\partial L/\partial \eta = g(\eta)$. Moreover, we have that $\partial L/\partial \nu = M\nu$ and $\frac{d}{dt}\frac{\partial L}{\partial \nu} = M\dot\nu$. Defining the Coriolis and centripetal matrix by
\[
C(\nu)\nu = \begin{pmatrix} \nu_2 \times \dfrac{\partial T}{\partial \nu_1} \\[6pt] \nu_2 \times \dfrac{\partial T}{\partial \nu_2} + \nu_1 \times \dfrac{\partial T}{\partial \nu_1} \end{pmatrix}
\]
we can rewrite equations (3) and (4) as
\[
M\dot\nu + C(\nu)\nu + D(\nu)\nu + g(\eta) = \tau \tag{5}
\]
where the term $D(\nu)\nu$ accounts for the damping forces. We obtain:
\[
\dot\nu = -M^{-1}(C + D)\nu - M^{-1}g + M^{-1}\tau \tag{6}
\]
\[
\dot\eta = J(\eta)\nu \tag{7}
\]
where $M$ is the inertia matrix, $C$ the Coriolis and centripetal matrix, $D$ the damping matrix and the variable $\tau$ represents the control. See for instance [8] for more details. We can rewrite equations (6) and (7) as an affine control system:
\[
\dot\chi(t) = f(\chi(t)) + G(\chi(t))\tau(t), \qquad \chi(t) \in \mathbb{R}^{12} \tag{8}
\]
where $\chi = (\eta, \nu)$, the drift is
\[
f(\chi) = \begin{pmatrix} J(\eta)\nu \\ -M^{-1}(C + D)\nu - M^{-1}g \end{pmatrix} \tag{9}
\]
and the control vector fields are the columns of
\[
G(\chi) = \begin{pmatrix} 0 \\ M^{-1} \end{pmatrix}. \tag{10}
\]
In the sequel we will call $\chi$ the state of the system, while the variable $\eta$ represents a configuration of the system.

Constraints on the control. The controls represent the thrust, and we assume the following bounds:
\[
U = \{\tau \in \mathbb{R}^6;\ \alpha_i \le \tau_i \le \beta_i,\ \alpha_i < 0 < \beta_i,\ i = 1, \cdots, 6\}. \tag{11}
\]
A control is said to be admissible if it is a measurable bounded function $\tau$ such that $\tau(t) \in U$ for a.e. $t$.
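For illustration only, the right hand side of (8) can be evaluated numerically as in the following sketch; the function names and the choice of passing C, D, g and J as user-supplied callables are our own assumptions and do not correspond to any code used for the results of this paper.

import numpy as np

def chi_dot(chi, tau, M, C, D, g, J):
    """Right hand side of the affine control system (8): chi_dot = f(chi) + G(chi) tau.

    chi = (eta, nu) in R^12, tau in R^6 (thrusts).
    M: constant 6x6 inertia matrix,
    C, D: callables returning the 6x6 matrices C(nu) and D(nu),
    g: callable returning the 6-vector of restoring forces g(eta),
    J: callable returning the 6x6 velocity transformation J(eta).
    """
    eta, nu = chi[:6], chi[6:]
    Minv = np.linalg.inv(M)
    # drift f(chi), equation (9)
    f_eta = J(eta) @ nu
    f_nu = -Minv @ ((C(nu) + D(nu)) @ nu) - Minv @ g(eta)
    # control part G(chi) tau with G = (0, M^{-1})^t, equation (10)
    return np.concatenate([f_eta, f_nu + Minv @ tau])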
Let us now describe the parameters used for the numerical computations. The locations of the center of buoyancy and of the center of gravity are respectively $(x_B, y_B, z_B) = (0, 0, 8.1\times 10^{-4})$ m and $(x_G, y_G, z_G) = (0, 0, 5.64\times 10^{-4})$ m. The inertia matrix $M$ takes the following form:
\[
M = \begin{pmatrix}
\tfrac{3}{2}m & 0 & 0 & 0 & m z_G & 0\\
0 & \tfrac{3}{2}m & 0 & -m z_G & 0 & 0\\
0 & 0 & \tfrac{3}{2}m & 0 & 0 & 0\\
0 & -m z_G & 0 & I_x & -I_{xy} & -I_{xz}\\
m z_G & 0 & 0 & -I_{xy} & I_y & -I_{yz}\\
0 & 0 & 0 & -I_{xz} & -I_{yz} & I_z
\end{pmatrix} \tag{12}
\]
where $m = 120.9091$ kg is the mass of the AUV and $I_x = 6.8$, $I_y = 7.2$, $I_z = 9.1$ kg$\cdot$m$^2$, $I_{xy} = I_{xz} = I_{yz} = 0$ are the moments and products of inertia. The restoring forces and moments vector $g(\eta)$ is taken to be:
\[
g(\eta) = \begin{pmatrix}
(W_g - B)\sin\theta\\
-(W_g - B)\cos\theta\sin\phi\\
-(W_g - B)\cos\theta\cos\phi\\
(z_G W_g - z_B B)\cos\theta\sin\phi\\
(z_G W_g - z_B B)\sin\theta\\
0
\end{pmatrix}
\]
where $W_g$ is the weight of the vehicle and $B = W_g + 38$ N is the buoyancy force on the vehicle. The damping matrix $D(\nu)$ is of the form $-\mathrm{diag}(X_u + X_{uu}|u|,\ Y_v + Y_{vv}|v|,\ Z_w + Z_{ww}|w|,\ K_p + K_{pp}|p|,\ M_q + M_{qq}|q|,\ N_r + N_{rr}|r|)$, where $(X_u, X_{uu}) = (0, -150)$, $(Y_v, Y_{vv}) = (0, -150)$ and $(Z_w, Z_{ww}) = (0, -150)$ are the drag coefficients for pure surge, sway and heave respectively, and $(K_p, K_{pp}) = (-50, -100)$, $(M_q, M_{qq}) = (-50, -100)$, $(N_r, N_{rr}) = (-50, -100)$ are the drag coefficients for pure roll, pitch and yaw respectively. The added mass coefficients are $X_{\dot u} = Y_{\dot v} = Z_{\dot w} = -m/2$ and $K_{\dot p} = M_{\dot q} = N_{\dot r} = 0$. Finally, the Coriolis and centripetal matrix $C(\nu)$ is determined by $M$ and $\nu$; besides entries of the form $\pm m z_G r$, $\pm(m w - Z_{\dot w} w)$, $\pm(m v - Y_{\dot v} v)$ and $\pm(m u - X_{\dot u} u)$, its coefficients are
\[
\begin{aligned}
C_{1,6} &= m(z_G p - v) + Y_{\dot v} v, & C_{3,4} &= m(v - z_G p) - Y_{\dot v} v,\\
C_{3,5} &= X_{\dot u} u - m(u + z_G q), & C_{4,3} &= m(z_G p - v) + Y_{\dot v} v,\\
C_{4,5} &= I_z r - I_{yz} q - I_{xz} p - N_{\dot r} r, & C_{4,6} &= I_{yz} r + I_{xy} p - I_y q + M_{\dot q} q,\\
C_{5,3} &= m(u + z_G q) - X_{\dot u} u, & C_{5,4} &= I_{yz} q + I_{xz} p - I_z r + N_{\dot r} r,\\
C_{5,6} &= I_{xz} r - I_{xy} q + I_x p - K_{\dot p} p, & C_{6,4} &= I_y q - I_{yz} r - I_{xy} p - M_{\dot q} q,\\
C_{6,5} &= I_{xy} q - I_x p - I_{xz} r + K_{\dot p} p.
\end{aligned}
\]
The domain of control is assumed to be $U = [-20, 20] \times [-20, 50] \times [-5, 5]^3$ Newton.
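As an illustrative sketch (not the implementation used for the computations reported here), the numerical values above can be assembled as follows; the gravitational acceleration 9.81 m/s^2 used to form the weight, as well as the helper names, are assumptions.

import numpy as np

m = 120.9091                     # mass (kg)
zG, zB = 5.64e-4, 8.1e-4         # centers of gravity and buoyancy (m)
Ix, Iy, Iz = 6.8, 7.2, 9.1       # moments of inertia (kg m^2); products of inertia are zero
Wg = m * 9.81                    # weight (N); g = 9.81 m/s^2 is an assumption
B = Wg + 38.0                    # buoyancy (N)

# Inertia matrix (12): rigid body terms plus an added mass m/2 on the translational axes.
M = np.array([
    [1.5 * m, 0.0,      0.0,      0.0,    m * zG, 0.0],
    [0.0,     1.5 * m,  0.0,     -m * zG, 0.0,    0.0],
    [0.0,     0.0,      1.5 * m,  0.0,    0.0,    0.0],
    [0.0,    -m * zG,   0.0,      Ix,     0.0,    0.0],
    [m * zG,  0.0,      0.0,      0.0,    Iy,     0.0],
    [0.0,     0.0,      0.0,      0.0,    0.0,    Iz],
])

def D(nu):
    """Damping matrix -diag(X_u + X_uu|u|, ..., N_r + N_rr|r|) with the drag coefficients above."""
    lin = np.array([0.0, 0.0, 0.0, -50.0, -50.0, -50.0])
    quad = np.array([-150.0, -150.0, -150.0, -100.0, -100.0, -100.0])
    return -np.diag(lin + quad * np.abs(np.asarray(nu)))

def g_restoring(eta):
    """Restoring forces and moments vector g(eta)."""
    phi, theta = eta[3], eta[4]
    return np.array([
        (Wg - B) * np.sin(theta),
        -(Wg - B) * np.cos(theta) * np.sin(phi),
        -(Wg - B) * np.cos(theta) * np.cos(phi),
        (zG * Wg - zB * B) * np.cos(theta) * np.sin(phi),
        (zG * Wg - zB * B) * np.sin(theta),
        0.0,
    ])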
III. MAXIMUM PRINCIPLE

Let $\chi_0$, $\chi_T$ be initial and final states. Assume that there exists an admissible time-optimal control $\tau : [0, T] \to U$ such that the corresponding trajectory $\chi(.)$, solution of (8), steers the system from $\chi_0$ to $\chi_T$. The maximum principle, [1], implies that there exists an absolutely continuous vector $\lambda : [0, T] \to \mathbb{R}^{12}$, $\lambda(t) \ne 0$ for all $t$, such that the following conditions hold almost everywhere:
\[
\dot\chi_j = \frac{\partial H}{\partial \lambda_j}(\chi, \lambda, \tau), \qquad \dot\lambda_j = -\frac{\partial H}{\partial \chi_j}(\chi, \lambda, \tau) \tag{13}
\]
for $j = 1, \cdots, 12$, where
\[
H(\chi, \lambda, \tau) = \lambda^t f(\chi) + \sum_{i=1}^{6} \lambda^t g_i(\chi)\,\tau_i
\]
is the Hamiltonian function and the $g_i$ are the column vectors of $G$. Furthermore, the maximum condition holds:
\[
H(\chi(t), \lambda(t), \tau(t)) = \max_{\gamma \in U} H(\chi(t), \lambda(t), \gamma). \tag{14}
\]
Moreover, the maximum of the Hamiltonian is constant along the solutions of (13) and must satisfy $H(\chi(t), \lambda(t), \tau(t)) = \lambda_0$, $\lambda_0 \ge 0$. A triple $(\chi, \lambda, \tau)$ that satisfies the maximum principle is called an extremal, and the vector function $\lambda(.)$ is called the adjoint vector.

The maximum condition (14), along with the control domain (11), is equivalent to, almost everywhere:
\[
\tau_i(t) = \alpha_i \ \text{if}\ \varphi_i(t) < 0 \quad \text{and} \quad \tau_i(t) = \beta_i \ \text{if}\ \varphi_i(t) > 0 \tag{15}
\]
for $i = 1, \cdots, 6$, where the $\varphi_i(.)$ are called the switching functions. The switching functions are absolutely continuous functions defined as follows ($i = 1, \cdots, 6$):
\[
\varphi_i : [0, T] \to \mathbb{R}, \qquad \varphi_i(t) = \lambda^t(t)\, g_i(\chi(t)). \tag{16}
\]
According to (15), the zeros of the switching functions define the structure of the time-optimal control $\tau(.)$. If $\varphi_i$ is zero on a nontrivial subinterval of $[0, T]$, the corresponding extremal is called $\tau_i$-singular, and the component $\tau_i$ is then called singular on the corresponding subinterval. The maximum principle implies that if $\varphi_i(t) \ne 0$ for almost all $t \in [0, T]$, the component $\tau_i$ of the control is bang-bang, which means that $\tau_i$ only takes values in $\{\alpha_i, \beta_i\}$ for almost every $t \in [0, T]$. A time $t_s \in (0, T)$ connecting a singular and a bang arc, or two bang arcs (with different values), is called a switching time.

To study the zeros of the switching functions and the existence of singular extremals we analyze their derivatives. It follows from $\varphi_i(t) = \lambda^t(t) g_i$ that almost everywhere in $t$:
\[
\dot\varphi_i = -\lambda^t [f, g_i](\chi) - \sum_{j=1}^{n} \lambda^t [g_j, g_i](\chi)\,\tau_j. \tag{17}
\]
Since the control vector fields are given by the inverse of the inertia matrix, they are constant. A consequence is that $\dot\varphi_i$ is an absolutely continuous function given by:
\[
\dot\varphi_i(t) = -\lambda^t(t) [f, g_i](\chi(t)). \tag{18}
\]
Notice that this is a property shared by the class of controlled mechanical systems whose Lagrangian is of the form kinetic minus potential energy. Using equations (9) and (10) we have that
\[
[f, g_i] = \begin{pmatrix} J(\eta) M_i^{-1} \\[4pt] \dfrac{\partial}{\partial \nu}\big(-M^{-1}(C + D)\nu\big)\, M_i^{-1} \end{pmatrix} \tag{19}
\]
where $M_i^{-1}$ denotes the $i$th column of the inverse of the inertia matrix. The second derivative is given by:
\[
\ddot\varphi_i = \lambda^t \mathrm{ad}^2_f g_i + \sum_{j=1}^{n} \lambda^t [g_j, [f, g_i]](\chi)\,\tau_j \tag{20}
\]
where $\mathrm{ad}^2_f g_i$ stands for $[f, [f, g_i]]$. Notice that since the first $n$ components of the vector field $[f, g_i]$ do not depend on $\nu$ but only on $\eta$, the first $n$ components of $[g_j, [f, g_i]]$ are zero. As a consequence, if we assume that the damping matrix is linear in $\nu$, the vector fields $[g_j, [f, g_i]]$ can be written as linear combinations of the $g_i$.

Unless we make additional assumptions on the matrices involved in the equations of motion of the vehicle, which would result in an unrealistic model, the complexity of the equations is such that an analytical study is very difficult. Partial results can be derived but we cannot expect general results such as a uniform bound on the number of switchings. Moreover, our major goal is the implementation of our techniques on an existing vessel. We then have to take into account practical constraints such as the capacity of the thrusters to follow a prescribed control and the ability of the vehicle to realize a given path. It is a well known fact that optimal trajectories can involve singular arcs or have infinitely many switchings. In our problem, the complexity of the optimal structure has already been outlined on a simplified model, for which we constructed a semi-canonical form of our system in a neighborhood of singular trajectories and deduced the existence of chattering extremals, see [2]. This is a consequence of the fact that a $\tau_i$-singular extremal, with $i = 4$, $5$ or $6$, is of order 2. It means that the lowest order derivative $\frac{d^{2q}}{dt^{2q}}\varphi_i$ in which $\tau_i$ appears explicitly with a nonzero coefficient is given by $2q = 4$. Hence, to compute a singular control $\tau_i$, we must differentiate the corresponding switching function $\varphi_i$ four times and solve the resulting equation for the control. Such a control $\tau_i$ is then computed as a feedback of the state $\chi$, of the adjoint vector $\lambda$ and of the other components of the control (assuming they are constant). For this reason, we developed a numerical method taking into account the criterion to minimize as well as the ability of the vehicle and the thrusters to follow a prescribed path.

IV. OPTIMIZATION METHOD

There are two kinds of methods to solve an optimal control problem numerically: indirect and direct methods. Due to the form of our problem, the use of indirect methods is very complicated without knowing the structure of the solution in advance. We therefore use a direct method. It consists of discretizing (OCP) in order to transform it into a nonlinear optimization problem. We call this new problem (NLP). The unknowns of (NLP) are the discretized states and controls. Its nonlinear constraints come from the discretization of the dynamics (8) according to a fixed step integration scheme, here a Heun scheme. Additional constraints are the ones on the final state and the upper and lower bounds on the controls. To solve (NLP), we write it using the AMPL modeling language, see [9], and we apply the large-scale nonlinear solver IpOpt, see [13]. The solving usually takes around 30 min, counting refining steps, for a time grid of 4000 points (approximately 72000 unknowns and 48000 nonlinear constraints).
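As an illustration of this direct transcription, the following sketch forms the Heun-scheme defect constraints of (NLP) on a uniform time grid; in the paper the problem is actually written in AMPL and solved with IpOpt, so this Python fragment and its interface are only assumptions.

import numpy as np

def heun_defects(chi, tau, tf, chi_dot):
    """Defect constraints of the direct transcription (NLP) on a uniform grid.

    chi: (N+1, 12) array of discretized states, tau: (N, 6) array of controls,
    tf: final time (itself an unknown of the time minimal problem),
    chi_dot(x, u): right hand side of (8).
    The optimizer drives the returned (N, 12) residuals to zero.
    """
    N = tau.shape[0]
    h = tf / N
    defects = np.zeros_like(chi[1:])
    for k in range(N):
        f1 = chi_dot(chi[k], tau[k])
        f2 = chi_dot(chi[k] + h * f1, tau[k])                     # Euler predictor
        defects[k] = chi[k + 1] - (chi[k] + 0.5 * h * (f1 + f2))  # Heun corrector
    return defects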
Notice that a solution of (NLP) is only an approximation of a solution of (OCP), since the fixed step integration scheme used to discretize the dynamics introduces inaccuracy, but mostly because the control is piecewise constant due to the discretization of the problem. To increase the accuracy, we need to refine the time discretization grid, which also increases the number of unknowns of (NLP). A direct effect of a larger number of unknowns is an increase in the execution time needed to solve (NLP), and ultimately a failure to solve it without important computational resources.

Figure 2 shows an example of a solution of (NLP) for $\chi_0 = (0, 0, \cdots, 0)$ and $\chi_T = (5, 3, 0, \cdots, 0)$ for the model described previously. The evolution of the corresponding controls is represented on Figure 3. The final time for this trajectory is $t_{f_{min}} \approx 12.441$ s.

Fig. 2. Time optimal trajectory for (NLP).

Fig. 3. Time optimal control strategy for (NLP).

Looking at Figure 3, it is obvious that the control strategy is inappropriate for a practical implementation on a real vehicle, for at least two reasons. The first one comes from the existence of singular arcs for the component $\tau_6$: along these singular arcs the control varies continuously, which is impossible to implement. The second reason is that there is a large number of switching times. In practice we observe delays in the response of the thrusters to a given command; as a consequence we want to keep the number of switchings small. Moreover, the fact that the trajectory is computed through a fixed step integration makes it less faithful to the theoretical model than a proper high-order, adaptive step integration.
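A simple, hypothetical post-processing step such as the following can be used to classify the arcs of one discretized control component and count its switchings; the tolerance tol is an assumption.

import numpy as np

def count_switchings(tau_i, alpha_i, beta_i, tol=1e-2):
    """Classify the samples of one control component as lower bang (-1),
    upper bang (+1) or interior/singular (0), and count the arc changes."""
    arcs = np.where(tau_i <= alpha_i + tol, -1,
                    np.where(tau_i >= beta_i - tol, 1, 0))
    return arcs, int(np.count_nonzero(np.diff(arcs)))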
REMARK 4.1: The reason to use a high order scheme to improve the accuracy of the solution of the (NLP) problem is that we already have a poor knowledge of the hydrodynamic coefficients, which introduces uncertainties in the experiments; hence we would like to minimize any other source of potential difference between the theoretical computations and the experiments.

Inspired by the work in [10], [12], we develop another approach to overcome the main issues of the previous solving method. In [12], the authors use the discretized solution of an optimal control problem to extract the switching structure of the optimal control. The next step is to rewrite the optimal control problem as a nonlinear optimization problem whose unknowns are the switching times (or more precisely the time lengths between two consecutive switching times). They are then able to integrate their dynamical system with a high order integrator. The motivation for their approach is the verification of second order sufficient conditions for optimality.

This method cannot be applied directly to our problem. The reason is that along time minimal trajectories we have the existence of singular arcs. As mentioned before, along these arcs the singular control can be computed as a feedback of the state and of the adjoint vector, contrary to [12] where the singular control is a feedback of the state only. Moreover, the switching structure of the solutions of (NLP) is not always easy to extract, see Figure 3. Our motivation is different: we want to develop an algorithm to compute trajectories that are efficient in time but, above all, feasible by the vehicle. Hence our goal is, based on our previous computations for the (NLP) problem, to compute trajectories that are close in time to the optimal one but whose implementation on a vessel is possible.

To reach our goal, we rewrite (OCP) as an optimization problem, but without taking into account the switching structure given by the solution of (NLP). We fix the number of switching times, preferably to a small number. The new optimization problem, called (STPP)$_p$ (for Switching Time Parameterization Problem), then has as unknowns the time arc lengths between two consecutive switching times (plus the time arc length between the last switching time and the final time), as well as the values of the constant thrust arcs. (STPP)$_p$ has the following form:
\[
\begin{aligned}
\min_{z \in D}\ & t_{p+1}\\
\text{s.t.}\quad & t_0 = 0, \qquad t_i = t_{i-1} + \xi_i, \quad i = 1, \cdots, p+1,\\
& \chi_i = \chi_{i-1} + \int_{t_{i-1}}^{t_i} \dot\chi(t, \tau^i)\, dt, \quad i = 1, \cdots, p+1,\\
& \chi_{p+1} = \chi_T,\\
& z = (\xi_1, \cdots, \xi_{p+1}, \tau^1, \cdots, \tau^{p+1}) \in D = \mathbb{R}_+^{p+1} \times U^{p+1}
\end{aligned} \tag{21}
\]
where $\xi_i$, $i = 1, \cdots, p+1$ are the time arc lengths and $\tau^i \in U$, $i = 1, \cdots, p+1$ are the values of the constant thrust arcs. $\dot\chi(t, \tau^i)$ is the right hand side of the dynamical system (8) with the constant control $\tau^i$. The integration of the dynamical system of (STPP)$_p$ (which is simply (8)) can be done with a high order adaptive step integrator; we use DOP853, see [11]. (STPP)$_p$ has a much smaller number of unknowns than (NLP) and so, even if the integration takes more time because it is more accurate, the computational resources needed to solve (STPP)$_p$ are drastically reduced. To solve it we use once more IpOpt, see [13], which yields very good results.
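A possible prototype of (STPP)$_p$ is sketched below; scipy's "DOP853" integrator stands in for the DOP853 code of [11], and the use of scipy.optimize (SLSQP) in place of IpOpt, together with all variable names, are assumptions made only to keep the sketch self-contained.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def solve_stpp(p, chi0, chiT, chi_dot, u_lo, u_hi, z0):
    """Prototype of (STPP)_p: the unknowns z stack the p+1 time arc lengths xi
    and the p+1 constant controls tau^i (6 components each)."""
    n_arc = p + 1

    def unpack(z):
        return z[:n_arc], z[n_arc:].reshape(n_arc, 6)

    def final_state(z):
        xi, tau = unpack(z)
        chi = np.asarray(chi0, dtype=float)
        for i in range(n_arc):                        # integrate (8) arc by arc
            if xi[i] <= 0.0:                          # skip empty arcs
                continue
            sol = solve_ivp(lambda t, x: chi_dot(x, tau[i]), (0.0, xi[i]), chi,
                            method="DOP853", rtol=1e-10, atol=1e-10)
            chi = sol.y[:, -1]
        return chi

    bounds = [(0.0, None)] * n_arc + [(lo, hi) for lo, hi in zip(u_lo, u_hi)] * n_arc
    cons = {"type": "eq", "fun": lambda z: final_state(z) - np.asarray(chiT)}
    return minimize(lambda z: z[:n_arc].sum(),        # final time t_{p+1} = sum of the xi
                    z0, method="SLSQP", bounds=bounds, constraints=cons)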
V. NUMERICAL RESULTS

In this section we describe the results obtained with the method described in the previous section. One of the main advantages of (STPP)$_p$ is that we are usually able to fix a small $p$, that is, a small number of switching times. Notice, however, that a solution of (STPP)$_p$ will only be optimal with respect to the fixed number of switching times. So it won't be a solution of (OCP), but only an admissible control and trajectory that steers the system from $\chi_0$ to $\chi_T$.

Figure 5 shows a control strategy solution of (STPP)$_3$ with the same terminal configurations as for Figure 2, i.e. $\chi_0 = (0, 0, \cdots, 0)$ and $\chi_T = (5, 3, 0, \cdots, 0)$; the corresponding trajectory is shown on Figure 4. We impose the total number of switching times along the trajectory to be 3. The final time corresponding to the solution shown on Figure 5 is $t_f \approx 12.814$ s. This final time has to be compared to the one of a solution of (NLP), Figure 2, which is $t_{f_{min}} \approx 12.441$ s. We see that the difference is negligible given that the obtained, slightly longer, control strategy is much easier to implement than the time-optimal one. A solution of (STPP)$_1$ yields $t_f \approx 17.445$ s and a solution of (STPP)$_2$ yields $t_f \approx 13.028$ s. Note that along the solution of (STPP)$_3$ the control strategy is not bang-bang. Indeed, since the values of the control between two switching times belong to the unknowns of the problem, there is no guarantee that the thrusts of the vehicle will be saturating all the time, i.e. take their values in $\{\alpha_i, \beta_i\}$.

Fig. 4. Trajectory corresponding to (STPP)$_3$.

Fig. 5. Control strategy corresponding to (STPP)$_3$.

VI. CONCLUSION

In this paper, we focus on a practical implementation of time optimal trajectories for an underwater vehicle. Time optimal trajectories are often difficult to implement due to their structure; moreover, in practice we are not interested in the time optimal trajectory itself but in a trajectory that is efficient in time and easily feasible by the vehicle. Our method allows us to restrict the control strategy to piecewise constant functions with a small total number of switching times. The essential idea for the algorithm to work was to introduce the values of the constant arcs of the control as unknowns in our optimization problem. The results exceeded our expectations. Computations on the model presented in this paper have shown that usually no more than 5 switching times are necessary to obtain a solution with a final time very close to the optimal one. We are currently testing our algorithms on an underwater vehicle; the results will be presented in a forthcoming article. Notice that some additional issues may have to be taken into account. Indeed, underwater propellers, like any other engines, are not able to accomplish instantaneous switchings. However, since the structure of (STPP)$_p$ allows flexibility for additional constraints, we can also adapt the switching time parameterization in order to impose continuous junctions between the constant arcs of the control. For instance, one can introduce a linear junction.
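One hypothetical way to realize such a linear junction, with an assumed ramp width delta, is sketched below: each instantaneous switching is replaced by a short ramp between consecutive constant arcs, which keeps the control continuous while staying close to the piecewise constant strategy.

import numpy as np

def ramped_control(t, switch_times, tau_arcs, delta=0.2):
    """Piecewise constant control with linear junctions of width delta (s).

    switch_times: increasing switching instants t_1 < ... < t_p,
    tau_arcs: (p+1, 6) array of constant control values on the successive arcs.
    """
    switch_times = np.asarray(switch_times)
    tau = tau_arcs[np.searchsorted(switch_times, t, side="right")]
    for ts, prev, nxt in zip(switch_times, tau_arcs[:-1], tau_arcs[1:]):
        if ts <= t < ts + delta:                      # ramp linearly from prev to nxt
            lam = (t - ts) / delta
            return (1.0 - lam) * prev + lam * nxt
    return tau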
REFERENCES

[1] B. Bonnard and M. Chyba. "Singular Trajectories and their Role in Control Theory". Springer-Verlag, 2003.
[2] M. Chyba and T. Haberkorn. "Autonomous Underwater Vehicles: Singular extremals and chattering". Proceedings of the 22nd IFIP TC 7 Conference on System Modeling and Optimization, Italy, 18-22 July 2005.
[3] M. Chyba and T. Haberkorn. "Designing efficient trajectories for underwater vehicles using geometric control theory". Proceedings of the 24th International Conference on Offshore Mechanics and Arctic Engineering, Greece, 12-17 June 2005.
[4] M. Chyba, N.E. Leonard and E.D. Sontag. "Singular trajectories in the multi-input time-optimal problem: Application to controlled mechanical systems". Journal on Dynamical and Control Systems, 9(1):73-88, 2003.
[5] M. Chyba, N.E. Leonard and E.D. Sontag. "Optimality for underwater vehicles". In Proceedings of the 40th IEEE Conf. on Decision and Control, Orlando, 2001.
[6] M. Chyba. "Underwater vehicles: a surprising non time-optimal path". In Proceedings of the 42nd IEEE Conf. on Decision and Control, Maui, 2003.
[7] M. Chyba, H. Maurer, H.J. Sussmann and G. Vossen. "Underwater Vehicles: The Minimum Time Problem". In Proceedings of the 43rd IEEE Conf. on Decision and Control, Bahamas, 2004.
[8] T.I. Fossen. "Guidance and Control of Ocean Vehicles". Wiley, New York, 1994.
[9] R. Fourer, D.M. Gay and B.W. Kernighan. "AMPL: A Modeling Language for Mathematical Programming". Duxbury Press, Brooks-Cole Publishing Company, 1993.
[10] C.Y. Kaya and J.L. Noakes. "Computational method for time-optimal switching control". Journal of Optimization Theory and Applications, 117(1):69-92, 2003.
[11] E. Hairer, S.P. Norsett and G. Wanner. "Solving Ordinary Differential Equations I. Nonstiff Problems", 2nd edition. Springer Series in Computational Mathematics, Springer-Verlag, 1993.
[12] H. Maurer, C. Büskens, J.-H.R. Kim and C.Y. Kaya. "Optimization methods for numerical verification of second order sufficient conditions for bang-bang controls". Optimal Control Applications and Methods, 26:129-156, 2005.
[13] A. Waechter and L.T. Biegler. "On the Implementation of an Interior-Point Filter Line-Search Algorithm for Large-Scale Nonlinear Programming". Research Report RC 23149, IBM T.J. Watson Research Center, Yorktown Heights, New York.