Differential dynamic programming

Differential dynamic programming (DDP) is an optimal control algorithm of the trajectory optimization class. The algorithm was introduced in 1966 by Mayne[1] and subsequently analysed in Jacobson and Mayne's eponymous book.[2] The algorithm uses locally-quadratic models of the dynamics and cost functions, and displays quadratic convergence. It is closely related to Pantoja's step-wise Newton's method.[3][4]

Finite-horizon discrete-time problems

The dynamics

\mathbf{x}_{i+1} = \mathbf{f}(\mathbf{x}_i,\mathbf{u}_i) \qquad (1)

describe the evolution of the state \mathbf{x} given the control \mathbf{u} from time i to time i+1. The total cost J_0 is the sum of running costs \ell and final cost \ell_f, incurred when starting from state \mathbf{x} and applying the control sequence \mathbf{U} \equiv \{\mathbf{u}_0,\mathbf{u}_1,\dots,\mathbf{u}_{N-1}\} until the horizon is reached:

J_0(\mathbf{x},\mathbf{U})=\sum_{i=0}^{N-1}\ell(\mathbf{x}_i,\mathbf{u}_i) + \ell_f(\mathbf{x}_N),

where \mathbf{x}_0\equiv\mathbf{x}, and the \mathbf{x}_i for i>0 are given by Eq. 1. The solution of the optimal control problem is the minimizing control sequence \mathbf{U}^*(\mathbf{x})\equiv \operatorname{argmin}_{\mathbf{U}} J_0(\mathbf{x},\mathbf{U}). Trajectory optimization means finding \mathbf{U}^*(\mathbf{x}) for a particular \mathbf{x}_0, rather than for all possible initial states.
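As a concrete illustration, the total cost can be evaluated by rolling the control sequence through the dynamics. The following sketch assumes the dynamics, running cost and final cost are supplied as generic Python callables (the names f, ell and ell_f are illustrative, not part of any particular library):

    def total_cost(x0, U, f, ell, ell_f):
        """Evaluate J_0(x0, U) by rolling out the dynamics of Eq. 1."""
        x = x0
        J = 0.0
        for u in U:          # running costs ell(x_i, u_i), i = 0, ..., N-1
            J += ell(x, u)
            x = f(x, u)      # x_{i+1} = f(x_i, u_i)
        return J + ell_f(x)  # final cost ell_f(x_N)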

Dynamic programming

Let \mathbf{U}_i be the partial control sequence \mathbf{U}_i \equiv \{\mathbf{u}_i,\mathbf{u}_{i+1},\dots,\mathbf{u}_{N-1}\} and define the cost-to-go J_i as the partial sum of costs from i to N:

J_i(\mathbf{x},\mathbf{U}_i)=\sum_{j=i}^{N-1}\ell(\mathbf{x}_j,\mathbf{u}_j) + \ell_f(\mathbf{x}_N).

The optimal cost-to-go or value function at time i is the cost-to-go given the minimizing control sequence:

V(\mathbf{x},i)\equiv \min_{\mathbf{U}_i}J_i(\mathbf{x},\mathbf{U}_i).

Setting V(\mathbf{x},N)\equiv \ell_f(\mathbf{x}_N), the dynamic programming principle reduces the minimization over an entire sequence of controls to a sequence of minimizations over a single control, proceeding backwards in time:

V(\mathbf{x},i)= \min_{\mathbf{u}}[\ell(\mathbf{x},\mathbf{u}) + V(\mathbf{f}(\mathbf{x},\mathbf{u}),i+1)]. \qquad (2)

This is the Bellman equation.
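When the state and control spaces are small and discrete, Eq. 2 can be applied literally by enumeration. The following sketch is a minimal tabular illustration, assuming hypothetical arrays cost (shape N × n_states × n_controls), final_cost (length n_states) and an integer table next_state (n_states × n_controls) encoding the dynamics:

    import numpy as np

    def backward_dp(cost, final_cost, next_state):
        """Tabular Bellman recursion (Eq. 2) for a finite-horizon problem."""
        N, n_states, n_controls = cost.shape
        V = np.empty((N + 1, n_states))
        policy = np.empty((N, n_states), dtype=int)
        V[N] = final_cost                       # V(x, N) = ell_f(x)
        for i in range(N - 1, -1, -1):          # proceed backwards in time
            Q = cost[i] + V[i + 1][next_state]  # ell(x, u) + V(f(x, u), i + 1)
            policy[i] = Q.argmin(axis=1)        # minimizing control for each state
            V[i] = Q.min(axis=1)                # value function at time i
        return V, policy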

Differential dynamic programming

DDP proceeds by iteratively performing a backward pass on the nominal trajectory to generate a new control sequence, and then a forward pass to compute and evaluate a new nominal trajectory. We begin with the backward pass. If

\ell(\mathbf{x},\mathbf{u}) + V(\mathbf{f}(\mathbf{x},\mathbf{u}),i+1)

is the argument of the \min[] operator in Eq. 2, let Q be the variation of this quantity around the i-th (\mathbf{x},\mathbf{u}) pair:

\begin{align}Q(\delta\mathbf{x},\delta\mathbf{u})\equiv &\ell(\mathbf{x}+\delta\mathbf{x},\mathbf{u}+\delta\mathbf{u})&&{}+V(\mathbf{f}(\mathbf{x}+\delta\mathbf{x},\mathbf{u}+\delta\mathbf{u}),i+1)
\\
-&\ell(\mathbf{x},\mathbf{u})&&{}-V(\mathbf{f}(\mathbf{x},\mathbf{u}),i+1)
\end{align}

and expand to second order


\approx\frac{1}{2}
\begin{bmatrix}
1\\
\delta\mathbf{x}\\
\delta\mathbf{u}
\end{bmatrix}^\mathsf{T}
\begin{bmatrix}
0 & Q_\mathbf{x}^\mathsf{T} & Q_\mathbf{u}^\mathsf{T}\\
Q_\mathbf{x} & Q_{\mathbf{x}\mathbf{x}} & Q_{\mathbf{x}\mathbf{u}}\\
Q_\mathbf{u} & Q_{\mathbf{u}\mathbf{x}} & Q_{\mathbf{u}\mathbf{u}}
\end{bmatrix}
\begin{bmatrix}
1\\
\delta\mathbf{x}\\
\delta\mathbf{u}
\end{bmatrix}
\qquad (3)

The Q notation used here is a variant of that of Morimoto, where subscripts denote differentiation in denominator layout.[5] Dropping the index i for readability and denoting the value at the next time-step by a prime, V'\equiv V(i+1), the expansion coefficients are


\begin{alignat}{2}
Q_\mathbf{x} &= \ell_\mathbf{x}+ \mathbf{f}_\mathbf{x}^\mathsf{T} V'_\mathbf{x} \\
Q_\mathbf{u} &= \ell_\mathbf{u}+ \mathbf{f}_\mathbf{u}^\mathsf{T} V'_\mathbf{x} \\
Q_{\mathbf{x}\mathbf{x}} &= \ell_{\mathbf{x}\mathbf{x}} + \mathbf{f}_\mathbf{x}^\mathsf{T} V'_{\mathbf{x}\mathbf{x}}\mathbf{f}_\mathbf{x}+V_\mathbf{x}'\cdot\mathbf{f}_{\mathbf{x}\mathbf{x}}\\
Q_{\mathbf{u}\mathbf{u}} &= \ell_{\mathbf{u}\mathbf{u}} + \mathbf{f}_\mathbf{u}^\mathsf{T} V'_{\mathbf{x}\mathbf{x}}\mathbf{f}_\mathbf{u}+{V'_\mathbf{x}} \cdot\mathbf{f}_{\mathbf{u} \mathbf{u}}\\
Q_{\mathbf{u}\mathbf{x}} &= \ell_{\mathbf{u}\mathbf{x}} + \mathbf{f}_\mathbf{u}^\mathsf{T} V'_{\mathbf{x}\mathbf{x}}\mathbf{f}_\mathbf{x} + {V'_\mathbf{x}} \cdot \mathbf{f}_{\mathbf{u} \mathbf{x}}.
\end{alignat}

The last terms in the last three equations denote contraction of a vector with a tensor. Minimizing the quadratic approximation (3) with respect to \delta\mathbf{u} we have


{\delta \mathbf{u}}^* = \operatorname{argmin}\limits_{\delta \mathbf{u}}Q(\delta \mathbf{x},\delta\mathbf{u})=-Q_{\mathbf{u}\mathbf{u}}^{-1}(Q_\mathbf{u}+Q_{\mathbf{u}\mathbf{x}}\delta \mathbf{x}), \qquad (4)

giving an open-loop term \mathbf{k}=-Q_{\mathbf{u}\mathbf{u}}^{-1}Q_\mathbf{u} and a feedback gain term \mathbf{K}=-Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}\mathbf{x}}. Plugging the result back into (3), we now have a quadratic model of the value at time i:


\begin{align}
\Delta V(i) &= -\tfrac{1}{2}Q_\mathbf{u}^\mathsf{T} Q_{\mathbf{u}\mathbf{u}}^{-1}Q_\mathbf{u}\\
V_\mathbf{x}(i) &= Q_\mathbf{x} - Q_{\mathbf{x}\mathbf{u}} Q_{\mathbf{u}\mathbf{u}}^{-1}Q_\mathbf{u}\\
V_{\mathbf{x}\mathbf{x}}(i) &= Q_{\mathbf{x}\mathbf{x}} - Q_{\mathbf{x}\mathbf{u}}Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}\mathbf{x}}.
\end{align}
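A single step of the backward pass can therefore be written as a few lines of linear algebra. The sketch below is illustrative rather than a reference implementation: it assumes the derivatives of \ell and \mathbf{f} at the i-th nominal pair are passed in as numpy arrays, with the optional second-order dynamics tensors stored so that f_xx[a, b, c] is the second derivative of the a-th component of f with respect to x_b and x_c (dropping these tensors gives the Gauss-Newton approximation often called iLQR):

    import numpy as np

    def backward_step(l_x, l_u, l_xx, l_uu, l_ux, f_x, f_u, V_x, V_xx,
                      f_xx=None, f_uu=None, f_ux=None):
        """One DDP backward-pass step: Q expansion, gains and value update."""
        Q_x = l_x + f_x.T @ V_x
        Q_u = l_u + f_u.T @ V_x
        Q_xx = l_xx + f_x.T @ V_xx @ f_x
        Q_uu = l_uu + f_u.T @ V_xx @ f_u
        Q_ux = l_ux + f_u.T @ V_xx @ f_x
        if f_xx is not None:                     # tensor contractions V'_x . f_**
            Q_xx = Q_xx + np.tensordot(V_x, f_xx, axes=1)
            Q_uu = Q_uu + np.tensordot(V_x, f_uu, axes=1)
            Q_ux = Q_ux + np.tensordot(V_x, f_ux, axes=1)

        k = -np.linalg.solve(Q_uu, Q_u)          # open-loop term   -Quu^{-1} Qu
        K = -np.linalg.solve(Q_uu, Q_ux)         # feedback gain    -Quu^{-1} Qux

        dV = 0.5 * Q_u @ k                       # expected change in cost
        V_x_new = Q_x + K.T @ Q_u                # = Q_x - Q_xu Quu^{-1} Q_u
        V_xx_new = Q_xx + K.T @ Q_ux             # = Q_xx - Q_xu Quu^{-1} Q_ux
        return k, K, dV, V_x_new, V_xx_new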

Recursively computing the local quadratic models of V(i) and the control modifications \{\mathbf{k}(i),\mathbf{K}(i)\}, from i=N-1 down to i=0, constitutes the backward pass. As above, the value function is initialized with V(\mathbf{x},N)\equiv \ell_f(\mathbf{x}_N). Once the backward pass is completed, a forward pass computes a new trajectory:


\begin{align}
\hat{\mathbf{x}}(0)&=\mathbf{x}(0)\\
\hat{\mathbf{u}}(i)&=\mathbf{u}(i) + \mathbf{k}(i) +\mathbf{K}(i)(\hat{\mathbf{x}}(i) - \mathbf{x}(i))\\
\hat{\mathbf{x}}(i+1)&=\mathbf{f}(\hat{\mathbf{x}}(i),\hat{\mathbf{u}}(i))
\end{align}

The backward passes and forward passes are iterated until convergence.
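The forward pass itself is a straightforward rollout. The sketch below assumes the nominal trajectory and the gains are stored as Python lists of numpy arrays (again, the names are illustrative):

    def forward_pass(x_traj, u_traj, k, K, f):
        """Roll out a new trajectory using the gains from the backward pass."""
        x_new = [x_traj[0]]                          # x_hat(0) = x(0)
        u_new = []
        for i in range(len(u_traj)):
            du = k[i] + K[i] @ (x_new[i] - x_traj[i])
            u_new.append(u_traj[i] + du)             # u_hat(i)
            x_new.append(f(x_new[i], u_new[i]))      # x_hat(i+1) = f(x_hat(i), u_hat(i))
        return x_new, u_new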

Regularization and line-search

Differential dynamic programming is a second-order algorithm like Newton's method. It therefore takes large steps toward the minimum and often requires regularization and/or line-search to achieve convergence.[6][7] Regularization in the DDP context means ensuring that the Q_{\mathbf{u}\mathbf{u}} matrix in Eq. 4 is positive definite. Line-search in DDP amounts to scaling the open-loop control modification \mathbf{k} by some 0<\alpha<1.
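Both devices are simple to add in practice. The sketch below shows one common way of doing so, not a method prescribed by the references: Q_{\mathbf{u}\mathbf{u}} is shifted by a multiple of the identity until a Cholesky factorization succeeds, and the open-loop term is backtracked until the rolled-out cost actually decreases (the rollout and cost-evaluation callables are assumed to be supplied by the caller):

    import numpy as np

    def regularize(Q_uu, mu=1e-6):
        """Add mu*I to Q_uu until it is positive definite (Cholesky succeeds)."""
        n = Q_uu.shape[0]
        while True:
            try:
                np.linalg.cholesky(Q_uu + mu * np.eye(n))
                return Q_uu + mu * np.eye(n)
            except np.linalg.LinAlgError:
                mu *= 10.0                   # increase the regularizer and retry

    def line_search(rollout, J, J_old, alphas=(1.0, 0.5, 0.25, 0.1, 0.05)):
        """Scale the open-loop term k by alpha and keep the first improving step.

        rollout(alpha) runs the forward pass with k scaled by alpha and returns
        the new trajectory; J(x, u) evaluates its total cost.
        """
        for alpha in alphas:
            x_new, u_new = rollout(alpha)
            if J(x_new, u_new) < J_old:
                return x_new, u_new, alpha
        return None                          # no improvement: raise mu and redo the backward pass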

References

  1. Mayne, D. Q. (1966). "A Second-order Gradient Method for Determining Optimal Trajectories of Non-linear Discrete-time Systems". International Journal of Control. 3 (1): 85–95.
  2. Jacobson, D. H.; Mayne, D. Q. (1970). Differential Dynamic Programming. New York: Elsevier.
  3. de O. Pantoja, J. F. A. (1988). "Differential Dynamic Programming and Newton's Method". International Journal of Control. 47 (5): 1539–1553.
  4. Liao, L. Z.; Shoemaker, C. A. (1992). "Advantages of Differential Dynamic Programming over Newton's Method for Discrete-time Optimal Control Problems". Cornell University Technical Report.
  5. Morimoto, J.; Zeglin, G.; Atkeson, C. G. (2003). "Minimax Differential Dynamic Programming: Application to a Biped Walking Robot". Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  6. Liao, L. Z.; Shoemaker, C. A. (1991). "Convergence in Unconstrained Discrete-time Differential Dynamic Programming". IEEE Transactions on Automatic Control. 36 (6): 692–706.
  7. Tassa, Y. (2011). Theory and Implementation of Bio-mimetic Motor Controllers (PhD thesis). Hebrew University of Jerusalem.
