Broyden–Fletcher–Goldfarb–Shanno algorithm

In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems.

The BFGS method belongs to the family of hill-climbing optimization techniques that, like Newton's method, seek a stationary point of a (preferably twice continuously differentiable) function. For such problems, a necessary condition for optimality is that the gradient be zero. Neither Newton's method nor BFGS is guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. Whereas Newton's method uses both the first and second derivatives of the function, BFGS requires only the gradient and builds an approximation to the second derivatives from it. Nevertheless, BFGS has proven to have good performance even for non-smooth optimizations.[1]

In quasi-Newton methods, the Hessian matrix of second derivatives does not need to be evaluated directly. Instead, the Hessian matrix is approximated using low-rank updates specified by gradient evaluations (or approximate gradient evaluations). Quasi-Newton methods are generalizations of the secant method to find the root of the first derivative for multidimensional problems. In multiple dimensions the secant equation does not specify a unique solution, and quasi-Newton methods differ in how they constrain the solution. The BFGS method is one of the most popular members of this class.[2] Also in common use is L-BFGS, a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The L-BFGS-B variant[3] additionally handles simple box constraints.

Rationale

The search direction \mathbf{p}_k at stage k is given by the solution of the analogue of the Newton equation

 B_k \mathbf{p}_k = - \nabla f(\mathbf{x}_k)

where B_k is an approximation to the Hessian matrix, which is updated iteratively at each stage, and \nabla f(\mathbf{x}_k) is the gradient of the function evaluated at \mathbf{x}_k. A line search in the direction \mathbf{p}_k is then used to find the next point \mathbf{x}_{k+1}. Instead of requiring the full Hessian matrix at the point \mathbf{x}_{k+1} to be computed as B_{k+1}, the approximate Hessian at stage k is updated by the addition of two matrices:

B_{k+1} = B_k + U_k + V_k.

Both U_k and V_k are symmetric rank-one matrices, but they are built from different vectors. The symmetric rank-one assumption here means that each can be written in the form

C=\mathbf{a}\mathbf{a}^\mathrm{T}

for some vector \mathbf{a}. Equivalently, U_k and V_k together form a rank-two update matrix, which makes the method robust against the scaling problems that often afflict gradient-descent searches (as in, e.g., Broyden's method).

The quasi-Newton condition imposed on this update is

B_{k+1} (\mathbf{x}_{k+1}-\mathbf{x}_k ) = \nabla f(\mathbf{x}_{k+1}) -\nabla f(\mathbf{x}_k ).
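
In the notation used in the algorithm below, with \mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k and \mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k), this is the secant equation

B_{k+1} \mathbf{s}_k = \mathbf{y}_k,

which asks the updated matrix to act like the true Hessian over the most recent step; it follows from a first-order Taylor expansion of the gradient about \mathbf{x}_{k+1}. The two rank-one terms of step 5 below are then

U_k = \frac{\mathbf{y}_k \mathbf{y}_k^{\mathrm{T}}}{\mathbf{y}_k^{\mathrm{T}} \mathbf{s}_k}, \qquad V_k = -\frac{B_k \mathbf{s}_k \mathbf{s}_k^{\mathrm{T}} B_k}{\mathbf{s}_k^{\mathrm{T}} B_k \mathbf{s}_k},

and the update preserves symmetry and positive definiteness provided the curvature condition \mathbf{s}_k^{\mathrm{T}} \mathbf{y}_k > 0 holds, which a line search satisfying the Wolfe conditions guarantees.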

Algorithm

From an initial guess \mathbf{x}_0 and an approximate Hessian matrix B_0, the following steps are repeated until \mathbf{x}_k converges to the solution.

  1. Obtain a direction \mathbf{p}_k by solving: B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k).
  2. Perform a line search to find an acceptable stepsize \alpha_k in the direction found in the first step, then update \mathbf{x}_{k+1} = \mathbf{x}_k + \alpha_k\mathbf{p}_k.
  3. Set  \mathbf{s}_k=\alpha_k \mathbf{p}_k.
  4. Set \mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).
  5. Set B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^{\mathrm{T}}}{\mathbf{y}_k^{\mathrm{T}} \mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^{\mathrm{T}} B_k }{\mathbf{s}_k^{\mathrm{T}} B_k \mathbf{s}_k}.

f(\mathbf{x}) denotes the objective function to be minimized. Convergence can be checked by monitoring the norm of the gradient, \left\|\nabla f(\mathbf{x}_k)\right\|. In practice, B_0 can be initialized with B_0 = I, so that the first step is equivalent to a gradient-descent step, but subsequent steps are progressively refined by B_k, the approximation to the Hessian.
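
To make the steps concrete, the following is a minimal NumPy sketch of the iteration (an illustrative sketch, not a production implementation): the names f and grad_f, the simple Armijo backtracking line search used in place of a full Wolfe-condition search, and all tolerances are assumptions introduced here.

import numpy as np

def bfgs(f, grad_f, x0, tol=1e-6, max_iter=200):
    # Minimal BFGS sketch following steps 1-5 above.
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                  # B_0 = I: the first step is a gradient-descent step
    g = grad_f(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:     # convergence test on the gradient norm
            break
        p = np.linalg.solve(B, -g)      # step 1: solve B_k p_k = -grad f(x_k)
        alpha = 1.0                     # step 2: Armijo backtracking line search
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * g.dot(p):
            alpha *= 0.5
        s = alpha * p                   # step 3: s_k = alpha_k p_k
        x_new = x + s
        g_new = grad_f(x_new)
        y = g_new - g                   # step 4: y_k = grad f(x_{k+1}) - grad f(x_k)
        if y.dot(s) > 1e-12:            # curvature condition; skip the update otherwise
            Bs = B.dot(s)
            B = B + np.outer(y, y) / y.dot(s) - np.outer(Bs, Bs) / s.dot(Bs)   # step 5
        x, g = x_new, g_new
    return x

# Example: minimize the two-dimensional Rosenbrock function.
rosen = lambda z: (1.0 - z[0])**2 + 100.0 * (z[1] - z[0]**2)**2
rosen_grad = lambda z: np.array([-2.0 * (1.0 - z[0]) - 400.0 * z[0] * (z[1] - z[0]**2),
                                 200.0 * (z[1] - z[0]**2)])
print(bfgs(rosen, rosen_grad, [-1.2, 1.0]))   # converges to approximately [1.0, 1.0]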

The first step of the algorithm is carried out using the inverse of the matrix B_k, which can be obtained efficiently by applying the Sherman–Morrison formula to step 5 of the algorithm, giving

B_{k+1}^{-1} = \left(I - \frac{\mathbf{s}_k \mathbf{y}_k^{\mathrm{T}}}{\mathbf{y}_k^{\mathrm{T}} \mathbf{s}_k}\right) B_k^{-1} \left(I - \frac{\mathbf{y}_k \mathbf{s}_k^{\mathrm{T}}}{\mathbf{y}_k^{\mathrm{T}} \mathbf{s}_k}\right) + \frac{\mathbf{s}_k \mathbf{s}_k^{\mathrm{T}}}{\mathbf{y}_k^{\mathrm{T}} \mathbf{s}_k}.

This can be computed efficiently without temporary matrices by recognizing that B_k^{-1} is symmetric, and that \mathbf{y}_k^{\mathrm{T}} B_k^{-1} \mathbf{y}_k and \mathbf{s}_k^{\mathrm{T}} \mathbf{y}_k are scalars, using an expansion such as

B_{k+1}^{-1} = B_k^{-1} + \frac{(\mathbf{s}_k^{\mathrm{T}}\mathbf{y}_k+\mathbf{y}_k^{\mathrm{T}} B_k^{-1} \mathbf{y}_k)(\mathbf{s}_k \mathbf{s}_k^{\mathrm{T}})}{(\mathbf{s}_k^{\mathrm{T}} \mathbf{y}_k)^2} - \frac{B_k^{-1} \mathbf{y}_k \mathbf{s}_k^{\mathrm{T}} + \mathbf{s}_k \mathbf{y}_k^{\mathrm{T}}B_k^{-1}}{\mathbf{s}_k^{\mathrm{T}} \mathbf{y}_k}.
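
As a sketch of how this expansion can be used in code, the following helper (the names H, s, and y are illustrative) performs one update of the inverse approximation using only matrix-vector and outer products, assuming the curvature condition \mathbf{s}_k^{\mathrm{T}} \mathbf{y}_k > 0 holds:

import numpy as np

def bfgs_inverse_update(H, s, y):
    # One BFGS update of the inverse approximation H ~ B_k^{-1}; no matrix-matrix
    # products or temporary matrices beyond the outer products that form the result.
    sty = s.dot(y)                      # the scalar s^T y (assumed positive)
    Hy = H.dot(y)                       # H y, reused twice below by symmetry of H
    return (H
            + (sty + y.dot(Hy)) * np.outer(s, s) / sty**2
            - (np.outer(Hy, s) + np.outer(s, Hy)) / sty)

The search direction for the next iteration is then obtained by a single matrix-vector product, \mathbf{p}_{k+1} = -B_{k+1}^{-1} \nabla f(\mathbf{x}_{k+1}), with no linear solve required.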

In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.

Implementations

The GSL implements BFGS as gsl_multimin_fdfminimizer_vector_bfgs2. Ceres Solver implements both BFGS and L-BFGS. In SciPy, the scipy.optimize.fmin_bfgs function implements BFGS. It is also possible to obtain full BFGS behaviour from any of the L-BFGS implementations by setting the memory parameter (the number of stored correction pairs) to a very large number.
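
For example, a minimal SciPy usage sketch (using the Rosenbrock helpers rosen and rosen_der that ship with scipy.optimize) might look like:

from scipy.optimize import fmin_bfgs, rosen, rosen_der

# Minimize the Rosenbrock test function with BFGS, supplying the analytic gradient.
x_opt = fmin_bfgs(rosen, [-1.2, 1.0], fprime=rosen_der)
print(x_opt)   # approximately [1. 1.]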

Octave uses BFGS with a double-dogleg approximation to the cubic line search.

In R, the BFGS algorithm (and the L-BFGS-B version that allows box constraints) is implemented as an option of the base function optim().

In the MATLAB Optimization Toolbox, the fminunc function uses BFGS with cubic line search when the problem size is set to "medium scale."

A high-precision arithmetic version of BFGS (pBFGS), implemented in C++ and integrated with the high-precision arithmetic package ARPREC, is robust against numerical instability (e.g., round-off errors).

Another C++ implementation of BFGS (along with L-BFGS, L-BFGS-B, CG, and Newton's method), using the Eigen C++ library, is available on GitHub under the MIT License.

BFGS and L-BFGS are also implemented in C as part of the open-source Gnu Regression, Econometrics and Time-series Library (gretl).

Notes

  1. (citation unavailable)
  2. Nocedal & Wright (2006), p. 24
  3. (citation unavailable)

Bibliography

  • Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). New York: Springer.
