Class LBFGS

java.lang.Object
ffx.numerics.optimization.LBFGS

public class LBFGS extends Object
This class implements the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm for large-scale multidimensional unconstrained optimization problems.
Since:
1.0
Author:
Michael J. Schnieders
Derived from:
Robert Dodier's Java translation of original FORTRAN code by Jorge Nocedal.
J. Nocedal, "Updating Quasi-Newton Matrices with Limited Storage", Mathematics of Computation, 35, 773-782 (1980)
D. C. Liu and J. Nocedal, "On the Limited Memory BFGS Method for Large Scale Optimization", Mathematical Programming, 45, 503-528 (1989)
J. Nocedal and S. J. Wright, "Numerical Optimization", Springer-Verlag, New York, 1999, Section 9.1
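
A minimal usage sketch follows. It is a hedged illustration, not FFX documentation: the OptimizationInterface import path and its energyAndGradient(double[] x, double[] g) evaluation method are assumptions that should be verified against the actual ffx.numerics API.

    import ffx.numerics.optimization.LBFGS;

    // Hedged sketch: the OptimizationInterface location and the
    // energyAndGradient method are assumptions; verify them before use.
    public class LBFGSExample {
      static int optimize(ffx.numerics.OptimizationInterface potential, double[] x) {
        int n = x.length;           // number of variables (n > 0)
        int mSave = 5;              // stored corrections; 3 <= mSave <= 7 recommended
        double[] g = new double[n];
        // Assumed evaluation call: returns f(x) and fills g with the gradient.
        double f = potential.energyAndGradient(x, g);
        double eps = 1.0e-4;        // terminate when the RMS gradient falls below eps
        int maxIterations = 1000;
        // The listener is @Nullable, so null is acceptable; x is updated
        // in place to the best point found.
        return LBFGS.minimize(n, mSave, x, f, g, eps, maxIterations, potential, null);
      }
    }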
  • Field Summary

    Fields
    Modifier and Type     Field              Description
    static final double   DEFAULT_ANGLEMAX   The default maximum angle between search direction and gradient.
    static final double   DEFAULT_CAPPA      Controls the accuracy of the line search.
    static final int      DEFAULT_INTMAX     The default maximum number of interpolations during line search.
    static final double   DEFAULT_SLOPEMAX   The default projected gradient above which step size is reduced.
    static final double   DEFAULT_STEPMAX    The default upper bound for the step in the line search.
    static final double   DEFAULT_STEPMIN    The default lower bound for the step in the line search.
  • Method Summary

    Modifier and Type
    Method
    Description
    static int
    minimize(int n, int mSave, double[] x, double f, double[] g, double eps, int maxIterations, OptimizationInterface potential, OptimizationListener listener)
    Solves the unconstrained minimization problem min f(x) using the limited-memory BFGS method, up to a maximum number of iterations.
    static int
    minimize(int n, int mSave, double[] x, double f, double[] g, double eps, OptimizationInterface potential, OptimizationListener listener)
    Solves the unconstrained minimization problem min f(x) using the limited-memory BFGS method.

    Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
  • Field Details

    • DEFAULT_CAPPA

      public static final double DEFAULT_CAPPA
      Controls the accuracy of the line search.

      If the function and gradient evaluations are inexpensive with respect to the cost of the iteration (which is sometimes the case when solving very large problems), it may be advantageous to set CAPPA to a small value.

      A typical small value is 0.1.

      Restriction: CAPPA should be greater than 1e-4.

    • DEFAULT_STEPMIN

      public static final double DEFAULT_STEPMIN
      This specifies the default lower bound for the step in the line search.

      The default value of 1.0e-16 does not need to be modified unless the problem is extremely badly scaled (in which case the exponent should be increased).

    • DEFAULT_STEPMAX

      public static final double DEFAULT_STEPMAX
      This specifies the default upper bound for the step in the line search.

      The default value of 5.0e0 does not need to be modified unless the problem is extremely badly scaled (in which case the exponent should be increased).

    • DEFAULT_SLOPEMAX

      public static final double DEFAULT_SLOPEMAX
      The default projected gradient above which step size is reduced.
    • DEFAULT_ANGLEMAX

      public static final double DEFAULT_ANGLEMAX
      The default maximum angle between search direction and gradient.
    • DEFAULT_INTMAX

      public static final int DEFAULT_INTMAX
      The default maximum number of interpolations during line search.
  • Method Details

    • minimize

      public static int minimize(int n, int mSave, double[] x, double f, double[] g, double eps, int maxIterations, OptimizationInterface potential, @Nullable OptimizationListener listener)
      This method solves the unconstrained minimization problem
           min f(x),    x = (x_1, x_2, ..., x_n),

      using the limited-memory BFGS method. The routine is especially effective on problems involving a large number of variables. In a typical iteration of this method, an approximation H_k to the inverse of the Hessian is obtained by applying m BFGS updates to a diagonal matrix H_k^0, using information from the previous m steps.
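
      For illustration, the two-loop recursion of Nocedal and Wright (Numerical Optimization, Algorithm 7.4) shows how the m stored updates are applied implicitly. This is a generic textbook sketch, not the FFX implementation:

        // Generic sketch: applies the m stored correction pairs
        // (s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i) to compute
        // d = H_k * g without ever forming H_k explicitly.
        final class TwoLoopSketch {
          // Assumes at least one stored pair; pairs are ordered oldest
          // first, with the newest at index m - 1.
          static double[] apply(double[] g, double[][] s, double[][] y) {
            int m = s.length;
            double[] q = g.clone();
            double[] alpha = new double[m];
            double[] rho = new double[m];
            for (int i = m - 1; i >= 0; i--) {
              rho[i] = 1.0 / dot(y[i], s[i]);
              alpha[i] = rho[i] * dot(s[i], q);
              axpy(-alpha[i], y[i], q);        // q -= alpha_i * y_i
            }
            // gamma = (s^T y) / (y^T y) for the newest pair: the usual
            // scaling of the diagonal initial matrix H_k^0.
            double gamma = dot(s[m - 1], y[m - 1]) / dot(y[m - 1], y[m - 1]);
            double[] d = new double[g.length];
            for (int j = 0; j < g.length; j++) {
              d[j] = gamma * q[j];
            }
            for (int i = 0; i < m; i++) {
              double beta = rho[i] * dot(y[i], d);
              axpy(alpha[i] - beta, s[i], d);  // d += (alpha_i - beta) * s_i
            }
            return d;                          // H_k * g; the search direction is -d
          }

          private static double dot(double[] a, double[] b) {
            double sum = 0.0;
            for (int i = 0; i < a.length; i++) {
              sum += a[i] * b[i];
            }
            return sum;
          }

          private static void axpy(double a, double[] x, double[] out) {
            for (int i = 0; i < x.length; i++) {
              out[i] += a * x[i];
            }
          }
        }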

      The user specifies the number m (the mSave parameter), which determines the amount of storage required by the routine.

      The user is required to calculate the function value f and its gradient g.

      The step length is determined at each iteration by means of the line search routine lineSearch, which is a slight modification of the routine CSRCH written by Moré and Thuente.
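
      The Moré-Thuente search enforces both a sufficient-decrease and a curvature condition. As a deliberately simplified stand-in (not the lineSearch routine itself), the sketch below shows plain Armijo backtracking, which illustrates only how a step length along the search direction d is chosen; the Evaluator interface is hypothetical:

        // Simplified stand-in for illustration: Armijo backtracking only.
        final class BacktrackSketch {
          // Hypothetical functional interface standing in for the potential.
          interface Evaluator { double valueAt(double[] x); }

          static double backtrack(Evaluator fn, double[] x, double f0,
                                  double[] g, double[] d, double stepMax) {
            double slope = 0.0;                // directional derivative g^T d
            for (int i = 0; i < x.length; i++) {
              slope += g[i] * d[i];            // negative for a descent direction
            }
            double c1 = 1.0e-4;                // sufficient-decrease constant
            double[] trial = new double[x.length];
            for (double step = stepMax; step > 1.0e-16; step *= 0.5) { // cf. DEFAULT_STEPMIN
              for (int i = 0; i < x.length; i++) {
                trial[i] = x[i] + step * d[i];
              }
              if (fn.valueAt(trial) <= f0 + c1 * step * slope) {
                return step;                   // Armijo condition satisfied
              }
            }
            return 0.0;                        // no acceptable step found
          }
        }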

      Parameters:
      n - The number of variables in the minimization problem. Restriction: n > 0.
      mSave - The number of corrections used in the BFGS update. Values of mSave less than 3 are not recommended; large values of mSave will result in excessive computing time. 3 <= mSave <= 7 is recommended. Restriction: mSave > 0.

      x - On initial entry this must be set by the user to the values of the initial estimate of the solution vector. On exit, it contains the values of the variables at the best point found (usually a solution).
      f - The value of the function f at the point x.
      g - The components of the gradient g at the point x.
      eps - Determines the accuracy with which the solution is to be found. The routine terminates when the root-mean-square (RMS) gradient is less than eps.
      maxIterations - Maximum number of optimization steps.
      potential - Implements the Potential interface to supply function values and gradients.
      listener - Implements the OptimizationListener interface and will be notified after each successful step.
      Returns:
      status code (0 = success, 1 = max iterations reached, -1 = failed)
      Since:
      1.0
    • minimize

      public static int minimize(int n, int mSave, double[] x, double f, double[] g, double eps, OptimizationInterface potential, OptimizationListener listener)
      This method solves the unconstrained minimization problem
           min f(x),    x = (x_1, x_2, ..., x_n),

      using the limited-memory BFGS method. The routine is especially effective on problems involving a large number of variables. In a typical iteration of this method, an approximation H_k to the inverse of the Hessian is obtained by applying m BFGS updates to a diagonal matrix H_k^0, using information from the previous m steps.

      The user specifies the number m (the mSave parameter), which determines the amount of storage required by the routine.

      The user is required to calculate the function value f and its gradient g.

      The step length is determined at each iteration by means of the line search routine lineSearch, which is a slight modification of the routine CSRCH written by Moré and Thuente.
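
      A hedged call sketch for this convenience overload, under the same assumptions as the class-level example (f and g already evaluated at the starting point x):

        // Identical to the nine-argument form, minus maxIterations.
        int status = LBFGS.minimize(n, mSave, x, f, g, 1.0e-4, potential, listener);
        // status: 0 = success, -1 = failed; x holds the best point found.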

      Parameters:
      n - The number of variables in the minimization problem. Restriction: n > 0.
      mSave - The number of corrections used in the BFGS update. Values of mSave less than 3 are not recommended; large values of mSave will result in excessive computing time. 3 <= mSave <= 7 is recommended. Restriction: mSave > 0.
      x - On initial entry this must be set by the user to the values of the initial estimate of the solution vector. On exit, it contains the values of the variables at the best point found (usually a solution).
      f - The value of the function f at the point x.
      g - The components of the gradient g at the point x.
      eps - Determines the accuracy with which the solution is to be found. The routine terminates when the root-mean-square (RMS) gradient is less than eps.
      potential - Implements the Potential interface to supply function values and gradients.
      listener - Implements the OptimizationListener interface and will be notified after each successful step.
      Returns:
      status code (0 = success, -1 = failed)
      Since:
      1.0