linesearch_adaptive

PURPOSE

Adaptive line search algorithm (step size selection) for descent methods.

SYNOPSIS

function [stepsize, newx, newkey, lsstats] = linesearch_adaptive(problem, x, d, f0, df0, options, storedb, key)

DESCRIPTION

 Adaptive line search algorithm (step size selection) for descent methods.

 function [stepsize, newx, newkey, lsstats] = 
        linesearch_adaptive(problem, x, d, f0, df0, options, storedb, key)

 Adaptive linesearch algorithm for descent methods, based on a simple
 backtracking method. Unlike linesearch.m, this function is not
 invariant under rescaling of the search direction d. The two line
 search methods differ mainly in their strategy for picking the
 initial step size.
 
 Below, the step is constructed as alpha*d, and the step size is the norm
 of that vector, thus: stepsize = alpha*norm_d. The step is executed by
 retracting the vector alpha*d from the current point x, giving newx.

 This line-search may create and maintain a structure called lsmem inside
 storedb.internal. This gives the linesearch the opportunity to remember
 what happened in the previous calls. This is typically used to make a
 first guess at the step size, based on previous events.

 Inputs/Outputs: see the help for linesearch.

 See also: steepestdescent conjugategradients linesearch
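
 For context, here is a minimal sketch of how this line search is
 typically selected, through the options structure passed to a solver
 such as steepestdescent. The manifold, cost and gradient below are
 illustrative assumptions, not part of this file:

    % Minimal usage sketch (assumes Manopt is on the path; the manifold,
    % cost and gradient are made up for illustration).
    n = 5;
    A = randn(n); A = .5*(A + A');     % symmetric matrix: Rayleigh quotient
    problem.M = spherefactory(n);      % unit sphere in R^n
    problem.cost = @(x) x'*A*x;
    problem.egrad = @(x) 2*A*x;        % Euclidean gradient; Manopt converts it

    % Ask the solver to use the adaptive line search.
    options.linesearch = @linesearch_adaptive;
    x = steepestdescent(problem, [], options);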

SOURCE CODE

function [stepsize, newx, newkey, lsstats] = ...
  linesearch_adaptive(problem, x, d, f0, df0, options, storedb, key)
% Adaptive line search algorithm (step size selection) for descent methods.
%
% function [stepsize, newx, newkey, lsstats] =
%        linesearch_adaptive(problem, x, d, f0, df0, options, storedb, key)
%
% Adaptive linesearch algorithm for descent methods, based on a simple
% backtracking method. Unlike linesearch.m, this function is not
% invariant under rescaling of the search direction d. The two line
% search methods differ mainly in their strategy for picking the
% initial step size.
%
% Below, the step is constructed as alpha*d, and the step size is the norm
% of that vector, thus: stepsize = alpha*norm_d. The step is executed by
% retracting the vector alpha*d from the current point x, giving newx.
%
% This line-search may create and maintain a structure called lsmem inside
% storedb.internal. This gives the linesearch the opportunity to remember
% what happened in the previous calls. This is typically used to make a
% first guess at the step size, based on previous events.
%
% Inputs/Outputs: see the help for linesearch.
%
% See also: steepestdescent conjugategradients linesearch

% This file is part of Manopt: www.manopt.org.
% Original author: Bamdev Mishra, Dec. 30, 2012.
% Contributors: Nicolas Boumal
% Change log:
%
%   Sept. 13, 2013 (NB):
%       The automatic direction reversal feature was removed (it triggered
%       when df0 > 0). Direction reversal is a decision that needs to be
%       made by the solver, so it can know about it.
%
%   Nov. 7, 2013 (NB):
%       The whole function has been recoded to mimic more closely the new
%       version of linesearch.m. The parameters are available through the
%       options structure passed to the solver and have the same names and
%       same meaning as for the base linesearch. The information is logged
%       more reliably.
%
%   April 3, 2015 (NB):
%       Works with the new StoreDB class system.
%
%   April 8, 2015 (NB):
%       Got rid of lsmem input/output: now maintained in storedb.internal.
%
%   Aug. 2, 2018 (NB):
%       Now using storedb.remove() to keep the cache lean.


    % Allow omission of the key, and even of storedb.
    if ~exist('key', 'var')
        if ~exist('storedb', 'var')
            storedb = StoreDB();
        end
        key = storedb.getNewKey();
    end

    % Backtracking default parameters. These can be overwritten in the
    % options structure which is passed to the solver.
    default_options.ls_contraction_factor = .5;  % step size reduction per backtracking step
    default_options.ls_suff_decr = .5;           % sufficient decrease parameter (Armijo)
    default_options.ls_max_steps = 10;           % maximum number of cost evaluations
    default_options.ls_initial_stepsize = 1;     % norm of the first trial step, absent memory

    if ~exist('options', 'var') || isempty(options)
        options = struct();
    end
    options = mergeOptions(default_options, options);

    contraction_factor = options.ls_contraction_factor;
    suff_decr = options.ls_suff_decr;
    max_ls_steps = options.ls_max_steps;
    initial_stepsize = options.ls_initial_stepsize;

    % Compute the norm of the search direction.
    norm_d = problem.M.norm(x, d);

    % Fall back to the user supplied suggestion for the initial step size.
    % Setting alpha here also guards against the case where lsmem exists
    % but contains no suggestion, which would otherwise leave alpha
    % undefined below.
    alpha = initial_stepsize / norm_d;

    % If this is not the first iteration, then lsmem should have been
    % filled with a suggestion for the initial step: prefer that one.
    if isfield(storedb.internal, 'lsmem')
        lsmem = storedb.internal.lsmem;
        if isfield(lsmem, 'init_alpha')
            % Pick the initial step size based on where we were last time.
            alpha = lsmem.init_alpha;
        end
    end
    % Make the chosen step and compute the cost there.
    newx = problem.M.retr(x, d, alpha);
    newkey = storedb.getNewKey();
    newf = getCost(problem, newx, storedb, newkey);
    cost_evaluations = 1;

    % Backtrack while the Armijo sufficient decrease criterion is not
    % satisfied, that is, while newf exceeds f0 + suff_decr*alpha*df0.
    % (Note that df0 < 0 for a descent direction, so the bound shrinks
    % as alpha grows.)
    while newf > f0 + suff_decr*alpha*df0

        % Reduce the step size,
        alpha = contraction_factor * alpha;

        % and look closer down the line.
        storedb.remove(newkey);              % we no longer need this cache
        newx = problem.M.retr(x, d, alpha);
        newkey = storedb.getNewKey();
        newf = getCost(problem, newx, storedb, newkey);
        cost_evaluations = cost_evaluations + 1;

        % Make sure we don't run out of budget.
        if cost_evaluations >= max_ls_steps
            break;
        end

    end

    % If we got here without obtaining a decrease, we reject the step.
    if newf > f0
        alpha = 0;
        newx = x;
        newkey = key;
        newf = f0; %#ok<NASGU>
    end

    % As seen outside this function, stepsize is the size of the vector we
    % retract to make the step from x to newx. Since the step is alpha*d:
    stepsize = alpha * norm_d;

    % Fill lsmem with a suggestion for what the next initial step size
    % trial should be. On average, we intend to do only one extra cost
    % evaluation. Notice how the suggestion is not about stepsize but about
    % alpha. This is the reason why this line search is not invariant under
    % rescaling of the search direction d.
    switch cost_evaluations
        case 1
            % If things go very well, push your luck.
            init_alpha = 2 * alpha;
        case 2
            % If things go reasonably well, try to keep pace.
            init_alpha = alpha;
        otherwise
            % If we backtracked a lot, the new stepsize is probably quite
            % small: try to recover.
            init_alpha = 2 * alpha;
    end
    storedb.internal.lsmem.init_alpha = init_alpha;

    % Also return some statistics, for possible analysis.
    lsstats.costevals = cost_evaluations;
    lsstats.stepsize = stepsize;
    lsstats.alpha = alpha;

end
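
For completeness, here is a sketch of a direct call, under the assumption
that the problem structure defines a manifold M together with a cost and
gradient. Solvers normally make this call internally; getCostGrad and the
other helpers below are existing Manopt tools:

    % Direct invocation sketch (normally a solver does this internally).
    x = problem.M.rand();                    % random starting point
    storedb = StoreDB();
    key = storedb.getNewKey();
    [f0, grad] = getCostGrad(problem, x, storedb, key);
    d = problem.M.lincomb(x, -1, grad);      % steepest descent direction
    df0 = problem.M.inner(x, grad, d);       % directional derivative, negative
    [stepsize, newx, newkey, lsstats] = ...
        linesearch_adaptive(problem, x, d, f0, df0, struct(), storedb, key);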
