
steepestdescent

PURPOSE

Steepest descent (gradient descent) minimization algorithm for Manopt.

SYNOPSIS

function [x, cost, info, options] = steepestdescent(problem, x, options)

DESCRIPTION

 Steepest descent (gradient descent) minimization algorithm for Manopt.

 function [x, cost, info, options] = steepestdescent(problem)
 function [x, cost, info, options] = steepestdescent(problem, x0)
 function [x, cost, info, options] = steepestdescent(problem, x0, options)
 function [x, cost, info, options] = steepestdescent(problem, [], options)

 Apply the steepest descent minimization algorithm to the problem defined
 in the problem structure, starting at x0 if it is provided (otherwise, at
 a random point on the manifold). To specify options whilst not specifying
 an initial guess, give x0 as [] (the empty matrix).
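
 For instance, here is a minimal usage sketch (the sphere manifold and
 the cost function below are illustrative assumptions, not part of this
 solver):

   % Minimize x'*A*x over the unit sphere in R^n (illustrative).
   n = 100;
   A = randn(n); A = .5*(A + A');    % random symmetric matrix
   problem.M = spherefactory(n);
   problem.cost = @(x) x'*(A*x);
   problem.egrad = @(x) 2*A*x;       % Euclidean gradient; Manopt converts it
   x0 = problem.M.rand();            % optional initial guess
   options.maxiter = 500;            % optional options structure
   [x, cost, info] = steepestdescent(problem, x0, options);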

 In most of the examples bundled with the toolbox (see link below), the
 solver can be replaced by the present one if need be.

 The outputs x and cost are the best point reached on the manifold and its
 cost. The struct-array info contains information about the iterations:
   iter : the iteration number (0 for the initial guess)
   cost : cost value
   time : elapsed time in seconds
   gradnorm : Riemannian norm of the gradient
   stepsize : norm of the last tangent vector retracted
   linesearch : information logged by options.linesearch
   And possibly additional information logged by options.statsfun.
 For example, type [info.gradnorm] to obtain a vector of the successive
 gradient norms reached.
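
 For instance, to visualize convergence after a run (a quick sketch using
 standard MATLAB plotting):

   semilogy([info.iter], [info.gradnorm], '.-');
   xlabel('Iteration number');
   ylabel('Norm of the gradient');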

 The options structure is used to override the default values. All
 options have a default value and are hence optional. To force an option
 value, pass an options structure with a field options.optionname, where
 optionname is one of the following and the default value is indicated
 between parentheses; a usage sketch follows the list:

   tolgradnorm (1e-6)
       The algorithm terminates if the norm of the gradient drops below this.
   maxiter (1000)
       The algorithm terminates if maxiter iterations have been executed.
   maxtime (Inf)
       The algorithm terminates if maxtime seconds have elapsed.
   minstepsize (1e-10)
       The algorithm terminates if the linesearch returns a displacement
       vector (to be retracted) smaller in norm than this value.
   linesearch (@linesearch or @linesearch_hint)
       Function handle to a line search function. The options structure is
       passed to the line search too, so you can pass it parameters. See
       each line search's documentation for info.
       If the problem structure includes a line search hint, then the
       default line search used is @linesearch_hint; otherwise
       the default is @linesearch.
       There are other line search algorithms available in
       /manopt/solvers/linesearch/. For example:
       - @linesearch_adaptive
       - @linesearch_constant
       See their documentation with the help command.
   statsfun (none)
       Function handle to a function that will be called after each
       iteration to provide the opportunity to log additional statistics.
       They will be returned in the info struct. See the generic Manopt
       documentation about solvers for further information.
   stopfun (none)
       Function handle to a function that will be called at each iteration
       to provide the opportunity to specify additional stopping criteria.
       See the generic Manopt documentation about solvers for further
       information.
   verbosity (3)
       Integer number used to tune the amount of output the algorithm
       generates during execution (mostly as text in the command window).
       The higher, the more output. 0 means silent.
   storedepth (2)
       Maximum number of different points x of the manifold for which a
       store structure will be kept in memory in the storedb for caching.
       For the SD algorithm, a store depth of 2 should always be
       sufficient.
   hook (none)
       A function handle which allows the user to change the current point
       x at the beginning of each iteration, before the stopping criterion
       is evaluated. See applyHook for help on how to use this option.
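
 As an illustration, the sketch below sets a few of these options,
 including a statsfun and a stopfun. The reference point xref used in
 the statsfun is an assumption of this example, not something the solver
 provides:

   options.tolgradnorm = 1e-8;
   options.maxiter = 200;
   options.linesearch = @linesearch_adaptive;
   % Log the distance from each iterate to a reference point xref,
   % assumed to be defined beforehand.
   options.statsfun = @(problem, x, stats) ...
                  setfield(stats, 'distref', problem.M.dist(x, xref));
   % Stop as soon as the cost drops below a target value.
   options.stopfun = @(problem, x, info, last) info(last).cost < 1e-3;
   [x, cost, info] = steepestdescent(problem, [], options);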


 See also: conjugategradient trustregions manopt/solvers/linesearch manopt/examples

SUBFUNCTIONS

function stats = savestats()
SOURCE CODE

function [x, cost, info, options] = steepestdescent(problem, x, options)
% Steepest descent (gradient descent) minimization algorithm for Manopt.
%
% function [x, cost, info, options] = steepestdescent(problem)
% function [x, cost, info, options] = steepestdescent(problem, x0)
% function [x, cost, info, options] = steepestdescent(problem, x0, options)
% function [x, cost, info, options] = steepestdescent(problem, [], options)
%
% Apply the steepest descent minimization algorithm to the problem defined
% in the problem structure, starting at x0 if it is provided (otherwise, at
% a random point on the manifold). To specify options whilst not specifying
% an initial guess, give x0 as [] (the empty matrix).
%
% In most of the examples bundled with the toolbox (see link below), the
% solver can be replaced by the present one if need be.
%
% The outputs x and cost are the best point reached on the manifold and its
% cost. The struct-array info contains information about the iterations:
%   iter : the iteration number (0 for the initial guess)
%   cost : cost value
%   time : elapsed time in seconds
%   gradnorm : Riemannian norm of the gradient
%   stepsize : norm of the last tangent vector retracted
%   linesearch : information logged by options.linesearch
%   And possibly additional information logged by options.statsfun.
% For example, type [info.gradnorm] to obtain a vector of the successive
% gradient norms reached.
%
% The options structure is used to override the default values. All
% options have a default value and are hence optional. To force an option
% value, pass an options structure with a field options.optionname, where
% optionname is one of the following and the default value is indicated
% between parentheses:
%
%   tolgradnorm (1e-6)
%       The algorithm terminates if the norm of the gradient drops below this.
%   maxiter (1000)
%       The algorithm terminates if maxiter iterations have been executed.
%   maxtime (Inf)
%       The algorithm terminates if maxtime seconds have elapsed.
%   minstepsize (1e-10)
%       The algorithm terminates if the linesearch returns a displacement
%       vector (to be retracted) smaller in norm than this value.
%   linesearch (@linesearch or @linesearch_hint)
%       Function handle to a line search function. The options structure is
%       passed to the line search too, so you can pass it parameters. See
%       each line search's documentation for info.
%       If the problem structure includes a line search hint, then the
%       default line search used is @linesearch_hint; otherwise
%       the default is @linesearch.
%       There are other line search algorithms available in
%       /manopt/solvers/linesearch/. For example:
%       - @linesearch_adaptive
%       - @linesearch_constant
%       See their documentation with the help command.
%   statsfun (none)
%       Function handle to a function that will be called after each
%       iteration to provide the opportunity to log additional statistics.
%       They will be returned in the info struct. See the generic Manopt
%       documentation about solvers for further information.
%   stopfun (none)
%       Function handle to a function that will be called at each iteration
%       to provide the opportunity to specify additional stopping criteria.
%       See the generic Manopt documentation about solvers for further
%       information.
%   verbosity (3)
%       Integer number used to tune the amount of output the algorithm
%       generates during execution (mostly as text in the command window).
%       The higher, the more output. 0 means silent.
%   storedepth (2)
%       Maximum number of different points x of the manifold for which a
%       store structure will be kept in memory in the storedb for caching.
%       For the SD algorithm, a store depth of 2 should always be
%       sufficient.
%   hook (none)
%       A function handle which allows the user to change the current point
%       x at the beginning of each iteration, before the stopping criterion
%       is evaluated. See applyHook for help on how to use this option.
%
%
% See also: conjugategradient trustregions manopt/solvers/linesearch manopt/examples

% This file is part of Manopt: www.manopt.org.
% Original author: Nicolas Boumal, Dec. 30, 2012.
% Contributors:
% Change log:
%
%   April 3, 2015 (NB):
%       Works with the new StoreDB class system.
%
%   Aug. 2, 2018 (NB):
%       Now using storedb.remove() to keep the cache lean.
%
%   July 19, 2020 (NB):
%       Added support for options.hook.

    % Verify that the problem description is sufficient for the solver.
    if ~canGetCost(problem)
        warning('manopt:getCost', ...
                'No cost provided. The algorithm will likely abort.');
    end
    if ~canGetGradient(problem) && ~canGetApproxGradient(problem)
        % Note: we do not give a warning if an approximate gradient is
        % explicitly given in the problem description, as in that case the
        % user seems to be aware of the issue.
        warning('manopt:getGradient:approx', ...
               ['No gradient provided. Using an FD approximation instead (slow).\n' ...
                'It may be necessary to increase options.tolgradnorm.\n' ...
                'To disable this warning: warning(''off'', ''manopt:getGradient:approx'')']);
        problem.approxgrad = approxgradientFD(problem);
    end

    % Set local defaults here.
    localdefaults.minstepsize = 1e-10;
    localdefaults.maxiter = 1000;
    localdefaults.tolgradnorm = 1e-6;

    % Depending on whether the problem structure specifies a hint for
    % line-search algorithms, choose a default line-search that works on
    % its own (typical) or that uses the hint.
    if ~canGetLinesearch(problem)
        localdefaults.linesearch = @linesearch;
    else
        localdefaults.linesearch = @linesearch_hint;
    end

    % Merge global and local defaults, then merge with user options, if any.
    localdefaults = mergeOptions(getGlobalDefaults(), localdefaults);
    if ~exist('options', 'var') || isempty(options)
        options = struct();
    end
    options = mergeOptions(localdefaults, options);

    timetic = tic();

    % If no initial point x is given by the user, generate one at random.
    if ~exist('x', 'var') || isempty(x)
        x = problem.M.rand();
    end

    % Create a store database and get a key for the current x.
    storedb = StoreDB(options.storedepth);
    key = storedb.getNewKey();

    % Compute objective-related quantities for x.
    [cost, grad] = getCostGrad(problem, x, storedb, key);
    gradnorm = problem.M.norm(x, grad);

    % Iteration counter.
    % At any point, iter is the number of fully executed iterations so far.
    iter = 0;

    % Save stats in a struct array info, and preallocate.
    stats = savestats();
    info(1) = stats;
    info(min(10000, options.maxiter+1)).iter = [];

    if options.verbosity >= 2
        fprintf(' iter\t               cost val\t    grad. norm\n');
    end

    % Start iterating until a stopping criterion triggers.
    while true

        % Display iteration information.
        if options.verbosity >= 2
            fprintf('%5d\t%+.16e\t%.8e\n', iter, cost, gradnorm);
        end

        % Start timing this iteration.
        timetic = tic();

        % Apply the hook function if there is one: this allows external code
        % to move x to another point. If the point is changed (indicated by a
        % true value for the boolean 'hooked'), we update our knowledge about x.
        [x, key, info, hooked] = applyHook(problem, x, storedb, key, ...
                                                    options, info, iter+1);
        if hooked
            [cost, grad] = getCostGrad(problem, x, storedb, key);
            gradnorm = problem.M.norm(x, grad);
        end

        % Run standard stopping criterion checks.
        [stop, reason] = stoppingcriterion(problem, x, options, ...
                                                             info, iter+1);

        % If none triggered, run the solver-specific stopping criterion check.
        if ~stop && stats.stepsize < options.minstepsize
            stop = true;
            reason = sprintf(['Last stepsize smaller than minimum '  ...
                              'allowed; options.minstepsize = %g.'], ...
                              options.minstepsize);
        end

        if stop
            if options.verbosity >= 1
                fprintf([reason '\n']);
            end
            break;
        end

        % Pick the descent direction as minus the gradient.
        desc_dir = problem.M.lincomb(x, -1, grad);

        % Execute the line search. The fifth argument, -gradnorm^2, is the
        % directional derivative of the cost at x along desc_dir.
        [stepsize, newx, newkey, lsstats] = options.linesearch( ...
                             problem, x, desc_dir, cost, -gradnorm^2, ...
                             options, storedb, key);

        % Compute the cost-related quantities at the new point newx.
        [newcost, newgrad] = getCostGrad(problem, newx, storedb, newkey);
        newgradnorm = problem.M.norm(newx, newgrad);

        % Transfer iterate info, remove cache from previous x.
        storedb.removefirstifdifferent(key, newkey);
        x = newx;
        key = newkey;
        cost = newcost;
        grad = newgrad;
        gradnorm = newgradnorm;

        % Make sure we don't use too much memory for the store database.
        storedb.purge();

        % iter is the number of iterations we have accomplished.
        iter = iter + 1;

        % Log statistics for the freshly executed iteration.
        stats = savestats();
        info(iter+1) = stats;

    end

    info = info(1:iter+1);

    if options.verbosity >= 1
        fprintf('Total time is %f [s] (excludes statsfun)\n', ...
                info(end).time);
    end

    % Routine in charge of collecting the current iteration stats.
    function stats = savestats()
        stats.iter = iter;
        stats.cost = cost;
        stats.gradnorm = gradnorm;
        if iter == 0
            stats.stepsize = NaN;
            stats.time = toc(timetic);
            stats.linesearch = [];
        else
            stats.stepsize = stepsize;
            stats.time = info(iter).time + toc(timetic);
            stats.linesearch = lsstats;
        end
        stats = applyStatsfun(problem, x, storedb, key, options, stats);
    end

end
