
trustregions

PURPOSE

Riemannian trust-regions solver for optimization on manifolds.

SYNOPSIS

function [x, cost, info, options] = trustregions(problem, x, options)

DESCRIPTION

 Riemannian trust-regions solver for optimization on manifolds.

 function [x, cost, info, options] = trustregions(problem)
 function [x, cost, info, options] = trustregions(problem, x0)
 function [x, cost, info, options] = trustregions(problem, x0, options)
 function [x, cost, info, options] = trustregions(problem, [], options)

 This is the Riemannian Trust-Region solver (with tCG inner solve), named
 RTR. This solver will attempt to minimize the cost function described in
 the problem structure. It requires the availability of the cost function
 and of its gradient. It will issue calls for the Hessian. If no Hessian
 nor approximate Hessian is provided, a standard approximation of the
 Hessian based on the gradient will be computed. If a preconditioner for
 the Hessian is provided, it will be used.

 If no gradient is provided, an approximation of the gradient is computed,
 but this can be slow for manifolds of high dimension.

 For a description of the algorithm and theorems offering convergence
 guarantees, see the references below. Documentation for this solver is
 available online at:

 http://www.manopt.org/solver_documentation_trustregions.html


 The initial iterate is x0 if it is provided. Otherwise, a random point on
 the manifold is picked. To specify options whilst not specifying an
 initial iterate, give x0 as [] (the empty matrix).
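
 As a minimal usage sketch (not part of this file), here is how one might
 minimize the Rayleigh quotient -x'*A*x over the unit sphere, assuming the
 rest of the Manopt toolbox (in particular spherefactory) is on the path.
 The fields egrad and ehess hold Euclidean derivatives, which Manopt
 converts to their Riemannian counterparts automatically:

   n = 100;
   A = randn(n); A = (A + A')/2;          % random symmetric matrix
   problem.M = spherefactory(n);          % unit sphere in R^n
   problem.cost  = @(x) -x'*(A*x);        % cost to minimize
   problem.egrad = @(x) -2*A*x;           % Euclidean gradient
   problem.ehess = @(x, xdot) -2*A*xdot;  % Euclidean Hessian along xdot
   x0 = problem.M.rand();                 % optional initial iterate
   [x, xcost, info] = trustregions(problem, x0);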

 The two outputs 'x' and 'cost' are the last reached point on the manifold
 and its cost. Notice that x is not necessarily the best reached point,
 because this solver is not forced to be a descent method. In particular,
 very close to convergence, it is sometimes preferable to accept very
 slight increases in the cost value (on the order of the machine epsilon)
 in the process of reaching fine convergence.
 
 The output 'info' is a struct-array which contains information about the
 iterations:
   iter (integer)
       The (outer) iteration number, or number of steps considered
       (whether accepted or rejected). The initial guess is 0.
    cost (double)
       The corresponding cost value.
    gradnorm (double)
       The (Riemannian) norm of the gradient.
    numinner (integer)
       The number of inner iterations executed to compute this iterate.
       Inner iterations are truncated-CG steps. Each one requires a
       Hessian (or approximate Hessian) evaluation.
    time (double)
       The total elapsed time in seconds to reach the corresponding cost.
    rho (double)
       The performance ratio for the iterate.
    rhonum, rhoden (double)
       Regularized numerator and denominator of the performance ratio:
       rho = rhonum/rhoden. See options.rho_regularization.
    accepted (boolean)
       Whether the proposed iterate was accepted or not.
    stepsize (double)
       The (Riemannian) norm of the vector returned by the inner solver
       tCG and which is retracted to obtain the proposed next iterate. If
       accepted = true for the corresponding iterate, this is the size of
       the step from the previous to the new iterate. If accepted is
       false, the step was not executed and this is the size of the
       rejected step.
    Delta (double)
       The trust-region radius at the outer iteration.
    cauchy (boolean)
       Whether the Cauchy point was used or not (if useRand is true).
   And possibly additional information logged by options.statsfun.
 For example, type [info.gradnorm] to obtain a vector of the successive
 gradient norms reached at each (outer) iteration.
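
 For instance, a quick convergence plot can be produced like this
 (illustrative only):

   semilogy([info.iter], [info.gradnorm], '.-');
   xlabel('Outer iteration number');
   ylabel('Gradient norm');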

 The options structure is used to overwrite the default values. All
 options have a default value and are hence optional. To force an option
 value, pass an options structure with a field options.optionname, where
 optionname is one of the following and the default value is indicated
 between parentheses:

   tolgradnorm (1e-6)
       The algorithm terminates if the norm of the gradient drops below
       this. For well-scaled problems, a rule of thumb is that you can
       expect to reduce the gradient norm by 8 orders of magnitude
       (sqrt(eps)) compared to the gradient norm at a "typical" point (a
       rough initial iterate for example). Further decrease is sometimes
       possible, but inexact floating point arithmetic will eventually
       limit the final accuracy. If tolgradnorm is set too low, the
       algorithm may end up iterating forever (or at least until another
       stopping criterion triggers).
   maxiter (1000)
       The algorithm terminates if maxiter (outer) iterations were executed.
   maxtime (Inf)
       The algorithm terminates if maxtime seconds elapsed.
    miniter (3)
       Minimum number of outer iterations (used only if useRand is true).
    mininner (1)
       Minimum number of inner iterations (for tCG).
    maxinner (problem.M.dim() : the manifold's dimension)
       Maximum number of inner iterations (for tCG).
    Delta_bar (problem.M.typicaldist() or sqrt(problem.M.dim()))
       Maximum trust-region radius. If you specify this parameter but not
       Delta0, then Delta0 will be set to 1/8 times this parameter.
   Delta0 (Delta_bar/8)
       Initial trust-region radius. If you observe a long plateau at the
       beginning of the convergence plot (gradient norm VS iteration), it
       may pay off to try to tune this parameter to shorten the plateau.
       You should not set this parameter without setting Delta_bar too (at
       a larger value).
    useRand (false)
       Set to true if the trust-region solve is to be initiated with a
       random tangent vector. If set to true, no preconditioner will be
       used. This option is set to true in some scenarios to escape saddle
       points, but is otherwise seldom activated.
    kappa (0.1)
       tCG inner kappa convergence tolerance.
       kappa > 0 is the linear convergence target rate: tCG will terminate
       early if the residual was reduced by a factor of kappa.
    theta (1.0)
       tCG inner theta convergence tolerance.
       1+theta (theta between 0 and 1) is the superlinear convergence
       target rate. tCG will terminate early if the residual was reduced
       by a power of 1+theta.
    rho_prime (0.1)
       Accept/reject threshold: if rho is at least rho_prime, the outer
       iteration is accepted; otherwise, it is rejected. If it is
       rejected, the trust-region radius will have been decreased.
       To ensure this, rho_prime (usually nonnegative) must be strictly
       smaller than 1/4. If rho_prime is negative, the algorithm is not
       guaranteed to produce monotonically decreasing cost values. It is
       strongly recommended to set rho_prime > 0, to aid convergence.
   rho_regularization (1e3)
       Close to convergence, evaluating the performance ratio rho is
       numerically challenging. Meanwhile, close to convergence, the
       quadratic model should be a good fit and the steps should be
       accepted. Regularization lets rho go to 1 as the model decrease and
       the actual decrease go to zero. Set this option to zero to disable
       regularization (not recommended). See in-code for the specifics.
       When this is not zero, it may happen that the iterates produced are
       not monotonically improving the cost when very close to
       convergence. This is because the corrected cost improvement could
       change sign if it is negative but very small.
   statsfun (none)
       Function handle to a function that will be called after each
       iteration to provide the opportunity to log additional statistics.
       They will be returned in the info struct. See the generic Manopt
       documentation about solvers for further information. statsfun is
       called with the point x that was reached last, after the
       accept/reject decision. See comment below.
   stopfun (none)
       Function handle to a function that will be called at each iteration
       to provide the opportunity to specify additional stopping criteria.
       See the generic Manopt documentation about solvers for further
       information.
   verbosity (2)
       Integer number used to tune the amount of output the algorithm
       generates during execution (mostly as text in the command window).
       The higher, the more output. 0 means silent. 3 and above includes a
       display of the options structure at the beginning of the execution.
   debug (false)
       Set to true to allow the algorithm to perform additional
       computations for debugging purposes. If a debugging test fails, you
       will be informed of it, usually via the command window. Be aware
       that these additional computations appear in the algorithm timings
       too, and may interfere with operations such as counting the number
       of cost evaluations, etc. (the debug calls get storedb too).
   storedepth (20)
       Maximum number of different points x of the manifold for which a
       store structure will be kept in memory in the storedb. If the
       caching features of Manopt are not used, this is irrelevant. If
       memory usage is an issue, you may try to lower this number.
       Profiling may then help to investigate if a performance hit was
       incurred as a result.
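
 For example, a call overriding a few of these defaults might look as
 follows (the values are illustrative only):

   options.tolgradnorm = 1e-8;
   options.maxiter = 500;
   options.Delta_bar = 10;     % Delta0 then defaults to Delta_bar/8
   options.verbosity = 1;
   [x, cost, info] = trustregions(problem, [], options);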

 Notice that statsfun is called with the point x that was reached last,
 after the accept/reject decision. Hence: if the step was accepted, we get
 that new x, with a store which only saw the call for the cost and for the
 gradient. If the step was rejected, we get the same x as previously, with
 the store structure containing everything that was computed at that point
 (possibly including previous rejects at that same point). Hence, statsfun
 should not be used in conjunction with the store to count operations for
 example. Instead, you should use storedb's shared memory for such
 purposes (either via storedb.shared, or via store.shared, see
 online documentation). It is however possible to use statsfun with the
 store to compute, for example, other merit functions on the point x
 (other than the actual cost function, that is).
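
 As an illustration, here is a sketch of a statsfun that logs, at each
 iteration, the distance from the current point to some reference point of
 interest. It assumes the generic statsfun signature documented for Manopt
 solvers (stats = statsfun(problem, x, stats)) and a manifold that provides
 a dist function:

   xref = problem.M.rand();    % hypothetical reference point
   options.statsfun = @(problem, x, stats) ...
               setfield(stats, 'dist_to_xref', problem.M.dist(x, xref));

 After the solver returns, [info.dist_to_xref] collects these values.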


 Please cite the Manopt paper as well as the research paper:
     @Article{genrtr,
       Title    = {Trust-region methods on {Riemannian} manifolds},
       Author   = {Absil, P.-A. and Baker, C. G. and Gallivan, K. A.},
       Journal  = {Foundations of Computational Mathematics},
       Year     = {2007},
       Number   = {3},
       Pages    = {303--330},
       Volume   = {7},
       Doi      = {10.1007/s10208-005-0179-9}
     }

 See also: steepestdescent conjugategradient manopt/examples


SUBFUNCTIONS

function stats = savestats(problem, x, storedb, key, options, k, fx, norm_grad, Delta, ticstart, info, rho, rhonum, rhoden, accept, numit, norm_eta, used_cauchy)

SOURCE CODE

0001 function [x, cost, info, options] = trustregions(problem, x, options)
0002 % Riemannian trust-regions solver for optimization on manifolds.
0003 %
0004 % function [x, cost, info, options] = trustregions(problem)
0005 % function [x, cost, info, options] = trustregions(problem, x0)
0006 % function [x, cost, info, options] = trustregions(problem, x0, options)
0007 % function [x, cost, info, options] = trustregions(problem, [], options)
0008 %
0009 % This is the Riemannian Trust-Region solver (with tCG inner solve), named
0010 % RTR. This solver will attempt to minimize the cost function described in
0011 % the problem structure. It requires the availability of the cost function
0012 % and of its gradient. It will issue calls for the Hessian. If no Hessian
0013 % nor approximate Hessian is provided, a standard approximation of the
0014 % Hessian based on the gradient will be computed. If a preconditioner for
0015 % the Hessian is provided, it will be used.
0016 %
0017 % If no gradient is provided, an approximation of the gradient is computed,
0018 % but this can be slow for manifolds of high dimension.
0019 %
0020 % For a description of the algorithm and theorems offering convergence
0021 % guarantees, see the references below. Documentation for this solver is
0022 % available online at:
0023 %
0024 % http://www.manopt.org/solver_documentation_trustregions.html
0025 %
0026 %
0027 % The initial iterate is x0 if it is provided. Otherwise, a random point on
0028 % the manifold is picked. To specify options whilst not specifying an
0029 % initial iterate, give x0 as [] (the empty matrix).
0030 %
0031 % The two outputs 'x' and 'cost' are the last reached point on the manifold
0032 % and its cost. Notice that x is not necessarily the best reached point,
0033 % because this solver is not forced to be a descent method. In particular,
0034 % very close to convergence, it is sometimes preferable to accept very
0035 % slight increases in the cost value (on the order of the machine epsilon)
0036 % in the process of reaching fine convergence.
0037 %
0038 % The output 'info' is a struct-array which contains information about the
0039 % iterations:
0040 %   iter (integer)
0041 %       The (outer) iteration number, or number of steps considered
0042 %       (whether accepted or rejected). The initial guess is 0.
0043 %    cost (double)
0044 %       The corresponding cost value.
0045 %    gradnorm (double)
0046 %       The (Riemannian) norm of the gradient.
0047 %    numinner (integer)
0048 %       The number of inner iterations executed to compute this iterate.
0049 %       Inner iterations are truncated-CG steps. Each one requires a
0050 %       Hessian (or approximate Hessian) evaluation.
0051 %    time (double)
0052 %       The total elapsed time in seconds to reach the corresponding cost.
0053 %    rho (double)
0054 %       The performance ratio for the iterate.
0055 %    rhonum, rhoden (double)
0056 %       Regularized numerator and denominator of the performance ratio:
0057 %       rho = rhonum/rhoden. See options.rho_regularization.
0058 %    accepted (boolean)
0059 %       Whether the proposed iterate was accepted or not.
0060 %    stepsize (double)
0061 %       The (Riemannian) norm of the vector returned by the inner solver
0062 %       tCG and which is retracted to obtain the proposed next iterate. If
0063 %       accepted = true for the corresponding iterate, this is the size of
0064 %       the step from the previous to the new iterate. If accepted is
0065 %       false, the step was not executed and this is the size of the
0066 %       rejected step.
0067 %    Delta (double)
0068 %       The trust-region radius at the outer iteration.
0069 %    cauchy (boolean)
0070 %       Whether the Cauchy point was used or not (if useRand is true).
0071 %   And possibly additional information logged by options.statsfun.
0072 % For example, type [info.gradnorm] to obtain a vector of the successive
0073 % gradient norms reached at each (outer) iteration.
0074 %
0075 % The options structure is used to overwrite the default values. All
0076 % options have a default value and are hence optional. To force an option
0077 % value, pass an options structure with a field options.optionname, where
0078 % optionname is one of the following and the default value is indicated
0079 % between parentheses:
0080 %
0081 %   tolgradnorm (1e-6)
0082 %       The algorithm terminates if the norm of the gradient drops below
0083 %       this. For well-scaled problems, a rule of thumb is that you can
0084 %       expect to reduce the gradient norm by 8 orders of magnitude
0085 %       (sqrt(eps)) compared to the gradient norm at a "typical" point (a
0086 %       rough initial iterate for example). Further decrease is sometimes
0087 %       possible, but inexact floating point arithmetic will eventually
0088 %       limit the final accuracy. If tolgradnorm is set too low, the
0089 %       algorithm may end up iterating forever (or at least until another
0090 %       stopping criterion triggers).
0091 %   maxiter (1000)
0092 %       The algorithm terminates if maxiter (outer) iterations were executed.
0093 %   maxtime (Inf)
0094 %       The algorithm terminates if maxtime seconds elapsed.
0095 %    miniter (3)
0096 %       Minimum number of outer iterations (used only if useRand is true).
0097 %    mininner (1)
0098 %       Minimum number of inner iterations (for tCG).
0099 %    maxinner (problem.M.dim() : the manifold's dimension)
0100 %       Maximum number of inner iterations (for tCG).
0101 %    Delta_bar (problem.M.typicaldist() or sqrt(problem.M.dim()))
0102 %       Maximum trust-region radius. If you specify this parameter but not
0103 %       Delta0, then Delta0 will be set to 1/8 times this parameter.
0104 %   Delta0 (Delta_bar/8)
0105 %       Initial trust-region radius. If you observe a long plateau at the
0106 %       beginning of the convergence plot (gradient norm VS iteration), it
0107 %       may pay off to try to tune this parameter to shorten the plateau.
0108 %       You should not set this parameter without setting Delta_bar too (at
0109 %       a larger value).
0110 %    useRand (false)
0111 %       Set to true if the trust-region solve is to be initiated with a
0112 %       random tangent vector. If set to true, no preconditioner will be
0113 %       used. This option is set to true in some scenarios to escape saddle
0114 %       points, but is otherwise seldom activated.
0115 %    kappa (0.1)
0116 %       tCG inner kappa convergence tolerance.
0117 %       kappa > 0 is the linear convergence target rate: tCG will terminate
0118 %       early if the residual was reduced by a factor of kappa.
0119 %    theta (1.0)
0120 %       tCG inner theta convergence tolerance.
0121 %       1+theta (theta between 0 and 1) is the superlinear convergence
0122 %       target rate. tCG will terminate early if the residual was reduced
0123 %       by a power of 1+theta.
0124 %    rho_prime (0.1)
0125 %       Accept/reject threshold: if rho is at least rho_prime, the outer
0126 %       iteration is accepted; otherwise, it is rejected. If it is
0127 %       rejected, the trust-region radius will have been decreased.
0128 %       To ensure this, rho_prime (usually nonnegative) must be strictly
0129 %       smaller than 1/4. If rho_prime is negative, the algorithm is not
0130 %       guaranteed to produce monotonically decreasing cost values. It is
0131 %       strongly recommended to set rho_prime > 0, to aid convergence.
0132 %   rho_regularization (1e3)
0133 %       Close to convergence, evaluating the performance ratio rho is
0134 %       numerically challenging. Meanwhile, close to convergence, the
0135 %       quadratic model should be a good fit and the steps should be
0136 %       accepted. Regularization lets rho go to 1 as the model decrease and
0137 %       the actual decrease go to zero. Set this option to zero to disable
0138 %       regularization (not recommended). See in-code for the specifics.
0139 %       When this is not zero, it may happen that the iterates produced are
0140 %       not monotonically improving the cost when very close to
0141 %       convergence. This is because the corrected cost improvement could
0142 %       change sign if it is negative but very small.
0143 %   statsfun (none)
0144 %       Function handle to a function that will be called after each
0145 %       iteration to provide the opportunity to log additional statistics.
0146 %       They will be returned in the info struct. See the generic Manopt
0147 %       documentation about solvers for further information. statsfun is
0148 %       called with the point x that was reached last, after the
0149 %       accept/reject decision. See comment below.
0150 %   stopfun (none)
0151 %       Function handle to a function that will be called at each iteration
0152 %       to provide the opportunity to specify additional stopping criteria.
0153 %       See the generic Manopt documentation about solvers for further
0154 %       information.
0155 %   verbosity (2)
0156 %       Integer number used to tune the amount of output the algorithm
0157 %       generates during execution (mostly as text in the command window).
0158 %       The higher, the more output. 0 means silent. 3 and above includes a
0159 %       display of the options structure at the beginning of the execution.
0160 %   debug (false)
0161 %       Set to true to allow the algorithm to perform additional
0162 %       computations for debugging purposes. If a debugging test fails, you
0163 %       will be informed of it, usually via the command window. Be aware
0164 %       that these additional computations appear in the algorithm timings
0165 %       too, and may interfere with operations such as counting the number
0166 %       of cost evaluations, etc. (the debug calls get storedb too).
0167 %   storedepth (20)
0168 %       Maximum number of different points x of the manifold for which a
0169 %       store structure will be kept in memory in the storedb. If the
0170 %       caching features of Manopt are not used, this is irrelevant. If
0171 %       memory usage is an issue, you may try to lower this number.
0172 %       Profiling may then help to investigate if a performance hit was
0173 %       incurred as a result.
0174 %
0175 % Notice that statsfun is called with the point x that was reached last,
0176 % after the accept/reject decision. Hence: if the step was accepted, we get
0177 % that new x, with a store which only saw the call for the cost and for the
0178 % gradient. If the step was rejected, we get the same x as previously, with
0179 % the store structure containing everything that was computed at that point
0180 % (possibly including previous rejects at that same point). Hence, statsfun
0181 % should not be used in conjunction with the store to count operations for
0182 % example. Instead, you should use storedb's shared memory for such
0183 % purposes (either via storedb.shared, or via store.shared, see
0184 % online documentation). It is however possible to use statsfun with the
0185 % store to compute, for example, other merit functions on the point x
0186 % (other than the actual cost function, that is).
0187 %
0188 %
0189 % Please cite the Manopt paper as well as the research paper:
0190 %     @Article{genrtr,
0191 %       Title    = {Trust-region methods on {Riemannian} manifolds},
0192 %       Author   = {Absil, P.-A. and Baker, C. G. and Gallivan, K. A.},
0193 %       Journal  = {Foundations of Computational Mathematics},
0194 %       Year     = {2007},
0195 %       Number   = {3},
0196 %       Pages    = {303--330},
0197 %       Volume   = {7},
0198 %       Doi      = {10.1007/s10208-005-0179-9}
0199 %     }
0200 %
0201 % See also: steepestdescent conjugategradient manopt/examples
0202 
0203 % An explicit, general listing of this algorithm, with preconditioning,
0204 % can be found in the following paper:
0205 %     @Article{boumal2015lowrank,
0206 %       Title   = {Low-rank matrix completion via preconditioned optimization on the {G}rassmann manifold},
0207 %       Author  = {Boumal, N. and Absil, P.-A.},
0208 %       Journal = {Linear Algebra and its Applications},
0209 %       Year    = {2015},
0210 %       Pages   = {200--239},
0211 %       Volume  = {475},
0212 %       Doi     = {10.1016/j.laa.2015.02.027},
0213 %     }
0214 
0215 % When the Hessian is not specified, it is approximated with
0216 % finite-differences of the gradient. The resulting method is called
0217 % RTR-FD. Some convergence theory for it is available in this paper:
0218 % @incollection{boumal2015rtrfd
0219 %     author={Boumal, N.},
0220 %     title={Riemannian trust regions with finite-difference Hessian approximations are globally convergent},
0221 %     year={2015},
0222 %     booktitle={Geometric Science of Information}
0223 % }
0224 
0225 
0226 % This file is part of Manopt: www.manopt.org.
0227 % This code is an adaptation to Manopt of the original GenRTR code:
0228 % RTR - Riemannian Trust-Region
0229 % (c) 2004-2007, P.-A. Absil, C. G. Baker, K. A. Gallivan
0230 % Florida State University
0231 % School of Computational Science
0232 % (http://www.math.fsu.edu/~cbaker/GenRTR/?page=download)
0233 % See accompanying license file.
0234 % The adaptation was executed by Nicolas Boumal.
0235 %
0236 %
0237 % Change log:
0238 %
0239 %   NB April 3, 2013:
0240 %       tCG now returns the Hessian along the returned direction eta, so
0241 %       that we do not compute that Hessian redundantly: some savings at
0242 %       each iteration. Similarly, if the useRand flag is on, we spare an
0243 %       extra Hessian computation at each outer iteration too, owing to
0244 %       some modifications in the Cauchy point section of the code specific
0245 %       to useRand = true.
0246 %
0247 %   NB Aug. 22, 2013:
0248 %       This function is now Octave compatible. The transition called for
0249 %       two changes which would otherwise not be advisable. (1) tic/toc is
0250 %       now used as is, as opposed to the safer way:
0251 %       t = tic(); elapsed = toc(t);
0252 %       And (2), the (formerly inner) function savestats was moved outside
0253 %       the main function to not be nested anymore. This is arguably less
0254 %       elegant, but Octave does not (and likely will not) support nested
0255 %       functions.
0256 %
0257 %   NB Dec. 2, 2013:
0258 %       The in-code documentation was largely revised and expanded.
0259 %
0260 %   NB Dec. 2, 2013:
0261 %       The former heuristic which triggered when rhonum was very small and
0262 %       forced rho = 1 has been replaced by a smoother heuristic which
0263 %       consists in regularizing rhonum and rhoden before computing their
0264 %       ratio. It is tunable via options.rho_regularization. Furthermore,
0265 %       the solver now detects if tCG did not obtain a model decrease
0266 %       (which is theoretically impossible but may happen because of
0267 %       numerical errors and/or because of a nonlinear/nonsymmetric Hessian
0268 %       operator, which is the case for finite difference approximations).
0269 %       When such an anomaly is detected, the step is rejected and the
0270 %       trust region radius is decreased.
0271 %       Feb. 18, 2015 note: this is less useful now, as tCG now guarantees
0272 %       model decrease even for the finite difference approximation of the
0273 %       Hessian. It is still useful in case of numerical errors, but this
0274 %       is less stringent.
0275 %
0276 %   NB Dec. 3, 2013:
0277 %       The stepsize is now registered at each iteration, at a small
0278 %       additional cost. The defaults for Delta_bar and Delta0 are better
0279 %       defined. Setting Delta_bar in the options will automatically set
0280 %       Delta0 accordingly. In Manopt 1.0.4, the defaults for these options
0281 %       were not treated appropriately because of an incorrect use of the
0282 %       isfield() built-in function.
0283 %
0284 %   NB Feb. 18, 2015:
0285 %       Added some comments. Also, Octave now supports safe tic/toc usage,
0286 %       so we reverted the changes to use that again (see Aug. 22, 2013 log
0287 %       entry).
0288 %
0289 %   NB April 3, 2015:
0290 %       Works with the new StoreDB class system.
0291 %
0292 %   NB April 8, 2015:
0293 %       No Hessian warning if approximate Hessian explicitly available.
0294 %
0295 %   NB Nov. 1, 2016:
0296 %       Now uses approximate gradient via finite differences if need be.
0297 
0298 
0299 % Verify that the problem description is sufficient for the solver.
0300 if ~canGetCost(problem)
0301     warning('manopt:getCost', ...
0302             'No cost provided. The algorithm will likely abort.');  
0303 end
0304 if ~canGetGradient(problem) && ~canGetApproxGradient(problem)
0305     % Note: we do not give a warning if an approximate gradient is
0306     % explicitly given in the problem description, as in that case the user
0307     % seems to be aware of the issue.
0308     warning('manopt:getGradient:approx', ...
0309            ['No gradient provided. Using an FD approximation instead (slow).\n' ...
0310             'It may be necessary to increase options.tolgradnorm.\n' ...
0311             'To disable this warning: warning(''off'', ''manopt:getGradient:approx'')']);
0312     problem.approxgrad = approxgradientFD(problem);
0313 end
0314 if ~canGetHessian(problem) && ~canGetApproxHessian(problem)
0315     % Note: we do not give a warning if an approximate Hessian is
0316     % explicitly given in the problem description, as in that case the user
0317     % seems to be aware of the issue.
0318     warning('manopt:getHessian:approx', ...
0319            ['No Hessian provided. Using an FD approximation instead.\n' ...
0320             'To disable this warning: warning(''off'', ''manopt:getHessian:approx'')']);
0321     problem.approxhess = approxhessianFD(problem);
0322 end
0323 
0324 % Define some strings for display
0325 tcg_stop_reason = {'negative curvature',...
0326                    'exceeded trust region',...
0327                    'reached target residual-kappa (linear)',...
0328                    'reached target residual-theta (superlinear)',...
0329                    'maximum inner iterations',...
0330                    'model increased'};
0331 
0332 % Set local defaults here
0333 localdefaults.verbosity = 2;
0334 localdefaults.maxtime = inf;
0335 localdefaults.miniter = 3;
0336 localdefaults.maxiter = 1000;
0337 localdefaults.mininner = 1;
0338 localdefaults.maxinner = problem.M.dim();
0339 localdefaults.tolgradnorm = 1e-6;
0340 localdefaults.kappa = 0.1;
0341 localdefaults.theta = 1.0;
0342 localdefaults.rho_prime = 0.1;
0343 localdefaults.useRand = false;
0344 localdefaults.rho_regularization = 1e3;
0345 
0346 % Merge global and local defaults, then merge w/ user options, if any.
0347 localdefaults = mergeOptions(getGlobalDefaults(), localdefaults);
0348 if ~exist('options', 'var') || isempty(options)
0349     options = struct();
0350 end
0351 options = mergeOptions(localdefaults, options);
0352 
0353 % Set default Delta_bar and Delta0 separately to deal with additional
0354 % logic: if Delta_bar is provided but not Delta0, let Delta0 automatically
0355 % be some fraction of the provided Delta_bar.
0356 if ~isfield(options, 'Delta_bar')
0357     if isfield(problem.M, 'typicaldist')
0358         options.Delta_bar = problem.M.typicaldist();
0359     else
0360         options.Delta_bar = sqrt(problem.M.dim());
0361     end 
0362 end
0363 if ~isfield(options,'Delta0')
0364     options.Delta0 = options.Delta_bar / 8;
0365 end
0366 
0367 % Check some option values
0368 assert(options.rho_prime < 1/4, ...
0369         'options.rho_prime must be strictly smaller than 1/4.');
0370 assert(options.Delta_bar > 0, ...
0371         'options.Delta_bar must be positive.');
0372 assert(options.Delta0 > 0 && options.Delta0 < options.Delta_bar, ...
0373         'options.Delta0 must be positive and smaller than Delta_bar.');
0374 
0375 % It is sometimes useful to check what the actual option values are.
0376 if options.verbosity >= 3
0377     disp(options);
0378 end
0379 
0380 ticstart = tic();
0381 
0382 % If no initial point x is given by the user, generate one at random.
0383 if ~exist('x', 'var') || isempty(x)
0384     x = problem.M.rand();
0385 end
0386 
0387 % Create a store database and get a key for the current x
0388 storedb = StoreDB(options.storedepth);
0389 key = storedb.getNewKey();
0390 
0391 %% Initializations
0392 
0393 % k counts the outer (TR) iterations. The semantic is that k counts the
0394 % number of iterations fully executed so far.
0395 k = 0;
0396 
0397 % Initialize solution and companion measures: f(x), fgrad(x)
0398 [fx, fgradx] = getCostGrad(problem, x, storedb, key);
0399 norm_grad = problem.M.norm(x, fgradx);
0400 
0401 % Initialize trust-region radius
0402 Delta = options.Delta0;
0403 
0404 % Save stats in a struct array info, and preallocate.
0405 if ~exist('used_cauchy', 'var')
0406     used_cauchy = [];
0407 end
0408 stats = savestats(problem, x, storedb, key, options, k, fx, norm_grad, Delta, ticstart);
0409 info(1) = stats;
0410 info(min(10000, options.maxiter+1)).iter = [];
0411 
0412 % ** Display:
0413 if options.verbosity == 2
0414    fprintf(['%3s %3s      %5s                %5s     ',...
0415             'f: %+e   |grad|: %e\n'],...
0416            '   ','   ','     ','     ', fx, norm_grad);
0417 elseif options.verbosity > 2
0418    fprintf('************************************************************************\n');
0419    fprintf('%3s %3s    k: %5s     num_inner: %5s     %s\n',...
0420            '','','______','______','');
0421    fprintf('       f(x) : %+e       |grad| : %e\n', fx, norm_grad);
0422    fprintf('      Delta : %f\n', Delta);
0423 end
0424 
0425 % To keep track of consecutive radius changes, so that we can warn the
0426 % user if it appears necessary.
0427 consecutive_TRplus = 0;
0428 consecutive_TRminus = 0;
0429 
0430 
0431 % **********************
0432 % ** Start of TR loop **
0433 % **********************
0434 while true
0435     
0436     % Start clock for this outer iteration
0437     ticstart = tic();
0438 
0439     % Run standard stopping criterion checks
0440     [stop, reason] = stoppingcriterion(problem, x, options, info, k+1);
0441     
0442     % If the stopping criterion that triggered is the tolerance on the
0443     % gradient norm but we are using randomization, make sure we make at
0444     % least miniter iterations to give randomization a chance at escaping
0445     % saddle points.
0446     if stop == 2 && options.useRand && k < options.miniter
0447         stop = 0;
0448     end
0449     
0450     if stop
0451         if options.verbosity >= 1
0452             fprintf([reason '\n']);
0453         end
0454         break;
0455     end
0456 
0457     if options.verbosity > 2 || options.debug > 0
0458         fprintf('************************************************************************\n');
0459     end
0460 
0461     % *************************
0462     % ** Begin TR Subproblem **
0463     % *************************
0464   
0465     % Determine eta0
0466     if ~options.useRand
0467         % Pick the zero vector
0468         eta = problem.M.zerovec(x);
0469     else
0470         % Random vector in T_x M (this has to be very small)
0471         eta = problem.M.lincomb(x, 1e-6, problem.M.randvec(x));
0472         % Must be inside trust-region
0473         while problem.M.norm(x, eta) > Delta
0474             eta = problem.M.lincomb(x, sqrt(sqrt(eps)), eta);
0475         end
0476     end
0477 
0478     % Solve TR subproblem approximately
0479     [eta, Heta, numit, stop_inner] = ...
0480                 tCG(problem, x, fgradx, eta, Delta, options, storedb, key);
0481     srstr = tcg_stop_reason{stop_inner};
0482 
0483     % If using randomized approach, compare result with the Cauchy point.
0484     % Convergence proofs assume that we achieve at least (a fraction of)
0485     % the reduction of the Cauchy point. After this if-block, either all
0486     % eta-related quantities have been changed consistently, or none of
0487     % them have changed.
0488     if options.useRand
0489         used_cauchy = false;
0490         % Check the curvature,
0491         Hg = getHessian(problem, x, fgradx, storedb, key);
0492         g_Hg = problem.M.inner(x, fgradx, Hg);
0493         if g_Hg <= 0
0494             tau_c = 1;
0495         else
0496             tau_c = min( norm_grad^3/(Delta*g_Hg) , 1);
0497         end
0498         % and generate the Cauchy point.
0499         eta_c  = problem.M.lincomb(x, -tau_c * Delta / norm_grad, fgradx);
0500         Heta_c = problem.M.lincomb(x, -tau_c * Delta / norm_grad, Hg);
0501 
0502         % Now that we have computed the Cauchy point in addition to the
0503         % returned eta, we might as well keep the best of them.
0504         mdle  = fx + problem.M.inner(x, fgradx, eta) ...
0505                    + .5*problem.M.inner(x, Heta,   eta);
0506         mdlec = fx + problem.M.inner(x, fgradx, eta_c) ...
0507                    + .5*problem.M.inner(x, Heta_c, eta_c);
0508         if mdlec < mdle
0509             eta = eta_c;
0510             Heta = Heta_c; % added April 11, 2012
0511             used_cauchy = true;
0512         end
0513     end
0514     
0515     
0516     % This is only computed for logging purposes, because it may be useful
0517     % for some user-defined stopping criteria. If this is not cheap for
0518     % specific applications (compared to evaluating the cost), we should
0519     % reconsider this.
0520     norm_eta = problem.M.norm(x, eta);
0521     
0522     if options.debug > 0
0523         testangle = problem.M.inner(x, eta, fgradx) / (norm_eta*norm_grad);
0524     end
0525     
0526 
0527     % Compute the tentative next iterate (the proposal)
0528     x_prop  = problem.M.retr(x, eta);
0529     key_prop = storedb.getNewKey();
0530 
0531     % Compute the function value of the proposal
0532     fx_prop = getCost(problem, x_prop, storedb, key_prop);
0533 
0534     % Will we accept the proposal or not?
0535     % Check the performance of the quadratic model against the actual cost.
0536     rhonum = fx - fx_prop;
0537     rhoden = -problem.M.inner(x, fgradx, eta) ...
0538              -.5*problem.M.inner(x, eta, Heta);
0539     % rhonum could be anything.
0540     % rhoden should be nonnegative, as guaranteed by tCG, barring numerical
0541     % errors.
0542     
0543     % Heuristic -- added Dec. 2, 2013 (NB) to replace the former heuristic.
0544     % This heuristic is documented in the book by Conn Gould and Toint on
0545     % trust-region methods, section 17.4.2.
0546     % rhonum measures the difference between two numbers. Close to
0547     % convergence, these two numbers are very close to each other, so
0548     % that computing their difference is numerically challenging: there may
0549     % be a significant loss in accuracy. Since the acceptance or rejection
0550     % of the step is conditioned on the ratio between rhonum and rhoden,
0551     % large errors in rhonum result in a very large error in rho, hence in
0552     % erratic acceptance / rejection. Meanwhile, close to convergence,
0553     % steps are usually trustworthy and we should transition to a Newton-
0554     % like method, with rho=1 consistently. The heuristic thus shifts both
0555     % rhonum and rhoden by a small amount such that far from convergence,
0556     % the shift is irrelevant and close to convergence, the ratio rho goes
0557     % to 1, effectively promoting acceptance of the step.
0558     % The rationale is that close to convergence, both rhonum and rhoden
0559     % are quadratic in the distance between x and x_prop. Thus, when this
0560     % distance is on the order of sqrt(eps), the value of rhonum and rhoden
0561     % is on the order of eps, which is indistinguishable from the numerical
0562     % error, resulting in badly estimated rho's.
0563     % For abs(fx) < 1, this heuristic is invariant under offsets of f but
0564     % not under scaling of f. For abs(fx) > 1, the opposite holds. This
0565     % should not alarm us, as this heuristic only triggers at the very last
0566     % iterations if very fine convergence is demanded.
0567     rho_reg = max(1, abs(fx)) * eps * options.rho_regularization;
0568     rhonum = rhonum + rho_reg;
0569     rhoden = rhoden + rho_reg;
0570    
0571     if options.debug > 0
0572         fprintf('DBG:     rhonum : %e\n', rhonum);
0573         fprintf('DBG:     rhoden : %e\n', rhoden);
0574     end
0575     
0576     % This is always true if a linear, symmetric operator is used for the
0577     % Hessian (approximation) and if we had infinite numerical precision.
0578     % In practice, nonlinear approximations of the Hessian such as the
0579     % built-in finite difference approximation and finite numerical
0580     % accuracy can cause the model to increase. In such scenarios, we
0581     % decide to force a rejection of the step and a reduction of the
0582     % trust-region radius. We test the sign of the regularized rhoden since
0583     % the regularization is supposed to capture the accuracy to which
0584     % rhoden is computed: if rhoden were negative before regularization but
0585     % not after, that should not be (and is not) detected as a failure.
0586     %
0587     % Note (Feb. 17, 2015, NB): the most recent version of tCG already
0588     % includes a mechanism to ensure model decrease if the Cauchy step
0589     % attained a decrease (which is theoretically the case under very lax
0590     % assumptions). This being said, it is always possible that numerical
0591     % errors will prevent this, so that it is good to keep a safeguard.
0592     %
0593     % The current strategy is that, if this should happen, then we reject
0594     % the step and reduce the trust region radius. This also ensures that
0595     % the actual cost values are monotonically decreasing.
0596     model_decreased = (rhoden >= 0);
0597     
0598     if ~model_decreased 
0599         srstr = [srstr ', model did not decrease']; %#ok<AGROW>
0600     end
0601     
0602     rho = rhonum / rhoden;
0603     
0604     % Added June 30, 2015 following observation by BM.
0605     % With this modification, it is guaranteed that a step rejection is
0606     % always accompanied by a TR reduction. This prevents stagnation in
0607     % this "corner case" (NaN's really aren't supposed to occur, but it's
0608     % nice if we can handle them nonetheless).
0609     if isnan(rho)
0610         fprintf('rho is NaN! Forcing a radius decrease. This should not happen.\n');
0611         if isnan(fx_prop)
0612             fprintf('The cost function returned NaN (perhaps the retraction returned a bad point?)\n');
0613         else
0614             fprintf('The cost function did not return a NaN value.\n');
0615         end
0616     end
0617    
0618     if options.debug > 0
0619         m = @(x, eta) ...
0620           getCost(problem, x, storedb, key) + ...
0621           getDirectionalDerivative(problem, x, eta, storedb, key) + ...
0622           .5*problem.M.inner(x, getHessian(problem, x, eta, storedb, key), eta);
0623         zerovec = problem.M.zerovec(x);
0624         actrho = (fx - fx_prop) / (m(x, zerovec) - m(x, eta));
0625         fprintf('DBG:   new f(x) : %+e\n', fx_prop);
0626         fprintf('DBG: actual rho : %e\n', actrho);
0627         fprintf('DBG:   used rho : %e\n', rho);
0628     end
0629 
0630     % Choose the new TR radius based on the model performance
0631     trstr = '   ';
0632     % If the actual decrease is smaller than 1/4 of the predicted decrease,
0633     % then reduce the TR radius.
0634     if rho < 1/4 || ~model_decreased || isnan(rho)
0635         trstr = 'TR-';
0636         Delta = Delta/4;
0637         consecutive_TRplus = 0;
0638         consecutive_TRminus = consecutive_TRminus + 1;
0639         if consecutive_TRminus >= 5 && options.verbosity >= 2
0640             consecutive_TRminus = -inf;
0641             fprintf(' +++ Detected many consecutive TR- (radius decreases).\n');
0642             fprintf(' +++ Consider decreasing options.Delta_bar by an order of magnitude.\n');
0643             fprintf(' +++ Current values: options.Delta_bar = %g and options.Delta0 = %g.\n', options.Delta_bar, options.Delta0);
0644         end
0645     % If the actual decrease is at least 3/4 of the predicted decrease and
0646     % the tCG (inner solve) hit the TR boundary, increase the TR radius.
0647     % We also keep track of the number of consecutive trust-region radius
0648     % increases. If there are many, this may indicate the need to adapt the
0649     % initial and maximum radii.
0650     elseif rho > 3/4 && (stop_inner == 1 || stop_inner == 2)
0651         trstr = 'TR+';
0652         Delta = min(2*Delta, options.Delta_bar);
0653         consecutive_TRminus = 0;
0654         consecutive_TRplus = consecutive_TRplus + 1;
0655         if consecutive_TRplus >= 5 && options.verbosity >= 1
0656             consecutive_TRplus = -inf;
0657             fprintf(' +++ Detected many consecutive TR+ (radius increases).\n');
0658             fprintf(' +++ Consider increasing options.Delta_bar by an order of magnitude.\n');
0659             fprintf(' +++ Current values: options.Delta_bar = %g and options.Delta0 = %g.\n', options.Delta_bar, options.Delta0);
0660         end
0661     else
0662         % Otherwise, keep the TR radius constant.
0663         consecutive_TRplus = 0;
0664         consecutive_TRminus = 0;
0665     end
0666 
0667     % Choose to accept or reject the proposed step based on the model
0668     % performance. Note the strict inequality.
0669     if model_decreased && rho > options.rho_prime
0670         accept = true;
0671         accstr = 'acc';
0672         x = x_prop;
0673         key = key_prop;
0674         fx = fx_prop;
0675         fgradx = getGradient(problem, x, storedb, key);
0676         norm_grad = problem.M.norm(x, fgradx);
0677     else
0678         accept = false;
0679         accstr = 'REJ';
0680     end
0681     
0682     
0683     % Make sure we don't use too much memory for the store database
0684     storedb.purge();
0685     
0686     % k is the number of iterations we have accomplished.
0687     k = k + 1;
0688 
0689     % Log statistics for freshly executed iteration.
0690     % Everything after this in the loop is not accounted for in the timing.
0691     stats = savestats(problem, x, storedb, key, options, k, fx, ...
0692                       norm_grad, Delta, ticstart, info, rho, rhonum, ...
0693                       rhoden, accept, numit, norm_eta, used_cauchy);
0694     info(k+1) = stats; %#ok<AGROW>
0695 
0696     
0697     % ** Display:
0698     if options.verbosity == 2,
0699         fprintf(['%3s %3s   k: %5d     num_inner: %5d     ', ...
0700         'f: %+e   |grad|: %e   %s\n'], ...
0701         accstr,trstr,k,numit,fx,norm_grad,srstr);
0702     elseif options.verbosity > 2,
0703         if options.useRand && used_cauchy,
0704             fprintf('USED CAUCHY POINT\n');
0705         end
0706         fprintf('%3s %3s    k: %5d     num_inner: %5d     %s\n', ...
0707                 accstr, trstr, k, numit, srstr);
0708         fprintf('       f(x) : %+e     |grad| : %e\n',fx,norm_grad);
0709         if options.debug > 0
0710             fprintf('      Delta : %f          |eta| : %e\n',Delta,norm_eta);
0711         end
0712         fprintf('        rho : %e\n',rho);
0713     end
0714     if options.debug > 0,
0715         fprintf('DBG: cos ang(eta,gradf): %d\n',testangle);
0716         if rho == 0
0717             fprintf('DBG: rho = 0, this will likely hinder further convergence.\n');
0718         end
0719     end
0720 
0721 end  % of TR loop (counter: k)
0722 
0723 % Restrict info struct-array to useful part
0724 info = info(1:k+1);
0725 
0726 
0727 if (options.verbosity > 2) || (options.debug > 0),
0728    fprintf('************************************************************************\n');
0729 end
0730 if (options.verbosity > 0) || (options.debug > 0)
0731     fprintf('Total time is %f [s] (excludes statsfun)\n', info(end).time);
0732 end
0733 
0734 % Return the best cost reached
0735 cost = fx;
0736 
0737 end
0738 
0739 
0740 
0741     
0742 
0743 % Routine in charge of collecting the current iteration stats
0744 function stats = savestats(problem, x, storedb, key, options, k, fx, ...
0745                            norm_grad, Delta, ticstart, info, rho, rhonum, ...
0746                            rhoden, accept, numit, norm_eta, used_cauchy)
0747     stats.iter = k;
0748     stats.cost = fx;
0749     stats.gradnorm = norm_grad;
0750     stats.Delta = Delta;
0751     if k == 0
0752         stats.time = toc(ticstart);
0753         stats.rho = inf;
0754         stats.rhonum = NaN;
0755         stats.rhoden = NaN;
0756         stats.accepted = true;
0757         stats.numinner = NaN;
0758         stats.stepsize = NaN;
0759         if options.useRand
0760             stats.cauchy = false;
0761         end
0762     else
0763         stats.time = info(k).time + toc(ticstart);
0764         stats.rho = rho;
0765         stats.rhonum = rhonum;
0766         stats.rhoden = rhoden;
0767         stats.accepted = accept;
0768         stats.numinner = numit;
0769         stats.stepsize = norm_eta;
0770         if options.useRand,
0771           stats.cauchy = used_cauchy;
0772         end
0773     end
0774     
0775     % See comment about statsfun above: the x and store passed to statsfun
0776     % are that of the most recently accepted point after the iteration
0777     % fully executed.
0778     stats = applyStatsfun(problem, x, storedb, key, options, stats);
0779     
0780 end

Generated on Fri 08-Sep-2017 12:43:19 by m2html © 2005