
getGradientFD

PURPOSE

Computes an approximation of the gradient of the cost by finite differences.

SYNOPSIS

function gradfd = getGradientFD(problem, x, storedb, key)

DESCRIPTION

 Computes an approximation of the gradient of the cost by finite differences.

 function gradfd = getGradientFD(problem, x)
 function gradfd = getGradientFD(problem, x, storedb)
 function gradfd = getGradientFD(problem, x, storedb, key)

 Returns a finite difference approximation of the gradient at x for
 the cost function described in the problem structure. The finite
 difference is based on M.dim()+1 computations of the cost.
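
 In symbols (this restates what the source below computes): if
 B_1, ..., B_d is an orthonormal basis of the tangent space at x
 (d = M.dim()), h > 0 is the step size and retr is the retraction of M,
 the returned vector is

     sum_k ((f(retr(x, B_k, h)) - f(x)) / h) * B_k,

 which approximates the Riemannian gradient of f at x to first order in h.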

 storedb is a StoreDB object; key is the StoreDB key corresponding to the
 point x.

 If the cost cannot be computed, an exception is thrown.
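
 Example (a minimal usage sketch; the sphere manifold and quadratic cost
 below are illustrative choices, not part of this file):

     n = 5;
     problem.M = spherefactory(n);          % unit sphere in R^n
     A = randn(n); A = (A + A')/2;          % random symmetric matrix
     problem.cost = @(x) x'*A*x;            % smooth cost on the sphere
     x = problem.M.rand();                  % random point on the manifold
     gradfd = getGradientFD(problem, x);    % FD approximation of grad f(x)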

 See also: approxgradientFD

CROSS-REFERENCE INFORMATION

This function calls:
This function is called by:

SOURCE CODE

function gradfd = getGradientFD(problem, x, storedb, key)
% Computes an approximation of the gradient of the cost by finite differences.
%
% function gradfd = getGradientFD(problem, x)
% function gradfd = getGradientFD(problem, x, storedb)
% function gradfd = getGradientFD(problem, x, storedb, key)
%
% Returns a finite difference approximation of the gradient at x for
% the cost function described in the problem structure. The finite
% difference is based on M.dim()+1 computations of the cost.
%
% storedb is a StoreDB object; key is the StoreDB key corresponding to the
% point x.
%
% If the cost cannot be computed, an exception is thrown.
%
% See also: approxgradientFD

% This file is part of Manopt: www.manopt.org.
% Original author: Nicolas Boumal, Nov. 1, 2016.
% Contributors:
% Change log:

    % Allow omission of the key, and even of storedb.
    if ~exist('key', 'var')
        if ~exist('storedb', 'var')
            storedb = StoreDB();
        end
        key = storedb.getNewKey();
    end

    % This gradient approximation is based on the cost:
    % check availability.
    if ~canGetCost(problem)
        up = MException('manopt:getGradientFD:nocost', ...
            'getGradientFD requires the cost to be computable.');
        throw(up);
    end

    % Default parameters. See approxgradientFD for explicit user access to
    % these parameters.
    stepsize = 2^-23;
    subspacedim = [];
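    % Note: the accuracy of a forward difference is limited by two
    % competing error sources: truncation error, which grows with the
    % step size, and floating-point cancellation in (fxk - fx), which
    % grows as the step size shrinks. The fixed value 2^-23 (about
    % 1.2e-7) is a compromise; use approxgradientFD to tune it.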

    % Evaluate the cost at the root point.
    fx = getCost(problem, x, storedb, key);

    % Pick an orthonormal basis for the tangent space at x, or for a
    % subspace thereof. By default, the full tangent space is used. If a
    % strict subspace is picked, the returned vector approximates the
    % orthogonal projection of the gradient onto that subspace.
    B = tangentorthobasis(problem.M, x, subspacedim);
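    % B is a cell array of tangent vectors at x (M.dim() of them when
    % subspacedim is empty), orthonormal with respect to the Riemannian
    % metric of problem.M.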

    % Use finite differences to approximate the directional derivative
    % along each direction in the basis B.
    df = zeros(size(B));
    for k = 1 : numel(B)
        % Move in the B{k} direction.
        xk = problem.M.retr(x, B{k}, stepsize);
        keyk = storedb.getNewKey();
        % Evaluate the cost there.
        fxk = getCost(problem, xk, storedb, keyk);
        % Don't keep this point in cache.
        storedb.remove(keyk);
        % Finite difference.
        df(k) = (fxk - fx)/stepsize;
    end
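    % At this point, df(k) estimates the directional derivative
    % Df(x)[B{k}] with first-order accuracy in stepsize. Since B is
    % orthonormal, the (projected) gradient is recovered as
    % sum_k df(k)*B{k}: the linear combination computed below.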

    % Build the gradient approximation.
    gradfd = lincomb(problem.M, x, B, df);

end
