
fixedranktensorembeddedfactory

PURPOSE

Manifold of tensors with fixed multilinear rank in Tucker format

SYNOPSIS

function M = fixedranktensorembeddedfactory(tensor_size, tensor_rank)

DESCRIPTION

 Manifold of tensors with fixed multilinear rank in Tucker format
 as a submanifold of a Euclidean space.

 function M = fixedranktensorembeddedfactory(tensor_size, tensor_rank)

 NOTE: Tensor Toolbox version 2.6 or higher is required for this factory:
 see https://www.tensortoolbox.org/ or https://gitlab.com/tensors/tensor_toolbox
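
 For instance, before creating the manifold one can check that the toolbox is
 visible to MATLAB; the installation path below is a made-up placeholder:
     addpath('/path/to/tensor_toolbox');   % hypothetical location, adjust as needed
     assert(exist('ttensor', 'file') == 2, ...
            'Tensor Toolbox (ttensor) not found on the MATLAB path.');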

 The set of tensors with fixed multilinear rank in Tucker format is endowed
 with a Riemannian manifold structure as an embedded submanifold of a
 Euclidean space. This function returns a structure M representing this
 manifold for use with Manopt.

 The inputs tensor_size and tensor_rank are vectors of length d, where
 d is the order (the number of modes) of the tensors considered.
 The entries of tensor_size are the tensor dimensions in each mode.
 The entries of tensor_rank are the multilinear ranks (Tucker ranks,
 matricization ranks) in each mode.
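
 As a minimal illustration (the sizes and ranks below are arbitrary choices,
 not values prescribed by this factory), a third-order manifold is created as:
     tensor_size = [30 40 50];   % n(1), n(2), n(3): dimension of each mode
     tensor_rank = [4 5 6];      % r(1), r(2), r(3): multilinear ranks
     M = fixedranktensorembeddedfactory(tensor_size, tensor_rank);
     M.name()                    % human-readable description of the manifold
     M.dim()                     % manifold dimension: prod(r) + sum(r.*(n-r))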

 A point on the manifold is represented as a structure with two fields:
 X: a ttensor object (see Tensor Toolbox), the actual point on the manifold.
 Cpinv: a cell list of the pseudoinverses of all matricizations of X.core.
 (This preprocessing makes subsequent operations more efficient.)
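
 Continuing the sketch above, a random point can be generated with M.rand and
 its fields inspected:
     X = M.rand();        % X.X is a ttensor, X.Cpinv is a 1-by-d cell array
     size(X.X.core)       % core tensor of size tensor_rank
     size(X.X.U{1})       % first basis matrix, of size n(1)-by-r(1)
     size(X.Cpinv{1})     % pseudoinverse of the mode-1 matricization of X.X.core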

 A tangent vector is represented as a structure with two fields: 
 G: variation in the core tensor
 V: a cell list of variations in the basis (factor) matrices
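
 Still in the same sketch, tangent vectors can be generated and manipulated
 with the usual Manopt tools:
     eta  = M.randvec(X);                   % random unit-norm tangent vector at X
     zeta = M.randvec(X);
     size(eta.G)                            % same size as the core tensor
     size(eta.V{1})                         % same size as X.X.U{1}
     xi = M.lincomb(X, 2, eta, -1, zeta);   % linear combination in the tangent space
     M.inner(X, eta, zeta)                  % Riemannian inner product at X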

 Vectors in the ambient space (e.g., vector E for the function M.proj
 and vectors egrad and ehess for the functions M.egrad2rgrad and
 M.ehess2rhess) can be given as either full tensors (tensor objects
 in the Tensor Toolbox) or sparse tensors (sptensor objects in the
 Tensor Toolbox).
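
 For example (still with M, X and tensor_size from the sketch above), a dense
 ambient tensor is projected onto the tangent space at X and mapped back:
     E    = tensor(randn(tensor_size));   % ambient vector as a Tensor Toolbox tensor
     eta  = M.proj(X, E);                 % tangent vector with fields eta.G and eta.V
     Eamb = M.tangent2ambient(X, eta);    % full tensor representing eta in the ambient space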

 For details, refer to the article
 "A Riemannian trust-region method for low-rank tensor completion"
 Gennadij Heidel and Volker Schulz, doi:10.1002/nla.2175.

 Please cite the Manopt and MATLAB Tensor Toolbox papers as well as the
 research paper:
     @Article{heidel2018riemannian,
       Title   = {A {R}iemannian trust-region method for low-rank tensor completion},
       Author  = {G. Heidel and V. Schulz},
       Journal = {Numerical Linear Algebra with Applications},
       Year    = {2018},
       Volume  = {25},
       Number  = {6},
       Pages   = {e2175},
       Doi     = {10.1002/nla.2175}
     }
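
 Putting these pieces together, here is a minimal, illustrative sketch of a
 complete Manopt problem on this manifold: approximate a dense tensor A in the
 Frobenius norm. The data A and the choice of solver are assumptions made for
 the example only, not part of the factory's interface.
     n = [30 40 50];  r = [4 5 6];                  % arbitrary sizes and ranks
     M = fixedranktensorembeddedfactory(n, r);
     A = tensor(randn(n));                          % made-up data tensor
     problem.M     = M;
     problem.cost  = @(X) 0.5*norm(full(X.X) - A)^2;
     problem.egrad = @(X) full(X.X) - A;            % Euclidean gradient, as a tensor object
     checkgradient(problem);                        % optional sanity check
     X_opt = conjugategradient(problem);            % or any other Manopt solver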

 See also: fixedrankfactory_tucker_preconditioned fixedrankembeddedfactory

CROSS-REFERENCE INFORMATION

This function calls:
This function is called by:

SUBFUNCTIONS

SOURCE CODE

0001 function M = fixedranktensorembeddedfactory(tensor_size, tensor_rank)
0002 % Manifold of tensors with fixed multilinear rank in Tucker format
0003 % as a submanifold of a Euclidean space.
0004 %
0005 % function M = fixedranktensorembeddedfactory(tensor_size, tensor_rank)
0006 %
0007 % NOTE: Tensor Toolbox version 2.6 or higher is required for this factory:
0008 % see https://www.tensortoolbox.org/ or https://gitlab.com/tensors/tensor_toolbox
0009 %
0010 % The set of tensors with fixed multilinear rank in Tucker format is endowed
0011 % with a Riemannian manifold structure as an embedded submanifold of a
0012 % Euclidean space. This function returns a structure M representing this
0013 % manifold for use with Manopt.
0014 %
0015 % The inputs tensor_size and tensor_rank are vectors of length d, where
0016 % d is the order (the number of modes) of the tensors considered.
0017 % The entries of tensor_size are the tensor dimensions in each mode.
0018 % The entries of tensor_rank are the multilinear ranks (Tucker ranks,
0019 % matricization ranks) in each mode.
0020 %
0021 % A point on the manifold is represented as a structure with two fields:
0022 % X: a ttensor object (see Tensor Toolbox), the actual point on the manifold.
0023 % Cpinv: a cell list of the pseudoinverses of all matricizations of X.core.
0024 % (This preprocessing makes subsequent operations more efficient.)
0025 %
0026 % A tangent vector is represented as a structure with two fields:
0027 % G: variation in the core tensor
0028 % V: a cell list of variations in the basis (factor) matrices
0029 %
0030 % Vectors in the ambient space (e.g., vector E for the function M.proj
0031 % and vectors egrad and ehess for the functions M.egrad2rgrad and
0032 % M.ehess2rhess) can be given as either full tensors (tensor objects
0033 % in the Tensor Toolbox) or sparse tensors (sptensor objects in the
0034 % Tensor Toolbox).
0035 %
0036 % For details, refer to the article
0037 % "A Riemannian trust-region method for low-rank tensor completion"
0038 % Gennadij Heidel and Volker Schulz, doi:10.1002/nla.2175.
0039 %
0040 % Please cite the Manopt and MATLAB Tensor Toolbox papers as well as the
0041 % research paper:
0042 %     @Article{heidel2018riemannian,
0043 %       Title   = {A {R}iemannian trust-region method for low-rank tensor completion},
0044 %       Author  = {G. Heidel and V. Schulz},
0045 %       Journal = {Numerical Linear Algebra with Applications},
0046 %       Year    = {2018},
0047 %       Volume  = {25},
0048 %       Number  = {6},
0049 %       Pages   = {e2175},
0050 %       Doi     = {10.1002/nla.2175}
0051 %     }
0052 %
0053 % See also: fixedrankfactory_tucker_preconditioned fixedrankembeddedfactory
0054 
0055 % This file is part of Manopt: www.manopt.org.
0056 % Original author: Gennadij Heidel, January 24, 2019.
0057 % Contributors:
0058 % Change log:
0059 
0060 % References to papers in the code:
0061 %
0062 % Kressner et al.:
0063 % "Low-rank tensor completion by Riemannian optimization"
0064 % D. Kressner, M. Steinlechner, B. Vandereycken
0065 % doi:10.1007/s10543-013-0455-z
0066 %
0067 % De Lathauwer et al.:
0068 % "A multilinear singular value decomposition"
0069 % L. De Lathauwer, B. De Moor, J. Vandewalle
0070 % doi:10.1137/S0895479896305696
0071 
0072     assert(exist('ttensor', 'file') == 2, sprintf( ...
0073         ['It seems the Tensor Toolbox is not installed.\nIt is needed ', ...
0074          'for the execution of fixedranktensorembeddedfactory.m.\n', ...
0075          'Please download the toolbox at https://www.tensortoolbox.org/', ...
0076          '\nor https://gitlab.com/tensors/tensor_toolbox.']));
0077 
0078     % Tensor size and rank
0079     d = length(tensor_size);
0080     if d ~= length(tensor_rank)
0081         error(['Tensor dimensions and ranks do not match: ' ...
0082                'the two inputs should have the same length.']);
0083     end
0084     n = tensor_size;
0085     r = tensor_rank;
0086     
0087     % Generate a string that describes the manifold
0088     manifold_name = mfname(n, r, d);
0089     M.name = @() manifold_name;
0090     
0091     % Dimension of the manifold
0092     mfdim = prod(r) + sum(r.*(n-r));
0093     M.dim = @() mfdim;
0094     
0095     % Efficient inner product on tangent space exploiting orthogonality
0096     M.inner = @iproduct;
0097     function ip = iproduct(X, eta, zeta)
0098         ip = innerprod(eta.G, zeta.G);
0099         for i = 1:d
0100             ip = ip + innerprod(X.X.core, ...
0101                                 ttm(X.X.core, eta.V{i}'*zeta.V{i}, i));
0102         end
0103     end
0104 
0105     M.norm = @(X, eta) sqrt(iproduct(X, eta, eta));
0106     
0107     M.dist = @(x, y) error('fixedranktensorembeddedfactory.dist not implemented yet.');
0108     
0109     % This number is heuristic
0110     M.typicaldist = @() 10*mean(n)*mean(r);
0111     
0112     % Orthogonal projection of a vector E in the ambient space to the
0113     % tangent space at X.
0114     M.proj = @projection;
0115     function Eproj = projection(X, E)
0116         if ~isstruct(E)
0117             uList = X.X.U;
0118             
0119             % cf. Kressner et al., p. 454, bottom
0120             G = ttm(E, uList, 't');
0121 
0122             % cf. Kressner et al., p. 454, bottom
0123             V = cell(1, d);
0124             for i = 1:d
0125                 % modes vector without ith mode for ttm multiplication
0126                 modes = 1:d;
0127                 modes(i) = [];
0128 
0129                 % list of basis matrices U without the index i
0130                 uListWoI = uList;
0131                 uListWoI(:, i) = [];
0132                 
0133                 % the term of V_i before the multiplication by the orthogonal projector
0134                 beforeProj = tenmat(ttm(E, uListWoI, modes, 't'), i) * X.Cpinv{i};
0135 
0136                 % orthogonal projection
0137                 V{i} = double(beforeProj - X.X.U{i}*(X.X.U{i}'*beforeProj));
0138             end
0139             
0140             Eproj.G = G;
0141             Eproj.V = V;
0142         else
0143             error(['fixedranktensorembeddedfactory.proj only ' ...
0144                    'implemented for ambient tensors so far.']);
0145         end
0146         
0147     end
0148     
0149     % The Riemannian gradient is the projection of the Euclidean gradient
0150     M.egrad2rgrad = @egrad2rgrad;
0151     function rgrad = egrad2rgrad(X, egrad)
0152         rgrad = projection(X, egrad);
0153     end
0154     
0155     % The Riemannian Hessian is the projection of the Euclidean Hessian
0156     % plus a curvature term: see code below in this file.
0157     M.ehess2rhess = @ehess2rhess;
0158     function Hess = ehess2rhess(X, egrad, ehess, eta) 
0159         Hess = lincomb(X, 1, projection(X, ehess), ...
0160                           1, curvature_term(egrad, X, eta));
0161     end
0162 
0163     % Re-orthogonalize basis matrix variations in the tangent vector.
0164     % When applied to a tangent vector, this should do nothing up to
0165     % numerical errors.
0166     M.tangent = @tangent;
0167     function xi = tangent(X, eta)
0168         xi = eta;
0169         for i = 1:d
0170             xi.V{i} = eta.V{i} - X.X.U{i}*(X.X.U{i}'*eta.V{i});
0171         end
0172     end
0173 
0174     % Generate full n1-by-...-by-nd tensor in the ambient space from
0175     % a tangent vector.
0176     M.tangent2ambient = @tan2amb;
0177     function E = tan2amb(X, eta)
0178         E = ttm(eta.G, X.X.U);
0179         
0180         for i = 1:d
0181             % modes vector without ith mode for ttm multiplication
0182             modes = 1:d;
0183             modes(i) = [];
0184 
0185             % list of basis matrices U without the index i
0186             uListWoI = X.X.U;
0187             uListWoI(:, i) = [];
0188 
0189             E = E + ttm(ttm(X.X.core, eta.V{i}, i), uListWoI, modes);
0190         end
0191         
0192     end
0193     
0194     % Efficient retraction, see Kressner et al., 2014
0195     M.retr = @retraction;
0196     function Y = retraction(X, xi, alpha)
0197         % if no alpha is given, assume it to be 1
0198         if nargin < 3
0199             alpha = 1.0;
0200         end
0201 
0202         Q = cell(1, d);
0203         R = cell(1, d);
0204         for i = 1:d
0205             [Q{i}, R{i}] = qr([X.X.U{i}, xi.V{i}], 0);
0206         end
0207 
0208         S = zeros(2*r);
0209 
0210         % First block C+alpha*G, see Kressner et al., Fig. 2
0211         sBlock = double(X.X.core + alpha*xi.G);
0212         for i = 1:d
0213             sBlock = cat(i, sBlock, zeros(size(sBlock)));
0214         end
0215         S = S + sBlock;
0216 
0217         % Adjacent Blocks alpha*C, see Kressner et al., Fig. 2
0218         for i = 1:d
0219             sBlock = double(alpha*X.X.core);
0220             modes = 1:d;
0221             modes(i) = [];
0222             for j=modes
0223                 sBlock = cat(j, sBlock, zeros(size(sBlock)));
0224             end
0225             sBlock = cat(i, zeros(size(sBlock)), sBlock);
0226             S = S + sBlock;
0227         end
0228 
0229         % no concatenation in tensor toolbox --> detour over Matlab arrays
0230         S = tensor(S);
0231 
0232         % absorb R factors in core tensor
0233         S = ttm(S, R);
0234 
0235         % actual retraction
0236         sHosvd = hosvd(S, r);
0237 
0238         % absorb Q factors in basis matrices
0239         U = cell(1, d);
0240         for i = 1:d
0241             U{i} = Q{i} * sHosvd.U{i};
0242         end
0243 
0244         Z = ttensor(sHosvd.core, U);
0245         Y.X = Z;
0246         Y.Cpinv = cell(1, d);
0247         for i = 1:d
0248             % Should consider replacing use of pinv with another
0249             % preprocessed form.
0250            Y.Cpinv{i} = pinv(double(tenmat(Y.X.core, i)));
0251         end
0252     end
0253     
0254     M.hash = @hashing;
0255     function h = hashing(X)
0256         v = zeros(2*d+1, 1);
0257         for i = 1:d
0258             v(i) = sum(X.X.U{i}(:));
0259         end
0260         v(d+1) = sum(X.X.core(:));
0261         for i = 1:d
0262             v(d+1+i) = sum(X.Cpinv{i}(:));
0263         end
0264         h = ['z' hashmd5(v)];
0265     end
0266     
0267     % Random tensor on manifold
0268     M.rand = @random;
0269     function X = random()
0270         U = cell(1, d);
0271         R = cell(1, d);
0272         for i = 1:d
0273             [U{i}, R{i}] = qr(rand(n(i), r(i)), 0);
0274         end
0275         C = tenrand(r);
0276         C = ttm(C,R);
0277         
0278         % Perform an HOSVD of the core to ensure all-orthogonality
0279         Z = hosvd(C, r);
0280         for i = 1:d
0281             U{i} = U{i}*Z.U{i};
0282         end
0283         
0284         Y = ttensor(Z.core, U);
0285         X.X = Y;
0286         Cpinv = cell(1, d);
0287         for i = 1:d
0288             Cpinv{i} = pinv(double(tenmat(X.X.core, i)));
0289         end
0290         X.Cpinv = Cpinv;
0291     end
0292     
0293     % Random unit norm tangent vector
0294     M.randvec = @randomvec;
0295     function eta = randomvec(X)
0296         G = tensor(randn(r));
0297         xi.G = G;
0298         
0299         V = cell(1, d);
0300         for i = 1:d
0301             V{i} = randn(n(i), r(i));
0302         end
0303         xi.V = V;
0304         
0305         xi = M.tangent(X, xi);
0306         nrm = M.norm(X, xi);
0307         
0308         eta.G = xi.G / nrm;
0309         for i = 1:d
0310             xi.V{i} = xi.V{i} / nrm;
0311         end
0312         eta.V = xi.V;
0313     end
0314     
0315     % Evaluate lambda1*eta1 + lambda2*eta2 in the tangent space
0316     M.lincomb = @lincomb;
0317     function xi = lincomb(X, lambda1, eta1, lambda2, eta2) %#ok<INUSL>
0318         if nargin == 3
0319             V = cell(1, d);
0320             for i = 1:d
0321                 V{i} = lambda1*eta1.V{i};
0322             end
0323             xi.G = lambda1*eta1.G;
0324             xi.V = V;
0325         elseif nargin == 5
0326             V = cell(1, d);
0327             for i = 1:d
0328                 V{i} = lambda1*eta1.V{i} + lambda2*eta2.V{i};
0329             end
0330             xi.G = lambda1*eta1.G + lambda2*eta2.G;
0331             xi.V = V;
0332         else
0333             error('fixedranktensorembeddedfactory.lincomb takes 3 or 5 inputs.');
0334         end
0335     end
0336     
0337     M.zerovec = @zerovector;
0338     function eta = zerovector(X) %#ok<INUSD>
0339         G = tenzeros(r);
0340         V = cell(1, d);
0341         for i = 1:d
0342             V{i} = zeros(n(i), r(i));
0343         end
0344         eta.G = G;
0345         eta.V = V;
0346     end
0347 
0348     % Efficient vector transport by orthogonal projection,
0349     % see Kressner et al.
0350     M.transp = @transport;
0351     function eta = transport(X, Y, xi)
0352 
0353         % Follow notation from Kressner et al.
0354         C = X.X.core;
0355         U = X.X.U;
0356         uList = U;
0357         U_tilde = Y.X.U;
0358         uTildeList = U_tilde;
0359         G = xi.G;
0360         V = xi.V;
0361 
0362         G_tilde = ttm(ttm(G, U), U_tilde, 't');
0363 
0364         for i = 1:d
0365 
0366             % List of basis matrices U with the ith matrix replaced by V{i}
0367             uListWoI = U;
0368             uListWoI{i} = V{i};
0369 
0370             G_tilde = G_tilde + ttm(ttm(C, uListWoI), U_tilde, 't');
0371         end
0372 
0373         V_tilde = cell(1, d);
0374         for i = 1:d
0375 
0376             % Modes vector without ith mode for ttm multiplication
0377             modesWoI = 1:d;
0378             modesWoI(i) = [];
0379 
0380             % List of basis matrices U_tilde without the index i
0381             uTildeListWoI = uTildeList;
0382             uTildeListWoI(:, i) = [];
0383 
0384             beforeProj = ttm(ttm(G, U), uTildeListWoI, modesWoI, 't');
0385 
0386             for k = 1:d
0387                 uListWoK = uList;
0388                 uListWoK{k} = V{k};
0389 
0390                 beforeProj = beforeProj + ...
0391                        ttm(ttm(C, uListWoK), uTildeListWoI, modesWoI, 't');
0392             end
0393 
0394             beforeProj = tenmat(beforeProj, i) * Y.Cpinv{i};
0395 
0396             V_tilde{i} = double(beforeProj - U_tilde{i}*(U_tilde{i}'*beforeProj));
0397         end
0398 
0399         eta.G = G_tilde;
0400         eta.V = V_tilde;
0401     end
0402     
0403     M.vec = @(X, eta) [cell2mat(cellfun(@(v) v(:), eta.V(:), 'UniformOutput', false)); eta.G(:)];
0404 
0405     M.mat = @normrep;
0406     function eta = normrep(X, eta_vec) %#ok<INUSL>
0407         
0408         V = cell(1, d);
0409         first_ind = 1;
0410         for i = 1:d
0411             V{i} = reshape(eta_vec(first_ind : first_ind + n(i)*r(i)-1), n(i), r(i));
0412             first_ind = first_ind + n(i)*r(i);
0413         end
0414         G = tensor(reshape(eta_vec(first_ind : end), r));
0415         
0416         eta.G = G;
0417         eta.V = V;
0418     end
0419 
0420     % vec and mat are not isometries
0421     M.vecmatareisometries = @() false;
0422     
0423 end
0424 
0425 % Higher-order SVD, see De Lathauwer et al., 2000
0426 function T = hosvd(X, r)
0427     if ndims(X) == length(r)
0428         d = ndims(X);
0429     else
0430         error('Dimensions of tensor and multilinear rank vector do not match.')
0431     end
0432 
0433     % Compute the r(i) dominant left singular vectors of each mode-i
0434     % matricization of X and store them in a cell array
0435     uList = cell(1, d);
0436     for i = 1:d
0437         % Cast to double because svds of tenmat is undefined
0438         A = double(tenmat(X, i));
0439         [U, ~, ~] = svds(A, r(i));
0440         uList{i} = U;
0441     end
0442 
0443     C = ttm(X, uList, 't');
0444 
0445     T = ttensor(C, uList);
0446 end
0447 
0448 % Curvature term for Riemannian Hessian,
0449 % see Heidel/Schulz, 2018, Corollary 3.7
0450 function eta = curvature_term(E, X, xi)
0451     G = tenzeros(size(X.X.core));
0452     d = length(size(X.X.core));
0453     V = cell(1, d);
0454 
0455     for i = 1:d
0456         modesWoI = 1:d;
0457         modesWoI(i) = [];
0458 
0459         uListWoI = X.X.U;
0460         uListWoI(:,i) = [];
0461         
0462         EUit = ttm(E, uListWoI, modesWoI, 't');
0463         Gi = double(tenmat(xi.G, i));
0464         Ci = double(tenmat(X.X.core, i));
0465         
0466         G = G + ttm(EUit,xi.V{i}, i, 't')...
0467               - ttm(X.X.core, ...
0468                     double(xi.V{i}'*(tenmat(EUit,i)*X.Cpinv{i})), i); 
0469         
0470         Cplusi2 = X.Cpinv{i}'*X.Cpinv{i};
0471         Vi = (tenmat(EUit, i)*Gi')*Cplusi2 + ...
0472              (tenmat(EUit, i)*X.Cpinv{i})*(Ci*Gi')*Cplusi2;
0473         for k = 1:length(modesWoI)
0474             modesWoIWoK = modesWoI;
0475             modesWoIWoK(k) = [];
0476             
0477             uListWoIWoK = uListWoI;
0478             uListWoIWoK(:, k) = [];
0479 
0480             EUiEUkdott = ttm(ttm(E, uListWoIWoK, modesWoIWoK, 't'), ...
0481                              xi.V{modesWoI(k)}, modesWoI(k), 't');
0482             Vi = Vi + tenmat(EUiEUkdott, i)*X.Cpinv{i};
0483         end
0484         V{i} = double(Vi - X.X.U{i}*(X.X.U{i}'*Vi));
0485     end
0486 
0487     eta.G = G;
0488     eta.V = V;
0489 end
0490 
0491 function spf = mfname(n, r, d)
0492     s = 'C';
0493     for i = 1:d
0494         s = strcat(s, ' x U',int2str(i));
0495     end
0496     s = strcat(s, ' Tucker manifold of ');
0497     for i = 1:10
0498        if n(1) < 10^i
0499            digits = i;
0500            break;
0501        end
0502     end
0503     s = strcat(s, '%', int2str(digits+1), 'd-by-');
0504     for i = 2:d-1
0505         s = strcat(s, '%d-by-');
0506     end
0507     s = strcat(s, '%d tensors of rank ');
0508     for i = 1:10
0509        if r(1) < 10^i
0510            digits = i;
0511            break;
0512        end
0513     end
0514     s = strcat(s, '%', int2str(digits+1), 'd-by-');
0515     for i = 2:d-1
0516         s = strcat(s, '%d-by-');
0517     end
0518     s = strcat(s, '%d');
0519     spf = sprintf(s, n, r);
0520 end
