
nonlinear_eigenspace

PURPOSE

Example of nonlinear eigenvalue problem: total energy minimization.

SYNOPSIS

function Xsol = nonlinear_eigenspace(L, k, alpha)

DESCRIPTION

 Example of nonlinear eigenvalue problem: total energy minimization.

 function Xsol = nonlinear_eigenspace(L, k, alpha)

 L is a discrete Laplacian operator,
 alpha is a given constant, and
 k corresponds to the dimension of the least eigenspace sought. 

 This example demonstrates how to use the Grassmann geometry factory 
 to solve the nonlinear eigenvalue problem as the optimization problem:

 minimize 0.5*trace(X'*L*X) + (alpha/4)*(rho(X)'*(L\rho(X)))
 over X such that X'*X = Identity,

 where L is of size n-by-n,
 X is an n-by-k matrix, and
 rho(X) is the diagonal part of X*X'.

 This example is motivated in the paper
 "A Riemannian Newton Algorithm for Nonlinear Eigenvalue Problems",
 Zhi Zhao, Zheng-Jian Bai, and Xiao-Qing Jin,
 SIAM Journal on Matrix Analysis and Applications, 36(2), 752-774, 2015.
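
 For instance, a minimal call (a sketch; it assumes Manopt is installed
 and on the MATLAB path), either with the built-in default operator or
 with an explicit Laplacian:

    % Use the defaults: a 100-by-100 1-D Laplacian, k = 10, alpha = 1.
    Xsol = nonlinear_eigenspace();

    % Or pass the inputs explicitly.
    L = gallery('tridiag', 500, -1, 2, -1);
    Xsol = nonlinear_eigenspace(L, 5, 2);

    % Columns of Xsol are orthonormal: Xsol'*Xsol is the k-by-k identity.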

CROSS-REFERENCE INFORMATION

This function calls: (none)
This function is called by: (none)

SUBFUNCTIONS

SOURCE CODE

function Xsol = nonlinear_eigenspace(L, k, alpha)
% Example of nonlinear eigenvalue problem: total energy minimization.
%
% function Xsol = nonlinear_eigenspace(L, k, alpha)
%
% L is a discrete Laplacian operator,
% alpha is a given constant, and
% k corresponds to the dimension of the least eigenspace sought.
%
% This example demonstrates how to use the Grassmann geometry factory
% to solve the nonlinear eigenvalue problem as the optimization problem:
%
% minimize 0.5*trace(X'*L*X) + (alpha/4)*(rho(X)'*(L\rho(X)))
% over X such that X'*X = Identity,
%
% where L is of size n-by-n,
% X is an n-by-k matrix, and
% rho(X) is the diagonal part of X*X'.
%
% This example is motivated in the paper
% "A Riemannian Newton Algorithm for Nonlinear Eigenvalue Problems",
% Zhi Zhao, Zheng-Jian Bai, and Xiao-Qing Jin,
% SIAM Journal on Matrix Analysis and Applications, 36(2), 752-774, 2015.
%


% This file is part of Manopt and is copyrighted. See the license file.
%
% Main author: Bamdev Mishra, June 19, 2015.
% Contributors:
%
% Change log:
%
%    Aug. 20, 2021 (XJ):
%       Added AD to compute the egrad and the ehess.

    % If no inputs are provided, generate a discrete Laplacian operator.
    % This is for illustration purposes only.
    % The default example corresponds to Case (c) of Example 6.2 of the
    % above referenced paper.
    
    if ~exist('L', 'var') || isempty(L)
        n = 100;
        L = gallery('tridiag', n, -1, 2, -1);
    end
    
    n = size(L, 1);
    assert(size(L, 2) == n, 'L must be square.');
    
    if ~exist('k', 'var') || isempty(k) || k > n
        k = 10;
    end
    
    if ~exist('alpha', 'var') || isempty(alpha)
        alpha = 1;
    end
    
    
    % Grassmann manifold description
    Gr = grassmannfactory(n, k);
    problem.M = Gr;
    
    % Cost function evaluation
    problem.cost =  @cost;
    function val = cost(X)
        rhoX = sum(X.^2, 2); % diag(X*X');
        val = 0.5*trace(X'*(L*X)) + (alpha/4)*(rhoX'*(L\rhoX));
    end
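    % Added note: sum(X.^2, 2) forms diag(X*X') in O(nk) work without
    % building the n-by-n matrix X*X', and L\rhoX solves the linear
    % system L*y = rhoX rather than forming inv(L) explicitly.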
    
    % Euclidean gradient evaluation
    % Note: Manopt automatically converts it to the Riemannian counterpart.
    problem.egrad = @egrad;
    function g = egrad(X)
        rhoX = sum(X.^2, 2); % diag(X*X');
        g = L*X + alpha*diag(L\rhoX)*X;
    end
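    % Added note: L*X is the gradient of 0.5*trace(X'*L*X); the term
    % alpha*diag(L\rhoX)*X comes from differentiating
    % (alpha/4)*rho(X)'*(L\rho(X)), using the symmetry of L and the
    % directional derivative D rho(X)[U] = 2*sum(X.*U, 2).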
    
    % Euclidean Hessian evaluation
    % Note: Manopt automatically converts it to the Riemannian counterpart.
    problem.ehess = @ehess;
    function h = ehess(X, U)
        rhoX = sum(X.^2, 2); % diag(X*X');
        rhoXdot = 2*sum(X.*U, 2);
        h = L*U + alpha*diag(L\rhoXdot)*X + alpha*diag(L\rhoX)*U;
    end
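    % Added note: rhoXdot is the directional derivative of rho(X) along
    % U, so h is the directional derivative of egrad(X) along U obtained
    % by applying the product rule to the two gradient terms.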
    
    % An alternative way to compute the egrad and the ehess is to use
    % automatic differentiation as provided in the Deep Learning Toolbox
    % (slower). Note that the function trace is not supported for AD so
    % far; replace it with ctrace, described in the file manoptADhelp.m.
    % Also, operations between sparse matrices and dlarrays are not
    % supported, so convert L into a full matrix for the use of AD.
    % The operation \ is not supported for AD either; convert it to inv()*.
    % L_full = full(L);
    % problem.cost = @cost_AD;
    %    function val = cost_AD(X)
    %        rhoX = sum(X.^2, 2); % diag(X*X');
    %        val = 0.5*ctrace(X'*(L_full*X)) + (alpha/4)*(rhoX'*(inv(L_full)*rhoX));
    %    end
    % Call manoptAD to prepare AD for the problem structure:
    % problem = manoptAD(problem);

    % Check whether gradient and Hessian computations are correct.
    % checkgradient(problem);
    % pause;
    % checkhessian(problem);
    % pause;
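    % Added note: in Manopt, checkgradient and checkhessian plot the
    % decay of the Taylor remainder against the step size; slopes of
    % about 2 (gradient) and 3 (Hessian) in the log-log plot indicate
    % correct derivatives.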
    
    
    % Initialization as suggested in the above referenced paper.
    X = randn(n, k);
    [U, S, V] = svd(X, 0); %#ok<ASGLU>
    X = U*V';
    [U0, S0] = eigs(L + alpha*diag(L\(sum(X.^2, 2))), k, 'sm'); %#ok<ASGLU>
    X0 = U0;
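    % Added note: X0 spans the k eigenvectors of smallest magnitude
    % eigenvalues ('sm') of the problem linearized at the random
    % orthonormal X above; since the matrix is symmetric, eigs returns
    % orthonormal eigenvectors, so X0 is a valid point on the manifold.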
    
    % Call manoptsolve to automatically call an appropriate solver.
    % Note: it picks the trust-regions solver since we supply all the
    % required ingredients, namely gradient and Hessian information.
    Xsol = manoptsolve(problem, X0);
    
end
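
A usage sketch (not part of the original file): the script below runs the
example and checks first-order optimality on the Grassmannian, where the
Riemannian gradient is the projection of the Euclidean gradient G onto
the tangent space, G - X*(X'*G). It assumes Manopt is installed and on
the MATLAB path.

    % Run the example on a 1-D discrete Laplacian.
    n = 100; k = 10; alpha = 1;
    L = gallery('tridiag', n, -1, 2, -1);
    X = nonlinear_eigenspace(L, k, alpha);

    % Recompute the Euclidean gradient at the returned point.
    rhoX = sum(X.^2, 2);
    G = L*X + alpha*diag(L\rhoX)*X;

    % The projected (Riemannian) gradient norm should be near zero
    % at a critical point.
    fprintf('Riemannian gradient norm: %g\n', norm(G - X*(X'*G), 'fro'));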
