
elliptope_SDP

PURPOSE

Solver for semidefinite programs (SDPs) with unit diagonal constraints.

SYNOPSIS

function [Y, problem, S] = elliptope_SDP(A, p, Y0)

DESCRIPTION

 Solver for semidefinite programs (SDPs) with unit diagonal constraints.
 
 function [Y, problem, S] = elliptope_SDP(A)
 function [Y, problem, S] = elliptope_SDP(A, p)
 function [Y, problem, S] = elliptope_SDP(A, p, Y0)

 A is a real, symmetric matrix of size n.

 This function uses a local optimization method in Manopt to solve the SDP

   min_X  trace(A*X)  s.t.  diag(X) = 1 and X is positive semidefinite.

 In practice, the symmetric matrix X of size n is parameterized
 as X = Y*Y', where Y has size n x p. By default, p is taken large enough
 (about sqrt(2n)) to ensure that there exists an optimal X whose rank is
 smaller than p. Under that condition, the SDP is equivalent to the
 problem in Y:

   min_Y  trace(Y'*A*Y)  s.t.  diag(Y*Y') = 1.

 The constraints on Y require each row of Y to have unit norm, which is
 why Manopt, a toolbox for optimization on manifolds, is well suited to
 this problem. An optional initial guess can be specified via the input Y0.
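 
 As a quick numerical sanity check of this parameterization (a minimal
 sketch; the matrices A and Y here are arbitrary):
 
   n = 5; p = 3;
   Y = randn(n, p);
   Y = bsxfun(@rdivide, Y, sqrt(sum(Y.^2, 2)));  % normalize each row
   A = randn(n); A = (A + A.')/2;
   X = Y*Y';
   norm(diag(X) - 1)               % ~0: X = Y*Y' has unit diagonal
   trace(A*X) - trace(Y'*A*Y)      % ~0: the two cost values coincide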

 See the paper below for theory; specifically, for a proof that, for
 almost all A, second-order critical points of the problem in Y are
 globally optimal. In other words: there are no local traps in Y, despite
 non-convexity.

 Outputs:
 
       Y: the best point found (an n-by-p matrix with unit-norm rows).
          To find X, form Y*Y' (or, more efficiently, study X through Y).
 
       problem: the Manopt problem structure used to produce Y.
 
       S: a dual optimality certificate (a symmetric matrix of size n,
          sparse if A is sparse). The optimality gap (in the cost
          function) is at most -n*min(eig(S)), for both Y and X = Y*Y'.
          Hence, if min(eig(S)) is close to zero, Y is close to globally
          optimal. The smallest eigenvalue of S can be computed with
          eigs(S, 1, 'SR'), as in the example below.
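 
 For example (a minimal usage sketch, assuming Manopt is on the path):
 
   n = 100;
   A = randn(n); A = (A + A.')/2;      % any real symmetric matrix
   [Y, problem, S] = elliptope_SDP(A);
   lmin = eigs(S, 1, 'SR');            % smallest eigenvalue of S
   gap_bound = -n*min(lmin, 0)         % bound on the optimality gap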
 
 Paper: https://arxiv.org/abs/1606.04970

 @inproceedings{boumal2016bmapproach,
   author  = {Boumal, N. and Voroninski, V. and Bandeira, A.S.},
   title   = {The non-convex {B}urer-{M}onteiro approach works on smooth semidefinite programs},
   booktitle = {Neural Information Processing Systems (NIPS 2016)},
   year    = {2016}
 }
 
 See also: maxcut elliptope_SDP_complex

SOURCE CODE

function [Y, problem, S] = elliptope_SDP(A, p, Y0)
% Solver for semidefinite programs (SDPs) with unit diagonal constraints.
%
% function [Y, problem, S] = elliptope_SDP(A)
% function [Y, problem, S] = elliptope_SDP(A, p)
% function [Y, problem, S] = elliptope_SDP(A, p, Y0)
%
% A is a real, symmetric matrix of size n.
%
% This function uses a local optimization method in Manopt to solve the SDP
%
%   min_X  trace(A*X)  s.t.  diag(X) = 1 and X is positive semidefinite.
%
% In practice, the symmetric matrix X of size n is parameterized
% as X = Y*Y', where Y has size n x p. By default, p is taken large enough
% (about sqrt(2n)) to ensure that there exists an optimal X whose rank is
% smaller than p. Under that condition, the SDP is equivalent to the
% problem in Y:
%
%   min_Y  trace(Y'*A*Y)  s.t.  diag(Y*Y') = 1.
%
% The constraints on Y require each row of Y to have unit norm, which is
% why Manopt, a toolbox for optimization on manifolds, is well suited to
% this problem. An optional initial guess can be specified via the input Y0.
%
% See the paper below for theory; specifically, for a proof that, for
% almost all A, second-order critical points of the problem in Y are
% globally optimal. In other words: there are no local traps in Y, despite
% non-convexity.
%
% Outputs:
%
%       Y: the best point found (an n-by-p matrix with unit-norm rows).
%          To find X, form Y*Y' (or, more efficiently, study X through Y).
%
%       problem: the Manopt problem structure used to produce Y.
%
%       S: a dual optimality certificate (a symmetric matrix of size n,
%          sparse if A is sparse). The optimality gap (in the cost
%          function) is at most -n*min(eig(S)), for both Y and X = Y*Y'.
%          Hence, if min(eig(S)) is close to zero, Y is close to globally
%          optimal. The smallest eigenvalue of S can be computed with
%          eigs(S, 1, 'SR').
%
% Paper: https://arxiv.org/abs/1606.04970
%
% @inproceedings{boumal2016bmapproach,
%   author  = {Boumal, N. and Voroninski, V. and Bandeira, A.S.},
%   title   = {The non-convex {B}urer-{M}onteiro approach works on smooth semidefinite programs},
%   booktitle = {Neural Information Processing Systems (NIPS 2016)},
%   year    = {2016}
% }
%
% See also: maxcut elliptope_SDP_complex

% This file is part of Manopt: www.manopt.org.
% Original author: Nicolas Boumal, June 28, 2016
% Contributors:
% Change log:
%
%    Xiaowen Jiang Aug. 20, 2021
%       Added AD to compute the egrad and the ehess

    % If no inputs are provided, since this is an example file, generate
    % a random Erdos-Renyi graph. This is for illustration purposes only.
    if ~exist('A', 'var') || isempty(A)
        n = 100;
        A = triu(rand(n) <= .1, 1);
        A = (A+A.')/(2*n);
    end

    n = size(A, 1);
    assert(n >= 2, 'A must be at least 2x2.');
    assert(isreal(A), 'A must be real.');
    assert(size(A, 2) == n, 'A must be square.');

    % Force A to be symmetric.
    A = (A+A.')/2;

    % By default, pick a sufficiently large p (number of columns of Y).
    if ~exist('p', 'var') || isempty(p)
        p = ceil(sqrt(8*n+1)/2);
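        % (This choice is of order sqrt(2n) and guarantees p*(p+1)/2 > n,
        % the regime covered by the theory referenced above.)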
    end

    assert(p >= 2 && p == round(p), 'p must be an integer >= 2.');

    % Pick the manifold of n-by-p matrices with unit-norm rows.
    manifold = obliquefactory(p, n, true);

    problem.M = manifold;

    % These three short, commented lines of code are sufficient to define
    % the cost function and its derivatives. This is good code to write
    % when prototyping. Below, a more advanced use of Manopt is shown,
    % where the product A*Y is computed once and shared between the cost
    % and the gradient evaluations, avoiding redundant work.
    % % problem.cost  = @(Y) .5*sum(sum((A*Y).*Y));
    % % problem.egrad = @(Y) A*Y;
    % % problem.ehess = @(Y, Ydot) A*Ydot;

    % Products with A dominate the computational cost, hence we store the
    % result so that it can be shared among the cost, gradient and Hessian
    % evaluations. This is completely optional.
    function store = prepare(Y, store)
        if ~isfield(store, 'AY')
            AY = A*Y;
            store.AY = AY;
            store.diagAYYt = sum(AY .* Y, 2);
        end
    end

    % Define the cost function to be /minimized/.
    problem.cost = @cost;
    function [f, store] = cost(Y, store)
        store = prepare(Y, store);
        f = .5*sum(store.diagAYYt);
    end

    % Define the Riemannian gradient.
    problem.grad = @grad;
    function [G, store] = grad(Y, store)
        store = prepare(Y, store);
        G = store.AY - bsxfun(@times, Y, store.diagAYYt);
    end
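    % (Note: store.AY is the Euclidean gradient A*Y of the cost; the
    % subtraction above projects it onto the tangent space of the oblique
    % manifold at Y, row by row, yielding the Riemannian gradient.)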

    % If you want to, you can specify the Riemannian Hessian as well.
    problem.hess = @hess;
    function [H, store] = hess(Y, Ydot, store)
        store = prepare(Y, store);
        SYdot = A*Ydot - bsxfun(@times, Ydot, store.diagAYYt);
        H = manifold.proj(Y, SYdot);
    end
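    % (Note: SYdot equals S*Ydot for S = A - diag(sum((A*Y).*Y, 2)): the
    % same matrix as the dual certificate produced at the end of this
    % file. Projecting it to the tangent space at Y yields the Riemannian
    % Hessian along Ydot.)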

    % An alternative way to obtain the gradient and the Hessian is to use
    % automatic differentiation, as provided by the Deep Learning Toolbox
    % (slower). First, define the cost function without the store
    % structure:
    % function f = cost_AD(Y)
    %    AY = A*Y;
    %    diagAYYt = sum(AY .* Y, 2);
    %    f = .5*sum(diagAYYt);
    % end
    % problem.cost = @cost_AD;
    % Then call manoptAD to prepare AD for the problem structure:
    % problem = manoptAD(problem);

    % If no initial guess is available, tell Manopt to use a random one.
    if ~exist('Y0', 'var') || isempty(Y0)
        Y0 = [];
    end

    % Call your favorite solver.
    opts = struct();
    opts.verbosity = 0;      % Set to 0 for no output, 2 for normal output.
    opts.maxinner = 500;     % Maximum number of Hessian calls per iteration.
    opts.tolgradnorm = 1e-6; % Tolerance on the gradient norm.
    Y = trustregions(problem, Y0, opts);

    % If required, produce an optimality certificate.
    if nargout >= 3
        S = A - spdiags(sum((A*Y).*Y, 2), 0, n, n);
    end

end
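
The returned problem structure can be reused, for example to double-check
the derivatives numerically or to run a different Manopt solver. A minimal
sketch (assuming Manopt is on the path; checkgradient, checkhessian and
conjugategradient are standard Manopt tools):

  n = 100;
  A = sprandsym(n, 0.1);               % random sparse symmetric matrix
  [Y, problem] = elliptope_SDP(A);
  checkgradient(problem);              % numerical check of problem.grad
  checkhessian(problem);               % numerical check of problem.hess
  Ycg = conjugategradient(problem);    % run another solver on the same problem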
