
elliptope_SDP_complex

PURPOSE

Solver for complex semidefinite programs (SDPs) with unit diagonal.

SYNOPSIS

function [Y, problem, S] = elliptope_SDP_complex(A, p, Y0)

DESCRIPTION

 Solver for complex semidefinite programs (SDPs) with unit diagonal.
 
 function [Y, problem, S] = elliptope_SDP_complex(A)
 function [Y, problem, S] = elliptope_SDP_complex(A, p)
 function [Y, problem, S] = elliptope_SDP_complex(A, p, Y0)

 A is a Hermitian matrix of size n.

 This function uses a local optimization method in Manopt to solve the SDP

   min_X trace(A*X) s.t. diag(X) = 1, X is complex, positive semidefinite.

 In practice, the Hermitian matrix X of size n is parameterized as
 X = Y*Y', where Y has size n x p. By default, p is taken just large
 enough (namely, p = floor(sqrt(n)+1)) to ensure that there exists an
 optimal X whose rank is smaller than p. This ensures that the SDP is
 equivalent to the new problem in Y:

   min_Y  trace(Y'*A*Y)  s.t.  diag(Y*Y') = 1, Y complex

 The constraints on Y require each row of Y to have unit norm, which is
 why Manopt is well suited to solve this problem. An optional initial
 guess can be specified via the input Y0.

 See the paper below for theory; specifically, for a proof that, for
 almost all A, second-order critical points of the problem in Y are
 globally optimal. In other words: there are no local traps in Y,
 despite non-convexity.

 Outputs:

       Y: the best point found (an n-by-p matrix with unit-norm rows).
          To find X, form Y*Y' (or, more efficiently, study X through Y,
          as in the sketch below).
 
       problem: the Manopt problem structure used to produce Y.
 
       S: a dual optimality certificate (a Hermitian matrix of size n,
          sparse if A is sparse). The optimality gap (in the cost
          function) is at most n*max(0, -min(eig(S))), for both Y and
          X = Y*Y'. Hence, if min(eig(S)) is close to zero (or
          nonnegative), Y is close to globally optimal. The smallest
          eigenvalue of S can be computed with eigs(S, 1, 'SR').
 
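 For example, a minimal usage sketch (the random Hermitian A here is
 illustrative only; the gap bound follows the description of S above):

   n = 100;
   A = randn(n) + 1i*randn(n);
   A = (A + A')/2;                             % make A Hermitian
   [Y, problem, S] = elliptope_SDP_complex(A); % default p
   v = randn(n, 1) + 1i*randn(n, 1);
   Xv = Y*(Y'*v);                              % X*v without forming X
   lambda_min = real(eigs(S, 1, 'SR'));        % smallest eigenvalue of S
   gap_bound = n*max(0, -lambda_min);          % bound on suboptimality
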
 Paper: https://arxiv.org/abs/1606.04970

 @inproceedings{boumal2016bmapproach,
   author    = {Boumal, N. and Voroninski, V. and Bandeira, A.S.},
   title     = {The non-convex {B}urer-{M}onteiro approach works on smooth semidefinite programs},
   booktitle = {Neural Information Processing Systems (NIPS 2016)},
   year      = {2016}
 }
 
 See also: maxcut elliptope_SDP

CROSS-REFERENCE INFORMATION

This function calls:

This function is called by:

SUBFUNCTIONS

function store = prepare(Y, store)
function [f, store] = cost(Y, store)
function [G, store] = grad(Y, store)
function [H, store] = hess(Y, Ydot, store)

SOURCE CODE

function [Y, problem, S] = elliptope_SDP_complex(A, p, Y0)
% Solver for complex semidefinite programs (SDPs) with unit diagonal.
%
% function [Y, problem, S] = elliptope_SDP_complex(A)
% function [Y, problem, S] = elliptope_SDP_complex(A, p)
% function [Y, problem, S] = elliptope_SDP_complex(A, p, Y0)
%
% A is a Hermitian matrix of size n.
%
% This function uses a local optimization method in Manopt to solve the SDP
%
%   min_X trace(A*X) s.t. diag(X) = 1, X is complex, positive semidefinite.
%
% In practice, the Hermitian matrix X of size n is parameterized as
% X = Y*Y', where Y has size n x p. By default, p is taken just large
% enough (namely, p = floor(sqrt(n)+1)) to ensure that there exists an
% optimal X whose rank is smaller than p. This ensures that the SDP is
% equivalent to the new problem in Y:
%
%   min_Y  trace(Y'*A*Y)  s.t.  diag(Y*Y') = 1, Y complex
%
% The constraints on Y require each row of Y to have unit norm, which is
% why Manopt is well suited to solve this problem. An optional initial
% guess can be specified via the input Y0.
%
% See the paper below for theory; specifically, for a proof that, for
% almost all A, second-order critical points of the problem in Y are
% globally optimal. In other words: there are no local traps in Y,
% despite non-convexity.
%
% Outputs:
%
%       Y: the best point found (an n-by-p matrix with unit-norm rows).
%          To find X, form Y*Y' (or, more efficiently, study X through Y).
%
%       problem: the Manopt problem structure used to produce Y.
%
%       S: a dual optimality certificate (a Hermitian matrix of size n,
%          sparse if A is sparse). The optimality gap (in the cost
%          function) is at most n*max(0, -min(eig(S))), for both Y and
%          X = Y*Y'. Hence, if min(eig(S)) is close to zero (or
%          nonnegative), Y is close to globally optimal. The smallest
%          eigenvalue of S can be computed with eigs(S, 1, 'SR').
%
% Paper: https://arxiv.org/abs/1606.04970
%
% @inproceedings{boumal2016bmapproach,
%   author    = {Boumal, N. and Voroninski, V. and Bandeira, A.S.},
%   title     = {The non-convex {B}urer-{M}onteiro approach works on smooth semidefinite programs},
%   booktitle = {Neural Information Processing Systems (NIPS 2016)},
%   year      = {2016}
% }
%
% See also: maxcut elliptope_SDP

% This file is part of Manopt: www.manopt.org.
% Original author: Nicolas Boumal, Oct. 21, 2016
% Contributors:
% Change log:
%
%    Xiaowen Jiang, Aug. 20, 2021:
%       Added AD to compute the egrad and the ehess.

    % If no inputs are provided, since this is an example file, generate
    % a random complex matrix. This is for illustration purposes only.
    if ~exist('A', 'var') || isempty(A)
        n = 100;
        A = randn(n) + 1i*randn(n);
        A = (A+A')/sqrt(2*n);
    end

    n = size(A, 1);
    assert(n >= 2, 'A must be at least 2x2.');
    assert(size(A, 2) == n, 'A must be square.');

    % Force A to be Hermitian.
    A = (A+A')/2;

    % By default, pick a sufficiently large p (number of columns of Y).
    if ~exist('p', 'var') || isempty(p)
        p = floor(sqrt(n)+1);
    end

    assert(p >= 1 && p == round(p), 'p must be an integer >= 1.');

    % Pick the manifold of complex n-by-p matrices with unit-norm rows.
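    % (Note: in Manopt's oblique factories, the third argument set to
    % true requests the transposed representation, so points are n-by-p
    % matrices with unit-norm rows rather than p-by-n matrices with
    % unit-norm columns.)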
    manifold = obliquecomplexfactory(p, n, true);

    problem.M = manifold;

    % The three commented lines below are sufficient to define the cost
    % function and its derivatives: this is good code to write when
    % prototyping. Further down, a more advanced use of Manopt is shown,
    % where the computation of A*Y is shared between the cost and
    % gradient evaluations rather than repeated.
    % % problem.cost  = @(Y) .5*sum(sum(real((A*Y).*conj(Y))));
    % % problem.egrad = @(Y) A*Y;
    % % problem.ehess = @(Y, Ydot) A*Ydot;

    % Products with A dominate the computational cost, hence we store the
    % result: this makes it possible to share it among the cost, gradient
    % and Hessian evaluations. This is entirely optional.
    function store = prepare(Y, store)
        if ~isfield(store, 'AY')
            AY = A*Y;
            store.AY = AY;
            store.diagAYYt = sum(real(AY .* conj(Y)), 2);
        end
    end
    % Define the cost function to be /minimized/.
    problem.cost = @cost;
    function [f, store] = cost(Y, store)
        store = prepare(Y, store);
        f = .5*sum(store.diagAYYt);
    end

    % Define the Riemannian gradient.
    problem.grad = @grad;
    function [G, store] = grad(Y, store)
        store = prepare(Y, store);
        G = store.AY - bsxfun(@times, Y, store.diagAYYt);
    end

    % If you want to, you can specify the Riemannian Hessian as well.
    problem.hess = @hess;
    function [H, store] = hess(Y, Ydot, store)
        store = prepare(Y, store);
        SYdot = A*Ydot - bsxfun(@times, Ydot, store.diagAYYt);
        H = manifold.proj(Y, SYdot);
    end

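    % Optionally, the gradient and Hessian above can be verified
    % numerically with Manopt's standard diagnostic tools (run manually
    % while prototyping, then comment out again):
    % checkgradient(problem);
    % checkhessian(problem);
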
    % An alternative way to compute the egrad and the ehess is to use
    % automatic differentiation, as provided by the Deep Learning Toolbox
    % (slower). AD does not support complex numbers if the Matlab version
    % is R2021a or earlier; in that case, the cost function should be
    % defined differently. See complex_example_AD.m and manoptADhelp.m
    % for more information.
    % problem.cost = @cost_AD;
    %    function f = cost_AD(Y)
    %        AY = cprod(A, Y);
    %        diagAYYt = csum(creal(cdottimes(AY, cconj(Y))), 2);
    %        f = .5*csum(diagAYYt);
    %    end
    % Call manoptAD to automatically obtain the egrad and the ehess:
    % problem = manoptAD(problem);

    % If the version of Matlab installed is R2021b or later, specify the
    % cost function in the normal way and call manoptAD:
    % problem.cost = @cost_AD;
    %    function f = cost_AD(Y)
    %        AY = A*Y;
    %        diagAYYt = sum(real(AY .* conj(Y)), 2);
    %        f = .5*sum(diagAYYt);
    %    end
    % problem = manoptAD(problem);

    % If no initial guess is available, tell Manopt to use a random one.
    if ~exist('Y0', 'var') || isempty(Y0)
        Y0 = [];
    end

    % Call your favorite solver.
    opts = struct();
    opts.verbosity = 0;      % Set to 0 for no output, 2 for normal output.
    opts.maxinner = 500;     % Maximum number of Hessian calls per iteration.
    opts.tolgradnorm = 1e-6; % Tolerance on gradient norm.
    Y = trustregions(problem, Y0, opts);
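    % (Any Manopt solver could be substituted here; for example,
    % Y = steepestdescent(problem, Y0, opts); also works, though the
    % trust-regions method exploits the Hessian supplied above.)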

    % If required, produce an optimality certificate.
    if nargout >= 3
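        % The vector sum(real((A*Y).*conj(Y)), 2) equals
        % real(diag(A*(Y*Y'))) and plays the role of the dual variable,
        % so that S = A - diag(dual). Note that the Riemannian gradient
        % at Y equals S*Y, hence S*Y is numerically zero at critical
        % points.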
        S = A - spdiags(sum(real((A*Y).*conj(Y)), 2), 0, n, n);
    end

end
