
maxcut

PURPOSE

Algorithm to (try to) compute a maximum cut of a graph, via an SDP approach.

SYNOPSIS

function [x, cutvalue, cutvalue_upperbound, Y] = maxcut(L, r)

DESCRIPTION

 Algorithm to (try to) compute a maximum cut of a graph, via an SDP approach.
 
 function x = maxcut(L)
 function [x, cutvalue, cutvalue_upperbound, Y] = maxcut(L, r)

 L is the Laplacian matrix describing the graph to cut. The Laplacian of a
 graph is L = D - A, where D is the diagonal degree matrix (D(i, i) is the
 sum of the weights of the edges adjacent to node i) and A is the
 symmetric adjacency matrix of the graph (A(i, j) = A(j, i) is the weight
 of the edge joining nodes i and j). If L is sparse, this will be
 exploited.
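
 For instance, a minimal sketch building such a Laplacian for a small
 weighted graph (the matrix below is illustrative, not part of this file):

     A = [0 1 0 2;
          1 0 3 0;
          0 3 0 1;
          2 0 1 0];          % symmetric weighted adjacency matrix
     D = diag(sum(A, 2));    % diagonal degree matrix
     L = D - A;              % graph Laplacian, as expected by maxcut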

 If the graph has n nodes, then L is nxn and the output x is a vector of
 length n such that x(i) is +1 or -1. This partitions the nodes of the
 graph into two classes, in an attempt to maximize the sum of the weights of
 the edges that go from one class to the other (MAX CUT problem).

 cutvalue is the sum of the weights of the edges 'cut' by the partition x.
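
 Indeed, since (x(i) - x(j))^2 equals 4 when edge (i, j) is cut and 0
 otherwise, x'*L*x = sum over edges of A(i, j)*(x(i) - x(j))^2 is four
 times the cut value, so cutvalue = (x'*L*x)/4; this identity is used in
 the source below.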

 If the algorithm reaches the global optimum of the underlying SDP
 problem, then it produces an upper bound on the maximum cut value. This
 value is returned in cutvalue_upperbound if it is found. Otherwise, that
 output is set to NaN.

 If r is specified (by default, r = n), the algorithm will stop at rank r.
 This may prevent the algorithm from reaching a globally optimal solution
 of the underlying SDP problem (but can greatly help in keeping the
 execution time under control). If a global optimum of the SDP is reached
 before rank r, the algorithm stops there.

 Y is a matrix of size nxp, with p <= r, such that X = Y*Y' is the best
 solution found for the underlying SDP problem. If cutvalue_upperbound is
 not NaN, then Y*Y' is optimal for the SDP and cutvalue_upperbound is its
 cut value.
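
 A typical call, assuming L was built as in the sketch above (the value
 r = 5 below is just an illustrative rank cap):

     [x, cutvalue, cutvalue_upperbound, Y] = maxcut(L, 5);
     % x is a vector of +1/-1 entries; by the identity above,
     % cutvalue matches (x'*L*x)/4.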
 
 By Goemans and Williamson 1995, if the optimal value of the SDP is
 reached, then random-hyperplane rounding yields a cut whose expected
 value is at least a fraction 0.878 of the optimal cut value. (This does
 not strictly apply here because we do not round randomly by default;
 sign(Y*randn(size(Y, 2), 1)) gives such a cut -- usually a worse one).
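
 A sketch of that random rounding, repeated to keep the best draw (the
 loop count of 100 is an arbitrary choice, not from this file):

     best = -inf;
     for k = 1 : 100
         xk = sign(Y*randn(size(Y, 2), 1));
         xk(xk == 0) = 1;            % guard against exact zeros
         value = (xk'*L*xk)/4;
         if value > best
             best = value;
             x = xk;
         end
     end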

 The algorithm is essentially that of:
 Journee, Bach, Absil and Sepulchre, SIAM 2010
 Low-rank optimization on the cone of positive semidefinite matrices.

 It is itself based on the famous SDP relaxation of MAX CUT:
 Goemans and Williamson, 1995
 Improved approximation algorithms for maximum cut and satisfiability
 problems using semidefinite programming.
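
 For reference, the relaxation solved here is the SDP

     maximize   trace(L*X)/4
     subject to diag(X) = ones(n, 1), X positive semidefinite,

 and the low-rank approach substitutes X = Y*Y', so the code below
 minimizes the equivalent cost -trace(Y'*L*Y)/4 over matrices Y with
 unit-norm rows.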
 
 See also: elliptope_SDP

SUBFUNCTIONS

function [Y, info] = maxcut_fixedrank(L, Y)

SOURCE CODE

0001 function [x, cutvalue, cutvalue_upperbound, Y] = maxcut(L, r)
0002 % Algorithm to (try to) compute a maximum cut of a graph, via an SDP approach.
0003 %
0004 % function x = maxcut(L)
0005 % function [x, cutvalue, cutvalue_upperbound, Y] = maxcut(L, r)
0006 %
0007 % L is the Laplacian matrix describing the graph to cut. The Laplacian of a
0008 % graph is L = D - A, where D is the diagonal degree matrix (D(i, i) is the
0009 % sum of the weights of the edges adjacent to node i) and A is the
0010 % symmetric adjacency matrix of the graph (A(i, j) = A(j, i) is the weight
0011 % of the edge joining nodes i and j). If L is sparse, this will be
0012 % exploited.
0013 %
0014 % If the graph has n nodes, then L is nxn and the output x is a vector of
0015 % length n such that x(i) is +1 or -1. This partitions the nodes of the
0016 % graph into two classes, in an attempt to maximize the sum of the weights of
0017 % the edges that go from one class to the other (MAX CUT problem).
0018 %
0019 % cutvalue is the sum of the weights of the edges 'cut' by the partition x.
0020 %
0021 % If the algorithm reaches the global optimum of the underlying SDP
0022 % problem, then it produces an upper bound on the maximum cut value. This
0023 % value is returned in cutvalue_upperbound if it is found. Otherwise, that
0024 % output is set to NaN.
0025 %
0026 % If r is specified (by default, r = n), the algorithm will stop at rank r.
0027 % This may prevent the algorithm from reaching a globally optimal solution
0028 % of the underlying SDP problem (but can greatly help in keeping the
0029 % execution time under control). If a global optimum of the SDP is reached
0030 % before rank r, the algorithm stops there.
0031 %
0032 % Y is a matrix of size nxp, with p <= r, such that X = Y*Y' is the best
0033 % solution found for the underlying SDP problem. If cutvalue_upperbound is
0034 % not NaN, then Y*Y' is optimal for the SDP and cutvalue_upperbound is its
0035 % cut value.
0036 %
0037 % By Goemans and Williamson 1995, if the optimal value of the SDP is
0038 % reached, then random-hyperplane rounding yields a cut whose expected
0039 % value is at least a fraction 0.878 of the optimal cut value. (This does
0040 % not strictly apply here because we do not round randomly by default;
0041 % sign(Y*randn(size(Y, 2), 1)) gives such a cut -- usually a worse one).
0042 %
0043 % The algorithm is essentially that of:
0044 % Journee, Bach, Absil and Sepulchre, SIAM 2010
0045 % Low-rank optimization on the cone of positive semidefinite matrices.
0046 %
0047 % It is itself based on the famous SDP relaxation of MAX CUT:
0048 % Goemans and Williamson, 1995
0049 % Improved approximation algorithms for maximum cut and satisfiability
0050 % problems using semidefinite programming.
0051 %
0052 % See also: elliptope_SDP
0053 
0054 % This file is part of Manopt: www.manopt.org.
0055 % Original author: Nicolas Boumal, July 18, 2013
0056 % Contributors:
0057 % Change log:
0058 %
0059 %   April 3, 2015 (NB):
0060 %       L products now counted with the new shared memory system. This is
0061 %       more reliable and more flexible than using a global variable.
0062 %   Aug  20, 2021 (XJ):
0063 %       Added AD to compute the egrad and the ehess
0064 
0065     % If no inputs are provided, generate a random graph Laplacian.
0066     % This is for illustration purposes only.
0067     if ~exist('L', 'var') || isempty(L)
0068         n = 20;
0069         A = triu(rand(n) <= .4, 1);
0070         A = A+A';
0071         D = diag(sum(A, 2));
0072         L = D-A;
0073     end
0074 
0075 
0076     n = size(L, 1);
0077     assert(size(L, 2) == n, 'L must be square.');
0078 
0079     if ~exist('r', 'var') || isempty(r) || r > n
0080         r = n;
0081     end
0082     
0083     % We will let the rank increase. Each rank value will generate a cut.
0084     % We have to go up in the rank to eventually find a certificate of SDP
0085     % optimality. This in turn will provide an upperbound on the MAX CUT
0086     % value and ensure that we're doing well, according to Goemans and
0087     % Williamson's argument. In practice though, the good cuts often come
0088     % up for low rank values, so we had better keep track of the best one.
0089     best_x = ones(n, 1);
0090     best_cutvalue = 0;
0091     cutvalue_upperbound = NaN;
0092     
0093     time = [];
0094     cost = [];
0095     
0096     for rr = 2 : r
0097         
0098         manifold = elliptopefactory(n, rr);
0099         
0100         if rr == 2
0101             
0102             % At first, for rank 2, generate a random point.
0103             Y0 = manifold.rand();
0104              
0105         else
0106             
0107             % To increase the rank, we could just add a column of zeros to
0108             % the Y matrix. Unfortunately, this lands us in a saddle point.
0109             % To escape from the saddle, we may compute an eigenvector of
0110             % Sy associated with a negative eigenvalue: that yields a
0111             % (second-order) descent direction Z. See Journee et al.; Sy is
0112             % linked to dual certificates for the SDP.
0113             Y0 = [Y zeros(n, 1)];
0114             LY0 = L*Y0;
0115             Dy = spdiags(sum(LY0.*Y0, 2), 0, n, n);
0116             Sy = (Dy - L)/4;
0117             % Find the smallest (the "most negative") eigenvalue of Sy.
0118             eigsopts.isreal = true;
0119             [v, s] = eigs(Sy, 1, 'SA', eigsopts);
0120             % If there is no negative eigenvalue for Sy, then we are not at
0121             % a saddle point: we're actually done!
0122             if s >= -1e-8
0123                 % We can stop here: we found the global optimum of the SDP,
0124                 % and hence the reached cost is a valid upper bound on the
0125                 % maximum cut value.
0126                 cutvalue_upperbound = max(-[info.cost]);
0127                 break;
0128             end
0129             
0130             % This is our escape direction.
0131             Z = manifold.proj(Y0, [zeros(n, rr-1) v]);
0132             
0133             % % These instructions can be uncommented to see what the cost
0134             % % function looks like at a saddle point. But this requires the
0135             % % problem structure, which is not defined here: see the helper
0136             % % function.
0137             % plotprofile(problem, Y0, Z, linspace(-1, 1, 101));
0138             % drawnow; pause;
0139             
0140             % Now make a step in the Z direction to escape from the saddle.
0141             % It is not obvious that it is ok to do a unit step ... perhaps
0142             % we need to be cautious here with the stepsize. It's not too
0143             % critical though: the important point is to leave the saddle
0144             % point. But it's nice to guarantee monotone decrease of the
0145             % cost, and we can't do that with a constant step (at least,
0146             % not without a proper argument to back it up).
0147             stepsize = 1;
0148             Y0 = manifold.retr(Y0, Z, stepsize);
0149             
0150         end
0151         
0152         % Use the Riemannian optimization based algorithm lower in this
0153         % file to reach a critical point (typically a local optimizer) of
0154         % the max cut cost with fixed rank, starting from Y0.
0155         [Y, info] = maxcut_fixedrank(L, Y0);
0156         
0157         % Some info logging.
0158         thistime = [info.time];
0159         if ~isempty(time)
0160             thistime = time(end) + thistime;
0161         end
0162         time = [time thistime]; %#ok<AGROW>
0163         cost = [cost [info.cost]]; %#ok<AGROW>
0164 
0165         % Time to turn the matrix Y into a cut.
0166         % We can either do the random rounding as follows:
0167         % x = sign(Y*randn(rr, 1));
0168         % or extract the "PCA direction" of the points in Y and cut
0169         % orthogonally to that direction, as follows (seems faster than
0170         % calling svds):
0171         [U, ~, ~] = svd(Y, 0);
0172         u = U(:, 1);
0173         x = sign(u);
0174 
0175         cutvalue = (x'*L*x)/4;
0176         if cutvalue > best_cutvalue
0177             best_x = x;
0178             best_cutvalue = cutvalue;
0179         end
0180         
0181     end
0182     
0183     x = best_x;
0184     cutvalue = best_cutvalue;
0185     
0186     figure;
0187     plot(time, -cost, '.-');
0188     xlabel('Time [s]');
0189     ylabel('Relaxed cut value');
0190     title('The relaxed cut value is an upper bound on the optimal cut value.');
0191 
0192 end
0193 
0194 
0195 function [Y, info] = maxcut_fixedrank(L, Y)
0196 % Try to solve the (fixed) rank r relaxed max cut program, based on the
0197 % Laplacian of the graph L and an initial guess Y. L is nxn and Y is nxr.
0198 
0199     [n, r] = size(Y);
0200     assert(all(size(L) == n));
0201     
0202     % The fixed-rank elliptope geometry describes symmetric positive
0203     % semidefinite matrices of size n with rank r whose diagonal entries
0204     % are all equal to 1.
0205     manifold = elliptopefactory(n, r);
0206     
0207     % % If you want to compare the performance of the elliptope geometry
0208     % % against the (conceptually simpler) oblique manifold geometry,
0209     % % uncomment this line.
0210     % manifold = obliquefactory(r, n, true);
0211     
0212     problem.M = manifold;
0213     
0214     % % For rapid prototyping, these lines suffice to describe the cost
0215     % % function and its gradient and Hessian (here expressed using the
0216     % % Euclidean gradient and Hessian).
0217     % problem.cost  = @(Y)  -trace(Y'*L*Y)/4;
0218     % problem.egrad = @(Y) -(L*Y)/2;
0219     % problem.ehess = @(Y, U) -(L*U)/2;
0220 
0221     % It's also possible to compute the egrad and the ehess via AD, but it
0222     % is much slower. Note that the function trace is not yet supported by
0223     % AD; replace it with ctrace, described in the file manoptADhelp.m.
0224     % problem.cost  = @(Y)  -ctrace(Y'*L*Y)/4;
0225     % problem = manoptAD(problem);
0226     
0227     % Instead of the prototyping version, the functions below describe the
0228     % cost, gradient and Hessian using the caching system (the store
0229     % structure). This allows us to execute exactly the required number of
0230     % multiplications with the matrix L. These multiplications are counted
0231     % using the shared memory in the store structure: that memory is shared
0232     % across all visited points, so we get access to the same data
0233     % regardless of the point Y currently visited.
0234 
0235     % For every visited point Y, we will need L*Y. This function makes sure
0236     % the quantity L*Y is available, but only computes it if it wasn't
0237     % already computed.
0238     function store = prepare(Y, store)
0239         if ~isfield(store, 'LY')
0240             % Compute and store the product for the current point Y.
0241             store.LY = L*Y;
0242             % Create / increment the shared counter (independent of Y).
0243             if isfield(store.shared, 'counter')
0244                 store.shared.counter = store.shared.counter + 1;
0245             else
0246                 store.shared.counter = 1;
0247             end
0248         end
0249     end
0250 
0251     problem.cost = @cost;
0252     function [f, store] = cost(Y, store)
0253         store = prepare(Y, store);
0254         LY = store.LY;
0255         f = -(Y(:)'*LY(:))/4; % = -trace(Y'*LY)/4; but faster
0256     end
0257 
0258     problem.egrad = @egrad;
0259     function [g, store] = egrad(Y, store)
0260         store = prepare(Y, store);
0261         LY = store.LY;
0262         g = -LY/2;
0263     end
0264 
0265     problem.ehess = @ehess;
0266     function [h, store] = ehess(Y, U, store)
0267         store = prepare(Y, store); % this line is not strictly necessary
0268         LU = L*U;
0269         store.shared.counter = store.shared.counter + 1;
0270         h = -LU/2;
0271     end
0272 
0273     % statsfun is called exactly once after each iteration (including after
0274     % the evaluation of the cost at the initial guess). We then register
0275     % the value of the L-products counter (which counts how many products
0276     % were needed so far).
0277     % options.statsfun = @statsfun;
0278     % function stats = statsfun(problem, Y, stats, store) %#ok
0279     %     stats.Lproducts = store.shared.counter;
0280     % end
0281     % Equivalent, but simpler syntax:
0282     options.statsfun = statsfunhelper('Lproducts', ...
0283                      @(problem, Y, stats, store) store.shared.counter );
0284     
0285 
0286     % % Diagnostics tools: to make sure the gradient and Hessian are
0287     % % correct during the prototyping stage.
0288     % checkgradient(problem); pause;
0289     % checkhessian(problem); pause;
0290     
0291     % % To investigate the effect of the rotational invariance when using
0292     % % the oblique or the elliptope geometry, or to study the saddle point
0293     % % issue mentioned above, it is sometimes interesting to look at the
0294     % % spectrum of the Hessian. For large dimensions, this is slow!
0295     % stairs(sort(hessianspectrum(problem, Y)));
0296     % drawnow; pause;
0297     
0298     
0299     % % When facing a saddle point issue as described in the master
0300     % % function, and when no sure mechanism exists to find an escape
0301     % % direction, it may be helpful to set useRand to true and raise
0302     % % miniter to more than 1, when using trustregions. This will tell the
0303     % % solver to not stop before at least miniter iterations were
0304     % % accomplished (thus disregarding the zero gradient at the saddle
0305     % % point) and to use random search directions to kick start the inner
0306     % % solve (tCG) step. It is not as efficient as finding a sure escape
0307     % % direction, but sometimes it's the best we have.
0308     % options.useRand = true;
0309     % options.miniter = 5;
0310     
0311     options.verbosity = 2;
0312     [Y, Ycost, info] = trustregions(problem, Y, options); %#ok<ASGLU>
0313     
0314     fprintf('Products with L: %d\n', max([info.Lproducts]));
0315 
0316 end
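
A hypothetical usage sketch (the graph, density, and rank cap below are
made-up illustration values, not part of this file): build a random sparse
graph, run maxcut, and compare the cut found to the SDP upper bound when
one is available.

    n = 100;
    A = triu(sprand(n, n, 0.1) ~= 0, 1);  % random sparse adjacency pattern
    A = double(A + A');
    L = diag(sum(A, 2)) - A;              % sparse graph Laplacian
    [x, cutvalue, upperbound] = maxcut(L, 10);
    % upperbound is NaN if no SDP optimality certificate was found
    % before the rank cap of 10 was reached.
    fprintf('cut: %g, SDP upper bound: %g\n', cutvalue, upperbound);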
