Sparsest Cut by Random Contraction
(suggested by Jens Vygen)
Let $G = (V, E)$ be an undirected graph with edge weights $u : E \to \mathbb{R}_{>0}$ and vertex weights $w : V \to \mathbb{R}_{>0}$. The sparsest cut problem asks for a nonempty proper subset $S$ of the vertices that minimizes $\frac{u(\delta(S))}{w(S) \cdot w(V \setminus S)}$ (its sparsity), where $\delta(S)$ is the set of edges with exactly one endpoint in $S$, and $u(\delta(S))$ and $w(S)$ denote the corresponding weight sums. Often the interesting special case where $w(v) = 1$ for all $v \in V$ is considered.
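To make the objective concrete, here is a small sketch that evaluates the sparsity $\frac{u(\delta(S))}{w(S) \cdot w(V \setminus S)}$ of a cut and finds an optimum by exhaustive search. The function names and the data layout (a list of weighted edges $(x, y, u_e)$ and a vertex-weight dictionary) are illustrative choices, not part of the problem statement.

```python
from itertools import combinations

def sparsity(edges, w, S):
    """Sparsity u(delta(S)) / (w(S) * w(V \\ S)) of the cut given by S."""
    S = set(S)
    cut = sum(u for x, y, u in edges if (x in S) != (y in S))
    w_S = sum(wv for v, wv in w.items() if v in S)
    w_rest = sum(wv for v, wv in w.items() if v not in S)
    return cut / (w_S * w_rest)

def sparsest_cut_bruteforce(edges, w):
    """Exhaustive search over all nonempty proper subsets.

    Exponential running time -- only meant as a reference for tiny graphs.
    """
    V = list(w)
    best = (float("inf"), None)
    for k in range(1, len(V)):
        for S in combinations(V, k):
            best = min(best, (sparsity(edges, w, S), S))
    return best
```

For example, on a $4$-cycle with unit weights every "antipodal" cut $\{x, y\}$ with $x, y$ adjacent has sparsity $2/(2 \cdot 2) = 1/2$, which is optimal.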
Consider the following random contraction algorithm, inspired by Karger's algorithm for the minimum cut problem (i.e., the case with no denominators). In one iteration the algorithm randomly selects an edge $e = \{x, y\}$ with probability proportional to $\frac{u(e)}{w(x) \cdot w(y)}$ and contracts it, setting the weight of the new vertex to $w(x) + w(y)$. The algorithm proceeds until only two vertices are left and outputs the corresponding cut.
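The contraction step can be sketched as follows. This is a minimal illustration, assuming a connected graph, the selection rule reconstructed above (probability proportional to $u(e)/(w(x) \cdot w(y))$), and that parallel edges created by a contraction are merged by adding their weights; names and data layout are illustrative.

```python
import random
from collections import defaultdict

def random_contraction_cut(edges, w, rng=random):
    """One run of the random contraction heuristic (sketch).

    edges: list of (x, y, u_e) with u_e > 0; the graph must be connected.
    w:     dict mapping each vertex to its positive weight.
    Returns one side of the final cut as a set of original vertices.
    """
    groups = {v: {v} for v in w}      # super-vertex -> merged original vertices
    w = dict(w)                       # local copy; updated on contraction
    ew = defaultdict(float)           # unordered pair -> total edge weight
    for x, y, u in edges:
        ew[frozenset((x, y))] += u
    while len(groups) > 2:
        # Select an edge with probability proportional to u(e) / (w(x) w(y)).
        pairs = list(ew)
        scores = []
        for p in pairs:
            x, y = tuple(p)
            scores.append(ew[p] / (w[x] * w[y]))
        (chosen,) = rng.choices(pairs, weights=scores)
        x, y = tuple(chosen)
        # Contract {x, y}: merge y into x, add vertex weights,
        # and merge parallel edges by adding their weights.
        groups[x] |= groups.pop(y)
        w[x] += w.pop(y)
        del ew[chosen]
        for other in [p for p in ew if y in p]:
            u = ew.pop(other)
            (z,) = other - {y}
            ew[frozenset((x, z))] += u
    side, _ = groups.values()
    return side
```

In practice one would run the algorithm many times and keep the sparsest cut found, exactly as with Karger's algorithm for minimum cut.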
What is known?
The best known approximation algorithm for the sparsest cut problem (Arora-Rao-Vazirani; ARV) has approximation ratio $O(\sqrt{\log n})$, where $n = |V|$. For the minimum cut problem, Karger's random contraction algorithm outputs a minimum weight cut with probability at least $1/\binom{n}{2}$. The corresponding property cannot hold for the sparsest cut problem; here the probability of finding a sparsest cut can be exponentially small. However, the algorithm works well in practice. It is much simpler than ARV and even than Leighton-Rao, easy to implement, and fast, but its theoretical behaviour is not understood.
Is it true that with polynomially large probability the algorithm outputs a cut with sparsity at most $O(\sqrt{\log n})$ times the optimum? Maybe even the expected sparsity of the computed cut is $O(\sqrt{\log n})$ times the optimum?
We know no counterexamples. The bound on the expectation is true for cliques with one appended edge, and for double-stars (trees with exactly two vertices of degree greater than $1$), for unit vertex weights. It is also true for circuits, but even paths are difficult to analyze. (Some of these easy results have been obtained in joint discussions with Nicolai Hähnle and Lukas Dreyer.)