Deterministic algorithm for maximizing a monotone submodular function over a matroid

Revision as of 15:41, 9 October 2015 by Filmus (New open problem)


Problem statement

Is there a deterministic 1−1/e approximation algorithm for maximizing a monotone submodular function over a matroid?

In more detail:

  • Given:
    • A normalized (f(∅)=0) monotone submodular function f, given as a value oracle,
    • A matroid 𝔪, given as an independence oracle.
  • Output:
    • An independent set S with f(S) as large as possible.
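For concreteness, the two oracles can be sketched in Python. This is a minimal illustration, not part of the problem statement: the coverage function and uniform matroid are just example instantiations, and all names are hypothetical.

```python
def coverage_value_oracle(sets):
    """Value oracle for the coverage function f(S) = |union of sets[i], i in S|.
    Coverage functions are normalized, monotone, and submodular."""
    def f(S):
        covered = set()
        for i in S:
            covered |= sets[i]
        return len(covered)
    return f

def uniform_matroid_oracle(k):
    """Independence oracle for the uniform matroid: S is independent iff |S| <= k."""
    return lambda S: len(S) <= k

# A small instance: three sets over the universe {1, ..., 4}.
f = coverage_value_oracle([{1, 2}, {2, 3}, {4}])
is_independent = uniform_matroid_oracle(2)
assert f(set()) == 0      # normalized: f(empty set) = 0
assert f({0, 1}) == 3     # {0, 1} covers {1, 2, 3}
assert is_independent({0, 1}) and not is_independent({0, 1, 2})
```

The algorithm sees f and the matroid only through such black-box queries, which is what makes the value-oracle lower bounds below meaningful.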

Problem background

The problem of maximizing a monotone submodular function over a matroid generalizes the classical problem MaxCover. It is NP-hard, and furthermore its approximability threshold is 1−1/e in two senses:

  • It is NP-hard to approximate MaxCover better than 1−1/e (Feige).
  • Any algorithm approximating monotone submodular maximization over a uniform matroid better than 1−1/e must make an exponential number of value oracle queries (Fisher, Nemhauser and Wolsey).

There are several different 1−1/e approximation algorithms, all of which are randomized:

  • Vondrák's continuous greedy algorithm and its variants.
  • Filmus and Ward's non-oblivious local search algorithm.

Both algorithms compute auxiliary functions (the multilinear relaxation in the first case, the auxiliary objective function in the second) whose formulas involve a sum with exponentially many terms. While this sum can be approximated arbitrarily well by sampling, no way to evaluate it deterministically in polynomial time is known.
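The sampling step above can be sketched as follows. The multilinear extension is F(x) = E[f(R(x))], where the random set R(x) contains each element i independently with probability x_i; the sketch below (hypothetical names, not the full machinery of either algorithm) estimates it by Monte Carlo sampling.

```python
import random

def estimate_multilinear_extension(f, x, samples=5000, rng=None):
    """Monte Carlo estimate of F(x) = E[f(R)], where the random set R
    contains element i independently with probability x[i].  The exact
    value is a sum over all 2^n subsets, which is why the known 1-1/e
    algorithms resort to randomized sampling."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(samples):
        R = {i for i, p in enumerate(x) if rng.random() < p}
        total += f(R)
    return total / samples

# Sanity check on the modular function f(S) = |S|, where F(x) = sum(x):
estimate = estimate_multilinear_extension(len, [0.5, 0.5])
assert abs(estimate - 1.0) < 0.1
```

The estimate concentrates as the number of samples grows, but any fixed sample size yields only a randomized approximation, which is exactly the obstacle to derandomization.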

The best known deterministic algorithm is the greedy algorithm, whose approximation ratio over a general matroid is only 1/2 (over a uniform matroid it does achieve the optimal ratio 1−1/e).
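A minimal sketch of this greedy algorithm (illustrative Python, with oracle access modeled as plain functions): repeatedly add the feasible element with the largest marginal gain until no element improves the value.

```python
def greedy(ground_set, f, is_independent):
    """Greedy for monotone submodular maximization under a matroid
    constraint: repeatedly add the feasible element of largest marginal
    gain.  This is deterministic, gives a 1/2 approximation over a
    general matroid, and the optimal 1 - 1/e over a uniform matroid."""
    S = set()
    while True:
        best, best_gain = None, 0.0
        for e in ground_set - S:
            if is_independent(S | {e}):
                gain = f(S | {e}) - f(S)
                if gain > best_gain:
                    best, best_gain = e, gain
        if best is None:          # no feasible element improves f
            return S
        S.add(best)

# Example: a coverage instance under a cardinality (uniform matroid) constraint.
sets = [{1, 2}, {2, 3}, {4, 5, 6}]
def f(S):
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

S = greedy({0, 1, 2}, f, lambda T: len(T) <= 2)
assert f(S) == 5    # first picks set 2 (gain 3), then one of sets 0 and 1
```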

In contrast, maximizing a coverage function over a matroid can be done deterministically, using the methods above or using linear programming.

Until recently, another problem exhibiting a gap between the best randomized and deterministic algorithms was unconstrained submodular maximization, but recent work of Buchbinder and Feldman shows how to (partially) derandomize the double greedy algorithm.