MCMC and the Theory of Computing

Prof. Peter Winkler

[Image: the MCMC logo, a configuration of the Widom-Rowlinson model described below]

The basic problem in the Theory of Computing is to determine what kinds of problems can be efficiently solved by computer; in other words, what algorithms can and cannot do.

In the 1970s, theorists uncovered a wide range of "hard" decision and computation problems which are all intractable, except in the (unlikely) event that they are all tractable! Since then, exciting new techniques involving probabilistically checkable proofs have established that many quantities are as hard to approximate as they are to compute exactly.

Ironically, however, at about the same time theorists discovered a powerful mathematical tool which has greatly expanded the range of quantities that we can approximate efficiently. Examples include the volumes of high-dimensional bodies, partition functions in statistical physics, the permanent of a matrix, the number of linear extensions of a partially ordered set, contingency tables (used by actuaries to compute insurance rates), and many more. The method, called MCMC (Markov chain Monte Carlo), entails randomly sampling the objects being counted by starting with a fixed, non-random object and repeatedly moving to a random neighboring object.
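To make the sampling idea concrete, here is a minimal Python sketch, not from the original page, of a Metropolis-style chain that samples a near-uniform independent set of a small grid graph; the grid size, step count, and function names are illustrative assumptions.

    import random

    def grid_neighbors(n):
        """Adjacency lists for an n x n grid graph; vertices are (row, col) pairs."""
        nbrs = {(r, c): [] for r in range(n) for c in range(n)}
        for r in range(n):
            for c in range(n):
                for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < n and 0 <= cc < n:
                        nbrs[(r, c)].append((rr, cc))
        return nbrs

    def sample_independent_set(n=8, steps=100_000, seed=0):
        """Start from the empty (non-random) set and repeatedly propose
        toggling a random vertex, rejecting any move that would place
        two occupied vertices next to each other."""
        rng = random.Random(seed)
        nbrs = grid_neighbors(n)
        occupied = set()                 # the fixed, non-random starting object
        vertices = list(nbrs)
        for _ in range(steps):
            v = rng.choice(vertices)     # pick a random site
            if v in occupied:
                occupied.discard(v)      # removing a vertex always keeps independence
            elif all(u not in occupied for u in nbrs[v]):
                occupied.add(v)          # add only if no neighbor is occupied
        return occupied                  # approximately uniform when steps is large

    print(len(sample_independent_set()))

Because the proposal is symmetric and every accepted move stays within the set of independent sets, the chain's stationary distribution is uniform over them; how many steps suffice is exactly the mixing question discussed next.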

The mathematical crux of the method is proving "rapid mixing", i.e., showing that the process reaches a nearly random object in sufficiently short time. Practically every branch of mathematics has contributed to the delightful variety of ways that have been devised to show rapid mixing in its many guises.
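In the standard notation (not taken from this page), a chain with transition matrix P and stationary distribution \pi has mixing time

    \[
      \tau(\varepsilon) \;=\; \min\bigl\{\, t \;:\; \max_{x}\,\bigl\| P^t(x,\cdot) - \pi \bigr\|_{\mathrm{TV}} \le \varepsilon \,\bigr\},
    \]

where \|\cdot\|_{\mathrm{TV}} denotes total variation distance; "rapid mixing" typically means that \tau(\varepsilon) grows only polynomially in the size of the problem.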

Shown in the picture is a configuration of the Widom-Rowlinson (WR) model of statistical physics, generated by MCMC. The WR model simulates two gases (here, with red and green molecules), under the constraint that differing molecules may not occupy adjacent points on a grid.
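A hedged sketch of how such a configuration might be generated, in the same spirit as the independent-set example above: single-site dynamics on an n x n grid in which each site is empty, red, or green. The grid size, step count, and the choice of equal fugacities (set to 1 here for simplicity) are assumptions for illustration; the parameters behind the actual picture are not stated on this page.

    import random

    def wr_sample(n=16, steps=200_000, seed=1):
        """Single-site dynamics for the Widom-Rowlinson model on an n x n grid:
        each site is empty (0), red (1), or green (2), and red and green
        molecules may never occupy adjacent sites."""
        rng = random.Random(seed)
        grid = [[0] * n for _ in range(n)]   # start from the all-empty configuration

        def ok(r, c, color):
            if color == 0:
                return True                  # emptying a site is always allowed
            for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    other = grid[rr][cc]
                    if other != 0 and other != color:
                        return False         # a differing molecule is adjacent
            return True

        for _ in range(steps):
            r, c = rng.randrange(n), rng.randrange(n)
            color = rng.randrange(3)         # propose empty/red/green uniformly
            if ok(r, c, color):
                grid[r][c] = color           # accept only legal configurations
        return grid

    config = wr_sample()
    print(sum(cell == 1 for row in config for cell in row), "red molecules")

As before, the proposal is symmetric and every accepted move keeps the configuration legal, so the chain samples (approximately) uniformly from the valid WR configurations.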
