I work broadly in combinatorial optimization and adjacent areas, particularly probability and game theory. A lot of my work (though by no means all) concerns optimization problems with a network structure; this line of work is currently funded by a VIDI grant of the Dutch Science Foundation entitled “Usable Algorithms for Network Design”. Some of my current research interests are described below.
Network design problems are typically motivated by applications in communication, transport and logistics networks. See here for some relevant publications; two of my favourite topics are:
Steiner tree. One of the most fundamental network design problems, the Steiner tree problem asks for the cheapest way to connect a subset of the nodes of a given network. Despite being very simple to state, it is an NP-hard problem, and there is much left to be understood. Having explored the strong but rather impractical hypergraphic relaxation, I am currently very interested in the simpler bidirected cut relaxation.
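For concreteness, here is the bidirected cut relaxation in its standard form (the notation below is just my sketch): each edge is replaced by two oppositely directed arcs of the same cost, a root terminal is fixed, and one asks for a fractional arc set that enters every cut separating some terminal from the root.

```latex
% Bidirected cut relaxation: bidirect each edge of G = (V, E) into an arc set A,
% with c_{(u,v)} = c_{(v,u)} = c_{uv}; fix a root terminal r in the terminal set R.
\begin{align*}
  \min\quad & \sum_{a \in A} c_a x_a \\
  \text{s.t.}\quad & \sum_{a \in \delta^-(S)} x_a \;\ge\; 1
      && \text{for all } S \subseteq V \setminus \{r\} \text{ with } S \cap R \neq \emptyset,\\
  & x_a \ge 0 && \text{for all } a \in A.
\end{align*}
```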
Robust network design. This refers to a class of problems where we wish to design networks in the face of uncertain or varying usage. Robust network design provides a flexible way to model this uncertainty, and a host of fascinating algorithmic problems arise from it. Together with Navin Goyal and Bruce Shepherd, I resolved the “VPN Conjecture”, a long-standing open question in the area. Challenging and interesting questions remain: for instance, the approximability of robust network design in the single-commodity setting is completely open, and interesting for rather diverse reasons.
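To give a flavour of the setup, here is one common way of writing such a problem, under the symmetric “hose” uncertainty set that underlies the VPN Conjecture (the notation is mine, and precise formulations in the literature vary in their details):

```latex
% Each terminal v has a marginal bound b_v; the uncertainty set consists of all
% symmetric demand matrices consistent with these marginals.
\mathcal{D} \;=\; \Big\{\, D \ \text{symmetric},\ D \ge 0 \;:\;
    \sum_{u \neq v} D_{uv} \le b_v \ \text{for every terminal } v \,\Big\}
% The task is to install cheap capacities that support every admissible demand:
\begin{align*}
  \min\quad & \sum_{e \in E} c_e u_e \\
  \text{s.t.}\quad & \text{every } D \in \mathcal{D}
      \text{ can be routed (fractionally) within the capacities } u.
\end{align*}
```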
One of the biggest open problems in optimization asks: is there a strongly polynomial algorithm for linear programming? “Strongly polynomial” refers to a certain refined notion of efficiency; essentially, that the algorithm is efficient when working in a real model of computation where basic arithmetic operations on real numbers are considered to have unit cost.
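For concreteness, here is the standard requirement, paraphrased for a linear program in equational form:

```latex
% For a linear program
\min\{\, c^\top x \;:\; Ax = b,\ x \ge 0 \,\}, \qquad A \in \mathbb{R}^{m \times n},
% an algorithm is strongly polynomial if:
%  (i)   it uses only elementary arithmetic operations and comparisons;
%  (ii)  the number of such operations is bounded by a polynomial in n and m,
%        independently of the magnitudes of the entries of A, b and c;
%  (iii) on rational input, the bit-sizes of all intermediate numbers remain
%        polynomially bounded in the bit-size of the input.
```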
One major next step towards resolving this question is to devise a strongly polynomial algorithm for the minimum cost generalized flow problem. This is a variant of the standard minimum cost flow problem where flow can be scaled as it traverses an arc. In recent joint work with László Végh, we gave a much faster and simpler strongly polynomial algorithm for generalized flow maximization.
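One common way of writing the maximization version is the following (a sketch in my own notation; formulations differ in details such as how supplies and excesses are handled). Each arc a has a capacity u_a and a gain factor γ_a > 0, so that f_a units entering the arc deliver γ_a f_a units at its head, and the goal is to maximize the net flow arriving at a sink t:

```latex
\begin{align*}
  \max\quad & \sum_{a \in \delta^-(t)} \gamma_a f_a \;-\; \sum_{a \in \delta^+(t)} f_a \\
  \text{s.t.}\quad & \sum_{a \in \delta^-(v)} \gamma_a f_a \;\ge\; \sum_{a \in \delta^+(v)} f_a
      && \text{for all } v \in V \setminus \{t\},\\
  & 0 \le f_a \le u_a && \text{for all } a \in A.
\end{align*}
```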
I am interested in fundamental questions about equilibrium behaviour—how traffic should be expected to behave—in simplified, abstracted models of traffic, taking the perspective of computation, optimization, and algorithmic game theory. I have been particularly fascinated by a very natural, seemingly simple, but in fact exceedingly rich model where traffic congestion is modelled through simple point queues, induced when traffic flow entering a link exceeds the capacity of the link. It is perhaps the simplest model that captures the dynamic, time-varying behaviour of traffic in a satisfactory way, and is well-studied by transportation economists as the “Vickrey bottleneck model”.
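To make the model concrete, here is the point-queue dynamics on a single link, sketched in my own notation: a link e has capacity ν_e, receives inflow at rate f_e^+(θ) at time θ, and carries a queue of length z_e(θ) that grows exactly when the inflow exceeds the capacity.

```latex
\frac{\mathrm{d}}{\mathrm{d}\theta}\, z_e(\theta) \;=\;
  \begin{cases}
    f_e^+(\theta) - \nu_e, & \text{if } z_e(\theta) > 0,\\[2pt]
    \max\{\, f_e^+(\theta) - \nu_e,\ 0 \,\}, & \text{if } z_e(\theta) = 0,
  \end{cases}
% with the link's outflow rate equal to nu_e while a queue is present,
% and min{ f_e^+(theta), nu_e } otherwise.
```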
There are many fascinating questions about the structure of equilibria in this model; basic questions about even computing equilibria are open. We recently showed that under the natural necessary condition (the capacity of the network should be no less than the flow rate into the network), equilibrium behaviour eventually “settles down” into a situation where queue lengths no longer vary, and hence remain bounded. Most recently, I have been studying a variant of the model where users are able to choose their departure time in addition to their route through the network.
Many, many problems are of a stochastic nature. Moreover, probabilistic methods are a crucial tool in the modern design of approximation algorithms. I am interested in these interactions of probability with operations research and algorithms, but also in probability for its own sake; see these publications. Some recent projects:
Appointment scheduling. Motivated by applications in healthcare, this topic is concerned with scheduling appointments to balance patient waiting times and doctor/surgery idle times, when faced with patient service times that are highly stochastic. We obtain the first theoretical guarantees for certain popular heuristics (principally, “smallest-variance-first”, which takes the natural approach of putting the patients with the most predictable service times first), and seem to have been the first to view the problem through the lens of approximation algorithms.
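A tiny Monte-Carlo illustration of the smallest-variance-first ordering follows; the normal service-time distributions, the rule that each appointment slot equals the mean service time, and the unit cost weights are purely illustrative assumptions, not the setting of any particular paper.

```python
import random

def simulate(order, means, sds, runs=10_000, wait_weight=1.0, idle_weight=1.0):
    """Average cost of serving patients in `order`, with slot lengths equal to the means."""
    total = 0.0
    for _ in range(runs):
        clock = 0.0        # time at which the doctor becomes free
        appointment = 0.0  # scheduled start time of the current patient
        cost = 0.0
        for i in order:
            start = max(clock, appointment)
            cost += wait_weight * (start - appointment)          # patient waiting time
            cost += idle_weight * max(appointment - clock, 0.0)  # doctor idle time
            service = max(random.gauss(means[i], sds[i]), 0.0)   # truncated-normal service time
            clock = start + service
            appointment += means[i]   # the next slot starts one mean service time later
        total += cost
    return total / runs

means = [20.0, 20.0, 20.0, 20.0]
sds = [2.0, 5.0, 10.0, 15.0]   # heterogeneous predictability
svf = sorted(range(len(means)), key=lambda i: sds[i])  # smallest variance first
lvf = list(reversed(svf))                              # largest variance first, for comparison
print("smallest-variance-first cost:", simulate(svf, means, sds))
print("largest-variance-first cost: ", simulate(lvf, means, sds))
```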
Derandomization. In theoretical computer science, randomness is often thought of as a limited resource. Many of the recent advances in fast algorithms for linear and convex optimization problems currently make heavy use of randomization, and finding ways to make these algorithms deterministic (while maintaining their advantages in speed) is of great interest. Along these lines, we recently gave a version of the Kane-Nelson sparse Johnson-Lindenstrauss Lemma for dimensionality reduction which is deterministic, but does not suffer a loss in running time compared to the randomized version. In another work, we obtain a derandomization of the more refined Gordon’s Lemma as a byproduct of a more optimizer-friendly perspective on the landmark majorizing measures theory of Talagrand.
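For illustration, here is the standard randomized sparse Johnson-Lindenstrauss construction in the Kane-Nelson style, with a fixed number of ±1/√s entries per column. This is only meant to show the object being derandomized; the deterministic construction referred to above is different and more involved, and the parameters below are arbitrary.

```python
import numpy as np

def sparse_jl_matrix(m, n, s, rng):
    """Random m x n embedding matrix with exactly s nonzeros of value +-1/sqrt(s) per column."""
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)  # positions of the nonzeros in column j
        signs = rng.choice([-1.0, 1.0], size=s)      # independent random signs
        S[rows, j] = signs / np.sqrt(s)
    return S

rng = np.random.default_rng(0)
n, m, s = 2000, 200, 8       # ambient dimension, target dimension, column sparsity
S = sparse_jl_matrix(m, n, s, rng)
x = rng.standard_normal(n)
print("original norm:", np.linalg.norm(x))
print("embedded norm:", np.linalg.norm(S @ x))  # close to the original norm with high probability
```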