Invited talks

All invited talks will be held online

June 28, 15:00 MSK: Tim Roughgarden (Columbia University, USA), "How Computer Science Informs Modern Auction Design" (The opening lecture)

Over the last twenty years, computer science has relied on concepts borrowed from game theory and economics to reason about applications ranging from internet routing to real-time auctions for online advertising. More recently, ideas have increasingly flowed in the opposite direction, with concepts and techniques from computer science beginning to influence economic theory and practice.

In this lecture, Tim Roughgarden will illustrate this point with a detailed case study of the 2016-2017 Federal Communications Commission incentive auction for repurposing wireless spectrum. Computer science techniques, ranging from algorithms for NP-hard problems to nondeterministic communication complexity, have played a critical role both in the design of the reverse auction (with the government procuring existing licenses from television broadcasters) and in the analysis of the forward auction (in which the procured licenses are resold to the highest bidders).

June 29, 14:00 MSK: Edith Elkind (University of Oxford, UK), "United for Change: Deliberative Coalition Formation to Change the Status Quo"

We study a setting in which a community wishes to identify a strongly supported proposal from a space of alternatives, in order to change the status quo. We describe a deliberation process in which agents dynamically form coalitions around proposals that they prefer over the status quo. We formulate conditions on the space of proposals and on the ways in which coalitions are formed that guarantee that deliberation succeeds, that is, terminates by identifying a proposal with the largest possible support. Our results provide theoretical foundations for the analysis of deliberative processes, in particular in systems for democratic deliberation support such as LiquidFeedback or Polis.
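
As a rough toy illustration (a simplified greedy sketch under assumed rules, not the formal model analyzed in the talk), the following Python code lets agents that approve of a common alternative to the status quo merge their coalitions until no further merge is possible; the function name deliberate and the approval-set encoding are hypothetical choices for this sketch.

    # Toy sketch only: each agent approves the proposals it prefers over the
    # status quo, and coalitions greedily merge whenever some proposal is
    # approved by all of their members.  The talk's formal model imposes
    # specific conditions under which deliberation is guaranteed to find a
    # proposal with the largest possible support.

    def deliberate(approvals):
        """approvals: dict mapping each agent to the set of proposals it
        prefers over the status quo.  Returns (members, proposal) for the
        largest coalition found by greedy merging."""
        coalitions = [({a}, next(iter(ps))) for a, ps in approvals.items() if ps]
        merged = True
        while merged:
            merged = False
            for i in range(len(coalitions)):
                for j in range(i + 1, len(coalitions)):
                    members = coalitions[i][0] | coalitions[j][0]
                    common = set.intersection(*(approvals[a] for a in members))
                    if common:  # some proposal is acceptable to everyone: merge
                        coalitions[i] = (members, next(iter(common)))
                        del coalitions[j]
                        merged = True
                        break
                if merged:
                    break
        return max(coalitions, key=lambda c: len(c[0]))

    if __name__ == "__main__":
        approvals = {"a1": {"p", "q"}, "a2": {"q"}, "a3": {"q", "r"}, "a4": {"r"}}
        members, proposal = deliberate(approvals)
        print(sorted(members), "support", proposal)

Note that such a greedy process need not find the best-supported proposal in general; the conditions studied in the talk are precisely what rules this failure out.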

June 29, 16:30 MSK: Joel Spencer (New York University, USA), "Balancing Problems: A Fourfold Approach"

The balancing of items -- often called discrepancy -- arises naturally in Computer Science. Here we consider n vectors in n-space, with all coordinates +1 or -1. We create a signed sum of the vectors, with the goal that this signed sum is as small as possible. Here we use the max (or L-infinity) norm, though many variants are possible. We create a game with Paul (Erdos) selecting the vectors and Carole (find the anagram!) choosing whether to add or subtract. This becomes four (two times two) different problems. The vectors (Paul) can be chosen randomly or adversarially, the latter denoting worst-case analysis. The choice of signs (Carole) can be made on-line or off-line. All four variants are interesting and are at least partially solved. We emphasize the random (Paul) on-line (Carole) case, joint work with Nikhil Bansal.
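
To make the on-line setting concrete, here is a small Python sketch in which Paul plays random +1/-1 vectors and Carole picks each sign greedily to keep the running signed sum small in the max norm; this is only an illustrative baseline (the function name greedy_online_signs is ours), not the Bansal-Spencer algorithm discussed in the talk.

    import random

    def greedy_online_signs(vectors):
        """Choose +1/-1 for each arriving vector, greedily minimising the
        max (L-infinity) norm of the running signed sum."""
        n = len(vectors[0])
        partial = [0] * n
        signs = []
        for v in vectors:
            plus = max(abs(partial[i] + v[i]) for i in range(n))
            minus = max(abs(partial[i] - v[i]) for i in range(n))
            s = 1 if plus <= minus else -1
            partial = [partial[i] + s * v[i] for i in range(n)]
            signs.append(s)
        return signs, max(abs(x) for x in partial)

    if __name__ == "__main__":
        n = 200
        # Paul plays randomly: every coordinate is +1 or -1 with probability 1/2.
        vectors = [[random.choice((1, -1)) for _ in range(n)] for _ in range(n)]
        _, disc = greedy_online_signs(vectors)
        print("max-norm of the signed sum:", disc)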

June 30, 15:30 MSK: Jens Vygen (University of Bonn, Germany), "Traveling Salesman Problems: Approximation Algorithms and Black-Box Reductions"

We survey the recent progress on approximation algorithms and integrality ratios for variants of the traveling salesman problem, with a focus on black-box reductions from one problem to another. In particular, we explain recent results for the Path TSP and the Capacitated Vehicle Routing Problem, which are joint work with Vera Traub and Rico Zenklusen, and with Jannis Blauth and Vera Traub, respectively.

June 30, 17:00 MSK: Ugo dal Lago (University of Bologna, Italy), "On Differential Program Semantics"

Traditionally, program semantics is centered around notions of program equivalence and refinement, based on which verification and transformation techniques can be justified. More and more often, however, programs are substituted with approximately equivalent programs, or verified against imprecise specifications. Program semantics has started dealing with program differences only in recent years, through the interpretation of programs in metric spaces. We give a brief survey of the state of the art in metric program semantics, and of the inadequacy of metrics as a way to deal with program differences. We then point to a few recent results on a new kind of differential program semantics in which differences are somehow context-sensitive, and thus richer in structure than mere numbers.

July 1, 14:30 MSK: Hugo Gimbert (LaBRI, Bordeaux, France), "Two-Sided Random Matching Markets"

Stable matching in a community consisting of N men and N women is a classical combinatorial problem that has been the subject of intense theoretical and empirical study since its introduction in 1962 in a seminal paper by Gale and Shapley. There is empirical evidence that, in many practical instances, the stable matching is essentially unique. We survey a number of theoretical explanations for this phenomenon, based on a popularity model introduced by Immorlica and Mahdian in which preference lists are randomly generated and correlated.
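
For reference, here is a compact Python implementation of the Gale-Shapley deferred-acceptance algorithm, run on uniformly random preference lists; this is a plain uncorrelated model, whereas the popularity model of Immorlica and Mahdian mentioned above generates correlated lists.

    import random

    def gale_shapley(men_prefs, women_prefs):
        """Men-proposing deferred acceptance; returns a stable matching as a
        dict mapping each man to his matched woman (all indexed 0..n-1)."""
        n = len(men_prefs)
        # rank[w][m] = position of man m in woman w's preference list
        rank = [[0] * n for _ in range(n)]
        for w in range(n):
            for pos, m in enumerate(women_prefs[w]):
                rank[w][m] = pos
        next_choice = [0] * n        # next woman each man will propose to
        husband = [None] * n         # current partner of each woman
        free_men = list(range(n))
        while free_men:
            m = free_men.pop()
            w = men_prefs[m][next_choice[m]]
            next_choice[m] += 1
            if husband[w] is None:
                husband[w] = m
            elif rank[w][m] < rank[w][husband[w]]:
                free_men.append(husband[w])  # w trades up, her old partner is free
                husband[w] = m
            else:
                free_men.append(m)           # w rejects m, he stays free
        return {m: w for w, m in enumerate(husband)}

    if __name__ == "__main__":
        n = 5
        men = [random.sample(range(n), n) for _ in range(n)]
        women = [random.sample(range(n), n) for _ in range(n)]
        print(gale_shapley(men, women))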

July 1, 17:00 MSK: Merav Parter (Weizmann Institute of Science, Rehovot, Israel), "Resilient Distributed Computation"

Following the immense recent advances in distributed networks, the explosive growth of the Internet, and our increased dependence on these infrastructures, guaranteeing the uninterrupted operation of communication networks has become a major objective in network algorithms. The modern instantiations of distributed networks, such as the Bitcoin network and cloud computing, introduce, in addition, new security challenges that deserve urgent attention in both theory and practice.

In this talk, I will present a recent framework for obtaining fast, resilient, and secure distributed algorithms for fundamental graph problems. We will mainly provide general compilation schemes that are based on exploiting the high connectivity of the graph, and highlight major open problems in this area.

Based on joint work with Yael Hitron and Eylon Yogev.

July 2, 14:00 MSK: Amin Coja-Oghlan (Goethe University, Frankfurt, Germany), "Belief Propagation"

Graphical models are graph representations of dependencies among random variables. They occur across several disciplines, from statistical physics to machine learning. The Belief Propagation message passing algorithm is a heuristic for calculating the partition function of such a graphical model, the key quantity that characterises the model. Understanding Belief Propagation rigorously is an outstanding open problem in theoretical computer science. I am going to give an overview of what we know and what we don't.
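
As a minimal sketch of sum-product Belief Propagation (a toy pairwise binary model chosen for illustration, not an example from the talk), the following Python code passes messages along the edges of a small tree, where BP computes single-variable marginals exactly, and compares them against brute-force enumeration; the partition function can be recovered from the same messages via the Bethe free energy, which is not shown here.

    import itertools, math

    # Toy pairwise binary graphical model: node potentials phi[i][x] and a
    # common edge potential psi(xi, xj).  Sum-product BP is exact on trees
    # and a heuristic on graphs with cycles.

    def ising_potential(J):
        # psi(xi, xj) = exp(J * si * sj) with spins si = 2*xi - 1
        return lambda xi, xj: math.exp(J * (2 * xi - 1) * (2 * xj - 1))

    def belief_propagation(n, edges, psi, phi, iters=100):
        neigh = {v: [] for v in range(n)}
        for a, b in edges:
            neigh[a].append(b)
            neigh[b].append(a)
        # msg[(i, j)][x] = message from node i to node j, evaluated at x_j = x
        msg = {(i, j): [0.5, 0.5] for a, b in edges for i, j in ((a, b), (b, a))}
        for _ in range(iters):
            new = {}
            for i, j in msg:
                out = [sum(phi[i][xi] * psi(xi, xj) *
                           math.prod(msg[(k, i)][xi] for k in neigh[i] if k != j)
                           for xi in (0, 1))
                       for xj in (0, 1)]
                z = sum(out)
                new[(i, j)] = [v / z for v in out]
            msg = new
        beliefs = []
        for i in range(n):
            b = [phi[i][x] * math.prod(msg[(k, i)][x] for k in neigh[i]) for x in (0, 1)]
            z = sum(b)
            beliefs.append([v / z for v in b])
        return beliefs

    def exact_marginals(n, edges, psi, phi):
        # Brute-force enumeration of all 2^n assignments, for comparison.
        weights = {}
        for x in itertools.product((0, 1), repeat=n):
            weights[x] = (math.prod(phi[i][x[i]] for i in range(n)) *
                          math.prod(psi(x[a], x[b]) for a, b in edges))
        Z = sum(weights.values())
        return [[sum(w for x, w in weights.items() if x[i] == v) / Z for v in (0, 1)]
                for i in range(n)]

    if __name__ == "__main__":
        # A small tree: 0 - 1 - 2, with an extra leaf 3 attached to node 1.
        edges = [(0, 1), (1, 2), (1, 3)]
        psi = ising_potential(J=0.8)
        phi = [[2.0, 1.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]  # bias node 0 towards state 0
        print("BP   :", belief_propagation(4, edges, psi, phi))
        print("exact:", exact_marginals(4, edges, psi, phi))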

July 2, 17:00 MSK: Toniann Pitassi (University of Toronto, Canada), "How Hard is it to Refute a Random Unsatisfiable Formula?"

The Satisfiability problem is perhaps the most famous problem in theoretical computer science, and significant effort has been devoted to understanding randomly generated SAT instances. The random k-SAT model (where a random k-CNF formula is chosen by uniformly selecting m clauses from the set of all possible width-k clauses over n variables) is important for several reasons. First, it is an intrinsically natural model, analogous to random graph models, and closely related to phase transitions and structural phenomena occurring in statistical physics. Thus, like random graph models, understanding the complexity of random SAT sheds light on fundamental structural properties of the satisfiability problem. Secondly, the random k-SAT distribution gives us a test bed of empirically hard examples which are useful for comparing and analyzing SAT algorithms. Third, the difficulty of certifying the unsatisfiability of a random k-SAT formula is connected to several other central questions in complexity theory, including the complexity of approximately solving NP-hard problems and the complexity of learning DNF formulas.
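
For concreteness, here is a short Python sketch of sampling from the random k-SAT distribution described above; the variable count, clause count, and density in the example are arbitrary illustrative choices.

    import random

    def random_ksat(n, m, k, rng=random):
        """Sample a random k-CNF formula on n variables: m clauses, each on k
        distinct variables, with every literal negated independently with
        probability 1/2.  Literals are encoded DIMACS-style as +v / -v."""
        formula = []
        for _ in range(m):
            chosen = rng.sample(range(1, n + 1), k)
            formula.append([v if rng.random() < 0.5 else -v for v in chosen])
        return formula

    if __name__ == "__main__":
        # Around clause density m/n of roughly 4.27, random 3-SAT is believed to
        # undergo a satisfiability phase transition (the threshold is known
        # rigorously for large k).
        n, k = 100, 3
        m = int(4.27 * n)
        for clause in random_ksat(n, m, k)[:5]:
            print(clause)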

We will first review the fascinating world of random structured distributions (k-SAT, k-CSP, and random graph models), their structural properties, and conjectures relating their hardness to other questions. We then focus on the following question: how difficult is it to certify that a given random k-SAT formula is unsatisfiable (in a given proof system)? Chvátal and Szemerédi famously proved exponential lower bounds for Resolution refutations of random k-SAT. More recently, exponential lower bounds have been shown for SOS (Sum-of-Squares) and Cutting Planes refutations. Taken together, these lower bounds show that most state-of-the-art algorithms require exponential time on random k-SAT instances. On the other hand, current algorithms are unreasonably effective at solving random k-SAT instances! After reviewing these positive and negative results, we conclude with some open problems.