Monday, June 8, 12:00, Room 106


Speakers: Sofya Raskhodnikova (Penn State University) and Adam Smith (Penn State University).

Two talks will be given:

Sofya Raskhodnikova (Penn State University) Transitive-closure Spanners

Adam Smith (Penn State University) What Can We Learn Privately?

Title: Transitive-closure Spanners.

Abstract

We define the notion of a transitive-closure spanner of a directed graph: a graph of small diameter that preserves the connectivity of the original graph. Formally, given a directed graph $ G = (V,E) $ and an integer $ k\ge 1 $, a k-transitive-closure-spanner (k-TC-spanner) of G is a directed graph $ H = (V, E_H) $ that (1) has the same transitive closure as G and (2) has diameter at most k. These spanners were studied implicitly in sublinear algorithms, access control, and data structures, and their properties have been rediscovered over the span of 20 years. We bring these areas under the unifying framework of TC-spanners, abstracting the common task implicitly tackled in these diverse applications as the problem of constructing sparse TC-spanners.
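To make the definition concrete, here is a small Python sketch (not from the talk) that checks whether a candidate graph H is a k-TC-spanner of G, by verifying the two conditions directly: equal transitive closures, and every reachable pair connected in H within k hops. All function names are illustrative.

```python
def transitive_closure(graph):
    """Return the set of reachable pairs (u, v), u != v, via DFS from each vertex.
    `graph` maps each vertex to a list of its out-neighbors."""
    closure = set()
    for start in graph:
        stack, seen = [start], set()
        while stack:
            u = stack.pop()
            for v in graph.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        closure.update((start, v) for v in seen if v != start)
    return closure

def is_k_tc_spanner(g, h, k):
    """Check the two defining conditions: (1) H has the same transitive
    closure as G, and (2) H has diameter at most k (every reachable pair
    is joined by a path of at most k edges)."""
    if transitive_closure(g) != transitive_closure(h):
        return False
    for start in h:
        # BFS in H, counting hops from `start`
        dist, frontier = {start: 0}, [start]
        while frontier:
            nxt = []
            for u in frontier:
                for v in h.get(u, ()):
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        if any(d > k for v, d in dist.items() if v != start):
            return False
    return True
```

For example, for the directed path 1→2→3→4, adding the shortcut edges 1→3 and 2→4 yields a 2-TC-spanner: the transitive closure is unchanged, and every reachable pair is now within two hops.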

We study the size of the sparsest k-TC-spanners for specific graph families, as well as the computational task of finding the sparsest k-TC-spanner for a given input graph. We present new positive and negative results on both fronts. In the talk, we will explain some ramifications of these results for testing monotonicity of functions.

Joint work with Arnab Bhattacharyya, Elena Grigorescu, Kyomin Jung and David Woodruff.

Title: What Can We Learn Privately?

Abstract

Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy "differential privacy", a recent notion that provides meaningful confidentiality guarantees in the presence of arbitrary side information.
We introduce and formulate private learning problems. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. Along the way we develop novel algorithmic tools and bounds on the sample size required by private learning algorithms. Specifically, we provide:
(1) A generic, distribution-free private learning algorithm that uses approximately $ \log|C| $ samples to learn a concept class C. This is a private analogue of Occam's razor. The generic learner is not always computationally efficient.
(2) A computationally efficient, distribution-free private PAC learner for the class of parity functions.
(3) A precise characterization of local, or randomized response, private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model.
(4) A separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.
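The local model in items (3) and (4) is built on the classic randomized-response primitive: each user perturbs their own bit before sending it, so the curator never sees raw data. The sketch below (illustrative, not the paper's construction) shows how a curator can still recover an unbiased estimate of a population mean from such reports; `estimate_mean` and its parameters are assumptions for this example.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (1 + e^eps),
    and the flipped bit otherwise. For a single bit this satisfies
    epsilon-differential privacy."""
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if random.random() < p else 1 - bit

def estimate_mean(bits, epsilon):
    """Unbiased estimate of the fraction of 1s from noisy local reports.
    Since E[report] = p * mean + (1 - p) * (1 - mean), we invert that
    affine map to debias the empirical average."""
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    reports = [randomized_response(b, epsilon) for b in bits]
    noisy_mean = sum(reports) / len(reports)
    return (noisy_mean - (1 - p)) / (2 * p - 1)
```

A statistical query is exactly an estimate of such a mean over the sample, which is why local algorithms and SQ learning turn out to have the same power; the cost of privacy shows up as the extra variance introduced by the debiasing step.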

Based on joint work with Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim and Sofya Raskhodnikova, published at FOCS 2008.