This reading group will explore connections between hardness of learning and cryptography, including refutation, Goldreich's PRG, learning with errors, and public-key cryptography.
Meeting Time: Mondays, 10-11 PT
Location: Calvin Lab Room 116
Organizers: Sitan Chen, Sushrut Karmalkar, Frederic Koehler
Schedule

Date: 9/7
Talk: Introductory meeting
Date: 9/20
Speaker: Andrej Bogdanov
Title: Cryptography basics for hardness of learning
Abstract: I plan to give a broad overview of cryptographic notions and techniques that have been used to argue computational hardness (and easiness) of learning:
- pseudorandom generators, pseudorandom functions, public-key encryption
- the relation between distinguishability and predictability
- examples based on Discrete Log, Learning with Errors, and Goldreich's construction
Based on interest and time constraints, we can also discuss some of the following topics:
- one-way functions and hardness-to-randomness reductions
- barriers: worst-case versus average-case, private-key versus public-key
- hardness of refutation
Links: PRGs, PRFs
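The relation between distinguishability and predictability mentioned in this talk is Yao's theorem: any efficient next-bit predictor yields an efficient distinguisher (and vice versa). As a minimal illustration, here is a deliberately insecure toy PRG together with a predictor-based distinguisher; the construction and names are my own sketch, not material from the talk.

```python
import random

def bad_prg(seed_bits):
    # Insecure toy PRG: stretch n bits to n+1 by appending the parity bit.
    return seed_bits + [sum(seed_bits) % 2]

def predictor(prefix):
    # Next-bit predictor: guesses the parity of the prefix.
    # Always correct on bad_prg outputs, correct w.p. 1/2 on uniform strings.
    return sum(prefix) % 2

def distinguisher(string):
    # Outputs True ("pseudorandom") iff the predictor gets the last bit right.
    return predictor(string[:-1]) == string[-1]

random.seed(0)
n, trials = 16, 2000
prg_hits = sum(distinguisher(bad_prg([random.randrange(2) for _ in range(n)]))
               for _ in range(trials))
uni_hits = sum(distinguisher([random.randrange(2) for _ in range(n + 1)])
               for _ in range(trials))
advantage = prg_hits / trials - uni_hits / trials
print(advantage)  # large advantage: the predictor breaks this PRG
```

A secure PRG is exactly one for which no efficient predictor (equivalently, no efficient distinguisher) achieves non-negligible advantage.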
Date: 9/27
Speaker: Andrej Bogdanov
Title: Cryptography basics for hardness of learning (cont'd)
Abstract: Continuation of the 9/20 talk; see the abstract above.
Date: 10/4
Speaker: Sidhanth Mohanty
Title: Complexity of refuting random CSPs
Date: 10/18
Speaker: Min Jae Song
Title: Continuous Learning with Errors
Abstract: In this talk, I will introduce the Continuous Learning with Errors (CLWE) problem, which can be seen as a continuous variant of the Learning with Errors (LWE) problem from lattice-based cryptography. Then, I will give a high-level overview of a polynomial-time quantum reduction from worst-case lattice problems to CLWE, which establishes average-case hardness guarantees similar to those of LWE. This result can be leveraged to further establish the (conditional) computational hardness of canonical unsupervised and supervised learning problems, such as density estimation for Gaussian mixtures and learning one-hidden-layer neural networks on the standard Gaussian distribution. This is based on joint work with Joan Bruna, Oded Regev, Yi Tang, and Ilias Zadik.
Links: CLWE paper
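For a concrete reference point: in the Bruna-Regev-Song-Tang formulation, a CLWE sample is a standard Gaussian vector labeled by a noisy inner product with a hidden unit direction, taken mod 1. Below is a minimal sample generator assuming that formulation; the parameter names `gamma` (frequency) and `beta` (noise level) follow common usage but the code itself is only an illustrative sketch.

```python
import random

def clwe_samples(w, gamma, beta, m):
    """Draw m CLWE samples (y, z) with y ~ N(0, I_d) and
    z = (gamma * <y, w> + e) mod 1, where e ~ N(0, beta^2)
    and w is the hidden unit-norm direction."""
    d = len(w)
    out = []
    for _ in range(m):
        y = [random.gauss(0.0, 1.0) for _ in range(d)]
        inner = sum(yi * wi for yi, wi in zip(y, w))
        z = (gamma * inner + random.gauss(0.0, beta)) % 1.0
        out.append((y, z))
    return out
```

Given `w`, the labels are easy to verify; the (conjecturally hard) learning problem is to recover `w`, or even to distinguish these samples from Gaussian vectors with uniform labels on [0, 1).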
Date: 10/27
Speaker: Ilias Zadik
Title: Cryptographic Hardness and LLL for the Cosine Neuron
Abstract: In this talk, we discuss a new polynomial-time algorithmic framework for inference based on the celebrated Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm. Perhaps surprisingly, this framework is able to bypass multiple notions of "computational hardness for inference" in a number of "exponentially-low-noise" inference settings. Examples include the algorithm provably succeeding in previously believed "hard" regimes of 1) sparse regression, where the Overlap Gap Property holds and low-degree methods fail, 2) phase retrieval, where AMP fails, and 3) cosine neuron learning, where SQ methods fail. Time permitting, we will present the framework and study the details of some of these applications. This is based on joint works with David Gamarnik, Eren Kizildag, Min Jae Song, and Joan Bruna.
Links: Cosine neuron
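Full LLL is too long for a snippet, but its two-dimensional special case, Lagrange-Gauss reduction, already shows the core loop used in lattice basis reduction: size-reduce one basis vector against the shorter one by a nearest-integer multiple, then swap if the result got shorter. This is a generic sketch of that classical algorithm, not the framework from the talk.

```python
def gauss_reduce(u, v):
    # Lagrange-Gauss reduction: the d = 2 special case of LLL.
    # u, v are integer 2-vectors spanning a lattice; returns a reduced basis.
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]

    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # Size-reduction: subtract the nearest-integer projection multiple.
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u  # exchange step, as in LLL's swap condition
```

For example, `gauss_reduce((1, 1), (3, 2))` recovers a shortest basis of the integer lattice Z^2. In dimension 2 this finds a shortest lattice vector exactly; LLL generalizes the same reduce-and-swap pattern to higher dimensions with an approximation guarantee.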
Date: 11/15
Speaker: Prayaag Venkat
Title: Public-key Cryptography from Different Assumptions
Abstract: I will explain how to build a public-key encryption scheme whose security is based on the assumed hardness of solving a random system of sparse linear equations mod 2. This assumption contrasts with those made in other constructions of public-key cryptography, which are typically based on algebraic or lattice problems. To prove security and correctness of this scheme, I will discuss a reduction from the search problem to the distinguishing problem, as well as a connection to non-deterministic refutation of random CSPs. To conclude, I will survey evidence supporting the hardness assumption. Based on a paper of Applebaum, Barak, and Wigderson.
Links: ABW
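The underlying hardness assumption concerns random sparse parity equations mod 2, a small fraction of which are noisy (sparse LPN, in the spirit of Applebaum-Barak-Wigderson). The generator below is a toy sketch of that sample distribution for intuition only; the parameter names are mine, and this is not the encryption scheme itself.

```python
import random

def sparse_lpn_samples(secret, k, noise_rate, m):
    """Generate m noisy k-sparse linear equations mod 2 over `secret`:
    each equation XORs k uniformly random secret positions, and its
    right-hand side is flipped with probability noise_rate."""
    n = len(secret)
    eqs = []
    for _ in range(m):
        idx = random.sample(range(n), k)       # k-sparse support
        rhs = sum(secret[i] for i in idx) % 2  # true parity
        if random.random() < noise_rate:
            rhs ^= 1                           # noisy equation
        eqs.append((idx, rhs))
    return eqs
```

The assumed-hard task is to distinguish such (support, right-hand side) pairs from pairs whose right-hand sides are uniformly random bits; solving for `secret` is the corresponding search problem.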
Date: 11/22
Speaker: Michael Kim
Title: Planting Undetectable Backdoors in ML Models
Abstract: Given the costs of training ML models, users often elect to delegate learning to service providers whom they do not fully trust. In this talk, we explore possible abuses of power by such untrusted ML trainers. We show how a malicious trainer can plant *undetectable backdoors* in classifiers. The trainer produces a backdoored classifier together with a backdoor key. Given an input and the backdoor key, there is an efficient algorithm for computing a backdoored input: a small perturbation to the original input, for which the model output is arbitrarily inaccurate. Importantly, without the backdoor key, the planted backdoor *cannot be detected*, a property we formalize and prove under standard cryptographic assumptions.
Our construction of undetectable backdoors sheds light on the related issue of robustness to adversarial examples. Paradoxically, undetectable backdoors can be planted into classifiers that are certified to be robust against adversarial examples, not only by existing certification methods but also by future ones. In the extreme, our results imply the existence of classifiers where every input has an adversarial example, yet no efficient algorithm can find any such example. In all, our findings represent a significant theoretical roadblock to certifying adversarial robustness.
Joint work with Shafi Goldwasser, Vinod Vaikuntanathan, and Or Zamir.
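The backdoor-key mechanism described in the abstract is signature-like: only the key holder can produce triggered inputs, and without the key the backdoor's behavior is hard to elicit. Below is a toy MAC-based sketch of that idea; the stand-in `base_classifier` and the appended-tag trigger are my illustrative choices, not the paper's construction.

```python
import hashlib
import hmac

TAG_LEN = 8  # bytes of MAC tag appended to a triggered input

def base_classifier(x: bytes) -> int:
    # Stand-in "model": classify by parity of the byte sum (illustrative only).
    return sum(x) % 2

def backdoored_classifier(x: bytes, key: bytes) -> int:
    # Behaves exactly like the base classifier unless the input ends with a
    # valid MAC tag under the secret backdoor key, in which case it flips.
    body, tag = x[:-TAG_LEN], x[-TAG_LEN:]
    expected = hmac.new(key, body, hashlib.sha256).digest()[:TAG_LEN]
    if hmac.compare_digest(tag, expected):
        return 1 - base_classifier(x)
    return base_classifier(x)

def activate_backdoor(x: bytes, key: bytes) -> bytes:
    # Key holder's algorithm: turn any input into a triggered input.
    tag = hmac.new(key, x, hashlib.sha256).digest()[:TAG_LEN]
    return x + tag
```

Without `key`, producing an input on which the two classifiers disagree requires forging the MAC, which is infeasible under standard assumptions; in the paper this intuition is replaced by a formal undetectability definition and a cryptographic proof.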
Date: 11/29
Speaker: Shafi Goldwasser
Title: TBA
Possible papers:
- Michael Kearns, Les Valiant. Cryptographic Limitations on Learning Boolean Formulae and Finite Automata.
- Moni Naor. Evaluation May Be Easier Than Generation.
- Avrim Blum, Merrick Furst, Michael Kearns, Richard Lipton. Cryptographic Primitives Based on Hard Learning Problems.
- Michael Kharitonov. Cryptographic Hardness of Distribution-Specific Learning.
- Michael Kharitonov. Cryptographic Lower Bounds for Learnability of Boolean Functions on the Uniform Distribution.
- Dana Angluin, Michael Kharitonov. When Won't Membership Queries Help?
- Adam Klivans, Alexander Sherstov. Cryptographic Hardness Results for Learning Intersections of Halfspaces.
- Benny Applebaum, Boaz Barak, David Xiao. On Basing Lower-Bounds for Learning on Worst-Case Assumptions.
- Benny Applebaum, Boaz Barak, Avi Wigderson. Public-Key Cryptography from Different Assumptions.
- Benny Applebaum, Pavel Raykov. Fast Pseudorandom Functions Based on Expander Graphs.
- Amit Daniely, Shai Shalev-Shwartz. Complexity Theoretic Limitations on Learning DNFs.
- Amit Daniely. Complexity Theoretic Limitations on Learning Halfspaces.
- Aloni Cohen, Shafi Goldwasser, Vinod Vaikuntanathan. Aggregate Pseudorandom Functions and Connections to Learning.
- Salil Vadhan. On Learning vs. Refutation.
- Pravesh Kothari, Roy Livni. Improper Learning by Refuting.
- Joan Bruna, Oded Regev, Min Jae Song, Yi Tang. Continuous LWE.
- Amit Daniely, Gal Vardi. Hardness of Learning Neural Networks with Natural Weights.
- Mikito Nanashima. Extending Learnability to Auxiliary-Input Cryptographic Primitives and Meta-PAC Learning.
- Amit Daniely, Gal Vardi. From Pseudorandom Generators to Hardness of Learning.