This reading group will explore connections between hardness of learning and cryptography, including refutation, Goldreich's PRG, learning with errors, and public-key cryptography, among other topics.
Meeting Time: Mondays, 10–11 PT
Location: Calvin Lab Room 116
Organizers: Sitan Chen, Sushrut Karmalkar, Frederic Koehler
Date: 9/7
Talk: Introductory meeting

Date: 9/20
Speaker: Andrej Bogdanov
Title: Cryptography basics for hardness of learning
Abstract: I plan to give a broad overview of cryptographic notions and techniques that have been used to argue computational hardness (and easiness) of learning:
- pseudorandom generators, pseudorandom functions, public-key encryption
- the relation between distinguishability and predictability
- examples based on Discrete Log, Learning with Errors, and Goldreich's construction
Depending on interest and time constraints, we can also discuss some of the following topics:
- one-way functions and hardness-to-randomness reductions
- barriers: worst-case versus average-case, private-key versus public-key
- hardness of refutation
Links: PRGs, PRFs
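The second bullet above (distinguishability versus predictability) has a direction that is easy to see in code: any next-bit predictor immediately yields a distinguisher. The sketch below is a toy illustration, not anything from the talk; the generator, helper names, and parameters are all invented for the example.

```python
import random

def distinguisher_from_predictor(predict, i):
    """Easy direction of the predictability/distinguishability equivalence:
    a predictor for bit i yields a distinguisher that accepts iff the
    prediction is correct."""
    def D(bits):
        return int(predict(bits[:i]) == bits[i])
    return D

# Toy "generator": duplicates each seed bit, so bit 1 always equals bit 0.
def toy_gen(rng, n):
    out = []
    for _ in range(n // 2):
        b = rng.randrange(2)
        out += [b, b]
    return out

rng = random.Random(0)
# Predict bit 1 as a copy of bit 0.
D = distinguisher_from_predictor(lambda prefix: prefix[-1], i=1)

trials = 2000
p_gen = sum(D(toy_gen(rng, 8)) for _ in range(trials)) / trials
p_uni = sum(D([rng.randrange(2) for _ in range(8)]) for _ in range(trials)) / trials
advantage = p_gen - p_uni  # close to 1/2 for this (very weak) generator
```

On pseudorandom output the prediction is always right, while on uniform bits it is right only half the time, so the gap in acceptance probability is exactly the predictor's advantage.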
Date: 9/27
Speaker: Andrej Bogdanov
Title: Cryptography basics for hardness of learning (cont'd)
Abstract: Continuation of the 9/20 overview; see the abstract above.


Date: 10/4
Speaker: Sidhanth Mohanty
Title: Complexity of refuting random CSPs

Date: 10/18
Speaker: Min Jae Song
Title: Continuous Learning with Errors
Abstract: In this talk, I will introduce the Continuous Learning with Errors (CLWE) problem, which can be seen as a continuous variant of the Learning with Errors (LWE) problem from lattice-based cryptography. Then, I will give a high-level overview of a polynomial-time quantum reduction from worst-case lattice problems to CLWE, which establishes average-case hardness guarantees similar to those of LWE. This result can be leveraged to further establish the (conditional) computational hardness of canonical unsupervised and supervised learning problems, such as density estimation for Gaussian mixtures and learning one-hidden-layer neural networks on the standard Gaussian distribution. This is based on joint work with Joan Bruna, Oded Regev, Yi Tang, and Ilias Zadik.
Links: CLWE paper
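For concreteness, a CLWE sample pairs a Gaussian vector a with a noisy inner product reduced mod 1, b = γ⟨a, w⟩ + e (mod 1), for a hidden unit direction w. Below is a minimal toy sampler under that description; the function name and parameter choices are illustrative, not taken from the paper.

```python
import math
import random

def clwe_samples(m, n, gamma, beta, seed=0):
    """Toy sampler for m CLWE pairs (a, b): a is a standard Gaussian in R^n,
    b = gamma * <a, w> + e (mod 1), with a hidden unit direction w and
    Gaussian noise e of width beta."""
    rng = random.Random(seed)
    # Hidden direction w: a uniformly random unit vector.
    w = [rng.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in w))
    w = [x / norm for x in w]
    samples = []
    for _ in range(m):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        inner = sum(ai * wi for ai, wi in zip(a, w))
        b = (gamma * inner + rng.gauss(0.0, beta)) % 1.0  # reduce mod 1
        samples.append((a, b))
    return samples, w

pairs, w = clwe_samples(m=5, n=8, gamma=2.0, beta=0.01)
```

The mod-1 reduction is what makes the labels look nearly uniform even though they are a noisy linear function of a, which is the source of the problem's hardness.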
Date: 10/27
Speaker: Ilias Zadik
Title: Cryptographic Hardness and LLL for the Cosine Neuron
Abstract: In this talk, we are going to discuss a new polynomial-time algorithmic framework for inference based on the celebrated Lenstra-Lenstra-Lovász lattice basis reduction algorithm. Perhaps surprisingly, this framework is able to bypass multiple notions of "computational hardness for inference" in a number of "exponentially low-noise" inference settings. Examples include the algorithm provably succeeding in previously believed "hard" regimes of 1) sparse regression, where the Overlap Gap Property holds and low-degree methods fail, 2) phase retrieval, where AMP fails, and 3) cosine neuron learning, where SQ methods fail. Time permitting, we are going to present the framework and study the details of some of the mentioned applications. This is based on joint works with David Gamarnik, Eren Kizildag, Min Jae Song, and Joan Bruna.
Links: Cosine neuron
Date: 11/15
Speaker: Prayaag Venkat
Title: Public-key Cryptography from Different Assumptions
Abstract: I will explain how to build a public-key encryption scheme whose security is based on the assumed hardness of solving a random system of sparse linear equations mod 2. This assumption stands in contrast to those made in other constructions of public-key cryptography, which are typically based on algebraic or lattice problems. In order to prove security and correctness of this scheme, I will discuss a reduction from a search problem to a distinguishing problem, as well as a connection to nondeterministic refutation of random CSPs. To conclude, I will survey the evidence supporting the hardness assumption. Based on a paper of Applebaum, Barak, and Wigderson.
Links: ABW
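The hardness assumption above concerns random sparse linear systems mod 2 with a small fraction of noisy equations. A toy instance generator makes the object concrete; the function name, sparsity, and noise rate below are illustrative choices, not parameters from the ABW paper.

```python
import random

def sparse_system_mod2(m, n, k=3, noise=0.1, seed=0):
    """Toy instance generator: m random k-sparse linear equations mod 2
    over n variables, consistent with a planted secret except that each
    right-hand side is flipped independently with probability `noise`."""
    rng = random.Random(seed)
    secret = [rng.randrange(2) for _ in range(n)]
    equations = []
    for _ in range(m):
        support = sorted(rng.sample(range(n), k))    # k-sparse row
        rhs = sum(secret[i] for i in support) % 2    # noiseless value
        if rng.random() < noise:
            rhs ^= 1                                 # noisy flip
        equations.append((support, rhs))
    return equations, secret

eqs, secret = sparse_system_mod2(m=20, n=10)
```

Without noise, Gaussian elimination recovers the secret in polynomial time; the noisy flips are what make solving (and even distinguishing such a system from a fully random one) plausibly hard.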
Date: 11/22
Speaker: Michael Kim
Title: Planting Undetectable Backdoors in ML Models
Abstract: Given the costs of training ML models, users often elect to delegate learning to service providers whom they do not fully trust. In this talk, we explore possible abuses of power by such untrusted ML trainers. We show how a malicious trainer can plant *undetectable backdoors* in classifiers. The trainer produces a backdoored classifier together with a backdoor key. Given an input and the backdoor key, there is an efficient algorithm for computing a backdoored input: a small perturbation of the original input for which the model's output is arbitrarily inaccurate. Importantly, without the backdoor key, the planted backdoor *cannot be detected*, a property we formalize and prove under standard cryptographic assumptions.
Our construction of undetectable backdoors sheds light on the related issue of robustness to adversarial examples. Paradoxically, undetectable backdoors can be planted into classifiers that are certified to be robust against adversarial examples, not only under existing but also under future certifiably robust classifiers. In the extreme, our results imply the existence of classifiers in which every input has an adversarial example, yet no efficient algorithm can find any such example. In all, our findings represent a significant theoretical roadblock to certifying adversarial robustness.
Joint work with Shafi Goldwasser, Vinod Vaikuntanathan, and Or Zamir.

Date: 11/29
Speaker: Shafi Goldwasser
Title: TBA
Abstract: TBA

Possible papers:
- Michael Kearns, Les Valiant. Cryptographic Limitations on Learning Boolean Formulae and Finite Automata.
- Moni Naor. Evaluation May Be Easier than Generation.
- Avrim Blum, Merrick Furst, Michael Kearns, Richard Lipton. Cryptographic Primitives Based on Hard Learning Problems.
- Michael Kharitonov. Cryptographic Hardness of Distribution-Specific Learning.
- Michael Kharitonov. Cryptographic Lower Bounds for Learnability of Boolean Functions on the Uniform Distribution.
- Dana Angluin, Michael Kharitonov. When Won't Membership Queries Help?
- Adam Klivans, Alexander Sherstov. Cryptographic Hardness Results for Learning Intersections of Halfspaces.
- Benny Applebaum, Boaz Barak, David Xiao. On Basing Lower-Bounds for Learning on Worst-Case Assumptions.
- Benny Applebaum, Boaz Barak, Avi Wigderson. Public-Key Cryptography from Different Assumptions.
- Benny Applebaum, Pavel Raykov. Fast Pseudorandom Functions Based on Expander Graphs.
- Amit Daniely, Shai Shalev-Shwartz. Complexity Theoretic Limitations on Learning DNFs.
- Amit Daniely. Complexity Theoretic Limitations on Learning Halfspaces.
- Aloni Cohen, Shafi Goldwasser, Vinod Vaikuntanathan. Aggregate Pseudorandom Functions and Connections to Learning.
- Salil Vadhan. On Learning vs. Refutation.
- Pravesh Kothari, Roy Livni. Improper Learning by Refuting.
- Joan Bruna, Oded Regev, Min Jae Song, Yi Tang. Continuous LWE.
- Amit Daniely, Gal Vardi. Hardness of Learning Neural Networks with Natural Weights.
- Mikito Nanashima. Extending Learnability to Auxiliary-Input Cryptographic Primitives and Meta-PAC Learning.
- Amit Daniely, Gal Vardi. From Pseudorandom Generators to Hardness of Learning.
