Recent & Upcoming Talks
2024
Understanding Training Dynamics in Deep Learning using Simplified Models
March 2024
UPenn
Beyond Worst-case Sequential Prediction: Adversarial Robustness via Abstention
March 2024
JHU
Beyond Worst-case Sequential Prediction: Adversarial Robustness via Abstention
March 2024
IPAM, UCLA
How do Large Language Models Think?
February 2024
UPenn
2023
Thinking Fast with Transformers: Algorithmic Reasoning via Shortcuts
December 2023
UT Austin
Beyond Worst-case Sequential Prediction: Adversarial Robustness via Abstention
November 2023
UPenn
Beyond Worst-case Sequential Prediction: Adversarial Robustness via Abstention
November 2023
Princeton
Beyond Worst-case Sequential Prediction: Adversarial Robustness via Abstention
October 2023
UC Berkeley
Beyond Worst-case Sequential Prediction: Adversarial Robustness via Abstention
August 2023
MPI/UCLA
Beyond Worst-case Sequential Prediction: Adversarial Robustness via Abstention
June 2023
MIT
Thinking Fast with Transformers: Algorithmic Reasoning via Shortcuts
June 2023
ICTP, Trieste, Italy
Thinking Fast with Transformers: Algorithmic Reasoning via Shortcuts
April 2023
University of Pennsylvania
Thinking Fast with Transformers: Algorithmic Reasoning via Shortcuts
April 2023
NYU
2022
What Functions Do Transformers Prefer to Represent?
October 2022
Simons Institute for the Theory of Computing, UC Berkeley
Sparse Feature Emergence in Deep Learning
September 2022
Schloss Elmau, Germany
What do self-attention blocks prefer to represent?
August 2022
MSR Redmond
The Hidden Progress Behind Loss Functions
July 2022
EPFL
Demystifying Attention-based Architectures in Deep Learning
June 2022
Naxos, Greece
2021
What functions do self-attention blocks prefer to represent?
December 2021
USC
What functions do self-attention blocks prefer to represent?
December 2021
IST-Austria
What functions do self-attention blocks prefer to represent?
October 2021
Google
Computational Barriers For Learning Some Generalized Linear Models
September 2021
Simons Institute for the Theory of Computing, UC Berkeley
Slides
Video
Computational Complexity of Learning Neural Networks over Gaussian Marginals
July 2021
Stanford University
Computational Complexity of Learning ReLUs
April 2021
Institute for Mathematical and Statistical Innovation (IMSI)
Computational Complexity of Learning Neural Networks over Gaussian Marginals
January 2021
UW-Madison
2020
Computational Complexity of Learning Neural Networks over Gaussian Marginals
December 2020
MIT
Computational Complexity of Learning Neural Networks over Gaussian Marginals
November 2020
TTIC
Computational Complexity of Learning Neural Networks over Gaussian Marginals
November 2020
Georgia Tech
Computational Complexity of Learning Neural Networks over Gaussian Marginals
October 2020
Harvard University
Computational Complexity of Learning Neural Networks over Gaussian Marginals
October 2020
Duke University
Computational Complexity of Learning Neural Networks over Gaussian Marginals
July 2020
NYU
2019
Exploring Surrogate Losses for Learning Neural Networks
December 2019
TTIC
Efficiently Learning Simple Neural Networks
September 2019
University of Maryland Institute for Advanced Computer Studies
Learning Ising Models with Independent Failures
July 2019
Simons Institute for the Theory of Computing, UC Berkeley
2018
Efficiently Learning Simple Neural Networks
September 2018
Tsinghua University