ANDREW ILYAS

I am a Stein Fellow at Stanford Statistics and an (incoming) faculty member at CMU. I received my PhD from MIT, where I was fortunate to be advised by Costis Daskalakis and Aleksander Madry and supported by an Open Philanthropy AI Fellowship. I also went to MIT for undergrad, majoring in CS and Math. Outside of research, I enjoy playing soccer and table tennis.

Research interests: My goal is to uncover general principles that describe and predict the behavior of ML systems—ideally enabling predictably reliable future systems. This goal entails combining statistical tools with large-scale experiments to precisely understand the ML "pipeline," from training data (and the way we collect it), to learning algorithms, to deployment. I also like thinking broadly about (human) trust in AI systems.




Selected Papers

* denotes equal contribution

Data Debiasing with Datamodels (D3M): Improving Subgroup Robustness via Data Selection
Saachi Jain*, Kimia Hamidieh*, Kristian Georgiev*, Andrew Ilyas, Marzyeh Ghassemi, Aleksander Madry (2024)
NeurIPS 2024. (Blog post)

Measuring Strategization in Recommendation: Users Adapt Behavior to Shape Future Content
Sarah H. Cen, Andrew Ilyas, Jennifer Allen, Hannah Li, Aleksander Madry (2024)
EC 2024. (Slides)

Decomposing and Editing Predictions by Modeling Model Computation
Harshay Shah, Andrew Ilyas, Aleksander Madry (2024)
ICML 2024. (Blog post 1, Blog post 2, GitHub)

User Strategization and Trustworthy Algorithms
Sarah Cen, Andrew Ilyas, Aleksander Madry (2023)
EC 2024.

TRAK: Attributing Model Behavior at Scale
Sung Min Park*, Kristian Georgiev*, Andrew Ilyas*, Guillaume Leclerc, Aleksander Madry (2023)
Oral presentation, ICML 2023. (Project page, Blog post, GitHub)

ModelDiff: A Framework for Comparing Learning Algorithms
Harshay Shah*, Sung Min Park*, Andrew Ilyas*, Aleksander Madry (2023)
ICML 2023. (Blog post, GitHub)

Raising the Cost of Malicious AI-Powered Image Editing
Hadi Salman*, Alaa Khaddaj*, Guillaume Leclerc*, Andrew Ilyas, Aleksander Madry (2023)
Oral presentation, ICML 2023. (Blog post, GitHub)

Rethinking Backdoor Attacks
Alaa Khaddaj*, Guillaume Leclerc*, Aleksandar Makelov*, Kristian Georgiev*, Hadi Salman, Andrew Ilyas, Aleksander Madry (2023)
ICML 2023. (Blog post, GitHub)

When does Bias Transfer in Transfer Learning?
Hadi Salman*, Saachi Jain*, Andrew Ilyas, Logan Engstrom, Eric Wong, Aleksander Madry (2022)
(Blog post, GitHub)

What Makes A Good Fisherman? Linear Regression under Self-Selection Bias
Yeshwanth Cherapanamjeri, Constantinos Daskalakis, Andrew Ilyas, Manolis Zampetakis (2022)
STOC 2023. (Video)

Estimation of Standard Auction Models
Yeshwanth Cherapanamjeri, Constantinos Daskalakis, Andrew Ilyas, Manolis Zampetakis (2022)
EC 2022. (Slides)

Datamodels: Predicting Predictions from Training Data
Andrew Ilyas*, Sung Min Park*, Logan Engstrom*, Guillaume Leclerc, Aleksander Madry (2022)
ICML 2022. (Blog post 1, Blog post 2, Data)

Constructing and adjusting estimates for household transmission of SARS-CoV-2 from prior studies, widespread-testing and contact-tracing data
Mihaela Curmei*, Andrew Ilyas*, Jacob Steinhardt, Owain Evans (2021)
International Journal of Epidemiology (medRxiv (previous draft), Code and data)

3DB: A Framework for Debugging Computer Vision Models
Guillaume Leclerc*, Hadi Salman*, Andrew Ilyas*, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry (2021)
NeurIPS 2022. (Blog post and walkthrough, Code and demos, Quickstart and API Documentation)

Unadversarial Examples: Designing Objects for Robust Vision
Hadi Salman*, Andrew Ilyas*, Logan Engstrom, Sai Vemprala, Aleksander Madry, Ashish Kapoor (2020)
NeurIPS 2021. (Blog post, GitHub)

Do Adversarially Robust ImageNet Models Transfer Better?
Hadi Salman*, Andrew Ilyas*, Logan Engstrom, Ashish Kapoor, Aleksander Madry (2020)
Oral presentation, NeurIPS 2020. (Blog post, Code and models)

Noise or Signal: The Role of Image Backgrounds in Object Recognition
Kai Xiao, Logan Engstrom, Andrew Ilyas, Aleksander Madry (2020)
ICLR 2021. (Blog post)

From ImageNet to Image Classification: Contextualizing Progress on Benchmarks
Dimitris Tsipras*, Shibani Santurkar*, Logan Engstrom, Andrew Ilyas, Aleksander Madry (2020)
ICML 2020. (Blog post)

Identifying Statistical Bias in Dataset Replication
Logan Engstrom*, Andrew Ilyas*, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry (2020)
ICML 2020. (Blog post)

Implementation Matters in Deep Policy Gradient Algorithms
Logan Engstrom*, Andrew Ilyas*, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry (2020)
Oral presentation, ICLR 2020. (Slides and video)

A Closer Look at Deep Policy Gradient Algorithms
Andrew Ilyas*, Logan Engstrom*, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry (2020)
Oral presentation, ICLR 2020. (Slides and video)

Image Synthesis with a Single (Robust) Classifier
Shibani Santurkar*, Dimitris Tsipras*, Brandon Tran*, Andrew Ilyas*, Logan Engstrom*, Aleksander Madry (2019)
NeurIPS 2019. (Blog post, GitHub)

Adversarial Robustness as a Prior for Learned Representations
Logan Engstrom*, Andrew Ilyas*, Shibani Santurkar*, Dimitris Tsipras*, Brandon Tran*, Aleksander Madry (2019)
(Blog post, GitHub)

Adversarial Examples are not Bugs, They are Features
Andrew Ilyas*, Shibani Santurkar*, Dimitris Tsipras*, Logan Engstrom*, Brandon Tran, Aleksander Madry (2019)
Spotlight presentation, NeurIPS 2019. (Blog post, Datasets)

Prior Convictions: Black-box Adversarial Attacks with Bandits and Priors
Andrew Ilyas*, Logan Engstrom*, Aleksander Madry
ICLR 2019. (GitHub)

How Does Batch Normalization Help Optimization?
Shibani Santurkar*, Dimitris Tsipras*, Andrew Ilyas*, Aleksander Madry
Oral presentation, NeurIPS 2018. (Blog post, Video (3 minutes))

Black-box Adversarial Attacks with Limited Queries and Information
Andrew Ilyas*, Logan Engstrom*, Anish Athalye*, Jessy Lin*
ICML 2018. (Blog post 1, Blog post 2, GitHub)

Synthesizing Robust Adversarial Examples
Anish Athalye*, Logan Engstrom*, Andrew Ilyas*, Kevin Kwok
ICML 2018. (Blog post)

Training GANs with Optimism
Constantinos Daskalakis*, Andrew Ilyas*, Vasilis Syrgkanis*, Haoyang Zeng*
ICLR 2018. (GitHub)

Extracting Syntactic Patterns From Databases
Andrew Ilyas, Joana M.F. da Trindade, Raul C. Fernandez, Samuel Madden
ICDE 2018. (GitHub)

MicroFilters: Harnessing Twitter for Disaster Management
Andrew Ilyas
Chairman's award winner, IEEE GHTC 2015.

Short Papers/Miscellanea

Data Attribution at Scale
Andrew Ilyas, Logan Engstrom, Kristian Georgiev, Aleksander Madry, Sam Park
ICML 2024 Tutorial. (Notes, Slides [coming soon])

"On AI Deployment" Blog Post Series
Sarah Cen, Aspen Hopkins, Andrew Ilyas, Aleksander Madry, Isabella Struckman, Luis Videgaray
Part 1, Part 2, Part 3, Part 4

Social Media Blog Post Series
Sarah Cen, Andrew Ilyas, Aleksander Madry
Part 1, Part 2, Part 3, Part 4

FFCV: Fast Forward Computer Vision
Python Library. Homepage

The robustness Python library
GitHub repository/PyPI package. Documentation on ReadTheDocs

A Game-Theoretic Perspective on Trust in Recommender Systems
Sarah Cen, Andrew Ilyas, Aleksander Madry (2022)
Oral presentation, ICML Workshop on Responsible Decision-Making 2022. (Talk recording, Poster)

Evaluating and Understanding the Robustness of Adversarial Logit Pairing
Logan Engstrom*, Andrew Ilyas*, Anish Athalye* (2018)
NeurIPS Security in Machine Learning Workshop 2018