I’m a concurrent B.S./M.Eng. student at MIT, double majoring in CS and Math; my interests currently lie somewhere around theoretical Machine Learning.

My master’s thesis is supervised by Constantinos Daskalakis at MIT CSAIL. I also work in the CBMM at MIT, investigating invariances in neural networks. In addition, I’m a founding member of labsix.

Previously, I was a researcher in the Database Lab at MIT and an intern at Two Sigma Labs, where I worked on Convex Optimization and Online Learning.

Publications and Preprints (*=equal contribution)

The Robust Manifold Defense: Adversarial Training using Generative Models
Andrew Ilyas, Ajil Jalal, Eirini Asteri, Constantinos Daskalakis, Alex G. Dimakis (2017)

Query-efficient Black-box Adversarial Examples
Andrew Ilyas*, Logan Engstrom*, Anish Athalye*, Jessy Lin* (2017)
arXiv, Blog Post

Synthesizing Robust Adversarial Examples
Anish Athalye*, Logan Engstrom*, Andrew Ilyas*, Kevin Kwok (2017)
arXiv, Blog Post

Training GANs with Optimism (ICLR 2018, to appear)
Constantinos Daskalakis*, Andrew Ilyas*, Vasilis Syrgkanis*, Haoyang Zeng* (2017)
arXiv, Github

Extracting Syntactic Patterns From Databases (ICDE 2018, to appear)
Andrew Ilyas, Joana M.F. da Trindade, Raul C. Fernandez, Samuel Madden (2017)
arXiv, Github

MicroFilters: Harnessing Twitter for Disaster Management
Andrew Ilyas (2014)
IEEE Xplore

Personal Projects

This section lists my previous hardware/software projects, including hackathon products and weekend projects.

Research Archives (Pre-MIT)

This section contains a near-complete list of my previous research topics, with links to the papers, reports, or products created as a result of that research. The projects are sorted chronologically, spanning every project from 2009 (Grade 5!) to the present (Grade 12).