I am an ML PhD student at Georgia Tech's College of Computing, working with Polo Chau and supported by the National Science Foundation Graduate Research Fellowship.
My work centers on methods for guiding generative models, particularly models for image generation. I am also broadly interested in applying data visualization and HCI to understanding and guiding machine learning systems.
I previously worked with Chris Rozell at Georgia Tech on methods for guiding generative models with human feedback. I have also interned at Adobe on the Firefly team under Oliver Brdiczka. Before that, I interned at IBM Research with Achille Fokoue, where my work focused on applying graph neural networks and large-scale language models to document summarization. Earlier, I was a software engineering intern at Microsoft, where I worked on scaling a data analytics service, and an intern in the Machine Learning and Instrument Autonomy Group at NASA's Jet Propulsion Laboratory with Lukas Mandrake, where I developed web-based visualization tools for interacting with machine learning models. I started my research career working with David Koes at the University of Pittsburgh, applying machine learning to computational drug discovery, specifically to understanding protein-ligand interactions. I also worked on a web-based molecular visualization library called 3Dmol.js.
I am interested in machine learning and software engineering. I am specifically interested in generative modeling, multimodal modeling, and visualization.
We showed that it is possible to defend against adversarial attacks that coerce LLMs into producing harmful content by simply filtering out this content using another instance of an LLM.
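In spirit, this kind of defense is a thin wrapper around the model, with a second LLM instance acting as a content judge. The sketch below is illustrative only: `generate` and `judge_harmful` are hypothetical stand-ins for real LLM calls, not the paper's actual implementation.

```python
# Illustrative sketch of an LLM self-filtering defense. The two functions
# below are hypothetical stubs standing in for calls to a real LLM API.

REFUSAL = "I can't help with that."

def generate(prompt: str) -> str:
    # Stand-in for the target LLM; under a jailbreak attack, a real model
    # might return harmful text here.
    return "benign response to: " + prompt

def judge_harmful(text: str) -> bool:
    # Stand-in for a second LLM instance prompted as a safety judge, e.g.
    # "Does the following text contain harmful content? Answer yes or no."
    return "harmful" in text.lower()

def filtered_generate(prompt: str) -> str:
    """Return the model's response, replaced by a refusal if the judge flags it."""
    response = generate(prompt)
    return REFUSAL if judge_harmful(response) else response
```

The key design point is that the filter inspects the model's *output* rather than the prompt, so it applies regardless of which attack produced the harmful completion.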
We developed ManimML, an open-source Python library for easily generating animations of ML algorithms directly from code. ManimML has a familiar syntax for specifying neural networks that mimics popular deep learning frameworks like PyTorch. A user can take a preexisting neural network architecture and easily write a specification for an animation in ManimML, which then automatically composes animations of the system's individual components into a final animation of the entire neural network.
We developed a self-supervised learning approach derived from Lie algebra to apply automatic data augmentations to image data that stay on low-dimensional manifolds in high-dimensional latent representations.
We developed a method for controlling the features of images generated with a GAN by asking users relative queries of the form "do you prefer image a or image b?"
From these queries we infer which images a user prefers and use that understanding of their preferences to generate recommendation images.
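As an illustrative sketch (not our exact algorithm), this kind of preference feedback can be modeled with a logistic choice model: the user has an unknown ideal point z* in the latent space and prefers image a over image b with probability sigmoid(||z* − b||² − ||z* − a||²). The ideal point can then be estimated from query answers by regularized maximum likelihood; the 2-D example below uses plain gradient ascent.

```python
import math
import random

def sq_dist(u, v):
    return sum((ui - vi) ** 2 for ui, vi in zip(u, v))

def sigmoid(x):
    # Numerically stable logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

def infer_ideal_point(queries, dim=2, lr=0.1, steps=2000, reg=1e-3):
    """Estimate a user's ideal point from relative queries.

    queries: list of (a, b) latent-space pairs where the user preferred a.
    Maximizes the L2-regularized log-likelihood of the choice model
    P(prefer a) = sigmoid(||z - b||^2 - ||z - a||^2) by gradient ascent.
    """
    z = [0.0] * dim
    for _ in range(steps):
        grad = [-reg * zi for zi in z]
        for a, b in queries:
            p = sigmoid(sq_dist(z, b) - sq_dist(z, a))
            # The margin ||z - b||^2 - ||z - a||^2 is linear in z,
            # so its gradient is simply 2 * (a - b).
            for i in range(dim):
                grad[i] += (1.0 - p) * 2.0 * (a[i] - b[i])
        for i in range(dim):
            z[i] += lr * grad[i] / len(queries)
    return z

# Demo: simulate a user whose ideal point is z_true and who always
# prefers the candidate closer to it.
random.seed(0)
z_true = [1.0, -0.5]
queries = []
for _ in range(40):
    a = [random.uniform(-2, 2) for _ in range(2)]
    b = [random.uniform(-2, 2) for _ in range(2)]
    queries.append((a, b) if sq_dist(a, z_true) <= sq_dist(b, z_true) else (b, a))

z_hat = infer_ideal_point(queries)
```

With an inferred z_hat in hand, a recommender can decode latent points near it; this point estimate is only a stand-in for the inference procedure described in the paper.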
Oracle Guided Image Synthesis with Relative Queries
Alec Helbling, Christopher John Rozell, Matthew O'Shaughnessy, Kion Fallah
International Conference on Learning Representations Workshop on Deep Generative Models for Highly Structured Data, 2022
OpenReview
We developed a method for guiding the generative process of VAEs by asking users relative queries of the form "do you prefer image a or image b?"
From these queries we infer which images a user prefers and use that understanding of their preferences to generate recommendation images with our VAE.
We applied convolutional neural networks to the problem of predicting whether a protein-ligand pair is likely to bind, and developed methods for visualizing and understanding the learned structure of these models.
I made significant contributions to 3Dmol.js while working with David Koes in the University of Pittsburgh's Department of Computational and Systems Biology. 3Dmol.js is a JavaScript library that allows biologists to easily visualize 3D molecular structures such as proteins.