I'm a Ph.D. student in EECS at UC Berkeley, focusing on applied machine learning. I'm fortunate to be advised by Gopala K. Anumanchipalli and Edward F. Chang, with support from the NSF GRFP. I am part of the Berkeley AI Research (BAIR) Lab and Chang Lab at UCSF.
I co-led the development of multimodal AI tools that accurately translate brain activity into text, audible personalized speech, and a high-fidelity "digital talking avatar" (Nature 2023). Currently, I am working on improved methods for naturalistic speech synthesis and spoken language modeling. I am also a research intern at Meta.
I earned my B.S. degree at Columbia University, where I studied under Paul Sajda.
Outside of my professional life, I have completed several endurance races, including the Moab 240-mile Endurance Run and the Bigfoot 200-mile Endurance Run. I've also spent over 100 days in silent meditation at Vipassana meditation centers worldwide.
Example work: A high-performance neuroprosthesis for speech decoding and avatar control
Example work: A streaming brain-to-voice neuroprosthesis to restore naturalistic communication