I’m a Ph.D. candidate in EECS at UC Berkeley, where I’m fortunate to be advised by Gopala K. Anumanchipalli and Edward F. Chang, with support from the NSF GRFP. I am part of the Berkeley AI Research (BAIR) Lab and Chang Lab at UCSF.

Accurately decoding text and audio from brain activity could enable speech neuroprostheses that restore communication and embodiment to people who have lost the ability to speak due to severe paralysis (Nat. Rev. Neurosci. 2024). During my Ph.D., I co-led the development of multimodal AI tools that translate brain activity into text, audible personalized speech, and a high-fidelity "digital talking avatar" (Nature 2023). Currently, I am working on improved methods for expressive brain-to-voice synthesis.

I earned my B.S. degree at Columbia University, where I studied under Paul Sajda.

Outside of my professional life, I have completed several endurance races, including the Moab 240-mile Endurance Run and the Bigfoot 200-mile Endurance Run. I have also spent over 100 days in silent meditation at Vipassana meditation centers around the world.

Example work: A high-performance neuroprosthesis for speech decoding and avatar control

Example work: A streaming brain-to-voice neuroprosthesis to restore naturalistic communication