Kaylo Littlejohn

Hello! I’m a Ph.D. candidate in EECS at UC Berkeley, where I’m fortunate to be advised by Gopala K. Anumanchipalli and Edward F. Chang, with support from the NSF GRFP. I am part of the Berkeley AI Research (BAIR) Lab and the Chang Lab at UCSF. My research focuses on using communication brain-computer interfaces (BCIs) and speech AI tools to restore speech and movement to people who have lost these abilities due to severe paralysis. I am interested in AI applications for spoken language modeling, healthcare, and biosensing technology.

Accurately decoding speech from the brain has long been an elusive goal; achieving it would enable speech neuroprostheses to restore communication and embodiment to people who have lost this ability due to severe paralysis (Nat. Rev. Neurosci. 2024). During my Ph.D., I co-led the development of multimodal AI tools that accurately translate brain activity into text, audible personalized speech, and a high-fidelity "digital talking avatar" (Nature 2023). These tools enabled a person with paralysis to converse for the first time in over 18 years (video). I am currently working on improved methods for expressive brain-to-voice synthesis.

My work has been featured in the New York Times (front-page story, 08/24/2023), the Wall Street Journal, NPR, Nature, and MIT Technology Review, and has been highlighted by the White House, among others.

I earned my B.S. at Columbia University, where I studied under Paul Sajda. Outside of my professional life, I have completed several endurance races, including the Moab 240-mile Endurance Run and the Bigfoot 200-mile Endurance Run. I've also spent over 100 days in silent meditation at Vipassana meditation centers worldwide.