Projects
I approach projects as system-building efforts grounded in real-world constraints.
Each project reflects an attempt to take a technical idea, often at the intersection of sensing, AI, and physics, and turn it into something that can operate reliably, scale, and persist outside the lab.
Althea (Vertical Agentic AI and LLMs)
Althea is a voice-first AI platform that I co-founded and lead as CEO, designed to deploy autonomous, closed-loop AI agents that interact with humans through language, action, and feedback.
The platform is multimodal and integrates:
- Specialized multimodal models, LLMs, and reasoning engines
- Real-time, ultra-low-latency and realistic voice interaction
- Longitudinal memory and feedback loops
- Orchestration layer for multi-step task execution and tool use
- Human-in-the-loop control, feedback, and continuous learning
We drew on core distributed-systems design principles to orchestrate and parallelize agents across complex, multi-step asynchronous workflows in high-stakes settings where the tolerance for error is low.
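As a minimal sketch of what bounded, parallel orchestration can look like, the toy example below runs many multi-step workflows concurrently while capping the number of in-flight agent calls; `run_agent_step` and the step names are hypothetical placeholders, not Althea's actual API.

```python
import asyncio

MAX_CONCURRENT = 8  # cap on in-flight agent calls in high-stakes settings

async def run_agent_step(task_id: str, step: str) -> str:
    # Placeholder for an LLM or tool call; a real system adds retries,
    # timeouts, and validation before a step's result is accepted.
    await asyncio.sleep(0.1)
    return f"{task_id}:{step}:ok"

async def run_workflow(task_id: str, steps: list[str], sem: asyncio.Semaphore) -> list[str]:
    results = []
    for step in steps:    # steps within one workflow stay sequential
        async with sem:   # cross-workflow concurrency stays bounded
            results.append(await run_agent_step(task_id, step))
    return results

async def main() -> None:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    workflows = {f"task-{i}": ["triage", "act", "verify"] for i in range(20)}
    results = await asyncio.gather(*(run_workflow(t, s, sem) for t, s in workflows.items()))
    print(f"completed {len(results)} workflows; first: {results[0]}")

asyncio.run(main())
```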
Healthcare serves as a high-stakes proving ground, but the underlying platform is about much more than a single vertical—it is a testbed for how AI systems can operate continuously, adapt to humans, and perform real work over time.
We have deployed Althea across care management, pharma, and payer environments, spanning both administrative and clinical workflows, with the goal of improving patient access and engagement, supporting care coordination and adherence, and extending clinical and operational capacity in the face of growing workforce constraints.
What excites me most about Althea from a technical standpoint is treating voice as the most natural human–AI interface, particularly given its central role in patient access and services. Building high-fidelity voice agents presents significant challenges across both user experience and system execution. We recently published an article on some of these elements, which you can read here.
Expanding the platform toward advanced applications involving computational phenotyping and personalized, closed-loop human–AI interaction is core to our roadmap.
Translational Neurotechnology Initiatives at Yale
At Yale, my work has focused on translation—moving frontier neurotechnology from research environments into deployable, scalable systems validated through preclinical and early clinical studies.
A central theme is building integrated neural interface systems that combine sensing, modeling, and modulation of brain activity. This includes work across focused ultrasound (FUS) neuromodulation, blood–brain barrier (BBB) opening, and emerging microelectronic sensing platforms (CMUT/ASIC), coupled with AI-driven neural decoding and generative modeling.
The technical challenge is not just advancing individual components, but unifying them into coherent systems—where neural signals can be captured, interpreted, and influenced in a closed loop. This requires bridging multiple layers (a toy sketch of the loop follows the list):
- Physical interfaces with the brain and multimodal sensing (ultrasound, EEG, fNIRS, and fMRI)
- Signal acquisition and representation of complex neural activity
- Modeling and decoding of latent cognitive and physiological states
- Feedback and modulation through targeted stimulation or intervention
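To make the layering concrete, here is a deliberately simplified closed-loop skeleton: sense, decode, then modulate based on the decoded state. The linear decoder and the error-driven stimulation rule are stand-ins for illustration, not the models we deploy.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((2, 64))  # toy linear decoder: 64 channels -> 2 latent states

def sense() -> np.ndarray:
    # Stand-in for multimodal neural recordings (ultrasound, EEG, etc.)
    return rng.standard_normal(64)

def decode(x: np.ndarray) -> np.ndarray:
    # Map a high-dimensional, noisy signal to a low-dimensional state estimate
    return W @ x

def modulate(state: np.ndarray) -> float:
    # Derive a stimulation magnitude from the error against a target state
    target = np.zeros_like(state)
    return float(np.linalg.norm(target - state))

for step in range(5):
    z = decode(sense())
    print(f"step {step}: state={z.round(2)}, stimulation={modulate(z):.2f}")
```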
A key focus has been on AI-assisted neural decoding, where models learn to map high-dimensional, noisy neural signals into meaningful representations that can support communication, control, or therapeutic intervention. In parallel, generative and adaptive models enable systems that evolve with the user over time.
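As one hedged illustration of a decoder that adapts with the user, the sketch below uses a recursive least-squares update with a forgetting factor, so recent data gradually reshapes the decoding weights; the channel count, noise model, and static "true" mapping are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch = 32                  # assumed channel count
w = np.zeros(n_ch)         # decoder weights for one latent state
P = 10.0 * np.eye(n_ch)    # uncertainty (inverse-covariance-like) estimate
lam = 0.99                 # forgetting factor: older samples fade out

def rls_update(w, P, x, target):
    # One recursive least-squares step: fold in a new (signal, label) pair.
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    w = w + k * (target - w @ x)       # correct the prediction error
    P = (P - np.outer(k, Px)) / lam    # shrink uncertainty along x
    return w, P

w_true = rng.standard_normal(n_ch)     # pretend "true" mapping; drift omitted
for _ in range(200):
    x = rng.standard_normal(n_ch)
    w, P = rls_update(w, P, x, w_true @ x + rng.normal(0, 0.1))
print(f"weight error after 200 updates: {np.linalg.norm(w - w_true):.3f}")
```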
What makes this work particularly compelling is the need to operate at the intersection of engineering, neuroscience, and clinical reality—where systems must be robust to biological variability, constrained by safety and regulatory requirements, and ultimately useful in real-world settings.
These efforts are aimed at advancing platforms for cognitive restoration, neurorehabilitation, and human–AI interaction—while pushing toward a broader goal: building systems that enable continuous, adaptive communication between the brain and machines.
Liminal / Hyperfine (Neuroimaging and Neurosensing)
At Liminal and Hyperfine, I worked on building a non-invasive, multimodal brain sensing and monitoring platform that aimed to extract meaningful physiological signals from one of the most complex and noisy systems we know—the human brain.
The core challenge was not just sensing, but making sense of weak, indirect, and highly confounded signals. Unlike traditional systems, we were not measuring a single modality or clean signal—we were combining ultrasound, electrophysiology, and other sensing techniques with signal processing and machine learning to infer latent physiological states such as cerebral blood flow (CBF) and intracranial pressure (ICP). This work led to the development of Acousto-Encephalography (AEG), a new sensing approach we introduced to non-invasively estimate ICP and brain perfusion from ultrasound and multimodal measurements.
We developed this for applications in neurological and neurocritical conditions such as traumatic brain injury (TBI), stroke, and epilepsy, where continuous, non-invasive monitoring of brain state can meaningfully impact diagnosis, intervention, and care.
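As a toy illustration of the inference problem (not the AEG algorithm itself), the sketch below fits a regularized linear model mapping a few multimodal features to a synthetic ICP target; the feature set, model choice, and data are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n = 500
# Toy multimodal features: e.g., ultrasound pulsatility, EEG band power, heart rate
X = rng.standard_normal((n, 3))
icp = 10 + 4 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 1, n)  # synthetic ICP (mmHg)

model = Ridge(alpha=1.0).fit(X[:400], icp[:400])
pred = model.predict(X[400:])
rmse = np.sqrt(np.mean((pred - icp[400:]) ** 2))
print(f"held-out RMSE: {rmse:.2f} mmHg")
```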
This required thinking about the system end-to-end:
- Designing sensing hardware and transducers capable of operating through complex biological media
- Building signal acquisition and processing pipelines robust to noise, variability, and patient-specific differences
- Developing models that could map noisy measurements to clinically meaningful representations
- Integrating embedded systems, cloud infrastructure, and real-time inference into a cohesive pipeline
A key aspect of the work was bridging physics-based modeling and data-driven approaches—combining first-principles understanding of wave propagation and tissue interaction with machine learning models that could adapt to real-world variability.
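A minimal sketch of that hybrid pattern, under assumed geometry and attenuation values: a first-principles forward model supplies the baseline prediction, and a small data-driven correction absorbs the residual variability the physics does not capture.

```python
import numpy as np

rng = np.random.default_rng(3)

def physics_model(depth_cm: np.ndarray, alpha_db_per_cm: float = 0.5) -> np.ndarray:
    # Idealized exponential attenuation of echo amplitude with depth
    # (1 neper = 8.686 dB converts the dB/cm coefficient).
    return np.exp(-alpha_db_per_cm * depth_cm / 8.686)

depth = np.linspace(1, 10, 200)
# "Measured" data: the physics plus a structured deviation and noise
measured = physics_model(depth) * (1 + 0.1 * np.sin(depth)) + rng.normal(0, 0.01, depth.size)

baseline = physics_model(depth)
coeffs = np.polyfit(depth, measured - baseline, deg=3)  # learned residual correction
corrected = baseline + np.polyval(coeffs, depth)
print(f"physics-only RMSE: {np.sqrt(np.mean((baseline - measured) ** 2)):.4f}")
print(f"hybrid RMSE:       {np.sqrt(np.mean((corrected - measured) ** 2)):.4f}")
```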
Another challenge was operating under real clinical constraints:
- limited control over the environment
- variability across patients
- strict requirements for reliability, safety, and interpretability
These systems had to work not just in controlled settings, but in environments where failure modes are subtle and consequences matter.
What made this work particularly interesting to me is that these platforms were not just devices—they were closed-loop systems integrating sensing, inference, and decision-making under uncertainty. They required co-design across hardware, algorithms, and system architecture, rather than optimization of any single component.
This work shaped how I think about building systems today: starting from the constraints of the real world, designing for uncertainty, and integrating sensing, computation, and feedback into systems that can operate continuously and reliably over time.
Sensing, Imaging, and Computational Modeling at Stanford
My work at Stanford was centered on understanding how to extract meaningful information from physical systems that are inherently complex, nonlinear, and often chaotic.
A large part of this began with my PhD work, where I developed an ultrasonic touchscreen based on guided Lamb waves and wave chaos. Instead of relying on conventional sensing approaches, the system leveraged the complex interference patterns of waves propagating through a plate, combined with learning-based methods, to localize touch and interaction. This line of work naturally extended into broader problems of localization and inference in reverberant and ill-conditioned environments, where signals are indirect, highly entangled, and sensitive to boundary conditions.
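A stripped-down sketch of the underlying idea, with made-up dimensions: because the interference pattern in a reverberant plate is highly position-specific, a measured response can be matched against stored "fingerprints" to localize a touch. The real system used guided Lamb waves and richer learning-based methods rather than simple correlation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_positions, n_features = 100, 256
fingerprints = rng.standard_normal((n_positions, n_features))  # stored calibration responses
positions = rng.uniform(0, 30, (n_positions, 2))               # touch coordinates (cm)

def localize(measured: np.ndarray) -> np.ndarray:
    # The interference pattern is position-specific, so the best-correlated
    # stored fingerprint identifies where the plate was touched.
    norms = np.linalg.norm(fingerprints, axis=1) * np.linalg.norm(measured)
    scores = fingerprints @ measured / norms
    return positions[np.argmax(scores)]

test = fingerprints[42] + rng.normal(0, 0.2, n_features)  # noisy repeat of position 42
print(f"true {positions[42].round(1)}, estimated {localize(test).round(1)}")
```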
In parallel, I worked on a range of sensing systems across both biometric and imaging applications, including ultrasound-based imaging, airborne sensing, and methods for biological tissue characterization. This included work such as acoustic microscopy for in-situ characterization of complex media, where the challenge is not just sensing, but interpreting how waves interact with heterogeneous structures.
My postdoctoral work focused on one of the fundamental challenges in the field: how to effectively deliver and control ultrasound through the skull. We developed new approaches to circumvent the limitations of transcranial propagation, enabling more efficient and targeted energy delivery. This became foundational for applications in neuromodulation and blood–brain barrier (BBB) opening, where precision, safety, and understanding of the underlying physics are critical.
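To illustrate the flavor of the problem (with assumed geometry and sound speeds, not our published method): focusing through a slab of faster material requires per-element timing corrections so that all wavefronts arrive at the target in phase.

```python
import numpy as np

c_water, c_skull = 1500.0, 2800.0        # assumed sound speeds (m/s)
elements = np.linspace(-0.02, 0.02, 16)  # 1D array element positions (m)
focus_depth = 0.05                       # target 5 cm below the array
slab = 0.005                             # 5 mm skull-like layer along each ray

# Arrival time per element: path mostly through water, 5 mm at skull speed.
path = np.hypot(elements, focus_depth)
t_arrive = (path - slab) / c_water + slab / c_skull

# Fire each element with the complementary delay so wavefronts meet in phase.
delays_us = (t_arrive.max() - t_arrive) * 1e6
print("per-element delays (µs):", delays_us.round(3))
```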
Another core thread of my work was understanding the physical mechanisms of ultrasound-mediated neuromodulation—how mechanical energy couples into neural activity. This required close integration of device engineering, experimental systems, and biophysical modeling, in collaboration with various groups across the medical school and school of engineering, to bridge the gap between observed effects and underlying mechanisms.
Across all of these efforts, a unifying theme was multiscale modeling and system design. I worked across multiple abstraction layers—from PDE-based and finite element models, to reduced-order representations, to equivalent circuit models—depending on the question and constraints. These models were often coupled with algorithms for imaging, localization, and inverse problems, where the goal is to reconstruct hidden states from indirect and noisy measurements.
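As a minimal example of the inverse-problem framing, the sketch below recovers a hidden state from fewer measurements than unknowns using a Tikhonov-regularized least-squares solution; the measurement operator and sparsity pattern are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 40, 80                            # fewer measurements than unknowns
A = rng.standard_normal((m, n))          # synthetic measurement operator
x_true = np.zeros(n)
x_true[::10] = 1.0                       # hidden state with a few active entries
y = A @ x_true + rng.normal(0, 0.05, m)  # indirect, noisy observations

lam = 0.5  # regularization weight
# Tikhonov solution: argmin_x ||Ax - y||^2 + lam * ||x||^2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.2f}")
```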
What made this work compelling to me is that it sits at the intersection of physics, computation, and system design—where sensing is not just about measurement, but about designing systems that can infer, adapt, and operate under real-world complexity.
Other Ad Hoc Projects
Coming soon...
Selected publications & patents
I’ve authored 50+ peer-reviewed publications and am a named inventor on 50+ patents, with work spanning early academic research through commercial deployment.
Rather than listing everything, I curate representative work here.
- Non-invasive neural sensing and brain vital monitoring
- Acousto-encephalography (AEG) and advanced ultrasonic interfaces
- Focused ultrasound neuromodulation and blood–brain barrier opening
- Machine-learning-driven sensing and signal interpretation
For a complete list, see my Google Scholar profile.