news

Jan 06, 2026 I will be participating in MATS 9.0 (ML Alignment & Theory Scholars Program) in Berkeley. I will be working with Adam Shai and Paul Riechers (Simplex AI Safety) on interpreting Chain-of-Thought reasoning in toy models using Computational Mechanics.
Jul 17, 2025 I’m attending ICML 2025, where I’ll be presenting some of my recent work on interpretability in large language models.
Jul 10, 2025 I’m participating in MARS 3.0 (Mentorship for Alignment Research Students), a research program run by the Cambridge AI Safety Hub. As part of this program, I will be working with a group of talented researchers on problems related to Chain-of-Thought reasoning in AI systems.
May 27, 2025 Workshop Announcement: Interpretability in LLMs using Geometric and Statistical Methods
I am organizing a workshop on May 27-28, where we will explore recent developments in interpretability for large language models (LLMs) using geometric and statistical methods. For further details, check out the workshop page.