Hi! I’m a postdoctoral researcher at the Dynamical Inference Lab, Helmholtz Munich, hosted by Prof. Steffen Schneider. I received my Ph.D. from KAIST AI, advised by Prof. Jaegul Choo.
My goal is to build AI systems that are testable, explainable, and controllable, so that people can understand, trust, and interact with them more naturally. I apply these methods across scientific domains, from medical imaging to language, bridging interpretability research with real-world use.
Previously, I worked with collaborators at NAVER AI Lab, Qualcomm AI Research, and CMU — see my CV for details.
Selected publications. For the full list, see my Google Scholar profile.

Grounding visual concepts in large vision-language models via an attention-based linking mechanism.

Characterizes dataset bias by disentangling visual concepts learned by sparse autoencoders on vision models.

Sparse autoencoders for interpretable single-cell embeddings in hematology.

Sparse autoencoders expose how visual concepts are selectively remapped when vision models adapt to new domains.

Calibrated, robust fine-tuning method for vision-language models that preserves uncertainty estimates under distribution shift.

Domain-shift-aware batch normalization layer that improves test-time adaptation under distribution shift.

A vocabulary adaptation strategy for pretrained language models targeting downstream domains.
The easiest way to reach me is by email.