Jinho Choi


VisualScratchpad: Grounding Visual Concepts in Large Vision Language Models

Grounding visual concepts in large vision-language models via an attention-based linking mechanism.

hyesu-lim

ConceptScope: Characterizing Dataset Bias via Disentangled Visual Concepts

Characterizes dataset bias by disentangling visual concepts learned by sparse autoencoders on vision models.

jinho-choi

Sparse Autoencoders Reveal Selective Remapping of Visual Concepts During Adaptation

Sparse autoencoders reveal how visual concepts are selectively remapped when vision models adapt to new domains.

hyesu-lim