Hyesu Lim


VisualScratchpad: Grounding Visual Concepts in Large Vision Language Models

Grounds visual concepts in large vision-language models via an attention-based linking mechanism.

Hyesu Lim

ConceptScope: Characterizing Dataset Bias via Disentangled Visual Concepts

Characterizes dataset bias by disentangling visual concepts learned by sparse autoencoders trained on vision models.

Jinho Choi

CytoSAE: Interpretable Cell Embeddings for Hematology

Sparse autoencoders for interpretable single-cell embeddings in hematology.

Muhammed Furkan Dasdelen

Sparse Autoencoders Reveal Selective Remapping of Visual Concepts During Adaptation

Sparse autoencoders expose how visual concepts are selectively remapped when vision models adapt to new domains.

Hyesu Lim

Towards Calibrated Robust Fine-Tuning of Vision-Language Models

A calibrated, robust fine-tuning method for vision-language models that preserves uncertainty estimates under distribution shift.

Changdae Oh

TTN: A Domain-Shift Aware Batch Normalization in Test-Time Adaptation

A domain-shift-aware batch normalization layer that improves test-time adaptation under distribution shift.

Hyesu Lim

AVocaDo: Strategy for Adapting Vocabulary to Downstream Domain

A vocabulary adaptation strategy for pretrained language models targeting downstream domains.

Jimin Hong