👋 Hello! I'm Yoonjun Cho
I am a PhD student at Yonsei University, advised by Prof. Albert No. My research focuses on efficient large language models, with particular interest in quantization and low-rank adaptation. I am broadly interested in model compression, and my work seeks to revisit and rethink assumptions that have long been treated as standard practice. Moving forward, I plan to explore quantization of discrete diffusion models and to investigate the role and construction of calibration datasets for compressed models.
📰 News
📣 Jun 2025: One paper is accepted to the ICML 2025 TTODLer-FM Workshop as an Oral!
Preserve then Quantize: Dominant-Subspace Guided Low-Rank Reconstruction
🎉 May 2025: One paper is accepted to ACL 2025 Findings!
Assigning Distinct Roles to Quantized and Low-Rank Matrices Toward Optimal Weight Decomposition