<Department of Mathematics/Statistics: Invited Lecture by an External Speaker>
1. Speaker: Professor Yongdai Kim, Seoul National University
2. Topic: Knowledge distillation of uncertainty and its application to Bayesian fine-tuning of LLMs
3. Date & Time: Thursday, June 5, 2025, 10:00 AM
4. Venue: Science Hall (자1-100)
5. Abstract: Deep ensembles deliver state-of-the-art, reliable uncertainty quantification, but their heavy computational and memory requirements hinder their practical deployment in real-world applications such as on-device AI.
Knowledge distillation compresses an ensemble into small student models, but existing techniques struggle to preserve uncertainty, partly because reducing the size of a DNN typically reduces the variation in its outputs.
To resolve this limitation, we introduce a new method of distribution distillation, called Gaussian distillation, which compresses a teacher ensemble into a student distribution instead of a student ensemble. Treating each member of the teacher ensemble as a realization of a stochastic process, Gaussian distillation estimates the distribution of the teacher ensemble through a special Gaussian process called the deep latent factor model (DLF).
The mean and covariance functions of the DLF model are estimated stably using the expectation-maximization (EM) algorithm.
On multiple benchmark datasets, we demonstrate that the proposed Gaussian distillation outperforms existing baselines. In addition, we illustrate that Gaussian distillation works well for fine-tuning LLMs and for distribution shift problems.
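
For attendees who want a concrete feel for the idea before the talk, the sketch below is a minimal, hypothetical illustration, not the speaker's actual DLF method: it fits a simple latent factor model y_m = mu + W z_m + eps_m to a matrix of teacher-ensemble outputs using a PPCA-style EM loop, and then samples from the resulting Gaussian student distribution N(mu, W W^T + sigma^2 I). The function names, the isotropic-noise assumption, and the fixed evaluation grid are illustrative assumptions, not details from the abstract.

import numpy as np

def gaussian_distillation_em(Y, n_factors=5, n_iter=200, seed=0):
    """Illustrative stand-in for DLF estimation (assumption, not the talk's method).

    Fits the latent factor model  y_m = mu + W z_m + eps_m  via EM, where
    Y is an (M, D) array: row m holds teacher member m's outputs (e.g. logits)
    evaluated at D fixed inputs, treating each member as one realization of
    the underlying stochastic process. Returns the mean function mu, the
    factor loadings W, and the isotropic noise variance sigma2.
    """
    rng = np.random.default_rng(seed)
    M, D = Y.shape
    mu = Y.mean(axis=0)                     # estimated mean function
    Yc = Y - mu                             # centered realizations
    W = rng.normal(scale=0.1, size=(D, n_factors))
    sigma2 = Yc.var()

    for _ in range(n_iter):
        # E-step: posterior over latent factors z_m given current W, sigma2
        Sz = np.linalg.inv(np.eye(n_factors) + W.T @ W / sigma2)
        Ez = Yc @ W @ Sz / sigma2           # (M, K) posterior means
        Ezz = M * Sz + Ez.T @ Ez            # sum over m of E[z_m z_m^T]

        # M-step: update loadings and noise variance in closed form
        W = (Yc.T @ Ez) @ np.linalg.inv(Ezz)
        sigma2 = (np.sum(Yc**2)
                  - 2.0 * np.sum(Yc * (Ez @ W.T))
                  + np.trace(Ezz @ W.T @ W)) / (M * D)
    return mu, W, sigma2

def sample_student(mu, W, sigma2, n_samples, seed=1):
    """Draw 'ensemble members' from the distilled Gaussian distribution."""
    rng = np.random.default_rng(seed)
    D, K = W.shape
    z = rng.standard_normal((n_samples, K))
    eps = rng.normal(scale=np.sqrt(sigma2), size=(n_samples, D))
    return mu + z @ W.T + eps

The appeal of this style of distillation is that the student is a distribution rather than a fixed set of networks: new "members" can be sampled cheaply at deployment time, which is what preserves ensemble-style uncertainty at a fraction of the memory cost.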