Hanseul Cho (조한슬) 👋
I'm a Ph.D. student in the Optimization & Machine Learning (OptiML) Laboratory, where I'm fortunate to be advised by Prof. Chulhee “Charlie” Yun at the Kim Jaechul Graduate School of AI, Korea Advanced Institute of Science and Technology (KAIST AI). Previously, I completed my M.Sc. (in AI) and B.Sc. (in Math, with a minor in CS, Summa Cum Laude) at KAIST. Here is my CV.
I'll work as a Google Student Researcher in New York for Spring 2025 (05/05–07/25), collaborating with Srinadh Bhojanapalli.
🤗 Please don't hesitate to reach out for questions, discussions, and collaborations! 🤗
🔬 Research Interests 🔭
My primary research interests lie in optimization, machine learning (ML), and deep learning (DL), focusing on both mathematical/theoretical analysis and empirical improvements (usually grounded in theoretical understanding).
During my journey to a Ph.D.👨🏻‍🎓, my ultimate research goal is to rigorously understand and practically overcome the following three critical challenges:
- [Generalizability] Out-of-distribution generalization of (large) language models (e.g., length generalization and compositional generalization of Transformers)
- [Adaptability] Training adaptable models under evolving environments (e.g., continual learning, maintaining the plasticity of neural networks, sample-efficient reinforcement learning)
- [Multifacetedness] Learning with multiple (possibly conflicting and/or orthogonal) goals (e.g., minimax optimization, bi-level optimization, fairness in ML)
‼️News‼️
- 🗓️ [Feb. '25] (NEW) I'll work as a Google Student Researcher in New York🇺🇸! (05/05/2025–07/25/2025, Host: Srinadh Bhojanapalli)
- 🗓️ [Jan. '25] (NEW) Invited as a reviewer for Transactions on Machine Learning Research (TMLR).
- 🗓️ [Jan. '25] (NEW) Two papers got accepted to ICLR 2025! 🎉 One is the sequel to our Position Coupling paper; the other is a theoretical analysis of a continual learning algorithm. See you in Tampines, Singapore🇸🇬!
- 🗓️ [Nov. '24] An early version of our paper on the theoretical analysis of continual learning was accepted to JKAIA 2024 and won the Best Paper Award (top 3 papers)! 🎉
- 🗓️ [Nov. '24] I was selected as one of the Top Reviewers (top 8.6%: 1,304 of 15,160 reviewers) at NeurIPS 2024! (+ Free registration! 🎉)
- 🗓️ [Sep. '24] Two papers got accepted to NeurIPS 2024! 🎉 One is about length generalization of arithmetic Transformers, and the other is about mitigating loss of plasticity in incremental neural net training. See you in Vancouver, Canada🇨🇦!
- 🗓️ [Jun. '24] An early version of our paper on length generalization of Transformers got accepted to the ICML 2024 Workshop on Long-Context Foundation Models!
- 🗓️ [May '24] A paper got accepted to ICML 2024 as a spotlight paper (top 3.5% of all submissions)! 🎉 We show the global convergence of Alt-GDA (which is strictly faster than Sim-GDA) and propose an enhanced algorithm called Alex-GDA for minimax optimization (the baseline update rules are sketched after this list). See you in Vienna, Austria🇦🇹!
- 🗓️ [Sep. '23] Two papers got accepted to NeurIPS 2023! 🎉 One is about Fair Streaming PCA, and the other is about enhancing plasticity in RL. See you in New Orleans, USA🇺🇸!
- 🗓️ [Jan. '23] Our paper about shuffling-based stochastic gradient descent-ascent got accepted to ICLR 2023!
- 🗓️ [Nov. '22] An early version of our paper about shuffling-based stochastic gradient descent-ascent was accepted to the 2022 Korea AI Association + NAVER Autumnal Joint Conference (JKAIA 2022) and selected as a NAVER Outstanding Theory Paper (top 3 papers)!
- 🗓️ [Oct. '22] I am happy to announce that our very first preprint is now on arXiv! It is about the convergence analysis of shuffling-based stochastic gradient descent-ascent.
- 🗓️ [Feb. '22] I am now part of the OptiML Lab at KAIST AI.
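As context for the Alt-GDA news item above, here is a minimal sketch of the two baseline update rules for a minimax problem \(\min_x \max_y f(x, y)\), assuming a step size \(\eta\) (this notation is mine, not taken from the paper); the only difference is that Alt-GDA's \(y\)-step uses the freshly updated \(x_{t+1}\). Alex-GDA's exact (extrapolated) updates are given in the paper itself.

```latex
% Sketch of the two baseline algorithms for min_x max_y f(x, y):
% Sim-GDA updates both players from the current iterate (x_t, y_t);
% Alt-GDA lets the y-player react to the already-updated x_{t+1}.
\begin{align*}
  \text{Sim-GDA:}\quad
    x_{t+1} &= x_t - \eta\,\nabla_x f(x_t,\, y_t), &
    y_{t+1} &= y_t + \eta\,\nabla_y f(x_t,\, y_t); \\
  \text{Alt-GDA:}\quad
    x_{t+1} &= x_t - \eta\,\nabla_x f(x_t,\, y_t), &
    y_{t+1} &= y_t + \eta\,\nabla_y f(x_{t+1},\, y_t).
\end{align*}
```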
Contact & Info
📄 Curriculum Vitae (CV): [PDF] | [Overleaf-ReadOnly]
📧 E-mail: jhs4015 at kaist dot ac dot kr