Welcome!

My name is Hanseul Cho (μ‘°ν•œμŠ¬). I am a Ph.D. student in the Optimization & Machine Learning (OptiML) Laboratory, advised by Prof. Chulhee Yun, at the Kim Jaechul Graduate School of AI, Korea Advanced Institute of Science and Technology (KAIST AI).

🚨 I am looking for internship opportunities‼️🚨

I am interested in a broad range of topics in optimization, machine learning (ML), and deep learning (DL), with a focus on both mathematical/theoretical analysis and empirical improvements (usually grounded in theoretical understanding). Recently, I have been working on understanding and mitigating fundamental limitations of modern language models (e.g., length generalization and compositional generalization). I am also interested in hierarchical/multi-level optimization (e.g., minimax optimization), optimization with constraints (e.g., fairness in ML), and optimization under distribution shifts (e.g., reinforcement learning and continual learning).
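To give a concrete flavor of the minimax optimization mentioned above, here is a minimal toy sketch of my own (not code from any of the papers below) contrasting simultaneous and alternating gradient descent-ascent (Sim-GDA vs. Alt-GDA) on the bilinear objective f(x, y) = x·y; the step size and iteration count are arbitrary illustrative choices.

```python
# Toy illustration: Sim-GDA vs. Alt-GDA on the bilinear minimax
# problem min_x max_y f(x, y) = x * y. Step size and iteration
# count are illustrative choices, not tuned values from any paper.

def grad_x(x, y):  # partial derivative of f with respect to x
    return y

def grad_y(x, y):  # partial derivative of f with respect to y
    return x

def sim_gda(x, y, lr=0.1, steps=100):
    """Simultaneous GDA: both players update from the same iterate."""
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x, y = x - lr * gx, y + lr * gy
    return x, y

def alt_gda(x, y, lr=0.1, steps=100):
    """Alternating GDA: y's ascent step sees the freshly updated x."""
    for _ in range(steps):
        x = x - lr * grad_x(x, y)
        y = y + lr * grad_y(x, y)
    return x, y

if __name__ == "__main__":
    # On this bilinear example, Sim-GDA's iterates spiral away from
    # the saddle point (0, 0), while Alt-GDA's stay bounded near it.
    print("Sim-GDA:", sim_gda(1.0, 1.0))
    print("Alt-GDA:", alt_gda(1.0, 1.0))
```

The only design difference is whether y's update uses the stale or the freshly updated x, yet on this toy problem that single change separates divergence from bounded iterates.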

News

  • πŸ“° [Sep. '24] Two papers got accepted to NeurIPS 2024! πŸŽ‰ One is about length generalization of arithmetic Transformers, and the other is about mitigating loss of plasticity in incremental neural net training. See you in VancouverπŸ‡¨πŸ‡¦!
  • πŸ“° [Jun. '24] An early version of our paper on length generalization of Transformers got accepted to the ICML 2024 Workshop on Long-Context Foundation Models!
  • πŸ“° [May '24] A paper got accepted to ICML 2024 as a spotlight (top 3.5% of all submissions)! πŸŽ‰ We show global convergence of Alt-GDA (which is strictly faster than Sim-GDA) and propose an enhanced algorithm, Alex-GDA, for minimax optimization. See you in ViennaπŸ‡¦πŸ‡Ή!
  • πŸ“° [Sep. '23] Two papers were accepted to NeurIPS 2023! πŸŽ‰ One is about Fair Streaming PCA, and the other is about enhancing plasticity in RL.
  • πŸ“° [Jan. '23] Our paper about shuffling-based stochastic gradient descent-ascent got accepted to ICLR 2023! πŸŽ‰
  • πŸ“° [Nov. '22] Our paper about shuffling-based stochastic gradient descent-ascent was accepted to the 2022 Korea AI Association + NAVER Autumnal Joint Conference (JKAIA 2022) and selected as the NAVER Outstanding Theory Paper!
  • πŸ“° [Oct. '22] I am happy to announce that our very first preprint is now on arXiv! It is about convergence analysis of shuffling-based stochastic gradient descent-ascent.
  • πŸ“° [Feb. '22] I am now part of the OptiML Lab at KAIST AI.

Education

  • 🏫 Ph.D. in Artificial Intelligence, KAIST, Sep. 2023 - Present
  • 🏫 M.Sc. in Artificial Intelligence, KAIST, Mar. 2022 - Aug. 2023
  • 🏫 B.Sc. in Mathematical Sciences, KAIST, Mar. 2017 - Feb. 2022
    • Minor in Computing Sciences / Summa Cum Laude

Contact & Info

πŸ“‹ Curriculum Vitae: Here
πŸ“§ Email address: jhs4015 at kaist dot ac dot kr