Welcome!

My name is Hanseul Cho (์กฐํ•œ์Šฌ). I am a Ph.D. student in the Optimization & Machine Learning (OptiML) Laboratory, advised by Prof. Chulhee Yun, at the Kim Jaechul Graduate School of AI, Korea Advanced Institute of Science and Technology (KAIST AI).

I am interested in a broad range of topics in optimization and machine learning (ML), especially theoretical analysis and empirical investigation of "why" methods work. In particular, I am interested in hierarchical optimization (e.g., minimax/bi-level optimization), constrained optimization (e.g., fairness in ML), and optimization under changing environments (e.g., reinforcement learning and continual learning).

Education

  • ๐Ÿซ Ph.D. in Artificial Intelligence KAIST, Sept. 2023 - Current
  • ๐Ÿซ M.Sc. in Artificial Intelligence KAIST, Mar. 2022 - Aug. 2023
  • ๐Ÿซ B.Sc. in Mathematical Sciences KAIST, Mar. 2017 - Feb. 2022
    • Minor in Computing Sciences / Summa Cum Laude

Contact & Info

📋 Curriculum Vitae: Here
📧 Email address: jhs4015 at kaist dot ac dot kr

News

  • 📰 [Sep. '23] Two papers were accepted to NeurIPS 2023! 🎉 One is about Fair Streaming PCA, and the other is about enhancing plasticity in RL.
  • 📰 [Jan. '23] Our paper on shuffling-based stochastic gradient descent-ascent was accepted to ICLR 2023! 🎉
  • 📰 [Nov. '22] Our paper on shuffling-based stochastic gradient descent-ascent was accepted to the 2022 Korea AI Association + NAVER Autumnal Joint Conference (JKAIA 2022) and selected as a NAVER Outstanding Theory Paper!
  • 📰 [Oct. '22] I am happy to announce that our first preprint is now on arXiv! It presents a convergence analysis of shuffling-based stochastic gradient descent-ascent.
  • 📰 [Feb. '22] I joined the OptiML Lab at KAIST AI.