Professor Yeachan Kim
Degree
Ph.D. in Artificial Intelligence, Korea University
Research Areas
Natural Language Processing, Efficient Deep Learning, Language Agents
Phone
Email
yeachan@hufs.ac.kr
Office
Faculty Hall 520

Details

We explore the intersection of natural language processing and machine learning, focusing on efficient training and inference algorithms for large language models. Our research is not limited to improving the task accuracy or computational efficiency of language AI: we aim to infuse human-like efficiency into language intelligence, designing systems that adaptively allocate resources and reasoning effort the way humans do. Our recent research interests include, but are not limited to, the topics listed under Main Research below.
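
As an illustration of this adaptive allocation idea, the sketch below routes only the highest-saliency tokens in a sequence through a full transformer layer while the remaining tokens pass through unchanged, in the spirit of dynamic token routing (cf. the Leap-of-Thought and SparseFlow papers listed below). This is a hypothetical, minimal PyTorch illustration rather than code from any listed paper; the AdaptiveLayer class, its saliency scorer, and the keep_ratio parameter are invented for exposition.

    # Hypothetical sketch of adaptive token routing; not code from any listed paper.
    import torch
    import torch.nn as nn

    class AdaptiveLayer(nn.Module):
        """Route only the top-k most salient tokens through a full
        transformer layer; all other tokens skip the computation."""

        def __init__(self, dim: int, keep_ratio: float = 0.5):
            super().__init__()
            self.block = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            self.scorer = nn.Linear(dim, 1)  # learned per-token saliency score
            self.keep_ratio = keep_ratio

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq_len, dim)
            scores = self.scorer(x).squeeze(-1)    # (batch, seq_len) saliency
            k = max(1, int(self.keep_ratio * x.size(1)))
            keep = scores.topk(k, dim=1).indices   # indices of tokens to compute
            out = x.clone()                        # skipped tokens pass through as-is
            for b in range(x.size(0)):             # gather -> compute -> scatter
                out[b, keep[b]] = self.block(x[b, keep[b]].unsqueeze(0)).squeeze(0)
            return out

    layer = AdaptiveLayer(dim=64)
    print(layer(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])

With keep_ratio = 0.5, roughly half of the tokens skip each layer's attention and feed-forward computation; in an actual system the routing decision would be trained jointly with the task objective.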

Education

  • Ph.D. in Artificial Intelligence, Korea University

Areas of Specialization

  • Natural Language Processing
  • Efficient Deep Learning

Main Research

  • Efficient Methods for Natural Language Processing
  • Adaptive Methods for Natural Language Processing
  • Language Agents

Selected Publications

(* denotes equal contribution)

(2025)
  • Yeachan Kim*, Hyuntae Park*, SangKeun Lee, Bridging the Gap Between Molecule and Textual Descriptions via Substructure-aware Alignment, EMNLP 2025
  • Yeachan Kim, SangKeun Lee, Forward Knows Efficient Backward Path: Saliency-Guided Memory-Efficient Fine-tuning of Large Language Models, ACL 2025
  • Mingyu Lee, Yeachan Kim, Wing-Lam Mok, SangKeun Lee, Curriculum Debiasing: Toward Robust Parameter-Efficient Fine-Tuning Against Dataset Biases, ACL 2025
(2024)
  • Yeachan Kim, SangKeun Lee, SparseFlow: Accelerating Transformers by Sparsifying Information Flows, ACL 2024
  • Yeachan Kim*, Junho Kim*, SangKeun Lee, Towards Robust and Generalized Parameter-Efficient Fine-Tuning for Noisy Label Learning, ACL 2024
  • Sungho Kim*, Juhyung Park*, Yeachan Kim, SangKeun Lee, KOMBO: Korean Character Representations Based on the Combination Rules of Subcharacters, Findings of ACL: ACL 2024
  • Jun-Hyung Park, Yeachan Kim, Mingyu Lee, Hyuntae Park, SangKeun Lee, MolTRES: Improving Chemical Language Representation Learning for Molecular Property Prediction, EMNLP 2024
  • Yeachan Kim*, Hyuntae Park*, Jun-Hyung Park, SangKeun Lee, Zero-shot Commonsense Reasoning over Machine Imagination, Findings of ACL: EMNLP 2024
  • Yeachan Kim*, Junho Kim*, Jun-Hyung Park, Yerim Oh, Suho Kim, SangKeun Lee, MELT: Materials-aware Continued Pre-training for Language Model Adaptation to Materials Science, Findings of ACL: EMNLP 2024
  • Yeachan Kim, Jun-Hyung Park, SungHo Kim, Juhyeong Park, Sangyun Kim, SangKeun Lee, SEED: Semantic Knowledge Transfer for Language Model Adaptation to Materials Science, EMNLP 2024 (Industry Track)
  • Jun-Hyung Park*, Hyuntae Park*, Yeachan Kim, Woosang Lim, SangKeun Lee, Moleco: Molecular Contrastive Learning with Chemical Language Models for Molecular Property Prediction, EMNLP 2024 (Industry Track)
(2023)
  • Yeachan Kim, Junho Kim, Jun-Hyung Park, Mingyu Lee, SangKeun Lee, Leap-of-Thought: Accelerating Transformers via Dynamic Token Routing, EMNLP 2023
  • Eojin Jeon*, Mingyu Lee*, Juhyeong Park, Yeachan Kim, Wing-Lam Mok, SangKeun Lee, Improving Bias Mitigation through Bias Experts in Natural Language Understanding, EMNLP 2023
  • Yeachan Kim*, Junho Kim*, Wing-Lam Mok, Jun-Hyung Park, Mingyu Lee, SangKeun Lee, Client-Customized Adaptation for Parameter-Efficient Federated Learning, Findings of ACL: ACL 2023
  • Jun-Hyung Park, Yeachan Kim, Junho Kim, Joon-Young Choi, SangKeun Lee, Dynamic Structure Pruning for Compressing CNNs, AAAI 2023
  • Yeachan Kim, Seongyeon Kim, Ihyeok Seo, Bonggun Shin, Phase-shifted Adversarial Training, UAI 2023
(~2022)
  • Yeachan Kim, Bonggun Shin, In Defense of Core-set: A Density-aware Core-set Selection for Active Learning, KDD 2022 (Research Track)
  • Yeachan Kim, Bonggun Shin, An Interpretable Framework for Drug-Target Interaction with Gated Cross Attention, MLHC 2021
  • Yeachan Kim, Do-Myoung Lee, Chang-gyun Seo, Context-based Virtual Adversarial Training for Text Classification with Noisy Labels, LREC 2022
  • Kang-Min Kim, Bumsu Hyeon, Yeachan Kim, Jun-Hyung Park, SangKeun Lee, Multi-pretraining for Large-scale Text Classification, Findings of ACL: EMNLP 2020
  • Yeachan Kim, Kang-Min Kim, SangKeun Lee, Adaptive Compression of Word Embeddings, ACL 2020
  • Yeachan Kim, Kang-Min Kim, SangKeun Lee, Representation Learning for Unseen Words by Bridging Subwords to Semantic Networks, LREC 2020
  • Kang-Min Kim, Yeachan Kim, Jungho Lee, Ji-Min Lee, SangKeun Lee, From Small-scale to Large-scale Text Classification, WWW 2019
  • Yeachan Kim, Kang-Min Kim, Ji-Min Lee, SangKeun Lee, Learning to Generate Word Representations using Subword Information, COLING 2018