Professor Jae-Hong Lee (Department Chair)
Degree
Ph.D., Department of Electronic, Computer and Communication Engineering, Hanyang University
Research Areas
Statistical Machine Learning, Speech and Audio Processing
Phone
Email
ljh93ljh@hufs.ac.kr
Office
Faculty Hall, Room 401

Details

Education

Ph.D., Department of Electronic, Computer and Communication Engineering, Hanyang University


Areas of Specialization

  • Statistical Machine Learning
  • Speech and Audio Processing

Main Research Topics

  • Foundation models
  • Continual learning
  • Unsupervised domain adaptation
  • Online learning 


Main Courses

  • Object-Oriented Programming
  • Speech Processing
  • Machine Learning

Major Publications


  • Language Model Personalization for Speech Recognition: A Clustered Federated Learning Approach with Adaptive Weight Average, IEEE Signal Processing Letters (SPL), 2024, Chae-Won Lee, Jae-Hong Lee, Joon-Hyuk Chang
  • Online Subloop Search via Uncertainty Quantization for Efficient Test-Time Adaptation, INTERSPEECH 2024, Jae-Hong Lee*, Sang-Eon Lee*, Dong-Hyun Kim, Doe-Hee Kim, Joon-Hyuk Chang
  • Whisper Multilingual Downstream Task Tuning using Task Vector, INTERSPEECH 2024, Ji-Hun Gang, Jae-Hong Lee, Joon-Hyuk Chang, et al.
  • Balanced-Wav2Vec: Enhancing Stability and Robustness of Representation Learning Through Sample Reweighting Techniques, INTERSPEECH 2024, Mun-Hak Lee, Jae-Hong Lee, Joon-Hyuk Chang, et al.
  • Stationary Latent Weight Inference for Unreliable Observations from Online Test-Time Adaptation, International Conference on Machine Learning (ICML), 2024, Jae-Hong Lee, Joon-Hyuk Chang
  • Text-only unsupervised domain adaptation for neural transducer-based ASR personalization using synthesized data, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024, Dong-Hyun Kim, Jae-Hong Lee, Joon-Hyuk Chang
  • Continual momentum filtering on parameter space for online test-time adaptation, International Conference on Learning Representations (ICLR), 2024, Jae-Hong Lee, Joon-Hyuk Chang
  • Partitioning attention weight: Mitigating adverse effect of incorrect pseudo-labels for self-supervised ASR, IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), 2023, Jae-Hong Lee, Joon-Hyuk Chang
  • AWMC: Online test-time adaptation without mode collapse for continual adaptation, IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2023, Jae-Hong Lee, Doe-Hee Kim, Joon-Hyuk Chang
  • M-CTRL: A Continual Representation Learning Framework with Slowly Improving Past Pre-Trained Model, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023, Jin-Seong Cho, Jae-Hong Lee, Chae-Won Lee, Joon-Hyuk Chang
  • RepackagingAugment: Overcoming Prediction Error Amplification in Weight-Averaged Speech Recognition Models Subject to Self-Training, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023, Jae-Hong Lee, Dong-Hyun Kim, Joon-Hyuk Chang
  • CTRL: Continual Representation Learning to Transfer Information of Pre-Trained for Wav2Vec 2.0, INTERSPEECH 2022, Jae-Hong Lee*, Chae-Won Lee*, Jin-Seong Cho*, Joon-Hyuk Chang, et al.
  • W2V2-Light: A Lightweight Version of Wav2Vec 2.0 for Automatic Speech Recognition, INTERSPEECH 2022, Dong-Hyun Kim*, Jae-Hong Lee*, Joon-Hyuk Chang


(* denotes equal contribution)