My primary research lies in the areas of natural language processing and efficient artificial intelligence. I am particularly interested in improving the sustainability and truthfulness of language models, which has opened promising research directions, including:
- Efficient Language Models: Reducing the computational and memory complexity of language models while maintaining their performance on downstream tasks (a brief illustrative sketch follows this list)
- Augmented Language Models: Rapid knowledge learning and editing in language models using symbolic resources such as knowledge graphs, text, and tools
- Complex Reasoning with Language Models: Compositional, commonsense, and multimodal reasoning over diverse problems with language models
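As a minimal illustration of the efficient-language-models theme above, the sketch below shows one generic compression technique, unstructured magnitude pruning of a linear layer in PyTorch. It is an assumed, simplified example for illustration only, not an implementation of the specific methods from the publications listed below.

```python
import torch
import torch.nn as nn

def magnitude_prune(layer: nn.Linear, sparsity: float = 0.5) -> nn.Linear:
    """Zero out the smallest-magnitude weights of a linear layer (illustrative only)."""
    with torch.no_grad():
        w = layer.weight
        k = max(1, int(w.numel() * sparsity))        # number of weights to zero out
        threshold = w.abs().flatten().kthvalue(k).values
        mask = (w.abs() > threshold).to(w.dtype)     # keep only larger-magnitude weights
        w.mul_(mask)                                 # apply the pruning mask in place
    return layer

# Toy usage: prune roughly half of the weights in a small projection layer.
layer = magnitude_prune(nn.Linear(256, 256), sparsity=0.5)
print(f"nonzero weights: {int((layer.weight != 0).sum())} / {layer.weight.numel()}")
```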
Education
Ph.D. in Computer Science and Engineering, Korea University
Field of Specialization
Deep Learning
Research Areas
- Efficient Language Models
- Augmented Language Models
- Complex Reasoning with Language Models
Courses Taught
- Natural Language Processing
- Machine Learning
- Programming
- Information Retrieval
- Data Science
Selected Publications
(* denotes equal contribution)
(2024)
- Jun-Hyung Park, Mingyu Lee, Junho Kim, and SangKeun Lee. Coconut: Contextualized Commonsense Unified Transformers for Graph-Based Commonsense Augmentation of Language Models. Findings of ACL 2024.
(2023)
- Jun-Hyung Park*, Hyuntae Park*, Youjin Kang, Eojin Jeon, and SangKeun Lee. DIVE: Towards Descriptive and Diverse Visual Commonsense Generation. EMNLP 2023.
- Yeachan Kim, Junho Kim, Jun-Hyung Park, Mingyu Lee, and SangKeun Lee. Leap-of-Thought: Accelerating Transformers via Dynamic Token Routing. EMNLP 2023.
- Joon-Young Choi, Junho Kim, Jun-Hyung Park, Wing-Lam Mok, and SangKeun Lee. SMoP: Towards Efficient and Effective Prompt Tuning with Sparse Mixture-of-Prompts. EMNLP 2023.
- Yeachan Kim*, Junho Kim*, Wing-Lam Mok, Jun-Hyung Park, and SangKeun Lee. Client-Customized Adaptation for Parameter-Efficient Federated Learning. Findings of ACL 2023.
- Jun-Hyung Park, Yeachan Kim, Junho Kim, Joon-Young Choi, and SangKeun Lee. Dynamic Structure Pruning for Compressing CNNs. AAAI 2023.
(2022)
- Jun-Hyung Park*, Mingyu Lee*, Junho Kim, Kang-Min Kim, and SangKeun Lee. Efficient Pre-training of Masked Language Model via Concept-based Curriculum Masking. EMNLP 2022.
- Jun-Hyung Park*, Junho Kim*, Mingyu Lee, Wing-Lam Mok, Joon-Young Choi, and SangKeun Lee. Tutoring Helps Students Learn Better: Improving Knowledge Distillation for BERT with Tutor Network. EMNLP 2022.
- Jun-Hyung Park*, Nayeon Kim*, Joon-Young Choi, Eojin Jeon, Youjin Kang, and SangKeun Lee. Break it Down into BTS: Basic, Tiniest Subword Units for Korean. EMNLP 2022.
- Jun-Hyung Park, Kang-Min Kim, and SangKeun Lee. Quantized Sparse Training: A Unified Trainable Framework for Joint Pruning and Quantization of DNNs. ACM TECS.
- Jun-Hyung Park*, Yong-Ho Jung*, Joon-Young Choi, Mingyu Lee, Junho Kim, Kang-Min Kim, and SangKeun Lee. Learning from Missing Relations: Contrastive Learning with Commonsense Knowledge Graphs for Commonsense Inference. Findings of ACL 2022.
- Jun-Hyung Park, Byung-Ju Choi, and SangKeun Lee. Examining the Impact of Adaptive Convolution on Natural Language Understanding. ESWA.