I majored in natural language processing, the use of artificial intelligence to understand and generate both human language and programming languages. I am particularly interested in research that applies large language models to automatic code generation, automatic code summarization, and code question answering. Going forward, I plan to conduct research on large language models for programming education.
Education
Ph.D. in Computer Science, Sungkyunkwan University
Field of Specialization
Natural Language Processing
Research Areas
- Natural Language Processing
- AI for Programming Languages
- Large Language Models
Courses Taught
- Artificial Intelligence
- Databases
Selected Publications
(International Conference)
- Jimin An, YunSeok Choi (co-first authors), and Jee-Hyong Lee, Code Defect Detection using Pre-trained Language Models with Encoder-Decoder via Line-Level Defect Localization, Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation: LREC-COLING 2024 (NLP Top-tier Conference)
- YunSeok Choi and Jee-Hyong Lee, CodePrompt: Task-Agnostic Prefix Tuning for Program and Language Generation, Findings of the Association for Computational Linguistics: ACL 2023 (NLP Top-tier Conference)
- YunSeok Choi, Hyojun Kim, and Jee-Hyong Lee, BLOCSUM: Block Scope-based Source Code Summarization via Shared Block Representation, Findings of the Association for Computational Linguistics: ACL 2023 (NLP Top-tier Conference)
- CheolWon Na, YunSeok Choi, and Jee-Hyong Lee, DIP: Dead code Insertion based Black-box Attack for Programming Language Model, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: ACL 2023 (NLP Top-tier Conference)
- Heeyoon Yang, YunSeok Choi, Gahyung Kim, and Jee-Hyong Lee, LOAM: Improving Long-tail Session-based Recommendation via Niche Walk Augmentation and Tail Session Mixup, Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval: SIGIR 2023 (AI Top-tier Conference)
- YunSeok Choi, Hyojun Kim, and Jee-Hyong Lee, TABS: Efficient Textual Adversarial Attack for Pre-trained NL Code Model Using Semantic Beam Search, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: EMNLP 2022 (NLP Top-tier Conference)
- Ida Ayu Putu Ari Crisdayanti, JinYeong Bak, YunSeok Choi, and Jee-Hyong Lee, IA-BERT: Context-Aware Sarcasm Detection by Incorporating Incongruity Attention Layer for Feature Extraction, Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing: ACM-SAC 2022
- YunSeok Choi, JinYeong Bak, CheolWon Na, and Jee-Hyong Lee, Learning Sequential and Structural Information for Source Code Summarization, Findings of the Association for Computational Linguistics: ACL 2021 (NLP Top-tier Conference)
- Min-Sub Won, YunSeok Choi, Samuel Kim, CheolWon Na, and Jee-Hyong Lee, An Embedding Method for Unseen Words Considering Contextual Information and Morphological Information, Proceedings of the 36th Annual ACM Symposium on Applied Computing: ACM-SAC 2021
- Hyunsoo Lee, YunSeok Choi, and Jee-Hyong Lee, Attention History-Based Attention for Abstractive Text Summarization, Proceedings of the 35th Annual ACM Symposium on Applied Computing: ACM-SAC 2020
- YunSeok Choi, Suah Kim, and Jee-Hyong Lee, Source Code Summarization Using Attention-Based Keyword Memory Networks, Proceedings of IEEE International Conference on Big Data and Smart Computing: BigComp 2020
- WonKyu Lee, YunSeok Choi, and Jee-Hyong Lee, Deep Convolutional Neural Network with Autocorrelograms for Environmental Sound Classification, Proceedings of the 3rd Asian Conference on Artificial Intelligence Technology: ACAIT 2019
- YunSeok Choi, Dahae Kim, and Jee-Hyong Lee, Abstractive Summarization by Neural Attention Model with Document Content Memory, Proceedings of Conference on Research in Adaptive and Convergent Systems: ACM-RACS 2018
- YunSeok Choi, Suah Kim, and Jee-Hyong Lee, Recurrent Neural Network for Storytelling, Proceedings of International Symposium on Advanced Intelligent Systems: SCIS-ISIS 2016
(International Journal)
- YunSeok Choi, Hyojun Kim, and Jee-Hyong Lee, READSUM: Retrieval-Augmented Adaptive Transformer for Source Code Summarization, IEEE Access 2023 (SCIE)
- YunSeok Choi, Dahae Kim, and Jee-Hyong Lee, Neural Attention Model with Keyword Memory for Abstractive Document Summarization, Concurrency and Computation: Practice and Experience: CCPE 2020 (SCIE)
- Noo-ri Kim, YunSeok Choi, HyunSoo Lee, Jae-Young Choi, Suntae Kim, Jeong-Ah Kim, Youngwha Cho, and Jee-Hyong Lee, Detection of Document Modification based on Deep Neural Networks, Journal of Ambient Intelligence and Humanized Computing: JAIHC 2018 (SCIE)
- Noo-ri Kim, YunSeok Choi, HyunSoo Lee, and Jee-Hyong Lee, Detection of Content Changes based on Deep Neural Networks, Advances in Computer Science and Ubiquitous Computing: CSA&CUTE 2016