References
[1] Lim, S., M. Kim, and J. Lee. "KorQuAD1.0: Korean QA dataset for machine reading comprehension." arXiv preprint arXiv:1909.07005 (2019). https://arxiv.org/abs/1909.07005

[2] "KorQuAD 2.0: Korean QA Dataset for Web Document Machine Comprehension." https://korquad.github.io/dataset/KorQuAD_2.0/KorQuAD_2.0_paper.pdf

[3] Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. "BERT: Pre-training of deep bidirectional transformers for language understanding." Proceedings of NAACL-HLT. Vol. 1. 2019.

[4] Lewis, Mike, et al. "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension." arXiv preprint arXiv:1910.13461 (2019).

[5] Raffel, Colin, et al. "Exploring the limits of transfer learning with a unified text-to-text transformer." The Journal of Machine Learning Research 21.1 (2020): 5485-5551.

[6] Chung, Hyung Won, et al. "Scaling instruction-finetuned language models." arXiv preprint arXiv:2210.11416 (2022).

[7] Karpukhin, Vladimir, et al. "Dense passage retrieval for open-domain question answering." arXiv preprint arXiv:2004.04906 (2020).

[8] Izacard, Gautier, and Edouard Grave. "Leveraging passage retrieval with generative models for open domain question answering." arXiv preprint arXiv:2007.01282 (2020).

[9] Lewis, Patrick, et al. "Retrieval-augmented generation for knowledge-intensive NLP tasks." Advances in Neural Information Processing Systems 33 (2020): 9459-9474.

[10] Liu, Frederick, et al. "EncT5: Fine-tuning T5 encoder for non-autoregressive tasks." arXiv e-prints (2021): arXiv-2110.

[11] Gururangan, Suchin, et al. "Don't stop pretraining: Adapt language models to domains and tasks." arXiv preprint arXiv:2004.10964 (2020).

[12] Gur, Izzeddin, et al. "A real-world webagent with planning, long context understanding, and program synthesis." arXiv preprint arXiv:2307.12856 (2023).

[13] Kongyoung, Sarawoot, Craig Macdonald, and Iadh Ounis. "monoQA: Multi-Task Learning of Reranking and Answer Extraction for Open-Retrieval Conversational Question Answering." Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022.