References

[1] LG AI Research, "ChatEXAONE: Enterprise AI Agent" https://www.lgresearch.ai/data/upload/tech_report/en/Technical_report_ChatEXAONE.pdf (2024).

[2] Lin, Lizhi, et al. "Against The Achilles' Heel: A Survey on Red Teaming for Generative Models." arXiv preprint arXiv:2404.00629 (2024).

[3] Ganguli, Deep, et al. "Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned." arXiv preprint arXiv:2209.07858 (2022).

[4] Ge, Suyu, et al. "MART: Improving LLM safety with multi-round automatic red-teaming." arXiv preprint arXiv:2311.07689 (2023).

[5] Köpf, Andreas, et al. "OpenAssistant Conversations - Democratizing Large Language Model Alignment." Advances in Neural Information Processing Systems 36 (2024).

[6] Zhou, Chunting, et al. "LIMA: Less is more for alignment." Advances in Neural Information Processing Systems 36 (2024).

[7] LG AI Research, "Introducing ChatEXAONE: Enterprise AI Agent Boosting Corporate Efficiency with Expert AI Insights" https://www.lgresearch.ai/blog/view?seq=458 (2024).

[8] OpenAI, "OpenAI Red Teaming Network" https://openai.com/index/red-teaming-network/ (2023).

[9] Microsoft, "Microsoft AI Red Team building future of safer AI" https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/ (2023).

[10] Meta, "Introducing Purple Llama for Safe and Responsible AI Development" https://about.fb.com/news/2023/12/purple-llama-safe-responsible-ai-development/ (2023).

[11] LG AI Research, "2023 LG AI 윤리 책무성 보고서 [2023 LG AI Ethics Accountability Report]" https://www.lgresearch.ai/about/vision#ethics (2023).