References

[1] Suhee Yoon, Sanghyu Yoon, et al. Diffusion-based semantic-discrepant outlier generation for out-of-distribution detection. In NeurIPS Synthetic4ML Workshop, 2023.

[2] Xuefeng Du, et al. Dream the impossible: Outlier imagination with diffusion models. In Advances in Neural Information Processing Systems, 2023; Jingyang Zhang, et al. OpenOOD v1.5: Enhanced benchmark for out-of-distribution detection. arXiv preprint arXiv:2306.09301, 2023.

[3] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, 2020; Niv Cohen, Ron Abutbul, and Yedid Hoshen. Out-of-distribution detection without class labels. In European Conference on Computer Vision, pages 101-117. Springer, 2022.

[4] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations, 2017.

[5] Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. In International Conference on Machine Learning, 2019.

[6] Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Yiran Chen, and Hai Li. Mixture outlier exposure: Towards out-of-distribution detection in fine-grained environments. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5531-5540, 2023.

[7] Qing Yu and Kiyoharu Aizawa. Unsupervised out-of-distribution detection by maximum classifier discrepancy. In IEEE/CVF International Conference on Computer Vision, pages 9518-9526, 2019.

[8] Konstantin Kirchheim and Frank Ortmeier. On outlier exposure with generative models. In NeurIPS ML Safety Workshop, 2022.

[9] Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. SCAN: Learning to classify images without labels. In European Conference on Computer Vision, pages 268-285. Springer, 2020.

[10] Xuefeng Du, et al. VOS: Learning what you don't know by virtual outlier synthesis. arXiv preprint arXiv:2202.01197, 2022.