References
[1] Sangjun Han, Hyeongrae Lim, Moontae Lee, and Woohyung Lim, “Symbolic Music Loop Generation with Neural Discrete Representations,” in Proc. of the 23rd International Society for Music Information Retrieval Conference, 2022.

[2] Colin Raffel, “Learning-based Methods for Comparing Sequences, with Applications to Audio-to-MIDI Alignment and Matching,” Ph.D. dissertation, Columbia University, 2016.

[3] Bee Suan Ong and Sebastian Streich, “Music Loop Extraction from Digital Audio Signals,” in Proc. of the 2008 IEEE International Conference on Multimedia and Expo, IEEE, 2008, pp. 681-684.

[4] Zhengshan Shi and Gautham J. Mysore, “LoopMaker: Automatic Creation of Music Loops from Pre-recorded Music,” in Proc. of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 1-6.

[5] Lukas Ruff, Robert A. Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib A. Siddiqui, Alexander Binder, Emmanuel Muller, and Marius Kloft, “Deep One-Class Classification,” in Proc. of the 35th International Conference on Machine Learning, PMLR, 2018, pp. 4393-4402.

[6] Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu, “Neural Discrete Representation Learning,” in Advances in Neural Information Processing Systems, 2017, pp. 6306-6315.

[7] Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly, “Assessing Generative Models via Precision and Recall,” in Advances in Neural Information Processing Systems, 2018, pp. 5228-5237.

[8] Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, and Jaejun Yoo, “Reliable Fidelity and Diversity Metrics for Generative Models,” in Proc. of the 37th International Conference on Machine Learning, PMLR, 2020, pp. 7176-7185.

[9] Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Ian Simon, Curtis Hawthorne, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, and Douglas Eck, “Music Transformer,” arXiv preprint arXiv:1809.04281, 2018.

[10] Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment,” in Proc. of the AAAI Conference on Artificial Intelligence, 2018, pp. 34-41.

[11] Ali Razavi, Aaron van den Oord, and Oriol Vinyals, “Generating Diverse High-Fidelity Images with VQ-VAE-2,” in Advances in Neural Information Processing Systems, 2019, pp. 7989-7999.

[12] Sangjun Han, Hyeongrae Lim, DaeHan Ahn, and Woohyung Lim, “Instrument Separation of Symbolic Music by Explicitly Guided Diffusion Model,” in NeurIPS Workshop on Machine Learning for Creativity and Design, 2022.

[13] Jonathan Ho, Ajay Jain, and Pieter Abbeel, “Denoising Diffusion Probabilistic Models,” in Advances in Neural Information Processing Systems, 2020, pp. 6840-6851.

[14] Jiaming Song, Chenlin Meng, and Stefano Ermon, “Denoising Diffusion Implicit Models,” arXiv preprint arXiv:2010.02502, 2020.