Application of Deep Neural Networks to Music Composition Based on MIDI Datasets and Graphical Representation
Mateusz Modrzejewski , Mateusz Dorobek , Przemysław Rokita
Abstract: In this paper we present a method for composing and generating short musical phrases using a deep convolutional generative adversarial network (DCGAN). We trained the network on a dataset of classical and jazz MIDI recordings. Our approach translates the MIDI data into graphical images in a piano-roll format suitable for the DCGAN, using the RGB channels as additional information carriers for improved performance. We show that the network learns to generate images that are indistinguishable from the input data and that, when translated back to MIDI and played back, contain several musically interesting rhythmic and harmonic structures. We describe and discuss the results of the conducted experiments, draw conclusions for further work, and briefly compare our method with selected existing solutions.
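The abstract's core idea — rendering MIDI note events as a piano-roll image whose colour channels carry extra information — can be sketched roughly as follows. This is a minimal illustration, not the authors' exact encoding: the specific channel layout (note onsets in R, sustain in G, velocity in B) and the function name are assumptions made for the example.

```python
import numpy as np

def notes_to_piano_roll(notes, n_pitches=128, n_steps=64):
    """Render (pitch, start_step, end_step, velocity) tuples into an
    RGB piano-roll image array.

    Channel layout (an assumption for illustration):
      R — note-on markers, so repeated notes stay distinguishable
      G — note sustain (duration)
      B — MIDI velocity (0-127), an extra information carrier
    """
    roll = np.zeros((n_pitches, n_steps, 3), dtype=np.uint8)
    for pitch, start, end, velocity in notes:
        roll[pitch, start, 0] = 255          # onset marker
        roll[pitch, start:end, 1] = 255      # sustain
        roll[pitch, start:end, 2] = velocity # dynamics
    return roll

# Example: C4 held for 4 steps, then E4 for 4 steps.
roll = notes_to_piano_roll([(60, 0, 4, 100), (64, 4, 8, 90)])
```

Such an array can be saved as an ordinary RGB image for GAN training, and the inverse mapping (thresholding the channels back to note events) recovers playable MIDI.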
| Field | Value |
|---|---|
| Publication size in sheets | 0.5 |
| Book | Rutkowski Leszek, Scherer Rafal, Korytkowski Marcin, Pedrycz Witold, Tadeusiewicz Ryszard, Zurada Jacek M. (eds.): Artificial Intelligence and Soft Computing, 18th International Conference, ICAISC 2019, Proceedings, Part I, Lecture Notes in Artificial Intelligence, vol. 11508, 2019, Cham, Springer, ISBN 978-3-030-20911-7, [978-3-030-20912-4], 688 p., DOI:10.1007/978-3-030-20912-4 |
| Keywords in English | AI, Artificial intelligence, Neural networks, GAN, Music, MIDI |
| Score | 20.0, 17-06-2020, ChapterFromConference |
| Publication indicators | = 0; = 0 |
* The presented citation count is obtained through Internet information analysis and is close to the number calculated by the Publish or Perish system.