Document Type

Thesis

Comments

This research combines the timely topic of artificial intelligence with the creative practice of music generation. It aims to build an understanding of the relationship between AI and the music industry, and of AI's future implications for it, through experimentation with a Music Variational Autoencoder (MusicVAE).

First Faculty Advisor

TingTing Zhao

Second Faculty Advisor

Joan Zaretti

Keywords

music; artificial intelligence; autoencoder

Publisher

Bryant University

Rights Management

CC BY-NC-ND

Abstract

The use of artificial intelligence (AI) is quickly gaining relevance in creative fields, and its emergence in the music industry carries many unique implications. This paper examines the technical processes of creating music with AI and machine learning, the relationship between music and emotion, and the implications and ethical considerations of AI-generated music in creative industries. As part of this project, a generative deep learning model (Music Variational Autoencoder, or MusicVAE), pre-trained on piano-roll data, is explored and applied to generate music. The AI reconstructions are based on self-made four-measure electronic instrumental tracks. Forty-six machine learning students then take a survey to blindly compare the human-composed tracks with the AI-generated tracks and determine whether they can tell the difference. The survey is run in two trials, with a presentation on MusicVAE given between them. Exploratory analysis indicates that musical experience is correlated with an increased ability to distinguish AI-generated music. Chi-square tests are then conducted for each set in each trial, with the null hypothesis that the probability of correctly identifying a song as AI-generated is 50%. The results indicate that the null hypothesis cannot be rejected for the first trial, but that it is rejected for the second trial, after the intervening presentation.
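The chi-square test described in the abstract can be sketched as follows. This is a minimal illustration, not the study's analysis code; the respondent counts below are hypothetical (the actual survey responses are not reproduced in this record), assuming a one-degree-of-freedom goodness-of-fit test against the 50% guessing rate stated in the null hypothesis.

```python
def chi_square_guess_test(correct, total, critical=3.841):
    """Goodness-of-fit test: does the rate of correct AI-vs-human
    identifications differ from the 50% expected by chance?
    critical=3.841 is the chi-square cutoff for df=1 at alpha=0.05."""
    expected = total / 2                    # 50% under the null hypothesis
    incorrect = total - correct
    chi2 = ((correct - expected) ** 2 / expected
            + (incorrect - expected) ** 2 / expected)
    return chi2, chi2 > critical            # (statistic, reject null?)

# Hypothetical counts for illustration only -- not the study's data.
# 46 respondents, 23 correct: exactly chance level, null not rejected.
stat, reject = chi_square_guess_test(23, 46)
print(stat, reject)        # 0.0 False

# 46 respondents, 34 correct: well above chance, null rejected.
stat, reject = chi_square_guess_test(34, 46)
print(round(stat, 3), reject)
```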
