Introduction

Music can touch the hearts of any audience without requiring any knowledge of its context. Its power is transcendent, stemming from the timbre of the instruments, the underlying rhythmic structure and melody, the dynamics, the instrumentation, and much more, all cooperating in some form of harmony to create the final product. With the recent rise of Artificial Intelligence-Generated Content (AIGC), AI for music is a promising field full of creativity, novel methodologies, and technologies yet to be explored. Current AI-for-music methods concentrate largely on machine learning and deep learning techniques for generating new music. Despite the significant milestones achieved thus far, many of these methods are not robust across a wide range of applications.

AI music itself is a timely topic. This workshop aims to generate momentum around this topic of growing interest, and to encourage interdisciplinary interaction and collaboration between AI, music, Natural Language Processing (NLP), machine learning, multimedia, Human-Computer Interaction (HCI), audio processing, computational linguistics, and neuroscience. It serves as a forum to bring together active researchers and practitioners from academia and industry to share their recent advances in this promising area.

Topics

This is an open call for papers inviting original contributions on recent findings in theory, applications, and methodologies in the field of AI music generation. The list of topics includes, but is not limited to:

  • Machine learning/AI for music
  • Natural language processing for music generation
  • Algorithmic music generation
  • Music generation based on a specific aspect: lyrical, chordal, motivic, melodic, and rhythmic
  • AI-generated lyrics
  • AI-generated instrumental and vocal audio
  • Computational musicology
  • AI music interpretation
  • AI music data representation
  • Music evaluation metrics
  • Multi-channel AI music generation
  • AI musical fusion (notes, audio, etc.)
  • AI generation for musical performance and expression
  • AI music enhancement (e.g. AI-generated instrumentation)
  • AI musical ethics
  • AI music generation datasets
  • Human-Computer Interaction (HCI) for AI music generation
  • AI music for neuroscientific applications
  • AI-aided music theory applications

Important Dates

Priority submission and notification dates:

  • Oct 1, 2024: Submission of full (8-10 pages) and short (5-7 pages) papers
  • Oct 8, 2024: Submission of poster abstracts (3-4 pages)
  • Oct 15, 2024: Submission of AI musical compositions (1- or 2-page abstract, 1 music sheet, and 1 mp3 audio)
  • Nov 3, 2024: Notification of paper or music acceptance

Final camera-ready submission dates:

Submission

Program Chair

  • Callie Liao, IntelliSky, USA
  • Ellie Zhang, IntelliSky, USA

Program Committee

  • Kaiqun Fu, South Dakota State University, USA
  • Jesse Guessford, George Mason University, USA
  • Ge Jin, Purdue University, USA
  • Fanchun Jin, Google, USA
  • Lindi Liao, George Mason University, USA
  • Sean Luke, George Mason University, USA
  • Jeffrey Morris, Texas A&M University, USA
  • Chen Shen, Google, USA
  • Yanjia Zhang, Boston University, USA

If you are interested in serving on the workshop program committee or in reviewing papers, please contact the Workshop Chair.