Introduction

Music can touch the hearts of any audience, even one with no knowledge of its context. Its power is transcendental, stemming from the timbre of the instruments, the underlying rhythmic structure and melody, the dynamics, the instrumentation, and much more, all working in harmony to create the final product. With the recent rise of Artificial Intelligence-Generated Content (AIGC), AI for music is a promising field full of creativity, novel methodologies, and technologies yet to be explored. Current AI-for-music research concentrates largely on machine learning and deep learning techniques for generating new music. Despite the significant milestones achieved thus far, many approaches are not yet robust across a wide range of applications.

AI music is itself a timely topic. This workshop aims to build momentum around this topic of growing interest and to encourage interdisciplinary interaction and collaboration among AI, music, Natural Language Processing (NLP), machine learning, multimedia, Human-Computer Interaction (HCI), audio processing, computational linguistics, and neuroscience. It serves as a forum that brings together active researchers and practitioners from academia and industry to share their recent advances in this promising area.

Program Schedule

Dec. 15, 2024, Washington, DC, USA (GMT-5)


Virtually: Please join IEEE Big Data Workshop - AIMG 2024 using your paper author information.

Physically: Hyatt Regency Washington on Capitol Hill, Conference Room - COLUMBIA A (Ballroom level)

Paper Type | Paper Title | Author(s)
Coffee Break at Columbia Foyer (10:00-10:20)

Session I: Paper Presentation & AI Music Showcase (10:20-12:00)

Opening Remarks
Poster | Musical Scene Detection in Comics: Comparing Perception of Humans and GPT-4 | Megha Sharma, Muhammad Taimoor Haseeb, Gus Xia, and Yoshimasa Tsuruoka
Poster | Ourmuse: Plot-Specific AI Music Generation for Video Advertising | Hyeseong Park, Myung Won Raymond Jung, Sanjarbek Rakhmonov, and Sngon Kim
Short | EME33: A Dataset of Classical Piano Performances Guided by Expressive Markings with Application in Music Rendering | Tzu-Ching Hung, Jingjing Tang, Kit Armstrong, Yi-Cheng Lin, and Yi-Wen Liu
Short | Decomposing Audio into Timbral Features with Convolutional Neural Networks | Nicole Cosme-Clifford
Full | Synthesising Handwritten Music with GANs: A Comprehensive Evaluation of CycleWGAN, ProGAN, and DCGAN | Elona Shatri, Kalikidhar Palavala, and George Fazekas
Full | Sparse Sounds: Exploring Low-Dimensionality in Music Generation Model | Shu Wang and Shiwei Liu
Poster | AI Assisted Workflow for Set-list Preparation with Loops for Live Musicians | Subhrojyoti Roy Chaudhuri
AI Music Showcase
Lunch Break (12:00-13:00)

Session II: Keynote, Paper Presentation, & AI Music Showcase (13:00-15:00)

Short | Graph Neural Network Guided Music Mashup Generation | Xinyang Wu and Andrew Horner
Short | Integrating Machine Learning and Rule-Based Approaches in Symbolic Music Generation | Tsubasa Tanaka
Short | Diffusion Models for Automatic Music Mixing | Xinyang Wu and Andrew Horner
Full | Relationships between Keywords and Strong Beats in Lyrical Music | Callie C. Liao, Duoduo Liao, and Ellie L. Zhang
AI Music Showcase
Keynote:
Hands-In: Let's Demystify and Remystify AI for Artists and Collaborators
Prof. Dr. Jeffrey Morris, Texas A&M University, USA

Session III: AI Music Competition & Showcase (15:00-16:00)

Music | Interpreting Graphic Notation with MusicLDM: An AI Improvisation of Cornelius Cardew's Treatise | Tornike Karchkhadze, Keren Shao, and Shlomo Dubnov
Music | Expressive MIDI-format Piano Performance Generation | Jingwei Liu
Music | The Lows' for Solo Saxophone - Composition and Improvisation Assisted by a SampleRNN Model of Instrumental Practice | Mark Hanslip
Music | Oscillations (iii) - Symbolic Neural Generation and Drifting Using Neural Networks and Constraint Algorithms | Juan Vassallo
Music | Polyphony à la Bach with Boulezian Harmony | Tsubasa Tanaka
AI Music Showcase
Closing Remarks & Award Announcements

AI Music Open Discussion (Optional) (16:00-17:00)

Coffee Break at Columbia Foyer (16:00-16:30)

*The program schedule is subject to change.

Topics

This is an open call for papers inviting original contributions on recent findings in theory, applications, and methodologies in the field of AI music generation. The list of topics includes, but is not limited to:

  • Machine learning/AI for music
  • Natural language processing for music generation
  • Algorithmic music generation
  • Music generation based on a specific aspect: lyrical, chordal, motivic, melodic, and rhythmic
  • AI-generated lyrics
  • AI-generated instrumental audio (including vocal)
  • Computational musicology
  • AI music interpretation
  • AI music data representation
  • Music evaluation metrics
  • Multiple-channel AI music generation
  • AI musical fusion (notes, audio, etc.)
  • AI generation for musical performance and expression
  • AI music enhancement (e.g. AI-generated instrumentation)
  • AI musical ethics
  • AI music generation datasets
  • Human-Computer Interaction (HCI) for AI music generation
  • AI music for neuroscientific applications
  • AI-aided music theory applications

Important Dates


Priority submission and notification dates:

  • Oct 1, 2024: Submission of full (8-10 pages) and short (5-7 pages) papers
  • Oct 8, 2024: Submission of poster abstracts (3-4 pages)
  • Oct 15, 2024: Submission of AI musical compositions (1- or 2-page abstract, 1 music sheet, and 1 mp3 audio)
  • Nov 3, 2024: Notification of paper or music acceptance

Regular submission and notification dates:

  • Oct 15, 2024: Submission of full (8-10 pages) and short (5-7 pages) papers
  • Oct 27, 2024 (extended to Nov. 4, 2024): Submission of poster abstracts (3-4 pages)
  • Oct 27, 2024 (extended to Nov. 4, 2024): Submission of AI musical compositions (1- or 2-page abstract, 1 music sheet, and 1 mp3 audio)
  • Nov 10, 2024: Notification of paper or music acceptance

Final camera-ready submission dates:

Submission

  • Please follow the IEEE conference manuscript templates (Overleaf or US Letter) and the IEEE reference guide to format your paper, and then submit it directly to the IEEE Big Data paper submission site.
  • Submissions must be in PDF format without an author list (double-blind), written in English, and formatted according to the IEEE camera-ready publication style. All papers undergo double-blind peer review.
  • Conditionally accepted papers: please replace the original submission with a single combined PDF containing the revised version and your responses to the reviewers.
  • For AI composition submissions, please include a sharable mp3 audio link (required) and music sheets (if applicable, no more than 8 pages) in the abstract (1 or 2 pages). The abstract should be structured according to the IEEE conference paper format.
  • Accepted papers will be published in the IEEE Big Data proceedings.

Program Chair

  • Callie Liao, IntelliSky, USA
  • Ellie Zhang, IntelliSky, USA

Program Committee

  • Shlomo Dubnov, University of California - San Diego, USA
  • Kaiqun Fu, South Dakota State University, USA
  • Jesse Guessford, George Mason University, USA
  • Tao Gui, Electronic Arts, USA
  • Cheng Huang, Sony, USA
  • Ge Jin, Purdue University, USA
  • Fanchun Jin, Google, USA
  • Lindi Liao, George Mason University, USA
  • Sean Luke, George Mason University, USA
  • Jeffrey Morris, Texas A&M University, USA
  • Chen Shen, Google, USA
  • Alex Wong, Yale University, USA
  • Yanjia Zhang, Boston University, USA

If you are interested in serving on the workshop program committee or in reviewing papers, please contact the Workshop Chair.