Music can touch the hearts of any audience, even one with no knowledge of its context. Its transcendental power stems from the timbre of the instruments, the fundamental rhythmic structure and melody, the dynamics, the instrumentation, and much more, all cooperating in harmony to create the final product. With the recent rise of Artificial Intelligence-Generated Content (AIGC), AI for music is a promising field full of creativity, novel methodologies, and technologies yet to be explored. Current AI-for-music methods concentrate largely on machine learning and deep learning techniques for generating new music. Despite the significant milestones achieved thus far, many approaches are not yet robust across a wide range of applications.
AI music is a timely topic. This workshop aims to generate momentum around this topic of growing interest and to encourage interdisciplinary interaction and collaboration among AI, music, Natural Language Processing (NLP), machine learning, multimedia, Human-Computer Interaction (HCI), audio processing, computational linguistics, and neuroscience.
It serves as a forum that brings together active researchers and practitioners from academia and industry to share their recent advances in this promising area.
Dec. 15, 2024, Washington DC, USA (GMT-5)
Virtually: Please join IEEE Big Data Workshop - AIMG 2024 using your paper author information.
Physically: Hyatt Regency Washington on Capitol Hill, Conference Room - COLUMBIA A (Ballroom level)
Paper Type | Paper Title | Author(s) |
Coffee Break at Columbia Foyer (10:00-10:20) | ||
Session I: Paper Presentation & AI Music Showcase (10:20-12:00) | ||
Opening Remarks | ||
Poster | Musical Scene Detection in Comics: Comparing Perception of Humans and GPT-4 | Megha Sharma, Muhammad Taimoor Haseeb, Gus Xia, and Yoshimasa Tsuruoka |
Poster | Ourmuse: Plot-Specific AI Music Generation for Video Advertising | Hyeseong Park, Myung Won Raymond Jung, Sanjarbek Rakhmonov, and Sngon Kim |
Short | EME33: A Dataset of Classical Piano Performances Guided by Expressive Markings with Application in Music Rendering | Tzu-Ching Hung, Jingjing Tang, Kit Armstrong, Yi-Cheng Lin, and Yi-Wen Liu |
Short | Decomposing Audio into Timbral Features with Convolutional Neural Networks | Nicole Cosme-Clifford |
Full | Synthesising Handwritten Music with GANs: A Comprehensive Evaluation of CycleWGAN, ProGAN, and DCGAN | Elona Shatri, Kalikidhar Palavala, and George Fazekas |
Full | Sparse Sounds: Exploring Low-Dimensionality in Music Generation Model | Shu Wang and Shiwei Liu |
Poster | AI Assisted Workflow for Set-list Preparation with Loops for Live Musicians | Subhrojyoti Roy Chaudhuri |
AI Music Showcase | ||
Lunch Break (12:00-13:00) | ||
Session II: Keynote, Paper Presentation, & AI Music Showcase (13:00-15:00) | ||
Short | Graph Neural Network Guided Music Mashup Generation | Xinyang Wu and Andrew Horner |
Short | Integrating Machine Learning and Rule-Based Approaches in Symbolic Music Generation | Tsubasa Tanaka |
Short | Diffusion Models for Automatic Music Mixing | Xinyang Wu and Andrew Horner |
Full | Relationships between Keywords and Strong Beats in Lyrical Music | Callie C. Liao, Duoduo Liao, and Ellie L. Zhang |
AI Music Showcase | ||
Keynote: Hands-In: Let's Demystify and Remystify AI for Artists and Collaborators Prof. Dr. Jeffrey Morris, Texas A&M University, USA | ||
Session III: AI Music Competition & Showcase (15:00-16:00) | ||
Music | Interpreting Graphic Notation with MusicLDM: An AI Improvisation of Cornelius Cardew's Treatise | Tornike Karchkhadze, Keren Shao, and Shlomo Dubnov |
Music | Expressive MIDI-format Piano Performance Generation | Jingwei Liu |
Music | The Lows' for Solo Saxophone - Composition and Improvisation Assisted by a SampleRNN Model of Instrumental Practice | Mark Hanslip |
Music | Oscillations (iii) - Symbolic Neural Generation and Drifting Using Neural Networks and Constraint Algorithms | Juan Vassallo |
Music | Polyphony à la Bach with Boulezian Harmony | Tsubasa Tanaka |
AI Music Showcase | ||
Closing Remarks & Award Announcements | ||
AI Music Open Discussion (Optional) (16:00-17:00) | ||
Coffee Break at Columbia Foyer (16:00-16:30) | ||
*The program schedule is subject to change.
This is an open call for papers soliciting original contributions on recent findings in the theory, applications, and methodologies of AI music generation. The list of topics includes, but is not limited to:
Priority submission and notification dates:
Regular submission and notification dates:
Final camera-ready submission dates:
If you are interested in serving on the workshop program committee or in paper reviewing, please contact the Workshop Chair.