Music can touch the hearts of any audience, even one with no knowledge of its context. The power of music is transcendental, and it stems from the timbre of the instruments, the underlying rhythmic structure and melody, the dynamics, the instrumentation, and much more, all of which work together in some form of harmony to create the final product.
to create the final product. With the recent rise of Artificial Intelligence-Generated
Contents (AIGC), AI for music is a promising field full of creativity, novel
methodologies, and technologies that are yet to be explored. Currently, AI for music
methods have been commonly concentrated on utilizing machine learning and deep learning
techniques to generate new music. Despite the significant milestones that have been
achieved thus far, many are not necessarily robust for a wide range of applications.
AI music itself is a timely topic. This workshop aims to generate momentum around this topic of growing interest and to encourage interdisciplinary interaction and collaboration across AI, music, Natural Language Processing (NLP), machine learning, multimedia, Human-Computer Interaction (HCI), audio processing, computational linguistics, and neuroscience.
It serves as a forum to bring together active researchers and practitioners from
academia and industry to share their recent advances in this promising area.
This is an open call for papers inviting original contributions on recent findings in theory, applications, and methodologies in the field of AI music generation. The list of topics includes, but is not limited to:
Priority submission and notification dates:
Regular submission and notification dates:
Final camera-ready submission dates:
If you are interested in serving on the workshop program committee or reviewing papers, please contact the Workshop Chair.