In recent years, artificial intelligence has been reshaping the way we think about music composition. From rule-based systems and generative grammars to data-driven deep learning models, machines now engage in creative processes once considered uniquely human. This presentation examines how contemporary AI systems represent and generate music, either through symbolic formats such as scores and MIDI or through subsymbolic representations such as audio waveforms and latent embeddings. It introduces the concept of hybrid representation models, which aim to combine the structural clarity of symbolic systems (e.g., DeepBach, Music Transformer) with the expressive power of deep learning methods (e.g., WaveNet, MusicLM). Case studies of prototype tools and pedagogical exercises will illustrate how such hybrid approaches bridge theory and practice, opening new avenues for AI art within academic education.
Daniel Kvak is a Ph.D. candidate at the Faculty of Arts, Masaryk University, where he explores the intersection of artificial intelligence, algorithmic composition, and representation models in music. Beyond musicology, he is actively involved in interdisciplinary research connecting AI, the digital humanities, and visual data, and has contributed to international conferences and academic publications across both artistic and scientific domains.