To my ears, music generated by AI is not convincing – yet.

As a computer musician with formal training in music (violin) and science (engineering and mathematics), I have always been fascinated by exploring and expanding the possibilities of computer-assisted composition. With this in mind, I don’t feel threatened by AI in these early stages, provided it is used in an ethically responsible way.


I believe music exists to convey human emotion and expression through shared experience, such as live performance and other forms of engaged listening. As a computer musician, I emphasise the meticulous crafting of sound with the aim of conveying specific human emotion, not only through synthetic sound but also by integrating acoustic material. I combine real instruments in my works to connect with the real world and create a musical signature – a sonic watermark.

Invoking synergy between machine and human imagination, with the intention of communicating musical expression and emotion through sound, is one of my primary goals. This notion builds upon, and extends, many 20th-century movements of avant-garde and computer music composition, within a contemporary 21st-century setting.

When I come across increasingly promoted AI-generated music on streaming platforms, however, I reach an existential confrontation with synthetic, non-human-generated artforms. An estimated 100 million songs or musical tracks are available on streaming platforms, with a growing volume of AI-generated material proliferating through playlists.


AI-generated music is primarily based on pre-existing material fed into complex machine learning algorithms. While these databases are growing exponentially, they overlook oral traditions and the less commercial or more experimental musical forms that are not available on commercial platforms.

One of my first lessons in engineering was that a computer is a ‘very dumb machine doing multitudes of simple calculations very quickly’.

A few decades later, we are entering a different scenario with the emergence of artificial intelligence.

The importance of feeling and expression is overlooked in the current format of AI-generated music. In our post-digital world, I believe understanding music from within should be a foundational educational priority for young and future generations of creators and listeners.

Just as consuming less plastic reduces our individual environmental footprint, it has become a crucial priority to promote cultural sustainability: oral tradition, singing, playing and dancing – living music in all its manifestations.

As humans, we therefore need to keep our cultural footprint intact, alive and evolving. In these early stages of AI development, digital rights management should be scrutinised, and wider public awareness of the advantages and pitfalls of AI-generated material encouraged. Governments and legislative institutions have a crucial role in ensuring we don’t fall into a potential cultural cul-de-sac.


Artistic expression makes us human

Our cultural footprint is the DNA of our existence as human beings; artistic expression, communication, conceptualising and dreaming are the essence of our evolution. That won’t stop simply because digital systems can simulate the past and propose average renditions of what our past might have sounded like.

We as humans must forge the future and create new forms of human expression to survive – beyond the music and paintings of AI – making music even better and more real as we walk across the bridges built by our ancestors. This time, however, there is no turning back: AI will be omnipresent in our lives in the contemporary world and beyond.

It is our duty as musicians and creators to preserve and transmit musical knowledge and heritage accurately through performance practice, composition and innovation, and to remain at the forefront of potentially threatening technologies such as generative AI music. One way to achieve this is to write music that AI cannot recreate, relying on human musical and sonic imagination to design and share innovative listening experiences with new audiences.

Music has fashions

Musical movements, styles and aesthetics have burgeoned exponentially from medieval times to the digital era. They have been transformed by recording technologies and the introduction of computer technology, forging an entirely different sonic ecosystem. The diversity and proliferation of music genres continues to grow over time.

Music has fashions too, and according to some sources there are over 1,000 genres associated with popular music alone. In art music, categories are defined more broadly by style, but the number of works composed over several centuries runs into the millions. Each of these works has a unique musical signature and an individual form of expression.

The 20th century is known above all for its effervescence in the visual world, with the introduction of cinema, video and all combined forms of visual art.

Similarly, in the 21st century, sound and music will evolve rapidly and in unprecedented ways, transformed by AI and other systems that allow a deeper understanding of music’s inherent codes and of how they represent human perception of the world around us.

From the advent of sound recording to the world-first generation of music by a computer – Australia’s CSIRAC playing the Colonel Bogey March in 1951 – we have been fascinated with capturing, generating, transforming, creating and archiving material that represents human evolution through time.

Although AI can be somewhat useful in understanding the music of the past, we should create new music that is not ‘corporate databased’, to generate new interest in the music-making process altogether.

New perspectives in music

As a composer, I see music as an interpretation of the world we live in. Today, we are confronted with our own inventions, essentially competing with processes that we ourselves invented to further understand our origins as humans. I still haven’t heard a good AI version of didgeridoo and clap sticks, and I hope never to come across one. Then again, it could be useful in preserving tradition.

Preserving cultural heritage through real sound, perhaps with the aid of AI in ways more useful than imitation, could be of benefit. In rural societies, songs and the enjoyment of acoustic instruments and the human voice still prevail; there are surely ways, not yet apparent to us, of using this technology effectively to keep our cultural footprint intact and evolving – creating new fashions.

AI as teaching assistant

At Sydney Conservatorium of Music, a beacon for keeping diverse musical traditions alive in Australia, I have no choice but to address and introduce the concept of AI in our composition units, mainly to highlight its creative possibilities, but certainly not to replace any aspect of musical tradition, performance, and creativity.

One good example is to compare what students can achieve with mixing and mastering by themselves against the very generic options offered by an AI tool. Often the AI version is not optimal, as it relies on broad stylistic attributes and a one-size-fits-all solution, failing to factor expressive nuance into the equation.

Crafting sound involves personal and individual taste, informed by knowledge of music history, aesthetics and, of course, experience. In this context, we discuss the differences from a technical and expressive standpoint. Rather than a threat, AI serves as a good assistant in the decision-making process, helping to emphasise human emotion in music.

Students should be made aware of evolving technologies so they can navigate independently through changing trends and disciplines. The goal is to make informed and ethical decisions based on critical thinking and analysis, balancing craft and technique with unique creativity. In this context the emphasis is on promoting human expression and different perceptions of music and art in general. This can only be achieved through experiential transmission of knowledge.

Breaking musical boundaries

Music has, in some cultures, been associated with stardom and success. AI development companies are seeking ways to capitalise on the music industry by intervening in its transformation from tangible media towards ever more accessible and predominant listening habits in the worldwide consumption of the ‘musical experience’.

This imminent challenge to our ability to distinguish between what was created by a machine and what is human may be viewed by some as an existential threat. But it shouldn’t take away our desire as humans to walk across that sonic bridge of imagination, with the ambition of forging new paths into the future of music!

It takes many engineers to design and build bridges. Similarly, perhaps one way forward is to develop new models of collective artistry, creating extraordinary music with teams of composers participating in a new effervescence of musical expression in the 21st century. Drawing on the human pool of talent, resources, combined skills and cultural diversity could, and hopefully will, create something greater than what any individual could achieve alone.

Dr Ivan Zavada is Senior Lecturer and Program Leader in Composition and Music Technology at Sydney Conservatorium of Music.
