Sound mixer | Credit: Marcela Laskoski on Unsplash

Forget the controversy about using artificial intelligence in music. Beyond right or wrong, enhancement of new works or loss of careers, embracing or giving up, AI marches on. Let's take a look at what is actually happening.

The answer: a lot. A whole, amazing, unexpected lot.

AI itself says: “AI in music uses artificial intelligence to generate, animate, and sync visuals for songs, or even compose music/lyrics, simplifying what used to require manual expertise. These are emerging paradigms in music production where human creativity is merged with artificial intelligence tools. This approach is changing how music is created, edited, and produced.”

It may surprise some that the relationship between AI and music goes back nearly seven decades: in 1957, the ILLIAC I (Illinois Automatic Computer) generated a string quartet, a completely computer-composed piece of music.

In 1965, inventor Ray Kurzweil developed software capable of recognizing musical patterns and synthesizing new compositions from them; he demonstrated the system on the quiz show I've Got a Secret that same year.

Already in 2021, AI was used to complete Beethoven's unfinished 10th Symphony, which was premiered with a live orchestra. Yamaha's AI systems include the Dear Glenn project, which can play any piece in Glenn Gould’s style, and Duet With YOO, in which the AI analyzes and responds to a human pianist in real time. Tools such as ORB Composer use AI to create full orchestral arrangements from simple musical ideas, offering composers a virtual orchestra to test and refine their works. OpenAI's MuseNet and Google’s Magenta Project can compose original classical music, often mimicking the styles of famous composers or pushing the boundaries of creativity.

In a recent Guardian article, Tarik O’Regan describes the atmosphere in the Bay Area: “Nobody I meet in San Francisco — where this technology is being dreamed up, built, and sold — is riding a wave. Riding a wave means surrendering to its pull. The people here have no interest in that. They’re trying to control the tides, to shift the moon if necessary.”

In thinking about AI today, let’s hear what some humans told SF Classical Voice.

Alexandra Ivanoff, a well-known San Francisco musician and longtime expatriate in Turkey and Hungary, raises our central questions: “Are algorithms only able to repeat and rehash what’s already been composed and fed into their systems? Do the results mean that composers will be limited in their sound and style options? Will composers using AI truly be able to be original?”

THE CRITICS

Composer David Denniston: “I feel like we have created Jurassic Park in Golden Gate Park, without a fence. ‘Oh, that’s cute,’ everyone says, except for writers, artists, and composers. I’m writing gigantic orchestral scores these days, but the Skynet Symphony will be composed in under a second in a windowless server farm in the desert, and we’ll all be looking into the abyss. I need to write something elegiac. Maybe with more than three hammer blows.”

Bart Picqueur: “Honestly, as a composer, I am scared. For copywriters and illustrators, it is already fatal. For us, we’ve got a few months to keep up appearances, I think. I guess I’m getting old, so it was time to devote my time to billiards anyway…”

Composer-technologist Jay Swami | Credit: Anand Kanneh, courtesy of Jay Swami
THE OPTIMISTS

Composer and percussionist Sabrina Peña Young says a positive use of AI in music is in marketing: “You can use AI to build a monthly social media calendar or a custom marketing plan for your next album release. I only trust Perplexity Deep Search. It’s correct and provides real sources without much hallucinating. You can input class unit content and get ideas for lesson plans. Do quick formatting like for bios or grants or a resume. Notice, this isn’t for the creation of music. I’ve used it to analyze the chords of famous works, but not for creation.”

Composer-technologist Jay Swami wrote to SF Classical Voice: “The current debate around generative music is framed as a false binary: anti-AI artists versus pro-AI ‘prompters.’ In reality, there are three groups. First are the fearful and uninformed, reacting to headlines rather than tools. Second are pure prompters with little musical grounding. The third group is quieter and far more consequential: musicians, producers, and serious hobbyists who already live inside DAWs (Digital Audio Workstations), play instruments, write lyrics, arrange, and mix. For them, Gen-AI is not a replacement but an accelerator. They feed in demos, audio ideas, chord progressions, rhythms, lyrics, and clear artistic intent. This is no different in spirit from sample libraries, MIDI programming, or Auto-Tune.

“AI slop is real, but as long as the workflow is not pure prompting and human intent drives the inputs, the intelligence remains human. Hybrid workflows don’t erase artistry; they amplify it.”

SFCV board member Thomas Varghese responded to Swami: “You live in this world of tech and music, where the AI developer in you has created a ‘producer’ who now feels like your own nemesis — yet you still find the courage to confront it and hope to overcome it.”

ANECDOTES
Marika Kuzma | Credit: Lisa Keating

Administrator Linda Rogers: “At the Scarborough Philharmonic, we did a great project called ‘Songs of Hope’ that resulted in 15 compositions inspired by themes of hope. One of the composers, Bruno Degazio, asked a colleague to animate his composition, which had a text from St. Thomas Aquinas. The animator (an artist in his own right) decided to use AI as a tool to animate some visual art of the same period as the text. While the animated film was hardly ‘solely AI’ and most of us found it lighthearted and appropriate, we got huge pushback from a few members of our classical audience when we showed it prior to a concert.”

Digital creator SN Fender Jr. wrote: “Like all, I have no idea where it is going. Just recently Bill Reynolds, the bass player for a rockin’ rag-based band, has begun pushing a new project, Feel Spector, billed as a biography of ‘Old Hickory,’ making heavy use of AI. I’m pretty sure the project is satirical and built to poke fun at the fakeness of AI and also of the commercialization of Nashville music.”

UC Berkeley Professor Emerita Marika Kuzma: “A friend recently shared his experiment with AI music composition. It was surprisingly convincing. He had written a poem about nuclear physics (no kidding) and plugged in some directions for three different song versions. The melodies were all formulaic, but maybe no more so than a lot of pop melodies. The instrumental riffs were unremarkable but supported the melodies well. The vocals were good but showed no imperfections (i.e., humanity). Even the version in metal style sounded too polished somehow. Mostly the music sounded disconnected from the words. AI-generated stuff is impressive but lacking in something that seems hard to pre-program.”

Katharina Natividad, a logistics consultant for major opera and ballet companies: “Two weeks ago, we were in an Uber in Taiwan. The driver had some nice calm vocal jazz playing. We noted down the YouTube channel to check it out later. Then we found out that all the songs were AI and that there are so many channels that already offer AI music. We listened to it for a bit, but then it became somehow repetitive and boring, and we went back to listening to real music and artists again.”

THE WORKERS

The intersection of AI and labor is a thorny issue. It was at the center of the lengthy, consequential strikes by the Writers Guild of America and SAG-AFTRA.

SF Opera’s Collective Bargaining Agreement includes “an agreement to set up a working group to explore the potential opportunities and challenges of AI as it relates to the Orchestra.”

An SF Symphony spokesperson told SF Classical Voice that “Media Agreements between organizations and artists are separate from union contracts.”

These agreements outline how an organization records, distributes, and monetizes its performances and content. The Integrated Media Agreement governs how orchestras can record and use media for promotional purposes, streaming, broadcasts, and other digital distribution. Guidelines for live streaming, broadcasting, and recordings, along with other agreements, cover larger-scale recording projects.

New World Symphony's "Wallcast", now AI-enhanced. | Credit: Courtesy of New World Symphony
IMPLEMENTATION

A few examples of AI being employed in music around the world:

Michael Tilson Thomas’s New World Symphony in Miami has been a pioneer in exploring the use of AI. Just a month ago, NWS updated its DiGiCo console, adding a Quantum225 Pulse. Performances are projected, with live mixing, onto the venue’s 7,000-square-foot Wallcast screen for the public in the SoundScape Park outdoor listening area.

South Korean and Chinese TV series, among others in Asia, are increasingly using AI-generated soundtracks, from full scores for short-form series to musical ideas for human composers. AI technology has been used to create virtual K-pop idols (like SM’s Naevis) that release their own singles.

Voice synthesis has also been used to digitally resurrect the voices of deceased singers for posthumous performances on television.

Major music labels, such as the South Korean HYBE (the company behind boy band BTS), use AI to translate and adapt songs into multiple languages with flawless pronunciation to reach a global audience.

In the U.S., Universal and Warner have signed their first licensing deals with AI firms, grappling with how music catalogs are licensed for AI training and how artists get paid, while — observers say — “protecting the value of their libraries in a world where anyone can generate a convincing imitation.”