In September, a crowd gathered at the MIT Media Lab for a concert featuring musician Jordan Rudess and two collaborators. One of them, violinist and singer Camilla Bäckman, had previously performed with Rudess. The other was an AI model informally named jam_bot, which Rudess developed with a team at MIT over the preceding months, making its public debut as a work in progress.
Throughout the performance, Rudess and Bäckman exchanged signals and smiles like seasoned musicians finding their groove together. Rudess’s interactions with jam_bot suggested a different and unfamiliar type of exchange. During a Bach-inspired duet, Rudess alternated between playing a few measures and allowing the AI to continue the music in a similar baroque style. Each time the model took its turn, a range of expressions moved across Rudess’s face: perplexity, concentration, curiosity. At the end of the piece, Rudess admitted to the audience, “It’s a combination of a lot of fun and really, really challenging.”
Rudess is an acclaimed keyboardist, hailed as the greatest of all time in a Music Radar magazine poll, known for his work with the platinum-selling, Grammy-winning progressive metal band Dream Theater, which is embarking on a 40th-anniversary tour this fall. He is also a solo artist whose latest album, “Permission to Fly,” was released on September 6; an educator who shares his skills through detailed online tutorials; and the founder of the software company Wizdom Music. His work combines a rigorous classical foundation (he began his piano studies at the Juilliard School at age 9) with a genius for improvisation and a penchant for experimentation.
Last spring, Rudess became a visiting artist at the MIT Center for Art, Science and Technology (CAST), collaborating with the MIT Media Lab’s Responsive Environments group on creating new AI-based music technology. Rudess’s main collaborators in the endeavor are Lancelot Blanchard, a Media Lab graduate student researching generative AI’s musical applications (informed by his own classical piano studies), and Perry Naseck, an artist and engineer specializing in interactive, kinetic, light, and time-based media applications. The project is overseen by Professor Joseph Paradiso, head of the Responsive Environments group and a longtime fan of Rudess. Paradiso joined the Media Lab in 1994 with a background in physics and engineering and a side interest in designing and building synthesizers to explore his avant-garde musical tastes. His group has a tradition of exploring musical frontiers through new user interfaces, sensor networks, and unconventional data sets.
The researchers set out to develop a machine learning model that channels Rudess’s distinctive musical style and technique. In a paper published online by MIT Press in September, co-authored with Eran Egozy, MIT professor of music technology, they outline their vision of what they call “symbiotic virtuosity”: enabling humans and computers to duet in real time, learning from each performance they give together, and creating new music worthy of performing before a live audience.
Rudess provided the data on which Blanchard trained the AI model, along with ongoing testing and feedback, while Naseck experimented with ways to visualize the technology for the audience.
“The audience is used to seeing lighting, graphics, and stage elements at many concerts, so we needed a platform for the AI to build its own relationship with the audience,” says Naseck. In early demos, this took the form of a sculptural installation whose lighting changed each time the AI changed chords. At the September 21 concert, a grid of petal-shaped panels mounted behind Rudess came to life with choreography driven by the AI model’s current activity and what it was about to generate.
“If you see jazz musicians looking each other in the eye and nodding, it gives the audience anticipation of what’s going to happen,” explains Naseck. “The AI effectively generates scores and then plays them. How can we show what’s coming next and communicate it?”
Naseck designed and programmed the structure from scratch at the Media Lab with help from Brian Mayton (mechanical design) and Carlo Mandolini (fabrication), drawing some of its movements from an experimental machine learning model developed by visiting student Madhav Lavakare that maps music into moving points in space. With the ability to rotate and tilt its petals at speeds ranging from subtle to spectacular, the kinetic sculpture distinguished the AI’s contributions during the concert from those of the human performers while conveying the emotion and energy of its output: gently swaying when Rudess took the lead, for example, or curling and unfurling like a flower as the AI model generated majestic chords for an improvised adagio. The latter was one of Naseck’s favorite moments of the performance.
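The article does not describe the sculpture’s control software, but the mapping idea (musical energy in, petal motion out) can be illustrated with a toy function. Everything below, from the Note fields to the angle ranges, is an assumption made for illustration, not a detail of the installation itself.

```python
# Toy mapping from generated notes to petal motion (assumed, for illustration only):
# denser, louder passages produce faster, larger tilts; sparse passages sway gently.
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int       # MIDI pitch, 0-127
    velocity: int    # MIDI velocity, 0-127
    start: float     # onset time in seconds

def petal_targets(notes, now, n_petals=16, window=2.0):
    """Return a (tilt_degrees, speed_deg_per_s) target for each petal,
    derived from the notes generated in the last `window` seconds."""
    recent = [n for n in notes if now - window <= n.start <= now]
    if not recent:
        return [(5.0, 2.0)] * n_petals                 # idle: gentle sway
    density = len(recent) / window                     # notes per second
    energy = sum(n.velocity for n in recent) / (127 * len(recent))
    targets = []
    for i in range(n_petals):
        pitch = recent[i % len(recent)].pitch          # spread pitches across petals
        tilt = 10.0 + 70.0 * energy * (pitch / 127.0)  # roughly 10-80 degrees
        speed = 5.0 + 40.0 * min(density / 8.0, 1.0)   # faster when the music is busy
        targets.append((round(tilt, 1), round(speed, 1)))
    return targets

# Example: a burst of loud chords produces large, fast tilts.
chords = [Note(60 + i, 110, 0.1 * i) for i in range(12)]
print(petal_targets(chords, now=1.2)[:3])
```

A real installation would drive motor controllers rather than return numbers, but the principle is the same: the sculpture’s motion is a function of what the model is playing and is about to play.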
“At the end, Jordan and Camilla left the stage and allowed the AI to fully explore its own direction,” he recalls. “The sculpture made that moment very powerful: it kept the stage animated and heightened the grand nature of the chords played by the AI. The audience was clearly captivated by this part, sitting on the edge of their seats.”
“The goal is to create a musical visual experience,” explains Rudess, “to show what’s possible and to enhance the performance.”
Musical futures
As a starting point for his model, Blanchard used a musical transformer, an open-source neural network architecture developed by Anna Huang SM ’08, who joined the MIT faculty in September as an assistant professor.
“Musical transformers work in the same way as large language models,” explains Blanchard. “Just as ChatGPT would generate the most likely next word, the model we have would predict the most likely next notes.”
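The team’s exact architecture is not spelled out here, but the next-note idea Blanchard describes can be sketched in a few lines. The snippet below is a minimal, illustrative example: the vocabulary size, model dimensions, and function names are assumptions, not details of the MIT system.

```python
# Illustrative sketch of autoregressive next-note prediction (assumed details,
# not the MIT team's code): a causal transformer scores every possible next
# token, and sampling from that distribution extends the phrase note by note.
import torch
import torch.nn as nn

VOCAB_SIZE = 388   # assumed MIDI-like event vocabulary (note-on/off, time shifts)
D_MODEL = 256

class TinyMusicTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, tokens):
        # Causal mask: each position may only attend to earlier notes.
        seq_len = tokens.size(1)
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        hidden = self.encoder(self.embed(tokens), mask=mask)
        return self.head(hidden)  # logits over the next-token vocabulary

@torch.no_grad()
def continue_phrase(model, prompt_tokens, n_new=16, temperature=1.0):
    """Extend a prompt (e.g., a few measures played by the human) one note at a time."""
    tokens = prompt_tokens.clone()
    for _ in range(n_new):
        logits = model(tokens)[:, -1, :] / temperature
        next_token = torch.multinomial(torch.softmax(logits, dim=-1), num_samples=1)
        tokens = torch.cat([tokens, next_token], dim=1)
    return tokens

model = TinyMusicTransformer()
prompt = torch.randint(0, VOCAB_SIZE, (1, 32))  # stand-in for a tokenized phrase
print(continue_phrase(model, prompt).shape)     # torch.Size([1, 48])
```

Fine-tuning such a model on a single player’s recordings biases those next-note probabilities toward that player’s vocabulary, which is, in spirit, what training on Rudess’s material accomplishes.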
Blanchard fine-tuned the model on elements of Rudess’s own playing (bass lines, chords, and melodies), variations of which Rudess recorded in his New York studio. Along the way, Blanchard made sure the AI would be agile enough to respond in real time to Rudess’s improvisations.
“We framed the project,” says Blanchard, “in terms of musical futures that are hypothesized by the model and only realized in the moment, based on Jordan’s decisions.”
As Rudess puts it: “How can the AI react, and how can I engage with it? That’s the cutting-edge part of what we’re doing.”
Another priority emerged: “In the generative AI and music space, you hear about startups like Suno or Udio that can generate music from text prompts. These are very interesting, but they lack controllability,” explains Blanchard. “It was important for Jordan to be able to anticipate what was going to happen. If he could see the AI was going to make a decision he didn’t want, he could restart the generation or have a kill switch to take back control.”
In addition to giving Rudess a screen displaying a preview of the model’s musical decisions, Blanchard integrated different modalities that the musician could activate while playing—prompting the AI to generate chords or lead melodies, for example, or initiating a call-and-response pattern.
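To make that interaction concrete, here is one way such a control layer could be organized. The class, mode names, and methods below are illustrative assumptions based on the description above, not the project’s actual interface.

```python
# Illustrative control layer (assumed names, not the MIT project's code):
# the model proposes a short musical "future" that is previewed before it
# sounds, and the performer can switch modes, regenerate, or cut the AI off.
from enum import Enum, auto

class Mode(Enum):
    CHORDS = auto()             # AI supplies chords under the human lead
    LEAD = auto()               # AI improvises lead melodies
    CALL_AND_RESPONSE = auto()  # AI answers each human phrase

class PerformerControl:
    def __init__(self, generate_fn):
        self.generate_fn = generate_fn  # e.g., a wrapper around continue_phrase above
        self.mode = Mode.CHORDS
        self.ai_muted = False
        self.pending_future = None

    def preview(self, live_input):
        """Ask the model for a candidate continuation and show it before it plays."""
        self.pending_future = self.generate_fn(live_input, self.mode)
        return self.pending_future  # rendered on the performer's screen

    def regenerate(self, live_input):
        """Performer rejects the previewed future; request another one."""
        return self.preview(live_input)

    def kill_switch(self):
        """Silence the AI immediately and discard whatever it was about to play."""
        self.ai_muted = True
        self.pending_future = None

    def realize(self):
        """Hand the accepted future to the synth only if the AI is not muted."""
        if self.ai_muted or self.pending_future is None:
            return None
        return self.pending_future
```

The design choice this sketch illustrates is that nothing generated reaches the audience until the performer has had a chance to see it, which matches Blanchard’s emphasis on controllability.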
“Jordan is the brain of everything that’s happening,” he says.
What would Jordan do
Although the residency has ended, the collaborators see many avenues for continuing the research. For example, Naseck would like to experiment with more ways Rudess could interact directly with his installation, through features like capacitive sensing. “We hope that in the future, we can work with more subtle movements and postures,” says Naseck.
While the collaboration with MIT focused on how Rudess can use the tool to augment his own performances, it’s easy to imagine other applications. Paradiso recalls an early encounter with the technology: “I played a chord sequence, and Jordan’s model generated the leads. It was like a musical ‘bee’ of Jordan Rudess buzzing around the melodic base I was establishing, doing something like Jordan would do, but subject to the simple progression I was playing,” he recalls, his face echoing the delight he felt at the time. “You’re going to see AI plugins for your favorite musician that you can integrate into your own compositions, with buttons allowing you to control the details,” he postulates. “That’s the kind of world we’re opening up with this.”
Rudess also wants to explore educational uses. Since the samples he recorded to train the model were similar to the ear training exercises he used with students, he believes the model itself could one day be used for pedagogical purposes. “This work goes beyond mere entertainment value,” he says.
The foray into artificial intelligence is a natural progression of Rudess’s interest in music technology. “It’s the next step,” he believes. However, when discussing his work with other musicians, his enthusiasm for AI often meets resistance. “I can have sympathy or compassion for a musician who feels threatened, I totally understand that,” he admits. “But my mission is to be part of those who move this technology towards positive things.”
“At the Media Lab, it’s very important to think about how AI and humans come together for the benefit of all,” says Paradiso. “How is AI going to elevate us all? Ideally, it will do what so many technologies have done: bring us to another perspective where we are better able to help ourselves.”
“Jordan is ahead of the pack,” adds Paradiso. “Once it’s established with him, people will follow.”
Jamming with MIT
The Media Lab first landed on Rudess’s radar before his residency because he wanted to try the knitted keyboard created by another member of Responsive Environments, textile researcher Irmandy Wickasono PhD ’24. From that point on, “it was a discovery for me to find out about the interesting things happening at MIT in the world of music,” says Rudess.
During two visits to Cambridge last spring (accompanied by his wife, theater and music producer Danielle Rudess), Rudess reviewed the final projects from Paradiso’s course on electronic music controllers, which included videos of his own past performances. He brought a new gestural synthesizer called Osmose to a class on interactive music systems taught by Egozy, whose credits include co-creating the video game “Guitar Hero.” Rudess also gave advice on improvisation to a composition class; played GeoShred, a touchscreen musical instrument he co-created with researchers at Stanford University, with student musicians from the MIT Laptop Ensemble and Arts Scholars program; and experimented with immersive sound at the MIT Spatial Sound Lab. On his last campus visit in September, he gave a masterclass for pianists as part of the MIT Emerson/Harris program, which provides a total of 67 scholars and fellows with support for conservatory-level music instruction.
“I feel a kind of urgency every time I come to the university,” says Rudess. “I feel like, wow, all my musical ideas, inspiration, and interests have come together in this really cool way.”