A crowd gathered at the MIT Media Lab in September for a concert featuring acclaimed musician Jordan Rudess and two collaborators. One was violinist and vocalist Camilla Bäckman, who has performed with Rudess before. The other, an artificial intelligence model known as the jam_bot, was making its public debut after months of development with Rudess and his MIT team.
Throughout the performance, the chemistry between Rudess and Bäckman was easy to read as they traded smiles and musical cues and settled into a shared groove. Rudess’ exchanges with the jam_bot suggested a newer, less familiar kind of musical dialogue: during a Bach-inspired duet, he alternated between playing his own passages and letting the AI carry the melody in a similar baroque style. The expressions that crossed his face, from curiosity to concentration to bemusement, reflected both the challenge and the excitement of the collaboration. Wrapping up the piece, he told the audience, “That is a combination of a whole lot of fun and really, really challenging.”
Jordan Rudess is widely regarded as one of the greatest keyboardists of his generation, a reputation borne out by a Music Radar poll. He is best known for his work in the Grammy-winning progressive metal band Dream Theater, which embarks on its 40th anniversary tour this fall. Rudess is also a solo artist whose latest album, “Permission to Fly,” was released on September 6; an educator who shares his expertise through detailed online tutorials; and the founder of the software company Wizdom Music. His playing blends rigorous classical training, begun at The Juilliard School at age 9, with a passion for improvisation and experimentation.
Last spring, Rudess accepted the role of visiting artist at the MIT Center for Art, Science and Technology (CAST), where he collaborated with the Responsive Environments research group at the Media Lab to develop cutting-edge AI-powered music technology. Teaming up with him were graduate students Lancelot Blanchard, who explores generative AI’s musical applications, and Perry Naseck, an artist-engineer skilled in interactive and kinetic media. Professor Joseph Paradiso, head of the Responsive Environments group and an ardent fan of Rudess, has long been involved in exploring musical frontiers through innovative user interfaces and sensor networks.
The research group sought to create a machine learning model that could emulate Rudess’ distinctive musical style. In a paper released in September by MIT Press, co-authored with MIT music technology professor Eran Egozy, the team articulated their concept of “symbiotic virtuosity”: a human and a computer duetting in real time, learning from each performance they give together, and generating new music live on stage.
Rudess provided the data needed to train the AI model and tested it continuously, giving Blanchard feedback, while Naseck focused on crafting visual experiences that would engage the audience. “Audiences expect visuals like lighting and graphics at concerts, so we needed a platform that enabled the AI to build its own connection with them,” Naseck explained. Early demonstrations evolved into an interactive sculptural installation whose lighting shifted with the AI’s chord changes. At the concert on September 21, a grid of petal-shaped panels behind Rudess moved in response to both the rhythm of the music and the AI’s predictions of what it would play next.
Naseck highlighted the importance of non-verbal communication between musicians, drawing parallels to jazz performers. “The AI generates sheet music while playing; how do we visualize what’s next and convey that anticipation?” he asked. He engineered and programmed the kinetic sculpture at the Media Lab, aided by Brian Mayton (mechanical design) and Carlo Mandolini (fabrication). The sculpture’s movements drew on an experimental machine learning model, created by visiting student Madhav Lavakare, that aimed to map music to movement in space. Able to spin and tilt dramatically, the installation visually distinguished the AI’s contributions from those of the human performers while conveying the emotion and intensity of the music it generated.
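As a rough illustration of how predicted music might drive such a sculpture, the sketch below maps a handful of upcoming (predicted) pitches to tilt angles for a row of panels, so that higher notes lift their panels further. The mapping, pitch range, and function name are assumptions made for explanation only, not the installation’s actual code.

```python
# Rough illustration of driving a kinetic sculpture from predicted notes:
# upcoming pitches are spread across a row of panels and converted to tilt
# angles, so higher notes lift their panels further. The mapping, pitch
# range, and function name are assumptions, not the installation's code.
def panel_tilt_angles(predicted_pitches, n_panels=8, max_tilt_deg=45.0):
    """Map predicted MIDI pitches to per-panel tilt angles in degrees."""
    angles = [0.0] * n_panels
    for i, pitch in enumerate(predicted_pitches[:n_panels]):
        # Linearly map the range C3 (48) to C6 (84) onto 0..max_tilt_deg.
        angles[i] = max_tilt_deg * (pitch - 48) / (84 - 48)
    # Clamp so out-of-range predictions never over-drive the actuators.
    return [min(max(a, 0.0), max_tilt_deg) for a in angles]

# An ascending predicted run makes the panels rise across the grid.
print(panel_tilt_angles([60, 64, 67, 72, 76]))
```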
“At one moment, Jordan and Camilla stepped back to let the AI explore on its own,” Naseck recalled. “The sculpture illuminated the stage, enhancing the power of the AI’s output. The audience was clearly captivated, sitting on the edge of their seats.”
“Our goal is to craft a musical and visual experience,” Rudess stated, “to expand the possibilities and elevate expectations.”
Exploring Musical Futures
Blanchard used a music transformer—a sophisticated open-source neural network designed by MIT Assistant Professor Anna Huang SM ’08—as the foundation for the model. “Music transformers function similarly to large language models,” he explained. “Just as ChatGPT predicts the next word, our model anticipates the next notes.”
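To make that analogy concrete, the sketch below shows the autoregressive idea in miniature: a model looks at the notes played so far, outputs a probability distribution over what comes next, and the system samples from it repeatedly. The toy vocabulary, placeholder model, and function names are illustrative assumptions, not the team’s actual code.

```python
# Illustrative sketch of autoregressive note prediction, analogous to a
# language model predicting the next word. The toy vocabulary and the
# placeholder "model" below are assumptions, not the jam_bot's actual code.
import numpy as np

VOCAB = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "REST"]  # toy note tokens

def predict_next_distribution(history, rng):
    """Stand-in for a trained music transformer: returns a probability
    distribution over the next token, given the tokens played so far."""
    logits = rng.normal(size=len(VOCAB))  # a real model would compute these from `history`
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                # softmax over the vocabulary

def continue_melody(seed, n_steps=8, random_seed=0):
    """Repeatedly sample the next note and append it, just as a language
    model extends a sentence one token at a time."""
    rng = np.random.default_rng(random_seed)
    tokens = list(seed)
    for _ in range(n_steps):
        probs = predict_next_distribution(tokens, rng)
        tokens.append(str(rng.choice(VOCAB, p=probs)))
    return tokens

print(continue_melody(["C4", "E4", "G4"]))
```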
Blanchard fine-tuned the model on recordings of bass lines, chords, and melodies that Rudess made in his New York studio, so that the AI could respond in real time to his improvisations.
“We reframed the project in terms of musical futures hypothesized by the AI, realized as Jordan made decisions in the moment,” shared Blanchard.
Rudess further emphasized this collaborative dialogue, asking, “How can the AI respond to me? How can I engage in a conversation with it? That’s the cutting-edge aspect of what we’re developing.”
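One way to picture the “musical futures” framing is speculative generation: the model proposes several possible continuations ahead of time, and only the hypotheses consistent with what the performer actually plays survive into the next moment. The sketch below is a hypothetical illustration of that idea under assumed function names and a simple matching rule; it is not the jam_bot’s implementation.

```python
# Hypothetical sketch of the "musical futures" idea: the model speculatively
# proposes several continuations in advance, and only those consistent with
# what the performer actually plays are kept ("realized"). Names and the
# matching rule are illustrative assumptions, not the team's implementation.
import random

def generate_futures(history, n_futures=4, length=4, vocab=("C", "D", "E", "G")):
    """Stand-in for the model proposing possible continuations; a real
    system would condition these on `history` rather than choose randomly."""
    return [[random.choice(vocab) for _ in range(length)] for _ in range(n_futures)]

def realize(futures, note_played):
    """Keep only the futures whose first note matches what the human just
    played, and advance them by one step."""
    return [future[1:] for future in futures if future and future[0] == note_played]

futures = generate_futures(history=["C", "E", "G"])
surviving = realize(futures, note_played="E")
print(f"{len(surviving)} of {len(futures)} futures remain consistent")
```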
An equally important priority surfaced: controllability in music generation. “While approaches like Suno and Udio generate music from text prompts, they lack control,” Blanchard noted. “It was crucial for Jordan to predict outcomes. If he sensed the AI would make an undesirable decision, he could restart or regain control.”
Blanchard also gave Rudess a screen showing the musical decisions the AI was making, along with several modes he could activate while playing: prompting the AI to improvise melodies, generate chords, or engage in call-and-response patterns.
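A simple way to think about these performer-controlled modes is as a dispatcher: the mode Rudess selects determines what kind of material the AI is allowed to produce next. The following sketch illustrates that pattern with assumed mode names and stubbed generators; it is not the actual system.

```python
# A minimal sketch of performer-controlled generation modes: the mode the
# keyboardist selects determines what the AI produces next. Mode names and
# the stubbed generators are assumptions for illustration only.
from enum import Enum, auto

class Mode(Enum):
    MELODY = auto()             # AI improvises a melodic line
    CHORDS = auto()             # AI supplies chord accompaniment
    CALL_AND_RESPONSE = auto()  # AI answers the performer's last phrase
    SILENT = auto()             # AI lays out; the human keeps full control

def generate(mode, last_phrase):
    """Dispatch to a different (stubbed) generator depending on the mode
    the performer has toggled on stage."""
    if mode is Mode.MELODY:
        return ["E4", "G4", "A4", "B4"]
    if mode is Mode.CHORDS:
        return [("C4", "E4", "G4"), ("A3", "C4", "E4")]
    if mode is Mode.CALL_AND_RESPONSE:
        return list(reversed(last_phrase))  # toy "answer": mirror the call
    return []                               # SILENT: generate nothing

print(generate(Mode.CALL_AND_RESPONSE, last_phrase=["C4", "D4", "E4"]))
```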
“Jordan masterminds everything happening,” Blanchard emphasized.
Future Directions
Although the residency has concluded, the collaborators see many directions to take the work. Naseck is particularly interested in richer forms of interaction, such as adding capacitive sensing so the system can pick up more nuanced gestures and movements from Rudess. “Our hope is to integrate more subtlety into how he interacts,” he said.
The MIT partnership, which concentrated on enriching Rudess’ performance, has implications beyond the stage. Professor Paradiso reminisced about an early experience with the technology: “I played a chord sequence while the AI generated leads, creating a musical ‘bee’ buzz around the foundation I laid down,” he recalled, delight evident on his face. “Imagine AI plugins that musicians could use in their compositions—a world of creative possibilities is unfolding.”
Rudess is enthusiastic about potential educational applications. The samples he recorded for training the AI align with ear-training exercises he uses for music students, hinting at the model’s future as a teaching tool. “This work stretches beyond entertainment,” he affirmed.
Exploring the intersection of artificial intelligence and music comes naturally for Rudess, who views it as an evolution in his technology-driven journey. However, his passion for AI often meets skepticism from fellow musicians. “I empathize with those who feel threatened; I understand their concerns,” he reflected. “But I aim to guide this technology toward positive outcomes.”
“At the Media Lab, merging AI and human capabilities for mutual benefit is crucial,” Paradiso remarked. “How will AI elevate our creative endeavors? Ideally, it will broaden our horizons and enhance our abilities, just like so many past technologies.”
“Jordan is leading the frontier,” Paradiso concluded. “Once established with him, others will undoubtedly follow.”
Connecting with MIT
Rudess’ introduction to the Media Lab predates his residency, sparked by his interest in the Knitted Keyboard created by textile researcher Irmandy Wicaksono PhD ’24. From that point forward, he has immersed himself in the music-related innovations underway at MIT. During two visits to Cambridge last spring, accompanied by his wife, theater and music producer Danielle Rudess, he evaluated projects in Professor Paradiso’s electronic music controller course, whose curriculum featured videos of his past performances. He demonstrated an innovative gesture-driven synthesizer called Osmose in a class on interactive music systems taught by Egozy, and offered improvisation tips in a composition class. He also collaborated with student musicians in the MIT Laptop Ensemble, played GeoShred, a touch-based musical instrument he developed with researchers from Stanford University, and explored immersive audio technology in the Spatial Sound Lab. In September, he conducted a masterclass for pianists in MIT’s Emerson/Harris Program, which supports 67 talented scholars and fellows in conservatory-level musical training.
“Every visit to the university gives me an exhilarating rush,” Rudess shared. “It’s a thrilling experience where all my musical ideas and inspirations converge beautifully.”