Imagine a boombox that not only plays music but also tracks your movements to suggest tunes that perfectly match your unique dance style. This captivating concept is explored in the project “Be the Beat,” part of the MIT course 4.043/4.044 (Interaction Intelligence), led by Marcelo Coelho in the Department of Architecture. The course’s innovative projects were showcased at the 38th annual NeurIPS (Neural Information Processing Systems) conference in December 2024, which drew over 16,000 AI enthusiasts and researchers to Vancouver.
This course dives deep into the field of large language objects and examines how artificial intelligence can seamlessly transition into our physical environments. While “Be the Beat” revolutionizes dance, the creativity doesn’t stop there; other student projects explore diverse areas like music, storytelling, critical thinking, and memory. Collectively, these initiatives propose a new vision for AI, one that enhances creativity, transforms education, and redefines social interactions.
Be the Beat
Crafted by Ethan Chang, a mechanical engineering and design student, and Zhixing Chen, a mechanical engineering and music student, “Be the Beat” serves as an AI-integrated boombox that chooses songs based on a dancer’s movements. Throughout history, music has influenced dance across various cultures, yet the reverse—creating music through dance—remains largely unexplored.
This project fosters human-AI collaboration for freestyle dance, empowering dancers to reshape the traditional relationship between music and movement. It uses PoseNet to analyze a dancer’s movements, then queries a large language model to identify music styles that match the energy and tempo of the dance. Participants have expressed feeling a newfound sense of artistic expression and discovery, enjoying this innovative method of exploring dance genres and choreography.
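The pipeline described above, pose tracking feeding a language model, can be sketched in miniature. The snippet below is a hypothetical illustration, not the project’s actual code: it assumes PoseNet-style keypoint output (a list of (x, y) coordinates per video frame), derives a simple movement-energy feature, and composes the kind of natural-language prompt one might send to an LLM. The function names and the energy threshold are invented for the example.

```python
import math

def movement_energy(frames):
    """Average per-frame keypoint displacement, a crude proxy for dance energy.

    `frames` is a list of keypoint lists; each keypoint is an (x, y) tuple,
    as a PoseNet-style detector might return for each video frame.
    """
    if len(frames) < 2:
        return 0.0
    total = 0.0
    for prev, curr in zip(frames, frames[1:]):
        # Mean distance each tracked keypoint moved between consecutive frames.
        total += sum(math.dist(p, c) for p, c in zip(prev, curr)) / len(curr)
    return total / (len(frames) - 1)

def build_music_prompt(energy, high_threshold=0.1):
    """Turn the numeric feature into a natural-language request for an LLM."""
    level = "high-energy" if energy > high_threshold else "low-energy"
    return (f"Suggest a music genre for a {level} freestyle dance "
            f"(average movement energy: {energy:.2f}).")

# Example: two keypoints tracked across three frames of video.
frames = [
    [(0.0, 0.0), (1.0, 1.0)],
    [(0.1, 0.0), (1.0, 1.2)],
    [(0.3, 0.1), (1.1, 1.4)],
]
print(build_music_prompt(movement_energy(frames)))
```

In a real system, the prompt string would be sent to an LLM API and the suggested genre used to pick a track; richer features (tempo from movement periodicity, limb-specific dynamics) would replace the single energy number.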
A Mystery for You
Developed by recent graduates Mrinalini Singha SM ’24 from the Art, Culture, and Technology program and Haoheng Tang from the Harvard University Graduate School of Design, “A Mystery for You” is an educational game designed to sharpen critical thinking and fact-checking skills in young minds. By blending a large language model (LLM) with a hands-on interface, players immerse themselves in a detective-like experience, acting as citizen fact-checkers responding to AI-generated “news alerts.”
Through configuring cartridge combinations for follow-up “news updates,” players engage with complex scenarios, assess evidence, and navigate conflicting information, empowering them to make informed choices. This experience reimagines news consumption by replacing digital screens with a tactile analog device, encouraging deeper, more meaningful interactions that prepare players to better navigate today’s challenging media landscape.
Memorscope
“Memorscope,” developed by MIT Media Lab collaborator Keunwook Kim, is a device designed to foster collective memories by intertwining human interactions with cutting-edge AI technology. Just as microscopes and telescopes reveal hidden details, Memorscope enables two users to connect by “looking into” each other’s faces, using this intimate moment to explore and create shared memories.
The device employs AI tools such as OpenAI’s models and Midjourney to introduce diverse aesthetic and emotional interpretations, crafting a vibrant, collective memory landscape. Unlike traditional shared albums, this innovative space allows memories to evolve as dynamic narratives tied to users’ ongoing relationships.
Narratron
Created by Harvard Graduate School of Design students Xiying (Aria) Bao and Yubo Zhao, “Narratron” is an interactive projector that collaborates with children to co-create and perform stories using shadow puppetry and large language models. Users can capture desired protagonists by pressing a shutter, transforming hand shadows into characters within the story. As they introduce new shadow figures, the narrative unfolds through a projector, while narration plays through an accompanying speaker, allowing users to engage in real-time creative storytelling.
Perfect Syntax
In “Perfect Syntax,” Karyn Nakamura ’24 delves into the syntax of motion and video through an artistic lens. This video art project uses AI to manipulate fragments of video, investigating how machines can simulate and reconstruct the fluidity of motion and time. By challenging the relationship between perception and technology, Nakamura explores how our experience of time and motion can be reinterpreted through computational art.
Photo credit & article inspired by: Massachusetts Institute of Technology