AI Method Quickly Finds Actions in Videos

The internet is filled with instructional videos, covering everything from how to flip a perfect pancake to how to perform the Heimlich maneuver. But finding the exact moment an action occurs in a lengthy video can be a real chore. To simplify the search, researchers are developing AI systems that can identify a specific action from a user's description and jump straight to that moment in the video.

Traditionally, training machine learning models for this task—known as spatio-temporal grounding—demands extensive datasets, often requiring costly manual annotation. However, a recent study undertaken by MIT researchers and the MIT-IBM Watson AI Lab offers a more streamlined solution. Their innovative method trains a model utilizing only raw video content paired with automatically generated transcripts, eliminating the need for extensive labeled data.

This research focuses on teaching a model to understand unlabeled videos along two dimensions: spatial (where objects are located) and temporal (when an action occurs). This dual approach proves more effective than previous AI methods, especially for longer videos containing multiple actions. Notably, training on both dimensions simultaneously improves the model’s accuracy on each individually.

The implications of this research extend beyond improving online learning and training platforms. In healthcare, for instance, this technique could swiftly navigate through videos of diagnostic procedures to highlight critical moments.

Brian Chen, the lead author of a study detailing this technique, explains the approach: “We separate the challenge of encoding spatial and temporal information into two distinct processes, akin to having two experts, which results in superior overall performance.” Chen, who graduated from Columbia University in 2023, worked on this research as a visiting student at the MIT-IBM Watson AI Lab alongside senior research scientist James Glass and other collaborators from various institutions.

Global versus Local Learning

Typically, spatio-temporal grounding relies on video clips that have been painstakingly annotated by humans to denote the start and end times of specific actions. This method is not only labor-intensive but also subjective; for example, when does the action “cooking a pancake” truly begin? Is it when the batter is mixed or when it hits the pan?

Chen points out the vast diversity of domains that could require annotation: “This time it might be cooking, but next it could involve car repairs. If we can harness information without extensive labeling, we have a more universally applicable solution.” The researchers utilize unannotated instructional videos sourced from platforms like YouTube alongside their associated transcripts for training.

Their training regimen is divided into two components. First, they teach the model to grasp the overall timeline of events through a global representation of the video. Next, they have the model home in on specific actions in targeted video segments, which provides a fine-grained, local representation. For instance, in a sprawling kitchen scene, the focus could be solely on the wooden spoon the chef is using, rather than the entire kitchen layout.
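The researchers' code is not reproduced here, but the idea of pairing a global objective with a local one can be sketched in a few lines. The following PyTorch-style example is illustrative only: the names and shapes (frame_feats, region_feats, text_emb) are assumptions rather than the authors' implementation, and a generic contrastive loss stands in for whatever objectives the paper actually uses.

```python
# Minimal sketch of joint global/local training on video-transcript pairs.
# Assumes per-frame and per-region features plus a sentence embedding have
# already been extracted by some backbone; everything here is illustrative.
import torch
import torch.nn.functional as F

def global_local_losses(frame_feats, region_feats, text_emb, temperature=0.07):
    """
    frame_feats:  [B, T, D]    per-frame features for a batch of videos
    region_feats: [B, T, R, D] per-region features within each frame
    text_emb:     [B, D]       embedding of the narrated sentence per video
    """
    B = text_emb.shape[0]
    text_emb = F.normalize(text_emb, dim=-1)
    targets = torch.arange(B, device=text_emb.device)

    # ---- Global branch: when does the event happen? ----
    # Pool frames into one video-level vector and contrast it against the
    # batch of transcript sentences (InfoNCE-style).
    global_vid = F.normalize(frame_feats.mean(dim=1), dim=-1)          # [B, D]
    loss_global = F.cross_entropy(global_vid @ text_emb.t() / temperature,
                                  targets)

    # ---- Local branch: where in the frame does it happen? ----
    # Let the sentence attend over regions, pool the attended regions,
    # and contrast the result against the same sentences.
    regions = F.normalize(region_feats, dim=-1)                        # [B, T, R, D]
    attn = torch.einsum('btrd,bd->btr', regions, text_emb).softmax(dim=-1)
    local_vid = F.normalize(torch.einsum('btr,btrd->bd', attn, regions), dim=-1)
    loss_local = F.cross_entropy(local_vid @ text_emb.t() / temperature,
                                 targets)

    # Optimizing both objectives together mirrors the setup the researchers
    # report improves spatial and temporal accuracy at the same time.
    return loss_global + loss_local
```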

To refine their framework, the team integrated measures to address discrepancies between the narration and visual action. For example, a chef might discuss the cooking process before executing it.

A key aspect of their approach is the use of lengthy, uncut videos, in contrast to most AI methods that train on short, pre-trimmed clips. Training on full-length videos gives the model a more complete picture of how actions unfold.

A New Benchmark

When it came to evaluating their method, the researchers realized existing benchmarks were inadequate for testing on these uncut videos, so they created a new one. The dataset uses an annotation technique suited to multistep actions: instead of drawing generic bounding boxes around objects, annotators mark precise points, such as the spot where a knife edge meets a tomato.

“This approach clarifies definitions and accelerates the annotation process, significantly reducing labor costs,” explains Chen. Having multiple annotators provide point annotations also better captures actions that unfold over time, such as the flow of milk being poured.
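To make the contrast with box-based labeling concrete, here is a hypothetical example of what a point-style annotation record and a simple scoring rule could look like. The field names, coordinates, and video ID are invented for illustration and are not the benchmark's actual schema.

```python
# Illustrative only: a made-up record showing how marking a single point per
# timestamp (rather than a bounding box) might look for an untrimmed video.
annotation = {
    "video_id": "pancake_tutorial_017",          # hypothetical identifier
    "query": "flip the pancake onto the plate",
    # one (x, y) point per annotated timestamp, in normalized image coordinates
    "points": [
        {"t": 312.4, "x": 0.46, "y": 0.58},
        {"t": 313.0, "x": 0.51, "y": 0.55},
        {"t": 313.6, "x": 0.57, "y": 0.52},
    ],
}

def point_hit(pred_box, point):
    """One common way to score point annotations: a predicted region counts
    as correct if the annotated point falls inside it."""
    x1, y1, x2, y2 = pred_box
    return x1 <= point["x"] <= x2 and y1 <= point["y"] <= y2

print(point_hit((0.4, 0.5, 0.7, 0.7), annotation["points"][0]))  # True
```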

Testing their method against this benchmark revealed that it outperformed other AI techniques in accurately pinpointing actions. Their model effectively highlighted human-object interactions, focusing on moments like the precise instant a chef flips a pancake onto a plate, rather than merely identifying the pancakes on a counter.

Conventional methods depend heavily on human-labeled data, which presents scalability challenges. The work by these researchers paves the way for a more versatile technique to localize events in space and time, leveraging the naturally occurring speech in videos. While such data is abundant, it is often only loosely related to the visuals, which complicates its use in machine learning. This research takes notable strides toward addressing that weak alignment, aiding future systems that integrate this multimodal data.

The next phase of research aims to improve the model’s ability to detect mismatches between the visuals and the narration, allowing it to shift focus from one modality to the other as needed. The researchers also plan to expand the framework to audio data, which generally correlates strongly with the actions depicted.

“Though AI has made significant advancements with models like ChatGPT for image understanding, our progress in video comprehension has lagged. This research marks a pivotal advancement in bridging that gap,” states Kate Saenko, a professor at Boston University.

This research was funded by the MIT-IBM Watson AI Lab.

Photo credit & article inspired by: Massachusetts Institute of Technology
