The Irish philosopher George Berkeley is often credited with the question: “If a tree falls in a forest and no one is around to hear it, does it make a sound?” AI-generated trees raise no such acoustic puzzles, but these digital trees are becoming essential tools for adapting urban flora to climate change. Enter “Tree-D Fusion,” a system built by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University. It combines artificial intelligence and tree-growth models with Google’s Auto Arborist data to create accurate 3D representations of urban trees, yielding a large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.
“We’re merging decades of forestry science with contemporary AI technology,” comments Sara Beery, an assistant professor in MIT’s electrical engineering and computer science (EECS) department, a principal investigator at MIT CSAIL, and a co-author of a new research paper on Tree-D Fusion. “This approach allows us to not only identify trees in urban settings but also to predict their growth and how they will affect their environments over time. We’re not dismissing three decades of advancements in creating 3D synthetic models; instead, we’re leveraging AI to enhance the utility of this knowledge across a wider variety of individual trees in cities throughout North America and, ultimately, globally.”
Building on previous urban forestry monitoring initiatives that used Google Street View data, Tree-D Fusion takes a leap forward by generating complete 3D models from a single image of each tree. Earlier efforts were limited to specific neighborhoods and often struggled with accuracy at scale. In contrast, Tree-D Fusion can produce detailed models that include features normally obscured, such as the back sides of trees that street-view cameras never capture.
The practical implications of this technology extend far beyond mere observation. Urban planners could harness Tree-D Fusion to foresee potential issues, such as branches interfering with power lines, or identify areas where strategic tree placement could enhance cooling and air quality. According to the research team, such predictive capabilities can transform urban forest management from a reactive approach to a proactive one.
A tree grows in Brooklyn (and other urban areas)
The researchers took a hybrid approach: deep learning first outlines each tree’s spatial structure, and traditional procedural models then generate realistic branch and leaf patterns based on the tree’s genus. This combination enables the models to predict how trees will grow under various environmental conditions and climate scenarios, from local temperatures to groundwater access.
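Procedural tree models of the kind described here are in the tradition of L-systems, the classic grammar-based models of branching growth. As a purely illustrative sketch (the symbols and rules below are invented for this example and are not taken from Tree-D Fusion's genus-specific models), an L-system rewrites every symbol of a string in parallel, with each iteration corresponding to one growth step:

```python
# Minimal L-system sketch of procedural branching growth.
# Hypothetical rules for illustration only: 'A' is an active growth tip,
# 'F' a branch segment, and the bracketed turtle-graphics symbols
# '[', ']', '+', '-' push/pop state and turn left/right.
RULES = {
    "A": "F[+A][-A]",  # a tip grows a segment and splits into two new tips
    "F": "FF",         # existing segments elongate each growth step
}

def grow(axiom: str, steps: int) -> str:
    """Rewrite every symbol in parallel for `steps` iterations."""
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

def count_tips(s: str) -> int:
    """Each remaining 'A' is an active growth tip."""
    return s.count("A")

if __name__ == "__main__":
    shape = grow("A", 4)
    print(count_tips(shape))  # tips double each step: 2**4 = 16
```

Feeding the resulting string to a turtle-graphics interpreter would draw a branching silhouette; a production system like the one in the paper conditions such rules on genus and environment rather than using fixed rewrites.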
As cities globally face challenges from rising temperatures, this research provides new insights into the future of urban forests. In partnership with MIT’s Senseable City Lab, the team from Purdue University and Google is undertaking a worldwide study that envisions trees as living climate protectors. Their digital modeling system depicts the intricate patterns of shade across seasons, showing how strategic urban forestry can transform hot city blocks into naturally cooler neighborhoods.
“Every time a mapping vehicle drives through a city, we’re capturing more than just images – we’re witnessing these urban forests evolve in real-time,” says Beery. “This ongoing monitoring establishes a living digital forest that reflects its physical counterpart, equipping cities with a valuable perspective to analyze how environmental stressors influence tree health and growth across urban landscapes.”
AI-driven tree modeling has also emerged as a powerful tool in the pursuit of environmental justice. Through detailed mapping of the urban tree canopy, a related project from the Google AI for Nature team has brought to light disparities in green space access among various socioeconomic groups. “Our mission goes beyond studying urban forests — we aim to promote equity,” emphasizes Beery. The research team is currently collaborating with ecologists and tree health specialists to refine these models, ensuring that the expansion of green canopies benefits all city residents.
It’s a breeze
While Tree-D Fusion signifies a substantial advancement in the field, modeling trees presents unique challenges for computer vision systems. Unlike the fixed shapes of buildings or vehicles that existing 3D modeling techniques can accurately depict, trees are dynamic — swaying in the wind, intertwining with neighboring branches, and constantly evolving as they grow. The Tree-D Fusion models are “simulation-ready,” meaning they can forecast the future shape of trees based on environmental conditions.
“What drives this work is how it compels us to rethink core principles of computer vision,” states Beery. “While techniques like photogrammetry or NeRF [neural radiance fields] excel at capturing static objects, trees necessitate innovative strategies that accommodate their dynamic nature, where even a slight breeze can alter their form in an instant.”
The method of forming rough structural outlines to approximate each tree’s shape has proven remarkably successful, yet some complex challenges persist. One of the most perplexing is the “entangled tree problem,” wherein neighboring trees grow into one another, creating a tangle of branches that no current AI can fully resolve.
The scientists view their dataset as a launchpad for future advances in computer vision, already exploring applications well beyond street view imagery, extending to resources like iNaturalist and wildlife camera traps.
“This is just the beginning for Tree-D Fusion,” states Jae Joong Lee, a PhD student at Purdue University who developed, implemented, and deployed the Tree-D Fusion algorithm. “Together with my collaborators, I aspire to broaden the platform’s capabilities to a planetary scale. Our aim is to leverage AI-driven insights in support of natural ecosystems — fostering biodiversity, enhancing global sustainability, and ultimately benefiting the health of our planet.”
Beery and Lee’s co-authors include Jonathan Huang, head of AI at Scaled Foundations (formerly of Google), and four others from Purdue University: PhD student Bosheng Li; Songlin Fei, the Dean’s Chair of Remote Sensing; Assistant Professor Raymond Yeh; and Professor Bedrich Benes, associate head of computer science. Their work was supported by the USDA’s Natural Resources Conservation Service and the USDA’s National Institute of Food and Agriculture. They presented their findings at the recent European Conference on Computer Vision.
Photo credit and article source: Massachusetts Institute of Technology