Generative AI Environmental Impact Explained

This article, the first in a two-part series, delves into the environmental challenges associated with generative AI, focusing on why this innovative technology consumes so many resources. A follow-up piece will highlight expert initiatives aimed at reducing generative AI’s carbon footprint and mitigating its environmental impacts.

The buzz surrounding the benefits of generative AI, such as enhancing productivity and advancing scientific research, is undeniable. However, as this technology explodes onto the scene, its ecological ramifications during this generative AI “gold rush” are becoming increasingly hard to quantify and address.

The immense computational power necessary to train generative AI models, which often consist of billions of parameters—like OpenAI’s GPT-4—requires a considerable amount of electricity. This leads to heightened carbon dioxide emissions and strains on our electrical infrastructure. Moreover, deploying these models for everyday use, allowing millions to access generative AI, and fine-tuning them for improved performance consumes significant energy even after the initial development phase.

But electricity isn’t the only concern; substantial amounts of water are also necessary to cool the hardware used for training, deploying, and fine-tuning these advanced AI systems. This water requirement can burden municipal supplies and disrupt local ecosystems. The surge in generative AI applications has further increased the demand for high-performance computing hardware, contributing additional environmental costs from its manufacture and transportation.

“When we consider the environmental impact of generative AI, it extends beyond just the electricity needed to power the computer. The implications reach a systemic level and persist based on the actions we take,” explains Elsa A. Olivetti, a professor in the Department of Materials Science and Engineering, who leads the Decarbonization Mission of MIT’s Climate Project. Olivetti is also a senior author of a forthcoming paper, “The Climate and Sustainability Implications of Generative AI,” co-authored by MIT colleagues in response to calls for research that evaluates the transformative potential, both positive and negative, of generative AI.

The Demands of Data Centers

One primary factor driving the environmental footprint of generative AI is the electricity demand of data centers. These facilities are essential for training and running complex deep learning models behind popular applications like ChatGPT and DALL-E. A data center can be defined as a temperature-controlled facility that houses computing infrastructure, including servers, data storage drives, and network equipment. For example, Amazon operates over 100 data centers globally, each hosting around 50,000 servers to support its cloud computing services.

While data centers have existed since the 1940s, the rise of generative AI has accelerated their construction. “What distinguishes generative AI is the power density it demands. Though fundamentally just computing, a training cluster for generative AI might consume seven or eight times more energy than a typical computing workload,” states Noman Bashir, lead author of the impact paper and Computing and Climate Impact Fellow at the MIT Climate and Sustainability Consortium.

Estimates indicate that the power demands of North American data centers surged from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, largely due to generative AI. Worldwide, data centers consumed 460 terawatt-hours of electricity in 2022, which would rank them as the 11th-largest electricity consumer globally, between Saudi Arabia and France. Projections suggest this consumption could reach 1,050 terawatt-hours by 2026, which would elevate data centers to the fifth spot globally, between Japan and Russia.
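For a sense of scale, the growth rates implied by these figures can be worked out in a few lines of Python. This is a rough back-of-envelope sketch using only the numbers quoted above; the variable names are ours, not from any cited source.

```python
# Back-of-envelope sketch: growth rates implied by the figures quoted above.

na_power_2022_mw = 2_688     # North American data center demand, end of 2022
na_power_2023_mw = 5_341     # end of 2023

global_use_2022_twh = 460    # worldwide consumption, 2022
global_use_2026_twh = 1_050  # projected consumption, 2026

# Year-over-year growth in North American power demand
na_growth = na_power_2023_mw / na_power_2022_mw
print(f"North American demand grew ~{na_growth:.1f}x in one year")  # ~2.0x

# Implied compound annual growth rate for the global projection (2022 -> 2026)
years = 2026 - 2022
cagr = (global_use_2026_twh / global_use_2022_twh) ** (1 / years) - 1
print(f"Implied global growth: ~{cagr:.0%} per year")  # ~23% per year
```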

Not all data center operations relate directly to generative AI, but this technology is a primary driver behind increasing energy demands. “The rapid demand for new data centers cannot be met sustainably. The speed at which companies are constructing them means the majority of their electricity will derive from fossil fuels,” warns Bashir.

Calculating the electricity needed to train models like OpenAI’s GPT-3 is challenging. A 2021 study from researchers at Google and the University of California at Berkeley estimated that training alone consumed 1,287 megawatt-hours of electricity (enough to power approximately 120 average U.S. homes for a year) and produced around 552 tons of carbon dioxide emissions. Unlike traditional AI workloads, generative AI training exhibits substantial fluctuations in energy use across its phases, forcing power grid operators to implement strategies to absorb these swings, often by relying on diesel generators.
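The “approximately 120 homes” comparison can be sanity-checked with a quick calculation. This sketch assumes an average U.S. household consumes about 10,800 kilowatt-hours per year, a figure in line with typical U.S. averages but not drawn from the study itself.

```python
# Sanity check of the "~120 homes" comparison above.

TRAINING_ENERGY_MWH = 1_287     # GPT-3 training estimate from the 2021 study
AVG_HOME_KWH_PER_YEAR = 10_800  # assumed average U.S. household consumption

homes_powered = (TRAINING_ENERGY_MWH * 1_000) / AVG_HOME_KWH_PER_YEAR
print(f"~{homes_powered:.0f} U.S. homes powered for a year")  # ~119 homes
```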

Ongoing Energy Demands in Inference

Even after a generative AI model is trained, its energy consumption continues. For instance, when someone asks ChatGPT to summarize an email, the required computing hardware uses energy. Research estimates that a ChatGPT query consumes about five times more electricity than a standard web search.
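To put that ratio in perspective, the following sketch assumes a conventional web search uses on the order of 0.3 watt-hours, a commonly cited estimate that is not from the research above, and scales the five-fold multiplier to a hypothetical daily query volume.

```python
# Illustration of the ~5x ratio above. The 0.3 Wh baseline and the query
# volume are assumptions for this sketch, not figures from the article.

WEB_SEARCH_WH = 0.3            # assumed energy per conventional web search
CHATGPT_MULTIPLIER = 5         # ratio reported in the research cited above
QUERIES_PER_DAY = 10_000_000   # hypothetical daily query volume

chatgpt_wh = WEB_SEARCH_WH * CHATGPT_MULTIPLIER     # ~1.5 Wh per query
daily_mwh = chatgpt_wh * QUERIES_PER_DAY / 1e6      # Wh -> MWh
print(f"~{chatgpt_wh:.1f} Wh per query, ~{daily_mwh:.0f} MWh per day")
```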

“However, the average user may not consider these impacts,” remarks Bashir. “The user-friendly interfaces of generative AI and a general ignorance of their environmental ramifications can lead to excessive consumption with little incentive for moderation.”

In traditional AI, energy use is split roughly evenly among data processing, model training, and inference. As generative AI models proliferate across applications, Bashir predicts that the electricity needed for inference will come to dominate, particularly as future versions grow in size and complexity.

Moreover, generative AI models tend to have a short lifespan, driven by the fast-paced demand for new applications. New models are frequently introduced, often consuming more energy for training due to their increased parameters. While data center power demands receive substantial attention, the quantity of water consumed is another environmental facet that requires scrutiny.

Chilled water is essential for cooling data centers, absorbing heat generated by computing equipment. It’s estimated that for every kilowatt-hour consumed, a data center requires about two liters of water for cooling. “Despite the term ‘cloud computing,’ the hardware exists in the physical world, and its water usage has direct and indirect implications for biodiversity,” Bashir adds.
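Combining that two-liters-per-kilowatt-hour estimate with the GPT-3 training figure cited earlier gives a rough sense of scale. This back-of-envelope sketch is ours, not the researchers’.

```python
# Back-of-envelope sketch combining two figures quoted in this article:
# ~2 liters of cooling water per kWh and the GPT-3 training estimate above.

WATER_L_PER_KWH = 2           # cooling water per kilowatt-hour (estimate above)
TRAINING_ENERGY_MWH = 1_287   # GPT-3 training energy (2021 study)

water_liters = TRAINING_ENERGY_MWH * 1_000 * WATER_L_PER_KWH
print(f"~{water_liters / 1e6:.1f} million liters of cooling water")  # ~2.6M L,
# roughly the volume of an Olympic-size swimming pool (~2.5 million liters)
```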

The hardware in data centers presents additional environmental challenges. Fabricating a GPU, the powerful processor behind intensive generative AI workloads, demands more energy than fabricating a standard CPU because the fabrication process is more complex. A GPU’s carbon footprint is further exacerbated by emissions from extracting and transporting its raw materials.

Market research firm TechInsights projects that the three leading GPU manufacturers (NVIDIA, AMD, and Intel) shipped approximately 3.85 million GPUs to data centers in 2023, a significant increase from about 2.67 million in 2022. Expectations point to a larger spike in 2024.

Faced with an unsustainable trajectory, experts like Bashir and Olivetti advocate for a responsible approach to generative AI development that aligns with environmental objectives. They emphasize the necessity for a holistic examination of both the environmental and societal costs of generative AI, coupled with a thorough evaluation of the benefits it purportedly offers.

“We need to systematically and comprehensively understand the implications of developments in this sector. The rapid advancements have outpaced our ability to measure and appreciate the trade-offs,” Olivetti concludes.

Photo credit & article inspired by: Massachusetts Institute of Technology
