Scottish AI Alliance

What is the climate impact of Artificial Intelligence?

Nidia Dias & Google DeepMind / Better Images of AI / AI for Biodiversity / CC-BY 4.0

by Calum McDonald, Engagement & Participation Manager, Scottish AI Alliance

Artificial Intelligence (AI) has huge potential to drive climate-smart developments, particularly in addressing the world's environmental challenges. Yet, behind this excitement lies a growing concern about the environmental impact of AI technologies.

AI technologies that combat the climate crisis often focus on identifying better ways to track and predict climate impacts, mapping land use and habitat rehabilitation, or acting on data-driven insights to make logistics processes more efficient and more environmentally sustainable.

In fact, companies in Scotland are actively using AI technologies to fight the climate crisis. Space Intelligence, based in Edinburgh, maps deforestation and the carbon stored in forests. EOLAS, based in Glasgow, uses AI to monitor at-risk wildlife in satellite imagery.

These benefits, however, could be overshadowed by the rise of Generative AI tools in recent years, the type of AI technology that generates text, images and speech. This includes tools like Large Language Models (LLMs) and image-generating models, both of which come with a substantial environmental cost. The process of training and deploying models like ChatGPT is energy and resource-intensive, and despite their potential to create administrative efficiencies, they do little to address the climate crisis.

When AI technologies are used in a targeted way, it could be argued that their carbon cost can be justified by their potential social good. However, as Generative AI proliferates across digital spaces, AI use is rising within the automation of administrative tasks, drafting emails, and even generating images of cats riding dragons, where the social benefit is far less clear.

Now that our world has been overlaid with a digital layer, that layer is being gilded in AI. When we look beyond the shine, what is the climate impact of Generative AI? And what can we do about it?

How Does AI Training Consume Energy?

AI models can be designed to serve different purposes, and their environmental impact varies depending on how they’re built and used. Some AI models are small, sharp, and highly efficient, optimised for specific tasks. Others are large and powerful, requiring vast computational resources.

The complexity of an AI model can often be measured in its parameters. The more parameters there are, the more detailed and accurate the model can be in recognising patterns and generating responses. However, more parameters also mean higher energy consumption, as more computing power is needed to process them.

GPT models, like GPT-3, are on the "big and powerful" end of the scale. Training these models requires enormous amounts of computing power, meaning they need specialised hardware that performs complex calculations. As a result, the energy required increases as the model grows in size.

When ChatGPT launched, it was based on GPT-3, which had 175 billion parameters. One study estimated that training GPT-3 consumed about 1,287 megawatt-hours of electricity and produced 502 tons of carbon dioxide emissions. To put this in perspective, that’s roughly the same as powering 20,000 homes in Scotland for one week, or driving a petrol-powered car around the world 60 times.
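The "20,000 homes for a week" comparison can be sanity-checked with a quick sketch. The average household electricity figure below is an assumption (roughly in line with typical UK usage), not a number from the original study:

```python
# Back-of-envelope check of the "20,000 Scottish homes for a week" comparison.
# Assumed input: an average household uses roughly 3,700 kWh of electricity
# per year. The training estimate is the figure cited in the article.

TRAINING_ENERGY_MWH = 1_287          # estimated energy to train GPT-3
HOUSEHOLD_KWH_PER_YEAR = 3_700       # assumed average annual household use

household_kwh_per_week = HOUSEHOLD_KWH_PER_YEAR / 52   # ~71 kWh per week
training_kwh = TRAINING_ENERGY_MWH * 1_000             # MWh -> kWh

homes_for_one_week = training_kwh / household_kwh_per_week
print(f"Homes powered for one week: {homes_for_one_week:,.0f}")
```

With these assumed inputs the check lands around 18,000 homes, in the same ballpark as the comparison above; the exact figure depends entirely on the household consumption you assume.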

However, estimates on the carbon footprint of AI models vary. This is partly because OpenAI, the company behind GPT, has not disclosed specific details about the energy consumption and emissions tied to training its models. Without this transparency, it's challenging to make accurate comparisons or take concrete steps to reduce the environmental impact.

Advances in AI are typically driven by increasing the scale of models: larger datasets, more compute power, and more parameters. While GPT-3 had 175 billion parameters, the more recent versions of ChatGPT are based on GPT-4 and GPT-4o, which are believed to be even larger.

Although OpenAI has not publicly confirmed the exact number of parameters in GPT-4, estimates suggest it could have 1.76 trillion parameters, around ten times as many as GPT-3. While improvements in techniques can make training more efficient, the energy and carbon costs of training GPT-4 are likely far greater than those of GPT-3.

These models are incredibly powerful and useful. But when we use GPT models for unnecessary tasks, it’s like using a Ferrari to drive to Tesco – fast and fun, but wildly inefficient.

The good news is that work is already underway to make AI more sustainable, from more energy-efficient training techniques to the creation of smaller, leaner models known as Small Language Models.

While AI's potential to revolutionise how we live is becoming clearer, its climate impact should never be overlooked; without climate-conscious leadership, there is no guarantee that AI will be a net benefit.

What Happens After Training?

Once an LLM is trained, its continued operation generates a constant energy and carbon cost. Each time an LLM like ChatGPT processes a prompt, it performs complex calculations to generate the ideal response, using a small amount of energy in the process.

While each individual query may seem negligible, the scale of usage is growing. OpenAI announced in December 2024 that ChatGPT has 300 million weekly active users. As more applications integrate LLMs, such as email assistants, search engine summarisers and content generation helpers, these small uses of AI begin to add up.

It is relatively well known that everyday digital tasks, like sending an email, use a little energy, but prompting an LLM has been estimated to have a much larger impact. Studies suggest that prompting an LLM is 20 to 60 times more energy-intensive than performing a basic search engine query. With millions of users, this difference can quickly become a massive carbon footprint.
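To see how that multiplier compounds at scale, here is a minimal sketch using the 20–60x range above. The 0.3 Wh search-query baseline is an assumption drawn from commonly cited (and now dated) public estimates, not a figure from this article:

```python
# Rough per-query energy comparison using the article's 20-60x range.
# The search-query baseline is an assumed illustrative figure.

SEARCH_QUERY_WH = 0.3                # assumed energy per basic search query
MULTIPLIERS = (20, 60)               # the article's range for an LLM prompt

llm_prompt_wh = [SEARCH_QUERY_WH * m for m in MULTIPLIERS]
print(f"Estimated energy per LLM prompt: {llm_prompt_wh[0]}-{llm_prompt_wh[1]} Wh")

# At scale: one prompt per day from 300 million users, low-end estimate.
daily_mwh = 300_000_000 * llm_prompt_wh[0] / 1_000 / 1_000  # Wh -> MWh
print(f"Daily energy at 300M prompts/day (low end): {daily_mwh:,.0f} MWh")
```

Under these assumptions, even the low end works out to around 1,800 MWh per day, which is more than the estimated one-off energy cost of training GPT-3. The point is not the exact numbers, which are uncertain, but that small per-query costs multiply quickly.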

The huge amount of energy needed to power the servers and data centres that AI technologies rely on does more than consume electricity; it also generates heat as a by-product. To prevent servers from overheating, data centres rely on cooling systems, such as large air conditioning units or cooling towers. These systems require substantial amounts of clean, fresh water, adding another environmental cost to the equation. As a result, the AI industry not only consumes vast amounts of energy but also competes for local water resources, impacting ecosystems.

While the environmental costs of operating AI systems are significant, early-stage developments may help mitigate the impact of Generative AI's rise. Green data centres powered by renewable energy are becoming more common, and advances in closed-loop water cooling systems may reduce reliance on fresh water.

As users, we may also have a role to play by being more mindful of our AI usage. Perhaps in treating AI as more of a rare resource to be used in a targeted way, we can choose to be more conscious about our invisible consumption.

What’s Inside AI Hardware?

The environmental cost of AI doesn’t stop with the energy required for training and operation. The GPUs, servers and devices that power AI technologies aren’t just invisible data in a far-off cloud; they’re physical, tangible products. And just like all other electronics, they are made from raw materials extracted from the Earth. These materials come with a hidden cost, impacting both the environment and communities in ways we often don’t consider.

At the heart of AI’s hardware is silicon, a material used in computer chips. While it’s essential for the tech we rely on daily, the extractive nature of silicon mining has serious environmental consequences. Mining for silicon is energy-intensive in and of itself, and can lead to groundwater pollution and ecosystem disruption, as well as posing health risks to those mining it.

AI also relies on a range of rare earth elements. Metals like neodymium, dysprosium, lanthanum and cerium are all important to the hardware used to train and deploy AI, and all are scarce within Earth’s crust. This scarcity means that mining them can cause particularly severe environmental damage, including deforestation and soil erosion.

Cobalt is a key element within lithium-ion batteries, which power many of our smart devices and laptops, as well as the servers in data centres that keep LLMs running. Other specialised components within AI hardware require platinum, palladium, tin, tantalum, aluminium and gold, sourced from countries including South Africa, Zimbabwe, Rwanda, Indonesia, Australia and China. Mercury and cyanide are known to be used in mining these metals, and can seep into surrounding ecosystems.

The physical impact of AI is much more than just the energy consumed in training models. The metals, minerals and raw materials used to create the hardware powering AI come with a serious environmental cost, and when extractive industries operate in regions with weak labour regulations, human exploitation can also become part of AI's supply chain.

It’s difficult for an end-user to have an impact on systemic issues. Advocating for ethical sourcing initiatives and pushing decision-makers and policy-makers for more transparency within supply chains could be part of the change that would help make AI hardware less damaging to our planet and communities. Supporting companies that prioritise sustainability and ethical practices will help steer the issue towards a greener horizon.

Conclusion

The climate impact of AI is wide-ranging, and often invisible. The way in which we interact with AI technologies is often the same as with other digital services – via our laptops and smartphones – but we don’t see or feel the increased weight of using them.

Transparency is key, and talking more about the delicate balance of AI & climate will help make it a mainstream part of how we engage in conversations about AI.

This transparency is layered. AI companies could track and report the energy used to train their models. Organisations that track their carbon footprint could be more transparent about the increased carbon costs of AI technologies.

And as individuals, we could consider whether we want to use the Ferrari solution for a small task. While it’s difficult to measure exactly how much energy is consumed per query or task, small actions can accumulate into significant environmental impact. Perhaps if we use AI to amplify climate action and climate conversations, then we can reach a better balance.

I used ChatGPT to help me write this blog in different ways: to provide graded feedback on the flow of each section into the next, to research where different rare earth elements are mined, and to estimate the carbon cost of an individual prompt.

ChatGPT pointed me towards studies estimating a carbon equivalent of 4.3 grams of CO2 per average prompt. While we need to take that figure with a grain of salt due to the lack of transparency that exists, if we take it forward, I used the equivalent of approximately 70 grams of CO2 to create this blog.

In a non-Ferrari petrol car, this would take me about a third of a mile down the road, perhaps to Tesco. While that doesn’t seem like much, if all 300 million active users of ChatGPT were also using it to help them structure blogs right now, we could drive to the moon and back one hundred times!
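That back-of-the-napkin sum can be sketched in a few lines. The prompt count and the petrol-car emission factor below are assumptions chosen to match the rough figures above, not measured values:

```python
# Reproducing the article's rough footprint estimate for this blog post.
# The per-prompt figure is the article's cited estimate; the prompt count
# and petrol-car emission factor are assumptions for illustration.

CO2_PER_PROMPT_G = 4.3        # cited estimate of g CO2 per average prompt
PROMPTS_USED = 16             # assumed prompt count implied by ~70 g total
CAR_G_PER_MILE = 200          # assumed petrol-car emissions per mile

blog_co2_g = CO2_PER_PROMPT_G * PROMPTS_USED
miles = blog_co2_g / CAR_G_PER_MILE

print(f"Blog footprint: ~{blog_co2_g:.0f} g CO2")
print(f"Equivalent driving distance: ~{miles:.2f} miles")
```

With these assumed inputs the total comes out at roughly 70 grams of CO2 and about a third of a mile of driving, matching the figures above; change any input and the answer shifts, which is exactly the point about missing real data.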

These back-of-a-napkin calculations also serve to highlight the fact that we don’t have real figures to deal with, and without increased transparency from the AI sector we won’t be able to fully grasp the climate cost of Generative AI.

The climate impact of AI is complex, and with the rise of Generative AI we are seeing a huge surge in energy needed to keep up with the mass use of these tools. Where other AI techniques and technologies are being used to actively fight the climate crisis, with Generative AI the social good trade-off is less clear. To ensure that our relationship with Generative AI technologies is one beneficial to the Earth, we need advocates for increased transparency and we need more people to join the conversation on AI & climate.


At the Scottish AI Alliance, we work towards Scotland becoming a leader in the use of trustworthy, ethical and inclusive AI. Part of this work includes hosting conversations about how we reach that ethical future with AI.

On Wednesday 26 March, 2025, we are forming a People’s Panel on AI & Climate, and we’re looking for people from across Scotland to come join the conversation. If you are interested in the delicate balance of AI & Climate, we would love to have you on our People’s Panel. While hosted in Edinburgh, we can cover travel costs and accommodation to ensure that voices from across Scotland can join.

To find out more, check out: https://www.scottishai.com/news/peoples-panel-ai-climate or email engage@ScottishAI.com to find out how you can host your own AI & Climate community conversation.

The People’s Panel Call-Out ends 08:00 on Monday 10 March, 2025.