Environmental Footprint of Technology
1 | Introduction
The escalating climate crisis looms large, with only six years remaining until we reach 1.5°C of warming (Climate Clock). In response, technology developers and researchers are increasingly turning to machine learning and AI to tackle environmental challenges. Researchers from Climate Change AI, for instance, have highlighted several promising applications of these technologies, including predicting electricity generation, assessing forest carbon stocks, and optimizing building energy use.
The belief that technology — often portrayed as rational and objective — will resolve the "Tragedy of the Commons" (Kopec, 2016) and foster trust where human collaboration has faltered is gaining momentum. Recently, the Bezos Earth Fund's $100 million AI for Climate and Nature initiative has drawn significant attention, and “climate tech” now accounts for 11% of new active corporate venture capital deals, up from just 2% in 2020 (SVB).
But what if our bets are misguided or misplaced? At the annual American Geophysical Union meeting, concerns have been growing about why this past year was hotter than expected (Zhong, 2024). One potential explanation is that the increasing energy consumption associated with technologies such as AI may be affecting the climate in ways that were not previously considered (Gelles, 2024).
The increasing complexity of AI models and the growing size of training datasets are significantly driving up the demand for computational resources and increasing energy consumption. This blog post examines the environmental footprint of a single technological device—the Amazon Echo—tracing its impact from production to deployment. Much of this analysis builds on Crawford et al.’s work, Anatomy of an AI System, where the authors explore the extractive environmental and labor practices behind such technologies. Using the Amazon Echo as a case study, this blog post expands on Crawford’s work by focusing specifically on the role of data centers and the AI technologies embedded within the device.
2 | Amazon Echo
An image of the 3rd Generation Amazon Echo Dot device. Image sourced from WikiCommons.
Consider the Amazon Echo: sleek, black, and round—designed to blend seamlessly into any environment (Crawford et al., 2018). Its simple, non-threatening appearance invites interaction, positioning it as a docile assistant. With its voice-activated AI, Alexa, you can ask for the weather, control the lights, play music, or call friends.
An image of the motherboard of an Amazon Echo device. Image sourced from WikiCommons.
The device has no visible screws and is designed in such a way that you can’t easily open it to see what's inside. But if you were able to lift its exterior, you would find a complex network of circuitry and machinery hidden beneath its minimalist shell. Wilk et al. describe this phenomenon as "Hiding Technology," where designers intentionally conceal the complexity of a device by hiding cables, selecting more streamlined components, and, when possible, making the device itself invisible or completely removing it from view.
This approach makes sense: it’s a key reason why technology has become so seamlessly integrated into our daily lives. By reducing the cognitive load on individuals, it allows people to engage with technology without feeling overwhelmed (Wilk et al., 2021). In this sense, technology has made life easier.
However, this abstraction—keeping the inner workings of technology hidden—also obscures the environmental costs associated with it. When technology is invisible, it’s easy to forget that it has a material presence (Wilk et al., 2021).
This abstraction is not limited to just the cables and wires inside a device, as Wilk et al. note, but it also extends to the physical infrastructure behind phenomena such as “the cloud." While the cloud is often thought of as an intangible, virtual space, it relies on physical data centers that are hidden from view, much like the complex internal circuitry of the Echo is concealed behind its sleek exterior (Burrington, 2014).
The remainder of this blog post explores the obscured physical infrastructure — and by extension, the energy and environmental costs — involved in using an everyday technology such as the Amazon Echo.
3 | “Alexa, Turn Off the Lights”
On a cozy afternoon, a friend and I gear up to watch a movie. The popcorn and snacks are laid out, and after much deliberation, the movie has been picked. One final detail remains: My friend turns to his Echo device and instructs, "Alexa, turn off the lights".
This is what Paay et al. refer to as “mundane tasks”: repetitive, unremarkable tasks embedded in daily life. It is through such tasks that users predominantly interface with their digital personal assistants (Paay et al.).
Step 1: Data Centers
Figure 1 illustrates the steps involved when someone says, "Alexa, turn off the lights." The sound is captured by the Amazon Echo device and sent to the "Cloud." The first step in this process is routing the request to a data center.
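The flow in Figure 1 can be sketched in a few lines of code. This is a deliberately simplified, hypothetical sketch: the function and field names below are invented for illustration, and the real Alexa pipeline is proprietary and far more involved (wake-word detection on-device, server-side speech recognition, intent parsing, and so on).

```python
from dataclasses import dataclass


@dataclass
class VoiceRequest:
    """A captured utterance on its way to the cloud (illustrative)."""
    audio_bytes: bytes
    device_id: str


def route_to_data_center(request: VoiceRequest, region: str = "us-west-2") -> str:
    """Pretend routing step: pick a data center for the request.

    In practice this choice is driven by latency, load, and geography;
    here we simply tag the request with a hypothetical region.
    """
    return f"dc-{region}"


def handle_utterance(audio: bytes) -> str:
    """Capture audio, wrap it as a request, and route it to a data center."""
    request = VoiceRequest(audio_bytes=audio, device_id="echo-livingroom")
    data_center = route_to_data_center(request)
    # Server-side, speech recognition and intent parsing would happen
    # here before a "lights off" command is sent back to the device.
    return f"routed to {data_center}"


print(handle_utterance(b"alexa turn off the lights"))  # routed to dc-us-west-2
```

The point of the sketch is the indirection: even the most mundane command leaves the home and touches physical infrastructure before anything happens to the lights.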
When you send audio information, it first travels to data centers, which have evolved significantly over the years. Today, many data centers are shifting towards Hyperscale Data Centers: massive facilities designed to support large-scale workloads, high volumes of data processing, and extensive computing and storage services. Typically spanning more than 10,000 square feet — roughly the floor area of ten two-bedroom apartments — and housing at least 5,000 servers, these centers are essential to modern digital infrastructure.
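The thresholds above can be expressed as a tiny check. Note that the exact cutoffs vary by source; the constants below simply encode the figures quoted in this post.

```python
# Illustrative thresholds for "hyperscale," per the figures above.
HYPERSCALE_MIN_SQFT = 10_000
HYPERSCALE_MIN_SERVERS = 5_000


def is_hyperscale(square_feet: float, server_count: int) -> bool:
    """Return True if a facility meets both illustrative thresholds."""
    return square_feet >= HYPERSCALE_MIN_SQFT and server_count >= HYPERSCALE_MIN_SERVERS


# Google's facility in The Dalles, Oregon (~1.3 million sq ft) clears the
# area bar by two orders of magnitude. The server count here is a guess.
print(is_hyperscale(1_300_000, 50_000))  # → True
print(is_hyperscale(2_000, 40))          # → False
```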
The first true hyperscale data center is often credited to Google, which launched its facility in The Dalles, Oregon, in 2006 (Google). This facility, which continues to operate today, occupies a staggering 1.3 million square feet and employs around 200 operators.
Figure 2 displays a map of Seattle with 46 opaque circles. Each circle corresponds to the location of a Seattle data center. Red opaque circles vary in size to reflect the kilowatt capacity of the data center in that location. Their radii are based on the following formula: (0.1m)*(kW Consumption Capacity). Seattle’s data centers have a diverse range of capacities, some nearing 50,000 kW — the equivalent of roughly 40,000 residential homes, based on 2020 figures from the EIA. Black opaque circles represent data centers whose kW capacities were not reported and could not be found. Figure sourced from Carr et al., 2022.
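The caption's "40,000 homes" figure is easy to sanity-check. The sketch below assumes the EIA's 2020 average of about 10,715 kWh per year for a US household (so a mean draw of roughly 1.22 kW) and reuses the caption's radius formula; both constants are taken at face value rather than re-derived.

```python
HOURS_PER_YEAR = 8_760
AVG_HOME_KWH_PER_YEAR = 10_715  # EIA 2020 average US household (assumption)
AVG_HOME_KW = AVG_HOME_KWH_PER_YEAR / HOURS_PER_YEAR  # mean draw, ~1.22 kW


def homes_equivalent(capacity_kw: float) -> float:
    """Number of average homes whose mean draw matches the given capacity."""
    return capacity_kw / AVG_HOME_KW


def circle_radius_m(capacity_kw: float) -> float:
    """Map radius from the caption's formula: (0.1 m) * (kW capacity)."""
    return 0.1 * capacity_kw


print(round(homes_equivalent(50_000)))  # ~40,000-41,000 homes, matching the caption
print(circle_radius_m(50_000))          # 5000.0 metres of map radius
```

A 50,000 kW facility comes out to roughly forty thousand average households, consistent with the figure's claim.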
Traditionally, companies built their data centers in remote locations to take advantage of cheap land and lower electricity costs. However, there has been a significant shift in this trend. The need to be closer to end users— to improve latency and enhance user experience—has led many companies to reconsider their geographical choices. This has introduced new challenges: hyperscale facilities now require substantial investment, vast amounts of land, access to water for cooling, and highly skilled labor.
Connectivity is a crucial consideration in designing these large-scale facilities. Data centers must be built with multiple, robust network connections in mind, incorporating diverse fiber routes, metro fiber networks, and other critical infrastructure. Where electronic impulses were once transmitted over copper wires, today’s data is carried by light signals traveling through glass fiber lines. These fiber connections are what link the servers and enable the rapid data transfer that powers modern applications.
Redundancy is another key feature in hyperscale data centers. Essentially, this means that systems are built with backups—multiple layers of support infrastructure—that ensure seamless operation, even if one component fails. This level of resilience is critical in maintaining the uptime and reliability that users expect.
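The value of redundancy can be quantified with a standard reliability formula: if a single component is available a fraction a of the time and failures are independent, n redundant copies give a combined availability of 1 − (1 − a)^n. The 99.9% figure below is illustrative, not any provider's published SLA.

```python
def combined_availability(single: float, copies: int) -> float:
    """Availability of n independent, redundant components: 1 - (1 - a)^n."""
    return 1 - (1 - single) ** copies


single = 0.999  # one component at "three nines" (illustrative)
for n in (1, 2, 3):
    print(n, f"{combined_availability(single, n):.9f}")
```

Each extra layer of backup multiplies the probability of total failure by another factor of 0.001, which is why hyperscale operators duplicate power feeds, network paths, and cooling rather than relying on any single system.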
For instance, Google’s planned $1 billion campus in Hertfordshire, UK, illustrates the scale and complexity of these projects. Hyperscale data centers like these are not just large—they are fundamental to the modern digital economy.