A new study traces artificial intelligence from mines and factories to data centers and digital piecework, arguing that the real stakes of AI lie in the global systems of extraction and power that sustain it—and that it strengthens.

A technician works on servers in a data center.

Brussels – Artificial intelligence has a people problem. Not because there are too few machine‑learning engineers—though they are scarce and richly paid—but because far more human hands are holding up the edifice than the industry likes to admit. A new study released this month follows AI’s supply chains from the salt flats of the Andes to Chinese electronics plants and European and U.S. data centers, and into the browser windows of millions of digital workers who label, filter and correct the data that models ingest.

The thesis is blunt: behind the sleek apps and headline‑grabbing models sits a web of extraction—of minerals, water, energy and human labor—that AI depends on and, in many cases, reinforces. ‘Automation’ is not the end of work so much as a redistribution of it, the authors argue, away from celebrated campuses and toward overlooked sites at the edge of global markets.

Start at the beginning of the chain. The hardware powering today’s AI boom relies on minerals with complicated footprints: lithium from high‑altitude brine fields, cobalt from the Democratic Republic of Congo, rare‑earth elements refined in China, and tin from Southeast Asian deposits. In each case, the study documents environmental costs that are unevenly distributed. Dry regions face water stress as brine is pumped and evaporated for lithium; Congolese mines have long struggled with safety and labor‑rights violations; and rare‑earth processing leaves toxic tailings that require careful, long‑term management. These are not abstract risks. They are visible pits, tailings ponds and scarred hillsides that shift local economies and ecosystems.

From ores to wafers, the trail runs through foundries and factories where clean rooms meet repetitive, low‑autonomy tasks. On the assembly lines that turn chips into servers and phones, the workforce is large, young and often on short‑term contracts. The study highlights how outsourced and agency workers outnumber direct employees across parts of the electronics industry—an echo of a wider trend in tech, where contractors maintain buildings, clean offices, secure sites and, increasingly, help operate data centers.

Those data centers—temples of the digital economy—anchor AI’s material footprint. They consume huge volumes of electricity to train and run models, and they need water to keep racks cool. In regions where grid mixes remain heavily fossil‑based, the carbon intensity of AI services can be far higher than the marketing suggests. Where water is scarce, withdrawals for evaporative cooling spark community pushback. The energy intensity of frontier model training has grown faster than the efficiency gains of chips and software, the authors warn, making location and power sourcing critical to any credible sustainability plan.

Yet the most intimate layer of AI’s labor economy is not in server halls but on screens. Millions of people worldwide—crowdworkers who annotate images, write snippets of text, transcribe audio and flag toxic content—prepare and police the data that make models legible. Their work is essential and often invisible. Pay is typically pegged to short tasks that can take longer than advertised; benefits are sparse; trauma exposure for content moderators is real. In many large technology companies, this contracted ‘shadow workforce’ now exceeds the number of full‑time employees.

The study calls this the ‘illusion of automation’: systems that appear seamless because thousands of people, often paid per micro‑task, hold the seams together. Where chatbots seem polite, moderators have filtered the bile. Where models seem factual, annotators and raters have corrected outputs and ranked answers. Where assistants seem instantaneous, human operators triage edge cases and escalate failures. The last mile of AI is a relay of humans smoothing the rough edges of code.

The environmental accounting is porous, too. Companies trumpet power‑purchase agreements and renewable credits, but the net climate impact depends on when and where consumption occurs. A data center drawing on a fossil‑heavy grid at night has a very different footprint than one co‑located with flexible renewables and storage that can serve training runs on low‑carbon power. Water metrics vary widely and are inconsistently reported. The authors argue that the ecological costs of AI remain undercounted and rarely internalized into service prices.
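The arithmetic behind that point is easy to sketch. The snippet below is not drawn from the report; it is a minimal, illustrative calculation with invented load and grid‑intensity figures, showing how the same 24 hours of compute yields very different totals depending on when and where the electricity is drawn.

```python
# Illustrative only: hourly carbon accounting for a fixed compute workload.
# The load profile and grid intensities (gCO2/kWh) are hypothetical numbers.

hourly_load_mwh = [4.0] * 24  # a steady 4 MWh draw, every hour of the day

# Hypothetical carbon intensity for two siting choices.
fossil_heavy_grid = [650 if (h < 7 or h > 18) else 480 for h in range(24)]  # dirtier at night
renewable_grid    = [120 if 9 <= h <= 16 else 300 for h in range(24)]       # cleaner midday

def tonnes_co2(load_mwh, intensity_g_per_kwh):
    """Sum hourly energy (kWh) times hourly intensity (g/kWh), converted to tonnes."""
    return sum(l * 1000 * i for l, i in zip(load_mwh, intensity_g_per_kwh)) / 1e6

print(f"fossil-heavy grid: {tonnes_co2(hourly_load_mwh, fossil_heavy_grid):.1f} t CO2/day")
print(f"renewable-backed:  {tonnes_co2(hourly_load_mwh, renewable_grid):.1f} t CO2/day")
```

Same workload, same day, a footprint more than twice as large in one location as the other; annual averages and purchased credits hide exactly this difference.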

Beyond description, the report attempts a contentious calculation: how much of the sector’s extra profits can be attributed to low pay and uncompensated human input? The authors build a conservative range using three buckets: (1) the gap between local living‑wage benchmarks and prevailing pay for annotation and moderation; (2) the unpaid, captured labor embedded in ubiquitous online tests and micro‑tasks such as CAPTCHA‑style puzzles; and (3) the margin captured by platforms that intermediate gig work. Their global estimate—a low‑single‑digit share of large AI players’ revenue—still comes out to several billions of dollars annually. The methodology is transparent, if imperfect: it triangulates public filings, procurement notices, academic surveys and labor‑platform data, with wide error bars and caveats about regional variation and task complexity.
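The shape of that calculation is easier to see in code than in prose. The sketch below is not the authors’ model; every figure in it is an invented placeholder, meant only to show how the three buckets combine into a range and why the error bars stay wide.

```python
# Hypothetical illustration of the report's three-bucket structure.
# All figures are invented placeholders, not the study's data.

# Bucket 1: gap between living-wage benchmarks and prevailing annotation/moderation pay.
annotation_workers = 1_500_000            # assumed global headcount
wage_gap_per_worker = (500, 2_000)        # assumed annual shortfall range, USD

# Bucket 2: unpaid, captured labor embedded in micro-tasks (e.g. CAPTCHA-style puzzles).
unpaid_hours = (0.5e9, 2.0e9)             # assumed hours of captured labor per year
implied_hourly_value = 2.0                # assumed USD per hour

# Bucket 3: margin captured by platforms that intermediate gig work.
platform_task_volume = (5e9, 15e9)        # assumed annual task spend, USD
excess_margin = (0.05, 0.15)              # assumed share retained beyond service cost

low  = (annotation_workers * wage_gap_per_worker[0]
        + unpaid_hours[0] * implied_hourly_value
        + platform_task_volume[0] * excess_margin[0])
high = (annotation_workers * wage_gap_per_worker[1]
        + unpaid_hours[1] * implied_hourly_value
        + platform_task_volume[1] * excess_margin[1])

print(f"illustrative range: ${low/1e9:.1f}B to ${high/1e9:.1f}B per year")
```

Even with deliberately cautious placeholder inputs, the sum lands in the billions, which is why the authors insist the point survives the uncertainty in any single bucket.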

Not everyone will buy the arithmetic. Some executives argue that AI also creates new, better‑paid roles and accelerates productivity in sectors from healthcare to engineering. Others say platform work is a stepping stone for people who value flexibility. The report does not dismiss these points; it argues instead for paying the real costs of the system we have, rather than the one we wish we had.

The authors’ policy recommendations land in three clusters. First, recognize and remunerate human contribution: require transparent disclosure of the share of model performance attributable to human‑labeled data; set minimum standards for pay, benefits and mental‑health support for moderators and raters; and ensure that public tenders do not reward the lowest bid at the expense of basic protections. Second, shine light through the materials chain: adopt due‑diligence rules for critical minerals, publish water‑use and emissions data with time‑of‑day granularity, and tie tax incentives for data‑center development to low‑carbon power and closed‑loop cooling. Third, internalize ecological costs: phase in carbon‑and‑water prices for large compute projects and align depreciation schedules with realistic lifecycles of gear to discourage wasteful churn.

For companies, the study sketches practical checklists. Map the real workforce—including contractors—and set floor standards that apply across suppliers. Share a portion of product revenue with the communities that contribute training data and with the workers who shape outputs. Treat safety and reliability testing as labor that merits pay, not just volunteer ‘feedback’. Push engineering to design for efficiency and load‑shifting so that the dirtiest hours of the grid are not when models do their heaviest lifting.
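Load‑shifting, in particular, is as much a scheduling problem as a hardware one. A minimal sketch, using a made‑up hourly carbon forecast rather than any real grid data: find the cleanest contiguous window long enough for a deferrable job and run it there.

```python
# Minimal load-shifting sketch: place a deferrable job in the cleanest window.
# The 24-hour carbon-intensity forecast (gCO2/kWh) is a made-up example.

forecast = [620, 610, 600, 590, 560, 500, 420, 340, 260, 190, 150, 140,
            135, 150, 180, 240, 330, 430, 520, 580, 600, 615, 625, 630]

def cleanest_window(forecast, hours_needed):
    """Return (start hour, average intensity) of the lowest-carbon contiguous window."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - hours_needed + 1):
        avg = sum(forecast[start:start + hours_needed]) / hours_needed
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

start, avg = cleanest_window(forecast, hours_needed=6)
print(f"run the 6-hour job starting at {start:02d}:00 (avg {avg:.0f} gCO2/kWh)")
```

The logic is trivial; the hard part, the study notes, is that most procurement and billing systems give operators no incentive to run it.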

What would success look like? Not a return to artisanal computing, but an industry that acknowledges its human and environmental foundations and prices them honestly. Models would still get built; progress would continue. But the people who mine, solder, ship, label and moderate would show up in the story—and on the balance sheet. The study’s provocation is simple: AI will be judged not only by what it can do, but by what it chooses to pay for.
