Using AI can seem like magic: type a prompt, create an automated process, or give a command, and the answer just appears. However, AI's growth, power, and ease of use mask the resources required to underpin it. Every AI model rests on a substantial industrial backbone of minerals, manufacturing, electricity, and water, particularly the data centers that deliver the computing power behind it. Investment is accelerating accordingly: research from S&P Global found that over $61 billion flowed into the data center market in 2025.

And usage continues to grow at both a business and a personal level. According to a recent survey from my company, Prosper Insights & Analytics, 54.4% of executives already use generative AI, with a further 19.6% planning to adopt it going forward.

Nearly half (48.6%) of all Americans over 18 have used generative AI for research.

Understanding the risks of AI

With AI becoming more embedded in business life, organizations need to better understand its risks. Traditionally, these have focused on areas such as avoiding algorithm bias, protecting data privacy, and finding sufficient relevant and high-quality data for training AI models.

However, widescale AI adoption brings new risks for businesses, as Dr. Albert Meige, Global Director of Arthur D. Little’s (ADL’s) Blue Shift Institute explains. “AI has three key resource vulnerabilities – environmental impacts, energy supply, and compute infrastructure. AI’s dependence on these resources leads to vulnerabilities that can significantly affect its future growth trajectory, with implications not just for players in the AI value chain itself, but also enterprise end users.”

Drilling down into these vulnerabilities makes for sobering reading:

  • Environmental impacts — these include emissions resulting from AI’s heavy energy usage, water for cooling, and mineral usage for the manufacturing and fabrication of the chips and other hardware.
  • Energy supply — not just the cost of electricity itself but ensuring its availability, which can be constrained by strain on the electricity grid.
  • Compute infrastructure — issues relating to supply chain choke points and dependencies on dominant regions, states, and providers for IT.

The impact of dependencies on scaling AI

As AI increasingly becomes critical infrastructure, businesses are moving from AI pilots to full-scale production, making AI central to their organizational objectives and success. More complex applications such as agentic AI increase energy needs exponentially. This brings greater risk – a company’s AI plans could be held back or even stopped completely if it fails to safeguard access to sufficient AI resources and compute power.

At the same time, there is little transparency about the environmental impact of AI models. Companies are losing visibility into the environmental footprint of their activities, especially as AI tools become increasingly embedded in broader solutions.

Complicating this, the growth trajectory of AI adoption remains unclear, which can hold back investment in infrastructure. This could lead to shortages of resources such as compute power or electricity, and to supply chain dependencies on specific regions, states, or providers.

The problem is that it is currently hard to plan for AI's true level of compute demand going forward, or for its access and availability. These are affected by a host of factors, including power generation, grid capacity, geopolitics, and supply chain resilience for data centers and chip manufacturing.

The need for strategic, board-level planning to address these risks

This puts the onus on boards to plan for and guard against these risks. Previously, they have focused primarily on AI challenges such as securing talent, inaccurate or biased algorithm results, and the reputational and financial damage of failed projects.

Now, they need to equally understand and address emerging infrastructure dependencies that could prevent their access to sufficient resources for AI success, against a backdrop of uncertainty in the market.

Given this uncertainty, there is no clear path for how the market will develop. Organizations therefore need to adopt scenario planning to cover the potential outcomes of different combinations of demand and access. This will allow them to identify and take “no-regret” actions that strengthen their efficiency, adaptability, and resilience.

In a recent Blue Shift report, AI's Hidden Dependencies, ADL outlines four scenarios: AI divided (tempered demand, tight access); Bubble burst (tempered demand, ample access); Compute wars (surging demand, tight access); and Gigaflow, the most optimistic (surging demand, ample access).

“We see four potential scenarios for future AI development, based on a combination of low to high demand and low to high access to AI,” comments Albert Meige of ADL. “Currently it is impossible to predict which scenarios, or combinations of scenarios, are most likely. But that doesn’t mean companies should do nothing – there are activities to prioritize that will deliver benefits in every scenario.”

The “no regret” actions to take now to mitigate AI risk

Whichever scenario comes to pass, ADL identifies three areas to focus on. Firstly, organizations need to gain visibility and control over the real footprint of their company’s AI use in order to restore environmental credibility. This starts with embedding environmental disclosure around the use of energy, water, and materials in supplier contracts, while pushing suppliers to adopt comparable, auditable metrics to avoid inconsistent or partial reporting. Given that there are likely to be future regulations around AI environmental reporting, organizations should start to prepare now.

At the same time, they must anticipate the real cost of AI, ensuring that investment remains predictable and aligns with real business value. It is vital to include the true cost of computing, energy, and water in financial planning, particularly as today’s prices are artificially low. Prioritizing efficiency and securing long-term pricing from providers will also ensure costs remain predictable, particularly as usage grows.

Finally, they have to build strategic resilience, which means having the flexibility and freedom to move and adapt across providers and jurisdictions. Rather than being tied to a single provider, architecture should be able to segment workloads across on-premises, sovereign, and public AI platforms, allowing firms to balance cost, compliance, and control. This should also make migration between clouds, vendors, or regions easier and enable provider diversification. As with any vital resource, organizations should map their end-to-end supply chain, identifying any potential weak points or risks and putting measures in place to counteract them.
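For technically minded readers, the segmentation described above can be expressed as a simple routing policy. The sketch below is purely illustrative and assumes a hypothetical classification of workloads by data sensitivity and latency needs; it is not ADL's framework or any vendor's API.

```python
from dataclasses import dataclass

# Hypothetical platform tiers mirroring the segmentation described above.
ON_PREM = "on-premises"
SOVEREIGN = "sovereign cloud"
PUBLIC = "public AI platform"

@dataclass
class Workload:
    name: str
    data_sensitivity: str   # "restricted", "regulated", or "general" (assumed labels)
    latency_critical: bool

def route(workload: Workload) -> str:
    """Pick a platform tier for a workload.

    Illustrative policy: restricted or latency-critical work stays
    on-premises; regulated data goes to an in-jurisdiction sovereign
    cloud; everything else can use cheaper public AI platforms.
    """
    if workload.data_sensitivity == "restricted" or workload.latency_critical:
        return ON_PREM
    if workload.data_sensitivity == "regulated":
        return SOVEREIGN
    return PUBLIC
```

Because the routing logic is isolated in one function, swapping a provider or adding a jurisdiction means changing policy in one place rather than re-architecting each application, which is the kind of migration flexibility the report recommends.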

Across the globe, AI is becoming a vital part of how businesses operate. While its true growth and impact are still difficult to predict, now is the time for organizations to safeguard their future AI infrastructure, ensuring they have access to the right resources to support their needs, whatever scenario plays out.

Disclosure: The consumer sentiment study referenced above was conducted by my company, Prosper Insights & Analytics. This is the same dataset used by the National Retail Federation, and available from Amazon Web Services, Bloomberg, and the London Stock Exchange Group for economic benchmarking.