What does artificial intelligence look like around the world?

Presumably, any number of think tanks and independent researchers are focused on this question, and on related ones about how nations handle AI. As with the general trajectory of the technologies, a lot is unknown, but there are also observable trends and notions that interested parties can use to speculate on what we’re most likely to see in the next few years.

Part of the backdrop that almost has to be mentioned is the Sino-American “AI race,” in which the U.S. White House famously waffled on whether to grant China’s market access to Nvidia H100s, H200s, B200s, etc. But there’s a lot more context, too.

Last week, I wrote about the new merger of Stanford’s HAI lab and the university's data science initiative. Here are some takeaways from Stanford HAI’s report for 2026:

· AI capability is not plateauing. It is accelerating and reaching more people than ever.

· The U.S.-China AI model performance gap has effectively closed.

· The United States hosts the most AI data centers, with the majority of their chips fabricated by one Taiwanese foundry.

Number three is notable, because control of foundry services is, by any measure, a big deal, a “choke point,” as one expert calls it (read on).

Speaking of Stanford HAI, in a segment of our Imagination in Action event at MIT April 9-10, we had a panel talking about today’s international realities around AI. (Imagination in Action, with which I am affiliated, runs a number of annual events centered on technology.)

Simran Chana, Director of Cambridge Frontier Technologies Laboratory, interviewed Mark Machin, Co-Founder of Intrepid Growth Partners; Alvin Graylin of the newly consolidated Stanford HAI; and Sean Batir, currently of AWS, whose experience in the military world included a leadership role on DoD’s Maven AI tool.

“We're living in an increasingly multi-polar world,” Batir said, in reaction to a question about new international realities. “And I think one of the key elements for us to consider when we think about the chessboard is, it's no longer two-dimensional, right? If you think about how people have typically been thinking about military strategy and power of nations, and two axes of military power and economic power, those still very much exist, but now we have evolved into a world where there are three substrates underneath it, where there is, of course, the infrastructure of data centers and of compute that powers a lot of these capabilities.”

Machin talked about the strategic value of “choke points.”

“The capital markets are huge in the U.S.,” he said. “It's become one of only two effective big IPO markets in the world. That's a huge advantage in the effectiveness of actually driving capital away from China, as well as a way of decreasing the availability of capital to drive innovation and growth in the country.”

Graylin challenged an assumption that, in his view, brings confusion to the international stage: that one clear winner will hit a “finish line” and definitively “win at AI.”

“There’s no clear finish line,” he said. “AI is not a 100-meter race, right? We can see that every week there's a different model that leads and, you know, the gap between open source and closed source is getting much, much closer.”

He also spoke to the diversity of methods for AI enhancement, beyond just scaling.

“A lot of the gains that are coming are not purely from having more chips,” he explained. “It's actually from algorithmic improvements. It's from distillation. It's from quantization, and also just different approaches, right? All of those are actually giving either linear or sometimes exponential gains.”
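To make one of those methods concrete, here is a minimal sketch of the idea behind quantization: storing model weights as low-precision integers plus a scale factor, so the same hardware holds and moves roughly a quarter of the data. All names and numbers below are illustrative, not drawn from any particular model.

```python
import numpy as np

# Toy post-training quantization: compress float32 "weights" to int8
# plus one per-tensor scale, cutting memory roughly 4x.
rng = np.random.default_rng(0)
weights = rng.standard_normal(1024).astype(np.float32)

# Map the largest-magnitude weight to the int8 limit (127).
scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)

# To use the weights, rescale back to float; rounding error is
# bounded by half the scale per element.
dequantized = quantized.astype(np.float32) * scale

print("memory ratio:", weights.nbytes / quantized.nbytes)  # 4.0
print("max abs error:", float(np.abs(weights - dequantized).max()))
```

Real systems add refinements (per-channel scales, calibration data, 4-bit formats), but this is the core trade captured in Graylin's point: accepting a small, bounded approximation error in exchange for running the same model on far fewer chips.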

Partnerships and Initiatives

Batir talked about private companies supporting governments.

“AWS, and Google and Microsoft as well, are all supporting different parts of the national security fabric,” he said. “But when you go to another government, sometimes the compliance and cybersecurity requirements are slightly different, typically driven by their signals intelligence agency or cybersecurity experts, and as a result, when we seek to mitigate and sort of harden those networks, sometimes they translate, and sometimes they don't. And I think one of the key elements is building that strong relationship between the private and public sectors.”

“There’s an increased urgency, starting with defense tech,” Machin said, noting some of the ways that Europe, for example, lags behind the U.S. and China. “The adoption is the big thing.”

Graylin, explaining that many countries are going to be seeking to modify open source models, also characterized the assertions of U.S. companies about Chinese competition as, in a way, misguided. He explained:

“You hear from the American labs, ‘we can't slow down because China is going to beat us’ and ‘you can't regulate us, because China is going to go faster.’ The reality is, if you go and look at the Chinese regulation, there are some of the stiffest regulations on AI models around the world. They force them to certify every model before they go out for safety. They make sure of the provenance of the data, that there are good, clean sources. Anything that's put into the world, if it creates harm, there's liability on those model makers, right? None of that is actually the case for American labs today. And so, you know, using the excuse that we can't slow down because China is going to go faster is actually a bit of a misdirection.”

Chana followed up on this, asking the panel if fear of missing out drives AI work that might be going a little too fast.

“Mission drives our development,” Batir said. “There’s a saying that we have in the building: ‘never waste a good crisis,’ because it is through crisis that you realize what resources truly are constrained, versus where the bureaucracy of paper pushing and people being afraid to take risk disappears, because when human lives are at stake, when your own countrymen are at stake, I think that pushes people to think, ‘Okay, how do I fine tune?’ Or, let's get even simpler, ‘how do I simply build the necessary knowledge base to query the necessary intelligence, to then inform an operation to occur in minutes instead of hours?’”

“I think there's a level of guardrails that is needed for consumer or commercial use cases, versus military,” Graylin said, “because, for military uses, they take away the guardrails, because you need that flexibility, right? So the models that are being used in military use cases are probably a year to a year and a half old, compared to what's commercially available.”

On open source, Machin had this to say:

“Every time there's a breakthrough in China in an open source model, and it's published, they're forced to make these innovations down at the kernel level. But if you're on the (American) West Coast, you just need to look at your screen in the morning, read the paper, drink your coffee, and implement, put it straight into production in your models.”

Graylin noted that the open source approach does help Chinese labs spend a lot less than their American counterparts.

Toward the end, Chana talked a bit about ongoing work at Cambridge, and surveyed participants on whether AI is leading toward a dystopian future, or one of abundance. I thought the resulting convo was interesting, but so is the above, on how we break down, if you will, the “diplomacy” of AI, in a world that seems frenetically in competition.