The $242 billion bet on artificial intelligence this quarter has a problem nobody wants to name: it's making us all agree.

Every major AI system on the market was trained on the same internet, optimized by similar processes, and rewarded for producing answers that people rate highly. Which means they are all, at a structural level, converging on the same conclusions. You are not getting multiple perspectives when you consult multiple AI tools. You are getting the same perspective, wearing different fonts.

This is not a glitch. It is not a temporary limitation waiting to be solved by the next model generation. It is a structural property of how these systems work. And the sameness it produces is spreading well beyond the tools themselves, into the writing we publish, the decisions we make, the ideas we think are our own.

There is a data visualization technique called t-SNE that researchers use to map how similar or different the points in a dataset really are. Feed it thousands of responses to the same question, and it compresses them into a two-dimensional map. Similar things cluster together. Different things are spread apart. If you ran that map on every major AI system answering the same battery of complex questions, questions about justice, about trade-offs, about what matters and why, you would see something striking. The AI systems would form a tight neighborhood on the map. Then you would ask the same questions to a thousand humans, different ages, different cultures, different faiths, different failures, different loves. And they would scatter across the entire space.
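The thought experiment can be sketched in a few lines with synthetic data: a tight cluster standing in for AI responses, a wide scatter standing in for human ones, projected by scikit-learn's TSNE. The vectors here are invented purely to illustrate the geometry; they are not drawn from any real model.

```python
# Hypothetical illustration with synthetic "response" vectors, not real model outputs.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# 60 "AI responses": points drawn tightly around a single center.
ai = rng.normal(loc=0.0, scale=0.3, size=(60, 50))
# 60 "human responses": points scattered widely across the same space.
human = rng.normal(loc=0.0, scale=3.0, size=(60, 50))

points = np.vstack([ai, human])
# Compress 50 dimensions down to the 2-D map described above.
embedded = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(points)

# Rough measure of how much of the map each group occupies.
print("AI spread:", embedded[:60].std(axis=0).mean())
print("human spread:", embedded[60:].std(axis=0).mean())
```

On this toy data, the first sixty points form the tight neighborhood and the rest scatter, which is all the metaphor in the text requires.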

That map is not just a technical curiosity. It is a portrait of what we are trading away. And $242 billion of venture capital this quarter is accelerating the trade.

The Whistle in the Cereal Box

In 1971, a young engineer named John Draper discovered that a toy whistle included in boxes of Cap'n Crunch cereal produced a tone at precisely 2600 hertz, the exact frequency used by AT&T's long-distance switching systems to indicate an idle line. If you blew the whistle into a telephone handset at the right moment, you could manipulate the network. Make free long-distance calls. Access operator trunks. Explore the hidden architecture of a system that was never meant to be explored.

Draper, who became known as "Captain Crunch," was the subject of a 1971 Esquire article that caught the attention of two young men in the Bay Area named Steve Wozniak and Steve Jobs. They tracked Draper down, built a device called the blue box that reproduced the tone electronically, and sold the boxes out of the Berkeley dorms, primarily because it was, in Wozniak's own telling, the most amazing thing he had ever encountered. A door hidden inside a cereal box.

That delight, disproportionate, mischievous, completely indifferent to the intended use of the system, is the direct ancestor of Apple Computer. Not the discipline. Not the methodology. Not the academic rigor. The whistle.

Douglas Engelbart gave what historians call the "Mother of All Demos" in 1968, showing the world the mouse, hypertext, video conferencing, and collaborative editing in a single ninety-minute presentation. He was not following a research roadmap. He had a vision, formed years earlier, of what human beings might become if the right tools extended rather than replaced their natural capabilities. The vision came from somewhere that had nothing to do with academic protocol. He spent years unable to get funding because his ideas were too strange for the establishment to evaluate. Too scattered. Too far from the cluster.

Alan Kay conceived of the Dynabook in 1972, a personal computer for children, portable, connected, responsive to touch, designed around how children actually learn rather than how institutions teach. It took forty years to approximate the iPad. And even the iPad missed his actual point, which was never about the device. It was about preserving and extending the wandering quality of a child's mind rather than domesticating it.

These men were not performing discipline. They were following something that felt more like an obsession than a methodology. They were outliers on the t-SNE map. Points so far from the cluster that the algorithm would flag them as anomalies.

And it is not only the famous ones. Every working parent who ever rigged a better solution to a problem nobody else noticed, every kid who took apart something that wasn't supposed to be taken apart, every grandmother who developed a way of reading people that she could not explain but was never wrong about, those are outliers too. Unrepeatable. Untransferable. Utterly human. The map is full of them. That is the point of the map.

The anomalies built the world we live in.

What School Actually Trains

I want to be careful here because I am not arguing against education. I am making an argument about what education measures and what it misses.

Formal education is an industrial system designed to produce reliable cognitive workers. It rewards memory over insight, compliance over creativity, the right answer over the interesting question. From the earliest grades, children are taught to focus. To sit still. To stop daydreaming. To memorize facts and figures and demonstrate that memorization on demand. The metrics of academic success are metrics of discipline: the ability to absorb what you are told, reproduce it accurately, and do so consistently across years of increasing complexity.

That is a genuine and valuable capability. I am not dismissing it. The world needs people who can execute with precision and consistency.

But it is not the capability that changes the world. And increasingly, it is not a capability that justifies the investment required to develop it, because we have now built a machine that does it better than any human ever could or will.

The machine has read everything. Memorized everything. Can reproduce any fact, any style, any argument, any format on demand, at any hour, without fatigue, without ego, without the need for tenure. It will get an A on every test ever designed.

It will never blow the whistle in the Cap'n Crunch box.

Not because it lacks the information. It has all the information. But the whistle was not about information. It was about a specific quality of human attention, the kind that looks at a system and sees not what it was built to do but what it might accidentally be made to do. A quality that is suspicious of intended use, enchanted by accident, willing to look foolish, and completely indifferent to whether the behavior is on the syllabus.

That quality cannot be trained into a model. It requires a childhood. It requires boredom. It requires a teacher who says I don't know. It requires failure that actually hurts and idleness that feels almost sinful, and a mind that wanders during the lesson because something out the window is more interesting than what's on the board.

It is, in the most precise sense, the opposite of optimization. And we are building an economy on optimization while telling ourselves we are investing in human potential.

The Pineapple on the Table

In 17th- and 18th-century Britain, pineapples were the ultimate status object. They came from the Caribbean, survived the Atlantic crossing rarely and unpredictably, cost what a laborer made in months, and spoiled within days of arrival. You could not really eat yours. You displayed it. You carried it through the street, tucked under your arm. You placed it at the center of your dinner table, where the right people would see it and understand immediately that you were serious about money.

Georgian manors had pineapples carved into their stone gateposts. There were pineapple rental services in London. You could hire one for the evening, display it at your table, and return it the next morning. Pure performance of wealth with zero underlying substance.

Eventually, someone figured out how to grow pineapples in heated greenhouses in England. Supply increased. Price collapsed. The pineapple went from ultimate luxury to grocery store commonplace. The moment it became accessible, it became worthless as a signal. The status evaporated with the scarcity.

I think about this every time I see a press release announcing an AI strategy. Every earnings call where a CEO, regardless of industry, mentions their AI roadmap. Every job posting that demands AI fluency. Every strategy deck that mentions AI seventeen times.

These are not primarily operational decisions. They are status performances. The AI announcement is the pineapple on the table. It says we are a sophisticated modern enterprise. It says we belong at this table.

And just like the colonial pineapple rental, a lot of it is borrowed. They are running on OpenAI's API, Microsoft's Azure wrapper, and Google's tools. They do not own the pineapple. They rented it for the board presentation.

The greenhouse opened. The API key costs $20 per month. The pineapple is no longer crossing the Atlantic one at a time. And yet the performance continues because the anxiety that drives status signaling does not respond to logic. People keep carrying the pineapple even after the scarcity is gone because stopping feels like falling behind.

That is what $242 billion of venture capital bought in the first quarter of 2026. Not transformation. Not moats. Mostly pineapples, carried very expensively through very crowded rooms.

The problem is not just that the money is misallocated. It is that the performance of innovation is being mistaken for the thing itself. And the systems that were built to tell the difference, to separate the genuine from the theatrical, are now being handed the same tool that generated the theater.

The Gauntlet Was the Point

The founders of the American Republic were not naive about human nature. They had just lived under a system where power moved fast and unchecked, so they built its replacement to move slowly and deliberately. The bicameral legislature, the veto, the override, the judicial review, the amendment process: every one of those mechanisms is a speed bump. A point of resistance. A place where bad ideas are supposed to die before they become law.

The legal system is the purest expression of this philosophy. Discovery is painful. Depositions are expensive. Appeals take years. Cross-examination is brutal. The cost of litigation is not an accident of inefficiency. It is the mechanism by which weak arguments are exposed, manufactured consensus is challenged, and power is held accountable. The friction is the function.

The same logic runs through every serious credentialing system. The bar exam is hard. Medical licensing is hard. Getting a patent requires rigorous prosecution. Bringing a drug to market requires clinical trials that take years and cost hundreds of millions of dollars. All of that friction is civilizational immune function. It kills weak ideas before they scale. It is expensive and painful, and it works.

Warren Buffett has understood this for six decades. Berkshire Hathaway is the single most devastating rebuttal to the narrative that AI changes the fundamental laws of business. Not because Buffett is anti-technology. He is not. But because what Berkshire has compounded across generations is explicitly, deliberately, boringly friction-dependent.

The moat is built over decades through customer trust, brand equity, operational discipline, and the thousand small decisions that never make a press release. You cannot prompt-engineer a moat. You cannot fine-tune your way to a Geico or a See's Candies. Sustainable value creation is slow by definition. The moment it becomes fast, you are probably looking at arbitrage, not business.

AI is the most powerful leverage tool in the history of business. Leverage amplifies what is already there. If what is there is good, it grows faster. If what is there is hollow, it collapses with a louder sound. The fundamentals did not change. The speed of finding out whether you got them right did.

The Flattening No One Ordered

Every previous information revolution, the printing press, radio, television, and the internet, expanded the number of voices in the conversation while also creating new concentrations of power. Net effect across all of them: more diversity, more noise, more conflict, more creativity. The conversation got louder and more contentious and harder to navigate, and produced more original thought per century than any prior period in human history.

AI is the first information revolution that optimizes for agreement. It is, by design, a consensus machine.

It is structurally incentivized to find the center, validate the user, reduce friction, and avoid offense. The training process rewards responses that humans rate highly. Humans, when rating AI responses, tend to rate coherent, confident, balanced, and inoffensive answers highly. So, the model learns to produce coherent, confident, balanced, inoffensive answers. The range of what feels thinkable quietly narrows because the systems people use to think have a center of gravity. The output is sameness. Sameness at scale. Sameness that compounds every time another human reaches for the tool instead of their own judgment.

This is already visible in culture. AI-generated creative output is converging on the same aesthetic register: warm, slightly informal, relentlessly solution-oriented, never truly strange. Stock imagery looks the same. Marketing copy sounds the same. As more human creative output passes through the same generative layer, the diversity contracts. Not because anyone banned anything. Because optimization selects for what the training data rewarded. The scatter on the map contracts toward the cluster.

The most literal demonstration of this happened in plain sight. In February 2026, WPP announced an expanded partnership with Adobe built around Firefly, Adobe’s CDP, and what the press release called an "agentic content supply chain." In April, Omnicom announced the identical deal. Same Adobe stack. Same language. Same "Transformation Practice." Two of the largest holding companies on earth, serving overlapping Fortune 500 rosters, shipped functionally identical offerings two months apart. Read the two press releases side by side, and the sameness is not a metaphor. It is the distribution strategy.

Think about what that actually means for the brands paying them.

WPP and Omnicom compete directly for the same Fortune 500 marketing budgets. A brand that pays WPP to differentiate itself in the market is now running on the same Adobe Firefly models, the same CDP infrastructure, and the same agentic content pipeline as the brand paying Omnicom to differentiate itself in the same market. The holding company is a wrapper. The wrapper has a different logo. The stack underneath is identical.

This is not a technology story. It is a business model story. The holdcos are not building creative intelligence. They are reselling a platform and charging transformation fees to do it. And because every major holdco will eventually sign the same deal, because Adobe's entire strategy depends on it, the differentiation they are selling their clients does not exist. It cannot exist. You cannot buy distinction from a vendor who is selling the same thing to your competitor down the street.

The pineapple is not even theirs. They rented Adobe’s pineapple, put their logo on the rental agreement, and billed clients for it as a strategy.

Researchers at the University of Southern California published findings in March of this year in the journal Trends in Cognitive Sciences that put numbers to what many people already sense. After analyzing more than 130 studies across linguistics, computer science, and cognitive science, they concluded that despite drawing from vast databases of human-generated content, AI models consistently produce outputs that are less varied than human thought. The sameness is not a quirk of any single model. It is a structural property of how they all work. The researchers did not mince words about what that means at scale. They compared the homogenizing effect of AI on language and thought to the linguistic control of Newspeak in Orwell’s “1984.” The parallel is exact. Newspeak was not designed to silence people. It was designed to make certain thoughts impossible to form. You cannot think what you cannot say. And you cannot say what the model was never trained to generate.

Disruptive ideas rarely emerge from the central, established, or comfortable parts of society. They come from the fringes, where different industries, experiences, and skills intersect and spark unconventional concepts and solutions. AI is trimming those edges, the places where sparks turn into the ideas that capture our imagination and reshape our world.

The market is already sensing this, even if it cannot name it. Gartner found this year that two-thirds of consumers now routinely question whether online content is real, and half actively prefer companies that avoid generative AI in their marketing. That is not a Luddite reaction. That is a population developing an instinct that something has been removed from what they are being served. They are right. What has been removed is the scatter.

The business case for that instinct is now documented. In a study published on SSRN, researchers Chaoran Liu, Tong Wang, and S. Alex Yang used Italy's brief country-wide ChatGPT ban in 2023 as a natural experiment and found that businesses that lost access to the tool saw their marketing content become measurably more distinct from each other, and their consumer engagement went up roughly three and a half percent despite posting less frequently. Less AI. More human. More engagement. The data does not get cleaner than that.
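"Measurably more distinct" is not hand-waving; textual sameness can be quantified. One common approach is mean pairwise cosine similarity over TF-IDF vectors: the higher the average similarity within a set of texts, the more homogeneous the set. This is a sketch of that general technique, not necessarily the metric the SSRN study used, and the example sentences are invented.

```python
# Sketch of one way to quantify textual sameness. The sentences are invented,
# and this is not claimed to be the metric used in the study cited above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(texts):
    """Average cosine similarity between every pair of texts (higher = more alike)."""
    vectors = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(vectors)
    n = len(texts)
    # Average the off-diagonal entries only (ignore each text's similarity to itself).
    return (sims.sum() - n) / (n * (n - 1))

# Invented examples: interchangeable "assistant-flavored" copy vs. varied human copy.
generic = [
    "Unlock your potential with our innovative, cutting-edge solution.",
    "Unlock growth with our innovative, cutting-edge platform.",
    "Unlock new value with our cutting-edge, innovative tools.",
]
varied = [
    "Our grandfather started this shop with one oven and a bad temper.",
    "We ship on Tuesdays because that's when the tide cooperates.",
    "No subscription. No onboarding call. Just a wrench that fits.",
]

print(mean_pairwise_similarity(generic))  # high: the texts cluster
print(mean_pairwise_similarity(varied))   # low: the texts scatter
```

Run on real corpora before and after a tool is introduced, a rising score is the contraction of the scatter, expressed as a number.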

In education, students are outsourcing not just research but reasoning itself. When a student asks an AI to help them think through a moral dilemma or a historical causation question, the AI does not just provide information. It provides a framing. A structure. A ranked set of considerations reflecting values baked into training data and reinforcement processes that the student never examined and the AI cannot explain. When enough students in enough classrooms use the same tools, you do not get a generation that thinks differently from each other. You get a generation that thinks similarly to the model. A study analyzing 2,200 college admissions essays found that the diversity gap between AI-assisted and purely human writing widens as the scale of AI use increases, and that prompting AI to be more creative did not close it. The most personal document a young person writes, the one meant to express who they uniquely are, is converging toward a shared center. That is not a grading problem. That is a civilization problem.

This does not apply equally across every discipline. Medicine needs memorization. Biochemistry needs precision. There are fields where the accumulation of established knowledge is the foundation, and AI as a study tool makes genuine sense. Nobody wants a surgeon who freestyles through anatomy.

But the liberal arts are a different matter entirely. Philosophy, literature, history, ethics, political theory: these disciplines exist specifically to produce people who can hold contradictory ideas in tension, argue from first principles, and arrive at conclusions that nobody handed them. They are the university's innovation engine. Not in the Silicon Valley product sense but in the deeper sense: they produce people capable of questioning the frame rather than just working within it. Wozniak building the blue box. Engelbart imagining the computer as a tool for augmenting the human mind rather than replacing it. Kay conceiving of the Dynabook not as a device but as a way of thinking about how children learn. None of those leaps came from memorized facts. They came from people who had been trained to wander.

When a philosophy student uses AI to help construct an ethical argument, they are not learning philosophy. They are learning to approve or disapprove of AI-generated philosophy, which is a fundamentally different cognitive act. When a history student uses AI to synthesize causes of a war, they are outsourcing the exact kind of connective, analogical, morally weighted reasoning that makes history useful as a discipline. The tool is not helping them think. It is thinking adjacent to them while they watch. And the sameness it produces in those students, accumulated over four years and then released into the workforce, the electorate, and the culture, is where the long-term damage lives.

Davit Khachatryan, Associate Professor of Statistics and Analytics at Babson College, puts it this way:

"Learning requires a dialogue that embraces diverse viewpoints. A classroom-kaleidoscope where colorful viewpoints contrast and collide in the patient pursuit of knowledge. To support originality, the classroom needs focused, uninterrupted time to allow innate, raw material to mature and develop. Turning to the machine prematurely risks hijacking this potential with a spoon-fed status quo, which is everyone's and no one's at the same time. It risks burning the cake when the temperature is cranked up in the futile hope that it will bake more quickly. The good news is that the same technology can help push learners' thinking to new heights, and we as educators have an important opportunity to lead in the development of disciplined, mindful AI use habits."

In politics, the specific risk is the elimination of productive ambiguity. Healthy political systems require genuine disagreement, genuine uncertainty, and genuine local variation in values. The friction of that disagreement, ugly, expensive, sometimes corrupt, is what produces durable political settlements. When the systems people use to inform their political views all share the same training priors, what feels like independent reasoning is a form of distributed suggestion. Not propaganda. Something subtler. Consensus that feels empirical because it came from a machine.

The obvious counterargument is that humans were already homogenizing long before AI arrived. Cable news created filter bubbles. Social algorithms served people what they already believed. The internet, for all its promise of pluralism, produced tribes. This is true. But there is a critical difference. Those systems homogenized people’s consumption. AI is homogenizing their production. The filter bubble told you what to read. The AI assistant is now shaping what you write, what you conclude, how you frame a problem, and what you think the options are. That is not a filter on the input. That is a hand on the pen.

The loop underneath all of this is the most dangerous part. AI generates content. Institutions use AI to evaluate content. AI gets trained on those evaluations. AI generates better content that passes AI evaluation. Human judgment exits the loop. Not through conspiracy. Through thermodynamics. Systems optimize toward least resistance. And when both the input and the evaluation are AI-mediated, the friction that was doing the work, separating the true from the false, the strong from the weak, the original from the derivative, disappears.
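The dynamics of that loop can be caricatured in a few lines: if each generation of content is drawn from the survivors an automated evaluator preferred, and the evaluator prefers whatever sits near the current consensus, the spread collapses on its own, with no one ordering it to. A toy numpy simulation, not a model of any real training pipeline:

```python
# Toy caricature of the generate -> auto-evaluate -> retrain loop.
# Not a model of any real system; it only illustrates variance collapse.
import numpy as np

rng = np.random.default_rng(42)

# Each "idea" is a single number; start with a widely scattered population.
population = rng.normal(loc=0.0, scale=10.0, size=1000)

for generation in range(20):
    # The automated evaluator rewards proximity to the current consensus.
    consensus = population.mean()
    scores = -np.abs(population - consensus)
    # Keep the half that "passes evaluation"...
    survivors = population[np.argsort(scores)[len(population) // 2:]]
    # ...and regenerate the population by imitating survivors with small noise.
    population = rng.choice(survivors, size=1000) + rng.normal(0, 0.5, size=1000)

# The scatter has contracted toward the cluster.
print(f"final spread: {population.std():.2f}")  # far below the initial 10.0
```

No single step in the loop is sinister; selection toward the mean, iterated, is enough.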

The Diagnostic Every Business Leader Needs

There are three questions that will tell you whether your AI strategy is load-bearing or decorative.

  1. If the AI disappeared tomorrow, what would actually break? Not slow down. Not get more expensive. Actually break: stop working in a way that costs you customers or revenue. If the answer is nothing, you do not have an AI strategy. You have an AI garnish. The pineapple is decorative.
  2. Are you using AI to deepen your moat or to avoid digging one? AI can strengthen a genuine competitive advantage. It cannot substitute for the moat itself. If your competitive advantage is that you use AI, you have no competitive advantage, because everyone uses AI. That is not a moat. That is a utility bill.
  3. What are you getting worse at because AI is doing it? Every capability you outsource to a tool is a capability your organization stops developing. The Roman army was the most powerful fighting force in the ancient world, partly because every soldier built the camp every night, regardless of rank. The generals understood that the capability had to stay embodied in the organization. The moment you stop doing the hard thing, you start losing the ability to do it. What hard things is your organization stopping?

There are three types of leaders emerging from this moment.

  • The Pineapple Carrier deploys AI for the signal. Press releases, AI-themed job titles, and vendor partnerships announced but not operationalized. The board sees the pineapple and is satisfied. This leader is fine until the cycle turns. Then the board asks what the AI investment actually produced.
  • The Greenhouse Builder understands that AI is infrastructure, not identity. Asking not what we are using AI for, but what AI makes permanently cheaper or faster in ways that compound. Investing quietly in data quality, workflow integration, and organizational capability. Not announcing it. Doing it. This leader looks boring right now. In five years, they will look like the person who bought Amazon in 2001 when everyone said the internet was dead.
  • The Fundamentalist is ruthlessly focused on what was true before AI and remains true through it. Customers buy outcomes, not tools. Trust is built through consistent behavior over time. Competitive advantage requires genuine scarcity. Using AI exactly where it accelerates those fundamentals. Ignoring it everywhere else.

What the Map Is Actually Showing Us

I want to return to the t-SNE map because I do not want you to think this is primarily a business strategy argument. It is not. The business argument is a chapter. The map is the book.

The reason AI clusters on that visualization is not a technical limitation waiting to be solved by the next model generation. It is a structural consequence of how optimization affects cognition. The training process finds the basin of lowest resistance and settles there. That is what it is for. The cluster is what thought looks like when you remove all the friction.

Human thinking does not optimize. It wanders. It gets distracted, emotional, culturally biased, spiritually motivated, contrarian, nostalgic, ambitious, and afraid. Those are not flaws. Those are friction sources. And the scatter they produce on the map is where every genuinely original idea in human history came from.

Every great system humans built, constitutional, legal, economic, cultural, was designed to harness that scatter. To let competing ideas fight until the strongest survives. The friction was not the system's failure. The friction was the system.

We are now deploying optimization engines inside those friction-dependent systems. The legal brief that AI generates. The curriculum that AI designs. The news that AI summarizes. The creative work that AI drafts. Each deployment makes the system faster and cheaper and slightly less capable of producing the surprise, the outlier, the heresy that turns out to be right.

The scatter on the human map slowly contracts. Not because anyone forced it. Because the tool is right there and the tool is good and the tool is fast, and question by question, draft by draft, decision by decision, we are outsourcing the wandering.

Until the map looks like the machine map. Tight. Coherent. Clustered around the same attractors.

At which point, we have not built artificial general intelligence. We have built artificial average intelligence. And we have mistaken it for wisdom.

The founders built a machine that required humans to fight each other for the truth. That fight was expensive, ugly, and sometimes corrupt, and it worked. What we are building now is a machine that resolves conflicts before they happen. That feels like progress.

The scatter is still in you. The question is whether you are protecting it or quietly trading it away, one convenient prompt at a time.