Companies Can Win With AI
Why do enterprise AI efforts fail?
We know they do – in fact, a study right here at MIT made waves and got passed around tech media recently, showing that a full 95% of genAI projects at surveyed companies fail to deliver measurable bang for the buck.
I was reading about this, and pondering some of the reasons given for this troubling statistic. First, there’s misalignment in incentives and human motivation. For example, if the boss tells you that you “have to” use AI, you probably won’t want to. Then employees who don’t embrace their company’s top-down plan often end up secretly relying on their own tools, in their own ways.
The fix? Let the employees in – make them a part of the process. That way, they have “skin in the game” and feel like they are contributing their own ideas.
As for other reasons for the failure of a company’s “best laid plans,” there’s poor workflow fit, lack of buy-in, bikeshedding, and blind trust in tools that may not have earned it.
For more, let’s go to a recent segment of a conference here at MIT, celebrating, as it were, local people made good. We had Sarah Krouse of the Wall Street Journal interviewing Brian Elliott, co-founder and CEO of Blitzy, and Andrew Lau, co-founder and CEO of Jellyfish, about creating companies, in the AI age, from the ground up.
Elliott called for a new and fundamentally different approach to fit a company’s needs in the AI age.
“It's designing systems in a way that actually works with these enterprise environments, and then consulting with them to say: ‘the workforce is going to look a little bit different. You're going to need folks to look further over the horizon on what you want, as opposed to this quick, iterative back and forth, moving inches at a time versus yards at a time.’”
The trio cited a study showing that a year ago, only around 57% of respondents had adopted AI at their companies. Now, that number is in the 90s.
Lau talked about vanguard firms.
“If you look at the leading edge, these folks are inventing the new processes,” he said. “The rest of the world is still figuring out, ‘how do I incorporate and transform to accommodate these new processes?’ They're just at that early stage of ingesting that world right now.”
Elliott pointed out that in today’s industry, using AI doesn’t mean going whole hog, without human supervision.
“I think in a lot of the organizations that we've talked to, there's some sort of adoption of AI, but there's still a desire for some level of human intervention at the end - the human is ultimately responsible,” he said. “And so we see a huge rise of responsibility of the QA person on the back end, doing that quality assurance before they press merge - and so there's an emphasis on taste, on judgment, on knowing what correct looks like, defined up front, and then approved at the back end.”
In the context of coding, Lau described new scenarios where people’s jobs overlap more, where various project roles need to “duke it out” to get jurisdiction, and where the old ways of approving a codebase no longer apply, partly because AI is doing the coding.
“There's going to be a bunch of change that has to happen in these organizations, and labor shifts are going to go back and forth on this stuff, right?” he said. “Because if you look at this transformation, we're really talking about an unbundling and rebundling of roles. What is a role? A role is actually a collection of tasks and jobs that actually assemble into workflows. Workflows actually fit into a role. All those tasks are changing, so we actually need to rebundle everything - and that rebundling is going to be hard.”
Later, Lau returned to enterprise strategy in today’s environment.
“It's no longer like, let's sling stuff,” he said. “We actually have to think what's actually going to work.”
Elliott critiqued how companies think about extracting value from AI.
“If you're saying, ‘I can't leverage these people to further solve my customer problem and then capture part of that value,’ that's a lack of creativity,” he said. “Value is net present value of future cash flow. So AI is an enabler to help do that. But ultimately, we are constrained by creativity.”
The two discussed the new practice of “token maxing,” identifying super users, and working with LLM use rates.
“All of a sudden, now, you've got an intern kicking off a $7,000 monthly bill,” Elliott said, pointing out some situations where, in his view, leaders will want employees to consistently use models at high rates. “You're like, what just happened here? Right? You know, companies now are starting to allocate with questions like: which departments are allowed to use it? Which model can they use? A long running model? Can they use Opus 4.6? Do they have to use the fast models?”
“There are trade-offs now, because the costs have become significant, right?” Elliott said.
Lau talked about the imperative to “net increase the output with tools.”
Instead of just “labor,” he suggested, the equation is now labor plus tokens – or labor plus tokens plus tools.
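Lau’s “labor plus tokens plus tools” framing can be sketched as a simple cost equation. The function and every figure below are hypothetical illustrations, not numbers from the talk:

```python
# Sketch of the "labor plus tokens plus tools" cost framing: the cost
# of a workflow is no longer headcount alone. All rates and quantities
# here are made-up examples for illustration.

def workflow_cost(labor_hours, hourly_rate,
                  tokens_used, price_per_million_tokens,
                  tool_seats=0, seat_price=0.0):
    """Estimate workflow cost as labor + tokens + tools."""
    labor = labor_hours * hourly_rate
    tokens = (tokens_used / 1_000_000) * price_per_million_tokens
    tools = tool_seats * seat_price
    return labor + tokens + tools

# Example: 10 engineering hours at $100/hr, 50M tokens at $15 per
# million tokens, plus two tool seats at $30 each.
total = workflow_cost(10, 100.0, 50_000_000, 15.0,
                      tool_seats=2, seat_price=30.0)
print(total)  # 1000 + 750 + 60 = 1810.0
```

The point of the rebundling Lau describes is that the token and tool terms are newly large enough to matter in that sum.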
“This is the future we're glimpsing into,” Elliott said. “We're going to be in a place where organizations can dial up, dial down, by department, by feature, by project, what they actually want to invest in, right? Historically, from a labor perspective, that was metered by hiring cycles, departmental margin, you know, which teams are doing this, managers, promotions, all of these things are going on. The digital version of this is actually much more flexible.”
Lau discussed the relative value of several Anthropic and Google tools.
“Anthropic is really good at flexing on entropy, right?” he added. “So let's say I want to switch between Sonnet and Opus right at runtime. You use Sonnet for the more deterministic stuff, Opus for the sort of harder, more complex edge case stuff - Gemini has a good context window.”
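The runtime switching Lau describes amounts to a routing rule: send routine, deterministic work to a cheaper model and harder edge cases to a stronger one. A minimal sketch – the model labels echo the talk, but the complexity score and threshold are hypothetical:

```python
# Minimal sketch of runtime model routing between a cheaper model for
# routine work and a stronger model for complex edge cases. The 0.5
# threshold and the complexity score are illustrative assumptions.

def pick_model(task_complexity: float) -> str:
    """Return a model name for an estimated complexity score in [0, 1]."""
    if task_complexity < 0.5:
        return "claude-sonnet"  # more deterministic, routine work
    return "claude-opus"        # harder, more complex edge cases

print(pick_model(0.2))  # claude-sonnet
print(pick_model(0.9))  # claude-opus
```

In practice the complexity estimate itself might come from heuristics or a lightweight classifier; the flexibility Lau highlights is that the choice happens per request rather than per deployment.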
Elliott touched on the logistics of running large volumes of LLM queries.
“We'll use Sonnet when we need to, Gemini when we need to,” he said.
At the end of the talk, Krouse asked the other two about timelines for AI progress.
Lau suggested the next wave of tech might arrive in around 24 months.
Elliott put it a bit further off.
“The ability to actually affect a code base that’s large, and transform an organization, will take a while,” he said.
I thought all of this was pretty instructive. Stay tuned for more.