You're not investing in AI — you're investing in your goals
May 2026

AI Sweden recently published Leadership Report 2026 — an analysis of what actually separates organizations that succeed with AI transformation from those that don't. The report is based on in-depth interviews with leading business executives in Sweden: CEOs, CDOs, and board members from AstraZeneca, Södra, Saab, Norra Skog, and others.

One sentence in the report captures the most important distinction:

"As a leader, you do not invest in AI — you invest in achieving your objectives. We doubled our revenue between 2020 and 2025, with AI as a key enabler. Now AI is crucial to achieve our 2030 Ambition: 20 new medicines, doubled revenue, and a very ambitious sustainability agenda." — Peder Blomgren, VP Head of Data Office R&D, AstraZeneca

That sentence is simple. But it contains an entire strategic reframe that most organizations miss completely.

The wrong question to start with

I sit in meetings where the question is framed something like this: "Which AI tools should we start with?" Or: "How much should we budget for AI?" Or: "How do we stay relevant when everyone else is also investing in AI?"

All of these questions start with the technology. And when you start with the technology, every decision is governed by what the technology can do — not by what you're actually trying to achieve.

The result is predictable: isolated pilot projects, productivity tools solving problems nobody felt were urgent, and a diffuse sense of "doing AI" without it affecting anything that's actually measured.

Efficiency versus expansion

The AI Sweden report makes a distinction I've seen confirmed repeatedly. The vast majority of organizations start with automation — doing what they already do, but cheaper. Saving hours in back-office processes. Reducing processing time. Standardizing manual steps.

That's rational. It's measurable. And it delivers about 10 percent of the available value.

The organizations that actually shift their competitive position start with expansion instead. Not "how do we do what we do cheaper" — but "what can we do now that we couldn't do before?"

That might be new revenue streams. Products that weren't economically viable without AI. Decisions that previously required weeks of analysis and now take hours. Customer experiences that were impossible to scale.

The report: "AI's real value is not in doing the same things cheaper, but in doing new things altogether — reshaping products, decisions, and business models."

What I see from the inside

The difference shows up in how questions get formulated.

The efficiency organization asks: "Which processes can we automate?" and "How do we measure saved hours?"

The expansion organization asks: "What's constraining us today that doesn't need to constrain us?" and "What would we do if that constraint disappeared?"

The second type of question requires understanding your business more deeply — not AI more deeply. And it opens an entirely different design space.

How this changes investment logic

One consequence of this is that traditional ROI calculations work poorly in the early stage of AI transformation.

Efficiency logic is calculable: process X takes Y hours, AI reduces that to Z hours, and the difference multiplied by hourly cost gives a number. It looks like a solid basis for a decision.

Expansion logic is a bet: if this works we can reach markets we haven't accessed, deliver offerings we haven't been able to price, make decisions at a speed we've never had. How do you calculate that?

You don't. You assess the probability, cover the risk with continued delivery in the core business, and make a strategic judgment.

The report again: "The first phase is almost impossible to get through because you won't have ROI or clear answers. It requires courage and leadership."

That's not an argument against calculation. It's an argument against letting calculation replace judgment at the stage of transformation where judgment actually determines the outcome.

What the question should be

Instead of "Which AI tools should we use?" — what do you want to achieve that you're not achieving today?

Answer that concretely first. Then ask whether AI is the right means to get there, and which parts of the chain need to change for it to work in practice.

That's a different conversation. Harder. But it produces different decisions.

See also: The complete chain (series 7) and The one who succeeds with AI isn't who you think (series 18).

AI Sweden Leadership Report 2026: ai.se/sv/ai-sweden-leadership-report-2026
