Here’s the uncomfortable truth: we’re building AI platforms using the same playbook that gave us surveillance capitalism and engagement addiction.
I’ve seen this pattern before—at VMware, at Stripe, and now watching AI platforms repeat the same mistakes. We start with powerful technology that promises to empower users. We bundle it with convenient features. We optimize for engagement metrics. And before we realize it, we’ve created systems that constrain the very people they’re meant to serve.
The future of AI should look like the cloud infrastructure revolution—composable, federated, user-controlled. Instead, it’s trending toward social media consolidation—monolithic, centralized, engagement-optimized.
The Anti-Pattern We’re Building
Sam Altman’s “gentle singularity” vision – AI accelerating innovation, solving hard problems, empowering humanity – resonates with anyone who’s spent their career building platforms. I share that optimism.
But I also see the architectural mistakes, and the magical thinking that papers over them.
ChatGPT as a “super-assistant” sounds powerful. Look closer and you’ll see vertical integration that creates dependency, not enablement. AI models bundled with centralized storage of your contexts and memories. The exact monolithic, tightly-coupled architecture that every platform engineer has learned to avoid.
These systems feel powerful at first. They always do. Then they become bottlenecks. Over the long term, they devolve into systems whose incentives work against sustained success.
The Productivity Debt Problem
At Stripe, sustainable platform growth came from enabling user success, not maximizing dependency. We learned what works. On the flip side, we saw how the aggregator model in consumer tech created perverse incentives: optimize for time spent, not problems solved.
When platforms optimize for stickiness over utility, they create productivity debt. Systems that feel helpful short-term but constrain the people they’re meant to serve.
The AI industry is walking straight into this trap. Your business model depends on keeping users engaged with your AI assistant? You’ve created a fundamental misalignment. Platform success now conflicts with user success. Users who accomplish goals quickly are bad for your metrics. Users who get stuck in long conversations are good for growth.
This isn’t philosophical. It’s structural and systemic debt that compounds over time.
Breaking the Pattern: The Three As Framework
I don’t believe engineers lack good intentions. I believe the systems around them fail to create environments for building what users actually need.
The reinforcing loop is predictable: engagement metrics shape product decisions, product decisions create lock-in, lock-in reduces user agency, and reduced agency drives more dependency on the platform. It feeds itself. These loops have a common root: platforms optimized for engagement rather than outcomes.
To break them requires redesigning from first principles. In my InfoQ presentation on building successful platforms, I outlined three architectural principles that do exactly that: Acceleration, Autonomy, and Accountability.
Acceleration: Build for Clarity and Capability, Not Dependency
AI platforms should create leverage for the business by making users more capable, not more dependent.
Measure success by how quickly users achieve impact, not how long they spend on the platform. When AWS built EC2, they didn’t optimize for “time spent on EC2.” They optimized for how quickly developers could build and ship on top of EC2. Its value came from what developers could do after using it.
The architectural consequence: composability
This means building simple primitives that compose into powerful configurations. AWS created S3, EC2, Lambda—building blocks that could be mixed and matched. AI platforms need the same approach.
Treat AI capabilities as APIs you can combine according to your needs, not sealed applications you must accept wholesale. Let specialized AI services be composed, orchestrated and deployed based on what users actually need to accomplish.
When your platform’s value is measured by user outcomes rather than engagement, you naturally build for composition. Users move faster. They solve problems efficiently. They leave your platform to go build their own solutions. This is why AWS became one of the world’s most valuable companies—they accelerated their users rather than capturing them.
That’s acceleration.
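The compositional style described above can be sketched in a few lines. This is a minimal illustration, not any platform’s actual API: the primitive names and their stub bodies are invented stand-ins for model calls, and only the composition pattern itself is the point.

```python
from typing import Callable

# A pipeline step takes text and returns text: a narrow, composable contract.
Step = Callable[[str], str]

def summarize(text: str) -> str:
    # Stub primitive: keep the first sentence as a stand-in for a model call.
    return text.split(". ")[0] + "."

def shout(text: str) -> str:
    # Stub for any transformation primitive (translation, rewriting, ...).
    return text.upper()

def compose(*steps: Step) -> Step:
    """Chain primitives into a pipeline the user assembles, not the vendor."""
    def pipeline(text: str) -> str:
        for step in steps:
            text = step(text)
        return text
    return pipeline

# Users mix and match primitives according to their own needs.
summarize_then_shout = compose(summarize, shout)
print(summarize_then_shout("Composable systems win. Monoliths ossify."))
# -> COMPOSABLE SYSTEMS WIN.
```

Because each primitive honors the same small contract, users can reorder, swap, or extend the pipeline without the platform’s permission, which is the property that distinguishes building blocks from sealed applications.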
Autonomy: Separate What’s Shared from What’s Sovereign
Users maintain agency over their data, workflows, and choices. The platform serves the user, not the other way around.
The most critical architectural decision: separate AI models (compute) from user data (context).
The architectural consequence: data sovereignty
Share the expensive computational infrastructure. Model training and inference require massive scale to be economically viable. This is the part that benefits from centralization—the compute layer.
But user contexts, memories, personal data? Those remain sovereign and portable. This isn’t just about privacy. It’s about preventing vendor lock-in that stifles innovation.
When your personal AI context is trapped in one company’s platform, you lose the ability to experiment with better tools or switch to superior services. You can’t compose with other platforms. You can’t take your investment elsewhere.
Cloud providers offer compute infrastructure while customers control their data and applications. AI platforms should work the same way. The compute can be centralized and shared. The context must be decentralized and owned.
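One way to picture the split is a user-owned context object that is serialized to a plain, provider-neutral format and supplied to the shared compute layer per call. This is a hedged sketch under assumed names (`UserContext`, `answer`); no vendor exposes exactly this interface.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import Callable

@dataclass
class UserContext:
    """Sovereign user state: lives with the user, not the provider."""
    owner: str
    memories: list[str] = field(default_factory=list)

    def export(self) -> str:
        # Plain JSON: portable between providers, inspectable by the owner.
        return json.dumps(asdict(self))

    @classmethod
    def load(cls, blob: str) -> "UserContext":
        return cls(**json.loads(blob))

def answer(model: Callable[[str, list[str]], str],
           context: UserContext, prompt: str) -> str:
    # Compute (the model) is shared; context is supplied per call
    # and never retained by the provider.
    return model(prompt, context.memories)

ctx = UserContext(owner="alice", memories=["prefers metric units"])
blob = ctx.export()                  # take it to any provider...
restored = UserContext.load(blob)    # ...and restore it losslessly
assert restored == ctx
```

The design choice that matters is the direction of dependency: the model function receives context as an argument instead of accumulating it internally, so switching providers means changing one callable, not migrating your history.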
Paradoxically, reducing lock-in often increases loyalty. Users who know they can leave feel safer investing deeply. The platforms that respect user sovereignty tend to retain users longer than those that trap them. This isn’t just a platform principle — I’ve seen the same pattern in engineering teams. Autonomy breeds trust, and trust breeds retention.
Accountability: Align Incentives with Outcomes
Platform providers answer to user outcomes, not engagement metrics. This requires transparent measurement, the right guardrails, and aligned incentives.
The hardest part isn’t technical—it’s resisting the gravitational pull of engagement metrics. When your board asks “how much time do users spend with our AI?”, the right answer is “as little as possible to achieve maximum impact.”
The architectural consequence: transparent measurement, clear ownership and ecosystem growth
This means:
- Measuring what users accomplish, not how long they stay
- Open APIs that third parties can build on
- Clear and owned interfaces between components
- Business models that reward solving user problems, not capturing user attention
Build systems where growth comes from enabling others to build, not just direct usage.
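The difference between the two measurements is easy to make concrete. Below is a toy sketch with invented session records: the engagement metric rewards the session where the user got stuck, while a time-to-impact metric rewards fast success and penalizes sessions that never reach an outcome.

```python
from statistics import median

# Hypothetical session records: when the user started, when they first
# accomplished their goal (None if they never did), and when they left.
sessions = [
    {"start": 0, "first_outcome": 40, "end": 300},
    {"start": 0, "first_outcome": 25, "end": 60},
    {"start": 0, "first_outcome": None, "end": 900},  # stuck, never succeeded
]

def time_on_platform(sessions):
    # The vanity metric: long, stuck sessions inflate it.
    return sum(s["end"] - s["start"] for s in sessions)

def median_time_to_impact(sessions):
    # The outcome metric: only sessions that reach an outcome count,
    # and faster is better.
    times = [s["first_outcome"] - s["start"]
             for s in sessions if s["first_outcome"] is not None]
    return median(times) if times else None

print(time_on_platform(sessions))       # 1260 -- dominated by the stuck session
print(median_time_to_impact(sessions))  # 32.5 -- measures actual success
```

Pairing the second metric with the success *rate* (how many sessions reach an outcome at all) closes the loop: a stuck session hurts both numbers instead of helping one.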
When you’re accountable to outcomes, you document your limitations honestly. You make it easy for users to integrate with other services. You celebrate when users graduate from your platform to build their own solutions.
This creates a different kind of moat—not lock-in, but genuine value. Users return not because they’re trapped, but because you’re actually solving their problems.
Stress-Testing the Framework: The Hard Questions
AI is reshaping how humans interact with systems and with each other. Before we commit to an architecture, we need to stress-test it:
How do you prevent mission drift when growth incentives diverge from user interests?
Accountability prevents mission drift. When your platform’s success is measured by user outcomes rather than engagement, the misalignment becomes obvious in your metrics. You can’t hide behind vanity metrics when you’re measuring actual impact.
How do you build systems that remain trustworthy as they become more powerful?
Autonomy builds trust. Users trust platforms where they maintain control and can leave. Lock-in and trust are inversely correlated. As your AI becomes more powerful, users need more assurance they won’t become dependent. Data sovereignty provides that assurance.
How do you maintain these principles at scale?
Acceleration forces discipline at scale. When you measure time-to-impact rather than time-on-platform, bloat becomes visible. Features that don’t accelerate user capability fail your metrics. Scale becomes more about making users faster, not making them stay longer.
Each of these maps directly to one of the Three As.
What This Means for Platform Engineers
For Acceleration:
- Design APIs that compose naturally
- Measure user outcomes, not platform usage
- Optimize for time-to-impact, not time-on-platform
- Build primitives, not prescriptive workflows
For Autonomy:
- Separate compute infrastructure from user context
- Make user data portable by default
- Enable seamless experiences without lock-in
- Let users own their AI’s memory and learned behaviors
For Accountability:
- Expose transparent metrics tied to user success
- Document limitations honestly
- Build business models that align with user empowerment
- Celebrate when users graduate from your platform
These principles require building AI platforms with the same rigor we bring to critical infrastructure: redundancy, observability, graceful degradation, and clear ownership boundaries between components.
The future I want to build: AI platforms that free people to channel their energy toward creativity and craftsmanship. Systems that distribute power instead of concentrating it in the hands of a few. A world where we retain the ability to choose.
The Choice
We’ve seen what happens when platforms optimize for engagement over outcomes. We don’t have to build it that way again.
As platform engineers, we get to make that call. What we measure, what we make portable, what we open up — these decisions determine whether AI empowers people or constrains them. We know enough to get this right.
I wrote a follow-up post on moving from pilots to production — applying these principles to the practical challenge of rolling out AI across an engineering organization.
What are your thoughts on AI platform architecture? Reach out on LinkedIn.