The 7 Categories of AI: Enterprise Decision Guide
Why This Framework Exists
The Lab
I spent years generating trillions of data points, building force fields to predict material properties at the nanoscale. That work taught me what most AI conversations miss: the model is never the hard part.
The Enterprise
Then I moved into the enterprise world. I rolled up my sleeves, worked with messy data nobody wanted to touch, and built models that actually ran in production, end to end. Not advising. Doing. Back then we kept coming back to "Hidden Technical Debt in Machine Learning Systems," because it made something explicit that we were already living: the model is a tiny slice of the real system. The hard part was never the architecture on the whiteboard. It was the data pipelines, the interfaces, the organizational plumbing. Most of the risk and cost lives in everything around the model.
And yet, then and now, leaders stay fixated on "the next model." It is far easier to say "we are using GPT-X" than "we refactored our data contracts and refit our org structure." Vendors and media reinforce this by celebrating benchmark accuracy, not the boring wins like stable data pipelines or clean API boundaries. The result: model choice becomes a proxy for AI maturity, even though it explains very little of real-world value.
The better question is not "what is the best model?" It is "what is the right AI system for this business problem?" That means starting with the outcome, the constraints (data quality, latency, risk, regulatory environment), and the architecture and governance needed (data contracts, monitoring, access controls, retraining and rollback paths). Once those are clear, the model is almost a replaceable component. Not the centrepiece.
The University
I now work in technology transfer, moving AI from university research into commercial products. The same gap followed me here. Academic breakthroughs stall because IP ownership is murky, valuation models do not exist, and the handoff from researcher to operator is poorly designed. Enterprises mirror this failure. They lack clear AI pipelines just as labs do.
The pattern I keep seeing across all three worlds, the lab, the enterprise, and the university, is the same: organizations do not have a shared language for what AI actually is, what it demands, and how to govern it.
That is the gap this framework closes.
The Old View Is Not Enough
For years, the industry understood AI through a simple set of nested circles: Artificial Intelligence contains Machine Learning, which contains Deep Learning. The question was always about capability: how smart is the model? That framing was useful once. It is not sufficient now.
Three shifts broke it:
1. Generative AI made AI visible to everyone
When AI moved from backend prediction engines to generating text, images, and code, the old circles could not explain it. "Generative" is not a layer inside deep learning. It is a different mode of interaction, with entirely different governance needs: IP ownership, copyright, hallucination detection.
2. AI started acting, not just predicting
Agentic AI plans, executes, uses tools, and makes decisions autonomously. It is not "smarter ML." The governance it needs (autonomy boundaries, escalation protocols, real-time drift detection) has nothing in common with a traditional classification model.
3. Sovereignty became a design constraint
Governments started asking: where is the data stored? Who controls the compute? Can this cross borders? The nested circles have no place for geopolitics. Sovereign AI is not a capability layer. It is a deployment constraint.
The old diagram asks: what type of intelligence is this?
The 7 Categories ask: what does this AI demand from your organization?
That is the shift. From capability classification to operational classification. You can run the same transformer architecture as Analytical AI (batch predictions), Generative AI (content creation), Agentic AI (autonomous workflows), or Contextual AI (reasoning over large context). The model is the same. What changes is everything around it. The governance. The infrastructure. The workflow integration.
The model is never the hard part.
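The claim above can be made concrete in code. The sketch below is purely illustrative, assuming a stand-in `model` function and hypothetical category wrappers: the same underlying model call is reused in three operational modes, and only the surrounding controls differ.

```python
# Illustrative sketch: one model, three operational categories.
# All names here are hypothetical stand-ins, not a real deployment.

def model(prompt: str) -> str:
    """Stand-in for any model call (same weights in every mode)."""
    return f"output for: {prompt}"

def analytical_batch(records: list[str]) -> list[str]:
    """Analytical AI: offline batch scoring; governance = data contracts, drift checks."""
    return [model(r) for r in records]

def generative_interactive(prompt: str) -> str:
    """Generative AI: user-facing content; governance = IP review, hallucination checks."""
    return model(prompt)

def agentic_step(goal: str, allowed_tools: set[str], tool: str) -> str:
    """Agentic AI: autonomous action; governance = autonomy boundaries, escalation."""
    if tool not in allowed_tools:
        return "escalate: tool outside autonomy boundary"
    return model(f"{goal} via {tool}")

print(analytical_batch(["row-1", "row-2"]))
print(generative_interactive("draft a summary"))
print(agentic_step("close ticket", {"crm"}, "email"))  # outside boundary, escalates
```

The model function is identical in every mode; what changes is the wrapper around it, which is the point of the framework.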
The Framework
The 7 Categories of AI framework maps every AI application into one of seven distinct types, each with its own governance profile, infrastructure requirements, and integration pattern.
How to use it
Map your current and planned AI initiatives against the seven categories. For each one, check: does your governance match the category? Does your infrastructure support it? Is the integration pattern designed, or assumed? Most organizations discover they are governing Agentic AI with Descriptive AI controls. That mismatch is where projects die.
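The mapping exercise above can be sketched as a simple gap check. The category names and control lists below are illustrative assumptions, not a prescribed taxonomy; the idea is only that each category demands a control set, and a mismatch surfaces as missing controls.

```python
# Hypothetical sketch of the governance-mapping exercise.
# Category names and required controls are illustrative only.

REQUIRED_CONTROLS = {
    "Descriptive": {"data contracts", "access controls"},
    "Agentic": {"autonomy boundaries", "escalation protocols",
                "real-time drift detection"},
}

def governance_gaps(category: str, controls_in_place: set[str]) -> set[str]:
    """Return the controls the category demands that the initiative lacks."""
    return REQUIRED_CONTROLS.get(category, set()) - controls_in_place

# An Agentic AI initiative governed with Descriptive AI controls:
gaps = governance_gaps("Agentic", {"data contracts", "access controls"})
print(sorted(gaps))
# -> ['autonomy boundaries', 'escalation protocols', 'real-time drift detection']
```

Every control in the printed list is a place where the project is exposed, which is exactly the mismatch described above.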
This guide is free
Download it, share it with your team, and use it as the starting point for a conversation your organization probably needs to have.
[CTA: Download the 7 Categories of AI Decision Guide → email capture → upsell AI Readiness Assessment ($72)]