Dimension: See
Pick the highest level whose hard test you can honestly answer "yes" to.

L1: One person at a time pastes context into AI. There's no persistent view of the business — each conversation starts from zero.
Hard test: Does AI see anything about your business beyond what one person pastes into a single chat?
Not this if: A great personal prompt library or saved templates. That's still one person's context, not org context.

L2: A team has built shared AI context — a context file, an MCP server connected to the team's primary tool, prompts that the team maintains together.
Hard test: Can AI answer a team-scoped question (pipeline status, sprint progress) without a human first pulling the data?
Not this if: Many context files that nobody actually uses, or a Notion page of prompts. If a human still gathers data to answer team questions, you're at L1.
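The L2 hard test can be sketched in a few lines. What matters is that the answer comes from a shared, maintained source rather than from data a human pasted into a chat. Every name below (SPRINT_BOARD, sprint_progress, the field names) is hypothetical; in a real setup the dict would be a live MCP or API connection to the team's tracker.

```python
# Hypothetical stand-in for a live connection to the team's tracker.
SPRINT_BOARD = {
    "sprint": "2024-W19",
    "items": [
        {"title": "Fix checkout bug", "status": "done"},
        {"title": "Add invoice export", "status": "in_progress"},
        {"title": "Migrate auth tokens", "status": "todo"},
    ],
}

def sprint_progress(board: dict) -> str:
    """Answer a team-scoped question straight from the system of record,
    with no human pulling the data first."""
    total = len(board["items"])
    done = sum(1 for item in board["items"] if item["status"] == "done")
    return f"Sprint {board['sprint']}: {done}/{total} items done"

print(sprint_progress(SPRINT_BOARD))  # → Sprint 2024-W19: 1/3 items done
```

If producing this summary still requires someone to export the board and paste it in, the test fails and you're at L1.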

L3: Your systems of record — CRM, ticketing, code repo, data warehouse — are reachable by AI through MCP or APIs. Cross-functional questions can be answered without a human assembling the data.
Hard test: Can an agent answer a question that spans three or more systems without a human assembling the data first?
Not this if: A data warehouse that exists but isn't queryable by AI. Storage isn't legibility.
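A minimal sketch of what "spans three or more systems" means: one question ("which paying customers filed tickets about a module we changed this week?") that joins a CRM, a ticketing system, and a code repo. Every data source, record, and field name here is illustrative; in practice each dict would be an MCP tool call or API query.

```python
# Three hypothetical systems of record, flattened to dicts for the sketch.
crm = {"acme": {"plan": "paid"}, "globex": {"plan": "free"}}
tickets = [
    {"customer": "acme", "module": "billing"},
    {"customer": "globex", "module": "search"},
]
repo_changes_this_week = {"billing", "auth"}

def affected_paying_customers() -> list:
    """Join CRM, ticketing, and repo data without a human assembling it."""
    return sorted(
        t["customer"]
        for t in tickets
        if crm[t["customer"]]["plan"] == "paid"
        and t["module"] in repo_changes_this_week
    )

print(affected_paying_customers())  # → ['acme']
```

The join itself is trivial; the L3 question is whether an agent can reach all three sources at all.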

L4: The system maintains relationships between events over time and surfaces patterns no one asked about. Synthesis is continuous, not request-driven.
Hard test: Has the system recently surfaced an insight that no human framed as a question first?
Not this if: Configured anomaly alerts and rule-based dashboards. A human wrote those rules in advance; that isn't autonomous noticing.
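The alert-versus-synthesis distinction can be made concrete. A configured alert encodes its question in advance; continuous synthesis scans event history and reports whichever relationship emerges, including ones nobody framed. The event stream and threshold below are toy values, and real synthesis would be far richer than pair counting; this only illustrates the shape of the difference.

```python
from collections import Counter
from itertools import combinations

# Toy event history: which event types co-occurred on each day.
events_by_day = [
    {"deploy", "support_spike"},
    {"deploy", "support_spike"},
    {"marketing_email"},
    {"deploy", "support_spike"},
]

def configured_alert(day_events: set) -> bool:
    """A human wrote this rule; it can only answer the question it encodes."""
    return "support_spike" in day_events

def surfaced_patterns(history: list, min_count: int = 3) -> list:
    """Scan the whole history for co-occurring pairs no one asked about."""
    pairs = Counter(
        pair for day in history for pair in combinations(sorted(day), 2)
    )
    return [pair for pair, n in pairs.items() if n >= min_count]

print(surfaced_patterns(events_by_day))  # → [('deploy', 'support_spike')]
```

The first function is L-something-lower dressed up as noticing; the second at least starts from "what patterns exist?" rather than a pre-written question.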

L5: The system identifies its own knowledge gaps and runs investigations to fill them. It asks questions no human posed first.
Hard test: Can you name a recent investigation the system initiated to fill a gap it noticed on its own?
Not this if: Sophisticated alerting rebranded as "the AI noticed something." If a human wrote the rule, it isn't L5.
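The gap-noticing half of L5 can be sketched: the system inventories what it knows, finds what's missing, and frames the follow-up question itself. The schema, account records, and helper name are all hypothetical, and this shows only the noticing step; an L5 system would then hand each question to an agent to actually investigate.

```python
# Hypothetical schema the system expects every account record to satisfy.
EXPECTED_FIELDS = {"churn_reason", "renewal_date", "owner"}

accounts = {
    "acme": {"renewal_date": "2025-01-01", "owner": "sam"},
    "globex": {"churn_reason": "price", "renewal_date": "2024-11-05",
               "owner": "kim"},
}

def find_gaps(records: dict) -> list:
    """Frame questions about missing knowledge that no human posed first."""
    return sorted(
        f"Why is {field!r} missing for {name}?"
        for name, rec in records.items()
        for field in EXPECTED_FIELDS - rec.keys()
    )

print(find_gaps(accounts))  # → ["Why is 'churn_reason' missing for acme?"]
```

The test to apply is the same as in the rubric: if a human enumerated EXPECTED_FIELDS as an alerting rule, this is still L4 at best; L5 is when the system derives the gap itself.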