Protocols Don't Fail. Organizations Deploy Them at the Wrong Level.
Most AI agent initiatives collapse not because the technology is broken, but because the people building them are thinking at one level of abstraction and acting at another. The Shengxing Conjecture gives us a framework to diagnose why — and fix it.
The Conjecture
The Shengxing Conjecture proposes that recursive meta-questioning — asking "what are the assumptions behind this?" repeatedly — loses practical utility around the sixth iteration. At that depth, you've ascended so far from concrete action that the language becomes self-referential and no longer translates into decisions.
This sounds abstract. But it has a direct, uncomfortable implication for AI system design: there is a ceiling above which your thinking stops paying rent.
Most organizations hit this ceiling — or never leave the basement. Both are failure modes.
The M-Stack: Six Levels of AI Deployment Thinking
We can map the problem onto six levels, from raw execution at the bottom to governance at the top. We call this the M-Stack:

- M0: execution ("let's automate the support queue")
- M1: basic orchestration, including protocol choice
- M2: agent coordination (delegation, intent, streaming)
- M3: strategy (which processes are worth automating, and why)
- M4: oversight (where human judgment overrides the agent)
- M5: governance (principles and accountability)
The Shengxing Conjecture marks M6 as the boundary: useful thinking lives in M0–M5. The sweet spot for organizational decision-making sits in M3–M5 — strategic, bounded, and still connected to action.
Where AI Projects Actually Break
Most failures aren't technical. They're level mismatches — the thinking and the acting happen at different depths, with nothing bridging them.
Leadership commits to AI transformation at M5 ("we will be responsible and human-centered"). Engineers start building at M0 ("let's automate the support queue"). The levels in between are empty. Nothing translates the principle into constraint.
```
M5 principle: "We will be responsible and human-centered"
M4 gap:       [nobody defined the human override conditions]
M3 gap:       [nobody asked which processes should be automated]
M0 action:    Replace 40% of support staff
```
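This kind of mismatch can be checked mechanically. A minimal sketch, where the dict representation and the function name are ours for illustration, not part of the framework's formal definition:

```python
# Illustrative: an initiative as a map from M-level to the decision
# recorded at that level. Empty levels between the highest stated
# principle and the lowest action are the gaps described above.

def unfilled_levels(initiative: dict[int, str]) -> list[int]:
    """Return the M-levels with no recorded decision between the
    lowest and highest levels where something was decided."""
    decided = sorted(initiative)
    low, high = decided[0], decided[-1]
    return [m for m in range(low, high + 1) if m not in initiative]

# The failure mode above: an M5 principle and an M0 action, nothing between.
initiative = {
    5: "We will be responsible and human-centered",
    0: "Replace 40% of support staff",
}

print(unfilled_levels(initiative))  # → [1, 2, 3, 4]
```

Every level the function reports is a question nobody answered before building began.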
A technically skilled team builds an impressive M0–M2 system. It works. But nobody asked the M3 question: is this the right thing to automate? The system runs perfectly, optimizing a process that shouldn't exist.
```
M3 gap: [the process was a workaround for a broken upstream system]
Result: Automated a problem instead of solving it
```
A governance team reaches M5, then keeps going — "but who defines accountability?" (M6), "but what is agency?" (M6+). Six months of workshops. No protocol adopted, no boundary defined, no system built. The Conjecture predicted this: past M5, thinking stops producing decisions.
```
M6:  "But what counts as an AI system exactly?"
M6+: "What is accountability in a distributed system?"
...: [paralysis]
```
Where Protocols Fit In the Stack
Now the protocols from our previous article map clearly onto the M-Stack:
- MCP operates at M0–M1: it defines how an LLM calls a tool and gets a result. Execution and basic orchestration.
- A2A operates at M1–M2: it defines how agents delegate to each other, communicate intent, and stream results.
- ACP operates at the same layer as A2A, with a different runtime assumption.
Notice what protocols cannot do: they cannot tell you which processes to automate (M3), where humans should intervene (M4), or who is accountable when things go wrong (M5). Protocols are implementation decisions. Strategy is a human decision.
Choosing A2A over ACP is an M1 decision. Deciding whether to build a multi-agent system at all is an M3 decision. Most teams spend enormous energy on the former and almost none on the latter.
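To make the M0–M1 scope concrete, here is a schematic of the kind of contract such a protocol standardizes. This is not the MCP SDK's actual API; the tool name, handler, and envelope shape are invented for illustration:

```python
# Schematic only, NOT the MCP SDK. It sketches what an M0-M1 protocol
# standardizes: a tool is described by name, the model requests a call,
# the runtime dispatches it and returns a structured result.

import json

# Tool registry: name -> description and handler (hypothetical tool).
TOOLS = {
    "lookup_ticket": {
        "description": "Fetch a support ticket by id",
        "handler": lambda args: {"id": args["id"], "status": "open"},
    }
}

def handle_tool_call(request_json: str) -> str:
    """Dispatch a tool-call request and return a JSON result envelope."""
    request = json.loads(request_json)
    tool = TOOLS[request["name"]]           # M0: execution
    result = tool["handler"](request["arguments"])
    return json.dumps({"result": result})   # M1: a standardized envelope

print(handle_tool_call('{"name": "lookup_ticket", "arguments": {"id": 7}}'))
# → {"result": {"id": 7, "status": "open"}}
```

Note what the sketch answers: only how the call happens. Whether `lookup_ticket` should exist at all is the M3 question no protocol answers.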
Can AI Bridge the Levels?
A reasonable question: can the AI itself translate between levels? Could an LLM take an M5 governance principle and derive M3 strategic constraints automatically?
Partially. AI can translate information across levels effectively. What it cannot do is translate values — because value translation requires knowing what matters to a specific organization, in a specific context, for specific stakeholders. That judgment is irreducibly human.
This is also why the layer structure isn't just a legacy of old organizational design. It reflects something real about the nature of decision-making: some questions are genuinely higher-order than others, and answering them requires different kinds of reasoning.
The Diagnostic: Which Level Is Your Team Missing?
You have a clear protocol choice (A2A or MCP) but unclear success criteria.
→ Missing M3. You're choosing implementation before strategy. Step back: which processes are actually worth automating, and why?
Your AI system works technically but keeps surprising users negatively.
→ Missing M4. You haven't defined where human judgment should override the agent. Define the override conditions explicitly.
Your governance committee has met 8 times and produced no policy.
→ Stuck above M5. Force a decision with a concrete case: "For this specific process, who is accountable if the agent is wrong?" Answer that. Then generalize.
Your engineers are amazing but the business can't articulate ROI.
→ M0–M2 work without M3 connection. Bridge it: map each automation to a specific business outcome. If you can't, reconsider the automation.
The Rule of Adjacent Levels
The framework's practical constraint: thinking and action must stay within two levels of each other. You can think at M4 and act at M2. You cannot think at M5 and act at M0 without something filling the gap.
This isn't a theoretical claim — it's an observation about how organizations actually fail. The gap is where assumptions accumulate, where "responsible AI" becomes "we automated the call center," where "agent coordination" becomes a protocol chosen for the wrong reasons.
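The rule itself reduces to a one-line check. A sketch, assuming levels are encoded as integers M0–M5 (the function name is ours):

```python
# The two-level bound is the framework's constraint; levels are the
# integer M-indices (M0 = 0, ..., M5 = 5).

def within_adjacent_levels(thinking: int, acting: int, max_gap: int = 2) -> bool:
    """True if thinking and acting stay within max_gap M-levels of each other."""
    return abs(thinking - acting) <= max_gap

print(within_adjacent_levels(4, 2))  # think M4, act M2 → True
print(within_adjacent_levels(5, 0))  # think M5, act M0 → False
```

The check is trivial; the discipline is in honestly assigning the two numbers before you run it.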
Fill the gaps deliberately. The tools exist at every level. The discipline is in knowing which level you're working at — and naming the ones you're skipping.
Sources & Further Reading
- The Shengxing Conjecture — Original essay on meta-recursive utility limits
- AI Governance Map — Organizational AI competence research by Shengxing Yang
- Model Context Protocol — MCP official documentation
- Agent-to-Agent Protocol — Google A2A specification
- Agent Communication Protocol — ACP official documentation