Medium · Coordination

Deficient Theory of Mind

Agents fail to correctly model what other agents know, believe, or intend, leading to coordination failures.

How to Detect

  • Agents provide redundant information
  • Assumptions about shared knowledge turn out to be wrong
  • Coordination requires explicit verification at every step
  • Agents talk past each other

Root Causes

  • Agents cannot model other agents' knowledge states
  • No shared understanding of what has been communicated
  • Assumptions about other agents' capabilities are static rather than learned
  • No feedback mechanism for verifying coordination

Deep Dive

Overview

Theory of mind—the ability to model others' mental states—is limited in current AI systems. When agents can't accurately predict what other agents know or intend, coordination suffers.

Manifestations

Knowledge State Errors

Agent A: (Shares document with B)
Agent A: "As you can see in paragraph 3..."
Agent B: (Failed to receive document) "What paragraph?"
Agent A: "The one I sent you!"

Goal Misalignment

Agent A: (Goal: Summarize accurately)
Agent B: (Goal: Summarize concisely)
Result: B's summaries omit details A considers essential.

Capability Assumptions

Agent A: "Calculate the integral of this function."
Agent B: (Math specialist) Provides answer.
Agent A: "Now explain it simply."
Agent B: (Doesn't realize A lacks math background)
         Provides technical explanation.

Communication Style Mismatch

Agent A uses terse, technical language. Agent B expects verbose, friendly communication. Neither adapts.

Coordination Impacts

  • Redundant work: Agents repeat what others already know
  • Missing handoffs: Critical information assumed to be shared
  • Failed collaboration: Agents work at cross purposes
  • Communication overhead: Excessive verification needed

Root Causes

No Shared State Model

Agents don't maintain models of what others have seen or done.

Missing Feedback Loops

No mechanism to verify understanding between agents.

Static Assumptions

Agents use fixed assumptions about other agents rather than adapting.

How to Prevent

Explicit State Sharing: Maintain shared state of what each agent knows.
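
One way to implement this is a ledger recording which artifacts each agent has actually received, so a sender can check before referencing a document (the "What paragraph?" failure above). A minimal sketch in Python; the SharedKnowledgeLedger class and the string identifiers are illustrative, not a particular framework's API:

    from collections import defaultdict

    class SharedKnowledgeLedger:
        """Tracks which artifacts each agent has actually received."""

        def __init__(self):
            self._seen = defaultdict(set)  # agent id -> ids of artifacts received

        def record(self, agent_id: str, artifact_id: str) -> None:
            """Mark an artifact as delivered to an agent."""
            self._seen[agent_id].add(artifact_id)

        def has_seen(self, agent_id: str, artifact_id: str) -> bool:
            """Check before referencing an artifact in a message."""
            return artifact_id in self._seen[agent_id]

    # Before Agent A says "as you can see in paragraph 3", verify delivery
    ledger = SharedKnowledgeLedger()
    if not ledger.has_seen("agent_b", "doc_123"):
        # resend the document first, then record the delivery
        ledger.record("agent_b", "doc_123")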

Capability Queries: Ask agents about capabilities rather than assuming.
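
Rather than hard-coding who can do what, an orchestrator can put the question to the agent itself. A hedged sketch: agent.ask stands in for whatever chat interface the framework exposes, and the YES/NO parsing is deliberately simple:

    def can_handle(agent, task_description: str) -> bool:
        """Ask the agent whether it can handle a task instead of assuming."""
        reply = agent.ask(
            f"Can you reliably perform this task: {task_description}? "
            "Answer YES or NO, then give one sentence of justification."
        )
        return reply.strip().upper().startswith("YES")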

Acknowledgment Protocols: Confirm receipt and understanding of messages.
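
An acknowledgment that echoes back a summary verifies understanding, not just delivery. A sketch assuming a hypothetical receiver.handle message API:

    import uuid

    def send_with_ack(receiver, content: str, max_retries: int = 2) -> str:
        """Require the receiver to echo a summary so misreadings surface."""
        msg_id = str(uuid.uuid4())
        for _ in range(max_retries + 1):
            ack = receiver.handle(
                f"[msg {msg_id}] {content}\n"
                "Reply with ACK plus a one-sentence summary of what you understood."
            )
            if ack and ack.lstrip().startswith("ACK"):
                return ack  # the summary lets the sender spot misunderstandings
        raise RuntimeError(f"No acknowledgment for message {msg_id}")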

Context Summaries: Include relevant context history in each message.
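
Prepending a short recap spares the receiver from needing the sender's memory. A minimal sketch; the fixed-size recap is a simplification (a real system might summarize rather than truncate):

    def build_message(task: str, history: list[str], budget: int = 3) -> str:
        """Prepend the last few context items so the receiver is not
        assumed to share the sender's memory."""
        recap = "\n".join(f"- {item}" for item in history[-budget:])
        return f"Context so far:\n{recap}\n\nTask: {task}"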

Agent Profiles: Maintain and share agent capability and knowledge profiles.
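
For example, a shared profile registry lets a math specialist know to simplify its explanation for a non-specialist, the capability-assumption failure shown earlier. The AgentProfile fields and registry entries here are illustrative:

    from dataclasses import dataclass, field

    @dataclass
    class AgentProfile:
        """Capability and knowledge profile shared among agents."""
        name: str
        capabilities: set[str] = field(default_factory=set)
        background: str = "general"  # e.g. "math", "writing"

    registry: dict[str, AgentProfile] = {
        "agent_a": AgentProfile("agent_a", {"planning"}, background="general"),
        "agent_b": AgentProfile("agent_b", {"calculus"}, background="math"),
    }

    def explanation_level(recipient: str) -> str:
        """Choose a register from the recipient's profile, not from assumptions."""
        profile = registry[recipient]
        return "technical" if profile.background == "math" else "plain-language"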

Real-World Examples

A research synthesis system failed when the summarizing agent assumed the writing agent had access to the full research papers. In fact, the writing agent received only summaries, resulting in shallow, poorly grounded content.