Overview
Theory of mind—the ability to model others' mental states—is limited in current AI systems. When agents can't accurately predict what other agents know or intend, coordination suffers.
Manifestations
Knowledge State Errors
Agent A: (Shares document with B)
Agent A: "As you can see in paragraph 3..."
Agent B: (Failed to receive document) "What paragraph?"
Agent A: "The one I sent you!"
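This failure arises when the sender updates its own picture of the shared context without confirming that the recipient actually received the artifact. A minimal sketch, assuming a hypothetical `send_document` helper and simple in-memory agents, shows how the two knowledge states silently diverge:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    # Documents this agent believes are visible to each peer.
    believed_peer_docs: dict[str, set[str]] = field(default_factory=dict)
    # Documents this agent has actually received.
    received_docs: set[str] = field(default_factory=set)

def send_document(sender: Agent, receiver: Agent, doc_id: str, delivered: bool) -> None:
    # The sender optimistically records the document as shared...
    sender.believed_peer_docs.setdefault(receiver.name, set()).add(doc_id)
    # ...but delivery may have failed, and nothing corrects the sender's belief.
    if delivered:
        receiver.received_docs.add(doc_id)

a, b = Agent("A"), Agent("B")
send_document(a, b, "report.md", delivered=False)

assert "report.md" in a.believed_peer_docs["B"]   # A thinks B has it
assert "report.md" not in b.received_docs         # B never got it
# A's next message ("As you can see in paragraph 3...") now rests on a false belief.
```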
Goal Misalignment
Agent A: (Goal: Summarize accurately)
Agent B: (Goal: Summarize concisely)
Result: B's summaries omit details A considers essential.
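The same mismatch can be made concrete with two summarizers tuned toward different objectives. The function, priorities, and word budget below are illustrative assumptions, not a real API:

```python
def summarize(text: str, priority: str, max_words: int = 20) -> str:
    """Toy summarizer: 'accurate' keeps every sentence, 'concise' truncates hard."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if priority == "accurate":
        return ". ".join(sentences) + "."
    # Concise mode drops everything past the word budget, including
    # details the accuracy-focused agent considers essential.
    words = " ".join(sentences).split()
    return " ".join(words[:max_words]) + ("..." if len(words) > max_words else "")

report = ("Revenue grew 4% in Q3. Growth was driven entirely by one enterprise deal. "
          "That deal is not expected to repeat in Q4.")

print(summarize(report, priority="accurate"))  # keeps the caveats A cares about
print(summarize(report, priority="concise"))   # B's version silently drops them
```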
Capability Assumptions
Agent A: "Calculate the integral of this function."
Agent B: (Math specialist) Provides answer.
Agent A: "Now explain it simply."
Agent B: (Doesn't realize A lacks a math background) Provides a technical explanation.
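A sketch of the same problem: the specialist's explanation routine never consults a model of the requester's background, so "explain it simply" produces the same technical output. The `explain` functions, audience labels, and canned responses are hypothetical:

```python
TECHNICAL = ("The antiderivative follows from integration by parts, "
             "treating x as the algebraic factor and e^x as the exponential factor.")
PLAIN = ("We split the problem into two easier pieces and add them back together; "
         "the result is (x - 1) * e^x plus a constant.")

def explain(requester_background: str | None = None) -> str:
    # B never checks requester_background, so every caller gets the technical version.
    return TECHNICAL

def explain_with_tom(requester_background: str | None = None) -> str:
    # A theory-of-mind-aware variant branches on what it knows about the requester.
    return PLAIN if requester_background != "math" else TECHNICAL

print(explain(requester_background=None))           # technical, despite "explain it simply"
print(explain_with_tom(requester_background=None))  # adapts to the non-specialist
```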
Communication Style Mismatch
Agent A uses terse, technical language. Agent B expects verbose, friendly communication. Neither adapts.
Coordination Impacts
- Redundant work: Agents repeat what others already know
- Missing handoffs: Critical information assumed to be shared
- Failed collaboration: Agents work at cross purposes
- Communication overhead: Excessive verification needed
Root Causes
No Shared State Model
Agents don't maintain models of what others have seen or done.
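A minimal remedy is to track, per peer, which artifacts that peer has demonstrably seen, and to consult that record before referencing them. The sketch below assumes a simple in-memory store; the class and method names are illustrative:

```python
from collections import defaultdict

class PeerStateModel:
    """Tracks what each peer is known (not merely assumed) to have seen."""

    def __init__(self):
        self._seen = defaultdict(set)  # peer name -> set of artifact ids

    def record_acknowledged(self, peer: str, artifact_id: str) -> None:
        # Only update after the peer confirms receipt.
        self._seen[peer].add(artifact_id)

    def peer_has_seen(self, peer: str, artifact_id: str) -> bool:
        return artifact_id in self._seen[peer]

model = PeerStateModel()

# Before saying "As you can see in paragraph 3...", check the model first.
if not model.peer_has_seen("B", "report.md"):
    print("Resend report.md or include the relevant excerpt inline.")
```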
Missing Feedback Loops
Agents have no mechanism to confirm that their messages were received and understood as intended.
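One lightweight feedback loop is an acknowledge-and-restate handshake: the receiver echoes its understanding of the request, and the sender proceeds only if the echo covers the request's key terms. The functions and matching rule below are assumptions for illustration:

```python
def request_with_confirmation(sender_intent: str, receiver_restate) -> bool:
    """Send an intent, ask the receiver to restate it, and verify the restatement.

    receiver_restate is any callable that returns the receiver's own
    one-line summary of what it was asked to do.
    """
    restated = receiver_restate(sender_intent)
    # Crude check: require most key terms of the intent to survive the restatement.
    key_terms = {w.lower() for w in sender_intent.split() if len(w) > 4}
    covered = sum(1 for term in key_terms if term in restated.lower())
    return covered >= max(1, len(key_terms) // 2)

def receiver(intent: str) -> str:
    return f"You want me to {intent.lower()}"

ok = request_with_confirmation("Summarize the quarterly report accurately", receiver)
print("proceed" if ok else "clarify before proceeding")
```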
Static Assumptions
Agents use fixed assumptions about other agents rather than adapting.
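Instead of hard-coding what a peer is like, an agent can keep an adaptive profile that is updated from observed behavior. A minimal sketch with an invented profile field and update rule:

```python
class PeerProfile:
    """Rolling estimate of a peer's preferences, updated from observed replies."""

    def __init__(self):
        self.prefers_concise = 0.5  # start uncertain rather than assuming a fixed style

    def update(self, reply_length_words: int) -> None:
        # Exponential moving average: short replies nudge the estimate toward concise.
        observation = 1.0 if reply_length_words < 30 else 0.0
        self.prefers_concise = 0.8 * self.prefers_concise + 0.2 * observation

    def style_hint(self) -> str:
        return "terse" if self.prefers_concise > 0.6 else "verbose"

profile = PeerProfile()
for reply in ["Done.", "OK, sent.", "Fixed in the latest draft."]:
    profile.update(len(reply.split()))

print(profile.style_hint())  # shifts toward "terse" as evidence accumulates
```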