Overview
Chain of Thought (CoT) prompting encourages agents to "show their work" by generating explicit intermediate reasoning steps before the final answer. This linear process mirrors how humans solve complex problems: one step at a time.
Core Mechanism
Question: If a store has 23 apples and sells 17, how many remain?
Without CoT: "6 apples"
With CoT: "The store starts with 23 apples.
They sell 17 apples.
23 - 17 = 6
Therefore, 6 apples remain."
Implementation Approaches
Zero-Shot CoT
Simply append "Let's think step by step" to prompts:
Solve this problem. Let's think step by step.
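In code, zero-shot CoT is just string concatenation; a minimal sketch (sending the prompt to a model is left out, since that depends on your client library):

```python
# Zero-shot CoT: append the trigger phrase to any prompt.
COT_TRIGGER = "Let's think step by step."

def zero_shot_cot(problem: str) -> str:
    return f"{problem} {COT_TRIGGER}"

print(zero_shot_cot("Solve this problem."))
# -> Solve this problem. Let's think step by step.
```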
Few-Shot CoT
Provide examples of reasoning chains:
Example: [Problem] → [Step 1] → [Step 2] → [Answer]
Now solve: [New Problem]
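A sketch of assembling a few-shot CoT prompt from worked examples; the Q/A layout and the demonstration chain are illustrative choices, not a fixed format.

```python
# Each example is a (problem, reasoning-chain-ending-in-answer) pair.
def few_shot_cot(examples: list[tuple[str, str]], new_problem: str) -> str:
    parts = [f"Q: {problem}\nA: {chain}" for problem, chain in examples]
    parts.append(f"Q: {new_problem}\nA:")  # model completes from here
    return "\n\n".join(parts)

demo = (
    "If a store has 23 apples and sells 17, how many remain?",
    "The store starts with 23 apples. They sell 17. 23 - 17 = 6. The answer is 6.",
)
print(few_shot_cot([demo], "If a jar has 12 marbles and 5 are removed, how many remain?"))
```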
Automatic CoT (Auto-CoT)
Automatically generate diverse reasoning demonstrations without hand-written examples: cluster the question set, pick a representative question per cluster, and use the zero-shot trigger to produce a reasoning chain for each representative.
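A rough sketch of that pipeline. Real Auto-CoT clusters questions with sentence embeddings and k-means; the first-word grouping below is a toy stand-in for that step, and sending the resulting prompts to a model is left out.

```python
# Toy clustering: group questions by their first word.
# (Auto-CoT proper uses sentence embeddings + k-means; this is a stand-in.)
def cluster_questions(questions: list[str]) -> dict[str, list[str]]:
    clusters: dict[str, list[str]] = {}
    for q in questions:
        clusters.setdefault(q.split()[0], []).append(q)
    return clusters

# One zero-shot CoT prompt per cluster representative; each prompt would be
# sent to a model to generate one demonstration chain.
def build_demo_prompts(questions: list[str]) -> list[str]:
    reps = [qs[0] for qs in cluster_questions(questions).values()]
    return [f"{q}\nLet's think step by step." for q in reps]

qs = [
    "How many legs do 4 spiders have?",
    "How far does a car travel at 60 km/h for 2 hours?",
    "If 3 pens cost $6, what does one pen cost?",
]
for p in build_demo_prompts(qs):
    print(p)
```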
Benefits
Transparency
Each step is visible, making it easy to identify where reasoning goes wrong.
Debugging
When answers are incorrect, you can pinpoint the faulty step.
Verification
Intermediate steps can be validated independently.
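For arithmetic chains, step validation can even be mechanical. A minimal sketch that checks any "a op b = c" line in a chain; everything else is skipped, and division is simplified to integer division.

```python
import re

# Matches simple steps of the form "a op b = c" on their own line.
STEP = re.compile(r"^\s*(-?\d+)\s*([+\-*/])\s*(-?\d+)\s*=\s*(-?\d+)\s*$")
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a // b}  # integer division: a simplification

def verify_steps(chain: str) -> list[bool]:
    """Return one True/False per checkable arithmetic line in the chain."""
    results = []
    for line in chain.splitlines():
        m = STEP.match(line)
        if m:
            a, op, b, claimed = m.groups()
            results.append(OPS[op](int(a), int(b)) == int(claimed))
    return results

chain = ("The store starts with 23 apples.\n"
         "They sell 17 apples.\n"
         "23 - 17 = 6\n"
         "Therefore, 6 apples remain.")
print(verify_steps(chain))  # -> [True]
```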
Improved Accuracy
Published evaluations report accuracy improvements of roughly 10-40% on complex reasoning benchmarks, with gains generally growing with model scale.
When to Use
Good fit:
- Mathematical reasoning
- Multi-step logical deduction
- Tasks requiring explicit justification
- Debugging and development
Less effective for:
- Simple factual lookups
- Tasks where speed matters more than accuracy (generating the chain adds tokens and latency)
- Very long reasoning chains (context limits)
Relationship to Other Patterns
- ReAct: Interleaves CoT reasoning with actions
- Reflection: Uses CoT to critique previous outputs
- Tree of Thoughts: Explores multiple CoT paths
Limitations
CoT follows a single linear, left-to-right reasoning path: an early mistake propagates through every later step, and there is no built-in backtracking. For problems that require exploring and comparing multiple hypotheses, consider Tree of Thoughts.