Chain of Thought Pattern

Overview

The Challenge

LLMs often make errors on complex reasoning tasks when asked to produce answers directly without showing their work.

The Solution

Prompt agents to explicitly generate intermediate reasoning steps before reaching a conclusion, enabling verification and debugging of the thought process.


Deep Dive

Overview

Chain of Thought (CoT) prompting encourages agents to "show their work" by generating explicit reasoning steps. This linear reasoning architecture mirrors how humans solve complex problems—step by step.

Core Mechanism

Question: If a store has 23 apples and sells 17, how many remain?

Without CoT: "6 apples"

With CoT: "The store starts with 23 apples.
          They sell 17 apples.
          23 - 17 = 6
          Therefore, 6 apples remain."

Implementation Approaches

Zero-Shot CoT

Simply append "Let's think step by step" to prompts:

Solve this problem. Let's think step by step.
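As a sketch, the zero-shot variant is just string concatenation; the snippet below only builds the prompt, and sending it to a model is left to whatever client your stack uses:

```python
COT_TRIGGER = "Let's think step by step."

def zero_shot_cot(problem: str) -> str:
    """Append the zero-shot CoT trigger phrase to a problem statement."""
    return f"{problem}\n\n{COT_TRIGGER}"

prompt = zero_shot_cot("If a store has 23 apples and sells 17, how many remain?")
```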

Few-Shot CoT

Provide examples of reasoning chains:

Example: [Problem] → [Step 1] → [Step 2] → [Answer]
Now solve: [New Problem]
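The few-shot template above can be assembled programmatically from (problem, reasoning-chain) pairs. A minimal sketch, using a Q/A layout as one common convention (the function name is illustrative):

```python
def few_shot_cot(examples: list[tuple[str, str]], new_problem: str) -> str:
    """Build a few-shot CoT prompt from (problem, reasoning-chain) pairs.

    Each demonstration chain should end with its answer, so the model
    learns to finish its own chain the same way.
    """
    blocks = [f"Q: {question}\nA: {chain}" for question, chain in examples]
    blocks.append(f"Q: {new_problem}\nA:")  # leave the answer open
    return "\n\n".join(blocks)

demos = [(
    "If a store has 23 apples and sells 17, how many remain?",
    "Start with 23. Sell 17. 23 - 17 = 6. The answer is 6.",
)]
prompt = few_shot_cot(demos, "If a bus holds 40 people and 12 get off, how many remain?")
```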

Automatic CoT (Auto-CoT)

Automatically generate diverse reasoning demonstrations without manual examples.
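The published Auto-CoT method clusters questions by embedding and samples one representative per cluster; the sketch below substitutes a toy length-based spread as a stand-in diversity heuristic, and `generate` is any callable that sends a prompt to a model and returns its text:

```python
def auto_cot_demos(questions: list[str], generate, k: int = 2) -> list[tuple[str, str]]:
    """Simplified Auto-CoT: pick k diverse questions, then elicit a
    reasoning chain for each with the zero-shot trigger phrase.

    Toy diversity heuristic only; real Auto-CoT clusters by embedding.
    """
    # Spread picks across the length distribution as a crude proxy for diversity.
    step = max(1, len(questions) // k)
    picked = sorted(questions, key=len)[::step][:k]
    # Use zero-shot CoT to generate each demonstration chain automatically.
    return [(q, generate(f"{q}\nLet's think step by step.")) for q in picked]
```

The resulting (question, chain) pairs can then be fed into a few-shot CoT prompt without any hand-written examples.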

Benefits

Transparency

Each step is visible, making it easy to identify where reasoning goes wrong.

Debugging

When answers are incorrect, you can pinpoint the faulty step.

Verification

Intermediate steps can be validated independently.
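For arithmetic chains like the apples example, that validation can be fully automated. A minimal checker, assuming each calculation appears on its own line in `a op b = c` form (non-arithmetic lines are skipped):

```python
import re

def check_arithmetic_steps(chain: str) -> list[tuple[str, bool]]:
    """Independently re-verify each 'a op b = c' line in a reasoning chain."""
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    results = []
    for line in chain.splitlines():
        m = re.match(r"\s*(-?\d+)\s*([+\-*])\s*(-?\d+)\s*=\s*(-?\d+)", line)
        if m:
            a, op, b, c = m.groups()
            results.append((line.strip(), ops[op](int(a), int(b)) == int(c)))
    return results

chain = (
    "The store starts with 23 apples.\n"
    "They sell 17 apples.\n"
    "23 - 17 = 6\n"
    "Therefore, 6 apples remain."
)
print(check_arithmetic_steps(chain))  # [('23 - 17 = 6', True)]
```

A failed check pinpoints the exact faulty step, which is the debugging benefit described above.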

Improved Accuracy

Published evaluations report substantial gains from CoT prompting on complex reasoning benchmarks, with improvements in the 10-40% range common on math word problems.

When to Use

Good fit:

  • Mathematical reasoning
  • Multi-step logical deduction
  • Tasks requiring explicit justification
  • Debugging and development

Less effective for:

  • Simple factual lookups
  • Tasks where speed matters more than accuracy
  • Very long reasoning chains (context limits)

Relationship to Other Patterns

  • ReAct: Interleaves CoT reasoning with actions
  • Reflection: Uses CoT to critique previous outputs
  • Tree of Thoughts: Explores multiple CoT paths

Limitations

CoT uses a linear, left-to-right reasoning process. For problems requiring exploration of multiple hypotheses, consider Tree of Thoughts.

Considerations

CoT increases token usage and latency. For simple tasks, direct answers may be more efficient.