# Chain-of-Thought Prompting

Learn how to use chain-of-thought prompting to help AI reason through complex coding problems step by step.
Chain-of-thought (CoT) prompting is a technique that enables AI to tackle complex reasoning tasks by breaking them down into intermediate steps. It's especially powerful for debugging, algorithm design, and architectural decisions.
## What Is Chain-of-Thought Prompting?
Chain-of-thought prompting encourages the AI to "show its work" by reasoning through problems step by step, similar to how a human developer would think through a problem.
**Without CoT:**

```
Fix this bug in my authentication code.
[code]
```
**With CoT:**

```
Analyze this authentication bug step by step:

1. First, trace the authentication flow
2. Identify where the token validation fails
3. Check for race conditions or timing issues
4. Propose a fix with explanation

[code]
```
## Why Chain-of-Thought Works
Research by Wei et al. (2022) showed that CoT prompting significantly improves performance on:
- Mathematical reasoning
- Multi-step logic problems
- Complex code analysis
- Architecture decisions
**Important:** CoT is most effective with larger models (100B+ parameters). Smaller models may produce plausible-looking but illogical reasoning chains.
## Zero-Shot Chain-of-Thought
The simplest CoT technique: just add "Let's think step by step."
```
I need to optimize this database query that's running slowly.
Let's think step by step.

[query]
```
This simple phrase triggers the model to break down its reasoning before providing an answer.
### More Zero-Shot Triggers
- "Let's think step by step"
- "Let's work through this methodically"
- "First, let's analyze the problem"
- "Let's break this down"
- "Walk me through the solution"
## Few-Shot Chain-of-Thought
Provide examples that demonstrate the reasoning process:
```
When debugging code, follow this reasoning process:

Example Problem: Function returns undefined unexpectedly

Reasoning:
1. First, I check the function signature and return type
2. I trace the execution path for each branch
3. I identify the branch that leads to undefined
4. Root cause: early return without a value on line 12
5. Fix: add an explicit return value

Solution:
[shows the fix]

---

Now analyze this problem:
[your code with bug]

Use the same step-by-step reasoning process.
```
## Structured Chain-of-Thought (SCoT) for Code
Research shows that planning with programming structures improves code generation:
```
Before writing the code, plan using programming structures:

Task: Create a function to find duplicate items in a shopping cart

Planning:
1. SEQUENTIAL: Initialize empty Set for seen items, empty array for duplicates
2. LOOP: Iterate through all cart items
3. BRANCH: Check if item ID already in Set
   - YES: Add to duplicates array
   - NO: Add to seen Set
4. SEQUENTIAL: Return duplicates array

Now implement this plan in TypeScript.
```
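Translating that plan line for line into code keeps the structure visible. A minimal sketch, assuming a simple `CartItem` shape:

```typescript
interface CartItem {
  id: string;
  name: string;
}

// Implements the plan above: a Set of seen IDs, one pass over the cart,
// and a branch on membership.
function findDuplicates(items: CartItem[]): CartItem[] {
  const seen = new Set<string>();          // SEQUENTIAL: initialize
  const duplicates: CartItem[] = [];
  for (const item of items) {              // LOOP: each cart item
    if (seen.has(item.id)) {               // BRANCH: ID already seen?
      duplicates.push(item);               //   YES: record duplicate
    } else {
      seen.add(item.id);                   //   NO: mark as seen
    }
  }
  return duplicates;                       // SEQUENTIAL: return result
}
```

Because the plan names each structure (sequence, loop, branch), checking the implementation against it is mechanical.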
## CoT for Debugging
### The Two-Step Debug Process
**Step 1: Analysis (don't fix yet)**
Analyze this error step by step:

**Error:** `TypeError: Cannot read property 'map' of undefined`

**Code:**

```typescript
const UserList = ({ users }) => {
  return users.map(user => <UserCard key={user.id} user={user} />);
};
```

**Reasoning process:**
- What is the error telling us?
- Where in the code could this occur?
- What are the possible causes?
- What's the most likely root cause?
**Step 2: Fix (After understanding)**
```
Based on your analysis, the issue is that `users` is undefined on the initial render.

Now provide a fix that:
- Handles the undefined case
- Adds proper TypeScript types
- Includes a loading state
- Explains why this fix works
```
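The shape of fix this prompt should produce can be sketched in plain TypeScript (a stand-in for the JSX component so it runs anywhere; the `User` type and loading message are assumptions):

```typescript
interface User {
  id: number;
  name: string;
}

// Core of the fix: make `users` optional in the type and branch on
// undefined before calling .map, returning a loading placeholder instead.
function renderUserList(users?: User[]): string[] {
  if (users === undefined) {
    return ["Loading users..."]; // loading state for the initial render
  }
  return users.map(user => user.name);
}
```

The type change (`users?: User[]`) is what makes the bug impossible to reintroduce silently: the compiler now forces every caller to handle the undefined case.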
### Debug Reasoning Template
```
Analyze this bug systematically:

Bug Description: [what's happening]

Step 1 - Understand the Error:
- What does the error message mean?
- What line/component is affected?

Step 2 - Trace the Data Flow:
- Where does the problematic data come from?
- What transformations occur?

Step 3 - Identify Root Cause:
- What assumption is being violated?
- Is this a timing, data, or logic issue?

Step 4 - Consider Edge Cases:
- What inputs could cause this?
- Are there race conditions?

Step 5 - Propose Solution:
- What's the minimal fix?
- Are there preventive measures?

Code: [paste your code]
```
## CoT for Algorithm Design
```
Design an algorithm to [task]. Think through it step by step.

Task: Find the longest palindromic substring in a string

Step 1 - Understand the Problem:
- What is a palindrome?
- What does "longest" mean?
- What's the expected output?

Step 2 - Consider Approaches:
- Brute force: check all substrings (O(n³))
- Expand around center (O(n²))
- Manacher's algorithm (O(n))

Step 3 - Choose Approach:
- For this case, expand around center balances simplicity and efficiency

Step 4 - Design the Algorithm:
- For each character, treat it as a center
- Expand outward while characters match
- Track the longest found

Step 5 - Handle Edge Cases:
- Empty string
- Single character
- Even vs. odd length palindromes

Now implement in TypeScript with comments explaining each step.
```
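Following the plan above, an expand-around-center implementation might look like this:

```typescript
// Expand-around-center: O(n²) time, O(1) extra space.
function longestPalindrome(s: string): string {
  if (s.length < 2) return s; // edge cases: empty string, single character

  let start = 0;
  let maxLen = 1;

  // Expand outward from the given center while characters match,
  // tracking the longest palindrome found so far.
  const expand = (left: number, right: number): void => {
    while (left >= 0 && right < s.length && s[left] === s[right]) {
      if (right - left + 1 > maxLen) {
        maxLen = right - left + 1;
        start = left;
      }
      left--;
      right++;
    }
  };

  for (let i = 0; i < s.length; i++) {
    expand(i, i);     // odd-length palindromes, centered on a character
    expand(i, i + 1); // even-length palindromes, centered between characters
  }

  return s.slice(start, start + maxLen);
}
```

Note how the two `expand` calls per index directly address the even-vs-odd edge case identified in Step 5 of the plan.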
## CoT for Architecture Decisions
```
I need to decide on a state management approach for my React app. Let's analyze this decision systematically.

Context:
- Medium-sized e-commerce app
- 50+ components
- Real-time inventory updates needed
- Team of 3 developers

Step 1 - Identify Requirements:
- What state needs to be shared?
- How frequently does state change?
- Do we need persistence?
- What's the team's experience?

Step 2 - Evaluate Options:

Option A: React Context + useReducer
- Pros: built-in, simple, no dependencies
- Cons: re-render issues at scale, no devtools

Option B: Redux Toolkit
- Pros: DevTools, middleware, time-travel
- Cons: boilerplate, learning curve

Option C: Zustand
- Pros: simple API, good performance, small bundle
- Cons: smaller ecosystem

Step 3 - Consider Trade-offs:
[analysis]

Step 4 - Make Recommendation:
Based on the requirements, I recommend...
```
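To make Option A concrete: the reducer half of Context + useReducer is a pure TypeScript function, which is part of its appeal. The state and action shapes below are invented for illustration:

```typescript
interface CartState {
  itemCount: number;
}

// A discriminated union of actions keeps the switch exhaustive.
type CartAction =
  | { type: "add"; quantity: number }
  | { type: "clear" };

// A reducer is a pure function of (state, action) -> state, so it can be
// unit-tested without rendering any React components.
function cartReducer(state: CartState, action: CartAction): CartState {
  switch (action.type) {
    case "add":
      return { itemCount: state.itemCount + action.quantity };
    case "clear":
      return { itemCount: 0 };
  }
}
```

In a component this would be wired up with `useReducer(cartReducer, { itemCount: 0 })` and shared via Context; the testability shown here is the same regardless of which option the analysis ultimately recommends.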
## CoT for Code Review
```
Review this code step by step, analyzing each aspect:

Code to review: [paste code]

Step 1 - Correctness:
- Does the code do what it's supposed to?
- Are there logic errors?

Step 2 - Security:
- Input validation?
- SQL injection risk?
- XSS vulnerabilities?

Step 3 - Performance:
- Any O(n²) operations that could be O(n)?
- Unnecessary re-renders?
- Memory leaks?

Step 4 - Maintainability:
- Clear naming?
- Appropriate abstraction level?
- Documentation needed?

Step 5 - Edge Cases:
- Null/undefined handling?
- Empty arrays/objects?
- Boundary conditions?

Provide findings for each step, then overall recommendations.
```
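A common finding in the performance step is a nested lookup that a `Map` turns from O(n²) into O(n). A before/after sketch (the `Order` and `User` shapes are assumptions):

```typescript
interface Order { userId: number; total: number }
interface User { id: number; name: string }

// O(n²): for each order, scan the entire users array.
function attachNamesSlow(orders: Order[], users: User[]): string[] {
  return orders.map(o => users.find(u => u.id === o.userId)?.name ?? "unknown");
}

// O(n): build a Map once, then each lookup is constant time.
function attachNamesFast(orders: Order[], users: User[]): string[] {
  const byId = new Map<number, string>();
  for (const u of users) byId.set(u.id, u.name);
  return orders.map(o => byId.get(o.userId) ?? "unknown");
}
```

A CoT review prompt that asks for Step 3 explicitly makes the AI compare the two shapes rather than just confirming the slow version "works."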
## Chain-of-Thought and Reasoning Models
Modern "thinking" models (Claude with extended thinking, GPT-5.4 Thinking) already reason internally before responding. This changes how you should use CoT prompting.
### Standard vs Reasoning Models
| Aspect | Standard Models | Reasoning Models |
|--------|-----------------|------------------|
| Internal reasoning | None | Built-in |
| "Think step by step" | Very helpful | Often unnecessary |
| Explicit steps | Guide the model | May cause overthinking |
| Best use | Guide reasoning process | Complex problems, skip guidance |
### When CoT Still Helps with Reasoning Models
- **Structured output**: When you need specific format
- **Verification**: When you want to see the reasoning
- **Domain-specific chains**: Custom reasoning for your domain
- **Teaching**: When reviewing with others
### When to Skip CoT with Reasoning Models
❌ Unnecessary with reasoning models: "Let's think step by step about this algorithm"
✅ Better for reasoning models: "Design an algorithm for [task]. Show your approach."
The reasoning model will naturally think through the problem—you don't need to prompt it to do so.
## When to Use Chain-of-Thought
| Task | Use CoT? | Model Type | Why |
|------|----------|------------|-----|
| Simple CRUD operation | No | Any | Straightforward, no complex reasoning |
| Complex algorithm | Yes (standard) / Optional (reasoning) | Any | Requires step-by-step logic |
| Debugging mystery bugs | Yes | Any | Need to trace and analyze |
| Architecture decisions | Yes | Any | Multiple factors to weigh |
| Code review | Yes | Any | Systematic analysis helps |
| Refactoring complex code | Yes | Any | Need to understand before changing |
| Simple formatting | No | Any | Pattern matching, not reasoning |
## Combining CoT with Other Techniques
### CoT + Few-Shot
```
Here's how I debug authentication issues:

Example:
Problem: Users randomly get logged out
Reasoning:
- Check token expiration logic - tokens expire correctly
- Check token storage - found: using sessionStorage
- Root cause: sessionStorage clears on new tab
- Fix: switch to localStorage with proper expiry

Now debug my issue using the same approach:
[your problem]
```
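The fix in the example ("localStorage with proper expiry") can be sketched like this. A `TokenStore` interface stands in for `window.localStorage` so the sketch runs outside a browser, and the storage key is an assumption:

```typescript
// Minimal subset of the Web Storage API we rely on.
interface TokenStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

// Store the token alongside its expiry timestamp. Unlike sessionStorage,
// localStorage survives new tabs, so expiry must be enforced on read.
function saveToken(store: TokenStore, token: string, ttlMs: number, now = Date.now()): void {
  store.setItem("auth", JSON.stringify({ token, expiresAt: now + ttlMs }));
}

function loadToken(store: TokenStore, now = Date.now()): string | null {
  const raw = store.getItem("auth");
  if (raw === null) return null;
  const { token, expiresAt } = JSON.parse(raw) as { token: string; expiresAt: number };
  if (now >= expiresAt) {
    store.removeItem("auth"); // expired: behave as if logged out
    return null;
  }
  return token;
}
```

In a browser you would pass `window.localStorage` (which satisfies `TokenStore` structurally) as the `store` argument.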
### CoT + Role Prompting
```
You are a senior security engineer reviewing code.

Walk through this authentication implementation step by step:
- First, identify the authentication flow
- Then, check each step for vulnerabilities
- Finally, rate severity and provide fixes

[code]
```
## Tips for Effective CoT
1. **Be explicit about steps**: Don't just say "think about it"—specify what to think about
2. **Number your steps**: Helps the AI organize its reasoning
3. **Request intermediate output**: "Show your reasoning at each step"
4. **Validate the chain**: Review the AI's reasoning, not just the conclusion
5. **Break complex problems**: Use multiple CoT prompts for very complex issues
## Practice Exercise
Use chain-of-thought to analyze this performance issue:
```typescript
const searchUsers = async (query: string) => {
  const allUsers = await fetchAllUsers(); // Returns 100k+ users
  return allUsers.filter(user =>
    user.name.toLowerCase().includes(query.toLowerCase()) ||
    user.email.toLowerCase().includes(query.toLowerCase())
  );
};
```

Structure your analysis with:
- Understanding the current implementation
- Identifying performance issues
- Considering alternative approaches
- Recommending the optimal solution
## Summary
- Chain-of-thought prompting breaks complex reasoning into steps
- Zero-shot CoT: "Let's think step by step"
- Few-shot CoT: Provide example reasoning chains
- Best for debugging, algorithms, architecture, code review
- Combine with other techniques for better results
## Next Steps
Now that you can guide AI through complex reasoning, let's learn about role prompting and personas—how to shape the AI's perspective and expertise.