# Test-Driven Development with AI

Learn how to combine TDD principles with AI assistance, using tests to guide code generation and ensure quality.
Test-Driven Development (TDD) and AI are a powerful combination. Tests provide clear, executable specifications that guide AI code generation, and they give you an objective way to verify that the generated code actually works.
## The AI-Assisted TDD Cycle

Traditional TDD: Red → Green → Refactor

AI-Assisted TDD:
- Spec - Define requirements clearly
- Red - Generate failing tests from requirements
- Green - Use AI to generate code that passes tests
- Refactor - Use AI to improve code quality
- Verify - Review and validate everything
### Why TDD Works Well with AI
- Tests are specifications: Clear, unambiguous requirements
- Verifiable output: You can objectively check if code works
- Fast feedback: Run tests immediately after generation
- Confidence to iterate: Tests catch regressions
## Step 1: Requirements to Test Spec

Start by converting requirements into a test specification:
I need a function with these requirements:
- Validates email addresses
- Accepts standard email formats
- Rejects emails without @ or domain
- Handles edge cases (multiple dots, plus signs)
Generate a test specification (not the tests yet):
1. What test cases should exist?
2. What are the happy path cases?
3. What are the edge cases?
4. What are the error cases?
Output:
Test Specification for validateEmail():
Happy Path:
- Standard email (user@example.com) → true
- Email with subdomain (user@mail.example.com) → true
- Email with plus (user+tag@example.com) → true
Edge Cases:
- Multiple dots in domain (user@example.co.uk) → true
- Single character local part (a@example.com) → true
- Numbers in local part (user123@example.com) → true
Error Cases:
- Missing @ (userexample.com) → false
- Missing domain (user@) → false
- Missing local part (@example.com) → false
- Empty string ("") → false
- Just @ ("@") → false
- Multiple @ signs (user@@example.com) → false
## Step 2: Generate Failing Tests
Generate Jest tests for this specification:
[paste test specification]
Requirements:
- Use describe/it structure
- Group by happy path, edge cases, error cases
- Include descriptive test names
- Use expect().toBe() for boolean results
Function signature: validateEmail(email: string): boolean
Generated Tests:

```typescript
describe('validateEmail', () => {
  describe('happy path', () => {
    it('should accept standard email format', () => {
      expect(validateEmail('user@example.com')).toBe(true);
    });

    it('should accept email with subdomain', () => {
      expect(validateEmail('user@mail.example.com')).toBe(true);
    });

    it('should accept email with plus sign', () => {
      expect(validateEmail('user+tag@example.com')).toBe(true);
    });
  });

  describe('edge cases', () => {
    it('should accept multiple dots in domain', () => {
      expect(validateEmail('user@example.co.uk')).toBe(true);
    });

    it('should accept single character local part', () => {
      expect(validateEmail('a@example.com')).toBe(true);
    });

    it('should accept numbers in local part', () => {
      expect(validateEmail('user123@example.com')).toBe(true);
    });
  });

  describe('error cases', () => {
    it('should reject email without @', () => {
      expect(validateEmail('userexample.com')).toBe(false);
    });

    it('should reject email without domain', () => {
      expect(validateEmail('user@')).toBe(false);
    });

    it('should reject email without local part', () => {
      expect(validateEmail('@example.com')).toBe(false);
    });

    it('should reject empty string', () => {
      expect(validateEmail('')).toBe(false);
    });

    it('should reject multiple @ signs', () => {
      expect(validateEmail('user@@example.com')).toBe(false);
    });
  });
});
```
## Step 3: Verify Tests Are Red

Run the tests to confirm they fail:

```bash
npm test -- validateEmail.test.ts
```

All tests should fail (or error out because the function doesn't exist yet).
## Step 4: Generate Implementation
These tests are failing because the function doesn't exist yet:
[paste tests]
Generate an implementation that passes ALL these tests.
Requirements:
- TypeScript
- Single function
- No external dependencies
- Handle all test cases
Generated Implementation:

```typescript
export const validateEmail = (email: string): boolean => {
  if (!email || typeof email !== 'string') {
    return false;
  }

  // Simple regex for email validation
  const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

  // Additional check: exactly one @
  const atCount = (email.match(/@/g) || []).length;
  if (atCount !== 1) {
    return false;
  }

  return emailRegex.test(email);
};
```
## Step 5: Run Tests (Green)

```bash
npm test -- validateEmail.test.ts
```

If the tests pass, move on to refactoring. If any fail, iterate:
Some tests are still failing:
Failing tests:
- "should accept multiple dots in domain" - expected true, got false
Current implementation:
[paste implementation]
Fix the implementation to pass this test while keeping others passing.
## Step 6: Refactor

Once the tests pass, improve the code:
All tests pass. Now refactor for:
1. Better readability
2. More maintainable structure
3. Better naming
4. Type safety improvements
Current passing code:
[paste code]
Tests to maintain:
[paste tests]
Refactor without breaking any tests.
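To make this step concrete, here is one shape the Step 4 implementation might take after a refactor (a sketch, not the only valid answer):

```typescript
// One possible refactor: same behavior as the Step 4 version, with the
// rule named and the redundant @-count check removed -- the character
// class [^\s@] already rejects a second @ anywhere in the string.
// Pattern: non-empty local part, a single @, then a domain containing
// at least one dot; no whitespace anywhere.
const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

export const validateEmail = (email: string): boolean =>
  typeof email === 'string' && EMAIL_PATTERN.test(email);
```

The test suite from Step 2 is what licenses a deletion like this: if the @-count check were load-bearing, removing it would turn a test red.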
## TDD Patterns for Different Scenarios

### Pattern: Component Testing
Generate React Testing Library tests for this component spec:
Component: SearchInput
- Renders input with placeholder "Search..."
- Calls onChange prop when user types
- Calls onSubmit prop when Enter is pressed
- Shows clear button when input has value
- Clears input when clear button clicked
Include tests for:
- Initial render state
- User interaction (typing, clicking, keypresses)
- Callback invocations with correct arguments
### Pattern: API Endpoint Testing
Generate Supertest tests for this API endpoint spec:
POST /api/users
- Creates user with valid data → 201
- Returns created user (without password)
- Rejects duplicate email → 409
- Validates required fields → 400
- Requires authentication → 401
Include:
- Setup and teardown
- Realistic test data
- Database state verification
### Pattern: Integration Testing
Generate integration tests for this user registration flow:
1. User submits registration form
2. API creates user in database
3. Email verification sent
4. User clicks verification link
5. Account is activated
Test each step and the full flow.
Include mocking for email service.
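The shape such an integration test can take, sketched with an in-memory store and a recorded-calls fake in place of the real email service (all names and shapes below are invented for the sketch):

```typescript
// Illustrative registration flow with hand-rolled fakes: an in-memory
// user store and an email "service" that records sends instead of
// delivering mail.
type User = { email: string; verified: boolean; token: string };

const users = new Map<string, User>();
const sentEmails: { to: string; link: string }[] = [];
let nextToken = 0;

// Steps 1-3: registration stores the user and "sends" a verification email.
function register(email: string): void {
  const token = `tok-${nextToken++}`;
  users.set(email, { email, verified: false, token });
  sentEmails.push({ to: email, link: `/verify?token=${token}` });
}

// Steps 4-5: following the link activates the account.
function verify(token: string): boolean {
  for (const user of users.values()) {
    if (user.token === token) {
      user.verified = true;
      return true;
    }
  }
  return false;
}

// Full-flow check: register, pull the token out of the captured email,
// verify, then inspect the final state.
register('new@example.com');
const capturedToken = sentEmails[0].link.split('token=')[1];
const activated = verify(capturedToken);
```

Each numbered step gets its own assertion target (the store, the captured email, the return value), so a failure points at the step that broke.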
## Handling Complex Test Scenarios

### Mocks and Stubs
Generate tests for this function that requires mocking:
```typescript
async function sendWelcomeEmail(userId: string): Promise<boolean> {
  const user = await db.users.findById(userId);
  if (!user) return false;
  await emailService.send(user.email, 'Welcome!');
  return true;
}
```

Mock:
- db.users.findById
- emailService.send
Test cases:
- User found, email sent successfully
- User not found
- Email service fails
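In a Jest suite you would reach for `jest.fn()` and `jest.mock()`; to keep a runnable sketch free of the test framework, the version below hand-rolls the fakes and passes the collaborators in explicitly (the injected-dependency signature is an assumption, not the shape from the prompt):

```typescript
// Hand-rolled mocks: the collaborators are injected so each test case
// can substitute its own fake. Covers the three cases from the prompt.
type Deps = {
  findById: (id: string) => Promise<{ email: string } | null>;
  send: (to: string, body: string) => Promise<void>;
};

async function sendWelcomeEmail(userId: string, deps: Deps): Promise<boolean> {
  const user = await deps.findById(userId);
  if (!user) return false;
  await deps.send(user.email, 'Welcome!');
  return true;
}

// Case 1: user found -- the fake records the send so we can assert on it.
const sent: string[] = [];
const happyDeps: Deps = {
  findById: async () => ({ email: 'a@example.com' }),
  send: async (to) => { sent.push(to); },
};

// Case 2: user not found -- send must never be called.
const missingDeps: Deps = {
  findById: async () => null,
  send: async () => { throw new Error('send should not be called'); },
};

// Case 3: email service fails -- the rejection should propagate.
const failingDeps: Deps = {
  findById: async () => ({ email: 'a@example.com' }),
  send: async () => { throw new Error('SMTP down'); },
};
```

The recorded-calls array plays the role of `mock.calls` in Jest: it lets the test assert not just the return value but that the right collaborator was invoked with the right argument.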
### Test Data Factories
Create a test data factory for this type:

```typescript
interface Order {
  id: string;
  userId: string;
  items: OrderItem[];
  status: 'pending' | 'confirmed' | 'shipped' | 'delivered';
  createdAt: Date;
  total: number;
}
```
Factory should:
- Generate valid random data by default
- Allow overriding any field
- Have presets for common scenarios (emptyOrder, shippedOrder)
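A sketch of what such a factory could look like (`OrderItem`'s shape isn't given in the spec, so the one below is invented):

```typescript
// Test data factory for Order. OrderItem is stubbed with an assumed
// shape; ids use a counter so generated orders are always distinct.
type OrderItem = { sku: string; quantity: number; price: number };

interface Order {
  id: string;
  userId: string;
  items: OrderItem[];
  status: 'pending' | 'confirmed' | 'shipped' | 'delivered';
  createdAt: Date;
  total: number;
}

let seq = 0;

const makeOrder = (overrides: Partial<Order> = {}): Order => ({
  id: `order-${++seq}`,
  userId: `user-${++seq}`,
  items: [{ sku: 'SKU-1', quantity: 1, price: 9.99 }],
  status: 'pending',
  createdAt: new Date(),
  total: 9.99,
  ...overrides, // caller-supplied fields win
});

// Presets for common scenarios.
const emptyOrder = (): Order => makeOrder({ items: [], total: 0 });
const shippedOrder = (): Order => makeOrder({ status: 'shipped' });
```

Tests then state only what they care about: `makeOrder({ status: 'delivered' })` pins one field and leaves the rest valid by default.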
## AI-Assisted Test Expansion
### Finding Missing Test Cases
Review these tests for gaps:
[paste tests]
What test cases are missing? Consider:
- Boundary conditions
- Type coercion scenarios
- Race conditions
- Performance edge cases
- Security-related cases
### Generating Property-Based Tests
Convert these example-based tests to property-based tests using fast-check:

```typescript
it('should add two numbers', () => {
  expect(add(2, 3)).toBe(5);
  expect(add(0, 0)).toBe(0);
  expect(add(-1, 1)).toBe(0);
});
```
Properties to test:
- Commutativity: add(a, b) === add(b, a)
- Identity: add(a, 0) === a
- Associativity: add(add(a, b), c) === add(a, add(b, c))
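With fast-check, each property becomes `fc.assert(fc.property(fc.integer(), fc.integer(), (a, b) => ...))`, and the library shrinks failing inputs for you. The core loop it automates, hand-rolled here so the sketch runs without the dependency, looks like:

```typescript
// Hand-rolled property check: run each property against many random
// inputs. fast-check adds arbitraries, shrinking, and reporting on top
// of this same idea.
const add = (a: number, b: number): number => a + b;

const randomInt = (): number => Math.floor(Math.random() * 2001) - 1000;

function holds(prop: (a: number, b: number, c: number) => boolean, runs = 200): boolean {
  for (let i = 0; i < runs; i++) {
    if (!prop(randomInt(), randomInt(), randomInt())) return false;
  }
  return true;
}

const commutative = holds((a, b) => add(a, b) === add(b, a));
const identity = holds((a) => add(a, 0) === a);
const associative = holds((a, b, c) => add(add(a, b), c) === add(a, add(b, c)));
```

The integer range is kept small deliberately: addition of small integers is exact in JavaScript numbers, so the associativity property holds without floating-point caveats.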
## TDD Workflow Tips
### 1. Write Tests You'd Want to Pass
Before generating implementation, ask: "If I were implementing this, would these tests adequately verify correctness?"
### 2. Don't Let AI Skip Red
Always run tests before implementing. The "red" phase confirms your tests are testing the right thing.
### 3. Review Generated Tests
AI-generated tests might miss cases. Always review:
Are there edge cases these tests don't cover? [paste generated tests]
### 4. Keep Tests Independent
Each test should work in isolation:
Review these tests for coupling or shared state issues. [paste tests]
## Common TDD Mistakes
### 1. Tests Too Coupled to Implementation
Tests should verify behavior, not implementation details.
### 2. Skipping the Red Phase
Always verify tests fail first—prevents false positives.
### 3. Too Many Tests at Once
Generate tests in batches, verifying each batch.
### 4. Ignoring Edge Cases
Use AI to brainstorm edge cases you might miss.
## Practice Exercise
Use TDD with AI to build a password validator:
Requirements:
- Minimum 8 characters
- At least one uppercase letter
- At least one lowercase letter
- At least one number
- At least one special character (!@#$%^&*)
Workflow:
1. Generate test specification from requirements
2. Generate test file (verify it fails)
3. Generate implementation (verify it passes)
4. Refactor for readability
5. Ask AI to find missing test cases
6. Add those tests and verify
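For reference, one place the green phase of this exercise might land (a sketch; generate your own tests first and let them drive the shape):

```typescript
// Sketch of a validator for the exercise's rules: minimum length, one
// uppercase, one lowercase, one digit, one special from !@#$%^&*.
const validatePassword = (password: string): boolean =>
  password.length >= 8 &&
  /[A-Z]/.test(password) &&
  /[a-z]/.test(password) &&
  /[0-9]/.test(password) &&
  /[!@#$%^&*]/.test(password);
```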
## Summary
- TDD provides clear specs for AI code generation
- Follow: Spec → Red → Green → Refactor → Verify
- Generate test specifications before test code
- Always verify tests fail before implementing
- Use AI to find missing test cases
- Review and understand all generated code
## Next Steps
Let's explore debugging and code review workflows—using AI as your debugging partner.