Part 2: Code Review Process and Best Practices

Series Navigation: ← Part 1: Foundations and Principles | Part 3: Refactoring Techniques and Patterns →

Introduction

After establishing the foundations in Part 1, let's dive into the practical aspects of code review. Over years of maintaining TypeScript microservices, I've participated in thousands of code reviews, both as a reviewer and as an author. I've seen reviews that caught critical bugs before production, and reviews that devolved into unproductive debates about tabs vs. spaces.

In my current team, we process 50-100 pull requests weekly across our microservices. Each service has different owners, but we all review each other's code. This cross-pollination has been invaluable for maintaining consistency and sharing knowledge across the organization.

The Code Review Workflow

Our Pull Request Process

Here's the workflow we've refined over two years:

  1. Developer creates feature branch: Branching off main or develop

  2. Implementation with self-review: Review your own diff before creating PR

  3. Create pull request: With clear description and context

  4. Automated checks: CI/CD runs tests, linting, type checking

  5. Peer review: Minimum one approval, typically 1-2 reviewers

  6. Address feedback: Discussion and code updates

  7. Final approval: Reviewer approves changes

  8. Merge: Squash and merge into main branch

  9. Deploy: Automatic deployment to staging/production

PR Size Guidelines

From experience, I've learned that PR size dramatically affects review quality:

  • Ideal: 200-400 lines changed

  • Acceptable: Up to 600 lines

  • Large: 600-1000 lines (needs extra care)

  • Too large: 1000+ lines (should be split)

When I was implementing a new payment provider, my initial PR was 2,500 lines. The review took three days and missed several issues. I split it into five focused PRs:

  1. Add payment provider interface and types (150 lines)

  2. Implement core payment logic (300 lines)

  3. Add error handling and retries (250 lines)

  4. Integration tests (400 lines)

  5. Update documentation and examples (200 lines)

Each PR was reviewed within hours, and the quality improved significantly.

What to Look For During Review

1. Correctness and Logic

Does the code do what it's supposed to do?
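
Here's a hypothetical snippet of the kind this check catches (the function, names, and thresholds are illustrative, not from a real PR):

```typescript
// Illustrative discount rules; the thresholds are hypothetical.
function getDiscountRate(orderTotal: number): number {
  if (orderTotal >= 100) {
    return 0.1; // 10% off orders of $100 or more
  }
  if (orderTotal > 50) {
    return 0.05; // 5% off orders over $50, but an order of exactly $50 falls through
  }
  return 0;
}
```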

Review Comment I Would Leave:

Should the second condition be >= instead of >? Currently, an order of exactly $50 gets no discount. Is this intentional?

2. Error Handling

Are errors properly handled?
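
A hypothetical snippet that would prompt the comment below (the URL and the UserProfile type are illustrative):

```typescript
interface UserProfile {
  id: string;
  email: string;
}

// Illustrative code under review: only the 404 case is handled.
async function getUserProfile(id: string): Promise<UserProfile | null> {
  const response = await fetch(`https://api.example.com/users/${id}`);
  if (response.status === 404) {
    return null;
  }
  // Assumes success: a 500 or 503 falls through here, and the body
  // is cast to UserProfile without any validation.
  return (await response.json()) as UserProfile;
}
```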

Review Comment:

We should handle the case where the API returns a non-404 error. What happens if we get a 500 or 503? Also, we should validate the response structure before assuming it matches UserProfile.

3. Security Vulnerabilities

In my order service, I've caught vulnerabilities like the following during review (the sketch below is a reconstruction with hypothetical names, not the original code):
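
```typescript
// Illustrative reconstruction; the table and field names are hypothetical.
declare const db: { query(sql: string, params?: unknown[]): Promise<unknown[]> };

// Vulnerable: user input interpolated straight into the SQL string.
async function findOrders(customerId: string) {
  return db.query(`SELECT * FROM orders WHERE customer_id = '${customerId}'`);
}

// Fixed with a parameterized query.
async function findOrdersSafely(customerId: string) {
  return db.query("SELECT * FROM orders WHERE customer_id = $1", [customerId]);
}
```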

Other Security Checks:

  • Sensitive data in logs

  • Authentication/authorization checks

  • Input validation and sanitization

  • Secrets in code (should use environment variables)

  • Rate limiting for public endpoints

4. Performance Issues
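
A hypothetical snippet of the kind that triggers this comment (the repositories and fields are illustrative):

```typescript
// Hypothetical repositories for illustration.
declare const orderRepository: {
  findRecent(): Promise<Array<{ id: string; customerId: string; customer?: unknown }>>;
};
declare const customerRepository: {
  findById(id: string): Promise<unknown>;
};

async function loadOrdersWithCustomers() {
  const orders = await orderRepository.findRecent();
  for (const order of orders) {
    // One database call per order: 100 orders means 101 queries in total.
    order.customer = await customerRepository.findById(order.customerId);
  }
  return orders;
}
```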

Review Comment:

This will cause N+1 queries. For 100 orders, we'd make 101 database calls. We should batch load the customers.
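
The batched version I'd suggest in the same comment, using the same hypothetical repositories plus an assumed findByIds batch lookup:

```typescript
// Same hypothetical repository shapes as above, plus a batch lookup.
declare const orderRepository: {
  findRecent(): Promise<Array<{ id: string; customerId: string; customer?: unknown }>>;
};
declare const customerRepository: {
  findByIds(ids: string[]): Promise<Array<{ id: string }>>;
};

async function loadOrdersWithCustomersBatched() {
  const orders = await orderRepository.findRecent();

  // One query for all customers, then an in-memory join.
  const ids = [...new Set(orders.map((o) => o.customerId))];
  const customers = await customerRepository.findByIds(ids);
  const byId = new Map(customers.map((c) => [c.id, c] as const));

  for (const order of orders) {
    order.customer = byId.get(order.customerId);
  }
  return orders;
}
```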

5. Testing Coverage

During review, I check the following; a sample test sketch follows the list:

  • Are edge cases tested?

  • Are error paths tested?

  • Are mocks used appropriately?

  • Is there integration test coverage for critical paths?
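
For instance, here's the kind of edge-case and error-path coverage I look for, in a hypothetical Jest-style test (PaymentService and its gateway are illustrative):

```typescript
// Hypothetical service under test; assumes a Jest environment for describe/it/expect.
class PaymentService {
  constructor(private gateway: { charge(amount: number): Promise<string> }) {}

  async pay(amount: number): Promise<string> {
    if (amount <= 0) throw new Error("amount must be positive");
    return this.gateway.charge(amount);
  }
}

describe("PaymentService", () => {
  it("rejects non-positive amounts (edge case)", async () => {
    const service = new PaymentService({ charge: async () => "ok" });
    await expect(service.pay(0)).rejects.toThrow("amount must be positive");
  });

  it("propagates gateway failures (error path)", async () => {
    const failing = {
      charge: async (): Promise<string> => {
        throw new Error("gateway down");
      },
    };
    const service = new PaymentService(failing);
    await expect(service.pay(10)).rejects.toThrow("gateway down");
  });
});
```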

6. Code Organization and Architecture

Does the change fit the existing architecture, and do new responsibilities land in the right modules?

Giving Effective Feedback

The Feedback Framework I Use

1. Be Specific and Actionable

❌ Poor feedback:

"This code is messy"

✅ Good feedback:

"This function has three different responsibilities: validation, business logic, and persistence. Consider extracting the validation into a separate method. For example:

2. Explain the "Why"

❌ Poor feedback:

"Use async/await instead of promises"

✅ Good feedback:

"Consider using async/await instead of promise chains here. It's easier to read and handle errors consistently with try/catch. Example:

3. Distinguish Must-Fix from Suggestions

I use these prefixes in my reviews:

  • 🚨 Critical: Must be fixed (security, bugs, breaking changes)

  • ⚠️ Important: Should be fixed (performance, error handling)

  • 💡 Suggestion: Nice to have (naming, minor refactoring)

  • ❓ Question: Seeking clarification

Example review:

🚨 Critical: This endpoint doesn't validate the user has permission to access the order. We need to add an authorization check.

⚠️ Important: This will cause N+1 queries under load. Consider using a DataLoader or batch fetching.

💡 Suggestion: The variable name data is generic. orderDetails would be more descriptive.

❓ Question: Why are we using a 30-second timeout here? Our normal timeout is 5 seconds.

4. Praise Good Work

Code review isn't just about finding problems:

✅ Nice use of the builder pattern here! This makes the test data creation much more readable.

✅ Good catch adding the null check. That would have caused issues in production.

✅ Excellent error messages. These will make debugging much easier.

Receiving Feedback Gracefully

As a PR author, I've learned to:

1. Don't Take It Personally

Feedback is about the code, not about you as a developer. I used to get defensive when reviewers found issues. Now I view every piece of feedback as a learning opportunity.

2. Ask for Clarification

If feedback isn't clear:

"Thanks for the suggestion! Can you clarify what you mean by 'this could cause race conditions'? I want to make sure I understand the scenario you're concerned about."

3. Explain Your Reasoning

If you disagree:

"I considered using a Set here, but chose an Array because we need to maintain insertion order for the UI display. The collection size is capped at 10 items, so the O(n) lookup isn't a concern. What do you think?"

4. Accept You'll Make Mistakes

I recently pushed code that had an obvious bug. The reviewer caught it:

"Line 45: You're comparing the string 'true' instead of the boolean true"

My response:

"Oof, good catch! Fixed in the latest commit. Added a test case for this too."

No excuses, just acknowledgment and fix.

Common Code Review Patterns

The Checklist Approach

Before marking a PR as ready for review, I run through this checklist:

Self-Review Checklist:

  • I've read my own diff line by line

  • Tests, linting, and type checks pass locally

  • The PR description gives reviewers clear context

  • The PR is within our size guidelines (ideally under 400 lines)

  • No secrets, leftover debug logging, or unrelated changes

Reviewer Checklist:

  • Correctness and logic

  • Error handling and edge cases

  • Security (authorization, input validation, no secrets in code)

  • Performance (no N+1 queries or unnecessary work)

  • Test coverage for critical paths

  • Fit with the existing architecture

Time-Boxing Reviews

I limit reviews to 60 minutes at a time. After that, my effectiveness drops. For large PRs, I do multiple review sessions.

The Two-Pass Review

First Pass (10 minutes): High-level

  • Does the overall approach make sense?

  • Is the PR size reasonable?

  • Are tests present?

Second Pass (30-45 minutes): Detailed

  • Line-by-line code review

  • Logic verification

  • Edge case analysis

Real-World Review Example

Here's a review I did on a colleague's PR for our notification service. The code below is reconstructed from my comments, with illustrative names and URLs:

Original Code:
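
```typescript
// Reconstructed for illustration; the real service names and URLs differ.
declare function getUser(id: string): Promise<{ email: string; phone: string }>;

async function sendNotification(userId: string, message: string) {
  const user = await getUser(userId);

  // Hardcoded URL, no auth, no error handling.
  await fetch("https://email-api.example.com/send", {
    method: "POST",
    body: JSON.stringify({ to: user.email, message }),
  });

  // Runs only after the email call finishes, and never if it throws.
  await fetch("https://sms-api.example.com/send", {
    method: "POST",
    body: JSON.stringify({ to: user.phone, message }),
  });
}
```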

My Review Comments:

🚨 Critical: No error handling. If the email API is down, the SMS won't be sent. These should be wrapped in try-catch.

🚨 Critical: API keys are missing. How are we authenticating with these services?

⚠️ Important: These API calls are sequential. They could run in parallel with Promise.all() to improve performance.

⚠️ Important: No retry logic. Transient network failures will cause notifications to be lost.

💡 Suggestion: The API URLs should come from configuration, not be hardcoded.

💡 Suggestion: Consider extracting email and SMS sending to separate services for better testability.

❓ Question: What happens if the user doesn't exist? Should we throw an error or silently fail?

Revised Code After Discussion:
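
The revision below is a sketch of where we landed; the config shape and the withRetry helper are hypothetical:

```typescript
// Illustrative revision; config wiring and withRetry are assumed helpers.
declare const config: {
  emailApiUrl: string;
  smsApiUrl: string;
  emailApiKey: string;
  smsApiKey: string;
};
declare function getUser(id: string): Promise<{ email: string; phone: string } | null>;
declare function withRetry<T>(fn: () => Promise<T>, attempts?: number): Promise<T>;

async function sendNotification(userId: string, message: string) {
  const user = await getUser(userId);
  if (!user) {
    // One possible answer to the review question: fail loudly, not silently.
    throw new Error(`User ${userId} not found`);
  }

  // Email and SMS run in parallel; one channel failing doesn't block the other.
  const results = await Promise.allSettled([
    withRetry(() =>
      fetch(config.emailApiUrl, {
        method: "POST",
        headers: { Authorization: `Bearer ${config.emailApiKey}` },
        body: JSON.stringify({ to: user.email, message }),
      })
    ),
    withRetry(() =>
      fetch(config.smsApiUrl, {
        method: "POST",
        headers: { Authorization: `Bearer ${config.smsApiKey}` },
        body: JSON.stringify({ to: user.phone, message }),
      })
    ),
  ]);

  for (const result of results) {
    if (result.status === "rejected") {
      console.error("Notification channel failed", result.reason);
    }
  }
}
```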

Building a Review Culture

What's Worked in My Team

  1. No code in production without review: Hard rule, no exceptions

  2. Review within 24 hours: We prioritize reviews to avoid blocking teammates

  3. Limit reviewers to 2-3: More reviewers = diffusion of responsibility

  4. Author can't approve their own PR: Obvious, but needs to be enforced

  5. Celebrate good reviews: Recognize reviewers who catch important issues

Handling Disagreements

Sometimes reviewers and authors disagree. Our process:

  1. Discussion in PR comments: Most disagreements resolve here

  2. Video call if needed: Complex issues benefit from real-time discussion

  3. Team decision: If still unresolved, bring to team meeting

  4. Document the decision: Update architectural decision records (ADRs)

Review Metrics We Track

  • Average PR size (lines changed)

  • Time to first review

  • Number of review cycles per PR

  • Review approval rate

  • Issues found in production vs. in review

These metrics help us improve our process over time.

Conclusion

Effective code review is a skill that requires practice and empathy. Key takeaways:

  • Size matters: Keep PRs small and focused

  • Be thorough: Check correctness, security, performance, and tests

  • Be constructive: Explain the "why" and suggest solutions

  • Be timely: Don't block teammates waiting for reviews

  • Learn from reviews: Both as author and reviewer

In Part 3, we'll dive into specific refactoring techniques and patterns you can apply during code review or as follow-up improvements.

Series Navigation: ← Part 1: Foundations and Principles | Part 3: Refactoring Techniques and Patterns →
