How to Optimize Your Code Reviews: Build an AI Assistant with Python and GitHub Actions
- Leanware Editorial Team
Code reviews often slow down the development cycle. Developers wait. Senior engineers get bogged down checking for things that an automated tool could have flagged. But AI can now handle the first pass. It spots routine issues instantly and gives your team a head start.
In this guide, you'll learn how to set up an AI assistant for PR reviews using Python, GitHub Actions, and Claude via the langchain_anthropic and mcp-use libraries. You’ll also see how to choose the right tool, customize it to your workflow, and scale it for your team.
TL;DR: Automate first-pass PR reviews using Claude and GitHub Actions - catch security flaws and performance issues with codebase-specific prompt customization.
Why Use an AI Assistant for Code Reviews?

PR queues often slow teams down, not because of complex bugs, but because engineers get stuck waiting for feedback on small mistakes or missed conventions. An AI assistant helps streamline that process by flagging routine problems early in the review cycle.
1. For developers: It gives early feedback on common problems - missing checks, unsafe patterns, inconsistent naming - before anyone else reviews the code. That shortens the feedback loop and reduces back-and-forth.
2. For tech leads and senior engineers: It filters out low-level noise. Instead of flagging spacing or basic violations, they can focus on design decisions, architecture, and whether the code makes sense in context.
3. For the team: It reduces lead time and helps reviews happen consistently, even when the team is busy. The assistant doesn’t replace reviews, but it makes them faster and more focused.
Choose the Right AI PR Review Tool
AI tools for code reviews vary in depth, customization, and integration. Picking the right one depends on your team’s size, workflow, and codebase.
Evaluate based on team size and workflow needs:
Small teams or startups: prioritize ease of setup, GitHub integration, and sensible default rule sets.
Large or regulated teams: you'll likely need deeper customization, granular rule tuning, and support for internal CI/CD systems.
Compare Key Features: Language Support, Integrations, Customization
Select the option that best suits your ecosystem and governance requirements.
Set Up the Tool in Your Development Environment
Option A: Commercial Tool Integration
Most commercial tools follow this pattern:
Install GitHub App from marketplace.
Configure repository access (start with test repo).
Set permissions (read code, write PRs and checks).
Option B: Build Custom AI Assistant with Claude
For maximum control and customization, build your own using Anthropic's Claude.
Step-by-Step Guide: Building Your AI Assistant
Let’s walk through building your own AI PR review assistant using GitHub, Python, Claude, and GitHub Actions.
The Architecture of Our Solution
To understand how our assistant works, let's examine its main components and how they interact.

The Trigger: It all starts on GitHub. When a developer opens or updates a Pull Request, an event is triggered.
The Orchestrator: A GitHub Action listens for this event and kicks off our automated workflow on a fresh runner (a clean virtual machine).
The Core Logic: The workflow executes our Python script. This script acts as the assistant's brain.
The Analysis Engine: The script uses the langchain_anthropic library to send the code and a well-defined prompt to the claude-sonnet-4-20250514 model, which generates a draft of the code review.
The Action: The script, using the mcp_use library, takes the AI's generated review and posts it as a helpful first comment on the Pull Request.
Step 1: Prerequisites and Initial Setup
Requirements: A GitHub account, an Anthropic API Key, and basic knowledge of Python and GitHub Actions.
Project Structure: Create a .reviewer/ folder in your repository to store the code_review.py and requirements.txt files.
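For reference, the layout looks like this (the workflow file comes in Step 3):

```
your-repo/
├── .github/
│   └── workflows/
│       └── code_review.yml
└── .reviewer/
    ├── code_review.py
    └── requirements.txt
```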
Dependencies: Create the .reviewer/requirements.txt file with this content:
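A minimal set looks like this; pin exact versions once you've tested your setup (python-dotenv is only needed for the local testing step below):

```
langchain-anthropic
mcp-use
python-dotenv
```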
Step 2: The Python Script - The Assistant's Core Logic
This script, code_review.py, is the engine that drives our assistant.
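Here is a minimal sketch of what that script can look like. The environment variable names (REPO, PR_NUMBER, PROMPT, GITHUB_TOKEN) and the choice of GitHub MCP server are assumptions - adapt them to your setup:

```python
# .reviewer/code_review.py - a minimal sketch, not a drop-in implementation.
import asyncio
import os

from dotenv import load_dotenv
from langchain_anthropic import ChatAnthropic
from mcp_use import MCPAgent, MCPClient

load_dotenv()  # lets local runs pick up variables from .env (see Step 2.5)


async def main():
    # The model that drafts the review.
    llm = ChatAnthropic(model="claude-sonnet-4-20250514", max_tokens=4096)

    # A GitHub MCP server gives the agent tools to read the PR diff and post
    # comments. We assume the reference server run via npx here; use whichever
    # GitHub MCP server your team prefers.
    client = MCPClient.from_dict({
        "mcpServers": {
            "github": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-github"],
                "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": os.environ["GITHUB_TOKEN"]},
            }
        }
    })
    agent = MCPAgent(llm=llm, client=client, max_steps=30)

    # PROMPT holds the review instructions (Step 4); REPO and PR_NUMBER are
    # injected by the workflow (Step 3) or by your .env file when testing.
    task = (
        f"{os.environ['PROMPT']}\n\n"
        f"Repository: {os.environ['REPO']}\n"
        f"Pull request: #{os.environ['PR_NUMBER']}\n"
        "Fetch the diff, write the review, and post it as a single PR comment."
    )
    print(await agent.run(task))


if __name__ == "__main__":
    asyncio.run(main())
```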
Step 2.5: Local Testing (Highly Recommended!)
1. Create a Token: Go to GitHub > Settings > Developer settings > Personal access tokens > Fine-grained tokens. Generate a token with Read and Write permissions for Pull Requests.
2. Create a .env file: In your project's root, create a file named .env. Add .env to your .gitignore!
Populate .env:
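Using the same variable names the script sketch above assumes:

```
ANTHROPIC_API_KEY=sk-ant-...
GITHUB_TOKEN=github_pat_...
REPO=your-org/your-repo
PR_NUMBER=123
PROMPT=You are a senior engineer doing a first-pass review of this pull request.
```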
3. Run the script: python .reviewer/code_review.py.
Step 3: Automation with GitHub Actions
Create the file .github/workflows/code_review.yml to trigger the assistant automatically.
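A minimal workflow along these lines should work; the env variable names match the script sketch from Step 2, and PROMPT is read from the repository variable you'll create in Step 4:

```yaml
name: AI Code Review

on:
  pull_request:
    types: [opened, synchronize]

permissions:
  contents: read
  pull-requests: write

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - run: pip install -r .reviewer/requirements.txt

      - name: Run the AI reviewer
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          REPO: ${{ github.repository }}
          PR_NUMBER: ${{ github.event.pull_request.number }}
          PROMPT: ${{ vars.PROMPT }}
        run: python .reviewer/code_review.py
```

The default secrets.GITHUB_TOKEN works here because the permissions block grants pull-requests: write; if your MCP server needs broader scopes, store a fine-grained PAT as a separate secret instead.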
Step 4: The Prompt - Instructing Your Assistant
This is where you define the "mind" of your assistant. Go to your repo Settings > Secrets and variables > Actions, select the Variables tab, and create PROMPT with this content.
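One possible starting point, which you can tighten to match your team's conventions (the final instruction matches what the script sketch in Step 2 expects):

```
You are a senior engineer doing a first-pass review of a pull request.
Given the repository and PR number, fetch the diff and review it for:
1. Security issues (injection, hardcoded secrets, unsafe input handling).
2. Bugs and performance problems.
3. Missing error handling or tests for changed behavior.
4. Deviations from the project's naming and style conventions.
Be concise and specific, group findings by file, and end with a short summary.
Post the review as a single comment. Do not approve, block, or merge the PR.
```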
Step 5: Troubleshooting
No comment appears: Check the permissions section in your .yml file.
API Key Error: Verify the ANTHROPIC_API_KEY secret in your repository settings.
Action fails: Check the action's logs in the "Actions" tab on GitHub for clues.
Our Assistant at Work
With everything set up, the final result is a clear, detailed comment posted directly on your Pull Request.
Collaborate and Fine-Tune the AI Feedback Loop
Teach your team how to interpret AI feedback. Not every comment needs action; treat it as a first draft.
Most tools don’t support active learning yet, but you can simulate it by refining prompts and rules based on what your team accepts or rejects.
Monitor Results and Optimize
Look at:
Time-to-merge
Number of issues caught by AI vs. humans
Developer feedback
Some tools include built-in metrics. For custom setups, you can send logs to Datadog or Grafana.
Your Next Move
The goal here is to optimize PR review time with AI - not to build a code reviewer that replaces people, but to speed up the parts that don't need one.
You’ve now got a system that runs first-pass checks automatically. It catches common issues early and keeps PRs moving without wasting reviewer time on the basics.
What you can do next:
Refine the prompt to match your team’s review style.
Switch models depending on speed, cost, or repo complexity.
Add lightweight automations - label PRs, notify reviewers, or link related tickets.
You're not removing humans from the loop - you're making the loop faster, cleaner, and more focused.
If you're building, exploring, or just thinking through AI-assisted PR reviews, you can also connect with our experts to get a second opinion, technical support, or practical guidance - whatever's most useful for where you are now.