title: "How to Use OpenClaw for Code Review"
date: "2026-02-14"
description: "Set up OpenClaw as your AI-powered code review assistant. Learn how to configure review rules, integrate with GitHub PRs, and catch bugs before they ship."
category: "Guide"
author: "OpenClaw Team"
tags: ["code-review", "github", "development"]
readTime: "10 min"
Catching bugs in production is expensive. Catching them in code review is cheap. The problem is that thorough code review takes time — time that engineers don't always have. OpenClaw bridges this gap by acting as an always-available, never-tired AI reviewer that checks every pull request against your team's standards before a human even opens the diff.
This guide walks you through setting up OpenClaw as your AI-powered code review assistant, from connecting your GitHub account to tuning the reviewer's strictness for your team's workflow.
What You'll Set Up
By the end of this guide, you'll have:
- OpenClaw connected to your GitHub repositories
- Review rules configured for style, security, and performance
- Automated PR comments on every pull request
- A CI/CD integration that blocks merges on critical findings
- A workflow for handling false positives cleanly
Prerequisites
Before you begin, make sure you have:
- OpenClaw installed (installation guide)
- A GitHub account with admin access to at least one repository
- An API key for Claude (Anthropic) or another supported LLM
- Basic familiarity with GitHub Actions or another CI/CD platform
- Node.js 18+ for the OpenClaw CLI
Step 1: Connect OpenClaw to GitHub
First, create a GitHub App or use a personal access token to give OpenClaw access to your repositories.
Option A: GitHub App (recommended for teams)
# Create the GitHub App via the OpenClaw CLI
openclaw github app create \
--name "OpenClaw Review Bot" \
--repo-permissions "pull_requests:write,contents:read,issues:write"
# OpenClaw will print an installation URL — open it in your browser
# and install the App on the repositories you want reviewed
After installation, store the credentials in your OpenClaw config:
# ~/.openclaw/config.yaml
integrations:
  github:
    app_id: 123456
    private_key_path: ~/.openclaw/github-app.pem
    installation_id: 789012
Option B: Personal Access Token (simpler for solo developers)
export GITHUB_TOKEN=ghp_your_token_here
openclaw github connect \
--token $GITHUB_TOKEN \
--repo your-org/your-repo
Verify the connection works:
openclaw github status
# Output:
# Connected to GitHub as: OpenClaw Review Bot
# Repositories: your-org/your-repo (active)
Step 2: Configure Review Rules
OpenClaw's review behavior is driven by a `review.yaml` file, kept at `.openclaw/review.yaml` in your repository root (or in your OpenClaw config directory). This is where you define what to look for.
Style Rules
# .openclaw/review.yaml
rules:
  style:
    enabled: true
    severity: warning  # warning, error, or info
    checks:
      - id: naming_conventions
        description: "Variables, functions, and classes must follow team naming conventions"
        languages: [python, typescript, javascript]
      - id: function_length
        description: "Functions longer than 50 lines should be refactored"
        threshold: 50
        severity: info
      - id: comment_quality
        description: "Complex logic must include inline comments"
        require_comments_above_complexity: 10
Security Rules
rules:
  security:
    enabled: true
    severity: error  # Security issues block the merge
    checks:
      - id: hardcoded_secrets
        description: "Detect API keys, tokens, and passwords in source code"
        patterns:
          - "sk-[a-zA-Z0-9]{32,}"  # OpenAI keys
          - "AKIA[0-9A-Z]{16}"  # AWS access key IDs
          - "(?i)password\\s*=\\s*['\"].+['\"]"  # Hardcoded passwords
      - id: sql_injection
        description: "Flag string concatenation in SQL queries"
        languages: [python, javascript, typescript, java]
      - id: open_redirect
        description: "Detect unvalidated redirect targets"
        languages: [python, javascript, typescript]
      - id: dependency_vulnerabilities
        description: "Check for known CVEs in new or updated dependencies"
        files: ["package.json", "requirements.txt", "go.mod", "Cargo.toml"]
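The `patterns` entries above are ordinary regular expressions, so you can sanity-check a new pattern against sample strings before committing it. A minimal sketch in Python, whose `re` syntax accepts these same patterns (note the YAML double-escapes `\\s` become `\s` in a raw string):

```python
import re

# The same three patterns as in review.yaml.
SECRET_PATTERNS = [
    re.compile(r"sk-[a-zA-Z0-9]{32,}"),              # OpenAI-style keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS access key IDs
    re.compile(r"(?i)password\s*=\s*['\"].+['\"]"),  # hardcoded passwords
]

def find_secrets(source: str) -> list[str]:
    """Return every substring in `source` that matches a secret pattern."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(source)]

snippet = 'key = "AKIAABCDEFGHIJKLMNOP"\ndb_password = "hunter2"'
print(find_secrets(snippet))
```

Running a candidate pattern over a handful of known-good and known-bad snippets like this is the fastest way to catch an over-broad regex before it floods your PRs with findings.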
Performance Rules
rules:
  performance:
    enabled: true
    severity: warning
    checks:
      - id: n_plus_one_queries
        description: "Detect potential N+1 database query patterns"
        languages: [python, javascript, typescript]
      - id: unbounded_loops
        description: "Flag loops without clear termination conditions on large collections"
      - id: missing_pagination
        description: "Database queries that fetch all records without LIMIT"
        languages: [python, javascript, typescript]
Step 3: Set Up PR Webhooks
OpenClaw listens for pull request events and posts review comments automatically. Deploy the webhook server and register it with GitHub.
Start the Webhook Server
# Start the OpenClaw webhook listener
openclaw webhook serve \
--port 3001 \
--secret $WEBHOOK_SECRET \
--review-config .openclaw/review.yaml
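The `--secret` value is what authenticates webhook deliveries: GitHub signs each payload with HMAC-SHA256 over the raw request body and sends the result in the `X-Hub-Signature-256` header. If you ever need to debug a delivery yourself, the standard verification looks like this:

```python
import hashlib
import hmac

def verify_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Check a GitHub X-Hub-Signature-256 header against the raw request body."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest does a constant-time comparison, avoiding timing attacks
    return hmac.compare_digest(expected, signature_header)
```

Always verify against the raw bytes of the body, not a re-serialized JSON object, or the digest won't match.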
For production, run this as a persistent service. Here's a systemd unit file:
# /etc/systemd/system/openclaw-webhook.service
[Unit]
Description=OpenClaw PR Review Webhook
After=network.target
[Service]
Type=simple
User=openclaw
WorkingDirectory=/opt/openclaw
ExecStart=/usr/local/bin/openclaw webhook serve --port 3001 --secret ${WEBHOOK_SECRET}
Restart=on-failure
RestartSec=5s
EnvironmentFile=/etc/openclaw/env
[Install]
WantedBy=multi-user.target
Register the Webhook with GitHub
openclaw github webhook register \
--repo your-org/your-repo \
--url https://your-server.com/webhook \
--secret $WEBHOOK_SECRET \
--events pull_request,pull_request_review
Once registered, OpenClaw will post comments on every new PR and on every new push to an open PR:
OpenClaw Review Bot commented on PR #42:
**3 findings** (1 error, 2 warnings)
---
**[ERROR] Hardcoded Secret Detected**
`src/api/client.ts:14`
const API_KEY = "sk-proj-abc123...";
This looks like a live API key. Move it to an environment variable.
Suggested fix: `const API_KEY = process.env.OPENAI_API_KEY;`
---
**[WARNING] Function Length**
`src/utils/processor.ts:88`
`processUserData` is 73 lines. Consider splitting into smaller, testable units.
Step 4: Customize Review Strictness
Different teams have different tolerances. OpenClaw lets you tune strictness globally or per-repository.
# .openclaw/review.yaml
strictness:
  # How many warnings before OpenClaw requests changes (rather than just commenting)?
  warn_threshold: 5
  # How many errors before OpenClaw marks the PR as "changes requested"?
  error_threshold: 1
  # Should OpenClaw approve PRs that have zero findings?
  auto_approve_clean: true

# Paths to skip entirely (generated code, vendored files, etc.)
ignore_paths:
  - "dist/**"
  - "vendor/**"
  - "*.generated.ts"
  - "migrations/**"

# File size limit — skip reviewing files larger than this
max_file_size_kb: 500
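How the thresholds combine is easiest to see as a small decision function. This is an illustration of the semantics described in the config comments, not OpenClaw's actual implementation:

```python
def review_verdict(errors: int, warnings: int,
                   warn_threshold: int = 5, error_threshold: int = 1,
                   auto_approve_clean: bool = True) -> str:
    """Sketch: map finding counts to a PR review verdict per the strictness config."""
    if errors >= error_threshold or warnings >= warn_threshold:
        return "changes_requested"
    if errors == 0 and warnings == 0 and auto_approve_clean:
        return "approved"
    return "commented"  # findings exist but fall below both thresholds
```

So with the defaults above, a single error or five warnings flips the review to "changes requested", while a clean diff is auto-approved.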
You can also apply different rules to different branches:
branch_overrides:
  main:
    security:
      severity: error  # Always block security issues on main
    style:
      enabled: false  # Don't nit on style for hotfixes to main
  "feature/*":
    style:
      enabled: true
    performance:
      severity: info  # Just inform, don't block feature branches
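Branch patterns like "feature/*" are shell-style globs. A hypothetical helper shows how a branch name would resolve to its override block; Python's `fnmatch` uses the same glob syntax:

```python
from fnmatch import fnmatch

def override_for(branch: str, overrides: dict) -> dict:
    """Return the first override block whose glob pattern matches the branch name."""
    for pattern, rules in overrides.items():
        if fnmatch(branch, pattern):
            return rules
    return {}  # no override: the base review.yaml rules apply

overrides = {
    "main": {"style": {"enabled": False}},
    "feature/*": {"performance": {"severity": "info"}},
}
```

For example, `override_for("feature/login", overrides)` picks up the `feature/*` block, while an unmatched branch like `hotfix/urgent` falls through to the base rules.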
Step 5: Handle False Positives
No automated reviewer is perfect. OpenClaw provides structured ways to suppress false positives without disabling rules globally.
Inline Suppression
# openclaw: ignore hardcoded_secrets - this is a test fixture, not a real key
TEST_API_KEY = "sk-test-notarealkey12345678901234567890"

def test_api_client():
    client = APIClient(api_key=TEST_API_KEY)
    ...
File-Level Suppression
# .openclaw/suppressions.yaml
suppressions:
  - rule: hardcoded_secrets
    path: "tests/fixtures/**"
    reason: "Test fixtures use placeholder keys"
    approved_by: "alice@yourteam.com"
    expires: "2026-12-31"
  - rule: function_length
    path: "src/legacy/**"
    reason: "Legacy code — refactoring tracked in TICKET-1234"
    approved_by: "bob@yourteam.com"
Suppressions with expiry dates keep technical debt visible. OpenClaw will flag expired suppressions in future reviews.
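The expiry check itself is a simple date comparison. A sketch of the logic, assuming ISO-formatted `expires` dates as in the example above:

```python
from datetime import date
from typing import Optional

def is_expired(suppression: dict, today: Optional[date] = None) -> bool:
    """An expired suppression should be flagged in review, not silently honored."""
    expires = suppression.get("expires")
    if expires is None:
        return False  # no expiry set: the suppression stays active indefinitely
    return (today or date.today()) > date.fromisoformat(expires)
```

Suppressions without an `expires` field (like the `src/legacy/**` one above) never expire, which is why pairing them with a tracking ticket in `reason` matters.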
Step 6: Integrate with CI/CD
Wire OpenClaw into your GitHub Actions pipeline so it runs on every PR and optionally blocks merges.
# .github/workflows/openclaw-review.yml
name: AI Code Review

on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  openclaw-review:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      contents: read
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history needed for diff analysis
      - name: Run OpenClaw Review
        uses: openclaw/review-action@v2
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
          review-config: .openclaw/review.yaml
          fail-on-error: true  # Fail the CI check on ERROR findings
          fail-on-warning: false  # Allow warnings to pass CI
With `fail-on-error: true`, any ERROR-level finding (which includes all security findings, as configured above) will cause the CI check to fail, which blocks merging if you have branch protection enabled.
Step 7: Review the Review Quality
Periodically audit how well OpenClaw is performing. Use the built-in reporting:
# Show review statistics for the past 30 days
openclaw stats --repo your-org/your-repo --days 30
# Output:
# PRs reviewed: 142
# Total findings: 389
# False positive rate: 8.2% (marked dismissed by engineers)
# Bugs caught (confirmed): 23
# Security issues: 7
# Average review time: 34 seconds
If your false positive rate climbs above 15%, it's time to tune your rules or add suppressions. If it drops below 5%, you might be able to increase strictness.
# Export detailed finding history to CSV for deeper analysis
openclaw export findings \
--repo your-org/your-repo \
--format csv \
--output review-audit.csv \
--days 90
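Once you have the CSV, computing the false positive rate yourself is a short pass over the rows. The column names here (`status`, with a `dismissed` value) are assumptions about the export format; check the header row of your own `review-audit.csv`:

```python
import csv
import io

def false_positive_rate(csv_text: str) -> float:
    """Fraction of findings engineers dismissed; assumes a 'status' column."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    if not rows:
        return 0.0
    dismissed = sum(1 for row in rows if row["status"] == "dismissed")
    return dismissed / len(rows)

sample = "rule,status\nhardcoded_secrets,confirmed\nfunction_length,dismissed\n"
print(false_positive_rate(sample))
```

Slicing the same计算 per rule (group rows by the `rule` column first) tells you which specific checks to demote or suppress.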
Common Issues
OpenClaw posts no comments on my PR
Check that the GitHub App is installed on the repository, not just the organization. Run `openclaw github status --repo your-org/your-repo` to verify.
Too many false positives on security rules
Start with `severity: warning` for security rules and promote specific checks to `severity: error` once you've validated their accuracy on your codebase.
Review takes more than 2 minutes
Large PRs (500+ lines changed) slow things down. Set `max_diff_lines: 1000` in your config to have OpenClaw review only the most critical parts of very large diffs.
Next Steps
You now have a production-grade AI code reviewer running on every pull request. From here:
- Build a multi-agent system to automate the fixes OpenClaw identifies
- Security hardening guide to protect the OpenClaw deployment itself
- Scheduling AI tasks with cron jobs for nightly security scans across your entire codebase
AI code review doesn't replace human review — it makes human review better by handling the mechanical checks so your engineers can focus on architecture, logic, and design.