Custom Reviewers (Beta)

Organization-specific code review agents trained on your past pull request discussions

Custom Reviewers

Baz detects recurring feedback patterns in your team’s pull requests and converts them into reusable AI reviewers. These reviewers reflect your team's actual engineering practices—not generic lint rules—and are applied automatically to relevant parts of the codebase.

What Are Custom Reviewers?

Custom reviewers are AI-driven checks derived from historical code review comments. Baz analyzes your past pull requests, clusters repeated feedback, and presents the resulting patterns as actionable reviewers that can be activated or tested before enforcement.

Each reviewer includes:

  • A reviewer prompt (the rule logic)

  • Code examples from your own pull requests

  • Suggested path scopes using glob patterns

  • Associated metadata (category, repository, contributing authors)

These reviewers surface the real-world habits and standards your team follows and let you codify them without writing a single rule manually.
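
To make this concrete, the sketch below shows the kind of information a custom reviewer bundles together. The interface and field names are invented for illustration; this is not a file or API object you author yourself, since Baz derives and displays this information in its UI.

```typescript
// Illustrative shape of a custom reviewer. Field names are hypothetical;
// Baz generates and surfaces this information in its UI rather than
// exposing a schema you write by hand.
interface CustomReviewer {
  title: string;            // e.g. "Prefer Enums over magic strings"
  category: string;         // e.g. "Code Style", "API", "Algorithms"
  repository: string;       // repository the feedback pattern originated from
  prompt: string;           // the rule logic the AI applies to future changes
  exampleThreads: string[]; // past PR comment threads the rule was derived from
  pathScopes: string[];     // glob patterns such as "baz-scm/bazai/**/*.py"
  discussionCount: number;  // number of past discussions it is based on
}
```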

Workflow Overview

1. Reviewers Automatically Generated

Baz surfaces a list of custom reviewers under Settings > Reviewers > Custom Reviewers.

Each reviewer includes:

  • A title and category (e.g., Code Style, API, Algorithms)

  • The repository it originated from

  • The number of past discussions it’s based on

Click any item to inspect the logic behind it.

2. Inspect Details

You’ll see:

  • The reviewer prompt (used by the AI to evaluate future changes)

  • One or more real PR comment threads that led to its creation

  • A list of suggested paths scoped to the feedback (e.g., baz-scm/bazai/**/*.py)

This gives you the full context before deciding whether to enforce it.
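
For orientation, a reviewer prompt might read something like the example below. The rule, path, and wording are entirely hypothetical; it is shown only to illustrate that a prompt is natural-language rule logic the AI applies to future changes.

```typescript
// Hypothetical reviewer prompt, for illustration only. Real prompts are
// generated by Baz from your team's past review comments.
const examplePrompt = `
When reviewing Python changes under baz-scm/bazai, flag calls to external
services that do not set an explicit timeout, and suggest adding one,
matching the convention agreed in earlier review discussions.
`;
```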

3. Test in Playground (Optional)

Before enforcing a reviewer, you can test and refine it.

  • Select the reviewer and click Test in Playground

  • Choose a real open change request

  • Modify the prompt to fine-tune its behavior

  • Run it to preview AI-generated review comments in real time

This lets you calibrate the reviewer logic and test coverage before rollout.

4. Activate

Once you're satisfied with the scope and logic:

  • Click Activate

  • The reviewer will now run on new changes in the defined paths

  • Findings will appear as inline comments during review, tagged with the reviewer name

Reviewer Playground

The Reviewer Playground lets you validate and iterate on reviewer logic using real, in-context code diffs.

Features include:

  • Editing and previewing prompts live

  • Selecting any open change request as input

  • Defining path scopes for enforcement

  • Seeing reviewer feedback exactly as it will be rendered during review

Use it to test edge cases, tighten or broaden reviewer logic, or debug unwanted findings before activation.

Path Scoping

Each reviewer can be scoped to one or more repositories or subpaths. Baz uses glob syntax (e.g., **/*.ts, src/**) to define where a reviewer should apply.

This allows you to:

  • Target reviewers to language-specific directories (e.g., baz-scm/frontend/**/*)

  • Exclude infrastructure or legacy paths

  • Apply different standards to different teams

Scopes are editable from the UI prior to activation.
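
To make the glob semantics concrete, here is a minimal TypeScript sketch using the `minimatch` package (a common glob matcher). This is not Baz's matching engine, and the scopes and file paths are examples only.

```typescript
// Illustration of how glob-style path scopes decide where a reviewer applies.
// Uses the `minimatch` npm package; Baz's own matching implementation may differ.
import { minimatch } from "minimatch";

const scopes = ["baz-scm/frontend/**/*.ts", "src/**"];

// A file is in scope if it matches at least one of the reviewer's patterns.
function inScope(filePath: string): boolean {
  return scopes.some((pattern) => minimatch(filePath, pattern));
}

console.log(inScope("baz-scm/frontend/api/client.ts")); // true
console.log(inScope("src/utils/dates.ts"));             // true
console.log(inScope("infra/terraform/main.tf"));        // false
```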

Who Can Edit

Only organization admins can activate, deactivate, or edit reviewers. All users can view reviewer prompts and test them in the Playground, but only admins can apply them to production workflows.

Review Execution

When a reviewer is active:

  1. Baz detects whether a file in the change matches the reviewer’s scope

  2. The reviewer prompt is included in the AI review run

  3. The AI evaluates the change and outputs comments tagged with the reviewer label

Developers see this feedback inline and can resolve or respond as part of normal review flow.
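
The sketch below restates those three steps in code form. It is a conceptual illustration only; the types, function names, and tagging format are invented and do not reflect Baz's internal implementation.

```typescript
// Conceptual restatement of the review-execution flow; not Baz's actual code.
import { minimatch } from "minimatch";

interface ActiveReviewer {
  name: string;         // used to tag the comments it produces
  prompt: string;       // rule logic included in the AI review run
  pathScopes: string[]; // glob patterns defining where it applies
}

// Step 1: keep only reviewers whose scope matches a file in the change.
function reviewersForChange(
  reviewers: ActiveReviewer[],
  changedFiles: string[],
): ActiveReviewer[] {
  return reviewers.filter((reviewer) =>
    changedFiles.some((file) =>
      reviewer.pathScopes.some((pattern) => minimatch(file, pattern)),
    ),
  );
}

// Step 2: the matching reviewers' prompts are added to the AI review run.
// Step 3: the AI's findings come back tagged with the reviewer's name and are
// posted as inline comments during review.
```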

Use Cases

  • Standardize framework usage (e.g., “Use dependency injection in NestJS”)

  • Enforce architecture conventions (“Separate test and prod config”)

  • Catch performance antipatterns (“Avoid unbounded async chains”)

  • Ensure stylistic consistency (“Prefer Enums over magic strings”)

These reviewers act as institutional memory, giving your team guardrails without needing to write static rules or onboard everyone manually.
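
As a concrete illustration of the last use case, a reviewer like “Prefer Enums over magic strings” might flag code such as the following. This is a hypothetical TypeScript example, not actual Baz output.

```typescript
// Before: fixed option values passed around as magic strings.
// A custom reviewer derived from past PR feedback could flag this parameter.
function setDeploymentStatus(status: string): void {
  if (status === "in_progress" || status === "failed" || status === "succeeded") {
    // ...
  }
}

// After: an enum is the single source of truth, so the compiler rejects
// misspelled or unknown status values.
enum DeploymentStatus {
  InProgress = "in_progress",
  Failed = "failed",
  Succeeded = "succeeded",
}

function setDeploymentStatusChecked(status: DeploymentStatus): void {
  // ...
}
```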
