Evaluations

Analyze how users interact with Baz code review agents

The Evaluations page helps you understand how your team engages with Baz-generated review comments. It provides a structured view of user responses to reviewer agent feedback, allowing you to assess which reviewers are useful, where confusion arises, and how review quality can be improved.

After deploying an agent, you can see how human developers interacted with its recommendations.

Overview

Each row in the Evaluations table represents a single user interaction with a Baz reviewer comment. The following columns are available:

  • User Name: The Baz or GitHub user who responded to the review comment.

  • User Interaction: The actual reply or reaction given by the user to the automated review comment.

  • Baz Reviewer Comment: The comment generated by the reviewer agent.

  • Reviewer Title: The category or type of reviewer that produced the comment (e.g., "Logical Bugs", "Frontend Reviewer").

  • Updated: Timestamp of when the interaction occurred.
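As a rough sketch, each row in the table could be modeled as a record like the following. The field names here are illustrative only, not Baz's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Evaluation:
    """One user interaction with a Baz reviewer comment (illustrative schema)."""
    user_name: str         # Baz or GitHub user who responded
    user_interaction: str  # the reply or reaction given by the user
    reviewer_comment: str  # the comment generated by the reviewer agent
    reviewer_title: str    # e.g. "Logical Bugs", "Frontend Reviewer"
    updated: datetime      # when the interaction occurred
```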

Using the Filters

You can narrow down the evaluation data using the filters at the top of the page:

  • Reviewer title: Filter by the category of reviewer agent.

  • Updated: Choose a timeframe such as “last week” or “last 24 hours” to review recent interactions.

  • Outcome: Filter by whether the user agreed, disagreed, asked a follow-up question, or took another action.

  • User: Focus on interactions from a specific team member.
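Conceptually, combining these filters narrows the table to rows matching every selected criterion. A minimal sketch of that behavior, assuming rows are plain records with the illustrative field names used above (this is not a Baz API):

```python
from datetime import datetime, timedelta

def filter_evaluations(rows, reviewer_title=None, user=None, since=None):
    """Keep only rows matching every given filter; None means 'no filter'."""
    result = []
    for row in rows:
        if reviewer_title is not None and row["reviewer_title"] != reviewer_title:
            continue
        if user is not None and row["user_name"] != user:
            continue
        if since is not None and row["updated"] < since:
            continue
        result.append(row)
    return result
```

For example, a "last week" timeframe would correspond to passing `since=datetime.now() - timedelta(days=7)` together with any reviewer or user filters.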
