
Auto QA (Scorecards and Tasks)

Auto QA evaluates conversations against a structured QA Scorecard (sections and questions). Unlike simple metrics (CSAT, sentiment), Auto QA typically requires configuring three components:

  1. A Scorecard (your QA rubric)
  2. An Auto QA AI Task that applies the scorecard to eligible conversations
  3. Reporting/dashboards to view scores and trends

What Auto QA is best for

  • Consistent evaluation across large conversation volumes
  • Coaching and training: identify question-level gaps
  • Compliance: detect missing required statements
  • Operational review: prioritize low-scoring conversations

Step 1 — Define the QA Scorecard

A scorecard usually includes:

  • Sections (e.g., Greeting, Discovery, Resolution, Compliance)
  • Questions (binary/graded)
  • Scoring weights (optional)
  • Pass/fail thresholds (optional)

When writing questions:

  • Keep question wording objective and evidence-based.
  • Avoid compound questions; split them into separate items.
  • Prefer measurable behaviors (“Did the agent confirm identity?”) over vague ones (“Was the agent professional?”).
  • Include a “Not applicable” option where relevant.
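
For reference, the sketch below shows one way to capture such a rubric as plain data before entering it in MiaRec. The class names, fields, and example questions are illustrative assumptions, not a MiaRec API; the actual scorecard is configured in the UI in Step 2.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str                # objective, evidence-based wording
    answer_type: str         # "yes_no", "scale", or "dropdown"
    weight: float = 1.0      # scoring weight (optional)
    allow_na: bool = False   # offer a "Not applicable" answer where relevant

@dataclass
class Section:
    name: str
    questions: list[Question] = field(default_factory=list)

# Illustrative rubric mirroring the example sections above
scorecard = [
    Section("Greeting", [
        Question("Did the agent state their name and the company name?", "yes_no"),
    ]),
    Section("Compliance", [
        Question("Did the agent confirm the caller's identity?", "yes_no", weight=2.0),
        Question("Did the agent read the required disclosure statement?", "yes_no",
                 weight=2.0, allow_na=True),
    ]),
]
```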

Step 2 — Configure the scorecard in MiaRec

QA Form configuration

Configure your QA scorecard with sections and questions:

Figure: Auto QA form showing sections and questions in the scorecard.

Question configuration

Each question can be configured with:

  • Question text
  • Answer type (Yes/No, scale, dropdown)
  • Scoring weight
  • N/A option if applicable

Figure: Question configuration with answer options and scoring.
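
How answer weights and “N/A” combine into an overall score depends on the deployment, but a common convention is to divide the earned weight by the total applicable weight, excluding N/A questions from both. The sketch below illustrates that convention only; it is not MiaRec’s exact formula.

```python
def overall_score(answers):
    """Percentage score from per-question results.

    `answers` is a list of (weight, result) pairs where result is
    "yes", "no", or "na".  N/A questions are excluded from both the
    numerator and the denominator.
    """
    applicable = [(w, r) for w, r in answers if r != "na"]
    if not applicable:
        return None  # every question was N/A, so nothing to score
    earned = sum(w for w, r in applicable if r == "yes")
    possible = sum(w for w, _ in applicable)
    return round(100 * earned / possible, 1)

# One failed weight-2 question out of 3 applicable weight points -> 33.3
print(overall_score([(1.0, "yes"), (2.0, "no"), (2.0, "na")]))
```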

Auto QA results

Auto QA results are displayed in:

  • Dedicated QA view on the conversation details page (detailed per-question breakdown)
  • Dashboards for high-level score trends and distributions

Step 3 — Enable or configure the Auto QA AI Task

Auto QA requires an AI Task that:

  • Uses the scorecard as evaluation criteria
  • Outputs per-question results and an overall score
  • Includes a short explanation/rationale per question (recommended)

If Auto QA is provided as a prebuilt task:

  • Enable it from AI Tasks (Disabled → Enable)
  • Configure filters (e.g., only certain queues)

If it requires a tenant-specific configuration:

  • Create the task and connect it to the scorecard (deployment-dependent)
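
Whatever the task setup, the output described above (per-question results, an overall score, and a short rationale) usually looks something like the structure below. The field names and values are a hypothetical example, not a documented MiaRec schema.

```python
# Hypothetical shape of an Auto QA result for one conversation
auto_qa_result = {
    "overall_score": 83.3,
    "sections": [
        {
            "name": "Compliance",
            "questions": [
                {
                    "text": "Did the agent confirm the caller's identity?",
                    "answer": "yes",
                    "rationale": "Agent verified account number and date of birth at 00:42.",
                },
                {
                    "text": "Did the agent read the required disclosure statement?",
                    "answer": "no",
                    "rationale": "No disclosure language was found in the transcript.",
                },
            ],
        },
    ],
}
```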

Step 4 — Test and calibrate

Use Playground or “Save and Test” to:

  • Run Auto QA on a sample set
  • Compare results to human QA
  • Refine question wording and scoring thresholds
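
A simple way to quantify calibration is per-question agreement between Auto QA and human reviewers on the same sample; questions with low agreement are the first candidates for rewording. The helper below is a minimal sketch that assumes you have exported both sets of answers; it is not a built-in MiaRec report.

```python
from collections import defaultdict

def agreement_by_question(pairs):
    """Per-question agreement rate between Auto QA and human QA.

    `pairs` is an iterable of (question_text, auto_answer, human_answer)
    tuples collected from the same sample of conversations.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for question, auto, human in pairs:
        totals[question] += 1
        hits[question] += int(auto == human)
    return {q: hits[q] / totals[q] for q in totals}

sample = [
    ("Did the agent confirm the caller's identity?", "yes", "yes"),
    ("Did the agent confirm the caller's identity?", "no", "yes"),
    ("Was the issue resolved on the call?", "yes", "yes"),
]
for question, rate in sorted(agreement_by_question(sample).items(), key=lambda kv: kv[1]):
    print(f"{rate:.0%}  {question}")
```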

Step 5 — Reporting and coaching workflow

Define how results are used:

  • Dashboards: overall QA score trends, distribution buckets, low-score drilldowns
  • Saved searches: conversations with “Compliance: Failed”
  • Coaching: question-level remediation and training
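
For example, a coaching queue can be built by flagging conversations with a low overall score or any failed Compliance question, mirroring a “Compliance: Failed” saved search. The sketch below reuses the hypothetical result structure shown in Step 3 and is not a MiaRec API.

```python
def needs_review(result, threshold=70.0):
    """Flag a conversation for coaching: low overall score or a failed
    Compliance question."""
    if result["overall_score"] < threshold:
        return True
    for section in result["sections"]:
        if section["name"] == "Compliance" and any(
            q["answer"] == "no" for q in section["questions"]
        ):
            return True
    return False

# Two hypothetical results: only the failed-compliance one is flagged
results = [
    {"overall_score": 92.0, "sections": [{"name": "Compliance", "questions": [{"answer": "yes"}]}]},
    {"overall_score": 88.0, "sections": [{"name": "Compliance", "questions": [{"answer": "no"}]}]},
]
review_queue = [r for r in results if needs_review(r)]
print(len(review_queue))  # 1
```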