Auto QA (Scorecards and Tasks)
Auto QA evaluates conversations against a structured QA Scorecard (sections and questions). Unlike single-value metrics such as CSAT or sentiment, Auto QA typically requires configuring three pieces:
- A Scorecard (your QA rubric)
- An Auto QA AI Task that applies the scorecard to eligible conversations
- Reporting/dashboards to view scores and trends
What Auto QA is best for
- Consistent evaluation across large conversation volumes
- Coaching and training: identify question-level gaps
- Compliance: detect missing required statements
- Operational review: prioritize low-scoring conversations
Step 1 — Define the QA Scorecard
A scorecard usually includes:
- Sections (e.g., Greeting, Discovery, Resolution, Compliance)
- Questions (binary or graded)
- Scoring weights (optional)
- Pass/fail thresholds (optional)
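To make the shape concrete, here is a minimal data-model sketch of such a scorecard. This is purely illustrative: the class and field names are assumptions for this sketch, not MiaRec's actual schema.

```python
# Illustrative scorecard model; names and defaults are assumptions,
# not MiaRec's actual schema.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    answer_type: str = "yes_no"   # e.g., "yes_no", "scale", "dropdown"
    weight: float = 1.0           # optional scoring weight
    allow_na: bool = False        # "Not applicable" option

@dataclass
class Section:
    name: str
    questions: list = field(default_factory=list)

@dataclass
class Scorecard:
    name: str
    sections: list = field(default_factory=list)
    pass_threshold: float = 0.8   # optional pass/fail cutoff (fraction of max score)

scorecard = Scorecard(
    name="Support QA",
    sections=[
        Section("Greeting", [Question("Did the agent greet the caller?")]),
        Section("Compliance", [Question("Did the agent confirm identity?", weight=2.0)]),
    ],
)
print(scorecard.sections[1].questions[0].weight)  # 2.0
```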
Recommended design principles
- Keep question wording objective and evidence-based.
- Avoid compound questions (split into separate items).
- Prefer measurable behaviors (“Did the agent confirm identity?”) over vague ones (“Was the agent professional?”).
- Include a “Not applicable” option if relevant.
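One way to make the "avoid compound questions" principle actionable is a lint-style check over question wording. The heuristic below (flagging "and"/"or") is an assumption for illustration, not a MiaRec feature:

```python
# Illustration only: flag questions that likely bundle two behaviors.
# The "and"/"or" heuristic is an assumed rule of thumb, not a product check.
COMPOUND_MARKERS = (" and ", " or ")

def flag_compound_questions(questions: list[str]) -> list[str]:
    """Return questions that probably ask about more than one behavior."""
    return [q for q in questions if any(m in q.lower() for m in COMPOUND_MARKERS)]

questions = [
    "Did the agent confirm identity?",
    "Did the agent greet the caller and state the company name?",  # compound: split it
]
print(flag_compound_questions(questions))
# ['Did the agent greet the caller and state the company name?']
```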
Step 2 — Configure the scorecard in MiaRec
QA Form configuration
Configure your QA scorecard with sections and questions:
Figure: Auto QA form showing sections and questions in the scorecard.
Question configuration
Each question can be configured with:
- Question text
- Answer type (Yes/No, scale, dropdown)
- Scoring weight
- N/A option if applicable
Figure: Question configuration with answer options and scoring.
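Weights and the N/A option interact at scoring time. A common convention, assumed here rather than confirmed for MiaRec, is to exclude N/A questions from both the earned and the possible score, so they neither help nor hurt:

```python
# Common weighted-scoring convention, assumed for illustration:
# overall = earned_weight / applicable_weight, with N/A excluded entirely.
def overall_score(answers: list[dict]) -> float:
    """answers: [{"weight": float, "result": "pass" | "fail" | "na"}, ...]"""
    applicable = [a for a in answers if a["result"] != "na"]
    if not applicable:
        return 0.0  # or treat the whole conversation as N/A
    earned = sum(a["weight"] for a in applicable if a["result"] == "pass")
    possible = sum(a["weight"] for a in applicable)
    return earned / possible

answers = [
    {"weight": 1.0, "result": "pass"},  # Greeting
    {"weight": 2.0, "result": "fail"},  # Identity confirmation
    {"weight": 1.0, "result": "na"},    # Excluded from both numerator and denominator
]
print(round(overall_score(answers), 2))  # 0.33
```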
Auto QA results
Auto QA results are displayed in:
- Dedicated QA view on the conversation details page (detailed per-question breakdown)
- Dashboards for high-level score trends and distributions
EDITOR NOTE: Auto QA UI path and configuration
We know Auto QA exists and requires a scorecard, but we do not yet have:
- the exact menu path to create/manage scorecards
- which question types are supported
- how results are displayed (field-like vs dedicated QA view)
Ask product/engineering (choose one per question):
1) Where is the Scorecard configured?
- A) Administration > Speech Analytics > Auto QA
- B) Administration > Speech Analytics > AI Assistant > Auto QA
- C) Administration > Customization > QA Scorecards
- D) Other: __
2) Supported question types:
- A) Yes/No (binary)
- B) 1–5 scale
- C) Dropdown (custom labels)
- D) Free-text notes
- E) N/A option
3) How are Auto QA results stored?
- A) Dedicated QA results UI (not Custom Fields)
- B) Stored into Custom Fields
- C) Both (summary fields + detailed QA UI)
Best-guess recommendation:
- Store a summary score (overall) and key flags for dashboards, but keep detailed question breakdown in a dedicated QA view.
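Pending answers to the questions above, a hypothetical sketch of what that split could look like (the shape and field names are guesses, not the product's actual storage format):

```python
# Hypothetical result shape for the recommendation above: a compact summary
# for dashboards/saved searches plus a detailed per-question breakdown.
auto_qa_result = {
    "summary": {                      # candidate for field-like storage
        "overall_score": 0.83,
        "passed": True,
        "compliance_failed": False,   # key flag for saved searches
    },
    "details": [                      # candidate for a dedicated QA view
        {
            "section": "Greeting",
            "question": "Did the agent greet the caller?",
            "answer": "yes",
            "rationale": "Agent opened with a greeting at 00:02.",
        },
    ],
}
```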
Step 3 — Enable or configure the Auto QA AI Task
Auto QA requires an AI Task that:
- uses the scorecard as evaluation criteria
- outputs per-question results and an overall score
- includes a short explanation/rationale per question (recommended)
If Auto QA is provided as a prebuilt task:
- enable it from AI Tasks (Disabled → Enable)
- configure filters (e.g., only certain queues)
If it requires a tenant-specific configuration:
- create the task and connect it to the scorecard (deployment-dependent)
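The queue filter amounts to a simple eligibility predicate over conversation metadata. A sketch, with assumed field names ("queue", "duration_sec") and an assumed minimum-duration policy:

```python
# Illustrative eligibility filter for the Auto QA task. The metadata field
# names and the minimum-duration policy are assumptions for this sketch.
ELIGIBLE_QUEUES = {"support", "billing"}
MIN_DURATION_SEC = 30  # skip very short calls (assumed policy, not a product default)

def is_eligible(conversation: dict) -> bool:
    return (
        conversation.get("queue") in ELIGIBLE_QUEUES
        and conversation.get("duration_sec", 0) >= MIN_DURATION_SEC
    )

conversations = [
    {"id": 1, "queue": "support", "duration_sec": 240},
    {"id": 2, "queue": "sales", "duration_sec": 300},   # wrong queue
    {"id": 3, "queue": "billing", "duration_sec": 10},  # too short
]
print([c["id"] for c in conversations if is_eligible(c)])  # [1]
```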
Step 4 — Test and calibrate
Use Playground or “Save and Test”:
- run Auto QA on a sample set
- compare results to human QA
- refine question wording and scoring thresholds
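When comparing against human QA, a per-question agreement rate is usually enough to spot weak questions; low agreement tends to point at ambiguous wording. A sketch with an assumed data shape:

```python
# Sketch: per-question agreement between Auto QA and human QA answers.
# The input shape is an assumption for illustration.
from collections import defaultdict

def agreement_by_question(pairs: list[dict]) -> dict:
    """pairs: [{"question": str, "auto": str, "human": str}, ...]"""
    hits, totals = defaultdict(int), defaultdict(int)
    for p in pairs:
        totals[p["question"]] += 1
        hits[p["question"]] += p["auto"] == p["human"]
    return {q: hits[q] / totals[q] for q in totals}

sample = [
    {"question": "Did the agent confirm identity?", "auto": "yes", "human": "yes"},
    {"question": "Did the agent confirm identity?", "auto": "no", "human": "yes"},
    {"question": "Did the agent greet the caller?", "auto": "yes", "human": "yes"},
]
print(agreement_by_question(sample))
# {'Did the agent confirm identity?': 0.5, 'Did the agent greet the caller?': 1.0}
```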
Step 5 — Reporting and coaching workflow
Define how results are used:
- Dashboards: overall QA score trends, distribution buckets, low-score drilldowns
- Saved searches: conversations with “Compliance: Failed”
- Coaching: question-level remediation and training
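To prototype the distribution buckets and low-score drilldown before dashboards exist, something like the following works (bucket thresholds are assumptions):

```python
# Sketch: bucket overall scores for a distribution view and pull the
# low-scoring conversations for coaching review. Thresholds are assumed.
scores = {"conv-1": 0.95, "conv-2": 0.62, "conv-3": 0.41, "conv-4": 0.88}

def bucket(score: float) -> str:
    if score >= 0.9:
        return "excellent"
    if score >= 0.7:
        return "good"
    return "needs review"

distribution = {}
for conv_id, score in scores.items():
    distribution.setdefault(bucket(score), []).append(conv_id)
print(distribution)
# {'excellent': ['conv-1'], 'needs review': ['conv-2', 'conv-3'], 'good': ['conv-4']}

low_scoring = sorted((s, c) for c, s in scores.items() if s < 0.7)
print(low_scoring)  # [(0.41, 'conv-3'), (0.62, 'conv-2')]
```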

