
Quality Reviews with Auto QA

Auto QA helps QA teams review more conversations by automatically scoring each one against a QA scorecard, which is organized into sections and questions.

Goal

Use Auto QA to:

  • find low-scoring conversations quickly,
  • understand which questions are failing most often,
  • coach teams using consistent criteria.

Step-by-step workflow

1) Start with the QA dashboard (if available)

  • Set a time range (last 7–30 days).
  • Identify:
      • the average QA score trend,
      • the distribution of scores,
      • any spikes in failures.
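
If you want to sanity-check these numbers outside the dashboard, a rough summary can be computed from exported scores. A minimal sketch in Python, assuming conversations can be exported to a CSV with hypothetical conversation_id, date, and qa_score columns (the export and column names are assumptions, not a documented format):

```python
# Minimal sketch: summarising exported QA scores outside the dashboard.
# The CSV export and its columns (conversation_id, date, qa_score) are
# assumptions for illustration, not a documented export format.
import csv
from collections import defaultdict
from datetime import date, datetime, timedelta

def summarise_scores(csv_path: str, days: int = 30) -> None:
    cutoff = date.today() - timedelta(days=days)
    daily = defaultdict(list)                     # day -> list of QA scores
    buckets = {"<80": 0, "80-89": 0, "90+": 0}    # simple score distribution

    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            day = datetime.strptime(row["date"], "%Y-%m-%d").date()
            if day < cutoff:
                continue
            score = float(row["qa_score"])
            daily[day].append(score)
            if score < 80:
                buckets["<80"] += 1
            elif score < 90:
                buckets["80-89"] += 1
            else:
                buckets["90+"] += 1

    # Average QA score per day, in date order, to spot trends and spikes.
    for day in sorted(daily):
        print(day, round(sum(daily[day]) / len(daily[day]), 1))
    print("Score distribution:", buckets)
```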

2) Drill down to low-scoring conversations

  • Click the low-score bucket (for example “QA < 80”).
  • Review a sample set of conversations.
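
One way to keep the manual review workload bounded is to script the sampling step. A minimal sketch, reusing the hypothetical CSV export above and the same 80-point threshold as the bucket example:

```python
# Minimal sketch: pull a bounded, unbiased review sample of low scorers.
# Reuses the hypothetical CSV export from the previous sketch.
import csv
import random

def low_score_sample(csv_path: str, threshold: float = 80.0,
                     sample_size: int = 10) -> list[dict]:
    with open(csv_path, newline="") as f:
        low = [row for row in csv.DictReader(f)
               if float(row["qa_score"]) < threshold]
    # random.sample keeps the review set small without cherry-picking.
    return random.sample(low, min(sample_size, len(low)))

# Example: up to ten conversations scoring below 80 from the last export.
# for row in low_score_sample("qa_scores.csv"):
#     print(row["conversation_id"], row["qa_score"])
```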

3) Review question-level details

In each conversation:

  • review the overall QA score,
  • review section/question outcomes (pass/fail/score),
  • read the explanations for why a question was scored that way (if shown),
  • validate the results against the transcript/thread.

4) Identify systemic issues

Look for recurring failures such as:

  • missing verification steps,
  • missing disclosures,
  • improper closing,
  • policy or compliance misses.
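
Tallying question-level failures across a reviewed sample makes recurring issues easy to spot. A minimal sketch, assuming failed questions from each reviewed conversation have been collected into a simple list of dicts (this working format is an assumption, not a product export):

```python
# Minimal sketch: tally which scorecard questions fail most often across a
# reviewed sample. The list-of-dicts shape below is an assumed working format.
from collections import Counter

def top_failures(results: list[dict], top_n: int = 5) -> list[tuple[str, int]]:
    counts = Counter()
    for result in results:
        counts.update(result.get("failed_questions", []))
    return counts.most_common(top_n)

# Illustrative input and output:
sample = [
    {"conversation_id": "c1", "failed_questions": ["Identity verified", "Proper closing"]},
    {"conversation_id": "c2", "failed_questions": ["Identity verified"]},
]
print(top_failures(sample))  # [('Identity verified', 2), ('Proper closing', 1)]
```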

5) Coach and track improvements

  • share examples with team leads,
  • update coaching materials,
  • watch trend changes over time.

Auto QA results view

Auto QA results are displayed on the QA tab in conversation details, showing:

  • Overall QA score
  • Section-by-section breakdown
  • Question-level pass/fail results with explanations
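
For offline analysis or note-taking, the same information can be modelled as a small data structure. A minimal sketch using Python dataclasses; the field names are illustrative assumptions, not the product's schema:

```python
# Minimal sketch: the information on the QA tab modelled as plain dataclasses
# for note-taking or offline analysis. Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class QuestionResult:
    question: str
    passed: bool
    explanation: str = ""          # why the question was scored this way, if shown

@dataclass
class SectionResult:
    name: str
    questions: list[QuestionResult] = field(default_factory=list)

@dataclass
class AutoQAResult:
    conversation_id: str
    overall_score: float           # overall QA score, e.g. on a 0-100 scale
    sections: list[SectionResult] = field(default_factory=list)
```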

Figure: Auto QA report showing scorecard results alongside the conversation transcript.

Providing feedback

If you disagree with an Auto QA result, you can provide feedback:

Figure: Feedback option to flag incorrect Auto QA results for review.