Validation Checklist

Use this checklist before enabling a new AI Task, changing a prompt, or rolling out a new custom insight.

A. Data prerequisites

  • Conversations have transcripts/threads available
  • Conversation volume is sufficient to validate (recommended: 10–30 samples; see the sampling sketch after this list)
  • Conversations represent your typical and edge cases
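
If you keep an export of conversations (for example, a list of dicts pulled from your helpdesk), a short script can draw the validation sample. This is a minimal sketch only: the field names ("transcript", "tags") and the "edge-case" tag are assumptions, not part of the product.

```python
# Hypothetical sketch: draw a validation sample of 10-30 conversations.
# "conversations" is an exported list of dicts; field names are assumptions.
import random

def pick_validation_sample(conversations, n=20, seed=42):
    """Return up to n conversations that have a usable transcript/thread."""
    usable = [c for c in conversations if c.get("transcript")]
    random.seed(seed)
    sample = random.sample(usable, min(n, len(usable)))
    # Check coverage: the sample should mix typical conversations and edge cases.
    edge_cases = sum(1 for c in sample if "edge-case" in c.get("tags", []))
    print(f"sampled {len(sample)} conversations, {edge_cases} tagged as edge cases")
    return sample
```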

B. Task configuration checks

  • Task is mapped to the intended Custom Field(s)
  • Field types match output types (number/date/dropdown/text)
  • Filters are appropriate (not overly strict, not overly broad)

C. Output format checks

  • Output is valid JSON (no extra text outside JSON; see the validation sketch after this list)
  • Keys match mapping attributes exactly
  • Values are within allowed ranges / labels
  • “Unknown” or “not mentioned” behavior is defined and consistent
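
One way to spot-check these items on a handful of raw outputs is a small script. The sketch below is illustrative only: the attribute keys (csat_score, churn_risk), the allowed labels, and the 1–5 range are placeholders for your own mapping.

```python
# Minimal sketch of the checks in section C, assuming the task's raw output is a
# string and the mapping is {attribute_key: allowed_labels_or_None}. Names are hypothetical.
import json

ALLOWED = {
    "csat_score": None,            # numeric field; range-checked below
    "churn_risk": {"low", "medium", "high", "not mentioned"},
}

def validate_output(raw: str) -> list[str]:
    problems = []
    try:
        data = json.loads(raw)     # fails if there is extra text outside the JSON
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    for key in data:
        if key not in ALLOWED:
            problems.append(f"unexpected key: {key}")   # keys must match mapping attributes exactly
    for key, allowed in ALLOWED.items():
        if key not in data:
            problems.append(f"missing key: {key}")      # includes the defined "unknown" behavior
        elif allowed is not None and data[key] not in allowed:
            problems.append(f"{key}: value {data[key]!r} outside allowed labels")
    score = data.get("csat_score")
    if isinstance(score, (int, float)) and not 1 <= score <= 5:
        problems.append("csat_score outside 1-5 range")
    return problems
```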

D. Explanation quality checks

  • Explanations are concise (1–3 sentences; see the heuristic sketch after this list)
  • Explanations cite evidence from transcript/thread
  • Explanations avoid speculation (“maybe”, “probably”) unless explicitly allowed
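
If you want a quick automated pass before the manual read-through, a rough heuristic like the one below can flag obvious outliers. It is a sketch, not a product feature, and it cannot judge whether the cited evidence is actually in the transcript.

```python
# Hypothetical sketch: flag explanations that look too long or speculative.
# The sentence splitting and the word list are rough heuristics only.
import re

SPECULATIVE = {"maybe", "probably", "perhaps", "might"}

def review_explanation(text: str) -> list[str]:
    flags = []
    sentences = [s for s in re.split(r"[.!?]+\s*", text.strip()) if s]
    if len(sentences) > 3:
        flags.append(f"too long: {len(sentences)} sentences (aim for 1-3)")
    hedges = SPECULATIVE & {w.lower().strip(",.") for w in text.split()}
    if hedges:
        flags.append(f"speculative wording: {', '.join(sorted(hedges))}")
    return flags
```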

E. UI verification

After enabling/adjusting the task:

  • Field values appear in Conversation Details → Analytics
  • Dashboards update correctly (if applicable)
  • Search filters work (e.g., numeric comparisons like CSAT < 3; see the cross-check sketch after this list)
  • Drilldowns work (bucket labels navigate to matching conversations)
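
To sanity-check a numeric filter, it can help to compute the expected count from an export of field values and compare it with what the UI returns. The export format below is hypothetical.

```python
# Hypothetical sketch: cross-check a UI filter such as "CSAT < 3" against exported values.
def expected_filter_count(rows, field="csat_score", threshold=3):
    """Count conversations the UI filter should return; compare with the UI result."""
    return sum(1 for r in rows if isinstance(r.get(field), (int, float)) and r[field] < threshold)

rows = [
    {"id": "c-101", "csat_score": 2},
    {"id": "c-102", "csat_score": 4},
    {"id": "c-103", "csat_score": 1},
]
print(expected_filter_count(rows))  # 2 -> the "CSAT < 3" filter should match 2 conversations
```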

F. Governance checks

  • Stakeholders (QA/CX/Sales) agree on the definitions
  • Prompt version/change notes are documented
  • Rollout plan is defined (pilot group first, then a broader rollout)