Common Issues and Fixes
This chapter provides quick diagnosis and resolution steps for the most frequent platform-operator incidents.
For deeper, step-by-step procedures, see Troubleshooting → Runbooks.
1) Conversations ingest but do not appear in the UI
Likely causes
- ingestion connector failure or expired auth
- metadata missing tenant ID or conversation ID
- indexing lag / search index failure

Quick checks
- ingestion error logs
- sample payload validation (tenant_id, conversation_id present)
- search/index health

Quick fixes
- rotate credentials
- fix mapping; replay failed messages
- restart the indexing component (if applicable)
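The sample-payload check above can be scripted. A minimal sketch, assuming ingested payloads are JSON objects with top-level `tenant_id` and `conversation_id` fields (adjust the field names to your connector's actual schema):

```python
# Minimal ingest-payload validation sketch. Field names are assumptions;
# match them to your connector's real schema.
import json

REQUIRED_FIELDS = ("tenant_id", "conversation_id")

def validate_payload(raw: str) -> list[str]:
    """Return a list of problems found in one ingested payload."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    for field in REQUIRED_FIELDS:
        if payload.get(field) in (None, ""):
            problems.append(f"missing or empty {field}")
    return problems

# Example: a payload missing its tenant ID
print(validate_payload('{"conversation_id": "c-123"}'))
# -> ['missing or empty tenant_id']
```

Running this over a sample of recent payloads quickly shows whether silent drops are a metadata problem rather than an indexing one.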
2) Voice calls have no transcripts
Likely causes
- transcription engine misconfigured
- provider quota/rate limiting
- audio format unsupported or corrupted
- transcription job backlog

Quick checks
- transcription backlog and failure rate
- provider dashboard (quota/limits)
- test audio sample transcription

Quick fixes
- scale transcription workers
- adjust provider quotas or switch engine
- retry failed transcription jobs
- validate supported audio formats
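For the "validate supported audio formats" step, a quick sanity check can rule out corrupt or unsupported files before blaming the engine. A sketch assuming PCM WAV input; the accepted sample rates below are placeholders, so substitute your transcription engine's documented list:

```python
# Quick WAV sanity check. ACCEPTED_RATES is an illustrative assumption --
# use the sample rates your transcription engine actually documents.
import wave

ACCEPTED_RATES = {8000, 16000, 44100, 48000}

def check_wav(path: str) -> list[str]:
    """Return a list of problems found in one audio file."""
    problems = []
    try:
        with wave.open(path, "rb") as wf:
            rate = wf.getframerate()
            if rate not in ACCEPTED_RATES:
                problems.append(f"unsupported sample rate {rate}")
            if wf.getnframes() == 0:
                problems.append("file contains no audio frames")
    except (wave.Error, OSError) as exc:
        problems.append(f"could not read as WAV: {exc}")
    return problems
```

A file that fails here will fail in any engine, which separates "bad audio" incidents from quota or configuration incidents.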
3) AI Tasks are enabled but insights do not populate
Likely causes
- AI Assistant job not running or backlog is high
- task filters exclude most conversations (e.g., minimum duration threshold set too high)
- missing transcripts/threads for those conversations
- output mapping points to missing/disabled Custom Fields

Quick checks
- AI Assistant job health / backlog
- task filter hit rate (how many conversations are skipped by the filter)
- verify Custom Fields exist and are AI-enabled

Quick fixes
- start/scale AI Assistant job workers
- relax filters for validation
- fix mapping to correct fields
- reprocess a test conversation
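The filter hit-rate check can be estimated offline against a conversation sample. A sketch using a minimum-duration filter as the illustrative criterion (the `duration_s` field name and threshold are assumptions; substitute your task's real filter logic):

```python
# Sketch: estimate how many conversations a task filter would skip.
# The duration-based filter is an illustrative assumption.
def filter_hit_rate(conversations, min_duration_s=30):
    """Return (matched, skipped) counts for a minimum-duration filter."""
    matched = sum(1 for c in conversations if c["duration_s"] >= min_duration_s)
    return matched, len(conversations) - matched

sample = [{"duration_s": 12}, {"duration_s": 95}, {"duration_s": 40}]
print(filter_hit_rate(sample))  # -> (2, 1)
```

If nearly everything lands in the "skipped" bucket, the filter, not the job, is why insights are empty.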
4) High rate of “invalid JSON” or schema validation failures
Likely causes
- prompt does not enforce JSON-only output
- selected model is weak at structured output
- transcript too long; output truncated
- schema too strict or mismatched with prompt instructions

Quick checks
- inspect raw model outputs (sample failures)
- compare prompt vs schema vs mapping keys
- check token limits and truncation behavior

Quick fixes
- tighten the prompt ("Return JSON only. No extra text.")
- simplify the schema, then tighten gradually
- switch engine/model for JSON reliability
- truncate inputs or use a multi-stage approach
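When triaging these failures, it helps to distinguish "model wrapped valid JSON in prose" from "model produced no JSON at all". A parsing sketch that attempts a salvage pass, then checks keys against the mapping (the `EXPECTED_KEYS` values are hypothetical; use your task's schema keys):

```python
# Triage sketch for "invalid JSON" failures: parse the raw model output,
# salvaging a JSON object wrapped in extra prose. EXPECTED_KEYS is an
# assumption -- replace with your schema's actual keys.
import json
import re

EXPECTED_KEYS = {"summary", "sentiment"}

def parse_model_output(raw: str) -> dict:
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Salvage pass: extract the outermost {...} span (may still raise
    # JSONDecodeError if the span is not valid JSON).
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match:
        return json.loads(match.group(0))
    raise ValueError("no JSON object found in output")

out = parse_model_output('Sure! {"summary": "ok", "sentiment": "positive"}')
print(EXPECTED_KEYS - out.keys())  # -> set()
```

A high salvage rate points at the prompt (missing the JSON-only instruction); a high "no JSON at all" rate points at the model or at truncation.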
5) Cost spike after enabling a task or changing a prompt
Likely causes
- task enabled for all tenants unintentionally
- filters too permissive
- prompt grew significantly
- retry storm due to upstream provider failures

Quick checks
- usage dashboard by tenant/task
- retry rate and engine error rate
- token usage per execution

Quick fixes
- disable the task globally or for impacted tenants
- tighten filters / add minimum duration or minimum text length
- roll back the prompt/version
- throttle backfill/reprocessing
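The "token usage per execution" check can be turned into a rough before/after cost comparison. A sketch with illustrative placeholder prices (the per-token rates below are not real provider rates; use your provider's price sheet):

```python
# Rough cost-per-execution estimator to spot token growth after a prompt
# change. Prices are illustrative placeholders, not real provider rates.
PRICE_PER_1K_INPUT = 0.0005   # assumed USD per 1k input tokens
PRICE_PER_1K_OUTPUT = 0.0015  # assumed USD per 1k output tokens

def execution_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one task execution."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# Comparing average executions before and after a prompt change:
before = execution_cost(1200, 300)
after = execution_cost(4800, 300)
print(round(after / before, 2))  # -> 2.71
```

Multiplying the per-execution delta by executions per day gives a quick sanity check on whether the spike is explained by the prompt change alone or by a retry storm on top of it.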
6) Dashboard shows unexpected distribution shifts
Likely causes
- prompt or model changed (scoring definition drift)
- thresholds misconfigured
- backfill caused mixing of old/new scoring

Quick checks
- task change log (what changed and when)
- overrides inventory (some tenants diverged)
- compare a sample of conversations before/after the change

Quick fixes
- recalibrate thresholds
- use versioned tasks for major changes
- communicate expected shifts to tenant admins
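The before/after sample comparison can be made concrete with simple summary statistics. A stdlib-only sketch (the example scores are hypothetical):

```python
# Sketch: summarize score distributions before and after a task change
# to confirm (or rule out) scoring-definition drift. Stdlib only.
from statistics import mean, pstdev

def distribution_summary(scores):
    """Return n, mean, and population stdev for a list of scores."""
    return {"n": len(scores),
            "mean": round(mean(scores), 2),
            "stdev": round(pstdev(scores), 2)}

before = [3, 4, 4, 5, 3, 4]   # sample scored under the old prompt
after = [2, 3, 2, 3, 2, 3]    # same sample rescored after the change

print(distribution_summary(before))
print(distribution_summary(after))
```

A clear mean shift on the *same* conversations rescored under the new prompt confirms definition drift rather than a genuine change in the underlying conversations.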
Where to inspect task outputs and logs
Job logs
Access detailed execution logs from the AI Assistant job view:
Administration → Speech Analytics → AI Assistant → Jobs → select a job → Logs tab
Figure: Job Logs tab showing execution log entries.
Click on a log entry to see detailed information:
Figure: Detailed log entry showing task execution details.
Processing records
View per-conversation processing status from the Processing records tab on the job view. This shows which conversations were processed and their execution status.
Job controls
Operators can control AI Assistant jobs through the job configuration:
- Enable/Disable the job to start or stop processing
- Schedule settings control when the job runs
- Filtering criteria control which conversations are processed

