Conversation Analytics Lifecycle
This page explains the end-to-end flow from raw conversations to dashboards and searchable insights.
High-level flow
At a high level, Conversation Analytics follows this pipeline:
Conversation content
├─ Voice calls → Transcription → Transcript
└─ Text channels (chat/email/tickets) → Normalized thread text
↓
AI Tasks (enabled)
↓
Custom Fields (stored results)
↓
Conversation Details · Dashboards · Search
Step 1: Collect conversation content
A conversation can include:
- a voice call recording (audio)
- a chat conversation
- an email thread
- a ticket with messages/notes
MiaRec associates the content with conversation metadata (agent, queue/team, timestamps, direction, etc.) so you can filter and segment analysis later.
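As an illustration of what this step produces, the sketch below models a conversation record as content plus filterable metadata. The field names (`channel`, `queue`, `direction`, and so on) are hypothetical, chosen for this example rather than taken from MiaRec's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical model of a collected conversation: the content (audio or
# messages) plus the metadata used later for filtering and segmentation.
# Field names are illustrative, not MiaRec's actual schema.
@dataclass
class Conversation:
    conversation_id: str
    channel: str                    # "voice", "chat", "email", or "ticket"
    agent: str
    queue: str
    direction: str                  # "inbound" or "outbound"
    started_at: datetime
    duration_seconds: Optional[int] = None             # voice calls only
    audio_url: Optional[str] = None                     # call recording, if any
    messages: list[str] = field(default_factory=list)   # text-channel content

# Example: an inbound support call with a recording attached.
call = Conversation(
    conversation_id="c-1001",
    channel="voice",
    agent="agent-42",
    queue="support",
    direction="inbound",
    started_at=datetime(2024, 5, 1, 9, 30),
    duration_seconds=312,
    audio_url="https://example.invalid/recordings/c-1001.wav",
)
```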
Step 2: Convert to analyzable text
For voice calls, MiaRec uses transcription to produce a text transcript.
For text channels, MiaRec analyzes the conversation thread directly (when enabled).
If there is not enough text content, some insights may be skipped or returned as “unknown / insufficient evidence.”
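As a minimal sketch of this step (the `transcribe` helper and the length threshold are hypothetical, not a MiaRec API), voice content is transcribed, text-channel messages are joined into a single thread, and conversations with too little text are flagged instead of analyzed:

```python
from typing import Optional

MIN_CHARS = 40  # illustrative threshold for "enough text to analyze"

def transcribe(audio_url: str) -> str:
    """Placeholder for a speech-to-text call; not a real MiaRec API."""
    return "...transcript text produced from the recording..."

def to_analyzable_text(channel: str,
                       audio_url: Optional[str],
                       messages: list[str]) -> Optional[str]:
    # Voice calls go through transcription; text channels are used directly.
    if channel == "voice" and audio_url:
        text = transcribe(audio_url)
    else:
        text = "\n".join(messages)

    # Too little content: downstream insights are skipped or marked
    # "unknown / insufficient evidence" rather than guessed.
    if len(text.strip()) < MIN_CHARS:
        return None
    return text

# A short chat thread that clears the threshold.
print(to_analyzable_text("chat", None,
                         ["Hi, I was double-billed last month.",
                          "Sorry about that - let me check the invoice."]))
```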
Step 3: Run AI Tasks
An AI Task is a purpose-specific analysis definition that:
- reads the transcript/thread text (and optionally metadata)
- applies AI instructions (a prompt)
- produces structured outputs
AI Tasks can be enabled/disabled and can be limited to certain conversations using filters (e.g., inbound calls longer than 15 seconds).
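For illustration only, an AI Task can be thought of as a prompt paired with a declared output shape and an eligibility filter. The structure and names below (`AITask`, `output_fields`, `is_eligible`) are hypothetical, not MiaRec configuration keys:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical representation of an AI Task: the prompt it applies, the
# structured outputs it should produce, and a filter that decides which
# conversations are eligible. Names are illustrative only.
@dataclass
class AITask:
    name: str
    enabled: bool
    prompt: str
    output_fields: list[str]
    is_eligible: Callable[[dict], bool]   # receives conversation metadata

csat_task = AITask(
    name="csat_estimate",
    enabled=True,
    prompt="Estimate the customer's satisfaction on a 1-5 scale and explain why.",
    output_fields=["csat", "csat_reason"],
    # Example filter from the text: inbound calls longer than 15 seconds.
    is_eligible=lambda meta: (meta["direction"] == "inbound"
                              and meta.get("duration_seconds", 0) > 15),
)

metadata = {"direction": "inbound", "duration_seconds": 312}
if csat_task.enabled and csat_task.is_eligible(metadata):
    print(f"Run task '{csat_task.name}' on this conversation")
```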
See:
- AI Tasks and Prompts
- Filters and Eligibility
Step 4: Store results in Custom Fields
MiaRec stores insight outputs in Custom Fields, such as:
- CSAT (number)
- Sentiment (category)
- Top issue (multi-select)
- Next action (text)
- Reservation date (date)
Because results are stored in structured fields, they can be used consistently across the product (see the sketch below):
- dashboards and trend charts
- drilldowns via clickable buckets
- advanced search and filtering
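To make the benefit concrete, here is a hypothetical set of stored custom-field values and two queries of the kind a dashboard drilldown or search might run against them (field names and values are made up for this example):

```python
# Hypothetical custom-field results for a few conversations. Because each
# insight is a typed field, the same stored data can drive dashboards,
# drilldowns, and search without re-running any AI analysis.
results = [
    {"conversation_id": "c-1001", "csat": 2, "sentiment": "negative",
     "top_issue": ["billing"], "next_action": "escalate to tier 2"},
    {"conversation_id": "c-1002", "csat": 5, "sentiment": "positive",
     "top_issue": ["onboarding"], "next_action": "none"},
    {"conversation_id": "c-1003", "csat": 3, "sentiment": "neutral",
     "top_issue": ["billing", "pricing"], "next_action": "send follow-up email"},
]

# Drilldown-style query: low-CSAT conversations that mention billing.
low_csat_billing = [r for r in results
                    if r["csat"] <= 3 and "billing" in r["top_issue"]]

# Dashboard-style aggregate: average CSAT across the segment.
avg_csat = sum(r["csat"] for r in results) / len(results)

print(len(low_csat_billing), "low-CSAT billing conversations; avg CSAT", round(avg_csat, 2))
```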
See:
- Custom Fields and Metrics
Step 5: Use insights in day-to-day workflows
Once stored, insights become usable signals for different teams:
- CX leaders: track CSAT and top-issue trends
- supervisors: drill into low scores and coach with evidence
- QA teams: monitor Auto QA distributions
- sales: track objections and competitor mentions
For examples, see:
- Use Cases