AI Tasks and Prompts
An AI Task is the unit of AI analysis in MiaRec.
Think of an AI Task as:
- a specific “job” that analyzes a conversation (e.g., summarize, score CSAT, detect topic)
- a definition of what the AI should extract and how results should be stored
What an AI Task contains (conceptually)
An AI Task typically includes:
- Prompt
  - instructions telling the AI what to do
  - expected output format (often JSON for structured results)
- Output mapping
  - which extracted attributes are stored into which Custom Fields
- Filters (optional)
  - which conversations the task should apply to (e.g., inbound only, duration > 15 seconds)
- AI engine selection
  - which model/provider runs the task (varies by deployment)
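To make these pieces concrete, here is a minimal, purely illustrative sketch of an AI Task written as a Python dictionary. All field names (`output_mapping`, `filters`, `engine`, and the custom field names) are hypothetical and do not reflect MiaRec's actual configuration schema; in practice, administrators set these elements up through the MiaRec admin interface.

```python
# Hypothetical sketch only: field names are illustrative, not MiaRec's schema.
csat_task = {
    "name": "CSAT scoring",
    # Prompt: instructions plus the expected output format
    "prompt": (
        "Rate the customer's satisfaction on a 1-5 scale and return JSON "
        'with a "value" and a short "explanation".'
    ),
    # Output mapping: which extracted attributes land in which Custom Fields
    "output_mapping": {
        "csat.value": "custom_field_csat_score",
        "csat.explanation": "custom_field_csat_explanation",
    },
    # Filters: only run the task where it makes sense
    "filters": {
        "direction": "inbound",
        "min_duration_seconds": 15,
    },
    # AI engine selection: which model/provider runs the task
    "engine": "your-configured-llm-provider",
}
```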
Administrators enable and configure AI Tasks. End users consume the results in dashboards, search, and conversation details.
One task, one purpose (recommended)
AI Tasks are most maintainable when each task does one focused job, such as:
- CSAT scoring
- sentiment classification
- topic extraction
- call reason and outcome categorization
- conversation summarization
In some cases, a single task may output multiple related fields at once (e.g., reason + outcome + resolution notes).
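For example, a combined task of that kind might return several related fields in one structured result. The sketch below is illustrative only; the field names are hypothetical, not a prescribed MiaRec format:

```python
# Illustrative only: one task returning several related fields at once.
combined_result = {
    "call_reason": "Billing dispute",
    "call_outcome": "Escalated to supervisor",
    "resolution_notes": "Customer will be called back within 24 hours.",
}
```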
Output format: values + explanations
A best practice in MiaRec is to extract:
- the value (score/category/date/text)
- an explanation that helps a human reviewer understand why the value was assigned
Example (conceptual):
```json
{
  "csat": {
    "value": 2,
    "explanation": "Customer expressed frustration and the issue was not resolved in the call."
  }
}
```
The value powers dashboards/search; the explanation supports QA and coaching.
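As a conceptual sketch of that split, the snippet below parses such an output and routes the two parts separately. The `store_custom_field` and `attach_review_note` functions are placeholders written for this example, not MiaRec APIs; in MiaRec, this routing is handled by the task's output mapping.

```python
import json

def store_custom_field(name: str, value) -> None:
    # Placeholder: in MiaRec, output mapping stores values in Custom Fields.
    print(f"custom field {name} = {value}")

def attach_review_note(name: str, text: str) -> None:
    # Placeholder: explanations are surfaced to reviewers for QA and coaching.
    print(f"review note {name}: {text}")

raw_output = (
    '{"csat": {"value": 2, "explanation": '
    '"Customer expressed frustration and the issue was not resolved in the call."}}'
)

result = json.loads(raw_output)
store_custom_field("csat_score", result["csat"]["value"])              # dashboards/search
attach_review_note("csat_explanation", result["csat"]["explanation"])  # QA and coaching
```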
Filters: controlling relevance and cost
Filters ensure the task runs only when it makes sense. For example:
- do not score CSAT for calls shorter than 15 seconds
- apply a sales insight only to sales queues
- skip internal/test conversations
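Conceptually, these conditions act as a simple eligibility check over conversation metadata. The predicate below is purely illustrative (the field names are hypothetical, and MiaRec administrators configure equivalent conditions rather than writing code):

```python
# Conceptual illustration only: filters as a predicate over conversation metadata.
def is_eligible(call: dict) -> bool:
    if call.get("duration_seconds", 0) < 15:            # skip very short calls
        return False
    if call.get("direction") != "inbound":              # e.g., inbound only
        return False
    if call.get("is_internal") or call.get("is_test"):  # skip internal/test conversations
        return False
    return True

print(is_eligible({"duration_seconds": 240, "direction": "inbound"}))  # True
print(is_eligible({"duration_seconds": 8, "direction": "inbound"}))    # False
```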
See:
- Filters and Eligibility
Testing and tuning (conceptual)
Before rolling out an insight broadly, administrators typically:
- test it against real conversations
- review both values and explanations
- tune the prompt and filters until outputs match the business definition
The detailed testing workflow (Playground, “save and test”, validation checklists) is described in the Conversation Analytics – Administration Guide.
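Conceptually, the review loop looks like the sketch below: run the task against sample conversations and inspect both the value and the explanation side by side. Everything here is a stand-in; `run_ai_task` is not a real API, and the transcripts are placeholders for real conversations reviewed in the Playground.

```python
# Conceptual only: review values AND explanations together while tuning.
def run_ai_task(transcript: str) -> dict:
    # Stand-in returning the value + explanation shape described above.
    return {"csat": {"value": 3, "explanation": "Neutral tone; issue partially resolved."}}

sample_transcripts = ["<real conversation 1>", "<real conversation 2>"]

for transcript in sample_transcripts:
    csat = run_ai_task(transcript)["csat"]
    print(f"value={csat['value']}  explanation={csat['explanation']}")
    # Tune the prompt and filters until these match the business definition.
```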
Next: when tasks run (eligibility)
To understand why an insight may appear for some conversations but not others: