Prompt and Schema Standards
This chapter defines recommended standards for writing prompts and output schemas for AI Tasks. These standards help ensure:
- stable, parseable outputs (especially for dashboards/search),
- predictable costs,
- easier troubleshooting,
- consistency across tenants.
Core principles
- Use JSON for structured outputs
  - Scores, categories, entities, dates → JSON.
- Specify a strict output contract
  - Exact keys, allowed values, and types.
- Return value + explanation
  - Structured value for dashboards/search, plus a short human explanation for trust and review.
- Handle “unknown / not mentioned” explicitly
  - Avoid forcing the model to guess (see the example after this list).
- Keep prompts deterministic
  - Avoid open-ended phrasing when you need a stable metric.
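For example, a task that finds no usable signal should return an explicit marker rather than a guessed label (a minimal sketch; the field name is illustrative):

```json
{
  "sentiment": {
    "value": "unknown",
    "explanation": "The transcript contains no statements indicating satisfaction or dissatisfaction."
  }
}
```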
Prompt structure (recommended)
If the UI supports separate fields:
- Task instructions: role + high-level rules (short)
- Task inputs: transcript/thread + detailed scoring rules + response format
If the UI has one prompt field, combine them in the same order.
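For illustration, a combined prompt might follow this skeleton (the wording is a sketch, not required phrasing; adapt the rules and response format to the task):

```
[Task instructions]
You are a contact-center QA analyst. Follow the rules below exactly.
Output MUST be valid JSON matching the schema exactly.

[Task inputs]
Transcript:
${transcript}

Scoring rules:
- Rate CSAT from 1 (very dissatisfied) to 5 (very satisfied).
- Use only the provided transcript; do not assume external facts.

Response format:
{"csat": {"value": <1-5>, "explanation": "<short justification>"}}
```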
Variables and inputs
Common input variable (voice):
- ${transcript}
For omni-channel (text):
- ${thread} or ${messages} (confirm actual variable names)
Always instruct the model to use only the provided text; do not assume external facts.
Standard output patterns
Pattern A: Numeric score + explanation (CSAT, NPS-like)
```json
{
  "csat": {
    "value": 1,
    "explanation": "Short justification grounded in transcript evidence."
  }
}
```
JSON schema (example)

```json
{
  "type": "object",
  "required": ["csat"],
  "properties": {
    "csat": {
      "type": "object",
      "required": ["value", "explanation"],
      "properties": {
        "value": { "type": "integer", "minimum": 1, "maximum": 5 },
        "explanation": { "type": "string", "minLength": 1 }
      },
      "additionalProperties": false
    }
  },
  "additionalProperties": false
}
```
Pattern B: Single-select classification + explanation (Sentiment, Reason)
```json
{
  "sentiment": {
    "value": "negative",
    "explanation": "Customer expresses frustration about unresolved billing issue."
  }
}
```
Prompt rule snippet
- Allowed values: positive | neutral | negative | unknown
- Use unknown if insufficient evidence.
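A matching schema can constrain the value with an enum, following the same shape as the Pattern A example (a sketch assuming the allowed values above):

```json
{
  "type": "object",
  "required": ["sentiment"],
  "properties": {
    "sentiment": {
      "type": "object",
      "required": ["value", "explanation"],
      "properties": {
        "value": { "enum": ["positive", "neutral", "negative", "unknown"] },
        "explanation": { "type": "string", "minLength": 1 }
      },
      "additionalProperties": false
    }
  },
  "additionalProperties": false
}
```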
Pattern C: Multi-output related fields
Use when outputs are tightly related (e.g., reason + outcome + explanation):
```json
{
  "reason": { "value": "Billing", "explanation": "..." },
  "outcome": { "value": "Resolved", "explanation": "..." }
}
```
Recommendation: Keep keys short, stable, and mapped to Custom Fields.
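A corresponding schema can define the value + explanation shape once and reuse it per field, which keeps the contract uniform as related fields are added (a sketch; in practice, constrain each value with an enum of your standardized labels):

```json
{
  "type": "object",
  "required": ["reason", "outcome"],
  "properties": {
    "reason": { "$ref": "#/$defs/scored_field" },
    "outcome": { "$ref": "#/$defs/scored_field" }
  },
  "additionalProperties": false,
  "$defs": {
    "scored_field": {
      "type": "object",
      "required": ["value", "explanation"],
      "properties": {
        "value": { "type": "string", "minLength": 1 },
        "explanation": { "type": "string", "minLength": 1 }
      },
      "additionalProperties": false
    }
  }
}
```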
Pattern D: Entity extraction (Competitor, Next Action)
Return normalized values and include a short explanation:

```json
{
  "competitor_mentioned": {
    "value": true,
    "explanation": "Customer compares pricing to AcmeCo at 04:12."
  },
  "competitor_name": {
    "value": "AcmeCo",
    "explanation": "Mentioned explicitly by the customer."
  }
}
```
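A matching schema sketch; modeling competitor_name as a nullable string (null when no competitor is mentioned) is one assumption that honors the principle of not forcing the model to guess:

```json
{
  "type": "object",
  "required": ["competitor_mentioned", "competitor_name"],
  "properties": {
    "competitor_mentioned": {
      "type": "object",
      "required": ["value", "explanation"],
      "properties": {
        "value": { "type": "boolean" },
        "explanation": { "type": "string", "minLength": 1 }
      },
      "additionalProperties": false
    },
    "competitor_name": {
      "type": "object",
      "required": ["value", "explanation"],
      "properties": {
        "value": { "type": ["string", "null"] },
        "explanation": { "type": "string", "minLength": 1 }
      },
      "additionalProperties": false
    }
  },
  "additionalProperties": false
}
```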
“JSON only” enforcement (recommended wording)
Include a strict instruction such as:
Output MUST be valid JSON matching the schema exactly.
Do not include markdown, code fences, or any text outside the JSON.
Also avoid including example JSON inside code fences if your model tends to copy fences.
Handling long transcripts
If transcripts are long, consider:
- truncating input (last N minutes) for certain tasks
- multi-stage workflows (summarize → score), only if the platform supports task chaining
- adding an instruction: “If the transcript is too long, focus on the final resolution segment.”
Testing standards
For every global task:
- test on at least 10 transcripts:
  - 2 positive
  - 2 negative
  - 2 neutral/ambiguous
  - 2 short calls
  - 2 edge cases (transfers, escalations, silence)
- track (a sample record follows this list):
  - JSON validity rate
  - token usage per execution
  - distribution stability
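One lightweight way to record these metrics is a per-task summary like the following (a hypothetical record with illustrative numbers; the field names are assumptions, not a platform feature):

```json
{
  "task": "csat_scoring",
  "transcripts_tested": 10,
  "json_valid": 9,
  "json_validity_rate": 0.9,
  "avg_tokens_per_execution": 1850,
  "value_distribution": { "1": 1, "2": 2, "3": 3, "4": 2, "5": 2 }
}
```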
Implementation notes
MiaRec AI Tasks support the following variables in prompts:
- ${transcript} – the call transcript
- ${direction} – call direction (inbound/outbound)
- ${duration} – call duration
- ${caller-name} – caller name
- ${called-name} – called party name
- ${options} – dropdown options (for classification tasks)
Response type can be set to: Auto detect, Text, or JSON.
Even if schema validation is not strictly enforced, define the expected schema for every task and measure the JSON validity rate; treat invalid JSON as a defect. Standardize allowed classification labels globally so dashboards stay meaningful across tenants.