# Prompting Guidelines
This chapter provides practical guidance for writing and overriding AI Task prompts in MiaRec.
## Use strict, structured output for metrics
For any insight used in dashboards, search, or automation, prefer JSON output with a stable structure.
Recommended pattern:
- return a "value" field (the metric itself)
- return an "explanation" field (1–3 sentences)
Example:
```json
{
  "csat": {
    "value": 4,
    "explanation": "The agent resolved the issue and the customer expressed thanks and satisfaction."
  }
}
```
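A prompt fragment that produces this shape might read as follows; the key name csat and the 1–5 scale are illustrative, not MiaRec defaults:

```text
Return a JSON object with a single top-level key "csat".
"csat" must be an object with exactly two keys:
- "value": an integer from 1 to 5
- "explanation": 1–3 sentences citing evidence from the transcript
```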
## Make allowed values explicit
### Numeric scores
- define the range (e.g., 1–5)
- define what each score means
- define what to do when evidence is insufficient
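Example of a rubric that covers all three points; the score definitions are illustrative:

```text
Rate customer satisfaction on a scale of 1–5:
1 = customer is angry or threatens to cancel
2 = customer is dissatisfied with the outcome
3 = neutral; no clear signal either way
4 = customer is satisfied with the resolution
5 = customer explicitly praises the agent or the outcome
If the transcript does not contain enough evidence, return "Unknown" instead of a number.
```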
### Dropdown classifications
- list allowed labels exactly
- instruct the model to pick only from that list
- include “Unknown” when appropriate
Example:

```text
Allowed values for churn_risk: "Low", "Medium", "High", "Unknown"
Return "Unknown" if the transcript does not provide enough evidence.
```
## Be explicit about evidence and explanations
Explanations are most useful when they:

- reference specific statements or moments in the transcript/thread
- avoid speculation
- remain concise
Good instruction:

```text
Write a 1–2 sentence explanation that cites evidence from the transcript.
Do not invent details not present in the conversation.
```
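Example output that follows this rule; the quoted statement is illustrative:

```json
{
  "csat": {
    "value": 2,
    "explanation": "The customer said \"this is the third time I've called about this\" and ended the call without confirming the issue was resolved."
  }
}
```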
## Enforce “JSON only”
A common failure mode is extra text before/after JSON. Add rules such as:
- “Output must be valid JSON only.”
- “Do not include Markdown.”
- “Do not include commentary outside the JSON.”
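Put together, the output rules in a prompt might read like this (wording is illustrative):

```text
Output rules:
- Output must be valid JSON only.
- Do not include Markdown, code fences, or commentary outside the JSON.
- Do not add keys that are not defined in the schema below.
```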
## Handle ambiguity (Unknown / Not mentioned)
Define an explicit “unknown” policy:

- Use “Unknown” if evidence is insufficient
- Use null/empty string only if your schema allows it
- Avoid guessing
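Example of an explicit unknown policy for a churn_risk field (phrasing is illustrative):

```text
If the transcript contains no statements about cancelling, switching providers,
or dissatisfaction with the service, set "value" to "Unknown" and explain that
the transcript does not discuss churn. Do not infer churn risk from tone alone.
```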
## Keep prompts stable after rollout
Prompt changes can shift results and trend lines. Recommended approach:

- test in Playground
- announce prompt changes to stakeholders
- keep a prompt version header inside the prompt text
Example:

```text
Prompt version: CSAT-v3 (2026-01-24)
Change note: clarified what counts as "resolved"
```
## Suggested prompt structure
Many teams keep prompts consistent across tasks using this structure:
- Role / expertise
- Task description
- Definitions/rubric
- Rules (evidence-based, unknown handling)
- Output schema (JSON)
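A minimal skeleton that follows this structure; every line is illustrative and should be adapted to the task:

```text
You are an experienced contact center quality analyst.

Task: Rate the customer's satisfaction with this call.

Rubric:
Use the 1–5 scale defined below. (Define what each score means.)

Rules:
- Base the rating only on evidence in the transcript.
- If evidence is insufficient, return "Unknown".

Output schema:
Output must be valid JSON only, with this structure:
{ "csat": { "value": <1–5 or "Unknown">, "explanation": "<1–3 sentences>" } }
```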
## Multi-output tasks (one task writes multiple fields)
When a task writes multiple fields:

- include a single JSON object with multiple keys
- keep key names aligned to the mapping
- ensure every key has stable types
Example:

```json
{
  "churn_risk": { "value": "High", "explanation": "..." },
  "escalation_reason": { "value": "Billing dispute", "explanation": "..." }
}
```
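A matching prompt fragment; the key names must match the fields mapped in the task, and the labels are illustrative:

```text
Return one JSON object with exactly two top-level keys: "churn_risk" and "escalation_reason".
"churn_risk".value must be one of: "Low", "Medium", "High", "Unknown".
"escalation_reason".value must be one of the allowed escalation labels, or "Unknown".
Each key must also include a 1–2 sentence "explanation".
```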
## Common pitfalls and how to avoid them
- Pitfall: vague rubrics (“good/bad”)
  Fix: define explicit criteria and examples.
- Pitfall: free-text categories (“customer was unhappy about shipping delays”)
  Fix: force dropdown values.
- Pitfall: too-long explanations
  Fix: cap at 1–3 sentences and require transcript evidence.
- Pitfall: mixing multiple unrelated insights in one task
  Fix: group only tightly related outputs.