Global AI Tasks
Global AI Tasks are reusable, system-managed analysis definitions that can be made available to all tenants. They are the primary mechanism for delivering prebuilt insights (CSAT, sentiment, summaries, topics, etc.) at scale.
In MiaRec, an AI Task is a bundle that includes:
- Prompt (instructions + inputs to the LLM)
- Attribute mapping (output attributes → Custom Fields)
- Filtering criteria (optional; limits which conversations the task runs on)
- AI engine selection (LLM provider/model)
- Response settings (text/JSON, optional JSON schema validation)
Where to configure
Menu path (global tasks):
Administration > Speech Analytics > AI Assistant > AI Tasks | Global Tasks
Global task lifecycle (recommended)
- Design
  - Define the insight(s) and output schema.
  - Ensure required Custom Fields exist (global fields recommended for prebuilt insights).
- Build
  - Create the global AI Task with prompt, mapping, filters, engine.
- Test
  - Use “Save and Test” (and/or Playground) on real transcripts.
- Publish
  - Mark as available to tenants (global).
  - Default to Disabled for tenants unless you have an explicit baseline bundle.
- Roll out
  - Enable for pilot tenants.
  - Monitor results and cost.
- Maintain
  - Version changes safely (see change management below).
Designing a global AI Task (operator standards)
1) Prefer JSON output for structured insights
Use JSON for:
- scores (CSAT, NPS)
- classifications (reason/outcome/topic)
- extraction (amount, date, competitor name)
2) Use a strict output schema
If the UI supports “Response JSON schema”, define required keys and types. This reduces failure modes and makes monitoring easier.
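For example, a minimal schema for a CSAT task might look like the sketch below; the key names and the 1–5 range are illustrative, not a fixed MiaRec format.

```json
{
  "type": "object",
  "properties": {
    "csat": { "type": "integer", "minimum": 1, "maximum": 5 },
    "explanation": { "type": "string" }
  },
  "required": ["csat", "explanation"]
}
```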
3) Include value + explanation
For most insights, return:
- a structured value (score/label/date/text)
- a short explanation that cites transcript evidence
This improves trust and helps supervisors and QA teams.
Important: make sure the explanation has a clear destination before rollout, either a separate Custom Field or a dedicated explanation area, and document the chosen approach for your deployment.
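Under the same illustrative schema shown earlier, a conforming response could look like this (values are made up):

```json
{
  "csat": 2,
  "explanation": "The caller repeated the issue three times and ended the call saying it was still unresolved."
}
```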
4) Keep tasks modular
- One task should do one logical “job”.
- It is acceptable for one task to write multiple fields only when they are tightly related (e.g., reason + outcome + explanation).
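For illustration, such a multi-field task could return a single JSON object like the following; the keys and labels are assumptions, not a prescribed format:

```json
{
  "reason": "billing dispute",
  "outcome": "resolved - credit issued",
  "explanation": "The agent confirmed a duplicate charge and applied a credit before ending the call."
}
```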
5) Use filters to control cost and relevance
Examples:
- call duration > 15 seconds
- inbound only
- channel = call (vs chat/email) until text support is enabled
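Conceptually, such a filter just combines simple conditions. The sketch below is illustrative pseudo-configuration only; it is not MiaRec's actual filter syntax, which is configured in the UI.

```json
{
  "match": "all",
  "conditions": [
    { "attribute": "duration", "operator": ">", "value": 15 },
    { "attribute": "direction", "operator": "=", "value": "inbound" },
    { "attribute": "channel", "operator": "=", "value": "call" }
  ]
}
```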
Creating a global AI Task (operator checklist)
- Create task
  - Set type (single field / multiple fields / Auto QA where applicable)
  - Name, description, icon
  - Enable status
- Mark as Global (available to all tenants)
- Select AI engine
  - Choose the engine that matches the expected output reliability (JSON compliance matters).
- Define outputs
  - Set Response Type = JSON (recommended)
  - Provide Response JSON schema (recommended)
  - Configure Attribute mapping:
    - ATTRIBUTE: output key (e.g., csat)
    - CUSTOM FIELD: destination Custom Field (e.g., CSAT)
- Write the prompt (see the example prompt after this checklist)
  - Include the transcript/thread variable (e.g., ${transcript} or equivalent)
  - Define scoring definitions or allowed labels
  - Specify “JSON only” output and the exact format
  - Include the value + explanation pattern
- Add filtering criteria (optional but recommended)
  - duration, direction, channel, queue, etc.
- Save and Test
  - Test with representative examples:
    - positive, neutral, negative
    - edge cases (short calls, transfers, escalations)
- Publish and roll out
  - Keep the task Disabled by default for tenants unless it is part of a baseline onboarding bundle.
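For reference, a minimal prompt that follows this checklist might look like the sketch below. The wording, scoring scale, and output keys are illustrative; only the ${transcript} variable is taken from the documented variable list.

```
Task instructions:
You are a contact-center quality analyst. Analyze the conversation and
rate customer satisfaction. Respond with JSON only.

Task inputs:
Transcript:
${transcript}

Scoring definitions:
5 = clearly satisfied, issue resolved
3 = neutral or mixed signals
1 = clearly dissatisfied, issue unresolved

Return JSON only, in exactly this format:
{"csat": <integer 1-5>, "explanation": "<one sentence citing transcript evidence>"}
```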
Tenant activation and overrides (how global tasks behave)
In the tenant UI, global tasks appear under:
Administration > Speech Analytics > AI Assistant > AI Tasks
with Enabled and Disabled tabs.
- Tenant admins enable a task by clicking Enable on the Disabled tab.
- Tenant admins can click Edit to override the Prompt, the Filter, or both.
Operator governance: decide whether partners should allow overrides by default and how override troubleshooting will be supported.
Change management for global tasks (critical)
Global task changes can affect:
- comparability of metrics over time
- downstream dashboards and thresholds
- tenant overrides (tenants may diverge from defaults)
Recommended practices:
- Version prompts (clone the task, or version within the task if supported)
- Avoid breaking output schemas for existing mappings
- When making major scoring logic changes:
  - communicate to tenant admins
  - provide a calibration plan
  - consider running old and new tasks in parallel for a period
Task types
When creating a new AI Task, select from these task types:
Figure: Select a task type when creating a new AI Task.
Available task types:
- Call note – Generate a note or summary for the call
- Custom field – Populate a single custom field
- Multiple custom fields – Populate multiple related custom fields
- Sentiment score – Calculate sentiment from the conversation
- Call summary – Generate a structured call summary
- Auto QA – Automated quality assurance scoring
- Topics – Multi-label topic classification
- Named Entity Recognition – Extract specific entities (dates, amounts, names)
- Speaker Label – Identify and label speakers
Task editor structure
The task editor includes these sections:
Figure: AI Task general settings and visibility configuration.
Prompt configuration
The prompt is split into two parts:
- Task instructions – High-level role and rules for the AI
- Task inputs – Detailed instructions including the transcript and output format
Figure: Task instructions and inputs configuration.
Available variables
Variables you can use in prompts:
- ${transcript} – The conversation transcript
- ${direction} – Call direction (inbound/outbound)
- ${duration} – Call duration
- ${caller-name} – Caller's name
- ${called-name} – Called party's name
- ${options} – Dropdown field options (for classification tasks)
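For a classification task backed by a dropdown Custom Field, the task inputs could combine several of these variables, for example (the wording and the `reason` output key are illustrative):

```
Call direction: ${direction}
Call duration: ${duration}

Transcript:
${transcript}

Classify the primary contact reason. Choose exactly one label from:
${options}

Return JSON only: {"reason": "<one of the allowed options>"}
```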
Response configuration
- Response type – Auto detect, Text, or JSON
- JSON schema – Optional schema for validating JSON responses
Viewing tenant activation status
Track which tenants have enabled a global task:
Figure: View tenant activation status for a global AI Task.
Viewing tenant overrides
Monitor which tenants have overridden the default settings:
Figure: View the default task settings.
Figure: View tenants with overridden settings. Tasks with overrides show an "Overridden settings" tag.
Tenant override scope
Tenants can override only:
- Prompt (Task instructions and Task inputs)
- Filters (eligibility criteria)
Tenants cannot override:
- Attribute mapping
- Response schema
- AI engine selection
This ensures consistent data structure across tenants while allowing customization of analysis logic.





