Global AI Tasks

Global AI Tasks are reusable, system-managed analysis definitions that can be made available to all tenants. They are the primary mechanism for delivering prebuilt insights (CSAT, sentiment, summaries, topics, etc.) at scale.

In MiaRec, an AI Task is a bundle that includes:

  • Prompt (instructions + inputs to the LLM)
  • Attribute mapping (output attributes → Custom Fields)
  • Filtering criteria (optional; limits which conversations the task runs on)
  • AI engine selection (LLM provider/model)
  • Response settings (text/JSON, optional JSON schema validation)
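The bundle can be pictured as a single record that carries all five parts. The sketch below models it as a Python dataclass; the field names are illustrative, not MiaRec's actual data model.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model of the pieces bundled into one AI Task
# (field names are hypothetical, not MiaRec's internal schema).
@dataclass
class AITask:
    prompt: str                          # instructions + inputs to the LLM
    attribute_mapping: dict[str, str]    # output attribute -> Custom Field
    filters: Optional[dict] = None       # optional eligibility criteria
    engine: str = "default"              # LLM provider/model selection
    response_type: str = "json"          # "text" or "json"
    json_schema: Optional[dict] = None   # optional JSON schema validation

task = AITask(
    prompt="Rate CSAT 1-5. Respond with JSON only.",
    attribute_mapping={"csat": "CSAT"},
)
```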


Where to configure

Menu path (global tasks):
Administration > Speech Analytics > AI Assistant > AI Tasks | Global Tasks



Task lifecycle (operator workflow)

  1. Design
    • Define the insight(s) and output schema.
    • Ensure required Custom Fields exist (global fields are recommended for prebuilt insights).
  2. Build
    • Create the global AI Task with prompt, mapping, filters, and engine.
  3. Test
    • Use “Save and Test” (and/or the Playground) on real transcripts.
  4. Publish
    • Mark the task as available to tenants (global).
    • Default to Disabled for tenants unless you have an explicit baseline bundle.
  5. Roll out
    • Enable for pilot tenants.
    • Monitor results and cost.
  6. Maintain
    • Version changes safely (see change management below).

Designing a global AI Task (operator standards)

1) Prefer JSON output for structured insights

Use JSON for:

  • scores (CSAT, NPS)
  • classifications (reason/outcome/topic)
  • extraction (amount, date, competitor name)

2) Use a strict output schema

If the UI supports “Response JSON schema”, define required keys and types. This reduces failure modes and makes monitoring easier.
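As a minimal sketch of what a strict schema buys you, the function below enforces required keys and types by hand, without external libraries. The platform's “Response JSON schema” option handles this for you; this only illustrates the failure modes it removes (missing keys, wrong types, unparseable output).

```python
import json

# Hypothetical required keys and Python types for a CSAT task
# (illustrative only; not the platform's schema syntax).
SCHEMA = {"csat": int, "explanation": str}

def validate_response(raw: str, schema: dict) -> dict:
    """Parse an LLM response and check required keys and types."""
    data = json.loads(raw)  # raises on non-JSON output
    for key, expected_type in schema.items():
        if key not in data:
            raise ValueError(f"missing required key: {key}")
        if not isinstance(data[key], expected_type):
            raise ValueError(f"{key} must be {expected_type.__name__}")
    return data

result = validate_response(
    '{"csat": 4, "explanation": "Agent resolved the issue."}', SCHEMA
)
```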

3) Include value + explanation

For most insights, return:

  • a structured value (score/label/date/text)
  • a short explanation that cites transcript evidence

This improves trust and helps supervisors and QA teams.

Important: ensure the platform has a clear way to store or display the explanation (either a separate Custom Field or a dedicated explanation area), and document the chosen approach for your MiaRec deployment.
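The value + explanation pattern might look like the following response payload. The field names are illustrative, not a fixed MiaRec contract.

```python
import json

# Illustrative "value + explanation" response shape: a structured value
# plus a short explanation citing transcript evidence.
response = {
    "csat": 2,
    "explanation": (
        "Customer said 'this is the third time I've called' and the "
        "issue remained unresolved at the end of the call."
    ),
}
payload = json.dumps(response)
```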

4) Keep tasks modular

  • One task should do one logical “job”.
  • It is acceptable for one task to write multiple fields only when they are tightly related (e.g., reason + outcome + explanation).

5) Use filters to control cost and relevance

Examples:

  • call duration > 15 seconds
  • inbound only
  • channel = call (vs chat/email) until text support is enabled
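Conceptually, filtering criteria act as an eligibility predicate evaluated before the task runs. A minimal sketch, assuming hypothetical call-metadata field names:

```python
# Sketch of filter criteria as an eligibility predicate
# (field names like "duration_seconds" are illustrative).
def is_eligible(call: dict) -> bool:
    return (
        call.get("duration_seconds", 0) > 15   # skip very short calls
        and call.get("direction") == "inbound" # inbound only
        and call.get("channel") == "call"      # calls only, not chat/email
    )

calls = [
    {"duration_seconds": 8,   "direction": "inbound",  "channel": "call"},
    {"duration_seconds": 120, "direction": "inbound",  "channel": "call"},
    {"duration_seconds": 300, "direction": "outbound", "channel": "call"},
]
eligible = [c for c in calls if is_eligible(c)]  # only the 120s inbound call
```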


Creating a global AI Task (operator checklist)

  1. Create task
    • Set type (single field / multiple fields / Auto QA where applicable)
    • Name, description, icon
    • Enable status
    • Mark as Global (available to all tenants)
  2. Select AI engine
    • Choose the engine that matches the expected output reliability (JSON compliance matters).
  3. Define outputs
    • Set Response Type = JSON (recommended)
    • Provide Response JSON schema (recommended)
    • Configure Attribute mapping:
      • ATTRIBUTE: output key (e.g., csat)
      • CUSTOM FIELD: destination Custom Field (e.g., CSAT)
  4. Write the prompt
    • Include the transcript/thread variable (e.g., ${transcript} or equivalent)
    • Define scoring definitions or allowed labels
    • Specify “JSON only” output and the exact format
    • Include the value + explanation pattern
  5. Add filtering criteria (optional but recommended)
    • duration, direction, channel, queue, etc.
  6. Save and Test
    • Test with representative examples:
      • positive, neutral, negative
      • edge cases (short calls, transfers, escalations)
  7. Publish and roll out
    • Keep the task Disabled by default for tenants unless it is part of a baseline onboarding bundle.
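Putting the prompt-writing steps together, a CSAT prompt following this checklist might look like the sketch below. The wording and scoring anchors are hypothetical examples, not shipped MiaRec defaults; only the ${transcript} variable style comes from the documentation above.

```python
from string import Template

# Hypothetical CSAT prompt: transcript variable, scoring definitions,
# "JSON only" instruction, and the value + explanation pattern.
PROMPT = Template("""\
You are a contact-center quality analyst.

Rate the customer's satisfaction on this call from 1 (very dissatisfied)
to 5 (very satisfied).

Transcript:
${transcript}

Respond with JSON only, in exactly this format:
{"csat": <integer 1-5>, "explanation": "<one sentence citing transcript evidence>"}
""")

rendered = PROMPT.substitute(
    transcript="Agent: Hello, how can I help?\nCustomer: Thanks, that fixed it!"
)
```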

Tenant activation and overrides (how global tasks behave)

In tenant UI, global tasks appear under: Administration > Speech Analytics > AI Assistant > AI Tasks
with Enabled and Disabled tabs.

  • Tenant admins enable a task by clicking Enable on the Disabled tab.
  • Tenant admins can click Edit to override the Prompt, the Filter, or both.

Operator governance: decide whether overrides are permitted by default for partners/tenants, and how you will support troubleshooting of overridden tasks.


Change management for global tasks (critical)

Global task changes can affect:

  • comparability of metrics over time
  • downstream dashboards and thresholds
  • tenant overrides (tenants may diverge from defaults)

Recommended practices:

  • Version prompts (clone the task, or version within the task if supported)
  • Avoid breaking output schemas for existing mappings
  • When making major scoring-logic changes:
    • communicate to tenant admins
    • provide a calibration plan
    • consider running the old and new tasks in parallel for a period


Task types

When creating a new AI Task, select from these task types:

AI Task type selection

Figure: Select a task type when creating a new AI Task.

Available task types:

  • Call note – Generate a note or summary for the call
  • Custom field – Populate a single custom field
  • Multiple custom fields – Populate multiple related custom fields
  • Sentiment score – Calculate sentiment from the conversation
  • Call summary – Generate a structured call summary
  • Auto QA – Automated quality assurance scoring
  • Topics – Multi-label topic classification
  • Named Entity Recognition – Extract specific entities (dates, amounts, names)
  • Speaker Label – Identify and label speakers

Task editor structure

The task editor includes these sections:

AI Task editor - General settings

Figure: AI Task general settings and visibility configuration.

Prompt configuration

The prompt is split into two parts:

  • Task instructions – High-level role and rules for the AI
  • Task inputs – Detailed instructions including the transcript and output format

AI Task editor - Prompt configuration

Figure: Task instructions and inputs configuration.

Available variables

Variables you can use in prompts:

  • ${transcript} – The conversation transcript
  • ${direction} – Call direction (inbound/outbound)
  • ${duration} – Call duration
  • ${caller-name} – Caller's name
  • ${called-name} – Called party's name
  • ${options} – Dropdown field options (for classification tasks)
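The variables above use ${...} interpolation. As an aside for anyone prototyping prompts outside the platform: names like ${caller-name} contain hyphens, which Python's default string.Template identifier rules reject, so the sketch below widens the braced-identifier pattern. MiaRec performs its own substitution server-side; this only illustrates the interpolation semantics.

```python
from string import Template

# string.Template rejects hyphens in identifiers by default, so allow
# them for braced placeholders like ${caller-name} (illustration only).
class PromptTemplate(Template):
    braceidpattern = r"[a-z][a-z0-9-]*"

prompt = PromptTemplate(
    "Direction: ${direction}, duration: ${duration}s.\n"
    "Caller: ${caller-name}\n"
    "Transcript:\n${transcript}"
)
rendered = prompt.substitute({
    "direction": "inbound",
    "duration": "184",
    "caller-name": "Alex Rivera",
    "transcript": "Agent: Hello ...",
})
```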

Response configuration

  • Response type – Auto detect, Text, or JSON
  • JSON schema – Optional schema for validating JSON responses

Viewing tenant activation status

Track which tenants have enabled a global task:

AI Task - Tenant list

Figure: View tenant activation status for a global AI Task.


Viewing tenant overrides

Monitor which tenants have overridden the default settings:

AI Task - Default settings view

Figure: View the default task settings.

AI Task - Overrides view

Figure: View tenants with overridden settings. Tasks with overrides show an "Overridden settings" tag.


Tenant override scope

Tenants can override only:

  • Prompt (Task instructions and Task inputs)
  • Filters (eligibility criteria)

Tenants cannot override:

  • Attribute mapping
  • Response schema
  • AI engine selection

This ensures consistent data structure across tenants while allowing customization of analysis logic.
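One way to picture this override scope: resolving a tenant's effective task configuration keeps the global attribute mapping, schema, and engine fixed, and applies only prompt/filter customizations. The sketch below is an illustrative model of that resolution, not MiaRec's implementation.

```python
# Fields a tenant may override per the scope described above.
OVERRIDABLE = {"prompt", "filters"}

def resolve_task(global_task: dict, tenant_override: dict) -> dict:
    """Merge a tenant override onto global defaults, honoring the scope."""
    resolved = dict(global_task)
    for key, value in tenant_override.items():
        if key in OVERRIDABLE:
            resolved[key] = value  # prompt/filter customization allowed
        # attribute_mapping, json_schema, engine: stay at global values
    return resolved

global_task = {
    "prompt": "v1 prompt",
    "filters": None,
    "engine": "engine-a",
    "attribute_mapping": {"csat": "CSAT"},
}
tenant_override = {"prompt": "v2 tenant prompt", "engine": "engine-b"}
resolved = resolve_task(global_task, tenant_override)
```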