
AI Tasks: Enable and Manage

An AI Task is a purpose-specific analysis definition that reads a conversation transcript/thread and writes one or more outputs into Custom Fields.

In MiaRec, an AI Task typically includes:

  • Prompt (instructions to the AI model, including output format)
  • Attribute mapping (which output attribute populates which Custom Field)
  • Optional filters (which conversations the task applies to)
  • AI engine selection (LLM provider/model), depending on deployment
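
These parts are configured in the MiaRec admin UI rather than in code, but a rough sketch can make the structure concrete. The dataclass below is purely illustrative; the field names (name, prompt, attribute_mapping, filters, engine) are assumptions, not MiaRec's internal schema.

```python
from dataclasses import dataclass, field

@dataclass
class AITask:
    """Illustrative sketch of an AI Task's parts (hypothetical names,
    not MiaRec's internal schema)."""
    name: str                          # e.g., "CSAT Scoring"
    prompt: str                        # instructions to the model, incl. output format
    attribute_mapping: dict[str, str]  # output attribute -> Custom Field
    filters: dict[str, object] = field(default_factory=dict)  # eligibility criteria
    engine: str = "default"            # LLM provider/model, deployment dependent

# Example instance mirroring the list above (all values invented).
csat_task = AITask(
    name="CSAT Scoring",
    prompt="Rate customer satisfaction 1-5 and explain briefly. Respond in JSON only.",
    attribute_mapping={"csat": "CSAT Score", "csat_explanation": "CSAT Explanation"},
    filters={"channel": "call", "direction": "inbound", "min_duration_sec": 15},
)
```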

To manage AI Tasks, navigate to Administration > Speech Analytics > AI Assistant > AI Tasks.

In this view, tenant admins can usually:

  • See tasks that are Enabled for the tenant
  • Enable tasks from the Disabled tab
  • Edit tasks to override Prompt, Filter, or both (where permitted)
  • (Optionally) create tenant-specific tasks, if enabled in your deployment

What to check in an AI Task (before enabling)

When you open an AI Task, verify:

  1. Purpose – the task name/description matches the business outcome you want (e.g., “CSAT Scoring”)
  2. Outputs – which fields it writes to
  3. Output type – JSON is recommended for structured metrics (see the example after this list)
  4. Filters – whether it applies to the right subset of conversations (e.g., inbound calls only)
  5. Explanation behavior – whether it includes a reviewer-friendly explanation (recommended)
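
A JSON-only output keeps metrics machine-readable and easy to map into Custom Fields. The response below is purely illustrative; the attribute names csat and csat_explanation are assumptions, not MiaRec defaults.

```python
import json

# Hypothetical JSON-only response from a "CSAT Scoring" task.
raw_response = """
{
  "csat": 4,
  "csat_explanation": "Issue was resolved, but the customer waited on hold twice."
}
"""

result = json.loads(raw_response)  # structured output parses cleanly
print(result["csat"])              # 4
```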

Mapping: output attributes → Custom Fields

The mapping table connects:

  • ATTRIBUTE: the output key produced by the AI (e.g., csat, churn_risk)
  • CUSTOM FIELD: where the value is stored for reporting, search, and dashboards

One task may populate multiple fields, which is useful for:

  • grouping related outputs (e.g., “Churn Risk” + “Churn Risk Reason”)
  • producing both a value and an explanation (if explanations are stored in fields)
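
As a rough sketch of what the mapping accomplishes (the attribute keys and field names below are assumptions, not defaults), applying it amounts to copying each output attribute into its configured Custom Field:

```python
# Hypothetical mapping: AI output attribute -> Custom Field name.
attribute_mapping = {
    "churn_risk": "Churn Risk",
    "churn_risk_reason": "Churn Risk Reason",
}

ai_output = {
    "churn_risk": "high",
    "churn_risk_reason": "Customer mentioned switching to a competitor.",
}

# One task can populate several Custom Fields from a single response.
custom_fields = {
    field_name: ai_output[attr]
    for attr, field_name in attribute_mapping.items()
    if attr in ai_output
}
# {'Churn Risk': 'high', 'Churn Risk Reason': 'Customer mentioned switching to a competitor.'}
```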

Filters: controlling eligibility

Tasks can include filters such as:

  • channel (call vs chat vs ticket)
  • direction (inbound vs outbound calls)
  • duration threshold (e.g., > 15 seconds)
  • other metadata-based filters (queues, tags, etc.)

Filters are important for:

  • relevance (avoid scoring irrelevant calls)
  • accuracy (avoid scoring conversations that are too short to evaluate)
  • cost control (reduce unnecessary processing)
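
The function below sketches the intent of these filters using assumed metadata keys and thresholds; it is not MiaRec's filter syntax, which is configured in the task's Filter settings.

```python
def is_eligible(conversation: dict) -> bool:
    """Illustrative eligibility check (keys and thresholds are assumptions)."""
    return (
        conversation.get("channel") == "call"            # channel filter
        and conversation.get("direction") == "inbound"   # direction filter
        and conversation.get("duration_sec", 0) > 15     # duration threshold
    )

# Too-short calls are skipped, which protects both accuracy and cost.
print(is_eligible({"channel": "call", "direction": "inbound", "duration_sec": 8}))    # False
print(is_eligible({"channel": "call", "direction": "inbound", "duration_sec": 240}))  # True
```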

Prompt: what you can change (tenant view)

Depending on your deployment, tenant admins may be able to:

  • override the prompt to match your business definitions
  • tighten output requirements (e.g., “JSON only”)
  • update classification labels (dropdown values)
  • enforce a scoring rubric (see the sketch after this list)
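
As an illustration only (not a shipped MiaRec prompt), a tenant override that tightens output requirements, fixes the classification labels, and states the rubric might read like this:

```python
# Hypothetical prompt override; labels, rubric, and attribute names are invented.
PROMPT_OVERRIDE = """
You are scoring customer satisfaction for an inbound support call.

Rubric:
  5 - issue resolved, customer explicitly satisfied
  4 - issue resolved, neutral tone
  3 - partially resolved or unclear outcome
  2 - unresolved, customer frustrated
  1 - unresolved, customer intends to cancel

Allowed labels for churn_risk: "low", "medium", "high".

Respond with JSON only, for example:
{"csat": 4, "churn_risk": "low", "explanation": "<one short sentence>"}
"""
```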

Best practice: keep prompts stable once you start tracking metrics over time. Prompt changes can shift results and affect trend analysis.

What tenant admins can and cannot change

Can override:

  • Prompt (Task instructions and Task inputs)
  • Filters (eligibility criteria)

Cannot override (managed by provider):

  • Attribute mapping (output fields)
  • Response schema
  • AI engine selection

This ensures consistent data structure across tenants while allowing customization of analysis logic and filtering rules.

Figure: AI Task tenant settings view, showing which settings a tenant admin can override.