AI Assistant Job (Processing Pipeline)

The AI Assistant job is the background processing pipeline that executes AI Tasks against conversations and writes results to Custom Fields.

In most deployments, there is one global continuous job that:

  • continuously picks up new conversations,
  • determines which AI Tasks are enabled for the conversation’s tenant,
  • applies each task’s filters,
  • executes eligible tasks using the selected AI engine,
  • persists outputs to Custom Fields (and associated explanations where configured).


Conceptual execution flow

For each conversation:

  1. Eligibility check – Does the conversation have content the task can analyze?
    • voice: transcript present
    • text: thread present
  2. Tenant activation – Which global tasks are enabled for this tenant? Which tenant-specific tasks exist?
  3. Filters – Does the conversation match each task’s filters (duration, direction, channel, etc.)?
  4. Execution – The prompt is sent to the configured AI engine, and the output is validated (against a JSON schema, if configured).
  5. Persistence – Output attributes are mapped into Custom Fields, and the task execution status is stored for monitoring/audit.
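The eligibility and task-selection steps above can be sketched in a few lines. This is an illustrative model only: `Conversation`, `has_analyzable_content`, and `eligible_tasks` are hypothetical names, not actual product APIs, and real task definitions are richer than the dicts used here.

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    tenant_id: str
    channel: str          # "voice" or "text"
    transcript: str = ""  # populated for voice conversations
    thread: str = ""      # populated for text conversations
    duration_sec: int = 0

def has_analyzable_content(conv: Conversation) -> bool:
    """Step 1 (eligibility): voice needs a transcript, text needs a thread."""
    return bool(conv.transcript) if conv.channel == "voice" else bool(conv.thread)

def eligible_tasks(conv: Conversation, global_tasks: list, tenant_tasks: dict) -> list:
    """Steps 2-3: tenant activation first, then each task's own filters."""
    candidates = [t for t in global_tasks if conv.tenant_id in t["enabled_tenants"]]
    candidates += tenant_tasks.get(conv.tenant_id, [])
    return [t for t in candidates if t["filter"](conv)]
```

A conversation that passes both gates then proceeds to execution and persistence; one that fails either gate is skipped without touching the AI engine, which is what keeps per-conversation cost bounded.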

Operator configuration goals

  • Ensure continuous processing keeps up with ingestion volume.
  • Provide predictable execution latency (near-real-time vs batch).
  • Prevent runaway costs from misconfigured tasks/filters.
  • Provide observability: backlog, errors, per-tenant usage.

Even if the UI shows a single “job,” internally you typically want to support two modes:

1) Continuous processing (real-time-ish)

  • processes newly arrived conversations
  • tuned for steady throughput
  • retries transient failures

2) Backfill / reprocessing (batch)

  • processes historical data after:
    • enabling a new task
    • changing scoring logic
    • fixing ingestion/transcription gaps
  • throttled to avoid impacting live workloads

If the product does not support a separate backfill job today, document a safe operational procedure to run backfills without impacting the continuous job.


Reliability and execution controls

Operators should document/verify:

  • Concurrency controls: number of parallel executions per worker and per engine
  • Timeouts: per request and per conversation
  • Retry policy: exponential backoff; max retries; when to give up
  • Dead-letter handling: capture “poison” conversations that always fail (invalid transcript, huge transcript, etc.)
  • Idempotency: rerunning a task should overwrite/append deterministically (define behavior)
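The retry and dead-letter bullets can be combined into one small control loop. A minimal sketch, assuming a `TransientError` exception type and a list-backed dead-letter store (real deployments would use a queue or table); none of these names come from the product:

```python
import time

class TransientError(Exception):
    """Engine timeout, rate limit, etc. — worth retrying."""

def execute_with_retries(run_task, conv_id, *, max_retries=3, base_delay=0.5,
                         dead_letter=None, sleep=time.sleep):
    """Exponential backoff on transient failures; dead-letter on final give-up."""
    for attempt in range(max_retries + 1):
        try:
            return run_task(conv_id)
        except TransientError:
            if attempt == max_retries:
                break
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    if dead_letter is not None:
        dead_letter.append(conv_id)  # capture "poison" conversations for inspection
    return None
```

The key design point is the final branch: a conversation that exhausts its retries must land somewhere visible (the dead letter) rather than silently returning to the queue, or it will consume retry budget forever.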

Cost controls and safety rails

  • Default filters for expensive tasks (e.g., minimum duration / minimum text length).
  • Rate limit by tenant if supported (fairness).
  • Disable tasks by default for new tenants until explicitly enabled.
  • Monitor for spikes in:
    • executions per conversation
    • tokens per execution
    • failure retries (can multiply cost)
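If the platform does not offer built-in per-tenant rate limiting, a coarse token budget in front of the engine call is often enough as a safety rail. A sketch under assumed names (`TenantBudget` is hypothetical, and a production version would reset the counters daily and persist them):

```python
from collections import defaultdict

class TenantBudget:
    """Per-tenant token budget: deny (or defer) executions once the budget is spent."""

    def __init__(self, limit_tokens: int):
        self.limit = limit_tokens
        self.used = defaultdict(int)  # tenant_id -> tokens consumed this period

    def try_consume(self, tenant_id: str, tokens: int) -> bool:
        if self.used[tenant_id] + tokens > self.limit:
            return False  # over budget — skip, defer, or alert
        self.used[tenant_id] += tokens
        return True
```

Checking the budget before the engine call (not after) is what prevents a single misconfigured tenant from starving the others or running up cost unnoticed.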

Monitoring indicators (must-have)

  • backlog/lag (time from ingestion to completion)
  • success/failure rate per task and per engine
  • percent of conversations with missing outputs
  • average cost/usage per tenant (requests/tokens)
  • top failing tasks and top failing tenants
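Several of these indicators can be derived from the same per-conversation execution records. An illustrative aggregation, assuming a record shape of our own invention (the product's actual monitoring schema may differ):

```python
def job_health(records):
    """Aggregate must-have indicators from per-conversation execution records.

    Each record is assumed to look like:
      {"task": str, "status": "ok" | "error", "lag_sec": float, "has_output": bool}
    """
    total = len(records)
    ok = sum(1 for r in records if r["status"] == "ok")
    return {
        "success_rate": ok / total,
        "missing_output_pct": 100.0 * sum(1 for r in records if not r["has_output"]) / total,
        "max_lag_sec": max(r["lag_sec"] for r in records),  # backlog/lag indicator
    }
```

Grouping the same computation by task or by tenant yields the "top failing tasks / top failing tenants" views.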

Where to configure

The AI Assistant job consists of two components:

Processing Queue

Menu path: Administration > Jobs > Processing Queues

AI Assistant Processing Queue configuration

Figure: Processing Queue configuration defining which conversations are eligible for AI processing.

AI Assistant Job

Menu path: Administration > Speech Analytics > AI Assistant > Jobs

AI Assistant Job configuration - General settings

Figure: AI Assistant Job general settings including name, status, and access scope.

Job settings

When configuring an AI Assistant job:

  • Access scope – Unrestricted (all tenants), Tenant only (specific tenant groups), or One tenant
  • Data source – Queue-based (continuous processing) or Full mode with continuation token (one-off backfill)
  • AI engine – Which engine to use for this job
  • Process tasks – All tasks or Selected tasks only
  • Filtering criteria – Additional filters (date range, call duration, direction, etc.)
  • Schedule – When the job should run

AI Assistant Job configuration - Data source and tasks

Figure: Job data source configuration and task selection.


Backfill processing

When enabling a new task or reprocessing historical data:

  1. New conversations only (default) – When you enable a task, only new conversations are processed going forward
  2. On-demand backfill – For historical reprocessing, contact MiaRec support to request a backfill job

To run a one-off backfill:

  1. Create a new job with Data source = Full mode with continuation token.
  2. Configure the date range in the filtering criteria.
  3. Run the job manually.

Note: Backfill jobs should be throttled to avoid impacting production workloads.
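The throttled, continuation-token pattern the backfill job follows can be sketched as a paged loop. `fetch_page` and `process` are hypothetical callables standing in for the product's pagination API and task executor:

```python
import time

def run_backfill(fetch_page, process, *, page_size=100, pause_sec=5.0, sleep=time.sleep):
    """Drain historical conversations page by page, pausing between pages
    so the continuous job keeps priority on shared engine capacity."""
    token, processed = None, 0
    while True:
        page, token = fetch_page(token, page_size)  # continuation-token pagination
        for conv in page:
            process(conv)
            processed += 1
        if token is None:
            return processed
        sleep(pause_sec)  # throttle between pages
```

Because the continuation token records progress, an interrupted backfill can resume from the last completed page instead of restarting from the beginning.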


Monitoring job execution

The job view provides several monitoring tabs:

  • Latest run – Current execution status and progress
  • All runs – Historical execution chart showing success/failure over time
  • Processing records – Individual conversation processing status
  • Logs – Detailed execution logs for troubleshooting

AI Assistant Job - Latest run

Figure: Latest run tab showing current job execution status.

EDITOR NOTE: fill in with product specifics

Purpose of this section

This is the operational heart of the platform. Operators need to know: where the job is, what knobs exist, and what "healthy" looks like.

Missing / unclear (confirm with Engineering)

  1. Where jobs are configured/monitored in UI
    A) Administration > Speech Analytics > AI Assistant > Jobs
    B) Jobs are not user-visible (config files only)
    C) Other (provide exact path)

  2. Backfill behavior when enabling a task
    A) Only new conversations are processed
    B) Historical conversations are automatically backfilled
    C) Optional backfill toggle
    D) Manual backfill only

  3. Idempotency
    A) Task results overwrite existing Custom Field values
    B) Task results append (history)
    C) Mixed by field type

  4. Execution ordering
    A) Tasks run independently in any order
    B) Tasks can depend on other tasks (pipeline)
    C) Not supported/unknown

  5. Retry behavior
    A) Automatic retries exist and are configurable
    B) Automatic retries exist but fixed
    C) No retries; manual only

  6. Transcript size limits
    A) Hard limit exists (document size)
    B) Soft limit; platform truncates
    C) No known limit