
AI Engines (LLM Providers/Models)

AI Engines define which Large Language Model (LLM) provider/model MiaRec uses to run AI Tasks (summaries, sentiment, CSAT, Auto QA, and custom insights).

This chapter describes how platform operators should configure and govern AI Engines in a multi-tenant environment.


What an AI Engine is

An AI Engine is a named configuration that typically includes:

  • Provider (e.g., OpenAI, Azure OpenAI, Anthropic)
  • Model name/version
  • Authentication (API key, endpoint, deployment name)
  • Safety and policy settings (if applicable)
  • Limits (rate limits, quotas)
  • Default parameters (timeouts, max tokens, temperature), if the platform supports them

AI Tasks reference an AI Engine when executing.
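
As a rough illustration, an engine record could be modeled like this (a sketch only; the field names are hypothetical, not MiaRec's actual schema):

```python
# Hypothetical AI Engine record (illustrative field names, not MiaRec's schema).
engine = {
    "name": "standard",
    "provider": "azure_openai",        # e.g., openai / azure_openai / anthropic
    "model": "gpt-4o-mini",            # model name/version
    "endpoint": "https://example.openai.azure.com",
    "api_key_ref": "secrets://ai-engines/standard",  # reference, never the raw key
    "limits": {"requests_per_minute": 60, "tokens_per_minute": 90_000},
    "defaults": {"timeout_s": 30, "max_tokens": 1024, "temperature": 0.2},
}
```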


Start with a small set of standard engines

For example:

  • Standard – balanced cost/quality
  • High accuracy – for complex scoring/categorization
  • Low latency – for near-real-time needs (if applicable)

Keep the catalog small to reduce operational complexity.
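
As a sketch, such a catalog might map the three tiers to concrete models (the model choices below are examples only, not recommendations):

```python
# Illustrative engine catalog; pick models that match your providers and pricing.
ENGINE_CATALOG = {
    "standard":      {"provider": "openai",    "model": "gpt-4o-mini"},  # balanced cost/quality
    "high_accuracy": {"provider": "openai",    "model": "gpt-4o"},       # complex scoring/categorization
    "low_latency":   {"provider": "anthropic", "model": "claude-3-haiku-20240307"},  # near-real-time
}
```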

Decide how engine selection works

Common patterns:

  • Single global default engine for all tasks
  • Per-task engine selection (each AI Task selects its engine; see “Per-task engine selection” below)
  • Per-tenant engine policy (if required for cost or compliance)
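
If the platform combines these patterns, selection would typically resolve from most to least specific. The sketch below is hypothetical; confirm the actual precedence rules with Engineering/Product:

```python
def resolve_engine(task: dict, tenant: dict, global_default: str = "standard") -> str:
    """Resolve which engine to use: per-task override, then per-tenant policy,
    then the global default. Hypothetical precedence, for illustration only."""
    if task.get("engine"):           # per-task selection
        return task["engine"]
    if tenant.get("engine_policy"):  # per-tenant policy (cost/compliance)
        return tenant["engine_policy"]
    return global_default            # single global default
```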


Credential and security management

  • Store credentials in a secrets manager (recommended).
  • Rotate credentials regularly and document rotation steps.
  • Limit engine access by environment (dev/stage/prod).
  • Maintain audit logs for engine configuration changes.
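
For example, the application can keep only a secret reference in the engine record and resolve the key at runtime. A minimal sketch, assuming AWS Secrets Manager via boto3 (adapt to whatever secrets backend you use):

```python
import os
import boto3  # assumes AWS Secrets Manager; swap in your secrets backend

def load_engine_api_key(secret_id: str) -> str:
    """Fetch an engine's API key from the secrets manager at runtime.

    Storing only a secret *reference* in the engine configuration means
    rotation happens in one place and raw keys never land in config files.
    """
    client = boto3.client(
        "secretsmanager",
        region_name=os.environ.get("AWS_REGION", "us-east-1"),
    )
    return client.get_secret_value(SecretId=secret_id)["SecretString"]
```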

Validation / smoke tests

When adding an engine:

  1. Run a minimal test prompt (health check) to verify credentials and connectivity.
  2. Run a representative AI Task (e.g., summarization) on a test transcript.
  3. Verify:
       • latency
       • successful JSON responses (if using JSON output)
       • stable output quality
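
A minimal health check along these lines can cover steps 1 and 3. The sketch assumes an OpenAI-compatible endpoint and the `openai` Python SDK; adjust for your provider:

```python
import json
import time
from openai import OpenAI  # pip install openai

def smoke_test(api_key: str, base_url: str, model: str) -> None:
    """Verify credentials, connectivity, latency, and JSON output for an engine."""
    client = OpenAI(api_key=api_key, base_url=base_url)
    start = time.monotonic()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": 'Reply with exactly this JSON: {"status": "ok"}'}],
        max_tokens=20,
        timeout=30,
    )
    latency = time.monotonic() - start
    payload = json.loads(resp.choices[0].message.content)  # raises if not valid JSON
    assert payload.get("status") == "ok", f"unexpected response: {payload}"
    print(f"Engine healthy: {latency:.2f}s round trip")
```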


Operational considerations

Cost management

  • Track usage by tenant and by task (requests, tokens, cost).
  • Use task filters to avoid running tasks on ineligible conversations.
  • Consider “preview mode” for new tasks before broad enablement.
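
A sketch of per-tenant/per-task usage aggregation (the pricing figures are placeholders, not actual provider rates):

```python
from collections import defaultdict

# Placeholder per-1K-token prices; substitute your provider's actual rates.
PRICE_PER_1K = {"standard": {"input": 0.00015, "output": 0.0006}}

usage = defaultdict(lambda: {"requests": 0, "tokens_in": 0, "tokens_out": 0, "cost": 0.0})

def record_usage(tenant: str, task: str, engine: str, tokens_in: int, tokens_out: int) -> None:
    """Accumulate requests, token counts, and estimated cost per (tenant, task)."""
    price = PRICE_PER_1K[engine]
    entry = usage[(tenant, task)]
    entry["requests"] += 1
    entry["tokens_in"] += tokens_in
    entry["tokens_out"] += tokens_out
    entry["cost"] += (tokens_in / 1000) * price["input"] + (tokens_out / 1000) * price["output"]
```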

Reliability and failover (if supported)

  • Define a fallback engine if the primary provider is down.
  • Document expected behavior:
      • automatic failover vs. manual switch
      • per-task vs. global failover
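
If automatic failover exists, its behavior might resemble the sketch below (the `complete` interface is hypothetical, not part of any specific SDK; whether failover is automatic or manual, per-task or global, depends on the platform):

```python
def run_with_failover(prompt: str, primary, fallback, is_transient=lambda exc: True):
    """Try the primary engine; on a transient provider error, retry on the fallback.

    `primary` and `fallback` are assumed to expose a `complete(prompt)` method;
    this interface is illustrative only.
    """
    try:
        return primary.complete(prompt)
    except Exception as exc:
        if not is_transient(exc):
            raise  # non-transient errors (e.g., bad request) should surface
        print(f"Primary engine failed ({exc!r}); falling back")  # audit/alert here
        return fallback.complete(prompt)
```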

Data handling

  • Document data residency considerations (especially in partner-hosted deployments).
  • Document whether transcripts are sent to external providers and what is included (metadata vs transcript only).

Where to configure

Menu path: Administration > Speech Analytics > AI Assistant > Engines

AI Engine configuration form

Figure: AI Engine configuration showing name, status, visibility settings, and model configuration options.

Engine settings

When configuring an AI Engine, you specify:

  • Name – A descriptive name for the engine
  • Status – Enabled or Disabled
  • Visibility – Global (available to all tenants) or Tenant-specific
  • Model settings – Provider-specific configuration (API endpoint, model name, authentication)

Per-task engine selection

Each AI Task can select which engine to use. This allows you to:

  • Use different models for different task types (e.g., a faster model for simple tasks, a more capable model for complex analysis)
  • Test new models on specific tasks before broader rollout
  • Manage costs by using appropriate models for each use case

EDITOR NOTE: fill in with product specifics

Purpose of this section

Operators need exact steps for configuring engines and a governance model for how engines are used across tasks and tenants.

Missing / unclear (confirm with Engineering/Product)

  1. Where AI Engines are configured in the UI
       • A) Administration > Speech Analytics > AI Assistant > Engines
       • B) Administration > Speech Analytics > AI Assistant > Settings
       • C) Engines are configured via config file / environment variables
       • D) Other (provide exact path)

  2. Supported providers
       • A) OpenAI
       • B) Azure OpenAI
       • C) Anthropic
       • D) Google
       • E) AWS Bedrock
       • F) Other (list)

  3. Per-task engine selection
       • A) Yes (each AI Task selects an engine)
       • B) No (single global engine)
       • C) Mixed (task-level possible only for global tasks)

  4. Engine parameters – Can operators configure max tokens / temperature / timeouts?
       • A) Yes, per engine
       • B) Yes, per task
       • C) Not configurable (platform default)

  5. Failover
       • A) Automatic engine failover exists
       • B) Manual only
       • C) Not supported