Upgrades and Change Management

This chapter describes how platform operators should manage change safely across tenants, including:

  • platform software upgrades
  • AI Engine changes
  • global Custom Field / AI Task updates
  • prompt and schema changes

Because AI outputs can change with prompts/models, change management is not only technical—it affects customer reporting and trust.


Change types and risk levels

Platform software changes

  • UI and workflow changes
  • performance and scaling changes
  • schema migrations

AI Engine changes

  • model upgrades (new versions)
  • provider changes
  • parameter changes (timeouts, token limits)

AI configuration changes

  • global AI Task prompt edits
  • output schema changes (JSON keys/types)
  • mapping changes
  • new/removed Custom Fields
  • default filter changes
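Among the configuration changes above, output schema edits are the easiest to classify mechanically. The sketch below diffs two JSON key/type maps and labels the change safe (purely additive) or breaking (keys removed or retyped); the function name and type-map shape are illustrative, not platform APIs.

```python
# Sketch: classify an output-schema change as safe vs breaking by diffing
# JSON keys/types. Names and structures here are assumptions for illustration.

def classify_schema_change(old: dict, new: dict) -> str:
    """old/new map JSON key -> type name, e.g. {"score": "number"}."""
    removed = set(old) - set(new)
    retyped = {k for k in set(old) & set(new) if old[k] != new[k]}
    if removed or retyped:
        return "breaking"  # consumers may depend on removed or retyped keys
    return "safe"          # unchanged, or purely additive new keys

print(classify_schema_change(
    {"score": "number", "reason": "string"},
    {"score": "string", "reason": "string", "confidence": "number"},
))  # -> breaking ("score" changed from number to string)
```

A renamed key shows up as one removal plus one addition, so renames are correctly treated as breaking.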

Recommended change process

  1. Plan
      • identify impacted tenants and tasks
      • classify the change as safe vs breaking
  2. Test
      • run regression tests on a transcript test suite
      • verify JSON compliance and schema validation
      • measure token usage and latency changes
  3. Stage
      • deploy to a staging environment
      • run pilot tenants first
  4. Roll out
      • roll out gradually
      • monitor errors, cost, and output shifts
  5. Communicate
      • release notes for tenant admins (what changed and why)
      • highlight expected metric shifts
  6. Rollback
      • have a rollback plan (revert engine/task/prompt; disable the feature)
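The staging and rollout steps above can be sketched as a deterministic gate: pilot tenants always get the change, and the remainder are admitted by a stable hash bucket so a tenant stays included as the rollout percentage increases. The pilot list and function name are assumptions.

```python
# Sketch: gradual rollout gating — pilot tenants first, then a percentage of
# remaining tenants by stable hash. PILOT_TENANTS is a hypothetical list.
import hashlib

PILOT_TENANTS = {"tenant-a", "tenant-b"}

def in_rollout(tenant_id: str, percent: int) -> bool:
    """True if this tenant should receive the new version."""
    if tenant_id in PILOT_TENANTS:
        return True
    # Stable hash: a tenant's bucket never changes, so raising `percent`
    # only ever adds tenants to the rollout, never removes them.
    bucket = int(hashlib.sha256(tenant_id.encode()).hexdigest(), 16) % 100
    return bucket < percent
```

Ramping from 5% to 25% to 100% then just means calling the same gate with a larger `percent` while monitoring errors, cost, and output shifts at each step.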

For global AI Tasks

Prefer a versioned task approach:

  • create CSAT Scoring v2 rather than editing the existing prompt in place when logic changes significantly
  • allow tenants to migrate when ready
  • deprecate v1 after a transition period
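The versioned-task approach can be sketched as a registry keyed by (name, version): publishing v2 leaves v1 untouched, and deprecation is a flag rather than a deletion. The class and field names below are illustrative, not platform APIs.

```python
# Sketch: "new version = new task" registry. Editing task logic publishes a
# v2 alongside v1; tenants migrate explicitly. All names are assumptions.
from dataclasses import dataclass

@dataclass
class AITask:
    name: str
    version: int
    prompt: str
    deprecated: bool = False

class TaskRegistry:
    def __init__(self):
        self.tasks: dict[tuple, AITask] = {}

    def publish(self, name: str, version: int, prompt: str) -> None:
        self.tasks[(name, version)] = AITask(name, version, prompt)

    def deprecate(self, name: str, version: int) -> None:
        self.tasks[(name, version)].deprecated = True

reg = TaskRegistry()
reg.publish("CSAT Scoring", 1, "Score satisfaction 1-5.")
reg.publish("CSAT Scoring", 2, "Score satisfaction 1-5 and justify.")  # v1 untouched
reg.deprecate("CSAT Scoring", 1)  # after the transition period
```

Because v1 keeps serving until each tenant opts in to v2, metric series stay comparable per tenant across the migration window.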

For Custom Fields

Treat a field's type and machine name as immutable. For breaking changes:

  • create a new field
  • migrate the mapping in a new task version
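A minimal sketch of the create-new-field pattern: the hypothetical change below moves a score from a string field to an integer field, writing the new field while leaving the old one in place for consumers that have not yet migrated. Field names are invented for illustration.

```python
# Sketch: breaking Custom Field change handled as create-new-field + remap.
# "csat_score" (string like "4/5") and "csat_score_v2" (int) are hypothetical.

def migrate_field(record: dict) -> dict:
    out = dict(record)
    raw = record.get("csat_score")
    if raw is not None:
        out["csat_score_v2"] = int(raw.split("/")[0])
    return out  # old field retained; nothing is mutated or deleted in place
```

The old field can be hidden or deprecated later, once dashboards and exports reference the new one.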


Tenant overrides and drift

Tenant overrides (prompt/filter) cause tenant configurations to diverge from the global defaults over time.

Recommended operator practices:

  • maintain an “Overrides inventory” (who overrides what)
  • when shipping global updates, document:
      • which tenants are affected directly (no overrides)
      • which tenants are not affected (overridden tasks)
  • provide a workflow to “reset to defaults” if supported
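Given an overrides inventory, the impact documentation above can be generated rather than hand-written. The sketch below splits tenants by override status for one task; the inventory structure and names are assumptions.

```python
# Sketch: derive an impact report for a global task update from a
# hypothetical overrides inventory (tenant -> set of overridden task names).

overrides = {
    "acme": {"CSAT Scoring"},  # local override: shielded from the update
    "globex": set(),           # no overrides: receives the update
}

def impact_report(task_name: str) -> dict:
    affected = [t for t, o in overrides.items() if task_name not in o]
    shielded = [t for t, o in overrides.items() if task_name in o]
    return {"affected": affected, "not_affected": shielded}
```

The `not_affected` list doubles as the outreach list for a “reset to defaults” offer, since those tenants will silently miss the improvement otherwise.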


Release notes for tenants

Include:

  • what changed (prompt/model/schema)
  • why it changed (accuracy, consistency, performance)
  • expected effect (e.g., score distribution may shift)
  • what customers should do (recalibrate thresholds, revalidate)
  • rollback/opt-out option (if available)


Implementation notes

  • Tenants automatically receive global task updates unless they have applied overrides
  • Tenant admins can override prompt and filter settings; they can also revert to defaults
  • New tasks and changes apply only to new conversations by default; backfill requires manual coordination
  • Adopt "new version = new task" for any change that can shift metrics significantly
  • Maintain a partner-facing changelog and tenant-admin facing release notes