1. Configure audio file format
First, you need to change the audio file format settings to increase transcription accuracy. Navigate to Administration -> Storage -> File format and apply the following changes:
- Set WAV file format
- Set Stereo format
- Disable Automatic Gain Control (AGC) filter
- Disable Packet Loss Concealment (PLC) filter
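These storage settings map closely onto the recognition configuration that the Cloud Speech-to-Text API expects. As a hedged illustration (the product builds this request internally; field names below are from the public Google Cloud Speech API, while the values are assumptions matching the settings above), uncompressed stereo WAV corresponds to:

```python
# Illustrative Cloud Speech RecognitionConfig (REST JSON body) matching
# the storage settings above. Field names are from the public API;
# the language code is an assumption for an English-language job.
recognition_config = {
    "encoding": "LINEAR16",        # uncompressed PCM, i.e. plain WAV
    "audioChannelCount": 2,        # stereo: one channel per call party
    "enableSeparateRecognitionPerChannel": True,  # transcribe each channel on its own
    "languageCode": "en-US",
}
```

Keeping the audio stereo is what allows each call party to be recognized on a separate channel, and disabling AGC/PLC filters preserves the raw signal the recognizer was trained on.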
2. Configure speech recognition job
The speech recognition job automatically uploads audio recordings to the cloud service for transcription and then retrieves the transcription results. Multiple jobs can be created with unique settings; for example, one job processes recordings in English and another in Spanish.
Navigate to Administration -> Speech Analytics -> Speech-to-Text Jobs, click "New Job".
Choose a descriptive name for the job. Upload the Google Cloud Service Key JSON file created in the previous steps. Set the Mode to Incremental.
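A common upload mistake is supplying the wrong JSON file (for example, an OAuth client file instead of a service-account key). As a hedged sketch, the standard top-level fields of a Google Cloud service-account key can be sanity-checked before uploading; the helper name and example content below are illustrative:

```python
import json

# Standard top-level fields of a Google Cloud service-account key file.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def looks_like_service_key(raw_json: str) -> bool:
    """Return True if the text parses as JSON and has the shape of a
    service-account key (type == "service_account" plus required fields)."""
    try:
        key = json.loads(raw_json)
    except ValueError:
        return False
    if not isinstance(key, dict):
        return False
    return key.get("type") == "service_account" and REQUIRED_FIELDS <= key.keys()
```

For instance, a key file exported from the Google Cloud Console IAM page passes this check, while an arbitrary JSON file does not.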
Optionally, provide Phrase hints. You can use phrase hints in a few ways:
- Improve the accuracy for specific words and phrases that may tend to be overrepresented in your audio data. For example, if specific commands are typically spoken by the user, you can provide these as phrase hints. Such additional phrases may be particularly useful if the supplied audio contains noise or the contained speech is not very clear.
- Add additional words to the vocabulary of the recognition task. The Cloud Speech API includes a very large vocabulary. However, if proper names or domain-specific words are out-of-vocabulary, you can add them to the phrases provided in your request's speechContext.
Phrases may be provided either as small groups of words or as single words. When provided as multi-word phrases, hints boost the probability of recognizing those words in sequence and also, to a lesser extent, boost the probability of recognizing portions of the phrase, including individual words.
In general, be sparing when providing speech context hints. Better recognition accuracy can be achieved by limiting phrases to only those expected to be spoken. For example, if there are multiple dialog states or device operating modes, provide only the hints that correspond to the current state, rather than always supplying hints for all possible states.
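On the API side, phrase hints travel in the `speechContexts` field of the recognition request. The sketch below shows where they fit, assuming the product submits a standard Cloud Speech request; the field names are from the public API, while the bucket URI and the phrases themselves are illustrative:

```python
# Illustrative Cloud Speech request body with phrase hints.
# "speechContexts" is the public API field; phrases and URI are examples.
request_body = {
    "config": {
        "encoding": "LINEAR16",
        "languageCode": "en-US",
        "speechContexts": [
            # Domain-specific phrases expected in these recordings.
            {"phrases": ["account number", "billing address", "premium support plan"]},
        ],
    },
    "audio": {"uri": "gs://example-bucket/recording.wav"},  # assumed GCS location
}
```

Per the guidance above, this list should stay short and contain only phrases actually expected in the recordings the job will process.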
Specify Filtering criteria for recordings. For example, you can limit transcription to a specific group, duration range, date range, etc.
Configure a Schedule for the transcription job. The job can be run either manually or on a schedule (every hour/day/week, or more often). In the example below, the transcription job runs every 2 minutes.
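A fixed-interval schedule like "every 2 minutes" simply means each run starts a set interval after the previous one. The product's scheduler is internal; the following is only a minimal sketch of that interpretation:

```python
from datetime import datetime, timedelta

def next_run(last_run: datetime, interval_minutes: int = 2) -> datetime:
    """Return the next scheduled start time: a fixed interval after the
    previous run (sketch of an "every N minutes" schedule)."""
    return last_run + timedelta(minutes=interval_minutes)
```

For example, if the previous run started at 12:00, the next run is due at 12:02. Frequent schedules keep transcripts close to real time but generate more API traffic.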
3. View results
If you run the job manually, you can watch the upload progress:
It takes some time for the cloud service to complete transcription and return the results. Usually, the results are available within a couple of minutes after upload.
You can check the status of recently uploaded files via Administration -> Speech Analytics -> Speech Analytics Processed Records.
After the status changes to "COMPLETE", you can view the call details and transcription by clicking "View call" directly on this page, or you can open the call details from the "Recordings" page as usual.
The screenshot below shows an example of a transcription.
When you play back the recording, the corresponding part of the transcript is automatically highlighted (see the yellow background in the following screenshot). Click a word of interest in the transcript, and the audio player will fast-forward to that location.