SDK UneeqOptions
When constructing the Uneeq() object, you are required to pass in UneeqOptions. These options are defined as follows.
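For example, constructing the Uneeq object might look like the following sketch. The property names shown are illustrative assumptions mapped to the option descriptions below; confirm the exact names against your SDK version.

// Minimal construction sketch; property names are assumptions for illustration.
const uneeq = new Uneeq({
  url: 'https://your-uneeq-server.example.com',  // server URL to connect to
  conversationId: 'your-conversation-id',        // conversation ID for the session
  avatarVideoContainerElement: document.getElementById('avatar') as HTMLDivElement,
  logging: false,
});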
Server URL to connect to.
Type: string
Uneeq conversation ID to use for the session.
Type: string
The div element that the avatar's video should be placed inside.
Type: HTMLDivElement
The div element that the local video should be placed inside.
Type: string (optional)
Enable diagnostic callbacks.
Type: boolean (optional)
Default Value: false
This option allows you to specify the URL of an image that appears behind the digital human during a session. You must contact the Customer Success team to get your background images added to the approved URLs.
Type: string (optional)
Set custom metadata to be sent with text or speech questions.
Type: string (optional)
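Since the metadata is a string, structured data is typically serialised before being set. A minimal sketch, assuming a customMetadata property name (illustrative only, not confirmed by this document):

// Sketch: serialise structured metadata to a string before passing it in.
// 'customMetadata' is an assumed property name for illustration.
const options = {
  // ...other required options...
  customMetadata: JSON.stringify({ userId: 'abc-123', channel: 'web' }),
};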
This option allows you to specify the URL of a name tag image that appears on the digital human during a session. You must contact the Customer Success team to get your name tag images added to the approved URLs.
Type: string (optional)
This option controls the visibility of client-side network performance messages in the developer console. These WebRTC statistics (e.g. packetsLost, framesDropped, framesPerSec) help identify whether session quality is being impacted by client-side conditions. Irrespective of this visibility setting, UneeQ's servers receive these messages to help measure platform performance and stability for end users.
Type: boolean (optional)
Default Value: false
This option controls whether the digital human is rendered with a transparent background, allowing them to be overlaid on top of your experience. If true, the digital human's configured background image will be replaced with a transparent background. If false, your configured background image will be displayed.
Type: boolean (optional)
Default Value: false
When using speech recognition input mode, the user's voice will automatically be detected when they start and stop speaking. To disable this behaviour, set enableVad to false. When this option is set to false, you are required to manage when the user's microphone is listening.
You will need to call pauseSpeechRecognition() and resumeSpeechRecognition() to control the user's microphone listening state, as in the sketch below.
Hint: you will still need to call enableMicrophone() to get access to the user's microphone before calling pause and resume speech recognition.
Type: boolean (optional)
Default Value: true
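For example, with enableVad set to false you might wire the listening state to your own UI. pauseSpeechRecognition(), resumeSpeechRecognition() and enableMicrophone() are the methods named above; the button wiring is an illustrative assumption.

// Manual microphone control when enableVad is false.
// enableMicrophone() must be called first to get access to the microphone.
uneeq.enableMicrophone();

// Illustrative wiring: listen only while a hypothetical "talk" button is held.
const talkButton = document.getElementById('talk-button')!;
talkButton.addEventListener('mousedown', () => uneeq.resumeSpeechRecognition());
talkButton.addEventListener('mouseup', () => uneeq.pauseSpeechRecognition());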
When using speech recognition input mode with enableVad enabled, the user's voice will not interrupt the digital human while they are speaking. You can override this behaviour by setting enableInterruptBySpeech to true.
If enableVad is true, then unmuting the microphone will stop the digital human from speaking and interrupt them irrespective of this configuration.
Note: typing a text message to the digital human will also interrupt the digital human.
Type: boolean (optional)
Default Value: false
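A sketch of the two options together, using the enableVad and enableInterruptBySpeech names described above:

// Allow the user's speech to interrupt the digital human mid-utterance.
const options = {
  // ...other required options...
  enableVad: true,               // automatic voice detection (the default)
  enableInterruptBySpeech: true, // override the default of false
};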
Enable logging.
Type: boolean (optional)
Default Value: false
Provide a function to be used for message handler callbacks.
Type: function (msg: any) (optional)
Default Value: undefined
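For instance, a handler sketch; logging everything is a reasonable starting point while developing, and the message shape should be inspected rather than assumed:

// Sketch of a message handler callback.
function messageHandler(msg: any): void {
  // Log every message while developing; route on message fields in production.
  console.log('Uneeq message:', msg);
}

const options = {
  // ...other required options...
  messageHandler, // assumed property name for the callback option above
};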
Whether you want to receive mic activity messages or not.
Type: boolean (optional)
Default Value: false
Whether the NLP welcome message should be triggered.
Type: boolean (optional)
Default Value: false
Device ID of the preferred camera to start the session with.
Type: string (optional)
Default Value: undefined
Device ID of the preferred microphone to start the session with.
Type: string (optional)
Default Value: undefined
Device ID of the preferred speaker to start the session with.
Type: string (optional)
Default Value: undefined
Whether the user's local audio stream (microphone) should be sent on session start.
Type: boolean (optional)
Default Value: true
Whether the user's local video stream (camera) should be sent on session start.
Type: boolean (optional)
Default Value: true
A comma-separated string of hint phrases used to bias the speech recognition system toward words to expect. Phrases provided here are more likely to be detected by the speech recognition system.
Type: string (optional)
Default Value: ""
Example Value: UneeQ, digital human, New Zealand
A number between 0 and 20 that can be used to boost the likelihood of hint phrase words being detected in speech.
Type: number (optional)
Default Value: 0
Example Value: 15
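The hint phrases and boost options work together; a configuration sketch, with assumed property names (illustrative only):

// Bias speech recognition toward domain vocabulary.
const options = {
  // ...other required options...
  speechRecognitionHintPhrases: 'UneeQ, digital human, New Zealand', // assumed name
  speechRecognitionHintPhrasesBoost: 15,                             // assumed name; 0-20
};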
This option allows you to specify up to four colon-separated locale codes (language tags) for languages that the end user can speak and the digital human should understand.
The first locale in the list is considered the primary locale, e.g. en-US:ja-JP:de-DE.
Each locale must include both language and region subtags, e.g. de-DE, not just DE.
Type: string (optional)
The capture mode for the user's voice via the microphone. When in PUSH_TO_TALK mode, the startRecording and stopRecording methods must be called for voice to be captured. When in SPEECH_RECOGNITION mode, the user's voice will be captured automatically without calling startRecording or stopRecording.
Type: string
Values: "PUSH_TO_TALK", "SPEECH_RECOGNITION"
Default Value: "PUSH_TO_TALK"
"SPEECH_RECOGNITION" is a beta feature and subject to change. You may encounter issues when using this feature.