SDK UneeqOptions
When constructing the Uneeq() object, you are required to pass in UneeqOptions. These options are defined as follows.
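A minimal construction might look like the sketch below. The option names come from the definitions that follow; the package name, import path, and example values are assumptions and may differ in your integration.

```typescript
// Minimal sketch only: the "uneeq-js" package name, server URL and conversation ID
// below are placeholders / assumptions, not documented values.
import { Uneeq, UneeqOptions } from "uneeq-js";

const options: UneeqOptions = {
  url: "https://your-uneeq-server.example.com",   // placeholder server URL
  conversationId: "your-conversation-id",         // placeholder conversation ID
  avatarVideoContainerElement: document.getElementById("avatar") as HTMLDivElement,
  messageHandler: (msg: any) => console.log("UneeQ message:", msg),
};

const uneeq = new Uneeq(options);
```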
url
Server URL to connect to.
Type: string
conversationId
Uneeq conversation ID to use for the session.
Type: string
avatarVideoContainerElement
The div element that the avatar's video should be placed inside.
Type: HTMLDivElement
localVideoContainerElement
The div element that the local video should be placed inside.
Type: HTMLDivElement (optional)
diagnostics
Enable diagnostic callbacks.
Type: boolean (optional)
Default Value: false
backgroundImageUrl
This option allows you to specify the URL of an image that appears behind the digital human during a session. You must contact the Customer Success team to have your background image URLs added to the approved list.
Type: string (optional)
nameTagImageUrl
This option allows you to specify the URL of a name tag image that appears on the digital human during a session. You must contact the Customer Success team to have your name tag image URLs added to the approved list.
Type: string (optional)
enableClientPerformanceMessage
This option controls the visibility of client-side network performance messages in the developer console. These WebRTC statistics (e.g. packetsLost, framesDropped, framesPerSec) help identify whether session quality is being impacted by client-side conditions. Irrespective of this visibility setting, UneeQ's servers receive these messages to help measure platform performance and stability for end users.
Type: boolean (optional)
Default Value: false
enableTransparentBackground
This option controls whether the digital human is rendered with a transparent background, allowing it to be overlaid on top of your experience. If true, the digital human's configured background image will be replaced with a transparent background. If false, your configured background image will be displayed.
Type: boolean (optional)
Default Value: false
logging
Enable logging.
Type: boolean (optional)
Default Value: false
messageHandler
Provide a function to be used for message handler callbacks.
Type: function (msg: any) (optional)
Default Value: undefined
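As a development aid, a handler can simply log each message; the sketch below assumes nothing about the message shape, which is why msg is left as any.

```typescript
// Sketch: the structure of `msg` is not documented in this section, so it is
// left as `any`; inspect logged messages to discover the fields you need.
const messageHandler = (msg: any): void => {
  console.log("UneeQ message:", msg);
};
```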
micActivityMessages
Whether mic activity messages should be received.
Type: boolean (optional)
Default Value: false
playWelcome
Whether the NLP welcome message should be triggered.
Type: boolean (optional)
Default Value: false
preferredCameraId
Device ID of the preferred camera to start the session with.
Type: string (optional)
Default Value: undefined
preferredMicrophoneId
Device ID of the preferred microphone to start the session with.
Type: string (optional)
Default Value: undefined
preferredSpeakerId
Device ID of the preferred speaker to start the session with.
Type: string (optional)
Default Value: undefined
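Device IDs for preferredCameraId, preferredMicrophoneId and preferredSpeakerId can be looked up with the standard browser MediaDevices API; the sketch below shows one way to find a microphone's device ID. Note that device labels are only populated after the user has granted media permissions.

```typescript
// Sketch: standard browser API, not part of the UneeQ SDK itself.
// Returns the deviceId of the first audio input whose label contains `label`,
// or undefined if no match is found.
async function findMicrophoneId(label: string): Promise<string | undefined> {
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices.find(
    (d) => d.kind === "audioinput" && d.label.includes(label)
  )?.deviceId;
}
```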
sendLocalAudio
Whether the user's local audio stream (microphone) should be sent on session start.
Type: boolean (optional)
Default Value: true
sendLocalVideo
Whether the user's local video stream (camera) should be sent on session start.
Type: boolean (optional)
Default Value: true
voiceInputMode
The capture mode for the user's voice via the microphone. When in push to talk mode, the startRecording and stopRecording methods must be called for voice to be captured. When in voice activity mode, the user's voice is captured automatically without calling startRecording or stopRecording.
Type: string
Values: "PUSH_TO_TALK" "VOICE_ACTIVITY"
Default Value: "PUSH_TO_TALK"
"VOICE_ACTIVITY" is a beta feature and subject to change. You may encounter issues when using this feature.