SDK Methods
new Uneeq(options: UneeqOptions)
Construct a new Uneeq object. See SDK UneeqOptions for constructor parameter details.
Parameters: options: SDK UneeqOptions
Returns Uneeq
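A minimal construction sketch. The option fields below are illustrative placeholders only; the real shape is defined by SDK UneeqOptions, which is documented separately.

```typescript
// Sketch only: the fields of `options` are placeholders, not the documented
// UneeqOptions contract. See SDK UneeqOptions for the real fields, and import
// Uneeq / UneeqOptions according to how you load the SDK.
const options = {
  // e.g. connection details, container elements, a message handler, ...
} as UneeqOptions;

const uneeq = new Uneeq(options);
```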
sessionId
The active session id as a string. If the session has not started yet, the value will be null. Note that this is an accessor, not a method; retrieve the value by reading it as a property of the Uneeq instance, e.g. uneeq.sessionId.
Parameters: no parameters
Returns string | null
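For example, with the uneeq instance created above:

```typescript
// sessionId is an accessor, not a method; it is null until the session starts.
const id: string | null = uneeq.sessionId;
console.log(id ?? 'Session has not started yet');
```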
enableMicrophone(enable?: boolean)
Enable or disable the user's microphone.
Parameters: enable: boolean (optional). Defaults to true.
Returns void
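For example:

```typescript
// Temporarily mute the user's microphone, then re-enable it later.
uneeq.enableMicrophone(false);
// ...
uneeq.enableMicrophone(); // enable defaults to true
```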
endSession()
Ends the session, releases the microphone and camera, and ends the avatar process. On success, SessionEndedMessage will be sent. On error, ErrorEndingSessionMessage will be sent.
Parameters: no parameters
Returns void
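A sketch of wiring endSession to a UI control; how the resulting messages are received depends on your message handling setup, which is not covered in this section.

```typescript
// Wire an "End chat" control (element id is illustrative) to endSession().
// The outcome arrives as SessionEndedMessage or ErrorEndingSessionMessage
// through whatever message handling your integration has configured.
document.querySelector('#end-chat')?.addEventListener('click', () => {
  uneeq.endSession();
});
```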
initWithToken(tokenId: string)
Initialise the session. ReadyMessage will be sent when initialisation is complete. initWithToken should be used instead of init() when a third-party conversation service is used.
Parameters: tokenId: string
Returns Promise<void>
Note: In order to avoid browser autoplay restrictions, you must call initWithToken as part of a mouse or keyboard event chain. Calling initWithToken before the user has interacted with the page may cause the session to start with the digital human audio muted. When this occurs, you will receive a DigitalHumanPlayedInMutedModeSuccess message. You will need to call unmuteDigitalHuman after the user has interacted with the page to unmute the digital human. Alternatively, the user may click the digital human video to unmute audio.
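A sketch of this flow under a few stated assumptions: the token is fetched from your own backend (the /api/uneeq-token endpoint is hypothetical), and SDK messages such as ReadyMessage and DigitalHumanPlayedInMutedModeSuccess reach a handler you have wired up elsewhere (the uneeqMessageType field name is an assumption; check your actual message payloads).

```typescript
// Start the session from a click handler so the call sits inside a user
// gesture and avoids browser autoplay restrictions.
document.querySelector('#start')?.addEventListener('click', async () => {
  const res = await fetch('/api/uneeq-token'); // hypothetical backend endpoint
  const { token } = await res.json();
  await uneeq.initWithToken(token);
});

// Wherever your integration receives SDK messages (setup not shown here):
function handleUneeqMessage(msg: { uneeqMessageType?: string }): void {
  if (msg.uneeqMessageType === 'ReadyMessage') {
    // Initialisation is complete; optionally trigger the welcome message.
    uneeq.playWelcomeMessage();
  }
  if (msg.uneeqMessageType === 'DigitalHumanPlayedInMutedModeSuccess') {
    // Autoplay started the digital human muted; unmute on the next interaction.
    document.addEventListener('click', () => uneeq.unmuteDigitalHuman(), { once: true });
  }
}
```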
unmuteDigitalHuman()
Use this method to unmute a digital human that was started in muted mode. If your digital human was started in muted mode due to browser autoplay policy, you will receive a DigitalHumanPlayedInMutedModeSuccess message.
Parameters: no parameters
Returns void
pauseSpeechRecognition()
Pauses speech recognition when using the SPEECH_RECOGNITION voiceInputMode. The user's microphone will be muted and audio data will not be processed.
Parameters: no parameters
Returns boolean
playWelcomeMessage()
Trigger this conversation's welcome message.
Parameters: no parameters
Returns Promise<void>
resumeSpeechRecognition()
Resumes speech recognition when using the SPEECH_RECOGNITION voiceInputMode. The user's microphone will be unmuted and audio data will begin processing again.
Parameters: no parameters
Returns boolean
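For example, pausing recognition while a settings dialog is open and resuming afterwards (the returned boolean is ignored here):

```typescript
// Mute the microphone and stop processing audio while the dialog is open.
uneeq.pauseSpeechRecognition();

// ...when the dialog closes, unmute and resume processing.
uneeq.resumeSpeechRecognition();
```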
sendTranscript(text: string)
Send a text-based message to the digital human.
Parameters: text: string
Returns void
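For example, forwarding text from an input field (the element id is illustrative):

```typescript
// Send a typed question to the digital human.
const input = document.querySelector<HTMLInputElement>('#question-input');
const text = input?.value.trim();
if (text) {
  uneeq.sendTranscript(text);
}
```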
setAvatarDebug(enabled: boolean)
Enable or disable avatar debugging.
Parameters: enabled: boolean
Returns Promise<any>
setCamera(deviceId: string)
Set a preferred camera to use in a live session. On success, SetCameraSuccessMessage will be sent. DeviceNotFoundErrorMessage will be sent when the requested device is not found.
Parameters: deviceId: string
Returns void
setMic(deviceId: string)
Set a preferred microphone to use in a live session. On success, SetMicSuccessMessage will be sent. DeviceNotFoundErrorMessage will be sent when the requested device is not found.
Parameters: deviceId: string
Returns void
setSpeaker(deviceId: string)
Set a preferred speaker to use in a live session. On success, SetSpeakerSuccessMessage will be sent. DeviceNotFoundErrorMessage will be sent when the requested device is not found.
Parameters: deviceId: string
Returns void
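A sketch of choosing a microphone, assuming the device id comes from the standard navigator.mediaDevices.enumerateDevices() browser API; the same pattern applies to setCamera ('videoinput') and setSpeaker ('audiooutput').

```typescript
// Pick a microphone with the standard browser device enumeration API and
// hand its deviceId to the SDK.
async function useFirstMicrophone(): Promise<void> {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const mic = devices.find((d) => d.kind === 'audioinput');
  if (mic) {
    // SetMicSuccessMessage or DeviceNotFoundErrorMessage will follow.
    uneeq.setMic(mic.deviceId);
  }
}
```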
startRecording()
Start recording voice audio through the microphone. Call this method before using voice to speak to the avatar. stopRecording should be called when the user has finished speaking.
Parameters: no parameters
Returns void
stopRecording()
Stop recording voice audio through the microphone. Call this method after startRecording has been called.
Parameters: no parameters
Returns void
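A push-to-talk sketch using startRecording and stopRecording (the element id and pointer events are illustrative):

```typescript
// Push-to-talk: record while the button is held down, stop on release.
const talkButton = document.querySelector('#push-to-talk');

talkButton?.addEventListener('mousedown', () => uneeq.startRecording());
talkButton?.addEventListener('mouseup', () => uneeq.stopRecording());
```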
stopSpeaking()
Stop the avatar from speaking. This will stop the avatar speaking even if interrupts are turned off. When successful, a corresponding AvatarRequestCompleted message should be received.
Parameters: no parameters
Returns Promise<void>
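For example, an interrupt control (element id is illustrative):

```typescript
// Interrupt the avatar mid-utterance from a button. A corresponding
// AvatarRequestCompleted message should follow on success.
document.querySelector('#interrupt')?.addEventListener('click', () => {
  void uneeq.stopSpeaking();
});
```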
uneeqSetCustomChatMetadata(metadata: string)
Set metadata (stringified JSON) that will be sent with text or speech questions. Once set, the chat metadata persists with all future questions until it is overwritten with a new value.
Example: To have the user's name sent as metadata to your NLP system with every question, set the chat metadata with uneeqSetCustomChatMetadata(JSON.stringify({ name: 'John' })).
Example: To clear the metadata, call uneeqSetCustomChatMetadata(''). After this call, text or speech questions will no longer include the metadata {name: 'John'}.
Parameters: metadata: string (stringified JSON)
Returns void