Customizing your experience

You may customize various aspects of your Hosted Experience by adding any number of the following options to the uneeqInteractionsOptions section of your deploy script. Add each option underneath the personaShareId: "ENTER-PERSONA-SHARE-ID-HERE", line of the deploy script.

If you need to adjust these settings dynamically after the session has started, you can do so by using Hosted Experience Methods.

Example

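A minimal sketch of the options object, assuming uneeqInteractionsOptions is the global object your deploy snippet defines; the surrounding script tags and their URL come from your own deploy snippet in the Creator portal, and the extra options shown are illustrative:

JS
// Options are read from uneeqInteractionsOptions when the deploy script loads.
window.uneeqInteractionsOptions = {
    personaShareId: "ENTER-PERSONA-SHARE-ID-HERE",
    // Any of the options documented below can be added here, for example:
    displayCallToAction: true,
    layoutMode: "overlay"
};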

personaShareId <string>

Required. Default Value: null

The unique persona share ID. This value can be found in the Creator portal, or will be provided to you by your customer success representative.

Example: personaShareId: "e0a935ad-f3ec-4469-af89-bf4fe6a0a246"





displayCallToAction <boolean>

Default Value: false

Whether the call to action (a digital human preview with a message) should be displayed at the bottom of the page.

Example: displayCallToAction: true





position <string>

Default Value: "right"

When using split screen layout mode, this will determine whether the left or right side of the page is used to display the digital human.

Values: "right", "left"

Example: position: "left"





renderContent <boolean>

Default Value: true

Determines whether UneeQ should render content on screen. If true, UneeQ renders content on screen when it is sent from the NLP system; if false, the content is not rendered. Setting this to false may be desirable if you wish to render content in a different section of your website, giving you more fine-tuned control. See //TODO event handling page.

Example: renderContent: false





callToActionText <string>

Default Value: "👋 Hi. Is there anything I can help you with?"

The text to be displayed inside the call to action.

Example: callToActionText: "Hello. Click here to chat with me."





ctaThumbnailUrl <string>

Default Value: undefined

Provide a URL to an image that will be used in the call to action. The image should be a square 140px .jpg, .png or .gif.



Example: ctaThumbnailUrl: "https://cdn.your-domain.com/image1"





cameraPosition <string> | <object>

Default Value: "CENTER"

The horizontal camera position to be used.

Values: "CENTER", "LEFT", "RIGHT"

Example: cameraPosition: "RIGHT"

cameraPosition can optionally be a JSON object of X (horizontal), Y (vertical) and Z (distance) offsets, as sketched below:

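A sketch of the object form; the key names here are inferred from the value descriptions below, so treat them as assumptions:

JS
cameraPosition: {
    horizontal: 0.5,  // assumed key: positive values move the camera right
    vertical: 0.2,    // assumed key: positive values move the camera up
    distance: -0.3    // assumed key: negative values zoom the camera out
}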
The possible values range from -1.0 to 1.0.

Positive vertical values move the camera up, which moves the character down within the frame.

Negative horizontal values move the camera left; positive values move it right.

Positive distance values zoom the camera in, making the character appear larger.



customStyles <string>

Default Value: null

A string of CSS to be applied to the digital human frame. This allows you to inject your own CSS styling rules and overrides. Making large changes via customStyles is discouraged, as the Hosted Experience interface is subject to change; for example, a class name you rely on as a CSS selector could change without warning.

Example: customStyles: `h1 { font-size: 150%; }`





enableTransparentBackground <boolean>

Default Value: false

Whether the digital human should be rendered with a transparent background, rather than rendering a background as part of the video stream. If true, no background is included in the video stream, and you may position your own background behind the digital human.

Example: enableTransparentBackground: true





errorText <string>

Default Value: "⚠ Sorry, I am busy right now. Please try again later."

Text to be displayed when the digital human cannot be started.

Example: errorText: "Oops, something went wrong, please try again later."





playWelcome <boolean>

Default Value: false

If your NLP has a 'welcome' message and this option is set to true, the welcome message will be triggered at session start.

Example: playWelcome: true



mobileViewWidthBreakpoint <integer>

Default Value: 900

The screen width, in pixels, at or below which the layout switches into mobile view.

Example: mobileViewWidthBreakpoint: 0 (a value of zero disables mobile view entirely)



backgroundImageUrl <string>

A publicly accessible URL to a background image that will be inserted behind the digital human once the session loads.

Example: backgroundImageUrl: "https://images.yourdomain.com/background1.png" (omitting this option, or providing a blank, incorrect or invalid URL, will result in a default background image from the Creator catalog being loaded)

Note: For security purposes, please provide this image to UneeQ for secure hosting before using this option.





layoutMode <string>

Default Value: "overlay"

The layout mode the session should be started in. Note: the layout mode can be changed during a session by using the uneeqSetLayoutMode method.

Contained layout mode requires you to place a container div element in the page with its id set to uneeqContainedLayout; the digital human will be rendered inside this element. This allows you to embed the digital human into a section of your website to create a more integrated experience. Adding or removing your container element from the DOM will end the session; however, you may move the element around the page using CSS to create a responsive experience. A sketch of this setup appears after the example below.

Values: "overlay", "splitScreen", "fullScreen", "contained"

Example: layoutMode: "fullScreen"
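A minimal sketch of a contained-mode setup; the container id comes from the description above, and the options object follows the deploy script example at the top of this page:

JS
// The page markup must include a container element such as:
// <div id="uneeqContainedLayout"></div>
window.uneeqInteractionsOptions = {
    personaShareId: "ENTER-PERSONA-SHARE-ID-HERE",
    layoutMode: "contained"
};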

Note: splitScreen layout mode will adjust the width of your page's <body> element to be 50% of the window width. This can produce unexpected results if your website is using CSS vw (view width) for styling.





logging <boolean>

Default Value: false

If true, verbose JavaScript logging will be enabled.

Example: logging: true





enableMicrophone <boolean>

Default Value: false

If true, access to the user's microphone will be requested as soon as the session starts (if not previously accepted or declined).

Example: enableMicrophone: true





showUserInputInterface <boolean>

Default Value: false

If true, the voice and text input component will be shown when the session starts, so it does not need to be turned on during the session using the uneeqSetShowUserInputInterface() method.

Example: showUserInputInterface: true





textInputPlaceholder <string>

Default Value: "Type here..."

The value provided will be used as the text input placeholder when displaying the user input interface.

Example: textInputPlaceholder: "Type your message"





voiceInputMode <string>

Default Value: "PUSH_TO_TALK"

Which voice input method should be used: push to talk or voice activity. In push to talk mode, the user must press a button on screen to start and stop voice recordings. In speech recognition mode, the user does not press any button; instead, their voice is automatically transcribed.

Values: "PUSH_TO_TALK", "SPEECH_RECOGNITION"

Example: voiceInputMode: "SPEECH_RECOGNITION"

PUSH_TO_TALK input mode is marked for deprecation. You should exclusively use "SPEECH_RECOGNITION" mode.







enableVad <boolean>

Default Value: true

When using speech recognition input mode, the user's voice is automatically detected when they start and stop speaking. To disable this behaviour and instead use the on-screen microphone buttons (or methods), set enableVad to false. When this option is set to false, you are required to manage when the user's microphone is listening.

You can rely on the on-screen microphone button to let users push to talk, or you can use the page methods pauseSpeechRecognition() and resumeSpeechRecognition(); see the sketch after the example below.

Example: enableVad: false
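A sketch of manual microphone control when enableVad is false, assuming pauseSpeechRecognition() and resumeSpeechRecognition() are exposed as global page methods as described above; the button element is hypothetical:

JS
// Hypothetical push-to-talk button wired to the page methods named above.
const talkButton = document.getElementById("talk-button");
talkButton.addEventListener("mousedown", () => resumeSpeechRecognition()); // start listening
talkButton.addEventListener("mouseup", () => pauseSpeechRecognition());    // stop listening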







enableInterruptBySpeech <boolean>

Default Value: false

When using speech recognition input mode with enableVad, the user's voice will not interrupt the digital human while it is speaking. You can override this behaviour by setting enableInterruptBySpeech to true.

If enableVad is true, then unmuting the microphone will stop the digital human speaking and interrupt it, irrespective of this configuration.

Note: Typing a text message to the digital human will interrupt the digital human.

Example: enableInterruptBySpeech: true





autoStart <boolean>

Default Value: false

You can configure your session to start automatically on page load, without calling uneeqStartSession, by setting autoStart to true.

When using autoStart you may find that your session begins with the digital human's audio muted. This occurs because the browser's autoplay policy prevents audio from playing before the user has interacted with the page. When this happens you will receive a DigitalHumanPlayedInMutedModeSuccess message; once it is received, you may call uneeqUnmuteDigitalHuman() after the user has interacted with the page. Alternatively, the user may click on the digital human video to unmute it.
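A sketch of handling that muted-autoplay case. The "UneeqMessage" event name and the event.detail.uneeqMessageType payload shape are assumptions not confirmed by this page, so check the event handling documentation for the actual delivery mechanism; uneeqUnmuteDigitalHuman() is the method named above:

JS
// Assumed event name and payload shape; verify before relying on this.
window.addEventListener("UneeqMessage", (event) => {
    if (event.detail.uneeqMessageType === "DigitalHumanPlayedInMutedModeSuccess") {
        // Unmute on the first user interaction, satisfying the browser autoplay policy.
        document.addEventListener("click", () => uneeqUnmuteDigitalHuman(), { once: true });
    }
});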





containedAutoLayout <boolean>

Default Value: false

For sessions that are started in 'contained' layout mode you may set this value to true. When containedAutoLayout is true, the layout mode will automatically be changed to 'overlay' once the digital human container has been scrolled so that one quarter of the container is off screen. When the user scrolls the container back into view, the layout mode will be changed back to 'contained'.





showClosedCaptions <boolean>

Default Value: true

Whether closed captions should be displayed in the interface. Closed captions display both the user's speech transcription and the digital human's speech transcription. This value can be changed dynamically during a session by using the uneeqSetShowClosedCaptions() method.

This option is only applied when using SPEECH_RECOGNITION voice input mode.
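For example, toggling captions mid-session with the method named above (a sketch, assuming it is exposed as a global page method like the other methods on this page):

JS
uneeqSetShowClosedCaptions(false); // hide captions
uneeqSetShowClosedCaptions(true);  // show them again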





initLoadHandler <boolean>

Default Value: true

Sets whether the digital human frame should be initialised on page load. When this value is true, the digital human frame will be initialised when the page is loaded (via a page load event handler). When this value is false, the digital human frame will not be added to the page on page load.

If you set this value to false then you will need to call uneeq.init() yourself when you want the digital human frame to be loaded. This is desirable when you want to delay or prevent loading the digital human until some data has been retrieved, or some action has been taken on the page. See uneeq.init.
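A sketch of deferred initialisation; loadInitialData() is a hypothetical placeholder for whatever your page must do first, while uneeq.init() is the method named above:

JS
window.uneeqInteractionsOptions = {
    personaShareId: "ENTER-PERSONA-SHARE-ID-HERE",
    initLoadHandler: false // do not initialise the digital human frame on page load
};

// loadInitialData() is hypothetical; initialise the frame once it resolves.
loadInitialData().then(() => uneeq.init());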





languageStrings <object>

Default Value: <none>

The languageStrings property can be defined to update any of the text displayed within the Hosted Experience interface. The languageStrings object should contain keys corresponding to ISO 639-1 language codes, e.g. en, es, ja. Within each language code key, provide an object containing the keys of the values you want to update. Additionally, a specific locale/region may be provided for more precise language targeting, e.g. en-US, en-GB, de-DE, pt-BR.

When a user loads the digital human experience, their browser provides a list of their preferred languages (an array in preference order). Hosted Experience will iteratively search the languageStrings you provide to find a match based on the user's preferred browser languages. More information on how the browser detects the preferred language can be found here.

For instance, if a user lives and works in Germany but their preferred language is French, the browser will send "fr, de" in that order, and the French values will be returned if you have configured them, even though the default locale would suggest German.

If a language code is not provided then the 'default' value will be used. If no 'default' configuration is specified by you, then the Hosted Experience base values will be used.

A complete list of languageString keys may be found here: Language Strings.

Example:

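A sketch of the languageStrings shape described above; textInputPlaceholder is used as an illustrative key, so check the Language Strings page for the actual key names:

JS
languageStrings: {
    default: {
        textInputPlaceholder: "Type here..." // used when no language code matches
    },
    fr: {
        textInputPlaceholder: "Écrivez ici..."
    },
    "pt-BR": {
        textInputPlaceholder: "Digite aqui..."
    }
}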






speechToTextLocales <string>

Default Value: <none>

This option allows you to specify up to four colon-separated locale codes (language tags) for languages that the end user can speak and the digital human should understand.

The first locale in the list is considered the primary locale.

Each locale should include both a language and a region, e.g. de-DE, not DE.

Example: speechToTextLocales: "en-US:ja-JP:de-DE"





speechRecognitionHintPhrases <string>

Default Value: ""

A comma-separated string of hint phrases that tell the speech recognition system which words to expect. Phrases provided here are more likely to be detected by the speech recognition system.

Example: speechRecognitionHintPhrases: "UneeQ, digital human, New Zealand"





speechRecognitionHintPhrasesBoost <number>

Default Value: 0

A number between 0 and 20 that can be used to boost the likelihood of hint phrase words being detected in speech.

Example: speechRecognitionHintPhrasesBoost: 15