The Response Generator page allows you to specify the behavior of the avatar using CodeBaby Markup Language.
CodeBaby Markup Language – or CBML – is our proprietary tagging system that allows you to add instructions to direct your avatars while they’re responding to users. Think about it like stage directions in a play or movie production. In your script, you’ll use these tags to “direct” your avatar to use particular “gestures” or animations, control vocal attributes, and interact with the webpage or application where your avatar lives.
Our CBML tags are inserted into your response scripts at the point where you want them enabled, and your avatar will perform them where you specify. They work whether you’re using a Natural Language Processor (NLP), such as Google Dialogflow, to control your conversation or running your own conversation logic against our CodeBaby Avatar API.
In our Response Generator, once you have added all of your desired CBML tags and previewed the scene by clicking the “Render Scene” button, copy your tagged text from the Response Content text area and paste it into your NLP or your code where you place your responses.
Edit in Dialogflow
The link to the currently active Dialogflow agent is displayed. This allows you to confirm you are connected to the correct agent as well as access the agent directly.
New Intent
The New Intent field allows you to create a new intent or select, display, and edit an existing intent. Any edits that are made are saved directly to the Dialogflow agent.
Save
After an intent is edited, select Save to save the intent to Dialogflow.
Refresh Current Chat
Refresh Current Chat resets the chat conversation and erases the prior dialog.
Page Interaction
Page Interaction tags let your conversation interact with the page where your avatar is embedded.
Sometimes you will want your Avatar’s response to trigger an interaction with your site – to scroll to a form she mentions or to highlight a button on the screen, for example. This is when you’d turn to a Page Interaction Tag to help.
Page Interaction Tags
- Highlight by ID
- Highlight by Selector
- Scroll to ID
- Scroll to Selector
SCROLL TO
Currently, you can add directions to scroll to a DOM object (an id, a class, etc) on your page at a given point in your response. Similar to gesture tags, place your cursor in your response text where you want the scroll action to trigger, select “Scroll to Selector” from the Page Interaction dropdown menu, and click “Insert”.
The Response Generator will insert a formatted tag for you to enter your selector.
Simply replace ___SELECTOR___ with .classname or #idname. Be sure to leave the quotes in place.
You won’t be able to test the interaction until the Avatar performs the scene on your website (our admin tool doesn’t have the same DOM objects as your site). But you can click the “Render Scene” button to ensure that there are no syntax errors on the tag.
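If you’re unsure whether your selector is valid, note that these are the same selector strings accepted by the browser’s standard document.querySelector API, so you can sanity-check them in your site’s developer console (the element names below are hypothetical):

// Run in your site’s browser console; each call returns the element or null.
document.querySelector('#contact-form');  // an id selector: replace ___SELECTOR___ with #contact-form
document.querySelector('.signup-button'); // a class selector: replace ___SELECTOR___ with .signup-button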
HIGHLIGHT
You can trigger a style to be applied to a DOM object at a point in the conversation that you choose. The default behavior for this tag is to apply the “codebaby-highlight” style to the DOM object you define (for most UIs, the codebaby-highlight style definition is available in the UI tab of the partner portal). You can also change the style applied to an object by replacing “codebaby-highlight” in the tag with a style that you define. You can add that style and its properties in the UI tab of the partner portal.
You will replace ___SELECTOR___ with the .classname or #idname of the object to which you want the new style applied. The tag also allows you to set the length of time for which the style is applied. For example, with "time" : 3000, the style is applied for 3000 milliseconds before it is removed. To apply a style without a time limit, remove the following: , "time" : 3000
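To preview the highlight before your avatar is live, you can apply the style by hand in your site’s developer console. This is a minimal sketch, assuming the style is applied as a CSS class (the CodeBaby runtime may apply it differently, and the selector here is hypothetical):

// Apply the highlight class to an element, then remove it after 3000 ms.
var el = document.querySelector('#signup-button');
el.classList.add('codebaby-highlight');
setTimeout(function () { el.classList.remove('codebaby-highlight'); }, 3000);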
Animation / Gesture Tag
Gestures can be added to your avatar’s conversation.
CodeBaby Avatars are pre-programmed with “idle” gestures that help make your avatars look natural when they’re speaking with your customers. But for some conversations, you might want to add gestures in the context of your responses to emphasize points, express empathy, or direct customers’ attention to a certain part of the screen. That’s where our CodeBaby Gesture tags come in handy.
In the Response Generator tab in your CodeBaby Administration Portal, you will see a dropdown of gestures that are available for the Avatar model you have selected.
- Create your response text in the Response Content text area
- Place your cursor at the point in your conversation where you want to insert your gesture
- Select the desired gesture from the dropdown and click “Insert.”
- To see the gesture(s) in the context of your conversation, click the “Render Scene” button.
- If you want to preview a gesture before adding it to your response text, select the gesture from the pulldown and click the “Play” button.
Because we do have default gestures in place, you will likely only add specific gesture tags when you want a specific action performed at a certain point in your response. There is typically no need to add gestures explicitly to all of your responses.
SSML Tag
Speech Synthesis Markup Language (SSML) is an XML-based markup language that can be used to fine-tune the text-to-speech output attributes such as volume, pitch, pronunciation, speaking rate, and more. SSML gives you more control and flexibility compared to plain text input.
Not all SSML tags are supported by all voices. We work to maintain a list of the supported tags for each voice but there can be gaps as services change. If an SSML tag is encountered that is not supported, the tag will be removed during the voice generation.
SSML – or Speech Synthesis Markup Language – enables you to adjust the speech the avatar generates. We currently use Amazon Polly for speech synthesis and have two types of voices available: Standard and Neural. Standard voices sound a bit more “flat” and computerized, but the entire set of SSML tags (with the exception of Domain tags) can be used to create vocally nuanced responses. Neural voices are richer and more robust and include some voice inflections as standard, but some SSML tags aren’t supported by the neural synthesis algorithm.
The Partner Portal includes a subset of SSML tags that can be inserted into the desired location of your responses, and you can find all supported tags here: https://docs.aws.amazon.com/polly/latest/dg/supportedtags.html
Because many of the SSML tags apply to a span of text rather than a single trigger point, highlight the text in your response that the tag should affect. Other tags, such as breaks, are inserted at a specific point. Once you have added your tags, you can test them by clicking the “Render Scene” button. If the output says “Invalid SSML,” check that the tags you used are compatible with the voice and voice type you have selected.
Available SSML Tags:
- Break (pause) – none, x-weak, weak, medium, strong, x-strong, custom
- Say As – digits, spell-out, number, ordinal, fraction, date, time, address, expletive, telephone
- Emphasis – strong, moderate, reduced
- Language – German, Australian English, Canadian English, British English, Indian English, US English, Spanish, Mexican Spanish, US Spanish, Canadian French, French, Hindi, Italian, Japanese, Portuguese
- Rate – x-slow, slow, medium, fast, x-fast
- Volume – default, silent, x-soft, soft, medium, loud, x-loud
- Pitch – default, x-low, low, medium, high, x-high
- Speech Style – news, conversational
- Abbreviation – abbreviation
- Miscellaneous – background image
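For example, a response combining a few of the tags above might look like the following. This uses standard Amazon Polly SSML syntax; note that some tags, such as emphasis, are not supported by Neural voices:

Your total today is <emphasis level="strong">forty-two dollars</emphasis>. <break time="500ms"/> You can reach our billing team at <say-as interpret-as="telephone">1-800-555-0123</say-as>.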
Emotion Tag
Emotion Tags add emotion to your avatar’s conversation through gestures. Emotions such as Empathy, Sadness, or Compassion can be used to modify your avatar’s response.
In the Response Generator tab in your CodeBaby Administration Portal, you will see a dropdown of Emotion tags that are available for the Avatar model you have selected.
- Create your response text in the Response Content text area
- Highlight the block of text in your conversation where you want to apply the emotion.
- Click “Insert.”
- The tag is added around the highlighted text: <mark name='{"emotion":"empathy"}'> is inserted at the start of the text and </mark> at the end.
- Click the “Render Scene” button.
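Using the format above, a tagged response might look like this (the sentence text is just an illustration):

<mark name='{"emotion":"empathy"}'>I'm so sorry to hear that. I know how frustrating it can be.</mark> Let's see what we can do to fix it.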
Default Emotion tags:
- Joy
- Empathy
- Fun
- Celebration
- Compassion
- Sadness
- Anger
- Frustration
- Scolding
- Fear
NOTE: You can create your own emotion sets using JSON on the Character page using Idle Animations. There are two groups of gestures, one for when the avatar is talking and one for when it is not talking.
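As a rough sketch, a custom emotion set might be structured like the JSON below. The field and gesture names here are hypothetical; the actual schema is defined by the Idle Animations settings on the Character page:

{
  "reassurance": {
    "talking": ["nod_soft", "open_palms"],
    "notTalking": ["smile_gentle"]
  }
}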
Auto Tag with GPT
Based on the vocabulary of the text in the Script field, Auto Tag with GPT anticipates the intended emotion and inserts the corresponding emotion tag for you.
Script
The Script section displays the text response for the current intent. If you select an existing intent in the Intent dropdown, its response displays in the Script field. If you are creating a new intent, enter the response in the Script field. Page Interaction, Animation/Gesture, and SSML tags can be added to the text strings in the Script field.
When entering response text in the Script field, HTML markup can be used to format responses for the browser. For example, bold, italics, and other HTML formatting options can be added to your avatar’s responses, enabling you to create visually engaging replies. This helps you present information in a more eye-catching and organized manner, improving the overall user experience.
HREF links can also be used, allowing your chatbot to share clickable links with users. This is a fantastic way to guide users to helpful resources, whether it’s your website, a customer support page, or an informative article.
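For instance, a formatted response in the Script field might look like this (the URL is a placeholder):

Your order has <b>shipped</b>! You can <a href="https://example.com/track" target="_blank">track it here</a>.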
Custom Payload
Custom Payload Syntax available for your Google Dialogflow conversation. Dialogflow allows additional custom data to be sent with a response, and CodeBaby parses that data to add functionality to your Avatar experience. Each response in Dialogflow has a space where you can add your custom payload, and you can use the Response Generator to format the data for that field.
There are five options available:
- Redirect to URL
- Clickable Response
- Clickable Response Advanced
- Conversation Callback
- Custom Data (Advanced)
Payload Content
The Payload Content section is used to add your custom payload code. Select the Custom Payload type and then add the code for the current intent to the Payload Content field.
CLICKABLE RESPONSE
This tag lets you add clickable buttons to your Avatar’s response.
{
  "clickableResponses": [
    "____CLICKABLE RESPONSE TEXT____"
  ]
}
Replace ____CLICKABLE RESPONSE TEXT____ with the text you want to appear on the button. To add multiple buttons, click “Insert” multiple times; the tool will format the multiple responses correctly. The value of the button will be passed as an intent to Dialogflow, so you can trigger behavior based on your response / route for that intent.
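For example, a payload that offers two buttons (with illustrative button text) would look like:

{
  "clickableResponses": [
    "I'm feeling good",
    "Not so great"
  ]
}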
CLIENT CALLBACK
This tag lets you pass additional text to Dialogflow so you can control the routing after the response is delivered (this is needed primarily in Dialogflow ES; Dialogflow CX handles this through page routing). Select “Client Callback” in the Custom Payload dropdown and click the Insert button.
Let’s take this conversation snippet for example.
Avatar: Welcome! Can you tell me how you’re feeling today?
User: I’m feeling pretty good!
Avatar: That’s great to hear.
Avatar: Let’s get through this quickly so you can get on with your day. Can you tell me what your temperature is?
In this scenario, a “start” trigger would be passed to Dialogflow to trigger the original welcome response. This welcome message is listening for the user to provide a response, and based on that response, the avatar will route the user to the appropriate next phrase depending on whether the user is feeling good, bad, or so-so. Once the avatar has provided the response to the user (in this case, “That’s great to hear. Let’s get through this quickly so you can get on with your day.”), it needs to know where to go to continue the conversation. This is where, in ES, we use Client Callbacks. This is key in creating a bot that leads the conversation rather than one that only passively responds.
{
  "clientCallBack": "____TEXT TO PASS TO DIALOGFLOW AFTER THIS RESPONSE____"
}
For this example, let’s say we want to pass back <<getTemperature>> to Dialogflow. You would update the callback to the following:
{
  "clientCallBack": "<<getTemperature>>"
}
That value is passed to Dialogflow and matched to the intent with <<getTemperature>> among its training phrases. The avatar will continue the conversation based on the script in that intent and continue to guide the user through the conversation.
CLIENT DATA
You can pass data that you gather from your users via conversations back to your website, where it is available to JavaScript, allowing you to store data remotely, create conditional pathways, and do anything else you need based on user responses.
To format this data, select “Client Data” from the Custom Payload dropdown and click the “Insert” button.
{
  "clientData": {
    "_____YOUR_DATA_NAME_____": "____YOUR_DATA___Listen_For_The_na-clientData.vidbaby_Event_To_Get_This_Data____"
  }
}
You will replace the strings offset by underscores in the above example with your own values.
So let’s say your Avatar’s conversation asks a user for their patient ID and their date of birth. You want to get that data from the conversation so you can save it to that patient’s Electronic Health Record. In this case, you might do something like the following in the custom payload:
{
  "clientData": {
    "patientID": "123456",
    "patientDOB": "1/2/85"
  }
}
Then, to use the data on your front end, you might want to write something like this:
$(document).on('na-clientData.vidbaby', function (event, payload) {
  // Any code in here runs every time there is clientData in a response's custom payload.
  // Check that the data we want exists in this batch of clientData before trying to access it.
  if (payload.clientData.patientID && payload.clientData.patientDOB) {
    var patientID = payload.clientData.patientID;
    var patientDOB = new Date(payload.clientData.patientDOB);
  }
});
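From inside that handler, you could forward the captured values to your own backend, for example with a standard fetch call (the endpoint below is hypothetical):

// POST the values to a hypothetical service that writes to the patient's EHR.
fetch('/api/ehr/patient', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ patientID: patientID, patientDOB: patientDOB })
});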
This custom payload type allows you to create custom website behaviors based on your users’ interactions with the Avatar without needing CodeBaby to do any custom coding.
Render Scene
Render Scene prompts the avatar to recite the script content including specified gestures.
Return to Portal
Select Return to Portal to return to the CodeBaby portal and access other functions.