A virtual assistant lets a user talk to several bots in one conversation and supports pipeline steps that enable features such as redaction and translation, among other conversational features.
A Virtual Assistant has the following components:
{
"Name" : "MoneyTransferVa",
"Description": "A virtual assistant to aid monetary transfers",
"DefaultBotId": "info-bot",
"Bots" : [
{
"Type": "BOTARMY",
"Id": "info-bot",
"Enabled": true,
"Properties": {
"Name": "info-bot"
}
},
{
"Type": "BOTARMY",
"Id": "info-bot",
"Enabled": true,
"Properties": {
"Name": "transfer-bot"
}
}
],
"NluEngines": [
{
"Id": "TestVADispatcher",
"Type": "BOTARMY",
"Properties": {
"Secret": "srn:vault::acme:aws-cross-account-role:awscontent-lex-x-role",
"Nlp": "Lex"
}
}
],
"IngressPipeline": [
{
"Id": "CreditCardRedact",
"Type": "REGEX_REDACT",
"Properties": {
"Regex": "(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}"
}
},
{
"Id": "HumanHandoverStep",
"Type": "HUMAN_HANDOVER",
"Properties": {
"Regexes": [".*human.*"],
"BotName": "connectshowcase",
"Messages": {},
"OnJoin": [{"Type": "message","Message": "Hello" }],
"OnLeave": [ {"Type": "message","Message": "good bye"}]
}
}
],
"EgressPipeline": [],
"Events": {
"@SessionStart": [
{
"Id": "sessionStartMessage",
"Type": "SEND_MESSAGE",
"Properties": {
"Message": "Welcome to the transfer VA. Type 'help' for more info about what I do, or 'transfer' to make a monetary transfer."
}
},
{
"Id": "sessionStartMarkup",
"Type": "SEND_MARKUP",
"Properties": {
"Markup": "<TimelineMessage>\n <List title=\"Please select a number\" selectable=\"true\" interactionType=\"event\">\n <Item title=\"Option one is here\" id=\"1\"/>\n <Item title=\"Option two is here\" id=\"2\"/>\n <Item title=\"Option three is here\" id=\"3\"/>\n <Item title=\"Option four is here\" id=\"4\"/>\n </List>\n</TimelineMessage>",
"Context": {}
}
}
],
"@MissedInput": [
{
"Id": "missedInputMessage",
"Type": "SEND_MESSAGE",
"Properties": {
"Message": "Sorry, I didn't quite get that, could you phrase that differently?"
}
}
]
},
"Persona": "ReportBOT",
"Tags": []
}
An NluEngine can be configured for different NLP providers. Amazon Lex:
{
"Id": "TestVADispatcher",
"Type": "BOTARMY",
"Properties": {
"Secret": "srn:vault::acme:aws-cross-account-role:awscontent-lex-x-role",
"Region": "eu-west-1",
"Nlp": "Lex"
}
}
Amazon Lex V2:
{
"Id": "TestVADispatcher",
"Type": "BOTARMY",
"Properties": {
"Secret": "srn:vault::acme:aws-cross-account-role:awscontent-lex-x-role",
"Region": "eu-west-1",
"Locale": "en_US",
"Nlp": "LexV2"
}
}
ServisBOT:
{
"Id": "TestVADispatcher",
"Type": "BOTARMY",
"Properties": {
"Nlp": "ServisBOT"
}
}
Below is a list of supported pipeline steps. They can be any of the following types.
Supported Types
You can configure an API connector using the following:
{
"IngressPipeline": [
{
"Id": "referenceId",
"Type": "EXECUTE_API_CONNECTER",
"Properties": {
"ApiConnecter": "ApiConnecterName",
"OnError": "@EventNameHere"
}
}
]
}
You can configure a pipeline to invoke a flow using the following:
{
"IngressPipeline": [
{
"Id": "referenceId",
"Type": "EXECUTE_FLOW",
"Properties": {
"FlowId": "Flow-UUID"
}
}
]
}
You can configure a generic HTTP request using the following:
{
"IngressPipeline": [
{
"Id": "referenceId",
"Type": "GENERIC_HTTP",
"Properties": {
"Url": "https://urltohit.com",
"OnError": "@EventNameHere"
}
}
]
}
You can configure a pipeline to perform Google language detection:
{
"IngressPipeline": [
{
"Id": "GoogleLanguageDetect",
"Type": "GOOGLE_LANGUAGE_DETECT",
"Properties": {
"ApiKey": "ApiKey"
}
}
]
}
You can configure a pipeline to run Google Translate on the incoming/outgoing message:
{
"IngressPipeline": [
{
"Id": "GoogleTranslate",
"Type": "GOOGLE_TRANSLATE",
"Properties": {
"ApiKey": "ApiKey",
"TargetLanguageSource": "CONFIGURATION",
"TargetLanguage": "en"
}
}
]
}
The human handover pipeline step can be used to hand the conversation over to a live agent. Currently only Amazon Connect, Genesys, and Edgetier are supported as human handover types.
{
"IngressPipeline": [
{
"Id": "HumanHandoverStep",
"Type": "HUMAN_HANDOVER",
"Properties": {
"Regexes": [".*human.*"],
"BotName": "connectshowcase",
"Messages": {},
"OnJoin": [{ "Type": "message", "Message": "Hello" }],
"OnLeave": [{ "Type": "message", "Message": "good bye" }]
}
}
]
}
The human handover can have a set of OnJoin and OnLeave events. These follow the same schema as the actions inside intents.
Message
{
"Type": "message",
"Message": "Hello"
}
Markup
{
"Type": "markup",
"Markup": "<TimelineMessage><List type=\"disc\" selectable=\"true\" interactionType=\"utterance\"><Item title=\"Item one\" id=\"1\"/><Item title=\"Item two\" id=\"2\" /></List> </TimelineMessage>"
}
Content
{
"Type": "content",
"Value": "menu"
}
Host notification
{
"Type": "hostnotification",
"Notification": "SB:::UserInputDisabled"
}
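These action types can be combined in a single OnJoin or OnLeave list. As an illustrative sketch (the message text is hypothetical; the notification value is taken from the example above), an OnJoin handler could disable user input and greet the user:
{
  "OnJoin": [
    { "Type": "hostnotification", "Notification": "SB:::UserInputDisabled" },
    { "Type": "message", "Message": "An agent has joined the conversation" }
  ]
}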
To use named entity recognition you can use the following ingress pipeline:
{
"IngressPipeline": [
{
"Id": "Named entity recognition",
"Type": "NAMED_ENTITY_RECOGNITION",
"Properties": {}
}
]
}
If entities are detected on ingress of any message, they will be located in context under the property state.va.entities. In a flow, this property is referenced as msg.payload.context.state.va.entities. In intent fulfilment, it can be referenced as state.va.entities.
Here is an example entity payload:
{
"va":{
"entities":[
{
"utteranceText":"Dublin",
"entity":"Countries, cities, states",
"start":22,
"end":28,
"len":6
}
]
}
}
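Since fulfilment action conditions are JavaScript-style expressions (see the sentiment example later on this page), a fulfilment action could, as a hedged sketch (the message text is illustrative), guard on a detected entity:
{
  "type": "message",
  "value": "I see you mentioned a location.",
  "condition": "state.va.entities && state.va.entities.length > 0"
}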
To use QnA Maker you can use the following:
{
"IngressPipeline": [
{
"Id": "QnaMakerStep",
"Type": "QNA_MAKER",
"Properties": {
"Url": "...",
"EndpointKeySecret": "srn:vault::myorg:secret:my-endpoint-key",
"FallbackMessage": "This message will be used if qna maker does not have an answer",
"ConfidenceThreshold": 80
}
}
]
}
Redaction can be configured as part of an ingress pipeline. As shown above, the following configuration will prevent credit card numbers from being persisted in the conversation history.
{
"IngressPipeline": [
{
"Id": "CreditCardRedact",
"Type": "REGEX_REDACT",
"Properties": {
"Regex": "(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}"
}
}
]
}
The ingress pipeline can contain one or many REGEX_REDACT entries.
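For example, a pipeline could combine the credit card step above with a second illustrative step (the social security number pattern below is an assumption, not a vetted production regex):
{
  "IngressPipeline": [
    {
      "Id": "CreditCardRedact",
      "Type": "REGEX_REDACT",
      "Properties": {
        "Regex": "(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}"
      }
    },
    {
      "Id": "SsnRedact",
      "Type": "REGEX_REDACT",
      "Properties": {
        "Regex": "[0-9]{3}-[0-9]{2}-[0-9]{4}"
      }
    }
  ]
}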
To raise an event when a regex matches, you can use the following:
{
"IngressPipeline": [
{
"Id": "referenceId",
"Type": "REGEX_RAISE_EVENT",
"Properties": {
"Regex": ".*Custom*",
"Event": "@CustomEvent"
}
}
]
}
Secure Session can be configured as part of an ingress pipeline. The following configuration runs Secure Session with a validation check every 60 seconds:
{
"IngressPipeline": [
{
"Id": "SecureSession",
"Type": "SECURE_SESSION",
"Properties": {
"SecureSessionType": "API_CONNECTER",
"ApiConnecter": "SecureSession",
"ValidationInterval": 60
}
}
]
}
When Secure Session is in use, it will fire a "@SecureSessionUnauthorized" event when the ApiConnecter returns a non-2xx response:
{
"Events": {
"@SecureSessionUnauthorized": [
{
"Id": "referenceId",
"Type": "SEND_MESSAGE",
"Properties": {
"Message": "SecureSession check failed"
}
}
]
}
}
To send CMS content, use the following:
{
"IngressPipeline": [
{
"Id": "referenceId",
"Type": "SEND_CONTENT",
"Properties": {
"Key": "Content_key"
}
}
]
}
You can configure the pipeline to send a host notification message using the following config:
{
"IngressPipeline": [
{
"Id": "referenceId",
"Type": "SEND_HOST_NOTIFICATION",
"Properties": {
"Notification": "Notification"
}
}
]
}
You can configure the pipeline to send a markup message using the following config:
{
"IngressPipeline": [
{
"Id": "referenceId",
"Type": "SEND_MARKUP",
"Properties": {
"Markup": "<TimelineMessage><TextMsg>Hello ${name}</TextMsg></TimelineMessage>",
"Context": {
"name": "world"
}
}
}
]
}
You can configure the pipeline to send a message using the following config:
{
"IngressPipeline": [
{
"Id": "referenceId",
"Type": "SEND_MESSAGE",
"Properties": {
"Message": "hello from the pipeline"
}
}
]
}
You can configure the pipeline to identify compound utterances using the following config:
{
"IngressPipeline": [
{
"Id": "referenceId",
"Type": "COMPOUND",
"Properties": {
"CompoundMessage": "I see you need help with several things",
"UserSelectionConfig": {
"UserSelectionPolicy": "MANUAL|AUTOMATIC",
"MarkupType": "List|SuggestionPrompt|ButtonPromptContainer",
"OnSelectedNoneActions": [
{
"Type": "SEND_MESSAGE",
"Properties": {
"Message": "You have selected none"
}
},
{
"Type": "SEND_MARKUP",
"Properties": {
"Markup": "<TimelineMessage><TextMsg>Try rephrasing your message</TextMsg></TimelineMessage>",
"Context": {}
}
}
]
}
}
}
]
}
See here for more details on the possible configuration values.
For KNOWLEDGE_BASE, Kendra is currently the only supported search service. Below is an example Kendra configuration. This step is recommended for use only within the fallback pipeline.
{
"Egress": [
{
"Id": "KB-for-fallback",
"Type": "KNOWLEDGE_BASE",
"Properties": {
"SearchService": {
"Name": "Standard-Index",
"Type": "Kendra",
"Secret": "srn:vault::ORG:aws-cross-account-role:knowledgebase-query",
"MaxQueryTime": 1500, // optional, timeout before ending the search query
"ErrorMessageGeneral": "An error occurred using Kendra.", // message egressed to the user when we hit an error
"ErrorMessageMaxQueryTime": "The query ran longer than expected. Please try another search." // used when we is the max query timeout
},
"Config": {}
}
}
]
}
For optional Config values, please refer to the docs here.
{
"Id": "KB-for-fallback",
"Type": "KNOWLEDGE_BASE",
"Properties": {
"SearchService": {
"Name": "Standard-Index",
"Type": "Kendra",
"Secret": "srn:vault::ORG:aws-cross-account-role:knowledgebase-query",
"MaxQueryTime": 1500, // optional, timeout before ending the search query
"ErrorMessageGeneral": "An error occurred using Kendra.", // message egressed to the user when we hit an error
"ErrorMessageMaxQueryTime": "The query ran longer than expected. Please try another search." // used when we is the max query timeout
},
"Config": {
"IndexId": "8e2882ce-afb9-4eab-918d-12087266f2d7",
"PageSize": 5, // optional
"UserContextMapping": {}, // optional
"AttributeFilter": [], // optional
"QueryResultTypeFilter": "ANSWER", // optional,
"MinScoreAttribute": "LOW", // optional
"Region": "eu-west-1" // AWS region
}
}
}
See here for more details on the possible configuration values.
Disambiguation is supported on the VA via configuration on the VA’s NluEngine.
{
"NluEngines": [
{
"Id": "assistant",
"Type": "BOTARMY",
"Properties": {
"Nlp": "ServisBOT",
"DisambiguationConfig": {
"IntentCombinations": [
{
"DisambiguationMessage": "Sorry I did not understand that",
"IntentCombination": [
{
"IntentAlias": "vacation_question",
"BotName": "SmallTalkBot"
},
{
"IntentAlias": "vacation_booking",
"BotName": "VacationBot"
}
],
"OnSelectedNoneActions": [
{
"Type": "SEND_MESSAGE",
"Properties": {
"Message": "You have selected none."
}
},
{
"Type": "SEND_MARKUP",
"Properties": {
"Markup": "<TimelineMessage><TextMsg>Please ask me another question</TextMsg></TimelineMessage>"
}
}
]
}
]
}
}
}
]
}
IntentCombinations - A list of intent combination objects that you want to use to generate intents for disambiguation on the VA dispatcher.
DisambiguationMessage - The message used to prompt the user to select an intent (this is the list title). This is optional, and defaults to ‘Sorry I did not understand that, please select one of the following options’.
IntentCombination - The intents to be used to form the intent used for disambiguation.
BotName - The bot name that is associated with the intent alias. This must be a bot on the VA, and it must match the name of the bot in the VA configuration.
IntentAlias - The intent alias to use from the bot. This intent must exist on the bot.
OnSelectedNoneActions - A list of actions to execute if the user selects none of the presented intents. These are also executed if the user types a response to the bot rather than interacting with the list of options. This is optional, and by default no actions will be executed.
The following actions are supported within the OnSelectedNoneActions configuration.
SEND_MESSAGE
{
"Type": "SEND_MESSAGE",
"Properties": {
"Message": "You have selected none."
}
}
SEND_MARKUP
{
"Type": "SEND_MARKUP",
"Properties": {
"Markup": "<TimelineMessage><TextMsg>Please ask me another question</TextMsg></TimelineMessage>"
}
}
When a user selects an intent from the list of options, the VA will execute the fulfillment actions for the intent that was selected. Below is a list of the currently supported fulfillment actions when an intent is selected.
message
{
"type": "message",
"value": "hello"
}
markup
{
"type": "markup",
"value": "<TimelineMessage><TextMsg>hello</TextMsg></TimelineMessage>"
}
Note that the intent description is displayed to the user for each intent. At the time of writing, setting an intent description is not supported on bulk API actions. If no description is present, the intent displayName is shown.
A compound utterance is a single utterance provided by the user which contains several requests. For example a user might say “I want to pay my bill and change my address”.
In the example given above, the user has specified two requests “I want to pay my bill” and “change my address”, which is considered to be a compound utterance.
To identify compound utterances in your VA you can configure a COMPOUND step on your VA ingress pipeline. A sample configuration can be found below.
{
"IngressPipeline": [
{
"Id": "referenceId",
"Type": "COMPOUND",
"Properties": {
"CompoundMessage": "I see you need help with several things",
"UserSelectionConfig": {
"UserSelectionPolicy": "MANUAL|AUTOMATIC",
"MarkupType": "List|SuggestionPrompt|ButtonPromptContainer",
"OnSelectedNoneActions": [
{
"Type": "SEND_MESSAGE",
"Properties": {
"Message": "You have selected none"
}
},
{
"Type": "SEND_MARKUP",
"Properties": {
"Markup": "<TimelineMessage><TextMsg>Try rephrasing your message</TextMsg></TimelineMessage>",
"Context": {}
}
}
]
}
}
}
]
}
CompoundMessage - The message used to inform the user that a compound utterance was identified. This is optional, and defaults to “I see you need help with a couple of things, I am going to guide you through the following”.
UserSelectionConfig - An object which contains the type of user selection that will occur when a compound utterance is identified. This is optional, and will default to AUTOMATIC mode if no configuration is provided. AUTOMATIC mode is explained in more detail below.
UserSelectionPolicy - The type of user selection policy that should be used when a compound utterance is identified. Possible values are AUTOMATIC or MANUAL. This value is optional, and defaults to AUTOMATIC. See here for more details on the differences between AUTOMATIC and MANUAL user selection policies.
MarkupType - The type of markup you wish to use to render the identified individual utterances in the ServisBOT messenger. The possible values are List, ButtonPromptContainer, and SuggestionPrompt.
OnSelectedNoneActions - A list of actions to execute if the user selects none of the presented options. These are also executed if the user types a response rather than interacting with the list of options. This is optional, and by default no actions will be executed. This option is only used for MANUAL selection mode. See here for details on the supported actions.
SEND_MESSAGE
{
"Type": "SEND_MESSAGE",
"Properties": {
"Message": "You have selected none."
}
}
SEND_MARKUP
{
"Type": "SEND_MARKUP",
"Properties": {
"Markup": "<TimelineMessage><TextMsg>Please ask me another question</TextMsg></TimelineMessage>"
}
}
For example, the VA may have matched the PayBillBot for the “pay bill” utterance and the ChangeAddressBot for the “change address” utterance. The VA will then communicate with the PayBillBot first, and then the ChangeAddressBot.
When a bot finishes handling its utterance, a BotMissionDone action is executed. This happens automatically; there is no need to add a BotMissionDone action to your intent.
If the intent triggers a flow, BotMissionDone is deferred until the action flow is complete. This also happens automatically; there is no need to add a BotMissionDone action to your intent or flow.
Once the VA receives BotMissionDone, it will move onto the next bot in the compound workflow, assuming there are still bots left to handle the compound utterance.
Here is a sample of what the options look like when AUTOMATIC mode is being used.
Here is a sample of what the options look like when MANUAL mode is being used with a ButtonPromptContainer, before and after user selection.
In both modes, the VA will not attempt to identify compound utterances if the VA is already handling a compound utterance.
This pipeline step will attempt to detect whether the user input is ‘Positive’ or ‘Negative’ based on the AFINN word list and Emoji Sentiment Ranking.
{
"Name": "SentimentRecogVA",
"Bots": [
{
"Id": "botty",
"Type": "BOTARMY",
"Enabled": true,
"Properties": {
"Name": "botty"
}
}
],
"NluEngines": [
{
"Id": "Dispatcher",
"Type": "BOTARMY",
"Properties": {
"Nlp": "ServisBOT"
}
}
],
"IngressPipeline": [
{
"Id": "PositiveSentiment",
"Type": "SENTIMENT_RECOGNITION",
"Properties": {
"SentimentType": "Positive",
"Threshold": "0",
"CustomWords": [
{
"Word": "cats",
"Score": 5
},
{
"Word": "heck",
"Score": -5
}
],
"OnSentimentDetectedActions": [
{
"Type": "SEND_MESSAGE",
"Properties": {
"Message": "I am happy you are enjoying your ServisBOT experience"
}
}
]
}
}
],
"EgressPipeline": [],
"Events": {},
"Persona": "RefundBOT",
"Description": "A va for me",
"Tags": []
}
SentimentType: ‘Positive|Negative’ - This decides which sentiment result triggers the actions in the OnSentimentDetectedActions list.
Threshold: ‘Number between -1 and 1’ - This is the threshold used to decide whether or not to perform the actions.
CustomWords:
[
{
"Word": "The Word to classify",
"Score": "Number between -5 and 5 to give the word"
}
]
This provides a way to add custom words or overwrite the scores of existing words in the word list.
OnSentimentDetectedActions: ‘List of botnet actions’ - These are the actions to perform if the threshold is met. Currently only SendMessage and SendMarkup are supported.
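As a sketch combining both supported action types (the message and markup text here are illustrative):
"OnSentimentDetectedActions": [
  {
    "Type": "SEND_MESSAGE",
    "Properties": {
      "Message": "Glad to hear it!"
    }
  },
  {
    "Type": "SEND_MARKUP",
    "Properties": {
      "Markup": "<TimelineMessage><TextMsg>Anything else I can help with?</TextMsg></TimelineMessage>",
      "Context": {}
    }
  }
]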
The result of the sentiment analysis is stored in context here:
msg.payload.state.input.sentiment
{
"sentiment": {
"score": 7,
"comparative": 0.4666666666666668,
"threshold": "0",
"result": "Positive"
"calculation": [
{
"cats": 5
},
{
"like": 2
}
],
"tokens": [
"i",
"like",
"cats"
],
"words": [
"cats",
"like"
],
"positive": [
"cats",
"like"
],
"negative": []
}
}
score - The sum of all the recognized words sentiment scores.
comparative - The final score of the sentiment recognition; it is a value between -1 and 1.
The comparative score is derived from the sum of all the recognized words’ sentiment scores divided by the total number of words in the utterance. This gives a result between -5 and 5, which is then normalized to a value between -1 and 1, where -1 is the most negative result and 1 is the most positive. A worked example follows this list.
result - Can have the value Positive or Negative. It is based on the comparative score and the threshold.
threshold - The Threshold set in the pipeline step configuration
calculation - A list of recognized words with their AFINN/Custom score.
tokens - All the tokens, such as words or emojis, found in the utterance.
words - List of words from the utterance that were found in the AFINN/Custom list.
positive - List of positive words from the utterance that were found in the AFINN/Custom list.
negative - List of negative words from the utterance that were found in the AFINN/Custom list.
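As a worked example, take the utterance "I like cats" from the payload above: the recognized words score 5 (cats) + 2 (like) = 7, and the utterance contains 3 words, so the raw comparative is 7 / 3 ≈ 2.33, which normalizes to 2.33 / 5 ≈ 0.467, the comparative value shown in the payload.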
Take the following pipeline on a VA or Enhanced Bot.
"IngressPipeline": [
{
"Id": "PositiveSentiment",
"Type": "SENTIMENT_RECOGNITION",
"Properties": {
"SentimentType": "Positive",
"Threshold": ".05",
"CustomWords": [],
"OnSentimentDetectedActions": []
}
}
]
Along with this intent on a bot configured in the VA or Enhanced Bot.
{
"alias": "support",
"displayName": "support",
"utterances": [
{
"text": "help",
"enabled": true
},
{
"text": "support",
"enabled": true
}
],
"detection": {
"actions": []
},
"slots": [],
"scope": "private",
"fulfilment": {
"actions": [
{
"type": "message",
"value": "I can help you. ",
"condition": "state.input.sentiment.result === \"Positive\""
},
{
"type": "message",
"value": "Get a better attitude and I'll help",
"condition": "state.input.sentiment.result === \"Negative\""
}
]
},
"errors": []
}
With this configuration, when this intent is hit, the bot’s response is based on the sentiment detected from the user input.
This response is based on the value of state.input.sentiment.result in the conversation’s context.
It is important to note that if your threshold value is very negative or very positive it may result in some undesired behavior. For instance, if the threshold on the sentiment pipeline step is set to ‘.5’, mildly positive utterances whose comparative score falls below 0.5 will still produce a Negative result.
"IngressPipeline": [
{
"Id": "PositiveSentiment",
"Type": "SENTIMENT_RECOGNITION",
"Properties": {
"SentimentType": "Positive",
"Threshold": ".5",
"CustomWords": [],
"OnSentimentDetectedActions": []
}
}
]
You can configure the pipeline to assign a bot using the following config.
NOTE: the bot must be part of the VA.
{
"IngressPipeline": [
{
"Id": "referenceId",
"Type": "ASSIGN_BOT",
"Properties": {
"BotName": "fallback"
}
}
]
}
You can send missed inputs to OpenAI by using an ASSIGN_BOT step within the virtual assistant’s @MissedInput pipeline and assigning a bot which contains an llm-worker that is configured to communicate with OpenAI.
Once the OpenAI-based bot handles the missed input, it returns control to the virtual assistant; subsequent user messages go through the normal virtual assistant process, with OpenAI only being used when the virtual assistant cannot handle the user input.
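As a hedged sketch (the bot name is hypothetical and must match a bot configured on the VA; the shape follows the Events example earlier on this page), the @MissedInput event could assign such a bot like this:
{
  "Events": {
    "@MissedInput": [
      {
        "Id": "assignOpenAiBot",
        "Type": "ASSIGN_BOT",
        "Properties": {
          "BotName": "openai-llm-bot"
        }
      }
    ]
  }
}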