There are basic reasoners - essentially rules for decision making - although more are being developed. Core to these are spatial, temporal, contextual, and personal reasoners. A restaurant reasoner, for example, would have access to the data on restaurants and be able to understand that asking for a restaurant with Tikka Masala means the user is most likely looking for an Indian restaurant, and serve that option up.
Nuance's Reasoning Framework (see diagram), which resides within Dragon Drive, provides API access to the functioning of the reasoning engine through a Reasoning Interface. The reasoners are part of a "reasoning layer" that applies knowledge to the reasoners and their rules to create the Reasoning Interface. The reasoners can be determined by the OEM, effectively setting parameters for the level of features and functionality an OEM wants to offer as part of its connected experience.
The Framework is anchored by the Knowledge Repository, which contains rules, world knowledge, and augmented data. The Knowledge Interface has the job of making the data sources (including car sensors, social, music, points of interest, parking, fuel, and restaurants) and knowledge repository information available to the reasoners, and translates between data formats. Nuance says that the Reasoning Framework can be customised as well, as mentioned.
In June, a Smart Car Manual will become available, enabling users to ask questions about the car and its functions while driving, leveraging the content of the owner's manual and frequently asked questions. Further out, Nuance has promised Smart Messaging (extracts entities and intent from messages, offers intelligent automatic replies, and knows which messages are important to the user), Smart Travel Guide (answers questions about surroundings and destination, recommends sights), Smart Music (universal music search and personalised, contextualised music recommendations), and Smart Restaurant (personalised and contextualised recommendations).
All of these domains will learn users' preferences over time. Because the Auto Assistant launched with the voice biometric feature, a car with two primary drivers will load preferences based on the voice that woke the system. Nuance announced its Auto Assistant platform at the Consumer Electronics Show in January, with this week's announcement expanding the capability of the system and pointing at where it will go in the future.
The system is positioned for all levels of connected cars, which should translate well to an autonomous car environment as well. The Nuance system is intended to be delivered to consumers only through an OEM; the company is not exploring an aftermarket option. It also appears highly customisable from an OEM perspective. Because it resides below the OEM user interface, it can adapt to the specific brand experience an automaker wants to deliver.
As a long-standing automotive supplier, Nuance can offer customisation and support the brand experience and objectives an OEM has, in direct contrast to the approach of Apple or Google, which prefer to control the user experience rather than support an automotive brand's image. This is interesting, in that Apple and Google have access to a tremendous amount of information and have shown some ability to adapt to user preferences as well, minus the access to the car data and vehicle sensors.
It seems that both of those software giants should have the capability to develop systems that provide smart feedback as well.
For now, however, Nuance is focused on providing support to OEMs, has a history of doing so, and can currently provide a more robust solution.
IHS Markit analyst Mark Boyadjis also notes that it is unclear in the long run which solution (OEM-designed versus smartphone-designed) end-consumers will ultimately prefer. An OEM-designed solution will integrate with the vehicle much more holistically, for a better and more complete automotive user experience.
Smartphone-designed solutions, on the other hand, provide an experience which users are familiar with and can translate between cars, brands, and nameplates, all while being better suited for a long-term vision of shared mobility platforms.
Nuance Communications and Covera Health this past week announced the launch of the Quality Care Collaborative, a new project they say is designed to support radiology quality improvement initiatives nationwide.
The project is billed as the first such effort to convene payers, providers and self-insured employers to support imaging innovation at scale. Covera Health, which develops diagnostic analytics tools, is working with Nuance to help payers, self-insured employers and providers partner in their efforts to improve care quality, Covera CEO Ron Vianu said in a statement.
The QCC combines Covera's analytics tools and Nuance's Precision Imaging Network to help providers, payers and employers collaborate on quality improvement programs, peer-learning initiatives and value-based care efforts, the companies said.
Radiology practices in participating payers' networks can opt in to the QCC to gain access to analytics and clinically validated AI tools that augment existing quality improvement programs. The goal is a workflow-integrated infrastructure that enables radiologists to access quality analytics and support improved diagnostics, the companies said. As for imaging trends, see recent Healthcare IT News coverage on democratizing MRI to advance health equity , how AI can increase the effectiveness of point-of-care ultrasounds and how those devices can support health equity.
A variable of type time can be set to the current time for the specified time zone. Likewise, a variable of type date can be set to the current date for the specified time zone. A variable of type integer can be set to the value of an arithmetic operation addition, subtraction, multiplication, division, or percentage. This example shows how to increment a variable to use it as a counter. It assumes your design includes actions to initialize this variable, and reset it if needed.
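Mix expresses this with assign actions configured in the node properties rather than with code, but the runtime effect is roughly the following sketch (the variable name and threshold are purely illustrative):

```python
# Illustrative only: Mix.dialog configures this through assign actions, not code.
retry_count = 0            # initialize once, for example in a Start node action

# Each time the dialog flow passes through the node with the increment action:
retry_count = retry_count + 1

# A condition elsewhere in the design can then branch on the counter.
if retry_count >= 3:
    print("Maximum retries reached; transferring to an agent.")
```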
A variable of type integer can be set to a random value between 1 and a number you specify. In this example, a variable is assigned a random integer value between 1 and 6. Once you have set a variable to a random value you can use it in conditions , for example to play a random greeting message when your application starts, or to perform a random transition. To initiate intent switching programmatically, you can use an assign action to modify the predefined variable Active Intent Value.
Intent switching requires Active Intent Value to be set before the dialog flow reaches the intent mapper node that will perform the switch. If you set Active Intent Value downstream from the intent mapper node, the type of transition to apply depends on whether the intent mapper node is in the same component as the assign action, or in another component; and on whether a question router node is present in the component where the assign action takes place:.
When adding an assignment for a variable, or when setting an expression for a condition , you can create the variable directly from the search field at the top of the variable selector. A transition identifies the next node to execute after completing an action.
Distinctive elements represent transitions between nodes on the canvas. Setting up a question router node determines transitions to other nodes from which the dialog flow should eventually return—via a Return To transition—after having collected information for entities handled by the question router node.
Setting up an intent mapper node determines transitions to other nodes or to other components that handle intents in your project. When a separate component handles an intent, the dialog flow must eventually return to the intent mapper node that called the current component via a Return transition.
Similarly, if your design uses component call nodes to transition to separate components that handle specific parts of the dialog flow, such a component should eventually use a Return transition to go back to the node from where it was called. Data access nodes only support GoTo transitions.
If your dialog design requires a different type of transition for the Success path or the Failure path of a data access node, add a decision node for example , and set the desired transition from that node.
Self-hosted environments: This behavior requires a minimum version of the Dialog service. As a protection against infinite loops, there is a maximum number of times a dialog flow is allowed to loop through a node before reaching a question and answer node; in self-hosted environments with an earlier version of the Dialog service, the limit may differ.
If your design is likely to involve loops with many iterations, consider delegating this logic to your client application or a dedicated backend. Use throw event actions, in question and answer nodes, message nodes, or decision nodes, to throw custom events. Tip: In a node with a complex condition structure, clicking the Table icon expands the Node properties pane, giving you a wider area to work with all the messages, notes, actions, and conditions you need to set.
Click the List icon to restore the Node properties pane down to its default width. Once you have created a condition, you can perform various operations directly from the compact condition in the Node properties pane, including:. Tip: If you need to delete a row, click the Delete icon at the end of the row. Click the arrow icon next to the statement you want to collapse in a condition.
Click the arrow icon to expand a collapsed statement in a condition. Add notes, anywhere around messages, actions, and conditions, to supplement your design with useful information. All resources in a dialog project have a global scope, which means you can use them anywhere in your dialog design. A dialog flow involves the resource types described below. Any changes you make to intents or entities in Mix.nlu are automatically reflected in your dialog design.
Associating each intent with a variety of representative utterances allows your application to recognize what the user said in their own words. Refer to the Mix.nlu documentation for more information. You can create an intent component from the Components pane, or add it on the fly when mapping intents. An intent mapping represents the destination of an intent—that is, when the intent is recognized, the dialog goes to the specified intent component, generic component, or node. Deleting an intent that was mapped to an intent component leaves a broken link in the intent component.
In such an event, you have the ability to relink the intent component to another intent, or to convert it into a generic component, if it was not meant to handle a specific intent after all. If you chose the wrong intent by mistake, you can unlink the intent component. Click NLU, on the toolbar, to open the NLU resource panel. Use the search field to filter the global intent mappings displayed in the NLU resource panel. The panel updates as you type, to show only the mappings where the intents, intent components, or generic components match the search string.
To stop filtering, click the Clear search icon. Intents can be mapped to components, to route the dialog based on the intent. When an intent is detected by the NLU model, the flow of the dialog can transfer to that component.
For most intents, you will probably want to have the intent handled consistently by the same component wherever the intent is detected in the dialog.
In the Intents tab of the NLU resource panel, you can set global default mappings for intents. Default mappings set in the NLU resource panel are inherited by intent mapper nodes but can be overridden if needed within the intent mapper node.
The types of mappings possible depend on the type of component: generic components support both one-to-one and many-to-one mappings, whereas intent components support one-to-one mapping only. Click the More actions icon for the mapping you want to remove, and choose Remove mapping. After you delete a component, if an intent was mapped to this component, this results in a broken mapping.
In the mapping table, broken mappings are indicated by a warning: Mapped resource missing. Once an intent has been mapped to a component in the NLU resource panel, it is easy to navigate directly to that component to further design it. Click the More actions icon for the intent and choose Go to component. This closes the NLU resource panel and shows the desired component on the canvas.
You can also create entities on the fly from question and answer nodes. Any changes you make to entities in Mix.nlu are automatically reflected in your dialog project. Nuance-hosted environments: For entities that were created with an earlier version of Mix. Just like variables, entities have a data type. Entity data types include: generic, alphanumeric, amount, Boolean, date, digits, distance, number, temperature, and time. See Data types, in the Mix.nlu documentation.
When you choose the data type for an entity, this automatically sets a collection method also known as entity type , which determines the recognition and interpretation services to be invoked at runtime. Collection methods include: list , relationship , freeform , regex-based , and rule-based. Using the default collection method for your entities allows you to start developing your dialog design quickly.
Later on, perhaps after consulting with a speech science specialist, you might opt for a different collection method for some entities (see Data type and collection method compatibility, in the Mix.nlu documentation).
In addition to the entities from the data packs, Mix also provides a set of dialog-specific predefined entities. The default collection method for custom generic entities, List, represents a list of possible values. For list entities, literals and values are language-specific, whereas the entities themselves are common to all languages. When you add an entity value , it only applies to the current language in your project. To support language-specific requests, choose the desired language from the menu near the name of the project, and add literal-value pairs as required.
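Conceptually, the literal-value pairs behave like a lookup table from what the user might say to a normalized value; the sketch below reuses the coffee-size example from the Get started section purely for illustration:

```python
# Hypothetical literal -> value pairs for a coffee-size list entity.
coffee_size = {
    "large": "double",
    "double": "double",
    "single": "small",
    "small": "small",
}

print(coffee_size["large"])   # "double": several literals resolve to the same value
```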
When you attempt to delete the last literal for a specific value, Mix warns you that you will be permanently deleting not only the last literal but also the entity value itself for the current language, and prompts you to confirm the deletion.
In a dialog that collects a list entity, the next step might need to be different for each possible value, or it might be the same for all possible values. It is also possible for your application to perform exclusive actions for some values, and proceed via a common path for all other values. You determine the required behavior for a list entity—that is, whether all values are handled the same way, or individual values determine different paths—, when you set up the question and answer node that collects the entity.
If your application should support global commands for example, Goodbye, Main menu, Operator , you must reserve an entity to hold recognized command values. This entity becomes the global command entity for your project. You can also use this entity to extend the scope of specific question and answer nodes, by defining local command overrides.
In a question and answer node, command overrides allow you to:. When you create your NLU model, you might realize that, for some list entities, the set of values can only be fully determined at runtime.
For example, the set of contacts for a user is specific to that user. Depending on the number of possible values for a dynamic list entity, you can use inline resources compiled at runtime or precompiled resources. Each entry represents an additional value for the dynamic list entity, along with attributes to support interactivity for that value. A relationship entity has a specific relationship to an existing entity, either the isA or hasA relationship.
For example, for an airline application, the entities that represent departure and arrival airports typically have the same possible values. A freeform entity is used to capture user input that you cannot enumerate in a list. For example, a text message body could be any sequence of words of any length. Unlike other entity types, for which the keywords in the user input that allow the NLU service to recognize an entity are directly associated with the entity values, a freeform entity is like a black box—at runtime, the NLU service relies on the surrounding text to identify where the entity itself starts and ends.
When the user says, for example, "Send Joe this message: I'll be there tonight," the words "this message" are what tells the NLU service that what follows is the freeform entity to collect. The value attribute of a freeform entity is always empty—therefore, if your dialog design involves collecting freeform entities, make sure to refer to their literal or formatted literal attributes instead. A regex-based entity represents a sequence of characters matching a pattern defined by a regular expression.
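For instance, a hypothetical booking-reference entity might be defined by a pattern like the one below (the pattern itself is an assumption for illustration; define yours to match your own data):

```python
import re

# Hypothetical pattern for a booking-reference entity such as "AB-12345":
# two uppercase letters, a hyphen, then five digits.
BOOKING_REF = re.compile(r"^[A-Z]{2}-\d{5}$")

print(bool(BOOKING_REF.match("AB-12345")))   # True
print(bool(BOOKING_REF.match("tonight")))    # False
```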
A rule-based entity defines a set of values based on a GrXML grammar file. Once anaphoras have been tagged and your model trained in Mix.nlu, you do not need to configure anything in your dialog model. At runtime, the dialog uses the data that it has available to determine how to resolve the anaphora.
Entities marked as sensitive are masked in application logs. Depending on your purposes, you might consider marking specific question and answer nodes as sensitive by configuring data privacy settings. Click the filter icon, and choose the desired entity type (a collection method, or the predefined entity for a relationship entity) from the list. Only custom entities of the type you chose remain visible, and an indication showing that filtering is in effect appears under the search field.
To stop filtering by entity type, click the Clear filter icon , next to the indication. Use the Grammars tab of the NLU resource panel to export a grammar specification document, to help design and manage grammars that will be referenced by a VoiceXML application.
A grammar specification document lists all speech and DTMF grammar references found in your dialog design:. The document also includes any DTMF mappings you may have specified in your dialog design, for commands, confirmation, entity values, and command overrides. Variables are named objects in a dialog project. For example, you can use a variable:. The Variables resource panel organizes simple and complex variables into categories.
Use the Variables resource panel, for example, to:. Simple variables, and complex variable fields have a data type that determines which entities are compatible for use in assignments and conditions. The type of a variable also determines which methods are available when the variable is used in a dynamic message, a conditional expression, or on the right-hand side of an assignment.
You can determine how certain variables are used in reports or logs by using reporting properties. Choose the type of a simple variable based on what you want it to hold. When you use a variable of a specific type in a dynamic message, its value can be rendered with appropriate formatting.
Dynamic Entity Data is a special variable type, meant to extend a list entity with a set of literals and values to be compiled at runtime. Use this for entities with a limited number of possible values. See dynamic list entity for detailed instructions. A variable of type string on the left-hand side of an assign action is compatible with all entities and variable types.
However, if you use a variable of type string on the right-hand side of an assign action, you are responsible for making sure its value will comply with the internal format of the variable or entity on the left-hand side. Neglecting this could generate errors in your application, at runtime. In assignments between a variable and an entity, the variable type in the first column of the table below, and the corresponding entity types (fourth column), are compatible both as left operands and right operands.
In assignments between two variables, for a left operand with a type in the first column of the table, the right operand can be the same type or another type listed in the last column.
For example, given the variables myDecimal (decimal) and myInteger (integer), an assignment that sets myDecimal from myInteger is possible. In conditional expressions, you cannot directly compare a variable and an entity, or two variables of different types.
You must first assign the entity, or one of the two variables, to an intermediate variable of the appropriate type. For example, to compare myDecimal (decimal) and myInteger (integer), first assign myInteger to an intermediate decimal variable, and then compare the two decimal values.
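Outside of Mix's own assign actions and expression editor, and purely as an illustration, the pattern looks like this:

```python
# Illustrative pseudo-flow only; Mix.dialog uses assign actions, not code.
my_integer = 3            # variable of type integer
my_decimal = 2.5          # variable of type decimal

# A condition cannot compare the two types directly, so first copy the
# integer into an intermediate variable of type decimal...
temp_decimal = float(my_integer)

# ...then compare two values of the same type.
if my_decimal < temp_decimal:
    print("myDecimal is smaller than myInteger")
```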
This table shows the methods that are available for each simple variable type, when a variable is used in a dynamic message, a conditional expression, or on the right-hand side of an assignment. If your design has simple variables that will store reporting data, or sensitive information, you can mark these variables with a reporting property : either Attribute, Dimension, Metric, or Sensitive.
For complex variables, you can mark sensitive fields in the schema upon which they are based. At runtime, variables marked with a reporting property are written to event logs as key-value pairs with additional metadata, at session start (names and types), and whenever they are set or updated (names and values).
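The exact log schema depends on your environment, but a key-value entry of the kind described above might look roughly like this (the field names are assumptions):

```python
# Hypothetical shape of an event-log entry for a variable marked with a
# reporting property; actual field names depend on your environment.
log_entry = {
    "event": "variable-updated",
    "name": "orderTotal",          # variable name
    "type": "decimal",             # logged at session start
    "value": 12.75,                # logged when the variable is set or updated
    "reportingProperty": "Metric",
}
print(log_entry)
```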
Active Intent Value doesn't exactly behave as you would expect from a variable. You might think of it more as a function that returns the active intent at the state where you invoke it.
At runtime, only an intent mapper node can update Active Intent Value, based on the latest recognition results (built-in intent switching), or following an assign action (manual intent switching).
When the system recognizes that the user wants to fulfill a different intent, Active Intent Value does not immediately reflect the new intent: if you follow Active Intent Value as the dialog flows (for example, by using it in dynamic messages or by sending it to external systems), you'll notice that its value persists from the moment the dialog enters a mapped component—that is, a component that has been mapped to an intent, either in the NLU resource panel or in an intent mapper node—from an intent mapper node, all the way until the dialog eventually returns to the same intent mapper node, or reaches another one.
When the dialog enters a component via a component call node (as opposed to entering via an intent mapper node), Active Intent Value does not change.
In the case of an otherwise mapped component, any required entities can still be collected to fulfill the intent associated with this component but, if you use Active Intent Value for reporting purposes, you might notice that it doesn't match the intent for the component in focus.
Likewise, in a manual intent switching scenario, when an assign action is used to set Active Intent Value, the new value is not immediately applied and only becomes effective at the next intent mapper node.
However, in this scenario, you must also make sure there are no other question and answer nodes between the assign action and the intent mapper that will perform the switch since the recognition results from the last question and answer node would take precedence and be applied to Active Intent Value instead of the intent you specified.
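As a rough mental model only (this is not Mix syntax), the ordering constraint can be pictured like this:

```python
# Rough mental model only; none of this is Mix.dialog syntax.
active_intent = "ORDER_COFFEE"          # value currently held by Active Intent Value
requested_intent = "ORDER_PASTRY"       # set by an assign action (manual switch)

def intent_mapper(requested, latest_recognition=None):
    # Recognition results from an intervening question and answer node would
    # take precedence over the manually requested intent.
    return latest_recognition or requested

# The switch only takes effect when the dialog reaches an intent mapper node.
active_intent = intent_mapper(requested_intent)
print(active_intent)                    # "ORDER_PASTRY"
```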
At runtime, whenever the dialog enters a question and answer node, the last interpretation variables (collection and confirmation, simple and complex) are all cleared (set to null).
Therefore, if you have configured the Send Data section of the node properties to send any of these variables to the client application, since the node sends the specified data with each request it makes to the Dialog service, they will be sent as null with the first request, upon entering the node. Data specified in Send data is never sent when transitioning out of the node. Represents the last interpretation at the collection or confirmation step of a question and answer node.
Use this schema to create complex variables, when you need dynamic message references—for example, when a message does not directly depend on something the user said but on something else that can only be determined by the client application at runtime. This schema has two fields:. This reserved schema applies to the predefined complex variables lastCollectionResultObject and lastConfirmationResultObject.
Click Variables, on the toolbar, to open the Variables resource panel. Use the search field to narrow down the variables and schemas displayed in the Variables resource panel. The current section of the Variables resource panel updates as you type, showing only the variables, schemas, and fields that match the search string or that have fields that match the string.
Badges indicate the number of matches in each section. Use the handle icon to drag a field up or down, until all fields are in the desired sequence.
For complex variables, fields marked as sensitive in the base schema are masked in application logs. The Messages resource panel allows you to manage all messages in your project. When you set a message in a node, or in the Project Settings panel, or directly in the Messages resource panel, it becomes available for reuse anywhere within your project. To support language-specific messages, choose the desired language from the menu near the name of the project, or from the menu at the top of the Messages resource panel itself see Translate a message.
Click Messages , on the toolbar, to open the Messages resource panel. Use the search field in the upper-right corner of the Messages list, to narrow down the list to only messages that include the specified text—in their name or within their content.
You cannot delete or rename predefined events. Use the Events resource panel to add custom events for your project. You can also create a custom event on the fly when setting up an event handler.
Global command events Escalate, Goodbye, and MainMenu are thrown automatically, when a question and answer node happens to recognize the corresponding global command entity value. You can override global event handlers by setting component-level event handlers in the Enter node for a component. Default dialog events such as MaxTurns, MaxNomatch, MaxInvalidRecoOption are thrown automatically when their respective threshold is reached at a question and answer node.
You can configure these thresholds globally, or by channel, in the global settings of the project. You can set local event handlers for dialog events, in individual question and answer nodes , as needed. Question and answer nodes do not support local handlers for commands and custom events. That is, you cannot create node-level handlers for commands recognized at question and answer nodes.
In a question and answer node, such events must be handled through system actions, just like intents or entity values.
To add local handling for such events at a question and answer node, configure command overrides. Use throw event actions, in question and answer nodes, message nodes, or decision nodes, to throw custom events. It is also possible to use a throw event action to throw a global command event, if you want to handle a situation the same way as the global command—for example, if you want to transfer the user to a live agent to handle some error situations where a maximum threshold is reached, you can set a throw event action to throw the Escalate event.
Click Events, on the toolbar, to open the Events resource panel. Use the search field in the upper-right corner of the Events list, to narrow down the list to only events that include the specified text in their name. Updated Set messages to reflect support for square brackets and emojis in messages. Added Protection against infinite loops to mention the maximum number of times a dialog flow is allowed to loop through a node before reaching a question and answer node.
Updated Define interactive elements to reflect support for language-specific interactivity in multilingual projects, and that it is no longer required for an entity value to exist in every language to be able to configure an interactive element for this value.
Updated Channels , to mention the ability to add, modify, and disable channels after a project has been created. Updated Fulfill the intent , in Get started, and Set up a question router node , to reflect UX change in the question router node properties.
Updated Dynamic messages to reflect that, for new projects, the dialog service will no longer automatically add spaces in dynamic messages (it is possible to migrate existing projects to take advantage of this change, if needed). Updated Create a simple variable, to mention that the Try mode default value is intended as stub data for Get parameters of data access nodes only. Updated Configure TTS settings to reflect the ability to specify a custom voice manually. Updated requirements and examples for confirmation grammars.
Updated Send data to the client application , Set up a data access node , and Set up an external actions node , to reflect the ability to reorder Send data and Get data parameters. Updated Filter custom entities by type to reflect UI changes.
Added integer, as a compatible type for the right operand of an assignment, when the left operand is of type decimal —see Compatibility between variables and entities.
Updated Specify grammars for question and answer nodes , Specify grammars for commands , and Specify grammars for confirmation , to clarify that question and answer nodes can use the NLU model, for collection, and grammars for confirmation; or use grammars for collection, and the NLU model for confirmation, if desired.
Updated Set messages and Supported methods, to cover output formatting for dynamic messages. Added Create a grammar reference variable on the fly. Specified the maximum length, in characters, for notes in Manage variables. Specified the maximum length, in characters, for node descriptions. Added Show or hide the Components pane, and Show or hide the Node properties pane. Updated Manage variables to cover predefined variables: channelIntegration and userData.
Updated Manage entities to reflect that the literal-value pairs for list entities are now language-specific, and to cover regex-based entities. Added Manage intents , Manage entities , Manage variables , and Manage messages. Minor changes. See Get started and Dialog design elements.
Sample scenario simplified and further revised to match UX changes. See Get started.
Get started This section explains how to design a simple chat application. Open your project in Mix.dialog. If your project supports multiple languages, use the menu near the name of the project, to choose the language you want to start with. The new intent appears in the mapping table. The new intent component appears in the Components pane.
Switch to the Entities tab. Click the Add Entity icon. Select the desired data type for this entity for example, Generic , from the list that appears in the upper-right area of the panel. Click the Add icon. The new entity appears in the list of custom entities. Expand the Advanced settings section.
Use the fields at the bottom of this section to add a few representative literal-value pairs—for example, enter literals espresso, ristretto, americano. The literal text automatically doubles as the value, by default. If you want the value to be different, press Tab and type the desired value before pressing Enter. Multiple literals can have the same value, to help your application recognize the different ways a user might say an entity.
Click the Add icon next to the literal large, type double, and press Enter. Proceed in the same fashion to add the literal single, for the value small. Click NLU again, on the toolbar, to close the panel. Design your dialog Main component example Build a dialog by adding nodes and configuring their properties to direct the dialog flow based on interaction with the user. Greet the user Drag a Message node from the palette onto the Start node on the canvas. This automatically connects the Start node to the message node.
The properties for this message node appear in the Node properties pane. Click the default node name, Message, at the top of the Node properties pane, and replace it with a unique name—for example, Welcome.
Click the message placeholder. The message editor appears. Enter the desired greeting message—for example, Welcome to My Coffee Shop! This automatically generates a message ID, based on the message text. The message appears on the Welcome node. This automatically connects the message node to the question and answer node. Optional In the properties for the Welcome node, click the compact GoTo, to open the GoTo editor, and then replace the default GoTo transition label, with Always, to make it more obvious that this transition is not conditional.
Notice the GoTo Node field already indicates that the question and answer node is the next node in this flow.
Click the question and answer node on the canvas. A message placeholder appears. Enter the desired question—for example, How can I help you today? The question appears on the Get Intent node. Expand User Input and click Collect. Click Add Intent Mapper node. This automatically connects the Get Intent question and answer node to an intent mapper node.
Notice the intent mapper node indicates: 1 Mapped Component. Expand System Actions and click Default. Notice the compact GoTo already indicates that the intent mapper node is the next node in this flow. Optional Click the compact GoTo, to open the GoTo editor, and then replace the default GoTo transition label, with Always, to make it more obvious that this transition is not conditional.
Say goodbye to the user Click the intent mapper node on the canvas. Click the GoTo placeholder. The GoTo editor appears. This automatically connects the intent mapper node to a message node. This transition determines what happens when the dialog returns to this intent mapper node from an intent component, after the interaction associated with a specific intent is complete. Click the Message node on the canvas. Replace the default name of the message node with a unique name—for example, Goodbye.
Enter the desired parting message—for example, Thanks for visiting My Coffee Shop. The message appears on the Goodbye node on the canvas. This automatically connects the Goodbye node to an external actions node. Click the external actions node. Replace its default name with a unique name—for example, End.
Under Action Type , choose End. The End node represents the end of the conversation. Drag a Question Router node from the palette onto the canvas. The properties for the question router node appear in the Node properties pane.
A Collect placeholder appears. An editor appears. In the Node properties pane, click the Add icon , below the compact Collect parameter. Enter the desired question—for example, What would you like to drink? Expand Advanced settings. Since this scenario does not require the dialog flow to take different paths depending on the collected value, turn off all Show in Actions switches. Expand System Actions and click View All.
This means that, once the question and answer node has collected information relevant to the active intent—that is, any entities required to fulfill the intent—, the dialog flow goes back to the question router node. The question router node determines whether information still remains to be collected. Turn off all Show in Actions switches.
Wrap up Click the Get Order Details question router node on the canvas. This connects the question router node to a new message node. Click the new message node on the canvas.
Replace the default name of the message node with a unique name—for example, Wrap Up. Enter the desired message—for example, Your coffee is coming right up! The message appears on the Wrap Up node on the canvas.
Expand the GoTo Node list, and choose Return. The validation panel appears. Tip: You can change its position by dragging it. Click Run Validation. The panel reports any issues found in the design. If the panel reports warnings or errors , click Warnings or Errors to expand the list of issues. Note: With this simple dialog design, the validation panel reports missing transitions in the component called Main. This is because we haven't configured the two default global event handlers in the Start node.
You can safely ignore these warnings for now. If the NLU model for your project is not yet available, this also generates a warning. Click an issue to bring the affected node into focus, and correct your design as needed. In the Node properties pane, areas that require attention are outlined in red error or orange warning. Note: For issues related to a message, click the compact message to open the message editor. A link appears at the top of the editor, which lets you navigate directly to the Messages resource panel where you can address the issue.
Optional Click Run Again to validate your design again. Click one of the available channels. Click Continue. The Main flow of your dialog design appears in the main pane. If your project supports multiple languages, use the menu near the name of the project, to choose the language you want to use for this session, as desired.
Click Start , in the chat pane. Your greeting and initial question appear in the chat pane. Type something in the chat box at the bottom of the pane and press Enter. A response appears, based on what you typed. Pursue the conversation until you are satisfied with your scenario.
If you reached the end of the dialog, you can click Start New Session. Alternatively, click Restart at the top of the main pane, at any time, to try another scenario.
Entity names: Entity names must not start with a number or hyphen (-), and cannot include spaces. You cannot rename rule-based entities.
Event names: Custom event names are limited in length, and can only include letters (A-Z, a-z) and digits (0-9). You cannot rename predefined events.
Intent names: Intent names are limited in length, must not start with a number or hyphen (-), and cannot include spaces.
Node names: Except for the Start and Enter nodes, which you cannot rename, every node must have a unique name, across all nodes and components in your project.
Variable and schema names: You cannot rename predefined variables, predefined schemas, and their fields.
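A quick way to sanity-check candidate names against the rules above (the pattern is an illustration, not the exact validation Mix performs, and it does not enforce the length limits):

```python
import re

# Must not start with a digit or hyphen, and must not contain spaces.
NAME_PATTERN = re.compile(r"^[^\d\-\s][^\s]*$")

print(bool(NAME_PATTERN.match("ORDER_COFFEE")))   # True
print(bool(NAME_PATTERN.match("2ndIntent")))      # False - starts with a digit
print(bool(NAME_PATTERN.match("my intent")))      # False - contains a space
```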
Basic operations This section describes basic operations you can perform in Mix.dialog. Change the user interface language Click the gear icon, click Language, and choose the desired language. Change the active language If your project supports multiple languages, use the menu near the name of the project to switch to the desired language.
Resize the Components pane Move your pointer to the border between the Components pane and the main pane. The pointer switches to a left-right arrow handle. Click and drag the handle until the pane is the desired width.
Resize sections of the Components pane By default, the Components section and the Intent Components section of the Components pane are the same height. Double-click a section header to expand the section. This collapses the other section. Double-click the header of a fully-expanded section to restore the default layout. Switch to another component In the Components pane, click the component you want to bring into focus expand the Intent Components section, or the Components section, if needed.
Filter the Components pane You can narrow down the list of components and nodes that appear in the Components pane, by node type, keyword or both. Filter the Component pane by node type Click the Filter icon , and choose the desired type of node. Only nodes of the type you chose remain visible in the Components pane. Filter the Component pane by keyword Use the search field to narrow down the components and nodes displayed in the Components pane. Click the canvas, and drag to bring the desired area of your design into view.
In a multilingual project, switch to the desired language , if it is not already the active language. You can follow the progress of the training operation in the Notifications panel.
Review notifications (Notifications panel example) The Notifications panel collects status messages from various operations, such as importing resource data, training NLU models, and building resources. Status messages appear in chronological order, with the latest messages at the top. Click Clear All to dismiss all messages. Click the Notifications icon again to close the panel. Component operations In addition to the component called Main, Mix.dialog lets you create generic components and intent components.
Add a generic component This section explains how to add a component that is not an intent component. Click the Add icon , next to Components. Enter a unique name for the component see Naming guidelines. The new component appears in the Components section of the Components pane. Rename a generic component On the Options menu , choose Rename component. Modify the name as desired see Naming guidelines. Add an intent component This section explains how to add an intent component, and create the associated intent, as a single operation.
Click the Add icon , next to Intent Components. Type a valid intent name for the intent component, and press Enter. The new component appears in the Intent Components section of the Components pane. The corresponding intent and its new mapping also appear in the NLU resource panel.
Convert an intent component into a generic component On the Options menu , choose Convert to component. A message appears prompting you to confirm your intention. Note: This action cannot be undone. Click Confirm. The component now appears in the generic Components section of the Components pane.
Delete a component On the Options menu , choose Delete component or Delete intent component. Relink an intent component Relinking an intent component to the intent it is meant to handle ensures that their name is kept in sync: for example, if the intent name changes after you created the intent component, the intent component will be renamed accordingly.
On the Options menu , point to Link to existing intent. The list of all intents that are not currently linked to an intent component appears. Use the search field to narrow down the list, if needed. If the intent you would like to use is not in the list, type the desired name in the search field, and click the Add icon , to create the intent on the fly. Choose the desired intent. This automatically renames the intent component, which might move up or down in the Intent Components section of the Components pane where components appear in alphabetical order.
This also creates a global mapping, in the NLU resource panel , between the specified intent and this intent component. Unlink an intent component This section explains how to relink an intent component after you linked it to the wrong intent by mistake—that is, when you realize you chose the wrong intent while relinking an intent component that had a broken reference to the intent it was previously meant to handle.
On the Options menu , choose Unlink intent component. The unlinked intent component moves to the top of the Intent Components section, and a broken link icon appears next to it. Relink the intent component to the desired intent.
Proceed as appropriate, depending on the situation: Delete the intent , if you no longer need it. Remove the broken mapping , if you are not yet ready to map this intent. Remap the intent to another intent component, generic component, or node. Open the Options menu for a component In most situations, to expand the Options menu, you can bring your pointer to the badge that shows the number of nodes next to the desired component, and then click the More options icon that appears.
Node operations This section describes basic operations you can perform to manipulate the elements of a dialog design—that is, nodes and their interconnections—in Mix.dialog.
Add a node Drag a node from the design palette onto the canvas. Drag a node from the design palette onto any transition area of a question and answer node, a message node, or a decision node. Drag a node from the design palette onto the Success area or onto the Failure area of a data access node or external actions node set up for a transfer action. Drag a node from the design palette onto the Complete area of a question router node.
Drag a node from the design palette onto the On Return area of an intent mapper node, or a component call node. Click Components on the design palette and drag the component you want to call onto the canvas or onto a transition area of a node.
This adds a component call node. Remove a node Click the More icon for the node you want to remove. Choose Delete. Duplicate a node You can duplicate a node within a component or to another component.
Choose Duplicate. Duplicate a node to another component Click the More icon for the node you want to duplicate. Point to Duplicate node in and choose the desired component.
Tip: Use the search field to narrow down the list of components, if needed. Depending on the node type, you can perform most or all of these tasks: Change the node name Add a description Assign variables Add messages Set the transition to the next node in the dialog flow Define conditions Assign variables based on conditions Add messages based on conditions Add transitions based on conditions Add notes Reorder messages, actions, notes, and conditions See Design a dialog flow for more details.
Click the default node name, at the top of the Node properties pane, and replace it with a unique name. Rename a node directly on the canvas Double-click the node name on the canvas, modify it, and press Enter.
Alternatively: Click the More icon for the node you want to rename. Choose Rename. Modify the name as desired and press Enter. Click the Node description icon, next to the node name, at the top of the Node properties pane. Enter the desired description (maximum length applies) in the field that appears.
Show or hide a node description When you click a node on the canvas, if the Node description icon has a blue indicator, this means there is a description for this node. Click the Node description icon to show the description. Click the Node description icon again, to hide the node description.
Move a message, action, note, or condition Bring your pointer to the right-hand side of the message, action, note, or condition you want to move. Use the handle that appears, to drag the selected element up or down. Drop the selected element at the desired position, above or below another element, or inside a condition. Channels A channel defines a set of modalities that determine how your application will exchange information with its users.
A channel can be applied to various channel integrations. Undock a channel at a node Expand the main channel selector at the top of the node properties. Click Edit Channels. The channel dock switches to edit mode. Click the channel you want to undock at this node. Click Undock Channel. Repeat these two steps if you want to undock more channels at this node.
Click Done Editing. You can now select the undocked channel for which you would like to set channel-specific messages and actions , or interactive elements. Dock a channel at a node Expand the main channel selector at the top of the node properties. Click the channel you want to redock at this node. Click Dock Channel. The channel is linked back to the default All Channels dock. View channel-specific dialog flow On the gear menu choose the desired channels.
Global settings and behaviors (example: Project Settings panel showing default confirmation settings) Global settings define common functionality such as error recovery and command handling.
You can define settings and behaviors for your application at different levels:
Global: The top-level settings apply to all channels, that is, your whole project.
Channel: Settings you define for a specific channel take precedence over the global settings, in all parts of your project under the dialog flow for this channel.
Entity: Settings and behaviors you define for a specific entity will apply at any question and answer node where this entity might be collected, in the context of a specific channel. In an open dialog application where a question router node handles multiple entities to be collected, any of the question and answer nodes under the question router node is able to collect any of the entities. In such applications, setting entity behaviors at a specific question and answer node would not be sufficient.
Component: Event handling and error recovery behaviors you define for a specific component take precedence over the default behaviors set in the Start node of your project.
Node: You can override some settings at the node level, for individual question and answer nodes and message nodes. Node-level settings take precedence over the global settings and any component-level overrides.
Message: You can prevent users from interrupting specific messages by disabling barge-in at the message level, in the Messages resource panel.
The Project Settings panel is organized into these categories:
Conversation settings: Set how many times the application will try to collect the same piece of information (intent or entity) before giving up.
Collection settings: Set the low-confidence threshold, below which the application rejects a collected utterance and throws a nomatch event; the high-confidence threshold, above which it is not necessary for the application to prompt for confirmation; the number of nomatch events before the application throws a maxnomatch event; and how many times the application will try to collect the same piece of information after failing to detect any response from the user. You can also choose whether the initial message is to be used, or not, after nomatch and noinput recovery messages at question and answer nodes. You can set different high- and low-confidence thresholds separately, for each entity in your project. In a multilingual project, you can set different confidence thresholds separately, for each language.
Confirmation settings: Set the confirmation strategy for entities, including the low-confidence threshold below which the application rejects a collected utterance at the confirmation step and throws a nomatch event, and how many times the application will try to collect the same piece of information after the user responds negatively to the confirmation question. In multilingual projects, you can set different low-confidence thresholds separately, for each language. When specified, confirmation grammars apply to all channels.
Speech settings: Set the desired recognition speed and sensitivity, the weight for the ASR domain language model, and the default barge-in type. Barge-in is enabled by default, and can be disabled for individual messages in the Messages resource panel, or at the node level in the speech settings of a question and answer node, or in the settings of a message node.
TTS settings: Choose the desired voice per language, including gender and quality, for the text-to-speech engine. No default values. Only available in projects with channels that support the TTS modality.
Fetching properties: Set how long to wait before delivering a latency message when a data access request is pending (fetch delay, specified in ms), and the minimum time to play the message once started (fetch minimum, default is 0 ms; applies to audio messages only). Available at the global (all channels) level, and at the node level for data access nodes.
Grammars: Specify, for each channel, whether to allow referencing external speech or DTMF grammars in question and answer nodes. Only available at the channel level.
Confirmation default messages: Add default messages to handle confirmation for entities, including recovery behaviors at the confirmation step. Creating confirmation default messages directly from the Project Settings panel allows you to reference the entity being collected in a generic way by using the Current Entity Value predefined variable. Note: Current Entity Value cannot be marked as sensitive, and therefore would never be masked in message event logs. If sensitive data is likely to be presented in confirmation messages, make sure to configure local confirmation messages, at every question and answer node that is set up to collect a sensitive entity.
Global commands: Global commands are utterances the user can use at any time and which immediately invoke an associated action; for example: main menu, operator, goodbye. Enable the commands you want to support and add new ones if desired. Only available at the global (all channels) level.
Audio file extension: Extension to append to audio file IDs when exporting the list of messages. Only available in projects with channels that support the Audio Script modality.
Entities settings: Set a confirmation strategy for specific entities (predefined and custom), confirmation default messages, and other applicable settings in the collection, speech, TTS, and DTMF setting categories.
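As a rough sketch of how the collection and confirmation thresholds described above typically partition recognition outcomes (the threshold values here are assumptions, not defaults):

```python
# Illustrative only: how low- and high-confidence thresholds typically partition
# collection outcomes; the threshold values are assumptions.
LOW_CONFIDENCE = 0.30
HIGH_CONFIDENCE = 0.80

def collection_outcome(confidence: float) -> str:
    if confidence < LOW_CONFIDENCE:
        return "reject the utterance and throw a nomatch event"
    if confidence < HIGH_CONFIDENCE:
        return "accept the interpretation, but prompt for confirmation"
    return "accept the interpretation without confirmation"

print(collection_outcome(0.55))
```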
Data privacy Self-hosted environments: This setting category requires engine pack 2. Marking a question and answer node as sensitive prevents all user input collected at this node from being exposed in application logs. At runtime, anything found in the interpretation results for input collected at a sensitive question and answer node will be masked in the logs user text, utterance, intent and entity values and literals. For example, in the case of a question and answer node collecting a freeform entity , marking the node itself as sensitive will prevent the nomatch literal returned by the NLU service from being exposed in the logs.
Only available at the node level for question and answer nodes. If the entity meant to be collected at a question and answer node that is marked as sensitive is likely to be used in dynamic messages or exchanged with an external system at runtime, make sure to also mark the entity itself as sensitive , to ensure that it will be masked in all dialog event logs.
Other work on multi-task dialogs has focused on dialog interruptions (Yang et al.). Somewhat related are efforts to extend dialog systems to support conversations with multiple applications (sometimes referred to as cross-domain intentions), each of which has a particular specialization (Ming Sum and Rudnicky). There is, therefore, a need to support multiple task dialogs using a computerized personal assistant. In a multi-intent search dialog, according to an embodiment of the invention, a human user and a computerized personal assistant incrementally exchange information to support achievement of multiple tasks of the human user.
These multiple tasks can interact, and choices made by the user can be revised, during the course of the dialog. Those revisions can, in turn, lead to modifications in the ongoing specification of other tasks. The approach is a plan-based one in which a dialog between the two agents is viewed as a collaboration involving the tasks under discussion.
In accordance with an embodiment of the invention, there is provided a computer-implemented method for managing a dialog between a computerized personal assistant and a human user. The computer-implemented method comprises performing dialog processing to permit the computerized personal assistant to interact with the human user in a collaborative dialog to ascertain values of parameters to execute multiple task intentions of the human user in the same collaborative dialog, at least one of the multiple task intentions being initially partially specified.
The dialog processing comprises, with a task engine of the computerized personal assistant, iteratively expanding task intentions of an intention base comprising the multiple task intentions until the computerized personal assistant and the human user collaboratively arrive at values of the parameters of the multiple task intentions of the intention base that are executable by the computerized personal assistant.
The iteratively expanding task intentions of the intention base comprises, at each iteration, using the task engine of the computerized personal assistant in evaluating a new option to be expressed via an utterance of the computerized personal assistant to the human user, the new option comprising a new constraint that has not been considered before, that is consistent with the intention base, and that reduces future options for the intention base, the collaborative dialog thereby converging on the intention base being executable by the computerized personal assistant.
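A minimal sketch may help picture this iterative expansion. The following Python fragment is not the claimed implementation; the names (IntentionBase, propose, ask_user) and the consistency and option-counting checks are assumptions introduced purely for illustration.

```python
# Sketch of iterative expansion of an intention base, under assumed helper names.
# Consistency checking and option counting are stubbed out for brevity.
from dataclasses import dataclass, field

@dataclass
class IntentionBase:
    constraints: list = field(default_factory=list)

    def is_executable(self) -> bool:
        # Placeholder: executable once enough parameters have been fixed.
        return len(self.constraints) >= 3

    def consistent_with(self, constraint: str) -> bool:
        return constraint not in self.constraints          # stub

    def option_count(self, extra=None) -> int:
        # Placeholder for counting remaining options (e.g., matching restaurants).
        return max(0, 10 - len(self.constraints) - (1 if extra else 0))

def expand(ib: IntentionBase, propose, ask_user) -> IntentionBase:
    """Iteratively propose new, unconsidered constraints until the IB is executable."""
    while not ib.is_executable():
        option = propose(ib)                                # a constraint not considered before
        if ib.consistent_with(option) and ib.option_count(option) < ib.option_count():
            if ask_user(f"How about: {option}?"):           # utterance to the human user
                ib.constraints.append(option)               # converge on an executable IB
    return ib

# Usage sketch: the stand-in user accepts every proposal.
final = expand(IntentionBase(),
               propose=lambda ib: f"constraint_{len(ib.constraints)}",
               ask_user=lambda prompt: True)
print(final.constraints)
```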
In further, related embodiments, evaluating the new option may be based on a currently active task intention, any constraints for the currently active task intention that have already been considered in previous iterations, and any changes in the intention base that have resulted from revisions of the intention base in previous iterations.
The computer-implemented method may further comprise generating natural language to be uttered to the human user. At least two of the multiple task intentions may interact with each other by one or more of a greater cost or a lesser cost of performing the at least two of the multiple task intentions together.
In other related embodiments, the computer-implemented method may further comprise receiving, from a natural language understanding engine, a natural language interpretation of utterances of the human user to a speech recognition system.
The natural language interpretation may comprise at least one of: (i) intent data and mention list data from a statistical natural language system and (ii) logical form natural language data output from a deep natural language system. The natural language interpretation may be used as the basis for at least one of a new constraint and a new task intention for the collaborative dialog between the computerized personal assistant and the human user. The computer-implemented method may further comprise modeling a task intention of the human user in a dynamic intention structure built using a library of task recipes specifying how domain tasks are to be carried out in a hierarchical task model.
The dynamic intention structure may comprise, for each task intention: a task intention identifier, a task intention variable, an act, a constraint, and a representation of any subsidiary task dynamic intention structure. The computer-implemented method may further comprise modeling the at least one of the multiple task intentions, which is initially partially specified, using at least one of: (i) an existential quantifier within a scope of the initially partially specified task intention; (ii) an incompletely specified constraint of an intended action of the initially partially specified task intention; and (iii) an action description, which is not yet fully decomposed, of the initially partially specified task intention.
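The fields listed above map naturally onto a small record type. This Python sketch is illustrative only; the field names mirror the list in the text, while the types and the example values are assumptions.

```python
# Illustrative dynamic intention structure (DIS); types and example values are assumptions.
from dataclasses import dataclass, field

@dataclass
class DynamicIntentionStructure:
    intention_id: str                         # task intention identifier
    variable: str                             # task intention variable (e.g., the restaurant x)
    act: str                                  # the intended act (e.g., "reserve-table")
    constraints: list[str] = field(default_factory=list)
    subsidiaries: list["DynamicIntentionStructure"] = field(default_factory=list)

# A partially specified intention: "reserve a table at some restaurant x".
dis = DynamicIntentionStructure(
    intention_id="Id23",
    variable="x",
    act="reserve-table",
    constraints=["cuisine(x) = Italian"],
)
print(dis)
```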
Upon receiving an interpreted utterance of the human user that is unrelated to performing collaborative dialog to ascertain values of parameters to execute multiple task intentions of the human user, a natural language response to the human user may be generated, to guide the human user to return to the collaborative dialog.
In another embodiment according to the invention, there is provided a computerized collaborative dialog manager system for managing a dialog between a computerized personal assistant and a human user. The computerized collaborative dialog manager system comprises a processor, and a memory with computer code instructions stored thereon, the processor and the memory, with the computer code instructions, being configured to implement a task engine. The task engine is configured to perform dialog processing to permit the computerized personal assistant to interact with the human user in a collaborative dialog to ascertain values of parameters to execute multiple task intentions of the human user in the same collaborative dialog, at least one of the multiple task intentions being initially partially specified.
The dialog processing comprises iteratively expanding task intentions of an intention base comprising the multiple task intentions until the computerized personal assistant and the human user collaboratively arrive at values of the parameters of the multiple task intentions of the intention base that are executable by the computerized personal assistant. The task engine comprises an option engine configured to, at each iteration, evaluate a new option to be expressed via an utterance of the computerized personal assistant to the human user.
The new option comprises a new constraint that has not been considered before, that is consistent with the intention base, and that reduces future options for the intention base, the collaborative dialog thereby converging on the intention base being executable by the computerized personal assistant.
In further related embodiments, the task engine may be configured to evaluate the new option based on a currently active task intention, any constraints for the currently active task intention that have already been considered in previous iterations, and any changes in the intention base that have resulted from revisions of the intention base in previous iterations.
The computerized collaborative dialog manager system may further comprise a dialog generator configured to generate natural language to be uttered to the human user. The task engine may be configured to manage dialog in which at least two of the multiple task intentions interact with each other by one or more of a greater cost or a lesser cost of performing the at least two of the multiple task intentions together.
In other related embodiments, the computerized collaborative dialog manager system may further comprise an input processor configured to receive, from a natural language understanding engine, a natural language interpretation of utterances of the human user to a speech recognition system. The task engine may be configured to use the natural language interpretation, as the basis for at least one of a new constraint and a new task intention for the collaborative dialog between the computerized personal assistant and the human user.
The task engine may be configured to model a task intention of the human user in a dynamic intention structure based at least on consulting a library of task recipes specifying how domain tasks are to be carried out in a hierarchical task model. The dynamic intention structure implemented by the task engine may comprise, for each task intention: a task intention identifier, a task intention variable, an act, a constraint, and a representation of any subsidiary task dynamic intention structure.
The task engine may be further configured to, upon receiving an interpreted utterance of the human user that is unrelated to performing collaborative dialog to ascertain values of parameters to execute multiple task intentions of the human user, generate a natural language response to the human user to guide the human user to return to the collaborative dialog.
In another embodiment according to the invention, there is provided a non-transitory computer-readable medium configured to store instructions for managing a dialog between a computerized personal assistant and a human user.
The instructions, when loaded and executed by a processor, cause the processor to manage the dialog by performing dialog processing to permit the computerized personal assistant to interact with the human user in a collaborative dialog to ascertain values of parameters to execute multiple task intentions of the human user in the same collaborative dialog, at least one of the multiple task intentions being initially partially specified. The dialog processing comprises iteratively expanding task intentions of an intention base comprising the multiple task intentions until the computerized personal assistant and the human user collaboratively arrive at values of the parameters of the multiple task intentions of the intention base that are executable by the computerized personal assistant. The iteratively expanding task intentions of the intention base comprises, at each iteration, using the task engine of the computerized personal assistant to evaluate a new option to be expressed via an utterance of the computerized personal assistant to the human user, the new option comprising a new constraint that has not been considered before, that is consistent with the intention base, and that reduces future options for the intention base, the collaborative dialog thereby converging on the intention base being executable by the computerized personal assistant.
The foregoing will be apparent from the following more particular description of example embodiments, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments. Research in task-based dialog management has focused for the most part on dialogs between agents involved in a single task.
A major approach within this area of research has focused on the development of plan-based or collaborative systems where each agent shares beliefs, intentions and task information to enable completion of the task under discussion (Grosz and Sidner; Grosz and Kraus; see References section below).
In accordance with an embodiment of the invention, a computerized collaborative dialog manager system focuses instead on multiple tasks, such as planning a dinner and a movie or planning a weekend that might involve wine tasting, a balloon ride, and dinner. An embodiment implements the sorts of dialogs that one would like to support between a virtual personal assistant (VPA) and a human user who is pursuing those tasks. In such dialogs, users typically only incrementally reveal their preferences or constraints regarding an eventual choice and often shift between sub-dialogs for different tasks as the conversation unfolds.
Hence, the assistant cannot pursue the solution of tasks in a linear fashion: that is, by first solving one task and then moving on to the next. In accordance with an embodiment of the invention, a search dialog is roughly modeled as follows.
A user and a system start with a partially specified intention, say, to reserve a table at some restaurant. As the dialog evolves, each agent exchanges information with the other in the form of constraints, options and selections. The information exchanged reflects the expertise of each agent in the collaborative planning: the user will have personal preferences for and knowledge of certain restaurants, for example, while the system will have extensive information about restaurant locations and availability.
This process continues until the user and system arrive at a fully specified and executable version of the task intentions. A number of challenges arise in such multi-task dialogs. First, the tasks under consideration can interact in both positive and negative ways.
A negative interaction between, for example, the tasks of dinner at a restaurant and watching a movie at a theater later might occur through a choice of a restaurant whose location is farther from the theater than another choice. As the conversation interleaves between the individual task sub-dialogs, and because of the characteristic non-linearity of task elaboration discussed above, a user's specification of a task attribute can invalidate a developing plan for the other task, entailing revision of that other task description.
A second type of revision of past decisions can come about because users typically change their mind during such dialogs: initially a user might choose a particular restaurant that is Italian, only to later indicate a preference for Mexican cuisine, entailing revision of some of the consequences of the previous choice (Ortiz and Shen). This is in contrast to a typical master-apprentice dialog, in which it is assumed that a good master does not normally make mistakes.
Therefore, the dialog control that manages the moves between the subdialogs involving each task cannot be handled with a stack as is normally the case when dealing with interruptions: if the intention corresponding to the first task was put on a stack, that intention might itself be revised in the course of the conversation with a second task.
Moreover, if there are more than two tasks, there is no reason to believe that after updating or revising an intention as part of the current conversation one should return to the task discussed immediately before the interruption: there may be good reasons to go back to an older task that was revised as a consequence of the change.
To illustrate these phenomena and the challenges involved, as well as to motivate the approach taken in accordance with an embodiment of the invention, we turn first to the examples of FIGS. The scenario involves a user interacting with the system to obtain restaurant information followed by parking information. Later, the user also expresses interest in watching a movie after dinner.
During the dialog, the system and the user jointly refine, adopt and execute supporting intentions, and the user changes his mind several times about possible options. The three tasks interact spatially, as the user would like to secure parking as close as possible to the chosen restaurant, as well as temporally as in the case of the movie after the meal.
The system, for each new user constraint, either works toward reducing the space of options remaining or processes any side effects that might result from a changed constraint.
An implementation of a solution, in accordance with an embodiment of the invention, is an integrated system that processes spoken natural language utterances, followed by parsing, semantic processing, dialog processing, planning and reasoning. In the first 5 utterances of the example of FIG., the user and system begin the restaurant search. In utterance 3, the system summarizes a few available choices, rather than listing all of them.
Before settling on a choice, the user wants to check on parking in utterances 6-9, and the system must determine whether the parking is dependent on the dining task. This is signaled linguistically via anaphoric reference in utterance 6.
After settling on parking, in utterance 10, the system reminds the user that a specific restaurant has not yet been chosen, and in utterance 11 the user changes previous constraints involving cost and cuisine. At this point, the option space has changed, as the new constraint conflicts with previous ones. However, the system assumes that other constraints regarding location still hold. This leads to new recommendations in utterance 12 and initiation of a new parking task in utterance 13 (since the previous restaurant option was revised); the exchange that follows elaborates on the new task with new constraints involving the type of parking.
The anaphoric reference here is dealt with, in this example of a system in accordance with an embodiment of the invention, by choosing the last restaurant mentioned. Utterances 14 and onward of the example of FIGS. continue this exchange. In utterance 22, the user requests that the system reserve a table at the chosen restaurant, and the system infers that the user is actually going to eat at the restaurant (an intention to find a restaurant is revised to an intention to eat dinner) and will incorporate any appropriate information, such as travel time, into subsequent planning.
Utterances 23-25 complete the addition of the necessary details, but in utterance 26, the user decides to also watch an action movie after dinner. This triggers planning, explained in utterance 27, that involves temporal relaxation. Upon conclusion, the system interacts with a reservation server to make the reservation. To support multiple task dialogs using a computerized personal assistant in contexts such as those illustrated in FIGS., an embodiment provides a computerized collaborative dialog manager system.
The computerized collaborative dialog manager system comprises a processor, and a memory with computer code instructions stored thereon. The processor and the memory, with the computer code instructions, are configured to implement a task engine. The task engine is configured to perform dialog processing to permit the computerized personal assistant to interact with the human user in a collaborative dialog to ascertain values of parameters to execute multiple task intentions (here indicated as Task Intention 1 through Task Intention N) of the human user in the same collaborative dialog. One or more of the multiple task intentions are initially partially specified; for example, an intention to reserve a table at some restaurant.
Two or more of the multiple task intentions may interact with each other, in one or more task interactions, by which there is one or more of a greater cost or a lesser cost of performing the two or more of the multiple task intentions together.
The task engine includes an iterative expansion engine, which performs dialog processing that involves iteratively expanding the multiple task intentions that are included in an intention base. This continues until the computerized personal assistant and the human user collaboratively arrive at values of the parameters of the multiple task intentions of the intention base that are executable by the computerized personal assistant. The task engine also comprises an option engine that is configured, at each iteration, to evaluate a new option to be expressed via an utterance of the computerized personal assistant to the human user. In this embodiment, the option engine includes a currently active module, a visited constraints module and a changed constraints module. These modules include, for example, references to storage locations in memory (see FIG.).
The option engine uses these components to evaluate a new option (which it has generated, as described herein) based on a currently active task intention stored in the currently active module, any constraints for the currently active task intention that have already been considered in previous iterations (stored in the visited constraints module), and any changes in the intention base that have resulted from revisions of the intention base in previous iterations (stored in the changed constraints module). In this way, by iteratively evaluating such new options, the collaborative dialog converges on the intention base being executable by the computerized personal assistant. In order to generate the new option, the option engine can use one or more of: a heuristic option module, which uses a heuristic to generate the new option at each iteration; a systematic option module, which uses a systematic procedure to generate the new option at each iteration; a random option module, which uses a random or pseudo-random procedure to generate the new option at each iteration; or a module that uses another technique of generating the new option.
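As a rough picture of the heuristic, systematic, and random option modules, the sketch below shows three interchangeable generators. The function names, the constraint strings, and the pruning estimate are assumptions for illustration, not the patented modules.

```python
# Sketch of interchangeable option generators; all names are assumptions.
import random

def heuristic_option(candidates, visited, pruning):
    """Prefer the unvisited constraint estimated to prune the option space the most."""
    unvisited = [c for c in candidates if c not in visited]
    return max(unvisited, key=pruning, default=None)

def systematic_option(candidates, visited):
    """Walk the candidate constraints in a fixed order (e.g., cuisine, then price, then location)."""
    return next((c for c in candidates if c not in visited), None)

def random_option(candidates, visited):
    """Pick any unvisited constraint at random (or pseudo-randomly)."""
    unvisited = [c for c in candidates if c not in visited]
    return random.choice(unvisited) if unvisited else None

# Example: the option engine could swap in any of these generators at each iteration.
candidates = ["cuisine(x)=Indian", "price(x)=moderate", "near(x, parking)"]
print(systematic_option(candidates, visited={"cuisine(x)=Indian"}))   # price(x)=moderate
```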
In the embodiment of FIG., the iterative expansion engine (see FIG.) includes a task intention relatedness assessment module, which communicates with the intention base of FIG. The task intention relatedness assessment module determines whether the human user has proposed a new task intention that is related in parameters to an existing task intention of the intention base. If so, the task intention relatedness assessment module shares parameters between the new task intention and the existing task intention in the intention base. If, on the other hand, the task intention relatedness assessment module determines that the human user has proposed a new task intention unrelated to an existing task intention, it augments the intention base with the new task intention.
Further, the task intention relatedness assessment module can be configured to, upon receiving an interpreted utterance of the human user that is unrelated to performing collaborative dialog to ascertain values of parameters to execute multiple task intentions of the human user, determine the need to generate a natural language response to the human user to guide the human user to return to the collaborative dialog.
This can, for example, be performed by the module adjusting components of the intention base, or another parameter controlled by the task engine, which are subsequently used by the dialog generation module (see FIG.). The iterative expansion engine of FIG. also includes an added or changed constraints module.
If the human user adds a new constraint or changes an existing constraint, the added or changed constraints module revises the intention base to include the new or changed constraint and to change any other constraints in the intention base that are affected by it, and reflects the changed or added constraints in the changed constraints module of the option engine.
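The relatedness check and the constraint-change handling just described can be pictured with a compact sketch. The relatedness test, the conflict predicate, and the data layout below are placeholders, not the claimed procedures.

```python
# Sketch of folding a new user move into the intention base (IB).
# The relatedness test and revision logic are placeholders, not the claimed procedures.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    params: dict = field(default_factory=dict)     # e.g., {"location": "x"}

@dataclass
class IB:
    tasks: list = field(default_factory=list)
    constraints: list = field(default_factory=list)
    changed: list = field(default_factory=list)

def add_task(ib: IB, new: Task) -> None:
    """Share parameters with a related existing task, else simply augment the IB."""
    related = next((t for t in ib.tasks if set(t.params) & set(new.params)), None)
    if related:
        new.params = {**related.params, **new.params}   # e.g., parking near the chosen restaurant
    ib.tasks.append(new)

def change_constraint(ib: IB, constraint: str) -> None:
    """Record a new or changed constraint and flag it for the option engine."""
    ib.constraints = [c for c in ib.constraints if not conflicts(c, constraint)] + [constraint]
    ib.changed.append(constraint)

def conflicts(old: str, new: str) -> bool:
    # Placeholder: a real system checks consistency (e.g., Italian vs. Mexican cuisine).
    return old.split("=")[0] == new.split("=")[0]
```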
With reference to the embodiment of FIG., procedures a, a, and a are examples of procedures used to interact with the currently active module of FIG. Procedure a of FIG. Procedure a is an example of a procedure used by the task intention relatedness assessment module of FIG.
Procedure b is an example of a procedure used by the task intention relatedness assessment module of FIG. Procedure a is an example of a procedure used by the added or changed constraints module of FIG. The procedure ends on lines 25 and 26 of the procedure of FIG.
In more detail, in the embodiment of FIG., the procedure operates as follows. Multiple intentions in search dialogs are not treated as interruptions using a stack, because the next step in a dialog might not necessarily involve the most recent intention that had been expanded. Instead, the procedures choose and option, in lines 3 and 8 respectively, consider all of the current DIS's during each step in the loop and pick one to explore next.
In the procedure of the embodiment of FIG., constraints are stored with the id of the associated intention. The user's input, from line 8 of FIG., is then processed. If the user rejects the proposal (lines 14 and 15), the set V is simply augmented. If the user instead suggests a new task (lines 16-19), such as a request for parking information in the middle of restaurant considerations, then that input is used to augment the IB. Line 16 checks if there is a way to combine the new DIS with the existing one so that variables can be shared between the two tasks.
For example, utterances 6-8, 13, and 26 of FIGS. illustrate this case. If the user adds a new constraint (line 22; example utterances 4, 11 and 15) unrelated to the suggestions from the system, the system revises the IB (line 22) and returns the new IB and any changes that result.
This case is the most complicated: as we noted earlier, a revision such as the one found in utterance 11 of FIG. can have side effects on other parts of the IB. These are collected in line 22 of FIG. In the example procedure of FIG., since each new option is checked for consistency, as are possibly inconsistent suggestions by the user (in line 22), the procedure eventually converges to an executable IB.
In default settings, there may be templates that suggest which parameters should be determined first. However, the procedure is not complete, since the user will only visit a subset of possible options that would lead to a fully fleshed-out IB. This is actually a feature of the procedure as a user would be quite unhappy if forced to consider every possible option before a final decision.
In this way, the procedure adopts a satisficing approach. In an embodiment according to the invention, a separate step can, for example, be added to handle conventional interruptions using a stack.
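The control flow just described can be summarized in a short loop. This is a sketch under assumed names: it reuses the IB, add_task, and change_constraint helpers from the earlier sketch, and the reply structure and executability test are likewise assumptions rather than the procedure shown in the figures.

```python
# Satisficing dialog loop sketch: proposals continue only until the IB is executable,
# not until every possible option has been visited. Names are assumptions; add_task
# and change_constraint are the helpers defined in the earlier sketch.

def ib_is_executable(ib) -> bool:
    # Placeholder: executable once enough constraints have been fixed.
    return len(ib.constraints) >= 3

def search_dialog(ib, propose, ask_user, interpret):
    visited = set()                                      # the set V of constraints already considered
    while not ib_is_executable(ib):
        option = propose(ib, visited)                    # any current DIS may be explored next (no stack)
        visited.add(option)
        reply = interpret(ask_user(f"How about {option}?"))
        if reply["kind"] == "reject":
            continue                                     # V is simply augmented
        if reply["kind"] == "new_task":
            add_task(ib, reply["task"])                  # share variables with a related task if possible
        elif reply["kind"] == "constraint":
            change_constraint(ib, reply["constraint"])   # revise the IB and record side effects
        else:                                            # acceptance of the system's proposal
            change_constraint(ib, option)
    return ib
```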
The computerized collaborative dialog manager system includes an input processor that is configured to receive, from a natural language understanding engine, a natural language interpretation of utterances of the human user to a speech recognition system.
The natural language interpretation can, for example, include at least one of: (i) intent data and mention list data from a statistical natural language system and (ii) logical form natural language data output from a deep natural language system.
The task engine is configured to use the natural language interpretation as the basis for at least one of a new constraint and a new task intention for the collaborative dialog between the computerized personal assistant and the human user, for example by first receiving the natural language interpretation via a semantic graph and user intent module. The task engine is configured to model a task intention of the human user in a dynamic intention structure (or other intentional structure), based at least on consulting a library of task recipes. The task recipes specify how domain tasks are to be carried out in a hierarchical task model.
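The two interpretation forms mentioned above, (i) an intent with a mention list and (ii) a logical form, can be pictured as alternative payloads of a single record. The field names and example values in this Python sketch are assumptions, not the actual NLU output schema.

```python
# Illustrative NLU interpretation payloads; field names and values are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Mention:
    entity: str          # e.g., "CUISINE"
    literal: str         # e.g., "tikka masala"
    value: str           # e.g., "Indian"

@dataclass
class Interpretation:
    # (i) statistical NLU output: an intent plus a mention list ...
    intent: Optional[str] = None
    mentions: list = field(default_factory=list)
    # ... or (ii) deep NLU output: a logical form
    logical_form: Optional[str] = None

stat = Interpretation(intent="FIND_RESTAURANT",
                      mentions=[Mention("CUISINE", "tikka masala", "Indian")])
deep = Interpretation(logical_form="exists x. restaurant(x) & cuisine(x, Indian)")
print(stat, deep, sep="\n")
```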
Turning to FIG., the iterative expansion engine and option engine each interact with the intention base, and with each other, as taught herein. In addition, the intention base of FIG. includes, for example, for each task intention: a task intention identifier, a task intention variable, an act, a constraint, and a representation of any subsidiary task dynamic intention structure.
The task engine includes a task recipe translator engine, which consults the task recipe library to model a task intention of the human user in the dynamic intention structure. For example, by consulting the hierarchical task model of the task recipe library, the task recipe translator engine can establish which tasks are indicated as subsidiary tasks for a task intention, using the representation of the subsidiary task dynamic intention structure.
This embodiment indicates in schematic form an example of a task intention that includes representations of subsidiary intentions. DIS's range over act types. As shown in FIG., hierarchical action structure is captured via sub-boxes. In the representation shown in FIG., each intention has an Id and an act type. The variables x and t, for example, under Id23, are shared among the sub-intentions. In accordance with an embodiment of the invention, a collection of intentions is modeled in an Intention Base (IB).
When an agent changes his mind about a constraint, revision occurs. The revision of an IB involves a number of steps (Ortiz and Hunsberger). What follows is an example focusing on revision of constraints with respect to cases 1 and 2 above regarding incomplete intentions. With this definition of revision, side effects can be removed automatically.
Here, it is noted that there is not always a unique revision, but in this example, for the sake of simplicity, it is assumed that there is. In addition, it is noted that one can think of the variable x as appearing in non-rules. During the revision, boxes are unpacked into components, like the constraints shown here, revised and checked for consistency in terms of the modal logic translation, and then reconstructed into new boxes.
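A toy version of this revision step follows: it retracts a constraint, automatically drops side effects derived from it, and installs the new value. The constraint encoding and the simple prefix-based side-effect test are assumptions and are far simpler than the modal logic translation referred to above.

```python
# Toy revision of a constraint set; the real system translates boxes to a modal
# logic, checks consistency, and reconstructs the DIS. The encoding here is assumed.

def revise_constraints(constraints: dict, key: str, new_value: str) -> dict:
    """Replace constraints[key] and drop any side-effect constraints derived from it."""
    revised = {k: v for k, v in constraints.items()
               if not k.startswith(f"derived:{key}")}      # side effects removed automatically
    revised[key] = new_value
    return revised

# The user initially chose Italian cuisine, which fixed a derived location;
# switching to Mexican retracts that side effect as well.
before = {"cuisine(x)": "Italian", "derived:cuisine(x):location(x)": "North Beach"}
after = revise_constraints(before, "cuisine(x)", "Mexican")
print(after)   # {'cuisine(x)': 'Mexican'}
```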
An embodiment according to the invention implements a conversational assistant prototype system called the Intelligent Concierge. The system converses with the user about common destinations such as restaurants, movie theaters, and parking options. It helps users refine their needs and desires until they find an option they are happy with. A Natural Language Understanding (NLU) pipeline provides input to the Collaborative Dialog Manager (CDM), which operates at the center of the Concierge, taking a user utterance in the form of natural language text produced by a speech recognition system and interpreting it.
With the aid of a library of reasoning components and backend knowledge sources, the CDM interprets input in the context of the current dialog and evolving intentions, and processes dialogs of the form seen herein: taking the user's request, performing required actions such as making a restaurant reservation, requesting more information such as preferences regarding a particular cuisine, or offering information to the user such as providing a list of restaurant options. External sources, such as OpenTable, are accessed via backend reasoning processes.
The operation of the CDM and its support for search dialogs is assisted by tightly integrating the dialog manager with supporting reasoning modules: the latter inform the dialog manager as to what to say next, how to interpret new user input, or how to revise an intention.
A temporal relaxation planner can be incorporated for reasoning about domain actions; it produces the output associated with utterance 27, for example.
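At a very high level, the integration of the NLU pipeline, the CDM with its reasoning modules, and the backend sources can be pictured as a simple turn-handling chain. The function and field names below are assumptions, not the Concierge codebase; the stand-in collaborators exist only to make the sketch runnable.

```python
# High-level sketch of a single Concierge turn; all collaborator functions are
# assumptions supplied by the caller, not the actual Concierge components.

def handle_turn(text, interpret, update_dialog, query_backend, generate):
    interpretation = interpret(text)              # NLU pipeline output
    move = update_dialog(interpretation)          # CDM plus reasoning modules decide the next move
    if move.get("backend_request"):
        move["results"] = query_backend(move["backend_request"])   # e.g., a reservation lookup
    return generate(move)                         # natural language back to the user

# Minimal demo with stand-in collaborators.
reply = handle_turn(
    "book an Italian place for two",
    interpret=lambda t: {"intent": "FIND_RESTAURANT"},
    update_dialog=lambda i: {"backend_request": "italian restaurants", "ask": "Which neighborhood?"},
    query_backend=lambda q: ["Trattoria A", "Osteria B"],
    generate=lambda m: m["ask"],
)
print(reply)   # "Which neighborhood?"
```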
Interacting with a deployed dialog follows four steps, sketched in the example after this list:

Step 1. Generate token
Step 2. Authorize the service
Step 3. Start the conversation
Step 4. Step through the dialog
Step 4a. Interact with the user (text input)
Step 4b. Interact with the user

A dialog design comprises nodes that perform operations such as prompting the user, evaluating a response, retrieving information from a backend system, or transferring the user to a live agent.
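The sketch below follows the four steps above as a generic REST-style flow. All endpoints, payloads, and field names are invented for illustration and are not the Mix dialog API; only the overall token, authorize, start, and step sequence mirrors the list above.

```python
# Hypothetical REST-style flow for the four steps above. Endpoints, payloads,
# and field names are invented for illustration and are NOT the Mix API.
import requests

BASE = "https://dialog.example.com"          # hypothetical host

def run_session(client_id, client_secret, model_ref, user_inputs):
    # Steps 1-2: generate a token and authorize the service.
    token = requests.post(f"{BASE}/oauth/token",
                          data={"client_id": client_id,
                                "client_secret": client_secret}).json()["access_token"]
    headers = {"Authorization": f"Bearer {token}"}

    # Step 3: start the conversation against a deployed dialog model.
    session = requests.post(f"{BASE}/dialog/start",
                            headers=headers,
                            json={"model": model_ref}).json()["session_id"]

    # Step 4 (4a/4b): step through the dialog, sending each user turn as text.
    for text in user_inputs:
        reply = requests.post(f"{BASE}/dialog/{session}/execute",
                              headers=headers,
                              json={"user_text": text}).json()
        print(reply.get("prompt"))
```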