Cognitive Engines
In this documentation, we will focus on understanding the general functioning of cognitive engines, their integration with Lynn, and the advantages they offer.
Before trying to understand what a cognitive engine is, we must comprehend the concept of a bot. A bot is an application capable of performing tasks autonomously and interacting with humans through a communication channel, usually in natural language.
For a bot to be a useful tool, it is essential that both the recognition of the user's intent and the configured response are accurate. To achieve this, the use of cognitive engines that incorporate natural language processing (NLP) modules is key, as they allow the definition of the user's intent through training processes.
Implementing a bot with conversational AI is an excellent way to automate support and enhance the service provided by agents, which, in the medium term, translates into cost optimization. In this way, organizations can ensure 24/7 service, reduce human errors, and lower expenses. By integrating bots into the customer relationship management strategy, you can be sure that your customers will receive efficient and competent assistance.
Every cognitive engine emulates human thinking through advanced computing techniques to predict and interpret the actions to be executed. The skills of cognitive engines are trained (NOT programmed), which allows them to be adapted to whatever scenarios the administrator needs to cover.
These capabilities are exposed as services or APIs available over the Internet through REST, exchanging data in JSON and XML formats. The vendors also provide SDKs with which Lynn can interact.
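For illustration only, this interaction typically looks like the sketch below. The endpoint, payload, and response fields are hypothetical; each engine (Watson, CLU, Dialogflow, etc.) defines its own URL scheme, authentication, and JSON schema.

```python
import requests

# Hypothetical endpoint and payload, for illustration only; real engines each
# define their own URL scheme, authentication method, and JSON structure.
ENGINE_URL = "https://nlp.example.com/v1/analyze"
API_KEY = "<your-api-key>"

response = requests.post(
    ENGINE_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": "I want to check my account balance"},
    timeout=10,
)
response.raise_for_status()

result = response.json()
# A typical (hypothetical) response carries the detected intent and its confidence.
print(result.get("intent"), result.get("confidence"))
```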
Companies like Microsoft, IBM, and Google are dedicating significant efforts to the development of this technology. Some examples include IBM's cognitive engine Watson, Microsoft's Language Understanding service (LUIS), and Google's Dialogflow ES and Dialogflow CX products, among many other cognitive engines available in the market.
It is worth noting that Lynn incorporates the leading cognitive engines from these companies. In the upcoming chapters, we will explain how Lynn integrates these engines and how it can integrate any cognitive engine.
Adding a Cognitive Engine to a Tenant
During Tenant Creation
It is possible to import a cognitive engine into the tenant's configuration during the creation of a tenant.
- In the dashboard, click on the New Tenant button located at the top right of the view.
- Fill in the required configuration data in the Start and General steps.
- In the Configuration step, in the Default flow template section, select the option Import from a cognitive engine.
- Below, a list will be displayed showing all supported cognitive engines.
- Select the cognitive engine and proceed with its configuration.
If this configuration is not made, the tenant will be created with a default cognitive engine from Lynn.
From the Flowchart
While in the flowchart:
- Click on the Cognitive Contexts button (if a cognitive context has already been configured in the tenant, the button shows the name assigned in that configuration). A window will open; if the tenant has cognitive engines, it displays a list of their names along with the corresponding icons.
- Click on the Add button located in the upper right corner of the window. A list will appear showing all the cognitive engines compatible with Lynn.
- Select the cognitive engine and proceed with its configuration.
IBM Watson
Prerequisites
Create an IBM Watson Assistant instance:
- Go to the IBM Cloud platform and create an account if you don’t already have one.
- Once inside, search for "Watson Assistant" in the catalog and select the option to create an instance.
- Choose a plan that fits your needs (there are both free and paid options).
Configure the Assistant:
- After creating the instance, access Watson Assistant from your IBM Cloud dashboard.
- In the Watson Assistant panel, create a new assistant and start configuring it.
- Define the skills the assistant needs, which may include:
  - Intents: these represent the user's intent.
  - Entities: key elements within an intent that help refine the response.
Configuration in Lynn
Name: Input field of type String for entering the name that will identify the cognitive engine within the tenant.
EndPoint: Input field of type String for defining the specific address where the service can be accessed (web application URL).
Skill ID: Input field of type password for entering the identifier assigned to each skill created in IBM Watson Assistant. This identifier is used to distinguish and work with specific skills within the development environment or via the API. It consists of 32 hexadecimal characters separated by 4 hyphens.
- Within the assistant, select the skill you’re interested in.
- Once on the skill page, go to the Settings section or click on the three-dot menu in the upper right corner and select View API Details or API Details.
- In this section, you’ll find API details, including the Skill ID (sometimes referred to as the Skill Identifier).
Assistant ID: Input field of type password for entering the Assistant ID.
- In the Watson Assistant dashboard, select the Assistants option from the left menu.
- You’ll see the list of assistants; look for the three vertical dots menu in the upper right corner.
- Select Settings. In this section, you’ll see the Assistant ID along with other credentials needed to make API calls.
API key: Input field of type password to enter the API key associated with the service credentials. Remember that the API key is case-sensitive and is essential for authenticating requests made through the Watson API.
Default cognitive context: Checkbox field to define whether this engine should act as the default cognitive context. This means that evaluations will use this context unless an action within the flow switches to another pre-configured cognitive context.
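As a reference, the sketch below shows how the values above map to a direct call to the Watson Assistant v2 REST API, outside of Lynn. The version date, example endpoint URL, and placeholder IDs are assumptions; use the values from your own IBM Cloud instance.

```python
import requests

# Values corresponding to the Lynn configuration fields above (placeholders).
ENDPOINT = "https://api.us-south.assistant.watson.cloud.ibm.com"  # EndPoint field
ASSISTANT_ID = "<assistant-id>"                                   # Assistant ID field
API_KEY = "<api-key>"                                             # API key field
VERSION = "2021-06-14"  # assumption: any currently supported version date works

# IBM Cloud IAM API keys use HTTP basic auth with the fixed user name "apikey".
auth = ("apikey", API_KEY)

# 1. Open a session for the assistant.
session = requests.post(
    f"{ENDPOINT}/v2/assistants/{ASSISTANT_ID}/sessions",
    params={"version": VERSION},
    auth=auth,
    timeout=10,
).json()
session_id = session["session_id"]

# 2. Send a user utterance and read back the recognized intents.
reply = requests.post(
    f"{ENDPOINT}/v2/assistants/{ASSISTANT_ID}/sessions/{session_id}/message",
    params={"version": VERSION},
    auth=auth,
    json={"input": {"message_type": "text", "text": "I need help with my invoice"}},
    timeout=10,
).json()
print(reply["output"].get("intents", []))
```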
NLU Lynn
[Name]: This is a mandatory field where you should enter the name that will identify the cognitive engine being created.
[EndPoint]: This field defines the communication endpoint that can be accessed through the URL specified in the NLU Lynn interface, under the configuration details of the created application.
[Workspace]: The ID of the workspace to work with. It is unique to each workspace within a client and contains the collections of intents and entities used to define and train models.
[Server]: In this field, specify the server where the database is located.
[Database Name]: Enter the name of the database you will be working with.
[User]: Enter the username corresponding to the database you will be working with.
[Password]: Enter the password corresponding to the database you will be working with.
[Default Cognitive Context]: Allows defining whether the cognitive context should be configured by default for the tenant.
NLU Lynn NLU 2.0
Note: Consider that NLU 2.0 must have a minimum of 5 training phrases.
[Name]: This is a mandatory field where you should enter the name that will identify the cognitive engine being created.
[EndPoint]: This field defines the communication endpoint that can be accessed through the URL specified in the NLU Lynn NLU 2.0 interface, under the configuration details of the created application.
[InferenceURL]: Enter the corresponding URL for cognitive evaluation.
[Publish and training]: This field allows training the engine to enhance its ability to understand and respond accurately to user queries in that specific domain.
[Client Id]: The client or user ID, corresponding to their unique ID in the Firebase database where all of their projects, workspaces, and data are stored.
[Workspace]: The ID of the workspace to work with, unique to each workspace within a client; it contains the collections of intents and entities used to define and train models.
[Project Id]: The ID or name of the project the client wants to work with. It identifies the project to be worked on, keeping in mind that different models can be generated from the same dataset (within the same workspace).
[User]: Enter the username used in LEA.
[Password]: Enter the user password used in LEA.
[Default Cognitive Context]: Allows defining whether the cognitive context should be configured by default for the tenant.
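Purely as an illustration of how these values fit together (the grouping and key names below are hypothetical, since Lynn collects them through its own interface), the NLU Lynn NLU 2.0 configuration can be summarized as:

```python
# Hypothetical summary of the NLU Lynn NLU 2.0 fields described above;
# Lynn gathers these values through its UI, so this dict is illustrative only.
nlu_lynn_v2_settings = {
    "name": "my-cognitive-context",        # Name
    "endpoint": "<nlu-endpoint-url>",      # EndPoint
    "inference_url": "<inference-url>",    # InferenceURL
    "client_id": "<firebase-client-id>",   # Client Id
    "workspace": "<workspace-id>",         # Workspace
    "project_id": "<project-id>",          # Project Id
    "user": "<lea-user>",                  # User (LEA)
    "password": "<lea-password>",          # Password (LEA)
    "default_cognitive_context": True,     # Default Cognitive Context
}
```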
Dialog Flow
To begin, create an agent at the following link: https://dialogflow.cloud.google.com/, in the 'Create New Agent' section.
If you already have an agent, open the same link, but this time go to the settings of the existing agent. In these settings, under the 'GOOGLE PROJECT' section, click on the Project ID.
Next, go to the APIs & Services section and open the credentials. Here, you will need to manage a service account, creating one if needed.
Once the service account is created, go to the key creation section and generate a new key in JSON format.
After completing these steps, you can access Lynn and upload the generated file using the button indicated below.
[Button to Upload Configuration File]: Allows attaching a .json format file to fill in the configuration fields of the Dialog Flow cognitive engine.
[Name]: This is a mandatory field where you should enter the name that will identify the cognitive engine being created.
[Default Cognitive Context]: Allows defining whether the cognitive context should be configured by default for the tenant.
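For reference, the sketch below shows how the downloaded service-account key can also be used to query the same agent directly with the google-cloud-dialogflow (ES) client library; the key file path, project ID, session ID, and language code are placeholders.

```python
from google.cloud import dialogflow           # pip install google-cloud-dialogflow
from google.oauth2 import service_account

# The service-account key in JSON format generated in the steps above (placeholder path).
credentials = service_account.Credentials.from_service_account_file("agent-key.json")

client = dialogflow.SessionsClient(credentials=credentials)
session = client.session_path("my-project-id", "lynn-session-001")  # placeholders

query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(text="I need to reset my password", language_code="en")
)
response = client.detect_intent(
    request={"session": session, "query_input": query_input}
)

# The detected intent and its confidence, as reported by Dialogflow ES.
print(response.query_result.intent.display_name,
      response.query_result.intent_detection_confidence)
```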
Microsoft CLU
Name: A required field where the name identifying the created cognitive engine is entered. This name allows easy identification and management of the resource within the Microsoft CLU platform.
Endpoint: A field where the communication endpoint, or access URL, is specified. This endpoint is generated in the Microsoft CLU interface and provides a direct connection to the configured cognitive engine.
Key: Field for the service's authentication key. This key ensures that only authorized users can access the cognitive engine, securing communications between the application and Microsoft CLU.
Project ID: A unique identifier for the project in Microsoft CLU. This field is essential to link and manage the cognitive engine associated with the specific project within the Azure ecosystem.
Version: Field indicating the version of the CLU model in use. It helps maintain version control and ensures applications always use the correct version of the configured cognitive engine.
Evaluation Language: Field that defines the language for processing and evaluation. This allows the cognitive engine to interpret and generate responses in the specified language, improving result accuracy and relevance.
Default Cognitive Context: Allows you to specify whether the cognitive context should be configured as the default for the tenant.
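As a reference, the sketch below maps these fields onto a direct call to the CLU analyze-conversations REST endpoint. The API version, example resource name, and placeholder values are assumptions; adjust them to your own Azure Language resource.

```python
import requests

# Values corresponding to the Lynn configuration fields above (placeholders).
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"  # Endpoint field
KEY = "<key>"                     # Key field
PROJECT = "<project-name>"        # Project ID field
DEPLOYMENT = "<deployment-name>"  # Version field (deployment of the trained model)
API_VERSION = "2023-04-01"        # assumption: use a version your resource supports

body = {
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {
            "id": "1",
            "participantId": "user",
            "text": "I want to pay my bill",
        }
    },
    "parameters": {
        "projectName": PROJECT,
        "deploymentName": DEPLOYMENT,
        "language": "en",  # Evaluation Language field
    },
}

response = requests.post(
    f"{ENDPOINT}/language/:analyze-conversations",
    params={"api-version": API_VERSION},
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json=body,
    timeout=10,
)
prediction = response.json()["result"]["prediction"]
print(prediction["topIntent"], prediction["intents"])
```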
For more detailed information on each field and how to use them, refer to the CLU documentation.
Microsoft LUIS
Note: Please note that as of October 1, 2025, Microsoft LUIS will no longer be available; access to it will be officially discontinued in favor of Conversational Language Understanding (CLU).
Prerequisites:
- Have an active MS LUIS account for cognitive integrations (https://www.luis.ai/).
- Generate access credentials for external integrations.
[Name]: This is a mandatory field where you should enter the name that will identify the cognitive engine being created.
[EndPoint]: This field defines the communication endpoint that can be accessed through the URL specified in the Microsoft LUIS interface, under the configuration details of the created application.
[Key]: Obtain this field from the "manage/AzureResources" section when creating the cognitive engine in luis.ai.
[App Id]: Extract this field from the "manage/settings" section during the creation of the cognitive engine in luis.ai.
[Version]: The information in this field is visible next to the name assigned to the cognitive engine.
[SlotName]: In this field, specify the environment in Microsoft LUIS you want to work with; this can be the Production slot or the Staging slot.
[Default Cognitive Context]: Allows defining whether the cognitive context should be configured by default for the tenant.
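For reference, the sketch below shows how these values map onto the LUIS v3 prediction REST endpoint; the region in the example endpoint and the sample query are placeholders.

```python
import requests

# Values corresponding to the Lynn configuration fields above (placeholders).
ENDPOINT = "https://<region>.api.cognitive.microsoft.com"  # EndPoint field
APP_ID = "<app-id>"        # App Id field
KEY = "<key>"              # Key field
SLOT = "production"        # SlotName field: "production" or "staging"

response = requests.get(
    f"{ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/{SLOT}/predict",
    params={
        "subscription-key": KEY,
        "query": "I would like to schedule a technical visit",
        "show-all-intents": "true",
    },
    timeout=10,
)
prediction = response.json()["prediction"]
# The top-scoring intent detected for the query.
print(prediction["topIntent"])
```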