Lynn Reporting - KNOWLEDGE PERFORMANCE

The dashboard provides an overview of the performance of cognitive contexts, organized by date. Users can view a summarized representation of how different cognitive models performed over a specific period, assessing key indicators such as accuracy, response time, execution of trained intents, and the most used cognitive engines in the tenant.

All the graphs in the report include an interactive legend that allows for a more dynamic exploration of the data. Each element of the legend represents a data group. Clicking an element hides the values associated with that variable from the graph, allowing users to focus on the behavior of the other categories and improving interpretation.

Cognitive Evaluations

This graph combines bars and lines to illustrate the relationship between the number of evaluations and evaluation times. The X-axis represents time, with a scale that depends on the range selected in the initial filter when choosing the type of dashboard to display. The Y-axis indicates the evaluation time of the intent, measured in milliseconds (ms). A second Y-axis shows the number of evaluations conducted, allowing both variables to be represented in the same visualization.

  • Average Evaluation Time: The average time it takes for a model to process and generate a response during evaluations. It is calculated by summing all the evaluation times and dividing the result by the total number of evaluations.
  • Maximum Evaluation Time: The longest time recorded to process a specific evaluation. It refers to the time it took for the model to generate the slowest response observed during a given period.
  • Minimum Evaluation Time: The shortest time recorded to process an evaluation. It refers to the time it took for the model to generate the fastest response observed during a given period.
  • Number of Evaluations: The total number of evaluations conducted by the model over a specific period. This includes all interactions that have been evaluated, regardless of their outcome.

On the right side of the graph, the legend is available, providing details about the represented variables.
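As a minimal sketch of how these four indicators can be derived from raw evaluation records, the snippet below computes each one from a list of timings. The record layout and the field name evaluation_time_ms are assumptions for illustration, not the product's actual schema:

    from statistics import mean

    # Hypothetical evaluation records; field names are illustrative only.
    evaluations = [
        {"intent": "check_balance", "evaluation_time_ms": 120},
        {"intent": "check_balance", "evaluation_time_ms": 340},
        {"intent": "reset_password", "evaluation_time_ms": 95},
    ]

    times = [e["evaluation_time_ms"] for e in evaluations]

    metrics = {
        "average_evaluation_time_ms": mean(times),  # sum of all times / total evaluations
        "maximum_evaluation_time_ms": max(times),   # slowest response observed
        "minimum_evaluation_time_ms": min(times),   # fastest response observed
        "number_of_evaluations": len(times),        # every evaluated interaction, any outcome
    }
    print(metrics)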

Total Cognitive Evaluations

This pie chart shows the total number of cognitive intents that have been executed in the tenant, broken down by the cognitive engine used for their training. Each section or 'slice' of the chart represents a cognitive engine, and the size of each slice reflects its proportion relative to the total.

Total Cognitive Intents

This pie chart shows the distribution of executed cognitive intents, segmented according to the evaluations performed on the system. The size of each slice indicates how often each intent has been detected. From the chart, we can identify the areas where the system is most effective, where improvements are needed, and which intents are most frequently requested by users.
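Both pie charts above follow the same aggregation: records are grouped by a key (the cognitive engine for the first chart, the detected intent for the second), and each slice's size is that group's count divided by the total. A minimal sketch, with hypothetical field names and placeholder engine names used purely as examples:

    from collections import Counter

    # Hypothetical records; "engine" and "intent" are illustrative field names.
    evaluations = [
        {"engine": "EngineA", "intent": "check_balance"},
        {"engine": "EngineA", "intent": "reset_password"},
        {"engine": "EngineB", "intent": "check_balance"},
    ]

    def slice_proportions(records, key):
        """Each group's share of the total, i.e. the size of its pie slice."""
        counts = Counter(r[key] for r in records)
        total = sum(counts.values())
        return {group: count / total for group, count in counts.items()}

    print(slice_proportions(evaluations, "engine"))  # Total Cognitive Evaluations
    print(slice_proportions(evaluations, "intent"))  # Total Cognitive Intents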

Confidence in Interaction Detection

The chart illustrates the confidence level of the cognitive models configured in the tenant when correctly identifying an intent in an interaction. It allows for analyzing how certain the system is when assigning a specific intent to user inputs, showing each intent the cognitive engine detects along with the confidence level assigned to it.

The X-axis groups the different intents that the system has detected. Each point on this axis represents a specific intent, which may vary depending on the set of input phrases and the training data provided to the engine.

On the Y-axis, the confidence level of the cognitive engine is shown, expressed as a percentage (from 0 to 100). This value reflects how confident the engine is that the detected intent is correct. A value close to 100 indicates high confidence, while lower values indicate less certainty in the detection. In addition, two reference lines illustrate the average confidence in the detection of each intent trained with the cognitive engine and the overall confidence average for the tenant.
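As a rough sketch of how these two reference lines can be derived, the snippet below computes the per-intent average and the tenant-wide average from detection records; the field names are assumptions for illustration:

    from collections import defaultdict
    from statistics import mean

    # Hypothetical detection records; field names are illustrative.
    detections = [
        {"intent": "check_balance", "confidence": 92.0},
        {"intent": "check_balance", "confidence": 75.0},
        {"intent": "reset_password", "confidence": 40.0},
    ]

    by_intent = defaultdict(list)
    for d in detections:
        by_intent[d["intent"]].append(d["confidence"])

    # Reference line 1: average confidence per trained intent.
    intent_averages = {intent: mean(values) for intent, values in by_intent.items()}
    # Reference line 2: overall confidence average for the tenant.
    tenant_average = mean(d["confidence"] for d in detections)

    print(intent_averages, tenant_average)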

The points can be interpreted as follows:

  • High points on the graph: When an intent appears with a high confidence level (close to 100), it means that the engine is confident that it has correctly identified the intent, based on clear patterns in the input phrase.
  • Low points on the graph: They suggest that the engine is uncertain whether the detected intent is correct. This may be due to ambiguous phrases, similarities to other intents, or a lack of adequate training in those areas.

This chart allows us to visualize not only which intents are more frequent but also in which cases the engine has less confidence in its detection. Intents with low confidence may indicate areas where more training data or adjustments to the model are needed to improve the system's accuracy.

Table of Cognitive Phrases

Cognitive phrases are examples of expressions or questions that a user might use when interacting with a cognitive system or artificial intelligence engine. These phrases help the system identify the intent behind the query and generate an appropriate response.

The table is structured into six columns:

  1. Text: The user input.
  2. Intent Name: The intent that the engine must identify.
  3. Calls: The number of times the user input has been repeated verbatim.
  4. Confidence: A value indicating the engine’s certainty that it has correctly interpreted the intent.
  5. Minimum Confidence: The minimum acceptable confidence value.
  6. Result Type: The output or response generated by the engine after processing a user input.
    • NormalEvaluated: Indicates that the user input has been evaluated normally, and the engine has identified the intent with an acceptable level of confidence, allowing it to generate an appropriate response.
    • SystemIrrelevant: Indicates that the user input is not relevant to the system. This may refer to questions or comments unrelated to the intents the engine is trained to handle.
    • SilentError: Reflects an error in the system that does not produce visible error messages for the user. The engine was unable to process the input correctly but does not communicate the problem.
    • NonCognitiveAbility: Means the engine does not have the cognitive capability to interpret or respond to the user’s query. This may refer to questions requiring knowledge or skills that the engine lacks.
    • Deflection: Used when the engine redirects the user’s query to another area or topic as part of a strategy to handle questions it cannot answer directly.
    • DeflectionClientIdentification: Indicates that the engine has identified the need for more client information before providing a response. This type of output may require additional input from the user to personalize the interaction.
    • Voice: Refers to responses generated through voice interactions, commonly used in voice applications and virtual assistants.
    • CognitiveEvaluationError: Indicates an error in cognitive evaluation, meaning the engine was unable to analyze the input effectively due to ambiguities or issues in the training data.
    • NotFoundError: Occurs when the engine cannot find a corresponding intent or response for the user input, possibly due to a lack of relevant data or unusual phrasing.
    • LowConfidenceEvaluationError: Generated when the engine evaluates the input with a low confidence level, indicating uncertainty in the intent interpretation and possibly resulting in an unreliable response.
    • Error: A generic error indicating a problem occurred during the processing of user input, such as technical failures or unexpected issues preventing the engine from functioning correctly.
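To make the table's structure concrete, here is a minimal sketch that models the six columns and the result types listed above, then filters for phrases whose confidence falls below the minimum; the row values are invented for illustration:

    from enum import Enum

    # Result types, as listed in the table above.
    class ResultType(Enum):
        NORMAL_EVALUATED = "NormalEvaluated"
        SYSTEM_IRRELEVANT = "SystemIrrelevant"
        SILENT_ERROR = "SilentError"
        NON_COGNITIVE_ABILITY = "NonCognitiveAbility"
        DEFLECTION = "Deflection"
        DEFLECTION_CLIENT_IDENTIFICATION = "DeflectionClientIdentification"
        VOICE = "Voice"
        COGNITIVE_EVALUATION_ERROR = "CognitiveEvaluationError"
        NOT_FOUND_ERROR = "NotFoundError"
        LOW_CONFIDENCE_EVALUATION_ERROR = "LowConfidenceEvaluationError"
        ERROR = "Error"

    # Hypothetical rows mirroring the six columns; values are invented examples.
    rows = [
        {"text": "what's my balance", "intent_name": "check_balance", "calls": 12,
         "confidence": 0.91, "minimum_confidence": 0.60,
         "result_type": ResultType.NORMAL_EVALUATED},
        {"text": "hmm maybe later", "intent_name": "reset_password", "calls": 3,
         "confidence": 0.41, "minimum_confidence": 0.60,
         "result_type": ResultType.LOW_CONFIDENCE_EVALUATION_ERROR},
    ]

    # Phrases below their minimum confidence are candidates for further training.
    low_confidence = [r for r in rows if r["confidence"] < r["minimum_confidence"]]
    print([r["text"] for r in low_confidence])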

Cognitive phrases with low confidence can highlight areas where the engine needs further training or adjustments, which directly impacts the performance of the self-service flow.