Introduction
Artificial Intelligence (AI) is an emerging technology that is transforming the way we interact with and process information. Ligantic's Flow Nodes unlock these capabilities, providing a suite of powerful AI-powered features that can be seamlessly integrated into your Ligantic Flows. These nodes allow you to leverage the latest advancements in Large Language Models (LLMs), data extraction, and vector context management to enhance your business processes and unlock new insights.
AI Nodes
The AI Nodes that feature in the Flow Designer are:
LLM Chat Message
LLM Compute Embedding
Data Extraction
Transcribe
Vector Context: Add
Vector Context: Remove
Vector Context: Query
LLM Chat Message
This node allows you to send a message to a large language model and receive a response. You can configure the Model (Ligantic provides access to the latest industry-leading Large Language Models), the output type (text or content), the variables coming into the prompt, and the system and user prompts to send to the LLM.
Incoming Node Edges
Trigger: This is where the control flow triggers this Node.
Dynamic Data Inputs: This is a dynamic and optional incoming Node Edge that is dependent on the Variables set and the Model selected in the internal Node Configuration. Some Models take only text inputs, denoted by a green incoming Node Edge; others also take media or files, denoted by a gradient incoming Node Edge. The number and names of the incoming Data Types are directly related to the Variables that are added. The "Query" incoming Node Edge is a default addition, but it too is configured through the Variables.
Node Configuration
Model: Select one of the many models that are available. Depending on the task and performance requirements of the Flow, you may choose different models.
Settings: Parsing response to content - checking this option will change the outgoing Node Edge to Content as opposed to plain text, allowing markup or rich text to be output from the LLM as a response.
Variables: This allows you to configure named variable inputs into the Node Configuration. You can leverage this dynamic data in the message templates. Any variable you add here will generate a corresponding incoming Node Edge.
Message Templates: Configure one or more Message Templates to be compiled and sent to the LLM. Choose from a "System", "Assistant", or "User" type.
Outgoing Node Edges
Error: This is for when any error state has been returned from the chat message action. You can handle the error state gracefully from this outgoing Node Edge.
Success: This is for when the chat message action completes the execution successfully.
LLM Chat Message Node: Industry Leading Models
The Ligantic Platform Team are constantly reviewing the latest model releases and making them available in the LLM Chat Message Node.
The current list of available Models and their integration partners is:
OpenAI:
gpt-3.5-turbo-0125
gpt-3.5-turbo-1106
gpt-3.5-turbo (latest)
gpt-3.5-turbo-16k
gpt-4o-mini
gpt-4o (latest)
gpt-4o-2024-05-13
gpt-4 (latest)
gpt-4-vision-preview
gpt-4-turbo (latest)
gpt-4-turbo-2024-04-09
Anthropic:
Groq:
Google:
LLM Chat Message Node: Message Templates
In the LLM Chat Message Node you have the ability to configure Message Templates with a type of "System Message", "Assistant Message", or "User Message".
System Message
The system message is used to provide instructions, guidelines, or context to the language model.
It sets the tone, personality, and capabilities of the assistant that will be interacting with the user.
The system message is always the first message of the conversation and there can be only one System Message.
It helps the language model understand the desired behaviour and the type of responses the user expects.
Assistant Message
The assistant message is a demonstration of the response generated by the language model based on the user's input and the provided context. You can use this Message Template to demonstrate how the LLM should answer a query, then provide the real user query after it.
It represents the language model's generated output, which can be in the form of text, code, or other structured data.
The assistant message is the output that is returned to the user, reflecting the language model's understanding and generation capabilities.
User Message
The user message is the input or query provided by the user to the language model.
It represents the user's request, question, or statement that the language model should respond to.
The user message is the primary content that the language model will process and generate a response for.
In summary, the system message sets the context and desired behaviour for the language model, the user message is the input provided by the user, and the assistant message is the response generated by the language model based on the user's input and the provided context.
In the Ligantic Chat Message Node you can create one or many message templates that will be compiled and sent to the LLM. These can be supported with dynamic content by using the Variable capability.
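To make the structure concrete, here is a minimal sketch of what a compiled set of Message Templates looks like when sent to an LLM, using the OpenAI Python SDK directly as an illustrative backend. The model name, prompt text, and variable substitution below are assumptions for demonstration, not the Node's internals.

```python
# Conceptual sketch only - the LLM Chat Message Node assembles and sends this for you.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

# A Variable (here, "query") substituted into a User Message Template.
query = "What are your opening hours on public holidays?"

messages = [
    # System Message: sets tone, personality, and capabilities; always first, only one.
    {"role": "system", "content": "You are a concise, friendly support assistant."},
    # Assistant Message: a demonstration of how the LLM should answer a query.
    {"role": "assistant", "content": "We are open 9am to 5pm, Monday to Friday."},
    # User Message: the real query, compiled from the template plus the variable.
    {"role": "user", "content": f"Customer question: {query}"},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```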
Data Extraction
This node allows you to extract structured data from unstructured text using AI-powered data extraction. The Node will take in unstructured Text and identify the best matches in the content to the selected Schema. This matching is based on the Schema property names and descriptions.
Incoming Node Edges
Trigger: This is where the control flow triggers this Node.
Text: This takes in the unstructured Text to extract data from.
Node Configuration
Schema: Set the Schema that the LLM will use to infer and extract data from the incoming text.
Creation On Extraction: Check this to create new Entities for extracted data.
Extract Single: Check this to return only a single Entity - the first matched in the provided text.
Outgoing Node Edges
Error: This is for when any error state has been returned from the data extraction action. You can handle the error state gracefully from this outgoing Node Edge.
Success: This is for when the data extraction action completes the execution successfully.
Results: This provides an array of objects or Entities that have been extracted. This outgoing Node Edge alternates between an Array of Object Type and an Object Type depending on whether the "Extract Single" configuration option is set.
Example
Using the Data Extraction Node you can convert unstructured data into structured data. Use intelligent data extraction to populate structured data schemas from unstructured data. The example below shows unstructured data (the "Text" on the incoming Node Edge) going through the Data Extraction Node and populating the Schema on the right.
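To illustrate the idea (this is not the Node's implementation), the sketch below asks an LLM to populate a small schema from unstructured text. The schema fields, prompt wording, and model name are assumptions for demonstration.

```python
# Conceptual sketch of schema-driven extraction - the Data Extraction Node does this for you.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY env var.
import json
from openai import OpenAI

client = OpenAI()

# A toy "Schema": property names and descriptions guide the extraction.
schema = {
    "name": "The full name of the person mentioned",
    "company": "The organisation the person works for",
    "email": "The person's email address",
}

text = "Reach out to Jane Cooper at Acme Corp via jane.cooper@acme.example for details."

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # request a JSON object back
    messages=[
        {"role": "system", "content": "Extract the best matches for these properties "
         "from the user's text and return a JSON object with exactly these keys: "
         + json.dumps(schema)},
        {"role": "user", "content": text},
    ],
)

entity = json.loads(response.choices[0].message.content)
print(entity)  # e.g. {"name": "Jane Cooper", "company": "Acme Corp", "email": "..."}
```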
Transcribe
This node allows you to transcribe audio files and output the text transcriptions. The Transcribe Node uses the Whisper Model via OpenAI to complete transcriptions.
Incoming Node Edges
Trigger: This is where the control flow triggers this Node.
Audio File: This takes in an Audio File. The supported File Types are: "mp3", "mp4", "mpeg", "m4a", "wav", "webm".
Node Configuration
No Node configuration is available for the AI Transcribe Node.
Outgoing Node Edges
Error: This is for when any error state has been returned from the transcription action. You can handle the error state gracefully from this outgoing Node Edge.
Success: This is for when the transcription action completes the execution successfully.
Transcription: This is the output of the transcription text.
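Conceptually, the Node wraps a Whisper transcription call like the one sketched below, using the OpenAI Python SDK. The file name is an assumption for demonstration.

```python
# Conceptual sketch - the Transcribe Node handles this call for you.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

# Supported file types include mp3, mp4, mpeg, m4a, wav, and webm.
with open("meeting.mp3", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcription.text)  # the plain-text transcription output
```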
LLM Compute Embedding
Vector context and text embeddings are fundamental techniques in modern artificial intelligence and data management. By representing words, phrases, and documents as numerical vectors, these methods capture the semantic and contextual relationships within text data. The resulting embeddings provide a compact and efficient way to work with language, enabling improved performance on a variety of natural language processing tasks.
The Embedding Nodes in Ligantic's Flow Designer make this easy to access and work with - you don't need to know the intricacies of the process behind it. This node allows you to compute and label an embedding for a piece of text using a large language model.
Incoming Node Edges
Trigger: This is where the control flow triggers this Node.
Text: This takes in the piece of text to compute an embedding for.
Node Configuration
Outgoing Node Edges
Error: This is for when any error state has been returned from the text embedding action. You can handle the error state gracefully from this outgoing Node Edge.
Success: This is for when the text embedding action completes the execution successfully.
Embedding: This is where the resulting embedding object is output. This can then be passed through to the other Vector Context Nodes.
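For a sense of what the Node produces, here is a minimal sketch of computing a text embedding with the OpenAI Python SDK. The embedding model name is an assumption for illustration; the model Ligantic uses is managed by the platform.

```python
# Conceptual sketch - the LLM Compute Embedding Node abstracts this step for you.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

response = client.embeddings.create(
    model="text-embedding-3-small",  # illustrative embedding model
    input="Our refund policy allows returns within 30 days of purchase.",
)

embedding = response.data[0].embedding  # a list of floats - the numerical vector
print(len(embedding), embedding[:5])
```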
Observable Vector Context
The Vector Context is a powerful feature available for every Space on the Ligantic Platform. It allows you to add, edit, and query embeddings within the Flow Designer. You can also view a human-readable version of the Vector Context from the Space Settings.
The Vector Context provides several key capabilities:
Human-readable Vector Context: The Vector Context is observable by default, allowing you to always see the data that your AI responses are based on in plain language.
Vector Search: The Vector Context enables improved accuracy of Retrieval Augmented Generation (RAG) through vectorization and embedding of unstructured documents like PDFs, Wikis, FAQs, and more. This allows for semantic search of content, enabling rapid and secure search over your knowledge base.
Knowledge Base Embedding: The Vector Context supports tokenisation of large, unstructured or structured data sets to embed your organizational context directly into the platform.
Content Labelling: You can dynamically add labels to embeddings from within Flows on import or post-transformation. This allows for structured filtering and organisation of your content within the Vector Context.
The Observable Vector Context is a core capability of the Ligantic Platform, providing visibility, searchability, and structure to the data powering your AI-driven workflows and applications.
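To see why embeddings enable semantic search, consider the small self-contained sketch below: fragments whose embeddings point in a similar direction to the query embedding (high cosine similarity) are semantically closest, even without exact keyword matches. The three-dimensional vectors are made up for illustration; real embeddings have hundreds of dimensions.

```python
# Toy illustration of vector search: rank stored embeddings against a query
# embedding by cosine similarity. The vectors here are invented for clarity.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

stored = {
    "Refund policy: returns within 30 days": [0.9, 0.1, 0.0],
    "Shipping times for international orders": [0.1, 0.8, 0.2],
    "Office opening hours over the holidays": [0.0, 0.2, 0.9],
}

query_embedding = [0.85, 0.15, 0.05]  # e.g. for "how do I return an item?"

# Closest meaning first, even though the wording differs.
ranked = sorted(stored, key=lambda k: cosine_similarity(query_embedding, stored[k]),
                reverse=True)
print(ranked[0])  # "Refund policy: returns within 30 days"
```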
Vector Context: Add
This node allows you to label and add a piece of text to the vector context in your Space.
Incoming Node Edges
Trigger: This is where the control flow triggers this Node.
Embedding: This is where an Embedding (usually from an LLM Compute Embedding Node) is passed into the Node.
Dynamic Label: This is a dynamic and optional incoming Node Edge that is dependent on the labels that are configured in the Node.
Node Configuration
Labels: Configure the labels to be stored with the embedding. Any label you add here will generate a corresponding Dynamic Label incoming Node Edge.
Outgoing Node Edges
Error: This is for when any error state has been returned from the Vector Context add action. You can handle the error state gracefully from this outgoing Node Edge.
Success: This is for when the Vector Context add action completes the execution successfully.
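The platform manages the Vector Context for you, but to make the add-with-labels idea concrete, here is a hypothetical in-memory stand-in for the store. All of the names and values below are invented for illustration.

```python
# Hypothetical in-memory stand-in for the Space Vector Context (illustration only).
# Each entry pairs an embedding and its human-readable text with a dict of labels.

vector_context = []

def add_to_context(embedding, text, labels):
    """Label and add a piece of text (and its embedding) to the context."""
    vector_context.append({"embedding": embedding, "text": text, "labels": labels})

# Adding an Article's content, as the Vector Context: Add Node would.
add_to_context(
    embedding=[0.9, 0.1, 0.0],                      # from an LLM Compute Embedding step
    text="Refund policy: returns within 30 days",   # the observable, human-readable content
    labels={"Type": "Article", "ArticleId": "42"},  # e.g. a static and a dynamic label
)
```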
Vector Context: Remove
This node allows you to query and remove a piece of text from the vector context in your Space.
Incoming Node Edges
Trigger: This is where the control flow triggers this Node.
Dynamic Label: This is a dynamic and optional incoming Node Edge that is dependent on the labels that are configured in the Node.
Node Configuration
Outgoing Node Edges
Error: This is for when any error state has been returned from the Vector Context remove action. You can handle the error state gracefully from this outgoing Node Edge.
Success: This is for when the Vector Context remove action completes the execution successfully.
Example
Let's take the example that we have added an Article's content to our Vector Context and now want to remove it. You can view the Labels, Content, and Actions in the Vector Context Tab of Space Settings. You could remove it from here manually; however, it would be better to remove the context automatically when an Entity is deleted from the Articles Schema instead.
This example shows a Flow that is triggered when an "Article" entity has been deleted.
The ArticleId is picked out by the Object Pick Node and passed to the Vector Context Remove Node through the Dynamic Label incoming Node Edge.
The Vector Context Remove Node also takes in the data "Type" with the value "Article", as this is an additional label on the embedded data.
Together, these data inputs match the specific label combination that the embedding is stored with in the context. The control flow triggers the remove action and this targeted embedding is removed.
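Continuing the hypothetical in-memory store from the Vector Context: Add sketch above, removal by label combination could be modelled like this (the label names and values remain invented for illustration):

```python
# Continuing the hypothetical store above: remove every entry whose labels
# match a specific combination, as the Vector Context: Remove Node does.

def remove_from_context(label_filters):
    """Drop entries whose labels include all of the given label values."""
    global vector_context
    vector_context = [
        entry for entry in vector_context
        if not all(entry["labels"].get(k) == v for k, v in label_filters.items())
    ]

# Triggered when an "Article" entity is deleted: the ArticleId arrives via the
# Dynamic Label edge, and "Type": "Article" is the additional static label.
remove_from_context({"Type": "Article", "ArticleId": "42"})
```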
Vector Context: Query
By now you may have information stored in the Space Vector Context, like articles, reports, and other text. This information has been computed into "embeddings": numerical vectors that capture the semantic and contextual relationships of that text-based data.
The Vector Context is like a big database that stores all of these embeddings. When you want to find information, you can use the Vector Context Query Node to search through the Vector Context and find the embeddings that are most similar to what you're looking for.
First, you will need to use the Compute Embedding Node to take text input and convert it to an embedding. Then the Vector Context Query Node will search through all of the embeddings in the Vector Context and find the ones that are most similar to the one you provided. The key thing to remember is that the Vector Context Query Node is looking for similarities between the embeddings, not just exact matches. So even if the text you're searching for isn't exactly the same as what's in the Vector Context, it can still find the closest matches and give you the information you need.
In short, the Vector Context Query Node allows you to query the vector context in your Space and return a filtered set of results.
Incoming Node Edges
Embedding: This is where an Embedding (usually from an LLM Compute Embedding Node) is passed into the Node.
Dynamic Label: This is a dynamic and optional incoming Node Edge that is dependent on the labels that are configured in the Node.
Node Configuration
Top Count: Set the number of embedded fragments you want returned from the Query. The default is 3.
Label Filters: Configure the labels used to filter the results. Any label data entered here must be matched by all returned embeddings.
Label Values: Configure any labels you want access to in the Results. These won't filter the data; instead, their values will be included in the Objects returned on the outgoing Node Edges.
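Using the hypothetical store from the earlier sketches, a query combining Top Count and Label Filters could be modelled like this (all names are invented for illustration):

```python
# Continuing the hypothetical store: return the Top Count most similar
# embeddings, restricted to entries matching every Label Filter.

def query_context(query_embedding, top_count=3, label_filters=None):
    candidates = [
        entry for entry in vector_context
        if all(entry["labels"].get(k) == v
               for k, v in (label_filters or {}).items())
    ]
    # Rank by cosine similarity (see the earlier cosine_similarity sketch).
    candidates.sort(key=lambda e: cosine_similarity(query_embedding, e["embedding"]),
                    reverse=True)
    return candidates[:top_count]

# The three most similar Article fragments for a query embedding.
results = query_context([0.85, 0.15, 0.05], top_count=3,
                        label_filters={"Type": "Article"})
```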
Outgoing Node Edges