Released:
We've introduced a new Stream Collect Node in Flows. This node efficiently reduces a data stream into a single value, enabling real-time data manipulation within your Flows.
This new node pairs nicely with the new "Stream Response" option in the AI LLM Chat Message node to enable real-time response processing from LLM models.
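To make the pairing concrete, here's a minimal TypeScript sketch of the concept: an LLM response arrives as a stream of text chunks and is reduced into a single value. The names streamLlmResponse and collectStream are hypothetical stand-ins for illustration, not Ligantic APIs.

```typescript
// Hypothetical stand-in for the LLM Chat Message node's "Stream Response"
// output: an async stream of text chunks.
async function* streamLlmResponse(): AsyncGenerator<string> {
  yield "Hello";
  yield ", ";
  yield "world!";
}

// Conceptual equivalent of the Stream Collect Node: reduce the stream
// into a single piece of data.
async function collectStream(chunks: AsyncIterable<string>): Promise<string> {
  let collected = "";
  for await (const chunk of chunks) {
    collected += chunk; // each chunk could also be surfaced in real time
  }
  return collected;
}

collectStream(streamLlmResponse()).then(console.log); // "Hello, world!"
```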
Introducing a new Object Pack Node in Flows, mirroring the Object Pick Node. This node creates objects by wrapping multiple primitive values into a new object, which is useful for remapping data within a List Map Node or for constructing an object body for the Integration HTTP Node.
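As a rough sketch of the Pick/Pack relationship in plain TypeScript (the destructuring and object-literal steps below are illustrative analogies, not Ligantic's node interfaces):

```typescript
// Object Pick: extract primitive values out of an object.
const user = { id: 42, name: "Ada", active: true };
const { name, active } = user;

// Object Pack: wrap primitive values into a new object, e.g. to build
// a request body for the Integration HTTP Node.
const body = { displayName: name, isActive: active };

console.log(JSON.stringify(body)); // {"displayName":"Ada","isActive":true}
```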
Meta's Llama 3.3 model is now available from the LLM Chat Message Node. Llama 3.3 is a powerful 70B-parameter multilingual language model that sets a new standard in capability while maintaining impressive cost efficiency. Now available on the Ligantic platform, this cutting-edge model excels at a wide range of natural language tasks through its advanced instruction tuning. With robust support for conversational AI, synthetic data generation, and other innovative applications, Llama 3.3 delivers exceptional performance across textual intelligence, reasoning, and complex language understanding.
Powered by Groq integration and accessible through Ligantic's user-friendly platform, Llama 3.3 enables seamless deployment for both immediate and sophisticated applications. Whether you're developing AI assistants, orchestrating multiple model interactions, or handling extensive context-based tasks, Llama 3.3 provides the advanced capabilities needed to elevate your projects. New users can access these features by signing up for a free Ligantic account, while existing users can begin exploring Llama 3.3's capabilities immediately through their current login.
This feature pairs seamlessly with the new Stream Collect Node, enabling real-time processing of responses from LLM models. Together, these improvements give users greater control and efficiency when working with streaming data and AI-generated content in their Flows, powering real-time Experiences.
We've introduced a new startsWith function to the Data Expression node. This function tests whether a string begins with the characters of another string, with an optional starting-position parameter.
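Assuming the function mirrors JavaScript's String.prototype.startsWith(searchString, position), its behaviour looks like this:

```typescript
const status = "ERROR: connection refused";

console.log(status.startsWith("ERROR"));          // true
console.log(status.startsWith("connection"));     // false
console.log(status.startsWith("connection", 7));  // true: checking starts at index 7
```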
We've introduced prompt caching for Anthropic LLM models, unlocking improved cost efficiency and performance.
This new feature allows for message caching, which is particularly beneficial when using extended prefix contexts, such as iterating through lists with mostly common content. As a result, users can expect reduced model costs and faster responses, making their experience with Anthropic models in our system more efficient and cost-effective.
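For the curious, here's a minimal sketch of the underlying mechanism using Anthropic's TypeScript SDK, where a long shared prefix is marked with cache_control so subsequent calls can reuse it. This illustrates the caching concept rather than Ligantic's internal wiring; the model name and context are placeholders.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Placeholder for a large common prefix, e.g. shared content reused
// while iterating through a list.
const LONG_SHARED_CONTEXT = "...";

const response = await client.messages.create({
  model: "claude-3-5-sonnet-latest", // illustrative model name
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: LONG_SHARED_CONTEXT,
      // Everything up to this block is cached and billed at reduced
      // rates on subsequent calls that share the same prefix.
      cache_control: { type: "ephemeral" },
    },
  ],
  messages: [{ role: "user", content: "Summarise the next list item." }],
});

console.log(response.content);
```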
We've introduced a new auto-scroll feature for blocks with overflow configured to auto or scroll. This enhancement is particularly beneficial for modern conversational Experiences, enabling the automatic scrolling behaviour that users have come to expect. Auto-scroll improves user interaction by keeping the most recent content visible.
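Conceptually, the behaviour resembles the browser-side sketch below; autoScroll is a hypothetical helper for illustration, not Ligantic's actual implementation.

```typescript
// Keep the latest content visible in a block whose computed overflow
// is "auto" or "scroll".
function autoScroll(block: HTMLElement): void {
  const { overflowY } = getComputedStyle(block);
  if (overflowY !== "auto" && overflowY !== "scroll") return;

  // Follow child additions (e.g. streamed chat messages) to the bottom.
  new MutationObserver(() => {
    block.scrollTop = block.scrollHeight;
  }).observe(block, { childList: true, subtree: true });
}
```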