LLM (Agentflow Node)
The LLM node in InnoSynth-Forjinn's Agentflow is a fundamental component for integrating Large Language Models (LLMs) into your workflows. It allows you to leverage the power of various conversational AI models to analyze inputs, generate responses, and perform a wide range of language-based tasks.
Purpose
The LLM node is designed for:
- Text Generation: Creating human-like text for responses, summaries, or creative content.
- Question Answering: Providing answers to user queries based on its training data and provided context.
- Content Analysis: Analyzing text for sentiment, keywords, or other insights.
- Structured Output: Generating responses in a predefined JSON format for programmatic use.
Appearance on Canvas
(Image: Placeholder for the LLM node icon/appearance on the canvas)
Configuration Parameters
The LLM node offers extensive configuration options to control the behavior of the integrated LLM.
1. Model (Required)
- Label: Model
- Type: `asyncOptions` (loads available Chat Models)
- Description: Select the specific Large Language Model you want to use (e.g., OpenAI Chat Model, Anthropic Chat Model).
- Configuration: You can configure the selected model's specific parameters (e.g., API key, temperature, model name) via the `llmModelConfig` property, which is loaded dynamically (see the sketch below).
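The exact shape of `llmModelConfig` varies per model and is loaded at runtime, so treat the following as a rough sketch; field names such as `modelName` and `credentialId` are assumptions, not a confirmed schema.

```typescript
// Illustrative sketch only: the real llmModelConfig schema is loaded
// dynamically for the selected model, so these field names are assumptions.
const llmModelConfig = {
  modelName: "gpt-4o",           // which chat model variant to call
  temperature: 0.7,              // sampling temperature
  credentialId: "my-openai-key"  // reference to a stored API credential
};
```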
2. Messages (Optional)
- Label: Messages
- Type: `array` of objects
- Description: Pre-define a series of messages (System, Assistant, Developer, User) to set the context, persona, or initial instructions for the LLM. These messages are prepended to the conversation history (see the example below).
  - Role: `system`, `assistant`, `developer`, or `user`
  - Content: The message text. Supports variables.
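As a rough illustration, a Messages array might serialize like this; the exact field names are an assumption, and `{{ topic }}` is a hypothetical variable.

```typescript
// Hypothetical serialization of the Messages parameter: each entry pairs a
// role with content, and variables such as {{ topic }} resolve at runtime.
const messages = [
  { role: "system", content: "You are a concise technical writer." },
  { role: "user", content: "Draft an intro paragraph about {{ topic }}." }
];
```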
3. Enable Memory (Optional)
- Label: Enable Memory
- Type: `boolean`
- Default: `true`
- Description: If enabled, the LLM will maintain conversational memory, allowing it to remember past interactions within the current session.
4. Memory Type (Optional)
- Label: Memory Type
- Type: `options`
- Default: `allMessages`
- Description: Defines how the LLM manages its conversational history.
  - All Messages: Retrieves all messages from the conversation.
  - Window Size: Keeps only the most recent N messages.
  - Conversation Summary: Summarizes the entire conversation.
  - Conversation Summary Buffer: Summarizes older messages once a token limit is reached.
5. Window Size (Optional)
- Label: Window Size
- Type: `number`
- Default: `20`
- Description: (Visible when Memory Type is "Window Size") Specifies the number of recent messages to retain in memory (a conceptual sketch follows).
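Conceptually, Window Size memory behaves like the sketch below; the function is illustrative, not the node's actual implementation.

```typescript
type ChatMessage = { role: "system" | "assistant" | "developer" | "user"; content: string };

// Keep only the most recent `windowSize` messages before calling the model.
function applyWindowMemory(history: ChatMessage[], windowSize = 20): ChatMessage[] {
  return history.slice(-windowSize);
}
```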
6. Max Token Limit (Optional)
- Label: Max Token Limit
- Type: `number`
- Default: `2000`
- Description: (Visible when Memory Type is "Conversation Summary Buffer") The maximum token limit before the conversation is summarized (sketched below).
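The Conversation Summary Buffer strategy can be pictured as in the sketch below; `countTokens` and `summarize` are stand-ins for whatever tokenizer and summarization call the platform actually uses.

```typescript
type ChatMessage = { role: string; content: string };

// Sketch: once the history exceeds maxTokens, older messages are folded into
// a running summary and only the last few turns are kept verbatim.
async function summaryBuffer(
  history: ChatMessage[],
  countTokens: (msgs: ChatMessage[]) => number,
  summarize: (msgs: ChatMessage[]) => Promise<string>,
  maxTokens = 2000
): Promise<ChatMessage[]> {
  if (countTokens(history) <= maxTokens) return history;
  const recent = history.slice(-4); // keep the latest turns verbatim
  const summary = await summarize(history.slice(0, -4));
  return [{ role: "system", content: `Conversation summary: ${summary}` }, ...recent];
}
```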
7. Input Message (Optional)
- Label: Input Message
- Type: `string`
- Description: Appends an additional user message at the end of the conversation. Useful for passing dynamic inputs.
8. Return Response As
- Label: Return Response As
- Type: `options`
- Default: `userMessage`
- Options: `User Message`, `Assistant Message`
- Description: Determines how the LLM's final response is formatted in the output.
  - User Message: The response is treated as if it came from a user.
  - Assistant Message: The response is treated as if it came from an assistant.
9. JSON Structured Output (Optional)
- Label: JSON Structured Output
- Type: `array` of objects
- Description: Instructs the LLM to generate its output in a specific JSON schema. This is powerful for integrating LLM responses into programmatic workflows (see the example after this list).
  - Key: The key name in the JSON output.
  - Type: The data type of the value (`String`, `String Array`, `Number`, `Boolean`, `Enum`, `JSON Array`).
  - Enum Values: (Visible for `Enum` type) Comma-separated list of allowed enum values.
  - JSON Schema: (Visible for `JSON Array` type) A JSON schema defining the structure of items within the array.
  - Description: A description of the key, guiding the LLM on what to generate.
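For instance, a configuration asking the model for a sentiment label and a keyword list might look like this, shown here as an illustrative object literal; the canvas UI captures the same fields, and the exact serialized form may differ.

```typescript
// Illustrative JSON Structured Output configuration. Field names mirror the
// options described above; the serialized form inside Agentflow may differ.
const structuredOutput = [
  {
    key: "sentiment",
    type: "Enum",
    enumValues: "positive,neutral,negative",
    description: "Overall sentiment of the analyzed text"
  },
  {
    key: "keywords",
    type: "String Array",
    description: "Up to five salient keywords from the text"
  }
];

// A conforming model response would then look like:
// { "sentiment": "positive", "keywords": ["throughput", "latency"] }
```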
10. Update Flow State (Optional)
- Label: Update Flow State
- Type: `array` of objects
- Description: Allows you to update the runtime state of the workflow during the LLM's execution (example below).
  - Key: The key of the state variable to update.
  - Value: The new value for the state variable. Supports variables and the output of the LLM (`{{ output }}` or `{{ output.jsonPath }}`).
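Building on the structured-output example above, two illustrative state updates could look like this; `lastAnswer` and `lastSentiment` are hypothetical state keys.

```typescript
// Illustrative Update Flow State entries. {{ output.sentiment }} assumes the
// structured-output configuration sketched earlier.
const updateFlowState = [
  { key: "lastAnswer", value: "{{ output }}" },              // full LLM response
  { key: "lastSentiment", value: "{{ output.sentiment }}" }  // one JSON field
];
```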
Inputs & Outputs
- Input: Typically receives a user message, a prompt, or contextual information from a preceding node.
- Output: Produces the LLM's generated response, which can be plain text or a structured JSON object (if configured). It also includes `usageMetadata` (token counts), illustrated below.
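As a rough picture (the field names here are assumptions, not a documented payload), a structured-output run might emit:

```typescript
// Hypothetical output payload: the generated content plus token accounting.
const exampleOutput = {
  content: { sentiment: "positive", keywords: ["throughput", "latency"] },
  usageMetadata: { promptTokens: 412, completionTokens: 38, totalTokens: 450 }
};
```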
Example Usage: Summarizing Text
Let's create a simple chatflow to summarize user-provided text.
1. Connect a `Start` node to the input of an `LLM` node.
2. Configure Model: Select an `OpenAI Chat Model`.
3. Configure Messages: Add a `System` message: "You are a helpful assistant that summarizes text."
4. Configure Input Message: `{{question}}` (assuming user input is passed as the `question` variable).
5. Connect to Chat Output: Connect the output of the `LLM` node to a `Chat Output` node.
Now, when a user provides text, the LLM will summarize it and return the summary.
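Putting the pieces together, the effective message list the node sends to the model on each turn would look roughly like this (illustrative only):

```typescript
// Roughly what the configured node sends to the model for one turn: the
// predefined System message followed by the resolved Input Message.
const effectivePrompt = [
  { role: "system", content: "You are a helpful assistant that summarizes text." },
  { role: "user", content: "<resolved contents of {{question}}>" }
];
```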