Components Library
Learn about the components library and how to use it effectively.
The InnoSynth-Forjinn Components Library is a rich collection of pre-built modules that you can drag and drop onto your chatflow canvas to construct powerful AI workflows. These components abstract complex functionalities, making it easy to integrate various AI models, tools, and services.
Accessing the Components Library
The components library is located in the left panel of the chatflow builder. Components are organized into categories to help you find what you need quickly.
Component Categories
Here's an overview of the main categories you'll find in the library:
- Large Language Models (LLMs):
- These components represent various AI models capable of understanding and generating human-like text.
- Examples: OpenAI Chat Model, Anthropic Chat Model, Google Generative AI, Mistral, etc.
- Use them as the "brain" of your chatflow for tasks like summarization, question answering, and content generation.
- Prompt Templates:
- Components designed to structure and format the input (prompts) for LLMs.
- Allow you to define dynamic prompts using variables (e.g., {{user_input}}) that can be filled by other components or user interaction.
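The {{variable}} substitution described above can be sketched as a small helper. This is a minimal illustration of the idea, not InnoSynth-Forjinn's actual template engine; the function name and error handling are assumptions.

```python
import re

def render_prompt(template: str, **variables: str) -> str:
    """Replace each {{name}} placeholder with its supplied value (illustrative sketch)."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing value for placeholder {{{{{name}}}}}")
        return variables[name]
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

# In the builder, another component or the end user would supply user_input.
template = "Summarize the following text in one sentence:\n{{user_input}}"
prompt = render_prompt(template, user_input="InnoSynth-Forjinn lets you build chatflows.")
```

The rendered prompt can then be passed to any LLM component as its input.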
- Tools:
- Enable your AI workflows to interact with external systems and perform specific actions.
- Examples: Search API (Google Search, SerpApi), Database Query (PostgreSQL, MySQL), Web Scraper, Calculator, etc.
- Tools extend the capabilities of your AI beyond just text generation, allowing it to fetch real-time information or execute code.
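Conceptually, a tool is a named action with a description the AI can read when deciding what to invoke. The `Tool` class and calculator below are a hypothetical sketch of that shape, not the platform's real API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str          # identifier the workflow uses to invoke the tool
    description: str   # human/LLM-readable summary of what the tool does
    run: Callable[[str], str]

def calculator(expression: str) -> str:
    # Restrict input to arithmetic characters so eval() is safe here.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported characters in expression")
    return str(eval(expression))

tools = {t.name: t for t in [
    Tool("calculator", "Evaluates arithmetic expressions.", calculator),
]}

result = tools["calculator"].run("2 * (3 + 4)")  # → "14"
```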
- Memory:
- Components that provide conversational memory to your chatflows, allowing the AI to remember past interactions within a session.
- Examples: Conversational Buffer Memory, Chat Message History.
- Crucial for building engaging and context-aware chatbots.
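The buffer-style memory described above can be sketched as a bounded message list: keep only the most recent exchanges so the model sees context without an ever-growing prompt. The class name and window size are illustrative assumptions.

```python
from collections import deque

class BufferMemory:
    """Keeps at most max_messages recent messages (sketch of buffer memory)."""
    def __init__(self, max_messages: int = 6):
        self.messages = deque(maxlen=max_messages)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def as_context(self) -> list[dict]:
        return list(self.messages)

memory = BufferMemory(max_messages=4)
for i in range(3):
    memory.add("user", f"question {i}")
    memory.add("assistant", f"answer {i}")
# Only the 4 most recent of the 6 messages are retained.
```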
- Document Loaders:
- Used to load data from various sources (e.g., PDF files, web pages, text documents) into your chatflow.
- Essential for Retrieval Augmented Generation (RAG) applications where you want your AI to answer questions based on specific documents.
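After a loader pulls in raw text, RAG pipelines typically split it into overlapping chunks before embedding. This helper is a generic sketch of that preprocessing step; the chunk and overlap sizes are arbitrary example values, not platform defaults.

```python
def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into fixed-size chunks that overlap, so context spans chunk edges."""
    chunks = []
    step = chunk_size - overlap  # advance less than chunk_size to create overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

document = "x" * 120  # stand-in for text from a Document Loader
chunks = chunk_text(document)
```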
- Vector Stores:
- Components that integrate with vector databases to store and retrieve embeddings of your documents.
- Examples: Pinecone, Chroma, Qdrant.
- Used in conjunction with Document Loaders and Embeddings to enable semantic search and RAG.
- Embeddings:
- Components that generate numerical representations (embeddings) of text, which are used by vector stores for similarity searches.
- Examples: OpenAI Embeddings, Cohere Embeddings.
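The similarity search that vector stores perform on these embeddings boils down to comparing vectors, commonly with cosine similarity. The tiny hand-made vectors below are placeholders; real embeddings come from a provider such as OpenAI Embeddings and have hundreds of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend embeddings of two stored documents.
documents = {
    "doc_pricing": [0.9, 0.1, 0.0],
    "doc_support": [0.1, 0.8, 0.2],
}
# Pretend embedding of the query "how much does it cost?"
query_vector = [0.85, 0.15, 0.05]

best = max(documents, key=lambda d: cosine_similarity(documents[d], query_vector))
# → "doc_pricing"
```

A vector store does exactly this ranking, but at scale and with indexing for speed.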
- Chains:
- Pre-defined sequences of components that perform common AI tasks (e.g., summarization chain, Q&A chain).
- Simplify complex workflows by encapsulating multiple steps into a single component.
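A chain is essentially an ordered pipeline where each step's output feeds the next. The sketch below uses stand-in functions (a template, a fake model call, a cleanup step) to show the composition idea; none of these names belong to the product.

```python
from typing import Callable

def make_chain(*steps: Callable[[str], str]) -> Callable[[str], str]:
    """Compose steps left-to-right into a single callable (sketch of a chain)."""
    def run(value: str) -> str:
        for step in steps:
            value = step(value)
        return value
    return run

format_prompt = lambda text: f"Summarize: {text}"
fake_llm = lambda prompt: prompt.upper()   # placeholder for a real model call
strip_output = lambda text: text.strip()

chain = make_chain(format_prompt, fake_llm, strip_output)
output = chain("hello world")  # → "SUMMARIZE: HELLO WORLD"
```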
- Agents:
- Components that use an LLM to decide which Tools to call, in what order, and with what inputs, enabling multi-step reasoning toward a goal rather than a fixed sequence.
- Output Parsers:
- Components that structure the output from LLMs into a specific format (e.g., JSON, list).
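A common parsing task is extracting a JSON object from raw model text that may include surrounding prose. This minimal sketch only locates and decodes the JSON; a real parser component may also validate against a schema.

```python
import json

def parse_json_output(raw: str) -> dict:
    """Find the outermost {...} span in the model's text and decode it."""
    start = raw.find("{")
    end = raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(raw[start:end + 1])

raw_output = 'Sure! Here is the result: {"sentiment": "positive", "score": 0.92}'
parsed = parse_json_output(raw_output)
# → {"sentiment": "positive", "score": 0.92}
```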
Configuring Components
Each component has unique configuration options that appear in the right panel when selected. These options allow you to customize the component's behavior, such as selecting an LLM model, providing API keys, defining prompt templates, or setting tool parameters.
By understanding and effectively utilizing the components library, you can build highly customized and powerful AI applications with InnoSynth-Forjinn.