
Human Input (Agentflow Node)

The Human Input node in InnoSynth-Forjinn's Agentflow is a critical component for building workflows that require human intervention, approval, or feedback during their execution. This node allows you to pause the automated process and solicit a decision or information from a human user before proceeding.

Purpose

The Human Input node is designed for scenarios where:

  • Human Approval: A step in the workflow requires explicit human approval (e.g., "Approve this transaction?").
  • Decision Making: The AI needs human guidance to make a choice (e.g., "Which option do you prefer?").
  • Information Gathering: The workflow needs specific information from a human that the AI cannot obtain (e.g., "Please provide your account number.").
  • Feedback Collection: Gathering feedback on an AI-generated response or action.

Appearance on Canvas

(Image: Placeholder for the Human Input node icon/appearance on the canvas)

Configuration Parameters

The Human Input node exposes the following options for defining the prompt shown to the user and for handling feedback; a configuration sketch follows the parameter descriptions.

1. Description Type (Required)

  • Label: Description Type
  • Type: options
  • Options: Fixed, Dynamic
  • Description: Determines how the prompt for human input is generated.
    • Fixed: You provide a static message.
    • Dynamic: An LLM generates the description based on a prompt.

2. Description (Required for Fixed Type)

  • Label: Description
  • Type: string
  • Placeholder: Are you sure you want to proceed?
  • Description: (Visible when Description Type is Fixed) The static message or question presented to the human user. Supports variables.

3. Model (Required for Dynamic Type)

  • Label: Model
  • Type: asyncOptions (loads available Chat Models)
  • Description: (Visible when Description Type is Dynamic) Select the LLM that will generate the human input prompt.

4. Prompt (Required for Dynamic Type)

  • Label: Prompt
  • Type: string
  • Default: DEFAULT_HUMAN_INPUT_DESCRIPTION_HTML (a predefined HTML prompt)
  • Description: (Visible when Description Type is Dynamic) The prompt given to the LLM to generate the human input description. Supports variables and can be used to provide context to the LLM.

5. Enable Feedback (Optional)

  • Label: Enable Feedback
  • Type: boolean
  • Default: true
  • Description: If enabled, the human user can provide a text-based feedback message along with their "Proceed" or "Reject" decision.
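The parameters above can be pictured as a single configuration object. The sketch below is a hypothetical TypeScript shape, not the platform's actual schema: the field names mirror the labels in this section, and `chatOpenAI` stands in for whichever Chat Model you select.

```typescript
// Hypothetical configuration shape for a Human Input node.
// Field names mirror the parameter labels above; the real schema
// used by InnoSynth-Forjinn may differ.
interface HumanInputConfig {
  descriptionType: 'fixed' | 'dynamic';
  description?: string;    // required when descriptionType is 'fixed'
  model?: string;          // required when descriptionType is 'dynamic'
  prompt?: string;         // required when descriptionType is 'dynamic'
  enableFeedback: boolean; // defaults to true in the UI
}

// Fixed: the same static question is shown every time.
const fixedApproval: HumanInputConfig = {
  descriptionType: 'fixed',
  description: 'Are you sure you want to proceed?',
  enableFeedback: true,
};

// Dynamic: the selected LLM writes the question from workflow context.
const dynamicApproval: HumanInputConfig = {
  descriptionType: 'dynamic',
  model: 'chatOpenAI', // assumed model identifier
  prompt: 'Summarize the pending action and ask the user to approve it: {{llmOutput}}',
  enableFeedback: true,
};
```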

Inputs & Outputs

  • Input: Typically receives a trigger from a preceding node, indicating that human input is now required.
  • Outputs: The node has two distinct output ports (see the sketch after this list):
    • Proceed: The workflow continues along this path if the human user chooses to "Proceed" or "Approve."
    • Reject: The workflow continues along this path if the human user chooses to "Reject" or "Decline."
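As a rough illustration, the node's downstream payload and port selection might look like the following. The `HumanInputResult` shape and its field names are assumptions for illustration only, not the platform's documented output format.

```typescript
// Hypothetical shape of the value a Human Input node passes downstream;
// the actual field names in InnoSynth-Forjinn may differ.
interface HumanInputResult {
  decision: 'proceed' | 'reject';
  feedback?: string; // present only when Enable Feedback is on
}

// Exactly one of the two output ports fires per execution.
function selectOutputPort(result: HumanInputResult): 'Proceed' | 'Reject' {
  return result.decision === 'proceed' ? 'Proceed' : 'Reject';
}
```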

How it Works During Execution

When the workflow reaches a Human Input node, the following happens (see the sketch after these steps):

  1. The workflow pauses.
  2. The configured description/prompt is presented to the human user (e.g., in a chat interface or a dedicated UI).
  3. The human user makes a choice ("Proceed" or "Reject") and optionally provides feedback.
  4. Based on the human's decision, the workflow resumes along the corresponding output path. The human's feedback (if enabled) will be available as part of the node's output.
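If a custom front end presents the prompt itself, it also has to hand the decision back to the workflow engine. The snippet below is a minimal sketch of that round trip; the endpoint path, payload shape, and execution identifier are assumptions rather than the documented Forjinn API, so consult the platform's API reference for the real contract.

```typescript
// Minimal sketch of submitting a human decision to resume a paused workflow.
// The URL, route, and body fields are hypothetical placeholders.
async function submitHumanDecision(
  baseUrl: string,
  executionId: string,
  decision: 'proceed' | 'reject',
  feedback?: string,
): Promise<void> {
  const res = await fetch(`${baseUrl}/api/v1/executions/${executionId}/human-input`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ decision, feedback }),
  });
  if (!res.ok) {
    throw new Error(`Failed to submit human decision: ${res.status}`);
  }
}

// Example: reject the pending step and send revision notes back to the workflow.
// await submitHumanDecision('https://forjinn.example.com', 'exec-123', 'reject',
//   'Tone is too formal; shorten the second paragraph.');
```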

Example Usage: Content Moderation Approval

Imagine an AI generating marketing copy that needs human review before being published.

  1. Connect an LLM node (generating marketing copy) to the input of a Human Input node.
  2. Configure Description Type: Dynamic
  3. Configure Model: Select an OpenAI Chat Model.
  4. Configure Prompt: "Review the following marketing copy for tone and accuracy. Do you approve it for publication? Provide feedback if rejected: {{llmOutput}}" (assuming llmOutput is the generated copy).
  5. Enable Feedback: true
  6. Connect Outputs:
    • Connect the Proceed output to a "Publish Content" node.
    • Connect the Reject output to a "Revise Content" node (which might loop back to the LLM with the human feedback).

This setup ensures that sensitive content is reviewed by a human before it is used, with a clear path for revision if needed; a configuration sketch of this flow follows.
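Expressed with the same hypothetical configuration shape used earlier, the moderation flow's Human Input node and its wiring might look like this. The node names and the edge format are illustrative only.

```typescript
// Human Input node for the content-moderation example (hypothetical shape).
const reviewCopy = {
  descriptionType: 'dynamic',
  model: 'chatOpenAI', // assumed identifier for the selected OpenAI Chat Model
  prompt:
    'Review the following marketing copy for tone and accuracy. ' +
    'Do you approve it for publication? Provide feedback if rejected: {{llmOutput}}',
  enableFeedback: true,
};

// Illustrative wiring: each output port feeds a different downstream node.
const edges = [
  { from: 'reviewCopy.Proceed', to: 'publishContent' },
  { from: 'reviewCopy.Reject', to: 'reviseContent' }, // can loop back to the LLM with the feedback
];
```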