Last updated: 12/9/2025

Testing & Debugging Workflows

Thorough testing and effective debugging are critical for building reliable, robust, and production-ready chatflows, agents, and automations in InnoSynth-Forjinn. This guide covers practical advice, how to use the platform's debugging tools, workflows for catching errors, and best practices for fixing complex automation bugs.


1. Live Testing in the UI

  • Interactive Canvas Testing: When building in the visual editor, you can trigger your chatflow/agent directly from the canvas.
    • Click the "Test" or "Send" button at any entry node (Start, Chat Input, Form).
    • Inputs can be provided directly; you'll see streaming or step-by-step output as the flow executes.
  • Variable Watch: Inspect variables and state at each node by clicking on them after a run.
  • Stop & Replay: Pause long conversations, step through each node, replay parts of the flow for quick iteration.

2. Simulation & Automated Testing

  • Simulate Input Variants: Use platform-provided Test Suites (if enabled) to run multiple user inputs (happy path, edge case, malicious input, empty state).
  • Assertions: Some nodes (such as Condition or Custom Function) support assertion logic to automatically detect failed states.
  • API Testing: Use REST endpoints with tools like Postman or cURL to simulate requests and catch backend-side bugs.
    POST /api/v1/prediction/<chatflowId>
    Body: { "question": "simulate message" }
    
  • State Validation: Use flow state debugging to confirm variables are set/updated as expected after each execution phase.
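The API-testing and input-variant ideas above can be combined into a small scripted suite. A minimal Python sketch, where the base URL `http://localhost:3000` and the chatflow ID are hypothetical placeholders for your own deployment; the endpoint path and body shape follow the `POST /api/v1/prediction/<chatflowId>` example:

```python
import json

def build_prediction_request(base_url, chatflow_id, question):
    """Build the URL and JSON body for the prediction endpoint shown above."""
    url = f"{base_url}/api/v1/prediction/{chatflow_id}"
    body = json.dumps({"question": question})
    return url, body

# Input variants matching the categories listed above:
# happy path, edge case, malicious input, empty state.
VARIANTS = {
    "happy_path": "What are your opening hours?",
    "edge_case": "a" * 10_000,            # oversized input
    "malicious": "'; DROP TABLE users;--",
    "empty": "",
}

def run_suite(send):
    """Run every variant through `send`, a callable(url, body) -> response.
    Swap in a real HTTP call (e.g. requests.post) for a live deployment."""
    results = {}
    for name, question in VARIANTS.items():
        url, body = build_prediction_request(
            "http://localhost:3000", "my-chatflow-id", question)
        results[name] = send(url, body)
    return results
```

Passing a stub `send` lets you dry-run the suite offline before pointing it at a live flow.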

3. Common Troubleshooting Scenarios

a. No Output or Partial Output

  • Confirm connections (edges) between nodes are all valid; nodes with no output terminals will halt the flow.
  • Check if Conditional or Loop nodes are stopping/looping early due to misconfigured logic.

b. Unexpected Data or Parsing Errors

  • Add temporary output nodes to inspect the data format at each step, and use Sticky Notes to record what you observe.
  • In Custom Function nodes, print/log variables or throw errors to halt faulty flows early.
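If your Custom Function nodes run Python (adapt to JavaScript if your platform uses that instead), a fail-fast validation sketch might look like the following; the `payload` variable and its key names are assumptions, not a platform API:

```python
def validate_payload(payload):
    """Fail fast inside a Custom Function node: raising here halts the
    flow instead of letting bad data propagate to downstream nodes."""
    if not isinstance(payload, dict):
        raise TypeError(
            f"expected dict, got {type(payload).__name__}: {payload!r}")
    # 'question' and 'sessionId' are illustrative required keys.
    missing = [k for k in ("question", "sessionId") if k not in payload]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    print("payload ok:", payload)  # appears in the run log
    return payload
```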

c. LLM/Agent Failures

  • Insufficient context or missing credentials are common causes of model and tool errors; always check the error message in the run log first.
  • For LLM JSON mode, failures usually come from a mismatch between the prompt's format instructions and the model's actual output; use step-by-step streaming to catch formatting issues early.
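A generic way to surface JSON-mode formatting problems early is to parse the model's output defensively. This is plain Python, not a platform API:

```python
import json

def parse_llm_json(raw):
    """Parse a model response expected to be JSON, with a clearer error
    when the model wrapped it in prose or Markdown code fences."""
    text = raw.strip()
    # Models in JSON mode sometimes still emit ```json fences.
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[len("json"):]
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        raise ValueError(
            f"model output is not valid JSON at char {e.pos}: {raw[:80]!r}"
        ) from e
```

Raising a descriptive error here gives the run log an exact failure point instead of a silent bad parse downstream.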

d. Tool/External API Failures

  • 400/401/500 errors: Review authentication credentials and request data shape.
  • Rate limit errors: Check API usage limits on the external provider dashboard.
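Rate-limit errors are usually transient, so retrying with exponential backoff often resolves them. In this sketch, `RateLimitError` is a stand-in for whatever exception your HTTP client raises on 429 responses:

```python
import time

class RateLimitError(Exception):
    """Stand-in for a 429 rate-limit error from an external API."""

def call_with_backoff(call, max_retries=4, base_delay=1.0):
    """Retry `call` on rate-limit errors, doubling the delay each time
    (1s, 2s, 4s, ...). Re-raises once retries are exhausted."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Keep `max_retries` small; if the provider dashboard shows sustained limit breaches, the fix is reducing request volume, not retrying harder.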

e. Looping/Infinite Execution

  • Always set a max loop count in Iteration and Loop nodes.
  • Use Condition nodes to halt the loop when desired criteria are met.
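The max-count-plus-condition pattern above can be expressed in plain Python; `step` and `done` are illustrative stand-ins for the Loop node body and the Condition node check:

```python
def run_loop(step, done, max_iterations=20):
    """Advance `state` with `step` until `done(state)` is true, but never
    exceed max_iterations -- the guard that prevents infinite execution."""
    state = None
    for _ in range(max_iterations):
        state = step(state)
        if done(state):
            return state
    raise RuntimeError(
        f"loop hit max_iterations={max_iterations} without meeting the exit condition")
```

The explicit `RuntimeError` turns a silent infinite loop into a traceable failure in the execution log.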

4. Advanced Debugging

  • Execution Tracing: Each run is fully traced (inputs, outputs, reasoning, errors), viewable in Agent Executions or Chatflow Executions log UI.
    • Expand logs to see tool calls, memory updates, LLM responses, and flow state for each step.
  • Tracing with Analytics Integration: Connect with LangFuse, Arize, LangSmith for deep, cross-session visibility and analysis.
  • Error Propagation: Trace the origin of any top-level error to the specific node/failure point via provided error IDs/stacks.

5. Best Practices

  • Incremental Build/Test: Build out flows node by node, testing each addition before adding more complexity.
  • Copy/Paste Troublesome Inputs: When a bug appears for a given input, copy the exact payload and reuse it for focused debugging.
  • Version Control for Chatflows: Use platform "export" to snapshot working flows before major changes.
  • Use Condition Nodes for Error Recovery: Catch/fix common error situations in automated flows.
  • Document with Sticky Notes: Annotate canvas with design rationale, caveats, and reminders.
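The Condition-node error-recovery practice above amounts to a guarded branch. A minimal sketch, where `primary` and `fallback` are hypothetical stand-ins for two branches of a flow:

```python
def answer_with_fallback(primary, fallback, question):
    """Try the primary branch; route any failure to a fallback reply,
    logging enough context to trace the failure in the run log."""
    try:
        return primary(question)
    except Exception as e:
        print(f"primary branch failed ({e!r}); using fallback")
        return fallback(question)
```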

6. Bug Reporting & Support

  • Always record:
    • Input leading to error
    • Expected/actual output
    • Flow export and affected node configs
    • Error log and traceback
  • The support team will ask for these details; providing them up front speeds diagnosis.

7. Key UI Features for Testing & Debugging

  Feature             | UI Location                   | Description
  --------------------|-------------------------------|----------------------------------------------
  Run/Stop            | Node context menu / canvas    | Start, pause, or stop execution
  Logs/Trace          | Agent Executions, right panel | Step-by-step execution and error logs
  Variable Inspector  | Side panel / top of canvas    | View runtime state at each node
  Output Console      | Chat Output node, bottom      | Final result and incremental streaming output
  Download Logs       | Execution log view            | JSON or CSV export for external debugging

By following these practices you will build robust, maintainable, and production-grade chatflows and agents.