[FEAT]: Raw Input Mode for Agent Flows #3193

Open
opened 2026-02-28 06:33:09 -05:00 by deekerman · 0 comments
Owner

Originally created by @elevatingcreativity on GitHub (Feb 21, 2026).

What would you like to see?

Problem

When an agent flow is invoked via @agent FlowName <input text>, the current architecture routes
everything through the LLM twice:

  1. The LLM reads the user's full message and decides to call the agent flow tool, generating its own
    version of the input as a function argument
  2. After the flow executes, the LLM processes the result again before returning it to the user

This creates several serious problems when the input is a large block of text (e.g. a document being
passed for processing):

  • Token limits — the full document must fit within the LLM's context window twice (once as input, once
    as output when the LLM reconstructs it as a function argument). This frequently exceeds limits even for
    models with large context windows.
  • Data fidelity — the LLM may paraphrase, truncate, or subtly alter the input text when passing it as a
    function argument, rather than preserving it verbatim. For document processing this is unacceptable.
  • Cost and latency — large documents being echoed through an LLM call adds significant cost and delay
    with no benefit.

Proposed Solution

Add a Raw Input Mode toggle to the Agent Flow configuration (on the flow's info node). When enabled, the LLM still decides to invoke the flow as a tool, but the argument it generates is discarded server-side. Instead, the original user text is extracted directly from the message and injected as the first input variable before the flow executes.

This seems appropriate for flows that act as pure text processors — where the flow receives a document,
runs it through a pipeline of steps, and returns a result — with no need for the LLM to interpret or
reformat the input.
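The bypass described above could be sketched roughly as follows. This is a minimal illustration only: the function names (`extractRawInput`, `resolveFlowArgs`) and the `rawInput`/`variables` config fields are hypothetical names chosen for this sketch, not the actual AnythingLLM API.

```javascript
// Strip the "@agent FlowName" invocation prefix so only the user's
// verbatim document text remains. Escapes the flow name in case it
// contains regex-special characters.
function extractRawInput(userMessage, flowName) {
  const escaped = flowName.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const prefix = new RegExp(`^@agent\\s+${escaped}\\s*`, "i");
  return userMessage.replace(prefix, "").trim();
}

// Before the flow executes, discard the LLM-generated value for the first
// input variable and substitute the raw user text when rawInput is enabled.
function resolveFlowArgs(flow, llmArgs, userMessage) {
  if (!flow.config?.rawInput) return llmArgs; // normal LLM-mediated path
  const [firstVar] = flow.config.variables ?? [];
  if (!firstVar) return llmArgs;
  return {
    ...llmArgs,
    [firstVar.name]: extractRawInput(userMessage, flow.name),
  };
}
```

The point of doing the substitution server-side is that the large document never has to round-trip through the model as a function argument; the LLM's generated argument is still accepted syntactically but simply overwritten.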

We have an initial implementation working across three files:

  • frontend/src/pages/Admin/AgentBuilder/nodes/FlowInfoNode/index.jsx — toggle UI on the flow info node
  • frontend/src/pages/Admin/AgentBuilder/index.jsx — persists the rawInput flag when saving/loading
    flows
  • server/utils/agentFlows/index.js — server-side bypass that extracts raw text from the user's original
    message, strips the @agent FlowName prefix, and injects it directly as the first variable argument
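For the persistence side, the saved flag might look roughly like this. All field names below are assumptions for illustration based on the description above, not the project's actual flow schema:

```javascript
// Hypothetical sketch: persist the rawInput flag alongside the flow's
// info-node data on save, and default it to false on load so existing
// flows keep the current LLM-mediated behavior.
function serializeFlowInfo(infoNode) {
  return {
    name: infoNode.name,
    description: infoNode.description,
    // When true, the server discards the LLM-generated argument and
    // injects the user's verbatim message as the first input variable.
    rawInput: Boolean(infoNode.rawInput),
  };
}

function deserializeFlowInfo(saved) {
  return { ...saved, rawInput: saved.rawInput ?? false };
}
```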

Proof-of-concept code is available at:
https://github.com/elevatingcreativity/anything-llm/tree/agent-flow-raw-input

I can open a PR if this approach seems acceptable as a solution.
