Mirror of https://github.com/Mintplex-Labs/anything-llm.git, synced 2026-03-02 22:57:05 -05:00
[FEAT]: Raw Input Mode for Agent Flows #3193
Originally created by @elevatingcreativity on GitHub (Feb 21, 2026).
What would you like to see?
Problem

When an agent flow is invoked via `@agent FlowName`, the current architecture routes everything through the LLM twice: once for the LLM to decide to invoke the flow as a tool, and again for it to generate its own version of the input as a function argument.

This creates several serious problems when the input is a large block of text (e.g. a document being passed for processing):

- The document counts against the context window twice (once as input, and again as output when the LLM reconstructs it as a function argument). This frequently exceeds limits even for models with large context windows.
- The LLM may paraphrase, truncate, or otherwise alter the text when regenerating it as a function argument, rather than preserving it verbatim. For document processing this is unacceptable.
- Latency and token cost roughly double, with no benefit.
Proposed Solution
Add a Raw Input Mode toggle to the Agent Flow configuration (on the flow's info node). When enabled, the LLM still decides to invoke the flow as a tool, but the argument it generates is discarded server-side. Instead, the original user text is extracted directly from the message and injected as the first input variable before the flow executes.
This seems appropriate for flows that act as pure text processors — where the flow receives a document,
runs it through a pipeline of steps, and returns a result — with no need for the LLM to interpret or
reformat the input.
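As a rough sketch of the proposed server-side behavior (the names `rawInputMode`, `stripAgentPrefix`, and `resolveFlowInput` are illustrative assumptions here, not the actual anything-llm API):

```javascript
// Illustrative sketch only -- function and property names are assumptions,
// not the actual anything-llm implementation.

// Remove the "@agent FlowName" invocation prefix so the flow receives
// the user's original text verbatim.
function stripAgentPrefix(message, flowName) {
  // Escape regex metacharacters in case the flow name contains any.
  const escaped = flowName.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return message.replace(new RegExp(`^@agent\\s+${escaped}\\s*`, "i"), "");
}

// Choose the first input variable for a flow run. With Raw Input Mode on,
// the argument the LLM generated is discarded and the original message
// (minus the invocation prefix) is injected instead.
function resolveFlowInput(flow, llmArgument, originalMessage) {
  if (flow.config && flow.config.rawInputMode) {
    return stripAgentPrefix(originalMessage, flow.name);
  }
  return llmArgument;
}
```

So for `@agent Summarize <document>`, the flow's first variable would be `<document>` exactly as the user typed it, regardless of what the LLM produced as its tool-call argument.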
We have an initial implementation working across three files:

- the flow configuration, which adds the Raw Input Mode toggle for flows
- the server-side invocation path, which extracts the original user message, strips the `@agent FlowName` prefix, and injects it directly as the first variable argument
Proof-of-concept code is available at:
https://github.com/elevatingcreativity/anything-llm/tree/agent-flow-raw-input
I can open a PR if this seems like an acceptable solution.