Mirror of https://github.com/Mintplex-Labs/anything-llm.git (synced 2026-03-02 22:57:05 -05:00)
[FEAT]: For/Loop #2420
Originally created by @kfsone on GitHub (Apr 26, 2025).
What would you like to see?
This might be achievable with flows, but it doesn't seem trivial, and it's not quite something you can implement with MCP. The intent is a set of subtasks, with the goal of reducing context length by discarding previous steps once they complete.
If you were to unroll this, it would keep expanding the context buffer: each iteration adds "generate a joke and write it to jokes.txt" or something, the model reasons about it, talks to the MCP server, and finally writes the entry to the file. Then you'd have to ask "do we have a MB of jokes yet?", costing another LLM response, before the next iteration.
With the loop, AnythingLLM would effectively be rolling the context back between sub-commands.
This allows us to start the first iteration: AnythingLLM discards all the new context and goes back to the state from before the condition check.
So, once more, AnythingLLM can roll the context back to the "prior to @while" state and return to the condition check.
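The rollback idea above can be sketched in plain Python. This is a hypothetical illustration, not AnythingLLM's actual flow engine: `run_llm` is a stand-in for a real model call, and the snapshot/truncate mechanics are one possible way to implement "roll back to the prior-to-@while state".

```python
# Hypothetical sketch of a loop step that snapshots the conversation,
# runs a sub-command, keeps only the result, and rolls the context back
# so later iterations don't inherit the intermediate chatter.

def run_llm(messages):
    # Stand-in for a real model call; returns a canned reply.
    return f"reply to: {messages[-1]}"

def loop_with_rollback(context, body_prompt, condition, max_iters=100):
    results = []
    snapshot = len(context)          # state "prior to @while"
    for _ in range(max_iters):
        if condition(results):       # condition checked against collected results
            break
        context.append(body_prompt)
        reply = run_llm(context)
        context.append(reply)
        results.append(reply)        # keep the result...
        del context[snapshot:]       # ...but roll the context back to the snapshot
    return results

history = ["system: you are a joke writer"]
out = loop_with_rollback(history, "generate a joke", lambda r: len(r) >= 3)
```

After the loop, `history` is unchanged: only the snapshot survives, and the per-iteration chatter never accumulates, which is exactly the context-length saving being asked for.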
There needs to be some way to collect the results, either as a list of responses or as one big string (or by writing them to a file, etc.).
With such a mechanism, allowing you to bring only the responses back, you could employ filtering with another loop:
What's the point of this? It re-outputs the jokes only if they meet a condition, so it's filtering out the non-funny ones. More importantly, if the agent is the same model, it sees each joke in a different, separate context and may not find it funny; if it's a different model, you get a cross-check.
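That filtering pass can be sketched the same way. The `judge` function here is a hypothetical stand-in for a second model call; the point of the sketch is that each item is judged in a fresh context, with no shared state between items or with the generation loop.

```python
# Hypothetical second-pass filter: each collected response is re-judged
# in a clean, separate context, so the judge never sees the generation
# chatter and gives an independent verdict.

def judge(text):
    # Stand-in for a model call with a fresh context;
    # here we "approve" anything containing the word "funny".
    return "funny" in text

def filter_responses(responses):
    kept = []
    for r in responses:
        # Each judge call starts from scratch: no context carried over.
        if judge(r):
            kept.append(r)
    return kept

jokes = ["a funny pun", "a flat one-liner", "another funny bit"]
print(filter_responses(jokes))
```

If `judge` were backed by a different model than the generator, this same structure gives you the cross-check described above.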
There are pieces missing (I didn't try to design how to capture variables) and placeholder pieces ({{>...>}} and {{<...<}}) ... because so many other markers are already used, why not make up some moar? :)
I frequently have subtasks that require iteration (replace 'yaml' with 'json' in all these files... wait, you changed 30 of them and then said I hadn't told you what to do. CONTEXT LENGTH, DURNIT!). I could solve it with MCP, but not without losing the enclosing context, which can be a booby trap if you aren't thinking "I'm about to speak just these words to a totally separate AI"....
@timothycarambat commented on GitHub (Apr 27, 2025):
I can say that at this time we won't be pursuing such a feature, since it would not only be complex to use for the typical AnythingLLM persona, but its performance would also be dubious at best: applying these kinds of "logic" rules to a prompt is bound to fall apart on any model that is not running in the cloud, or at the very least on a GPU.
Outside of that, practically speaking, this would be easier and more deterministic to achieve by using the backend API to run inference in an actual while/for loop in custom code. You could do exactly what you ask, but because the interpretation of the rule is done programmatically, you can ensure no hallucination occurs in your outputs. That, plus MCP (which can be invoked via the backend /chat API), would be a more robust way of solving this.
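The suggestion above amounts to something like the following sketch. The endpoint path, workspace slug, and payload/response fields are assumptions for illustration, not AnythingLLM's documented API; the point is the loop structure, where the while-condition is evaluated in code rather than interpreted by a model.

```python
# Deterministic outer loop in plain code: the stopping rule is checked
# programmatically, so no model has to interpret "do we have a MB yet?".

import json
import urllib.request

API_BASE = "http://localhost:3001/api/v1"   # assumed local instance
WORKSPACE = "jokes"                          # hypothetical workspace slug

def chat(prompt):
    # One-shot inference call; URL and field names are illustrative.
    req = urllib.request.Request(
        f"{API_BASE}/workspace/{WORKSPACE}/chat",
        data=json.dumps({"message": prompt, "mode": "chat"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["textResponse"]

def collect_jokes(chat_fn=chat, target_bytes=1_000_000):
    jokes = []
    # The condition is evaluated here, in code, not by the model.
    while sum(len(j) for j in jokes) < target_bytes:
        jokes.append(chat_fn("generate a joke"))
    return jokes
```

Because `chat_fn` is injectable, the loop logic can also be exercised against a stub without a running server, which is another advantage of keeping the control flow in ordinary code.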