[FEAT]: For/Loop #2420

Closed
opened 2026-02-28 06:07:16 -05:00 by deekerman · 1 comment
Owner

Originally created by @kfsone on GitHub (Apr 26, 2025).

What would you like to see?

This might be possible to achieve with flows, but it doesn't seem to be trivial, and it's not quite something you can implement with MCP.

The intent is a set of subtasks whose goal is to reduce the context length by removing previous steps.

```
<system prompt>
<user prompt>
<initial response>
@while {  // egyptian braces for parse simplification; we don't have to track state across lines.
  we have less than 1mb of text in the file jokes.txt
} then {  // egyptian single-line for parse ease again
  generate or choose a joke
  search jokes.txt for the joke
  append it to jokes.txt if it is not already found
}
```

If you were to unroll this, it would repeatedly expand the context buffer with "generate a joke and write it to jokes.txt" or similar: the model would reason about it, talk to the MCP server, and finally write the entry to jokes.txt. Then you'd have to ask "do we have 1mb of jokes yet?", get another LLM response, and start the next iteration:

```
...
user: generate a joke and add it to jokes.txt
assistant: <preamble>, <mcp instruction>
user: how big is jokes.txt now?
assistant: 1kb
user: ok, generate a joke and add it to jokes.txt, try to avoid repetition
assistant: <preamble>, <mcp instruction>
user: how big is jokes.txt now?
...
```

with the loop, AnythingLLM would effectively roll the context back to issue sub-commands:

Iteration #1:

```
<context prior to @while>
[anythingllm remembers the offset from start]
user: do we have less than 1mb of text in the file jokes.txt?
response must end with a machine-readable indicator,
either "{{>true>}}", or "{{>false>}}", or if an error respond "{{>error>... error text ...>}}".
assistant: <preamble>, <mcp instruction> .. jokes.txt is 1kb
{{>true>}}
```
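The `{{>...>}}` indicator convention could be recognized with a small helper. This is just a sketch; `parse_indicator` and the exact marker grammar are assumptions built from the examples in this proposal, not an existing API:

```python
import re

# Hypothetical helper: extract the trailing machine-readable indicator
# ({{>true>}}, {{>false>}}, or {{>error>...detail...>}}) from an assistant
# reply. Returns (kind, detail) where detail is None unless provided.
INDICATOR = re.compile(r"\{\{>(true|false|error)(?:>(.*?))?>\}\}\s*$", re.DOTALL)

def parse_indicator(reply: str):
    m = INDICATOR.search(reply)
    if not m:
        # No indicator at the end of the reply: treat it as an error.
        return ("error", "no machine-readable indicator found")
    return (m.group(1), m.group(2))
```

A reply such as `"jokes.txt is 1kb\n{{>true>}}"` would parse to `("true", None)`, so the driver can branch without re-prompting the model.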

this allows us to start the first iteration: AnythingLLM discards all the new context and goes back to the state before the condition check:

```
<context prior to @while>
user: generate or choose a joke
search jokes.txt for the joke
and add the joke if it wasn't present

evaluate the instructions above using tools as required. ... if any kind of failure occurs during the process that the request does not explicitly account for, stop, and respond with an error. otherwise, provide an ok response.
response must end with a machine-readable indicator,
either "{{>error> ... text of error ...>}}" or "{{>ok>}}" if any problems were explicitly accounted for in the request.
assistant: <preamble, blather>, <mcp instruction>
{{>ok>}}
```

so, once more, AnythingLLM can roll the context back to the "prior to @while" state and return to the condition check.
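The rollback behavior described above can be sketched as a plain driver loop. Everything here is hypothetical: `complete` stands in for whatever chat-completion call AnythingLLM would make, and "rollback" is implemented by simply re-sending the same base message list each iteration so no per-iteration context accumulates:

```python
# Minimal sketch of the proposed @while driver, under the assumptions above.
def run_while(base_messages, condition_prompt, body_prompt, complete, max_iters=100):
    for _ in range(max_iters):
        # Condition check runs against a fresh copy of the pre-@while context.
        reply = complete(base_messages + [{"role": "user", "content": condition_prompt}])
        if "{{>true>}}" not in reply:
            break  # {{>false>}} or {{>error>...>}} ends the loop
        # Body also runs from the pre-@while context, discarding the check above.
        complete(base_messages + [{"role": "user", "content": body_prompt}])
```

The point of the sketch is that the context window never grows with iteration count; only `base_messages` plus one prompt is ever in flight.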

There needs to be some way to collect the results, either as a list of responses or as one big string (or you can write them to a file, etc.).

With such a mechanism, allowing you to bring only the responses back, you could employ filtering with another loop:

```
@while {
  there are more entries in {{<jokes<}}
  @take joke from jokes
} then {
  @ask-agent is this joke funny, respond with a single word: yes or no: {{<joke<}}
  @if-response {
    yes {
      {{<joke<}}
    }
  }
}
```

What's the point of this? It's outputting the jokes again if they meet a condition, so it's filtering the non-funny ones. More importantly, if the agent is the same model, it's seeing it in a different/separate context, and may not find it funny. If it's a different model, you get a cross-check.
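The filtering loop could be sketched in ordinary code as well. `ask_agent` is a hypothetical callable standing in for the `@ask-agent` step (each call is a fresh, separate context, which is what gives the cross-check):

```python
# Sketch of the joke-filtering loop, assuming `ask_agent(prompt)` returns
# the judge model's one-word reply. Only jokes judged "yes" are kept.
def filter_funny(jokes, ask_agent):
    kept = []
    for joke in jokes:
        verdict = ask_agent(
            f"is this joke funny, respond with a single word: yes or no: {joke}"
        )
        if verdict.strip().lower().startswith("yes"):
            kept.append(joke)
    return kept
```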

There are pieces missing (I didn't try to design how to capture variables) and placeholder pieces ({{>...>}} and {{<...<}} ... because so many other markers are already used, why not make up some moar? :)

I frequently have subtasks that require iteration (replace 'yaml' with 'json' in all these files... wait, you changed 30 of them and then said I hadn't told you what to do. CONTEXT LENGTH, DURNIT!). I could solve it with mcp but not without losing the enclosing context, which can be a booby-trap if you aren't thinking "I'm about to speak just these words to a totally separate ai"....


@timothycarambat commented on GitHub (Apr 27, 2025):

I can say that at this time we won't be pursuing such a feature, since it would not only be complex to use for the specific type of persona which uses AnythingLLM, but also its performance would be dubious at best, since applying these kinds of "logic" rules to a prompt is bound to fall apart on any model that is not running in the cloud, or at the very least running on a GPU.

Outside of that, practically speaking this would be easier and more deterministic to apply by just using the backend API to run inference in an actual while/for loop based on custom code. You could do exactly what you ask but because the interpretation of the rule is done programmatically, you can ensure no hallucination is occurring for your outputs. That + MCP, which can be invoked by the backend /chat API, would be a more robust way of solving this.
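The suggestion above can be sketched as a plain Python loop. The `chat` callable is a hypothetical wrapper around whatever backend chat endpoint is used (with MCP tools doing the actual file append); the key point is that the loop condition is evaluated deterministically in code, not by the model:

```python
import os

# Sketch of driving the iteration from custom code, per the comment above.
# `chat(prompt)` is assumed to send one prompt to the backend API; the
# size check cannot hallucinate because it never touches the model.
def fill_jokes_file(path, chat, limit_bytes=1_000_000):
    while os.path.getsize(path) < limit_bytes:
        chat("generate a joke and append it to jokes.txt, avoiding repetition")
```

Each backend call also starts from a bounded context, which addresses the original context-growth concern without any new prompt syntax.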
