Mirror of https://github.com/Mintplex-Labs/anything-llm.git (synced 2026-03-02 22:57:05 -05:00)
[BUG/FEAT]: AWS Bedrock reasoning models with @agent #2312
Labels
No labels
Reference: starred/anything-llm#2312
Originally created by @timothycarambat on GitHub (Mar 27, 2025).
Originally assigned to: @timothycarambat on GitHub.
How are you running AnythingLLM?
All versions
What happened?
With the refactor of AWS Bedrock to move away from Langchain in https://github.com/Mintplex-Labs/anything-llm/pull/3537, this needs to be expanded to the agent execution provider, as using Langchain with reasoning models for agent execution is not possible with the current implementation.

Current workaround
Do not use reasoning models for AWS Bedrock agent execution, as the content response is an array, as opposed to a string like all other models.

Are there known steps to reproduce?
Use a reasoning model for AWS Bedrock and send a single agent chat. The error will manifest as a jsonString?.startsWith error, which is a red herring: the real error is the output format of the response being mishandled by Langchain.

@tristan-stahnke-GPS commented on GitHub (Apr 24, 2025):
@timothycarambat I think my PR #3714 resolved this, can you check?
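The array-vs-string mismatch described in the issue can be sketched as a small normalization helper. This is an illustrative example only, not AnythingLLM's actual code: the block shape (`type`/`text` fields) is an assumption modeled loosely on how reasoning-capable Bedrock models return structured content blocks, while non-reasoning models return a plain string.

```javascript
// Hypothetical sketch of normalizing a Bedrock chat response.
// Reasoning models may return `content` as an array of blocks
// (reasoning blocks plus text blocks); other models return a string.
// Calling string methods like `startsWith` directly on the array
// is what surfaces the misleading error in the issue.
function normalizeContent(content) {
  // Plain-string responses (non-reasoning models) pass through unchanged.
  if (typeof content === "string") return content;

  // Array responses: keep only the text blocks and join them.
  if (Array.isArray(content)) {
    return content
      .filter((block) => block?.type === "text" && typeof block.text === "string")
      .map((block) => block.text)
      .join("\n");
  }

  // Anything else: fall back to an empty string rather than throwing.
  return "";
}
```

With a shape like this in the provider layer, downstream code that expects a string (e.g. a `jsonString?.startsWith(...)` check) would see consistent input regardless of whether a reasoning model was used.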
@tristan-stahnke-GPS commented on GitHub (Apr 25, 2025):
I was able to create agents using bedrock with my PR applied for the bedrock provider, parsed the content of a google search and sent it back to the LLM; not sure if there's another part of the @agent functionality that's needed, but would be cool to test it out, definitely want to get agents working fully with bedrock 💯
@0xbadshah commented on GitHub (Jun 25, 2025):
@tristan-stahnke Amazon Bedrock has multiple models, and not every model works correctly despite the right IAM permissions. I tried the agent with DeepSeek and it errors out. Which Bedrock model did you test this against?
@tristan-stahnke-GPS commented on GitHub (Jun 25, 2025):
@Chan9390 I primarily use the Claude Sonnet / Opus models (as well as the Amazon Nova models). I haven't had a chance to look at DeepSeek; that would definitely be something to chase down! Making the provider more model agnostic would be ideal as well, so we can leave room for additional functionality down the road (maybe exposing thought-process tokens?). I'll take a look!
@bhasmang-tri commented on GitHub (Oct 7, 2025):
I also get "Invalid message content: empty string. 'ai' must contain non-empty content." when I invoke the @agent with Bedrock (Claude model) using an IAM role (hosted on AWS via Docker).
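The "empty string" rejection reported above suggests an empty assistant message is making it into the chat history sent to Bedrock. A hedged sketch of a pre-call guard (the message shape and the idea that filtering is the right fix are assumptions for illustration, not AnythingLLM's actual handling):

```javascript
// Hypothetical guard: drop history entries whose content is an empty
// string before sending the conversation to the provider, so a prior
// failed/empty assistant turn can't trigger a non-empty-content error.
function dropEmptyMessages(messages) {
  return messages.filter(
    (m) => typeof m.content === "string" && m.content.trim().length > 0
  );
}
```

Note this only masks the symptom; the root cause would still be whatever produced the empty assistant turn in the first place (e.g. the reasoning-model content mishandling described in the issue).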
@SquadUpSquid commented on GitHub (Nov 3, 2025):
Hello, is there any update on this issue? I'm having similar issues when trying to call @agent (MCP server) using Bedrock models.
Hosted on an AWS EC2 using Docker.
Input: @agent blah blah blah
Output: (it's not always the same)
I have tried this with Bedrock Claude 3.5, Claude 3.7, Nova Pro, and Llama 3 70B. I'm using IAM roles and gave it "bedrock:*" just to rule out any permission problems with Bedrock.
I have used the free Grok models as a "control" and have gotten a proper response from them.