[BUG]: Could not respond to message. #3000
Originally created by @foundanand on GitHub (Nov 26, 2025).
How are you running AnythingLLM?
AnythingLLM desktop app
What happened?
AWSBedrock::streamGetChatCompletion failed during setup. Invocation of model ID meta.llama4-maverick-17b-instruct-v1:0 with on-demand throughput isn’t supported. Retry your request with the ID or ARN of an inference profile that contains this model.
I keep getting this error. I have tried many things, including following the setup guide at:
https://docs.anythingllm.com/setup/llm-configuration/cloud/aws-bedrock
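For context, the error means the bare foundation-model ID only works for models that support on-demand throughput; this model has to be called through an inference profile. A minimal sketch for finding the profile that wraps it, assuming boto3 with a recent Bedrock API version and configured AWS credentials (the region is an example):

```python
import boto3

# The control-plane "bedrock" client (not "bedrock-runtime") exposes
# list_inference_profiles in recent boto3 versions.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Print each inference profile and the model ARNs it routes to, to find
# the profile containing meta.llama4-maverick-17b-instruct-v1:0.
for profile in bedrock.list_inference_profiles()["inferenceProfileSummaries"]:
    print(profile["inferenceProfileId"])
    for model in profile.get("models", []):
        print("  ", model["modelArn"])
```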
Are there known steps to reproduce?
No response
@timothycarambat commented on GitHub (Nov 26, 2025):
The error message says all there is to know about this error, and it is not related to AnythingLLM: the Llama 4 Maverick model cannot be invoked with serverless/on-demand throughput on Bedrock, so you must use the ID or ARN of an inference profile that contains it. That's it.
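For anyone hitting this, the fix is to pass an inference profile ID (or its ARN) where the model ID would normally go. A minimal sketch, assuming boto3, configured AWS credentials, and that a cross-region profile for this model exists in your account; the `us.`-prefixed profile ID below is an assumption, so verify it with the listing above:

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Pass the inference profile ID (or its full ARN) as modelId instead of the
# bare foundation-model ID; the bare ID only works for on-demand models.
response = client.converse_stream(
    modelId="us.meta.llama4-maverick-17b-instruct-v1:0",  # assumed profile ID
    messages=[{"role": "user", "content": [{"text": "Hello"}]}],
)

# Print text deltas as they stream in.
for event in response["stream"]:
    delta = event.get("contentBlockDelta", {}).get("delta", {})
    if "text" in delta:
        print(delta["text"], end="", flush=True)
```

The same idea should apply in AnythingLLM's Bedrock settings: enter the inference profile ID or ARN in the model ID field rather than the plain model ID.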