mirror of
https://github.com/Mintplex-Labs/anything-llm.git
synced 2026-03-02 22:57:05 -05:00
[FEAT]: Support Multiple Generic OpenAI Configuration Presets #3216
Originally created by @yangcai0731 on GitHub (Feb 27, 2026).
What would you like to see?
Currently, the Generic OpenAI provider only supports saving a single configuration set (Base URL, API Key, Model Name, and related parameters). Users who need to connect to multiple OpenAI-compatible API services (such as regional large-model APIs or different third-party OpenAI-format endpoints) have to re-enter and modify every configuration parameter each time they switch services, which is inefficient and error-prone.
Proposal: add support for saving and switching between multiple Generic OpenAI configuration presets.
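To make the proposal concrete, here is a minimal sketch of what a preset store could look like. All names, fields, and values below are illustrative assumptions for this request, not AnythingLLM's actual schema or API:

```python
from dataclasses import dataclass


@dataclass
class GenericOpenAIPreset:
    # Fields mirror the settings the request lists; the names and
    # defaults here are hypothetical, not AnythingLLM's real schema.
    name: str
    base_url: str
    api_key: str
    model: str


# A user could save one preset per OpenAI-compatible service.
PRESETS = {
    "local-vllm": GenericOpenAIPreset(
        name="local-vllm",
        base_url="http://localhost:8000/v1",
        api_key="not-needed",
        model="llama-3-8b-instruct",
    ),
    "regional-api": GenericOpenAIPreset(
        name="regional-api",
        base_url="https://api.example-region.com/v1",  # placeholder URL
        api_key="sk-placeholder",
        model="regional-chat-model",
    ),
}


def activate_preset(name: str) -> GenericOpenAIPreset:
    """Look up a saved preset instead of re-entering every field."""
    if name not in PRESETS:
        raise KeyError(f"no preset named {name!r}")
    return PRESETS[name]
```

Switching services would then be a single lookup, e.g. `activate_preset("local-vllm")`, rather than a full form re-entry.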
The current workaround is to create a separate workspace for each API service, but this is not ideal: it forces users to split their workflow across multiple workspaces just to switch models, and the same document/embedding context cannot be shared smoothly between different models.
Another alternative is to run a local proxy service such as LiteLLM, but this adds extra setup and maintenance overhead, which is unfriendly to non-technical users.
This feature will greatly improve the experience for users who rely on multiple OpenAI-compatible LLM services, especially for users who need to connect to non-US regional API providers. It will also make the Generic OpenAI integration much more flexible without breaking any existing functionality.
Thank you so much for your hard work on this project and for reviewing this feature request! We really appreciate your consideration of this improvement!
@dandawg commented on GitHub (Feb 27, 2026):
As far as I can tell, if you use the generic openai provider in two different workspaces, you can select a different named model, but it still assumes you are using the same endpoint (you can't have multiple endpoints).
This issue represents a very important use case. To be able to compare different open-source models at different endpoints, we need this implemented right away. I may wish to switch between a multi-modal model and a reasoning model that I have access to, or between a large and a small version of the same model that I have spun up and would like to compare. It is burdensome to have to re-enter endpoint and token details each time in the one available slot.