[FEAT]:Support Multiple Generic OpenAI Configuration Presets #3216

Open
opened 2026-02-28 06:33:43 -05:00 by deekerman · 1 comment

Originally created by @yangcai0731 on GitHub (Feb 27, 2026).

What would you like to see?

Currently, the Generic OpenAI provider only supports saving a single configuration set (Base URL, API Key, model name, and other related parameters). Users who need to connect to multiple OpenAI-compatible API services (such as regional large-model APIs or different third-party OpenAI-format endpoints) must re-enter and modify every configuration parameter each time they switch services, which is inefficient and error-prone.

Add support for multiple Generic OpenAI configuration presets:

  1. Allow users to save multiple named Generic OpenAI configuration sets, each with its own Base URL, API Key, default model name, context window, and other core parameters
  2. Add a preset selector dropdown to both the global LLM settings and the per-workspace Chat Settings, so saved configurations can be switched in one click
  3. Support adding, editing, deleting, and renaming presets directly in the settings interface
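
The requested behavior could be sketched roughly as follows. This is a hypothetical illustration, not AnythingLLM's actual settings schema: the `OpenAIPreset` fields and `PresetStore` operations are assumptions drawn from the three points above.

```typescript
// Hypothetical preset shape (field names are assumptions, not the real schema).
interface OpenAIPreset {
  name: string;         // user-chosen label, e.g. "regional-api"
  baseUrl: string;      // OpenAI-compatible endpoint
  apiKey: string;
  defaultModel: string;
  contextWindow: number;
}

// Sketch of the save/switch/rename/delete operations from the list above.
class PresetStore {
  private presets = new Map<string, OpenAIPreset>();
  private activeName: string | null = null;

  save(preset: OpenAIPreset): void {
    this.presets.set(preset.name, preset);
    if (this.activeName === null) this.activeName = preset.name;
  }

  // One-click switch: replaces re-entering every field by hand.
  activate(name: string): OpenAIPreset {
    const p = this.presets.get(name);
    if (!p) throw new Error(`unknown preset: ${name}`);
    this.activeName = name;
    return p;
  }

  rename(oldName: string, newName: string): void {
    const p = this.presets.get(oldName);
    if (!p) throw new Error(`unknown preset: ${oldName}`);
    this.presets.delete(oldName);
    this.presets.set(newName, { ...p, name: newName });
    if (this.activeName === oldName) this.activeName = newName;
  }

  remove(name: string): void {
    this.presets.delete(name);
    if (this.activeName === name) this.activeName = null;
  }

  get active(): OpenAIPreset | null {
    return this.activeName ? this.presets.get(this.activeName) ?? null : null;
  }
}
```

A dropdown in the settings UI would then just call `activate(name)` instead of rewriting each field.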

The current workaround is to create a separate workspace for each API service, but this is not ideal: it forces users to split their workflow across multiple workspaces just to switch models, and it prevents smoothly sharing the same document/embedding context across models.
Another alternative is a local proxy service such as LiteLLM, but that adds extra setup and maintenance overhead and is unfriendly to non-technical users.

This feature will greatly improve the experience for users who rely on multiple OpenAI-compatible LLM services, especially for users who need to connect to non-US regional API providers. It will also make the Generic OpenAI integration much more flexible without breaking any existing functionality.

“Thank you so much for your hard work on this project and for reviewing this feature request! We really appreciate your consideration of this improvement!”


@dandawg commented on GitHub (Feb 27, 2026):

> The current workaround is to create separate workspaces for each API service

As far as I can tell, if you use the Generic OpenAI provider in two different workspaces, you can select a different named model, but both workspaces still assume the same endpoint (you can't have multiple endpoints).

This issue represents a very important use case. To compare different open-source models (at different endpoints), we need this implemented right away. I may want to switch between a multi-modal model and a reasoning model I have access to, or between a large and a small version of the same model that I have spun up and would like to compare. It is burdensome to re-enter endpoint and token details each time in the one available slot.
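
The comparison scenario above could look like this with per-preset endpoints. A minimal sketch, assuming nothing about AnythingLLM internals; the `Endpoint` fields, URLs, keys, and model names are illustrative placeholders, and the request body follows the standard OpenAI chat-completions format.

```typescript
// Hypothetical endpoint record (placeholder fields, not real code).
interface Endpoint {
  baseUrl: string; // OpenAI-compatible base, e.g. "http://host:8000/v1"
  apiKey: string;
  model: string;
}

// Build an OpenAI-style chat-completion request for one endpoint.
function buildChatRequest(ep: Endpoint, prompt: string) {
  return {
    url: `${ep.baseUrl.replace(/\/$/, "")}/chat/completions`,
    method: "POST",
    headers: {
      Authorization: `Bearer ${ep.apiKey}`,
      "Content-Type": "application/json",
    } as Record<string, string>,
    body: JSON.stringify({
      model: ep.model,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// Same prompt, two endpoints -- e.g. a large and a small deployment.
const endpoints: Endpoint[] = [
  { baseUrl: "http://localhost:8000/v1", apiKey: "key-large", model: "llama-70b" },
  { baseUrl: "http://localhost:8001/v1", apiKey: "key-small", model: "llama-8b" },
];
const requests = endpoints.map((ep) => buildChatRequest(ep, "Summarize this doc."));
```

With presets, "switch endpoints" becomes selecting a different `Endpoint` record rather than retyping the URL and token.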

Reference
starred/anything-llm#3216