[FEAT]: Built-in llama-cpp integration with a model download interface #3211

Open
opened 2026-02-28 06:33:34 -05:00 by deekerman · 0 comments
Owner

Originally created by @pratik-narain on GitHub (Feb 25, 2026).

What would you like to see?

Since Vulkan support in Ollama is still experimental, and LM Studio is just another piece of external software to run, a llama.cpp backend built into AnythingLLM, with configuration options and model downloads from Hugging Face, would make this the true one-stop local LLM solution.
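
To make the idea concrete, here is a rough sketch of what an embedded backend could look like from the Node side. This is only an illustration, not a proposal for the actual implementation: it assumes the node-llama-cpp bindings (which AnythingLLM does not currently ship), and the repo/file names are placeholders for whatever GGUF the user picks in the download UI.

```ts
import {createWriteStream} from "node:fs";
import {mkdir} from "node:fs/promises";
import {Readable} from "node:stream";
import {pipeline} from "node:stream/promises";
import {getLlama, LlamaChatSession} from "node-llama-cpp";

// Hypothetical model choice, purely for illustration; any GGUF repo/file on
// Hugging Face follows the same resolve-URL pattern.
const repo = "TheBloke/Mistral-7B-Instruct-v0.2-GGUF";
const file = "mistral-7b-instruct-v0.2.Q4_K_M.gguf";
const modelPath = `./models/${file}`;

// Step 1: download the GGUF weights straight from Hugging Face.
await mkdir("./models", {recursive: true});
const res = await fetch(`https://huggingface.co/${repo}/resolve/main/${file}`);
if (!res.ok || res.body === null) throw new Error(`Download failed: HTTP ${res.status}`);
await pipeline(
  Readable.fromWeb(res.body as import("node:stream/web").ReadableStream),
  createWriteStream(modelPath)
);

// Step 2: load and run the model in-process. node-llama-cpp builds llama.cpp
// locally and uses an available GPU backend (Vulkan/Metal/CUDA) or falls back to CPU.
const llama = await getLlama();
const model = await llama.loadModel({modelPath});
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

console.log(await session.prompt("Say hello from an embedded llama.cpp backend."));
```

The download interface would then mostly be a wrapper around step 1 (listing a repo's GGUF files and showing progress), with the usual provider settings (context size, GPU offload, etc.) exposed in the existing LLM preference screen.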
