Vision: Support multiple Ollama services and parallel jobs #2453
Originally created by @alexislefebvre on GitHub (Dec 9, 2025).
What Problem Does This Solve and Why Is It Valuable?
Today we can set up Vision like this:
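(The config snippet from the original issue was lost in this mirror. Below is a hedged reconstruction loosely following the linked docs page; the field names and values are illustrative approximations, not the authoritative schema.)

```yaml
# Illustrative single-service Vision config (field names approximate;
# see the linked docs for the authoritative schema):
Models:
  - Type: caption
    Model: gemma3:4b
    Service: http://ollama:11434
```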
Source: https://docs.photoprism.app/user-guide/ai/ollama-models/#gemma-3-caption
What Solution Would You Like?
I would like to be able to use several instances of Ollama, with something like this:
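(Again, the original snippet is missing; here is a hypothetical sketch of what a multi-service config could look like. The `Services` list does not exist today, and the URLs are examples.)

```yaml
# Hypothetical multi-service sketch (Services is a proposed field):
Models:
  - Type: caption
    Model: gemma3:4b
    Services:
      - http://ollama-1:11434
      - http://ollama-2:11434
```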
PhotoPrism could then parallelize Vision jobs across these two servers.
And if one server is down, it would call the other one instead.
It could also be used to combine a local instance with a SaaS instance.
It may also support a `Priority: value` setting per service, e.g. to prefer a local instance over a paid one (see the sketch below).
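(A hypothetical extension of the sketch above; the `Priority` field is proposed, not existing.)

```yaml
# Hypothetical per-service priority (lower value = preferred service):
Services:
  - Url: http://local-ollama:11434
    Priority: 1
  - Url: https://ollama.example.com:11434
    Priority: 2
```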
What Alternatives Have You Considered?
.
Additional Context
Well, GPUs are pretty expensive these days, so running several Ollama servers won't be a common situation.
And adding parallelization may be tricky.
Yet I think it would be interesting, even if only as a fallback mechanism between several servers.
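(To make the parallelization and fallback ideas concrete, here is a minimal, self-contained Go sketch. It is not PhotoPrism code; the `endpoint` type and `caption` function are hypothetical stand-ins, and worker-per-service with ordered fallback is just one possible design.)

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// endpoint describes one Ollama service. This whole file is a rough,
// hypothetical sketch of the requested behavior, not PhotoPrism internals.
type endpoint struct {
	URL      string
	Priority int
}

// caption stands in for the real Vision call (e.g. a POST against the
// service's generate API); here it always succeeds.
func caption(ep endpoint, photo string) (string, error) {
	return fmt.Sprintf("caption of %s via %s", photo, ep.URL), nil
}

// captionWithFallback tries the given services in order until one succeeds,
// covering the "if one server is down, call the other one" case.
func captionWithFallback(order []endpoint, photo string) (string, error) {
	var lastErr error
	for _, ep := range order {
		text, err := caption(ep, photo)
		if err == nil {
			return text, nil
		}
		lastErr = err
	}
	return "", errors.Join(errors.New("all Vision services failed"), lastErr)
}

func main() {
	eps := []endpoint{
		{URL: "http://ollama-1:11434", Priority: 1},
		{URL: "http://ollama-2:11434", Priority: 2},
	}
	photos := []string{"IMG_0001.jpg", "IMG_0002.jpg", "IMG_0003.jpg"}

	// A shared job queue: one worker per service drains it, so a faster
	// GPU naturally picks up more jobs than a slower one.
	jobs := make(chan string, len(photos))
	for _, p := range photos {
		jobs <- p
	}
	close(jobs)

	var wg sync.WaitGroup
	for i := range eps {
		// Each worker prefers its own service and falls back to the rest.
		order := []endpoint{eps[i]}
		for j, ep := range eps {
			if j != i {
				order = append(order, ep)
			}
		}
		wg.Add(1)
		go func(order []endpoint) {
			defer wg.Done()
			for p := range jobs {
				text, err := captionWithFallback(order, p)
				if err != nil {
					fmt.Println("error:", err)
					continue
				}
				fmt.Println(text)
			}
		}(order)
	}
	wg.Wait()
}
```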
@lastzero commented on GitHub (Feb 11, 2026):
Interesting idea! We'll consider it when we resume work on Ollama/AI. 🤖