mirror of
https://github.com/photoprism/photoprism.git
synced 2026-03-02 22:57:18 -05:00
Albums: Use a temporary ZIP file for large downloads containing tens of thousands of files #2479
Originally created by @xiaobai6769 on GitHub (Feb 4, 2026).
What Is Not Working as Documented?
There is a serious problem here: downloading an album containing tens of thousands of images can trigger an out-of-memory (OOM) condition.

Here is my system's memory configuration:

```
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            40Gi        12Gi       1.0Gi       1.4Mi        10Gi        26Gi
```
How Can We Reproduce It?
What Behavior Do You Expect?
I expect the download to complete successfully.
What Could Be the Cause?
No response
Logs, Sample Files, or Screenshots
No response
Which Software Versions Do You Use?
On What Device Is PhotoPrism Installed?
Do You Use a Reverse Proxy, Firewall, VPN, or CDN?
No response
@lastzero commented on GitHub (Feb 5, 2026):
Thanks for your report! I would guess that the download speed is slower than the rate at which data is added to the memory buffer. Since writing the data to memory works well for most users who don't download tens of thousands of files, we could add a setting or size threshold that writes the data to disk and streams it from there instead. However, note that you might run out of disk space in this case. Would you like us to implement such a solution? Do you have any preferences?
@xiaobai6769 commented on GitHub (Feb 9, 2026):
I think that would be fine; after all, adding disk space is relatively cheap.