Mirror of https://github.com/motioneye-project/motioneye.git (synced 2026-03-02 22:57:06 -05:00)
Documentation Needed For Docker Installation #2667
Originally created by @Majoraslayer on GitHub (Dec 20, 2025).
I know that the Wiki seems to be missing a lot of info, but I can't seem to find an "official" guide to a Docker installation method anywhere. Issue #3007 references an outdated Docker install page that the OP was suggesting updates for, and I was able to install using his instructions. I don't know if the official documentation has gone missing, if the official Docker image is being abandoned, or if there's some other reason I can't find information on this. However, it appears that ghcr.io/motioneye-project/motioneye:edge is still a valid image that can be pulled, and it would be an ENORMOUS help if the Installation Instructions page (and primary README) could be updated to include instructions for running and using it.
@MichaIng commented on GitHub (Dec 21, 2025):
Agreed. There is another outdated readme: https://github.com/motioneye-project/motioneye/blob/main/docker/README.md
Since the v0.43.1 stable release we are however on Docker Hub as well again. So those instructions work when replacing `ccrisan` with `motioneyeproject`: https://hub.docker.com/r/motioneyeproject/motioneye
Makes sense to review whether everything else in that readme still makes sense, and I'd probably merge the Docker install part into the main readme then.
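[Editor's note: the rename described above amounts to a one-line change. A hedged sketch, assuming the old instructions used a plain `docker pull` of the `ccrisan` image (the old tag name is illustrative, not quoted from this thread):]

```shell
# Old, abandoned image path (illustrative):
#   docker pull ccrisan/motioneye:<tag>
# Current Docker Hub path since the v0.43.1 stable release:
docker pull motioneyeproject/motioneye:latest
```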
@mdPlusPlus commented on GitHub (Jan 19, 2026):
https://github.com/mdPlusPlus/motioneye/wiki/Install-In-Docker
I haven't tested those instructions since the aforementioned pull request in 2024.
@MichaIng commented on GitHub (Jan 21, 2026):
In the meantime this is also outdated, since motionEye is on Docker Hub now (since the v0.43.1 stable release). At least if you want a stable release vs edge (dev branch). But the only change needed is replacing `ghcr.io/motioneye-project/motioneye:edge` with `motioneyeproject/motioneye:latest`: https://hub.docker.com/r/motioneyeproject/motioneye
@mdPlusPlus commented on GitHub (Jan 22, 2026):
You can go with `ghcr.io/motioneye-project/motioneye:latest` as well, as far as I can tell.
@MichaIng commented on GitHub (Jan 22, 2026):
Yes you are absolutely right. All images are uploaded to both registries. So it is more a question of convenience or preference.
@MichaIng commented on GitHub (Feb 11, 2026):
Is this a reasonable default suggestion?
Running in the background, it autostarts at boot and on Docker restarts (e.g. due to upgrades) unless explicitly stopped, hence it somewhat matches the behavior of the bare-metal systemd unit.
Of course the video devices would need to be adjusted or removed depending on whether/which local camera shall be used. Some users might prefer the config and data dirs in a different location, but using the defaults on the host as well makes following other guides and instructions easier and allows copy & pasting, since paths follow bare-metal defaults.
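[Editor's note: the command referred to above is not quoted in this thread. A minimal sketch matching the described behavior (detached, restart policy instead of `--rm`, `--name` for identification, bare-metal default paths); the port, device, and volume mappings are assumptions to adjust:]

```shell
# Runs in the background and restarts at boot / on Docker restarts
# unless explicitly stopped, similar to the bare-metal systemd unit.
docker run -d \
  --name motioneye \
  --restart unless-stopped \
  -p 8765:8765 \
  --device /dev/video0 \
  -v /etc/motioneye:/etc/motioneye \
  -v /var/lib/motioneye:/var/lib/motioneye \
  motioneyeproject/motioneye:latest
```

Drop or duplicate the `--device` line depending on which local cameras, if any, are attached.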
Compared to the old instructions, it uses `--name` instead of `--hostname`, so the container can be identified more easily in `docker container ls`, and the latter uses the former as default. I removed the `--rm` flag, as I don't think it makes sense to remove the container once it stops? Unless it is started only for testing, of course.
This could go into the main README, while we keep alternatives, like using (internal) volumes, Docker Compose, and build instructions, in `docker/README.md`. Or we remove them completely. Everyone who is familiar with Docker and Docker Compose will know how to adjust the above, or create a `docker-compose.yaml` from it, while for less experienced users the above requires the least understanding and no additional tools.
@Majoraslayer commented on GitHub (Feb 11, 2026):
The only thing I might suggest adding is GPU support. Motioneye includes the option to encode with NVENC, but I've struggled with getting it to actually work inside Docker. At one time I used a custom image someone else made to include Nvidia GPU support, but they abandoned it. I'd much rather stick with the official image, so if giving the container GPU access is actually supported it might be worth noting it in the Docker guide.
@MichaIng commented on GitHub (Feb 11, 2026):
We'd need to find out which API node NVENC uses. V4L2 M2M, OMX, and libcamera all just use `/dev/video*` device nodes. VAAPI accesses `/dev/dri/render*` device nodes. Not sure about QSV, NVMPI, and NVENC, or whether it's done via syscalls, and whether additional userland libraries are needed in the guest. Error logs from `journalctl -u docker` (AFAIK motionEye logs to STDOUT) should give a hint.
Might require one of the proprietary Nvidia driver libraries: https://packages.debian.org/trixie/nvidia-driver
@mdPlusPlus commented on GitHub (Feb 12, 2026):
This might be helpful: https://jellyfin.org/docs/general/post-install/transcoding/hardware-acceleration/
@MichaIng commented on GitHub (Feb 12, 2026):
Indeed, especially this section: https://jellyfin.org/docs/general/post-install/transcoding/hardware-acceleration/nvidia#configure-with-linux-virtualization
So Nvidia drivers are needed on the host, and then they provide a toolkit to pass through the relevant parts to the container. Worth giving a try.
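[Editor's note: a minimal sketch of the pattern from the linked Jellyfin docs, assuming the proprietary NVIDIA driver and the NVIDIA Container Toolkit are already installed and configured on the host; the port and volume mappings mirror the non-GPU example above and are assumptions:]

```shell
# Assumes nvidia-driver + nvidia-container-toolkit on the host.
# --gpus all exposes all host GPUs to the container via the toolkit.
docker run -d \
  --name motioneye \
  --restart unless-stopped \
  --gpus all \
  -p 8765:8765 \
  -v /etc/motioneye:/etc/motioneye \
  -v /var/lib/motioneye:/var/lib/motioneye \
  motioneyeproject/motioneye:latest
```

Whether NVENC then works inside the container also depends on the userland libraries present in the image, which is exactly the open question in this thread.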