mirror of
https://github.com/Chocobozzz/PeerTube.git
synced 2026-03-02 22:57:11 -05:00
Make the runner work alongside the server transcoding #5038
Originally created by @tio-trom on GitHub (Jun 24, 2023).
Describe the problem to be solved
Enabling remote runners (external servers that transcode the videos) disables the host server's own ability to transcode. This means the remote runners need to be available all the time; once enabled, relying on them becomes a must.
Describe the solution you would like
What if the runners could run in parallel with the host server? When the runners are available, use them; when they are not, fall back to the host server. I would feel much safer if that were the case; otherwise I have to make sure the runners work reliably all the time.
@vid-bin commented on GitHub (Jun 25, 2023):
This was probably an oversight when designing the runners. I'd expect both the host server and the runners to run in tandem.
@ROBERT-MCDOWELL commented on GitHub (Jun 25, 2023):
Indeed, the host should be able to act as one of the runners too. I also suggest an option to select the scheduling strategy between runners, e.g. round robin, least busy, most free CPU, etc.
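The scheduling strategies suggested above could be sketched as pluggable selection functions. This is a hypothetical illustration in TypeScript; the names `RunnerInfo`, `roundRobin`, and `leastBusy` are illustrative and not part of PeerTube's actual API.

```typescript
// Hypothetical sketch of pluggable runner-selection strategies.
// RunnerInfo and the strategy functions are illustrative, not PeerTube's real API.

interface RunnerInfo {
  name: string
  pendingJobs: number
}

// A strategy returns the index of the runner to use next.
type Strategy = (runners: RunnerInfo[], lastIndex: number) => number

// Round robin: cycle through runners in order.
const roundRobin: Strategy = (runners, lastIndex) =>
  (lastIndex + 1) % runners.length

// Least busy: pick the runner with the fewest pending jobs.
const leastBusy: Strategy = (runners) =>
  runners.reduce(
    (best, r, i) => (r.pendingJobs < runners[best].pendingJobs ? i : best),
    0
  )

const runners: RunnerInfo[] = [
  { name: 'runner-a', pendingJobs: 3 },
  { name: 'runner-b', pendingJobs: 1 }
]

console.log(runners[roundRobin(runners, 0)].name) // runner-b
console.log(runners[leastBusy(runners, 0)].name)  // runner-b
```

A "most free CPU" strategy would work the same way, just comparing a reported CPU-load metric instead of the pending-job count.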
@Chocobozzz commented on GitHub (Jun 26, 2023):
You can create a runner on the PeerTube instance itself if you want; that's the reason we chose to completely disable local jobs when runners are enabled. It keeps things easier to understand and debug, with explicit execution, etc.
@tio-trom commented on GitHub (Jun 26, 2023):
But creating a runner on the same server seems redundant to me. It would be great if these runners could act as a backup when needed, while still being able to choose to use only the runners. That's what I imagined. Since each job is assigned to a "runner", would it be too complicated to add the "host server" into the mix?
I imagine something like:
Enable Runners:
EDIT: I am also thinking that I could even use my local machine to do the transcoding, but my machine may not be online 100% of the time. It would be great to have a safety net where remote computers can transcode, but if they are unavailable, the host server still picks up the job.
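The fallback behavior requested here can be sketched as a simple dispatch function: try the remote runners first, and if none accepts the job, transcode locally. This is a hypothetical illustration; `dispatchJob`, `tryDispatchToRunner`, and `transcodeLocally` are made-up names, not PeerTube functions.

```typescript
// Hypothetical safety-net dispatch: remote runners first, local transcoding as fallback.
// All function names here are illustrative stand-ins, not PeerTube's actual API.

type JobResult = 'remote' | 'local'

async function dispatchJob(
  tryDispatchToRunner: () => Promise<boolean>, // resolves false if no runner accepts
  transcodeLocally: () => Promise<void>
): Promise<JobResult> {
  const accepted = await tryDispatchToRunner()
  if (accepted) return 'remote'

  // Safety net: no runner available, so the host server handles the job itself.
  await transcodeLocally()
  return 'local'
}

// Example: no runner is reachable, so the job falls back to local transcoding.
dispatchJob(async () => false, async () => {}).then(r => console.log(r)) // local
```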
@ROBERT-MCDOWELL commented on GitHub (Jun 26, 2023):
@Chocobozzz
from your perspective that makes sense indeed.
@normen commented on GitHub (Jul 17, 2023):
I'm all for prioritizing runners over local jobs, but I am just running a runner on the server as well, which works fine and causes about the same load as actual "local mode". Am I missing something? 🤔
@tio-trom commented on GitHub (Jul 17, 2023):
Running a runner on the local server seems like unnecessary configuration to me, and quite redundant. Wouldn't it have to grab the uploaded video, copy it into the .cache folder, transcode it, and then move the transcoded video to the proper PeerTube file location, like all runners do?
It would also be nice to guarantee that, if no runners are running, the local option is always there to pick up any job.
@normen commented on GitHub (Jul 17, 2023):
Except for one copy step, that's exactly what local PeerTube transcoding does as well. Plus you have the option to dockerize the runner with separate CPU and memory restrictions. As I said, it works great for me. And the local runner is always running and can jump in; that's why I want runner priorities.
@tio-trom commented on GitHub (Jul 17, 2023):
If you run a runner on the same server, the runner is still separate from the PeerTube service, so this does not really solve anything. You still have to make sure this runner is running as a separate process.
@normen commented on GitHub (Jul 17, 2023):
Well, most people run stuff in Docker these days. I can set memory and CPU limits for the peertube container, and separate limits for the peertube-runner container that does the encoding. My problems are solved; sorry if that doesn't solve any of yours, and I don't want to force anything on you. I'm just saying that all you effectively avoid is one copy step; everything else is virtually the same, plus the benefits I mentioned.
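The setup described above could look roughly like the following. This is a sketch only: the image name is hypothetical (build or pick your own runner image), and the exact volumes and flags depend on your deployment.

```shell
# Illustrative only: image name and mounts are assumptions, adjust for your setup.
# Run the PeerTube runner in its own container with CPU and memory caps, so a
# transcoding crash or out-of-memory condition cannot take down the main
# PeerTube server container.
docker run -d --name peertube-runner \
  --cpus="2" \
  --memory="4g" \
  -v ./runner-config:/config \
  my-peertube-runner-image  # hypothetical image name
```

The `--cpus` and `--memory` flags are standard Docker resource constraints; they cap how much the encoding workload can consume, independently of the main server's limits.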
@tio-trom commented on GitHub (Jul 17, 2023):
This does not address my concerns, sorry. If PeerTube is running, I would like a guarantee that the "local runner" is also running, just like PeerTube normally works. The remote runners should be there as an "extra". Otherwise you have two things to worry about: PeerTube and the runners. In your setup it can happen that PeerTube is running but no runners are, which makes the instance unusable for uploads. We have a lot of users and I'd like to provide a great service via PeerTube; for that we need to make sure it is reliable.
@normen commented on GitHub (Jul 17, 2023):
Even more reason to run the encoding in a separate Docker container: if it crashes or runs out of memory, it doesn't take down the main server process.
@tio-trom commented on GitHub (Jul 17, 2023):
I cannot use a PeerTube instance without encoders; that was my initial point. The encoders should always be as reliable as the main PeerTube service itself.