Make the runner work alongside the server transcoding #5038

Open
opened 2026-02-22 10:58:55 -05:00 by deekerman · 13 comments
Owner

Originally created by @tio-trom on GitHub (Jun 24, 2023).

Describe the problem to be solved

If I rely on the runners (external servers that can transcode the videos), the host server's ability to transcode is disabled. That means the remote runners need to be available all the time, which makes relying on them a must.

Describe the solution you would like

What if the runners could run in parallel with the host server? When the runners are available, use them; when they are not, use the host server. I would feel much safer if that were the case; otherwise I'd have to make sure the runners work reliably all the time.


@vid-bin commented on GitHub (Jun 25, 2023):

This was probably an oversight when designing the runners. I'd expect both the host server and the runners to run in tandem.


@ROBERT-MCDOWELL commented on GitHub (Jun 25, 2023):

Indeed, the host should be part of the runners too. I'd also suggest an option to select the scheduling strategy between runners, like round robin, least busy, most free CPU, etc.
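The scheduling strategies suggested here could be sketched roughly as follows. This is a hypothetical illustration in TypeScript (PeerTube's codebase language); `Runner`, `pickRunner`, and the strategy names are invented for the example and do not exist in PeerTube:

```typescript
// Hypothetical sketch of selectable runner-dispatch strategies.
// None of these names exist in PeerTube; this only illustrates the idea.
interface Runner {
  name: string
  pendingJobs: number      // jobs currently queued on this runner
  cpuFreePercent: number   // would come from some kind of runner heartbeat
}

type Strategy = 'round-robin' | 'least-busy' | 'most-cpu-free'

let rrIndex = 0 // round-robin cursor, persists across calls

function pickRunner (runners: Runner[], strategy: Strategy): Runner | undefined {
  if (runners.length === 0) return undefined

  switch (strategy) {
    case 'round-robin':
      return runners[rrIndex++ % runners.length]
    case 'least-busy':
      return runners.reduce((a, b) => (b.pendingJobs < a.pendingJobs ? b : a))
    case 'most-cpu-free':
      return runners.reduce((a, b) => (b.cpuFreePercent > a.cpuFreePercent ? b : a))
  }
}
```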


@Chocobozzz commented on GitHub (Jun 26, 2023):

You can create a runner on the PeerTube instance if you want; that's the reason why we chose to completely disable local jobs when runners are enabled. This way it's easier to understand, debug, and have explicit execution, etc.


@tio-trom commented on GitHub (Jun 26, 2023):

> You can create a runner on the PeerTube instance if you want; that's the reason why we chose to completely disable local jobs when runners are enabled. This way it's easier to understand, debug, and have explicit execution, etc.

But creating a runner on the same server seems redundant to me. It would be great if these runners could act as a backup when we need it, with the option to use only the runners. That's what I imagined. Since each job is assigned to a "runner", would it be too complicated to add the "host server" into the mix?

I imagine something like:

Enable Runners:

  • prioritize the runners (try to use the runners first, but use the server if/when they are not available)
  • prioritize the server (only use the runners when the server is busy handling other transcoding jobs)
  • use only the runners (disable the server for transcoding)

EDIT: I'm thinking I could even use my local machine to do the transcoding, but my machine may not be on 100% of the time. It would be great to have this safety net where remote computers can transcode, but if they are not available, the host server still picks up the work.
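The three modes proposed above could be sketched like this. A minimal TypeScript illustration under stated assumptions: `Mode`, `dispatch`, and all the callback names are invented placeholders, not PeerTube's actual transcoding API:

```typescript
// Hypothetical fallback dispatcher: which side handles a transcoding job,
// depending on the configured priority mode. All names are invented.
type Mode = 'prefer-runners' | 'prefer-server' | 'runners-only'

interface Job { videoId: string }

function dispatch (
  job: Job,
  mode: Mode,
  hasAvailableRunner: () => boolean,    // placeholder: is any remote runner up?
  serverIsBusy: () => boolean,          // placeholder: is local transcoding saturated?
  dispatchToRunner: (j: Job) => void,   // placeholder
  transcodeLocally: (j: Job) => void    // placeholder
): 'runner' | 'local' | 'queued' {
  switch (mode) {
    case 'runners-only':
      dispatchToRunner(job)
      return 'runner'
    case 'prefer-server':
      if (!serverIsBusy()) { transcodeLocally(job); return 'local' }
      if (hasAvailableRunner()) { dispatchToRunner(job); return 'runner' }
      return 'queued' // wait until either side frees up
    case 'prefer-runners':
      if (hasAvailableRunner()) { dispatchToRunner(job); return 'runner' }
      transcodeLocally(job) // the safety net: local server picks up the job
      return 'local'
  }
}
```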


@ROBERT-MCDOWELL commented on GitHub (Jun 26, 2023):

@Chocobozzz
from your perspective that makes sense indeed.


@normen commented on GitHub (Jul 17, 2023):

I'm all for prioritizing runners, but about local jobs: I'm just running a runner on the server as well, which works fine and is about the same load as running in actual "local mode". Am I missing something? 🤔


@tio-trom commented on GitHub (Jul 17, 2023):

> I'm all for prioritizing runners, but about local jobs: I'm just running a runner on the server as well, which works fine and is about the same load as running in actual "local mode". Am I missing something? 🤔

Running a runner on the local server seems like unnecessary configuration to me, and quite redundant. Would it have to grab the uploaded video, copy it into the .cache folder, transcode it, then add the transcoded video to the proper PeerTube file location, like all runners do?

Plus, it would be nice to make sure that if no runners are running, the local option is always there to pick up any job.
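For reference, the grab/transcode/return cycle described above is roughly what a remote runner's job loop looks like. A simplified TypeScript sketch with invented placeholder functions (the real peertube-runner code is structured differently):

```typescript
// Simplified sketch of a runner's job cycle: fetch the input video into a
// local scratch directory, transcode it, send the result back to the server.
// All functions in `deps` are invented placeholders, not peertube-runner APIs.
import { join } from 'path'

async function processJob (
  job: { uuid: string },
  deps: {
    downloadInput: (uuid: string, dest: string) => Promise<string>    // placeholder
    transcode: (inputPath: string) => Promise<string>                 // placeholder
    uploadResult: (uuid: string, outputPath: string) => Promise<void> // placeholder
  }
): Promise<void> {
  const cacheDir = join('.cache', job.uuid)        // runner-local scratch space
  const input = await deps.downloadInput(job.uuid, cacheDir)
  const output = await deps.transcode(input)
  await deps.uploadResult(job.uuid, output)        // server moves the file into place
}
```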


@normen commented on GitHub (Jul 17, 2023):

> Running a runner on the local server seems like unnecessary configuration to me, and quite redundant. Would it have to grab the uploaded video, copy it into the .cache folder, transcode it, then add the transcoded video to the proper PeerTube file location, like all runners do?

Except for one copy step, that's exactly what local PeerTube transcoding does as well. Plus you have the option to dockerize the runner with separate CPU and memory restrictions. As said, it works great for me. And the local runner *is* always running and can jump in; that's why I want runner priorities.


@tio-trom commented on GitHub (Jul 17, 2023):

If you run a runner on the same server, the runner is separate from the PeerTube service, so this doesn't solve anything. You still have to make sure the runner is running as a separate process.


@normen commented on GitHub (Jul 17, 2023):

> If you run a runner on the same server, the runner is separate from the PeerTube service, so this doesn't solve anything. You still have to make sure the runner is running as a separate process.

Well, most people run stuff in Docker these days. I can set memory and CPU limits for the peertube container, and I can set them for the peertube-runner container which does the encoding. My problems are solved; sorry if it doesn't solve any of yours, I don't want to force anything on you. I'm just saying that what you effectively avoid is just one copy step; everything else is virtually the same, plus the benefits I mentioned.


@tio-trom commented on GitHub (Jul 17, 2023):

This does not address my concerns, sorry. If PeerTube is running, I would like that to guarantee that the "local runner" is also running, just like PeerTube normally works. The remote runners should be there as an "extra". Otherwise you have two things to worry about: PeerTube and the runners. In your setup it can happen that PeerTube is running but no runners are, and that makes an instance unusable in terms of uploads. We have a bunch of users and I'd like to provide a great service via PeerTube, and for that we need to make sure it is reliable.


@normen commented on GitHub (Jul 17, 2023):

Even more reason to run the encoding in a separate docker container. If it crashes or runs out of memory it doesn't crash the main server process.


@tio-trom commented on GitHub (Jul 17, 2023):

> Even more reason to run the encoding in a separate docker container. If it crashes or runs out of memory it doesn't crash the main server process.

I cannot use a PeerTube instance without encoders; that was my initial point. The encoders should always be as reliable as the main PeerTube service.
