mirror of
https://github.com/Chocobozzz/PeerTube.git
synced 2026-03-02 22:57:11 -05:00
Priorities for runners #5079
Originally created by @normen on GitHub (Jul 17, 2023).
Describe the problem to be solved
I would like to use one runner as the "primary" runner while other registered runners should only be used when the primary runner is not available or its queue is full.
Describe the solution you would like
To achieve the mentioned result it would be great to have a `priority` option per runner that would indicate to PeerTube which of the runners should be tried first.

@tio-trom commented on GitHub (Jul 24, 2023):
I was about to say the same thing. Some runners are more important than others, or more reliable. Perhaps a priority index from 1 to 5 or something like that: try level 1 first; if not available, move to level 2, and so on.
@manicphase commented on GitHub (Feb 29, 2024):
I've been playing with the runners and came up with a very basic but functional way of managing priorities. I've added a variable called `responseDelay` which can be set under `[jobs]` in `config.toml`. This number represents the number of milliseconds a runner should wait before attempting to claim a job. A higher number means a lower chance of winning the race to pick up a job.

The change can be found here: https://github.com/Chocobozzz/PeerTube/compare/develop...manicphase:PeerTube-p2p-runner:add-response-delay-to-runners
@Chocobozzz can I create a pull request for this?
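For readers skimming the thread, the `responseDelay` mechanism boils down to "sleep before racing to claim". A minimal sketch in TypeScript, where `RunnerConfig`, `onAvailableJob`, and `claimJob` are illustrative names rather than PeerTube's actual runner API:

```typescript
// Sketch of the responseDelay idea: the runner sleeps for a configured
// number of milliseconds before trying to claim a job, so runners with a
// lower delay tend to win the race. All names here are hypothetical.

interface RunnerConfig {
  jobs: {
    responseDelay: number // milliseconds to wait before claiming a job
  }
}

const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms))

async function onAvailableJob (
  config: RunnerConfig,
  claimJob: () => Promise<boolean> // true if this runner won the claim
): Promise<boolean> {
  await sleep(config.jobs.responseDelay)

  return claimJob() // a runner with a shorter delay may already have claimed it
}
```

Note that this only biases the race statistically; it does not guarantee ordering when the slower runner's delay has already elapsed.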
@Chocobozzz commented on GitHub (Mar 1, 2024):
@manicphase thanks for the interesting idea! I was thinking more about using the WS `available-jobs` event, where the PeerTube instance tries to send the event to runners by priority.

But the question is: do we assign priority on the runner side or on the PeerTube instance?
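The instance-side alternative could be sketched like this (illustrative only; `Runner`, `notifyAvailableJob`, and the "lower number = higher priority" convention are assumptions, not PeerTube code):

```typescript
// Sketch of instance-side prioritization: notify runners of an available
// job in priority order, falling through to the next runner if the
// current one does not claim it.

interface Runner {
  name: string
  priority: number // assumed convention: lower value = tried first
  notifyAvailableJob: () => Promise<boolean> // resolves true if the runner claimed the job
}

async function dispatchByPriority (runners: Runner[]): Promise<Runner | undefined> {
  const ordered = [ ...runners ].sort((a, b) => a.priority - b.priority)

  for (const runner of ordered) {
    if (await runner.notifyAvailableJob()) return runner
  }

  return undefined // no runner claimed the job
}
```

The trade-off, compared with the runner-side delay, is that the instance must know about and track every runner's priority, but in exchange the ordering is deterministic rather than a timing race.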
@manicphase commented on GitHub (Mar 1, 2024):
From my perspective it seems like allowing the runner to decide whether it picks up a job is the better option. There are other considerations beyond arbitrary priority, too.
I've been experimenting with some old Raspberry Pis, and with those you can hit limitations such as quickly running out of disk space or being too slow to do live transcoding above certain resolutions.
To me, having the instance send all the metadata to the runner and letting the runner decide whether or not it can do the job seems like it would make it easier to quickly deal with a wider range of issues than having the instance manage the runners. The theoretical problem with this is that it could also lead to jobs not being picked up, like your main workstation might by chance claim a job to transcode at 720p and your micro computer or VPS might then refuse to pick up the 1080p job.
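A runner-side acceptance check along these lines might look like the following sketch (field names such as `estimatedSizeBytes` and `maxLiveResolution` are made up for illustration; PeerTube's actual job metadata differs):

```typescript
// Hypothetical runner-side acceptance check: the instance sends job
// metadata and the runner decides whether it can take the job, e.g. a
// Raspberry Pi rejecting live transcodes above its capability.

interface JobMetadata {
  type: 'vod-transcoding' | 'live-transcoding'
  resolution: number // target height in pixels
  estimatedSizeBytes: number
}

interface RunnerCapabilities {
  freeDiskBytes: number
  maxLiveResolution: number // e.g. a small SBC might cap this at 480
}

function canAcceptJob (job: JobMetadata, caps: RunnerCapabilities): boolean {
  if (job.estimatedSizeBytes > caps.freeDiskBytes) return false
  if (job.type === 'live-transcoding' && job.resolution > caps.maxLiveResolution) return false

  return true
}
```

This makes the starvation risk described above concrete: if every runner's `canAcceptJob` returns false for a given job, the instance needs a fallback (timeout, re-queue, or local processing) so the job is not stranded.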