Multi-send on packet loss #1679
Originally created by @Stefan-comkmits on GitHub (Nov 13, 2019).
Let the client send each packet x times ("survival of the fittest"), so that users experience no packet loss.
@Krzmbrzl commented on GitHub (Nov 14, 2019):
So regardless of the situation, the client should send all packets multiple times?
This seems like a fair amount of bandwidth overhead on the client, and especially on the server side, as the latter has to receive all those packets and keep track of which ones it has already processed.
Not quite sure if this is a good idea 🤔
@Kissaki commented on GitHub (Nov 24, 2019):
The communication happens over two channels: The control channel uses TCP and the voice data channel UDP.
TCP guarantees that no data is lost in transmission.
Voice data, however, is time-critical: once its (buffer) window has passed, delivering it late is of no use.
Can you elaborate on where you would want this? If the client notices packet loss on the UDP channel, should it flood the network in the hope that, with 30% of packets being lost, sending every packet twice will reduce the loss to 0.3 * 0.3 = 0.09 (9%)?
That is a lot of assumptions about the behavior of the network. The loss percentage is not necessarily independent of the data rate.
If the packet loss is caused by network congestion, this could make the situation worse instead of better.
Do you experience regular packet loss? At least where I am, this does not seem worth the effort at all: not on the implementation side, not on the performance side, and not in terms of wasteful network usage.
@no-response[bot] commented on GitHub (Mar 14, 2020):
This issue has been automatically closed because there has been no response to our request for more information.
With only the information that is currently in the issue, we don't have enough information to take action.
Please reach out if you have or find the answers we need so that we can investigate further (or if you feel like this issue shouldn't be closed for another reason).