timeout flooding in murmur logs #17

Closed
opened 2026-02-20 19:10:03 -05:00 by deekerman · 12 comments
Owner

Originally created by @mumble-voip on GitHub (Apr 21, 2013).

Server:
Murmur 1.2.3
WinXP Pro SP3
3GHz Pentium 4 with 2GB RAM

Constant flooding like this in my logs:

<W>2013-04-21 01:45:09.292 1 => <68:(-1)> Timeout
<W>2013-04-21 01:45:09.371 1 => <86:(-1)> Timeout
<W>2013-04-21 01:45:09.449 1 => <72:(-1)> Timeout
<W>2013-04-21 01:45:09.527 1 => <78:(-1)> Timeout
<W>2013-04-21 01:45:09.589 1 => <84:(-1)> Timeout
<W>2013-04-21 01:45:24.824 1 => <68:(-1)> Timeout
<W>2013-04-21 01:45:24.871 1 => <86:(-1)> Timeout
<W>2013-04-21 01:45:24.917 1 => <72:(-1)> Timeout
<W>2013-04-21 01:45:24.980 1 => <78:(-1)> Timeout
<W>2013-04-21 01:45:25.042 1 => <84:(-1)> Timeout
<W>2013-04-21 01:45:40.308 1 => <68:(-1)> Timeout
<W>2013-04-21 01:45:40.386 1 => <86:(-1)> Timeout
<W>2013-04-21 01:45:40.464 1 => <72:(-1)> Timeout
<W>2013-04-21 01:45:40.511 1 => <78:(-1)> Timeout
<W>2013-04-21 01:45:40.574 1 => <84:(-1)> Timeout
<W>2013-04-21 01:45:55.792 1 => <68:(-1)> Timeout
<W>2013-04-21 01:45:55.855 1 => <86:(-1)> Timeout
<W>2013-04-21 01:45:55.917 1 => <72:(-1)> Timeout
<W>2013-04-21 01:45:55.980 1 => <78:(-1)> Timeout
<W>2013-04-21 01:45:56.042 1 => <84:(-1)> Timeout

I saw this reported as a bug and closed 5 years ago (bugs:#111), but saw no reasonable solution in the thread. I've been experiencing this problem for at least a year; I just never got around to reporting it.

We have no connection issues. Things seem okay except for the flooding.

Restarting the server stops the flooding, but when left unattended the logs grow unnecessarily huge. Has this issue been addressed in 1.2.4-RC? I'm hesitant to upgrade at this point as the server is essential for raiding on a weekly basis.

This ticket has been migrated from sourceforge. It is thus missing some details like original creator etc.
The original is at https://sourceforge.net/p/mumble/bugs/981/ .

deekerman 2026-02-20 19:10:03 -05:00

@Kissaki commented on GitHub (Apr 22, 2013):

Can you check the connection IDs: are those people who successfully connected to Mumble and then exited?
Or maybe people who got disconnected some other way / entirely?

Do you or other people use any bots or scripts that connect to your Mumble server?


@Kissaki commented on GitHub (Apr 22, 2013):

  • status: open --> awaiting-reply
  • Version: --> 1.2.3
  • Targeted Release: 1.2.3 --> unspecified

@mumble-voip commented on GitHub (Apr 22, 2013):

No bots or scripts are being used on my server. I'm unable to check connection-IDs currently, as the flooding has wiped away all entries concerning connections. What I can tell you is that the logs are still being flooded and no one is using the server at the moment.


@Kissaki commented on GitHub (Apr 22, 2013):

Are you checking the logs via a web interface?
You should still be able to check the logs written to disk. They don’t get pruned AFAIK.

What I’d check next then is if your Mumble server actually gets incoming connections (incomplete connect requests).
Maybe you can netstat that, or otherwise network-log?
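Not part of the original thread, but a minimal sketch of that reachability check in Python, assuming the server listens on Murmur's default TCP port 64738 (adjust if yours differs):

```python
import socket

# Default Murmur TCP port; change this if the server is configured differently.
MURMUR_PORT = 64738

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This only confirms the listener is reachable; to see the incomplete connections Kissaki is asking about, running `netstat -ano` on the server itself and looking for entries on the Murmur port would be closer to the mark.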


@mumble-voip commented on GitHub (Apr 23, 2013):

Here is a snippet showing everything leading up to the last flooding incident. I've partially masked the IPs but everything else is untouched.

The following attachments were added on the original comment:

  • https://sourceforge.net/p/mumble/bugs/_discuss/thread/064d7e4a/e2a3/attachment/murmur_log_%20snippet.txt


@Kissaki commented on GitHub (Apr 23, 2013):

So a normal connection looks like

<W>2013-04-19 21:10:14.761 1 => <59:(-1)> New connection: 0.0.136.205:51041
<W>2013-04-19 21:10:15.214 1 => <59:(-1)> Client version 1.2.3 (Win: 1.2.3)
<W>2013-04-19 21:10:15.433 1 => <59:Blackdragon(108)> Authenticated

All those timeouts seem to come from connections that never announce a client version and then either close or time out.
Logging that is not wrong or really arguable: if (some kind of) client initiates a connection, it should be logged, and so should what follows.
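As a side note, the log lines quoted in this thread follow a consistent shape; here is a small sketch (my own regex, not anything taken from Murmur's source) that pulls out the session ID, the user ID in parentheses, and the message:

```python
import re

# Matches Murmur log lines like:
# <W>2013-04-21 01:45:09.292 1 => <68:(-1)> Timeout
# <W>2013-04-19 21:10:15.433 1 => <59:Blackdragon(108)> Authenticated
LOG_RE = re.compile(
    r"<(?P<level>\w)>"
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) "
    r"(?P<server>\d+) => "
    r"<(?P<session>\d+):(?P<name>[^(]*)\((?P<userid>-?\d+)\)> "
    r"(?P<message>.*)"
)

m = LOG_RE.match("<W>2013-04-21 01:45:09.292 1 => <68:(-1)> Timeout")
```

A userid of -1 (with an empty name) marks a session that never authenticated, which matches the flooding entries above.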


@mumble-voip commented on GitHub (Apr 23, 2013):

Okay, but that is only a tiny snippet of the log file.

The timeouts you see at the bottom repeat for 57,573 lines with the same connection IDs (in this case 68, 72, 78, 84, and 86) over and over again until I finally noticed it and restarted the server. These repeating entries keep flooding even if no one has connected to the server for days.
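To put a number like that together, a quick tally over the raw log file (assuming the line format shown earlier in this thread) could look like:

```python
from collections import Counter

def timeout_counts(lines):
    """Tally 'Timeout' log entries per connection (session) ID."""
    counts = Counter()
    for line in lines:
        if line.rstrip().endswith("Timeout") and "=> <" in line:
            # The session ID sits between '=> <' and the following ':'.
            session = line.split("=> <", 1)[1].split(":", 1)[0]
            counts[session] += 1
    return counts
```

Feeding it the full log would show how the 57,573 lines split across session IDs 68, 72, 78, 84, and 86.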


@Kissaki commented on GitHub (Apr 23, 2013):

Mh, interesting that a restart prevents further such logging. Is the restart instant? (< 2 s?)

I guess one should at least check the code, if it’s an issue of connection handling or wrong logging …

Oh I see, for one connection it logs a timeout multiple times.
Then a restart fixing it makes sense.
I wonder where the empty reason -1 connection close comes from though.


@mumble-voip commented on GitHub (Apr 24, 2013):

I restart it by choosing "Quit Murmur" from the tray icon, then launching it again from a desktop shortcut. Can't say for certain how long that takes, but 2-5 seconds seems like a reasonable guess.


@Kissaki commented on GitHub (May 11, 2013):

  • status: awaiting-reply --> accepted

@Krzmbrzl commented on GitHub (Jan 15, 2020):

Is this (still) reproducible in v1.3?


@no-response[bot] commented on GitHub (Feb 15, 2020):

This issue has been automatically closed because there has been no response to our request for more information.
With only the information that is currently in the issue, we don't have enough information to take action.

Please reach out if you have or find the answers we need so that we can investigate further (or if you feel like this issue shouldn't be closed for another reason).
