Make it possible for a looper thread function to break out of the loop
by returning an appropriate status code.
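A minimal sketch of the idea, with hypothetical names (`loop_status`, `looper_arg`): the thread keeps calling the worker function and exits as soon as it returns a break status.

    #include <pthread.h>

    enum loop_status { LOOP_CONTINUE = 0, LOOP_BREAK = 1 };

    struct looper_arg {
        enum loop_status (*func)(void *);
        void *data;
    };

    /* pthread-compatible looper: runs until the worker asks to break */
    static void *looper(void *p) {
        struct looper_arg *arg = p;
        while (arg->func(arg->data) == LOOP_CONTINUE)
            ;
        return NULL;
    }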
Change-Id: I22e7789270eed4bf3340e7dae941929de58700ea
To do the work more efficiently, and not depend on the `call_timer`
runs driven by the poller, move the ports iteration (stats updates
from the kernel) into a separate thread. This makes it faster and
independent of whatever happens in `call_timer`, which it has nothing
to do with in the first place.
As an additional benefit, this takes load off the `call_timer` runner.
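A sketch of the shape of such a thread, assuming a hypothetical `update_port_stats_from_kernel()` helper; the actual iteration logic lives elsewhere:

    #include <unistd.h>
    #include <stdbool.h>

    extern void update_port_stats_from_kernel(void); /* assumed helper */

    static volatile bool stats_shutdown;

    static void *port_stats_updater(void *arg) {
        (void) arg;
        while (!stats_shutdown) {
            update_port_stats_from_kernel(); /* iterate all ports */
            sleep(1); /* own interval, decoupled from call_timer */
        }
        return NULL;
    }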
Change-Id: I511529ce504ef3d29f4e9d6d731ffd470d78d27a
Due to multiple threads polling all sockets for read events, it's
possible for one socket to receive a read event in one thread, then
immediately receive another read event in another thread, resulting in
two threads reading packets from the same socket at the same time.
While this is perfectly valid and correctly handled by mutexes etc.,
it can result in packets being processed out of order. In media
passthrough scenarios, which don't do sequencing, this can result in
packets being reordered.
Using a simple atomic counter we can ensure that only one thread is
reading from any one socket at a time.
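A minimal sketch of that guard using C11 atomics (types and names illustrative, not the actual code): only the thread that flips the counter from 0 to 1 gets to read; all others back off.

    #include <stdatomic.h>

    struct socket_ctx {
        int fd;
        atomic_int readers; /* 0 = idle, 1 = a thread is reading */
    };

    static void on_read_event(struct socket_ctx *s) {
        int expected = 0;
        if (!atomic_compare_exchange_strong(&s->readers, &expected, 1))
            return; /* another thread is already draining this socket */
        /* ... recv() and process all pending packets, in order ... */
        atomic_store(&s->readers, 0);
    }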
Relevant to #1638
Change-Id: I406491d6ae5e13e618e153ba5463fd9169636016
To do the work more efficiently, and not depend on the `call_timer`
runs driven by the poller, move the releasing of sockets into a
separate thread. This makes it faster and independent of whatever
happens in `call_timer`, which it has nothing to do with in the first
place.
There are now two queues:
- thread scope (local): ports_to_release
- global: ports_to_release_glob
`sockets_releaser()` works off `ports_to_release_glob`, while
appending in `call_timer()` happens via `ports_to_release`.
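A sketch of how the two queues can interact, using GLib types as elsewhere in the codebase; the queue names mirror this commit, the rest (locking, flush helper) is assumed:

    #include <glib.h>

    static GQueue ports_to_release_glob = G_QUEUE_INIT;  /* global */
    static GMutex ports_to_release_glob_lock;
    static __thread GQueue ports_to_release = G_QUEUE_INIT; /* per thread */

    /* hand the thread-local batch over to the global queue in one step */
    static void flush_ports_to_release(void) {
        g_mutex_lock(&ports_to_release_glob_lock);
        while (ports_to_release.length)
            g_queue_push_tail(&ports_to_release_glob,
                    g_queue_pop_head(&ports_to_release));
        g_mutex_unlock(&ports_to_release_glob_lock);
    }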
Change-Id: Iadd966ac895b2dd64f81269d4fdf5d83747fe0b7
We have to stop using these objects of `struct port_pool`
(media_socket.h), because the newer approach to port allocation
deprecates their usage.
Deprecated objects:
`port_pool.last_used`
`port_pool.ports_used`
`port_pool.free_list`
`port_pool.free_list_used`
Change-Id: I70e166753da7a43cb3b6b188c83d978b7dbce046
Introduce a reworked port allocation in RTPEngine.
The goal of this rework is to:
- simplify the logic of handling free/engaged ports
- eliminate a bottleneck caused by overcomplicated logic
- potentially resolve the "ran out of ports" issue under heavy load,
  when there should still be ports left in the pool
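As an illustration of the simpler free/engaged bookkeeping this aims for (not the actual implementation; port range and names are made up), one bit per port taken under a short lock:

    #include <stdint.h>
    #include <pthread.h>

    #define PORT_MIN 30000u
    #define PORT_MAX 40000u

    static uint8_t engaged[(PORT_MAX - PORT_MIN) / 8 + 1];
    static pthread_mutex_t ports_lock = PTHREAD_MUTEX_INITIALIZER;

    /* returns 1 if the port was free and is now engaged, 0 otherwise */
    static int take_port(unsigned int port) {
        unsigned int i = port - PORT_MIN;
        int ok = 0;
        pthread_mutex_lock(&ports_lock);
        if (!(engaged[i / 8] & (1u << (i % 8)))) {
            engaged[i / 8] |= (uint8_t) (1u << (i % 8));
            ok = 1;
        }
        pthread_mutex_unlock(&ports_lock);
        return ok;
    }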
Change-Id: Ifd2b1565611dd3b86c474a1ea5507fc6152fc212
The SRTP decryption context is associated with the local socket. Use the
socket that a packet was actually received on for the decryption context
instead of using the one that it was expected to be received on.
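In sketch form (types and fields illustrative): the context lookup must key off the ingress socket, not the expected one.

    struct crypto_context;              /* opaque here */
    struct local_socket {
        struct crypto_context *srtp_in; /* decryption context lives here */
    };

    /* use the socket the packet actually arrived on */
    static struct crypto_context *decrypt_ctx(struct local_socket *received_on) {
        return received_on->srtp_in;
    }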
Change-Id: Iddf400a440fc51b4afb370ec827f75e9626b2cfd
(cherry picked from commit 8c3452e50b7aa4f5b7122dbd7221e34143467885)
Based on information from Richard Fuchs, document the main objects
in the code, to make it more understandable for other readers.
Mainly documented:
- call
- call_monologue
- call_subscription
- call_media
- packet_stream
- stream_fd
- sink_handler
- rtpe_callhash / rtpe_callhash_lock
Change-Id: I0cf122bea2d9c3f198b48da134a70301564ff1f9
Instead of just leaving the transport protocol unset when we know we're
not supposed to be aware of the protocol, add a special entry to
suppress the pointless warning message.
Change-Id: I228c2f1652320627f974d9d7bcb0b1345adce2be
Create a dedicated struct to hold certain attributes shared by both sink
handlers and media subscriptions, as a preparation to simplify handling
these attributes.
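A sketch of the shape of this, with placeholder field names (the real set of shared attributes is defined by the follow-up work):

    struct sink_attrs {
        unsigned int transcoding:1;
        unsigned int rtcp_only:1;
        /* ... further shared flags ... */
    };

    struct sink_handler {
        struct sink_attrs attrs;  /* shared attributes, embedded */
        /* ... sink-specific members ... */
    };

    struct media_subscription {
        struct sink_attrs attrs;  /* same shared attributes */
        /* ... subscription-specific members ... */
    };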
Change-Id: I866159c33ed6d6a2873d2cf68c4906ea705d253e
When ports are closed early (while the call is still running), we must
first update a slave rtpengine with this new information (that these
ports are now closed) before actually releasing the ports ourselves. Not
doing so leads to a race condition where the master instance re-uses a
port that was just closed before the slave instance knows about the port
being closed.
We implement this using a thread-local list to keep track of ports that
were released while processing a control message, and process this list
to actually close the ports only after Redis has been updated.
Additional calls to the function to close the ports are placed in
strategic locations to make sure this is triggered in every code path.
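A minimal sketch of the mechanism, with assumed helper names (`defer_port_close`, `close_port`):

    #include <glib.h>

    extern void close_port(void *port); /* assumed: actually frees it */

    static __thread GSList *pending_port_closes;

    /* called while processing a control message */
    static void defer_port_close(void *port) {
        pending_port_closes = g_slist_prepend(pending_port_closes, port);
    }

    /* called only after the Redis update has gone out */
    static void release_deferred_ports(void) {
        for (GSList *l = pending_port_closes; l; l = l->next)
            close_port(l->data);
        g_slist_free(pending_port_closes);
        pending_port_closes = NULL;
    }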
closes #1495
Change-Id: I803f4594f30ca315da0b84c6e76893f54ca3a7c9
There's no need to open ports on non-primary interfaces if ICE is not in
use as these ports will not be used or seen by anyone.
This mostly obsoletes the `save-interface-ports` config option, with the
exception of ICE advertised by the offerer. We currently have no option
to reject ICE from the offerer during the offer phase, so ports would
always be opened on that side.
Relevant to #1164 and 001abe5
Change-Id: I43df70bc0ec49b81f63aec97c776e48617b2acfd
Flag a socket with an error strike when packets are received too fast,
and refuse processing once too many strikes have occurred. This should
prevent forwarding loops from taking down the system.
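The strike logic might look like this sketch (thresholds and names are illustrative, not the configured values):

    #include <stdint.h>
    #include <stdbool.h>

    #define MIN_PACKET_INTERVAL_US 100 /* "too fast" threshold, assumed */
    #define MAX_STRIKES            10

    struct strike_state {
        int64_t last_packet_us;
        unsigned int strikes;
    };

    /* returns false once the socket has accumulated too many strikes */
    static bool accept_packet(struct strike_state *s, int64_t now_us) {
        if (s->last_packet_us
                && now_us - s->last_packet_us < MIN_PACKET_INTERVAL_US)
            s->strikes++;
        s->last_packet_us = now_us;
        return s->strikes < MAX_STRIKES;
    }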
Change-Id: Idc574f2f1dbbcb156efc37a80e903dc4e60ef1b1
Whether a bit-field is signed or unsigned is implementation specific, so
we should be explicit about this.
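For example, a plain `int` bit-field may be signed or unsigned at the compiler's discretion, so a one-bit field can hold either {0, 1} or {-1, 0}:

    struct flags {
        int          ambiguous:1; /* signedness implementation-defined */
        unsigned int confirmed:1; /* explicit: holds 0 or 1 */
        signed int   delta:4;     /* explicit: holds -8 .. 7 */
    };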
Change-Id: I744df3d24bc08e95fa816ba4135f19cd3a5dcb17
Warned-by: lgtm
When set to `false` (the default), nothing changes at all.
When set to `true`, bind only the local address of the desired family.
Also add info to the rtpengine.pod file.
Also add a log message for an sfd with no call.
fix cleanup being skipped on Redis slaves
fixes an SDES-related Redis memory leak
adds a hash for the ports free list to avoid duplicate entries (see
the sketch below)
fixes #898
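A sketch of the duplicate guard (GLib types; variable names follow the free-list naming, the rest is assumed):

    #include <glib.h>

    static GQueue free_list = G_QUEUE_INIT;
    static GHashTable *free_list_used; /* set of ports already queued */

    static void free_list_push(unsigned int port) {
        if (!free_list_used)
            free_list_used = g_hash_table_new(g_direct_hash, g_direct_equal);
        /* g_hash_table_add() returns FALSE if the key was present */
        if (!g_hash_table_add(free_list_used, GUINT_TO_POINTER(port)))
            return; /* already on the free list, skip the duplicate */
        g_queue_push_tail(&free_list, GUINT_TO_POINTER(port));
    }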
Change-Id: I34aad67290ff5ef8824142682aac03cb600d0ecb