UDP packets sent in response to a UDP request should have the same
source address as the request's destination address.
This happens automatically for sockets bound to a specific address; for
ANY-bound sockets, we can use the PKTINFO mechanism to achieve the same.
Extend control_ng_process() to accept an extra socket address
corresponding to the local address to use. Extend the signature of the
callback function (to do the actual sending) accordingly.
Extend socket_sendiov() to be able to set the PKTINFO cmsg when sending
a packet.
Add socket_sendto_from() as a convenience wrapper.
Extend control_udp_incoming() to pass the address from
udp_buf->local_addr back to socket_sendiov().
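For reference, a minimal sketch of the underlying mechanism (IPv4 only,
hypothetical helper name, not the actual socket_sendiov() code): an
IP_PKTINFO cmsg attached to the sendmsg() call tells the kernel which
source address to use for this one packet, even on an ANY-bound socket.

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>

    /* send iov to dst, forcing the given local source address */
    static ssize_t sendto_from_sketch(int fd, const struct iovec *iov,
                    int iovcnt, const struct sockaddr_in *dst,
                    const struct in_addr *local)
    {
            union {
                    struct cmsghdr align;
                    char buf[CMSG_SPACE(sizeof(struct in_pktinfo))];
            } ctrl;
            struct msghdr mh = {0};
            struct cmsghdr *cm;
            struct in_pktinfo *pi;

            mh.msg_name = (void *) dst;
            mh.msg_namelen = sizeof(*dst);
            mh.msg_iov = (struct iovec *) iov;
            mh.msg_iovlen = iovcnt;
            mh.msg_control = ctrl.buf;
            mh.msg_controllen = sizeof(ctrl.buf);

            cm = CMSG_FIRSTHDR(&mh);
            cm->cmsg_level = IPPROTO_IP;
            cm->cmsg_type = IP_PKTINFO;
            cm->cmsg_len = CMSG_LEN(sizeof(*pi));
            pi = (struct in_pktinfo *) CMSG_DATA(cm);
            memset(pi, 0, sizeof(*pi));
            pi->ipi_spec_dst = *local;  /* source address for this packet */

            return sendmsg(fd, &mh, 0);
    }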
Change-Id: Idd019fdcfd796098e7807427e6686d4b05de35d1
Avoid trying to acquire a lock recursively by making sure the response is
always generated in a different thread.
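A rough illustration of the pattern, with made-up names and not the
actual code: the caller already holds a non-recursive lock, so instead
of generating the response inline, the work is handed to a freshly
started thread that begins with no locks held.

    #include <pthread.h>
    #include <stdlib.h>

    struct response_job {
            void (*generate)(void *ctx); /* builds and sends the response */
            void *ctx;
    };

    static void *response_worker(void *p) {
            struct response_job *job = p;
            job->generate(job->ctx);  /* takes the lock fresh, no recursion */
            free(job);
            return NULL;
    }

    /* called while the lock is held; returns immediately */
    static void defer_response(void (*generate)(void *ctx), void *ctx) {
            struct response_job *job = malloc(sizeof(*job));
            pthread_t t;
            if (!job)
                    return;
            job->generate = generate;
            job->ctx = ctx;
            pthread_create(&t, NULL, response_worker, job);
            pthread_detach(&t);
    }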
Fixes #1656
Change-Id: I6c4c5bb52cb95a204823848bb427ab24f42dcccd
While LWS explicitly allows usage of lws_callback_on_writable() from
other threads, for some reason there is no internal locking in place,
and so a concurrently running lws_service() can interfere with internal
structures, in particular if lws_service() is closing connections at the
same time as lws_callback_on_writable() is invoked.
The suggested approach of using lws_cancel_service() in combination with
the LWS_CALLBACK_EVENT_WAIT_CANCELLED callback and a user-kept queue is
not feasible, as we need to support LWS 2.x, which doesn't have
LWS_CALLBACK_EVENT_WAIT_CANCELLED.
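Purely for illustration, and not necessarily what this change actually
implements: one way to serialize the two calls that remains compatible
with LWS 2.x is a single mutex shared between the service loop and the
writers, with a short service timeout so writers are not blocked for long.

    #include <pthread.h>
    #include <libwebsockets.h>

    static pthread_mutex_t lws_lock = PTHREAD_MUTEX_INITIALIZER;

    /* service thread */
    static void service_loop(struct lws_context *ctx, volatile int *stop) {
            while (!*stop) {
                    pthread_mutex_lock(&lws_lock);
                    lws_service(ctx, 100);  /* short timeout, lock released often */
                    pthread_mutex_unlock(&lws_lock);
            }
    }

    /* any other thread */
    static void request_writable(struct lws *wsi) {
            pthread_mutex_lock(&lws_lock);
            lws_callback_on_writable(wsi);  /* cannot race connection teardown now */
            pthread_mutex_unlock(&lws_lock);
    }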
Closes #1624
Change-Id: Ia3ddeda66fd553c87f99404e0816d97ecbd4cdfe
Avoid calling lws_write() from threads other than the service thread, as
this might not be thread-safe. Instead store the values used for the
HTTP response headers in the websocket_output, then trigger a "writable"
callback, and finally do all the lws_write() calls from the service
thread.
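The general shape of that pattern, sketched with a hypothetical
pending_http struct standing in for websocket_output (not the actual
rtpengine code):

    #include <string.h>
    #include <libwebsockets.h>

    struct pending_http {
            int status;              /* e.g. HTTP_STATUS_OK */
            const char *content_type;
            size_t content_length;
    };

    /* worker thread: only record what to send, no lws_write() here */
    static void queue_http_response(struct lws *wsi, struct pending_http *out,
                    int status, const char *ct, size_t len)
    {
            out->status = status;
            out->content_type = ct;
            out->content_length = len;
            lws_callback_on_writable(wsi);
    }

    /* service thread, from the "writable" callback: all lws_write() here */
    static int on_writable(struct lws *wsi, struct pending_http *out)
    {
            unsigned char buf[LWS_PRE + 1024];
            unsigned char *p = buf + LWS_PRE, *end = buf + sizeof(buf);

            if (lws_add_http_header_status(wsi, out->status, &p, end))
                    return -1;
            if (lws_add_http_header_by_token(wsi, WSI_TOKEN_HTTP_CONTENT_TYPE,
                            (const unsigned char *) out->content_type,
                            (int) strlen(out->content_type), &p, end))
                    return -1;
            if (lws_add_http_header_content_length(wsi, out->content_length, &p, end))
                    return -1;
            if (lws_finalize_http_header(wsi, &p, end))
                    return -1;
            return lws_write(wsi, buf + LWS_PRE, p - (buf + LWS_PRE),
                            LWS_WRITE_HTTP_HEADERS) < 0 ? -1 : 0;
    }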
Reported in #1624
Change-Id: Ifcb050193044e5543f750a12fb44f5e16d4c0a08
Newer libwebsockets versions seem to use a longer internal timeout, so
an explicit "interrupt" is needed during shutdown to prevent a long wait
time.
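A sketch of the idea, assuming lws_cancel_service() is the interrupt
meant here (hypothetical helper and flag names):

    #include <libwebsockets.h>

    /* wake a service thread that may be sitting in its internal poll wait,
     * so shutdown can proceed promptly */
    static void stop_service_thread(struct lws_context *ctx, volatile int *stop) {
            *stop = 1;               /* service loop checks this between iterations */
            lws_cancel_service(ctx); /* interrupt the blocking wait in lws_service() */
    }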
Change-Id: I8f28ef658169178e35b40dd44520fbd7c812b590
When ports are closed early (while the call is still running), we must
first update a slave rtpengine with this new information (that these
ports are now closed) before actually releasing the ports ourselves. Not
doing so leads to a race condition where the master instance re-uses a
port that was just closed before the slave instance knows about the port
being closed.
We implement this using a thread-local list to keep track of ports that
were released while processing a control message, and process this list
to actually close the ports only after Redis has been updated.
Additional calls to the port-closing function are placed at strategic
locations to make sure this cleanup is triggered in every code path.
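A simplified sketch of that bookkeeping, with made-up names and a plain
socket fd standing in for the real port object (not the actual rtpengine
code):

    #include <stdlib.h>
    #include <unistd.h>

    struct deferred_port {
            int fd;                      /* stands in for the real port object */
            struct deferred_port *next;
    };

    /* one pending list per control-processing thread */
    static __thread struct deferred_port *deferred_ports;

    /* called wherever a port is closed early, instead of closing it directly */
    static void defer_port_release(int fd) {
            struct deferred_port *dp = malloc(sizeof(*dp));
            if (!dp) {
                    close(fd);           /* fallback: close immediately */
                    return;
            }
            dp->fd = fd;
            dp->next = deferred_ports;
            deferred_ports = dp;
    }

    /* called only after the Redis update has been pushed to the slave */
    static void release_deferred_ports(void) {
            while (deferred_ports) {
                    struct deferred_port *dp = deferred_ports;
                    deferred_ports = dp->next;
                    close(dp->fd);       /* the port only really goes away here */
                    free(dp);
            }
    }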
Closes #1495
Change-Id: I803f4594f30ca315da0b84c6e76893f54ca3a7c9
If the config only lists a port for the HTTP/WS bindings, then we must
not try to create both a v4 and a v6 binding on that port, as
libwebsockets handles the 4/6 mapping internally. In this case we make
sure to create only the v6 binding.
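Roughly what such a port-only binding looks like at the libwebsockets
level (simplified sketch, not the actual config handling; assumes LWS
was built with IPv6 support):

    #include <string.h>
    #include <libwebsockets.h>

    static int dummy_cb(struct lws *wsi, enum lws_callback_reasons reason,
                    void *user, void *in, size_t len) {
            (void) wsi; (void) reason; (void) user; (void) in; (void) len;
            return 0;
    }

    static const struct lws_protocols protocols[] = {
            { "http-only", dummy_cb, 0, 0 },
            { NULL, NULL, 0, 0 }
    };

    /* one listener, no explicit v4 or v6 address; libwebsockets serves both
     * address families from the single v6 socket */
    static struct lws_context *bind_port_only(int port) {
            struct lws_context_creation_info info;

            memset(&info, 0, sizeof(info));
            info.port = port;
            info.iface = NULL;
            info.protocols = protocols;
            /* LWS_SERVER_OPTION_DISABLE_IPV6 is deliberately not set */

            return lws_create_context(&info);
    }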
Further requirement for #1432
Change-Id: I9bf7ec5c041d0b5d4a22d507d993b85e2d4d3155
Add an explicit test to see if libwebsockets has been compiled with
support for IPv6. If it hasn't then we don't try to create v6 bindings.
Closes #1432
Change-Id: I6902f5b4203aa09cb28a8edb46f97b339677ed75
Make sure janus_session lock is obtained first and websocket_conn lock
second, in order to prevent a possible deadlock.
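A toy illustration of the ordering rule (the real structs and lock types
in rtpengine differ): every path that needs both locks takes the session
lock first and the connection lock second, so two threads can never hold
them in opposite order and deadlock.

    #include <pthread.h>

    struct session { pthread_mutex_t lock; };
    struct conn    { pthread_mutex_t lock; };

    static void with_both_locks(struct session *s, struct conn *c,
                    void (*work)(struct session *, struct conn *))
    {
            pthread_mutex_lock(&s->lock);   /* 1st: session */
            pthread_mutex_lock(&c->lock);   /* 2nd: connection */
            work(s, c);
            pthread_mutex_unlock(&c->lock);
            pthread_mutex_unlock(&s->lock);
    }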
Change-Id: I3db1d5cea0c0295cc10c71edd20c86ce054f520b
Warned-by: Coverity
Make the websocket_conn_init() function return an error code, and delay
the initialization until failure is no longer possible. Otherwise return
-1, such as when the HTTP or SSL connection cannot be initialized.
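The shape of that change only, with hypothetical fields: the parts that
can fail come first and return -1 if they do; the rest of the object is
filled in only once nothing can fail any more, so callers never see a
half-initialized connection.

    #include <stddef.h>

    struct conn_sketch {
            void *http;     /* stands in for the real HTTP state */
            void *ssl;      /* stands in for the real SSL state */
            int ready;
    };

    static int conn_init_sketch(struct conn_sketch *c, void *http, void *ssl) {
            if (!http || !ssl)
                    return -1;      /* e.g. HTTP or SSL setup failed */

            c->http = http;         /* infallible part happens last */
            c->ssl = ssl;
            c->ready = 1;
            return 0;
    }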
Change-Id: I0facd53560fdb06678d7df9775be277e5c4b2cae
Warned-by: Coverity
Sequence of events:
1) HTTP request is being handled in worker thread by calling the handler
func() from within websocket_process().
2) Handler func generates output, queues it up, and requests a
`writeable` callback from within websocket_write_raw().
3) Main LWS thread triggers writeable callback and calls
websocket_dequeue().
4) Output is given to LWS still within the main LWS thread, and finally
lws_http_transaction_completed() is called to release the connection
and ready it for the next HTTP connection.
5) LWS internally cleans up the connection and frees the user context
(our `wc` struct).
6) The worker thread wakes up and continues to use the now invalid `wc`
in order to clean up after it has done its job. Boom.
The solution is to handle the `drop protocol` callback, which is
triggered by LWS in the main LWS thread in step 4 from within
lws_http_transaction_completed(). We call our own connection cleanup
function websocket_conn_cleanup(), which blocks until all jobs are
removed from `wc` (step 6) and only then returns, allowing LWS to
safely free the struct.
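The blocking part of websocket_conn_cleanup() could look roughly like
this (hypothetical fields, heavily simplified):

    #include <pthread.h>

    struct wc_jobs {
            pthread_mutex_t lock;
            pthread_cond_t  done;
            unsigned int    jobs;   /* jobs still referencing this connection */
    };

    /* worker thread, when it has finished with the connection (step 6) */
    static void job_finished(struct wc_jobs *wc) {
            pthread_mutex_lock(&wc->lock);
            if (--wc->jobs == 0)
                    pthread_cond_broadcast(&wc->done);
            pthread_mutex_unlock(&wc->lock);
    }

    /* drop-protocol callback, running in the LWS service thread */
    static void conn_cleanup(struct wc_jobs *wc) {
            pthread_mutex_lock(&wc->lock);
            while (wc->jobs)
                    pthread_cond_wait(&wc->done, &wc->lock);
            pthread_mutex_unlock(&wc->lock);
            /* only now is it safe for LWS to free the user context */
    }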
Change-Id: I596a98e9b552a96aef259f4523f16fa63c287ef4