The redis onekey concept is introduced to reduce traffic to redis
and redis notification traffic.
It changes the redis storage layout for one call from multiple keys
(with pre- and postfixes around the callid) to a single key of the
form "json-<callid>". The value is a JSON-formatted string containing
the identifiers that were previously spread over the multiple keys.
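As a rough illustration, assuming made-up field names and values (the real
key composition and JSON serialization live in the daemon's redis code),
the resulting redis command for one call might look like this:

    /* Illustration only: field names and values are hypothetical. */
    #include <stdio.h>

    int main(void) {
            const char *callid = "abcd1234@example.invalid";
            char key[256];

            /* one key per call instead of several pre-/postfixed keys */
            snprintf(key, sizeof(key), "json-%s", callid);

            /* the value is a single JSON string carrying what used to be
             * stored under separate keys */
            printf("SET %s '{\"created\":1450000000,\"tos\":184,\"num_tags\":2}'\n",
                            key);
            return 0;
    }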
Instead of outright rejecting local endpoints advertised by remote
clients (to allow for deliberately daisy-chaining packet streams), we
flag them for checking against actual packet loops, as we did before.
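A rough sketch of the idea, with hypothetical struct and field names (the
real flag and loop check live in the media handling code):

    struct advertised_endpoint {
            int is_local;           /* resolves to one of our own addresses */
            int loop_suspect;       /* marked for the runtime loop check */
    };

    static void handle_advertised_endpoint(struct advertised_endpoint *ep) {
            if (ep->is_local) {
                    /* don't reject: daisy-chaining through another local
                     * instance is legitimate; just flag it so the per-packet
                     * loop detection can catch an actual loop */
                    ep->loop_suspect = 1;
            }
    }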
Change-Id: I5652e86e12f3c1c5053ea70b01e8d128ebf47751
fixes #65
This also obsoletes the old loop detection.
Change-Id: I850d81500c45828af2c4d50d80278ec2d599c2a0
(cherry picked from commit 3254278cfd55167fb881cc665328744183773728)
When we receive an offer or an answer, we will only turn recording on if
it contains the "record-call=yes" flag. Likewise, we only turn recording
off if it contains the "record-call=no" flag. If no flag is specified,
the call keeps its current recording status. It used to be the case that
not specifying "record-call=yes" would turn off call recording.
This fix is necessary to account for SDP renegotiation during
hold/unhold. If the call system sends over another offer during SDP
renegotiation to hold and then unhold, we don't want to change recording
behavior unless the call system specifically says so.
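A minimal sketch of the resulting tri-state handling, using hypothetical
names (the actual flag parsing lives elsewhere in the daemon):

    enum record_flag { RECORD_UNSPEC = 0, RECORD_YES, RECORD_NO };

    static void apply_record_flag(int *recording_on, enum record_flag f) {
            switch (f) {
            case RECORD_YES:        /* "record-call=yes" */
                    *recording_on = 1;
                    break;
            case RECORD_NO:         /* "record-call=no" */
                    *recording_on = 0;
                    break;
            case RECORD_UNSPEC:
                    /* flag absent: keep the current recording status,
                     * e.g. across hold/unhold re-INVITEs */
                    break;
            }
    }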
This involved moving all code from fs.(c|h) to recording.(c|h).
We still spoof packets, so the UDP headers will make it look like all
monologues are coming over the same port, and they will probably appear
as a single stream if you look at the PCAP file.
Fixes and changes:
- Only create the metadata file if the call is being recorded.
- Only write to the metadata file if we actually created it (NULL check).
- Make sure we have metadata before putting it on the call object.
- Correctly overwrite recording metadata without leaking memory.
- Set the no-kernelization flag per call instead of for *every* packet.
- Logging cleanup.
We create a metadata file for each call. The metadata files will all end up
in a spool directory for the rtpengine.
Each in-progress file is named "rtpengine-meta-$RANDHEX.tmp" and goes in
/tmp/. When a call finishes, the file is moved to the spool sub-directory
/var/spool/rtpengine/metadata/ and the file extension is changed to ".txt".
The metadata file contains references to all PCAP recording files associated
with a call, and it includes generic metadata at the tail of the file.
One absolute path for a PCAP file per line, followed by two blank lines,
followed by the metadata passed in to the rtpengine through an external
command.
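For illustration, a finished metadata file might look like this (the paths
and the trailing metadata string are made up):

    /var/spool/rtpengine/recordings/rtpengine-abc123.pcap
    /var/spool/rtpengine/recordings/rtpengine-def456.pcap


    metadata string passed in via the control command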
RTP Engine checks for the spool directory "/var/spool/rtpengine" on startup.
If it's not there, it fails. If it's there, it sets up "metadata" and
"recordings" inner directories. This is where RTP Engine will write call
metadata files and PCAP files.
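A minimal sketch of the startup check under these assumptions (function
name and exact error handling are illustrative):

    #include <sys/stat.h>
    #include <sys/types.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void setup_spool_dir(const char *spool) {
            struct stat st;
            char path[PATH_MAX];

            /* refuse to start if the spool directory is missing */
            if (stat(spool, &st) || !S_ISDIR(st.st_mode)) {
                    fprintf(stderr, "spool directory %s not found\n", spool);
                    exit(1);
            }

            /* create the inner directories; EEXIST is fine */
            snprintf(path, sizeof(path), "%s/metadata", spool);
            mkdir(path, 0755);
            snprintf(path, sizeof(path), "%s/recordings", spool);
            mkdir(path, 0755);
    }

    int main(void) {
            setup_spool_dir("/var/spool/rtpengine");
            return 0;
    }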
Creating a random filename with a prefix and a suffix is now done through a
generic function. We use this to create pcap recording files in the /tmp/
directory, and will soon use it to create metadata files.
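A rough sketch of such a helper, with a hypothetical name and signature:

    #include <stdio.h>
    #include <stdlib.h>

    /* Builds e.g. "/tmp/rtpengine-meta-1a2b3c4d5e6f7890.tmp" into buf. */
    static void random_file_path(char *buf, size_t len,
                    const char *prefix, const char *suffix) {
            unsigned long long r = ((unsigned long long) random() << 32)
                    ^ (unsigned long long) random();
            snprintf(buf, len, "%s%016llx%s", prefix, r, suffix);
    }

    int main(void) {
            char path[256];
            srandom(1234);  /* the real code would use a proper seed */
            random_file_path(path, sizeof(path), "/tmp/rtpengine-meta-", ".tmp");
            puts(path);
            return 0;
    }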
RTP Engine creates PCAP files for recorded calls when the offer's answer
is processed, instead of at the initial offer.
We make up bogus values for the nonessential parts of the PCAP, UDP, and
IP headers. We might be able to pull these from other parts of the RTP
Engine, but that information isn't needed to record calls in a form that
can later be turned into audio files.
If you change the packet headers, be really careful about byte order and
datatype size!
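To make the byte-order point concrete, here is a hedged sketch of what
filling in such made-up headers looks like on Linux (all field values are
arbitrary; only the htons()/htonl() usage matters):

    #include <netinet/ip.h>
    #include <netinet/udp.h>
    #include <arpa/inet.h>
    #include <string.h>

    static void fill_spoofed_headers(struct iphdr *ip, struct udphdr *udp,
                    size_t payload_len) {
            memset(ip, 0, sizeof(*ip));
            ip->version = 4;
            ip->ihl = 5;
            ip->ttl = 64;
            ip->protocol = IPPROTO_UDP;
            ip->saddr = htonl(0x7f000001);  /* 127.0.0.1, made up */
            ip->daddr = htonl(0x7f000001);
            ip->tot_len = htons(sizeof(*ip) + sizeof(*udp) + payload_len);

            memset(udp, 0, sizeof(*udp));
            udp->source = htons(6000);      /* made-up ports */
            udp->dest = htons(6001);
            udp->len = htons(sizeof(*udp) + payload_len);
            /* checksum left at 0: "no checksum" is valid for UDP */
    }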
Pass in "record-call" flag over `rtpengine_offer` or `rtpengine_answer`
message. RTP Engine tracks the files used to record pcaps and sends them
back in the response message.
Pipes call audio (unencrypted from both ends) to recording files.
Sets up file descriptors for local files to dump RTP recordings.
A file and a file descriptor per monologue in a call.
Recorded streams are handled in userspace by the daemon, not in kernel mode.
This removes the first 12 octets from each packet to record just the RTP
payload.
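A sketch of that stripping step, assuming a plain write() per packet
(names are hypothetical):

    #include <unistd.h>
    #include <stdint.h>

    #define RTP_FIXED_HEADER_LEN 12

    /* Write only the payload of one RTP packet to the recording fd. */
    static ssize_t record_rtp_payload(int fd, const uint8_t *pkt, size_t len) {
            if (len <= RTP_FIXED_HEADER_LEN)
                    return 0;   /* nothing beyond the fixed header */
            return write(fd, pkt + RTP_FIXED_HEADER_LEN,
                            len - RTP_FIXED_HEADER_LEN);
    }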
- Add --subscribe_keyspace list config parameter.
- Don't delete foreign calls via timers.
- Fix synchronization of foreign calls (use a separate redis_notify database).
- Fix statistics for control channel calls.
- Fix deletion of foreign calls upon "del" notifications.
- Update the rtpengine-ctl tool.
Removes the explicit redis-read-db configuration and reduces the options
to one redis DB and one redis write DB. If only the redis DB is
configured, it is used for all operations. If both are configured, the
redis DB is used for reading and the write DB is used for writing
(updates).
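A sketch of the resulting selection logic, with hypothetical names:

    struct redis;   /* opaque connection handle */

    struct redis_conf {
            struct redis *db;           /* the redis DB */
            struct redis *write_db;     /* the write DB, optional */
    };

    static struct redis *redis_for_reading(const struct redis_conf *c) {
            return c->db;                       /* reads always go here */
    }

    static struct redis *redis_for_writing(const struct redis_conf *c) {
            /* use the write DB if configured, otherwise the single DB */
            return c->write_db ? c->write_db : c->db;
    }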
Change-Id: I8d5a32c53c9416b514c98d69c3afe7c547e530ad
The session limit is only for calls an rtpengine is responsible for.
Foreign calls (coming in via redis notification) are not counted as
long as the rtpengine is not responsible for those calls.
This means, however, that the limit may be exceeded if the calls the
rtpengine is responsible for plus the former foreign calls add up to more
than the limit. This can happen suddenly when the rtpengine becomes
responsible for the foreign calls.
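A sketch of the counting rule, with hypothetical names (the real check
lives in the call creation path):

    struct session_counts {
            unsigned int own;       /* calls this node is responsible for */
            unsigned int foreign;   /* calls learned via redis notification */
    };

    static int session_limit_reached(const struct session_counts *s,
                    unsigned int limit) {
            /* foreign calls are not counted against the limit */
            return limit && s->own >= limit;
    }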
Thoughts on that topic so far:
There's one more thing to keep in mind: what do we do if the call
changes (streams) and the backup node is notified? Currently we only
know (by subscribing to the 'notifier-' prefix) that something has in
fact changed, but we don't know what has changed in detail. Subscribing
to everything would lead to the problem that we have to take care of
synchronising the new streams with the old ones. Without having looked
at the code, that might be a lot of effort, and I guess that's why
Richard prefers to have clean states for the calls. Synchronizing is
always a mess; it's easier to delete and set up anew.
I thought about the following solution to make things more abstract
and easier to understand:
1. Whenever a call on a backup node (foreign call) has seen a packet
(timers are also started then), we no longer process any redis
notification for this call (because the RTP IP has already been
switched). More precisely and abstractly, that means: if a node has
taken over responsibility for a call (by having seen a packet), it
assumes that notifications from the former node are to be dropped. The
call will be deleted either when the timer fires or (in other setups)
when the control channel address has also been switched and the call is
deleted via that channel.
2. If a call has not seen a packet (an inactive call on the backup node,
which is not yet responsible for that call but could become so), we accept
all commands for that call on the backup node in the same way as on the
original node. That also includes deletions in between and so on. If the
original node does it, why not accept doing the same on the backup node?
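A sketch of the rule from points 1 and 2, with hypothetical names:

    struct backup_call {
            int foreign;        /* learned via redis notification */
            int packet_seen;    /* this node has already handled media for it */
    };

    /* Decide whether a redis notification for this call should be applied. */
    static int accept_redis_notification(const struct backup_call *c) {
            if (c->foreign && c->packet_seen)
                    return 0;   /* we took over responsibility: drop it */
            return 1;           /* still passive: mirror the original node */
    }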