Attempt to parse each src/dst leg (which is a JSON document these
days) only once, and use the same logic for deciding whether to create
a CDR for both intermediate and final CDRs.
Change-Id: If8afae812585cb8624799b0d2f4e6be64980cea9
The MySQL INSERT statements to move processed Redis acc records from
Redis to the respective backup/trash MySQL tables are always issued
within a MySQL transaction (med_handler via medmysql_batch_start), but
the deletions from Redis were done immediately. Therefore, if mediator
were to abort within a processing loop, the MySQL transaction would be
rolled back after the entries had already been deleted from Redis,
thus losing the acc entries.
Solve this by using an internal queue for Redis entries to hold the
lists of entries to be deleted until the MySQL transaction is committed.
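A minimal sketch of that queueing, assuming GLib's GQueue and a
hypothetical medredis_delete_key() helper:

    #include <glib.h>

    void medredis_delete_key(const char *key);  /* hypothetical deletion helper */

    /* Keys of processed acc entries are queued instead of being deleted
     * right away. */
    static GQueue pending_redis_deletions = G_QUEUE_INIT;

    static void queue_redis_deletion(const char *key) {
        g_queue_push_tail(&pending_redis_deletions, g_strdup(key));
    }

    /* Called only after the MySQL transaction has been committed; only
     * then are the queued Redis entries actually removed. */
    static void flush_redis_deletions(void) {
        char *key;
        while ((key = g_queue_pop_head(&pending_redis_deletions))) {
            medredis_delete_key(key);
            g_free(key);
        }
    }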
Change-Id: Ib41d0e2ca722c66f9e078ca31f7e5ca2b9d9fe2d
Use a struct with globally defined instances instead of a literal string
name to distinguish the two destination tables. This makes it possible
to unify record handling between the two tables.
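Roughly what that can look like (a sketch; the actual struct contents
and table names differ):

    /* A descriptor per destination table; two global instances replace
     * the literal string names previously passed around. */
    typedef struct {
        const char *name;   /* table name used when building the INSERT */
    } medmysql_table_t;

    static const medmysql_table_t med_backup_table = { .name = "acc_backup" };
    static const medmysql_table_t med_trash_table  = { .name = "acc_trash" };

    /* A single handler can now serve both tables. */
    static void move_record(const medmysql_table_t *table, const char *callid) {
        /* ... build "INSERT INTO <table->name> ..." and execute it ... */
        (void) table;
        (void) callid;
    }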
Change-Id: I900debf3a28f262b4503d79562d69b69502b7aa8
This makes it possible to trace which CDR record was created from which
acc records and at which point.
Change-Id: I3645ccf244bf7d86b6f70181c57e47dfa204f7b9
json_tokener_parse returns a newly created JSON object (tree) which must
be freed by decreasing the ref count before the variable goes out of
scope.
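With json-c this boils down to:

    #include <json-c/json.h>

    static void parse_and_use(const char *leg) {
        json_object *json = json_tokener_parse(leg);
        if (!json)
            return;

        /* ... inspect the parsed object ... */

        /* Drop our reference before the variable goes out of scope,
         * otherwise the whole tree is leaked. */
        json_object_put(json);
    }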
Fix-up for Ia7e8446fe4953d1391f99ea1530990e3d385c056
Change-Id: I2e4b17086df468f66401a71a836d37ed821944e5
The `cdr_index` variable already tracks the number of created CDRs, as
it points to the slot for the next CDR record to be inserted. It's
increased by one at the start of the processing loop, so if we end up
skipping over an entry after it's been increased, it must be decreased
again, which keeps the count intact and prevents empty CDR records from
being created.
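In sketch form (the helper names and types below are placeholders for
the real mediator ones):

    #include <glib.h>
    #include <stdint.h>

    typedef struct med_entry med_entry_t;             /* stand-in types */
    typedef struct cdr cdr_t;
    int entry_is_usable(const med_entry_t *e);                 /* hypothetical */
    void fill_cdr(cdr_t *cdrs, uint64_t slot, const med_entry_t *e); /* hypothetical */

    static uint64_t build_cdrs(GList *entries, cdr_t *cdrs) {
        uint64_t cdr_index = 0;

        for (GList *l = entries; l; l = l->next) {
            med_entry_t *e = l->data;

            cdr_index++;                 /* slot for the next CDR record */

            if (!entry_is_usable(e)) {
                cdr_index--;             /* give the slot back: no empty CDR */
                continue;
            }

            fill_cdr(cdrs, cdr_index - 1, e);
        }

        return cdr_index;                /* number of CDRs actually created */
    }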
Fix-up for Ia7e8446fe4953d1391f99ea1530990e3d385c056
Change-Id: Ibb650a4b00978a272ef8f60751f6efda0491a912
* If there are multiple call leg acc records for a call and some of
them do not contain valid src and dst leg data (either JSON or the
old format), these acc records are skipped, and intermediate CDRs
are created for the remaining records that do contain valid src and
dst leg data.
Change-Id: Ia7e8446fe4953d1391f99ea1530990e3d385c056
(cherry picked from commit c56b5f7503)
KeyDB notifies systemd prematurely about its readiness, while it might
still be loading data from persistent storage or from its replication
master. It refuses access to its databases during that time, so make
mediator sleep/retry a few times to handle this.
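A sketch of such a retry loop with hiredis; the "-LOADING" check, the
retry count and the delay are assumptions:

    #include <hiredis/hiredis.h>
    #include <string.h>
    #include <unistd.h>

    /* KeyDB answers "-LOADING ..." while it is still reading its dataset
     * (or syncing from its master), so retry a few times before giving up. */
    static redisReply *command_with_retry(redisContext *ctx, const char *cmd) {
        for (int attempt = 0; attempt < 10; attempt++) {
            redisReply *reply = redisCommand(ctx, cmd);
            if (reply && !(reply->type == REDIS_REPLY_ERROR
                        && strncmp(reply->str, "LOADING", 7) == 0))
                return reply;
            if (reply)
                freeReplyObject(reply);
            sleep(1);   /* wait for KeyDB to finish loading */
        }
        return NULL;
    }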
Change-Id: Ieff52bc9f92697385f300c6fba26ace03fde78f6
When processing acc records for a call that was the result of a REFER,
there will be two BYE records, one for the call itself without further
info in dst_leg, and another one for the original call that includes the
record for the REFER, with extra info in dst_leg that points back to the
original call. Skip the BYE with this extra info when processing records
so that it's still present when the other call (with the REFER) is being
processed.
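Conceptually the check amounts to something like this (the dst_leg key
name is an assumption, not the real field):

    #include <json-c/json.h>
    #include <string.h>

    /* Sketch: returns non-zero if this BYE carries REFER follow-up info
     * in dst_leg and must be left in place for now. "referred_by" stands
     * in for whatever key actually points back to the original call. */
    static int bye_belongs_to_refer(const char *method, json_object *dst_leg) {
        json_object *ref;

        if (strcmp(method, "BYE") != 0 || !dst_leg)
            return 0;
        return json_object_object_get_ex(dst_leg, "referred_by", &ref);
    }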
Change-Id: Ia8b405c1d9d556ccb30f9b51a0bbd001f48da180
Use the call ID from the contained JSON object in REFER records to
follow the call flow to the new call ID and look there for the matching
BYE record.
Change-Id: Ie2655937c12bb8ae6ae2aa48c1b2cdbb3c1ac120
This allows us to fetch only the specific records that we're
interested in, based on context, instead of fetching all of them and
then having to do a second pass over them.
Change-Id: I5e314fa633f57c79db85476e347a3305b5f585e9
Instead of using two lists to keep the acc records (based on where
they were retrieved from), use only a single list for all acc records.
Make sure the list is only appended to in the functions doing the record
retrieval.
Since the list of acc records needs to be sorted only when records were
retrieved from Redis, change the return value of the retrieval functions
to indicate whether this needs to be done, or -1 for error.
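The retrieval interface then ends up with roughly this shape (a
sketch, not the real prototypes):

    #include <glib.h>

    /* Return values of the (hypothetical) retrieval functions:
     *   1  records appended, list still needs sorting (Redis)
     *   0  records appended, no sorting required (MySQL)
     *  -1  error
     */
    int fetch_records_mysql(const char *callid, GList **entries);
    int fetch_records_redis(const char *callid, GList **entries);
    gint compare_entries(gconstpointer a, gconstpointer b);  /* hypothetical */

    static int fetch_all(const char *callid, GList **entries) {
        int need_sort;

        if (fetch_records_mysql(callid, entries) < 0)
            return -1;
        need_sort = fetch_records_redis(callid, entries);
        if (need_sort < 0)
            return -1;
        if (need_sort > 0)
            *entries = g_list_sort(*entries, compare_entries);
        return 0;
    }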
Change-Id: Ie61c054b430cb5d390b1f2b742c64be1df831fd4
Use g_strdup_printf instead of sprintf+strdup
Include Redis key prefix in the first printf to avoid having to do
another sprintf
Use G_N_ELEMENTS() instead of a hard-coded array size
Do alloc cleanup only once at the end of the function
Use g_list_prepend instead of g_list_append to avoid repeated list
iterations
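Taken together, the pattern looks roughly like this (the key layout
and names are illustrative only):

    #include <glib.h>

    #define SKETCH_KEY_PREFIX "acc:entry::"    /* assumed Redis key prefix */

    static GList *collect_keys(const char *callid) {
        static const char *const suffixes[] = { "INVITE", "BYE" };
        GList *keys = NULL;

        /* G_N_ELEMENTS() instead of a hard-coded array size */
        for (gsize i = 0; i < G_N_ELEMENTS(suffixes); i++) {
            /* g_strdup_printf replaces the sprintf+strdup pair, and the
             * key prefix goes into the same format string, saving a
             * second sprintf */
            char *key = g_strdup_printf(SKETCH_KEY_PREFIX "%s:%s",
                    callid, suffixes[i]);
            /* prepend is O(1); append would re-walk the list each time */
            keys = g_list_prepend(keys, key);
        }
        return keys;
    }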
Change-Id: If843723eee7578b774ecc1b2fcb1d1e30b16cd19
If we know the method of the Redis entry that we're deleting, we can
skip trying to remove the entry from all possible pseudo-keys and
instead remove it only from the appropriate one.
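Sketched out (the method codes and pseudo-key names are assumptions):

    /* Pick the single pseudo-key matching the entry's method instead of
     * issuing a removal against every possible one. */
    enum sketch_method { SKETCH_INVITE, SKETCH_BYE, SKETCH_UNKNOWN };

    static const char *pseudo_key_for_method(enum sketch_method method) {
        switch (method) {
            case SKETCH_INVITE: return "acc:meth::INVITE";
            case SKETCH_BYE:    return "acc:meth::BYE";
            default:            return NULL;  /* unknown: fall back to all keys */
        }
    }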
Change-Id: I2992b250c24f274ff190e9965d0e685877f25a4a
With this we can directly use the method code for decision making
purposes before the CDR generation routine runs.
Change-Id: I8967e77e6cd717bb4d3342a1bf7052c33d0f9b45
Now we can use some of the JSON fields for decision making purposes,
before we get to the CDR creation stage.
Change-Id: I0a521c7c6bbf82e5fed683c5461914fefd784f0f
This makes it possible to reuse the data from the JSON object in several
places in the code without having to re-parse it.
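One plausible shape for that, as a sketch (the real struct differs):

    #include <json-c/json.h>

    /* The parsed tree is kept next to the raw string so later stages can
     * use it without calling json_tokener_parse() a second time. */
    typedef struct {
        char *dst_leg;               /* raw JSON string from the acc record */
        json_object *dst_leg_json;   /* parsed once, reused afterwards */
    } sketch_entry_t;

    static json_object *entry_dst_leg_json(sketch_entry_t *e) {
        if (!e->dst_leg_json && e->dst_leg)
            e->dst_leg_json = json_tokener_parse(e->dst_leg);
        return e->dst_leg_json;
    }

    static void entry_release_json(sketch_entry_t *e) {
        if (e->dst_leg_json)
            json_object_put(e->dst_leg_json);   /* drop our reference */
        e->dst_leg_json = NULL;
    }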
Change-Id: I62f3b2b814acc52ae3fb84c88b7b4f99bff9232f
Use strdup/free for string fields in med_entry_t that are highly
variable in length. This eliminates future problems if one of these
fields ever has its length extended.
Use g_strdup as it guarantees a non-NULL return value.
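Schematically (the real med_entry_t has more fields than this):

    #include <glib.h>

    /* Variable-length columns become heap-allocated strings instead of
     * fixed-size char arrays, so extending a column in the database can
     * no longer overflow or truncate them. */
    typedef struct {
        char *callid;
        char *src_leg;
        char *dst_leg;
    } sketch_med_entry_t;

    static void entry_fill(sketch_med_entry_t *e, const char *callid,
            const char *src_leg, const char *dst_leg) {
        /* g_strdup aborts on allocation failure instead of returning NULL,
         * so the result never needs a NULL check for non-NULL input. */
        e->callid  = g_strdup(callid);
        e->src_leg = g_strdup(src_leg);
        e->dst_leg = g_strdup(dst_leg);
    }

    static void entry_clear(sketch_med_entry_t *e) {
        g_free(e->callid);
        g_free(e->src_leg);
        g_free(e->dst_leg);
    }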
Change-Id: Ia0f5883547feb62f04fcd4c5353850eb9d815413
Using an array in this context (to return a list of acc records) is
mostly pointless as it wastes memory and incurs the additional overhead
of having to initialise the array and an extra layer of copying strings
around. This also ultimately allows us to dynamically append to the list
of acc records without having to reallocate the array.
Change-Id: I1039f01861f8d3f82fdc3a80377fd7535fa24bab
Using an array in this context (to return a list of call IDs) is mostly
pointless as it wastes memory and incurs the additional overhead of
having to initialise the array and an extra layer of copying strings
around. This also eliminates the auxiliary type `med_callid_t` and
ultimately allows us to dynamically append to the list of call IDs
without having to reallocate the array.
Use g_strdup for string allocation as it guarantees a non-NULL return
value.
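A sketch of the resulting call-ID handling:

    #include <glib.h>

    /* Call IDs are plain strings collected in a GList: no fixed-size
     * array, no med_callid_t wrapper, and the whole list is released in
     * one call. */
    static GList *add_callid(GList *callids, const char *callid) {
        return g_list_prepend(callids, g_strdup(callid));
    }

    static void free_callids(GList *callids) {
        g_list_free_full(callids, g_free);
    }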
Change-Id: Iae6c97f80c216352ab36de89d361f09ee355b6c8
We are migrating from Redis to KeyDB, so for now we need to support
both. The unit file should therefore not depend on a specific
key-value storage; instead, the database.key_value.flavor value should
be used in an override file. However, dependencies cannot be redefined
in an override file, only additional ones can be added, so remove the
dependency from the unit file.
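As a sketch, the dependency then lives in a drop-in override selected
by database.key_value.flavor (the unit and file names here are
illustrative):

    # /etc/systemd/system/mediator.service.d/storage.conf (illustrative)
    [Unit]
    # generated from database.key_value.flavor, e.g. redis or keydb
    After=redis-server.service
    Wants=redis-server.service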
Change-Id: I16e94e938bd9f1da14e1068bc6b94485b08a4ca5
As the number of subscribers grows, the current approach of doing a full
table dump of the subscribers DB and caching it in memory becomes less
and less feasible. The new approach is to simply do a straight DB query
for each subscriber as records are processed, and then cache the result
in memory for a little while.
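Schematically, with a GHashTable as the per-subscriber cache (the TTL,
the names and the query helper are assumptions):

    #include <glib.h>
    #include <time.h>

    #define SUB_CACHE_TTL 600   /* seconds; illustrative value */

    char *query_subscriber_db(const char *uuid);   /* hypothetical DB lookup */

    typedef struct {
        char  *data;        /* whatever the DB lookup returned */
        time_t fetched_at;
    } sub_cache_entry_t;

    /* uuid -> sub_cache_entry_t; set up elsewhere with destroy notifiers */
    static GHashTable *sub_cache;

    static const char *lookup_subscriber(const char *uuid) {
        sub_cache_entry_t *e = g_hash_table_lookup(sub_cache, uuid);
        if (e && time(NULL) - e->fetched_at < SUB_CACHE_TTL)
            return e->data;                 /* still fresh: no DB round trip */

        /* query the DB for just this one subscriber and cache the result */
        e = g_new0(sub_cache_entry_t, 1);
        e->data = query_subscriber_db(uuid);
        e->fetched_at = time(NULL);
        g_hash_table_replace(sub_cache, g_strdup(uuid), e);
        return e->data;
    }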
Change-Id: I19a6271d779bd0abccc29e3548e7bcdb2e00baa3
ubuntu-20.04 doesn't provide debhelper-compat (= 13) and therefore fails with:
| The following packages have unmet dependencies:
| builddeps:. : Depends: debhelper-compat (= 13)
| E: Unable to correct problems, you have held broken packages.
Let's switch from ubuntu-20.04 to ubuntu-latest, which
currently still points to ubuntu-20.04, but should reduce
our maintenance efforts.
Furthermore, enable the ubuntu-cloud-archive/yoga-staging PPA,
which provides a backport of debhelper v13:
https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/yoga-staging/+packages
and fixes our failing Coverity builds on GitHub.
Change-Id: I3bba166843f164b67b90c403cc772dfd939eeae7
Using apt-get with `-qq` displays only the following message
on package installation problems:
| E: Unable to correct problems, you have held broken packages.
Whereas with `-q`, we get the actual underlying problem:
| The following packages have unmet dependencies:
| builddeps:. : Depends: debhelper-compat (= 13)
| E: Unable to correct problems, you have held broken packages.
Change-Id: Ibadc483f1cb324c83d7616d009bcc932876a25a3
Use a macro in combination with an included file to define the list of
all string fields used in the CDR struct.
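This is the classic X-macro pattern; a minimal sketch (the field names
and the file name are assumptions):

    /* cdr_string_fields.inc (illustrative contents):
     *   CDR_STRING_FIELD(source_user)
     *   CDR_STRING_FIELD(destination_user)
     *   CDR_STRING_FIELD(call_code)
     */

    #include <glib.h>

    typedef struct {
    #define CDR_STRING_FIELD(name) char *name;
    #include "cdr_string_fields.inc"
    #undef CDR_STRING_FIELD
    } sketch_cdr_t;

    static void cdr_free_strings(sketch_cdr_t *cdr) {
    #define CDR_STRING_FIELD(name) g_free(cdr->name);
    #include "cdr_string_fields.inc"
    #undef CDR_STRING_FIELD
    }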
Change-Id: Ic0b93c005b792eadf00544768c74382fa3307577