TALK_DETECT: A channel function that raises events when talking is detected

This patch adds a new channel function TALK_DETECT that, when set on a
channel, causes events indicating the start/stop of talking on a channel to be
emitted to both AMI and ARI clients. 

The function allows setting both the silence threshold (the length of silence
after which we decide no one is talking) as well as the talking threshold (the
amount of energy that counts as talking). Parameters can be updated on a channel
after talk detection has been enabled, and talk detection can be removed at
any time.

The events raised by the function use a nomenclature similar to existing AMI/ARI
events.
For AMI: ChannelTalkingStart/ChannelTalkingStop
For ARI: ChannelTalkingStarted/ChannelTalkingFinished
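The threshold behavior described above can be sketched as a tiny state machine: talking is declared while the measured run of silence stays below the silence threshold, and the stop event fires once it crosses it. This is a simplified illustration, not code from the patch; `talk_transition` and its parameters are invented for the sketch, standing in for what the audiohook derives from `ast_dsp_silence()`.

```c
/* Hedged sketch (not part of the patch): the start/stop decision rule the
 * commit message describes, reduced to plain C. total_silence_ms stands in
 * for the running silence tally ast_dsp_silence() reports for the channel;
 * *state tracks whether we last reported "talking". Returns +1 when a
 * "started talking" event should fire, -1 for "stopped talking", and 0 for
 * no transition. */
static int talk_transition(int *state, int total_silence_ms, int silence_threshold_ms)
{
	int talking = total_silence_ms < silence_threshold_ms;

	if (talking && !*state) {
		*state = 1;
		return 1;	/* silence ended: raise the start event */
	}
	if (!talking && *state) {
		*state = 0;
		return -1;	/* silence exceeded the threshold: raise the stop event */
	}
	return 0;		/* no change: raise nothing */
}
```

Note that brief pauses shorter than the silence threshold produce no events at all, which is exactly why the threshold should sit above natural mid-sentence pauses.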

Review: https://reviewboard.asterisk.org/r/3563/

#ASTERISK-23786 #close
Reported by: Matt Jordan
........

Merged revisions 414934 from http://svn.asterisk.org/svn/asterisk/branches/12


git-svn-id: https://origsvn.digium.com/svn/asterisk/trunk@414935 65c4cc65-6c06-0410-ace0-fbb531ad65f3
Matthew Jordan 11 years ago
parent e9f09ab2bc
commit 53968c00b3

@@ -31,6 +31,26 @@ AgentRequest
of the incoming caller. The most likely reason this would happen is
the agent did not acknowledge the call in time.
AMI
------------------
* New events have been added for the TALK_DETECT function. When the function
is used on a channel, ChannelTalkingStart/ChannelTalkingStop events will be
emitted to connected AMI clients indicating the start/stop of talking on
the channel.
ARI
------------------
* New event models have been added for the TALK_DETECT function. When the
function is used on a channel, ChannelTalkingStarted/ChannelTalkingFinished
events will be emitted to connected WebSockets subscribed to the channel,
indicating the start/stop of talking on the channel.
Functions
------------------
* A new function, TALK_DETECT, has been added. When set on a channel, this
function causes events indicating the starting/stopping of talking on said
channel to be emitted to both AMI and ARI clients.
------------------------------------------------------------------------------
--- Functionality changes from Asterisk 12.2.0 to Asterisk 12.3.0 ------------
------------------------------------------------------------------------------

@@ -0,0 +1,404 @@
/*
* Asterisk -- An open source telephony toolkit.
*
* Copyright (C) 2014, Digium, Inc.
*
* Matt Jordan <mjordan@digium.com>
*
* See http://www.asterisk.org for more information about
* the Asterisk project. Please do not directly contact
* any of the maintainers of this project for assistance;
* the project provides a web site, mailing lists and IRC
* channels for your use.
*
* This program is free software, distributed under the terms of
* the GNU General Public License Version 2. See the LICENSE file
* at the top of the source tree.
*/
/*! \file
*
* \brief Function that raises events when talking is detected on a channel
*
* \author Matt Jordan <mjordan@digium.com>
*
* \ingroup functions
*/
/*** MODULEINFO
<support_level>core</support_level>
***/
#include "asterisk.h"
ASTERISK_FILE_VERSION(__FILE__, "$Revision$")
#include "asterisk/module.h"
#include "asterisk/channel.h"
#include "asterisk/pbx.h"
#include "asterisk/app.h"
#include "asterisk/dsp.h"
#include "asterisk/audiohook.h"
#include "asterisk/stasis.h"
#include "asterisk/stasis_channels.h"
/*** DOCUMENTATION
<function name="TALK_DETECT" language="en_US">
<synopsis>
Raises notifications when Asterisk detects silence or talking on a channel.
</synopsis>
<syntax>
<parameter name="action" required="true">
<optionlist>
<option name="remove">
<para>W/O. Remove talk detection from the channel.</para>
</option>
<option name="set">
<para>W/O. Enable TALK_DETECT and/or configure talk detection
parameters. Can be called multiple times to change parameters
on a channel with talk detection already enabled.</para>
<argument name="dsp_silence_threshold" required="false">
<para>The length of silence, in milliseconds, after which a user is considered to be silent.</para>
</argument>
<argument name="dsp_talking_threshold" required="false">
<para>The energy level above which a user is considered to be talking.</para>
</argument>
</option>
</optionlist>
</parameter>
</syntax>
<description>
<para>The TALK_DETECT function enables events on the channel
it is applied to. These events can be emitted over AMI, ARI, and
potentially other Asterisk modules that listen for the internal
notification.</para>
<para>The function has two parameters that can optionally be passed
when <literal>set</literal> on a channel: <replaceable>dsp_talking_threshold</replaceable>
and <replaceable>dsp_silence_threshold</replaceable>.</para>
<para><replaceable>dsp_talking_threshold</replaceable> is the sound level, above
what the dsp has established as the baseline silence for a user,
at which the user is considered to be talking. By default, the value of
<replaceable>silencethreshold</replaceable> from <filename>dsp.conf</filename>
is used. If this value is set too low, events may be
falsely triggered by variations in room noise.</para>
<para>Valid values are 1 through 2^31.</para>
<para><replaceable>dsp_silence_threshold</replaceable> is the time in milliseconds of sound
falling within what the dsp has established as baseline silence before
a user is considered to be silent. If this value is set too low, events
indicating that the user has stopped talking may be falsely sent out when
the user briefly pauses mid-sentence.</para>
<para>The best way to approach this option is to set it slightly above
the maximum duration of silence, in milliseconds, a user may generate during
natural speech.</para>
<para>By default this value is 2500ms. Valid values are 1
through 2^31.</para>
<para>Example:</para>
<para>same => n,Set(TALK_DETECT(set)=) ; Enable talk detection</para>
<para>same => n,Set(TALK_DETECT(set)=1200) ; Update existing talk detection's silence threshold to 1200 ms</para>
<para>same => n,Set(TALK_DETECT(remove)=) ; Remove talk detection</para>
<para>same => n,Set(TALK_DETECT(set)=,128) ; Enable and set talk threshold to 128</para>
<note>
<para>The TALK_DETECT function uses an audiohook to inspect the
voice media frames on a channel. Other functions, such as JITTERBUFFER,
DENOISE, and AGC use a similar mechanism. Audiohooks are processed
in the order in which they are placed on the channel. As such,
it typically makes sense to place functions that modify the voice
media data prior to placing the TALK_DETECT function, as this will
yield better results.</para>
<para>Example:</para>
<para>same => n,Set(DENOISE(rx)=on) ; Denoise received audio</para>
<para>same => n,Set(TALK_DETECT(set)=) ; Perform talk detection on the denoised received audio</para>
</note>
</description>
</function>
***/
#define DEFAULT_SILENCE_THRESHOLD 2500
/*! \brief Private data structure used with the function's datastore */
struct talk_detect_params {
/*! The audiohook for the function */
struct ast_audiohook audiohook;
/*! Our threshold above which we consider someone talking */
int dsp_talking_threshold;
/*! How long we'll wait before we decide someone is silent */
int dsp_silence_threshold;
/*! Whether or not the user is currently talking */
int talking;
/*! The time the current burst of talking started */
struct timeval talking_start;
/*! The DSP used to do the heavy lifting */
struct ast_dsp *dsp;
};
/*! \internal \brief Destroy the datastore */
static void datastore_destroy_cb(void *data) {
struct talk_detect_params *td_params = data;
ast_audiohook_destroy(&td_params->audiohook);
if (td_params->dsp) {
ast_dsp_free(td_params->dsp);
}
ast_free(data);
}
/*! \brief The channel datastore the function uses to store state */
static const struct ast_datastore_info talk_detect_datastore = {
.type = "talk_detect",
.destroy = datastore_destroy_cb
};
/*! \internal \brief An audiohook modification callback
*
* This processes the read side of a channel's voice data to see if
* they are talking
*
* \note We don't actually modify the audio, so this function always
* returns a 'failure' indicating that it didn't modify the data
*/
static int talk_detect_audiohook_cb(struct ast_audiohook *audiohook, struct ast_channel *chan, struct ast_frame *frame, enum ast_audiohook_direction direction)
{
int total_silence;
int update_talking = 0;
struct ast_datastore *datastore;
struct talk_detect_params *td_params;
struct stasis_message *message;
if (audiohook->status == AST_AUDIOHOOK_STATUS_DONE) {
return 1;
}
if (direction != AST_AUDIOHOOK_DIRECTION_READ) {
return 1;
}
if (frame->frametype != AST_FRAME_VOICE) {
return 1;
}
if (!(datastore = ast_channel_datastore_find(chan, &talk_detect_datastore, NULL))) {
return 1;
}
td_params = datastore->data;
ast_dsp_silence(td_params->dsp, frame, &total_silence);
if (total_silence < td_params->dsp_silence_threshold) {
if (!td_params->talking) {
update_talking = 1;
td_params->talking_start = ast_tvnow();
}
td_params->talking = 1;
} else {
if (td_params->talking) {
update_talking = 1;
}
td_params->talking = 0;
}
if (update_talking) {
struct ast_json *blob = NULL;
if (!td_params->talking) {
int64_t diff_ms = ast_tvdiff_ms(ast_tvnow(), td_params->talking_start);
diff_ms -= td_params->dsp_silence_threshold;
blob = ast_json_pack("{s: i}", "duration", diff_ms);
if (!blob) {
return 1;
}
}
ast_verb(4, "%s is now %s\n", ast_channel_name(chan),
td_params->talking ? "talking" : "silent");
message = ast_channel_blob_create_from_cache(ast_channel_uniqueid(chan),
td_params->talking ? ast_channel_talking_start() : ast_channel_talking_stop(),
blob);
if (message) {
stasis_publish(ast_channel_topic(chan), message);
}
ast_json_unref(blob);
}
return 1;
}
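The `Duration` reported with the stop event deserves a note: talking only "ends" after `dsp_silence_threshold` milliseconds of silence have elapsed, so the callback subtracts that trailing silence from the wall-clock span since `talking_start`. A minimal sketch of the arithmetic (the function name and millisecond arguments are illustrative, standing in for `ast_tvdiff_ms()` over `struct timeval`s):

```c
/* Hedged sketch: the duration attached to the "stopped talking" event.
 * The stop decision is only reached once dsp_silence_threshold ms of
 * silence have accumulated, so that tail is not counted as talking. */
static long long talking_duration_ms(long long now_ms, long long talking_start_ms,
	int dsp_silence_threshold_ms)
{
	return (now_ms - talking_start_ms) - dsp_silence_threshold_ms;
}
```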
/*! \internal \brief Disable talk detection on the channel */
static int remove_talk_detect(struct ast_channel *chan)
{
struct ast_datastore *datastore = NULL;
struct talk_detect_params *td_params;
SCOPED_CHANNELLOCK(chan_lock, chan);
datastore = ast_channel_datastore_find(chan, &talk_detect_datastore, NULL);
if (!datastore) {
ast_log(AST_LOG_WARNING, "Cannot remove TALK_DETECT from %s: TALK_DETECT not currently enabled\n",
ast_channel_name(chan));
return -1;
}
td_params = datastore->data;
if (ast_audiohook_remove(chan, &td_params->audiohook)) {
ast_log(AST_LOG_WARNING, "Failed to remove TALK_DETECT audiohook from channel %s\n",
ast_channel_name(chan));
return -1;
}
if (ast_channel_datastore_remove(chan, datastore)) {
ast_log(AST_LOG_WARNING, "Failed to remove TALK_DETECT datastore from channel %s\n",
ast_channel_name(chan));
return -1;
}
ast_datastore_free(datastore);
return 0;
}
/*! \internal \brief Enable talk detection on the channel */
static int set_talk_detect(struct ast_channel *chan, int dsp_silence_threshold, int dsp_talking_threshold)
{
struct ast_datastore *datastore = NULL;
struct talk_detect_params *td_params;
SCOPED_CHANNELLOCK(chan_lock, chan);
datastore = ast_channel_datastore_find(chan, &talk_detect_datastore, NULL);
if (!datastore) {
datastore = ast_datastore_alloc(&talk_detect_datastore, NULL);
if (!datastore) {
return -1;
}
td_params = ast_calloc(1, sizeof(*td_params));
if (!td_params) {
ast_datastore_free(datastore);
return -1;
}
ast_audiohook_init(&td_params->audiohook,
AST_AUDIOHOOK_TYPE_MANIPULATE,
"TALK_DETECT",
AST_AUDIOHOOK_MANIPULATE_ALL_RATES);
td_params->audiohook.manipulate_callback = talk_detect_audiohook_cb;
ast_set_flag(&td_params->audiohook, AST_AUDIOHOOK_TRIGGER_READ);
td_params->dsp = ast_dsp_new_with_rate(ast_format_rate(ast_channel_rawreadformat(chan)));
if (!td_params->dsp) {
ast_datastore_free(datastore);
ast_free(td_params);
return -1;
}
datastore->data = td_params;
ast_channel_datastore_add(chan, datastore);
ast_audiohook_attach(chan, &td_params->audiohook);
} else {
/* Talk detection already enabled; update existing settings */
td_params = datastore->data;
}
td_params->dsp_talking_threshold = dsp_talking_threshold;
td_params->dsp_silence_threshold = dsp_silence_threshold;
ast_dsp_set_threshold(td_params->dsp, td_params->dsp_talking_threshold);
return 0;
}
/*! \internal \brief TALK_DETECT write function callback */
static int talk_detect_fn_write(struct ast_channel *chan, const char *function, char *data, const char *value)
{
int res;
if (!chan) {
return -1;
}
if (ast_strlen_zero(data)) {
ast_log(AST_LOG_WARNING, "TALK_DETECT requires an argument\n");
return -1;
}
if (!strcasecmp(data, "set")) {
int dsp_silence_threshold = DEFAULT_SILENCE_THRESHOLD;
int dsp_talking_threshold = ast_dsp_get_threshold_from_settings(THRESHOLD_SILENCE);
if (!ast_strlen_zero(value)) {
char *parse = ast_strdupa(value);
AST_DECLARE_APP_ARGS(args,
AST_APP_ARG(silence_threshold);
AST_APP_ARG(talking_threshold);
);
AST_STANDARD_APP_ARGS(args, parse);
if (!ast_strlen_zero(args.silence_threshold)) {
if (sscanf(args.silence_threshold, "%30d", &dsp_silence_threshold) != 1) {
ast_log(AST_LOG_WARNING, "Failed to parse %s for dsp_silence_threshold\n",
args.silence_threshold);
return -1;
}
if (dsp_silence_threshold < 1) {
ast_log(AST_LOG_WARNING, "Invalid value %d for dsp_silence_threshold\n",
dsp_silence_threshold);
return -1;
}
}
if (!ast_strlen_zero(args.talking_threshold)) {
if (sscanf(args.talking_threshold, "%30d", &dsp_talking_threshold) != 1) {
ast_log(AST_LOG_WARNING, "Failed to parse %s for dsp_talking_threshold\n",
args.talking_threshold);
return -1;
}
if (dsp_talking_threshold < 1) {
ast_log(AST_LOG_WARNING, "Invalid value %d for dsp_talking_threshold\n",
dsp_talking_threshold);
return -1;
}
}
}
res = set_talk_detect(chan, dsp_silence_threshold, dsp_talking_threshold);
} else if (!strcasecmp(data, "remove")) {
res = remove_talk_detect(chan);
} else {
ast_log(AST_LOG_WARNING, "TALK_DETECT: unknown option %s\n", data);
res = -1;
}
return res;
}
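The write callback's value parsing can be summarized standalone: an optional `silence_threshold,talking_threshold` pair, where either side may be blank to keep its default, each value read with `sscanf("%30d")` and rejected if below 1. The sketch below is a self-contained approximation; `parse_thresholds` is invented for illustration and replaces the `AST_STANDARD_APP_ARGS` machinery with a plain `strchr` split.

```c
#include <stdio.h>
#include <string.h>

/* Hedged sketch of TALK_DETECT(set)'s argument handling: parse an optional
 * "silence,talking" pair into the caller's defaults. Returns 0 on success,
 * -1 on a malformed or out-of-range value (thresholds must be >= 1). */
static int parse_thresholds(const char *value, int *silence, int *talking)
{
	char buf[64];
	char *talk_part;

	if (!value || !*value) {
		return 0; /* no arguments: keep both defaults */
	}
	snprintf(buf, sizeof(buf), "%s", value);
	talk_part = strchr(buf, ',');
	if (talk_part) {
		*talk_part++ = '\0';
	}
	if (*buf) {
		if (sscanf(buf, "%30d", silence) != 1 || *silence < 1) {
			return -1; /* not an integer, or below the minimum */
		}
	}
	if (talk_part && *talk_part) {
		if (sscanf(talk_part, "%30d", talking) != 1 || *talking < 1) {
			return -1;
		}
	}
	return 0;
}
```

This mirrors the dialplan examples in the documentation: `TALK_DETECT(set)=1200` updates only the silence threshold, while `TALK_DETECT(set)=,128` updates only the talking threshold.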
/*! \brief Definition of the TALK_DETECT function */
static struct ast_custom_function talk_detect_function = {
.name = "TALK_DETECT",
.write = talk_detect_fn_write,
};
/*! \internal \brief Unload the module */
static int unload_module(void)
{
int res = 0;
res |= ast_custom_function_unregister(&talk_detect_function);
return res;
}
/*! \internal \brief Load the module */
static int load_module(void)
{
int res = 0;
res |= ast_custom_function_register(&talk_detect_function);
return res ? AST_MODULE_LOAD_FAILURE : AST_MODULE_LOAD_SUCCESS;
}
AST_MODULE_INFO_STANDARD(ASTERISK_GPL_KEY, "Talk detection dialplan function");

@@ -499,6 +499,22 @@ struct stasis_message_type *ast_channel_moh_start_type(void);
*/
struct stasis_message_type *ast_channel_moh_stop_type(void);
/*!
* \since 12.4.0
* \brief Message type for a channel starting talking
*
* \retval A stasis message type
*/
struct stasis_message_type *ast_channel_talking_start(void);
/*!
* \since 12.4.0
* \brief Message type for a channel stopping talking
*
* \retval A stasis message type
*/
struct stasis_message_type *ast_channel_talking_stop(void);
/*!
* \since 12
* \brief Publish in the \ref ast_channel_topic or \ref ast_channel_topic_all

@@ -874,17 +874,15 @@ static struct ast_frame *audio_audiohook_write_list(struct ast_channel *chan, st
 }
 audiohook_set_internal_rate(audiohook, audiohook_list->list_internal_samp_rate, 1);
 /* Feed in frame to manipulation. */
-if (audiohook->manipulate_callback(audiohook, chan, middle_frame, direction)) {
-/* XXX IGNORE FAILURE */
+if (!audiohook->manipulate_callback(audiohook, chan, middle_frame, direction)) {
 /* If the manipulation fails then the frame will be returned in its original state.
 * Since there are potentially more manipulator callbacks in the list, no action should
 * be taken here to exit early. */
+middle_frame_manipulated = 1;
 }
 ast_audiohook_unlock(audiohook);
 }
 AST_LIST_TRAVERSE_SAFE_END;
-middle_frame_manipulated = 1;
 }

@@ -85,6 +85,34 @@ ASTERISK_FILE_VERSION(__FILE__, "$Revision$")
</see-also>
</managerEventInstance>
</managerEvent>
<managerEvent language="en_US" name="ChannelTalkingStart">
<managerEventInstance class="EVENT_FLAG_CLASS">
<synopsis>Raised when talking is detected on a channel.</synopsis>
<syntax>
<channel_snapshot/>
</syntax>
<see-also>
<ref type="function">TALK_DETECT</ref>
<ref type="managerEvent">ChannelTalkingStop</ref>
</see-also>
</managerEventInstance>
</managerEvent>
<managerEvent language="en_US" name="ChannelTalkingStop">
<managerEventInstance class="EVENT_FLAG_CLASS">
<synopsis>Raised when talking is no longer detected on a channel.</synopsis>
<syntax>
<channel_snapshot/>
<parameter name="Duration">
<para>The length in time, in milliseconds, that talking was
detected on the channel.</para>
</parameter>
</syntax>
<see-also>
<ref type="function">TALK_DETECT</ref>
<ref type="managerEvent">ChannelTalkingStart</ref>
</see-also>
</managerEventInstance>
</managerEvent>
***/
#define NUM_MULTI_CHANNEL_BLOB_BUCKETS 7
@@ -974,6 +1002,58 @@ static struct ast_json *dial_to_json(
return json;
}
static struct ast_manager_event_blob *talking_start_to_ami(struct stasis_message *msg)
{
struct ast_str *channel_string;
struct ast_channel_blob *obj = stasis_message_data(msg);
struct ast_manager_event_blob *blob;
channel_string = ast_manager_build_channel_state_string(obj->snapshot);
if (!channel_string) {
return NULL;
}
blob = ast_manager_event_blob_create(EVENT_FLAG_CALL, "ChannelTalkingStart",
"%s", ast_str_buffer(channel_string));
ast_free(channel_string);
return blob;
}
static struct ast_json *talking_start_to_json(struct stasis_message *message,
const struct stasis_message_sanitizer *sanitize)
{
return channel_blob_to_json(message, "ChannelTalkingStarted", sanitize);
}
static struct ast_manager_event_blob *talking_stop_to_ami(struct stasis_message *msg)
{
struct ast_str *channel_string;
struct ast_channel_blob *obj = stasis_message_data(msg);
int duration = ast_json_integer_get(ast_json_object_get(obj->blob, "duration"));
struct ast_manager_event_blob *blob;
channel_string = ast_manager_build_channel_state_string(obj->snapshot);
if (!channel_string) {
return NULL;
}
blob = ast_manager_event_blob_create(EVENT_FLAG_CALL, "ChannelTalkingStop",
"%s"
"Duration: %d\r\n",
ast_str_buffer(channel_string),
duration);
ast_free(channel_string);
return blob;
}
static struct ast_json *talking_stop_to_json(struct stasis_message *message,
const struct stasis_message_sanitizer *sanitize)
{
return channel_blob_to_json(message, "ChannelTalkingFinished", sanitize);
}
/*!
* @{ \brief Define channel message types.
*/
@@ -1008,6 +1088,14 @@ STASIS_MESSAGE_TYPE_DEFN(ast_channel_agent_login_type,
STASIS_MESSAGE_TYPE_DEFN(ast_channel_agent_logoff_type,
.to_ami = agent_logoff_to_ami,
);
STASIS_MESSAGE_TYPE_DEFN(ast_channel_talking_start,
.to_ami = talking_start_to_ami,
.to_json = talking_start_to_json,
);
STASIS_MESSAGE_TYPE_DEFN(ast_channel_talking_stop,
.to_ami = talking_stop_to_ami,
.to_json = talking_stop_to_json,
);
/*! @} */
@@ -1038,6 +1126,8 @@ static void stasis_channels_cleanup(void)
STASIS_MESSAGE_TYPE_CLEANUP(ast_channel_monitor_stop_type);
STASIS_MESSAGE_TYPE_CLEANUP(ast_channel_agent_login_type);
STASIS_MESSAGE_TYPE_CLEANUP(ast_channel_agent_logoff_type);
STASIS_MESSAGE_TYPE_CLEANUP(ast_channel_talking_start);
STASIS_MESSAGE_TYPE_CLEANUP(ast_channel_talking_stop);
}
int ast_stasis_channels_init(void)
@@ -1084,6 +1174,8 @@ int ast_stasis_channels_init(void)
res |= STASIS_MESSAGE_TYPE_INIT(ast_channel_moh_stop_type);
res |= STASIS_MESSAGE_TYPE_INIT(ast_channel_monitor_start_type);
res |= STASIS_MESSAGE_TYPE_INIT(ast_channel_monitor_stop_type);
res |= STASIS_MESSAGE_TYPE_INIT(ast_channel_talking_start);
res |= STASIS_MESSAGE_TYPE_INIT(ast_channel_talking_stop);
return res;
}

@@ -3070,6 +3070,180 @@ ari_validator ast_ari_validate_channel_state_change_fn(void)
return ast_ari_validate_channel_state_change;
}
int ast_ari_validate_channel_talking_finished(struct ast_json *json)
{
int res = 1;
struct ast_json_iter *iter;
int has_type = 0;
int has_application = 0;
int has_channel = 0;
int has_duration = 0;
for (iter = ast_json_object_iter(json); iter; iter = ast_json_object_iter_next(json, iter)) {
if (strcmp("type", ast_json_object_iter_key(iter)) == 0) {
int prop_is_valid;
has_type = 1;
prop_is_valid = ast_ari_validate_string(
ast_json_object_iter_value(iter));
if (!prop_is_valid) {
ast_log(LOG_ERROR, "ARI ChannelTalkingFinished field type failed validation\n");
res = 0;
}
} else
if (strcmp("application", ast_json_object_iter_key(iter)) == 0) {
int prop_is_valid;
has_application = 1;
prop_is_valid = ast_ari_validate_string(
ast_json_object_iter_value(iter));
if (!prop_is_valid) {
ast_log(LOG_ERROR, "ARI ChannelTalkingFinished field application failed validation\n");
res = 0;
}
} else
if (strcmp("timestamp", ast_json_object_iter_key(iter)) == 0) {
int prop_is_valid;
prop_is_valid = ast_ari_validate_date(
ast_json_object_iter_value(iter));
if (!prop_is_valid) {
ast_log(LOG_ERROR, "ARI ChannelTalkingFinished field timestamp failed validation\n");
res = 0;
}
} else
if (strcmp("channel", ast_json_object_iter_key(iter)) == 0) {
int prop_is_valid;
has_channel = 1;
prop_is_valid = ast_ari_validate_channel(
ast_json_object_iter_value(iter));
if (!prop_is_valid) {
ast_log(LOG_ERROR, "ARI ChannelTalkingFinished field channel failed validation\n");
res = 0;
}
} else
if (strcmp("duration", ast_json_object_iter_key(iter)) == 0) {
int prop_is_valid;
has_duration = 1;
prop_is_valid = ast_ari_validate_int(
ast_json_object_iter_value(iter));
if (!prop_is_valid) {
ast_log(LOG_ERROR, "ARI ChannelTalkingFinished field duration failed validation\n");
res = 0;
}
} else
{
ast_log(LOG_ERROR,
"ARI ChannelTalkingFinished has undocumented field %s\n",
ast_json_object_iter_key(iter));
res = 0;
}
}
if (!has_type) {
ast_log(LOG_ERROR, "ARI ChannelTalkingFinished missing required field type\n");
res = 0;
}
if (!has_application) {
ast_log(LOG_ERROR, "ARI ChannelTalkingFinished missing required field application\n");
res = 0;
}
if (!has_channel) {
ast_log(LOG_ERROR, "ARI ChannelTalkingFinished missing required field channel\n");
res = 0;
}
if (!has_duration) {
ast_log(LOG_ERROR, "ARI ChannelTalkingFinished missing required field duration\n");
res = 0;
}
return res;
}
ari_validator ast_ari_validate_channel_talking_finished_fn(void)
{
return ast_ari_validate_channel_talking_finished;
}
int ast_ari_validate_channel_talking_started(struct ast_json *json)
{
int res = 1;
struct ast_json_iter *iter;
int has_type = 0;
int has_application = 0;
int has_channel = 0;
for (iter = ast_json_object_iter(json); iter; iter = ast_json_object_iter_next(json, iter)) {
if (strcmp("type", ast_json_object_iter_key(iter)) == 0) {
int prop_is_valid;
has_type = 1;
prop_is_valid = ast_ari_validate_string(
ast_json_object_iter_value(iter));
if (!prop_is_valid) {
ast_log(LOG_ERROR, "ARI ChannelTalkingStarted field type failed validation\n");
res = 0;
}
} else
if (strcmp("application", ast_json_object_iter_key(iter)) == 0) {
int prop_is_valid;
has_application = 1;
prop_is_valid = ast_ari_validate_string(
ast_json_object_iter_value(iter));
if (!prop_is_valid) {
ast_log(LOG_ERROR, "ARI ChannelTalkingStarted field application failed validation\n");
res = 0;
}
} else
if (strcmp("timestamp", ast_json_object_iter_key(iter)) == 0) {
int prop_is_valid;
prop_is_valid = ast_ari_validate_date(
ast_json_object_iter_value(iter));
if (!prop_is_valid) {
ast_log(LOG_ERROR, "ARI ChannelTalkingStarted field timestamp failed validation\n");
res = 0;
}
} else
if (strcmp("channel", ast_json_object_iter_key(iter)) == 0) {
int prop_is_valid;
has_channel = 1;
prop_is_valid = ast_ari_validate_channel(
ast_json_object_iter_value(iter));
if (!prop_is_valid) {
ast_log(LOG_ERROR, "ARI ChannelTalkingStarted field channel failed validation\n");
res = 0;
}
} else
{
ast_log(LOG_ERROR,
"ARI ChannelTalkingStarted has undocumented field %s\n",
ast_json_object_iter_key(iter));
res = 0;
}
}
if (!has_type) {
ast_log(LOG_ERROR, "ARI ChannelTalkingStarted missing required field type\n");
res = 0;
}
if (!has_application) {
ast_log(LOG_ERROR, "ARI ChannelTalkingStarted missing required field application\n");
res = 0;
}
if (!has_channel) {
ast_log(LOG_ERROR, "ARI ChannelTalkingStarted missing required field channel\n");
res = 0;
}
return res;
}
ari_validator ast_ari_validate_channel_talking_started_fn(void)
{
return ast_ari_validate_channel_talking_started;
}
int ast_ari_validate_channel_userevent(struct ast_json *json)
{
int res = 1;
@@ -3647,6 +3821,12 @@ int ast_ari_validate_event(struct ast_json *json)
if (strcmp("ChannelStateChange", discriminator) == 0) {
return ast_ari_validate_channel_state_change(json);
} else
if (strcmp("ChannelTalkingFinished", discriminator) == 0) {
return ast_ari_validate_channel_talking_finished(json);
} else
if (strcmp("ChannelTalkingStarted", discriminator) == 0) {
return ast_ari_validate_channel_talking_started(json);
} else
if (strcmp("ChannelUserevent", discriminator) == 0) {
return ast_ari_validate_channel_userevent(json);
} else
@@ -3806,6 +3986,12 @@ int ast_ari_validate_message(struct ast_json *json)
if (strcmp("ChannelStateChange", discriminator) == 0) {
return ast_ari_validate_channel_state_change(json);
} else
if (strcmp("ChannelTalkingFinished", discriminator) == 0) {
return ast_ari_validate_channel_talking_finished(json);
} else
if (strcmp("ChannelTalkingStarted", discriminator) == 0) {
return ast_ari_validate_channel_talking_started(json);
} else
if (strcmp("ChannelUserevent", discriminator) == 0) {
return ast_ari_validate_channel_userevent(json);
} else

@@ -790,6 +790,42 @@ int ast_ari_validate_channel_state_change(struct ast_json *json);
*/
ari_validator ast_ari_validate_channel_state_change_fn(void);
/*!
* \brief Validator for ChannelTalkingFinished.
*
* Talking is no longer detected on the channel.
*
* \param json JSON object to validate.
* \returns True (non-zero) if valid.
* \returns False (zero) if invalid.
*/
int ast_ari_validate_channel_talking_finished(struct ast_json *json);
/*!
* \brief Function pointer to ast_ari_validate_channel_talking_finished().
*
* See \ref ast_ari_model_validators.h for more details.
*/
ari_validator ast_ari_validate_channel_talking_finished_fn(void);
/*!
* \brief Validator for ChannelTalkingStarted.
*
* Talking was detected on the channel.
*
* \param json JSON object to validate.
* \returns True (non-zero) if valid.
* \returns False (zero) if invalid.
*/
int ast_ari_validate_channel_talking_started(struct ast_json *json);
/*!
* \brief Function pointer to ast_ari_validate_channel_talking_started().
*
* See \ref ast_ari_model_validators.h for more details.
*/
ari_validator ast_ari_validate_channel_talking_started_fn(void);
/*!
* \brief Validator for ChannelUserevent.
*
@@ -1274,6 +1310,17 @@ ari_validator ast_ari_validate_application_fn(void);
* - application: string (required)
* - timestamp: Date
* - channel: Channel (required)
* ChannelTalkingFinished
* - type: string (required)
* - application: string (required)
* - timestamp: Date
* - channel: Channel (required)
* - duration: int (required)
* ChannelTalkingStarted
* - type: string (required)
* - application: string (required)
* - timestamp: Date
* - channel: Channel (required)
* ChannelUserevent
* - type: string (required)
* - application: string (required)

@@ -159,6 +159,8 @@
"ChannelUserevent",
"ChannelHangupRequest",
"ChannelVarset",
"ChannelTalkingStarted",
"ChannelTalkingFinished",
"EndpointStateChange", "EndpointStateChange",
"Dial", "Dial",
"StasisEnd", "StasisEnd",
@@ -572,6 +574,33 @@
}
}
},
"ChannelTalkingStarted": {
"id": "ChannelTalkingStarted",
"description": "Talking was detected on the channel.",
"properties": {
"channel": {
"required": true,
"type": "Channel",
"description": "The channel on which talking started."
}
}
},
"ChannelTalkingFinished": {
"id": "ChannelTalkingFinished",
"description": "Talking is no longer detected on the channel.",
"properties": {
"channel": {
"required": true,
"type": "Channel",
"description": "The channel on which talking completed."
},
"duration": {
"required": true,
"type": "int",
"description": "The length of time, in milliseconds, that talking was detected on the channel"
}
}
},
"EndpointStateChange": {
"id": "EndpointStateChange",
"description": "Endpoint state changed.",
