DPDK patches and discussions
From: Jerin Jacob <jerinjacobk@gmail.com>
To: Naga Harish K S V <s.v.naga.harish.k@intel.com>
Cc: jerinj@marvell.com, dev@dpdk.org, erik.g.carrillo@intel.com,
	 pbhagavatula@marvell.com, sthotton@marvell.com
Subject: Re: [PATCH v6 1/3] eventdev/timer: add periodic event timer support
Date: Wed, 14 Sep 2022 20:54:10 +0530
Message-ID: <CALBAE1NkLxtCHebndxk04oMgnbpq5ecXq64+WeoJuS19w6N=PA@mail.gmail.com>
In-Reply-To: <20220914135147.1846550-1-s.v.naga.harish.k@intel.com>

On Wed, Sep 14, 2022 at 7:21 PM Naga Harish K S V
<s.v.naga.harish.k@intel.com> wrote:
>
> This patch adds support for configuring and using periodic event
> timers in the software timer adapter.
>
> The structure ``rte_event_timer_adapter_stats`` is extended
> by adding a new field, ``evtim_drop_count``. This stat
> represents the number of times an event_timer expiry event
> is dropped by the event timer adapter.
>
> Update the software eventdev PMD timer_adapter_caps_get
> callback function to report support for the periodic event
> timer capability.
>
> Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
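
For readers skimming the thread, a minimal usage sketch of what this patch
enables (identifiers are from the public rte_event_timer_adapter API; the
event device setup, 'event_dev_id', and all values are illustrative
assumptions, not taken from this patch):

    #include <rte_event_timer_adapter.h>
    #include <rte_lcore.h>

    /* Create an adapter in periodic mode: every armed timer then
     * expires repeatedly, once per timer_tick_ns, until cancelled.
     */
    const struct rte_event_timer_adapter_conf conf = {
            .event_dev_id = event_dev_id,   /* configured elsewhere */
            .timer_adapter_id = 0,
            .socket_id = rte_socket_id(),
            .clk_src = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
            .timer_tick_ns = 100 * 1000 * 1000,   /* 100 ms period */
            .max_tmo_ns = 100 * 1000 * 1000,
            .nb_timers = 256,
            .flags = RTE_EVENT_TIMER_ADAPTER_F_PERIODIC,
    };
    struct rte_event_timer_adapter *adapter =
            rte_event_timer_adapter_create(&conf);

    /* Arm one periodic timer with a one-tick period. */
    struct rte_event_timer evtim = {
            .ev.op = RTE_EVENT_OP_NEW,
            .ev.queue_id = 0,
            .ev.sched_type = RTE_SCHED_TYPE_ATOMIC,
            .ev.event_type = RTE_EVENT_TYPE_TIMER,
            .state = RTE_EVENT_TIMER_NOT_ARMED,
            .timeout_ticks = 1,
    };
    struct rte_event_timer *evtims[] = { &evtim };
    if (adapter != NULL)
            rte_event_timer_arm_burst(adapter, evtims, 1);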


Please fix the doc build issue reported by the CI:

See: http://mails.dpdk.org/archives/test-report/2022-September/306650.html
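
If the failure is the usual RST one, the new release notes bullet most
likely just needs a blank line before the comment block that follows it,
i.e. something like (my guess; please verify against the CI log):

    * eventdev: Added ``evtim_drop_count`` field to
      ``rte_event_timer_adapter_stats`` structure.

    .. This section should contain ABI changes. Sample format: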


> ---
>  doc/guides/rel_notes/deprecation.rst   |   7 --
>  doc/guides/rel_notes/release_22_11.rst |   2 +
>  drivers/event/sw/sw_evdev.c            |   2 +-
>  lib/eventdev/eventdev_pmd.h            |   3 +
>  lib/eventdev/rte_event_timer_adapter.c | 106 ++++++++++++++++---------
>  lib/eventdev/rte_event_timer_adapter.h |   2 +
>  lib/eventdev/rte_eventdev.c            |   6 +-
>  7 files changed, 83 insertions(+), 45 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index e7583cae4c..fd8ef4dff7 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -186,13 +186,6 @@ Deprecation Notices
>    Event will be one of the configuration fields,
>    together with additional vector parameters.
>
> -* eventdev: The structure ``rte_event_timer_adapter_stats`` will be
> -  extended by adding a new field ``evtim_drop_count``.
> -  This counter will represent the number of times an event_timer expiry event
> -  is dropped by the timer adapter.
> -  This field will be used to add periodic mode support
> -  to the software timer adapter in DPDK 22.11.
> -
>  * eventdev: The function pointer declaration ``eventdev_stop_flush_t``
>    will be renamed to ``rte_eventdev_stop_flush_t`` in DPDK 22.11.
>
> diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
> index c32c18ff49..976a74b054 100644
> --- a/doc/guides/rel_notes/release_22_11.rst
> +++ b/doc/guides/rel_notes/release_22_11.rst
> @@ -94,6 +94,8 @@ API Changes
>  ABI Changes
>  -----------
>
> +* eventdev: Added ``evtim_drop_count`` field to ``rte_event_timer_adapter_stats``
> +  structure.
>  .. This section should contain ABI changes. Sample format:
>
>     * sample: Add a short 1-2 sentence description of the ABI change
> diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
> index c9efe35bf4..e866150379 100644
> --- a/drivers/event/sw/sw_evdev.c
> +++ b/drivers/event/sw/sw_evdev.c
> @@ -564,7 +564,7 @@ sw_timer_adapter_caps_get(const struct rte_eventdev *dev, uint64_t flags,
>  {
>         RTE_SET_USED(dev);
>         RTE_SET_USED(flags);
> -       *caps = 0;
> +       *caps = RTE_EVENT_TIMER_ADAPTER_SW_CAP;
>
>         /* Use default SW ops */
>         *ops = NULL;
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index f514a37575..085563ff52 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -81,6 +81,9 @@ extern "C" {
>   * the ethdev to eventdev use a SW service function
>   */
>
> +#define RTE_EVENT_TIMER_ADAPTER_SW_CAP \
> +               RTE_EVENT_TIMER_ADAPTER_CAP_PERIODIC
> +
>  #define RTE_EVENTDEV_DETACHED  (0)
>  #define RTE_EVENTDEV_ATTACHED  (1)
>
> diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c
> index e0d978d641..d2480060c5 100644
> --- a/lib/eventdev/rte_event_timer_adapter.c
> +++ b/lib/eventdev/rte_event_timer_adapter.c
> @@ -53,6 +53,14 @@ static const struct event_timer_adapter_ops swtim_ops;
>  #define EVTIM_SVC_LOG_DBG(...) (void)0
>  #endif
>
> +static inline enum rte_timer_type
> +get_timer_type(const struct rte_event_timer_adapter *adapter)
> +{
> +       return (adapter->data->conf.flags &
> +                       RTE_EVENT_TIMER_ADAPTER_F_PERIODIC) ?
> +                       PERIODICAL : SINGLE;
> +}
> +
>  static int
>  default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id,
>                      void *conf_arg)
> @@ -195,13 +203,14 @@ rte_event_timer_adapter_create_ext(
>         adapter->data->conf = *conf;  /* copy conf structure */
>
>         /* Query eventdev PMD for timer adapter capabilities and ops */
> -       ret = dev->dev_ops->timer_adapter_caps_get(dev,
> -                                                  adapter->data->conf.flags,
> -                                                  &adapter->data->caps,
> -                                                  &adapter->ops);
> -       if (ret < 0) {
> -               rte_errno = -ret;
> -               goto free_memzone;
> +       if (dev->dev_ops->timer_adapter_caps_get) {
> +               ret = dev->dev_ops->timer_adapter_caps_get(dev,
> +                               adapter->data->conf.flags,
> +                               &adapter->data->caps, &adapter->ops);
> +               if (ret < 0) {
> +                       rte_errno = -ret;
> +                       goto free_memzone;
> +               }
>         }
>
>         if (!(adapter->data->caps &
> @@ -348,13 +357,14 @@ rte_event_timer_adapter_lookup(uint16_t adapter_id)
>         dev = &rte_eventdevs[adapter->data->event_dev_id];
>
>         /* Query eventdev PMD for timer adapter capabilities and ops */
> -       ret = dev->dev_ops->timer_adapter_caps_get(dev,
> -                                                  adapter->data->conf.flags,
> -                                                  &adapter->data->caps,
> -                                                  &adapter->ops);
> -       if (ret < 0) {
> -               rte_errno = EINVAL;
> -               return NULL;
> +       if (dev->dev_ops->timer_adapter_caps_get) {
> +               ret = dev->dev_ops->timer_adapter_caps_get(dev,
> +                               adapter->data->conf.flags,
> +                               &adapter->data->caps, &adapter->ops);
> +               if (ret < 0) {
> +                       rte_errno = EINVAL;
> +                       return NULL;
> +               }
>         }
>
>         /* If eventdev PMD did not provide ops, use default software
> @@ -612,35 +622,44 @@ swtim_callback(struct rte_timer *tim)
>         uint64_t opaque;
>         int ret;
>         int n_lcores;
> +       enum rte_timer_type type;
>
>         opaque = evtim->impl_opaque[1];
>         adapter = (struct rte_event_timer_adapter *)(uintptr_t)opaque;
>         sw = swtim_pmd_priv(adapter);
> +       type = get_timer_type(adapter);
> +
> +       if (unlikely(sw->in_use[lcore].v == 0)) {
> +               sw->in_use[lcore].v = 1;
> +               n_lcores = __atomic_fetch_add(&sw->n_poll_lcores, 1,
> +                                            __ATOMIC_RELAXED);
> +               __atomic_store_n(&sw->poll_lcores[n_lcores], lcore,
> +                               __ATOMIC_RELAXED);
> +       }
>
>         ret = event_buffer_add(&sw->buffer, &evtim->ev);
>         if (ret < 0) {
> -               /* If event buffer is full, put timer back in list with
> -                * immediate expiry value, so that we process it again on the
> -                * next iteration.
> -                */
> -               ret = rte_timer_alt_reset(sw->timer_data_id, tim, 0, SINGLE,
> -                                         lcore, NULL, evtim);
> -               if (ret < 0) {
> -                       EVTIM_LOG_DBG("event buffer full, failed to reset "
> -                                     "timer with immediate expiry value");
> +               if (type == SINGLE) {
> +                       /* If event buffer is full, put timer back in list with
> +                        * immediate expiry value, so that we process it again
> +                        * on the next iteration.
> +                        */
> +                       ret = rte_timer_alt_reset(sw->timer_data_id, tim, 0,
> +                                               SINGLE, lcore, NULL, evtim);
> +                       if (ret < 0) {
> +                               EVTIM_LOG_DBG("event buffer full, failed to "
> +                                               "reset timer with immediate "
> +                                               "expiry value");
> +                       } else {
> +                               sw->stats.evtim_retry_count++;
> +                               EVTIM_LOG_DBG("event buffer full, resetting "
> +                                               "rte_timer with immediate "
> +                                               "expiry value");
> +                       }
>                 } else {
> -                       sw->stats.evtim_retry_count++;
> -                       EVTIM_LOG_DBG("event buffer full, resetting rte_timer "
> -                                     "with immediate expiry value");
> +                       sw->stats.evtim_drop_count++;
>                 }
>
> -               if (unlikely(sw->in_use[lcore].v == 0)) {
> -                       sw->in_use[lcore].v = 1;
> -                       n_lcores = __atomic_fetch_add(&sw->n_poll_lcores, 1,
> -                                                    __ATOMIC_RELAXED);
> -                       __atomic_store_n(&sw->poll_lcores[n_lcores], lcore,
> -                                       __ATOMIC_RELAXED);
> -               }
>         } else {
>                 EVTIM_BUF_LOG_DBG("buffered an event timer expiry event");
>
> @@ -654,10 +673,15 @@ swtim_callback(struct rte_timer *tim)
>                         sw->n_expired_timers = 0;
>                 }
>
> -               sw->expired_timers[sw->n_expired_timers++] = tim;
> +               /* Don't free rte_timer for a periodic event timer until
> +                * it is cancelled
> +                */
> +               if (type == SINGLE)
> +                       sw->expired_timers[sw->n_expired_timers++] = tim;
>                 sw->stats.evtim_exp_count++;
>
> -               __atomic_store_n(&evtim->state, RTE_EVENT_TIMER_NOT_ARMED,
> +               if (type == SINGLE)
> +                       __atomic_store_n(&evtim->state, RTE_EVENT_TIMER_NOT_ARMED,
>                                 __ATOMIC_RELEASE);
>         }
>
> @@ -947,6 +971,12 @@ swtim_uninit(struct rte_event_timer_adapter *adapter)
>                            swtim_free_tim,
>                            sw);
>
> +       ret = rte_timer_data_dealloc(sw->timer_data_id);
> +       if (ret < 0) {
> +               EVTIM_LOG_ERR("failed to deallocate timer data instance");
> +               return ret;
> +       }
> +
>         ret = rte_service_component_unregister(sw->service_id);
>         if (ret < 0) {
>                 EVTIM_LOG_ERR("failed to unregister service component");
> @@ -1053,6 +1083,7 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
>         /* Timer list for this lcore is not in use. */
>         uint16_t exp_state = 0;
>         enum rte_event_timer_state n_state;
> +       enum rte_timer_type type = SINGLE;
>
>  #ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>         /* Check that the service is running. */
> @@ -1092,6 +1123,9 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
>                 return 0;
>         }
>
> +       /* update timer type for periodic adapter */
> +       type = get_timer_type(adapter);
> +
>         for (i = 0; i < nb_evtims; i++) {
>                 n_state = __atomic_load_n(&evtims[i]->state, __ATOMIC_ACQUIRE);
>                 if (n_state == RTE_EVENT_TIMER_ARMED) {
> @@ -1135,7 +1169,7 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
>
>                 cycles = get_timeout_cycles(evtims[i], adapter);
>                 ret = rte_timer_alt_reset(sw->timer_data_id, tim, cycles,
> -                                         SINGLE, lcore_id, NULL, evtims[i]);
> +                                         type, lcore_id, NULL, evtims[i]);
>                 if (ret < 0) {
>                         /* tim was in RUNNING or CONFIG state */
>                         __atomic_store_n(&evtims[i]->state,
> diff --git a/lib/eventdev/rte_event_timer_adapter.h b/lib/eventdev/rte_event_timer_adapter.h
> index eab8e59a57..cd10db19e4 100644
> --- a/lib/eventdev/rte_event_timer_adapter.h
> +++ b/lib/eventdev/rte_event_timer_adapter.h
> @@ -193,6 +193,8 @@ struct rte_event_timer_adapter_stats {
>         /**< Event timer retry count */
>         uint64_t adapter_tick_count;
>         /**< Tick count for the adapter, at its resolution */
> +       uint64_t evtim_drop_count;
> +       /**< event timer expiries dropped */
>  };
>
>  struct rte_event_timer_adapter;
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 1dc4f966be..59d8b49ef6 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -139,7 +139,11 @@ rte_event_timer_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
>
>         if (caps == NULL)
>                 return -EINVAL;
> -       *caps = 0;
> +
> +       if (dev->dev_ops->timer_adapter_caps_get == NULL)
> +               *caps = RTE_EVENT_TIMER_ADAPTER_SW_CAP;
> +       else
> +               *caps = 0;
>
>         return dev->dev_ops->timer_adapter_caps_get ?
>                                 (*dev->dev_ops->timer_adapter_caps_get)(dev,
> --
> 2.25.1
>
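
As an aside for users of the new counter: once the adapter is running,
drops can be observed through the existing stats call (a sketch only;
'adapter' is the handle from the sketch above, and error handling plus
the surrounding event loop are elided):

    #include <inttypes.h>
    #include <stdio.h>

    /* Poll the adapter statistics, including the new drop counter. */
    struct rte_event_timer_adapter_stats stats;
    if (rte_event_timer_adapter_stats_get(adapter, &stats) == 0)
            printf("expired=%" PRIu64 " retried=%" PRIu64
                   " dropped=%" PRIu64 "\n",
                   stats.evtim_exp_count,
                   stats.evtim_retry_count,
                   stats.evtim_drop_count);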

Thread overview: 38+ messages
2022-08-03 16:25 [PATCH 1/4] " Naga Harish K S V
2022-08-03 16:25 ` [PATCH 4/4] test/event: update periodic event timer tests Naga Harish K S V
2022-08-10  7:07 ` [PATCH v2 1/4] eventdev/timer: add periodic event timer support Naga Harish K S V
2022-08-10  7:07   ` [PATCH v2 4/4] test/event: update periodic event timer tests Naga Harish K S V
2022-08-10 19:55   ` [PATCH v2 1/4] eventdev/timer: add periodic event timer support Carrillo, Erik G
2022-08-11 15:43     ` Naga Harish K, S V
2022-08-11 15:36   ` [PATCH v3 " Naga Harish K S V
2022-08-11 15:36     ` [PATCH v3 4/4] test/event: update periodic event timer tests Naga Harish K S V
2022-08-11 19:22     ` [PATCH v3 1/4] eventdev/timer: add periodic event timer support Carrillo, Erik G
2022-08-12 16:10       ` Naga Harish K, S V
2022-08-12 16:07     ` [PATCH v4 " Naga Harish K S V
2022-08-12 16:07       ` [PATCH v4 4/4] test/event: update periodic event timer tests Naga Harish K S V
2022-08-18 13:13         ` Carrillo, Erik G
2022-08-18 13:06       ` [PATCH v4 1/4] eventdev/timer: add periodic event timer support Carrillo, Erik G
2022-09-14  5:08       ` [PATCH v5 " Naga Harish K S V
2022-09-14  5:08         ` [PATCH v5 2/4] timer: fix function to stop all timers Naga Harish K S V
2022-09-14  5:08         ` [PATCH v5 3/4] test/event: update periodic event timer tests Naga Harish K S V
2022-09-14  5:08         ` [PATCH v5 4/4] doc: remove deprecation notice Naga Harish K S V
2022-09-14  5:15       ` [PATCH v5 1/4] eventdev/timer: add periodic event timer support Naga Harish K S V
2022-09-14  5:15         ` [PATCH v5 2/4] timer: fix function to stop all timers Naga Harish K S V
2022-09-14  5:15         ` [PATCH v5 3/4] test/event: update periodic event timer tests Naga Harish K S V
2022-09-14  5:15         ` [PATCH v5 4/4] doc: remove deprecation notice Naga Harish K S V
2022-09-14 12:41           ` Jerin Jacob
2022-09-14 13:54             ` Naga Harish K, S V
2022-09-14 13:51         ` [PATCH v6 1/3] eventdev/timer: add periodic event timer support Naga Harish K S V
2022-09-14 13:51           ` [PATCH v6 2/3] timer: fix function to stop all timers Naga Harish K S V
2022-09-14 13:51           ` [PATCH v6 3/3] test/event: update periodic event timer tests Naga Harish K S V
2022-09-14 15:24           ` Jerin Jacob [this message]
2022-09-14 16:43             ` [PATCH v6 1/3] eventdev/timer: add periodic event timer support Naga Harish K, S V
2022-09-14 15:33           ` [PATCH v7 " Naga Harish K S V
2022-09-14 15:33             ` [PATCH v7 2/3] timer: fix function to stop all timers Naga Harish K S V
2022-09-15  6:41               ` Jerin Jacob
2022-09-16  4:40                 ` Naga Harish K, S V
2022-10-05 12:59                   ` Thomas Monjalon
2022-09-26  5:21                 ` Naga Harish K, S V
2022-09-14 15:33             ` [PATCH v7 3/3] test/event: update periodic event timer tests Naga Harish K S V
2022-09-15  6:43               ` Jerin Jacob
2022-09-15  6:40             ` [PATCH v7 1/3] eventdev/timer: add periodic event timer support Jerin Jacob
