From: Jerin Jacob <jerinjacobk@gmail.com>
To: Pavan Nikhilesh <pbhagavatula@marvell.com>
Cc: Jerin Jacob <jerinj@marvell.com>,
	John McNamara <john.mcnamara@intel.com>,
	 Marko Kovacevic <marko.kovacevic@intel.com>,
	dpdk-dev <dev@dpdk.org>, dpdk stable <stable@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] event/octeontx: validate events requested against available
Date: Sun, 4 Oct 2020 16:10:09 +0530
Message-ID: <CALBAE1Mc0NxqCBWpHHTSRm5thhi7ydG1aGrho3RcMzEz5Q5GVQ@mail.gmail.com> (raw)
In-Reply-To: <20200728182224.1359-1-pbhagavatula@marvell.com>

On Tue, Jul 28, 2020 at 11:52 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Validate the number of events requested across all the Rx/TIM event
> adapters against the total number of events configured in ssopf.
>
> The number of events available to ssopf can be reconfigured by passing
> the required count in the kernel boot arguments; it is limited only by
> DRAM size. Example:
>         ssopf.max_events=2097152
>
> Cc: stable@dpdk.org
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>


Applied to dpdk-next-eventdev/for-main. Thanks.

> ---
>  doc/guides/eventdevs/octeontx.rst    | 23 +++++--
>  drivers/event/octeontx/ssovf_evdev.c | 99 +++++++++++++++++++++++++---
>  drivers/event/octeontx/ssovf_evdev.h |  6 ++
>  drivers/event/octeontx/timvf_evdev.c | 68 +++++++++++++++++--
>  drivers/event/octeontx/timvf_evdev.h |  2 +
>  5 files changed, 176 insertions(+), 22 deletions(-)
>
> diff --git a/doc/guides/eventdevs/octeontx.rst b/doc/guides/eventdevs/octeontx.rst
> index 9a3646db0..21d251341 100644
> --- a/doc/guides/eventdevs/octeontx.rst
> +++ b/doc/guides/eventdevs/octeontx.rst
> @@ -140,9 +140,22 @@ follows:
>  When timvf is used as Event timer adapter event schedule type
>  ``RTE_SCHED_TYPE_PARALLEL`` is not supported.
>
> -Max mempool size
> -~~~~~~~~~~~~~~~~
> +Max number of events
> +~~~~~~~~~~~~~~~~~~~~
>
> -Max mempool size when using OCTEON TX Eventdev (SSO) should be limited to 128K.
> -When running dpdk-test-eventdev on OCTEON TX the application can limit the
> -number of mbufs by using the option ``--pool_sz 131072``
> +The maximum number of events in OCTEON TX Eventdev (SSO) is limited only by
> +DRAM size and can be configured by passing the limit in the kernel bootargs:
> +
> +.. code-block:: console
> +
> +        ssopf.max_events=4194304
> +
> +The same can be verified by looking at the following sysfs entry:
> +
> +.. code-block:: console
> +
> +        # cat /sys/module/ssopf/parameters/max_events
> +        4194304
> +
> +The total number of events that the event adapters (such as Rx/Timer) add to
> +SSO should be limited to the above configured value.
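
For application writers, a note that is illustration rather than part of
the patch: the sysfs value above can be read at runtime to budget events
before configuring the adapters. A minimal standalone sketch, assuming
the sysfs path documented above and placeholder pool/timer counts:

  #include <stdio.h>

  int main(void)
  {
          /* Hypothetical application budget, not from the patch. */
          unsigned int rxq_pool_size = 131072, nb_timers = 65536;
          unsigned int max_events = 0;
          FILE *f = fopen("/sys/module/ssopf/parameters/max_events", "r");

          if (f == NULL)
                  return 1;
          if (fscanf(f, "%u", &max_events) != 1)
                  max_events = 0;
          fclose(f);

          if (rxq_pool_size + nb_timers > max_events) {
                  fprintf(stderr, "need %u events, SSO allows only %u\n",
                          rxq_pool_size + nb_timers, max_events);
                  return 1;
          }
          return 0;
  }
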
> diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
> index 4fc4e8f7e..33cb50204 100644
> --- a/drivers/event/octeontx/ssovf_evdev.c
> +++ b/drivers/event/octeontx/ssovf_evdev.c
> @@ -384,22 +384,78 @@ ssovf_eth_rx_adapter_queue_add(const struct rte_eventdev *dev,
>                 const struct rte_eth_dev *eth_dev, int32_t rx_queue_id,
>                 const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
>  {
> -       int ret = 0;
>         const struct octeontx_nic *nic = eth_dev->data->dev_private;
>         struct ssovf_evdev *edev = ssovf_pmd_priv(dev);
> +       uint16_t free_idx = UINT16_MAX;
> +       struct octeontx_rxq *rxq;
>         pki_mod_qos_t pki_qos;
> -       RTE_SET_USED(dev);
> +       uint8_t found = false;
> +       int i, ret = 0;
> +       void *old_ptr;
>
>         ret = strncmp(eth_dev->data->name, "eth_octeontx", 12);
>         if (ret)
>                 return -EINVAL;
>
> -       if (rx_queue_id >= 0)
> -               return -EINVAL;
> -
>         if (queue_conf->ev.sched_type == RTE_SCHED_TYPE_PARALLEL)
>                 return -ENOTSUP;
>
> +       /* eth_octeontx only supports one Rx queue. */
> +       rx_queue_id = rx_queue_id == -1 ? 0 : rx_queue_id;
> +       rxq = eth_dev->data->rx_queues[rx_queue_id];
> +       /* Add rxq pool to list of used pools and reduce available events. */
> +       for (i = 0; i < edev->rxq_pools; i++) {
> +               if (edev->rxq_pool_array[i] == (uintptr_t)rxq->pool) {
> +                       edev->rxq_pool_rcnt[i]++;
> +                       found = true;
> +                       break;
> +               } else if (free_idx == UINT16_MAX &&
> +                          edev->rxq_pool_array[i] == 0) {
> +                       free_idx = i;
> +               }
> +       }
> +
> +       if (!found) {
> +               uint16_t idx;
> +
> +               if (edev->available_events < rxq->pool->size) {
> +                       ssovf_log_err(
> +                               "Available events %"PRIu32" < events %"PRIu32" in Rx queue pool",
> +                               edev->available_events, rxq->pool->size);
> +                       return -ENOMEM;
> +               }
> +
> +               if (free_idx != UINT16_MAX) {
> +                       idx = free_idx;
> +               } else {
> +                       old_ptr = edev->rxq_pool_array;
> +                       edev->rxq_pools++;
> +                       edev->rxq_pool_array = rte_realloc(
> +                               edev->rxq_pool_array,
> +                               sizeof(uint64_t) * edev->rxq_pools, 0);
> +                       if (edev->rxq_pool_array == NULL) {
> +                               edev->rxq_pools--;
> +                               edev->rxq_pool_array = old_ptr;
> +                               return -ENOMEM;
> +                       }
> +
> +                       old_ptr = edev->rxq_pool_rcnt;
> +                       edev->rxq_pool_rcnt = rte_realloc(
> +                               edev->rxq_pool_rcnt,
> +                               sizeof(uint8_t) * edev->rxq_pools, 0);
> +                       if (edev->rxq_pool_rcnt == NULL) {
> +                               edev->rxq_pools--;
> +                               edev->rxq_pool_rcnt = old_ptr;
> +                               return -ENOMEM;
> +                       }
> +                       idx = edev->rxq_pools - 1;
> +               }
> +
> +               edev->rxq_pool_array[idx] = (uintptr_t)rxq->pool;
> +               edev->rxq_pool_rcnt[idx] = 1;
> +               edev->available_events -= rxq->pool->size;
> +       }
> +
>         memset(&pki_qos, 0, sizeof(pki_mod_qos_t));
>
>         pki_qos.port_type = 0;
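
The accounting above is worth spelling out: each distinct mempool backing
an Rx queue is charged against available_events exactly once; additional
queues sharing that pool only bump a reference count, and queue_del below
hands the events back once the last reference is gone. A standalone
sketch of the idea, with a fixed-size array and invented names in place
of the driver's rte_realloc-grown arrays:

  #include <stdint.h>
  #include <stdio.h>

  #define MAX_POOLS 8

  static uintptr_t pool_addr[MAX_POOLS];
  static uint8_t pool_rcnt[MAX_POOLS];
  static uint32_t available_events = 4096;

  /* Charge a pool once; later users of the same pool take a reference. */
  static int charge_pool(uintptr_t pool, uint32_t pool_size)
  {
          int i, free_idx = -1;

          for (i = 0; i < MAX_POOLS; i++) {
                  if (pool_addr[i] == pool) {
                          pool_rcnt[i]++; /* already charged once */
                          return 0;
                  }
                  if (free_idx < 0 && pool_addr[i] == 0)
                          free_idx = i;
          }
          if (free_idx < 0 || available_events < pool_size)
                  return -1;
          pool_addr[free_idx] = pool;
          pool_rcnt[free_idx] = 1;
          available_events -= pool_size;
          return 0;
  }

  int main(void)
  {
          uintptr_t pool = 0x1000; /* stand-in for rxq->pool */

          charge_pool(pool, 1024); /* first queue: 4096 -> 3072 */
          charge_pool(pool, 1024); /* shared pool: refcount only */
          printf("available_events = %u\n", available_events);
          return 0;
  }
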
> @@ -432,10 +488,28 @@ static int
>  ssovf_eth_rx_adapter_queue_del(const struct rte_eventdev *dev,
>                 const struct rte_eth_dev *eth_dev, int32_t rx_queue_id)
>  {
> -       int ret = 0;
>         const struct octeontx_nic *nic = eth_dev->data->dev_private;
> +       struct ssovf_evdev *edev = ssovf_pmd_priv(dev);
> +       struct octeontx_rxq *rxq;
>         pki_del_qos_t pki_qos;
> -       RTE_SET_USED(dev);
> +       uint8_t found = false;
> +       int i, ret = 0;
> +
> +       rx_queue_id = rx_queue_id == -1 ? 0 : rx_queue_id;
> +       rxq = eth_dev->data->rx_queues[rx_queue_id];
> +       for (i = 0; i < edev->rxq_pools; i++) {
> +               if (edev->rxq_pool_array[i] == (uintptr_t)rxq->pool) {
> +                       found = true;
> +                       break;
> +               }
> +       }
> +
> +       /* Release the pool's events back only when the last Rx queue
> +        * using this pool is deleted. */
> +       if (found && --edev->rxq_pool_rcnt[i] == 0) {
> +               edev->rxq_pool_array[i] = 0;
> +               edev->available_events += rxq->pool->size;
> +       }
>
>         ret = strncmp(eth_dev->data->name, "eth_octeontx", 12);
>         if (ret)
> @@ -754,6 +828,8 @@ ssovf_vdev_probe(struct rte_vdev_device *vdev)
>         }
>         eventdev->dev_ops = &ssovf_ops;
>
> +       timvf_set_eventdevice(eventdev);
> +
>         /* For secondary processes, the primary has done all the work */
>         if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
>                 ssovf_fastpath_fns_set(eventdev);
> @@ -781,9 +857,12 @@ ssovf_vdev_probe(struct rte_vdev_device *vdev)
>         edev->min_deq_timeout_ns = info.min_deq_timeout_ns;
>         edev->max_deq_timeout_ns = info.max_deq_timeout_ns;
>         edev->max_num_events =  info.max_num_events;
> -       ssovf_log_dbg("min_deq_tmo=%"PRId64" max_deq_tmo=%"PRId64" max_evts=%d",
> -                       info.min_deq_timeout_ns, info.max_deq_timeout_ns,
> -                       info.max_num_events);
> +       edev->available_events = info.max_num_events;
> +
> +       ssovf_log_dbg("min_deq_tmo=%" PRId64 " max_deq_tmo=%" PRId64
> +                     " max_evts=%d",
> +                     info.min_deq_timeout_ns, info.max_deq_timeout_ns,
> +                     info.max_num_events);
>
>         if (!edev->max_event_ports || !edev->max_event_queues) {
>                 ssovf_log_err("Not enough eventdev resource queues=%d ports=%d",
> diff --git a/drivers/event/octeontx/ssovf_evdev.h b/drivers/event/octeontx/ssovf_evdev.h
> index aa5acf246..90d760a54 100644
> --- a/drivers/event/octeontx/ssovf_evdev.h
> +++ b/drivers/event/octeontx/ssovf_evdev.h
> @@ -146,6 +146,12 @@ struct ssovf_evdev {
>         uint32_t min_deq_timeout_ns;
>         uint32_t max_deq_timeout_ns;
>         int32_t max_num_events;
> +       uint32_t available_events;
> +       uint16_t rxq_pools;
> +       uint64_t *rxq_pool_array;
> +       uint8_t *rxq_pool_rcnt;
> +       uint16_t tim_ring_cnt;
> +       uint16_t *tim_ring_ids;
>  } __rte_cache_aligned;
>
>  /* Event port aka HWS */
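
Taken together, the new fields maintain a single invariant:
available_events equals max_num_events minus the events claimed by the
distinct Rx-queue pools and the live timer rings. As a standalone
illustration (hypothetical aggregate struct, not driver code):

  #include <stdint.h>

  struct sso_budget {
          uint32_t max_num_events;   /* from ssopf.max_events */
          uint32_t rxq_pool_events;  /* sum over distinct pools in use */
          uint32_t tim_ring_events;  /* sum over live timer rings */
          uint32_t available_events; /* what adapters may still claim */
  };

  static int budget_invariant_holds(const struct sso_budget *b)
  {
          return b->available_events ==
                 b->max_num_events - b->rxq_pool_events -
                 b->tim_ring_events;
  }
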
> diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
> index c61aacacc..8af4d6e37 100644
> --- a/drivers/event/octeontx/timvf_evdev.c
> +++ b/drivers/event/octeontx/timvf_evdev.c
> @@ -2,10 +2,13 @@
>   * Copyright(c) 2017 Cavium, Inc
>   */
>
> +#include "ssovf_evdev.h"
>  #include "timvf_evdev.h"
>
>  RTE_LOG_REGISTER(otx_logtype_timvf, pmd.event.octeontx.timer, NOTICE);
>
> +static struct rte_eventdev *event_dev;
> +
>  struct __rte_packed timvf_mbox_dev_info {
>         uint64_t ring_active[4];
>         uint64_t clk_freq;
> @@ -222,19 +225,21 @@ timvf_ring_stop(const struct rte_event_timer_adapter *adptr)
>  static int
>  timvf_ring_create(struct rte_event_timer_adapter *adptr)
>  {
> -       char pool_name[25];
> -       int ret;
> -       uint8_t tim_ring_id;
> -       uint64_t nb_timers;
>         struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
> +       uint16_t free_idx = UINT16_MAX;
> +       unsigned int mp_flags = 0;
> +       struct ssovf_evdev *edev;
>         struct timvf_ring *timr;
>         const char *mempool_ops;
> -       unsigned int mp_flags = 0;
> +       uint8_t tim_ring_id;
> +       char pool_name[25];
> +       int i, ret;
>
>         tim_ring_id = timvf_get_ring();
>         if (tim_ring_id == UINT8_MAX)
>                 return -ENODEV;
>
> +       edev = ssovf_pmd_priv(event_dev);
>         timr = rte_zmalloc("octeontx_timvf_priv",
>                         sizeof(struct timvf_ring), 0);
>         if (timr == NULL)
> @@ -256,10 +261,42 @@ timvf_ring_create(struct rte_event_timer_adapter *adptr)
>         timr->nb_bkts = (timr->max_tout / timr->tck_nsec);
>         timr->vbar0 = timvf_bar(timr->tim_ring_id, 0);
>         timr->bkt_pos = (uint8_t *)timr->vbar0 + TIM_VRING_REL;
> -       nb_timers = rcfg->nb_timers;
> +       timr->nb_timers = rcfg->nb_timers;
>         timr->get_target_bkt = bkt_mod;
>
> -       timr->nb_chunks = nb_timers / nb_chunk_slots;
> +       if (edev->available_events < timr->nb_timers) {
> +               timvf_log_err("Available events %"PRIu32" < requested timer events %"PRIu64,
> +                             edev->available_events, timr->nb_timers);
> +               rte_free(timr);
> +               return -ENOMEM;
> +       }
> +
> +       for (i = 0; i < edev->tim_ring_cnt; i++) {
> +               if (edev->tim_ring_ids[i] == UINT16_MAX)
> +                       free_idx = i;
> +       }
> +
> +       if (free_idx == UINT16_MAX) {
> +               void *old_ptr;
> +
> +               edev->tim_ring_cnt++;
> +               old_ptr = edev->tim_ring_ids;
> +               edev->tim_ring_ids =
> +                       rte_realloc(edev->tim_ring_ids,
> +                                   sizeof(uint16_t) * edev->tim_ring_cnt, 0);
> +               if (edev->tim_ring_ids == NULL) {
> +                       edev->tim_ring_ids = old_ptr;
> +                       edev->tim_ring_cnt--;
> +                       return -ENOMEM;
> +               }
> +
> +               free_idx = edev->tim_ring_cnt - 1;
> +       }
> +
> +       edev->tim_ring_ids[free_idx] = tim_ring_id;
> +       edev->available_events -= timr->nb_timers;
> +
> +       timr->nb_chunks = timr->nb_timers / nb_chunk_slots;
>
>         /* Try to optimize the bucket parameters. */
>         if ((rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
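
The grow-or-reuse handling of tim_ring_ids above uses the same
realloc-with-rollback pattern as the Rx pool arrays: the old pointer is
saved so a failed grow leaves the existing array usable. A standalone
sketch, with plain realloc() standing in for rte_realloc():

  #include <stdint.h>
  #include <stdlib.h>

  static uint16_t *ring_ids;
  static uint16_t ring_cnt;

  static int track_ring(uint16_t id)
  {
          uint16_t i, free_idx = UINT16_MAX;
          void *old_ptr;

          for (i = 0; i < ring_cnt; i++)
                  if (ring_ids[i] == UINT16_MAX) /* slot freed earlier */
                          free_idx = i;

          if (free_idx == UINT16_MAX) {
                  old_ptr = ring_ids;
                  ring_ids = realloc(ring_ids,
                                     sizeof(uint16_t) * (ring_cnt + 1));
                  if (ring_ids == NULL) {
                          ring_ids = old_ptr; /* old array still valid */
                          return -1;
                  }
                  free_idx = ring_cnt++;
          }
          ring_ids[free_idx] = id;
          return 0;
  }
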
> @@ -328,6 +365,17 @@ static int
>  timvf_ring_free(struct rte_event_timer_adapter *adptr)
>  {
>         struct timvf_ring *timr = adptr->data->adapter_priv;
> +       struct ssovf_evdev *edev;
> +       int i;
> +
> +       edev = ssovf_pmd_priv(event_dev);
> +       for (i = 0; i < edev->tim_ring_cnt; i++) {
> +               if (edev->tim_ring_ids[i] == timr->tim_ring_id) {
> +                       edev->available_events += timr->nb_timers;
> +                       edev->tim_ring_ids[i] = UINT16_MAX;
> +                       break;
> +               }
> +       }
>
>         rte_mempool_free(timr->chunk_pool);
>         rte_free(timr->bkt);
> @@ -396,3 +444,9 @@ timvf_timer_adapter_caps_get(const struct rte_eventdev *dev, uint64_t flags,
>         *ops = &timvf_ops;
>         return 0;
>  }
> +
> +void
> +timvf_set_eventdevice(struct rte_eventdev *dev)
> +{
> +       event_dev = dev;
> +}
> diff --git a/drivers/event/octeontx/timvf_evdev.h b/drivers/event/octeontx/timvf_evdev.h
> index d0e5921db..2977063d6 100644
> --- a/drivers/event/octeontx/timvf_evdev.h
> +++ b/drivers/event/octeontx/timvf_evdev.h
> @@ -175,6 +175,7 @@ struct timvf_ring {
>         void *bkt_pos;
>         uint64_t max_tout;
>         uint64_t nb_chunks;
> +       uint64_t nb_timers;
>         enum timvf_clk_src clk_src;
>         uint16_t tim_ring_id;
>  } __rte_cache_aligned;
> @@ -217,5 +218,6 @@ uint16_t timvf_timer_arm_tmo_brst_stats(
>                 struct rte_event_timer **tim, const uint64_t timeout_tick,
>                 const uint16_t nb_timers);
>  void timvf_set_chunk_refill(struct timvf_ring * const timr, uint8_t use_fpa);
> +void timvf_set_eventdevice(struct rte_eventdev *dev);
>
>  #endif /* __TIMVF_EVDEV_H__ */
> --
> 2.17.1
>
