From: Pavan Nikhilesh <pbhagavatula@marvell.com>
To: John McNamara, Marko Kovacevic, Pavan Nikhilesh
Date: Tue, 28 Jul 2020 23:52:23 +0530
Message-ID: <20200728182224.1359-1-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-stable] [dpdk-dev] [PATCH] event/octeontx: validate events requested against available

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Validate the number of events configured in ssopf against the total
number of events configured across all the Rx/TIM event adapters.
The events available to ssopf can be reconfigured by passing the
required amount through kernel bootargs; they are limited only by
DRAM size.

Example: ssopf.max_events=2097152

Cc: stable@dpdk.org

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 doc/guides/eventdevs/octeontx.rst    | 23 +++++--
 drivers/event/octeontx/ssovf_evdev.c | 99 +++++++++++++++++++++++++---
 drivers/event/octeontx/ssovf_evdev.h |  6 ++
 drivers/event/octeontx/timvf_evdev.c | 68 +++++++++++++++++--
 drivers/event/octeontx/timvf_evdev.h |  2 +
 5 files changed, 176 insertions(+), 22 deletions(-)
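The accounting this patch introduces boils down to one per-device event
budget that every adapter draws from. Below is a minimal illustrative
sketch of that idea (not part of the patch; the struct and function
names are simplified stand-ins for the driver code that follows):

#include <stdint.h>
#include <errno.h>

/* One budget per SSO device, initialised from ssopf's max_events. */
struct sso_event_budget {
        uint32_t available_events;
};

/* Reserve events when an Rx queue mempool or a timer ring is added. */
static int
sso_reserve_events(struct sso_event_budget *b, uint32_t nb_events)
{
        if (b->available_events < nb_events)
                return -ENOMEM; /* request exceeds what ssopf can hold */
        b->available_events -= nb_events;
        return 0;
}

/* Return events to the budget when the queue or ring is released. */
static void
sso_release_events(struct sso_event_budget *b, uint32_t nb_events)
{
        b->available_events += nb_events;
}

In the driver the budget is the new ssovf_evdev::available_events
field: it is charged with the Rx queue's mempool size or the timer
adapter's nb_timers on adapter add/create, and refunded on queue delete
and ring free.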
diff --git a/doc/guides/eventdevs/octeontx.rst b/doc/guides/eventdevs/octeontx.rst
index 9a3646db0..21d251341 100644
--- a/doc/guides/eventdevs/octeontx.rst
+++ b/doc/guides/eventdevs/octeontx.rst
@@ -140,9 +140,22 @@ follows:
 When timvf is used as Event timer adapter event schedule type
 ``RTE_SCHED_TYPE_PARALLEL`` is not supported.
 
-Max mempool size
-~~~~~~~~~~~~~~~~
+Max number of events
+~~~~~~~~~~~~~~~~~~~~
 
-Max mempool size when using OCTEON TX Eventdev (SSO) should be limited to 128K.
-When running dpdk-test-eventdev on OCTEON TX the application can limit the
-number of mbufs by using the option ``--pool_sz 131072``
+Max number of events in OCTEON TX Eventdev (SSO) are only limited by DRAM size
+and they can be configured by passing limits to kernel bootargs as follows:
+
+.. code-block:: console
+
+    ssopf.max_events=4194304
+
+The same can be verified by looking at the following sysfs entry:
+
+.. code-block:: console
+
+    # cat /sys/module/ssopf/parameters/max_events
+    4194304
+
+The maximum number of events that can be added to SSO by the event adapters such
+as (Rx/Timer) should be limited to the above configured value.
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 4fc4e8f7e..33cb50204 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -384,22 +384,78 @@ ssovf_eth_rx_adapter_queue_add(const struct rte_eventdev *dev,
         const struct rte_eth_dev *eth_dev, int32_t rx_queue_id,
         const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
 {
-        int ret = 0;
         const struct octeontx_nic *nic = eth_dev->data->dev_private;
         struct ssovf_evdev *edev = ssovf_pmd_priv(dev);
+        uint16_t free_idx = UINT16_MAX;
+        struct octeontx_rxq *rxq;
         pki_mod_qos_t pki_qos;
-        RTE_SET_USED(dev);
+        uint8_t found = false;
+        int i, ret = 0;
+        void *old_ptr;
 
         ret = strncmp(eth_dev->data->name, "eth_octeontx", 12);
         if (ret)
                 return -EINVAL;
 
-        if (rx_queue_id >= 0)
-                return -EINVAL;
-
         if (queue_conf->ev.sched_type == RTE_SCHED_TYPE_PARALLEL)
                 return -ENOTSUP;
 
+        /* eth_octeontx only supports one rq. */
+        rx_queue_id = rx_queue_id == -1 ? 0 : rx_queue_id;
+        rxq = eth_dev->data->rx_queues[rx_queue_id];
+        /* Add rxq pool to list of used pools and reduce available events. */
+        for (i = 0; i < edev->rxq_pools; i++) {
+                if (edev->rxq_pool_array[i] == (uintptr_t)rxq->pool) {
+                        edev->rxq_pool_rcnt[i]++;
+                        found = true;
+                        break;
+                } else if (free_idx == UINT16_MAX &&
+                           edev->rxq_pool_array[i] == 0) {
+                        free_idx = i;
+                }
+        }
+
+        if (!found) {
+                uint16_t idx;
+
+                if (edev->available_events < rxq->pool->size) {
+                        ssovf_log_err(
+                                "Max available events %"PRIu32" requested events in rxq pool %"PRIu32"",
+                                edev->available_events, rxq->pool->size);
+                        return -ENOMEM;
+                }
+
+                if (free_idx != UINT16_MAX) {
+                        idx = free_idx;
+                } else {
+                        old_ptr = edev->rxq_pool_array;
+                        edev->rxq_pools++;
+                        edev->rxq_pool_array = rte_realloc(
+                                edev->rxq_pool_array,
+                                sizeof(uint64_t) * edev->rxq_pools, 0);
+                        if (edev->rxq_pool_array == NULL) {
+                                edev->rxq_pools--;
+                                edev->rxq_pool_array = old_ptr;
+                                return -ENOMEM;
+                        }
+
+                        old_ptr = edev->rxq_pool_rcnt;
+                        edev->rxq_pool_rcnt = rte_realloc(
+                                edev->rxq_pool_rcnt,
+                                sizeof(uint8_t) * edev->rxq_pools, 0);
+                        if (edev->rxq_pool_rcnt == NULL) {
+                                edev->rxq_pools--;
+                                edev->rxq_pool_rcnt = old_ptr;
+                                return -ENOMEM;
+                        }
+                        idx = edev->rxq_pools - 1;
+                }
+
+                edev->rxq_pool_array[idx] = (uintptr_t)rxq->pool;
+                edev->rxq_pool_rcnt[idx] = 1;
+                edev->available_events -= rxq->pool->size;
+        }
+
         memset(&pki_qos, 0, sizeof(pki_mod_qos_t));
 
         pki_qos.port_type = 0;
@@ -432,10 +488,28 @@ static int
 ssovf_eth_rx_adapter_queue_del(const struct rte_eventdev *dev,
         const struct rte_eth_dev *eth_dev, int32_t rx_queue_id)
 {
-        int ret = 0;
         const struct octeontx_nic *nic = eth_dev->data->dev_private;
+        struct ssovf_evdev *edev = ssovf_pmd_priv(dev);
+        struct octeontx_rxq *rxq;
         pki_del_qos_t pki_qos;
-        RTE_SET_USED(dev);
+        uint8_t found = false;
+        int i, ret = 0;
+
+        rx_queue_id = rx_queue_id == -1 ? 0 : rx_queue_id;
+        rxq = eth_dev->data->rx_queues[rx_queue_id];
+        for (i = 0; i < edev->rxq_pools; i++) {
+                if (edev->rxq_pool_array[i] == (uintptr_t)rxq->pool) {
+                        found = true;
+                        break;
+                }
+        }
+
+        if (found) {
+                edev->rxq_pool_rcnt[i]--;
+                if (edev->rxq_pool_rcnt[i] == 0)
+                        edev->rxq_pool_array[i] = 0;
+                edev->available_events += rxq->pool->size;
+        }
 
         ret = strncmp(eth_dev->data->name, "eth_octeontx", 12);
         if (ret)
@@ -754,6 +828,8 @@ ssovf_vdev_probe(struct rte_vdev_device *vdev)
         }
 
         eventdev->dev_ops = &ssovf_ops;
+        timvf_set_eventdevice(eventdev);
+
         /* For secondary processes, the primary has done all the work */
         if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
                 ssovf_fastpath_fns_set(eventdev);
@@ -781,9 +857,12 @@ ssovf_vdev_probe(struct rte_vdev_device *vdev)
         edev->min_deq_timeout_ns = info.min_deq_timeout_ns;
         edev->max_deq_timeout_ns = info.max_deq_timeout_ns;
         edev->max_num_events = info.max_num_events;
-        ssovf_log_dbg("min_deq_tmo=%"PRId64" max_deq_tmo=%"PRId64" max_evts=%d",
-                        info.min_deq_timeout_ns, info.max_deq_timeout_ns,
-                        info.max_num_events);
+        edev->available_events = info.max_num_events;
+
+        ssovf_log_dbg("min_deq_tmo=%" PRId64 " max_deq_tmo=%" PRId64
+                      " max_evts=%d",
+                      info.min_deq_timeout_ns, info.max_deq_timeout_ns,
+                      info.max_num_events);
 
         if (!edev->max_event_ports || !edev->max_event_queues) {
                 ssovf_log_err("Not enough eventdev resource queues=%d ports=%d",
diff --git a/drivers/event/octeontx/ssovf_evdev.h b/drivers/event/octeontx/ssovf_evdev.h
index aa5acf246..90d760a54 100644
--- a/drivers/event/octeontx/ssovf_evdev.h
+++ b/drivers/event/octeontx/ssovf_evdev.h
@@ -146,6 +146,12 @@ struct ssovf_evdev {
         uint32_t min_deq_timeout_ns;
         uint32_t max_deq_timeout_ns;
         int32_t max_num_events;
+        uint32_t available_events;
+        uint16_t rxq_pools;
+        uint64_t *rxq_pool_array;
+        uint8_t *rxq_pool_rcnt;
+        uint16_t tim_ring_cnt;
+        uint16_t *tim_ring_ids;
 } __rte_cache_aligned;
 
 /* Event port aka HWS */
diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
index c61aacacc..8af4d6e37 100644
--- a/drivers/event/octeontx/timvf_evdev.c
+++ b/drivers/event/octeontx/timvf_evdev.c
@@ -2,10 +2,13 @@
  * Copyright(c) 2017 Cavium, Inc
  */
 
+#include "ssovf_evdev.h"
 #include "timvf_evdev.h"
 
 RTE_LOG_REGISTER(otx_logtype_timvf, pmd.event.octeontx.timer, NOTICE);
 
+static struct rte_eventdev *event_dev;
+
 struct __rte_packed timvf_mbox_dev_info {
         uint64_t ring_active[4];
         uint64_t clk_freq;
@@ -222,19 +225,21 @@ timvf_ring_stop(const struct rte_event_timer_adapter *adptr)
 static int
 timvf_ring_create(struct rte_event_timer_adapter *adptr)
 {
-        char pool_name[25];
-        int ret;
-        uint8_t tim_ring_id;
-        uint64_t nb_timers;
         struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
+        uint16_t free_idx = UINT16_MAX;
+        unsigned int mp_flags = 0;
+        struct ssovf_evdev *edev;
         struct timvf_ring *timr;
         const char *mempool_ops;
-        unsigned int mp_flags = 0;
+        uint8_t tim_ring_id;
+        char pool_name[25];
+        int i, ret;
 
         tim_ring_id = timvf_get_ring();
         if (tim_ring_id == UINT8_MAX)
                 return -ENODEV;
 
+        edev = ssovf_pmd_priv(event_dev);
         timr = rte_zmalloc("octeontx_timvf_priv", sizeof(struct timvf_ring),
                         0);
         if (timr == NULL)
@@ -256,10 +261,42 @@ timvf_ring_create(struct rte_event_timer_adapter *adptr)
         timr->nb_bkts = (timr->max_tout / timr->tck_nsec);
         timr->vbar0 = timvf_bar(timr->tim_ring_id, 0);
         timr->bkt_pos = (uint8_t *)timr->vbar0 + TIM_VRING_REL;
-        nb_timers = rcfg->nb_timers;
+        timr->nb_timers = rcfg->nb_timers;
         timr->get_target_bkt = bkt_mod;
 
-        timr->nb_chunks = nb_timers / nb_chunk_slots;
+        if (edev->available_events < timr->nb_timers) {
+                timvf_log_err(
+                        "Max available events %"PRIu32" requested timer events %"PRIu64"",
+                        edev->available_events, timr->nb_timers);
+                return -ENOMEM;
+        }
+
+        for (i = 0; i < edev->tim_ring_cnt; i++) {
+                if (edev->tim_ring_ids[i] == UINT16_MAX)
+                        free_idx = i;
+        }
+
+        if (free_idx == UINT16_MAX) {
+                void *old_ptr;
+
+                edev->tim_ring_cnt++;
+                old_ptr = edev->tim_ring_ids;
+                edev->tim_ring_ids =
+                        rte_realloc(edev->tim_ring_ids,
+                                sizeof(uint16_t) * edev->tim_ring_cnt, 0);
+                if (edev->tim_ring_ids == NULL) {
+                        edev->tim_ring_ids = old_ptr;
+                        edev->tim_ring_cnt--;
+                        return -ENOMEM;
+                }
+
+                edev->available_events -= timr->nb_timers;
+        } else {
+                edev->tim_ring_ids[free_idx] = tim_ring_id;
+                edev->available_events -= timr->nb_timers;
+        }
+
+        timr->nb_chunks = timr->nb_timers / nb_chunk_slots;
 
         /* Try to optimize the bucket parameters. */
         if ((rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
@@ -328,6 +365,17 @@ static int
 timvf_ring_free(struct rte_event_timer_adapter *adptr)
 {
         struct timvf_ring *timr = adptr->data->adapter_priv;
+        struct ssovf_evdev *edev;
+        int i;
+
+        edev = ssovf_pmd_priv(event_dev);
+        for (i = 0; i < edev->tim_ring_cnt; i++) {
+                if (edev->tim_ring_ids[i] == timr->tim_ring_id) {
+                        edev->available_events += timr->nb_timers;
+                        edev->tim_ring_ids[i] = UINT16_MAX;
+                        break;
+                }
+        }
 
         rte_mempool_free(timr->chunk_pool);
         rte_free(timr->bkt);
@@ -396,3 +444,9 @@ timvf_timer_adapter_caps_get(const struct rte_eventdev *dev, uint64_t flags,
         *ops = &timvf_ops;
         return 0;
 }
+
+void
+timvf_set_eventdevice(struct rte_eventdev *dev)
+{
+        event_dev = dev;
+}
diff --git a/drivers/event/octeontx/timvf_evdev.h b/drivers/event/octeontx/timvf_evdev.h
index d0e5921db..2977063d6 100644
--- a/drivers/event/octeontx/timvf_evdev.h
+++ b/drivers/event/octeontx/timvf_evdev.h
@@ -175,6 +175,7 @@ struct timvf_ring {
         void *bkt_pos;
         uint64_t max_tout;
         uint64_t nb_chunks;
+        uint64_t nb_timers;
         enum timvf_clk_src clk_src;
         uint16_t tim_ring_id;
 } __rte_cache_aligned;
@@ -217,5 +218,6 @@ uint16_t timvf_timer_arm_tmo_brst_stats(
         struct rte_event_timer **tim, const uint64_t timeout_tick,
         const uint16_t nb_timers);
 void timvf_set_chunk_refill(struct timvf_ring * const timr, uint8_t use_fpa);
+void timvf_set_eventdevice(struct rte_eventdev *dev);
 
 #endif /* __TIMVF_EVDEV_H__ */
-- 
2.17.1
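With these checks in place, an Rx adapter queue add or a timer ring
create can fail with -ENOMEM once the combined demand exceeds the ssopf
budget. A rough application-side sanity check might look like the
following sketch; check_event_budget is a hypothetical helper written
for illustration, not a DPDK API:

#include <stdint.h>
#include <rte_eventdev.h>

/* Verify the planned Rx mempool size plus the timer count fits within
 * the event budget the SSO eventdev reports.
 */
static int
check_event_budget(uint8_t dev_id, uint32_t rx_pool_size, uint64_t nb_timers)
{
        struct rte_event_dev_info info;

        if (rte_event_dev_info_get(dev_id, &info) < 0)
                return -1;

        if (info.max_num_events < 0)
                return 0; /* device reports no limit */

        if (rx_pool_size + nb_timers > (uint64_t)info.max_num_events)
                return -1; /* adapter setup would fail with -ENOMEM */

        return 0;
}

rte_event_dev_info_get() exposes the same limit through max_num_events,
which on OCTEON TX should mirror the ssopf.max_events bootarg.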