* [dpdk-dev] [PATCH 01/36] event/cnxk: add build infra and device setup
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 02/36] event/cnxk: add device capabilities function pbhagavatula
` (35 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Thomas Monjalon, Pavan Nikhilesh, Shijith Thotton,
Ray Kinsella, Neil Horman, Anatoly Burakov
Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add meson build infrastructure along with the event device
SSO initialization and teardown functions.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
MAINTAINERS | 6 +++
doc/guides/eventdevs/cnxk.rst | 55 ++++++++++++++++++++++++
doc/guides/eventdevs/index.rst | 1 +
drivers/event/cnxk/cnxk_eventdev.c | 68 ++++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 39 +++++++++++++++++
drivers/event/cnxk/meson.build | 13 ++++++
drivers/event/cnxk/version.map | 3 ++
drivers/event/meson.build | 2 +-
8 files changed, 186 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/eventdevs/cnxk.rst
create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
create mode 100644 drivers/event/cnxk/meson.build
create mode 100644 drivers/event/cnxk/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index e341bc81d..89c23c49c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1211,6 +1211,12 @@ M: Jerin Jacob <jerinj@marvell.com>
F: drivers/event/octeontx2/
F: doc/guides/eventdevs/octeontx2.rst
+Marvell OCTEON CNXK
+M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+M: Shijith Thotton <sthotton@marvell.com>
+F: drivers/event/cnxk/
+F: doc/guides/eventdevs/cnxk.rst
+
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Nipun Gupta <nipun.gupta@nxp.com>
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
new file mode 100644
index 000000000..e94225bd3
--- /dev/null
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -0,0 +1,55 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2021 Marvell International Ltd.
+
+OCTEON CNXK SSO Eventdev Driver
+===============================
+
+The SSO PMD (**librte_event_cnxk**) provides poll mode
+eventdev driver support for the inbuilt event device found in the
+**Marvell OCTEON CNXK** SoC family.
+
+More information about the OCTEON CNXK SoC can be found at `Marvell Official Website
+<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
+
+Supported OCTEON CNXK SoCs
+--------------------------
+
+- CN9XX
+- CN10XX
+
+Features
+--------
+
+Features of the OCTEON CNXK SSO PMD are:
+
+- 256 Event queues
+- 26 (dual) and 52 (single) Event ports on CN9XX
+- 52 Event ports on CN10XX
+- HW event scheduler
+- Supports 1M flows per event queue
+- Flow based event pipelining
+- Flow pinning support in flow based event pipelining
+- Queue based event pipelining
+- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
+- Event scheduling QoS based on event queue priority
+- Open system with configurable amount of outstanding events limited only by
+ DRAM
+- HW accelerated dequeue timeout support to enable power management
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+ See :doc:`../platform/cnxk` for setup information.
+
+Debugging Options
+-----------------
+
+.. _table_octeon_cnxk_event_debug_options:
+
+.. table:: OCTEON CNXK event device debug options
+
+ +---+------------+-------------------------------------------------------+
+ | # | Component | EAL log command |
+ +===+============+=======================================================+
+ | 1 | SSO | --log-level='pmd\.event\.cnxk,8' |
+ +---+------------+-------------------------------------------------------+
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index f5b69b39d..00203e0f0 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -11,6 +11,7 @@ application through the eventdev API.
:maxdepth: 2
:numbered:
+ cnxk
dlb
dlb2
dpaa
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
new file mode 100644
index 000000000..b7f9c81bd
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+int
+cnxk_sso_init(struct rte_eventdev *event_dev)
+{
+ const struct rte_memzone *mz = NULL;
+ struct rte_pci_device *pci_dev;
+ struct cnxk_sso_evdev *dev;
+ int rc;
+
+ mz = rte_memzone_reserve(CNXK_SSO_MZ_NAME, sizeof(uint64_t),
+ SOCKET_ID_ANY, 0);
+ if (mz == NULL) {
+ plt_err("Failed to create eventdev memzone");
+ return -ENOMEM;
+ }
+
+ dev = cnxk_sso_pmd_priv(event_dev);
+ pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
+ dev->sso.pci_dev = pci_dev;
+
+ *(uint64_t *)mz->addr = (uint64_t)dev;
+
+ /* Initialize the base cnxk_dev object */
+ rc = roc_sso_dev_init(&dev->sso);
+ if (rc < 0) {
+ plt_err("Failed to initialize RoC SSO rc=%d", rc);
+ goto error;
+ }
+
+ dev->is_timeout_deq = 0;
+ dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+ dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
+ dev->max_num_events = -1;
+ dev->nb_event_queues = 0;
+ dev->nb_event_ports = 0;
+
+ return 0;
+
+error:
+ rte_memzone_free(mz);
+ return rc;
+}
+
+int
+cnxk_sso_fini(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ /* For secondary processes, nothing to be done */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ roc_sso_rsrc_fini(&dev->sso);
+ roc_sso_dev_fini(&dev->sso);
+
+ return 0;
+}
+
+int
+cnxk_sso_remove(struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_remove(pci_dev, cnxk_sso_fini);
+}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
new file mode 100644
index 000000000..148b327a1
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CNXK_EVENTDEV_H__
+#define __CNXK_EVENTDEV_H__
+
+#include <rte_pci.h>
+
+#include <eventdev_pmd_pci.h>
+
+#include "roc_api.h"
+
+#define USEC2NSEC(__us) ((__us)*1E3)
+
+#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+
+struct cnxk_sso_evdev {
+ struct roc_sso sso;
+ uint8_t is_timeout_deq;
+ uint8_t nb_event_queues;
+ uint8_t nb_event_ports;
+ uint32_t min_dequeue_timeout_ns;
+ uint32_t max_dequeue_timeout_ns;
+ int32_t max_num_events;
+} __rte_cache_aligned;
+
+static inline struct cnxk_sso_evdev *
+cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
+{
+ return event_dev->data->dev_private;
+}
+
+/* Common ops API. */
+int cnxk_sso_init(struct rte_eventdev *event_dev);
+int cnxk_sso_fini(struct rte_eventdev *event_dev);
+int cnxk_sso_remove(struct rte_pci_device *pci_dev);
+
+#endif /* __CNXK_EVENTDEV_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
new file mode 100644
index 000000000..110b45188
--- /dev/null
+++ b/drivers/event/cnxk/meson.build
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2021 Marvell International Ltd.
+#
+
+if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on 64-bit Linux'
+ subdir_done()
+endif
+
+sources = files('cnxk_eventdev.c')
+
+deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
diff --git a/drivers/event/cnxk/version.map b/drivers/event/cnxk/version.map
new file mode 100644
index 000000000..ee80c5172
--- /dev/null
+++ b/drivers/event/cnxk/version.map
@@ -0,0 +1,3 @@
+INTERNAL {
+ local: *;
+};
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index a49288a5d..583ebbc9c 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -5,7 +5,7 @@ if is_windows
subdir_done()
endif
-drivers = ['dlb', 'dlb2', 'dpaa', 'dpaa2', 'octeontx2', 'opdl', 'skeleton', 'sw',
+drivers = ['cnxk', 'dlb', 'dlb2', 'dpaa', 'dpaa2', 'octeontx2', 'opdl', 'skeleton', 'sw',
'dsw']
if not (toolchain == 'gcc' and cc.version().version_compare('<4.8.6') and
dpdk_conf.has('RTE_ARCH_ARM64'))
--
2.17.1
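The dequeue-timeout bounds programmed in `cnxk_sso_init()` come from the `USEC2NSEC` macro defined in `cnxk_eventdev.h`. A standalone sketch of the conversion (macro copied from the patch; the surrounding variables are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Same conversion as the driver's USEC2NSEC: microseconds to
 * nanoseconds. The macro expands to a double (1E3); the driver
 * stores the result in uint32_t timeout fields, truncating it
 * back to an integer. */
#define USEC2NSEC(__us) ((__us) * 1E3)

static const uint32_t min_dequeue_timeout_ns = (uint32_t)USEC2NSEC(1);
static const uint32_t max_dequeue_timeout_ns = (uint32_t)USEC2NSEC(0x3FF);
```

So the device advertises a dequeue timeout window of 1 us (1000 ns) to 1023 us (1023000 ns).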
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH 02/36] event/cnxk: add device capabilities function
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 01/36] event/cnxk: add build infra and device setup pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 03/36] event/cnxk: add platform specific device probe pbhagavatula
` (34 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add the info_get function to return details of the device's queue,
flow, prioritization, and other capabilities.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 24 ++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 4 ++++
2 files changed, 28 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index b7f9c81bd..ae553fd23 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -4,6 +4,30 @@
#include "cnxk_eventdev.h"
+void
+cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
+ struct rte_event_dev_info *dev_info)
+{
+
+ dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
+ dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
+ dev_info->max_event_queues = dev->max_event_queues;
+ dev_info->max_event_queue_flows = (1ULL << 20);
+ dev_info->max_event_queue_priority_levels = 8;
+ dev_info->max_event_priority_levels = 1;
+ dev_info->max_event_ports = dev->max_event_ports;
+ dev_info->max_event_port_dequeue_depth = 1;
+ dev_info->max_event_port_enqueue_depth = 1;
+ dev_info->max_num_events = dev->max_num_events;
+ dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
+ RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
+ RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+ RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
+ RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 148b327a1..583492948 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,6 +17,8 @@
struct cnxk_sso_evdev {
struct roc_sso sso;
+ uint8_t max_event_queues;
+ uint8_t max_event_ports;
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
@@ -35,5 +37,7 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
int cnxk_sso_init(struct rte_eventdev *event_dev);
int cnxk_sso_fini(struct rte_eventdev *event_dev);
int cnxk_sso_remove(struct rte_pci_device *pci_dev);
+void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
+ struct rte_event_dev_info *dev_info);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
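The limits advertised by `cnxk_sso_info_get()` line up with the feature list in cnxk.rst. A quick sketch of the two headline numbers (constants copied from the patch; the variable names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* 2^20 flows per event queue is the "1M flows per event queue"
 * feature from the docs, and the 8 queue priority levels match
 * the 3-bit HWGRP priority range <0-7>. */
static const uint64_t max_event_queue_flows = 1ULL << 20;
static const uint8_t max_event_queue_priority_levels = 8;
```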
* [dpdk-dev] [PATCH 03/36] event/cnxk: add platform specific device probe
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 01/36] event/cnxk: add build infra and device setup pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 02/36] event/cnxk: add device capabilities function pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 04/36] event/cnxk: add common configuration validation pbhagavatula
` (33 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov
Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add platform specific event device probe and remove functions; also
add the event device info get function.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 101 +++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 102 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 2 +
drivers/event/cnxk/meson.build | 4 +-
4 files changed, 208 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
new file mode 100644
index 000000000..34238d3b5
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+static void
+cn10k_sso_set_rsrc(void *arg)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ dev->max_event_ports = dev->sso.max_hws;
+ dev->max_event_queues =
+ dev->sso.max_hwgrp > RTE_EVENT_MAX_QUEUES_PER_DEV ?
+ RTE_EVENT_MAX_QUEUES_PER_DEV :
+ dev->sso.max_hwgrp;
+}
+
+static void
+cn10k_sso_info_get(struct rte_eventdev *event_dev,
+ struct rte_event_dev_info *dev_info)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ dev_info->driver_name = RTE_STR(EVENTDEV_NAME_CN10K_PMD);
+ cnxk_sso_info_get(dev, dev_info);
+}
+
+static struct rte_eventdev_ops cn10k_sso_dev_ops = {
+ .dev_infos_get = cn10k_sso_info_get,
+};
+
+static int
+cn10k_sso_init(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ if (RTE_CACHE_LINE_SIZE != 64) {
+ plt_err("Driver not compiled for CN10K");
+ return -EFAULT;
+ }
+
+ rc = plt_init();
+ if (rc < 0) {
+ plt_err("Failed to initialize platform model");
+ return rc;
+ }
+
+ event_dev->dev_ops = &cn10k_sso_dev_ops;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = cnxk_sso_init(event_dev);
+ if (rc < 0)
+ return rc;
+
+ cn10k_sso_set_rsrc(cnxk_sso_pmd_priv(event_dev));
+ if (!dev->max_event_ports || !dev->max_event_queues) {
+ plt_err("Not enough eventdev resource queues=%d ports=%d",
+ dev->max_event_queues, dev->max_event_ports);
+ cnxk_sso_fini(event_dev);
+ return -ENODEV;
+ }
+
+ plt_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
+ event_dev->data->name, dev->max_event_queues,
+ dev->max_event_ports);
+
+ return 0;
+}
+
+static int
+cn10k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_probe(pci_drv, pci_dev,
+ sizeof(struct cnxk_sso_evdev),
+ cn10k_sso_init);
+}
+
+static const struct rte_pci_id cn10k_pci_sso_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver cn10k_pci_sso = {
+ .id_table = cn10k_pci_sso_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+ .probe = cn10k_sso_probe,
+ .remove = cnxk_sso_remove,
+};
+
+RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
+RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
+RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
new file mode 100644
index 000000000..238540828
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+#define CN9K_DUAL_WS_NB_WS 2
+#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
+
+static void
+cn9k_sso_set_rsrc(void *arg)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ if (dev->dual_ws)
+ dev->max_event_ports = dev->sso.max_hws / CN9K_DUAL_WS_NB_WS;
+ else
+ dev->max_event_ports = dev->sso.max_hws;
+ dev->max_event_queues =
+ dev->sso.max_hwgrp > RTE_EVENT_MAX_QUEUES_PER_DEV ?
+ RTE_EVENT_MAX_QUEUES_PER_DEV :
+ dev->sso.max_hwgrp;
+}
+
+static void
+cn9k_sso_info_get(struct rte_eventdev *event_dev,
+ struct rte_event_dev_info *dev_info)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ dev_info->driver_name = RTE_STR(EVENTDEV_NAME_CN9K_PMD);
+ cnxk_sso_info_get(dev, dev_info);
+}
+
+static struct rte_eventdev_ops cn9k_sso_dev_ops = {
+ .dev_infos_get = cn9k_sso_info_get,
+};
+
+static int
+cn9k_sso_init(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ if (RTE_CACHE_LINE_SIZE != 128) {
+ plt_err("Driver not compiled for CN9K");
+ return -EFAULT;
+ }
+
+ rc = plt_init();
+ if (rc < 0) {
+ plt_err("Failed to initialize platform model");
+ return rc;
+ }
+
+ event_dev->dev_ops = &cn9k_sso_dev_ops;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = cnxk_sso_init(event_dev);
+ if (rc < 0)
+ return rc;
+
+ cn9k_sso_set_rsrc(cnxk_sso_pmd_priv(event_dev));
+ if (!dev->max_event_ports || !dev->max_event_queues) {
+ plt_err("Not enough eventdev resource queues=%d ports=%d",
+ dev->max_event_queues, dev->max_event_ports);
+ cnxk_sso_fini(event_dev);
+ return -ENODEV;
+ }
+
+ plt_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
+ event_dev->data->name, dev->max_event_queues,
+ dev->max_event_ports);
+
+ return 0;
+}
+
+static int
+cn9k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_probe(
+ pci_drv, pci_dev, sizeof(struct cnxk_sso_evdev), cn9k_sso_init);
+}
+
+static const struct rte_pci_id cn9k_pci_sso_map[] = {
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver cn9k_pci_sso = {
+ .id_table = cn9k_pci_sso_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+ .probe = cn9k_sso_probe,
+ .remove = cnxk_sso_remove,
+};
+
+RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
+RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
+RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 583492948..b98c783ae 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -25,6 +25,8 @@ struct cnxk_sso_evdev {
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
+ /* CN9K */
+ uint8_t dual_ws;
} __rte_cache_aligned;
static inline struct cnxk_sso_evdev *
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 110b45188..c969595c1 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -8,6 +8,8 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
subdir_done()
endif
-sources = files('cnxk_eventdev.c')
+sources = files('cn10k_eventdev.c',
+ 'cn9k_eventdev.c',
+ 'cnxk_eventdev.c')
deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
--
2.17.1
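The CN9K-specific dual workslot handling above can be sketched in isolation. Macros are copied from `cn9k_eventdev.c`; the helper function is illustrative, not part of the driver:

```c
#include <assert.h>
#include <stdint.h>

/* In dual workslot mode each event port owns a pair of hardware
 * workslots (HWS): port x maps to HWS 2x and 2x + 1, so the
 * usable event port count is half the HWS count. */
#define CN9K_DUAL_WS_NB_WS	    2
#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x) * CN9K_DUAL_WS_NB_WS) + (id))

static inline uint8_t
cn9k_max_ports(uint8_t max_hws, int dual_ws)
{
	return dual_ws ? max_hws / CN9K_DUAL_WS_NB_WS : max_hws;
}
```

With 52 hardware workslots this gives 26 ports in dual mode and 52 in single mode, matching the feature list in cnxk.rst.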
* [dpdk-dev] [PATCH 04/36] event/cnxk: add common configuration validation
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (2 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 03/36] event/cnxk: add platform specific device probe pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 05/36] event/cnxk: add platform specific device config pbhagavatula
` (32 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add common configuration validation along with default port and
queue configuration functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 70 ++++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 +++
2 files changed, 76 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index ae553fd23..f15986f3e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,6 +28,76 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
+int
+cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
+{
+ struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint32_t deq_tmo_ns;
+
+ deq_tmo_ns = conf->dequeue_timeout_ns;
+
+ if (deq_tmo_ns == 0)
+ deq_tmo_ns = dev->min_dequeue_timeout_ns;
+ if (deq_tmo_ns < dev->min_dequeue_timeout_ns ||
+ deq_tmo_ns > dev->max_dequeue_timeout_ns) {
+ plt_err("Unsupported dequeue timeout requested");
+ return -EINVAL;
+ }
+
+ if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
+ dev->is_timeout_deq = 1;
+
+ dev->deq_tmo_ns = deq_tmo_ns;
+
+ if (!conf->nb_event_queues || !conf->nb_event_ports ||
+ conf->nb_event_ports > dev->max_event_ports ||
+ conf->nb_event_queues > dev->max_event_queues) {
+ plt_err("Unsupported event queues/ports requested");
+ return -EINVAL;
+ }
+
+ if (conf->nb_event_port_dequeue_depth > 1) {
+ plt_err("Unsupported event port deq depth requested");
+ return -EINVAL;
+ }
+
+ if (conf->nb_event_port_enqueue_depth > 1) {
+ plt_err("Unsupported event port enq depth requested");
+ return -EINVAL;
+ }
+
+ dev->nb_event_queues = conf->nb_event_queues;
+ dev->nb_event_ports = conf->nb_event_ports;
+
+ return 0;
+}
+
+void
+cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+ struct rte_event_queue_conf *queue_conf)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(queue_id);
+
+ queue_conf->nb_atomic_flows = (1ULL << 20);
+ queue_conf->nb_atomic_order_sequences = (1ULL << 20);
+ queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+ queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+}
+
+void
+cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+ struct rte_event_port_conf *port_conf)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ RTE_SET_USED(port_id);
+ port_conf->new_event_threshold = dev->max_num_events;
+ port_conf->dequeue_depth = 1;
+ port_conf->enqueue_depth = 1;
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index b98c783ae..08eba2270 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -22,6 +22,7 @@ struct cnxk_sso_evdev {
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
+ uint32_t deq_tmo_ns;
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
@@ -41,5 +42,10 @@ int cnxk_sso_fini(struct rte_eventdev *event_dev);
int cnxk_sso_remove(struct rte_pci_device *pci_dev);
void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
struct rte_event_dev_info *dev_info);
+int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
+void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+ struct rte_event_queue_conf *queue_conf);
+void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+ struct rte_event_port_conf *port_conf);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
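The dequeue-timeout check in `cnxk_sso_dev_validate()` can be restated as a small pure function. This is a sketch for illustration (function name and the 0-means-invalid convention are assumptions, not driver API):

```c
#include <stdint.h>

/* Mirrors the timeout logic in cnxk_sso_dev_validate(): a zero
 * request falls back to the device minimum; anything outside
 * [min_ns, max_ns] is rejected. Returns the normalized timeout,
 * or 0 for an unsupported request (0 never collides with a valid
 * result because min_ns is nonzero on this device). */
static uint32_t
normalize_deq_tmo(uint32_t req_ns, uint32_t min_ns, uint32_t max_ns)
{
	uint32_t ns = req_ns ? req_ns : min_ns;

	return (ns >= min_ns && ns <= max_ns) ? ns : 0;
}
```

For the SSO bounds set earlier (1000 ns to 1023000 ns), a request of 0 normalizes to 1000 ns, while 2 ms is rejected.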
* [dpdk-dev] [PATCH 05/36] event/cnxk: add platform specific device config
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (3 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 04/36] event/cnxk: add common configuration validation pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 06/36] event/cnxk: add event queue config functions pbhagavatula
` (31 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add platform specific event device configuration that attaches the
requested number of SSO HWS (event ports) and HWGRP (event queues)
LFs to the RVU PF/VF.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 35 +++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 37 +++++++++++++++++++++++++++++
2 files changed, 72 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 34238d3b5..352df88fc 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -16,6 +16,14 @@ cn10k_sso_set_rsrc(void *arg)
dev->sso.max_hwgrp;
}
+static int
+cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
+}
+
static void
cn10k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -26,8 +34,35 @@ cn10k_sso_info_get(struct rte_eventdev *event_dev,
cnxk_sso_info_get(dev, dev_info);
}
+static int
+cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ rc = cnxk_sso_dev_validate(event_dev);
+ if (rc < 0) {
+ plt_err("Invalid event device configuration");
+ return -EINVAL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+
+ rc = cn10k_sso_rsrc_init(dev, dev->nb_event_ports,
+ dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to initialize SSO resources");
+ return -ENODEV;
+ }
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
+ .dev_configure = cn10k_sso_dev_configure,
+ .queue_def_conf = cnxk_sso_queue_def_conf,
+ .port_def_conf = cnxk_sso_port_def_conf,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 238540828..126388a23 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -22,6 +22,17 @@ cn9k_sso_set_rsrc(void *arg)
dev->sso.max_hwgrp;
}
+static int
+cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ if (dev->dual_ws)
+ hws = hws * CN9K_DUAL_WS_NB_WS;
+
+ return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
+}
+
static void
cn9k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -32,8 +43,34 @@ cn9k_sso_info_get(struct rte_eventdev *event_dev,
cnxk_sso_info_get(dev, dev_info);
}
+static int
+cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ rc = cnxk_sso_dev_validate(event_dev);
+ if (rc < 0) {
+ plt_err("Invalid event device configuration");
+ return -EINVAL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+
+ rc = cn9k_sso_rsrc_init(dev, dev->nb_event_ports, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to initialize SSO resources");
+ return -ENODEV;
+ }
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
+ .dev_configure = cn9k_sso_dev_configure,
+ .queue_def_conf = cnxk_sso_queue_def_conf,
+ .port_def_conf = cnxk_sso_port_def_conf,
};
static int
--
2.17.1
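Note the asymmetry between the two configure paths: `cn9k_sso_rsrc_init()` doubles the HWS LF count when dual workslot mode is enabled, since each port consumes a workslot pair. A minimal sketch (macro copied from the patch; the helper name is illustrative):

```c
#include <stdint.h>

#define CN9K_DUAL_WS_NB_WS 2

/* Number of HWS LFs to attach to the RVU PF/VF for a given port
 * count: doubled in dual workslot mode, 1:1 otherwise. */
static inline uint8_t
cn9k_hws_to_attach(uint8_t nb_event_ports, int dual_ws)
{
	return dual_ws ? nb_event_ports * CN9K_DUAL_WS_NB_WS
		       : nb_event_ports;
}
```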
* [dpdk-dev] [PATCH 06/36] event/cnxk: add event queue config functions
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (4 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 05/36] event/cnxk: add platform specific device config pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 07/36] event/cnxk: allocate event inflight buffers pbhagavatula
` (30 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add setup and release functions for event queues, i.e.,
SSO HWGRPs.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 2 ++
drivers/event/cnxk/cn9k_eventdev.c | 2 ++
drivers/event/cnxk/cnxk_eventdev.c | 19 +++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 3 +++
4 files changed, 26 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 352df88fc..92687c23e 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -62,6 +62,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
.queue_def_conf = cnxk_sso_queue_def_conf,
+ .queue_setup = cnxk_sso_queue_setup,
+ .queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
};
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 126388a23..1bd2b3343 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -70,6 +70,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
.queue_def_conf = cnxk_sso_queue_def_conf,
+ .queue_setup = cnxk_sso_queue_setup,
+ .queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
};
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index f15986f3e..59cc570fe 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -86,6 +86,25 @@ cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
}
+int
+cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+ const struct rte_event_queue_conf *queue_conf)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority);
+ /* Normalize <0-255> to <0-7> */
+ return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF,
+ queue_conf->priority / 32);
+}
+
+void
+cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(queue_id);
+}
+
void
cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 08eba2270..974c618bc 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -45,6 +45,9 @@ void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
+int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+ const struct rte_event_queue_conf *queue_conf);
+void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
--
2.17.1
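The "Normalize <0-255> to <0-7>" comment in `cnxk_sso_queue_setup()` is a straight integer division. A standalone sketch (helper name is illustrative; the division by 32 is from the patch):

```c
#include <stdint.h>

/* Map the eventdev priority range <0-255> onto the 3-bit SSO
 * HWGRP priority <0-7>: each HWGRP level covers 32 eventdev
 * levels. RTE_EVENT_DEV_PRIORITY_NORMAL (128) lands on 4. */
static inline uint8_t
sso_hwgrp_prio(uint8_t evdev_prio)
{
	return evdev_prio / 32;
}
```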
* [dpdk-dev] [PATCH 07/36] event/cnxk: allocate event inflight buffers
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (5 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 06/36] event/cnxk: add event queue config functions pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 08/36] event/cnxk: add devargs for inflight buffer count pbhagavatula
` (29 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Allocate buffers in DRAM that hold in-flight events.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 7 ++
drivers/event/cnxk/cn9k_eventdev.c | 7 ++
drivers/event/cnxk/cnxk_eventdev.c | 105 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 14 +++-
4 files changed, 132 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 92687c23e..7e3fa20c5 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -55,6 +55,13 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
return -ENODEV;
}
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
+ return 0;
+cnxk_rsrc_fini:
+ roc_sso_rsrc_fini(&dev->sso);
return rc;
}
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 1bd2b3343..71245b660 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -63,6 +63,13 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
return -ENODEV;
}
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
+ return 0;
+cnxk_rsrc_fini:
+ roc_sso_rsrc_fini(&dev->sso);
return rc;
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 59cc570fe..927f99117 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,12 +28,107 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
+int
+cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
+{
+ char pool_name[RTE_MEMZONE_NAMESIZE];
+ uint32_t xaq_cnt, npa_aura_id;
+ const struct rte_memzone *mz;
+ struct npa_aura_s *aura;
+ static int reconfig_cnt;
+ int rc;
+
+ if (dev->xaq_pool) {
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ }
+
+ /*
+ * Allocate memory for Add work backpressure.
+ */
+ mz = rte_memzone_lookup(CNXK_SSO_FC_NAME);
+ if (mz == NULL)
+ mz = rte_memzone_reserve_aligned(CNXK_SSO_FC_NAME,
+ sizeof(struct npa_aura_s) +
+ RTE_CACHE_LINE_SIZE,
+ 0, 0, RTE_CACHE_LINE_SIZE);
+ if (mz == NULL) {
+ plt_err("Failed to allocate mem for fcmem");
+ return -ENOMEM;
+ }
+
+ dev->fc_iova = mz->iova;
+ dev->fc_mem = mz->addr;
+
+ aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem +
+ RTE_CACHE_LINE_SIZE);
+ memset(aura, 0, sizeof(struct npa_aura_s));
+
+ aura->fc_ena = 1;
+ aura->fc_addr = dev->fc_iova;
+ aura->fc_hyst_bits = 0; /* Store count on all updates */
+
+ /* Taken from HRM 14.3.3(4) */
+ xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
+ xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+
+ plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
+ /* Setup XAQ based on number of nb queues. */
+ snprintf(pool_name, 30, "cnxk_xaq_buf_pool_%d", reconfig_cnt);
+ dev->xaq_pool = (void *)rte_mempool_create_empty(
+ pool_name, xaq_cnt, dev->sso.xaq_buf_size, 0, 0,
+ rte_socket_id(), 0);
+
+ if (dev->xaq_pool == NULL) {
+ plt_err("Unable to create empty mempool.");
+ rte_memzone_free(mz);
+ return -ENOMEM;
+ }
+
+ rc = rte_mempool_set_ops_byname(dev->xaq_pool,
+ rte_mbuf_platform_mempool_ops(), aura);
+ if (rc != 0) {
+ plt_err("Unable to set xaqpool ops.");
+ goto alloc_fail;
+ }
+
+ rc = rte_mempool_populate_default(dev->xaq_pool);
+ if (rc < 0) {
+ plt_err("Unable to populate xaqpool.");
+ goto alloc_fail;
+ }
+ reconfig_cnt++;
+ /* When SW does addwork (enqueue) check if there is space in XAQ by
+ * comparing fc_addr above against the xaq_lmt calculated below.
+ * There should be a minimum headroom (CNXK_SSO_XAQ_SLACK / 2) for SSO
+ * to request XAQ to cache them even before enqueue is called.
+ */
+ dev->xaq_lmt =
+ xaq_cnt - (CNXK_SSO_XAQ_SLACK / 2 * dev->nb_event_queues);
+ dev->nb_xaq_cfg = xaq_cnt;
+
+ npa_aura_id = roc_npa_aura_handle_to_aura(dev->xaq_pool->pool_id);
+ return roc_sso_hwgrp_alloc_xaq(&dev->sso, npa_aura_id,
+ dev->nb_event_queues);
+alloc_fail:
+ rte_mempool_free(dev->xaq_pool);
+ rte_memzone_free(mz);
+ return rc;
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint32_t deq_tmo_ns;
+ int rc;
deq_tmo_ns = conf->dequeue_timeout_ns;
@@ -67,6 +162,16 @@ cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
return -EINVAL;
}
+ if (dev->xaq_pool) {
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ }
+
dev->nb_event_queues = conf->nb_event_queues;
dev->nb_event_ports = conf->nb_event_ports;
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 974c618bc..8478120c0 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -5,6 +5,7 @@
#ifndef __CNXK_EVENTDEV_H__
#define __CNXK_EVENTDEV_H__
+#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
#include <eventdev_pmd_pci.h>
@@ -13,7 +14,10 @@
#define USEC2NSEC(__us) ((__us)*1E3)
-#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
+#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
+#define CNXK_SSO_XAQ_SLACK (8)
struct cnxk_sso_evdev {
struct roc_sso sso;
@@ -26,6 +30,11 @@ struct cnxk_sso_evdev {
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
+ uint64_t *fc_mem;
+ uint64_t xaq_lmt;
+ uint64_t nb_xaq_cfg;
+ rte_iova_t fc_iova;
+ struct rte_mempool *xaq_pool;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
@@ -36,6 +45,9 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
return event_dev->data->dev_private;
}
+/* Configuration functions */
+int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
+
/* Common ops API. */
int cnxk_sso_init(struct rte_eventdev *event_dev);
int cnxk_sso_fini(struct rte_eventdev *event_dev);
--
2.17.1
* [dpdk-dev] [PATCH 08/36] event/cnxk: add devargs for inflight buffer count
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (6 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 07/36] event/cnxk: allocate event inflight buffers pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 09/36] event/cnxk: add devargs to control SSO HWGRP QoS pbhagavatula
` (28 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
The number of events for an *open system* event device is specified
as -1, as per the eventdev specification.
Since SSO in-flight events are limited only by DRAM size, the
xae_cnt devargs parameter is introduced to provide an upper limit for
in-flight events.
Example:
--dev "0002:0e:00.0,xae_cnt=8192"
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 14 ++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 1 +
drivers/event/cnxk/cnxk_eventdev.c | 24 ++++++++++++++++++++++--
drivers/event/cnxk/cnxk_eventdev.h | 15 +++++++++++++++
5 files changed, 53 insertions(+), 2 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index e94225bd3..569fce4cb 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -41,6 +41,20 @@ Prerequisites and Compilation procedure
See :doc:`../platform/cnxk` for setup information.
+
+Runtime Config Options
+----------------------
+
+- ``Maximum number of in-flight events`` (default ``8192``)
+
+ In **Marvell OCTEON CNXK** the maximum number of in-flight events is limited
+ only by DRAM size; the ``xae_cnt`` devargs parameter is introduced to provide
+ an upper limit for in-flight events.
+
+ For example::
+
+ -a 0002:0e:00.0,xae_cnt=16384
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 7e3fa20c5..1b278360f 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -143,3 +143,4 @@ static struct rte_pci_driver cn10k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 71245b660..8dfcf35b4 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -146,3 +146,4 @@ static struct rte_pci_driver cn9k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 927f99117..28a03aeab 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -75,8 +75,11 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
/* Taken from HRM 14.3.3(4) */
xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
- xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
- (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+ if (dev->xae_cnt)
+ xaq_cnt += dev->xae_cnt / dev->sso.xae_waes;
+ else
+ xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
/* Setup XAQ based on number of nb queues. */
@@ -222,6 +225,22 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+static void
+cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
+{
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ return;
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, CNXK_SSO_XAE_CNT, &parse_kvargs_value,
+ &dev->xae_cnt);
+ rte_kvargs_free(kvlist);
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
@@ -242,6 +261,7 @@ cnxk_sso_init(struct rte_eventdev *event_dev)
dev->sso.pci_dev = pci_dev;
*(uint64_t *)mz->addr = (uint64_t)dev;
+ cnxk_sso_parse_devargs(dev, pci_dev->device.devargs);
/* Initialize the base cnxk_dev object */
rc = roc_sso_dev_init(&dev->sso);
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 8478120c0..72b0ff3f8 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -5,6 +5,8 @@
#ifndef __CNXK_EVENTDEV_H__
#define __CNXK_EVENTDEV_H__
+#include <rte_devargs.h>
+#include <rte_kvargs.h>
#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
@@ -12,6 +14,8 @@
#include "roc_api.h"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+
#define USEC2NSEC(__us) ((__us)*1E3)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
@@ -35,10 +39,21 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ /* Dev args */
+ uint32_t xae_cnt;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
+static inline int
+parse_kvargs_value(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ *(uint32_t *)opaque = (uint32_t)atoi(value);
+ return 0;
+}
+
static inline struct cnxk_sso_evdev *
cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
{
--
2.17.1
* [dpdk-dev] [PATCH 09/36] event/cnxk: add devargs to control SSO HWGRP QoS
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (7 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 08/36] event/cnxk: add devargs for inflight buffer count pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 10/36] event/cnxk: add port config functions pbhagavatula
` (27 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
SSO HWGRPs, i.e. event queues, use DRAM & SRAM buffers to hold in-flight
events. By default, the buffers are assigned to the SSO HWGRPs to
satisfy the minimum HW requirements. SSO is free to assign the remaining
buffers to HWGRPs based on a preconfigured threshold.
The QoS of an SSO HWGRP can be controlled by modifying the above-mentioned
thresholds. HWGRPs of higher importance can be assigned higher
thresholds than the rest.
Example:
--dev "0002:0e:00.0,qos=[1-50-50-50]" // [Qx-XAQ-TAQ-IAQ]
Qx -> Event queue Aka SSO GGRP.
XAQ -> DRAM In-flights.
TAQ & IAQ -> SRAM In-flights.
The values need to be expressed as percentages; 0 represents the
default.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 16 ++++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_eventdev.c | 78 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 12 ++++-
5 files changed, 109 insertions(+), 3 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 569fce4cb..cf2156333 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -55,6 +55,22 @@ Runtime Config Options
-a 0002:0e:00.0,xae_cnt=16384
+- ``Event Group QoS support``
+
+ SSO GGRPs, i.e. event queues, use DRAM & SRAM buffers to hold in-flight
+ events. By default, the buffers are assigned to the SSO GGRPs to
+ satisfy the minimum HW requirements. SSO is free to assign the remaining
+ buffers to GGRPs based on a preconfigured threshold.
+ The QoS of an SSO GGRP can be controlled by modifying the above-mentioned
+ thresholds. GGRPs of higher importance can be assigned higher
+ thresholds than the rest. The dictionary format is as follows:
+ [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ], expressed in percentages, where 0
+ represents the default.
+
+ For example::
+
+ -a 0002:0e:00.0,qos=[1-50-50-50]
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 1b278360f..47eb8898b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -143,4 +143,5 @@ static struct rte_pci_driver cn10k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
+ CNXK_SSO_GGRP_QOS "=<string>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 8dfcf35b4..43c045d43 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -146,4 +146,5 @@ static struct rte_pci_driver cn9k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
+ CNXK_SSO_GGRP_QOS "=<string>");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 28a03aeab..4cb5359a8 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -225,6 +225,82 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+static void
+parse_queue_param(char *value, void *opaque)
+{
+ struct cnxk_sso_qos queue_qos = {0};
+ uint8_t *val = (uint8_t *)&queue_qos;
+ struct cnxk_sso_evdev *dev = opaque;
+ char *tok = strtok(value, "-");
+ struct cnxk_sso_qos *old_ptr;
+
+ if (!strlen(value))
+ return;
+
+ while (tok != NULL) {
+ *val = atoi(tok);
+ tok = strtok(NULL, "-");
+ val++;
+ }
+
+ if (val != (&queue_qos.iaq_prcnt + 1)) {
+ plt_err("Invalid QoS parameter expected [Qx-XAQ-TAQ-IAQ]");
+ return;
+ }
+
+ dev->qos_queue_cnt++;
+ old_ptr = dev->qos_parse_data;
+ dev->qos_parse_data = rte_realloc(
+ dev->qos_parse_data,
+ sizeof(struct cnxk_sso_qos) * dev->qos_queue_cnt, 0);
+ if (dev->qos_parse_data == NULL) {
+ dev->qos_parse_data = old_ptr;
+ dev->qos_queue_cnt--;
+ return;
+ }
+ dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos;
+}
+
+static void
+parse_qos_list(const char *value, void *opaque)
+{
+ char *s = strdup(value);
+ char *start = NULL;
+ char *end = NULL;
+ char *f = s;
+
+ while (*s) {
+ if (*s == '[')
+ start = s;
+ else if (*s == ']')
+ end = s;
+
+ if (start && start < end) {
+ *end = 0;
+ parse_queue_param(start + 1, opaque);
+ s = end;
+ start = end;
+ }
+ s++;
+ }
+
+ free(f);
+}
+
+static int
+parse_sso_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ /* Dict format [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] use '-' cause ','
+ * isn't allowed. Everything is expressed in percentages, 0 represents
+ * default.
+ */
+ parse_qos_list(value, opaque);
+
+ return 0;
+}
+
static void
cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
{
@@ -238,6 +314,8 @@ cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
rte_kvargs_process(kvlist, CNXK_SSO_XAE_CNT, &parse_kvargs_value,
&dev->xae_cnt);
+ rte_kvargs_process(kvlist, CNXK_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
+ dev);
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 72b0ff3f8..4a2fa73fe 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,7 +14,8 @@
#include "roc_api.h"
-#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_GGRP_QOS "qos"
#define USEC2NSEC(__us) ((__us)*1E3)
@@ -23,6 +24,13 @@
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+struct cnxk_sso_qos {
+ uint16_t queue;
+ uint8_t xaq_prcnt;
+ uint8_t taq_prcnt;
+ uint8_t iaq_prcnt;
+};
+
struct cnxk_sso_evdev {
struct roc_sso sso;
uint8_t max_event_queues;
@@ -41,6 +49,8 @@ struct cnxk_sso_evdev {
struct rte_mempool *xaq_pool;
/* Dev args */
uint32_t xae_cnt;
+ uint8_t qos_queue_cnt;
+ struct cnxk_sso_qos *qos_parse_data;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
--
2.17.1
* [dpdk-dev] [PATCH 10/36] event/cnxk: add port config functions
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (8 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 09/36] event/cnxk: add devargs to control SSO HWGRP QoS pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 11/36] event/cnxk: add event port link and unlink pbhagavatula
` (26 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add SSO HWS aka event port setup and release functions.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 121 +++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 147 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 65 ++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 91 +++++++++++++++++
4 files changed, 424 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 47eb8898b..c60df7f7b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -4,6 +4,91 @@
#include "cnxk_eventdev.h"
+static void
+cn10k_init_hws_ops(struct cn10k_sso_hws *ws, uintptr_t base)
+{
+ ws->tag_wqe_op = base + SSOW_LF_GWS_WQE0;
+ ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK0;
+ ws->updt_wqe_op = base + SSOW_LF_GWS_OP_UPD_WQP_GRP1;
+ ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+ ws->swtag_untag_op = base + SSOW_LF_GWS_OP_SWTAG_UNTAG;
+ ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
+ ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
+static uint32_t
+cn10k_sso_gw_mode_wdata(struct cnxk_sso_evdev *dev)
+{
+ uint32_t wdata = BIT(16) | 1;
+
+ switch (dev->gw_mode) {
+ case CN10K_GW_MODE_NONE:
+ default:
+ break;
+ case CN10K_GW_MODE_PREF:
+ wdata |= BIT(19);
+ break;
+ case CN10K_GW_MODE_PREF_WFE:
+ wdata |= BIT(20) | BIT(19);
+ break;
+ }
+
+ return wdata;
+}
+
+static void *
+cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws;
+
+ /* Allocate event port memory */
+ ws = rte_zmalloc("cn10k_ws",
+ sizeof(struct cn10k_sso_hws) + RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (ws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ /* First cache line is reserved for cookie */
+ ws = (struct cn10k_sso_hws *)((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
+ ws->base = roc_sso_hws_base_get(&dev->sso, port_id);
+ cn10k_init_hws_ops(ws, ws->base);
+ ws->hws_id = port_id;
+ ws->swtag_req = 0;
+ ws->gw_wdata = cn10k_sso_gw_mode_wdata(dev);
+ ws->lmt_base = dev->sso.lmt_base;
+
+ return ws;
+}
+
+static void
+cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = hws;
+ uint64_t val;
+
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+
+ /* Set get_work timeout for HWS */
+ val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+ plt_write64(val, ws->base + SSOW_LF_GWS_NW_TIM);
+}
+
+static void
+cn10k_sso_hws_release(void *arg, void *hws)
+{
+ struct cn10k_sso_hws *ws = hws;
+
+ RTE_SET_USED(arg);
+ memset(ws, 0, sizeof(*ws));
+}
+
static void
cn10k_sso_set_rsrc(void *arg)
{
@@ -59,12 +144,46 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ rc = cnxk_setup_event_ports(event_dev, cn10k_sso_init_hws_mem,
+ cn10k_sso_hws_setup);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
+ dev->nb_event_ports = 0;
return rc;
}
+static int
+cn10k_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ const struct rte_event_port_conf *port_conf)
+{
+
+ RTE_SET_USED(port_conf);
+ return cnxk_sso_port_setup(event_dev, port_id, cn10k_sso_hws_setup);
+}
+
+static void
+cn10k_sso_port_release(void *port)
+{
+ struct cnxk_sso_hws_cookie *gws_cookie = cnxk_sso_hws_get_cookie(port);
+ struct cnxk_sso_evdev *dev;
+
+ if (port == NULL)
+ return;
+
+ dev = cnxk_sso_pmd_priv(gws_cookie->event_dev);
+ if (!gws_cookie->configured)
+ goto free;
+
+ cn10k_sso_hws_release(dev, port);
+ memset(gws_cookie, 0, sizeof(*gws_cookie));
+free:
+ rte_free(gws_cookie);
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -72,6 +191,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.queue_setup = cnxk_sso_queue_setup,
.queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
+ .port_setup = cn10k_sso_port_setup,
+ .port_release = cn10k_sso_port_release,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 43c045d43..116f5bdab 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -7,6 +7,63 @@
#define CN9K_DUAL_WS_NB_WS 2
#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
+static void
+cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t base)
+{
+ ws->tag_op = base + SSOW_LF_GWS_TAG;
+ ws->wqp_op = base + SSOW_LF_GWS_WQP;
+ ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK0;
+ ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
+ ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+ ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
+static void
+cn9k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ uint64_t val;
+
+ /* Set get_work tmo for HWS */
+ val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+ if (dev->dual_ws) {
+ dws = hws;
+ rte_memcpy(dws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ dws->fc_mem = dev->fc_mem;
+ dws->xaq_lmt = dev->xaq_lmt;
+
+ plt_write64(val, dws->base[0] + SSOW_LF_GWS_NW_TIM);
+ plt_write64(val, dws->base[1] + SSOW_LF_GWS_NW_TIM);
+ } else {
+ ws = hws;
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+
+ plt_write64(val, ws->base + SSOW_LF_GWS_NW_TIM);
+ }
+}
+
+static void
+cn9k_sso_hws_release(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+
+ if (dev->dual_ws) {
+ dws = hws;
+ memset(dws, 0, sizeof(*dws));
+ } else {
+ ws = hws;
+ memset(ws, 0, sizeof(*ws));
+ }
+}
+
static void
cn9k_sso_set_rsrc(void *arg)
{
@@ -33,6 +90,60 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void *
+cn9k_sso_init_hws_mem(void *arg, uint8_t port_id)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ void *data;
+
+ if (dev->dual_ws) {
+ dws = rte_zmalloc("cn9k_dual_ws",
+ sizeof(struct cn9k_sso_hws_dual) +
+ RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (dws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ dws = RTE_PTR_ADD(dws, sizeof(struct cnxk_sso_hws_cookie));
+ dws->base[0] = roc_sso_hws_base_get(
+ &dev->sso, CN9K_DUAL_WS_PAIR_ID(port_id, 0));
+ dws->base[1] = roc_sso_hws_base_get(
+ &dev->sso, CN9K_DUAL_WS_PAIR_ID(port_id, 1));
+ cn9k_init_hws_ops(&dws->ws_state[0], dws->base[0]);
+ cn9k_init_hws_ops(&dws->ws_state[1], dws->base[1]);
+ dws->hws_id = port_id;
+ dws->swtag_req = 0;
+ dws->vws = 0;
+
+ data = dws;
+ } else {
+ /* Allocate event port memory */
+ ws = rte_zmalloc("cn9k_ws",
+ sizeof(struct cn9k_sso_hws) +
+ RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (ws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ /* First cache line is reserved for cookie */
+ ws = RTE_PTR_ADD(ws, sizeof(struct cnxk_sso_hws_cookie));
+ ws->base = roc_sso_hws_base_get(&dev->sso, port_id);
+ cn9k_init_hws_ops((struct cn9k_sso_hws_state *)ws, ws->base);
+ ws->hws_id = port_id;
+ ws->swtag_req = 0;
+
+ data = ws;
+ }
+
+ return data;
+}
+
static void
cn9k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -67,12 +178,46 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ rc = cnxk_setup_event_ports(event_dev, cn9k_sso_init_hws_mem,
+ cn9k_sso_hws_setup);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
+ dev->nb_event_ports = 0;
return rc;
}
+static int
+cn9k_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ const struct rte_event_port_conf *port_conf)
+{
+
+ RTE_SET_USED(port_conf);
+ return cnxk_sso_port_setup(event_dev, port_id, cn9k_sso_hws_setup);
+}
+
+static void
+cn9k_sso_port_release(void *port)
+{
+ struct cnxk_sso_hws_cookie *gws_cookie = cnxk_sso_hws_get_cookie(port);
+ struct cnxk_sso_evdev *dev;
+
+ if (port == NULL)
+ return;
+
+ dev = cnxk_sso_pmd_priv(gws_cookie->event_dev);
+ if (!gws_cookie->configured)
+ goto free;
+
+ cn9k_sso_hws_release(dev, port);
+ memset(gws_cookie, 0, sizeof(*gws_cookie));
+free:
+ rte_free(gws_cookie);
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -80,6 +225,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.queue_setup = cnxk_sso_queue_setup,
.queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
+ .port_setup = cn9k_sso_port_setup,
+ .port_release = cn9k_sso_port_release,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 4cb5359a8..9d455c93d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -125,6 +125,42 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
return rc;
}
+int
+cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
+ cnxk_sso_init_hws_mem_t init_hws_fn,
+ cnxk_sso_hws_setup_t setup_hws_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ struct cnxk_sso_hws_cookie *ws_cookie;
+ void *ws;
+
+ /* Reuse port memory if it has already been allocated */
+ if (event_dev->data->ports[i] != NULL)
+ ws = event_dev->data->ports[i];
+ else
+ ws = init_hws_fn(dev, i);
+ if (ws == NULL)
+ goto hws_fini;
+ ws_cookie = cnxk_sso_hws_get_cookie(ws);
+ ws_cookie->event_dev = event_dev;
+ ws_cookie->configured = 1;
+ event_dev->data->ports[i] = ws;
+ cnxk_sso_port_setup((struct rte_eventdev *)(uintptr_t)event_dev,
+ i, setup_hws_fn);
+ }
+
+ return 0;
+hws_fini:
+ for (i = i - 1; i >= 0; i--) {
+ rte_free(cnxk_sso_hws_get_cookie(event_dev->data->ports[i]));
+ event_dev->data->ports[i] = NULL;
+ }
+ return -ENOMEM;
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
@@ -225,6 +261,35 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+int
+cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ cnxk_sso_hws_setup_t hws_setup_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP] = {0};
+ uint16_t q;
+
+ plt_sso_dbg("Port=%d", port_id);
+ if (event_dev->data->ports[port_id] == NULL) {
+ plt_err("Invalid port Id %d", port_id);
+ return -EINVAL;
+ }
+
+ for (q = 0; q < dev->nb_event_queues; q++) {
+ grps_base[q] = roc_sso_hwgrp_base_get(&dev->sso, q);
+ if (grps_base[q] == 0) {
+ plt_err("Failed to get grp[%d] base addr", q);
+ return -EINVAL;
+ }
+ }
+
+ hws_setup_fn(dev, event_dev->data->ports[port_id], grps_base);
+ plt_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
+ rte_mb();
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 4a2fa73fe..0e8457f02 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,13 +17,23 @@
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
+#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
+#define CNXK_SSO_MAX_HWGRP (RTE_EVENT_MAX_QUEUES_PER_DEV + 1)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+#define CN10K_GW_MODE_NONE 0
+#define CN10K_GW_MODE_PREF 1
+#define CN10K_GW_MODE_PREF_WFE 2
+
+typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
+typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
+typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
+
struct cnxk_sso_qos {
uint16_t queue;
uint8_t xaq_prcnt;
@@ -53,6 +63,76 @@ struct cnxk_sso_evdev {
struct cnxk_sso_qos *qos_parse_data;
/* CN9K */
uint8_t dual_ws;
+ /* CN10K */
+ uint8_t gw_mode;
+} __rte_cache_aligned;
+
+/* CN10K HWS ops */
+#define CN10K_SSO_HWS_OPS \
+ uintptr_t swtag_desched_op; \
+ uintptr_t swtag_flush_op; \
+ uintptr_t swtag_untag_op; \
+ uintptr_t swtag_norm_op; \
+ uintptr_t updt_wqe_op; \
+ uintptr_t tag_wqe_op; \
+ uintptr_t getwrk_op
+
+struct cn10k_sso_hws {
+ /* Get Work Fastpath data */
+ CN10K_SSO_HWS_OPS;
+ uint32_t gw_wdata;
+ uint8_t swtag_req;
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base;
+ uintptr_t lmt_base;
+} __rte_cache_aligned;
+
+/* CN9K HWS ops */
+#define CN9K_SSO_HWS_OPS \
+ uintptr_t swtag_desched_op; \
+ uintptr_t swtag_flush_op; \
+ uintptr_t swtag_norm_op; \
+ uintptr_t getwrk_op; \
+ uintptr_t tag_op; \
+ uintptr_t wqp_op
+
+/* Event port aka GWS */
+struct cn9k_sso_hws {
+ /* Get Work Fastpath data */
+ CN9K_SSO_HWS_OPS;
+ uint8_t swtag_req;
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base;
+} __rte_cache_aligned;
+
+struct cn9k_sso_hws_state {
+ CN9K_SSO_HWS_OPS;
+};
+
+struct cn9k_sso_hws_dual {
+ /* Get Work Fastpath data */
+ struct cn9k_sso_hws_state ws_state[2]; /* Ping and Pong */
+ uint8_t swtag_req;
+ uint8_t vws; /* Ping pong bit */
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base[2];
+} __rte_cache_aligned;
+
+struct cnxk_sso_hws_cookie {
+ const struct rte_eventdev *event_dev;
+ bool configured;
} __rte_cache_aligned;
static inline int
@@ -70,6 +150,12 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
return event_dev->data->dev_private;
}
+static inline struct cnxk_sso_hws_cookie *
+cnxk_sso_hws_get_cookie(void *ws)
+{
+ return RTE_PTR_SUB(ws, sizeof(struct cnxk_sso_hws_cookie));
+}
+
/* Configuration functions */
int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
@@ -80,6 +166,9 @@ int cnxk_sso_remove(struct rte_pci_device *pci_dev);
void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
struct rte_event_dev_info *dev_info);
int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
+int cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
+ cnxk_sso_init_hws_mem_t init_hws_mem,
+ cnxk_sso_hws_setup_t hws_setup);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
@@ -87,5 +176,7 @@ int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
+int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ cnxk_sso_hws_setup_t hws_setup_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
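The workslot cookie scheme in this patch (`cnxk_sso_hws_get_cookie()` simply steps back `sizeof(struct cnxk_sso_hws_cookie)` from the workslot pointer) relies on the allocator reserving that header directly in front of the pointer it hands out. A minimal standalone sketch of the same layout, with hypothetical names (`hws_alloc` stands in for the driver's `rte_zmalloc`-based allocation, and the cookie struct is reduced to its two fields):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct hws_cookie {
	const void *event_dev; /* owning device, as in cnxk_sso_hws_cookie */
	bool configured;
};

/* Allocate a workslot with a hidden cookie placed immediately before
 * the pointer returned to the caller. */
static void *hws_alloc(size_t ws_size)
{
	char *mem = calloc(1, sizeof(struct hws_cookie) + ws_size);

	return mem ? mem + sizeof(struct hws_cookie) : NULL;
}

/* Mirror of cnxk_sso_hws_get_cookie(): step back over the cookie. */
static struct hws_cookie *hws_get_cookie(void *ws)
{
	return (struct hws_cookie *)((char *)ws - sizeof(struct hws_cookie));
}
```

The benefit of this layout is that fastpath code only ever sees the cache-aligned workslot pointer, while slowpath code (port release, reconfigure) can recover the bookkeeping data without a separate lookup table.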
* [dpdk-dev] [PATCH 11/36] event/cnxk: add event port link and unlink
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (9 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 10/36] event/cnxk: add port config functions pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 12/36] event/cnxk: add devargs to configure getwork mode pbhagavatula
` (25 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add platform-specific event port and queue link/unlink APIs.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 64 +++++++++++++++++-
drivers/event/cnxk/cn9k_eventdev.c | 101 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 36 ++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 12 +++-
4 files changed, 210 insertions(+), 3 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index c60df7f7b..3cf07734b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -63,6 +63,24 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
return ws;
}
+static int
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = port;
+
+ return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+}
+
+static int
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = port;
+
+ return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+}
+
static void
cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
{
@@ -83,9 +101,12 @@ cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
static void
cn10k_sso_hws_release(void *arg, void *hws)
{
+ struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
+ uint16_t i;
- RTE_SET_USED(arg);
+ for (i = 0; i < dev->nb_event_queues; i++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
memset(ws, 0, sizeof(*ws));
}
@@ -149,6 +170,12 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ /* Restore any prior port-queue mapping. */
+ cnxk_sso_restore_links(event_dev, cn10k_sso_hws_link);
+
+ dev->configured = 1;
+ rte_mb();
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
@@ -184,6 +211,38 @@ cn10k_sso_port_release(void *port)
rte_free(gws_cookie);
}
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_links];
+ uint16_t link;
+
+ RTE_SET_USED(priorities);
+ for (link = 0; link < nb_links; link++)
+ hwgrp_ids[link] = queues[link];
+ nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+
+ return (int)nb_links;
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_unlinks];
+ uint16_t unlink;
+
+ for (unlink = 0; unlink < nb_unlinks; unlink++)
+ hwgrp_ids[unlink] = queues[unlink];
+ nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+
+ return (int)nb_unlinks;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -193,6 +252,9 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_def_conf = cnxk_sso_port_def_conf,
.port_setup = cn10k_sso_port_setup,
.port_release = cn10k_sso_port_release,
+ .port_link = cn10k_sso_port_link,
+ .port_unlink = cn10k_sso_port_unlink,
+ .timeout_ticks = cnxk_sso_timeout_ticks,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 116f5bdab..5be2776cc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -18,6 +18,54 @@ cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t base)
ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
}
+static int
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ int rc;
+
+ if (dev->dual_ws) {
+ dws = port;
+ rc = roc_sso_hws_link(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link);
+ rc |= roc_sso_hws_link(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ map, nb_link);
+ } else {
+ ws = port;
+ rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ }
+
+ return rc;
+}
+
+static int
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ int rc;
+
+ if (dev->dual_ws) {
+ dws = port;
+ rc = roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ map, nb_link);
+ rc |= roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ map, nb_link);
+ } else {
+ ws = port;
+ rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ }
+
+ return rc;
+}
+
static void
cn9k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
{
@@ -54,12 +102,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
struct cn9k_sso_hws *ws;
+ uint16_t i;
if (dev->dual_ws) {
dws = hws;
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ &i, 1);
+ roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ &i, 1);
+ }
memset(dws, 0, sizeof(*dws));
} else {
ws = hws;
+ for (i = 0; i < dev->nb_event_queues; i++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id,
+ &i, 1);
memset(ws, 0, sizeof(*ws));
}
}
@@ -183,6 +243,12 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ /* Restore any prior port-queue mapping. */
+ cnxk_sso_restore_links(event_dev, cn9k_sso_hws_link);
+
+ dev->configured = 1;
+ rte_mb();
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
@@ -218,6 +284,38 @@ cn9k_sso_port_release(void *port)
rte_free(gws_cookie);
}
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_links];
+ uint16_t link;
+
+ RTE_SET_USED(priorities);
+ for (link = 0; link < nb_links; link++)
+ hwgrp_ids[link] = queues[link];
+ nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+
+ return (int)nb_links;
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_unlinks];
+ uint16_t unlink;
+
+ for (unlink = 0; unlink < nb_unlinks; unlink++)
+ hwgrp_ids[unlink] = queues[unlink];
+ nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+
+ return (int)nb_unlinks;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -227,6 +325,9 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_def_conf = cnxk_sso_port_def_conf,
.port_setup = cn9k_sso_port_setup,
.port_release = cn9k_sso_port_release,
+ .port_link = cn9k_sso_port_link,
+ .port_unlink = cn9k_sso_port_unlink,
+ .timeout_ticks = cnxk_sso_timeout_ticks,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 9d455c93d..5f4075a31 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -161,6 +161,32 @@ cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
return -ENOMEM;
}
+void
+cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
+ cnxk_sso_link_t link_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
+ int i, j;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ uint16_t nb_hwgrp = 0;
+
+ links_map = event_dev->data->links_map;
+ /* Point links_map to this port specific area */
+ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ if (links_map[j] == 0xdead)
+ continue;
+ hwgrp[nb_hwgrp] = j;
+ nb_hwgrp++;
+ }
+
+ link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
+ }
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
@@ -290,6 +316,16 @@ cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
return 0;
}
+int
+cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
+ uint64_t *tmo_ticks)
+{
+ RTE_SET_USED(event_dev);
+ *tmo_ticks = NSEC2TICK(ns, rte_get_timer_hz());
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 0e8457f02..bf2c961aa 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,8 +17,9 @@
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
-#define NSEC2USEC(__ns) ((__ns) / 1E3)
-#define USEC2NSEC(__us) ((__us)*1E3)
+#define NSEC2USEC(__ns) ((__ns) / 1E3)
+#define USEC2NSEC(__us) ((__us)*1E3)
+#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
#define CNXK_SSO_MAX_HWGRP (RTE_EVENT_MAX_QUEUES_PER_DEV + 1)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
@@ -33,6 +34,8 @@
typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
+ uint16_t nb_link);
struct cnxk_sso_qos {
uint16_t queue;
@@ -48,6 +51,7 @@ struct cnxk_sso_evdev {
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
+ uint8_t configured;
uint32_t deq_tmo_ns;
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
@@ -169,6 +173,8 @@ int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
int cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
cnxk_sso_init_hws_mem_t init_hws_mem,
cnxk_sso_hws_setup_t hws_setup);
+void cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
+ cnxk_sso_link_t link_fn);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
@@ -178,5 +184,7 @@ void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
cnxk_sso_hws_setup_t hws_setup_fn);
+int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
+ uint64_t *tmo_ticks);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
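The `timeout_ticks` conversion added in this patch is plain unit arithmetic: ticks = ns * timer_frequency / 1e9. The `NSEC2TICK` macro uses `1E9` (double arithmetic); the pure-integer sketch below is equivalent for typical magnitudes, and the 100 MHz frequency used here is illustrative only (the driver queries `rte_get_timer_hz()`):

```c
#include <assert.h>
#include <stdint.h>

/* Same arithmetic as the NSEC2TICK macro: ticks = ns * freq / 1e9.
 * Integer form; the product ns * freq must fit in uint64_t. */
static uint64_t nsec2tick(uint64_t ns, uint64_t freq_hz)
{
	return (ns * freq_hz) / 1000000000ULL;
}
```

At 100 MHz one tick is 10 ns, so dequeue timeouts shorter than the tick period quantize down to zero ticks.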
* [dpdk-dev] [PATCH 12/36] event/cnxk: add devargs to configure getwork mode
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (10 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 11/36] event/cnxk: add event port link and unlink pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 13/36] event/cnxk: add SSO HW device operations pbhagavatula
` (24 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add devargs to configure the platform-specific getwork mode.
CN9K defaults to dual workslot mode; add an option to force
single workslot mode.
Example:
--dev "0002:0e:00.0,single_ws=1"
CN10K supports multiple getwork prefetch modes; by default the
prefetch mode is set to none. Add an option to select the getwork
prefetch mode.
Example:
--dev "0002:1e:00.0,gw_mode=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 18 ++++++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 3 ++-
drivers/event/cnxk/cn9k_eventdev.c | 3 ++-
drivers/event/cnxk/cnxk_eventdev.c | 6 ++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 ++++--
5 files changed, 32 insertions(+), 4 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index cf2156333..b2684d431 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -55,6 +55,24 @@ Runtime Config Options
-a 0002:0e:00.0,xae_cnt=16384
+- ``CN9K Getwork mode``
+
+ The CN9K ``single_ws`` devargs parameter selects single workslot
+ mode in SSO, disabling the default dual workslot mode.
+
+ For example::
+
+ -a 0002:0e:00.0,single_ws=1
+
+- ``CN10K Getwork mode``
+
+ CN10K supports multiple getwork prefetch modes; by default the prefetch
+ mode is set to none.
+
+ For example::
+
+ -a 0002:0e:00.0,gw_mode=1
+
- ``Event Group QoS support``
SSO GGRPs i.e. queue uses DRAM & SRAM buffers to hold in-flight
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 3cf07734b..310acc011 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -327,4 +327,5 @@ RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
- CNXK_SSO_GGRP_QOS "=<string>");
+ CNXK_SSO_GGRP_QOS "=<string>"
+ CN10K_SSO_GW_MODE "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 5be2776cc..44c7a0c3a 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -395,4 +395,5 @@ RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
- CNXK_SSO_GGRP_QOS "=<string>");
+ CNXK_SSO_GGRP_QOS "=<string>"
+ CN9K_SSO_SINGLE_WS "=1");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 5f4075a31..0e2cc3681 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -406,6 +406,7 @@ static void
cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
{
struct rte_kvargs *kvlist;
+ uint8_t single_ws = 0;
if (devargs == NULL)
return;
@@ -417,6 +418,11 @@ cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
&dev->xae_cnt);
rte_kvargs_process(kvlist, CNXK_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
dev);
+ rte_kvargs_process(kvlist, CN9K_SSO_SINGLE_WS, &parse_kvargs_value,
+ &single_ws);
+ rte_kvargs_process(kvlist, CN10K_SSO_GW_MODE, &parse_kvargs_value,
+ &dev->gw_mode);
+ dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index bf2c961aa..85f6058f2 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,8 +14,10 @@
#include "roc_api.h"
-#define CNXK_SSO_XAE_CNT "xae_cnt"
-#define CNXK_SSO_GGRP_QOS "qos"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_GGRP_QOS "qos"
+#define CN9K_SSO_SINGLE_WS "single_ws"
+#define CN10K_SSO_GW_MODE "gw_mode"
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
--
2.17.1
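The devargs flow above hands each `key=value` pair to `rte_kvargs_process` with a per-key callback that writes into the device struct. A minimal stand-in for that parsing step, with hypothetical names (`devarg_u8` is not an rte_kvargs API; it only illustrates extracting `gw_mode=<int>` or `single_ws=<int>` from a devargs string):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Extract "<key>=<int>" from a devargs string such as
 * "0002:1e:00.0,gw_mode=1,single_ws=1". Returns 0 on success. */
static int devarg_u8(const char *args, const char *key, uint8_t *out)
{
	const char *p = strstr(args, key);
	unsigned int v;

	if (p == NULL || p[strlen(key)] != '=')
		return -1;
	if (sscanf(p + strlen(key) + 1, "%u", &v) != 1)
		return -1;
	*out = (uint8_t)v;
	return 0;
}
```

In the driver the equivalent callback is invoked once per matching key, which is why `single_ws` is read into a local and only then inverted into `dev->dual_ws`.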
* [dpdk-dev] [PATCH 13/36] event/cnxk: add SSO HW device operations
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (11 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 12/36] event/cnxk: add devargs to configure getwork mode pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 14/36] event/cnxk: add SSO GWS fastpath enqueue functions pbhagavatula
` (23 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO HW device operations used for enqueue/dequeue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_worker.c | 7 +
drivers/event/cnxk/cn10k_worker.h | 151 +++++++++++++++++
drivers/event/cnxk/cn9k_worker.c | 7 +
drivers/event/cnxk/cn9k_worker.h | 249 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 10 ++
drivers/event/cnxk/cnxk_worker.h | 101 ++++++++++++
drivers/event/cnxk/meson.build | 4 +-
7 files changed, 528 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cn10k_worker.c
create mode 100644 drivers/event/cnxk/cn10k_worker.h
create mode 100644 drivers/event/cnxk/cn9k_worker.c
create mode 100644 drivers/event/cnxk/cn9k_worker.h
create mode 100644 drivers/event/cnxk/cnxk_worker.h
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
new file mode 100644
index 000000000..4a7d0b535
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cn10k_worker.h"
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
new file mode 100644
index 000000000..0a7cb9c57
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CN10K_WORKER_H__
+#define __CN10K_WORKER_H__
+
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+/* SSO Operations */
+
+static __rte_always_inline uint8_t
+cn10k_sso_hws_new_event(struct cn10k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, ws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_fwd_swtag(struct cn10k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint8_t cur_tt = CNXK_TT_FROM_TAG(plt_read64(ws->tag_wqe_op));
+
+ /* CNXK model
+ * cur_tt/new_tt SSO_TT_ORDERED SSO_TT_ATOMIC SSO_TT_UNTAGGED
+ *
+ * SSO_TT_ORDERED norm norm untag
+ * SSO_TT_ATOMIC norm norm untag
+ * SSO_TT_UNTAGGED norm norm NOOP
+ */
+
+ if (new_tt == SSO_TT_UNTAGGED) {
+ if (cur_tt != SSO_TT_UNTAGGED)
+ cnxk_sso_hws_swtag_untag(ws->swtag_untag_op);
+ } else {
+ cnxk_sso_hws_swtag_norm(tag, new_tt, ws->swtag_norm_op);
+ }
+ ws->swtag_req = 1;
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_fwd_group(struct cn10k_sso_hws *ws, const struct rte_event *ev,
+ const uint16_t grp)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+
+ plt_write64(ev->u64, ws->updt_wqe_op);
+ cnxk_sso_hws_swtag_desched(tag, new_tt, grp, ws->swtag_desched_op);
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_forward_event(struct cn10k_sso_hws *ws,
+ const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed; use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(ws->tag_wqe_op)) == grp)
+ cn10k_sso_hws_fwd_swtag(ws, ev);
+ else
+ /*
+ * Group has been changed for group based work pipelining,
+ * Use deschedule/add_work operation to transfer the event to
+ * new group/core
+ */
+ cn10k_sso_hws_fwd_group(ws, ev, grp);
+}
+
+static __rte_always_inline uint16_t
+cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+ gw.get_work = ws->gw_wdata;
+#if defined(RTE_ARCH_ARM64) && !defined(__clang__)
+ asm volatile(
+ PLT_CPU_FEATURE_PREAMBLE
+ "caspl %[wdata], %H[wdata], %[wdata], %H[wdata], [%[gw_loc]]\n"
+ : [wdata] "+r"(gw.get_work)
+ : [gw_loc] "r"(ws->getwrk_op)
+ : "memory");
+#else
+ plt_write64(gw.u64[0], ws->getwrk_op);
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+#endif
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+/* Used in cleaning up workslot. */
+static __rte_always_inline uint16_t
+cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldp %[tag], %[wqp], [%[tag_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldp %[tag], %[wqp], [%[tag_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_wqe_op)
+ : "memory");
+#else
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+#endif
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
new file mode 100644
index 000000000..77856f2e7
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "roc_api.h"
+
+#include "cn9k_worker.h"
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
new file mode 100644
index 000000000..ff7851642
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -0,0 +1,249 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CN9K_WORKER_H__
+#define __CN9K_WORKER_H__
+
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+/* SSO Operations */
+
+static __rte_always_inline uint8_t
+cn9k_sso_hws_new_event(struct cn9k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, ws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_fwd_swtag(struct cn9k_sso_hws_state *vws,
+ const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint8_t cur_tt = CNXK_TT_FROM_TAG(plt_read64(vws->tag_op));
+
+ /* CNXK model
+ * cur_tt/new_tt SSO_TT_ORDERED SSO_TT_ATOMIC SSO_TT_UNTAGGED
+ *
+ * SSO_TT_ORDERED norm norm untag
+ * SSO_TT_ATOMIC norm norm untag
+ * SSO_TT_UNTAGGED norm norm NOOP
+ */
+
+ if (new_tt == SSO_TT_UNTAGGED) {
+ if (cur_tt != SSO_TT_UNTAGGED)
+ cnxk_sso_hws_swtag_untag(
+ CN9K_SSOW_GET_BASE_ADDR(vws->getwrk_op) +
+ SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ } else {
+ cnxk_sso_hws_swtag_norm(tag, new_tt, vws->swtag_norm_op);
+ }
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_fwd_group(struct cn9k_sso_hws_state *ws,
+ const struct rte_event *ev, const uint16_t grp)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+
+ plt_write64(ev->u64, CN9K_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
+ SSOW_LF_GWS_OP_UPD_WQP_GRP1);
+ cnxk_sso_hws_swtag_desched(tag, new_tt, grp, ws->swtag_desched_op);
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_forward_event(struct cn9k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed; use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(ws->tag_op)) == grp) {
+ cn9k_sso_hws_fwd_swtag((struct cn9k_sso_hws_state *)ws, ev);
+ ws->swtag_req = 1;
+ } else {
+ /*
+ * Group has been changed for group based work pipelining,
+ * Use deschedule/add_work operation to transfer the event to
+ * new group/core
+ */
+ cn9k_sso_hws_fwd_group((struct cn9k_sso_hws_state *)ws, ev,
+ grp);
+ }
+}
+
+/* Dual ws ops. */
+
+static __rte_always_inline uint8_t
+cn9k_sso_hws_dual_new_event(struct cn9k_sso_hws_dual *dws,
+ const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (dws->xaq_lmt <= *dws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, dws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws,
+ struct cn9k_sso_hws_state *vws,
+ const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed; use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(vws->tag_op)) == grp) {
+ cn9k_sso_hws_fwd_swtag(vws, ev);
+ dws->swtag_req = 1;
+ } else {
+ /*
+ * Group has been changed for group based work pipelining,
+ * Use deschedule/add_work operation to transfer the event to
+ * new group/core
+ */
+ cn9k_sso_hws_fwd_group(vws, ev, grp);
+ }
+}
+
+static __rte_always_inline uint16_t
+cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws,
+ struct cn9k_sso_hws_state *ws_pair,
+ struct rte_event *ev)
+{
+ const uint64_t set_gw = BIT_ULL(16) | 1;
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ "rty%=: \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: str %[gw], [%[pong]] \n"
+ " dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op),
+ [gw] "r"(set_gw), [pong] "r"(ws_pair->getwrk_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+ gw.u64[1] = plt_read64(ws->wqp_op);
+ plt_write64(set_gw, ws_pair->getwrk_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+static __rte_always_inline uint16_t
+cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+ plt_write64(BIT_ULL(16) | /* wait for work. */
+ 1, /* Use Mask set 0. */
+ ws->getwrk_op);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+
+ gw.u64[1] = plt_read64(ws->wqp_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+/* Used in cleaning up workslot. */
+static __rte_always_inline uint16_t
+cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+
+ gw.u64[1] = plt_read64(ws->wqp_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+#endif
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 85f6058f2..ac55d0ccb 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -29,6 +29,16 @@
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+#define CNXK_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY)
+#define CNXK_TT_FROM_EVENT(x) (((x) >> 38) & SSO_TT_EMPTY)
+#define CNXK_EVENT_TYPE_FROM_TAG(x) (((x) >> 28) & 0xf)
+#define CNXK_SUB_EVENT_FROM_TAG(x) (((x) >> 20) & 0xff)
+#define CNXK_CLR_SUB_EVENT(x) (~(0xffu << 20) & (x))
+#define CNXK_GRP_FROM_TAG(x) (((x) >> 36) & 0x3ff)
+#define CNXK_SWTAG_PEND(x) (BIT_ULL(62) & (x))
+
+#define CN9K_SSOW_GET_BASE_ADDR(_GW) ((_GW)-SSOW_LF_GWS_OP_GET_WORK0)
+
#define CN10K_GW_MODE_NONE 0
#define CN10K_GW_MODE_PREF 1
#define CN10K_GW_MODE_PREF_WFE 2
diff --git a/drivers/event/cnxk/cnxk_worker.h b/drivers/event/cnxk/cnxk_worker.h
new file mode 100644
index 000000000..c7f18911e
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_worker.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CNXK_WORKER_H__
+#define __CNXK_WORKER_H__
+
+#include "cnxk_eventdev.h"
+
+/* SSO Operations */
+
+static __rte_always_inline void
+cnxk_sso_hws_add_work(const uint64_t event_ptr, const uint32_t tag,
+ const uint8_t new_tt, const uintptr_t grp_base)
+{
+ uint64_t add_work0;
+
+ add_work0 = tag | ((uint64_t)(new_tt) << 32);
+ roc_store_pair(add_work0, event_ptr, grp_base);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_desched(uint32_t tag, uint8_t new_tt, uint16_t grp,
+ uintptr_t swtag_desched_op)
+{
+ uint64_t val;
+
+ val = tag | ((uint64_t)(new_tt & 0x3) << 32) | ((uint64_t)grp << 34);
+ __atomic_store_n((uint64_t *)swtag_desched_op, val, __ATOMIC_RELEASE);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_norm(uint32_t tag, uint8_t new_tt, uintptr_t swtag_norm_op)
+{
+ uint64_t val;
+
+ val = tag | ((uint64_t)(new_tt & 0x3) << 32);
+ plt_write64(val, swtag_norm_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_untag(uintptr_t swtag_untag_op)
+{
+ plt_write64(0, swtag_untag_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_flush(uint64_t tag_op, uint64_t flush_op)
+{
+ if (CNXK_TT_FROM_TAG(plt_read64(tag_op)) == SSO_TT_EMPTY)
+ return;
+ plt_write64(0, flush_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_wait(uintptr_t tag_op)
+{
+#ifdef RTE_ARCH_ARM64
+ uint64_t swtp;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbz %[swtb], 62, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbnz %[swtb], 62, rty%= \n"
+ "done%=: \n"
+ : [swtb] "=&r"(swtp)
+ : [swtp_loc] "r"(tag_op));
+#else
+ /* Wait for the SWTAG/SWTAG_FULL operation */
+ while (plt_read64(tag_op) & BIT_ULL(62))
+ ;
+#endif
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_head_wait(uintptr_t tag_op)
+{
+#ifdef RTE_ARCH_ARM64
+ uint64_t swtp;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbnz %[swtb], 35, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbz %[swtb], 35, rty%= \n"
+ "done%=: \n"
+ : [swtb] "=&r"(swtp)
+ : [swtp_loc] "r"(tag_op));
+#else
+ /* Wait for the HEAD bit to be set. */
+ while (!(plt_read64(tag_op) & BIT_ULL(35)))
+ ;
+#endif
+}
+
+#endif
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index c969595c1..5a9cc9f57 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -8,7 +8,9 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
subdir_done()
endif
-sources = files('cn10k_eventdev.c',
+sources = files('cn10k_worker.c',
+ 'cn10k_eventdev.c',
+ 'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c')
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH 14/36] event/cnxk: add SSO GWS fastpath enqueue functions
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (12 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 13/36] event/cnxk: add SSO HW device operations pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 15/36] event/cnxk: add SSO GWS dequeue fastpath functions pbhagavatula
` (22 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov
Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO GWS fastpath event device enqueue functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 16 +++-
drivers/event/cnxk/cn10k_worker.c | 54 ++++++++++++++
drivers/event/cnxk/cn10k_worker.h | 12 +++
drivers/event/cnxk/cn9k_eventdev.c | 25 ++++++-
drivers/event/cnxk/cn9k_worker.c | 112 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_worker.h | 24 ++++++
6 files changed, 241 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 310acc011..16848798c 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -2,7 +2,9 @@
* Copyright(C) 2021 Marvell International Ltd.
*/
+#include "cn10k_worker.h"
#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
static void
cn10k_init_hws_ops(struct cn10k_sso_hws *ws, uintptr_t base)
@@ -130,6 +132,16 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void
+cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
+{
+ PLT_SET_USED(event_dev);
+ event_dev->enqueue = cn10k_sso_hws_enq;
+ event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
+ event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
+ event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
+}
+
static void
cn10k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -276,8 +288,10 @@ cn10k_sso_init(struct rte_eventdev *event_dev)
event_dev->dev_ops = &cn10k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ cn10k_sso_fp_fns_set(event_dev);
return 0;
+ }
rc = cnxk_sso_init(event_dev);
if (rc < 0)
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 4a7d0b535..cef24f4e2 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -5,3 +5,57 @@
#include "cn10k_worker.h"
#include "cnxk_eventdev.h"
#include "cnxk_worker.h"
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq(void *port, const struct rte_event *ev)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn10k_sso_hws_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn10k_sso_hws_forward_event(ws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(ws->tag_wqe_op, ws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn10k_sso_hws_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn10k_sso_hws *ws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn10k_sso_hws_new_event(ws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ RTE_SET_USED(nb_events);
+ cn10k_sso_hws_forward_event(ws, ev);
+
+ return 1;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 0a7cb9c57..d75e92846 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -148,4 +148,16 @@ cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev)
return !!gw.u64[1];
}
+/* CN10K Fastpath functions. */
+uint16_t __rte_hot cn10k_sso_hws_enq(void *port, const struct rte_event *ev);
+uint16_t __rte_hot cn10k_sso_hws_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
#endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 44c7a0c3a..7e4c1b415 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -2,7 +2,9 @@
* Copyright(C) 2021 Marvell International Ltd.
*/
+#include "cn9k_worker.h"
#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
#define CN9K_DUAL_WS_NB_WS 2
#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
@@ -150,6 +152,25 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void
+cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ event_dev->enqueue = cn9k_sso_hws_enq;
+ event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
+ event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
+ event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+
+ if (dev->dual_ws) {
+ event_dev->enqueue = cn9k_sso_hws_dual_enq;
+ event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
+ event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
+ event_dev->enqueue_forward_burst =
+ cn9k_sso_hws_dual_enq_fwd_burst;
+ }
+}
+
static void *
cn9k_sso_init_hws_mem(void *arg, uint8_t port_id)
{
@@ -349,8 +370,10 @@ cn9k_sso_init(struct rte_eventdev *event_dev)
event_dev->dev_ops = &cn9k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ cn9k_sso_fp_fns_set(event_dev);
return 0;
+ }
rc = cnxk_sso_init(event_dev);
if (rc < 0)
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index 77856f2e7..c2f09a99b 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -5,3 +5,115 @@
#include "roc_api.h"
#include "cn9k_worker.h"
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq(void *port, const struct rte_event *ev)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn9k_sso_hws_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn9k_sso_hws_forward_event(ws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn9k_sso_hws_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws *ws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn9k_sso_hws_new_event(ws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ RTE_SET_USED(nb_events);
+ cn9k_sso_hws_forward_event(ws, ev);
+
+ return 1;
+}
+
+/* Dual ws ops. */
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq(void *port, const struct rte_event *ev)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ struct cn9k_sso_hws_state *vws;
+
+ vws = &dws->ws_state[!dws->vws];
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn9k_sso_hws_dual_new_event(dws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn9k_sso_hws_dual_forward_event(dws, vws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(vws->tag_op, vws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn9k_sso_hws_dual_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn9k_sso_hws_dual_new_event(dws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+
+ RTE_SET_USED(nb_events);
+ cn9k_sso_hws_dual_forward_event(dws, &dws->ws_state[!dws->vws], ev);
+
+ return 1;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index ff7851642..e75ed10ad 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -246,4 +246,28 @@ cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
return !!gw.u64[1];
}
+/* CN9K Fastpath functions. */
+uint16_t __rte_hot cn9k_sso_hws_enq(void *port, const struct rte_event *ev);
+uint16_t __rte_hot cn9k_sso_hws_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
+uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
+ const struct rte_event *ev);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
#endif
--
2.17.1
* [dpdk-dev] [PATCH 15/36] event/cnxk: add SSO GWS dequeue fastpath functions
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (13 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 14/36] event/cnxk: add SSO GWS fastpath enqueue functions pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 16/36] event/cnxk: add device start function pbhagavatula
` (21 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO GWS event dequeue fastpath functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 10 ++-
drivers/event/cnxk/cn10k_worker.c | 54 +++++++++++++
drivers/event/cnxk/cn10k_worker.h | 12 +++
drivers/event/cnxk/cn9k_eventdev.c | 15 ++++
drivers/event/cnxk/cn9k_worker.c | 117 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_worker.h | 24 ++++++
6 files changed, 231 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 16848798c..a9948e1b2 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -135,11 +135,19 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
static void
cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
{
- PLT_SET_USED(event_dev);
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
event_dev->enqueue = cn10k_sso_hws_enq;
event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
+
+ event_dev->dequeue = cn10k_sso_hws_deq;
+ event_dev->dequeue_burst = cn10k_sso_hws_deq_burst;
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = cn10k_sso_hws_tmo_deq;
+ event_dev->dequeue_burst = cn10k_sso_hws_tmo_deq_burst;
+ }
}
static void
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index cef24f4e2..57b0714bb 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -59,3 +59,57 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+uint16_t __rte_hot
+cn10k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ RTE_SET_USED(timeout_ticks);
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
+ return 1;
+ }
+
+ return cn10k_sso_hws_get_work(ws, ev);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
+ uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn10k_sso_hws_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn10k_sso_hws *ws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
+ return ret;
+ }
+
+ ret = cn10k_sso_hws_get_work(ws, ev);
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
+ ret = cn10k_sso_hws_get_work(ws, ev);
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn10k_sso_hws_tmo_deq(port, ev, timeout_ticks);
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index d75e92846..ed4e3bd63 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -160,4 +160,16 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_tmo_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
#endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 7e4c1b415..8100140fc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -162,12 +162,27 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+ event_dev->dequeue = cn9k_sso_hws_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_deq_burst;
+ if (dev->deq_tmo_ns) {
+ event_dev->dequeue = cn9k_sso_hws_tmo_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_tmo_deq_burst;
+ }
+
if (dev->dual_ws) {
event_dev->enqueue = cn9k_sso_hws_dual_enq;
event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
event_dev->enqueue_forward_burst =
cn9k_sso_hws_dual_enq_fwd_burst;
+
+ event_dev->dequeue = cn9k_sso_hws_dual_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_dual_deq_burst;
+ if (dev->deq_tmo_ns) {
+ event_dev->dequeue = cn9k_sso_hws_dual_tmo_deq;
+ event_dev->dequeue_burst =
+ cn9k_sso_hws_dual_tmo_deq_burst;
+ }
}
}
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index c2f09a99b..41ffd88a0 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -60,6 +60,60 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+uint16_t __rte_hot
+cn9k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ RTE_SET_USED(timeout_ticks);
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_op);
+ return 1;
+ }
+
+ return cn9k_sso_hws_get_work(ws, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
+ uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws *ws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_op);
+ return ret;
+ }
+
+ ret = cn9k_sso_hws_get_work(ws, ev);
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
+ ret = cn9k_sso_hws_get_work(ws, ev);
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_tmo_deq(port, ev, timeout_ticks);
+}
+
/* Dual ws ops. */
uint16_t __rte_hot
@@ -117,3 +171,66 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t gw;
+
+ RTE_SET_USED(timeout_ticks);
+ if (dws->swtag_req) {
+ dws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(dws->ws_state[!dws->vws].tag_op);
+ return 1;
+ }
+
+ gw = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ return gw;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_dual_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (dws->swtag_req) {
+ dws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(dws->ws_state[!dws->vws].tag_op);
+ return ret;
+ }
+
+ ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) {
+ ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ }
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_dual_tmo_deq(port, ev, timeout_ticks);
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index e75ed10ad..b997db2fe 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -270,4 +270,28 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
+uint16_t __rte_hot cn9k_sso_hws_dual_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
#endif
--
2.17.1
* [dpdk-dev] [PATCH 16/36] event/cnxk: add device start function
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (14 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 15/36] event/cnxk: add SSO GWS dequeue fastpath functions pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 17/36] event/cnxk: add device stop and close functions pbhagavatula
` (20 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add eventdev start function along with a few cleanup APIs to maintain
sanity.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 127 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 113 +++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 64 ++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 7 ++
4 files changed, 311 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index a9948e1b2..0de44ed43 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -112,6 +112,117 @@ cn10k_sso_hws_release(void *arg, void *hws)
memset(ws, 0, sizeof(*ws));
}
+static void
+cn10k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg)
+{
+ struct cn10k_sso_hws *ws = hws;
+ uint64_t cq_ds_cnt = 1;
+ uint64_t aq_cnt = 1;
+ uint64_t ds_cnt = 1;
+ struct rte_event ev;
+ uint64_t val, req;
+
+ plt_write64(0, base + SSO_LF_GGRP_QCTL);
+
+ req = queue_id; /* GGRP ID */
+ req |= BIT_ULL(18); /* Grouped */
+ req |= BIT_ULL(16); /* WAIT */
+
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+
+ while (aq_cnt || cq_ds_cnt || ds_cnt) {
+ plt_write64(req, ws->getwrk_op);
+ cn10k_sso_hws_get_work_empty(ws, &ev);
+ if (fn != NULL && ev.u64 != 0)
+ fn(arg, ev);
+ if (ev.sched_type != SSO_TT_EMPTY)
+ cnxk_sso_hws_swtag_flush(ws->tag_wqe_op,
+ ws->swtag_flush_op);
+ do {
+ val = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+ } while (val & BIT_ULL(56));
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ /* Extract cq and ds count */
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+ }
+
+ plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+ rte_mb();
+}
+
+static void
+cn10k_sso_hws_reset(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = hws;
+ uintptr_t base = ws->base;
+ uint64_t pend_state;
+ union {
+ __uint128_t wdata;
+ uint64_t u64[2];
+ } gw;
+ uint8_t pend_tt;
+
+ /* Wait till getwork/swtp/waitw/desched completes. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+ BIT_ULL(56) | BIT_ULL(54)));
+ pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(base +
+ SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+
+ /* Wait for desched to complete. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & BIT_ULL(58));
+
+ switch (dev->gw_mode) {
+ case CN10K_GW_MODE_PREF:
+ while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) & BIT_ULL(63))
+ ;
+ break;
+ case CN10K_GW_MODE_PREF_WFE:
+ while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) &
+ SSOW_LF_GWS_TAG_PEND_GET_WORK_BIT)
+ continue;
+ plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+ break;
+ case CN10K_GW_MODE_NONE:
+ default:
+ break;
+ }
+
+ if (CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_PRF_WQE0)) !=
+ SSO_TT_EMPTY) {
+ plt_write64(BIT_ULL(16) | 1, ws->getwrk_op);
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+ pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC ||
+ pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(
+ base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+ }
+
+ plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+ rte_mb();
+}
+
static void
cn10k_sso_set_rsrc(void *arg)
{
@@ -263,6 +374,20 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
return (int)nb_unlinks;
}
+static int
+cn10k_sso_start(struct rte_eventdev *event_dev)
+{
+ int rc;
+
+ rc = cnxk_sso_start(event_dev, cn10k_sso_hws_reset,
+ cn10k_sso_hws_flush_events);
+ if (rc < 0)
+ return rc;
+ cn10k_sso_fp_fns_set(event_dev);
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -275,6 +400,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_link = cn10k_sso_port_link,
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+
+ .dev_start = cn10k_sso_start,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 8100140fc..39f29b687 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -126,6 +126,102 @@ cn9k_sso_hws_release(void *arg, void *hws)
}
}
+static void
+cn9k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(arg);
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws_state *st;
+ struct cn9k_sso_hws *ws;
+ uint64_t cq_ds_cnt = 1;
+ uint64_t aq_cnt = 1;
+ uint64_t ds_cnt = 1;
+ struct rte_event ev;
+ uintptr_t ws_base;
+ uint64_t val, req;
+
+ plt_write64(0, base + SSO_LF_GGRP_QCTL);
+
+ req = queue_id; /* GGRP ID */
+ req |= BIT_ULL(18); /* Grouped */
+ req |= BIT_ULL(16); /* WAIT */
+
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+
+ if (dev->dual_ws) {
+ dws = hws;
+ st = &dws->ws_state[0];
+ ws_base = dws->base[0];
+ } else {
+ ws = hws;
+ st = (struct cn9k_sso_hws_state *)ws;
+ ws_base = ws->base;
+ }
+
+ while (aq_cnt || cq_ds_cnt || ds_cnt) {
+ plt_write64(req, st->getwrk_op);
+ cn9k_sso_hws_get_work_empty(st, &ev);
+ if (fn != NULL && ev.u64 != 0)
+ fn(arg, ev);
+ if (ev.sched_type != SSO_TT_EMPTY)
+ cnxk_sso_hws_swtag_flush(st->tag_op,
+ st->swtag_flush_op);
+ do {
+ val = plt_read64(ws_base + SSOW_LF_GWS_PENDSTATE);
+ } while (val & BIT_ULL(56));
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ /* Extract cq and ds count */
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+ }
+
+ plt_write64(0, ws_base + SSOW_LF_GWS_OP_GWC_INVAL);
+}
+
+static void
+cn9k_sso_hws_reset(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ uint64_t pend_state;
+ uint8_t pend_tt;
+ uintptr_t base;
+ uint64_t tag;
+ uint8_t i;
+
+ dws = hws;
+ ws = hws;
+ for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
+ base = dev->dual_ws ? dws->base[i] : ws->base;
+ /* Wait till getwork/swtp/waitw/desched completes. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+ BIT_ULL(56)));
+
+ tag = plt_read64(base + SSOW_LF_GWS_TAG);
+ pend_tt = (tag >> 32) & 0x3;
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC ||
+ pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(
+ base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+
+ /* Wait for desched to complete. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & BIT_ULL(58));
+ }
+}
+
static void
cn9k_sso_set_rsrc(void *arg)
{
@@ -352,6 +448,21 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
return (int)nb_unlinks;
}
+static int
+cn9k_sso_start(struct rte_eventdev *event_dev)
+{
+ int rc;
+
+ rc = cnxk_sso_start(event_dev, cn9k_sso_hws_reset,
+ cn9k_sso_hws_flush_events);
+ if (rc < 0)
+ return rc;
+
+ cn9k_sso_fp_fns_set(event_dev);
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -364,6 +475,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_link = cn9k_sso_port_link,
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+
+ .dev_start = cn9k_sso_start,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 0e2cc3681..0059b0eca 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -326,6 +326,70 @@ cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
return 0;
}
+static void
+cnxk_handle_event(void *arg, struct rte_event event)
+{
+ struct rte_eventdev *event_dev = arg;
+
+ if (event_dev->dev_ops->dev_stop_flush != NULL)
+ event_dev->dev_ops->dev_stop_flush(
+ event_dev->data->dev_id, event,
+ event_dev->data->dev_stop_flush_arg);
+}
+
+static void
+cnxk_sso_cleanup(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn, uint8_t enable)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uintptr_t hwgrp_base;
+ uint16_t i;
+ void *ws;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ ws = event_dev->data->ports[i];
+ reset_fn(dev, ws);
+ }
+
+ rte_mb();
+ ws = event_dev->data->ports[0];
+
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ /* Consume all the events through HWS0 */
+ hwgrp_base = roc_sso_hwgrp_base_get(&dev->sso, i);
+ flush_fn(ws, i, hwgrp_base, cnxk_handle_event, event_dev);
+ /* Enable/Disable SSO GGRP */
+ plt_write64(enable, hwgrp_base + SSO_LF_GGRP_QCTL);
+ }
+}
+
+int
+cnxk_sso_start(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_qos qos[dev->qos_queue_cnt];
+ int i, rc;
+
+ plt_sso_dbg();
+ for (i = 0; i < dev->qos_queue_cnt; i++) {
+ qos[i].hwgrp = dev->qos_parse_data[i].queue;
+ qos[i].iaq_prcnt = dev->qos_parse_data[i].iaq_prcnt;
+ qos[i].taq_prcnt = dev->qos_parse_data[i].taq_prcnt;
+ qos[i].xaq_prcnt = dev->qos_parse_data[i].xaq_prcnt;
+ }
+ rc = roc_sso_hwgrp_qos_config(&dev->sso, qos, dev->qos_queue_cnt,
+ dev->xae_cnt);
+ if (rc < 0) {
+ plt_sso_dbg("failed to configure HWGRP QoS rc = %d", rc);
+ return -EINVAL;
+ }
+ cnxk_sso_cleanup(event_dev, reset_fn, flush_fn, true);
+ rte_mb();
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index ac55d0ccb..6ead171c0 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -48,6 +48,10 @@ typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
uint16_t nb_link);
+typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
+typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
+typedef void (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg);
struct cnxk_sso_qos {
uint16_t queue;
@@ -198,5 +202,8 @@ int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
cnxk_sso_hws_setup_t hws_setup_fn);
int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
uint64_t *tmo_ticks);
+int cnxk_sso_start(struct rte_eventdev *event_dev,
+ cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH 17/36] event/cnxk: add device stop and close functions
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (15 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 16/36] event/cnxk: add device start function pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 18/36] event/cnxk: add SSO selftest and dump pbhagavatula
` (19 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add event device stop and close callback functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 15 +++++++++
drivers/event/cnxk/cn9k_eventdev.c | 14 +++++++++
drivers/event/cnxk/cnxk_eventdev.c | 48 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 ++++
4 files changed, 83 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 0de44ed43..6a0b9bcd9 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -388,6 +388,19 @@ cn10k_sso_start(struct rte_eventdev *event_dev)
return rc;
}
+static void
+cn10k_sso_stop(struct rte_eventdev *event_dev)
+{
+ cnxk_sso_stop(event_dev, cn10k_sso_hws_reset,
+ cn10k_sso_hws_flush_events);
+}
+
+static int
+cn10k_sso_close(struct rte_eventdev *event_dev)
+{
+ return cnxk_sso_close(event_dev, cn10k_sso_hws_unlink);
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -402,6 +415,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.timeout_ticks = cnxk_sso_timeout_ticks,
.dev_start = cn10k_sso_start,
+ .dev_stop = cn10k_sso_stop,
+ .dev_close = cn10k_sso_close,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 39f29b687..195ed49d8 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -463,6 +463,18 @@ cn9k_sso_start(struct rte_eventdev *event_dev)
return rc;
}
+static void
+cn9k_sso_stop(struct rte_eventdev *event_dev)
+{
+ cnxk_sso_stop(event_dev, cn9k_sso_hws_reset, cn9k_sso_hws_flush_events);
+}
+
+static int
+cn9k_sso_close(struct rte_eventdev *event_dev)
+{
+ return cnxk_sso_close(event_dev, cn9k_sso_hws_unlink);
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -477,6 +489,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.timeout_ticks = cnxk_sso_timeout_ticks,
.dev_start = cn9k_sso_start,
+ .dev_stop = cn9k_sso_stop,
+ .dev_close = cn9k_sso_close,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 0059b0eca..01685633d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -390,6 +390,54 @@ cnxk_sso_start(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
return 0;
}
+void
+cnxk_sso_stop(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn)
+{
+ plt_sso_dbg();
+ cnxk_sso_cleanup(event_dev, reset_fn, flush_fn, false);
+ rte_mb();
+}
+
+int
+cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
+ uint16_t i;
+ void *ws;
+
+ if (!dev->configured)
+ return 0;
+
+ for (i = 0; i < dev->nb_event_queues; i++)
+ all_queues[i] = i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ ws = event_dev->data->ports[i];
+ unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+ rte_free(cnxk_sso_hws_get_cookie(ws));
+ event_dev->data->ports[i] = NULL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+ rte_mempool_free(dev->xaq_pool);
+ rte_memzone_free(rte_memzone_lookup(CNXK_SSO_FC_NAME));
+
+ dev->fc_iova = 0;
+ dev->fc_mem = NULL;
+ dev->xaq_pool = NULL;
+ dev->configured = false;
+ dev->is_timeout_deq = 0;
+ dev->nb_event_ports = 0;
+ dev->max_num_events = -1;
+ dev->nb_event_queues = 0;
+ dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+ dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 6ead171c0..1030d5840 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -48,6 +48,8 @@ typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
uint16_t nb_link);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
+ uint16_t nb_link);
typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
typedef void (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
@@ -205,5 +207,9 @@ int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
int cnxk_sso_start(struct rte_eventdev *event_dev,
cnxk_sso_hws_reset_t reset_fn,
cnxk_sso_hws_flush_t flush_fn);
+void cnxk_sso_stop(struct rte_eventdev *event_dev,
+ cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn);
+int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH 18/36] event/cnxk: add SSO selftest and dump
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (16 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 17/36] event/cnxk: add device stop and close functions pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 19/36] event/cnxk: support event timer pbhagavatula
` (18 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a selftest to verify the sanity of the SSO and a function to
dump its internal state.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 14 +
drivers/event/cnxk/cn10k_eventdev.c | 8 +
drivers/event/cnxk/cn9k_eventdev.c | 10 +-
drivers/event/cnxk/cnxk_eventdev.c | 8 +
drivers/event/cnxk/cnxk_eventdev.h | 5 +
drivers/event/cnxk/cnxk_sso_selftest.c | 1570 ++++++++++++++++++++++++
drivers/event/cnxk/meson.build | 3 +-
7 files changed, 1616 insertions(+), 2 deletions(-)
create mode 100644 drivers/event/cnxk/cnxk_sso_selftest.c
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 27ca5a649..107003f0b 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1042,6 +1042,18 @@ test_eventdev_selftest_dlb2(void)
return test_eventdev_selftest_impl("dlb2_event", "");
}
+static int
+test_eventdev_selftest_cn9k(void)
+{
+ return test_eventdev_selftest_impl("event_cn9k", "");
+}
+
+static int
+test_eventdev_selftest_cn10k(void)
+{
+ return test_eventdev_selftest_impl("event_cn10k", "");
+}
+
REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw);
REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
@@ -1051,3 +1063,5 @@ REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
REGISTER_TEST_COMMAND(eventdev_selftest_dlb, test_eventdev_selftest_dlb);
REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
+REGISTER_TEST_COMMAND(eventdev_selftest_cn9k, test_eventdev_selftest_cn9k);
+REGISTER_TEST_COMMAND(eventdev_selftest_cn10k, test_eventdev_selftest_cn10k);
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 6a0b9bcd9..74070e005 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -401,6 +401,12 @@ cn10k_sso_close(struct rte_eventdev *event_dev)
return cnxk_sso_close(event_dev, cn10k_sso_hws_unlink);
}
+static int
+cn10k_sso_selftest(void)
+{
+ return cnxk_sso_selftest(RTE_STR(event_cn10k));
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -414,9 +420,11 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .dump = cnxk_sso_dump,
.dev_start = cn10k_sso_start,
.dev_stop = cn10k_sso_stop,
.dev_close = cn10k_sso_close,
+ .dev_selftest = cn10k_sso_selftest,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 195ed49d8..4fb0f1ccc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -222,7 +222,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
}
}
-static void
+void
cn9k_sso_set_rsrc(void *arg)
{
struct cnxk_sso_evdev *dev = arg;
@@ -475,6 +475,12 @@ cn9k_sso_close(struct rte_eventdev *event_dev)
return cnxk_sso_close(event_dev, cn9k_sso_hws_unlink);
}
+static int
+cn9k_sso_selftest(void)
+{
+ return cnxk_sso_selftest(RTE_STR(event_cn9k));
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -488,9 +494,11 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .dump = cnxk_sso_dump,
.dev_start = cn9k_sso_start,
.dev_stop = cn9k_sso_stop,
.dev_close = cn9k_sso_close,
+ .dev_selftest = cn9k_sso_selftest,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 01685633d..dbd35ca5d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -326,6 +326,14 @@ cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
return 0;
}
+void
+cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ roc_sso_dump(&dev->sso, dev->sso.nb_hws, dev->sso.nb_hwgrp, f);
+}
+
static void
cnxk_handle_event(void *arg, struct rte_event event)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 1030d5840..ee7dce5f5 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -211,5 +211,10 @@ void cnxk_sso_stop(struct rte_eventdev *event_dev,
cnxk_sso_hws_reset_t reset_fn,
cnxk_sso_hws_flush_t flush_fn);
int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
+int cnxk_sso_selftest(const char *dev_name);
+void cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f);
+
+/* CN9K */
+void cn9k_sso_set_rsrc(void *arg);
#endif /* __CNXK_EVENTDEV_H__ */
diff --git a/drivers/event/cnxk/cnxk_sso_selftest.c b/drivers/event/cnxk/cnxk_sso_selftest.c
new file mode 100644
index 000000000..c99a81327
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_sso_selftest.c
@@ -0,0 +1,1570 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_hexdump.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_memcpy.h>
+#include <rte_per_lcore.h>
+#include <rte_random.h>
+#include <rte_test.h>
+
+#include "cnxk_eventdev.h"
+
+#define NUM_PACKETS (1024)
+#define MAX_EVENTS (1024)
+#define MAX_STAGES (255)
+
+#define CNXK_TEST_RUN(setup, teardown, test) \
+ cnxk_test_run(setup, teardown, test, #test)
+
+static int total;
+static int passed;
+static int failed;
+static int unsupported;
+
+static int evdev;
+static struct rte_mempool *eventdev_test_mempool;
+
+struct event_attr {
+ uint32_t flow_id;
+ uint8_t event_type;
+ uint8_t sub_event_type;
+ uint8_t sched_type;
+ uint8_t queue;
+ uint8_t port;
+};
+
+static uint32_t seqn_list_index;
+static int seqn_list[NUM_PACKETS];
+
+static inline void
+seqn_list_init(void)
+{
+ RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
+ memset(seqn_list, 0, sizeof(seqn_list));
+ seqn_list_index = 0;
+}
+
+static inline int
+seqn_list_update(int val)
+{
+ if (seqn_list_index >= NUM_PACKETS)
+ return -1;
+
+ seqn_list[seqn_list_index++] = val;
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ return 0;
+}
+
+static inline int
+seqn_list_check(int limit)
+{
+ int i;
+
+ for (i = 0; i < limit; i++) {
+ if (seqn_list[i] != i) {
+ plt_err("Seqn mismatch %d %d", seqn_list[i], i);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+struct test_core_param {
+ uint32_t *total_events;
+ uint64_t dequeue_tmo_ticks;
+ uint8_t port;
+ uint8_t sched_type;
+};
+
+static int
+testsuite_setup(const char *eventdev_name)
+{
+ evdev = rte_event_dev_get_dev_id(eventdev_name);
+ if (evdev < 0) {
+ plt_err("%d: Eventdev %s not found", __LINE__, eventdev_name);
+ return -1;
+ }
+ return 0;
+}
+
+static void
+testsuite_teardown(void)
+{
+ rte_event_dev_close(evdev);
+ total = 0;
+ passed = 0;
+ failed = 0;
+ unsupported = 0;
+}
+
+static inline void
+devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
+ struct rte_event_dev_info *info)
+{
+ memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
+ dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
+ dev_conf->nb_event_ports = info->max_event_ports;
+ dev_conf->nb_event_queues = info->max_event_queues;
+ dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
+ dev_conf->nb_event_port_dequeue_depth =
+ info->max_event_port_dequeue_depth;
+ dev_conf->nb_event_port_enqueue_depth =
+ info->max_event_port_enqueue_depth;
+ dev_conf->nb_events_limit = info->max_num_events;
+}
+
+enum {
+ TEST_EVENTDEV_SETUP_DEFAULT,
+ TEST_EVENTDEV_SETUP_PRIORITY,
+ TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT,
+};
+
+static inline int
+_eventdev_setup(int mode)
+{
+ const char *pool_name = "evdev_cnxk_test_pool";
+ struct rte_event_dev_config dev_conf;
+ struct rte_event_dev_info info;
+ int i, ret;
+
+ /* Create and destroy the pool for each test case to keep it standalone */
+ eventdev_test_mempool = rte_pktmbuf_pool_create(
+ pool_name, MAX_EVENTS, 0, 0, 512, rte_socket_id());
+ if (!eventdev_test_mempool) {
+ plt_err("ERROR creating mempool");
+ return -1;
+ }
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+ devconf_set_default_sane_values(&dev_conf, &info);
+ if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
+ dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
+
+ ret = rte_event_dev_configure(evdev, &dev_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+ uint32_t queue_count;
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
+ if (queue_count > 8)
+ queue_count = 8;
+
+ /* Configure event queues(0 to n) with
+ * RTE_EVENT_DEV_PRIORITY_HIGHEST to
+ * RTE_EVENT_DEV_PRIORITY_LOWEST
+ */
+ uint8_t step =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) / queue_count;
+ for (i = 0; i < (int)queue_count; i++) {
+ struct rte_event_queue_conf queue_conf;
+
+ ret = rte_event_queue_default_conf_get(evdev, i,
+ &queue_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
+ i);
+ queue_conf.priority = i * step;
+ ret = rte_event_queue_setup(evdev, i, &queue_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+ i);
+ }
+
+ } else {
+ /* Configure event queues with default priority */
+ for (i = 0; i < (int)queue_count; i++) {
+ ret = rte_event_queue_setup(evdev, i, NULL);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+ i);
+ }
+ }
+ /* Configure event ports */
+ uint32_t port_count;
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &port_count),
+ "Port count get failed");
+ for (i = 0; i < (int)port_count; i++) {
+ ret = rte_event_port_setup(evdev, i, NULL);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
+ ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
+ i);
+ }
+
+ ret = rte_event_dev_start(evdev);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");
+
+ return 0;
+}
+
+static inline int
+eventdev_setup(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT);
+}
+
+static inline int
+eventdev_setup_priority(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_PRIORITY);
+}
+
+static inline int
+eventdev_setup_dequeue_timeout(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT);
+}
+
+static inline void
+eventdev_teardown(void)
+{
+ rte_event_dev_stop(evdev);
+ rte_mempool_free(eventdev_test_mempool);
+}
+
+static inline void
+update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev,
+ uint32_t flow_id, uint8_t event_type,
+ uint8_t sub_event_type, uint8_t sched_type,
+ uint8_t queue, uint8_t port)
+{
+ struct event_attr *attr;
+
+ /* Store the event attributes in mbuf for future reference */
+ attr = rte_pktmbuf_mtod(m, struct event_attr *);
+ attr->flow_id = flow_id;
+ attr->event_type = event_type;
+ attr->sub_event_type = sub_event_type;
+ attr->sched_type = sched_type;
+ attr->queue = queue;
+ attr->port = port;
+
+ ev->flow_id = flow_id;
+ ev->sub_event_type = sub_event_type;
+ ev->event_type = event_type;
+ /* Inject the new event */
+ ev->op = RTE_EVENT_OP_NEW;
+ ev->sched_type = sched_type;
+ ev->queue_id = queue;
+ ev->mbuf = m;
+}
+
+static inline int
+inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
+ uint8_t sched_type, uint8_t queue, uint8_t port,
+ unsigned int events)
+{
+ struct rte_mbuf *m;
+ unsigned int i;
+
+ for (i = 0; i < events; i++) {
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+ *rte_event_pmd_selftest_seqn(m) = i;
+ update_event_and_validation_attr(m, &ev, flow_id, event_type,
+ sub_event_type, sched_type,
+ queue, port);
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ return 0;
+}
+
+static inline int
+check_excess_events(uint8_t port)
+{
+ uint16_t valid_event;
+ struct rte_event ev;
+ int i;
+
+ /* Check for excess events, try for a few times and exit */
+ for (i = 0; i < 32; i++) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+
+ RTE_TEST_ASSERT_SUCCESS(valid_event,
+ "Unexpected valid event=%d",
+ *rte_event_pmd_selftest_seqn(ev.mbuf));
+ }
+ return 0;
+}
+
+static inline int
+generate_random_events(const unsigned int total_events)
+{
+ struct rte_event_dev_info info;
+ uint32_t queue_count;
+ unsigned int i;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+ for (i = 0; i < total_events; i++) {
+ ret = inject_events(
+ rte_rand() % info.max_event_queue_flows /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ rte_rand() % queue_count /* queue */, 0 /* port */,
+ 1 /* events */);
+ if (ret)
+ return -1;
+ }
+ return ret;
+}
+
+static inline int
+validate_event(struct rte_event *ev)
+{
+ struct event_attr *attr;
+
+ attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
+ RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
+ "flow_id mismatch enq=%d deq =%d", attr->flow_id,
+ ev->flow_id);
+ RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
+ "event_type mismatch enq=%d deq =%d",
+ attr->event_type, ev->event_type);
+ RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
+ "sub_event_type mismatch enq=%d deq =%d",
+ attr->sub_event_type, ev->sub_event_type);
+ RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
+ "sched_type mismatch enq=%d deq =%d",
+ attr->sched_type, ev->sched_type);
+ RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
+ "queue mismatch enq=%d deq =%d", attr->queue,
+ ev->queue_id);
+ return 0;
+}
+
+typedef int (*validate_event_cb)(uint32_t index, uint8_t port,
+ struct rte_event *ev);
+
+static inline int
+consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
+{
+ uint32_t events = 0, forward_progress_cnt = 0, index = 0;
+ uint16_t valid_event;
+ struct rte_event ev;
+ int ret;
+
+ while (1) {
+ if (++forward_progress_cnt > UINT16_MAX) {
+ plt_err("Detected deadlock");
+ return -1;
+ }
+
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ forward_progress_cnt = 0;
+ ret = validate_event(&ev);
+ if (ret)
+ return -1;
+
+ if (fn != NULL) {
+ ret = fn(index, port, &ev);
+ RTE_TEST_ASSERT_SUCCESS(
+ ret, "Failed to validate test specific event");
+ }
+
+ ++index;
+
+ rte_pktmbuf_free(ev.mbuf);
+ if (++events >= total_events)
+ break;
+ }
+
+ return check_excess_events(port);
+}
+
+static int
+validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
+{
+ RTE_SET_USED(port);
+ RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf),
+ "index=%d != seqn=%d", index,
+ *rte_event_pmd_selftest_seqn(ev->mbuf));
+ return 0;
+}
+
+static inline int
+test_simple_enqdeq(uint8_t sched_type)
+{
+ int ret;
+
+ ret = inject_events(0 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type */, sched_type, 0 /* queue */,
+ 0 /* port */, MAX_EVENTS);
+ if (ret)
+ return -1;
+
+ return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
+}
+
+static int
+test_simple_enqdeq_ordered(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_simple_enqdeq_atomic(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_simple_enqdeq_parallel(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL);
+}
+
+/*
+ * Generate a prescribed number of events and spread them across available
+ * queues. On dequeue, using single event port(port 0) verify the enqueued
+ * event attributes
+ */
+static int
+test_multi_queue_enq_single_port_deq(void)
+{
+ int ret;
+
+ ret = generate_random_events(MAX_EVENTS);
+ if (ret)
+ return -1;
+
+ return consume_events(0 /* port */, MAX_EVENTS, NULL);
+}
+
+/*
+ * Inject 0..MAX_EVENTS events over 0..queue_count with modulus
+ * operation
+ *
+ * For example, Inject 32 events over 0..7 queues
+ * enqueue events 0, 8, 16, 24 in queue 0
+ * enqueue events 1, 9, 17, 25 in queue 1
+ * ..
+ * ..
+ * enqueue events 7, 15, 23, 31 in queue 7
+ *
+ * On dequeue, Validate the events comes in 0,8,16,24,1,9,17,25..,7,15,23,31
+ * order from queue0(highest priority) to queue7(lowest_priority)
+ */
+static int
+validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
+{
+ uint32_t queue_count;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count > 8)
+ queue_count = 8;
+ uint32_t range = MAX_EVENTS / queue_count;
+ uint32_t expected_val = (index % range) * queue_count;
+
+ expected_val += ev->queue_id;
+ RTE_SET_USED(port);
+ RTE_TEST_ASSERT_EQUAL(
+ *rte_event_pmd_selftest_seqn(ev->mbuf), expected_val,
+ "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
+ *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val,
+ range, queue_count, MAX_EVENTS);
+ return 0;
+}
+
+static int
+test_multi_queue_priority(void)
+{
+ int i, max_evts_roundoff;
+ /* See validate_queue_priority() comments for priority validate logic */
+ uint32_t queue_count;
+ struct rte_mbuf *m;
+ uint8_t queue;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count > 8)
+ queue_count = 8;
+ max_evts_roundoff = MAX_EVENTS / queue_count;
+ max_evts_roundoff *= queue_count;
+
+ for (i = 0; i < max_evts_roundoff; i++) {
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+ *rte_event_pmd_selftest_seqn(m) = i;
+ queue = i % queue_count;
+ update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,
+ 0, RTE_SCHED_TYPE_PARALLEL,
+ queue, 0);
+ rte_event_enqueue_burst(evdev, 0, &ev, 1);
+ }
+
+ return consume_events(0, max_evts_roundoff, validate_queue_priority);
+}
+
+static int
+worker_multi_port_fn(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint32_t *total_events = param->total_events;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+ int ret;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ ret = validate_event(&ev);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ }
+
+ return 0;
+}
+
+static inline int
+wait_workers_to_join(const uint32_t *count)
+{
+ uint64_t cycles, print_cycles;
+
+ cycles = rte_get_timer_cycles();
+ print_cycles = cycles;
+ while (__atomic_load_n(count, __ATOMIC_RELAXED)) {
+ uint64_t new_cycles = rte_get_timer_cycles();
+
+ if (new_cycles - print_cycles > rte_get_timer_hz()) {
+ plt_info("Events %d",
+ __atomic_load_n(count, __ATOMIC_RELAXED));
+ print_cycles = new_cycles;
+ }
+ if (new_cycles - cycles > rte_get_timer_hz() * 10000000000) {
+ plt_err("No schedules for seconds, deadlock (%d)",
+ __atomic_load_n(count, __ATOMIC_RELAXED));
+ rte_event_dev_dump(evdev, stdout);
+ cycles = new_cycles;
+ return -1;
+ }
+ }
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+static inline int
+launch_workers_and_wait(int (*main_thread)(void *),
+ int (*worker_thread)(void *), uint32_t total_events,
+ uint8_t nb_workers, uint8_t sched_type)
+{
+ uint32_t atomic_total_events;
+ struct test_core_param *param;
+ uint64_t dequeue_tmo_ticks;
+ uint8_t port = 0;
+ int w_lcore;
+ int ret;
+
+ if (!nb_workers)
+ return 0;
+
+ __atomic_store_n(&atomic_total_events, total_events, __ATOMIC_RELAXED);
+ seqn_list_init();
+
+ param = malloc(sizeof(struct test_core_param) * nb_workers);
+ if (!param)
+ return -1;
+
+ ret = rte_event_dequeue_timeout_ticks(
+ evdev, rte_rand() % 10000000 /* 10ms */, &dequeue_tmo_ticks);
+ if (ret) {
+ free(param);
+ return -1;
+ }
+
+ param[0].total_events = &atomic_total_events;
+ param[0].sched_type = sched_type;
+ param[0].port = 0;
+ param[0].dequeue_tmo_ticks = dequeue_tmo_ticks;
+ rte_wmb();
+
+ w_lcore = rte_get_next_lcore(
+ /* start core */ -1,
+ /* skip main */ 1,
+ /* wrap */ 0);
+ rte_eal_remote_launch(main_thread, &param[0], w_lcore);
+
+ for (port = 1; port < nb_workers; port++) {
+ param[port].total_events = &atomic_total_events;
+ param[port].sched_type = sched_type;
+ param[port].port = port;
+ param[port].dequeue_tmo_ticks = dequeue_tmo_ticks;
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ w_lcore = rte_get_next_lcore(w_lcore, 1, 0);
+ rte_eal_remote_launch(worker_thread, &param[port], w_lcore);
+ }
+
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ ret = wait_workers_to_join(&atomic_total_events);
+ free(param);
+
+ return ret;
+}
+
+/*
+ * Generate a prescribed number of events and spread them across available
+ * queues. Dequeue the events through multiple ports and verify the enqueued
+ * event attributes
+ */
+static int
+test_multi_queue_enq_multi_port_deq(void)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t nr_ports;
+ int ret;
+
+ ret = generate_random_events(total_events);
+ if (ret)
+ return -1;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ return launch_workers_and_wait(worker_multi_port_fn,
+ worker_multi_port_fn, total_events,
+ nr_ports, 0xff /* invalid */);
+}
+
+static void
+flush(uint8_t dev_id, struct rte_event event, void *arg)
+{
+ unsigned int *count = arg;
+
+ RTE_SET_USED(dev_id);
+ if (event.event_type == RTE_EVENT_TYPE_CPU)
+ *count = *count + 1;
+}
+
+static int
+test_dev_stop_flush(void)
+{
+ unsigned int total_events = MAX_EVENTS, count = 0;
+ int ret;
+
+ ret = generate_random_events(total_events);
+ if (ret)
+ return -1;
+
+ ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count);
+ if (ret)
+ return -2;
+ rte_event_dev_stop(evdev);
+ ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL);
+ if (ret)
+ return -3;
+ RTE_TEST_ASSERT_EQUAL(total_events, count,
+ "count mismatch total_events=%d count=%d",
+ total_events, count);
+
+ return 0;
+}
+
+static int
+validate_queue_to_port_single_link(uint32_t index, uint8_t port,
+ struct rte_event *ev)
+{
+ RTE_SET_USED(index);
+ RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
+ "queue mismatch enq=%d deq =%d", port,
+ ev->queue_id);
+
+ return 0;
+}
+
+/*
+ * Link queue x to port x and verify the link by checking
+ * queue_id == x on events dequeued from port x
+ */
+static int
+test_queue_to_port_single_link(void)
+{
+ int i, nr_links, ret;
+ uint32_t queue_count;
+ uint32_t port_count;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &port_count),
+ "Port count get failed");
+
+ /* Unlink all connections that were created in eventdev_setup */
+ for (i = 0; i < (int)port_count; i++) {
+ ret = rte_event_port_unlink(evdev, i, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+ i);
+ }
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ nr_links = RTE_MIN(port_count, queue_count);
+ const unsigned int total_events = MAX_EVENTS / nr_links;
+
+ /* Link queue x to port x and inject events to queue x through port x */
+ for (i = 0; i < nr_links; i++) {
+ uint8_t queue = (uint8_t)i;
+
+ ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
+ RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
+
+ ret = inject_events(0x100 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ queue /* queue */, i /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+ }
+
+ /* Verify that the events were generated from the correct queue */
+ for (i = 0; i < nr_links; i++) {
+ ret = consume_events(i /* port */, total_events,
+ validate_queue_to_port_single_link);
+ if (ret)
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
+ struct rte_event *ev)
+{
+ RTE_SET_USED(index);
+ RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
+ "queue mismatch enq=%d deq =%d", port,
+ ev->queue_id);
+
+ return 0;
+}
+
+/*
+ * Link all even-numbered queues to port 0 and all odd-numbered queues to
+ * port 1, and verify the links on dequeue
+ */
+static int
+test_queue_to_port_multi_link(void)
+{
+ int ret, port0_events = 0, port1_events = 0;
+ uint32_t nr_queues = 0;
+ uint32_t nr_ports = 0;
+ uint8_t queue, port;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &nr_queues),
+ "Queue count get failed");
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+
+ if (nr_ports < 2) {
+ plt_err("Not enough ports to test ports=%d", nr_ports);
+ return 0;
+ }
+
+ /* Unlink all connections that were created in eventdev_setup */
+ for (port = 0; port < nr_ports; port++) {
+ ret = rte_event_port_unlink(evdev, port, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+ port);
+ }
+
+ unsigned int total_events = MAX_EVENTS / nr_queues;
+ if (!total_events) {
+ nr_queues = MAX_EVENTS;
+ total_events = MAX_EVENTS / nr_queues;
+ }
+
+ /* Link even-numbered queues to port 0 and odd-numbered queues to port 1 */
+ for (queue = 0; queue < nr_queues; queue++) {
+ port = queue & 0x1;
+ ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
+ RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
+ queue, port);
+
+ ret = inject_events(0x100 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ queue /* queue */, port /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+
+ if (port == 0)
+ port0_events += total_events;
+ else
+ port1_events += total_events;
+ }
+
+ ret = consume_events(0 /* port */, port0_events,
+ validate_queue_to_port_multi_link);
+ if (ret)
+ return -1;
+ ret = consume_events(1 /* port */, port1_events,
+ validate_queue_to_port_multi_link);
+ if (ret)
+ return -1;
+
+ return 0;
+}
+
+static int
+worker_flow_based_pipeline(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
+ uint32_t *total_events = param->total_events;
+ uint8_t new_sched_type = param->sched_type;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
+ dequeue_tmo_ticks);
+ if (!valid_event)
+ continue;
+
+ /* Events from stage 0 */
+ if (ev.sub_event_type == 0) {
+ /* Move to atomic flow to maintain the ordering */
+ ev.flow_id = 0x2;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sub_event_type = 1; /* stage 1 */
+ ev.sched_type = new_sched_type;
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ } else if (ev.sub_event_type == 1) { /* Events from stage 1*/
+ uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
+
+ if (seqn_list_update(seqn) == 0) {
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1,
+ __ATOMIC_RELAXED);
+ } else {
+ plt_err("Failed to update seqn_list");
+ return -1;
+ }
+ } else {
+ plt_err("Invalid ev.sub_event_type = %d",
+ ev.sub_event_type);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int
+test_multiport_flow_sched_type_test(uint8_t in_sched_type,
+ uint8_t out_sched_type)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject total_events events, sequence numbered from 0 */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */, in_sched_type, 0 /* queue */,
+ 0 /* port */, total_events /* events */);
+ if (ret)
+ return -1;
+
+ rte_mb();
+ ret = launch_workers_and_wait(worker_flow_based_pipeline,
+ worker_flow_based_pipeline, total_events,
+ nr_ports, out_sched_type);
+ if (ret)
+ return -1;
+
+ if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
+ out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
+ /* Check whether ingress event order was maintained */
+ return seqn_list_check(total_events);
+ }
+
+ return 0;
+}
+
+/* Multi port ordered to atomic transaction */
+static int
+test_multi_port_flow_ordered_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_ordered_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_ordered_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_flow_atomic_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_atomic_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_atomic_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_flow_parallel_to_atomic(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_parallel_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_parallel_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+worker_group_based_pipeline(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
+ uint32_t *total_events = param->total_events;
+ uint8_t new_sched_type = param->sched_type;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
+ dequeue_tmo_ticks);
+ if (!valid_event)
+ continue;
+
+ /* Events from stage 0(group 0) */
+ if (ev.queue_id == 0) {
+ /* Move to atomic flow to maintain the ordering */
+ ev.flow_id = 0x2;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sched_type = new_sched_type;
+ ev.queue_id = 1; /* Stage 1*/
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/
+ uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
+
+ if (seqn_list_update(seqn) == 0) {
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1,
+ __ATOMIC_RELAXED);
+ } else {
+ plt_err("Failed to update seqn_list");
+ return -1;
+ }
+ } else {
+ plt_err("Invalid ev.queue_id = %d", ev.queue_id);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int
+test_multiport_queue_sched_type_test(uint8_t in_sched_type,
+ uint8_t out_sched_type)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t queue_count;
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count < 2 || !nr_ports) {
+ plt_err("Not enough queues=%d ports=%d or workers=%d",
+ queue_count, nr_ports, rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject total_events events, sequence numbered from 0 */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */, in_sched_type, 0 /* queue */,
+ 0 /* port */, total_events /* events */);
+ if (ret)
+ return -1;
+
+ ret = launch_workers_and_wait(worker_group_based_pipeline,
+ worker_group_based_pipeline, total_events,
+ nr_ports, out_sched_type);
+ if (ret)
+ return -1;
+
+ if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
+ out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
+ /* Check whether ingress event order was maintained */
+ return seqn_list_check(total_events);
+ }
+
+ return 0;
+}
+
+static int
+test_multi_port_queue_ordered_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_ordered_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_ordered_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_queue_atomic_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_atomic_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_atomic_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_queue_parallel_to_atomic(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_parallel_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_parallel_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+worker_flow_based_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint32_t *total_events = param->total_events;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.sub_event_type == MAX_STAGES) { /* last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sub_event_type++;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+static int
+launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
+{
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject MAX_EVENTS events, sequence numbered from 0 */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1) /* sched_type */,
+ 0 /* queue */, 0 /* port */, MAX_EVENTS /* events */);
+ if (ret)
+ return -1;
+
+ return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
+ 0xff /* invalid */);
+}
+
+/* Flow based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_flow_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_flow_based_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ uint32_t queue_count;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ uint8_t nr_queues = queue_count;
+ uint32_t *total_events = param->total_events;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.queue_id == nr_queues - 1) { /* last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id++;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+/* Queue based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_queue_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_queue_based_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ uint32_t queue_count;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ uint8_t nr_queues = queue_count;
+ uint32_t *total_events = param->total_events;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.queue_id == nr_queues - 1) { /* Last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id++;
+ ev.sub_event_type = rte_rand() % 256;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+/* Queue and flow based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_mixed_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_mixed_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_ordered_flow_producer(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ struct rte_mbuf *m;
+ int counter = 0;
+
+ while (counter < NUM_PACKETS) {
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ if (m == NULL)
+ continue;
+
+ *rte_event_pmd_selftest_seqn(m) = counter++;
+
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ ev.flow_id = 0x1; /* Generate a fat flow */
+ ev.sub_event_type = 0;
+ /* Inject the new event */
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sched_type = RTE_SCHED_TYPE_ORDERED;
+ ev.queue_id = 0;
+ ev.mbuf = m;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+
+ return 0;
+}
+
+static inline int
+test_producer_consumer_ingress_order_test(int (*fn)(void *))
+{
+ uint32_t nr_ports;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (rte_lcore_count() < 3 || nr_ports < 2) {
+ plt_err("### Not enough cores for test.");
+ return 0;
+ }
+
+ launch_workers_and_wait(worker_ordered_flow_producer, fn, NUM_PACKETS,
+ nr_ports, RTE_SCHED_TYPE_ATOMIC);
+ /* Check whether ingress event order was maintained */
+ return seqn_list_check(NUM_PACKETS);
+}
+
+/* Flow based producer consumer ingress order test */
+static int
+test_flow_producer_consumer_ingress_order_test(void)
+{
+ return test_producer_consumer_ingress_order_test(
+ worker_flow_based_pipeline);
+}
+
+/* Queue based producer consumer ingress order test */
+static int
+test_queue_producer_consumer_ingress_order_test(void)
+{
+ return test_producer_consumer_ingress_order_test(
+ worker_group_based_pipeline);
+}
+
+static void
+cnxk_test_run(int (*setup)(void), void (*tdown)(void), int (*test)(void),
+ const char *name)
+{
+ if (setup() < 0) {
+ printf("Error setting up test %s", name);
+ unsupported++;
+ } else {
+ if (test() < 0) {
+ failed++;
+ printf("+ TestCase [%2d] : %s failed\n", total, name);
+ } else {
+ passed++;
+ printf("+ TestCase [%2d] : %s succeeded\n", total,
+ name);
+ }
+ }
+
+ total++;
+ tdown();
+}
+
+static int
+cnxk_sso_testsuite_run(const char *dev_name)
+{
+ int rc;
+
+ testsuite_setup(dev_name);
+
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_queue_enq_single_port_deq);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown, test_dev_stop_flush);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_queue_enq_multi_port_deq);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_to_port_single_link);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_to_port_multi_link);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_mixed_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_flow_producer_consumer_ingress_order_test);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_producer_consumer_ingress_order_test);
+ CNXK_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
+ test_multi_queue_priority);
+ CNXK_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+ test_multi_port_flow_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+ test_multi_port_queue_ordered_to_atomic);
+ printf("Total tests : %d\n", total);
+ printf("Passed : %d\n", passed);
+ printf("Failed : %d\n", failed);
+ printf("Not supported : %d\n", unsupported);
+
+ rc = failed;
+ testsuite_teardown();
+
+ return rc;
+}
+
+int
+cnxk_sso_selftest(const char *dev_name)
+{
+ const struct rte_memzone *mz;
+ struct cnxk_sso_evdev *dev;
+ int rc = -1;
+
+ mz = rte_memzone_lookup(CNXK_SSO_MZ_NAME);
+ if (mz == NULL)
+ return rc;
+
+ dev = (void *)*((uint64_t *)mz->addr);
+ if (roc_model_runtime_is_cn9k()) {
+ /* Verify single ws mode. */
+ printf("Verifying CN9K Single workslot mode\n");
+ dev->dual_ws = 0;
+ cn9k_sso_set_rsrc(dev);
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ /* Verify dual ws mode. */
+ printf("Verifying CN9K Dual workslot mode\n");
+ dev->dual_ws = 1;
+ cn9k_sso_set_rsrc(dev);
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ }
+
+ if (roc_model_runtime_is_cn10k()) {
+ printf("Verifying CN10K workslot getwork mode none\n");
+ dev->gw_mode = CN10K_GW_MODE_NONE;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ printf("Verifying CN10K workslot getwork mode prefetch\n");
+ dev->gw_mode = CN10K_GW_MODE_PREF;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ printf("Verifying CN10K workslot getwork mode smart prefetch\n");
+ dev->gw_mode = CN10K_GW_MODE_PREF_WFE;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ }
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 5a9cc9f57..8bac4b7f3 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -12,6 +12,7 @@ sources = files('cn10k_worker.c',
'cn10k_eventdev.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
- 'cnxk_eventdev.c')
+ 'cnxk_eventdev.c',
+ 'cnxk_sso_selftest.c')
deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH 19/36] event/cnxk: support event timer
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (17 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 18/36] event/cnxk: add SSO selftest and dump pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 20/36] event/cnxk: add timer adapter capabilities pbhagavatula
` (17 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov
Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter aka TIM initialization on SSO probe.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 6 ++++
drivers/event/cnxk/cnxk_eventdev.c | 3 ++
drivers/event/cnxk/cnxk_eventdev.h | 2 ++
drivers/event/cnxk/cnxk_tim_evdev.c | 47 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 44 +++++++++++++++++++++++++++
drivers/event/cnxk/meson.build | 3 +-
6 files changed, 104 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index b2684d431..662df2971 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -35,6 +35,10 @@ Features of the OCTEON CNXK SSO PMD are:
- Open system with configurable amount of outstanding events limited only by
DRAM
- HW accelerated dequeue timeout support to enable power management
+- HW managed event timers support through TIM, with high precision and
+ time granularity of 2.5us on CN9K and 1us on CN10K.
+- Up to 256 TIM rings aka event timer adapters.
+- Up to 8 rings traversed in parallel.
Prerequisites and Compilation procedure
---------------------------------------
@@ -101,3 +105,5 @@ Debugging Options
+===+============+=======================================================+
| 1 | SSO | --log-level='pmd\.event\.cnxk,8' |
+---+------------+-------------------------------------------------------+
+ | 2 | TIM | --log-level='pmd\.event\.cnxk\.timer,8' |
+ +---+------------+-------------------------------------------------------+
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index dbd35ca5d..c404bb586 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -582,6 +582,8 @@ cnxk_sso_init(struct rte_eventdev *event_dev)
dev->nb_event_queues = 0;
dev->nb_event_ports = 0;
+ cnxk_tim_init(&dev->sso);
+
return 0;
error:
@@ -598,6 +600,7 @@ cnxk_sso_fini(struct rte_eventdev *event_dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ cnxk_tim_fini();
roc_sso_rsrc_fini(&dev->sso);
roc_sso_dev_fini(&dev->sso);
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index ee7dce5f5..e4051a64b 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,6 +14,8 @@
#include "roc_api.h"
+#include "cnxk_tim_evdev.h"
+
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
#define CN9K_SSO_SINGLE_WS "single_ws"
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
new file mode 100644
index 000000000..76b17910f
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+#include "cnxk_tim_evdev.h"
+
+void
+cnxk_tim_init(struct roc_sso *sso)
+{
+ const struct rte_memzone *mz;
+ struct cnxk_tim_evdev *dev;
+ int rc;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ mz = rte_memzone_reserve(RTE_STR(CNXK_TIM_EVDEV_NAME),
+ sizeof(struct cnxk_tim_evdev), 0, 0);
+ if (mz == NULL) {
+ plt_tim_dbg("Unable to allocate memory for TIM Event device");
+ return;
+ }
+ dev = mz->addr;
+
+ dev->tim.roc_sso = sso;
+ rc = roc_tim_init(&dev->tim);
+ if (rc < 0) {
+ plt_err("Failed to initialize roc tim resources");
+ rte_memzone_free(mz);
+ return;
+ }
+ dev->nb_rings = rc;
+ dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+}
+
+void
+cnxk_tim_fini(void)
+{
+ struct cnxk_tim_evdev *dev = tim_priv_get();
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ roc_tim_fini(&dev->tim);
+ rte_memzone_free(rte_memzone_lookup(RTE_STR(CNXK_TIM_EVDEV_NAME)));
+}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
new file mode 100644
index 000000000..6cf0adb21
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CNXK_TIM_EVDEV_H__
+#define __CNXK_TIM_EVDEV_H__
+
+#include <stddef.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <eventdev_pmd_pci.h>
+#include <rte_event_timer_adapter.h>
+#include <rte_memzone.h>
+
+#include "roc_api.h"
+
+#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
+#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+
+struct cnxk_tim_evdev {
+ struct roc_tim tim;
+ struct rte_eventdev *event_dev;
+ uint16_t nb_rings;
+ uint32_t chunk_sz;
+};
+
+static inline struct cnxk_tim_evdev *
+tim_priv_get(void)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(RTE_STR(CNXK_TIM_EVDEV_NAME));
+ if (mz == NULL)
+ return NULL;
+
+ return mz->addr;
+}
+
+void cnxk_tim_init(struct roc_sso *sso);
+void cnxk_tim_fini(void);
+
+#endif /* __CNXK_TIM_EVDEV_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 8bac4b7f3..6e9f3daab 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -13,6 +13,7 @@ sources = files('cn10k_worker.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
- 'cnxk_sso_selftest.c')
+ 'cnxk_sso_selftest.c',
+ 'cnxk_tim_evdev.c')
deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH 20/36] event/cnxk: add timer adapter capabilities
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (18 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 19/36] event/cnxk: support event timer pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 21/36] event/cnxk: create and free timer adapter pbhagavatula
` (16 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add function to retrieve event timer adapter capabilities.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 2 ++
drivers/event/cnxk/cn9k_eventdev.c | 2 ++
drivers/event/cnxk/cnxk_tim_evdev.c | 22 +++++++++++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 6 +++++-
4 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 74070e005..30ca0d901 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -420,6 +420,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .timer_adapter_caps_get = cnxk_tim_caps_get,
+
.dump = cnxk_sso_dump,
.dev_start = cn10k_sso_start,
.dev_stop = cn10k_sso_stop,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 4fb0f1ccc..773152e55 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -494,6 +494,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .timer_adapter_caps_get = cnxk_tim_caps_get,
+
.dump = cnxk_sso_dump,
.dev_start = cn9k_sso_start,
.dev_stop = cn9k_sso_stop,
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 76b17910f..6000b507a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -5,6 +5,26 @@
#include "cnxk_eventdev.h"
#include "cnxk_tim_evdev.h"
+int
+cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops)
+{
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+
+ RTE_SET_USED(flags);
+ RTE_SET_USED(ops);
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ /* Store evdev pointer for later use. */
+ dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
+ *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+
+ return 0;
+}
+
void
cnxk_tim_init(struct roc_sso *sso)
{
@@ -37,7 +57,7 @@ cnxk_tim_init(struct roc_sso *sso)
void
cnxk_tim_fini(void)
{
- struct cnxk_tim_evdev *dev = tim_priv_get();
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return;
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 6cf0adb21..8dcecb281 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -27,7 +27,7 @@ struct cnxk_tim_evdev {
};
static inline struct cnxk_tim_evdev *
-tim_priv_get(void)
+cnxk_tim_priv_get(void)
{
const struct rte_memzone *mz;
@@ -38,6 +38,10 @@ tim_priv_get(void)
return mz->addr;
}
+int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops);
+
void cnxk_tim_init(struct roc_sso *sso);
void cnxk_tim_fini(void);
--
2.17.1
* [dpdk-dev] [PATCH 21/36] event/cnxk: create and free timer adapter
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (19 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 20/36] event/cnxk: add timer adapter capabilities pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 22/36] event/cnxk: add devargs to disable NPA pbhagavatula
` (15 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
When the application creates a timer adapter, the driver does the following:
- Allocate a TIM LF based on the number of LFs provisioned.
- Verify the supplied configuration parameters.
- Allocate the memory required for:
  * buckets, based on the min and max timeout supplied;
  * the chunk pool, based on the number of timers.
On free:
- Free the allocated bucket and chunk memory.
- Free the allocated TIM LF.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 174 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 128 +++++++++++++++++++-
2 files changed, 300 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 6000b507a..986ad8493 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -5,6 +5,177 @@
#include "cnxk_eventdev.h"
#include "cnxk_tim_evdev.h"
+static struct rte_event_timer_adapter_ops cnxk_tim_ops;
+
+static int
+cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
+ struct rte_event_timer_adapter_conf *rcfg)
+{
+ unsigned int cache_sz = (tim_ring->nb_chunks / 1.5);
+ unsigned int mp_flags = 0;
+ char pool_name[25];
+ int rc;
+
+ cache_sz /= rte_lcore_count();
+ /* Create chunk pool. */
+ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
+ mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+ plt_tim_dbg("Using single producer mode");
+ tim_ring->prod_type_sp = true;
+ }
+
+ snprintf(pool_name, sizeof(pool_name), "cnxk_tim_chunk_pool%d",
+ tim_ring->ring_id);
+
+ if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
+ cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
+ cache_sz = cache_sz != 0 ? cache_sz : 2;
+ tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
+ tim_ring->chunk_pool = rte_mempool_create_empty(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz, cache_sz, 0,
+ rte_socket_id(), mp_flags);
+
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
+
+ rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+ rte_mbuf_platform_mempool_ops(), NULL);
+ if (rc < 0) {
+ plt_err("Unable to set chunkpool ops");
+ goto free;
+ }
+
+ rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+ if (rc < 0) {
+ plt_err("Unable to set populate chunkpool.");
+ goto free;
+ }
+ tim_ring->aura =
+ roc_npa_aura_handle_to_aura(tim_ring->chunk_pool->pool_id);
+ tim_ring->ena_dfb = 0;
+
+ return 0;
+
+free:
+ rte_mempool_free(tim_ring->chunk_pool);
+ return rc;
+}
+
+static int
+cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
+{
+ struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ struct cnxk_tim_ring *tim_ring;
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ if (adptr->data->id >= dev->nb_rings)
+ return -ENODEV;
+
+ tim_ring = rte_zmalloc("cnxk_tim_prv", sizeof(struct cnxk_tim_ring), 0);
+ if (tim_ring == NULL)
+ return -ENOMEM;
+
+ rc = roc_tim_lf_alloc(&dev->tim, adptr->data->id, NULL);
+ if (rc < 0) {
+ plt_err("Failed to create timer ring");
+ goto tim_ring_free;
+ }
+
+ if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(
+ rcfg->timer_tick_ns,
+ cnxk_tim_min_resolution_ns(cnxk_tim_cntfrq())),
+ cnxk_tim_cntfrq()) <
+ cnxk_tim_min_tmo_ticks(cnxk_tim_cntfrq())) {
+ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
+ rcfg->timer_tick_ns = TICK2NSEC(
+ cnxk_tim_min_tmo_ticks(cnxk_tim_cntfrq()),
+ cnxk_tim_cntfrq());
+ else {
+ rc = -ERANGE;
+ goto tim_hw_free;
+ }
+ }
+ tim_ring->ring_id = adptr->data->id;
+ tim_ring->clk_src = (int)rcfg->clk_src;
+ tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(
+ rcfg->timer_tick_ns,
+ cnxk_tim_min_resolution_ns(cnxk_tim_cntfrq()));
+ tim_ring->max_tout = rcfg->max_tmo_ns;
+ tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
+ tim_ring->nb_timers = rcfg->nb_timers;
+ tim_ring->chunk_sz = dev->chunk_sz;
+
+ tim_ring->nb_chunks = tim_ring->nb_timers;
+ tim_ring->nb_chunk_slots = CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+ /* Create buckets. */
+ tim_ring->bkt =
+ rte_zmalloc("cnxk_tim_bucket",
+ (tim_ring->nb_bkts) * sizeof(struct cnxk_tim_bkt),
+ RTE_CACHE_LINE_SIZE);
+ if (tim_ring->bkt == NULL)
+ goto tim_hw_free;
+
+ rc = cnxk_tim_chnk_pool_create(tim_ring, rcfg);
+ if (rc < 0)
+ goto tim_bkt_free;
+
+ rc = roc_tim_lf_config(
+ &dev->tim, tim_ring->ring_id,
+ cnxk_tim_convert_clk_src(tim_ring->clk_src), 0, 0,
+ tim_ring->nb_bkts, tim_ring->chunk_sz,
+ NSEC2TICK(tim_ring->tck_nsec, cnxk_tim_cntfrq()));
+ if (rc < 0) {
+ plt_err("Failed to configure timer ring");
+ goto tim_chnk_free;
+ }
+
+ tim_ring->base = roc_tim_lf_base_get(&dev->tim, tim_ring->ring_id);
+ plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
+ plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+
+ plt_tim_dbg(
+ "Total memory used %" PRIu64 "MB\n",
+ (uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
+ (tim_ring->nb_bkts * sizeof(struct cnxk_tim_bkt))) /
+ BIT_ULL(20)));
+
+ adptr->data->adapter_priv = tim_ring;
+ return rc;
+
+tim_chnk_free:
+ rte_mempool_free(tim_ring->chunk_pool);
+tim_bkt_free:
+ rte_free(tim_ring->bkt);
+tim_hw_free:
+ roc_tim_lf_free(&dev->tim, tim_ring->ring_id);
+tim_ring_free:
+ rte_free(tim_ring);
+ return rc;
+}
+
+static int
+cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ roc_tim_lf_free(&dev->tim, tim_ring->ring_id);
+ rte_free(tim_ring->bkt);
+ rte_mempool_free(tim_ring->chunk_pool);
+ rte_free(tim_ring);
+
+ return 0;
+}
+
int
cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
uint32_t *caps,
@@ -18,6 +189,9 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
if (dev == NULL)
return -ENODEV;
+ cnxk_tim_ops.init = cnxk_tim_ring_create;
+ cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 8dcecb281..62bb2f1eb 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -12,12 +12,26 @@
#include <eventdev_pmd_pci.h>
#include <rte_event_timer_adapter.h>
+#include <rte_malloc.h>
#include <rte_memzone.h>
#include "roc_api.h"
-#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
-#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+#define NSECPERSEC 1E9
+#define USECPERSEC 1E6
+#define TICK2NSEC(__tck, __freq) (((__tck)*NSECPERSEC) / (__freq))
+
+#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
+#define CNXK_TIM_MAX_BUCKETS (0xFFFFF)
+#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+#define CNXK_TIM_CHUNK_ALIGNMENT (16)
+#define CNXK_TIM_MAX_BURST \
+ (RTE_CACHE_LINE_SIZE / CNXK_TIM_CHUNK_ALIGNMENT)
+#define CNXK_TIM_NB_CHUNK_SLOTS(sz) (((sz) / CNXK_TIM_CHUNK_ALIGNMENT) - 1)
+#define CNXK_TIM_MIN_CHUNK_SLOTS (0x1)
+#define CNXK_TIM_MAX_CHUNK_SLOTS (0x1FFE)
+
+#define CN9K_TIM_MIN_TMO_TKS (256)
struct cnxk_tim_evdev {
struct roc_tim tim;
@@ -26,6 +40,57 @@ struct cnxk_tim_evdev {
uint32_t chunk_sz;
};
+enum cnxk_tim_clk_src {
+ CNXK_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
+ CNXK_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0,
+ CNXK_TIM_CLK_SRC_GTI = RTE_EVENT_TIMER_ADAPTER_EXT_CLK1,
+ CNXK_TIM_CLK_SRC_PTP = RTE_EVENT_TIMER_ADAPTER_EXT_CLK2,
+};
+
+struct cnxk_tim_bkt {
+ uint64_t first_chunk;
+ union {
+ uint64_t w1;
+ struct {
+ uint32_t nb_entry;
+ uint8_t sbt : 1;
+ uint8_t hbt : 1;
+ uint8_t bsk : 1;
+ uint8_t rsvd : 5;
+ uint8_t lock;
+ int16_t chunk_remainder;
+ };
+ };
+ uint64_t current_chunk;
+ uint64_t pad;
+};
+
+struct cnxk_tim_ring {
+ uintptr_t base;
+ uint16_t nb_chunk_slots;
+ uint32_t nb_bkts;
+ uint64_t tck_int;
+ uint64_t tot_int;
+ struct cnxk_tim_bkt *bkt;
+ struct rte_mempool *chunk_pool;
+ uint64_t arm_cnt;
+ uint8_t prod_type_sp;
+ uint8_t ena_dfb;
+ uint16_t ring_id;
+ uint32_t aura;
+ uint64_t nb_timers;
+ uint64_t tck_nsec;
+ uint64_t max_tout;
+ uint64_t nb_chunks;
+ uint64_t chunk_sz;
+ enum cnxk_tim_clk_src clk_src;
+} __rte_cache_aligned;
+
+struct cnxk_tim_ent {
+ uint64_t w0;
+ uint64_t wqe;
+};
+
static inline struct cnxk_tim_evdev *
cnxk_tim_priv_get(void)
{
@@ -38,6 +103,65 @@ cnxk_tim_priv_get(void)
return mz->addr;
}
+static inline uint64_t
+cnxk_tim_min_tmo_ticks(uint64_t freq)
+{
+ if (roc_model_runtime_is_cn9k())
+ return CN9K_TIM_MIN_TMO_TKS;
+ else /* CN10K min tick is of 1us */
+ return freq / USECPERSEC;
+}
+
+static inline uint64_t
+cnxk_tim_min_resolution_ns(uint64_t freq)
+{
+ return NSECPERSEC / freq;
+}
+
+static inline enum roc_tim_clk_src
+cnxk_tim_convert_clk_src(enum cnxk_tim_clk_src clk_src)
+{
+ switch (clk_src) {
+ case RTE_EVENT_TIMER_ADAPTER_CPU_CLK:
+ return roc_model_runtime_is_cn9k() ? ROC_TIM_CLK_SRC_10NS :
+ ROC_TIM_CLK_SRC_GTI;
+ default:
+ return ROC_TIM_CLK_SRC_INVALID;
+ }
+}
+
+#ifdef RTE_ARCH_ARM64
+static inline uint64_t
+cnxk_tim_cntvct(void)
+{
+ uint64_t tsc;
+
+ asm volatile("mrs %0, cntvct_el0" : "=r"(tsc));
+ return tsc;
+}
+
+static inline uint64_t
+cnxk_tim_cntfrq(void)
+{
+ uint64_t freq;
+
+ asm volatile("mrs %0, cntfrq_el0" : "=r"(freq));
+ return freq;
+}
+#else
+static inline uint64_t
+cnxk_tim_cntvct(void)
+{
+ return 0;
+}
+
+static inline uint64_t
+cnxk_tim_cntfrq(void)
+{
+ return 0;
+}
+#endif
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
--
2.17.1
* [dpdk-dev] [PATCH 22/36] event/cnxk: add devargs to disable NPA
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (20 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 21/36] event/cnxk: create and free timer adapter pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 23/36] event/cnxk: allow adapters to resize inflights pbhagavatula
` (14 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
If the chunks are allocated from the NPA, TIM can automatically free
them while traversing the list of chunks.
Add a devargs option to disable the NPA and manage chunks with a software
mempool instead.
Example:
--dev "0002:0e:00.0,tim_disable_npa=1"
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 10 ++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_eventdev.h | 9 +++
drivers/event/cnxk/cnxk_tim_evdev.c | 86 +++++++++++++++++++++--------
drivers/event/cnxk/cnxk_tim_evdev.h | 5 ++
6 files changed, 92 insertions(+), 24 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 662df2971..9e14f99f2 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -93,6 +93,16 @@ Runtime Config Options
-a 0002:0e:00.0,qos=[1-50-50-50]
+- ``TIM disable NPA``
+
+ By default, chunks are allocated from the NPA, which lets TIM free them
+ automatically while traversing the list of chunks. The ``tim_disable_npa``
+ devargs parameter disables the NPA and manages chunks with a software mempool.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_disable_npa=1
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 30ca0d901..807e666d3 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -502,4 +502,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
- CN10K_SSO_GW_MODE "=<int>");
+ CN10K_SSO_GW_MODE "=<int>"
+ CNXK_TIM_DISABLE_NPA "=1");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 773152e55..3e27fce4a 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -571,4 +571,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
- CN9K_SSO_SINGLE_WS "=1");
+ CN9K_SSO_SINGLE_WS "=1"
+ CNXK_TIM_DISABLE_NPA "=1");
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index e4051a64b..487c7f822 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -159,6 +159,15 @@ struct cnxk_sso_hws_cookie {
bool configured;
} __rte_cache_aligned;
+static inline int
+parse_kvargs_flag(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ *(uint8_t *)opaque = !!atoi(value);
+ return 0;
+}
+
static inline int
parse_kvargs_value(const char *key, const char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 986ad8493..44bcad94d 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -31,30 +31,43 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
cache_sz = cache_sz != 0 ? cache_sz : 2;
tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
- tim_ring->chunk_pool = rte_mempool_create_empty(
- pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz, cache_sz, 0,
- rte_socket_id(), mp_flags);
-
- if (tim_ring->chunk_pool == NULL) {
- plt_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
+ if (!tim_ring->disable_npa) {
+ tim_ring->chunk_pool = rte_mempool_create_empty(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz,
+ cache_sz, 0, rte_socket_id(), mp_flags);
+
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
- rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
- rte_mbuf_platform_mempool_ops(), NULL);
- if (rc < 0) {
- plt_err("Unable to set chunkpool ops");
- goto free;
- }
+ rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+ rte_mbuf_platform_mempool_ops(),
+ NULL);
+ if (rc < 0) {
+ plt_err("Unable to set chunkpool ops");
+ goto free;
+ }
- rc = rte_mempool_populate_default(tim_ring->chunk_pool);
- if (rc < 0) {
- plt_err("Unable to set populate chunkpool.");
- goto free;
+ rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+ if (rc < 0) {
+ plt_err("Unable to set populate chunkpool.");
+ goto free;
+ }
+ tim_ring->aura = roc_npa_aura_handle_to_aura(
+ tim_ring->chunk_pool->pool_id);
+ tim_ring->ena_dfb = 0;
+ } else {
+ tim_ring->chunk_pool = rte_mempool_create(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz,
+ cache_sz, 0, NULL, NULL, NULL, NULL, rte_socket_id(),
+ mp_flags);
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
+ tim_ring->ena_dfb = 1;
}
- tim_ring->aura =
- roc_npa_aura_handle_to_aura(tim_ring->chunk_pool->pool_id);
- tim_ring->ena_dfb = 0;
return 0;
@@ -110,8 +123,17 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
tim_ring->nb_timers = rcfg->nb_timers;
tim_ring->chunk_sz = dev->chunk_sz;
+ tim_ring->disable_npa = dev->disable_npa;
+
+ if (tim_ring->disable_npa) {
+ tim_ring->nb_chunks =
+ tim_ring->nb_timers /
+ CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+ tim_ring->nb_chunks = tim_ring->nb_chunks * tim_ring->nb_bkts;
+ } else {
+ tim_ring->nb_chunks = tim_ring->nb_timers;
+ }
- tim_ring->nb_chunks = tim_ring->nb_timers;
tim_ring->nb_chunk_slots = CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
/* Create buckets. */
tim_ring->bkt =
@@ -199,6 +221,24 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
return 0;
}
+static void
+cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
+{
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ return;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, CNXK_TIM_DISABLE_NPA, &parse_kvargs_flag,
+ &dev->disable_npa);
+
+ rte_kvargs_free(kvlist);
+}
+
void
cnxk_tim_init(struct roc_sso *sso)
{
@@ -217,6 +257,8 @@ cnxk_tim_init(struct roc_sso *sso)
}
dev = mz->addr;
+ cnxk_tim_parse_devargs(sso->pci_dev->device.devargs, dev);
+
dev->tim.roc_sso = sso;
rc = roc_tim_init(&dev->tim);
if (rc < 0) {
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 62bb2f1eb..8c21ab1fe 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -33,11 +33,15 @@
#define CN9K_TIM_MIN_TMO_TKS (256)
+#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
uint16_t nb_rings;
uint32_t chunk_sz;
+ /* Dev args */
+ uint8_t disable_npa;
};
enum cnxk_tim_clk_src {
@@ -75,6 +79,7 @@ struct cnxk_tim_ring {
struct rte_mempool *chunk_pool;
uint64_t arm_cnt;
uint8_t prod_type_sp;
+ uint8_t disable_npa;
uint8_t ena_dfb;
uint16_t ring_id;
uint32_t aura;
--
2.17.1
* [dpdk-dev] [PATCH 23/36] event/cnxk: allow adapters to resize inflights
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (21 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 22/36] event/cnxk: add devargs to disable NPA pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 24/36] event/cnxk: add timer adapter info function pbhagavatula
` (13 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add internal SSO functions to allow event adapters to resize SSO buffers
that are used to hold in-flight events in DRAM.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 33 ++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 7 +++
drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 ++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.c | 5 ++
drivers/event/cnxk/meson.build | 1 +
5 files changed, 113 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index c404bb586..29e38478d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -77,6 +77,9 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
if (dev->xae_cnt)
xaq_cnt += dev->xae_cnt / dev->sso.xae_waes;
+ else if (dev->adptr_xae_cnt)
+ xaq_cnt += (dev->adptr_xae_cnt / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
else
xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
(CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
@@ -125,6 +128,36 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
return rc;
}
+int
+cnxk_sso_xae_reconfigure(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc = 0;
+
+ if (event_dev->data->dev_started)
+ event_dev->dev_ops->dev_stop(event_dev);
+
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0) {
+ plt_err("Failed to alloc XAQ %d", rc);
+ return rc;
+ }
+
+ rte_mb();
+ if (event_dev->data->dev_started)
+ event_dev->dev_ops->dev_start(event_dev);
+
+ return 0;
+}
+
int
cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
cnxk_sso_init_hws_mem_t init_hws_fn,
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 487c7f822..32abf9632 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -81,6 +81,10 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ uint64_t adptr_xae_cnt;
+ uint16_t tim_adptr_ring_cnt;
+ uint16_t *timer_adptr_rings;
+ uint64_t *timer_adptr_sz;
/* Dev args */
uint32_t xae_cnt;
uint8_t qos_queue_cnt;
@@ -190,7 +194,10 @@ cnxk_sso_hws_get_cookie(void *ws)
}
/* Configuration functions */
+int cnxk_sso_xae_reconfigure(struct rte_eventdev *event_dev);
int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
+void cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
+ uint32_t event_type);
/* Common ops API. */
int cnxk_sso_init(struct rte_eventdev *event_dev);
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
new file mode 100644
index 000000000..6d9615453
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+void
+cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
+ uint32_t event_type)
+{
+ int i;
+
+ switch (event_type) {
+ case RTE_EVENT_TYPE_TIMER: {
+ struct cnxk_tim_ring *timr = data;
+ uint16_t *old_ring_ptr;
+ uint64_t *old_sz_ptr;
+
+ for (i = 0; i < dev->tim_adptr_ring_cnt; i++) {
+ if (timr->ring_id != dev->timer_adptr_rings[i])
+ continue;
+ if (timr->nb_timers == dev->timer_adptr_sz[i])
+ return;
+ dev->adptr_xae_cnt -= dev->timer_adptr_sz[i];
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_sz[i] = timr->nb_timers;
+
+ return;
+ }
+
+ dev->tim_adptr_ring_cnt++;
+ old_ring_ptr = dev->timer_adptr_rings;
+ old_sz_ptr = dev->timer_adptr_sz;
+
+ dev->timer_adptr_rings = rte_realloc(
+ dev->timer_adptr_rings,
+ sizeof(uint16_t) * dev->tim_adptr_ring_cnt, 0);
+ if (dev->timer_adptr_rings == NULL) {
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_rings = old_ring_ptr;
+ dev->tim_adptr_ring_cnt--;
+ return;
+ }
+
+ dev->timer_adptr_sz = rte_realloc(
+ dev->timer_adptr_sz,
+ sizeof(uint64_t) * dev->tim_adptr_ring_cnt, 0);
+
+ if (dev->timer_adptr_sz == NULL) {
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_sz = old_sz_ptr;
+ dev->tim_adptr_ring_cnt--;
+ return;
+ }
+
+ dev->timer_adptr_rings[dev->tim_adptr_ring_cnt - 1] =
+ timr->ring_id;
+ dev->timer_adptr_sz[dev->tim_adptr_ring_cnt - 1] =
+ timr->nb_timers;
+
+ dev->adptr_xae_cnt += timr->nb_timers;
+ break;
+ }
+ default:
+ break;
+ }
+}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 44bcad94d..4add1d659 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -161,6 +161,11 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+ /* Update SSO xae count. */
+ cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
+ RTE_EVENT_TYPE_TIMER);
+ cnxk_sso_xae_reconfigure(dev->event_dev);
+
plt_tim_dbg(
"Total memory used %" PRIu64 "MB\n",
(uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 6e9f3daab..a9566cb09 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -12,6 +12,7 @@ sources = files('cn10k_worker.c',
'cn10k_eventdev.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
+ 'cnxk_eventdev_adptr.c',
'cnxk_eventdev.c',
'cnxk_sso_selftest.c',
'cnxk_tim_evdev.c')
--
2.17.1
* [dpdk-dev] [PATCH 24/36] event/cnxk: add timer adapter info function
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (22 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 23/36] event/cnxk: allow adapters to resize inflights pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 25/36] event/cnxk: add devargs for chunk size and rings pbhagavatula
` (12 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add TIM event timer adapter info get function.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 4add1d659..6bbfadb25 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -76,6 +76,18 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
return rc;
}
+static void
+cnxk_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer_adapter_info *adptr_info)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+
+ adptr_info->max_tmo_ns = tim_ring->max_tout;
+ adptr_info->min_resolution_ns = tim_ring->tck_nsec;
+ rte_memcpy(&adptr_info->conf, &adptr->data->conf,
+ sizeof(struct rte_event_timer_adapter_conf));
+}
+
static int
cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
{
@@ -218,6 +230,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
cnxk_tim_ops.init = cnxk_tim_ring_create;
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+ cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
--
2.17.1
* [dpdk-dev] [PATCH 25/36] event/cnxk: add devargs for chunk size and rings
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (23 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 24/36] event/cnxk: add timer adapter info function pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 26/36] event/cnxk: add TIM bucket operations pbhagavatula
` (11 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add devargs to control the default chunk size and the maximum number of
timer rings to attach to a given RVU PF.
Example:
--dev "0002:1e:00.0,tim_chnk_slots=1024"
--dev "0002:1e:00.0,tim_rings_lmt=4"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 23 +++++++++++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 4 +++-
drivers/event/cnxk/cn9k_eventdev.c | 4 +++-
drivers/event/cnxk/cnxk_tim_evdev.c | 14 +++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 4 ++++
5 files changed, 46 insertions(+), 3 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 9e14f99f2..05dcf06f4 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -103,6 +103,29 @@ Runtime Config Options
-a 0002:0e:00.0,tim_disable_npa=1
+- ``TIM modify chunk slots``
+
+ The ``tim_chnk_slots`` devargs can be used to modify the number of chunk slots.
+ Chunks are used to store event timers; a chunk can be visualised as an array
+ where the last element points to the next chunk and the rest are used to
+ store events. TIM traverses the list of chunks and enqueues the event timers
+ to SSO. The default value is 255 and the max value is 4095.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_chnk_slots=1023
+
+- ``TIM limit max rings reserved``
+
+ The ``tim_rings_lmt`` devargs can be used to limit the maximum number of TIM
+ rings (i.e. event timer adapters) reserved on probe. Since TIM rings are HW
+ resources, limiting the number reserved avoids starving other applications.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_rings_lmt=5
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 807e666d3..a5a614196 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -503,4 +503,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
CN10K_SSO_GW_MODE "=<int>"
- CNXK_TIM_DISABLE_NPA "=1");
+ CNXK_TIM_DISABLE_NPA "=1"
+ CNXK_TIM_CHNK_SLOTS "=<int>"
+ CNXK_TIM_RINGS_LMT "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 3e27fce4a..cfea3723a 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -572,4 +572,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
CN9K_SSO_SINGLE_WS "=1"
- CNXK_TIM_DISABLE_NPA "=1");
+ CNXK_TIM_DISABLE_NPA "=1"
+ CNXK_TIM_CHNK_SLOTS "=<int>"
+ CNXK_TIM_RINGS_LMT "=<int>");
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 6bbfadb25..07ec57fd2 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -253,6 +253,10 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
rte_kvargs_process(kvlist, CNXK_TIM_DISABLE_NPA, &parse_kvargs_flag,
&dev->disable_npa);
+ rte_kvargs_process(kvlist, CNXK_TIM_CHNK_SLOTS, &parse_kvargs_value,
+ &dev->chunk_slots);
+ rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
+ &dev->min_ring_cnt);
rte_kvargs_free(kvlist);
}
@@ -278,6 +282,7 @@ cnxk_tim_init(struct roc_sso *sso)
cnxk_tim_parse_devargs(sso->pci_dev->device.devargs, dev);
dev->tim.roc_sso = sso;
+ dev->tim.nb_lfs = dev->min_ring_cnt;
rc = roc_tim_init(&dev->tim);
if (rc < 0) {
plt_err("Failed to initialize roc tim resources");
@@ -285,7 +290,14 @@ cnxk_tim_init(struct roc_sso *sso)
return;
}
dev->nb_rings = rc;
- dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+
+ if (dev->chunk_slots && dev->chunk_slots <= CNXK_TIM_MAX_CHUNK_SLOTS &&
+ dev->chunk_slots >= CNXK_TIM_MIN_CHUNK_SLOTS) {
+ dev->chunk_sz =
+ (dev->chunk_slots + 1) * CNXK_TIM_CHUNK_ALIGNMENT;
+ } else {
+ dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+ }
}
void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 8c21ab1fe..6208c150a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -34,6 +34,8 @@
#define CN9K_TIM_MIN_TMO_TKS (256)
#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
+#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
+#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
struct cnxk_tim_evdev {
struct roc_tim tim;
@@ -42,6 +44,8 @@ struct cnxk_tim_evdev {
uint32_t chunk_sz;
/* Dev args */
uint8_t disable_npa;
+ uint16_t chunk_slots;
+ uint16_t min_ring_cnt;
};
enum cnxk_tim_clk_src {
--
2.17.1
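The `cnxk_tim_init()` hunk above derives the chunk size from the `tim_chnk_slots` devarg: one extra slot is added for the next-chunk pointer, then the total is scaled by the per-slot size. A self-contained sketch of that computation follows; the 16-byte slot size and the 4096-byte default are assumptions (chosen so that the documented default of 255 slots yields the default chunk size), not values visible in this hunk:

```c
#include <assert.h>
#include <stdint.h>

#define SLOT_ALIGN   16u    /* assumed CNXK_TIM_CHUNK_ALIGNMENT */
#define MIN_SLOTS    255u   /* assumed CNXK_TIM_MIN_CHUNK_SLOTS */
#define MAX_SLOTS    4095u  /* max documented for tim_chnk_slots */
#define DEF_CHUNK_SZ 4096u  /* assumed CNXK_TIM_RING_DEF_CHUNK_SZ */

/* Mirrors the range check in cnxk_tim_init(): an in-range devarg value
 * sets chunk_sz = (slots + 1) * alignment (the +1 is the slot that links
 * to the next chunk); anything else falls back to the default. */
static uint32_t
chunk_sz_from_slots(uint32_t slots)
{
	if (slots && slots >= MIN_SLOTS && slots <= MAX_SLOTS)
		return (slots + 1) * SLOT_ALIGN;
	return DEF_CHUNK_SZ;
}
```

With these assumed constants, `tim_chnk_slots=1023` (the doc example) gives a 16 KiB chunk, and an unset or out-of-range value keeps the 4 KiB default.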
* [dpdk-dev] [PATCH 26/36] event/cnxk: add TIM bucket operations
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (24 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 25/36] event/cnxk: add devargs for chunk size and rings pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 27/36] event/cnxk: add timer arm routine pbhagavatula
` (10 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add TIM bucket operations used for event timer arm and cancel.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.h | 30 +++++++
drivers/event/cnxk/cnxk_tim_worker.c | 6 ++
drivers/event/cnxk/cnxk_tim_worker.h | 123 +++++++++++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
4 files changed, 160 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 6208c150a..c844d9b61 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -37,6 +37,36 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
+#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
+ ((1ULL << (64 - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
+#define TIM_BUCKET_W1_S_LOCK (40)
+#define TIM_BUCKET_W1_M_LOCK \
+ ((1ULL << (TIM_BUCKET_W1_S_CHUNK_REMAINDER - TIM_BUCKET_W1_S_LOCK)) - 1)
+#define TIM_BUCKET_W1_S_RSVD (35)
+#define TIM_BUCKET_W1_S_BSK (34)
+#define TIM_BUCKET_W1_M_BSK \
+ ((1ULL << (TIM_BUCKET_W1_S_RSVD - TIM_BUCKET_W1_S_BSK)) - 1)
+#define TIM_BUCKET_W1_S_HBT (33)
+#define TIM_BUCKET_W1_M_HBT \
+ ((1ULL << (TIM_BUCKET_W1_S_BSK - TIM_BUCKET_W1_S_HBT)) - 1)
+#define TIM_BUCKET_W1_S_SBT (32)
+#define TIM_BUCKET_W1_M_SBT \
+ ((1ULL << (TIM_BUCKET_W1_S_HBT - TIM_BUCKET_W1_S_SBT)) - 1)
+#define TIM_BUCKET_W1_S_NUM_ENTRIES (0)
+#define TIM_BUCKET_W1_M_NUM_ENTRIES \
+ ((1ULL << (TIM_BUCKET_W1_S_SBT - TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1)
+
+#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN)
+
+#define TIM_BUCKET_CHUNK_REMAIN \
+ (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER)
+
+#define TIM_BUCKET_LOCK (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK)
+
+#define TIM_BUCKET_SEMA_WLOCK \
+ (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
new file mode 100644
index 000000000..564687d9b
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_tim_evdev.h"
+#include "cnxk_tim_worker.h"
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
new file mode 100644
index 000000000..bd205e5c1
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CNXK_TIM_WORKER_H__
+#define __CNXK_TIM_WORKER_H__
+
+#include "cnxk_tim_evdev.h"
+
+static inline uint8_t
+cnxk_tim_bkt_fetch_lock(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK;
+}
+
+static inline int16_t
+cnxk_tim_bkt_fetch_rem(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) &
+ TIM_BUCKET_W1_M_CHUNK_REMAINDER;
+}
+
+static inline int16_t
+cnxk_tim_bkt_get_rem(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+cnxk_tim_bkt_set_rem(struct cnxk_tim_bkt *bktp, uint16_t v)
+{
+ __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline void
+cnxk_tim_bkt_sub_rem(struct cnxk_tim_bkt *bktp, uint16_t v)
+{
+ __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline uint8_t
+cnxk_tim_bkt_get_hbt(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT;
+}
+
+static inline uint8_t
+cnxk_tim_bkt_get_bsk(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK;
+}
+
+static inline uint64_t
+cnxk_tim_bkt_clr_bsk(struct cnxk_tim_bkt *bktp)
+{
+ /* Clear everything except lock. */
+ const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK;
+
+ return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_fetch_sema_lock(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
+ __ATOMIC_ACQUIRE);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_fetch_sema(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA, __ATOMIC_RELAXED);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_inc_lock(struct cnxk_tim_bkt *bktp)
+{
+ const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK;
+
+ return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+cnxk_tim_bkt_dec_lock(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELEASE);
+}
+
+static inline void
+cnxk_tim_bkt_dec_lock_relaxed(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELAXED);
+}
+
+static inline uint32_t
+cnxk_tim_bkt_get_nent(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_NUM_ENTRIES) &
+ TIM_BUCKET_W1_M_NUM_ENTRIES;
+}
+
+static inline void
+cnxk_tim_bkt_inc_nent(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_add_fetch(&bktp->nb_entry, 1, __ATOMIC_RELAXED);
+}
+
+static inline void
+cnxk_tim_bkt_add_nent(struct cnxk_tim_bkt *bktp, uint32_t v)
+{
+ __atomic_add_fetch(&bktp->nb_entry, v, __ATOMIC_RELAXED);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_clr_nent(struct cnxk_tim_bkt *bktp)
+{
+ const uint64_t v =
+ ~(TIM_BUCKET_W1_M_NUM_ENTRIES << TIM_BUCKET_W1_S_NUM_ENTRIES);
+
+ return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+#endif /* __CNXK_TIM_WORKER_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index a9566cb09..b40e39397 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -15,6 +15,7 @@ sources = files('cn10k_worker.c',
'cnxk_eventdev_adptr.c',
'cnxk_eventdev.c',
'cnxk_sso_selftest.c',
+ 'cnxk_tim_worker.c',
'cnxk_tim_evdev.c')
deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
--
2.17.1
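The bucket macros above pack several fields into the single 64-bit word `w1` so they can be read and updated with one atomic operation: bits [63:48] hold the chunk remainder, [47:40] the lock count, [34] BSK, [33] HBT, [32] SBT, and [31:0] the number of entries. A standalone sketch of the extraction helpers, using the same shifts and masks:

```c
#include <assert.h>
#include <stdint.h>

/* Same layout as TIM_BUCKET_W1_* in cnxk_tim_evdev.h. */
#define S_CHUNK_REM 48
#define M_CHUNK_REM ((1ULL << (64 - S_CHUNK_REM)) - 1)
#define S_LOCK      40
#define M_LOCK      ((1ULL << (S_CHUNK_REM - S_LOCK)) - 1)
#define S_HBT       33
#define S_NENT      0
#define M_NENT      ((1ULL << (32 - S_NENT)) - 1)

/* Each accessor shifts the packed word down and masks off the field,
 * exactly as cnxk_tim_bkt_fetch_rem()/fetch_lock()/get_hbt()/get_nent() do. */
static inline uint16_t bkt_rem(uint64_t w1)  { return (w1 >> S_CHUNK_REM) & M_CHUNK_REM; }
static inline uint8_t  bkt_lock(uint64_t w1) { return (w1 >> S_LOCK) & M_LOCK; }
static inline uint8_t  bkt_hbt(uint64_t w1)  { return (w1 >> S_HBT) & 1; }
static inline uint32_t bkt_nent(uint64_t w1) { return (w1 >> S_NENT) & M_NENT; }
```

Because the fields share one word, a single `__atomic_fetch_add` of `TIM_BUCKET_SEMA_WLOCK` can decrement the chunk remainder's view and take the lock at once, which is what `cnxk_tim_bkt_fetch_sema_lock()` relies on.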
* [dpdk-dev] [PATCH 27/36] event/cnxk: add timer arm routine
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (25 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 26/36] event/cnxk: add TIM bucket operations pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 28/36] event/cnxk: add timer arm timeout burst pbhagavatula
` (9 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer arm routine.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 18 ++
drivers/event/cnxk/cnxk_tim_evdev.h | 23 ++
drivers/event/cnxk/cnxk_tim_worker.c | 95 +++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 300 +++++++++++++++++++++++++++
4 files changed, 436 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 07ec57fd2..a3be66f9a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -76,6 +76,21 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
return rc;
}
+static void
+cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
+{
+ uint8_t prod_flag = !tim_ring->prod_type_sp;
+
+	/* [DFB/FB][SP/MP] */
+ const rte_event_timer_arm_burst_t arm_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) [_f2][_f1] = cnxk_tim_arm_burst_##_name,
+ TIM_ARM_FASTPATH_MODES
+#undef FP
+ };
+
+ cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
+}
+
static void
cnxk_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
struct rte_event_timer_adapter_info *adptr_info)
@@ -173,6 +188,9 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+ /* Set fastpath ops. */
+ cnxk_tim_set_fp_ops(tim_ring);
+
/* Update SSO xae count. */
cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
RTE_EVENT_TYPE_TIMER);
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index c844d9b61..7cbcdb701 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -14,6 +14,7 @@
#include <rte_event_timer_adapter.h>
#include <rte_malloc.h>
#include <rte_memzone.h>
+#include <rte_reciprocal.h>
#include "roc_api.h"
@@ -37,6 +38,11 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define CNXK_TIM_SP 0x1
+#define CNXK_TIM_MP 0x2
+#define CNXK_TIM_ENA_FB 0x10
+#define CNXK_TIM_ENA_DFB 0x20
+
#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
((1ULL << (64 - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
@@ -107,10 +113,14 @@ struct cnxk_tim_ring {
uintptr_t base;
uint16_t nb_chunk_slots;
uint32_t nb_bkts;
+ uint64_t last_updt_cyc;
+ uint64_t ring_start_cyc;
uint64_t tck_int;
uint64_t tot_int;
struct cnxk_tim_bkt *bkt;
struct rte_mempool *chunk_pool;
+ struct rte_reciprocal_u64 fast_div;
+ struct rte_reciprocal_u64 fast_bkt;
uint64_t arm_cnt;
uint8_t prod_type_sp;
uint8_t disable_npa;
@@ -201,6 +211,19 @@ cnxk_tim_cntfrq(void)
}
#endif
+#define TIM_ARM_FASTPATH_MODES \
+ FP(sp, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(mp, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+
+#define FP(_name, _f2, _f1, flags) \
+ uint16_t cnxk_tim_arm_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint16_t nb_timers);
+TIM_ARM_FASTPATH_MODES
+#undef FP
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 564687d9b..eec39b9c2 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -4,3 +4,98 @@
#include "cnxk_tim_evdev.h"
#include "cnxk_tim_worker.h"
+
+static inline int
+cnxk_tim_arm_checks(const struct cnxk_tim_ring *const tim_ring,
+ struct rte_event_timer *const tim)
+{
+ if (unlikely(tim->state)) {
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ rte_errno = EALREADY;
+ goto fail;
+ }
+
+ if (unlikely(!tim->timeout_ticks ||
+ tim->timeout_ticks > tim_ring->nb_bkts)) {
+ tim->state = tim->timeout_ticks ?
+ RTE_EVENT_TIMER_ERROR_TOOLATE :
+ RTE_EVENT_TIMER_ERROR_TOOEARLY;
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ return -EINVAL;
+}
+
+static inline void
+cnxk_tim_format_event(const struct rte_event_timer *const tim,
+ struct cnxk_tim_ent *const entry)
+{
+ entry->w0 = (tim->ev.event & 0xFFC000000000) >> 6 |
+ (tim->ev.event & 0xFFFFFFFFF);
+ entry->wqe = tim->ev.u64;
+}
+
+static inline void
+cnxk_tim_sync_start_cyc(struct cnxk_tim_ring *tim_ring)
+{
+ uint64_t cur_cyc = cnxk_tim_cntvct();
+ uint32_t real_bkt;
+
+ if (cur_cyc - tim_ring->last_updt_cyc > tim_ring->tot_int) {
+ real_bkt = plt_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
+ cur_cyc = cnxk_tim_cntvct();
+
+ tim_ring->ring_start_cyc =
+ cur_cyc - (real_bkt * tim_ring->tck_int);
+ tim_ring->last_updt_cyc = cur_cyc;
+ }
+}
+
+static __rte_always_inline uint16_t
+cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim, const uint16_t nb_timers,
+ const uint8_t flags)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_ent entry;
+ uint16_t index;
+ int ret;
+
+ cnxk_tim_sync_start_cyc(tim_ring);
+ for (index = 0; index < nb_timers; index++) {
+ if (cnxk_tim_arm_checks(tim_ring, tim[index]))
+ break;
+
+ cnxk_tim_format_event(tim[index], &entry);
+ if (flags & CNXK_TIM_SP)
+ ret = cnxk_tim_add_entry_sp(tim_ring,
+ tim[index]->timeout_ticks,
+ tim[index], &entry, flags);
+ if (flags & CNXK_TIM_MP)
+ ret = cnxk_tim_add_entry_mp(tim_ring,
+ tim[index]->timeout_ticks,
+ tim[index], &entry, flags);
+
+ if (unlikely(ret)) {
+ rte_errno = -ret;
+ break;
+ }
+ }
+
+ return index;
+}
+
+#define FP(_name, _f2, _f1, _flags) \
+ uint16_t __rte_noinline cnxk_tim_arm_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint16_t nb_timers) \
+ { \
+ return cnxk_tim_timer_arm_burst(adptr, tim, nb_timers, \
+ _flags); \
+ }
+TIM_ARM_FASTPATH_MODES
+#undef FP
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index bd205e5c1..efdcf8969 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -120,4 +120,304 @@ cnxk_tim_bkt_clr_nent(struct cnxk_tim_bkt *bktp)
return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
}
+static inline uint64_t
+cnxk_tim_bkt_fast_mod(uint64_t n, uint64_t d, struct rte_reciprocal_u64 R)
+{
+ return (n - (d * rte_reciprocal_divide_u64(n, &R)));
+}
+
+static __rte_always_inline void
+cnxk_tim_get_target_bucket(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct cnxk_tim_bkt **bkt,
+ struct cnxk_tim_bkt **mirr_bkt)
+{
+ const uint64_t bkt_cyc = cnxk_tim_cntvct() - tim_ring->ring_start_cyc;
+ uint64_t bucket =
+ rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div) +
+ rel_bkt;
+ uint64_t mirr_bucket = 0;
+
+ bucket = cnxk_tim_bkt_fast_mod(bucket, tim_ring->nb_bkts,
+ tim_ring->fast_bkt);
+ mirr_bucket =
+ cnxk_tim_bkt_fast_mod(bucket + (tim_ring->nb_bkts >> 1),
+ tim_ring->nb_bkts, tim_ring->fast_bkt);
+ *bkt = &tim_ring->bkt[bucket];
+ *mirr_bkt = &tim_ring->bkt[mirr_bucket];
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_clr_bkt(struct cnxk_tim_ring *const tim_ring,
+ struct cnxk_tim_bkt *const bkt)
+{
+#define TIM_MAX_OUTSTANDING_OBJ 64
+ void *pend_chunks[TIM_MAX_OUTSTANDING_OBJ];
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_ent *pnext;
+ uint8_t objs = 0;
+
+ chunk = ((struct cnxk_tim_ent *)(uintptr_t)bkt->first_chunk);
+ chunk = (struct cnxk_tim_ent *)(uintptr_t)(chunk +
+ tim_ring->nb_chunk_slots)
+ ->w0;
+ while (chunk) {
+ pnext = (struct cnxk_tim_ent *)(uintptr_t)(
+ (chunk + tim_ring->nb_chunk_slots)->w0);
+ if (objs == TIM_MAX_OUTSTANDING_OBJ) {
+ rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
+ objs);
+ objs = 0;
+ }
+ pend_chunks[objs++] = chunk;
+ chunk = pnext;
+ }
+
+ if (objs)
+ rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks, objs);
+
+ return (struct cnxk_tim_ent *)(uintptr_t)bkt->first_chunk;
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_refill_chunk(struct cnxk_tim_bkt *const bkt,
+ struct cnxk_tim_bkt *const mirr_bkt,
+ struct cnxk_tim_ring *const tim_ring)
+{
+ struct cnxk_tim_ent *chunk;
+
+ if (bkt->nb_entry || !bkt->first_chunk) {
+ if (unlikely(rte_mempool_get(tim_ring->chunk_pool,
+ (void **)&chunk)))
+ return NULL;
+ if (bkt->nb_entry) {
+ *(uint64_t *)(((struct cnxk_tim_ent *)
+ mirr_bkt->current_chunk) +
+ tim_ring->nb_chunk_slots) =
+ (uintptr_t)chunk;
+ } else {
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ } else {
+ chunk = cnxk_tim_clr_bkt(tim_ring, bkt);
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+
+ return chunk;
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_insert_chunk(struct cnxk_tim_bkt *const bkt,
+ struct cnxk_tim_bkt *const mirr_bkt,
+ struct cnxk_tim_ring *const tim_ring)
+{
+ struct cnxk_tim_ent *chunk;
+
+ if (unlikely(rte_mempool_get(tim_ring->chunk_pool, (void **)&chunk)))
+ return NULL;
+
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+ if (bkt->nb_entry) {
+ *(uint64_t *)(((struct cnxk_tim_ent *)(uintptr_t)
+ mirr_bkt->current_chunk) +
+ tim_ring->nb_chunk_slots) = (uintptr_t)chunk;
+ } else {
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ return chunk;
+}
+
+static __rte_always_inline int
+cnxk_tim_add_entry_sp(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct rte_event_timer *const tim,
+ const struct cnxk_tim_ent *const pent,
+ const uint8_t flags)
+{
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+ int16_t rem;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+
+	/* Get bucket sema. */
+ lock_sema = cnxk_tim_bkt_fetch_sema_lock(bkt);
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+ /* Insert the work. */
+ rem = cnxk_tim_bkt_fetch_rem(lock_sema);
+
+ if (!rem) {
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ bkt->chunk_remainder = 0;
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOMEM;
+ }
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ bkt->chunk_remainder = tim_ring->nb_chunk_slots - 1;
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += tim_ring->nb_chunk_slots - rem;
+ }
+
+ /* Copy work entry. */
+ *chunk = *pent;
+
+ tim->impl_opaque[0] = (uintptr_t)chunk;
+ tim->impl_opaque[1] = (uintptr_t)bkt;
+ __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
+ cnxk_tim_bkt_inc_nent(bkt);
+ cnxk_tim_bkt_dec_lock_relaxed(bkt);
+
+ return 0;
+}
+
+static __rte_always_inline int
+cnxk_tim_add_entry_mp(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct rte_event_timer *const tim,
+ const struct cnxk_tim_ent *const pent,
+ const uint8_t flags)
+{
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+ int16_t rem;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+	/* Get bucket sema. */
+ lock_sema = cnxk_tim_bkt_fetch_sema_lock(bkt);
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+
+ rem = cnxk_tim_bkt_fetch_rem(lock_sema);
+ if (rem < 0) {
+ cnxk_tim_bkt_dec_lock(bkt);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[rem], [%[crem]] \n"
+ " tbz %[rem], 63, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[rem], [%[crem]] \n"
+ " tbnz %[rem], 63, rty%= \n"
+ "dne%=: \n"
+ : [rem] "=&r"(rem)
+ : [crem] "r"(&bkt->w1)
+ : "memory");
+#else
+ while (__atomic_load_n((int64_t *)&bkt->w1, __ATOMIC_RELAXED) <
+ 0)
+ ;
+#endif
+ goto __retry;
+ } else if (!rem) {
+		/* Only one thread can be here. */
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ cnxk_tim_bkt_set_rem(bkt, 0);
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOMEM;
+ }
+ *chunk = *pent;
+ if (cnxk_tim_bkt_fetch_lock(lock_sema)) {
+ do {
+ lock_sema = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (cnxk_tim_bkt_fetch_lock(lock_sema) - 1);
+ }
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ __atomic_store_n(&bkt->chunk_remainder,
+ tim_ring->nb_chunk_slots - 1,
+ __ATOMIC_RELEASE);
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += tim_ring->nb_chunk_slots - rem;
+ *chunk = *pent;
+ }
+
+ tim->impl_opaque[0] = (uintptr_t)chunk;
+ tim->impl_opaque[1] = (uintptr_t)bkt;
+ __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
+ cnxk_tim_bkt_inc_nent(bkt);
+ cnxk_tim_bkt_dec_lock_relaxed(bkt);
+
+ return 0;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
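`TIM_ARM_FASTPATH_MODES` in the patch above is an X-macro: one always-inline template (`cnxk_tim_timer_arm_burst`) is instantiated once per flag combination, so the compiler folds the `flags` branches away and each mode gets a branch-free specialized function, selected at setup time from a `[DFB/FB][SP/MP]` table. A standalone sketch of the pattern (all names here are illustrative, not the driver's):

```c
#include <assert.h>

/* Flag bits, analogous to CNXK_TIM_SP/MP/ENA_DFB/ENA_FB. */
#define SK_SP  0x1
#define SK_MP  0x2
#define SK_DFB 0x10
#define SK_FB  0x20

/* Template specialized per mode; with a compile-time constant `flags`
 * the compiler eliminates the untaken branches in each instantiation. */
static inline int
arm_template(int x, const unsigned char flags)
{
	if (flags & SK_FB)
		x += 100; /* pretend "chunk refill" path */
	if (flags & SK_MP)
		x += 1;   /* pretend "multi-producer" path */
	return x;
}

/* X-macro list: name, fb index, mp index, flags. */
#define MODES \
	M(sp,    0, 0, SK_DFB | SK_SP) \
	M(mp,    0, 1, SK_DFB | SK_MP) \
	M(fb_sp, 1, 0, SK_FB | SK_SP)  \
	M(fb_mp, 1, 1, SK_FB | SK_MP)

/* First expansion: generate one function per mode. */
#define M(name, f2, f1, flags) \
	static int arm_##name(int x) { return arm_template(x, (flags)); }
MODES
#undef M

/* Second expansion: build the [fb][mp] dispatch table, as
 * cnxk_tim_set_fp_ops() does for cnxk_tim_ops.arm_burst. */
typedef int (*arm_fn)(int);
static const arm_fn dispatch[2][2] = {
#define M(name, f2, f1, flags) [f2][f1] = arm_##name,
	MODES
#undef M
};
```

The same list drives the prototypes in the header (`#define FP(...) uint16_t cnxk_tim_arm_burst_##_name(...)`), keeping declarations, definitions, and the table in sync from one source of truth.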
* [dpdk-dev] [PATCH 28/36] event/cnxk: add timer arm timeout burst
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (26 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 27/36] event/cnxk: add timer arm routine pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 29/36] event/cnxk: add timer cancel function pbhagavatula
` (8 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer arm timeout burst function.
All the timers requested to be armed have the same timeout.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 7 ++
drivers/event/cnxk/cnxk_tim_evdev.h | 12 +++
drivers/event/cnxk/cnxk_tim_worker.c | 53 ++++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 141 +++++++++++++++++++++++++++
4 files changed, 213 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index a3be66f9a..e6f31b19f 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -88,7 +88,14 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
#undef FP
};
+ const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2] = {
+#define FP(_name, _f1, flags) [_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
+ TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+ };
+
cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
+ cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
}
static void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 7cbcdb701..04ba3dc8c 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -217,6 +217,10 @@ cnxk_tim_cntfrq(void)
FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+#define TIM_ARM_TMO_FASTPATH_MODES \
+ FP(dfb, 0, CNXK_TIM_ENA_DFB) \
+ FP(fb, 1, CNXK_TIM_ENA_FB)
+
#define FP(_name, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
@@ -224,6 +228,14 @@ cnxk_tim_cntfrq(void)
TIM_ARM_FASTPATH_MODES
#undef FP
+#define FP(_name, _f1, flags) \
+ uint16_t cnxk_tim_arm_tmo_tick_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint64_t timeout_tick, \
+ const uint16_t nb_timers);
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index eec39b9c2..2f1676ec1 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -99,3 +99,56 @@ cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
}
TIM_ARM_FASTPATH_MODES
#undef FP
+
+static __rte_always_inline uint16_t
+cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint64_t timeout_tick,
+ const uint16_t nb_timers, const uint8_t flags)
+{
+ struct cnxk_tim_ent entry[CNXK_TIM_MAX_BURST] __rte_cache_aligned;
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ uint16_t set_timers = 0;
+ uint16_t arr_idx = 0;
+ uint16_t idx;
+ int ret;
+
+ if (unlikely(!timeout_tick || timeout_tick > tim_ring->nb_bkts)) {
+ const enum rte_event_timer_state state =
+ timeout_tick ? RTE_EVENT_TIMER_ERROR_TOOLATE :
+ RTE_EVENT_TIMER_ERROR_TOOEARLY;
+ for (idx = 0; idx < nb_timers; idx++)
+ tim[idx]->state = state;
+
+ rte_errno = EINVAL;
+ return 0;
+ }
+
+ cnxk_tim_sync_start_cyc(tim_ring);
+ while (arr_idx < nb_timers) {
+ for (idx = 0; idx < CNXK_TIM_MAX_BURST && (arr_idx < nb_timers);
+ idx++, arr_idx++) {
+ cnxk_tim_format_event(tim[arr_idx], &entry[idx]);
+ }
+ ret = cnxk_tim_add_entry_brst(tim_ring, timeout_tick,
+ &tim[set_timers], entry, idx,
+ flags);
+ set_timers += ret;
+ if (ret != idx)
+ break;
+ }
+
+ return set_timers;
+}
+
+#define FP(_name, _f1, _flags) \
+ uint16_t __rte_noinline cnxk_tim_arm_tmo_tick_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint64_t timeout_tick, \
+ const uint16_t nb_timers) \
+ { \
+ return cnxk_tim_timer_arm_tmo_brst(adptr, tim, timeout_tick, \
+ nb_timers, _flags); \
+ }
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index efdcf8969..7a4cfd1a6 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -420,4 +420,145 @@ cnxk_tim_add_entry_mp(struct cnxk_tim_ring *const tim_ring,
return 0;
}
+static inline uint16_t
+cnxk_tim_cpy_wrk(uint16_t index, uint16_t cpy_lmt, struct cnxk_tim_ent *chunk,
+ struct rte_event_timer **const tim,
+ const struct cnxk_tim_ent *const ents,
+ const struct cnxk_tim_bkt *const bkt)
+{
+ for (; index < cpy_lmt; index++) {
+ *chunk = *(ents + index);
+ tim[index]->impl_opaque[0] = (uintptr_t)chunk++;
+ tim[index]->impl_opaque[1] = (uintptr_t)bkt;
+ tim[index]->state = RTE_EVENT_TIMER_ARMED;
+ }
+
+ return index;
+}
+
+/* Burst mode functions */
+static inline int
+cnxk_tim_add_entry_brst(struct cnxk_tim_ring *const tim_ring,
+ const uint16_t rel_bkt,
+ struct rte_event_timer **const tim,
+ const struct cnxk_tim_ent *ents,
+ const uint16_t nb_timers, const uint8_t flags)
+{
+ struct cnxk_tim_ent *chunk = NULL;
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_bkt *bkt;
+ uint16_t chunk_remainder;
+ uint16_t index = 0;
+ uint64_t lock_sema;
+ int16_t rem, crem;
+ uint8_t lock_cnt;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+
+ /* Only one thread beyond this. */
+ lock_sema = cnxk_tim_bkt_inc_lock(bkt);
+ lock_cnt = (uint8_t)((lock_sema >> TIM_BUCKET_W1_S_LOCK) &
+ TIM_BUCKET_W1_M_LOCK);
+
+ if (lock_cnt) {
+ cnxk_tim_bkt_dec_lock(bkt);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxrb %w[lock_cnt], [%[lock]] \n"
+ " tst %w[lock_cnt], 255 \n"
+ " beq dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxrb %w[lock_cnt], [%[lock]] \n"
+ " tst %w[lock_cnt], 255 \n"
+ " bne rty%= \n"
+ "dne%=: \n"
+ : [lock_cnt] "=&r"(lock_cnt)
+ : [lock] "r"(&bkt->lock)
+ : "memory");
+#else
+ while (__atomic_load_n(&bkt->lock, __ATOMIC_RELAXED))
+ ;
+#endif
+ goto __retry;
+ }
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+
+ chunk_remainder = cnxk_tim_bkt_fetch_rem(lock_sema);
+ rem = chunk_remainder - nb_timers;
+ if (rem < 0) {
+ crem = tim_ring->nb_chunk_slots - chunk_remainder;
+ if (chunk_remainder && crem) {
+ chunk = ((struct cnxk_tim_ent *)
+ mirr_bkt->current_chunk) +
+ crem;
+
+ index = cnxk_tim_cpy_wrk(index, chunk_remainder, chunk,
+ tim, ents, bkt);
+ cnxk_tim_bkt_sub_rem(bkt, chunk_remainder);
+ cnxk_tim_bkt_add_nent(bkt, chunk_remainder);
+ }
+
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ rte_errno = ENOMEM;
+ tim[index]->state = RTE_EVENT_TIMER_ERROR;
+ return crem;
+ }
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ cnxk_tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
+
+ rem = nb_timers - chunk_remainder;
+ cnxk_tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem);
+ cnxk_tim_bkt_add_nent(bkt, rem);
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += (tim_ring->nb_chunk_slots - chunk_remainder);
+
+ cnxk_tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
+ cnxk_tim_bkt_sub_rem(bkt, nb_timers);
+ cnxk_tim_bkt_add_nent(bkt, nb_timers);
+ }
+
+ cnxk_tim_bkt_dec_lock(bkt);
+
+ return nb_timers;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
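The burst arm path above splits a batch across the current chunk and a freshly (re)filled one when the chunk runs out of slots. A minimal, self-contained sketch of that split arithmetic (a simplified model of the rem/crem logic in cnxk_tim_add_entry_brst; helper names are illustrative, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Given the free slots left in the current chunk and the burst size,
 * compute how many entries land in the current chunk and how many
 * spill into a new chunk (mirrors the rem < 0 branch above). */
static void
burst_split(uint16_t chunk_remainder, uint16_t nb_timers,
	    uint16_t *in_current, uint16_t *in_new)
{
	int16_t rem = (int16_t)chunk_remainder - (int16_t)nb_timers;

	if (rem < 0) { /* burst overflows the current chunk */
		*in_current = chunk_remainder;
		*in_new = nb_timers - chunk_remainder;
	} else { /* whole burst fits in the current chunk */
		*in_current = nb_timers;
		*in_new = 0;
	}
}
```

In the driver the spill portion triggers either a chunk refill (FB mode) or a fresh chunk allocation (DFB mode) before the remaining entries are copied.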
* [dpdk-dev] [PATCH 29/36] event/cnxk: add timer cancel function
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (27 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 28/36] event/cnxk: add timer arm timeout burst pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 30/36] event/cnxk: add timer stats get and reset pbhagavatula
` (7 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a function to cancel an event timer that has been armed.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 1 +
drivers/event/cnxk/cnxk_tim_evdev.h | 5 ++++
drivers/event/cnxk/cnxk_tim_worker.c | 30 ++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 37 ++++++++++++++++++++++++++++
4 files changed, 73 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index e6f31b19f..edc8706f8 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -96,6 +96,7 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
+ cnxk_tim_ops.cancel_burst = cnxk_tim_timer_cancel_burst;
}
static void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 04ba3dc8c..9cc6e7512 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -236,6 +236,11 @@ TIM_ARM_FASTPATH_MODES
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
+uint16_t
+cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint16_t nb_timers);
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 2f1676ec1..ce6918465 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -152,3 +152,33 @@ cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
}
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
+
+uint16_t
+cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint16_t nb_timers)
+{
+ uint16_t index;
+ int ret;
+
+ RTE_SET_USED(adptr);
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ for (index = 0; index < nb_timers; index++) {
+ if (tim[index]->state == RTE_EVENT_TIMER_CANCELED) {
+ rte_errno = EALREADY;
+ break;
+ }
+
+ if (tim[index]->state != RTE_EVENT_TIMER_ARMED) {
+ rte_errno = EINVAL;
+ break;
+ }
+ ret = cnxk_tim_rm_entry(tim[index]);
+ if (ret) {
+ rte_errno = -ret;
+ break;
+ }
+ }
+
+ return index;
+}
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index 7a4cfd1a6..02f58eb3d 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -561,4 +561,41 @@ cnxk_tim_add_entry_brst(struct cnxk_tim_ring *const tim_ring,
return nb_timers;
}
+static int
+cnxk_tim_rm_entry(struct rte_event_timer *tim)
+{
+ struct cnxk_tim_ent *entry;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+
+ if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0)
+ return -ENOENT;
+
+ entry = (struct cnxk_tim_ent *)(uintptr_t)tim->impl_opaque[0];
+ if (entry->wqe != tim->ev.u64) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ return -ENOENT;
+ }
+
+ bkt = (struct cnxk_tim_bkt *)(uintptr_t)tim->impl_opaque[1];
+ lock_sema = cnxk_tim_bkt_inc_lock(bkt);
+ if (cnxk_tim_bkt_get_hbt(lock_sema) ||
+ !cnxk_tim_bkt_get_nent(lock_sema)) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOENT;
+ }
+
+ entry->w0 = 0;
+ entry->wqe = 0;
+ tim->state = RTE_EVENT_TIMER_CANCELED;
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ cnxk_tim_bkt_dec_lock(bkt);
+
+ return 0;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
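The cancel path validates per-timer state before touching the ring: an already-cancelled timer yields EALREADY and anything not armed yields EINVAL, stopping the burst at the first failure. A self-contained sketch of that state check (the enum mimics rte_event_timer_state values; illustrative only, not the driver's code):

```c
#include <assert.h>
#include <errno.h>

enum timer_state { TIMER_NOT_ARMED, TIMER_ARMED, TIMER_CANCELED };

/* Mirrors the per-timer validation in cnxk_tim_timer_cancel_burst.
 * Returns 0 when the timer may be removed, else a negative errno. */
static int
cancel_check(enum timer_state state)
{
	if (state == TIMER_CANCELED)
		return -EALREADY; /* cancelling twice is an error */
	if (state != TIMER_ARMED)
		return -EINVAL; /* only armed timers can be cancelled */
	return 0; /* proceed to remove the bucket entry */
}
```

The burst function returns the index of the first timer that failed this check, so callers can tell how many cancellations actually succeeded.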
* [dpdk-dev] [PATCH 30/36] event/cnxk: add timer stats get and reset
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (28 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 29/36] event/cnxk: add timer cancel function pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 31/36] event/cnxk: add timer adapter start and stop pbhagavatula
` (6 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter statistics get and reset functions.
Stats are disabled by default and can be enabled through devargs.
Example:
-a "0002:1e:00.0,tim_stats_ena=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 9 +++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_tim_evdev.c | 50 ++++++++++++++++++++++++----
drivers/event/cnxk/cnxk_tim_evdev.h | 38 ++++++++++++++-------
drivers/event/cnxk/cnxk_tim_worker.c | 11 ++++--
6 files changed, 91 insertions(+), 23 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 05dcf06f4..cfa743da1 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -115,6 +115,15 @@ Runtime Config Options
-a 0002:0e:00.0,tim_chnk_slots=1023
+- ``TIM enable arm/cancel statistics``
+
+ The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of
+ the event timer adapter.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_stats_ena=1
+
- ``TIM limit max rings reserved``
The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index a5a614196..2b2025cdb 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -505,4 +505,5 @@ RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CN10K_SSO_GW_MODE "=<int>"
CNXK_TIM_DISABLE_NPA "=1"
CNXK_TIM_CHNK_SLOTS "=<int>"
- CNXK_TIM_RINGS_LMT "=<int>");
+ CNXK_TIM_RINGS_LMT "=<int>"
+ CNXK_TIM_STATS_ENA "=1");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index cfea3723a..e39b4ded2 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -574,4 +574,5 @@ RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CN9K_SSO_SINGLE_WS "=1"
CNXK_TIM_DISABLE_NPA "=1"
CNXK_TIM_CHNK_SLOTS "=<int>"
- CNXK_TIM_RINGS_LMT "=<int>");
+ CNXK_TIM_RINGS_LMT "=<int>"
+ CNXK_TIM_STATS_ENA "=1");
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index edc8706f8..1b2518a64 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -81,21 +81,25 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
{
uint8_t prod_flag = !tim_ring->prod_type_sp;
- /* [DFB/FB] [SP][MP]*/
- const rte_event_timer_arm_burst_t arm_burst[2][2] = {
-#define FP(_name, _f2, _f1, flags) [_f2][_f1] = cnxk_tim_arm_burst_##_name,
+ /* [STATS] [DFB/FB] [SP][MP]*/
+ const rte_event_timer_arm_burst_t arm_burst[2][2][2] = {
+#define FP(_name, _f3, _f2, _f1, flags) \
+ [_f3][_f2][_f1] = cnxk_tim_arm_burst_##_name,
TIM_ARM_FASTPATH_MODES
#undef FP
};
- const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2] = {
-#define FP(_name, _f1, flags) [_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
+ const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) \
+ [_f2][_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
};
- cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
- cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
+ cnxk_tim_ops.arm_burst =
+ arm_burst[tim_ring->enable_stats][tim_ring->ena_dfb][prod_flag];
+ cnxk_tim_ops.arm_tmo_tick_burst =
+ arm_tmo_burst[tim_ring->enable_stats][tim_ring->ena_dfb];
cnxk_tim_ops.cancel_burst = cnxk_tim_timer_cancel_burst;
}
@@ -159,6 +163,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->nb_timers = rcfg->nb_timers;
tim_ring->chunk_sz = dev->chunk_sz;
tim_ring->disable_npa = dev->disable_npa;
+ tim_ring->enable_stats = dev->enable_stats;
if (tim_ring->disable_npa) {
tim_ring->nb_chunks =
@@ -241,6 +246,30 @@ cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
}
+static int
+cnxk_tim_stats_get(const struct rte_event_timer_adapter *adapter,
+ struct rte_event_timer_adapter_stats *stats)
+{
+ struct cnxk_tim_ring *tim_ring = adapter->data->adapter_priv;
+ uint64_t bkt_cyc = cnxk_tim_cntvct() - tim_ring->ring_start_cyc;
+
+ stats->evtim_exp_count =
+ __atomic_load_n(&tim_ring->arm_cnt, __ATOMIC_RELAXED);
+ stats->ev_enq_count = stats->evtim_exp_count;
+ stats->adapter_tick_count =
+ rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div);
+ return 0;
+}
+
+static int
+cnxk_tim_stats_reset(const struct rte_event_timer_adapter *adapter)
+{
+ struct cnxk_tim_ring *tim_ring = adapter->data->adapter_priv;
+
+ __atomic_store_n(&tim_ring->arm_cnt, 0, __ATOMIC_RELAXED);
+ return 0;
+}
+
int
cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
uint32_t *caps,
@@ -258,6 +287,11 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
+ if (dev->enable_stats) {
+ cnxk_tim_ops.stats_get = cnxk_tim_stats_get;
+ cnxk_tim_ops.stats_reset = cnxk_tim_stats_reset;
+ }
+
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
@@ -281,6 +315,8 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
&dev->disable_npa);
rte_kvargs_process(kvlist, CNXK_TIM_CHNK_SLOTS, &parse_kvargs_value,
&dev->chunk_slots);
+ rte_kvargs_process(kvlist, CNXK_TIM_STATS_ENA, &parse_kvargs_flag,
+ &dev->enable_stats);
rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
&dev->min_ring_cnt);
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 9cc6e7512..7aa9650c1 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -36,12 +36,14 @@
#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
+#define CNXK_TIM_STATS_ENA "tim_stats_ena"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
-#define CNXK_TIM_SP 0x1
-#define CNXK_TIM_MP 0x2
-#define CNXK_TIM_ENA_FB 0x10
-#define CNXK_TIM_ENA_DFB 0x20
+#define CNXK_TIM_SP 0x1
+#define CNXK_TIM_MP 0x2
+#define CNXK_TIM_ENA_FB 0x10
+#define CNXK_TIM_ENA_DFB 0x20
+#define CNXK_TIM_ENA_STATS 0x40
#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
@@ -82,6 +84,7 @@ struct cnxk_tim_evdev {
uint8_t disable_npa;
uint16_t chunk_slots;
uint16_t min_ring_cnt;
+ uint8_t enable_stats;
};
enum cnxk_tim_clk_src {
@@ -123,6 +126,7 @@ struct cnxk_tim_ring {
struct rte_reciprocal_u64 fast_bkt;
uint64_t arm_cnt;
uint8_t prod_type_sp;
+ uint8_t enable_stats;
uint8_t disable_npa;
uint8_t ena_dfb;
uint16_t ring_id;
@@ -212,23 +216,33 @@ cnxk_tim_cntfrq(void)
#endif
#define TIM_ARM_FASTPATH_MODES \
- FP(sp, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
- FP(mp, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
- FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
- FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+ FP(sp, 0, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(mp, 0, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(fb_sp, 0, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(fb_mp, 0, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP) \
+ FP(stats_sp, 1, 0, 0, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(stats_mp, 1, 0, 1, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(stats_fb_sp, 1, 1, 0, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(stats_fb_mp, 1, 1, 1, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB | CNXK_TIM_MP)
#define TIM_ARM_TMO_FASTPATH_MODES \
- FP(dfb, 0, CNXK_TIM_ENA_DFB) \
- FP(fb, 1, CNXK_TIM_ENA_FB)
+ FP(dfb, 0, 0, CNXK_TIM_ENA_DFB) \
+ FP(fb, 0, 1, CNXK_TIM_ENA_FB) \
+ FP(stats_dfb, 1, 0, CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB) \
+ FP(stats_fb, 1, 1, CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB)
-#define FP(_name, _f2, _f1, flags) \
+#define FP(_name, _f3, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint16_t nb_timers);
TIM_ARM_FASTPATH_MODES
#undef FP
-#define FP(_name, _f1, flags) \
+#define FP(_name, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_tmo_tick_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint64_t timeout_tick, \
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index ce6918465..598379ac4 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -86,10 +86,13 @@ cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
}
}
+ if (flags & CNXK_TIM_ENA_STATS)
+ __atomic_fetch_add(&tim_ring->arm_cnt, index, __ATOMIC_RELAXED);
+
return index;
}
-#define FP(_name, _f2, _f1, _flags) \
+#define FP(_name, _f3, _f2, _f1, _flags) \
uint16_t __rte_noinline cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint16_t nb_timers) \
@@ -138,10 +141,14 @@ cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
break;
}
+ if (flags & CNXK_TIM_ENA_STATS)
+ __atomic_fetch_add(&tim_ring->arm_cnt, set_timers,
+ __ATOMIC_RELAXED);
+
return set_timers;
}
-#define FP(_name, _f1, _flags) \
+#define FP(_name, _f2, _f1, _flags) \
uint16_t __rte_noinline cnxk_tim_arm_tmo_tick_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint64_t timeout_tick, \
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
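When ``tim_stats_ena`` is set, the arm fast paths bump a relaxed atomic counter that stats_get reads and stats_reset zeroes; relaxed ordering suffices because the counter is purely statistical. A C11 sketch of that counter discipline (names are illustrative stand-ins for the driver's fields):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t arm_cnt; /* stands in for tim_ring->arm_cnt */

/* Fast path: count timers armed in this burst. */
static void
stats_arm(uint16_t burst)
{
	atomic_fetch_add_explicit(&arm_cnt, burst, memory_order_relaxed);
}

/* Slow path: report the accumulated count. */
static uint64_t
stats_get(void)
{
	return atomic_load_explicit(&arm_cnt, memory_order_relaxed);
}

/* Slow path: reset, as cnxk_tim_stats_reset does. */
static void
stats_reset(void)
{
	atomic_store_explicit(&arm_cnt, 0, memory_order_relaxed);
}
```

Keeping the increment behind the CNXK_TIM_ENA_STATS flag means the non-stats fast-path variants pay no cost at all, which is why the FP mode tables gain an extra dimension in this patch.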
* [dpdk-dev] [PATCH 31/36] event/cnxk: add timer adapter start and stop
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (29 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 30/36] event/cnxk: add timer stats get and reset pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 32/36] event/cnxk: add devargs to control timer adapters pbhagavatula
` (5 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter start and stop functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 71 ++++++++++++++++++++++++++++-
1 file changed, 70 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 1b2518a64..7b28969c9 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -246,6 +246,73 @@ cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
}
+static void
+cnxk_tim_calibrate_start_tsc(struct cnxk_tim_ring *tim_ring)
+{
+#define CNXK_TIM_CALIB_ITER 1E6
+ uint32_t real_bkt, bucket;
+ int icount, ecount = 0;
+ uint64_t bkt_cyc;
+
+ for (icount = 0; icount < CNXK_TIM_CALIB_ITER; icount++) {
+ real_bkt = plt_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
+ bkt_cyc = cnxk_tim_cntvct();
+ bucket = (bkt_cyc - tim_ring->ring_start_cyc) /
+ tim_ring->tck_int;
+ bucket = bucket % (tim_ring->nb_bkts);
+ tim_ring->ring_start_cyc =
+ bkt_cyc - (real_bkt * tim_ring->tck_int);
+ if (bucket != real_bkt)
+ ecount++;
+ }
+ tim_ring->last_updt_cyc = bkt_cyc;
+ plt_tim_dbg("Bucket mispredict %3.2f distance %d\n",
+ 100 - (((double)(icount - ecount) / (double)icount) * 100),
+ bucket - real_bkt);
+}
+
+static int
+cnxk_tim_ring_start(const struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ rc = roc_tim_lf_enable(&dev->tim, tim_ring->ring_id,
+ &tim_ring->ring_start_cyc, NULL);
+ if (rc < 0)
+ return rc;
+
+ tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, cnxk_tim_cntfrq());
+ tim_ring->tot_int = tim_ring->tck_int * tim_ring->nb_bkts;
+ tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int);
+ tim_ring->fast_bkt = rte_reciprocal_value_u64(tim_ring->nb_bkts);
+
+ cnxk_tim_calibrate_start_tsc(tim_ring);
+
+ return rc;
+}
+
+static int
+cnxk_tim_ring_stop(const struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ rc = roc_tim_lf_disable(&dev->tim, tim_ring->ring_id);
+ if (rc < 0)
+ plt_err("Failed to disable timer ring");
+
+ return rc;
+}
+
static int
cnxk_tim_stats_get(const struct rte_event_timer_adapter *adapter,
struct rte_event_timer_adapter_stats *stats)
@@ -278,13 +345,14 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
RTE_SET_USED(flags);
- RTE_SET_USED(ops);
if (dev == NULL)
return -ENODEV;
cnxk_tim_ops.init = cnxk_tim_ring_create;
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+ cnxk_tim_ops.start = cnxk_tim_ring_start;
+ cnxk_tim_ops.stop = cnxk_tim_ring_stop;
cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
if (dev->enable_stats) {
@@ -295,6 +363,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+ *ops = &cnxk_tim_ops;
return 0;
}
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
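cnxk_tim_calibrate_start_tsc predicts the current bucket from elapsed cycles and compares it against the bucket the hardware reports, adjusting ring_start_cyc until the two agree. The prediction itself is just division and modulo; the driver replaces the divide with rte_reciprocal for speed, but plain integer math shows the idea (a simplified model, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Predicted bucket = elapsed cycles / cycles-per-tick, wrapped to the
 * ring size; mirrors the bucket computation in
 * cnxk_tim_calibrate_start_tsc. */
static uint32_t
predict_bucket(uint64_t now_cyc, uint64_t start_cyc, uint64_t tck_int,
	       uint32_t nb_bkts)
{
	return (uint32_t)(((now_cyc - start_cyc) / tck_int) % nb_bkts);
}
```

A mispredicted bucket means the software's notion of the ring start drifted from the hardware's, which is what the calibration loop's ecount tracks.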
* [dpdk-dev] [PATCH 32/36] event/cnxk: add devargs to control timer adapters
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (30 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 31/36] event/cnxk: add timer adapter start and stop pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 33/36] event/cnxk: add Rx adapter support pbhagavatula
` (4 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Shijith Thotton <sthotton@marvell.com>
Add devargs to control each event timer adapter's (i.e. TIM ring's) internal
parameters uniquely. The expected dict format is
[ring-chnk_slots-disable_npa-stats_ena]; 0 selects the default value.
Example:
-a "0002:1e:00.0,tim_ring_ctl=[2-1023-1-0]"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 11 ++++
drivers/event/cnxk/cnxk_tim_evdev.c | 96 ++++++++++++++++++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 10 +++
3 files changed, 116 insertions(+), 1 deletion(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index cfa743da1..c42784a3b 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -135,6 +135,17 @@ Runtime Config Options
-a 0002:0e:00.0,tim_rings_lmt=5
+- ``TIM ring control internal parameters``
+
+ When using multiple TIM rings, the ``tim_ring_ctl`` devargs can be used to
+ control each TIM ring's internal parameters uniquely. The expected dict
+ format is [ring-chnk_slots-disable_npa-stats_ena]; 0 selects the default
+ value.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 7b28969c9..fdc78270d 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -121,7 +121,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
struct cnxk_tim_ring *tim_ring;
- int rc;
+ int i, rc;
if (dev == NULL)
return -ENODEV;
@@ -165,6 +165,20 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->disable_npa = dev->disable_npa;
tim_ring->enable_stats = dev->enable_stats;
+ for (i = 0; i < dev->ring_ctl_cnt; i++) {
+ struct cnxk_tim_ctl *ring_ctl = &dev->ring_ctl_data[i];
+
+ if (ring_ctl->ring == tim_ring->ring_id) {
+ tim_ring->chunk_sz =
+ ring_ctl->chunk_slots ?
+ ((uint32_t)(ring_ctl->chunk_slots + 1) *
+ CNXK_TIM_CHUNK_ALIGNMENT) :
+ tim_ring->chunk_sz;
+ tim_ring->enable_stats = ring_ctl->enable_stats;
+ tim_ring->disable_npa = ring_ctl->disable_npa;
+ }
+ }
+
if (tim_ring->disable_npa) {
tim_ring->nb_chunks =
tim_ring->nb_timers /
@@ -368,6 +382,84 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
return 0;
}
+static void
+cnxk_tim_parse_ring_param(char *value, void *opaque)
+{
+ struct cnxk_tim_evdev *dev = opaque;
+ struct cnxk_tim_ctl ring_ctl = {0};
+ char *tok = strtok(value, "-");
+ struct cnxk_tim_ctl *old_ptr;
+ uint16_t *val;
+
+ val = (uint16_t *)&ring_ctl;
+
+ if (!strlen(value))
+ return;
+
+ while (tok != NULL) {
+ *val = atoi(tok);
+ tok = strtok(NULL, "-");
+ val++;
+ }
+
+ if (val != (&ring_ctl.enable_stats + 1)) {
+ plt_err("Invalid ring param expected [ring-chunk_sz-disable_npa-enable_stats]");
+ return;
+ }
+
+ dev->ring_ctl_cnt++;
+ old_ptr = dev->ring_ctl_data;
+ dev->ring_ctl_data =
+ rte_realloc(dev->ring_ctl_data,
+ sizeof(struct cnxk_tim_ctl) * dev->ring_ctl_cnt, 0);
+ if (dev->ring_ctl_data == NULL) {
+ dev->ring_ctl_data = old_ptr;
+ dev->ring_ctl_cnt--;
+ return;
+ }
+
+ dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl;
+}
+
+static void
+cnxk_tim_parse_ring_ctl_list(const char *value, void *opaque)
+{
+ char *s = strdup(value);
+ char *start = NULL;
+ char *end = NULL;
+ char *f = s;
+
+ while (*s) {
+ if (*s == '[')
+ start = s;
+ else if (*s == ']')
+ end = s;
+
+ if (start && start < end) {
+ *end = 0;
+ cnxk_tim_parse_ring_param(start + 1, opaque);
+ start = end;
+ s = end;
+ }
+ s++;
+ }
+
+ free(f);
+}
+
+static int
+cnxk_tim_parse_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ /* Dict format [ring-chunk_sz-disable_npa-enable_stats] use '-' as ','
+ * isn't allowed. 0 represents default.
+ */
+ cnxk_tim_parse_ring_ctl_list(value, opaque);
+
+ return 0;
+}
+
static void
cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
{
@@ -388,6 +480,8 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
&dev->enable_stats);
rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
&dev->min_ring_cnt);
+ rte_kvargs_process(kvlist, CNXK_TIM_RING_CTL,
+ &cnxk_tim_parse_kvargs_dict, dev);
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 7aa9650c1..d0df226ed 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -38,6 +38,7 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_STATS_ENA "tim_stats_ena"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define CNXK_TIM_RING_CTL "tim_ring_ctl"
#define CNXK_TIM_SP 0x1
#define CNXK_TIM_MP 0x2
@@ -75,6 +76,13 @@
#define TIM_BUCKET_SEMA_WLOCK \
(TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
+struct cnxk_tim_ctl {
+ uint16_t ring;
+ uint16_t chunk_slots;
+ uint16_t disable_npa;
+ uint16_t enable_stats;
+};
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
@@ -85,6 +93,8 @@ struct cnxk_tim_evdev {
uint16_t chunk_slots;
uint16_t min_ring_cnt;
uint8_t enable_stats;
+ uint16_t ring_ctl_cnt;
+ struct cnxk_tim_ctl *ring_ctl_data;
};
enum cnxk_tim_clk_src {
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
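The ``tim_ring_ctl`` value is a bracketed, dash-separated tuple because ',' is already the kvargs separator. A self-contained sketch of parsing one [ring-chnk_slots-disable_npa-stats_ena] group the same way cnxk_tim_parse_ring_param does — strtok over '-', writing each field in declaration order (struct and function names are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct ring_ctl { /* mirrors cnxk_tim_ctl's field order */
	uint16_t ring;
	uint16_t chunk_slots;
	uint16_t disable_npa;
	uint16_t enable_stats;
};

/* Parse "2-1023-1-0" into the four uint16_t fields in declaration
 * order; returns 0 on success, -1 if the field count is wrong. */
static int
parse_ring_param(char *value, struct ring_ctl *ctl)
{
	uint16_t *val = (uint16_t *)ctl;
	char *tok = strtok(value, "-");

	while (tok != NULL) {
		if (val > &ctl->enable_stats)
			return -1; /* too many fields */
		*val++ = (uint16_t)atoi(tok);
		tok = strtok(NULL, "-");
	}
	/* exactly four fields consumed? */
	return val == &ctl->enable_stats + 1 ? 0 : -1;
}
```

Note that strtok modifies its input in place, which is why the driver parses a copy of the devargs string (see the strdup in cnxk_tim_parse_ring_ctl_list).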
* [dpdk-dev] [PATCH 33/36] event/cnxk: add Rx adapter support
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (31 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 32/36] event/cnxk: add devargs to control timer adapters pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 34/36] event/cnxk: add Rx adapter fastpath ops pbhagavatula
` (3 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add support for event eth Rx adapter.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 4 +
drivers/event/cnxk/cn10k_eventdev.c | 76 +++++++++++
drivers/event/cnxk/cn10k_worker.h | 4 +
drivers/event/cnxk/cn9k_eventdev.c | 82 ++++++++++++
drivers/event/cnxk/cn9k_worker.h | 4 +
drivers/event/cnxk/cnxk_eventdev.h | 21 +++
drivers/event/cnxk/cnxk_eventdev_adptr.c | 157 +++++++++++++++++++++++
7 files changed, 348 insertions(+)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index c42784a3b..abab7f742 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -39,6 +39,10 @@ Features of the OCTEON CNXK SSO PMD are:
time granularity of 2.5us on CN9K and 1us on CN10K.
- Up to 256 TIM rings aka event timer adapters.
- Up to 8 rings traversed in parallel.
+- HW managed packets enqueued from ethdev to eventdev exposed through event eth
+ Rx adapter.
+- N:1 ethernet device Rx queue to Event queue mapping.
+- Full Rx offload support defined through ethdev queue configuration.
Prerequisites and Compilation procedure
---------------------------------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 2b2025cdb..72175e16f 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -407,6 +407,76 @@ cn10k_sso_selftest(void)
return cnxk_sso_selftest(RTE_STR(event_cn10k));
}
+static int
+cn10k_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev, uint32_t *caps)
+{
+ int rc;
+
+ RTE_SET_USED(event_dev);
+ rc = strncmp(eth_dev->device->driver->name, "net_cn10k", 9);
+ if (rc)
+ *caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
+ else
+ *caps = RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT |
+ RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ |
+ RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID;
+
+ return 0;
+}
+
+static void
+cn10k_sso_set_lookup_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ struct cn10k_sso_hws *ws = event_dev->data->ports[i];
+ ws->lookup_mem = lookup_mem;
+ }
+}
+
+static int
+cn10k_sso_rx_adapter_queue_add(
+ const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev,
+ int32_t rx_queue_id,
+ const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
+{
+ void *lookup_mem;
+ int rc;
+
+ rc = strncmp(eth_dev->device->driver->name, "net_cn10k", 9);
+ if (rc)
+ return -EINVAL;
+
+ rc = cnxk_sso_rx_adapter_queue_add(event_dev, eth_dev, rx_queue_id,
+ queue_conf);
+ if (rc)
+ return -EINVAL;
+
+ lookup_mem = ((struct cn10k_eth_rxq *)eth_dev->data->rx_queues[0])
+ ->lookup_mem;
+ cn10k_sso_set_lookup_mem(event_dev, lookup_mem);
+ cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+
+ return 0;
+}
+
+static int
+cn10k_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t rx_queue_id)
+{
+ int rc;
+
+ rc = strncmp(eth_dev->device->driver->name, "net_cn10k", 9);
+ if (rc)
+ return -EINVAL;
+
+ return cnxk_sso_rx_adapter_queue_del(event_dev, eth_dev, rx_queue_id);
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -420,6 +490,12 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .eth_rx_adapter_caps_get = cn10k_sso_rx_adapter_caps_get,
+ .eth_rx_adapter_queue_add = cn10k_sso_rx_adapter_queue_add,
+ .eth_rx_adapter_queue_del = cn10k_sso_rx_adapter_queue_del,
+ .eth_rx_adapter_start = cnxk_sso_rx_adapter_start,
+ .eth_rx_adapter_stop = cnxk_sso_rx_adapter_stop,
+
.timer_adapter_caps_get = cnxk_tim_caps_get,
.dump = cnxk_sso_dump,
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index ed4e3bd63..d418e80aa 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -5,9 +5,13 @@
#ifndef __CN10K_WORKER_H__
#define __CN10K_WORKER_H__
+#include "cnxk_ethdev.h"
#include "cnxk_eventdev.h"
#include "cnxk_worker.h"
+#include "cn10k_ethdev.h"
+#include "cn10k_rx.h"
+
/* SSO Operations */
static __rte_always_inline uint8_t
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index e39b4ded2..4aa577bd5 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -481,6 +481,82 @@ cn9k_sso_selftest(void)
return cnxk_sso_selftest(RTE_STR(event_cn9k));
}
+static int
+cn9k_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev, uint32_t *caps)
+{
+ int rc;
+
+ RTE_SET_USED(event_dev);
+ rc = strncmp(eth_dev->device->driver->name, "net_cn9k", 8);
+ if (rc)
+ *caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
+ else
+ *caps = RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT |
+ RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ |
+ RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID;
+
+ return 0;
+}
+
+static void
+cn9k_sso_set_lookup_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ if (dev->dual_ws) {
+ struct cn9k_sso_hws_dual *dws =
+ event_dev->data->ports[i];
+ dws->lookup_mem = lookup_mem;
+ } else {
+ struct cn9k_sso_hws *ws = event_dev->data->ports[i];
+ ws->lookup_mem = lookup_mem;
+ }
+ }
+}
+
+static int
+cn9k_sso_rx_adapter_queue_add(
+ const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev,
+ int32_t rx_queue_id,
+ const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
+{
+ void *lookup_mem;
+ int rc;
+
+ rc = strncmp(eth_dev->device->driver->name, "net_cn9k", 8);
+ if (rc)
+ return -EINVAL;
+
+ rc = cnxk_sso_rx_adapter_queue_add(event_dev, eth_dev, rx_queue_id,
+ queue_conf);
+ if (rc)
+ return -EINVAL;
+
+ lookup_mem = ((struct cn9k_eth_rxq *)eth_dev->data->rx_queues[0])
+ ->lookup_mem;
+ cn9k_sso_set_lookup_mem(event_dev, lookup_mem);
+ cn9k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+
+ return 0;
+}
+
+static int
+cn9k_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t rx_queue_id)
+{
+ int rc;
+
+ rc = strncmp(eth_dev->device->driver->name, "net_cn9k", 8);
+ if (rc)
+ return -EINVAL;
+
+ return cnxk_sso_rx_adapter_queue_del(event_dev, eth_dev, rx_queue_id);
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -494,6 +570,12 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .eth_rx_adapter_caps_get = cn9k_sso_rx_adapter_caps_get,
+ .eth_rx_adapter_queue_add = cn9k_sso_rx_adapter_queue_add,
+ .eth_rx_adapter_queue_del = cn9k_sso_rx_adapter_queue_del,
+ .eth_rx_adapter_start = cnxk_sso_rx_adapter_start,
+ .eth_rx_adapter_stop = cnxk_sso_rx_adapter_stop,
+
.timer_adapter_caps_get = cnxk_tim_caps_get,
.dump = cnxk_sso_dump,
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index b997db2fe..b5af5ecf4 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -5,9 +5,13 @@
#ifndef __CN9K_WORKER_H__
#define __CN9K_WORKER_H__
+#include "cnxk_ethdev.h"
#include "cnxk_eventdev.h"
#include "cnxk_worker.h"
+#include "cn9k_ethdev.h"
+#include "cn9k_rx.h"
+
/* SSO Operations */
static __rte_always_inline uint8_t
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 32abf9632..9c3331f7e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -6,6 +6,8 @@
#define __CNXK_EVENTDEV_H__
#include <rte_devargs.h>
+#include <rte_ethdev.h>
+#include <rte_event_eth_rx_adapter.h>
#include <rte_kvargs.h>
#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
@@ -81,7 +83,10 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ uint64_t rx_offloads;
uint64_t adptr_xae_cnt;
+ uint16_t rx_adptr_pool_cnt;
+ uint64_t *rx_adptr_pools;
uint16_t tim_adptr_ring_cnt;
uint16_t *timer_adptr_rings;
uint64_t *timer_adptr_sz;
@@ -108,6 +113,7 @@ struct cnxk_sso_evdev {
struct cn10k_sso_hws {
/* Get Work Fastpath data */
CN10K_SSO_HWS_OPS;
+ void *lookup_mem;
uint32_t gw_wdata;
uint8_t swtag_req;
uint8_t hws_id;
@@ -132,6 +138,7 @@ struct cn10k_sso_hws {
struct cn9k_sso_hws {
/* Get Work Fastpath data */
CN9K_SSO_HWS_OPS;
+ void *lookup_mem;
uint8_t swtag_req;
uint8_t hws_id;
/* Add Work Fastpath data */
@@ -148,6 +155,7 @@ struct cn9k_sso_hws_state {
struct cn9k_sso_hws_dual {
/* Get Work Fastpath data */
struct cn9k_sso_hws_state ws_state[2]; /* Ping and Pong */
+ void *lookup_mem;
uint8_t swtag_req;
uint8_t vws; /* Ping pong bit */
uint8_t hws_id;
@@ -235,4 +243,17 @@ void cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f);
/* CN9K */
void cn9k_sso_set_rsrc(void *arg);
+/* Common adapter ops */
+int cnxk_sso_rx_adapter_queue_add(
+ const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev,
+ int32_t rx_queue_id,
+ const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
+int cnxk_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t rx_queue_id);
+int cnxk_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev);
+int cnxk_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev);
+
#endif /* __CNXK_EVENTDEV_H__ */
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index 6d9615453..e06033117 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -2,6 +2,7 @@
* Copyright(C) 2021 Marvell International Ltd.
*/
+#include "cnxk_ethdev.h"
#include "cnxk_eventdev.h"
void
@@ -11,6 +12,32 @@ cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
int i;
switch (event_type) {
+ case RTE_EVENT_TYPE_ETHDEV: {
+ struct cnxk_eth_rxq_sp *rxq = data;
+ uint64_t *old_ptr;
+
+ for (i = 0; i < dev->rx_adptr_pool_cnt; i++) {
+ if ((uint64_t)rxq->qconf.mp == dev->rx_adptr_pools[i])
+ return;
+ }
+
+ dev->rx_adptr_pool_cnt++;
+ old_ptr = dev->rx_adptr_pools;
+ dev->rx_adptr_pools = rte_realloc(
+ dev->rx_adptr_pools,
+ sizeof(uint64_t) * dev->rx_adptr_pool_cnt, 0);
+ if (dev->rx_adptr_pools == NULL) {
+ dev->adptr_xae_cnt += rxq->qconf.mp->size;
+ dev->rx_adptr_pools = old_ptr;
+ dev->rx_adptr_pool_cnt--;
+ return;
+ }
+ dev->rx_adptr_pools[dev->rx_adptr_pool_cnt - 1] =
+ (uint64_t)rxq->qconf.mp;
+
+ dev->adptr_xae_cnt += rxq->qconf.mp->size;
+ break;
+ }
case RTE_EVENT_TYPE_TIMER: {
struct cnxk_tim_ring *timr = data;
uint16_t *old_ring_ptr;
@@ -65,3 +92,133 @@ cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
break;
}
}
+
+static int
+cnxk_sso_rxq_enable(struct cnxk_eth_dev *cnxk_eth_dev, uint16_t rq_id,
+ uint16_t port_id, const struct rte_event *ev,
+ uint8_t custom_flowid)
+{
+ struct roc_nix_rq *rq;
+
+ rq = &cnxk_eth_dev->rqs[rq_id];
+ rq->sso_ena = 1;
+ rq->tt = ev->sched_type;
+ rq->hwgrp = ev->queue_id;
+ rq->flow_tag_width = 20;
+ rq->wqe_skip = 1;
+ rq->tag_mask = (port_id & 0xF) << 20;
+ rq->tag_mask |= (((port_id >> 4) & 0xF) | (RTE_EVENT_TYPE_ETHDEV << 4))
+ << 24;
+
+ if (custom_flowid) {
+ rq->flow_tag_width = 0;
+ rq->tag_mask |= ev->flow_id;
+ }
+
+ return roc_nix_rq_modify(&cnxk_eth_dev->nix, rq, 0);
+}
+
+static int
+cnxk_sso_rxq_disable(struct cnxk_eth_dev *cnxk_eth_dev, uint16_t rq_id)
+{
+ struct roc_nix_rq *rq;
+
+ rq = &cnxk_eth_dev->rqs[rq_id];
+ rq->sso_ena = 0;
+ rq->flow_tag_width = 32;
+ rq->tag_mask = 0;
+
+ return roc_nix_rq_modify(&cnxk_eth_dev->nix, rq, 0);
+}
+
+int
+cnxk_sso_rx_adapter_queue_add(
+ const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev,
+ int32_t rx_queue_id,
+ const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
+{
+ struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t port = eth_dev->data->port_id;
+ struct cnxk_eth_rxq_sp *rxq_sp;
+ int i, rc = 0;
+
+ if (rx_queue_id < 0) {
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rxq_sp = eth_dev->data->rx_queues[i];
+ rxq_sp = rxq_sp - 1;
+ cnxk_sso_updt_xae_cnt(dev, rxq_sp,
+ RTE_EVENT_TYPE_ETHDEV);
+ rc = cnxk_sso_xae_reconfigure(
+ (struct rte_eventdev *)(uintptr_t)event_dev);
+ rc |= cnxk_sso_rxq_enable(
+ cnxk_eth_dev, i, port, &queue_conf->ev,
+ !!(queue_conf->rx_queue_flags &
+ RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID));
+ }
+ } else {
+ rxq_sp = eth_dev->data->rx_queues[rx_queue_id];
+ rxq_sp = rxq_sp - 1;
+ cnxk_sso_updt_xae_cnt(dev, rxq_sp, RTE_EVENT_TYPE_ETHDEV);
+ rc = cnxk_sso_xae_reconfigure(
+ (struct rte_eventdev *)(uintptr_t)event_dev);
+ rc |= cnxk_sso_rxq_enable(
+ cnxk_eth_dev, (uint16_t)rx_queue_id, port,
+ &queue_conf->ev,
+ !!(queue_conf->rx_queue_flags &
+ RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID));
+ }
+
+ if (rc < 0) {
+ plt_err("Failed to configure Rx adapter port=%d, q=%d", port,
+ queue_conf->ev.queue_id);
+ return rc;
+ }
+
+ dev->rx_offloads |= cnxk_eth_dev->rx_offload_flags;
+
+ return 0;
+}
+
+int
+cnxk_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t rx_queue_id)
+{
+ struct cnxk_eth_dev *dev = eth_dev->data->dev_private;
+ int i, rc = 0;
+
+ RTE_SET_USED(event_dev);
+ if (rx_queue_id < 0) {
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
+ rc = cnxk_sso_rxq_disable(dev, i);
+ } else {
+ rc = cnxk_sso_rxq_disable(dev, (uint16_t)rx_queue_id);
+ }
+
+ if (rc < 0)
+ plt_err("Failed to clear Rx adapter config port=%d, q=%d",
+ eth_dev->data->port_id, rx_queue_id);
+
+ return rc;
+}
+
+int
+cnxk_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(eth_dev);
+
+ return 0;
+}
+
+int
+cnxk_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(eth_dev);
+
+ return 0;
+}
--
2.17.1
* [dpdk-dev] [PATCH 34/36] event/cnxk: add Rx adapter fastpath ops
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (32 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 33/36] event/cnxk: add Rx adapter support pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 35/36] event/cnxk: add Tx adapter support pbhagavatula
` (2 subsequent siblings)
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add support for event eth Rx adapter fastpath operations.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 115 ++++++++-
drivers/event/cnxk/cn10k_worker.c | 164 +++++++++----
drivers/event/cnxk/cn10k_worker.h | 91 +++++--
drivers/event/cnxk/cn9k_eventdev.c | 254 ++++++++++++++++++-
drivers/event/cnxk/cn9k_worker.c | 364 +++++++++++++++++++---------
drivers/event/cnxk/cn9k_worker.h | 158 +++++++++---
drivers/event/cnxk/meson.build | 8 +
7 files changed, 932 insertions(+), 222 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 72175e16f..70c6fedae 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -247,17 +247,120 @@ static void
cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ const event_dequeue_t sso_hws_deq[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn10k_sso_hws_deq_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn10k_sso_hws_deq_burst_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_t sso_hws_tmo_deq[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn10k_sso_hws_tmo_deq_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_burst_t sso_hws_tmo_deq_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn10k_sso_hws_tmo_deq_burst_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_t sso_hws_deq_seg[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_burst_t sso_hws_deq_seg_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_burst_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_t sso_hws_tmo_deq_seg[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn10k_sso_hws_tmo_deq_seg_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_burst_t sso_hws_tmo_deq_seg_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn10k_sso_hws_tmo_deq_seg_burst_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
event_dev->enqueue = cn10k_sso_hws_enq;
event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
-
- event_dev->dequeue = cn10k_sso_hws_deq;
- event_dev->dequeue_burst = cn10k_sso_hws_deq_burst;
- if (dev->is_timeout_deq) {
- event_dev->dequeue = cn10k_sso_hws_tmo_deq;
- event_dev->dequeue_burst = cn10k_sso_hws_tmo_deq_burst;
+ if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
+ event_dev->dequeue = sso_hws_deq_seg
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ event_dev->dequeue_burst = sso_hws_deq_seg_burst
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = sso_hws_tmo_deq_seg
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ event_dev->dequeue_burst = sso_hws_tmo_deq_seg_burst
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ }
+ } else {
+ event_dev->dequeue = sso_hws_deq
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ event_dev->dequeue_burst = sso_hws_deq_burst
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = sso_hws_tmo_deq
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ event_dev->dequeue_burst = sso_hws_tmo_deq_burst
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ }
}
}
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 57b0714bb..46f72cf20 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -60,56 +60,118 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
-uint16_t __rte_hot
-cn10k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
-{
- struct cn10k_sso_hws *ws = port;
-
- RTE_SET_USED(timeout_ticks);
-
- if (ws->swtag_req) {
- ws->swtag_req = 0;
- cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
- return 1;
+#define R(name, f3, f2, f1, f0, flags) \
+ uint16_t __rte_hot cn10k_sso_hws_deq_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+ { \
+ struct cn10k_sso_hws *ws = port; \
+ \
+ RTE_SET_USED(timeout_ticks); \
+ \
+ if (ws->swtag_req) { \
+ ws->swtag_req = 0; \
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op); \
+ return 1; \
+ } \
+ \
+ return cn10k_sso_hws_get_work(ws, ev, flags, ws->lookup_mem); \
+ } \
+ \
+ uint16_t __rte_hot cn10k_sso_hws_deq_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks) \
+ { \
+ RTE_SET_USED(nb_events); \
+ \
+ return cn10k_sso_hws_deq_##name(port, ev, timeout_ticks); \
+ } \
+ \
+ uint16_t __rte_hot cn10k_sso_hws_tmo_deq_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+ { \
+ struct cn10k_sso_hws *ws = port; \
+ uint16_t ret = 1; \
+ uint64_t iter; \
+ \
+ if (ws->swtag_req) { \
+ ws->swtag_req = 0; \
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op); \
+ return ret; \
+ } \
+ \
+ ret = cn10k_sso_hws_get_work(ws, ev, flags, ws->lookup_mem); \
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \
+ ret = cn10k_sso_hws_get_work(ws, ev, flags, \
+ ws->lookup_mem); \
+ \
+ return ret; \
+ } \
+ \
+ uint16_t __rte_hot cn10k_sso_hws_tmo_deq_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks) \
+ { \
+ RTE_SET_USED(nb_events); \
+ \
+ return cn10k_sso_hws_tmo_deq_##name(port, ev, timeout_ticks); \
+ } \
+ \
+ uint16_t __rte_hot cn10k_sso_hws_deq_seg_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+ { \
+ struct cn10k_sso_hws *ws = port; \
+ \
+ RTE_SET_USED(timeout_ticks); \
+ \
+ if (ws->swtag_req) { \
+ ws->swtag_req = 0; \
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op); \
+ return 1; \
+ } \
+ \
+ return cn10k_sso_hws_get_work( \
+ ws, ev, flags | NIX_RX_MULTI_SEG_F, ws->lookup_mem); \
+ } \
+ \
+ uint16_t __rte_hot cn10k_sso_hws_deq_seg_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks) \
+ { \
+ RTE_SET_USED(nb_events); \
+ \
+ return cn10k_sso_hws_deq_seg_##name(port, ev, timeout_ticks); \
+ } \
+ \
+ uint16_t __rte_hot cn10k_sso_hws_tmo_deq_seg_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+ { \
+ struct cn10k_sso_hws *ws = port; \
+ uint16_t ret = 1; \
+ uint64_t iter; \
+ \
+ if (ws->swtag_req) { \
+ ws->swtag_req = 0; \
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op); \
+ return ret; \
+ } \
+ \
+ ret = cn10k_sso_hws_get_work(ws, ev, \
+ flags | NIX_RX_MULTI_SEG_F, ws->lookup_mem); \
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \
+ ret = cn10k_sso_hws_get_work(ws, ev, \
+ flags | NIX_RX_MULTI_SEG_F, ws->lookup_mem); \
+ \
+ return ret; \
+ } \
+ \
+ uint16_t __rte_hot cn10k_sso_hws_tmo_deq_seg_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks) \
+ { \
+ RTE_SET_USED(nb_events); \
+ \
+ return cn10k_sso_hws_tmo_deq_seg_##name(port, ev, \
+ timeout_ticks); \
}
- return cn10k_sso_hws_get_work(ws, ev);
-}
-
-uint16_t __rte_hot
-cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
- uint64_t timeout_ticks)
-{
- RTE_SET_USED(nb_events);
-
- return cn10k_sso_hws_deq(port, ev, timeout_ticks);
-}
-
-uint16_t __rte_hot
-cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
-{
- struct cn10k_sso_hws *ws = port;
- uint16_t ret = 1;
- uint64_t iter;
-
- if (ws->swtag_req) {
- ws->swtag_req = 0;
- cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
- return ret;
- }
-
- ret = cn10k_sso_hws_get_work(ws, ev);
- for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
- ret = cn10k_sso_hws_get_work(ws, ev);
-
- return ret;
-}
-
-uint16_t __rte_hot
-cn10k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
- uint16_t nb_events, uint64_t timeout_ticks)
-{
- RTE_SET_USED(nb_events);
-
- return cn10k_sso_hws_tmo_deq(port, ev, timeout_ticks);
-}
+NIX_RX_FASTPATH_MODES
+#undef R
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index d418e80aa..9521a5c94 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -83,20 +83,40 @@ cn10k_sso_hws_forward_event(struct cn10k_sso_hws *ws,
cn10k_sso_hws_fwd_group(ws, ev, grp);
}
+static __rte_always_inline void
+cn10k_wqe_to_mbuf(uint64_t wqe, const uint64_t mbuf, uint8_t port_id,
+ const uint32_t tag, const uint32_t flags,
+ const void *const lookup_mem)
+{
+ union mbuf_initializer mbuf_init = {
+ .fields = {.data_off = RTE_PKTMBUF_HEADROOM,
+ .refcnt = 1,
+ .nb_segs = 1,
+ .port = port_id},
+ };
+
+ cn10k_nix_cqe_to_mbuf((struct nix_cqe_hdr_s *)wqe, tag,
+ (struct rte_mbuf *)mbuf, lookup_mem,
+ mbuf_init.value, flags);
+}
+
static __rte_always_inline uint16_t
-cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev)
+cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev,
+ const uint32_t flags, void *lookup_mem)
{
union {
__uint128_t get_work;
uint64_t u64[2];
} gw;
+ uint64_t mbuf;
gw.get_work = ws->gw_wdata;
#if defined(RTE_ARCH_ARM64) && !defined(__clang__)
asm volatile(
PLT_CPU_FEATURE_PREAMBLE
"caspl %[wdata], %H[wdata], %[wdata], %H[wdata], [%[gw_loc]]\n"
- : [wdata] "+r"(gw.get_work)
+ "sub %[mbuf], %H[wdata], #0x80 \n"
+ : [wdata] "+r"(gw.get_work), [mbuf] "=&r"(mbuf)
: [gw_loc] "r"(ws->getwrk_op)
: "memory");
#else
@@ -104,11 +124,25 @@ cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev)
do {
roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
} while (gw.u64[0] & BIT_ULL(63));
+ mbuf = (uint64_t)((char *)gw.u64[1] - sizeof(struct rte_mbuf));
#endif
gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
(gw.u64[0] & (0x3FFull << 36)) << 4 |
(gw.u64[0] & 0xffffffff);
+ if (CNXK_TT_FROM_EVENT(gw.u64[0]) != SSO_TT_EMPTY) {
+ if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+ RTE_EVENT_TYPE_ETHDEV) {
+ uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]);
+
+ gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]);
+ cn10k_wqe_to_mbuf(gw.u64[1], mbuf, port,
+ gw.u64[0] & 0xFFFFF, flags,
+ lookup_mem);
+ gw.u64[1] = mbuf;
+ }
+ }
+
ev->event = gw.u64[0];
ev->u64 = gw.u64[1];
@@ -123,6 +157,7 @@ cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev)
__uint128_t get_work;
uint64_t u64[2];
} gw;
+ uint64_t mbuf;
#ifdef RTE_ARCH_ARM64
asm volatile(PLT_CPU_FEATURE_PREAMBLE
@@ -133,19 +168,34 @@ cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev)
" ldp %[tag], %[wqp], [%[tag_loc]] \n"
" tbnz %[tag], 63, rty%= \n"
"done%=: dmb ld \n"
- : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ " sub %[mbuf], %[wqp], #0x80 \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1]),
+ [mbuf] "=&r"(mbuf)
: [tag_loc] "r"(ws->tag_wqe_op)
: "memory");
#else
do {
roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
} while (gw.u64[0] & BIT_ULL(63));
+ mbuf = (uint64_t)((char *)gw.u64[1] - sizeof(struct rte_mbuf));
#endif
gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
(gw.u64[0] & (0x3FFull << 36)) << 4 |
(gw.u64[0] & 0xffffffff);
+ if (CNXK_TT_FROM_EVENT(gw.u64[0]) != SSO_TT_EMPTY) {
+ if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+ RTE_EVENT_TYPE_ETHDEV) {
+ uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]);
+
+ gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]);
+ cn10k_wqe_to_mbuf(gw.u64[1], mbuf, port,
+ gw.u64[0] & 0xFFFFF, 0, NULL);
+ gw.u64[1] = mbuf;
+ }
+ }
+
ev->event = gw.u64[0];
ev->u64 = gw.u64[1];
@@ -164,16 +214,29 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
-uint16_t __rte_hot cn10k_sso_hws_deq(void *port, struct rte_event *ev,
- uint64_t timeout_ticks);
-uint16_t __rte_hot cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[],
- uint16_t nb_events,
- uint64_t timeout_ticks);
-uint16_t __rte_hot cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
- uint64_t timeout_ticks);
-uint16_t __rte_hot cn10k_sso_hws_tmo_deq_burst(void *port,
- struct rte_event ev[],
- uint16_t nb_events,
- uint64_t timeout_ticks);
+#define R(name, f3, f2, f1, f0, flags) \
+ uint16_t __rte_hot cn10k_sso_hws_deq_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn10k_sso_hws_deq_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn10k_sso_hws_tmo_deq_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn10k_sso_hws_tmo_deq_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn10k_sso_hws_deq_seg_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn10k_sso_hws_deq_seg_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn10k_sso_hws_tmo_deq_seg_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn10k_sso_hws_tmo_deq_seg_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks);
+
+NIX_RX_FASTPATH_MODES
+#undef R
#endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 4aa577bd5..e4383dca1 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -252,17 +252,179 @@ static void
cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ /* Single WS modes */
+ const event_dequeue_t sso_hws_deq[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_deq_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_deq_burst_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_t sso_hws_tmo_deq[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_tmo_deq_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_burst_t sso_hws_tmo_deq_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_tmo_deq_burst_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_t sso_hws_deq_seg[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_burst_t sso_hws_deq_seg_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_burst_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_t sso_hws_tmo_deq_seg[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_tmo_deq_seg_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_burst_t sso_hws_tmo_deq_seg_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_tmo_deq_seg_burst_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ /* Dual WS modes */
+ const event_dequeue_t sso_hws_dual_deq[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_burst_t sso_hws_dual_deq_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_burst_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_t sso_hws_dual_tmo_deq[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_dual_tmo_deq_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_burst_t sso_hws_dual_tmo_deq_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_dual_tmo_deq_burst_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_t sso_hws_dual_deq_seg[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_burst_t sso_hws_dual_deq_seg_burst[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_burst_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_t sso_hws_dual_tmo_deq_seg[2][2][2][2] = {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_dual_tmo_deq_seg_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const event_dequeue_burst_t sso_hws_dual_tmo_deq_seg_burst[2][2][2][2] =
+ {
+#define R(name, f3, f2, f1, f0, flags) \
+ [f3][f2][f1][f0] = cn9k_sso_hws_dual_tmo_deq_seg_burst_##name,
+ NIX_RX_FASTPATH_MODES
+#undef R
+ };
event_dev->enqueue = cn9k_sso_hws_enq;
event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
-
- event_dev->dequeue = cn9k_sso_hws_deq;
- event_dev->dequeue_burst = cn9k_sso_hws_deq_burst;
- if (dev->deq_tmo_ns) {
- event_dev->dequeue = cn9k_sso_hws_tmo_deq;
- event_dev->dequeue_burst = cn9k_sso_hws_tmo_deq_burst;
+ if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
+ event_dev->dequeue = sso_hws_deq_seg
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ event_dev->dequeue_burst = sso_hws_deq_seg_burst
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = sso_hws_tmo_deq_seg
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ event_dev->dequeue_burst = sso_hws_tmo_deq_seg_burst
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ }
+ } else {
+ event_dev->dequeue = sso_hws_deq
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ event_dev->dequeue_burst = sso_hws_deq_burst
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = sso_hws_tmo_deq
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ event_dev->dequeue_burst = sso_hws_tmo_deq_burst
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ }
}
if (dev->dual_ws) {
@@ -272,14 +434,82 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_forward_burst =
cn9k_sso_hws_dual_enq_fwd_burst;
- event_dev->dequeue = cn9k_sso_hws_dual_deq;
- event_dev->dequeue_burst = cn9k_sso_hws_dual_deq_burst;
- if (dev->deq_tmo_ns) {
- event_dev->dequeue = cn9k_sso_hws_dual_tmo_deq;
- event_dev->dequeue_burst =
- cn9k_sso_hws_dual_tmo_deq_burst;
+ if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
+ event_dev->dequeue = sso_hws_dual_deq_seg
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ event_dev->dequeue_burst = sso_hws_dual_deq_seg_burst
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = sso_hws_dual_tmo_deq_seg
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_RSS_F)];
+ event_dev->dequeue_burst =
+ sso_hws_dual_tmo_deq_seg_burst
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_RSS_F)];
+ }
+ } else {
+ event_dev->dequeue = sso_hws_dual_deq
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ event_dev->dequeue_burst = sso_hws_dual_deq_burst
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = sso_hws_dual_tmo_deq
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_RSS_F)];
+ event_dev->dequeue_burst =
+ sso_hws_dual_tmo_deq_burst
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offloads &
+ NIX_RX_OFFLOAD_RSS_F)];
+ }
}
}
+
+ rte_mb();
}
static void *
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index 41ffd88a0..fb572c7c9 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -60,59 +60,121 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
-uint16_t __rte_hot
-cn9k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
-{
- struct cn9k_sso_hws *ws = port;
-
- RTE_SET_USED(timeout_ticks);
-
- if (ws->swtag_req) {
- ws->swtag_req = 0;
- cnxk_sso_hws_swtag_wait(ws->tag_op);
- return 1;
+#define R(name, f3, f2, f1, f0, flags) \
+ uint16_t __rte_hot cn9k_sso_hws_deq_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+ { \
+ struct cn9k_sso_hws *ws = port; \
+ \
+ RTE_SET_USED(timeout_ticks); \
+ \
+ if (ws->swtag_req) { \
+ ws->swtag_req = 0; \
+ cnxk_sso_hws_swtag_wait(ws->tag_op); \
+ return 1; \
+ } \
+ \
+ return cn9k_sso_hws_get_work(ws, ev, flags, ws->lookup_mem); \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_deq_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks) \
+ { \
+ RTE_SET_USED(nb_events); \
+ \
+ return cn9k_sso_hws_deq_##name(port, ev, timeout_ticks); \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_tmo_deq_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+ { \
+ struct cn9k_sso_hws *ws = port; \
+ uint16_t ret = 1; \
+ uint64_t iter; \
+ \
+ if (ws->swtag_req) { \
+ ws->swtag_req = 0; \
+ cnxk_sso_hws_swtag_wait(ws->tag_op); \
+ return ret; \
+ } \
+ \
+ ret = cn9k_sso_hws_get_work(ws, ev, flags, ws->lookup_mem); \
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \
+ ret = cn9k_sso_hws_get_work(ws, ev, flags, \
+ ws->lookup_mem); \
+ \
+ return ret; \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_tmo_deq_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks) \
+ { \
+ RTE_SET_USED(nb_events); \
+ \
+ return cn9k_sso_hws_tmo_deq_##name(port, ev, timeout_ticks); \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_deq_seg_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+ { \
+ struct cn9k_sso_hws *ws = port; \
+ \
+ RTE_SET_USED(timeout_ticks); \
+ \
+ if (ws->swtag_req) { \
+ ws->swtag_req = 0; \
+ cnxk_sso_hws_swtag_wait(ws->tag_op); \
+ return 1; \
+ } \
+ \
+ return cn9k_sso_hws_get_work( \
+ ws, ev, flags | NIX_RX_MULTI_SEG_F, ws->lookup_mem); \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_deq_seg_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks) \
+ { \
+ RTE_SET_USED(nb_events); \
+ \
+ return cn9k_sso_hws_deq_seg_##name(port, ev, timeout_ticks); \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_tmo_deq_seg_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+ { \
+ struct cn9k_sso_hws *ws = port; \
+ uint16_t ret = 1; \
+ uint64_t iter; \
+ \
+ if (ws->swtag_req) { \
+ ws->swtag_req = 0; \
+ cnxk_sso_hws_swtag_wait(ws->tag_op); \
+ return ret; \
+ } \
+ \
+ ret = cn9k_sso_hws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \
+ ws->lookup_mem); \
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \
+ ret = cn9k_sso_hws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, ws->lookup_mem); \
+ \
+ return ret; \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_tmo_deq_seg_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks) \
+ { \
+ RTE_SET_USED(nb_events); \
+ \
+ return cn9k_sso_hws_tmo_deq_seg_##name(port, ev, \
+ timeout_ticks); \
}
- return cn9k_sso_hws_get_work(ws, ev);
-}
-
-uint16_t __rte_hot
-cn9k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
- uint64_t timeout_ticks)
-{
- RTE_SET_USED(nb_events);
-
- return cn9k_sso_hws_deq(port, ev, timeout_ticks);
-}
-
-uint16_t __rte_hot
-cn9k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
-{
- struct cn9k_sso_hws *ws = port;
- uint16_t ret = 1;
- uint64_t iter;
-
- if (ws->swtag_req) {
- ws->swtag_req = 0;
- cnxk_sso_hws_swtag_wait(ws->tag_op);
- return ret;
- }
-
- ret = cn9k_sso_hws_get_work(ws, ev);
- for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
- ret = cn9k_sso_hws_get_work(ws, ev);
-
- return ret;
-}
-
-uint16_t __rte_hot
-cn9k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
- uint16_t nb_events, uint64_t timeout_ticks)
-{
- RTE_SET_USED(nb_events);
-
- return cn9k_sso_hws_tmo_deq(port, ev, timeout_ticks);
-}
+NIX_RX_FASTPATH_MODES
+#undef R
/* Dual ws ops. */
@@ -172,65 +234,145 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
-uint16_t __rte_hot
-cn9k_sso_hws_dual_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
-{
- struct cn9k_sso_hws_dual *dws = port;
- uint16_t gw;
-
- RTE_SET_USED(timeout_ticks);
- if (dws->swtag_req) {
- dws->swtag_req = 0;
- cnxk_sso_hws_swtag_wait(dws->ws_state[!dws->vws].tag_op);
- return 1;
+#define R(name, f3, f2, f1, f0, flags) \
+ uint16_t __rte_hot cn9k_sso_hws_dual_deq_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+ { \
+ struct cn9k_sso_hws_dual *dws = port; \
+ uint16_t gw; \
+ \
+ RTE_SET_USED(timeout_ticks); \
+ if (dws->swtag_req) { \
+ dws->swtag_req = 0; \
+ cnxk_sso_hws_swtag_wait( \
+ dws->ws_state[!dws->vws].tag_op); \
+ return 1; \
+ } \
+ \
+ gw = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws], \
+ &dws->ws_state[!dws->vws], ev, \
+ flags, dws->lookup_mem); \
+ dws->vws = !dws->vws; \
+ return gw; \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks) \
+ { \
+ RTE_SET_USED(nb_events); \
+ \
+ return cn9k_sso_hws_dual_deq_##name(port, ev, timeout_ticks); \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+ { \
+ struct cn9k_sso_hws_dual *dws = port; \
+ uint16_t ret = 1; \
+ uint64_t iter; \
+ \
+ if (dws->swtag_req) { \
+ dws->swtag_req = 0; \
+ cnxk_sso_hws_swtag_wait( \
+ dws->ws_state[!dws->vws].tag_op); \
+ return ret; \
+ } \
+ \
+ ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws], \
+ &dws->ws_state[!dws->vws], \
+ ev, flags, dws->lookup_mem); \
+ dws->vws = !dws->vws; \
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) { \
+ ret = cn9k_sso_hws_dual_get_work( \
+ &dws->ws_state[dws->vws], \
+ &dws->ws_state[!dws->vws], ev, flags, \
+ dws->lookup_mem); \
+ dws->vws = !dws->vws; \
+ } \
+ \
+ return ret; \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks) \
+ { \
+ RTE_SET_USED(nb_events); \
+ \
+ return cn9k_sso_hws_dual_tmo_deq_##name(port, ev, \
+ timeout_ticks); \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+ { \
+ struct cn9k_sso_hws_dual *dws = port; \
+ uint16_t gw; \
+ \
+ RTE_SET_USED(timeout_ticks); \
+ if (dws->swtag_req) { \
+ dws->swtag_req = 0; \
+ cnxk_sso_hws_swtag_wait( \
+ dws->ws_state[!dws->vws].tag_op); \
+ return 1; \
+ } \
+ \
+ gw = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws], \
+ &dws->ws_state[!dws->vws], ev, \
+ flags | NIX_RX_MULTI_SEG_F, dws->lookup_mem); \
+ dws->vws = !dws->vws; \
+ return gw; \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks) \
+ { \
+ RTE_SET_USED(nb_events); \
+ \
+ return cn9k_sso_hws_dual_deq_seg_##name(port, ev, \
+ timeout_ticks); \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_seg_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+ { \
+ struct cn9k_sso_hws_dual *dws = port; \
+ uint16_t ret = 1; \
+ uint64_t iter; \
+ \
+ if (dws->swtag_req) { \
+ dws->swtag_req = 0; \
+ cnxk_sso_hws_swtag_wait( \
+ dws->ws_state[!dws->vws].tag_op); \
+ return ret; \
+ } \
+ \
+ ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws], \
+ &dws->ws_state[!dws->vws], ev, \
+ flags | NIX_RX_MULTI_SEG_F, dws->lookup_mem); \
+ dws->vws = !dws->vws; \
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) { \
+ ret = cn9k_sso_hws_dual_get_work( \
+ &dws->ws_state[dws->vws], \
+ &dws->ws_state[!dws->vws], ev, \
+ flags | NIX_RX_MULTI_SEG_F, dws->lookup_mem); \
+ dws->vws = !dws->vws; \
+ } \
+ \
+ return ret; \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_seg_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks) \
+ { \
+ RTE_SET_USED(nb_events); \
+ \
+ return cn9k_sso_hws_dual_tmo_deq_seg_##name(port, ev, \
+ timeout_ticks); \
}
- gw = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
- &dws->ws_state[!dws->vws], ev);
- dws->vws = !dws->vws;
- return gw;
-}
-
-uint16_t __rte_hot
-cn9k_sso_hws_dual_deq_burst(void *port, struct rte_event ev[],
- uint16_t nb_events, uint64_t timeout_ticks)
-{
- RTE_SET_USED(nb_events);
-
- return cn9k_sso_hws_dual_deq(port, ev, timeout_ticks);
-}
-
-uint16_t __rte_hot
-cn9k_sso_hws_dual_tmo_deq(void *port, struct rte_event *ev,
- uint64_t timeout_ticks)
-{
- struct cn9k_sso_hws_dual *dws = port;
- uint16_t ret = 1;
- uint64_t iter;
-
- if (dws->swtag_req) {
- dws->swtag_req = 0;
- cnxk_sso_hws_swtag_wait(dws->ws_state[!dws->vws].tag_op);
- return ret;
- }
-
- ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
- &dws->ws_state[!dws->vws], ev);
- dws->vws = !dws->vws;
- for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) {
- ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
- &dws->ws_state[!dws->vws], ev);
- dws->vws = !dws->vws;
- }
-
- return ret;
-}
-
-uint16_t __rte_hot
-cn9k_sso_hws_dual_tmo_deq_burst(void *port, struct rte_event ev[],
- uint16_t nb_events, uint64_t timeout_ticks)
-{
- RTE_SET_USED(nb_events);
-
- return cn9k_sso_hws_dual_tmo_deq(port, ev, timeout_ticks);
-}
+NIX_RX_FASTPATH_MODES
+#undef R
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index b5af5ecf4..bbdca3c95 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -128,17 +128,38 @@ cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws,
}
}
+static __rte_always_inline void
+cn9k_wqe_to_mbuf(uint64_t wqe, const uint64_t mbuf, uint8_t port_id,
+ const uint32_t tag, const uint32_t flags,
+ const void *const lookup_mem)
+{
+ union mbuf_initializer mbuf_init = {
+ .fields = {.data_off = RTE_PKTMBUF_HEADROOM,
+ .refcnt = 1,
+ .nb_segs = 1,
+ .port = port_id},
+ };
+
+ cn9k_nix_cqe_to_mbuf((struct nix_cqe_hdr_s *)wqe, tag,
+ (struct rte_mbuf *)mbuf, lookup_mem,
+ mbuf_init.value, flags);
+}
+
static __rte_always_inline uint16_t
cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws,
struct cn9k_sso_hws_state *ws_pair,
- struct rte_event *ev)
+ struct rte_event *ev, const uint32_t flags,
+ const void *const lookup_mem)
{
const uint64_t set_gw = BIT_ULL(16) | 1;
union {
__uint128_t get_work;
uint64_t u64[2];
} gw;
+ uint64_t mbuf;
+ if (flags & NIX_RX_OFFLOAD_PTYPE_F)
+ rte_prefetch_non_temporal(lookup_mem);
#ifdef RTE_ARCH_ARM64
asm volatile(PLT_CPU_FEATURE_PREAMBLE
"rty%=: \n"
@@ -147,7 +168,10 @@ cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws,
" tbnz %[tag], 63, rty%= \n"
"done%=: str %[gw], [%[pong]] \n"
" dmb ld \n"
- : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ " sub %[mbuf], %[wqp], #0x80 \n"
+ " prfm pldl1keep, [%[mbuf]] \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1]),
+ [mbuf] "=&r"(mbuf)
: [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op),
[gw] "r"(set_gw), [pong] "r"(ws_pair->getwrk_op));
#else
@@ -156,12 +180,26 @@ cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws,
gw.u64[0] = plt_read64(ws->tag_op);
gw.u64[1] = plt_read64(ws->wqp_op);
plt_write64(set_gw, ws_pair->getwrk_op);
+ mbuf = (uint64_t)((char *)gw.u64[1] - sizeof(struct rte_mbuf));
#endif
gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
(gw.u64[0] & (0x3FFull << 36)) << 4 |
(gw.u64[0] & 0xffffffff);
+ if (CNXK_TT_FROM_EVENT(gw.u64[0]) != SSO_TT_EMPTY) {
+ if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+ RTE_EVENT_TYPE_ETHDEV) {
+ uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]);
+
+ gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]);
+ cn9k_wqe_to_mbuf(gw.u64[1], mbuf, port,
+ gw.u64[0] & 0xFFFFF, flags,
+ lookup_mem);
+ gw.u64[1] = mbuf;
+ }
+ }
+
ev->event = gw.u64[0];
ev->u64 = gw.u64[1];
@@ -169,16 +207,21 @@ cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws,
}
static __rte_always_inline uint16_t
-cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev)
+cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev,
+ const uint32_t flags, const void *const lookup_mem)
{
union {
__uint128_t get_work;
uint64_t u64[2];
} gw;
+ uint64_t mbuf;
plt_write64(BIT_ULL(16) | /* wait for work. */
1, /* Use Mask set 0. */
ws->getwrk_op);
+
+ if (flags & NIX_RX_OFFLOAD_PTYPE_F)
+ rte_prefetch_non_temporal(lookup_mem);
#ifdef RTE_ARCH_ARM64
asm volatile(PLT_CPU_FEATURE_PREAMBLE
" ldr %[tag], [%[tag_loc]] \n"
@@ -190,7 +233,10 @@ cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev)
" ldr %[wqp], [%[wqp_loc]] \n"
" tbnz %[tag], 63, rty%= \n"
"done%=: dmb ld \n"
- : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ " sub %[mbuf], %[wqp], #0x80 \n"
+ " prfm pldl1keep, [%[mbuf]] \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1]),
+ [mbuf] "=&r"(mbuf)
: [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op));
#else
gw.u64[0] = plt_read64(ws->tag_op);
@@ -198,12 +244,26 @@ cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev)
gw.u64[0] = plt_read64(ws->tag_op);
gw.u64[1] = plt_read64(ws->wqp_op);
+ mbuf = (uint64_t)((char *)gw.u64[1] - sizeof(struct rte_mbuf));
#endif
gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
(gw.u64[0] & (0x3FFull << 36)) << 4 |
(gw.u64[0] & 0xffffffff);
+ if (CNXK_TT_FROM_EVENT(gw.u64[0]) != SSO_TT_EMPTY) {
+ if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+ RTE_EVENT_TYPE_ETHDEV) {
+ uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]);
+
+ gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]);
+ cn9k_wqe_to_mbuf(gw.u64[1], mbuf, port,
+ gw.u64[0] & 0xFFFFF, flags,
+ lookup_mem);
+ gw.u64[1] = mbuf;
+ }
+ }
+
ev->event = gw.u64[0];
ev->u64 = gw.u64[1];
@@ -218,6 +278,7 @@ cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
__uint128_t get_work;
uint64_t u64[2];
} gw;
+ uint64_t mbuf;
#ifdef RTE_ARCH_ARM64
asm volatile(PLT_CPU_FEATURE_PREAMBLE
@@ -230,7 +291,9 @@ cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
" ldr %[wqp], [%[wqp_loc]] \n"
" tbnz %[tag], 63, rty%= \n"
"done%=: dmb ld \n"
- : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ " sub %[mbuf], %[wqp], #0x80 \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1]),
+ [mbuf] "=&r"(mbuf)
: [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op));
#else
gw.u64[0] = plt_read64(ws->tag_op);
@@ -238,12 +301,25 @@ cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
gw.u64[0] = plt_read64(ws->tag_op);
gw.u64[1] = plt_read64(ws->wqp_op);
+ mbuf = (uint64_t)((char *)gw.u64[1] - sizeof(struct rte_mbuf));
#endif
gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
(gw.u64[0] & (0x3FFull << 36)) << 4 |
(gw.u64[0] & 0xffffffff);
+ if (CNXK_TT_FROM_EVENT(gw.u64[0]) != SSO_TT_EMPTY) {
+ if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+ RTE_EVENT_TYPE_ETHDEV) {
+ uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]);
+
+ gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]);
+ cn9k_wqe_to_mbuf(gw.u64[1], mbuf, port,
+ gw.u64[0] & 0xFFFFF, 0, NULL);
+ gw.u64[1] = mbuf;
+ }
+ }
+
ev->event = gw.u64[0];
ev->u64 = gw.u64[1];
@@ -274,28 +350,54 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
-uint16_t __rte_hot cn9k_sso_hws_deq(void *port, struct rte_event *ev,
- uint64_t timeout_ticks);
-uint16_t __rte_hot cn9k_sso_hws_deq_burst(void *port, struct rte_event ev[],
- uint16_t nb_events,
- uint64_t timeout_ticks);
-uint16_t __rte_hot cn9k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
- uint64_t timeout_ticks);
-uint16_t __rte_hot cn9k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
- uint16_t nb_events,
- uint64_t timeout_ticks);
-
-uint16_t __rte_hot cn9k_sso_hws_dual_deq(void *port, struct rte_event *ev,
- uint64_t timeout_ticks);
-uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst(void *port,
- struct rte_event ev[],
- uint16_t nb_events,
- uint64_t timeout_ticks);
-uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq(void *port, struct rte_event *ev,
- uint64_t timeout_ticks);
-uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_burst(void *port,
- struct rte_event ev[],
- uint16_t nb_events,
- uint64_t timeout_ticks);
+#define R(name, f3, f2, f1, f0, flags) \
+ uint16_t __rte_hot cn9k_sso_hws_deq_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_deq_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_tmo_deq_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_tmo_deq_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_deq_seg_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_deq_seg_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_tmo_deq_seg_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_tmo_deq_seg_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks);
+
+NIX_RX_FASTPATH_MODES
+#undef R
+
+#define R(name, f3, f2, f1, f0, flags) \
+ uint16_t __rte_hot cn9k_sso_hws_dual_deq_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_seg_##name( \
+ void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+ uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_seg_burst_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events, \
+ uint64_t timeout_ticks);
+
+NIX_RX_FASTPATH_MODES
+#undef R
#endif
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index b40e39397..533ad853a 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -8,6 +8,14 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
subdir_done()
endif
+extra_flags = ['-Wno-strict-aliasing']
+foreach flag: extra_flags
+ if cc.has_argument(flag)
+ cflags += flag
+ endif
+endforeach
+
sources = files('cn10k_worker.c',
'cn10k_eventdev.c',
'cn9k_worker.c',
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH 35/36] event/cnxk: add Tx adapter support
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (33 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 34/36] event/cnxk: add Rx adapter fastpath ops pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-03-06 16:29 ` [dpdk-dev] [PATCH 36/36] event/cnxk: add Tx adapter fastpath ops pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add support for event eth Tx adapter.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 90 +++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 117 +++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 22 ++++-
drivers/event/cnxk/cnxk_eventdev_adptr.c | 106 ++++++++++++++++++++
5 files changed, 335 insertions(+), 4 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index abab7f742..0f916ff5c 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,9 @@ Features of the OCTEON CNXK SSO PMD are:
- HW managed packets enqueued from ethdev to eventdev exposed through event eth
RX adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
-- Full Rx offload support defined through ethdev queue configuration.
+- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+ capability while maintaining receive packet order.
+- Full Rx/Tx offload support defined through ethdev queue configuration.
Prerequisites and Compilation procedure
---------------------------------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 70c6fedae..3662fd720 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -243,6 +243,39 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static int
+cn10k_sso_updt_tx_adptr_data(const struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int i;
+
+ if (dev->tx_adptr_data == NULL)
+ return 0;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ struct cn10k_sso_hws *ws = event_dev->data->ports[i];
+ void *ws_cookie;
+
+ ws_cookie = cnxk_sso_hws_get_cookie(ws);
+ ws_cookie = rte_realloc_socket(
+ ws_cookie,
+ sizeof(struct cnxk_sso_hws_cookie) +
+ sizeof(struct cn10k_sso_hws) +
+ (sizeof(uint64_t) * (dev->max_port_id + 1) *
+ RTE_MAX_QUEUES_PER_PORT),
+ RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+ if (ws_cookie == NULL)
+ return -ENOMEM;
+ ws = RTE_PTR_ADD(ws_cookie, sizeof(struct cnxk_sso_hws_cookie));
+ memcpy(&ws->tx_adptr_data, dev->tx_adptr_data,
+ sizeof(uint64_t) * (dev->max_port_id + 1) *
+ RTE_MAX_QUEUES_PER_PORT);
+ event_dev->data->ports[i] = ws;
+ }
+	rte_mb();
+
+	return 0;
+}
+
static void
cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
{
@@ -482,6 +515,10 @@ cn10k_sso_start(struct rte_eventdev *event_dev)
{
int rc;
+ rc = cn10k_sso_updt_tx_adptr_data(event_dev);
+ if (rc < 0)
+ return rc;
+
rc = cnxk_sso_start(event_dev, cn10k_sso_hws_reset,
cn10k_sso_hws_flush_events);
if (rc < 0)
@@ -580,6 +617,55 @@ cn10k_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
return cnxk_sso_rx_adapter_queue_del(event_dev, eth_dev, rx_queue_id);
}
+static int
+cn10k_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
+ const struct rte_eth_dev *eth_dev, uint32_t *caps)
+{
+ int ret;
+
+ RTE_SET_USED(dev);
+	ret = strncmp(eth_dev->device->driver->name, "net_cn10k", 9);
+ if (ret)
+ *caps = 0;
+ else
+ *caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT;
+
+ return 0;
+}
+
+static int
+cn10k_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t tx_queue_id)
+{
+ int rc;
+
+ RTE_SET_USED(id);
+ rc = cnxk_sso_tx_adapter_queue_add(event_dev, eth_dev, tx_queue_id);
+ if (rc < 0)
+ return rc;
+ rc = cn10k_sso_updt_tx_adptr_data(event_dev);
+ if (rc < 0)
+ return rc;
+ cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+
+ return 0;
+}
+
+static int
+cn10k_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t tx_queue_id)
+{
+ int rc;
+
+ RTE_SET_USED(id);
+ rc = cnxk_sso_tx_adapter_queue_del(event_dev, eth_dev, tx_queue_id);
+ if (rc < 0)
+ return rc;
+ return cn10k_sso_updt_tx_adptr_data(event_dev);
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -599,6 +685,10 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.eth_rx_adapter_start = cnxk_sso_rx_adapter_start,
.eth_rx_adapter_stop = cnxk_sso_rx_adapter_stop,
+ .eth_tx_adapter_caps_get = cn10k_sso_tx_adapter_caps_get,
+ .eth_tx_adapter_queue_add = cn10k_sso_tx_adapter_queue_add,
+ .eth_tx_adapter_queue_del = cn10k_sso_tx_adapter_queue_del,
+
.timer_adapter_caps_get = cnxk_tim_caps_get,
.dump = cnxk_sso_dump,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index e4383dca1..33b3b6237 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -248,6 +248,66 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static int
+cn9k_sso_updt_tx_adptr_data(const struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int i;
+
+ if (dev->tx_adptr_data == NULL)
+ return 0;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ if (dev->dual_ws) {
+ struct cn9k_sso_hws_dual *dws =
+ event_dev->data->ports[i];
+ void *ws_cookie;
+
+ ws_cookie = cnxk_sso_hws_get_cookie(dws);
+ ws_cookie = rte_realloc_socket(
+ ws_cookie,
+ sizeof(struct cnxk_sso_hws_cookie) +
+ sizeof(struct cn9k_sso_hws_dual) +
+ (sizeof(uint64_t) *
+ (dev->max_port_id + 1) *
+ RTE_MAX_QUEUES_PER_PORT),
+ RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+ if (ws_cookie == NULL)
+ return -ENOMEM;
+ dws = RTE_PTR_ADD(ws_cookie,
+ sizeof(struct cnxk_sso_hws_cookie));
+ memcpy(&dws->tx_adptr_data, dev->tx_adptr_data,
+ sizeof(uint64_t) * (dev->max_port_id + 1) *
+ RTE_MAX_QUEUES_PER_PORT);
+ event_dev->data->ports[i] = dws;
+ } else {
+ struct cn9k_sso_hws *ws = event_dev->data->ports[i];
+ void *ws_cookie;
+
+ ws_cookie = cnxk_sso_hws_get_cookie(ws);
+ ws_cookie = rte_realloc_socket(
+ ws_cookie,
+ sizeof(struct cnxk_sso_hws_cookie) +
+					sizeof(struct cn9k_sso_hws) +
+ (sizeof(uint64_t) *
+ (dev->max_port_id + 1) *
+ RTE_MAX_QUEUES_PER_PORT),
+ RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+ if (ws_cookie == NULL)
+ return -ENOMEM;
+ ws = RTE_PTR_ADD(ws_cookie,
+ sizeof(struct cnxk_sso_hws_cookie));
+ memcpy(&ws->tx_adptr_data, dev->tx_adptr_data,
+ sizeof(uint64_t) * (dev->max_port_id + 1) *
+ RTE_MAX_QUEUES_PER_PORT);
+ event_dev->data->ports[i] = ws;
+ }
+ }
+ rte_mb();
+
+ return 0;
+}
+
static void
cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
{
@@ -683,6 +743,10 @@ cn9k_sso_start(struct rte_eventdev *event_dev)
{
int rc;
+ rc = cn9k_sso_updt_tx_adptr_data(event_dev);
+ if (rc < 0)
+ return rc;
+
rc = cnxk_sso_start(event_dev, cn9k_sso_hws_reset,
cn9k_sso_hws_flush_events);
if (rc < 0)
@@ -787,6 +851,55 @@ cn9k_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
return cnxk_sso_rx_adapter_queue_del(event_dev, eth_dev, rx_queue_id);
}
+static int
+cn9k_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
+ const struct rte_eth_dev *eth_dev, uint32_t *caps)
+{
+ int ret;
+
+ RTE_SET_USED(dev);
+ ret = strncmp(eth_dev->device->driver->name, "net_cn9k", 8);
+ if (ret)
+ *caps = 0;
+ else
+ *caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT;
+
+ return 0;
+}
+
+static int
+cn9k_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t tx_queue_id)
+{
+ int rc;
+
+ RTE_SET_USED(id);
+ rc = cnxk_sso_tx_adapter_queue_add(event_dev, eth_dev, tx_queue_id);
+ if (rc < 0)
+ return rc;
+ rc = cn9k_sso_updt_tx_adptr_data(event_dev);
+ if (rc < 0)
+ return rc;
+ cn9k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+
+ return 0;
+}
+
+static int
+cn9k_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t tx_queue_id)
+{
+ int rc;
+
+ RTE_SET_USED(id);
+ rc = cnxk_sso_tx_adapter_queue_del(event_dev, eth_dev, tx_queue_id);
+ if (rc < 0)
+ return rc;
+ return cn9k_sso_updt_tx_adptr_data(event_dev);
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -806,6 +919,10 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.eth_rx_adapter_start = cnxk_sso_rx_adapter_start,
.eth_rx_adapter_stop = cnxk_sso_rx_adapter_stop,
+ .eth_tx_adapter_caps_get = cn9k_sso_tx_adapter_caps_get,
+ .eth_tx_adapter_queue_add = cn9k_sso_tx_adapter_queue_add,
+ .eth_tx_adapter_queue_del = cn9k_sso_tx_adapter_queue_del,
+
.timer_adapter_caps_get = cnxk_tim_caps_get,
.dump = cnxk_sso_dump,
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 9c3331f7e..59c1af98e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -8,6 +8,7 @@
#include <rte_devargs.h>
#include <rte_ethdev.h>
#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
#include <rte_kvargs.h>
#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
@@ -84,9 +85,12 @@ struct cnxk_sso_evdev {
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
uint64_t rx_offloads;
+ uint64_t tx_offloads;
uint64_t adptr_xae_cnt;
uint16_t rx_adptr_pool_cnt;
uint64_t *rx_adptr_pools;
+ uint64_t *tx_adptr_data;
+ uint16_t max_port_id;
uint16_t tim_adptr_ring_cnt;
uint16_t *timer_adptr_rings;
uint64_t *timer_adptr_sz;
@@ -121,8 +125,10 @@ struct cn10k_sso_hws {
uint64_t xaq_lmt __rte_cache_aligned;
uint64_t *fc_mem;
uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
- uint64_t base;
+ /* Tx Fastpath data */
+ uint64_t base __rte_cache_aligned;
uintptr_t lmt_base;
+ uint8_t tx_adptr_data[];
} __rte_cache_aligned;
/* CN9K HWS ops */
@@ -145,7 +151,9 @@ struct cn9k_sso_hws {
uint64_t xaq_lmt __rte_cache_aligned;
uint64_t *fc_mem;
uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
- uint64_t base;
+ /* Tx Fastpath data */
+ uint64_t base __rte_cache_aligned;
+ uint8_t tx_adptr_data[];
} __rte_cache_aligned;
struct cn9k_sso_hws_state {
@@ -163,7 +171,9 @@ struct cn9k_sso_hws_dual {
uint64_t xaq_lmt __rte_cache_aligned;
uint64_t *fc_mem;
uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
- uint64_t base[2];
+ /* Tx Fastpath data */
+ uint64_t base[2] __rte_cache_aligned;
+ uint8_t tx_adptr_data[];
} __rte_cache_aligned;
struct cnxk_sso_hws_cookie {
@@ -255,5 +265,11 @@ int cnxk_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
const struct rte_eth_dev *eth_dev);
int cnxk_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
const struct rte_eth_dev *eth_dev);
+int cnxk_sso_tx_adapter_queue_add(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t tx_queue_id);
+int cnxk_sso_tx_adapter_queue_del(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t tx_queue_id);
#endif /* __CNXK_EVENTDEV_H__ */
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index e06033117..af44f63f9 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -5,6 +5,8 @@
#include "cnxk_ethdev.h"
#include "cnxk_eventdev.h"
+#define CNXK_SSO_SQB_LIMIT (0x180)
+
void
cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
uint32_t event_type)
@@ -222,3 +224,107 @@ cnxk_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
return 0;
}
+
+static int
+cnxk_sso_sqb_aura_limit_edit(struct roc_nix_sq *sq, uint16_t nb_sqb_bufs)
+{
+ uint16_t sqb_limit;
+
+ sqb_limit = RTE_MIN(nb_sqb_bufs, sq->nb_sqb_bufs);
+ return roc_npa_aura_limit_modify(sq->aura_handle, sqb_limit);
+}
+
+static int
+cnxk_sso_updt_tx_queue_data(const struct rte_eventdev *event_dev,
+ uint16_t eth_port_id, uint16_t tx_queue_id,
+ void *txq)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t max_port_id = dev->max_port_id;
+ uint64_t *txq_data = dev->tx_adptr_data;
+
+ if (txq_data == NULL || eth_port_id > max_port_id) {
+ max_port_id = RTE_MAX(max_port_id, eth_port_id);
+ txq_data = rte_realloc_socket(
+ txq_data,
+ (sizeof(uint64_t) * (max_port_id + 1) *
+ RTE_MAX_QUEUES_PER_PORT),
+ RTE_CACHE_LINE_SIZE, event_dev->data->socket_id);
+ if (txq_data == NULL)
+ return -ENOMEM;
+ }
+
+ ((uint64_t(*)[RTE_MAX_QUEUES_PER_PORT])
+ txq_data)[eth_port_id][tx_queue_id] = (uint64_t)txq;
+ dev->max_port_id = max_port_id;
+ dev->tx_adptr_data = txq_data;
+ return 0;
+}
+
+int
+cnxk_sso_tx_adapter_queue_add(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t tx_queue_id)
+{
+ struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_nix_sq *sq;
+ int i, ret;
+ void *txq;
+
+ if (tx_queue_id < 0) {
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ txq = eth_dev->data->tx_queues[i];
+ sq = &cnxk_eth_dev->sqs[i];
+ cnxk_sso_sqb_aura_limit_edit(sq, CNXK_SSO_SQB_LIMIT);
+ ret = cnxk_sso_updt_tx_queue_data(
+ event_dev, eth_dev->data->port_id, i, txq);
+ if (ret < 0)
+ return ret;
+ }
+ } else {
+ txq = eth_dev->data->tx_queues[tx_queue_id];
+ sq = &cnxk_eth_dev->sqs[tx_queue_id];
+ cnxk_sso_sqb_aura_limit_edit(sq, CNXK_SSO_SQB_LIMIT);
+ ret = cnxk_sso_updt_tx_queue_data(
+ event_dev, eth_dev->data->port_id, tx_queue_id, txq);
+ if (ret < 0)
+ return ret;
+ }
+
+ dev->tx_offloads |= cnxk_eth_dev->tx_offload_flags;
+
+ return 0;
+}
+
+int
+cnxk_sso_tx_adapter_queue_del(const struct rte_eventdev *event_dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t tx_queue_id)
+{
+ struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
+ struct roc_nix_sq *sq;
+ int i, ret;
+
+ RTE_SET_USED(event_dev);
+ if (tx_queue_id < 0) {
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ sq = &cnxk_eth_dev->sqs[i];
+ cnxk_sso_sqb_aura_limit_edit(sq, sq->nb_sqb_bufs);
+ ret = cnxk_sso_updt_tx_queue_data(
+ event_dev, eth_dev->data->port_id, i,
+ NULL);
+ if (ret < 0)
+ return ret;
+ }
+ } else {
+ sq = &cnxk_eth_dev->sqs[tx_queue_id];
+ cnxk_sso_sqb_aura_limit_edit(sq, sq->nb_sqb_bufs);
+ ret = cnxk_sso_updt_tx_queue_data(
+ event_dev, eth_dev->data->port_id, tx_queue_id, NULL);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
--
2.17.1
* [dpdk-dev] [PATCH 36/36] event/cnxk: add Tx adapter fastpath ops
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (34 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 35/36] event/cnxk: add Tx adapter support pbhagavatula
@ 2021-03-06 16:29 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
36 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-03-06 16:29 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: ndabilpuram, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add support for event eth Tx adapter fastpath operations.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 35 ++++++++++++
drivers/event/cnxk/cn10k_worker.c | 32 +++++++++++
drivers/event/cnxk/cn10k_worker.h | 67 ++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 76 +++++++++++++++++++++++++
drivers/event/cnxk/cn9k_worker.c | 60 ++++++++++++++++++++
drivers/event/cnxk/cn9k_worker.h | 87 +++++++++++++++++++++++++++++
6 files changed, 357 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 3662fd720..817dcc7cc 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -336,6 +336,22 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
#undef R
};
+ /* Tx modes */
+ const event_tx_adapter_enqueue sso_hws_tx_adptr_enq[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name,
+ NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ const event_tx_adapter_enqueue sso_hws_tx_adptr_enq_seg[2][2][2][2][2] =
+ {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,
+ NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
event_dev->enqueue = cn10k_sso_hws_enq;
event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
@@ -395,6 +411,25 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
}
}
+
+ if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
+ /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
+ event_dev->txa_enqueue = sso_hws_tx_adptr_enq_seg
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+ } else {
+ event_dev->txa_enqueue = sso_hws_tx_adptr_enq
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+ }
+
+ event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
}
static void
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 46f72cf20..ab149c5e3 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -175,3 +175,35 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
NIX_RX_FASTPATH_MODES
#undef R
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events) \
+ { \
+ struct cn10k_sso_hws *ws = port; \
+ uint64_t cmd[sz]; \
+ \
+ RTE_SET_USED(nb_events); \
+ return cn10k_sso_hws_event_tx( \
+ ws, &ev[0], cmd, \
+ (const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) & \
+ ws->tx_adptr_data, \
+ flags); \
+ } \
+ \
+ uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events) \
+ { \
+ uint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2]; \
+ struct cn10k_sso_hws *ws = port; \
+ \
+ RTE_SET_USED(nb_events); \
+ return cn10k_sso_hws_event_tx( \
+ ws, &ev[0], cmd, \
+ (const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) & \
+ ws->tx_adptr_data, \
+ (flags) | NIX_TX_MULTI_SEG_F); \
+ }
+
+NIX_TX_FASTPATH_MODES
+#undef T
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 9521a5c94..ebfd5dee9 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -11,6 +11,7 @@
#include "cn10k_ethdev.h"
#include "cn10k_rx.h"
+#include "cn10k_tx.h"
/* SSO Operations */
@@ -239,4 +240,70 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
NIX_RX_FASTPATH_MODES
#undef R
+static __rte_always_inline const struct cn10k_eth_txq *
+cn10k_sso_hws_xtract_meta(struct rte_mbuf *m,
+ const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT])
+{
+ return (const struct cn10k_eth_txq *)
+ txq_data[m->port][rte_event_eth_tx_adapter_txq_get(m)];
+}
+
+static __rte_always_inline uint16_t
+cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
+ uint64_t *cmd,
+ const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
+ const uint32_t flags)
+{
+ const struct cn10k_eth_txq *txq;
+ struct rte_mbuf *m = ev->mbuf;
+ uint16_t ref_cnt = m->refcnt;
+ uintptr_t lmt_addr;
+ uint16_t lmt_id;
+ uintptr_t pa;
+
+ lmt_addr = ws->lmt_base;
+ ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+ txq = cn10k_sso_hws_xtract_meta(m, txq_data);
+ cn10k_nix_tx_skeleton(txq, cmd, flags);
+ /* Perform header writes before barrier for TSO */
+ if (flags & NIX_TX_OFFLOAD_TSO_F)
+ cn10k_nix_xmit_prepare_tso(m, flags);
+
+ cn10k_nix_xmit_prepare(m, cmd, lmt_addr, flags);
+ if (flags & NIX_TX_MULTI_SEG_F) {
+ const uint16_t segdw =
+ cn10k_nix_prepare_mseg(m, (uint64_t *)lmt_addr, flags);
+ pa = txq->io_addr | ((segdw - 1) << 4);
+ } else {
+ pa = txq->io_addr | (cn10k_nix_tx_ext_subs(flags) + 1) << 4;
+ }
+ if (!ev->sched_type)
+ cnxk_sso_hws_head_wait(ws->base + SSOW_LF_GWS_TAG);
+
+ roc_lmt_submit_steorl(lmt_id, pa);
+
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+ if (ref_cnt > 1)
+ return 1;
+ }
+
+ cnxk_sso_hws_swtag_flush(ws->base + SSOW_LF_GWS_TAG,
+ ws->base + SSOW_LF_GWS_OP_SWTAG_FLUSH);
+
+ return 1;
+}
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events); \
+ uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events); \
+ uint16_t __rte_hot cn10k_sso_hws_dual_tx_adptr_enq_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events); \
+ uint16_t __rte_hot cn10k_sso_hws_dual_tx_adptr_enq_seg_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events);
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
#endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 33b3b6237..39e5e516d 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -427,6 +427,38 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
#undef R
};
+ /* Tx modes */
+ const event_tx_adapter_enqueue sso_hws_tx_adptr_enq[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_##name,
+ NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ const event_tx_adapter_enqueue sso_hws_tx_adptr_enq_seg[2][2][2][2][2] =
+ {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_seg_##name,
+ NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ const event_tx_adapter_enqueue
+ sso_hws_dual_tx_adptr_enq[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_##name,
+ NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ const event_tx_adapter_enqueue
+ sso_hws_dual_tx_adptr_enq_seg[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_seg_##name,
+ NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
event_dev->enqueue = cn9k_sso_hws_enq;
event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
@@ -487,6 +519,23 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
}
}
+ if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
+ /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
+ event_dev->txa_enqueue = sso_hws_tx_adptr_enq_seg
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+ } else {
+ event_dev->txa_enqueue = sso_hws_tx_adptr_enq
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+ }
+
if (dev->dual_ws) {
event_dev->enqueue = cn9k_sso_hws_dual_enq;
event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
@@ -567,8 +616,35 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
NIX_RX_OFFLOAD_RSS_F)];
}
}
+
+ if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
+ /* [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM]
+ */
+ event_dev->txa_enqueue = sso_hws_dual_tx_adptr_enq_seg
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+ [!!(dev->tx_offloads &
+ NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+ [!!(dev->tx_offloads &
+ NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+ [!!(dev->tx_offloads &
+ NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+ [!!(dev->tx_offloads &
+ NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+ } else {
+ event_dev->txa_enqueue = sso_hws_dual_tx_adptr_enq
+ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+ [!!(dev->tx_offloads &
+ NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+ [!!(dev->tx_offloads &
+ NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+ [!!(dev->tx_offloads &
+ NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+ [!!(dev->tx_offloads &
+ NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+ }
}
+ event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
rte_mb();
}
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index fb572c7c9..b19078618 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -376,3 +376,63 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
NIX_RX_FASTPATH_MODES
#undef R
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events) \
+ { \
+ struct cn9k_sso_hws *ws = port; \
+ uint64_t cmd[sz]; \
+ \
+ RTE_SET_USED(nb_events); \
+ return cn9k_sso_hws_event_tx( \
+ ws->base, &ev[0], cmd, \
+ (const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) & \
+ ws->tx_adptr_data, \
+ flags); \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events) \
+ { \
+ uint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2]; \
+ struct cn9k_sso_hws *ws = port; \
+ \
+ RTE_SET_USED(nb_events); \
+ return cn9k_sso_hws_event_tx( \
+ ws->base, &ev[0], cmd, \
+ (const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) & \
+ ws->tx_adptr_data, \
+ (flags) | NIX_TX_MULTI_SEG_F); \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events) \
+ { \
+ struct cn9k_sso_hws_dual *ws = port; \
+ uint64_t cmd[sz]; \
+ \
+ RTE_SET_USED(nb_events); \
+ return cn9k_sso_hws_event_tx( \
+ ws->base[!ws->vws], &ev[0], cmd, \
+ (const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) & \
+ ws->tx_adptr_data, \
+ flags); \
+ } \
+ \
+ uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_seg_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events) \
+ { \
+ uint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2]; \
+ struct cn9k_sso_hws_dual *ws = port; \
+ \
+ RTE_SET_USED(nb_events); \
+ return cn9k_sso_hws_event_tx( \
+ ws->base[!ws->vws], &ev[0], cmd, \
+ (const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) & \
+ ws->tx_adptr_data, \
+ (flags) | NIX_TX_MULTI_SEG_F); \
+ }
+
+NIX_TX_FASTPATH_MODES
+#undef T
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index bbdca3c95..382910a25 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -11,6 +11,7 @@
#include "cn9k_ethdev.h"
#include "cn9k_rx.h"
+#include "cn9k_tx.h"
/* SSO Operations */
@@ -400,4 +401,90 @@ NIX_RX_FASTPATH_MODES
NIX_RX_FASTPATH_MODES
#undef R
+static __rte_always_inline const struct cn9k_eth_txq *
+cn9k_sso_hws_xtract_meta(struct rte_mbuf *m,
+ const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT])
+{
+ return (const struct cn9k_eth_txq *)
+ txq_data[m->port][rte_event_eth_tx_adapter_txq_get(m)];
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_prepare_pkt(const struct cn9k_eth_txq *txq, struct rte_mbuf *m,
+ uint64_t *cmd, const uint32_t flags)
+{
+ roc_lmt_mov(cmd, txq->cmd, cn9k_nix_tx_ext_subs(flags));
+ cn9k_nix_xmit_prepare(m, cmd, flags);
+}
+
+static __rte_always_inline uint16_t
+cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
+ const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
+ const uint32_t flags)
+{
+ struct rte_mbuf *m = ev->mbuf;
+ const struct cn9k_eth_txq *txq;
+ uint16_t ref_cnt = m->refcnt;
+
+ /* Perform header writes before barrier for TSO */
+ cn9k_nix_xmit_prepare_tso(m, flags);
+ /* Lets commit any changes in the packet here in case when
+ * fast free is set as no further changes will be made to mbuf.
+ * In case of fast free is not set, both cn9k_nix_prepare_mseg()
+ * and cn9k_nix_xmit_prepare() has a barrier after refcnt update.
+ */
+ if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
+ rte_io_wmb();
+ txq = cn9k_sso_hws_xtract_meta(m, txq_data);
+ cn9k_sso_hws_prepare_pkt(txq, m, cmd, flags);
+
+ if (flags & NIX_TX_MULTI_SEG_F) {
+ const uint16_t segdw = cn9k_nix_prepare_mseg(m, cmd, flags);
+ if (!ev->sched_type) {
+ cn9k_nix_xmit_mseg_prep_lmt(cmd, txq->lmt_addr, segdw);
+ cnxk_sso_hws_head_wait(base + SSOW_LF_GWS_TAG);
+ if (cn9k_nix_xmit_submit_lmt(txq->io_addr) == 0)
+ cn9k_nix_xmit_mseg_one(cmd, txq->lmt_addr,
+ txq->io_addr, segdw);
+ } else {
+ cn9k_nix_xmit_mseg_one(cmd, txq->lmt_addr, txq->io_addr,
+ segdw);
+ }
+ } else {
+ if (!ev->sched_type) {
+ cn9k_nix_xmit_prep_lmt(cmd, txq->lmt_addr, flags);
+ cnxk_sso_hws_head_wait(base + SSOW_LF_GWS_TAG);
+ if (cn9k_nix_xmit_submit_lmt(txq->io_addr) == 0)
+ cn9k_nix_xmit_one(cmd, txq->lmt_addr,
+ txq->io_addr, flags);
+ } else {
+ cn9k_nix_xmit_one(cmd, txq->lmt_addr, txq->io_addr,
+ flags);
+ }
+ }
+
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+ if (ref_cnt > 1)
+ return 1;
+ }
+
+ cnxk_sso_hws_swtag_flush(base + SSOW_LF_GWS_TAG,
+ base + SSOW_LF_GWS_OP_SWTAG_FLUSH);
+
+ return 1;
+}
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events); \
+ uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events); \
+ uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events); \
+ uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_seg_##name( \
+ void *port, struct rte_event ev[], uint16_t nb_events);
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
#endif
--
2.17.1
* [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver
2021-03-06 16:29 [dpdk-dev] [PATCH 00/36] Marvell CNXK Event device Driver pbhagavatula
` (35 preceding siblings ...)
2021-03-06 16:29 ` [dpdk-dev] [PATCH 36/36] event/cnxk: add Tx adapter fastpath ops pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 01/33] event/cnxk: add build infra and device setup pbhagavatula
` (34 more replies)
36 siblings, 35 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
This patchset adds support for the Marvell CN106XX SoC based on the 'common/cnxk'
driver. In the future, CN9K (a.k.a. octeontx2) will also be supported by the same
driver when the code is ready, and 'event/octeontx2' will be deprecated.
v2 Changes:
- Split Rx/Tx adapter into a separate patch set to remove the dependency on net/cnxk.
- Add missing xStats patch.
- Fix incorrect head wait operation.
Pavan Nikhilesh (16):
event/cnxk: add build infra and device setup
event/cnxk: add platform specific device probe
event/cnxk: add common configuration validation
event/cnxk: allocate event inflight buffers
event/cnxk: add devargs to configure getwork mode
event/cnxk: add SSO HW device operations
event/cnxk: add SSO GWS fastpath enqueue functions
event/cnxk: add SSO GWS dequeue fastpath functions
event/cnxk: add SSO selftest and dump
event/cnxk: add event port and queue xstats
event/cnxk: add devargs to disable NPA
event/cnxk: allow adapters to resize inflights
event/cnxk: add TIM bucket operations
event/cnxk: add timer arm routine
event/cnxk: add timer arm timeout burst
event/cnxk: add timer cancel function
Shijith Thotton (17):
event/cnxk: add device capabilities function
event/cnxk: add platform specific device config
event/cnxk: add event queue config functions
event/cnxk: add devargs for inflight buffer count
event/cnxk: add devargs to control SSO HWGRP QoS
event/cnxk: add port config functions
event/cnxk: add event port link and unlink
event/cnxk: add device start function
event/cnxk: add device stop and close functions
event/cnxk: support event timer
event/cnxk: add timer adapter capabilities
event/cnxk: create and free timer adapter
event/cnxk: add timer adapter info function
event/cnxk: add devargs for chunk size and rings
event/cnxk: add timer stats get and reset
event/cnxk: add timer adapter start and stop
event/cnxk: add devargs to control timer adapters
MAINTAINERS | 6 +
app/test/test_eventdev.c | 14 +
doc/guides/eventdevs/cnxk.rst | 162 ++
doc/guides/eventdevs/index.rst | 1 +
drivers/common/cnxk/roc_sso.c | 63 +
drivers/common/cnxk/roc_sso.h | 19 +
drivers/event/cnxk/cn10k_eventdev.c | 509 ++++++
drivers/event/cnxk/cn10k_worker.c | 115 ++
drivers/event/cnxk/cn10k_worker.h | 175 +++
drivers/event/cnxk/cn9k_eventdev.c | 578 +++++++
drivers/event/cnxk/cn9k_worker.c | 236 +++
drivers/event/cnxk/cn9k_worker.h | 297 ++++
drivers/event/cnxk/cnxk_eventdev.c | 647 ++++++++
drivers/event/cnxk/cnxk_eventdev.h | 253 +++
drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 +
drivers/event/cnxk/cnxk_eventdev_selftest.c | 1570 +++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev_stats.c | 289 ++++
drivers/event/cnxk/cnxk_tim_evdev.c | 538 +++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 275 ++++
drivers/event/cnxk/cnxk_tim_worker.c | 191 +++
drivers/event/cnxk/cnxk_tim_worker.h | 601 +++++++
drivers/event/cnxk/cnxk_worker.h | 101 ++
drivers/event/cnxk/meson.build | 23 +
drivers/event/cnxk/version.map | 3 +
drivers/event/meson.build | 1 +
25 files changed, 6734 insertions(+)
create mode 100644 doc/guides/eventdevs/cnxk.rst
create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
create mode 100644 drivers/event/cnxk/cn10k_worker.c
create mode 100644 drivers/event/cnxk/cn10k_worker.h
create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
create mode 100644 drivers/event/cnxk/cn9k_worker.c
create mode 100644 drivers/event/cnxk/cn9k_worker.h
create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev_selftest.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
create mode 100644 drivers/event/cnxk/cnxk_worker.h
create mode 100644 drivers/event/cnxk/meson.build
create mode 100644 drivers/event/cnxk/version.map
--
2.17.1
* [dpdk-dev] [PATCH v2 01/33] event/cnxk: add build infra and device setup
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-27 9:08 ` Kinsella, Ray
` (2 more replies)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 02/33] event/cnxk: add device capabilities function pbhagavatula
` (33 subsequent siblings)
34 siblings, 3 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Thomas Monjalon, Pavan Nikhilesh, Shijith Thotton,
Ray Kinsella, Neil Horman, Anatoly Burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add the meson build infrastructure along with the event device
SSO initialization and teardown functions.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
MAINTAINERS | 6 +++
doc/guides/eventdevs/cnxk.rst | 55 ++++++++++++++++++++++++
doc/guides/eventdevs/index.rst | 1 +
drivers/event/cnxk/cnxk_eventdev.c | 68 ++++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 39 +++++++++++++++++
drivers/event/cnxk/meson.build | 13 ++++++
drivers/event/cnxk/version.map | 3 ++
drivers/event/meson.build | 1 +
8 files changed, 186 insertions(+)
create mode 100644 doc/guides/eventdevs/cnxk.rst
create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
create mode 100644 drivers/event/cnxk/meson.build
create mode 100644 drivers/event/cnxk/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 44f3d322e..d1ae33d48 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1224,6 +1224,12 @@ M: Jerin Jacob <jerinj@marvell.com>
F: drivers/event/octeontx2/
F: doc/guides/eventdevs/octeontx2.rst
+Marvell OCTEON CNXK
+M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+M: Shijith Thotton <sthotton@marvell.com>
+F: drivers/event/cnxk/
+F: doc/guides/eventdevs/cnxk.rst
+
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Nipun Gupta <nipun.gupta@nxp.com>
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
new file mode 100644
index 000000000..e94225bd3
--- /dev/null
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -0,0 +1,55 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2021 Marvell International Ltd.
+
+OCTEON CNXK SSO Eventdev Driver
+===============================
+
+The SSO PMD (**librte_event_cnxk**) provides poll mode
+eventdev driver support for the inbuilt event device found in the
+**Marvell OCTEON CNXK** SoC family.
+
+More information about OCTEON CNXK SoC can be found at `Marvell Official Website
+<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
+
+Supported OCTEON CNXK SoCs
+--------------------------
+
+- CN9XX
+- CN10XX
+
+Features
+--------
+
+Features of the OCTEON CNXK SSO PMD are:
+
+- 256 Event queues
+- 26 (dual) and 52 (single) Event ports on CN10XX
+- 52 Event ports on CN9XX
+- HW event scheduler
+- Supports 1M flows per event queue
+- Flow based event pipelining
+- Flow pinning support in flow based event pipelining
+- Queue based event pipelining
+- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
+- Event scheduling QoS based on event queue priority
+- Open system with configurable amount of outstanding events limited only by
+ DRAM
+- HW accelerated dequeue timeout support to enable power management
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+ See :doc:`../platform/cnxk` for setup information.
+
+Debugging Options
+-----------------
+
+.. _table_octeon_cnxk_event_debug_options:
+
+.. table:: OCTEON CNXK event device debug options
+
+ +---+------------+-------------------------------------------------------+
+ | # | Component | EAL log command |
+ +===+============+=======================================================+
+ | 1 | SSO | --log-level='pmd\.event\.cnxk,8' |
+ +---+------------+-------------------------------------------------------+
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index 738788d9e..214302539 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -11,6 +11,7 @@ application through the eventdev API.
:maxdepth: 2
:numbered:
+ cnxk
dlb2
dpaa
dpaa2
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
new file mode 100644
index 000000000..b7f9c81bd
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+int
+cnxk_sso_init(struct rte_eventdev *event_dev)
+{
+ const struct rte_memzone *mz = NULL;
+ struct rte_pci_device *pci_dev;
+ struct cnxk_sso_evdev *dev;
+ int rc;
+
+ mz = rte_memzone_reserve(CNXK_SSO_MZ_NAME, sizeof(uint64_t),
+ SOCKET_ID_ANY, 0);
+ if (mz == NULL) {
+ plt_err("Failed to create eventdev memzone");
+ return -ENOMEM;
+ }
+
+ dev = cnxk_sso_pmd_priv(event_dev);
+ pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
+ dev->sso.pci_dev = pci_dev;
+
+ *(uint64_t *)mz->addr = (uint64_t)dev;
+
+ /* Initialize the base cnxk_dev object */
+ rc = roc_sso_dev_init(&dev->sso);
+ if (rc < 0) {
+ plt_err("Failed to initialize RoC SSO rc=%d", rc);
+ goto error;
+ }
+
+ dev->is_timeout_deq = 0;
+ dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+ dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
+ dev->max_num_events = -1;
+ dev->nb_event_queues = 0;
+ dev->nb_event_ports = 0;
+
+ return 0;
+
+error:
+ rte_memzone_free(mz);
+ return rc;
+}
+
+int
+cnxk_sso_fini(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ /* For secondary processes, nothing to be done */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ roc_sso_rsrc_fini(&dev->sso);
+ roc_sso_dev_fini(&dev->sso);
+
+ return 0;
+}
+
+int
+cnxk_sso_remove(struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_remove(pci_dev, cnxk_sso_fini);
+}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
new file mode 100644
index 000000000..148b327a1
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CNXK_EVENTDEV_H__
+#define __CNXK_EVENTDEV_H__
+
+#include <rte_pci.h>
+
+#include <eventdev_pmd_pci.h>
+
+#include "roc_api.h"
+
+#define USEC2NSEC(__us) ((__us)*1E3)
+
+#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+
+struct cnxk_sso_evdev {
+ struct roc_sso sso;
+ uint8_t is_timeout_deq;
+ uint8_t nb_event_queues;
+ uint8_t nb_event_ports;
+ uint32_t min_dequeue_timeout_ns;
+ uint32_t max_dequeue_timeout_ns;
+ int32_t max_num_events;
+} __rte_cache_aligned;
+
+static inline struct cnxk_sso_evdev *
+cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
+{
+ return event_dev->data->dev_private;
+}
+
+/* Common ops API. */
+int cnxk_sso_init(struct rte_eventdev *event_dev);
+int cnxk_sso_fini(struct rte_eventdev *event_dev);
+int cnxk_sso_remove(struct rte_pci_device *pci_dev);
+
+#endif /* __CNXK_EVENTDEV_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
new file mode 100644
index 000000000..e454dc863
--- /dev/null
+++ b/drivers/event/cnxk/meson.build
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2021 Marvell International Ltd.
+#
+
+if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on 64-bit Linux'
+ subdir_done()
+endif
+
+sources = files('cnxk_eventdev.c')
+
+deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
diff --git a/drivers/event/cnxk/version.map b/drivers/event/cnxk/version.map
new file mode 100644
index 000000000..ee80c5172
--- /dev/null
+++ b/drivers/event/cnxk/version.map
@@ -0,0 +1,3 @@
+INTERNAL {
+ local: *;
+};
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index 539c5aeb9..63d6b410b 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -6,6 +6,7 @@ if is_windows
endif
drivers = [
+ 'cnxk',
'dlb2',
'dpaa',
'dpaa2',
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* Re: [dpdk-dev] [PATCH v2 01/33] event/cnxk: add build infra and device setup
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 01/33] event/cnxk: add build infra and device setup pbhagavatula
@ 2021-04-27 9:08 ` Kinsella, Ray
2021-04-28 8:01 ` David Marchand
2021-04-29 9:05 ` Jerin Jacob
2 siblings, 0 replies; 185+ messages in thread
From: Kinsella, Ray @ 2021-04-27 9:08 UTC (permalink / raw)
To: pbhagavatula, jerinj, Thomas Monjalon, Shijith Thotton,
Neil Horman, Anatoly Burakov
Cc: dev
On 26/04/2021 18:44, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add meson build infra structure along with the event device
> SSO initialization and teardown functions.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> MAINTAINERS | 6 +++
> doc/guides/eventdevs/cnxk.rst | 55 ++++++++++++++++++++++++
> doc/guides/eventdevs/index.rst | 1 +
> drivers/event/cnxk/cnxk_eventdev.c | 68 ++++++++++++++++++++++++++++++
> drivers/event/cnxk/cnxk_eventdev.h | 39 +++++++++++++++++
> drivers/event/cnxk/meson.build | 13 ++++++
> drivers/event/cnxk/version.map | 3 ++
> drivers/event/meson.build | 1 +
> 8 files changed, 186 insertions(+)
> create mode 100644 doc/guides/eventdevs/cnxk.rst
> create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
> create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
> create mode 100644 drivers/event/cnxk/meson.build
> create mode 100644 drivers/event/cnxk/version.map
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [flat|nested] 185+ messages in thread
* Re: [dpdk-dev] [PATCH v2 01/33] event/cnxk: add build infra and device setup
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 01/33] event/cnxk: add build infra and device setup pbhagavatula
2021-04-27 9:08 ` Kinsella, Ray
@ 2021-04-28 8:01 ` David Marchand
2021-04-29 9:05 ` Jerin Jacob
2 siblings, 0 replies; 185+ messages in thread
From: David Marchand @ 2021-04-28 8:01 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob Kollanukkaran, Thomas Monjalon, Shijith Thotton,
Ray Kinsella, Neil Horman, Anatoly Burakov, dev
Hello Pavan,
On Mon, Apr 26, 2021 at 7:45 PM <pbhagavatula@marvell.com> wrote:
> diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
> new file mode 100644
> index 000000000..e94225bd3
> --- /dev/null
> +++ b/doc/guides/eventdevs/cnxk.rst
> @@ -0,0 +1,55 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright(c) 2021 Marvell International Ltd.
> +
> +OCTEON CNXK SSO Eventdev Driver
> +==========================
I did not see a v2, so simply reporting this in case you are not aware.
Documentation generation fails (seen in GHA):
[2772/3339] Compiling C object 'examples/c590b3c@@dpdk-kni@exe/kni_main.c.o'.
[2773/3339] Compiling C object
'examples/c590b3c@@dpdk-ipv4_multicast@exe/ipv4_multicast_main.c.o'.
[2774/3339] Linking target examples/dpdk-ipv4_multicast.
[2775/3339] Generating html_guides with a custom command.
FAILED: doc/guides/html
/usr/bin/python3 ../buildtools/call-sphinx-build.py
/usr/bin/sphinx-build 21.05.0-rc1
/home/runner/work/dpdk/dpdk/doc/guides
/home/runner/work/dpdk/dpdk/build/doc/guides -W
Warning, treated as error:
/home/runner/work/dpdk/dpdk/doc/guides/eventdevs/cnxk.rst:5:Title
underline too short.
OCTEON CNXK SSO Eventdev Driver
==========================
[2776/3339] Compiling C object
'examples/c590b3c@@dpdk-l2fwd-event@exe/l2fwd-event_l2fwd_common.c.o'.
[2777/3339] Compiling C object
'examples/c590b3c@@dpdk-l2fwd-crypto@exe/l2fwd-crypto_main.c.o'.
--
David Marchand
^ permalink raw reply [flat|nested] 185+ messages in thread
* Re: [dpdk-dev] [PATCH v2 01/33] event/cnxk: add build infra and device setup
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 01/33] event/cnxk: add build infra and device setup pbhagavatula
2021-04-27 9:08 ` Kinsella, Ray
2021-04-28 8:01 ` David Marchand
@ 2021-04-29 9:05 ` Jerin Jacob
2 siblings, 0 replies; 185+ messages in thread
From: Jerin Jacob @ 2021-04-29 9:05 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob, Thomas Monjalon, Shijith Thotton, Ray Kinsella,
Neil Horman, Anatoly Burakov, dpdk-dev
On Mon, Apr 26, 2021 at 11:15 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add meson build infra structure along with the event device
s/infra structure/ infrastructure
> SSO initialization and teardown functions.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> new file mode 100644
> index 000000000..e94225bd3
> --- /dev/null
> +++ b/doc/guides/eventdevs/cnxk.rst
> @@ -0,0 +1,55 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright(c) 2021 Marvell International Ltd.
> +
> +OCTEON CNXK SSO Eventdev Driver
> +==========================
Doc error
log: /home/runner/work/dpdk/dpdk/doc/guides/eventdevs/cnxk.rst:5:Title
underline too short.
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 02/33] event/cnxk: add device capabilities function
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 01/33] event/cnxk: add build infra and device setup pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 03/33] event/cnxk: add platform specific device probe pbhagavatula
` (32 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add the info_get function to return details on the queue, flow, and
prioritization capabilities that this device supports.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 24 ++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 4 ++++
2 files changed, 28 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index b7f9c81bd..ae553fd23 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -4,6 +4,30 @@
#include "cnxk_eventdev.h"
+void
+cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
+ struct rte_event_dev_info *dev_info)
+{
+
+ dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
+ dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
+ dev_info->max_event_queues = dev->max_event_queues;
+ dev_info->max_event_queue_flows = (1ULL << 20);
+ dev_info->max_event_queue_priority_levels = 8;
+ dev_info->max_event_priority_levels = 1;
+ dev_info->max_event_ports = dev->max_event_ports;
+ dev_info->max_event_port_dequeue_depth = 1;
+ dev_info->max_event_port_enqueue_depth = 1;
+ dev_info->max_num_events = dev->max_num_events;
+ dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
+ RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
+ RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+ RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
+ RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 148b327a1..583492948 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,6 +17,8 @@
struct cnxk_sso_evdev {
struct roc_sso sso;
+ uint8_t max_event_queues;
+ uint8_t max_event_ports;
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
@@ -35,5 +37,7 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
int cnxk_sso_init(struct rte_eventdev *event_dev);
int cnxk_sso_fini(struct rte_eventdev *event_dev);
int cnxk_sso_remove(struct rte_pci_device *pci_dev);
+void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
+ struct rte_event_dev_info *dev_info);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 03/33] event/cnxk: add platform specific device probe
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 01/33] event/cnxk: add build infra and device setup pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 02/33] event/cnxk: add device capabilities function pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 04/33] event/cnxk: add common configuration validation pbhagavatula
` (31 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add platform-specific event device probe and remove functions; also add
the event device info get function.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 101 +++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 102 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 2 +
drivers/event/cnxk/meson.build | 5 +-
4 files changed, 209 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
new file mode 100644
index 000000000..34238d3b5
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+static void
+cn10k_sso_set_rsrc(void *arg)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ dev->max_event_ports = dev->sso.max_hws;
+ dev->max_event_queues =
+ dev->sso.max_hwgrp > RTE_EVENT_MAX_QUEUES_PER_DEV ?
+ RTE_EVENT_MAX_QUEUES_PER_DEV :
+ dev->sso.max_hwgrp;
+}
+
+static void
+cn10k_sso_info_get(struct rte_eventdev *event_dev,
+ struct rte_event_dev_info *dev_info)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ dev_info->driver_name = RTE_STR(EVENTDEV_NAME_CN10K_PMD);
+ cnxk_sso_info_get(dev, dev_info);
+}
+
+static struct rte_eventdev_ops cn10k_sso_dev_ops = {
+ .dev_infos_get = cn10k_sso_info_get,
+};
+
+static int
+cn10k_sso_init(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ if (RTE_CACHE_LINE_SIZE != 64) {
+ plt_err("Driver not compiled for CN10K");
+ return -EFAULT;
+ }
+
+ rc = plt_init();
+ if (rc < 0) {
+ plt_err("Failed to initialize platform model");
+ return rc;
+ }
+
+ event_dev->dev_ops = &cn10k_sso_dev_ops;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = cnxk_sso_init(event_dev);
+ if (rc < 0)
+ return rc;
+
+ cn10k_sso_set_rsrc(cnxk_sso_pmd_priv(event_dev));
+ if (!dev->max_event_ports || !dev->max_event_queues) {
+ plt_err("Not enough eventdev resource queues=%d ports=%d",
+ dev->max_event_queues, dev->max_event_ports);
+ cnxk_sso_fini(event_dev);
+ return -ENODEV;
+ }
+
+ plt_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
+ event_dev->data->name, dev->max_event_queues,
+ dev->max_event_ports);
+
+ return 0;
+}
+
+static int
+cn10k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_probe(pci_drv, pci_dev,
+ sizeof(struct cnxk_sso_evdev),
+ cn10k_sso_init);
+}
+
+static const struct rte_pci_id cn10k_pci_sso_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver cn10k_pci_sso = {
+ .id_table = cn10k_pci_sso_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+ .probe = cn10k_sso_probe,
+ .remove = cnxk_sso_remove,
+};
+
+RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
+RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
+RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
new file mode 100644
index 000000000..238540828
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+#define CN9K_DUAL_WS_NB_WS 2
+#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + (id))
+
+static void
+cn9k_sso_set_rsrc(void *arg)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ if (dev->dual_ws)
+ dev->max_event_ports = dev->sso.max_hws / CN9K_DUAL_WS_NB_WS;
+ else
+ dev->max_event_ports = dev->sso.max_hws;
+ dev->max_event_queues =
+ dev->sso.max_hwgrp > RTE_EVENT_MAX_QUEUES_PER_DEV ?
+ RTE_EVENT_MAX_QUEUES_PER_DEV :
+ dev->sso.max_hwgrp;
+}
+
+static void
+cn9k_sso_info_get(struct rte_eventdev *event_dev,
+ struct rte_event_dev_info *dev_info)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ dev_info->driver_name = RTE_STR(EVENTDEV_NAME_CN9K_PMD);
+ cnxk_sso_info_get(dev, dev_info);
+}
+
+static struct rte_eventdev_ops cn9k_sso_dev_ops = {
+ .dev_infos_get = cn9k_sso_info_get,
+};
+
+static int
+cn9k_sso_init(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ if (RTE_CACHE_LINE_SIZE != 128) {
+ plt_err("Driver not compiled for CN9K");
+ return -EFAULT;
+ }
+
+ rc = plt_init();
+ if (rc < 0) {
+ plt_err("Failed to initialize platform model");
+ return rc;
+ }
+
+ event_dev->dev_ops = &cn9k_sso_dev_ops;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = cnxk_sso_init(event_dev);
+ if (rc < 0)
+ return rc;
+
+ cn9k_sso_set_rsrc(cnxk_sso_pmd_priv(event_dev));
+ if (!dev->max_event_ports || !dev->max_event_queues) {
+ plt_err("Not enough eventdev resource queues=%d ports=%d",
+ dev->max_event_queues, dev->max_event_ports);
+ cnxk_sso_fini(event_dev);
+ return -ENODEV;
+ }
+
+ plt_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
+ event_dev->data->name, dev->max_event_queues,
+ dev->max_event_ports);
+
+ return 0;
+}
+
+static int
+cn9k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_probe(
+ pci_drv, pci_dev, sizeof(struct cnxk_sso_evdev), cn9k_sso_init);
+}
+
+static const struct rte_pci_id cn9k_pci_sso_map[] = {
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver cn9k_pci_sso = {
+ .id_table = cn9k_pci_sso_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+ .probe = cn9k_sso_probe,
+ .remove = cnxk_sso_remove,
+};
+
+RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
+RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
+RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 583492948..b98c783ae 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -25,6 +25,8 @@ struct cnxk_sso_evdev {
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
+ /* CN9K */
+ uint8_t dual_ws;
} __rte_cache_aligned;
static inline struct cnxk_sso_evdev *
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index e454dc863..453c3a4f7 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -8,6 +8,9 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
subdir_done()
endif
-sources = files('cnxk_eventdev.c')
+sources = files('cn10k_eventdev.c',
+ 'cn9k_eventdev.c',
+ 'cnxk_eventdev.c',
+ )
deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 04/33] event/cnxk: add common configuration validation
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (2 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 03/33] event/cnxk: add platform specific device probe pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 05/33] event/cnxk: add platform specific device config pbhagavatula
` (30 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add configuration validation along with default port and queue
configuration functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 70 ++++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 +++
2 files changed, 76 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index ae553fd23..f15986f3e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,6 +28,76 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
+int
+cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
+{
+ struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint32_t deq_tmo_ns;
+
+ deq_tmo_ns = conf->dequeue_timeout_ns;
+
+ if (deq_tmo_ns == 0)
+ deq_tmo_ns = dev->min_dequeue_timeout_ns;
+ if (deq_tmo_ns < dev->min_dequeue_timeout_ns ||
+ deq_tmo_ns > dev->max_dequeue_timeout_ns) {
+ plt_err("Unsupported dequeue timeout requested");
+ return -EINVAL;
+ }
+
+ if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
+ dev->is_timeout_deq = 1;
+
+ dev->deq_tmo_ns = deq_tmo_ns;
+
+ if (!conf->nb_event_queues || !conf->nb_event_ports ||
+ conf->nb_event_ports > dev->max_event_ports ||
+ conf->nb_event_queues > dev->max_event_queues) {
+ plt_err("Unsupported event queues/ports requested");
+ return -EINVAL;
+ }
+
+ if (conf->nb_event_port_dequeue_depth > 1) {
+ plt_err("Unsupported event port deq depth requested");
+ return -EINVAL;
+ }
+
+ if (conf->nb_event_port_enqueue_depth > 1) {
+ plt_err("Unsupported event port enq depth requested");
+ return -EINVAL;
+ }
+
+ dev->nb_event_queues = conf->nb_event_queues;
+ dev->nb_event_ports = conf->nb_event_ports;
+
+ return 0;
+}
+
+void
+cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+ struct rte_event_queue_conf *queue_conf)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(queue_id);
+
+ queue_conf->nb_atomic_flows = (1ULL << 20);
+ queue_conf->nb_atomic_order_sequences = (1ULL << 20);
+ queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+ queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+}
+
+void
+cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+ struct rte_event_port_conf *port_conf)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ RTE_SET_USED(port_id);
+ port_conf->new_event_threshold = dev->max_num_events;
+ port_conf->dequeue_depth = 1;
+ port_conf->enqueue_depth = 1;
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index b98c783ae..08eba2270 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -22,6 +22,7 @@ struct cnxk_sso_evdev {
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
+ uint32_t deq_tmo_ns;
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
@@ -41,5 +42,10 @@ int cnxk_sso_fini(struct rte_eventdev *event_dev);
int cnxk_sso_remove(struct rte_pci_device *pci_dev);
void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
struct rte_event_dev_info *dev_info);
+int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
+void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+ struct rte_event_queue_conf *queue_conf);
+void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+ struct rte_event_port_conf *port_conf);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 05/33] event/cnxk: add platform specific device config
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (3 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 04/33] event/cnxk: add common configuration validation pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 06/33] event/cnxk: add event queue config functions pbhagavatula
` (29 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add platform-specific event device configuration that attaches the
requested number of SSO HWS (event ports) and HWGRP (event queues) LFs
to the RVU PF/VF.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 35 +++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 37 +++++++++++++++++++++++++++++
2 files changed, 72 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 34238d3b5..352df88fc 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -16,6 +16,14 @@ cn10k_sso_set_rsrc(void *arg)
dev->sso.max_hwgrp;
}
+static int
+cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
+}
+
static void
cn10k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -26,8 +34,35 @@ cn10k_sso_info_get(struct rte_eventdev *event_dev,
cnxk_sso_info_get(dev, dev_info);
}
+static int
+cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ rc = cnxk_sso_dev_validate(event_dev);
+ if (rc < 0) {
+ plt_err("Invalid event device configuration");
+ return -EINVAL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+
+ rc = cn10k_sso_rsrc_init(dev, dev->nb_event_ports,
+ dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to initialize SSO resources");
+ return -ENODEV;
+ }
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
+ .dev_configure = cn10k_sso_dev_configure,
+ .queue_def_conf = cnxk_sso_queue_def_conf,
+ .port_def_conf = cnxk_sso_port_def_conf,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 238540828..126388a23 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -22,6 +22,17 @@ cn9k_sso_set_rsrc(void *arg)
dev->sso.max_hwgrp;
}
+static int
+cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ if (dev->dual_ws)
+ hws = hws * CN9K_DUAL_WS_NB_WS;
+
+ return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
+}
+
static void
cn9k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -32,8 +43,34 @@ cn9k_sso_info_get(struct rte_eventdev *event_dev,
cnxk_sso_info_get(dev, dev_info);
}
+static int
+cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ rc = cnxk_sso_dev_validate(event_dev);
+ if (rc < 0) {
+ plt_err("Invalid event device configuration");
+ return -EINVAL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+
+ rc = cn9k_sso_rsrc_init(dev, dev->nb_event_ports, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to initialize SSO resources");
+ return -ENODEV;
+ }
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
+ .dev_configure = cn9k_sso_dev_configure,
+ .queue_def_conf = cnxk_sso_queue_def_conf,
+ .port_def_conf = cnxk_sso_port_def_conf,
};
static int
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 06/33] event/cnxk: add event queue config functions
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (4 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 05/33] event/cnxk: add platform specific device config pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 07/33] event/cnxk: allocate event inflight buffers pbhagavatula
` (28 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add setup and release functions for event queues, i.e. SSO HWGRPs.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 2 ++
drivers/event/cnxk/cn9k_eventdev.c | 2 ++
drivers/event/cnxk/cnxk_eventdev.c | 19 +++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 3 +++
4 files changed, 26 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 352df88fc..92687c23e 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -62,6 +62,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
.queue_def_conf = cnxk_sso_queue_def_conf,
+ .queue_setup = cnxk_sso_queue_setup,
+ .queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
};
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 126388a23..1bd2b3343 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -70,6 +70,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
.queue_def_conf = cnxk_sso_queue_def_conf,
+ .queue_setup = cnxk_sso_queue_setup,
+ .queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
};
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index f15986f3e..59cc570fe 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -86,6 +86,25 @@ cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
}
+int
+cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+ const struct rte_event_queue_conf *queue_conf)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority);
+ /* Normalize <0-255> to <0-7> */
+ return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF,
+ queue_conf->priority / 32);
+}
+
+void
+cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(queue_id);
+}
+
void
cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 08eba2270..974c618bc 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -45,6 +45,9 @@ void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
+int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+ const struct rte_event_queue_conf *queue_conf);
+void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 07/33] event/cnxk: allocate event inflight buffers
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (5 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 06/33] event/cnxk: add event queue config functions pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 08/33] event/cnxk: add devargs for inflight buffer count pbhagavatula
` (27 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Allocate buffers in DRAM that hold inflight events.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 7 ++
drivers/event/cnxk/cn9k_eventdev.c | 7 ++
drivers/event/cnxk/cnxk_eventdev.c | 105 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 14 +++-
4 files changed, 132 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 92687c23e..7e3fa20c5 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -55,6 +55,13 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
return -ENODEV;
}
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
+ return 0;
+cnxk_rsrc_fini:
+ roc_sso_rsrc_fini(&dev->sso);
return rc;
}
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 1bd2b3343..71245b660 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -63,6 +63,13 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
return -ENODEV;
}
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
+ return 0;
+cnxk_rsrc_fini:
+ roc_sso_rsrc_fini(&dev->sso);
return rc;
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 59cc570fe..927f99117 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,12 +28,107 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
+int
+cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
+{
+ char pool_name[RTE_MEMZONE_NAMESIZE];
+ uint32_t xaq_cnt, npa_aura_id;
+ const struct rte_memzone *mz;
+ struct npa_aura_s *aura;
+ static int reconfig_cnt;
+ int rc;
+
+ if (dev->xaq_pool) {
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ }
+
+ /*
+ * Allocate memory for Add work backpressure.
+ */
+ mz = rte_memzone_lookup(CNXK_SSO_FC_NAME);
+ if (mz == NULL)
+ mz = rte_memzone_reserve_aligned(CNXK_SSO_FC_NAME,
+ sizeof(struct npa_aura_s) +
+ RTE_CACHE_LINE_SIZE,
+ 0, 0, RTE_CACHE_LINE_SIZE);
+ if (mz == NULL) {
+ plt_err("Failed to allocate mem for fcmem");
+ return -ENOMEM;
+ }
+
+ dev->fc_iova = mz->iova;
+ dev->fc_mem = mz->addr;
+
+ aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem +
+ RTE_CACHE_LINE_SIZE);
+ memset(aura, 0, sizeof(struct npa_aura_s));
+
+ aura->fc_ena = 1;
+ aura->fc_addr = dev->fc_iova;
+ aura->fc_hyst_bits = 0; /* Store count on all updates */
+
+ /* Taken from HRM 14.3.3(4) */
+ xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
+ xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+
+ plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
+ /* Setup XAQ based on number of nb queues. */
+ snprintf(pool_name, sizeof(pool_name), "cnxk_xaq_buf_pool_%d", reconfig_cnt);
+ dev->xaq_pool = (void *)rte_mempool_create_empty(
+ pool_name, xaq_cnt, dev->sso.xaq_buf_size, 0, 0,
+ rte_socket_id(), 0);
+
+ if (dev->xaq_pool == NULL) {
+ plt_err("Unable to create empty mempool.");
+ rte_memzone_free(mz);
+ return -ENOMEM;
+ }
+
+ rc = rte_mempool_set_ops_byname(dev->xaq_pool,
+ rte_mbuf_platform_mempool_ops(), aura);
+ if (rc != 0) {
+ plt_err("Unable to set xaqpool ops.");
+ goto alloc_fail;
+ }
+
+ rc = rte_mempool_populate_default(dev->xaq_pool);
+ if (rc < 0) {
+ plt_err("Unable to populate xaqpool.");
+ goto alloc_fail;
+ }
+ reconfig_cnt++;
+ /* When SW does addwork (enqueue) check if there is space in XAQ by
+ * comparing fc_addr above against the xaq_lmt calculated below.
+ * There should be a minimum headroom (CNXK_SSO_XAQ_SLACK / 2) for SSO
+ * to request XAQ to cache them even before enqueue is called.
+ */
+ dev->xaq_lmt =
+ xaq_cnt - (CNXK_SSO_XAQ_SLACK / 2 * dev->nb_event_queues);
+ dev->nb_xaq_cfg = xaq_cnt;
+
+ npa_aura_id = roc_npa_aura_handle_to_aura(dev->xaq_pool->pool_id);
+ return roc_sso_hwgrp_alloc_xaq(&dev->sso, npa_aura_id,
+ dev->nb_event_queues);
+alloc_fail:
+ rte_mempool_free(dev->xaq_pool);
+ rte_memzone_free(mz);
+ return rc;
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint32_t deq_tmo_ns;
+ int rc;
deq_tmo_ns = conf->dequeue_timeout_ns;
@@ -67,6 +162,16 @@ cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
return -EINVAL;
}
+ if (dev->xaq_pool) {
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ }
+
dev->nb_event_queues = conf->nb_event_queues;
dev->nb_event_ports = conf->nb_event_ports;
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 974c618bc..8478120c0 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -5,6 +5,7 @@
#ifndef __CNXK_EVENTDEV_H__
#define __CNXK_EVENTDEV_H__
+#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
#include <eventdev_pmd_pci.h>
@@ -13,7 +14,10 @@
#define USEC2NSEC(__us) ((__us)*1E3)
-#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
+#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
+#define CNXK_SSO_XAQ_SLACK (8)
struct cnxk_sso_evdev {
struct roc_sso sso;
@@ -26,6 +30,11 @@ struct cnxk_sso_evdev {
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
+ uint64_t *fc_mem;
+ uint64_t xaq_lmt;
+ uint64_t nb_xaq_cfg;
+ rte_iova_t fc_iova;
+ struct rte_mempool *xaq_pool;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
@@ -36,6 +45,9 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
return event_dev->data->dev_private;
}
+/* Configuration functions */
+int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
+
/* Common ops API. */
int cnxk_sso_init(struct rte_eventdev *event_dev);
int cnxk_sso_fini(struct rte_eventdev *event_dev);
--
2.17.1
* [dpdk-dev] [PATCH v2 08/33] event/cnxk: add devargs for inflight buffer count
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (6 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 07/33] event/cnxk: allocate event inflight buffers pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 09/33] event/cnxk: add devargs to control SSO HWGRP QoS pbhagavatula
` (26 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
The number of events for an *open system* event device is specified
as -1 as per the eventdev specification.
Since SSO in-flight events are only limited by DRAM size, the
xae_cnt devargs parameter is introduced to provide an upper limit for
in-flight events.
Example:
--dev "0002:0e:00.0,xae_cnt=8192"
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 14 ++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 1 +
drivers/event/cnxk/cnxk_eventdev.c | 24 ++++++++++++++++++++++--
drivers/event/cnxk/cnxk_eventdev.h | 15 +++++++++++++++
5 files changed, 53 insertions(+), 2 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index e94225bd3..569fce4cb 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -41,6 +41,20 @@ Prerequisites and Compilation procedure
See :doc:`../platform/cnxk` for setup information.
+
+Runtime Config Options
+----------------------
+
+- ``Maximum number of in-flight events`` (default ``8192``)
+
+ In **Marvell OCTEON CNXK** the max number of in-flight events is only limited
+ by DRAM size; the ``xae_cnt`` devargs parameter is introduced to provide an
+ upper limit for in-flight events.
+
+ For example::
+
+ -a 0002:0e:00.0,xae_cnt=16384
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 7e3fa20c5..1b278360f 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -143,3 +143,4 @@ static struct rte_pci_driver cn10k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 71245b660..8dfcf35b4 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -146,3 +146,4 @@ static struct rte_pci_driver cn9k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 927f99117..28a03aeab 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -75,8 +75,11 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
/* Taken from HRM 14.3.3(4) */
xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
- xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
- (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+ if (dev->xae_cnt)
+ xaq_cnt += dev->xae_cnt / dev->sso.xae_waes;
+ else
+ xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
/* Setup XAQ based on number of nb queues. */
@@ -222,6 +225,22 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+static void
+cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
+{
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ return;
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, CNXK_SSO_XAE_CNT, &parse_kvargs_value,
+ &dev->xae_cnt);
+ rte_kvargs_free(kvlist);
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
@@ -242,6 +261,7 @@ cnxk_sso_init(struct rte_eventdev *event_dev)
dev->sso.pci_dev = pci_dev;
*(uint64_t *)mz->addr = (uint64_t)dev;
+ cnxk_sso_parse_devargs(dev, pci_dev->device.devargs);
/* Initialize the base cnxk_dev object */
rc = roc_sso_dev_init(&dev->sso);
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 8478120c0..72b0ff3f8 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -5,6 +5,8 @@
#ifndef __CNXK_EVENTDEV_H__
#define __CNXK_EVENTDEV_H__
+#include <rte_devargs.h>
+#include <rte_kvargs.h>
#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
@@ -12,6 +14,8 @@
#include "roc_api.h"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+
#define USEC2NSEC(__us) ((__us)*1E3)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
@@ -35,10 +39,21 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ /* Dev args */
+ uint32_t xae_cnt;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
+static inline int
+parse_kvargs_value(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ *(uint32_t *)opaque = (uint32_t)atoi(value);
+ return 0;
+}
+
static inline struct cnxk_sso_evdev *
cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
{
--
2.17.1
* [dpdk-dev] [PATCH v2 09/33] event/cnxk: add devargs to control SSO HWGRP QoS
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (7 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 08/33] event/cnxk: add devargs for inflight buffer count pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 10/33] event/cnxk: add port config functions pbhagavatula
` (25 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
SSO HWGRPs, i.e. queues, use DRAM & SRAM buffers to hold in-flight
events. By default the buffers are assigned to the SSO HWGRPs to
satisfy minimum HW requirements. SSO is free to assign the remaining
buffers to HWGRPs based on a preconfigured threshold.
The QoS of an SSO HWGRP can be controlled by modifying the
above-mentioned thresholds. HWGRPs of higher importance can be
assigned higher thresholds than the rest.
Example:
--dev "0002:0e:00.0,qos=[1-50-50-50]" // [Qx-XAQ-TAQ-IAQ]
Qx -> Event queue Aka SSO GGRP.
XAQ -> DRAM In-flights.
TAQ & IAQ -> SRAM In-flights.
The values are expressed as percentages; 0 represents the
default.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 16 ++++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_eventdev.c | 78 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 12 ++++-
5 files changed, 109 insertions(+), 3 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 569fce4cb..cf2156333 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -55,6 +55,22 @@ Runtime Config Options
-a 0002:0e:00.0,xae_cnt=16384
+- ``Event Group QoS support``
+
+ SSO GGRPs, i.e. queues, use DRAM & SRAM buffers to hold in-flight
+ events. By default the buffers are assigned to the SSO GGRPs to
+ satisfy minimum HW requirements. SSO is free to assign the remaining
+ buffers to GGRPs based on a preconfigured threshold.
+ The QoS of an SSO GGRP can be controlled by modifying the
+ above-mentioned thresholds. GGRPs of higher importance can be
+ assigned higher thresholds than the rest. The dictionary format is
+ ``[Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ]``, expressed in percentages; 0
+ represents the default.
+
+ For example::
+
+ -a 0002:0e:00.0,qos=[1-50-50-50]
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 1b278360f..47eb8898b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -143,4 +143,5 @@ static struct rte_pci_driver cn10k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
+ CNXK_SSO_GGRP_QOS "=<string>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 8dfcf35b4..43c045d43 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -146,4 +146,5 @@ static struct rte_pci_driver cn9k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
+ CNXK_SSO_GGRP_QOS "=<string>");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 28a03aeab..4cb5359a8 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -225,6 +225,82 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+static void
+parse_queue_param(char *value, void *opaque)
+{
+ struct cnxk_sso_qos queue_qos = {0};
+ uint8_t *val = (uint8_t *)&queue_qos;
+ struct cnxk_sso_evdev *dev = opaque;
+ char *tok = strtok(value, "-");
+ struct cnxk_sso_qos *old_ptr;
+
+ if (!strlen(value))
+ return;
+
+ while (tok != NULL) {
+ *val = atoi(tok);
+ tok = strtok(NULL, "-");
+ val++;
+ }
+
+ if (val != (&queue_qos.iaq_prcnt + 1)) {
+ plt_err("Invalid QoS parameter expected [Qx-XAQ-TAQ-IAQ]");
+ return;
+ }
+
+ dev->qos_queue_cnt++;
+ old_ptr = dev->qos_parse_data;
+ dev->qos_parse_data = rte_realloc(
+ dev->qos_parse_data,
+ sizeof(struct cnxk_sso_qos) * dev->qos_queue_cnt, 0);
+ if (dev->qos_parse_data == NULL) {
+ dev->qos_parse_data = old_ptr;
+ dev->qos_queue_cnt--;
+ return;
+ }
+ dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos;
+}
+
+static void
+parse_qos_list(const char *value, void *opaque)
+{
+ char *s = strdup(value);
+ char *start = NULL;
+ char *end = NULL;
+ char *f = s;
+
+ if (s == NULL)
+ return;
+
+ while (*s) {
+ if (*s == '[')
+ start = s;
+ else if (*s == ']')
+ end = s;
+
+ if (start && start < end) {
+ *end = 0;
+ parse_queue_param(start + 1, opaque);
+ s = end;
+ start = end;
+ }
+ s++;
+ }
+
+ free(f);
+}
+
+static int
+parse_sso_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ /* Dict format [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] use '-' cause ','
+ * isn't allowed. Everything is expressed in percentages, 0 represents
+ * default.
+ */
+ parse_qos_list(value, opaque);
+
+ return 0;
+}
+
static void
cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
{
@@ -238,6 +314,8 @@ cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
rte_kvargs_process(kvlist, CNXK_SSO_XAE_CNT, &parse_kvargs_value,
&dev->xae_cnt);
+ rte_kvargs_process(kvlist, CNXK_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
+ dev);
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 72b0ff3f8..4a2fa73fe 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,7 +14,8 @@
#include "roc_api.h"
-#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_GGRP_QOS "qos"
#define USEC2NSEC(__us) ((__us)*1E3)
@@ -23,6 +24,13 @@
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+struct cnxk_sso_qos {
+ uint16_t queue;
+ uint8_t xaq_prcnt;
+ uint8_t taq_prcnt;
+ uint8_t iaq_prcnt;
+};
+
struct cnxk_sso_evdev {
struct roc_sso sso;
uint8_t max_event_queues;
@@ -41,6 +49,8 @@ struct cnxk_sso_evdev {
struct rte_mempool *xaq_pool;
/* Dev args */
uint32_t xae_cnt;
+ uint8_t qos_queue_cnt;
+ struct cnxk_sso_qos *qos_parse_data;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
--
2.17.1
* [dpdk-dev] [PATCH v2 10/33] event/cnxk: add port config functions
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (8 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 09/33] event/cnxk: add devargs to control SSO HWGRP QoS pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 11/33] event/cnxk: add event port link and unlink pbhagavatula
` (24 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add SSO HWS (i.e. event port) setup and release functions.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 121 +++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 147 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 65 ++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 91 +++++++++++++++++
4 files changed, 424 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 47eb8898b..c60df7f7b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -4,6 +4,91 @@
#include "cnxk_eventdev.h"
+static void
+cn10k_init_hws_ops(struct cn10k_sso_hws *ws, uintptr_t base)
+{
+ ws->tag_wqe_op = base + SSOW_LF_GWS_WQE0;
+ ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK0;
+ ws->updt_wqe_op = base + SSOW_LF_GWS_OP_UPD_WQP_GRP1;
+ ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+ ws->swtag_untag_op = base + SSOW_LF_GWS_OP_SWTAG_UNTAG;
+ ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
+ ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
+static uint32_t
+cn10k_sso_gw_mode_wdata(struct cnxk_sso_evdev *dev)
+{
+ uint32_t wdata = BIT(16) | 1;
+
+ switch (dev->gw_mode) {
+ case CN10K_GW_MODE_NONE:
+ default:
+ break;
+ case CN10K_GW_MODE_PREF:
+ wdata |= BIT(19);
+ break;
+ case CN10K_GW_MODE_PREF_WFE:
+ wdata |= BIT(20) | BIT(19);
+ break;
+ }
+
+ return wdata;
+}
+
+static void *
+cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws;
+
+ /* Allocate event port memory */
+ ws = rte_zmalloc("cn10k_ws",
+ sizeof(struct cn10k_sso_hws) + RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (ws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ /* First cache line is reserved for cookie */
+ ws = RTE_PTR_ADD(ws, sizeof(struct cnxk_sso_hws_cookie));
+ ws->base = roc_sso_hws_base_get(&dev->sso, port_id);
+ cn10k_init_hws_ops(ws, ws->base);
+ ws->hws_id = port_id;
+ ws->swtag_req = 0;
+ ws->gw_wdata = cn10k_sso_gw_mode_wdata(dev);
+ ws->lmt_base = dev->sso.lmt_base;
+
+ return ws;
+}
+
+static void
+cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = hws;
+ uint64_t val;
+
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+
+ /* Set get_work timeout for HWS */
+ val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+ plt_write64(val, ws->base + SSOW_LF_GWS_NW_TIM);
+}
+
+static void
+cn10k_sso_hws_release(void *arg, void *hws)
+{
+ struct cn10k_sso_hws *ws = hws;
+
+ RTE_SET_USED(arg);
+ memset(ws, 0, sizeof(*ws));
+}
+
static void
cn10k_sso_set_rsrc(void *arg)
{
@@ -59,12 +144,46 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ rc = cnxk_setup_event_ports(event_dev, cn10k_sso_init_hws_mem,
+ cn10k_sso_hws_setup);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
+ dev->nb_event_ports = 0;
return rc;
}
+static int
+cn10k_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ const struct rte_event_port_conf *port_conf)
+{
+ RTE_SET_USED(port_conf);
+ return cnxk_sso_port_setup(event_dev, port_id, cn10k_sso_hws_setup);
+}
+
+static void
+cn10k_sso_port_release(void *port)
+{
+ struct cnxk_sso_hws_cookie *gws_cookie = cnxk_sso_hws_get_cookie(port);
+ struct cnxk_sso_evdev *dev;
+
+ if (port == NULL)
+ return;
+
+ dev = cnxk_sso_pmd_priv(gws_cookie->event_dev);
+ if (!gws_cookie->configured)
+ goto free;
+
+ cn10k_sso_hws_release(dev, port);
+ memset(gws_cookie, 0, sizeof(*gws_cookie));
+free:
+ rte_free(gws_cookie);
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -72,6 +191,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.queue_setup = cnxk_sso_queue_setup,
.queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
+ .port_setup = cn10k_sso_port_setup,
+ .port_release = cn10k_sso_port_release,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 43c045d43..116f5bdab 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -7,6 +7,63 @@
#define CN9K_DUAL_WS_NB_WS 2
#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
+static void
+cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t base)
+{
+ ws->tag_op = base + SSOW_LF_GWS_TAG;
+ ws->wqp_op = base + SSOW_LF_GWS_WQP;
+ ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK0;
+ ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
+ ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+ ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
+static void
+cn9k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ uint64_t val;
+
+ /* Set get_work tmo for HWS */
+ val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+ if (dev->dual_ws) {
+ dws = hws;
+ rte_memcpy(dws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ dws->fc_mem = dev->fc_mem;
+ dws->xaq_lmt = dev->xaq_lmt;
+
+ plt_write64(val, dws->base[0] + SSOW_LF_GWS_NW_TIM);
+ plt_write64(val, dws->base[1] + SSOW_LF_GWS_NW_TIM);
+ } else {
+ ws = hws;
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+
+ plt_write64(val, ws->base + SSOW_LF_GWS_NW_TIM);
+ }
+}
+
+static void
+cn9k_sso_hws_release(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+
+ if (dev->dual_ws) {
+ dws = hws;
+ memset(dws, 0, sizeof(*dws));
+ } else {
+ ws = hws;
+ memset(ws, 0, sizeof(*ws));
+ }
+}
+
static void
cn9k_sso_set_rsrc(void *arg)
{
@@ -33,6 +90,60 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void *
+cn9k_sso_init_hws_mem(void *arg, uint8_t port_id)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ void *data;
+
+ if (dev->dual_ws) {
+ dws = rte_zmalloc("cn9k_dual_ws",
+ sizeof(struct cn9k_sso_hws_dual) +
+ RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (dws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ dws = RTE_PTR_ADD(dws, sizeof(struct cnxk_sso_hws_cookie));
+ dws->base[0] = roc_sso_hws_base_get(
+ &dev->sso, CN9K_DUAL_WS_PAIR_ID(port_id, 0));
+ dws->base[1] = roc_sso_hws_base_get(
+ &dev->sso, CN9K_DUAL_WS_PAIR_ID(port_id, 1));
+ cn9k_init_hws_ops(&dws->ws_state[0], dws->base[0]);
+ cn9k_init_hws_ops(&dws->ws_state[1], dws->base[1]);
+ dws->hws_id = port_id;
+ dws->swtag_req = 0;
+ dws->vws = 0;
+
+ data = dws;
+ } else {
+ /* Allocate event port memory */
+ ws = rte_zmalloc("cn9k_ws",
+ sizeof(struct cn9k_sso_hws) +
+ RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (ws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ /* First cache line is reserved for cookie */
+ ws = RTE_PTR_ADD(ws, sizeof(struct cnxk_sso_hws_cookie));
+ ws->base = roc_sso_hws_base_get(&dev->sso, port_id);
+ cn9k_init_hws_ops((struct cn9k_sso_hws_state *)ws, ws->base);
+ ws->hws_id = port_id;
+ ws->swtag_req = 0;
+
+ data = ws;
+ }
+
+ return data;
+}
+
static void
cn9k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -67,12 +178,46 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ rc = cnxk_setup_event_ports(event_dev, cn9k_sso_init_hws_mem,
+ cn9k_sso_hws_setup);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
+ dev->nb_event_ports = 0;
return rc;
}
+static int
+cn9k_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ const struct rte_event_port_conf *port_conf)
+{
+ RTE_SET_USED(port_conf);
+ return cnxk_sso_port_setup(event_dev, port_id, cn9k_sso_hws_setup);
+}
+
+static void
+cn9k_sso_port_release(void *port)
+{
+ struct cnxk_sso_hws_cookie *gws_cookie = cnxk_sso_hws_get_cookie(port);
+ struct cnxk_sso_evdev *dev;
+
+ if (port == NULL)
+ return;
+
+ dev = cnxk_sso_pmd_priv(gws_cookie->event_dev);
+ if (!gws_cookie->configured)
+ goto free;
+
+ cn9k_sso_hws_release(dev, port);
+ memset(gws_cookie, 0, sizeof(*gws_cookie));
+free:
+ rte_free(gws_cookie);
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -80,6 +225,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.queue_setup = cnxk_sso_queue_setup,
.queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
+ .port_setup = cn9k_sso_port_setup,
+ .port_release = cn9k_sso_port_release,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 4cb5359a8..9d455c93d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -125,6 +125,42 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
return rc;
}
+int
+cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
+ cnxk_sso_init_hws_mem_t init_hws_fn,
+ cnxk_sso_hws_setup_t setup_hws_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ struct cnxk_sso_hws_cookie *ws_cookie;
+ void *ws;
+
+ /* Free memory prior to re-allocation if needed */
+ if (event_dev->data->ports[i] != NULL)
+ ws = event_dev->data->ports[i];
+ else
+ ws = init_hws_fn(dev, i);
+ if (ws == NULL)
+ goto hws_fini;
+ ws_cookie = cnxk_sso_hws_get_cookie(ws);
+ ws_cookie->event_dev = event_dev;
+ ws_cookie->configured = 1;
+ event_dev->data->ports[i] = ws;
+ cnxk_sso_port_setup((struct rte_eventdev *)(uintptr_t)event_dev,
+ i, setup_hws_fn);
+ }
+
+ return 0;
+hws_fini:
+ for (i = i - 1; i >= 0; i--) {
+ rte_free(cnxk_sso_hws_get_cookie(event_dev->data->ports[i]));
+ event_dev->data->ports[i] = NULL;
+ }
+ return -ENOMEM;
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
@@ -225,6 +261,35 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+int
+cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ cnxk_sso_hws_setup_t hws_setup_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP] = {0};
+ uint16_t q;
+
+ plt_sso_dbg("Port=%d", port_id);
+ if (event_dev->data->ports[port_id] == NULL) {
+ plt_err("Invalid port Id %d", port_id);
+ return -EINVAL;
+ }
+
+ for (q = 0; q < dev->nb_event_queues; q++) {
+ grps_base[q] = roc_sso_hwgrp_base_get(&dev->sso, q);
+ if (grps_base[q] == 0) {
+ plt_err("Failed to get grp[%d] base addr", q);
+ return -EINVAL;
+ }
+ }
+
+ hws_setup_fn(dev, event_dev->data->ports[port_id], grps_base);
+ plt_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
+ rte_mb();
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 4a2fa73fe..0e8457f02 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,13 +17,23 @@
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
+#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
+#define CNXK_SSO_MAX_HWGRP (RTE_EVENT_MAX_QUEUES_PER_DEV + 1)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+#define CN10K_GW_MODE_NONE 0
+#define CN10K_GW_MODE_PREF 1
+#define CN10K_GW_MODE_PREF_WFE 2
+
+typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
+typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
+typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
+
struct cnxk_sso_qos {
uint16_t queue;
uint8_t xaq_prcnt;
@@ -53,6 +63,76 @@ struct cnxk_sso_evdev {
struct cnxk_sso_qos *qos_parse_data;
/* CN9K */
uint8_t dual_ws;
+ /* CN10K */
+ uint8_t gw_mode;
+} __rte_cache_aligned;
+
+/* CN10K HWS ops */
+#define CN10K_SSO_HWS_OPS \
+ uintptr_t swtag_desched_op; \
+ uintptr_t swtag_flush_op; \
+ uintptr_t swtag_untag_op; \
+ uintptr_t swtag_norm_op; \
+ uintptr_t updt_wqe_op; \
+ uintptr_t tag_wqe_op; \
+ uintptr_t getwrk_op
+
+struct cn10k_sso_hws {
+ /* Get Work Fastpath data */
+ CN10K_SSO_HWS_OPS;
+ uint32_t gw_wdata;
+ uint8_t swtag_req;
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base;
+ uintptr_t lmt_base;
+} __rte_cache_aligned;
+
+/* CN9K HWS ops */
+#define CN9K_SSO_HWS_OPS \
+ uintptr_t swtag_desched_op; \
+ uintptr_t swtag_flush_op; \
+ uintptr_t swtag_norm_op; \
+ uintptr_t getwrk_op; \
+ uintptr_t tag_op; \
+ uintptr_t wqp_op
+
+/* Event port aka GWS */
+struct cn9k_sso_hws {
+ /* Get Work Fastpath data */
+ CN9K_SSO_HWS_OPS;
+ uint8_t swtag_req;
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base;
+} __rte_cache_aligned;
+
+struct cn9k_sso_hws_state {
+ CN9K_SSO_HWS_OPS;
+};
+
+struct cn9k_sso_hws_dual {
+ /* Get Work Fastpath data */
+ struct cn9k_sso_hws_state ws_state[2]; /* Ping and Pong */
+ uint8_t swtag_req;
+ uint8_t vws; /* Ping pong bit */
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base[2];
+} __rte_cache_aligned;
+
+struct cnxk_sso_hws_cookie {
+ const struct rte_eventdev *event_dev;
+ bool configured;
} __rte_cache_aligned;
static inline int
@@ -70,6 +150,12 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
return event_dev->data->dev_private;
}
+static inline struct cnxk_sso_hws_cookie *
+cnxk_sso_hws_get_cookie(void *ws)
+{
+ return RTE_PTR_SUB(ws, sizeof(struct cnxk_sso_hws_cookie));
+}
+
/* Configuration functions */
int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
@@ -80,6 +166,9 @@ int cnxk_sso_remove(struct rte_pci_device *pci_dev);
void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
struct rte_event_dev_info *dev_info);
int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
+int cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
+ cnxk_sso_init_hws_mem_t init_hws_mem,
+ cnxk_sso_hws_setup_t hws_setup);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
@@ -87,5 +176,7 @@ int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
+int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ cnxk_sso_hws_setup_t hws_setup_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 11/33] event/cnxk: add event port link and unlink
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (9 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 10/33] event/cnxk: add port config functions pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 12/33] event/cnxk: add devargs to configure getwork mode pbhagavatula
` (23 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add the platform-specific event port and queue link and unlink APIs.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 64 +++++++++++++++++-
drivers/event/cnxk/cn9k_eventdev.c | 101 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 36 ++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 12 +++-
4 files changed, 210 insertions(+), 3 deletions(-)
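[Editorial note, not part of the patch: the `cnxk_sso_restore_links()` helper added below scans each port's slice of the eventdev `links_map`, where the layer marks unlinked slots with the sentinel `0xdead`. A self-contained sketch of that scan, with a hypothetical macro name `CNXK_LINK_UNMAPPED` standing in for the raw sentinel:]

```c
#include <stdint.h>

/* Hypothetical name for the 0xdead sentinel the eventdev layer stores
 * in links_map[] for unlinked slots (compared literally in
 * cnxk_sso_restore_links()). */
#define CNXK_LINK_UNMAPPED 0xdead

/* Collect the queue ids a port is still linked to by scanning its
 * links_map slice; mirrors the per-port loop in cnxk_sso_restore_links().
 * Returns the number of linked queue ids written to hwgrp[]. */
static uint16_t
collect_linked_queues(const uint16_t *links_map, uint16_t nb_queues,
		      uint16_t *hwgrp)
{
	uint16_t q, nb_hwgrp = 0;

	for (q = 0; q < nb_queues; q++) {
		if (links_map[q] == CNXK_LINK_UNMAPPED)
			continue;
		hwgrp[nb_hwgrp++] = q;
	}

	return nb_hwgrp;
}
```

In the driver the resulting `hwgrp[]` array is handed to the platform `link_fn` so that previously established port-queue links survive a reconfigure.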
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index c60df7f7b..3cf07734b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -63,6 +63,24 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
return ws;
}
+static int
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = port;
+
+ return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+}
+
+static int
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = port;
+
+ return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+}
+
static void
cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
{
@@ -83,9 +101,12 @@ cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
static void
cn10k_sso_hws_release(void *arg, void *hws)
{
+ struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
+ int i;
- RTE_SET_USED(arg);
+ for (i = 0; i < dev->nb_event_queues; i++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, (uint16_t *)&i, 1);
memset(ws, 0, sizeof(*ws));
}
@@ -149,6 +170,12 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ /* Restore any prior port-queue mapping. */
+ cnxk_sso_restore_links(event_dev, cn10k_sso_hws_link);
+
+ dev->configured = 1;
+ rte_mb();
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
@@ -184,6 +211,38 @@ cn10k_sso_port_release(void *port)
rte_free(gws_cookie);
}
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_links];
+ uint16_t link;
+
+ RTE_SET_USED(priorities);
+ for (link = 0; link < nb_links; link++)
+ hwgrp_ids[link] = queues[link];
+ nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+
+ return (int)nb_links;
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_unlinks];
+ uint16_t unlink;
+
+ for (unlink = 0; unlink < nb_unlinks; unlink++)
+ hwgrp_ids[unlink] = queues[unlink];
+ nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+
+ return (int)nb_unlinks;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -193,6 +252,9 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_def_conf = cnxk_sso_port_def_conf,
.port_setup = cn10k_sso_port_setup,
.port_release = cn10k_sso_port_release,
+ .port_link = cn10k_sso_port_link,
+ .port_unlink = cn10k_sso_port_unlink,
+ .timeout_ticks = cnxk_sso_timeout_ticks,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 116f5bdab..5be2776cc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -18,6 +18,54 @@ cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t base)
ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
}
+static int
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ int rc;
+
+ if (dev->dual_ws) {
+ dws = port;
+ rc = roc_sso_hws_link(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link);
+ rc |= roc_sso_hws_link(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ map, nb_link);
+ } else {
+ ws = port;
+ rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ }
+
+ return rc;
+}
+
+static int
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ int rc;
+
+ if (dev->dual_ws) {
+ dws = port;
+ rc = roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ map, nb_link);
+ rc |= roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ map, nb_link);
+ } else {
+ ws = port;
+ rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ }
+
+ return rc;
+}
+
static void
cn9k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
{
@@ -54,12 +102,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
struct cn9k_sso_hws *ws;
+ int i;
if (dev->dual_ws) {
dws = hws;
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ (uint16_t *)&i, 1);
+ roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ (uint16_t *)&i, 1);
+ }
memset(dws, 0, sizeof(*dws));
} else {
ws = hws;
+ for (i = 0; i < dev->nb_event_queues; i++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id,
+ (uint16_t *)&i, 1);
memset(ws, 0, sizeof(*ws));
}
}
@@ -183,6 +243,12 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ /* Restore any prior port-queue mapping. */
+ cnxk_sso_restore_links(event_dev, cn9k_sso_hws_link);
+
+ dev->configured = 1;
+ rte_mb();
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
@@ -218,6 +284,38 @@ cn9k_sso_port_release(void *port)
rte_free(gws_cookie);
}
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_links];
+ uint16_t link;
+
+ RTE_SET_USED(priorities);
+ for (link = 0; link < nb_links; link++)
+ hwgrp_ids[link] = queues[link];
+ nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+
+ return (int)nb_links;
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_unlinks];
+ uint16_t unlink;
+
+ for (unlink = 0; unlink < nb_unlinks; unlink++)
+ hwgrp_ids[unlink] = queues[unlink];
+ nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+
+ return (int)nb_unlinks;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -227,6 +325,9 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_def_conf = cnxk_sso_port_def_conf,
.port_setup = cn9k_sso_port_setup,
.port_release = cn9k_sso_port_release,
+ .port_link = cn9k_sso_port_link,
+ .port_unlink = cn9k_sso_port_unlink,
+ .timeout_ticks = cnxk_sso_timeout_ticks,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 9d455c93d..5f4075a31 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -161,6 +161,32 @@ cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
return -ENOMEM;
}
+void
+cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
+ cnxk_sso_link_t link_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
+ int i, j;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ uint16_t nb_hwgrp = 0;
+
+ links_map = event_dev->data->links_map;
+ /* Point links_map to this port specific area */
+ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ if (links_map[j] == 0xdead)
+ continue;
+ hwgrp[nb_hwgrp] = j;
+ nb_hwgrp++;
+ }
+
+ link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
+ }
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
@@ -290,6 +316,16 @@ cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
return 0;
}
+int
+cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
+ uint64_t *tmo_ticks)
+{
+ RTE_SET_USED(event_dev);
+ *tmo_ticks = NSEC2TICK(ns, rte_get_timer_hz());
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 0e8457f02..bf2c961aa 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,8 +17,9 @@
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
-#define NSEC2USEC(__ns) ((__ns) / 1E3)
-#define USEC2NSEC(__us) ((__us)*1E3)
+#define NSEC2USEC(__ns) ((__ns) / 1E3)
+#define USEC2NSEC(__us) ((__us)*1E3)
+#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
#define CNXK_SSO_MAX_HWGRP (RTE_EVENT_MAX_QUEUES_PER_DEV + 1)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
@@ -33,6 +34,8 @@
typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
+ uint16_t nb_link);
struct cnxk_sso_qos {
uint16_t queue;
@@ -48,6 +51,7 @@ struct cnxk_sso_evdev {
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
+ uint8_t configured;
uint32_t deq_tmo_ns;
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
@@ -169,6 +173,8 @@ int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
int cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
cnxk_sso_init_hws_mem_t init_hws_mem,
cnxk_sso_hws_setup_t hws_setup);
+void cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
+ cnxk_sso_link_t link_fn);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
@@ -178,5 +184,7 @@ void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
cnxk_sso_hws_setup_t hws_setup_fn);
+int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
+ uint64_t *tmo_ticks);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v2 12/33] event/cnxk: add devargs to configure getwork mode
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (10 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 11/33] event/cnxk: add event port link and unlink pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 13/33] event/cnxk: add SSO HW device operations pbhagavatula
` (22 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add devargs to configure the platform-specific getwork mode.
CN9K defaults to dual workslot mode; add an option to force
single workslot mode.
Example:
--dev "0002:0e:00.0,single_ws=1"
CN10K supports multiple getwork prefetch modes; by default,
prefetch is disabled. Add an option to select the getwork
prefetch mode.
Example:
--dev "0002:1e:00.0,gw_mode=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 18 ++++++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 3 ++-
drivers/event/cnxk/cn9k_eventdev.c | 3 ++-
drivers/event/cnxk/cnxk_eventdev.c | 6 ++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 ++++--
5 files changed, 32 insertions(+), 4 deletions(-)
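[Editorial note, not part of the patch: the driver parses these devargs with `rte_kvargs_parse()`/`rte_kvargs_process()`. As a standalone illustration of how a comma-separated devargs string such as `"single_ws=1,gw_mode=2"` maps onto the driver fields, here is a toy parser using only the C standard library; the struct and function names are invented for the example:]

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical mirror of the two fields the real driver fills in
 * (dev->dual_ws is derived from single_ws, dev->gw_mode is stored
 * directly). */
struct sso_args {
	unsigned int single_ws;
	unsigned int gw_mode;
};

/* Parse "key=value" pairs separated by commas; unknown keys are
 * ignored, matching kvargs' permissive handling. */
static void
parse_sso_devargs(const char *devargs, struct sso_args *out)
{
	char buf[128], *tok;
	unsigned int val;

	strncpy(buf, devargs, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';
	for (tok = strtok(buf, ","); tok != NULL; tok = strtok(NULL, ",")) {
		if (sscanf(tok, "single_ws=%u", &val) == 1)
			out->single_ws = val;
		else if (sscanf(tok, "gw_mode=%u", &val) == 1)
			out->gw_mode = val;
	}
}
```

In the real driver the parsed `single_ws` flag is then inverted into `dev->dual_ws`, as seen in the `cnxk_sso_parse_devargs()` hunk below.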
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index cf2156333..b2684d431 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -55,6 +55,24 @@ Runtime Config Options
-a 0002:0e:00.0,xae_cnt=16384
+- ``CN9K Getwork mode``
+
+ The CN9K ``single_ws`` devargs parameter selects single workslot mode
+ in SSO, disabling the default dual workslot mode.
+
+ For example::
+
+ -a 0002:0e:00.0,single_ws=1
+
+- ``CN10K Getwork mode``
+
+ CN10K supports multiple getwork prefetch modes; by default, the
+ prefetch mode is set to none.
+
+ For example::
+
+ -a 0002:0e:00.0,gw_mode=1
+
- ``Event Group QoS support``
SSO GGRPs i.e. queue uses DRAM & SRAM buffers to hold in-flight
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 3cf07734b..310acc011 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -327,4 +327,5 @@ RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
- CNXK_SSO_GGRP_QOS "=<string>");
+ CNXK_SSO_GGRP_QOS "=<string>"
+ CN10K_SSO_GW_MODE "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 5be2776cc..44c7a0c3a 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -395,4 +395,5 @@ RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
- CNXK_SSO_GGRP_QOS "=<string>");
+ CNXK_SSO_GGRP_QOS "=<string>"
+ CN9K_SSO_SINGLE_WS "=1");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 5f4075a31..0e2cc3681 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -406,6 +406,7 @@ static void
cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
{
struct rte_kvargs *kvlist;
+ uint8_t single_ws = 0;
if (devargs == NULL)
return;
@@ -417,6 +418,11 @@ cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
&dev->xae_cnt);
rte_kvargs_process(kvlist, CNXK_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
dev);
+ rte_kvargs_process(kvlist, CN9K_SSO_SINGLE_WS, &parse_kvargs_value,
+ &single_ws);
+ rte_kvargs_process(kvlist, CN10K_SSO_GW_MODE, &parse_kvargs_value,
+ &dev->gw_mode);
+ dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index bf2c961aa..85f6058f2 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,8 +14,10 @@
#include "roc_api.h"
-#define CNXK_SSO_XAE_CNT "xae_cnt"
-#define CNXK_SSO_GGRP_QOS "qos"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_GGRP_QOS "qos"
+#define CN9K_SSO_SINGLE_WS "single_ws"
+#define CN10K_SSO_GW_MODE "gw_mode"
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
--
2.17.1
* [dpdk-dev] [PATCH v2 13/33] event/cnxk: add SSO HW device operations
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (11 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 12/33] event/cnxk: add devargs to configure getwork mode pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 14/33] event/cnxk: add SSO GWS fastpath enqueue functions pbhagavatula
` (21 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO HW device operations used for enqueue/dequeue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_worker.c | 7 +
drivers/event/cnxk/cn10k_worker.h | 151 +++++++++++++++++
drivers/event/cnxk/cn9k_worker.c | 7 +
drivers/event/cnxk/cn9k_worker.h | 249 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 10 ++
drivers/event/cnxk/cnxk_worker.h | 101 ++++++++++++
drivers/event/cnxk/meson.build | 4 +-
7 files changed, 528 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cn10k_worker.c
create mode 100644 drivers/event/cnxk/cn10k_worker.h
create mode 100644 drivers/event/cnxk/cn9k_worker.c
create mode 100644 drivers/event/cnxk/cn9k_worker.h
create mode 100644 drivers/event/cnxk/cnxk_worker.h
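[Editorial note, not part of the patch: the get_work routines in this patch all end with the same shift sequence that repacks the raw 64-bit SSO tag word into the `rte_event` layout: the 2-bit schedule type moves from bits [33:32] to [39:38], the 10-bit group from bits [45:36] to [49:40], and the 32-bit tag stays in bits [31:0]. A standalone sketch of that expression, lifted verbatim from the `cn9k`/`cn10k_sso_hws_get_work()` hunks below (the function name is invented for the example):]

```c
#include <stdint.h>

/* Repack the raw SSO GWS tag word into the rte_event word layout:
 *   - sched type: bits [33:32] -> [39:38]  (shift left by 6)
 *   - group id:   bits [45:36] -> [49:40]  (shift left by 4)
 *   - tag:        bits [31:0] unchanged
 * Same expression as in the driver's get_work paths. */
static uint64_t
sso_tag_to_event(uint64_t hw_tag)
{
	return (hw_tag & (0x3ull << 32)) << 6 |
	       (hw_tag & (0x3FFull << 36)) << 4 |
	       (hw_tag & 0xffffffff);
}
```

Note the asymmetry this creates between the accessor macros in `cnxk_eventdev.h`: `CNXK_TT_FROM_TAG()`/`CNXK_GRP_FROM_TAG()` operate on the raw hardware word (fields at bits 32 and 36), while `CNXK_TT_FROM_EVENT()` operates on the repacked event word (field at bit 38).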
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
new file mode 100644
index 000000000..4a7d0b535
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cn10k_worker.h"
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
new file mode 100644
index 000000000..0a7cb9c57
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CN10K_WORKER_H__
+#define __CN10K_WORKER_H__
+
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+/* SSO Operations */
+
+static __rte_always_inline uint8_t
+cn10k_sso_hws_new_event(struct cn10k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, ws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_fwd_swtag(struct cn10k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint8_t cur_tt = CNXK_TT_FROM_TAG(plt_read64(ws->tag_wqe_op));
+
+ /* CNXK model
+ * cur_tt/new_tt SSO_TT_ORDERED SSO_TT_ATOMIC SSO_TT_UNTAGGED
+ *
+ * SSO_TT_ORDERED norm norm untag
+ * SSO_TT_ATOMIC norm norm untag
+ * SSO_TT_UNTAGGED norm norm NOOP
+ */
+
+ if (new_tt == SSO_TT_UNTAGGED) {
+ if (cur_tt != SSO_TT_UNTAGGED)
+ cnxk_sso_hws_swtag_untag(ws->swtag_untag_op);
+ } else {
+ cnxk_sso_hws_swtag_norm(tag, new_tt, ws->swtag_norm_op);
+ }
+ ws->swtag_req = 1;
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_fwd_group(struct cn10k_sso_hws *ws, const struct rte_event *ev,
+ const uint16_t grp)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+
+ plt_write64(ev->u64, ws->updt_wqe_op);
+ cnxk_sso_hws_swtag_desched(tag, new_tt, grp, ws->swtag_desched_op);
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_forward_event(struct cn10k_sso_hws *ws,
+ const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed, Use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(ws->tag_wqe_op)) == grp)
+ cn10k_sso_hws_fwd_swtag(ws, ev);
+ else
+ /*
+ * Group has been changed for group based work pipelining,
+ * Use deschedule/add_work operation to transfer the event to
+ * new group/core
+ */
+ cn10k_sso_hws_fwd_group(ws, ev, grp);
+}
+
+static __rte_always_inline uint16_t
+cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+ gw.get_work = ws->gw_wdata;
+#if defined(RTE_ARCH_ARM64) && !defined(__clang__)
+ asm volatile(
+ PLT_CPU_FEATURE_PREAMBLE
+ "caspl %[wdata], %H[wdata], %[wdata], %H[wdata], [%[gw_loc]]\n"
+ : [wdata] "+r"(gw.get_work)
+ : [gw_loc] "r"(ws->getwrk_op)
+ : "memory");
+#else
+ plt_write64(gw.u64[0], ws->getwrk_op);
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+#endif
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+/* Used in cleaning up workslot. */
+static __rte_always_inline uint16_t
+cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldp %[tag], %[wqp], [%[tag_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldp %[tag], %[wqp], [%[tag_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_wqe_op)
+ : "memory");
+#else
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+#endif
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
new file mode 100644
index 000000000..77856f2e7
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "roc_api.h"
+
+#include "cn9k_worker.h"
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
new file mode 100644
index 000000000..ff7851642
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -0,0 +1,249 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CN9K_WORKER_H__
+#define __CN9K_WORKER_H__
+
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+/* SSO Operations */
+
+static __rte_always_inline uint8_t
+cn9k_sso_hws_new_event(struct cn9k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, ws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_fwd_swtag(struct cn9k_sso_hws_state *vws,
+ const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint8_t cur_tt = CNXK_TT_FROM_TAG(plt_read64(vws->tag_op));
+
+ /* CNXK model
+ * cur_tt/new_tt SSO_TT_ORDERED SSO_TT_ATOMIC SSO_TT_UNTAGGED
+ *
+ * SSO_TT_ORDERED norm norm untag
+ * SSO_TT_ATOMIC norm norm untag
+ * SSO_TT_UNTAGGED norm norm NOOP
+ */
+
+ if (new_tt == SSO_TT_UNTAGGED) {
+ if (cur_tt != SSO_TT_UNTAGGED)
+ cnxk_sso_hws_swtag_untag(
+ CN9K_SSOW_GET_BASE_ADDR(vws->getwrk_op) +
+ SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ } else {
+ cnxk_sso_hws_swtag_norm(tag, new_tt, vws->swtag_norm_op);
+ }
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_fwd_group(struct cn9k_sso_hws_state *ws,
+ const struct rte_event *ev, const uint16_t grp)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+
+ plt_write64(ev->u64, CN9K_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
+ SSOW_LF_GWS_OP_UPD_WQP_GRP1);
+ cnxk_sso_hws_swtag_desched(tag, new_tt, grp, ws->swtag_desched_op);
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_forward_event(struct cn9k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed, Use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(ws->tag_op)) == grp) {
+ cn9k_sso_hws_fwd_swtag((struct cn9k_sso_hws_state *)ws, ev);
+ ws->swtag_req = 1;
+ } else {
+ /*
+ * Group has been changed for group based work pipelining,
+ * Use deschedule/add_work operation to transfer the event to
+ * new group/core
+ */
+ cn9k_sso_hws_fwd_group((struct cn9k_sso_hws_state *)ws, ev,
+ grp);
+ }
+}
+
+/* Dual ws ops. */
+
+static __rte_always_inline uint8_t
+cn9k_sso_hws_dual_new_event(struct cn9k_sso_hws_dual *dws,
+ const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (dws->xaq_lmt <= *dws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, dws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws,
+ struct cn9k_sso_hws_state *vws,
+ const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed, Use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(vws->tag_op)) == grp) {
+ cn9k_sso_hws_fwd_swtag(vws, ev);
+ dws->swtag_req = 1;
+ } else {
+ /*
+ * Group has been changed for group based work pipelining,
+ * Use deschedule/add_work operation to transfer the event to
+ * new group/core
+ */
+ cn9k_sso_hws_fwd_group(vws, ev, grp);
+ }
+}
+
+static __rte_always_inline uint16_t
+cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws,
+ struct cn9k_sso_hws_state *ws_pair,
+ struct rte_event *ev)
+{
+ const uint64_t set_gw = BIT_ULL(16) | 1;
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ "rty%=: \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: str %[gw], [%[pong]] \n"
+ " dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op),
+ [gw] "r"(set_gw), [pong] "r"(ws_pair->getwrk_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+ gw.u64[1] = plt_read64(ws->wqp_op);
+ plt_write64(set_gw, ws_pair->getwrk_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+static __rte_always_inline uint16_t
+cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+ plt_write64(BIT_ULL(16) | /* wait for work. */
+ 1, /* Use Mask set 0. */
+ ws->getwrk_op);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+
+ gw.u64[1] = plt_read64(ws->wqp_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+/* Used in cleaning up workslot. */
+static __rte_always_inline uint16_t
+cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+
+ gw.u64[1] = plt_read64(ws->wqp_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+#endif
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 85f6058f2..ac55d0ccb 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -29,6 +29,16 @@
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+#define CNXK_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY)
+#define CNXK_TT_FROM_EVENT(x) (((x) >> 38) & SSO_TT_EMPTY)
+#define CNXK_EVENT_TYPE_FROM_TAG(x) (((x) >> 28) & 0xf)
+#define CNXK_SUB_EVENT_FROM_TAG(x) (((x) >> 20) & 0xff)
+#define CNXK_CLR_SUB_EVENT(x) (~(0xffu << 20) & x)
+#define CNXK_GRP_FROM_TAG(x) (((x) >> 36) & 0x3ff)
+#define CNXK_SWTAG_PEND(x) (BIT_ULL(62) & x)
+
+#define CN9K_SSOW_GET_BASE_ADDR(_GW) ((_GW)-SSOW_LF_GWS_OP_GET_WORK0)
+
#define CN10K_GW_MODE_NONE 0
#define CN10K_GW_MODE_PREF 1
#define CN10K_GW_MODE_PREF_WFE 2
diff --git a/drivers/event/cnxk/cnxk_worker.h b/drivers/event/cnxk/cnxk_worker.h
new file mode 100644
index 000000000..962d9e86e
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_worker.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CNXK_WORKER_H__
+#define __CNXK_WORKER_H__
+
+#include "cnxk_eventdev.h"
+
+/* SSO Operations */
+
+static __rte_always_inline void
+cnxk_sso_hws_add_work(const uint64_t event_ptr, const uint32_t tag,
+ const uint8_t new_tt, const uintptr_t grp_base)
+{
+ uint64_t add_work0;
+
+ add_work0 = tag | ((uint64_t)(new_tt) << 32);
+ roc_store_pair(add_work0, event_ptr, grp_base);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_desched(uint32_t tag, uint8_t new_tt, uint16_t grp,
+ uintptr_t swtag_desched_op)
+{
+ uint64_t val;
+
+ val = tag | ((uint64_t)(new_tt & 0x3) << 32) | ((uint64_t)grp << 34);
+ __atomic_store_n((uint64_t *)swtag_desched_op, val, __ATOMIC_RELEASE);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_norm(uint32_t tag, uint8_t new_tt, uintptr_t swtag_norm_op)
+{
+ uint64_t val;
+
+ val = tag | ((uint64_t)(new_tt & 0x3) << 32);
+ plt_write64(val, swtag_norm_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_untag(uintptr_t swtag_untag_op)
+{
+ plt_write64(0, swtag_untag_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_flush(uint64_t tag_op, uint64_t flush_op)
+{
+ if (CNXK_TT_FROM_TAG(plt_read64(tag_op)) == SSO_TT_EMPTY)
+ return;
+ plt_write64(0, flush_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_wait(uintptr_t tag_op)
+{
+#ifdef RTE_ARCH_ARM64
+ uint64_t swtp;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbz %[swtb], 62, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbnz %[swtb], 62, rty%= \n"
+ "done%=: \n"
+ : [swtb] "=&r"(swtp)
+ : [swtp_loc] "r"(tag_op));
+#else
+ /* Wait for the SWTAG/SWTAG_FULL operation */
+ while (plt_read64(tag_op) & BIT_ULL(62))
+ ;
+#endif
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_head_wait(uintptr_t tag_op)
+{
+#ifdef RTE_ARCH_ARM64
+ uint64_t swtp;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbz %[swtb], 35, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbnz %[swtb], 35, rty%= \n"
+ "done%=: \n"
+ : [swtb] "=&r"(swtp)
+ : [swtp_loc] "r"(tag_op));
+#else
+ /* Wait for the SWTAG/SWTAG_FULL operation */
+ while (plt_read64(tag_op) & BIT_ULL(35))
+ ;
+#endif
+}
+
+#endif
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 453c3a4f7..f2d1543ba 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -8,7 +8,9 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
subdir_done()
endif
-sources = files('cn10k_eventdev.c',
+sources = files('cn10k_worker.c',
+ 'cn10k_eventdev.c',
+ 'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
)
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 14/33] event/cnxk: add SSO GWS fastpath enqueue functions
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (12 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 13/33] event/cnxk: add SSO HW device operations pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 15/33] event/cnxk: add SSO GWS dequeue fastpath functions pbhagavatula
` (20 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO GWS fastpath event device enqueue functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 16 +++-
drivers/event/cnxk/cn10k_worker.c | 54 ++++++++++++++
drivers/event/cnxk/cn10k_worker.h | 12 +++
drivers/event/cnxk/cn9k_eventdev.c | 25 ++++++-
drivers/event/cnxk/cn9k_worker.c | 112 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_worker.h | 24 ++++++
6 files changed, 241 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 310acc011..16848798c 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -2,7 +2,9 @@
* Copyright(C) 2021 Marvell International Ltd.
*/
+#include "cn10k_worker.h"
#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
static void
cn10k_init_hws_ops(struct cn10k_sso_hws *ws, uintptr_t base)
@@ -130,6 +132,16 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void
+cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
+{
+ PLT_SET_USED(event_dev);
+ event_dev->enqueue = cn10k_sso_hws_enq;
+ event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
+ event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
+ event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
+}
+
static void
cn10k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -276,8 +288,10 @@ cn10k_sso_init(struct rte_eventdev *event_dev)
event_dev->dev_ops = &cn10k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ cn10k_sso_fp_fns_set(event_dev);
return 0;
+ }
rc = cnxk_sso_init(event_dev);
if (rc < 0)
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 4a7d0b535..cef24f4e2 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -5,3 +5,57 @@
#include "cn10k_worker.h"
#include "cnxk_eventdev.h"
#include "cnxk_worker.h"
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq(void *port, const struct rte_event *ev)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn10k_sso_hws_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn10k_sso_hws_forward_event(ws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(ws->tag_wqe_op, ws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn10k_sso_hws_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn10k_sso_hws *ws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn10k_sso_hws_new_event(ws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ RTE_SET_USED(nb_events);
+ cn10k_sso_hws_forward_event(ws, ev);
+
+ return 1;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 0a7cb9c57..d75e92846 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -148,4 +148,16 @@ cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev)
return !!gw.u64[1];
}
+/* CN10K Fastpath functions. */
+uint16_t __rte_hot cn10k_sso_hws_enq(void *port, const struct rte_event *ev);
+uint16_t __rte_hot cn10k_sso_hws_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
#endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 44c7a0c3a..7e4c1b415 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -2,7 +2,9 @@
* Copyright(C) 2021 Marvell International Ltd.
*/
+#include "cn9k_worker.h"
#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
#define CN9K_DUAL_WS_NB_WS 2
#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
@@ -150,6 +152,25 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void
+cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ event_dev->enqueue = cn9k_sso_hws_enq;
+ event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
+ event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
+ event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+
+ if (dev->dual_ws) {
+ event_dev->enqueue = cn9k_sso_hws_dual_enq;
+ event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
+ event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
+ event_dev->enqueue_forward_burst =
+ cn9k_sso_hws_dual_enq_fwd_burst;
+ }
+}
+
static void *
cn9k_sso_init_hws_mem(void *arg, uint8_t port_id)
{
@@ -349,8 +370,10 @@ cn9k_sso_init(struct rte_eventdev *event_dev)
event_dev->dev_ops = &cn9k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ cn9k_sso_fp_fns_set(event_dev);
return 0;
+ }
rc = cnxk_sso_init(event_dev);
if (rc < 0)
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index 77856f2e7..c2f09a99b 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -5,3 +5,115 @@
#include "roc_api.h"
#include "cn9k_worker.h"
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq(void *port, const struct rte_event *ev)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn9k_sso_hws_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn9k_sso_hws_forward_event(ws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn9k_sso_hws_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws *ws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn9k_sso_hws_new_event(ws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ RTE_SET_USED(nb_events);
+ cn9k_sso_hws_forward_event(ws, ev);
+
+ return 1;
+}
+
+/* Dual ws ops. */
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq(void *port, const struct rte_event *ev)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ struct cn9k_sso_hws_state *vws;
+
+ vws = &dws->ws_state[!dws->vws];
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn9k_sso_hws_dual_new_event(dws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn9k_sso_hws_dual_forward_event(dws, vws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(vws->tag_op, vws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn9k_sso_hws_dual_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn9k_sso_hws_dual_new_event(dws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+
+ RTE_SET_USED(nb_events);
+ cn9k_sso_hws_dual_forward_event(dws, &dws->ws_state[!dws->vws], ev);
+
+ return 1;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index ff7851642..e75ed10ad 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -246,4 +246,28 @@ cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
return !!gw.u64[1];
}
+/* CN9K Fastpath functions. */
+uint16_t __rte_hot cn9k_sso_hws_enq(void *port, const struct rte_event *ev);
+uint16_t __rte_hot cn9k_sso_hws_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
+uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
+ const struct rte_event *ev);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
#endif
Note that the burst enqueue variants above intentionally degrade to a single-event operation: `cn9k_sso_hws_enq_burst()` ignores `nb_events` and forwards one event per call, since the SSO admits work one event at a time; callers of the burst API still see correct semantics because the return value reports how many events were accepted.
--
2.17.1
* [dpdk-dev] [PATCH v2 15/33] event/cnxk: add SSO GWS dequeue fastpath functions
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (13 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 14/33] event/cnxk: add SSO GWS fastpath enqueue functions pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 16/33] event/cnxk: add device start function pbhagavatula
` (19 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO GWS event dequeue fastpath functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 10 ++-
drivers/event/cnxk/cn10k_worker.c | 54 +++++++++++++
drivers/event/cnxk/cn10k_worker.h | 12 +++
drivers/event/cnxk/cn9k_eventdev.c | 15 ++++
drivers/event/cnxk/cn9k_worker.c | 117 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_worker.h | 24 ++++++
6 files changed, 231 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 16848798c..a9948e1b2 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -135,11 +135,19 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
static void
cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
{
- PLT_SET_USED(event_dev);
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
event_dev->enqueue = cn10k_sso_hws_enq;
event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
+
+ event_dev->dequeue = cn10k_sso_hws_deq;
+ event_dev->dequeue_burst = cn10k_sso_hws_deq_burst;
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = cn10k_sso_hws_tmo_deq;
+ event_dev->dequeue_burst = cn10k_sso_hws_tmo_deq_burst;
+ }
}
static void
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index cef24f4e2..57b0714bb 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -59,3 +59,57 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+uint16_t __rte_hot
+cn10k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ RTE_SET_USED(timeout_ticks);
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
+ return 1;
+ }
+
+ return cn10k_sso_hws_get_work(ws, ev);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
+ uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn10k_sso_hws_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn10k_sso_hws *ws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
+ return ret;
+ }
+
+ ret = cn10k_sso_hws_get_work(ws, ev);
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
+ ret = cn10k_sso_hws_get_work(ws, ev);
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn10k_sso_hws_tmo_deq(port, ev, timeout_ticks);
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index d75e92846..ed4e3bd63 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -160,4 +160,16 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_tmo_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
#endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 7e4c1b415..8100140fc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -162,12 +162,27 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+ event_dev->dequeue = cn9k_sso_hws_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_deq_burst;
+ if (dev->deq_tmo_ns) {
+ event_dev->dequeue = cn9k_sso_hws_tmo_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_tmo_deq_burst;
+ }
+
if (dev->dual_ws) {
event_dev->enqueue = cn9k_sso_hws_dual_enq;
event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
event_dev->enqueue_forward_burst =
cn9k_sso_hws_dual_enq_fwd_burst;
+
+ event_dev->dequeue = cn9k_sso_hws_dual_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_dual_deq_burst;
+ if (dev->deq_tmo_ns) {
+ event_dev->dequeue = cn9k_sso_hws_dual_tmo_deq;
+ event_dev->dequeue_burst =
+ cn9k_sso_hws_dual_tmo_deq_burst;
+ }
}
}
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index c2f09a99b..41ffd88a0 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -60,6 +60,60 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+uint16_t __rte_hot
+cn9k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ RTE_SET_USED(timeout_ticks);
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_op);
+ return 1;
+ }
+
+ return cn9k_sso_hws_get_work(ws, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
+ uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws *ws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_op);
+ return ret;
+ }
+
+ ret = cn9k_sso_hws_get_work(ws, ev);
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
+ ret = cn9k_sso_hws_get_work(ws, ev);
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_tmo_deq(port, ev, timeout_ticks);
+}
+
/* Dual ws ops. */
uint16_t __rte_hot
@@ -117,3 +171,66 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t gw;
+
+ RTE_SET_USED(timeout_ticks);
+ if (dws->swtag_req) {
+ dws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(dws->ws_state[!dws->vws].tag_op);
+ return 1;
+ }
+
+ gw = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ return gw;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_dual_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (dws->swtag_req) {
+ dws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(dws->ws_state[!dws->vws].tag_op);
+ return ret;
+ }
+
+ ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) {
+ ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ }
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_dual_tmo_deq(port, ev, timeout_ticks);
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index e75ed10ad..b997db2fe 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -270,4 +270,28 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
+uint16_t __rte_hot cn9k_sso_hws_dual_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
#endif
--
2.17.1
* [dpdk-dev] [PATCH v2 16/33] event/cnxk: add device start function
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (14 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 15/33] event/cnxk: add SSO GWS dequeue fastpath functions pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 17/33] event/cnxk: add device stop and close functions pbhagavatula
` (18 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add the eventdev start function, along with a few cleanup APIs to keep
device state sane across start/stop cycles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 127 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 113 +++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 64 ++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 7 ++
4 files changed, 311 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index a9948e1b2..0de44ed43 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -112,6 +112,117 @@ cn10k_sso_hws_release(void *arg, void *hws)
memset(ws, 0, sizeof(*ws));
}
+static void
+cn10k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg)
+{
+ struct cn10k_sso_hws *ws = hws;
+ uint64_t cq_ds_cnt = 1;
+ uint64_t aq_cnt = 1;
+ uint64_t ds_cnt = 1;
+ struct rte_event ev;
+ uint64_t val, req;
+
+ plt_write64(0, base + SSO_LF_GGRP_QCTL);
+
+ req = queue_id; /* GGRP ID */
+ req |= BIT_ULL(18); /* Grouped */
+ req |= BIT_ULL(16); /* WAIT */
+
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+
+ while (aq_cnt || cq_ds_cnt || ds_cnt) {
+ plt_write64(req, ws->getwrk_op);
+ cn10k_sso_hws_get_work_empty(ws, &ev);
+ if (fn != NULL && ev.u64 != 0)
+ fn(arg, ev);
+ if (ev.sched_type != SSO_TT_EMPTY)
+ cnxk_sso_hws_swtag_flush(ws->tag_wqe_op,
+ ws->swtag_flush_op);
+ do {
+ val = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+ } while (val & BIT_ULL(56));
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ /* Extract cq and ds count */
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+ }
+
+ plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+ rte_mb();
+}
+
+static void
+cn10k_sso_hws_reset(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = hws;
+ uintptr_t base = ws->base;
+ uint64_t pend_state;
+ union {
+ __uint128_t wdata;
+ uint64_t u64[2];
+ } gw;
+ uint8_t pend_tt;
+
+ /* Wait till getwork/swtp/waitw/desched completes. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+ BIT_ULL(56) | BIT_ULL(54)));
+ pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(base +
+ SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+
+ /* Wait for desched to complete. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & BIT_ULL(58));
+
+ switch (dev->gw_mode) {
+ case CN10K_GW_MODE_PREF:
+ while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) & BIT_ULL(63))
+ ;
+ break;
+ case CN10K_GW_MODE_PREF_WFE:
+ while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) &
+ SSOW_LF_GWS_TAG_PEND_GET_WORK_BIT)
+ continue;
+ plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+ break;
+ case CN10K_GW_MODE_NONE:
+ default:
+ break;
+ }
+
+ if (CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_PRF_WQE0)) !=
+ SSO_TT_EMPTY) {
+ plt_write64(BIT_ULL(16) | 1, ws->getwrk_op);
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+ pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC ||
+ pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(
+ base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+ }
+
+ plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+ rte_mb();
+}
+
static void
cn10k_sso_set_rsrc(void *arg)
{
@@ -263,6 +374,20 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
return (int)nb_unlinks;
}
+static int
+cn10k_sso_start(struct rte_eventdev *event_dev)
+{
+ int rc;
+
+ rc = cnxk_sso_start(event_dev, cn10k_sso_hws_reset,
+ cn10k_sso_hws_flush_events);
+ if (rc < 0)
+ return rc;
+ cn10k_sso_fp_fns_set(event_dev);
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -275,6 +400,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_link = cn10k_sso_port_link,
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+
+ .dev_start = cn10k_sso_start,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 8100140fc..39f29b687 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -126,6 +126,102 @@ cn9k_sso_hws_release(void *arg, void *hws)
}
}
+static void
+cn9k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(arg);
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws_state *st;
+ struct cn9k_sso_hws *ws;
+ uint64_t cq_ds_cnt = 1;
+ uint64_t aq_cnt = 1;
+ uint64_t ds_cnt = 1;
+ struct rte_event ev;
+ uintptr_t ws_base;
+ uint64_t val, req;
+
+ plt_write64(0, base + SSO_LF_GGRP_QCTL);
+
+ req = queue_id; /* GGRP ID */
+ req |= BIT_ULL(18); /* Grouped */
+ req |= BIT_ULL(16); /* WAIT */
+
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+
+ if (dev->dual_ws) {
+ dws = hws;
+ st = &dws->ws_state[0];
+ ws_base = dws->base[0];
+ } else {
+ ws = hws;
+ st = (struct cn9k_sso_hws_state *)ws;
+ ws_base = ws->base;
+ }
+
+ while (aq_cnt || cq_ds_cnt || ds_cnt) {
+ plt_write64(req, st->getwrk_op);
+ cn9k_sso_hws_get_work_empty(st, &ev);
+ if (fn != NULL && ev.u64 != 0)
+ fn(arg, ev);
+ if (ev.sched_type != SSO_TT_EMPTY)
+ cnxk_sso_hws_swtag_flush(st->tag_op,
+ st->swtag_flush_op);
+ do {
+ val = plt_read64(ws_base + SSOW_LF_GWS_PENDSTATE);
+ } while (val & BIT_ULL(56));
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ /* Extract cq and ds count */
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+ }
+
+ plt_write64(0, ws_base + SSOW_LF_GWS_OP_GWC_INVAL);
+}
+
+static void
+cn9k_sso_hws_reset(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ uint64_t pend_state;
+ uint8_t pend_tt;
+ uintptr_t base;
+ uint64_t tag;
+ uint8_t i;
+
+ dws = hws;
+ ws = hws;
+ for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
+ base = dev->dual_ws ? dws->base[i] : ws->base;
+ /* Wait till getwork/swtp/waitw/desched completes. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+ BIT_ULL(56)));
+
+ tag = plt_read64(base + SSOW_LF_GWS_TAG);
+ pend_tt = (tag >> 32) & 0x3;
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC ||
+ pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(
+ base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+
+ /* Wait for desched to complete. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & BIT_ULL(58));
+ }
+}
+
static void
cn9k_sso_set_rsrc(void *arg)
{
@@ -352,6 +448,21 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
return (int)nb_unlinks;
}
+static int
+cn9k_sso_start(struct rte_eventdev *event_dev)
+{
+ int rc;
+
+ rc = cnxk_sso_start(event_dev, cn9k_sso_hws_reset,
+ cn9k_sso_hws_flush_events);
+ if (rc < 0)
+ return rc;
+
+ cn9k_sso_fp_fns_set(event_dev);
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -364,6 +475,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_link = cn9k_sso_port_link,
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+
+ .dev_start = cn9k_sso_start,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 0e2cc3681..0059b0eca 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -326,6 +326,70 @@ cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
return 0;
}
+static void
+cnxk_handle_event(void *arg, struct rte_event event)
+{
+ struct rte_eventdev *event_dev = arg;
+
+ if (event_dev->dev_ops->dev_stop_flush != NULL)
+ event_dev->dev_ops->dev_stop_flush(
+ event_dev->data->dev_id, event,
+ event_dev->data->dev_stop_flush_arg);
+}
+
+static void
+cnxk_sso_cleanup(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn, uint8_t enable)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uintptr_t hwgrp_base;
+ uint16_t i;
+ void *ws;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ ws = event_dev->data->ports[i];
+ reset_fn(dev, ws);
+ }
+
+ rte_mb();
+ ws = event_dev->data->ports[0];
+
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ /* Consume all the events through HWS0 */
+ hwgrp_base = roc_sso_hwgrp_base_get(&dev->sso, i);
+ flush_fn(ws, i, hwgrp_base, cnxk_handle_event, event_dev);
+ /* Enable/Disable SSO GGRP */
+ plt_write64(enable, hwgrp_base + SSO_LF_GGRP_QCTL);
+ }
+}
+
+int
+cnxk_sso_start(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_qos qos[dev->qos_queue_cnt];
+ int i, rc;
+
+ plt_sso_dbg();
+ for (i = 0; i < dev->qos_queue_cnt; i++) {
+ qos[i].hwgrp = dev->qos_parse_data[i].queue;
+ qos[i].iaq_prcnt = dev->qos_parse_data[i].iaq_prcnt;
+ qos[i].taq_prcnt = dev->qos_parse_data[i].taq_prcnt;
+ qos[i].xaq_prcnt = dev->qos_parse_data[i].xaq_prcnt;
+ }
+ rc = roc_sso_hwgrp_qos_config(&dev->sso, qos, dev->qos_queue_cnt,
+ dev->xae_cnt);
+ if (rc < 0) {
+ plt_sso_dbg("failed to configure HWGRP QoS rc = %d", rc);
+ return -EINVAL;
+ }
+ cnxk_sso_cleanup(event_dev, reset_fn, flush_fn, true);
+ rte_mb();
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index ac55d0ccb..6ead171c0 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -48,6 +48,10 @@ typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
uint16_t nb_link);
+typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
+typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
+typedef void (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg);
struct cnxk_sso_qos {
uint16_t queue;
@@ -198,5 +202,8 @@ int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
cnxk_sso_hws_setup_t hws_setup_fn);
int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
uint64_t *tmo_ticks);
+int cnxk_sso_start(struct rte_eventdev *event_dev,
+ cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
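The drain loop in cn9k_sso_hws_flush_events() above keeps issuing GETWORK requests until the hardware's pending counters (SSO_LF_GGRP_AQ_CNT/MISC_CNT/INT_CNT) reach zero, handing each dequeued event to an optional callback. The same shape can be sketched as a plain software model; everything here (toy_queue, toy_flush and friends) is invented for illustration and is not part of the driver:

```c
/* Hypothetical model, not the cnxk driver: drain a work queue while
 * invoking a user flush callback, the way cn9k_sso_hws_flush_events()
 * loops on the SSO pending counters. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef void (*handle_event_t)(void *arg, uint64_t ev);

struct toy_queue {
	uint64_t events[64];
	size_t head, tail; /* head == tail => empty */
};

/* Pop one event; returns 0 when the queue is empty (mirrors ev.u64 == 0). */
static uint64_t toy_get_work(struct toy_queue *q)
{
	if (q->head == q->tail)
		return 0;
	return q->events[q->head++];
}

/* Count events seen by the flush callback. */
static void toy_count(void *arg, uint64_t ev)
{
	(void)ev;
	(*(size_t *)arg)++;
}

/* Drain until nothing is pending, handing each event to fn; returns the
 * number of events flushed. The driver loops the same way until the
 * AQ/MISC/INT counters all read zero. */
static size_t toy_flush(struct toy_queue *q, handle_event_t fn, void *arg)
{
	size_t flushed = 0;
	uint64_t ev;

	while ((ev = toy_get_work(q)) != 0) {
		if (fn != NULL)
			fn(arg, ev);
		flushed++;
	}
	return flushed;
}
```

The driver's version additionally flushes switch-tag state and polls SSOW_LF_GWS_PENDSTATE between iterations; the model omits those hardware steps.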
* [dpdk-dev] [PATCH v2 17/33] event/cnxk: add device stop and close functions
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (15 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 16/33] event/cnxk: add device start function pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 18/33] event/cnxk: add SSO selftest and dump pbhagavatula
` (17 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event device stop and close callback functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 15 +++++++++
drivers/event/cnxk/cn9k_eventdev.c | 14 +++++++++
drivers/event/cnxk/cnxk_eventdev.c | 48 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 ++++
4 files changed, 83 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 0de44ed43..6a0b9bcd9 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -388,6 +388,19 @@ cn10k_sso_start(struct rte_eventdev *event_dev)
return rc;
}
+static void
+cn10k_sso_stop(struct rte_eventdev *event_dev)
+{
+ cnxk_sso_stop(event_dev, cn10k_sso_hws_reset,
+ cn10k_sso_hws_flush_events);
+}
+
+static int
+cn10k_sso_close(struct rte_eventdev *event_dev)
+{
+ return cnxk_sso_close(event_dev, cn10k_sso_hws_unlink);
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -402,6 +415,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.timeout_ticks = cnxk_sso_timeout_ticks,
.dev_start = cn10k_sso_start,
+ .dev_stop = cn10k_sso_stop,
+ .dev_close = cn10k_sso_close,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 39f29b687..195ed49d8 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -463,6 +463,18 @@ cn9k_sso_start(struct rte_eventdev *event_dev)
return rc;
}
+static void
+cn9k_sso_stop(struct rte_eventdev *event_dev)
+{
+ cnxk_sso_stop(event_dev, cn9k_sso_hws_reset, cn9k_sso_hws_flush_events);
+}
+
+static int
+cn9k_sso_close(struct rte_eventdev *event_dev)
+{
+ return cnxk_sso_close(event_dev, cn9k_sso_hws_unlink);
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -477,6 +489,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.timeout_ticks = cnxk_sso_timeout_ticks,
.dev_start = cn9k_sso_start,
+ .dev_stop = cn9k_sso_stop,
+ .dev_close = cn9k_sso_close,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 0059b0eca..01685633d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -390,6 +390,54 @@ cnxk_sso_start(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
return 0;
}
+void
+cnxk_sso_stop(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn)
+{
+ plt_sso_dbg();
+ cnxk_sso_cleanup(event_dev, reset_fn, flush_fn, false);
+ rte_mb();
+}
+
+int
+cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
+ uint16_t i;
+ void *ws;
+
+ if (!dev->configured)
+ return 0;
+
+ for (i = 0; i < dev->nb_event_queues; i++)
+ all_queues[i] = i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ ws = event_dev->data->ports[i];
+ unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+ rte_free(cnxk_sso_hws_get_cookie(ws));
+ event_dev->data->ports[i] = NULL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+ rte_mempool_free(dev->xaq_pool);
+ rte_memzone_free(rte_memzone_lookup(CNXK_SSO_FC_NAME));
+
+ dev->fc_iova = 0;
+ dev->fc_mem = NULL;
+ dev->xaq_pool = NULL;
+ dev->configured = false;
+ dev->is_timeout_deq = 0;
+ dev->nb_event_ports = 0;
+ dev->max_num_events = -1;
+ dev->nb_event_queues = 0;
+ dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+ dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 6ead171c0..1030d5840 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -48,6 +48,8 @@ typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
uint16_t nb_link);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
+ uint16_t nb_link);
typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
typedef void (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
@@ -205,5 +207,9 @@ int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
int cnxk_sso_start(struct rte_eventdev *event_dev,
cnxk_sso_hws_reset_t reset_fn,
cnxk_sso_hws_flush_t flush_fn);
+void cnxk_sso_stop(struct rte_eventdev *event_dev,
+ cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn);
+int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
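cnxk_sso_close() above builds an all_queues[] array and unlinks every queue from every port before releasing resources, so no port still references a queue when roc_sso_rsrc_fini() tears things down. A minimal self-contained sketch of that sweep (toy_dev and its helpers are invented for illustration, not driver types):

```c
/* Hypothetical model of the close-time unlink sweep in cnxk_sso_close():
 * every port is unlinked from every queue before teardown. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NQ 8
#define NP 4

struct toy_dev {
	bool link[NP][NQ]; /* link[p][q] true when port p is linked to queue q */
};

/* Unlink the given queues from one port; returns the number of unlinks,
 * matching the convention of the PMD unlink callbacks. */
static int toy_unlink(struct toy_dev *d, uint8_t port,
		      const uint16_t *queues, uint16_t nb)
{
	for (uint16_t i = 0; i < nb; i++)
		d->link[port][queues[i]] = false;
	return nb;
}

/* Build the all-queues list once, then sweep it across every port. */
static void toy_close(struct toy_dev *d, uint16_t nb_queues, uint8_t nb_ports)
{
	uint16_t all_queues[NQ];

	for (uint16_t i = 0; i < nb_queues; i++)
		all_queues[i] = i;
	for (uint8_t p = 0; p < nb_ports; p++)
		toy_unlink(d, p, all_queues, nb_queues);
}

static bool toy_any_linked(const struct toy_dev *d)
{
	for (int p = 0; p < NP; p++)
		for (int q = 0; q < NQ; q++)
			if (d->link[p][q])
				return true;
	return false;
}
```

After the sweep the driver goes on to free the port cookies, the XAQ pool and the flow-control memzone, then resets the software state fields to their unconfigured defaults.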
* [dpdk-dev] [PATCH v2 18/33] event/cnxk: add SSO selftest and dump
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (16 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 17/33] event/cnxk: add device stop and close functions pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 19/33] event/cnxk: add event port and queue xstats pbhagavatula
` (16 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a selftest to verify SSO sanity and a function to dump the
internal state of the SSO.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 14 +
drivers/event/cnxk/cn10k_eventdev.c | 8 +
drivers/event/cnxk/cn9k_eventdev.c | 10 +-
drivers/event/cnxk/cnxk_eventdev.c | 8 +
drivers/event/cnxk/cnxk_eventdev.h | 5 +
drivers/event/cnxk/cnxk_eventdev_selftest.c | 1570 +++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
7 files changed, 1615 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_selftest.c
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index bcfaa53cb..843d9766b 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1036,6 +1036,18 @@ test_eventdev_selftest_dlb2(void)
return test_eventdev_selftest_impl("dlb2_event", "");
}
+static int
+test_eventdev_selftest_cn9k(void)
+{
+ return test_eventdev_selftest_impl("event_cn9k", "");
+}
+
+static int
+test_eventdev_selftest_cn10k(void)
+{
+ return test_eventdev_selftest_impl("event_cn10k", "");
+}
+
REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw);
REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
@@ -1044,3 +1056,5 @@ REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
test_eventdev_selftest_octeontx2);
REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
+REGISTER_TEST_COMMAND(eventdev_selftest_cn9k, test_eventdev_selftest_cn9k);
+REGISTER_TEST_COMMAND(eventdev_selftest_cn10k, test_eventdev_selftest_cn10k);
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 6a0b9bcd9..74070e005 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -401,6 +401,12 @@ cn10k_sso_close(struct rte_eventdev *event_dev)
return cnxk_sso_close(event_dev, cn10k_sso_hws_unlink);
}
+static int
+cn10k_sso_selftest(void)
+{
+ return cnxk_sso_selftest(RTE_STR(event_cn10k));
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -414,9 +420,11 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .dump = cnxk_sso_dump,
.dev_start = cn10k_sso_start,
.dev_stop = cn10k_sso_stop,
.dev_close = cn10k_sso_close,
+ .dev_selftest = cn10k_sso_selftest,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 195ed49d8..4fb0f1ccc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -222,7 +222,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
}
}
-static void
+void
cn9k_sso_set_rsrc(void *arg)
{
struct cnxk_sso_evdev *dev = arg;
@@ -475,6 +475,12 @@ cn9k_sso_close(struct rte_eventdev *event_dev)
return cnxk_sso_close(event_dev, cn9k_sso_hws_unlink);
}
+static int
+cn9k_sso_selftest(void)
+{
+ return cnxk_sso_selftest(RTE_STR(event_cn9k));
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -488,9 +494,11 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .dump = cnxk_sso_dump,
.dev_start = cn9k_sso_start,
.dev_stop = cn9k_sso_stop,
.dev_close = cn9k_sso_close,
+ .dev_selftest = cn9k_sso_selftest,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 01685633d..dbd35ca5d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -326,6 +326,14 @@ cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
return 0;
}
+void
+cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ roc_sso_dump(&dev->sso, dev->sso.nb_hws, dev->sso.nb_hwgrp, f);
+}
+
static void
cnxk_handle_event(void *arg, struct rte_event event)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 1030d5840..ee7dce5f5 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -211,5 +211,10 @@ void cnxk_sso_stop(struct rte_eventdev *event_dev,
cnxk_sso_hws_reset_t reset_fn,
cnxk_sso_hws_flush_t flush_fn);
int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
+int cnxk_sso_selftest(const char *dev_name);
+void cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f);
+
+/* CN9K */
+void cn9k_sso_set_rsrc(void *arg);
#endif /* __CNXK_EVENTDEV_H__ */
diff --git a/drivers/event/cnxk/cnxk_eventdev_selftest.c b/drivers/event/cnxk/cnxk_eventdev_selftest.c
new file mode 100644
index 000000000..c99a81327
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_selftest.c
@@ -0,0 +1,1570 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_hexdump.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_memcpy.h>
+#include <rte_per_lcore.h>
+#include <rte_random.h>
+#include <rte_test.h>
+
+#include "cnxk_eventdev.h"
+
+#define NUM_PACKETS (1024)
+#define MAX_EVENTS (1024)
+#define MAX_STAGES (255)
+
+#define CNXK_TEST_RUN(setup, teardown, test) \
+ cnxk_test_run(setup, teardown, test, #test)
+
+static int total;
+static int passed;
+static int failed;
+static int unsupported;
+
+static int evdev;
+static struct rte_mempool *eventdev_test_mempool;
+
+struct event_attr {
+ uint32_t flow_id;
+ uint8_t event_type;
+ uint8_t sub_event_type;
+ uint8_t sched_type;
+ uint8_t queue;
+ uint8_t port;
+};
+
+static uint32_t seqn_list_index;
+static int seqn_list[NUM_PACKETS];
+
+static inline void
+seqn_list_init(void)
+{
+ RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
+ memset(seqn_list, 0, sizeof(seqn_list));
+ seqn_list_index = 0;
+}
+
+static inline int
+seqn_list_update(int val)
+{
+ if (seqn_list_index >= NUM_PACKETS)
+ return -1;
+
+ seqn_list[seqn_list_index++] = val;
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ return 0;
+}
+
+static inline int
+seqn_list_check(int limit)
+{
+ int i;
+
+ for (i = 0; i < limit; i++) {
+ if (seqn_list[i] != i) {
+ plt_err("Seqn mismatch %d %d", seqn_list[i], i);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+struct test_core_param {
+ uint32_t *total_events;
+ uint64_t dequeue_tmo_ticks;
+ uint8_t port;
+ uint8_t sched_type;
+};
+
+static int
+testsuite_setup(const char *eventdev_name)
+{
+ evdev = rte_event_dev_get_dev_id(eventdev_name);
+ if (evdev < 0) {
+ plt_err("%d: Eventdev %s not found", __LINE__, eventdev_name);
+ return -1;
+ }
+ return 0;
+}
+
+static void
+testsuite_teardown(void)
+{
+ rte_event_dev_close(evdev);
+ total = 0;
+ passed = 0;
+ failed = 0;
+ unsupported = 0;
+}
+
+static inline void
+devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
+ struct rte_event_dev_info *info)
+{
+ memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
+ dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
+ dev_conf->nb_event_ports = info->max_event_ports;
+ dev_conf->nb_event_queues = info->max_event_queues;
+ dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
+ dev_conf->nb_event_port_dequeue_depth =
+ info->max_event_port_dequeue_depth;
+ dev_conf->nb_event_port_enqueue_depth =
+ info->max_event_port_enqueue_depth;
+ dev_conf->nb_events_limit = info->max_num_events;
+}
+
+enum {
+ TEST_EVENTDEV_SETUP_DEFAULT,
+ TEST_EVENTDEV_SETUP_PRIORITY,
+ TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT,
+};
+
+static inline int
+_eventdev_setup(int mode)
+{
+ const char *pool_name = "evdev_cnxk_test_pool";
+ struct rte_event_dev_config dev_conf;
+ struct rte_event_dev_info info;
+ int i, ret;
+
+ /* Create and destroy pool for each test case to make it standalone */
+ eventdev_test_mempool = rte_pktmbuf_pool_create(
+ pool_name, MAX_EVENTS, 0, 0, 512, rte_socket_id());
+ if (!eventdev_test_mempool) {
+ plt_err("ERROR creating mempool");
+ return -1;
+ }
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+ devconf_set_default_sane_values(&dev_conf, &info);
+ if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
+ dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
+
+ ret = rte_event_dev_configure(evdev, &dev_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+ uint32_t queue_count;
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
+ if (queue_count > 8)
+ queue_count = 8;
+
+ /* Configure event queues (0 to n) with
+ * RTE_EVENT_DEV_PRIORITY_HIGHEST to
+ * RTE_EVENT_DEV_PRIORITY_LOWEST
+ */
+ uint8_t step =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) / queue_count;
+ for (i = 0; i < (int)queue_count; i++) {
+ struct rte_event_queue_conf queue_conf;
+
+ ret = rte_event_queue_default_conf_get(evdev, i,
+ &queue_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
+ i);
+ queue_conf.priority = i * step;
+ ret = rte_event_queue_setup(evdev, i, &queue_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+ i);
+ }
+
+ } else {
+ /* Configure event queues with default priority */
+ for (i = 0; i < (int)queue_count; i++) {
+ ret = rte_event_queue_setup(evdev, i, NULL);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+ i);
+ }
+ }
+ /* Configure event ports */
+ uint32_t port_count;
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &port_count),
+ "Port count get failed");
+ for (i = 0; i < (int)port_count; i++) {
+ ret = rte_event_port_setup(evdev, i, NULL);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
+ ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
+ i);
+ }
+
+ ret = rte_event_dev_start(evdev);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");
+
+ return 0;
+}
+
+static inline int
+eventdev_setup(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT);
+}
+
+static inline int
+eventdev_setup_priority(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_PRIORITY);
+}
+
+static inline int
+eventdev_setup_dequeue_timeout(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT);
+}
+
+static inline void
+eventdev_teardown(void)
+{
+ rte_event_dev_stop(evdev);
+ rte_mempool_free(eventdev_test_mempool);
+}
+
+static inline void
+update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev,
+ uint32_t flow_id, uint8_t event_type,
+ uint8_t sub_event_type, uint8_t sched_type,
+ uint8_t queue, uint8_t port)
+{
+ struct event_attr *attr;
+
+ /* Store the event attributes in mbuf for future reference */
+ attr = rte_pktmbuf_mtod(m, struct event_attr *);
+ attr->flow_id = flow_id;
+ attr->event_type = event_type;
+ attr->sub_event_type = sub_event_type;
+ attr->sched_type = sched_type;
+ attr->queue = queue;
+ attr->port = port;
+
+ ev->flow_id = flow_id;
+ ev->sub_event_type = sub_event_type;
+ ev->event_type = event_type;
+ /* Inject the new event */
+ ev->op = RTE_EVENT_OP_NEW;
+ ev->sched_type = sched_type;
+ ev->queue_id = queue;
+ ev->mbuf = m;
+}
+
+static inline int
+inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
+ uint8_t sched_type, uint8_t queue, uint8_t port,
+ unsigned int events)
+{
+ struct rte_mbuf *m;
+ unsigned int i;
+
+ for (i = 0; i < events; i++) {
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+ *rte_event_pmd_selftest_seqn(m) = i;
+ update_event_and_validation_attr(m, &ev, flow_id, event_type,
+ sub_event_type, sched_type,
+ queue, port);
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ return 0;
+}
+
+static inline int
+check_excess_events(uint8_t port)
+{
+ uint16_t valid_event;
+ struct rte_event ev;
+ int i;
+
+ /* Check for excess events, try a few times and exit */
+ for (i = 0; i < 32; i++) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+
+ RTE_TEST_ASSERT_SUCCESS(valid_event,
+ "Unexpected valid event=%d",
+ *rte_event_pmd_selftest_seqn(ev.mbuf));
+ }
+ return 0;
+}
+
+static inline int
+generate_random_events(const unsigned int total_events)
+{
+ struct rte_event_dev_info info;
+ uint32_t queue_count;
+ unsigned int i;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+ for (i = 0; i < total_events; i++) {
+ ret = inject_events(
+ rte_rand() % info.max_event_queue_flows /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ rte_rand() % queue_count /* queue */, 0 /* port */,
+ 1 /* events */);
+ if (ret)
+ return -1;
+ }
+ return ret;
+}
+
+static inline int
+validate_event(struct rte_event *ev)
+{
+ struct event_attr *attr;
+
+ attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
+ RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
+ "flow_id mismatch enq=%d deq =%d", attr->flow_id,
+ ev->flow_id);
+ RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
+ "event_type mismatch enq=%d deq =%d",
+ attr->event_type, ev->event_type);
+ RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
+ "sub_event_type mismatch enq=%d deq =%d",
+ attr->sub_event_type, ev->sub_event_type);
+ RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
+ "sched_type mismatch enq=%d deq =%d",
+ attr->sched_type, ev->sched_type);
+ RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
+ "queue mismatch enq=%d deq =%d", attr->queue,
+ ev->queue_id);
+ return 0;
+}
+
+typedef int (*validate_event_cb)(uint32_t index, uint8_t port,
+ struct rte_event *ev);
+
+static inline int
+consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
+{
+ uint32_t events = 0, forward_progress_cnt = 0, index = 0;
+ uint16_t valid_event;
+ struct rte_event ev;
+ int ret;
+
+ while (1) {
+ if (++forward_progress_cnt > UINT16_MAX) {
+ plt_err("Detected deadlock");
+ return -1;
+ }
+
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ forward_progress_cnt = 0;
+ ret = validate_event(&ev);
+ if (ret)
+ return -1;
+
+ if (fn != NULL) {
+ ret = fn(index, port, &ev);
+ RTE_TEST_ASSERT_SUCCESS(
+ ret, "Failed to validate test specific event");
+ }
+
+ ++index;
+
+ rte_pktmbuf_free(ev.mbuf);
+ if (++events >= total_events)
+ break;
+ }
+
+ return check_excess_events(port);
+}
+
+static int
+validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
+{
+ RTE_SET_USED(port);
+ RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf),
+ "index=%d != seqn=%d", index,
+ *rte_event_pmd_selftest_seqn(ev->mbuf));
+ return 0;
+}
+
+static inline int
+test_simple_enqdeq(uint8_t sched_type)
+{
+ int ret;
+
+ ret = inject_events(0 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type */, sched_type, 0 /* queue */,
+ 0 /* port */, MAX_EVENTS);
+ if (ret)
+ return -1;
+
+ return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
+}
+
+static int
+test_simple_enqdeq_ordered(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_simple_enqdeq_atomic(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_simple_enqdeq_parallel(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL);
+}
+
+/*
+ * Generate a prescribed number of events and spread them across available
+ * queues. On dequeue, using single event port(port 0) verify the enqueued
+ * event attributes
+ */
+static int
+test_multi_queue_enq_single_port_deq(void)
+{
+ int ret;
+
+ ret = generate_random_events(MAX_EVENTS);
+ if (ret)
+ return -1;
+
+ return consume_events(0 /* port */, MAX_EVENTS, NULL);
+}
+
+/*
+ * Inject 0..MAX_EVENTS events over 0..queue_count with modulus
+ * operation
+ *
+ * For example, Inject 32 events over 0..7 queues
+ * enqueue events 0, 8, 16, 24 in queue 0
+ * enqueue events 1, 9, 17, 25 in queue 1
+ * ..
+ * ..
+ * enqueue events 7, 15, 23, 31 in queue 7
+ *
+ * On dequeue, validate that the events come in 0,8,16,24,1,9,17,25..,7,15,23,31
+ * order from queue0 (highest priority) to queue7 (lowest priority)
+ */
+static int
+validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
+{
+ uint32_t queue_count;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count > 8)
+ queue_count = 8;
+ uint32_t range = MAX_EVENTS / queue_count;
+ uint32_t expected_val = (index % range) * queue_count;
+
+ expected_val += ev->queue_id;
+ RTE_SET_USED(port);
+ RTE_TEST_ASSERT_EQUAL(
+ *rte_event_pmd_selftest_seqn(ev->mbuf), expected_val,
+ "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
+ *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val,
+ range, queue_count, MAX_EVENTS);
+ return 0;
+}
+
+static int
+test_multi_queue_priority(void)
+{
+ int i, max_evts_roundoff;
+ /* See validate_queue_priority() comments for priority validate logic */
+ uint32_t queue_count;
+ struct rte_mbuf *m;
+ uint8_t queue;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count > 8)
+ queue_count = 8;
+ max_evts_roundoff = MAX_EVENTS / queue_count;
+ max_evts_roundoff *= queue_count;
+
+ for (i = 0; i < max_evts_roundoff; i++) {
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+ *rte_event_pmd_selftest_seqn(m) = i;
+ queue = i % queue_count;
+ update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,
+ 0, RTE_SCHED_TYPE_PARALLEL,
+ queue, 0);
+ rte_event_enqueue_burst(evdev, 0, &ev, 1);
+ }
+
+ return consume_events(0, max_evts_roundoff, validate_queue_priority);
+}
+
+static int
+worker_multi_port_fn(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint32_t *total_events = param->total_events;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+ int ret;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ ret = validate_event(&ev);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ }
+
+ return 0;
+}
+
+static inline int
+wait_workers_to_join(const uint32_t *count)
+{
+ uint64_t cycles, print_cycles;
+
+ cycles = rte_get_timer_cycles();
+ print_cycles = cycles;
+ while (__atomic_load_n(count, __ATOMIC_RELAXED)) {
+ uint64_t new_cycles = rte_get_timer_cycles();
+
+ if (new_cycles - print_cycles > rte_get_timer_hz()) {
+ plt_info("Events %d",
+ __atomic_load_n(count, __ATOMIC_RELAXED));
+ print_cycles = new_cycles;
+ }
+ if (new_cycles - cycles > rte_get_timer_hz() * 10) {
+ plt_err("No schedules for 10 seconds, deadlock (%d)",
+ __atomic_load_n(count, __ATOMIC_RELAXED));
+ rte_event_dev_dump(evdev, stdout);
+ cycles = new_cycles;
+ return -1;
+ }
+ }
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+static inline int
+launch_workers_and_wait(int (*main_thread)(void *),
+ int (*worker_thread)(void *), uint32_t total_events,
+ uint8_t nb_workers, uint8_t sched_type)
+{
+ uint32_t atomic_total_events;
+ struct test_core_param *param;
+ uint64_t dequeue_tmo_ticks;
+ uint8_t port = 0;
+ int w_lcore;
+ int ret;
+
+ if (!nb_workers)
+ return 0;
+
+ __atomic_store_n(&atomic_total_events, total_events, __ATOMIC_RELAXED);
+ seqn_list_init();
+
+ param = malloc(sizeof(struct test_core_param) * nb_workers);
+ if (!param)
+ return -1;
+
+ ret = rte_event_dequeue_timeout_ticks(
+ evdev, rte_rand() % 10000000 /* 10ms */, &dequeue_tmo_ticks);
+ if (ret) {
+ free(param);
+ return -1;
+ }
+
+ param[0].total_events = &atomic_total_events;
+ param[0].sched_type = sched_type;
+ param[0].port = 0;
+ param[0].dequeue_tmo_ticks = dequeue_tmo_ticks;
+ rte_wmb();
+
+ w_lcore = rte_get_next_lcore(
+ /* start core */ -1,
+ /* skip main */ 1,
+ /* wrap */ 0);
+ rte_eal_remote_launch(main_thread, &param[0], w_lcore);
+
+ for (port = 1; port < nb_workers; port++) {
+ param[port].total_events = &atomic_total_events;
+ param[port].sched_type = sched_type;
+ param[port].port = port;
+ param[port].dequeue_tmo_ticks = dequeue_tmo_ticks;
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ w_lcore = rte_get_next_lcore(w_lcore, 1, 0);
+ rte_eal_remote_launch(worker_thread, &param[port], w_lcore);
+ }
+
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ ret = wait_workers_to_join(&atomic_total_events);
+ free(param);
+
+ return ret;
+}
+
+/*
+ * Generate a prescribed number of events and spread them across available
+ * queues. Dequeue the events through multiple ports and verify the enqueued
+ * event attributes
+ */
+static int
+test_multi_queue_enq_multi_port_deq(void)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t nr_ports;
+ int ret;
+
+ ret = generate_random_events(total_events);
+ if (ret)
+ return -1;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ return launch_workers_and_wait(worker_multi_port_fn,
+ worker_multi_port_fn, total_events,
+ nr_ports, 0xff /* invalid */);
+}
+
+static void
+flush(uint8_t dev_id, struct rte_event event, void *arg)
+{
+ unsigned int *count = arg;
+
+ RTE_SET_USED(dev_id);
+ if (event.event_type == RTE_EVENT_TYPE_CPU)
+ *count = *count + 1;
+}
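The `flush()` callback registered before `rte_event_dev_stop()` counts events still buffered in the device, filtering on the event type. A standalone sketch of the same counting callback (the `fake_event` struct and the `0x3` tag value are assumptions for illustration, mirroring but not reproducing `struct rte_event` and `RTE_EVENT_TYPE_CPU`):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for struct rte_event: only the type tag
 * matters to the flush callback. 0x3 is an assumed value for the
 * CPU event type. */
#define FAKE_EVENT_TYPE_CPU 0x3

struct fake_event {
	uint8_t event_type;
};

/* Same shape as flush() above: count only CPU-generated events that
 * were still in flight when the device was stopped. */
static void count_flush(struct fake_event ev, void *arg)
{
	unsigned int *count = arg;

	if (ev.event_type == FAKE_EVENT_TYPE_CPU)
		*count += 1;
}
```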
+
+static int
+test_dev_stop_flush(void)
+{
+ unsigned int total_events = MAX_EVENTS, count = 0;
+ int ret;
+
+ ret = generate_random_events(total_events);
+ if (ret)
+ return -1;
+
+ ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count);
+ if (ret)
+ return -2;
+ rte_event_dev_stop(evdev);
+ ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL);
+ if (ret)
+ return -3;
+ RTE_TEST_ASSERT_EQUAL(total_events, count,
+ "count mismatch total_events=%d count=%d",
+ total_events, count);
+
+ return 0;
+}
+
+static int
+validate_queue_to_port_single_link(uint32_t index, uint8_t port,
+ struct rte_event *ev)
+{
+ RTE_SET_USED(index);
+ RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
+ "queue mismatch enq=%d deq =%d", port,
+ ev->queue_id);
+
+ return 0;
+}
+
+/*
+ * Link queue x to port x and verify the link by checking that events
+ * dequeued on port x carry queue_id == x
+ */
+static int
+test_queue_to_port_single_link(void)
+{
+ int i, nr_links, ret;
+ uint32_t queue_count;
+ uint32_t port_count;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &port_count),
+ "Port count get failed");
+
+ /* Unlink all connections that were created in eventdev_setup */
+ for (i = 0; i < (int)port_count; i++) {
+ ret = rte_event_port_unlink(evdev, i, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+ i);
+ }
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ nr_links = RTE_MIN(port_count, queue_count);
+ const unsigned int total_events = MAX_EVENTS / nr_links;
+
+ /* Link queue x to port x and inject events to queue x through port x */
+ for (i = 0; i < nr_links; i++) {
+ uint8_t queue = (uint8_t)i;
+
+ ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
+ RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
+
+ ret = inject_events(0x100 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ queue /* queue */, i /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+ }
+
+ /* Verify that the events arrived from the correct queue */
+ for (i = 0; i < nr_links; i++) {
+ ret = consume_events(i /* port */, total_events,
+ validate_queue_to_port_single_link);
+ if (ret)
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
+ struct rte_event *ev)
+{
+ RTE_SET_USED(index);
+ RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
+ "queue mismatch enq=%d deq =%d", port,
+ ev->queue_id);
+
+ return 0;
+}
+
+/*
+ * Link all even-numbered queues to port 0 and all odd-numbered queues to
+ * port 1, then verify the link on dequeue
+ */
+static int
+test_queue_to_port_multi_link(void)
+{
+ int ret, port0_events = 0, port1_events = 0;
+ uint32_t nr_queues = 0;
+ uint32_t nr_ports = 0;
+ uint8_t queue, port;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &nr_queues),
+ "Queue count get failed");
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+
+ if (nr_ports < 2) {
+ plt_err("Not enough ports to test ports=%d", nr_ports);
+ return 0;
+ }
+
+ /* Unlink all connections that were created in eventdev_setup */
+ for (port = 0; port < nr_ports; port++) {
+ ret = rte_event_port_unlink(evdev, port, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+ port);
+ }
+
+ unsigned int total_events = MAX_EVENTS / nr_queues;
+ if (!total_events) {
+ nr_queues = MAX_EVENTS;
+ total_events = MAX_EVENTS / nr_queues;
+ }
+
+ /* Link all even-numbered queues to port 0 and odd-numbered queues to port 1 */
+ for (queue = 0; queue < nr_queues; queue++) {
+ port = queue & 0x1;
+ ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
+ RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
+ queue, port);
+
+ ret = inject_events(0x100 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ queue /* queue */, port /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+
+ if (port == 0)
+ port0_events += total_events;
+ else
+ port1_events += total_events;
+ }
+
+ ret = consume_events(0 /* port */, port0_events,
+ validate_queue_to_port_multi_link);
+ if (ret)
+ return -1;
+ ret = consume_events(1 /* port */, port1_events,
+ validate_queue_to_port_multi_link);
+ if (ret)
+ return -1;
+
+ return 0;
+}
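The multi-link test maps each queue to a port with `queue & 0x1`, and the dequeue-side validator checks the inverse relation. The two helpers below isolate that mapping as a sketch (helper names are illustrative, not driver API):

```c
#include <assert.h>
#include <stdint.h>

/* Queue-to-port mapping used by the multi-link test: even-numbered
 * queues land on port 0, odd-numbered queues on port 1. */
static uint8_t queue_to_port(uint8_t queue)
{
	return queue & 0x1;
}

/* Dequeue-side check mirroring validate_queue_to_port_multi_link():
 * the low bit of the dequeued queue_id must equal the port. */
static int link_is_valid(uint8_t port, uint8_t queue_id)
{
	return port == (queue_id & 0x1);
}
```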
+
+static int
+worker_flow_based_pipeline(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
+ uint32_t *total_events = param->total_events;
+ uint8_t new_sched_type = param->sched_type;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
+ dequeue_tmo_ticks);
+ if (!valid_event)
+ continue;
+
+ /* Events from stage 0 */
+ if (ev.sub_event_type == 0) {
+ /* Move to atomic flow to maintain the ordering */
+ ev.flow_id = 0x2;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sub_event_type = 1; /* stage 1 */
+ ev.sched_type = new_sched_type;
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ } else if (ev.sub_event_type == 1) { /* Events from stage 1 */
+ uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
+
+ if (seqn_list_update(seqn) == 0) {
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1,
+ __ATOMIC_RELAXED);
+ } else {
+ plt_err("Failed to update seqn_list");
+ return -1;
+ }
+ } else {
+ plt_err("Invalid ev.sub_event_type = %d",
+ ev.sub_event_type);
+ return -1;
+ }
+ }
+ return 0;
+}
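`worker_flow_based_pipeline()` encodes the pipeline stage in `sub_event_type`: stage-0 events are forwarded to stage 1, stage-1 events are consumed, anything else is an error. That per-event decision can be condensed into a small state-step function (a sketch; the return-code convention is invented for the test, not taken from the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Condensed model of worker_flow_based_pipeline()'s per-event branch.
 * Returns 1 when the event should be re-enqueued (FORWARD), 0 when it
 * has reached the last stage and is consumed, -1 on an invalid stage. */
static int flow_pipeline_step(uint8_t *sub_event_type)
{
	if (*sub_event_type == 0) {
		*sub_event_type = 1; /* move to stage 1 */
		return 1;            /* forward */
	}
	if (*sub_event_type == 1)
		return 0;            /* last stage: free mbuf, count down */
	return -1;                   /* invalid stage */
}
```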
+
+static int
+test_multiport_flow_sched_type_test(uint8_t in_sched_type,
+ uint8_t out_sched_type)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject total_events events with sequence numbers counting from 0 */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */, in_sched_type, 0 /* queue */,
+ 0 /* port */, total_events /* events */);
+ if (ret)
+ return -1;
+
+ rte_mb();
+ ret = launch_workers_and_wait(worker_flow_based_pipeline,
+ worker_flow_based_pipeline, total_events,
+ nr_ports, out_sched_type);
+ if (ret)
+ return -1;
+
+ if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
+ out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
+ /* Check whether event ordering was maintained */
+ return seqn_list_check(total_events);
+ }
+
+ return 0;
+}
+
+/* Multi port ordered to atomic transaction */
+static int
+test_multi_port_flow_ordered_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_ordered_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_ordered_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_flow_atomic_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_atomic_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_atomic_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_flow_parallel_to_atomic(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_parallel_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_parallel_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+worker_group_based_pipeline(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
+ uint32_t *total_events = param->total_events;
+ uint8_t new_sched_type = param->sched_type;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
+ dequeue_tmo_ticks);
+ if (!valid_event)
+ continue;
+
+ /* Events from stage 0 (group 0) */
+ if (ev.queue_id == 0) {
+ /* Move to atomic flow to maintain the ordering */
+ ev.flow_id = 0x2;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sched_type = new_sched_type;
+ ev.queue_id = 1; /* Stage 1*/
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ } else if (ev.queue_id == 1) { /* Events from stage 1 (group 1) */
+ uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
+
+ if (seqn_list_update(seqn) == 0) {
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1,
+ __ATOMIC_RELAXED);
+ } else {
+ plt_err("Failed to update seqn_list");
+ return -1;
+ }
+ } else {
+ plt_err("Invalid ev.queue_id = %d", ev.queue_id);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int
+test_multiport_queue_sched_type_test(uint8_t in_sched_type,
+ uint8_t out_sched_type)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t queue_count;
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count < 2 || !nr_ports) {
+ plt_err("Not enough queues=%d ports=%d or workers=%d",
+ queue_count, nr_ports, rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject total_events events with sequence numbers counting from 0 */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */, in_sched_type, 0 /* queue */,
+ 0 /* port */, total_events /* events */);
+ if (ret)
+ return -1;
+
+ ret = launch_workers_and_wait(worker_group_based_pipeline,
+ worker_group_based_pipeline, total_events,
+ nr_ports, out_sched_type);
+ if (ret)
+ return -1;
+
+ if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
+ out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
+ /* Check whether event ordering was maintained */
+ return seqn_list_check(total_events);
+ }
+
+ return 0;
+}
+
+static int
+test_multi_port_queue_ordered_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_ordered_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_ordered_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_queue_atomic_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_atomic_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_atomic_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_queue_parallel_to_atomic(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_parallel_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_parallel_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+worker_flow_based_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint32_t *total_events = param->total_events;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.sub_event_type == MAX_STAGES) { /* last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sub_event_type++;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+static int
+launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
+{
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject MAX_EVENTS events with sequence numbers counting from 0 */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1) /* sched_type */,
+ 0 /* queue */, 0 /* port */, MAX_EVENTS /* events */);
+ if (ret)
+ return -1;
+
+ return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
+ 0xff /* invalid */);
+}
+
+/* Flow based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_flow_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_flow_based_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ uint32_t queue_count;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ uint8_t nr_queues = queue_count;
+ uint32_t *total_events = param->total_events;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.queue_id == nr_queues - 1) { /* last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id++;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+/* Queue based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_queue_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_queue_based_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ uint32_t queue_count;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ uint8_t nr_queues = queue_count;
+ uint32_t *total_events = param->total_events;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.queue_id == nr_queues - 1) { /* Last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id++;
+ ev.sub_event_type = rte_rand() % 256;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+/* Queue and flow based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_mixed_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_mixed_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_ordered_flow_producer(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ struct rte_mbuf *m;
+ int counter = 0;
+
+ while (counter < NUM_PACKETS) {
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ if (m == NULL)
+ continue;
+
+ *rte_event_pmd_selftest_seqn(m) = counter++;
+
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ ev.flow_id = 0x1; /* Generate a fat flow */
+ ev.sub_event_type = 0;
+ /* Inject the new event */
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sched_type = RTE_SCHED_TYPE_ORDERED;
+ ev.queue_id = 0;
+ ev.mbuf = m;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+
+ return 0;
+}
+
+static inline int
+test_producer_consumer_ingress_order_test(int (*fn)(void *))
+{
+ uint32_t nr_ports;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (rte_lcore_count() < 3 || nr_ports < 2) {
+ plt_err("### Not enough cores for test.");
+ return 0;
+ }
+
+ launch_workers_and_wait(worker_ordered_flow_producer, fn, NUM_PACKETS,
+ nr_ports, RTE_SCHED_TYPE_ATOMIC);
+ /* Check whether event ordering was maintained */
+ return seqn_list_check(NUM_PACKETS);
+}
+
+/* Flow based producer consumer ingress order test */
+static int
+test_flow_producer_consumer_ingress_order_test(void)
+{
+ return test_producer_consumer_ingress_order_test(
+ worker_flow_based_pipeline);
+}
+
+/* Queue based producer consumer ingress order test */
+static int
+test_queue_producer_consumer_ingress_order_test(void)
+{
+ return test_producer_consumer_ingress_order_test(
+ worker_group_based_pipeline);
+}
+
+static void
+cnxk_test_run(int (*setup)(void), void (*tdown)(void), int (*test)(void),
+ const char *name)
+{
+ if (setup() < 0) {
+ printf("Error setting up test %s\n", name);
+ unsupported++;
+ } else {
+ if (test() < 0) {
+ failed++;
+ printf("+ TestCase [%2d] : %s failed\n", total, name);
+ } else {
+ passed++;
+ printf("+ TestCase [%2d] : %s succeeded\n", total,
+ name);
+ }
+ }
+
+ total++;
+ tdown();
+}
+
+static int
+cnxk_sso_testsuite_run(const char *dev_name)
+{
+ int rc;
+
+ testsuite_setup(dev_name);
+
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_queue_enq_single_port_deq);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown, test_dev_stop_flush);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_queue_enq_multi_port_deq);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_to_port_single_link);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_to_port_multi_link);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_mixed_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_flow_producer_consumer_ingress_order_test);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_producer_consumer_ingress_order_test);
+ CNXK_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
+ test_multi_queue_priority);
+ CNXK_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+ test_multi_port_flow_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+ test_multi_port_queue_ordered_to_atomic);
+ printf("Total tests : %d\n", total);
+ printf("Passed : %d\n", passed);
+ printf("Failed : %d\n", failed);
+ printf("Not supported : %d\n", unsupported);
+
+ rc = failed;
+ testsuite_teardown();
+
+ return rc;
+}
+
+int
+cnxk_sso_selftest(const char *dev_name)
+{
+ const struct rte_memzone *mz;
+ struct cnxk_sso_evdev *dev;
+ int rc = -1;
+
+ mz = rte_memzone_lookup(CNXK_SSO_MZ_NAME);
+ if (mz == NULL)
+ return rc;
+
+ dev = (void *)*((uint64_t *)mz->addr);
+ if (roc_model_runtime_is_cn9k()) {
+ /* Verify single ws mode. */
+ printf("Verifying CN9K Single workslot mode\n");
+ dev->dual_ws = 0;
+ cn9k_sso_set_rsrc(dev);
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ /* Verify dual ws mode. */
+ printf("Verifying CN9K Dual workslot mode\n");
+ dev->dual_ws = 1;
+ cn9k_sso_set_rsrc(dev);
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ }
+
+ if (roc_model_runtime_is_cn10k()) {
+ printf("Verifying CN10K workslot getwork mode none\n");
+ dev->gw_mode = CN10K_GW_MODE_NONE;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ printf("Verifying CN10K workslot getwork mode prefetch\n");
+ dev->gw_mode = CN10K_GW_MODE_PREF;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ printf("Verifying CN10K workslot getwork mode smart prefetch\n");
+ dev->gw_mode = CN10K_GW_MODE_PREF_WFE;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ }
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index f2d1543ba..eeb5ad64a 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -13,6 +13,7 @@ sources = files('cn10k_worker.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
+ 'cnxk_eventdev_selftest.c'
)
deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 19/33] event/cnxk: add event port and queue xstats
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (17 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 18/33] event/cnxk: add SSO selftest and dump pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 20/33] event/cnxk: support event timer pbhagavatula
` (15 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
Satha Rao, Pavan Nikhilesh, Shijith Thotton
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add support for retrieving statistics from SSO HWS and HWGRP.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/common/cnxk/roc_sso.c | 63 +++++
drivers/common/cnxk/roc_sso.h | 19 ++
drivers/event/cnxk/cnxk_eventdev.h | 15 ++
drivers/event/cnxk/cnxk_eventdev_stats.c | 289 +++++++++++++++++++++++
drivers/event/cnxk/meson.build | 3 +-
5 files changed, 388 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 80d032039..1ccf2626b 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -279,6 +279,69 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
return nb_hwgrp;
}
+int
+roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
+ struct roc_sso_hws_stats *stats)
+{
+ struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+ struct sso_hws_stats *req_rsp;
+ int rc;
+
+ req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats(
+ dev->mbox);
+ if (req_rsp == NULL) {
+ rc = mbox_process(dev->mbox);
+ if (rc < 0)
+ return rc;
+ req_rsp = (struct sso_hws_stats *)
+ mbox_alloc_msg_sso_hws_get_stats(dev->mbox);
+ if (req_rsp == NULL)
+ return -ENOSPC;
+ }
+ req_rsp->hws = hws;
+ rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
+ if (rc)
+ return rc;
+
+ stats->arbitration = req_rsp->arbitration;
+ return 0;
+}
+
+int
+roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
+ struct roc_sso_hwgrp_stats *stats)
+{
+ struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+ struct sso_grp_stats *req_rsp;
+ int rc;
+
+ req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats(
+ dev->mbox);
+ if (req_rsp == NULL) {
+ rc = mbox_process(dev->mbox);
+ if (rc < 0)
+ return rc;
+ req_rsp = (struct sso_grp_stats *)
+ mbox_alloc_msg_sso_grp_get_stats(dev->mbox);
+ if (req_rsp == NULL)
+ return -ENOSPC;
+ }
+ req_rsp->grp = hwgrp;
+ rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
+ if (rc)
+ return rc;
+
+ stats->aw_status = req_rsp->aw_status;
+ stats->dq_pc = req_rsp->dq_pc;
+ stats->ds_pc = req_rsp->ds_pc;
+ stats->ext_pc = req_rsp->ext_pc;
+ stats->page_cnt = req_rsp->page_cnt;
+ stats->ts_pc = req_rsp->ts_pc;
+ stats->wa_pc = req_rsp->wa_pc;
+ stats->ws_pc = req_rsp->ws_pc;
+ return 0;
+}
+
int
roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso, uint8_t hws,
uint16_t hwgrp)
diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
index f85799ba8..c07ff50de 100644
--- a/drivers/common/cnxk/roc_sso.h
+++ b/drivers/common/cnxk/roc_sso.h
@@ -12,6 +12,21 @@ struct roc_sso_hwgrp_qos {
uint8_t taq_prcnt;
};
+struct roc_sso_hws_stats {
+ uint64_t arbitration;
+};
+
+struct roc_sso_hwgrp_stats {
+ uint64_t ws_pc;
+ uint64_t ext_pc;
+ uint64_t wa_pc;
+ uint64_t ts_pc;
+ uint64_t ds_pc;
+ uint64_t dq_pc;
+ uint64_t aw_status;
+ uint64_t page_cnt;
+};
+
struct roc_sso {
struct plt_pci_device *pci_dev;
/* Public data. */
@@ -61,5 +76,9 @@ uintptr_t __roc_api roc_sso_hwgrp_base_get(struct roc_sso *roc_sso,
/* Debug */
void __roc_api roc_sso_dump(struct roc_sso *roc_sso, uint8_t nb_hws,
uint16_t hwgrp, FILE *f);
+int roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
+ struct roc_sso_hwgrp_stats *stats);
+int roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
+ struct roc_sso_hws_stats *stats);
#endif /* _ROC_SSOW_H_ */
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index ee7dce5f5..d52408e5a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -214,6 +214,21 @@ int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
int cnxk_sso_selftest(const char *dev_name);
void cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f);
+/* Stats API. */
+int cnxk_sso_xstats_get_names(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id,
+ struct rte_event_dev_xstats_name *xstats_names,
+ unsigned int *ids, unsigned int size);
+int cnxk_sso_xstats_get(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id, const unsigned int ids[],
+ uint64_t values[], unsigned int n);
+int cnxk_sso_xstats_reset(struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ int16_t queue_port_id, const uint32_t ids[],
+ uint32_t n);
+
/* CN9K */
void cn9k_sso_set_rsrc(void *arg);
diff --git a/drivers/event/cnxk/cnxk_eventdev_stats.c b/drivers/event/cnxk/cnxk_eventdev_stats.c
new file mode 100644
index 000000000..e6879b083
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_stats.c
@@ -0,0 +1,289 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+struct cnxk_sso_xstats_name {
+ const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
+ const size_t offset;
+ const uint64_t mask;
+ const uint8_t shift;
+ uint64_t reset_snap[CNXK_SSO_MAX_HWGRP];
+};
+
+static struct cnxk_sso_xstats_name sso_hws_xstats[] = {
+ {
+ "last_grp_serviced",
+ offsetof(struct roc_sso_hws_stats, arbitration),
+ 0x3FF,
+ 0,
+ {0},
+ },
+ {
+ "affinity_arbitration_credits",
+ offsetof(struct roc_sso_hws_stats, arbitration),
+ 0xF,
+ 16,
+ {0},
+ },
+};
+
+static struct cnxk_sso_xstats_name sso_hwgrp_xstats[] = {
+ {
+ "wrk_sched",
+ offsetof(struct roc_sso_hwgrp_stats, ws_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "xaq_dram",
+ offsetof(struct roc_sso_hwgrp_stats, ext_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "add_wrk",
+ offsetof(struct roc_sso_hwgrp_stats, wa_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "tag_switch_req",
+ offsetof(struct roc_sso_hwgrp_stats, ts_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "desched_req",
+ offsetof(struct roc_sso_hwgrp_stats, ds_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "desched_wrk",
+ offsetof(struct roc_sso_hwgrp_stats, dq_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "xaq_cached",
+ offsetof(struct roc_sso_hwgrp_stats, aw_status),
+ 0x3,
+ 0,
+ {0},
+ },
+ {
+ "work_inflight",
+ offsetof(struct roc_sso_hwgrp_stats, aw_status),
+ 0x3F,
+ 16,
+ {0},
+ },
+ {
+ "inuse_pages",
+ offsetof(struct roc_sso_hwgrp_stats, page_cnt),
+ 0xFFFFFFFF,
+ 0,
+ {0},
+ },
+};
+
+#define CNXK_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats)
+#define CNXK_SSO_NUM_GRP_XSTATS RTE_DIM(sso_hwgrp_xstats)
+
+#define CNXK_SSO_NUM_XSTATS (CNXK_SSO_NUM_HWS_XSTATS + CNXK_SSO_NUM_GRP_XSTATS)
+
+int
+cnxk_sso_xstats_get(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
+ const unsigned int ids[], uint64_t values[], unsigned int n)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_stats hwgrp_stats;
+ struct cnxk_sso_xstats_name *xstats;
+ struct cnxk_sso_xstats_name *xstat;
+ struct roc_sso_hws_stats hws_stats;
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int i;
+ uint64_t value;
+ void *rsp;
+ int rc;
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ return 0;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hws_xstats;
+
+ rc = roc_sso_hws_stats_get(&dev->sso, queue_port_id,
+ &hws_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hws_stats;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
+ start_offset = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hwgrp_xstats;
+
+ rc = roc_sso_hwgrp_stats_get(&dev->sso, queue_port_id,
+ &hwgrp_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hwgrp_stats;
+
+ break;
+ default:
+ cnxk_err("Invalid mode received");
+ goto invalid_value;
+ };
+
+ for (i = 0; i < n && i < xstats_mode_count; i++) {
+ xstat = &xstats[ids[i] - start_offset];
+ value = *(uint64_t *)((char *)rsp + xstat->offset);
+ value = (value >> xstat->shift) & xstat->mask;
+
+ values[i] = value;
+ values[i] -= xstat->reset_snap[queue_port_id];
+ }
+
+ return i;
+invalid_value:
+ return -EINVAL;
+}
+
+int
+cnxk_sso_xstats_reset(struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ int16_t queue_port_id, const uint32_t ids[], uint32_t n)
+{
+ struct cnxk_sso_evdev *dev = sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_stats hwgrp_stats;
+ struct cnxk_sso_xstats_name *xstats;
+ struct cnxk_sso_xstats_name *xstat;
+ struct roc_sso_hws_stats hws_stats;
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int i;
+ uint64_t value;
+ void *rsp;
+ int rc;
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ return 0;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hws_xstats;
+ rc = roc_sso_hws_stats_get(&dev->sso, queue_port_id,
+ &hws_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hws_stats;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
+ start_offset = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hwgrp_xstats;
+
+ rc = roc_sso_hwgrp_stats_get(&dev->sso, queue_port_id,
+ &hwgrp_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hwgrp_stats;
+ break;
+ default:
+ cnxk_err("Invalid mode received");
+ goto invalid_value;
+ };
+
+ for (i = 0; i < n && i < xstats_mode_count; i++) {
+ xstat = &xstats[ids[i] - start_offset];
+ value = *(uint64_t *)((char *)rsp + xstat->offset);
+ value = (value >> xstat->shift) & xstat->mask;
+
+ xstat->reset_snap[queue_port_id] = value;
+ }
+ return i;
+invalid_value:
+ return -EINVAL;
+}
+
+int
+cnxk_sso_xstats_get_names(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id,
+ struct rte_event_dev_xstats_name *xstats_names,
+ unsigned int *ids, unsigned int size)
+{
+ struct rte_event_dev_xstats_name xstats_names_copy[CNXK_SSO_NUM_XSTATS];
+ struct cnxk_sso_evdev *dev = sso_pmd_priv(event_dev);
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int xidx = 0;
+ unsigned int i;
+
+ for (i = 0; i < CNXK_SSO_NUM_HWS_XSTATS; i++) {
+ snprintf(xstats_names_copy[i].name,
+ sizeof(xstats_names_copy[i].name), "%s",
+ sso_hws_xstats[i].name);
+ }
+
+ for (; i < CNXK_SSO_NUM_XSTATS; i++) {
+ snprintf(xstats_names_copy[i].name,
+ sizeof(xstats_names_copy[i].name), "%s",
+ sso_hwgrp_xstats[i - CNXK_SSO_NUM_HWS_XSTATS].name);
+ }
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ break;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ break;
+ xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ break;
+ xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
+ start_offset = CNXK_SSO_NUM_HWS_XSTATS;
+ break;
+ default:
+ cnxk_err("Invalid mode received");
+ return -EINVAL;
+ };
+
+ if (xstats_mode_count > size || !ids || !xstats_names)
+ return xstats_mode_count;
+
+ for (i = 0; i < xstats_mode_count; i++) {
+ xidx = i + start_offset;
+ strncpy(xstats_names[i].name, xstats_names_copy[xidx].name,
+ sizeof(xstats_names[i].name));
+ ids[i] = xidx;
+ }
+
+ return i;
+}
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index eeb5ad64a..da386001e 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -13,7 +13,8 @@ sources = files('cn10k_worker.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
- 'cnxk_eventdev_selftest.c'
+ 'cnxk_eventdev_selftest.c',
+ 'cnxk_eventdev_stats.c',
)
deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 20/33] event/cnxk: support event timer
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (18 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 19/33] event/cnxk: add event port and queue xstats pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 21/33] event/cnxk: add timer adapter capabilities pbhagavatula
` (14 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter (a.k.a. TIM) initialization on SSO probe.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 6 ++++
drivers/event/cnxk/cnxk_eventdev.c | 3 ++
drivers/event/cnxk/cnxk_eventdev.h | 2 ++
drivers/event/cnxk/cnxk_tim_evdev.c | 47 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 44 +++++++++++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
6 files changed, 103 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index b2684d431..662df2971 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -35,6 +35,10 @@ Features of the OCTEON CNXK SSO PMD are:
- Open system with configurable amount of outstanding events limited only by
DRAM
- HW accelerated dequeue timeout support to enable power management
+- HW-managed event timer support through TIM, with high precision and
+ time granularity of 2.5 us on CN9K and 1 us on CN10K.
+- Up to 256 TIM rings, i.e., event timer adapters.
+- Up to 8 rings traversed in parallel.
Prerequisites and Compilation procedure
---------------------------------------
@@ -101,3 +105,5 @@ Debugging Options
+===+============+=======================================================+
| 1 | SSO | --log-level='pmd\.event\.cnxk,8' |
+---+------------+-------------------------------------------------------+
+ | 2 | TIM | --log-level='pmd\.event\.cnxk\.timer,8' |
+ +---+------------+-------------------------------------------------------+
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index dbd35ca5d..c404bb586 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -582,6 +582,8 @@ cnxk_sso_init(struct rte_eventdev *event_dev)
dev->nb_event_queues = 0;
dev->nb_event_ports = 0;
+ cnxk_tim_init(&dev->sso);
+
return 0;
error:
@@ -598,6 +600,7 @@ cnxk_sso_fini(struct rte_eventdev *event_dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ cnxk_tim_fini();
roc_sso_rsrc_fini(&dev->sso);
roc_sso_dev_fini(&dev->sso);
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index d52408e5a..81bb3488a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,6 +14,8 @@
#include "roc_api.h"
+#include "cnxk_tim_evdev.h"
+
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
#define CN9K_SSO_SINGLE_WS "single_ws"
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
new file mode 100644
index 000000000..76b17910f
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+#include "cnxk_tim_evdev.h"
+
+void
+cnxk_tim_init(struct roc_sso *sso)
+{
+ const struct rte_memzone *mz;
+ struct cnxk_tim_evdev *dev;
+ int rc;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ mz = rte_memzone_reserve(RTE_STR(CNXK_TIM_EVDEV_NAME),
+ sizeof(struct cnxk_tim_evdev), 0, 0);
+ if (mz == NULL) {
+ plt_tim_dbg("Unable to allocate memory for TIM Event device");
+ return;
+ }
+ dev = mz->addr;
+
+ dev->tim.roc_sso = sso;
+ rc = roc_tim_init(&dev->tim);
+ if (rc < 0) {
+ plt_err("Failed to initialize roc tim resources");
+ rte_memzone_free(mz);
+ return;
+ }
+ dev->nb_rings = rc;
+ dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+}
+
+void
+cnxk_tim_fini(void)
+{
+ struct cnxk_tim_evdev *dev = tim_priv_get();
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ roc_tim_fini(&dev->tim);
+ rte_memzone_free(rte_memzone_lookup(RTE_STR(CNXK_TIM_EVDEV_NAME)));
+}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
new file mode 100644
index 000000000..6cf0adb21
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CNXK_TIM_EVDEV_H__
+#define __CNXK_TIM_EVDEV_H__
+
+#include <stddef.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <eventdev_pmd_pci.h>
+#include <rte_event_timer_adapter.h>
+#include <rte_memzone.h>
+
+#include "roc_api.h"
+
+#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
+#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+
+struct cnxk_tim_evdev {
+ struct roc_tim tim;
+ struct rte_eventdev *event_dev;
+ uint16_t nb_rings;
+ uint32_t chunk_sz;
+};
+
+static inline struct cnxk_tim_evdev *
+tim_priv_get(void)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(RTE_STR(CNXK_TIM_EVDEV_NAME));
+ if (mz == NULL)
+ return NULL;
+
+ return mz->addr;
+}
+
+void cnxk_tim_init(struct roc_sso *sso);
+void cnxk_tim_fini(void);
+
+#endif /* __CNXK_TIM_EVDEV_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index da386001e..44a300aa0 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -15,6 +15,7 @@ sources = files('cn10k_worker.c',
'cnxk_eventdev.c',
'cnxk_eventdev_selftest.c',
'cnxk_eventdev_stats.c',
+ 'cnxk_tim_evdev.c',
)
deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
--
2.17.1
* [dpdk-dev] [PATCH v2 21/33] event/cnxk: add timer adapter capabilities
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (19 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 20/33] event/cnxk: support event timer pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 22/33] event/cnxk: create and free timer adapter pbhagavatula
` (13 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add function to retrieve event timer adapter capabilities.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 2 ++
drivers/event/cnxk/cn9k_eventdev.c | 2 ++
drivers/event/cnxk/cnxk_tim_evdev.c | 22 +++++++++++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 6 +++++-
4 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 74070e005..30ca0d901 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -420,6 +420,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .timer_adapter_caps_get = cnxk_tim_caps_get,
+
.dump = cnxk_sso_dump,
.dev_start = cn10k_sso_start,
.dev_stop = cn10k_sso_stop,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 4fb0f1ccc..773152e55 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -494,6 +494,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .timer_adapter_caps_get = cnxk_tim_caps_get,
+
.dump = cnxk_sso_dump,
.dev_start = cn9k_sso_start,
.dev_stop = cn9k_sso_stop,
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 76b17910f..6000b507a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -5,6 +5,26 @@
#include "cnxk_eventdev.h"
#include "cnxk_tim_evdev.h"
+int
+cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops)
+{
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+
+ RTE_SET_USED(flags);
+ RTE_SET_USED(ops);
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ /* Store evdev pointer for later use. */
+ dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
+ *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+
+ return 0;
+}
+
void
cnxk_tim_init(struct roc_sso *sso)
{
@@ -37,7 +57,7 @@ cnxk_tim_init(struct roc_sso *sso)
void
cnxk_tim_fini(void)
{
- struct cnxk_tim_evdev *dev = tim_priv_get();
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return;
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 6cf0adb21..8dcecb281 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -27,7 +27,7 @@ struct cnxk_tim_evdev {
};
static inline struct cnxk_tim_evdev *
-tim_priv_get(void)
+cnxk_tim_priv_get(void)
{
const struct rte_memzone *mz;
@@ -38,6 +38,10 @@ tim_priv_get(void)
return mz->addr;
}
+int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops);
+
void cnxk_tim_init(struct roc_sso *sso);
void cnxk_tim_fini(void);
--
2.17.1
* [dpdk-dev] [PATCH v2 22/33] event/cnxk: create and free timer adapter
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (20 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 21/33] event/cnxk: add timer adapter capabilities pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 23/33] event/cnxk: add devargs to disable NPA pbhagavatula
` (12 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
When the application creates a timer adapter, the driver does the following:
- Allocate a TIM LF based on the number of LFs provisioned.
- Verify the supplied config parameters.
- Allocate the memory required for
  * buckets, based on the min and max timeout supplied;
  * the chunk pool, based on the number of timers.
On free:
- Free the allocated bucket and chunk memory.
- Free the allocated TIM LF.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 174 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 128 +++++++++++++++++++-
2 files changed, 300 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 6000b507a..986ad8493 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -5,6 +5,177 @@
#include "cnxk_eventdev.h"
#include "cnxk_tim_evdev.h"
+static struct rte_event_timer_adapter_ops cnxk_tim_ops;
+
+static int
+cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
+ struct rte_event_timer_adapter_conf *rcfg)
+{
+ unsigned int cache_sz = (tim_ring->nb_chunks / 1.5);
+ unsigned int mp_flags = 0;
+ char pool_name[25];
+ int rc;
+
+ cache_sz /= rte_lcore_count();
+ /* Create chunk pool. */
+ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
+ mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+ plt_tim_dbg("Using single producer mode");
+ tim_ring->prod_type_sp = true;
+ }
+
+ snprintf(pool_name, sizeof(pool_name), "cnxk_tim_chunk_pool%d",
+ tim_ring->ring_id);
+
+ if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
+ cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
+ cache_sz = cache_sz != 0 ? cache_sz : 2;
+ tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
+ tim_ring->chunk_pool = rte_mempool_create_empty(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz, cache_sz, 0,
+ rte_socket_id(), mp_flags);
+
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
+
+ rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+ rte_mbuf_platform_mempool_ops(), NULL);
+ if (rc < 0) {
+ plt_err("Unable to set chunkpool ops");
+ goto free;
+ }
+
+ rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+ if (rc < 0) {
+ plt_err("Unable to set populate chunkpool.");
+ goto free;
+ }
+ tim_ring->aura =
+ roc_npa_aura_handle_to_aura(tim_ring->chunk_pool->pool_id);
+ tim_ring->ena_dfb = 0;
+
+ return 0;
+
+free:
+ rte_mempool_free(tim_ring->chunk_pool);
+ return rc;
+}
+
+static int
+cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
+{
+ struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ struct cnxk_tim_ring *tim_ring;
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ if (adptr->data->id >= dev->nb_rings)
+ return -ENODEV;
+
+ tim_ring = rte_zmalloc("cnxk_tim_prv", sizeof(struct cnxk_tim_ring), 0);
+ if (tim_ring == NULL)
+ return -ENOMEM;
+
+ rc = roc_tim_lf_alloc(&dev->tim, adptr->data->id, NULL);
+ if (rc < 0) {
+ plt_err("Failed to create timer ring");
+ goto tim_ring_free;
+ }
+
+ if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(
+ rcfg->timer_tick_ns,
+ cnxk_tim_min_resolution_ns(cnxk_tim_cntfrq())),
+ cnxk_tim_cntfrq()) <
+ cnxk_tim_min_tmo_ticks(cnxk_tim_cntfrq())) {
+ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
+ rcfg->timer_tick_ns = TICK2NSEC(
+ cnxk_tim_min_tmo_ticks(cnxk_tim_cntfrq()),
+ cnxk_tim_cntfrq());
+ else {
+ rc = -ERANGE;
+ goto tim_hw_free;
+ }
+ }
+ tim_ring->ring_id = adptr->data->id;
+ tim_ring->clk_src = (int)rcfg->clk_src;
+ tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(
+ rcfg->timer_tick_ns,
+ cnxk_tim_min_resolution_ns(cnxk_tim_cntfrq()));
+ tim_ring->max_tout = rcfg->max_tmo_ns;
+ tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
+ tim_ring->nb_timers = rcfg->nb_timers;
+ tim_ring->chunk_sz = dev->chunk_sz;
+
+ tim_ring->nb_chunks = tim_ring->nb_timers;
+ tim_ring->nb_chunk_slots = CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+ /* Create buckets. */
+ tim_ring->bkt =
+ rte_zmalloc("cnxk_tim_bucket",
+ (tim_ring->nb_bkts) * sizeof(struct cnxk_tim_bkt),
+ RTE_CACHE_LINE_SIZE);
+ if (tim_ring->bkt == NULL)
+ goto tim_hw_free;
+
+ rc = cnxk_tim_chnk_pool_create(tim_ring, rcfg);
+ if (rc < 0)
+ goto tim_bkt_free;
+
+ rc = roc_tim_lf_config(
+ &dev->tim, tim_ring->ring_id,
+ cnxk_tim_convert_clk_src(tim_ring->clk_src), 0, 0,
+ tim_ring->nb_bkts, tim_ring->chunk_sz,
+ NSEC2TICK(tim_ring->tck_nsec, cnxk_tim_cntfrq()));
+ if (rc < 0) {
+ plt_err("Failed to configure timer ring");
+ goto tim_chnk_free;
+ }
+
+ tim_ring->base = roc_tim_lf_base_get(&dev->tim, tim_ring->ring_id);
+ plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
+ plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+
+ plt_tim_dbg(
+ "Total memory used %" PRIu64 "MB\n",
+ (uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
+ (tim_ring->nb_bkts * sizeof(struct cnxk_tim_bkt))) /
+ BIT_ULL(20)));
+
+ adptr->data->adapter_priv = tim_ring;
+ return rc;
+
+tim_chnk_free:
+ rte_mempool_free(tim_ring->chunk_pool);
+tim_bkt_free:
+ rte_free(tim_ring->bkt);
+tim_hw_free:
+ roc_tim_lf_free(&dev->tim, tim_ring->ring_id);
+tim_ring_free:
+ rte_free(tim_ring);
+ return rc;
+}
+
+static int
+cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ roc_tim_lf_free(&dev->tim, tim_ring->ring_id);
+ rte_free(tim_ring->bkt);
+ rte_mempool_free(tim_ring->chunk_pool);
+ rte_free(tim_ring);
+
+ return 0;
+}
+
int
cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
uint32_t *caps,
@@ -18,6 +189,9 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
if (dev == NULL)
return -ENODEV;
+ cnxk_tim_ops.init = cnxk_tim_ring_create;
+ cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 8dcecb281..62bb2f1eb 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -12,12 +12,26 @@
#include <eventdev_pmd_pci.h>
#include <rte_event_timer_adapter.h>
+#include <rte_malloc.h>
#include <rte_memzone.h>
#include "roc_api.h"
-#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
-#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+#define NSECPERSEC 1E9
+#define USECPERSEC 1E6
+#define TICK2NSEC(__tck, __freq) (((__tck)*NSECPERSEC) / (__freq))
+
+#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
+#define CNXK_TIM_MAX_BUCKETS (0xFFFFF)
+#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+#define CNXK_TIM_CHUNK_ALIGNMENT (16)
+#define CNXK_TIM_MAX_BURST \
+ (RTE_CACHE_LINE_SIZE / CNXK_TIM_CHUNK_ALIGNMENT)
+#define CNXK_TIM_NB_CHUNK_SLOTS(sz) (((sz) / CNXK_TIM_CHUNK_ALIGNMENT) - 1)
+#define CNXK_TIM_MIN_CHUNK_SLOTS (0x1)
+#define CNXK_TIM_MAX_CHUNK_SLOTS (0x1FFE)
+
+#define CN9K_TIM_MIN_TMO_TKS (256)
struct cnxk_tim_evdev {
struct roc_tim tim;
@@ -26,6 +40,57 @@ struct cnxk_tim_evdev {
uint32_t chunk_sz;
};
+enum cnxk_tim_clk_src {
+ CNXK_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
+ CNXK_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0,
+ CNXK_TIM_CLK_SRC_GTI = RTE_EVENT_TIMER_ADAPTER_EXT_CLK1,
+ CNXK_TIM_CLK_SRC_PTP = RTE_EVENT_TIMER_ADAPTER_EXT_CLK2,
+};
+
+struct cnxk_tim_bkt {
+ uint64_t first_chunk;
+ union {
+ uint64_t w1;
+ struct {
+ uint32_t nb_entry;
+ uint8_t sbt : 1;
+ uint8_t hbt : 1;
+ uint8_t bsk : 1;
+ uint8_t rsvd : 5;
+ uint8_t lock;
+ int16_t chunk_remainder;
+ };
+ };
+ uint64_t current_chunk;
+ uint64_t pad;
+};
+
+struct cnxk_tim_ring {
+ uintptr_t base;
+ uint16_t nb_chunk_slots;
+ uint32_t nb_bkts;
+ uint64_t tck_int;
+ uint64_t tot_int;
+ struct cnxk_tim_bkt *bkt;
+ struct rte_mempool *chunk_pool;
+ uint64_t arm_cnt;
+ uint8_t prod_type_sp;
+ uint8_t ena_dfb;
+ uint16_t ring_id;
+ uint32_t aura;
+ uint64_t nb_timers;
+ uint64_t tck_nsec;
+ uint64_t max_tout;
+ uint64_t nb_chunks;
+ uint64_t chunk_sz;
+ enum cnxk_tim_clk_src clk_src;
+} __rte_cache_aligned;
+
+struct cnxk_tim_ent {
+ uint64_t w0;
+ uint64_t wqe;
+};
+
static inline struct cnxk_tim_evdev *
cnxk_tim_priv_get(void)
{
@@ -38,6 +103,65 @@ cnxk_tim_priv_get(void)
return mz->addr;
}
+static inline uint64_t
+cnxk_tim_min_tmo_ticks(uint64_t freq)
+{
+ if (roc_model_runtime_is_cn9k())
+ return CN9K_TIM_MIN_TMO_TKS;
+ else /* CN10K min tick is of 1us */
+ return freq / USECPERSEC;
+}
+
+static inline uint64_t
+cnxk_tim_min_resolution_ns(uint64_t freq)
+{
+ return NSECPERSEC / freq;
+}
+
+static inline enum roc_tim_clk_src
+cnxk_tim_convert_clk_src(enum cnxk_tim_clk_src clk_src)
+{
+ switch (clk_src) {
+ case RTE_EVENT_TIMER_ADAPTER_CPU_CLK:
+ return roc_model_runtime_is_cn9k() ? ROC_TIM_CLK_SRC_10NS :
+ ROC_TIM_CLK_SRC_GTI;
+ default:
+ return ROC_TIM_CLK_SRC_INVALID;
+ }
+}
+
+#ifdef RTE_ARCH_ARM64
+static inline uint64_t
+cnxk_tim_cntvct(void)
+{
+ uint64_t tsc;
+
+ asm volatile("mrs %0, cntvct_el0" : "=r"(tsc));
+ return tsc;
+}
+
+static inline uint64_t
+cnxk_tim_cntfrq(void)
+{
+ uint64_t freq;
+
+ asm volatile("mrs %0, cntfrq_el0" : "=r"(freq));
+ return freq;
+}
+#else
+static inline uint64_t
+cnxk_tim_cntvct(void)
+{
+ return 0;
+}
+
+static inline uint64_t
+cnxk_tim_cntfrq(void)
+{
+ return 0;
+}
+#endif
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
--
2.17.1
* [dpdk-dev] [PATCH v2 23/33] event/cnxk: add devargs to disable NPA
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (21 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 22/33] event/cnxk: create and free timer adapter pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 24/33] event/cnxk: allow adapters to resize inflights pbhagavatula
` (11 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
If the chunks are allocated from the NPA, TIM can automatically free
them while traversing the list of chunks.
Add a devargs option to disable the NPA and use a software mempool to manage chunks instead.
Example:
--dev "0002:0e:00.0,tim_disable_npa=1"
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 10 ++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_eventdev.h | 9 +++
drivers/event/cnxk/cnxk_tim_evdev.c | 86 +++++++++++++++++++++--------
drivers/event/cnxk/cnxk_tim_evdev.h | 5 ++
6 files changed, 92 insertions(+), 24 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 662df2971..9e14f99f2 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -93,6 +93,16 @@ Runtime Config Options
-a 0002:0e:00.0,qos=[1-50-50-50]
+- ``TIM disable NPA``
+
+ By default, chunks are allocated from the NPA, so TIM can automatically free
+ them when traversing the list of chunks. The ``tim_disable_npa`` devargs
+ parameter disables the NPA and uses a software mempool to manage chunks.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_disable_npa=1
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 30ca0d901..807e666d3 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -502,4 +502,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
- CN10K_SSO_GW_MODE "=<int>");
+ CN10K_SSO_GW_MODE "=<int>"
+ CNXK_TIM_DISABLE_NPA "=1");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 773152e55..3e27fce4a 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -571,4 +571,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
- CN9K_SSO_SINGLE_WS "=1");
+ CN9K_SSO_SINGLE_WS "=1"
+ CNXK_TIM_DISABLE_NPA "=1");
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 81bb3488a..257772cb2 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -159,6 +159,15 @@ struct cnxk_sso_hws_cookie {
bool configured;
} __rte_cache_aligned;
+static inline int
+parse_kvargs_flag(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ *(uint8_t *)opaque = !!atoi(value);
+ return 0;
+}
+
static inline int
parse_kvargs_value(const char *key, const char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 986ad8493..44bcad94d 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -31,30 +31,43 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
cache_sz = cache_sz != 0 ? cache_sz : 2;
tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
- tim_ring->chunk_pool = rte_mempool_create_empty(
- pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz, cache_sz, 0,
- rte_socket_id(), mp_flags);
-
- if (tim_ring->chunk_pool == NULL) {
- plt_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
+ if (!tim_ring->disable_npa) {
+ tim_ring->chunk_pool = rte_mempool_create_empty(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz,
+ cache_sz, 0, rte_socket_id(), mp_flags);
+
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
- rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
- rte_mbuf_platform_mempool_ops(), NULL);
- if (rc < 0) {
- plt_err("Unable to set chunkpool ops");
- goto free;
- }
+ rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+ rte_mbuf_platform_mempool_ops(),
+ NULL);
+ if (rc < 0) {
+ plt_err("Unable to set chunkpool ops");
+ goto free;
+ }
- rc = rte_mempool_populate_default(tim_ring->chunk_pool);
- if (rc < 0) {
- plt_err("Unable to set populate chunkpool.");
- goto free;
+ rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+ if (rc < 0) {
+ plt_err("Unable to populate chunkpool.");
+ goto free;
+ }
+ tim_ring->aura = roc_npa_aura_handle_to_aura(
+ tim_ring->chunk_pool->pool_id);
+ tim_ring->ena_dfb = 0;
+ } else {
+ tim_ring->chunk_pool = rte_mempool_create(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz,
+ cache_sz, 0, NULL, NULL, NULL, NULL, rte_socket_id(),
+ mp_flags);
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
+ tim_ring->ena_dfb = 1;
}
- tim_ring->aura =
- roc_npa_aura_handle_to_aura(tim_ring->chunk_pool->pool_id);
- tim_ring->ena_dfb = 0;
return 0;
@@ -110,8 +123,17 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
tim_ring->nb_timers = rcfg->nb_timers;
tim_ring->chunk_sz = dev->chunk_sz;
+ tim_ring->disable_npa = dev->disable_npa;
+
+ if (tim_ring->disable_npa) {
+ tim_ring->nb_chunks =
+ tim_ring->nb_timers /
+ CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+ tim_ring->nb_chunks = tim_ring->nb_chunks * tim_ring->nb_bkts;
+ } else {
+ tim_ring->nb_chunks = tim_ring->nb_timers;
+ }
- tim_ring->nb_chunks = tim_ring->nb_timers;
tim_ring->nb_chunk_slots = CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
/* Create buckets. */
tim_ring->bkt =
@@ -199,6 +221,24 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
return 0;
}
+static void
+cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
+{
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ return;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, CNXK_TIM_DISABLE_NPA, &parse_kvargs_flag,
+ &dev->disable_npa);
+
+ rte_kvargs_free(kvlist);
+}
+
void
cnxk_tim_init(struct roc_sso *sso)
{
@@ -217,6 +257,8 @@ cnxk_tim_init(struct roc_sso *sso)
}
dev = mz->addr;
+ cnxk_tim_parse_devargs(sso->pci_dev->device.devargs, dev);
+
dev->tim.roc_sso = sso;
rc = roc_tim_init(&dev->tim);
if (rc < 0) {
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 62bb2f1eb..8c21ab1fe 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -33,11 +33,15 @@
#define CN9K_TIM_MIN_TMO_TKS (256)
+#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
uint16_t nb_rings;
uint32_t chunk_sz;
+ /* Dev args */
+ uint8_t disable_npa;
};
enum cnxk_tim_clk_src {
@@ -75,6 +79,7 @@ struct cnxk_tim_ring {
struct rte_mempool *chunk_pool;
uint64_t arm_cnt;
uint8_t prod_type_sp;
+ uint8_t disable_npa;
uint8_t ena_dfb;
uint16_t ring_id;
uint32_t aura;
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 24/33] event/cnxk: allow adapters to resize inflights
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (22 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 23/33] event/cnxk: add devargs to disable NPA pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 25/33] event/cnxk: add timer adapter info function pbhagavatula
` (10 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add internal SSO functions to allow event adapters to resize SSO buffers
that are used to hold in-flight events in DRAM.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 33 ++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 7 +++
drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 ++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.c | 5 ++
drivers/event/cnxk/meson.build | 1 +
5 files changed, 113 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index c404bb586..29e38478d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -77,6 +77,9 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
if (dev->xae_cnt)
xaq_cnt += dev->xae_cnt / dev->sso.xae_waes;
+ else if (dev->adptr_xae_cnt)
+ xaq_cnt += (dev->adptr_xae_cnt / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
else
xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
(CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
@@ -125,6 +128,36 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
return rc;
}
+int
+cnxk_sso_xae_reconfigure(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc = 0;
+
+ if (event_dev->data->dev_started)
+ event_dev->dev_ops->dev_stop(event_dev);
+
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0) {
+ plt_err("Failed to alloc XAQ %d", rc);
+ return rc;
+ }
+
+ rte_mb();
+ if (event_dev->data->dev_started)
+ event_dev->dev_ops->dev_start(event_dev);
+
+ return 0;
+}
+
int
cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
cnxk_sso_init_hws_mem_t init_hws_fn,
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 257772cb2..721d8d9ad 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -81,6 +81,10 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ uint64_t adptr_xae_cnt;
+ uint16_t tim_adptr_ring_cnt;
+ uint16_t *timer_adptr_rings;
+ uint64_t *timer_adptr_sz;
/* Dev args */
uint32_t xae_cnt;
uint8_t qos_queue_cnt;
@@ -190,7 +194,10 @@ cnxk_sso_hws_get_cookie(void *ws)
}
/* Configuration functions */
+int cnxk_sso_xae_reconfigure(struct rte_eventdev *event_dev);
int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
+void cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
+ uint32_t event_type);
/* Common ops API. */
int cnxk_sso_init(struct rte_eventdev *event_dev);
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
new file mode 100644
index 000000000..6d9615453
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_eventdev.h"
+
+void
+cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
+ uint32_t event_type)
+{
+ int i;
+
+ switch (event_type) {
+ case RTE_EVENT_TYPE_TIMER: {
+ struct cnxk_tim_ring *timr = data;
+ uint16_t *old_ring_ptr;
+ uint64_t *old_sz_ptr;
+
+ for (i = 0; i < dev->tim_adptr_ring_cnt; i++) {
+ if (timr->ring_id != dev->timer_adptr_rings[i])
+ continue;
+ if (timr->nb_timers == dev->timer_adptr_sz[i])
+ return;
+ dev->adptr_xae_cnt -= dev->timer_adptr_sz[i];
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_sz[i] = timr->nb_timers;
+
+ return;
+ }
+
+ dev->tim_adptr_ring_cnt++;
+ old_ring_ptr = dev->timer_adptr_rings;
+ old_sz_ptr = dev->timer_adptr_sz;
+
+ dev->timer_adptr_rings = rte_realloc(
+ dev->timer_adptr_rings,
+ sizeof(uint16_t) * dev->tim_adptr_ring_cnt, 0);
+ if (dev->timer_adptr_rings == NULL) {
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_rings = old_ring_ptr;
+ dev->tim_adptr_ring_cnt--;
+ return;
+ }
+
+ dev->timer_adptr_sz = rte_realloc(
+ dev->timer_adptr_sz,
+ sizeof(uint64_t) * dev->tim_adptr_ring_cnt, 0);
+
+ if (dev->timer_adptr_sz == NULL) {
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_sz = old_sz_ptr;
+ dev->tim_adptr_ring_cnt--;
+ return;
+ }
+
+ dev->timer_adptr_rings[dev->tim_adptr_ring_cnt - 1] =
+ timr->ring_id;
+ dev->timer_adptr_sz[dev->tim_adptr_ring_cnt - 1] =
+ timr->nb_timers;
+
+ dev->adptr_xae_cnt += timr->nb_timers;
+ break;
+ }
+ default:
+ break;
+ }
+}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 44bcad94d..4add1d659 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -161,6 +161,11 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+ /* Update SSO xae count. */
+ cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
+ RTE_EVENT_TYPE_TIMER);
+ cnxk_sso_xae_reconfigure(dev->event_dev);
+
plt_tim_dbg(
"Total memory used %" PRIu64 "MB\n",
(uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 44a300aa0..6f1db3440 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -12,6 +12,7 @@ sources = files('cn10k_worker.c',
'cn10k_eventdev.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
+ 'cnxk_eventdev_adptr.c',
'cnxk_eventdev.c',
'cnxk_eventdev_selftest.c',
'cnxk_eventdev_stats.c',
--
2.17.1
* [dpdk-dev] [PATCH v2 25/33] event/cnxk: add timer adapter info function
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (23 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 24/33] event/cnxk: allow adapters to resize inflights pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 26/33] event/cnxk: add devargs for chunk size and rings pbhagavatula
` (9 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add TIM event timer adapter info get function.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 4add1d659..6bbfadb25 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -76,6 +76,18 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
return rc;
}
+static void
+cnxk_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer_adapter_info *adptr_info)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+
+ adptr_info->max_tmo_ns = tim_ring->max_tout;
+ adptr_info->min_resolution_ns = tim_ring->tck_nsec;
+ rte_memcpy(&adptr_info->conf, &adptr->data->conf,
+ sizeof(struct rte_event_timer_adapter_conf));
+}
+
static int
cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
{
@@ -218,6 +230,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
cnxk_tim_ops.init = cnxk_tim_ring_create;
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+ cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
--
2.17.1
* [dpdk-dev] [PATCH v2 26/33] event/cnxk: add devargs for chunk size and rings
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (24 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 25/33] event/cnxk: add timer adapter info function pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 27/33] event/cnxk: add TIM bucket operations pbhagavatula
` (8 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add devargs to control the default chunk size and the maximum number of
timer rings to attach to a given RVU PF.
Example:
--dev "0002:1e:00.0,tim_chnk_slots=1024"
--dev "0002:1e:00.0,tim_rings_lmt=4"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 23 +++++++++++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 4 +++-
drivers/event/cnxk/cn9k_eventdev.c | 4 +++-
drivers/event/cnxk/cnxk_tim_evdev.c | 14 +++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 4 ++++
5 files changed, 46 insertions(+), 3 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 9e14f99f2..05dcf06f4 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -103,6 +103,29 @@ Runtime Config Options
-a 0002:0e:00.0,tim_disable_npa=1
+- ``TIM modify chunk slots``
+
+ The ``tim_chnk_slots`` devargs can be used to modify the number of chunk
+ slots. Chunks are used to store event timers; a chunk can be visualised as
+ an array whose last element points to the next chunk, while the remaining
+ elements store events. TIM traverses the list of chunks and enqueues the
+ event timers to SSO. The default value is 255 and the max value is 4095.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_chnk_slots=1023
+
+- ``TIM limit max rings reserved``
+
+ The ``tim_rings_lmt`` devargs can be used to limit the maximum number of
+ TIM rings, i.e. event timer adapters, reserved on probe. Since TIM rings
+ are HW resources, this avoids starving other applications by not grabbing
+ all the rings.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_rings_lmt=5
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 807e666d3..a5a614196 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -503,4 +503,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
CN10K_SSO_GW_MODE "=<int>"
- CNXK_TIM_DISABLE_NPA "=1");
+ CNXK_TIM_DISABLE_NPA "=1"
+ CNXK_TIM_CHNK_SLOTS "=<int>"
+ CNXK_TIM_RINGS_LMT "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 3e27fce4a..cfea3723a 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -572,4 +572,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
CN9K_SSO_SINGLE_WS "=1"
- CNXK_TIM_DISABLE_NPA "=1");
+ CNXK_TIM_DISABLE_NPA "=1"
+ CNXK_TIM_CHNK_SLOTS "=<int>"
+ CNXK_TIM_RINGS_LMT "=<int>");
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 6bbfadb25..07ec57fd2 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -253,6 +253,10 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
rte_kvargs_process(kvlist, CNXK_TIM_DISABLE_NPA, &parse_kvargs_flag,
&dev->disable_npa);
+ rte_kvargs_process(kvlist, CNXK_TIM_CHNK_SLOTS, &parse_kvargs_value,
+ &dev->chunk_slots);
+ rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
+ &dev->min_ring_cnt);
rte_kvargs_free(kvlist);
}
@@ -278,6 +282,7 @@ cnxk_tim_init(struct roc_sso *sso)
cnxk_tim_parse_devargs(sso->pci_dev->device.devargs, dev);
dev->tim.roc_sso = sso;
+ dev->tim.nb_lfs = dev->min_ring_cnt;
rc = roc_tim_init(&dev->tim);
if (rc < 0) {
plt_err("Failed to initialize roc tim resources");
@@ -285,7 +290,14 @@ cnxk_tim_init(struct roc_sso *sso)
return;
}
dev->nb_rings = rc;
- dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+
+ if (dev->chunk_slots && dev->chunk_slots <= CNXK_TIM_MAX_CHUNK_SLOTS &&
+ dev->chunk_slots >= CNXK_TIM_MIN_CHUNK_SLOTS) {
+ dev->chunk_sz =
+ (dev->chunk_slots + 1) * CNXK_TIM_CHUNK_ALIGNMENT;
+ } else {
+ dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+ }
}
void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 8c21ab1fe..6208c150a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -34,6 +34,8 @@
#define CN9K_TIM_MIN_TMO_TKS (256)
#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
+#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
+#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
struct cnxk_tim_evdev {
struct roc_tim tim;
@@ -42,6 +44,8 @@ struct cnxk_tim_evdev {
uint32_t chunk_sz;
/* Dev args */
uint8_t disable_npa;
+ uint16_t chunk_slots;
+ uint16_t min_ring_cnt;
};
enum cnxk_tim_clk_src {
--
2.17.1
* [dpdk-dev] [PATCH v2 27/33] event/cnxk: add TIM bucket operations
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (25 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 26/33] event/cnxk: add devargs for chunk size and rings pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 28/33] event/cnxk: add timer arm routine pbhagavatula
` (7 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add TIM bucket operations used for event timer arm and cancel.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.h | 30 +++++++
drivers/event/cnxk/cnxk_tim_worker.c | 6 ++
drivers/event/cnxk/cnxk_tim_worker.h | 123 +++++++++++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
4 files changed, 160 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 6208c150a..c844d9b61 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -37,6 +37,36 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
+#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
+ ((1ULL << (64 - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
+#define TIM_BUCKET_W1_S_LOCK (40)
+#define TIM_BUCKET_W1_M_LOCK \
+ ((1ULL << (TIM_BUCKET_W1_S_CHUNK_REMAINDER - TIM_BUCKET_W1_S_LOCK)) - 1)
+#define TIM_BUCKET_W1_S_RSVD (35)
+#define TIM_BUCKET_W1_S_BSK (34)
+#define TIM_BUCKET_W1_M_BSK \
+ ((1ULL << (TIM_BUCKET_W1_S_RSVD - TIM_BUCKET_W1_S_BSK)) - 1)
+#define TIM_BUCKET_W1_S_HBT (33)
+#define TIM_BUCKET_W1_M_HBT \
+ ((1ULL << (TIM_BUCKET_W1_S_BSK - TIM_BUCKET_W1_S_HBT)) - 1)
+#define TIM_BUCKET_W1_S_SBT (32)
+#define TIM_BUCKET_W1_M_SBT \
+ ((1ULL << (TIM_BUCKET_W1_S_HBT - TIM_BUCKET_W1_S_SBT)) - 1)
+#define TIM_BUCKET_W1_S_NUM_ENTRIES (0)
+#define TIM_BUCKET_W1_M_NUM_ENTRIES \
+ ((1ULL << (TIM_BUCKET_W1_S_SBT - TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1)
+
+#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN)
+
+#define TIM_BUCKET_CHUNK_REMAIN \
+ (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER)
+
+#define TIM_BUCKET_LOCK (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK)
+
+#define TIM_BUCKET_SEMA_WLOCK \
+ (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
new file mode 100644
index 000000000..564687d9b
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#include "cnxk_tim_evdev.h"
+#include "cnxk_tim_worker.h"
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
new file mode 100644
index 000000000..bd205e5c1
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __CNXK_TIM_WORKER_H__
+#define __CNXK_TIM_WORKER_H__
+
+#include "cnxk_tim_evdev.h"
+
+static inline uint8_t
+cnxk_tim_bkt_fetch_lock(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK;
+}
+
+static inline int16_t
+cnxk_tim_bkt_fetch_rem(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) &
+ TIM_BUCKET_W1_M_CHUNK_REMAINDER;
+}
+
+static inline int16_t
+cnxk_tim_bkt_get_rem(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+cnxk_tim_bkt_set_rem(struct cnxk_tim_bkt *bktp, uint16_t v)
+{
+ __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline void
+cnxk_tim_bkt_sub_rem(struct cnxk_tim_bkt *bktp, uint16_t v)
+{
+ __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline uint8_t
+cnxk_tim_bkt_get_hbt(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT;
+}
+
+static inline uint8_t
+cnxk_tim_bkt_get_bsk(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK;
+}
+
+static inline uint64_t
+cnxk_tim_bkt_clr_bsk(struct cnxk_tim_bkt *bktp)
+{
+ /* Clear everything except lock. */
+ const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK;
+
+ return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_fetch_sema_lock(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
+ __ATOMIC_ACQUIRE);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_fetch_sema(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA, __ATOMIC_RELAXED);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_inc_lock(struct cnxk_tim_bkt *bktp)
+{
+ const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK;
+
+ return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+cnxk_tim_bkt_dec_lock(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELEASE);
+}
+
+static inline void
+cnxk_tim_bkt_dec_lock_relaxed(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELAXED);
+}
+
+static inline uint32_t
+cnxk_tim_bkt_get_nent(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_NUM_ENTRIES) &
+ TIM_BUCKET_W1_M_NUM_ENTRIES;
+}
+
+static inline void
+cnxk_tim_bkt_inc_nent(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_add_fetch(&bktp->nb_entry, 1, __ATOMIC_RELAXED);
+}
+
+static inline void
+cnxk_tim_bkt_add_nent(struct cnxk_tim_bkt *bktp, uint32_t v)
+{
+ __atomic_add_fetch(&bktp->nb_entry, v, __ATOMIC_RELAXED);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_clr_nent(struct cnxk_tim_bkt *bktp)
+{
+ const uint64_t v =
+ ~(TIM_BUCKET_W1_M_NUM_ENTRIES << TIM_BUCKET_W1_S_NUM_ENTRIES);
+
+ return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+#endif /* __CNXK_TIM_WORKER_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 6f1db3440..c4c9e9009 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -16,6 +16,7 @@ sources = files('cn10k_worker.c',
'cnxk_eventdev.c',
'cnxk_eventdev_selftest.c',
'cnxk_eventdev_stats.c',
+ 'cnxk_tim_worker.c',
'cnxk_tim_evdev.c',
)
--
2.17.1
* [dpdk-dev] [PATCH v2 28/33] event/cnxk: add timer arm routine
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (26 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 27/33] event/cnxk: add TIM bucket operations pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 29/33] event/cnxk: add timer arm timeout burst pbhagavatula
` (6 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer arm routine.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 18 ++
drivers/event/cnxk/cnxk_tim_evdev.h | 23 ++
drivers/event/cnxk/cnxk_tim_worker.c | 95 +++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 300 +++++++++++++++++++++++++++
4 files changed, 436 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 07ec57fd2..a3be66f9a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -76,6 +76,21 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
return rc;
}
+static void
+cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
+{
+ uint8_t prod_flag = !tim_ring->prod_type_sp;
+
+ /* [DFB/FB] [SP][MP]*/
+ const rte_event_timer_arm_burst_t arm_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) [_f2][_f1] = cnxk_tim_arm_burst_##_name,
+ TIM_ARM_FASTPATH_MODES
+#undef FP
+ };
+
+ cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
+}
+
static void
cnxk_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
struct rte_event_timer_adapter_info *adptr_info)
@@ -173,6 +188,9 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+ /* Set fastpath ops. */
+ cnxk_tim_set_fp_ops(tim_ring);
+
/* Update SSO xae count. */
cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
RTE_EVENT_TYPE_TIMER);
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index c844d9b61..7cbcdb701 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -14,6 +14,7 @@
#include <rte_event_timer_adapter.h>
#include <rte_malloc.h>
#include <rte_memzone.h>
+#include <rte_reciprocal.h>
#include "roc_api.h"
@@ -37,6 +38,11 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define CNXK_TIM_SP 0x1
+#define CNXK_TIM_MP 0x2
+#define CNXK_TIM_ENA_FB 0x10
+#define CNXK_TIM_ENA_DFB 0x20
+
#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
((1ULL << (64 - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
@@ -107,10 +113,14 @@ struct cnxk_tim_ring {
uintptr_t base;
uint16_t nb_chunk_slots;
uint32_t nb_bkts;
+ uint64_t last_updt_cyc;
+ uint64_t ring_start_cyc;
uint64_t tck_int;
uint64_t tot_int;
struct cnxk_tim_bkt *bkt;
struct rte_mempool *chunk_pool;
+ struct rte_reciprocal_u64 fast_div;
+ struct rte_reciprocal_u64 fast_bkt;
uint64_t arm_cnt;
uint8_t prod_type_sp;
uint8_t disable_npa;
@@ -201,6 +211,19 @@ cnxk_tim_cntfrq(void)
}
#endif
+#define TIM_ARM_FASTPATH_MODES \
+ FP(sp, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(mp, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+
+#define FP(_name, _f2, _f1, flags) \
+ uint16_t cnxk_tim_arm_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint16_t nb_timers);
+TIM_ARM_FASTPATH_MODES
+#undef FP
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 564687d9b..eec39b9c2 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -4,3 +4,98 @@
#include "cnxk_tim_evdev.h"
#include "cnxk_tim_worker.h"
+
+static inline int
+cnxk_tim_arm_checks(const struct cnxk_tim_ring *const tim_ring,
+ struct rte_event_timer *const tim)
+{
+ if (unlikely(tim->state)) {
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ rte_errno = EALREADY;
+ goto fail;
+ }
+
+ if (unlikely(!tim->timeout_ticks ||
+ tim->timeout_ticks > tim_ring->nb_bkts)) {
+ tim->state = tim->timeout_ticks ?
+ RTE_EVENT_TIMER_ERROR_TOOLATE :
+ RTE_EVENT_TIMER_ERROR_TOOEARLY;
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ return -EINVAL;
+}
+
+static inline void
+cnxk_tim_format_event(const struct rte_event_timer *const tim,
+ struct cnxk_tim_ent *const entry)
+{
+ entry->w0 = (tim->ev.event & 0xFFC000000000) >> 6 |
+ (tim->ev.event & 0xFFFFFFFFF);
+ entry->wqe = tim->ev.u64;
+}
+
+static inline void
+cnxk_tim_sync_start_cyc(struct cnxk_tim_ring *tim_ring)
+{
+ uint64_t cur_cyc = cnxk_tim_cntvct();
+ uint32_t real_bkt;
+
+ if (cur_cyc - tim_ring->last_updt_cyc > tim_ring->tot_int) {
+ real_bkt = plt_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
+ cur_cyc = cnxk_tim_cntvct();
+
+ tim_ring->ring_start_cyc =
+ cur_cyc - (real_bkt * tim_ring->tck_int);
+ tim_ring->last_updt_cyc = cur_cyc;
+ }
+}
+
+static __rte_always_inline uint16_t
+cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim, const uint16_t nb_timers,
+ const uint8_t flags)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_ent entry;
+ uint16_t index;
+ int ret;
+
+ cnxk_tim_sync_start_cyc(tim_ring);
+ for (index = 0; index < nb_timers; index++) {
+ if (cnxk_tim_arm_checks(tim_ring, tim[index]))
+ break;
+
+ cnxk_tim_format_event(tim[index], &entry);
+ if (flags & CNXK_TIM_SP)
+ ret = cnxk_tim_add_entry_sp(tim_ring,
+ tim[index]->timeout_ticks,
+ tim[index], &entry, flags);
+ if (flags & CNXK_TIM_MP)
+ ret = cnxk_tim_add_entry_mp(tim_ring,
+ tim[index]->timeout_ticks,
+ tim[index], &entry, flags);
+
+ if (unlikely(ret)) {
+ rte_errno = -ret;
+ break;
+ }
+ }
+
+ return index;
+}
+
+#define FP(_name, _f2, _f1, _flags) \
+ uint16_t __rte_noinline cnxk_tim_arm_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint16_t nb_timers) \
+ { \
+ return cnxk_tim_timer_arm_burst(adptr, tim, nb_timers, \
+ _flags); \
+ }
+TIM_ARM_FASTPATH_MODES
+#undef FP
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index bd205e5c1..efdcf8969 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -120,4 +120,304 @@ cnxk_tim_bkt_clr_nent(struct cnxk_tim_bkt *bktp)
return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
}
+static inline uint64_t
+cnxk_tim_bkt_fast_mod(uint64_t n, uint64_t d, struct rte_reciprocal_u64 R)
+{
+ return (n - (d * rte_reciprocal_divide_u64(n, &R)));
+}
+
+static __rte_always_inline void
+cnxk_tim_get_target_bucket(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct cnxk_tim_bkt **bkt,
+ struct cnxk_tim_bkt **mirr_bkt)
+{
+ const uint64_t bkt_cyc = cnxk_tim_cntvct() - tim_ring->ring_start_cyc;
+ uint64_t bucket =
+ rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div) +
+ rel_bkt;
+ uint64_t mirr_bucket = 0;
+
+ bucket = cnxk_tim_bkt_fast_mod(bucket, tim_ring->nb_bkts,
+ tim_ring->fast_bkt);
+ mirr_bucket =
+ cnxk_tim_bkt_fast_mod(bucket + (tim_ring->nb_bkts >> 1),
+ tim_ring->nb_bkts, tim_ring->fast_bkt);
+ *bkt = &tim_ring->bkt[bucket];
+ *mirr_bkt = &tim_ring->bkt[mirr_bucket];
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_clr_bkt(struct cnxk_tim_ring *const tim_ring,
+ struct cnxk_tim_bkt *const bkt)
+{
+#define TIM_MAX_OUTSTANDING_OBJ 64
+ void *pend_chunks[TIM_MAX_OUTSTANDING_OBJ];
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_ent *pnext;
+ uint8_t objs = 0;
+
+ chunk = ((struct cnxk_tim_ent *)(uintptr_t)bkt->first_chunk);
+ chunk = (struct cnxk_tim_ent *)(uintptr_t)(chunk +
+ tim_ring->nb_chunk_slots)
+ ->w0;
+ while (chunk) {
+ pnext = (struct cnxk_tim_ent *)(uintptr_t)(
+ (chunk + tim_ring->nb_chunk_slots)->w0);
+ if (objs == TIM_MAX_OUTSTANDING_OBJ) {
+ rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
+ objs);
+ objs = 0;
+ }
+ pend_chunks[objs++] = chunk;
+ chunk = pnext;
+ }
+
+ if (objs)
+ rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks, objs);
+
+ return (struct cnxk_tim_ent *)(uintptr_t)bkt->first_chunk;
+}
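cnxk_tim_clr_bkt() walks a singly linked chain of chunks in which the entry one past the last usable slot stores the address of the next chunk, batching up to 64 chunks per mempool put. The link layout can be sketched stand-alone (`NB_SLOTS` is a hypothetical stand-in for `tim_ring->nb_chunk_slots`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NB_SLOTS 3 /* stand-in for tim_ring->nb_chunk_slots */

struct ent {
	uint64_t w0;
	uint64_t wqe;
};

/* Each chunk is NB_SLOTS entries plus one trailing entry whose w0 holds
 * the address of the next chunk (0 terminates the chain); this is the
 * layout cnxk_tim_clr_bkt() traverses before bulk-freeing the chunks. */
static size_t count_chunks(struct ent *chunk)
{
	size_t n = 0;

	while (chunk) {
		n++;
		chunk = (struct ent *)(uintptr_t)chunk[NB_SLOTS].w0;
	}
	return n;
}
```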
+
+static struct cnxk_tim_ent *
+cnxk_tim_refill_chunk(struct cnxk_tim_bkt *const bkt,
+ struct cnxk_tim_bkt *const mirr_bkt,
+ struct cnxk_tim_ring *const tim_ring)
+{
+ struct cnxk_tim_ent *chunk;
+
+ if (bkt->nb_entry || !bkt->first_chunk) {
+ if (unlikely(rte_mempool_get(tim_ring->chunk_pool,
+ (void **)&chunk)))
+ return NULL;
+ if (bkt->nb_entry) {
+ *(uint64_t *)(((struct cnxk_tim_ent *)
+ mirr_bkt->current_chunk) +
+ tim_ring->nb_chunk_slots) =
+ (uintptr_t)chunk;
+ } else {
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ } else {
+ chunk = cnxk_tim_clr_bkt(tim_ring, bkt);
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+
+ return chunk;
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_insert_chunk(struct cnxk_tim_bkt *const bkt,
+ struct cnxk_tim_bkt *const mirr_bkt,
+ struct cnxk_tim_ring *const tim_ring)
+{
+ struct cnxk_tim_ent *chunk;
+
+ if (unlikely(rte_mempool_get(tim_ring->chunk_pool, (void **)&chunk)))
+ return NULL;
+
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+ if (bkt->nb_entry) {
+ *(uint64_t *)(((struct cnxk_tim_ent *)(uintptr_t)
+ mirr_bkt->current_chunk) +
+ tim_ring->nb_chunk_slots) = (uintptr_t)chunk;
+ } else {
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ return chunk;
+}
+
+static __rte_always_inline int
+cnxk_tim_add_entry_sp(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct rte_event_timer *const tim,
+ const struct cnxk_tim_ent *const pent,
+ const uint8_t flags)
+{
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+ int16_t rem;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+
+ /* Get bucket semaphore. */
+ lock_sema = cnxk_tim_bkt_fetch_sema_lock(bkt);
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+ /* Insert the work. */
+ rem = cnxk_tim_bkt_fetch_rem(lock_sema);
+
+ if (!rem) {
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ bkt->chunk_remainder = 0;
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOMEM;
+ }
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ bkt->chunk_remainder = tim_ring->nb_chunk_slots - 1;
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += tim_ring->nb_chunk_slots - rem;
+ }
+
+ /* Copy work entry. */
+ *chunk = *pent;
+
+ tim->impl_opaque[0] = (uintptr_t)chunk;
+ tim->impl_opaque[1] = (uintptr_t)bkt;
+ __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
+ cnxk_tim_bkt_inc_nent(bkt);
+ cnxk_tim_bkt_dec_lock_relaxed(bkt);
+
+ return 0;
+}
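In the arm path above, `chunk_remainder` counts free slots from the end of the current chunk, so a non-zero remainder places the next entry at index `nb_chunk_slots - rem`, while a zero remainder forces a fresh chunk. A small model of that slot arithmetic (a sketch; names hypothetical):

```c
#include <assert.h>

/* Next free slot in the current chunk given the remaining-slot count,
 * as indexed by cnxk_tim_add_entry_sp()/_mp(): rem == 0 means the chunk
 * is full, otherwise the entry lands at nb_chunk_slots - rem. */
static int next_slot(int nb_chunk_slots, int rem)
{
	return nb_chunk_slots - rem;
}
```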
+
+static __rte_always_inline int
+cnxk_tim_add_entry_mp(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct rte_event_timer *const tim,
+ const struct cnxk_tim_ent *const pent,
+ const uint8_t flags)
+{
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+ int16_t rem;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+ /* Get bucket semaphore. */
+ lock_sema = cnxk_tim_bkt_fetch_sema_lock(bkt);
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+
+ rem = cnxk_tim_bkt_fetch_rem(lock_sema);
+ if (rem < 0) {
+ cnxk_tim_bkt_dec_lock(bkt);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[rem], [%[crem]] \n"
+ " tbz %[rem], 63, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[rem], [%[crem]] \n"
+ " tbnz %[rem], 63, rty%= \n"
+ "dne%=: \n"
+ : [rem] "=&r"(rem)
+ : [crem] "r"(&bkt->w1)
+ : "memory");
+#else
+ while (__atomic_load_n((int64_t *)&bkt->w1, __ATOMIC_RELAXED) <
+ 0)
+ ;
+#endif
+ goto __retry;
+ } else if (!rem) {
+ /* Only one thread can be here. */
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ cnxk_tim_bkt_set_rem(bkt, 0);
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOMEM;
+ }
+ *chunk = *pent;
+ if (cnxk_tim_bkt_fetch_lock(lock_sema)) {
+ do {
+ lock_sema = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (cnxk_tim_bkt_fetch_lock(lock_sema) - 1);
+ }
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ __atomic_store_n(&bkt->chunk_remainder,
+ tim_ring->nb_chunk_slots - 1,
+ __ATOMIC_RELEASE);
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += tim_ring->nb_chunk_slots - rem;
+ *chunk = *pent;
+ }
+
+ tim->impl_opaque[0] = (uintptr_t)chunk;
+ tim->impl_opaque[1] = (uintptr_t)bkt;
+ __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
+ cnxk_tim_bkt_inc_nent(bkt);
+ cnxk_tim_bkt_dec_lock_relaxed(bkt);
+
+ return 0;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 29/33] event/cnxk: add timer arm timeout burst
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (27 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 28/33] event/cnxk: add timer arm routine pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 30/33] event/cnxk: add timer cancel function pbhagavatula
` (5 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer arm timeout burst function.
All the timers requested to be armed have the same timeout.
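The implementation below formats and inserts the timers in groups of at most CNXK_TIM_MAX_BURST entries. The chunking loop can be modelled stand-alone (`MAX_BURST` is a hypothetical stand-in for `CNXK_TIM_MAX_BURST`):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_BURST 32 /* stand-in for CNXK_TIM_MAX_BURST */

/* Model of the arm-timeout burst loop: nb_timers entries are consumed
 * in groups of at most MAX_BURST; the return value counts how many
 * groups were processed. */
static unsigned int num_bursts(uint16_t nb_timers)
{
	unsigned int groups = 0;
	uint16_t arr_idx = 0;

	while (arr_idx < nb_timers) {
		uint16_t idx = 0;

		while (idx < MAX_BURST && arr_idx < nb_timers) {
			idx++;
			arr_idx++;
		}
		groups++;
	}
	return groups;
}
```

Bounding the group size keeps the on-stack entry array (`entry[CNXK_TIM_MAX_BURST]`) cache-resident while still amortizing the per-bucket locking over many timers.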
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 7 ++
drivers/event/cnxk/cnxk_tim_evdev.h | 12 +++
drivers/event/cnxk/cnxk_tim_worker.c | 53 ++++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 141 +++++++++++++++++++++++++++
4 files changed, 213 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index a3be66f9a..e6f31b19f 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -88,7 +88,14 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
#undef FP
};
+ const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2] = {
+#define FP(_name, _f1, flags) [_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
+ TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+ };
+
cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
+ cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
}
static void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 7cbcdb701..04ba3dc8c 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -217,6 +217,10 @@ cnxk_tim_cntfrq(void)
FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+#define TIM_ARM_TMO_FASTPATH_MODES \
+ FP(dfb, 0, CNXK_TIM_ENA_DFB) \
+ FP(fb, 1, CNXK_TIM_ENA_FB)
+
#define FP(_name, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
@@ -224,6 +228,14 @@ cnxk_tim_cntfrq(void)
TIM_ARM_FASTPATH_MODES
#undef FP
+#define FP(_name, _f1, flags) \
+ uint16_t cnxk_tim_arm_tmo_tick_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint64_t timeout_tick, \
+ const uint16_t nb_timers);
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index eec39b9c2..2f1676ec1 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -99,3 +99,56 @@ cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
}
TIM_ARM_FASTPATH_MODES
#undef FP
+
+static __rte_always_inline uint16_t
+cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint64_t timeout_tick,
+ const uint16_t nb_timers, const uint8_t flags)
+{
+ struct cnxk_tim_ent entry[CNXK_TIM_MAX_BURST] __rte_cache_aligned;
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ uint16_t set_timers = 0;
+ uint16_t arr_idx = 0;
+ uint16_t idx;
+ int ret;
+
+ if (unlikely(!timeout_tick || timeout_tick > tim_ring->nb_bkts)) {
+ const enum rte_event_timer_state state =
+ timeout_tick ? RTE_EVENT_TIMER_ERROR_TOOLATE :
+ RTE_EVENT_TIMER_ERROR_TOOEARLY;
+ for (idx = 0; idx < nb_timers; idx++)
+ tim[idx]->state = state;
+
+ rte_errno = EINVAL;
+ return 0;
+ }
+
+ cnxk_tim_sync_start_cyc(tim_ring);
+ while (arr_idx < nb_timers) {
+ for (idx = 0; idx < CNXK_TIM_MAX_BURST && (arr_idx < nb_timers);
+ idx++, arr_idx++) {
+ cnxk_tim_format_event(tim[arr_idx], &entry[idx]);
+ }
+ ret = cnxk_tim_add_entry_brst(tim_ring, timeout_tick,
+ &tim[set_timers], entry, idx,
+ flags);
+ set_timers += ret;
+ if (ret != idx)
+ break;
+ }
+
+ return set_timers;
+}
+
+#define FP(_name, _f1, _flags) \
+ uint16_t __rte_noinline cnxk_tim_arm_tmo_tick_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint64_t timeout_tick, \
+ const uint16_t nb_timers) \
+ { \
+ return cnxk_tim_timer_arm_tmo_brst(adptr, tim, timeout_tick, \
+ nb_timers, _flags); \
+ }
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index efdcf8969..7a4cfd1a6 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -420,4 +420,145 @@ cnxk_tim_add_entry_mp(struct cnxk_tim_ring *const tim_ring,
return 0;
}
+static inline uint16_t
+cnxk_tim_cpy_wrk(uint16_t index, uint16_t cpy_lmt, struct cnxk_tim_ent *chunk,
+ struct rte_event_timer **const tim,
+ const struct cnxk_tim_ent *const ents,
+ const struct cnxk_tim_bkt *const bkt)
+{
+ for (; index < cpy_lmt; index++) {
+ *chunk = *(ents + index);
+ tim[index]->impl_opaque[0] = (uintptr_t)chunk++;
+ tim[index]->impl_opaque[1] = (uintptr_t)bkt;
+ tim[index]->state = RTE_EVENT_TIMER_ARMED;
+ }
+
+ return index;
+}
+
+/* Burst mode functions */
+static inline int
+cnxk_tim_add_entry_brst(struct cnxk_tim_ring *const tim_ring,
+ const uint16_t rel_bkt,
+ struct rte_event_timer **const tim,
+ const struct cnxk_tim_ent *ents,
+ const uint16_t nb_timers, const uint8_t flags)
+{
+ struct cnxk_tim_ent *chunk = NULL;
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_bkt *bkt;
+ uint16_t chunk_remainder;
+ uint16_t index = 0;
+ uint64_t lock_sema;
+ int16_t rem, crem;
+ uint8_t lock_cnt;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+
+ /* Only one thread beyond this. */
+ lock_sema = cnxk_tim_bkt_inc_lock(bkt);
+ lock_cnt = (uint8_t)((lock_sema >> TIM_BUCKET_W1_S_LOCK) &
+ TIM_BUCKET_W1_M_LOCK);
+
+ if (lock_cnt) {
+ cnxk_tim_bkt_dec_lock(bkt);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxrb %w[lock_cnt], [%[lock]] \n"
+ " tst %w[lock_cnt], 255 \n"
+ " beq dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxrb %w[lock_cnt], [%[lock]] \n"
+ " tst %w[lock_cnt], 255 \n"
+ " bne rty%= \n"
+ "dne%=: \n"
+ : [lock_cnt] "=&r"(lock_cnt)
+ : [lock] "r"(&bkt->lock)
+ : "memory");
+#else
+ while (__atomic_load_n(&bkt->lock, __ATOMIC_RELAXED))
+ ;
+#endif
+ goto __retry;
+ }
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+
+ chunk_remainder = cnxk_tim_bkt_fetch_rem(lock_sema);
+ rem = chunk_remainder - nb_timers;
+ if (rem < 0) {
+ crem = tim_ring->nb_chunk_slots - chunk_remainder;
+ if (chunk_remainder && crem) {
+ chunk = ((struct cnxk_tim_ent *)
+ mirr_bkt->current_chunk) +
+ crem;
+
+ index = cnxk_tim_cpy_wrk(index, chunk_remainder, chunk,
+ tim, ents, bkt);
+ cnxk_tim_bkt_sub_rem(bkt, chunk_remainder);
+ cnxk_tim_bkt_add_nent(bkt, chunk_remainder);
+ }
+
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ rte_errno = ENOMEM;
+ tim[index]->state = RTE_EVENT_TIMER_ERROR;
+ return crem;
+ }
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ cnxk_tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
+
+ rem = nb_timers - chunk_remainder;
+ cnxk_tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem);
+ cnxk_tim_bkt_add_nent(bkt, rem);
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += (tim_ring->nb_chunk_slots - chunk_remainder);
+
+ cnxk_tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
+ cnxk_tim_bkt_sub_rem(bkt, nb_timers);
+ cnxk_tim_bkt_add_nent(bkt, nb_timers);
+ }
+
+ cnxk_tim_bkt_dec_lock(bkt);
+
+ return nb_timers;
+}
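When a burst does not fit in the current chunk, the code above first fills the `chunk_remainder` free slots, then allocates a fresh chunk for the rest. That split can be expressed as a small helper (a sketch; names hypothetical):

```c
#include <assert.h>

/* Split nb_timers between the chunk_rem free slots left in the current
 * chunk and a freshly allocated chunk, mirroring the rem < 0 path of
 * cnxk_tim_add_entry_brst(). */
static void split_burst(int chunk_rem, int nb_timers, int *in_cur, int *in_new)
{
	if (nb_timers <= chunk_rem) {
		*in_cur = nb_timers;
		*in_new = 0;
	} else {
		*in_cur = chunk_rem;
		*in_new = nb_timers - chunk_rem;
	}
}
```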
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v2 30/33] event/cnxk: add timer cancel function
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (28 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 29/33] event/cnxk: add timer arm timeout burst pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 31/33] event/cnxk: add timer stats get and reset pbhagavatula
` (4 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a function to cancel an event timer that has been armed.
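The cancel-burst semantics implemented below stop at the first timer that is not in the armed state and report the reason through rte_errno. A state-machine model of that loop (enum values hypothetical, standing in for `enum rte_event_timer_state`):

```c
#include <assert.h>

enum tim_state { TIM_ARMED, TIM_CANCELED, TIM_ERROR };

/* Model of cnxk_tim_timer_cancel_burst(): cancel timers until the
 * first one that is not armed, returning how many were canceled. */
static int cancel_burst(enum tim_state *tims, int nb_timers)
{
	int i;

	for (i = 0; i < nb_timers; i++) {
		if (tims[i] != TIM_ARMED)
			break;
		tims[i] = TIM_CANCELED;
	}
	return i;
}
```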
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 1 +
drivers/event/cnxk/cnxk_tim_evdev.h | 5 ++++
drivers/event/cnxk/cnxk_tim_worker.c | 30 ++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 37 ++++++++++++++++++++++++++++
4 files changed, 73 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index e6f31b19f..edc8706f8 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -96,6 +96,7 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
+ cnxk_tim_ops.cancel_burst = cnxk_tim_timer_cancel_burst;
}
static void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 04ba3dc8c..9cc6e7512 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -236,6 +236,11 @@ TIM_ARM_FASTPATH_MODES
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
+uint16_t
+cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint16_t nb_timers);
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 2f1676ec1..ce6918465 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -152,3 +152,33 @@ cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
}
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
+
+uint16_t
+cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint16_t nb_timers)
+{
+ uint16_t index;
+ int ret;
+
+ RTE_SET_USED(adptr);
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ for (index = 0; index < nb_timers; index++) {
+ if (tim[index]->state == RTE_EVENT_TIMER_CANCELED) {
+ rte_errno = EALREADY;
+ break;
+ }
+
+ if (tim[index]->state != RTE_EVENT_TIMER_ARMED) {
+ rte_errno = EINVAL;
+ break;
+ }
+ ret = cnxk_tim_rm_entry(tim[index]);
+ if (ret) {
+ rte_errno = -ret;
+ break;
+ }
+ }
+
+ return index;
+}
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index 7a4cfd1a6..02f58eb3d 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -561,4 +561,41 @@ cnxk_tim_add_entry_brst(struct cnxk_tim_ring *const tim_ring,
return nb_timers;
}
+static int
+cnxk_tim_rm_entry(struct rte_event_timer *tim)
+{
+ struct cnxk_tim_ent *entry;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+
+ if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0)
+ return -ENOENT;
+
+ entry = (struct cnxk_tim_ent *)(uintptr_t)tim->impl_opaque[0];
+ if (entry->wqe != tim->ev.u64) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ return -ENOENT;
+ }
+
+ bkt = (struct cnxk_tim_bkt *)(uintptr_t)tim->impl_opaque[1];
+ lock_sema = cnxk_tim_bkt_inc_lock(bkt);
+ if (cnxk_tim_bkt_get_hbt(lock_sema) ||
+ !cnxk_tim_bkt_get_nent(lock_sema)) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOENT;
+ }
+
+ entry->w0 = 0;
+ entry->wqe = 0;
+ tim->state = RTE_EVENT_TIMER_CANCELED;
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ cnxk_tim_bkt_dec_lock(bkt);
+
+ return 0;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v2 31/33] event/cnxk: add timer stats get and reset
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (29 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 30/33] event/cnxk: add timer cancel function pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 32/33] event/cnxk: add timer adapter start and stop pbhagavatula
` (3 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter statistics get and reset functions.
Stats are disabled by default and can be enabled through devargs.
Example:
--dev "0002:1e:00.0,tim_stats_ena=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 9 +++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_tim_evdev.c | 50 ++++++++++++++++++++++++----
drivers/event/cnxk/cnxk_tim_evdev.h | 38 ++++++++++++++-------
drivers/event/cnxk/cnxk_tim_worker.c | 11 ++++--
6 files changed, 91 insertions(+), 23 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 05dcf06f4..cfa743da1 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -115,6 +115,15 @@ Runtime Config Options
-a 0002:0e:00.0,tim_chnk_slots=1023
+- ``TIM enable arm/cancel statistics``
+
+ The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of
+ event timer adapter.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_stats_ena=1
+
- ``TIM limit max rings reserved``
The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index a5a614196..2b2025cdb 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -505,4 +505,5 @@ RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CN10K_SSO_GW_MODE "=<int>"
CNXK_TIM_DISABLE_NPA "=1"
CNXK_TIM_CHNK_SLOTS "=<int>"
- CNXK_TIM_RINGS_LMT "=<int>");
+ CNXK_TIM_RINGS_LMT "=<int>"
+ CNXK_TIM_STATS_ENA "=1");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index cfea3723a..e39b4ded2 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -574,4 +574,5 @@ RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CN9K_SSO_SINGLE_WS "=1"
CNXK_TIM_DISABLE_NPA "=1"
CNXK_TIM_CHNK_SLOTS "=<int>"
- CNXK_TIM_RINGS_LMT "=<int>");
+ CNXK_TIM_RINGS_LMT "=<int>"
+ CNXK_TIM_STATS_ENA "=1");
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index edc8706f8..1b2518a64 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -81,21 +81,25 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
{
uint8_t prod_flag = !tim_ring->prod_type_sp;
- /* [DFB/FB] [SP][MP]*/
- const rte_event_timer_arm_burst_t arm_burst[2][2] = {
-#define FP(_name, _f2, _f1, flags) [_f2][_f1] = cnxk_tim_arm_burst_##_name,
+ /* [STATS] [DFB/FB] [SP][MP]*/
+ const rte_event_timer_arm_burst_t arm_burst[2][2][2] = {
+#define FP(_name, _f3, _f2, _f1, flags) \
+ [_f3][_f2][_f1] = cnxk_tim_arm_burst_##_name,
TIM_ARM_FASTPATH_MODES
#undef FP
};
- const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2] = {
-#define FP(_name, _f1, flags) [_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
+ const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) \
+ [_f2][_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
};
- cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
- cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
+ cnxk_tim_ops.arm_burst =
+ arm_burst[tim_ring->enable_stats][tim_ring->ena_dfb][prod_flag];
+ cnxk_tim_ops.arm_tmo_tick_burst =
+ arm_tmo_burst[tim_ring->enable_stats][tim_ring->ena_dfb];
cnxk_tim_ops.cancel_burst = cnxk_tim_timer_cancel_burst;
}
@@ -159,6 +163,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->nb_timers = rcfg->nb_timers;
tim_ring->chunk_sz = dev->chunk_sz;
tim_ring->disable_npa = dev->disable_npa;
+ tim_ring->enable_stats = dev->enable_stats;
if (tim_ring->disable_npa) {
tim_ring->nb_chunks =
@@ -241,6 +246,30 @@ cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
}
+static int
+cnxk_tim_stats_get(const struct rte_event_timer_adapter *adapter,
+ struct rte_event_timer_adapter_stats *stats)
+{
+ struct cnxk_tim_ring *tim_ring = adapter->data->adapter_priv;
+ uint64_t bkt_cyc = cnxk_tim_cntvct() - tim_ring->ring_start_cyc;
+
+ stats->evtim_exp_count =
+ __atomic_load_n(&tim_ring->arm_cnt, __ATOMIC_RELAXED);
+ stats->ev_enq_count = stats->evtim_exp_count;
+ stats->adapter_tick_count =
+ rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div);
+ return 0;
+}
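`adapter_tick_count` above is derived purely from elapsed cycles. A stand-alone model of the computation (a sketch; a plain divide stands in for `rte_reciprocal_divide_u64`):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the adapter_tick_count computation in cnxk_tim_stats_get():
 * elapsed core cycles since ring start divided by cycles-per-tick. */
static uint64_t tick_count(uint64_t now, uint64_t start, uint64_t tck_cyc)
{
	return (now - start) / tck_cyc;
}
```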
+
+static int
+cnxk_tim_stats_reset(const struct rte_event_timer_adapter *adapter)
+{
+ struct cnxk_tim_ring *tim_ring = adapter->data->adapter_priv;
+
+ __atomic_store_n(&tim_ring->arm_cnt, 0, __ATOMIC_RELAXED);
+ return 0;
+}
+
int
cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
uint32_t *caps,
@@ -258,6 +287,11 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
+ if (dev->enable_stats) {
+ cnxk_tim_ops.stats_get = cnxk_tim_stats_get;
+ cnxk_tim_ops.stats_reset = cnxk_tim_stats_reset;
+ }
+
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
@@ -281,6 +315,8 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
&dev->disable_npa);
rte_kvargs_process(kvlist, CNXK_TIM_CHNK_SLOTS, &parse_kvargs_value,
&dev->chunk_slots);
+ rte_kvargs_process(kvlist, CNXK_TIM_STATS_ENA, &parse_kvargs_flag,
+ &dev->enable_stats);
rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
&dev->min_ring_cnt);
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 9cc6e7512..7aa9650c1 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -36,12 +36,14 @@
#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
+#define CNXK_TIM_STATS_ENA "tim_stats_ena"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
-#define CNXK_TIM_SP 0x1
-#define CNXK_TIM_MP 0x2
-#define CNXK_TIM_ENA_FB 0x10
-#define CNXK_TIM_ENA_DFB 0x20
+#define CNXK_TIM_SP 0x1
+#define CNXK_TIM_MP 0x2
+#define CNXK_TIM_ENA_FB 0x10
+#define CNXK_TIM_ENA_DFB 0x20
+#define CNXK_TIM_ENA_STATS 0x40
#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
@@ -82,6 +84,7 @@ struct cnxk_tim_evdev {
uint8_t disable_npa;
uint16_t chunk_slots;
uint16_t min_ring_cnt;
+ uint8_t enable_stats;
};
enum cnxk_tim_clk_src {
@@ -123,6 +126,7 @@ struct cnxk_tim_ring {
struct rte_reciprocal_u64 fast_bkt;
uint64_t arm_cnt;
uint8_t prod_type_sp;
+ uint8_t enable_stats;
uint8_t disable_npa;
uint8_t ena_dfb;
uint16_t ring_id;
@@ -212,23 +216,33 @@ cnxk_tim_cntfrq(void)
#endif
#define TIM_ARM_FASTPATH_MODES \
- FP(sp, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
- FP(mp, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
- FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
- FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+ FP(sp, 0, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(mp, 0, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(fb_sp, 0, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(fb_mp, 0, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP) \
+ FP(stats_sp, 1, 0, 0, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(stats_mp, 1, 0, 1, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(stats_fb_sp, 1, 1, 0, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(stats_fb_mp, 1, 1, 1, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB | CNXK_TIM_MP)
#define TIM_ARM_TMO_FASTPATH_MODES \
- FP(dfb, 0, CNXK_TIM_ENA_DFB) \
- FP(fb, 1, CNXK_TIM_ENA_FB)
+ FP(dfb, 0, 0, CNXK_TIM_ENA_DFB) \
+ FP(fb, 0, 1, CNXK_TIM_ENA_FB) \
+ FP(stats_dfb, 1, 0, CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB) \
+ FP(stats_fb, 1, 1, CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB)
-#define FP(_name, _f2, _f1, flags) \
+#define FP(_name, _f3, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint16_t nb_timers);
TIM_ARM_FASTPATH_MODES
#undef FP
-#define FP(_name, _f1, flags) \
+#define FP(_name, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_tmo_tick_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint64_t timeout_tick, \
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index ce6918465..598379ac4 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -86,10 +86,13 @@ cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
}
}
+ if (flags & CNXK_TIM_ENA_STATS)
+ __atomic_fetch_add(&tim_ring->arm_cnt, index, __ATOMIC_RELAXED);
+
return index;
}
-#define FP(_name, _f2, _f1, _flags) \
+#define FP(_name, _f3, _f2, _f1, _flags) \
uint16_t __rte_noinline cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint16_t nb_timers) \
@@ -138,10 +141,14 @@ cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
break;
}
+ if (flags & CNXK_TIM_ENA_STATS)
+ __atomic_fetch_add(&tim_ring->arm_cnt, set_timers,
+ __ATOMIC_RELAXED);
+
return set_timers;
}
-#define FP(_name, _f1, _flags) \
+#define FP(_name, _f2, _f1, _flags) \
uint16_t __rte_noinline cnxk_tim_arm_tmo_tick_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint64_t timeout_tick, \
--
2.17.1
* [dpdk-dev] [PATCH v2 32/33] event/cnxk: add timer adapter start and stop
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (30 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 31/33] event/cnxk: add timer stats get and reset pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 33/33] event/cnxk: add devargs to control timer adapters pbhagavatula
` (2 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter start and stop functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 71 ++++++++++++++++++++++++++++-
1 file changed, 70 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 1b2518a64..7b28969c9 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -246,6 +246,73 @@ cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
}
+static void
+cnxk_tim_calibrate_start_tsc(struct cnxk_tim_ring *tim_ring)
+{
+#define CNXK_TIM_CALIB_ITER 1E6
+ uint32_t real_bkt, bucket;
+ int icount, ecount = 0;
+ uint64_t bkt_cyc;
+
+ for (icount = 0; icount < CNXK_TIM_CALIB_ITER; icount++) {
+ real_bkt = plt_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
+ bkt_cyc = cnxk_tim_cntvct();
+ bucket = (bkt_cyc - tim_ring->ring_start_cyc) /
+ tim_ring->tck_int;
+ bucket = bucket % (tim_ring->nb_bkts);
+ tim_ring->ring_start_cyc =
+ bkt_cyc - (real_bkt * tim_ring->tck_int);
+ if (bucket != real_bkt)
+ ecount++;
+ }
+ tim_ring->last_updt_cyc = bkt_cyc;
+ plt_tim_dbg("Bucket mispredict %3.2f distance %d\n",
+ 100 - (((double)(icount - ecount) / (double)icount) * 100),
+ bucket - real_bkt);
+}
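The calibration loop above re-derives `ring_start_cyc` from the bucket the hardware currently reports: if the hardware is in bucket `real_bkt`, the ring must have started `real_bkt` ticks' worth of cycles ago, and a software bucket computed from that start value should reproduce the hardware bucket. The core identity (a sketch):

```c
#include <assert.h>
#include <stdint.h>

/* start = now - real_bkt * tick_cycles; computing the bucket back from
 * that start value must reproduce real_bkt (before any wrap-around). */
static uint64_t calib_start(uint64_t now, uint32_t real_bkt, uint64_t tck_int)
{
	return now - (uint64_t)real_bkt * tck_int;
}
```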
+
+static int
+cnxk_tim_ring_start(const struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ rc = roc_tim_lf_enable(&dev->tim, tim_ring->ring_id,
+ &tim_ring->ring_start_cyc, NULL);
+ if (rc < 0)
+ return rc;
+
+ tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, cnxk_tim_cntfrq());
+ tim_ring->tot_int = tim_ring->tck_int * tim_ring->nb_bkts;
+ tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int);
+ tim_ring->fast_bkt = rte_reciprocal_value_u64(tim_ring->nb_bkts);
+
+ cnxk_tim_calibrate_start_tsc(tim_ring);
+
+ return rc;
+}
+
+static int
+cnxk_tim_ring_stop(const struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ rc = roc_tim_lf_disable(&dev->tim, tim_ring->ring_id);
+ if (rc < 0)
+ plt_err("Failed to disable timer ring");
+
+ return rc;
+}
+
static int
cnxk_tim_stats_get(const struct rte_event_timer_adapter *adapter,
struct rte_event_timer_adapter_stats *stats)
@@ -278,13 +345,14 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
RTE_SET_USED(flags);
- RTE_SET_USED(ops);
if (dev == NULL)
return -ENODEV;
cnxk_tim_ops.init = cnxk_tim_ring_create;
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+ cnxk_tim_ops.start = cnxk_tim_ring_start;
+ cnxk_tim_ops.stop = cnxk_tim_ring_stop;
cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
if (dev->enable_stats) {
@@ -295,6 +363,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+ *ops = &cnxk_tim_ops;
return 0;
}
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v2 33/33] event/cnxk: add devargs to control timer adapters
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (31 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 32/33] event/cnxk: add timer adapter start and stop pbhagavatula
@ 2021-04-26 17:44 ` pbhagavatula
2021-04-30 5:11 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver Jerin Jacob
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-26 17:44 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add devargs to control each event timer adapter's (i.e. TIM ring's) internal
parameters uniquely. The expected dict format is
[ring-chnk_slots-disable_npa-stats_ena], where 0 selects the default value.
Example:
--dev "0002:1e:00.0,tim_ring_ctl=[2-1023-1-0]"
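For illustration, the four '-'-separated values inside each bracket map positionally onto the fields of the patch's ``struct cnxk_tim_ctl`` (ring, chunk_slots, disable_npa, enable_stats). A hedged sketch of that parse in Python; the helper name is hypothetical and not part of the driver:

```python
def parse_tim_ring_ctl(value):
    """Parse a tim_ring_ctl devargs string such as "[2-1023-1-0][3-0-0-1]".

    Each bracketed group yields a dict of the four per-ring fields;
    '-' is used as the separator because ',' is reserved by devargs.
    """
    fields = ("ring", "chunk_slots", "disable_npa", "enable_stats")
    out = []
    for group in value.split("["):
        group = group.rstrip("]")
        if not group:
            continue
        vals = [int(tok) for tok in group.split("-")]
        if len(vals) != len(fields):
            raise ValueError("expected [ring-chnk_slots-disable_npa-stats_ena]")
        out.append(dict(zip(fields, vals)))
    return out

assert parse_tim_ring_ctl("[2-1023-1-0]") == [
    {"ring": 2, "chunk_slots": 1023, "disable_npa": 1, "enable_stats": 0}
]
```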
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 11 ++++
drivers/event/cnxk/cnxk_tim_evdev.c | 96 ++++++++++++++++++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 10 +++
3 files changed, 116 insertions(+), 1 deletion(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index cfa743da1..c42784a3b 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -135,6 +135,17 @@ Runtime Config Options
-a 0002:0e:00.0,tim_rings_lmt=5
+- ``TIM ring control internal parameters``
+
+ When using multiple TIM rings, the ``tim_ring_ctl`` devargs can be used to
+ control each TIM ring's internal parameters uniquely. The expected dict
+ format is [ring-chnk_slots-disable_npa-stats_ena], where 0 represents
+ default values.
+
+ For Example::
+
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 7b28969c9..fdc78270d 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -121,7 +121,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
struct cnxk_tim_ring *tim_ring;
- int rc;
+ int i, rc;
if (dev == NULL)
return -ENODEV;
@@ -165,6 +165,20 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->disable_npa = dev->disable_npa;
tim_ring->enable_stats = dev->enable_stats;
+ for (i = 0; i < dev->ring_ctl_cnt; i++) {
+ struct cnxk_tim_ctl *ring_ctl = &dev->ring_ctl_data[i];
+
+ if (ring_ctl->ring == tim_ring->ring_id) {
+ tim_ring->chunk_sz =
+ ring_ctl->chunk_slots ?
+ ((uint32_t)(ring_ctl->chunk_slots + 1) *
+ CNXK_TIM_CHUNK_ALIGNMENT) :
+ tim_ring->chunk_sz;
+ tim_ring->enable_stats = ring_ctl->enable_stats;
+ tim_ring->disable_npa = ring_ctl->disable_npa;
+ }
+ }
+
if (tim_ring->disable_npa) {
tim_ring->nb_chunks =
tim_ring->nb_timers /
@@ -368,6 +382,84 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
return 0;
}
+static void
+cnxk_tim_parse_ring_param(char *value, void *opaque)
+{
+ struct cnxk_tim_evdev *dev = opaque;
+ struct cnxk_tim_ctl ring_ctl = {0};
+ char *tok = strtok(value, "-");
+ struct cnxk_tim_ctl *old_ptr;
+ uint16_t *val;
+
+ val = (uint16_t *)&ring_ctl;
+
+ if (!strlen(value))
+ return;
+
+ while (tok != NULL) {
+ *val = atoi(tok);
+ tok = strtok(NULL, "-");
+ val++;
+ }
+
+ if (val != (&ring_ctl.enable_stats + 1)) {
+ plt_err("Invalid ring param expected [ring-chunk_sz-disable_npa-enable_stats]");
+ return;
+ }
+
+ dev->ring_ctl_cnt++;
+ old_ptr = dev->ring_ctl_data;
+ dev->ring_ctl_data =
+ rte_realloc(dev->ring_ctl_data,
+ sizeof(struct cnxk_tim_ctl) * dev->ring_ctl_cnt, 0);
+ if (dev->ring_ctl_data == NULL) {
+ dev->ring_ctl_data = old_ptr;
+ dev->ring_ctl_cnt--;
+ return;
+ }
+
+ dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl;
+}
+
+static void
+cnxk_tim_parse_ring_ctl_list(const char *value, void *opaque)
+{
+ char *s = strdup(value);
+ char *start = NULL;
+ char *end = NULL;
+ char *f = s;
+
+ while (*s) {
+ if (*s == '[')
+ start = s;
+ else if (*s == ']')
+ end = s;
+
+ if (start && start < end) {
+ *end = 0;
+ cnxk_tim_parse_ring_param(start + 1, opaque);
+ start = end;
+ s = end;
+ }
+ s++;
+ }
+
+ free(f);
+}
+
+static int
+cnxk_tim_parse_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ /* Dict format [ring-chunk_sz-disable_npa-enable_stats] use '-' as ','
+ * isn't allowed. 0 represents default.
+ */
+ cnxk_tim_parse_ring_ctl_list(value, opaque);
+
+ return 0;
+}
+
static void
cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
{
@@ -388,6 +480,8 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
&dev->enable_stats);
rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
&dev->min_ring_cnt);
+ rte_kvargs_process(kvlist, CNXK_TIM_RING_CTL,
+ &cnxk_tim_parse_kvargs_dict, dev);
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 7aa9650c1..d0df226ed 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -38,6 +38,7 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_STATS_ENA "tim_stats_ena"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define CNXK_TIM_RING_CTL "tim_ring_ctl"
#define CNXK_TIM_SP 0x1
#define CNXK_TIM_MP 0x2
@@ -75,6 +76,13 @@
#define TIM_BUCKET_SEMA_WLOCK \
(TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
+struct cnxk_tim_ctl {
+ uint16_t ring;
+ uint16_t chunk_slots;
+ uint16_t disable_npa;
+ uint16_t enable_stats;
+};
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
@@ -85,6 +93,8 @@ struct cnxk_tim_evdev {
uint16_t chunk_slots;
uint16_t min_ring_cnt;
uint8_t enable_stats;
+ uint16_t ring_ctl_cnt;
+ struct cnxk_tim_ctl *ring_ctl_data;
};
enum cnxk_tim_clk_src {
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* Re: [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (32 preceding siblings ...)
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 33/33] event/cnxk: add devargs to control timer adapters pbhagavatula
@ 2021-04-30 5:11 ` Jerin Jacob
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
34 siblings, 0 replies; 185+ messages in thread
From: Jerin Jacob @ 2021-04-30 5:11 UTC (permalink / raw)
To: Pavan Nikhilesh; +Cc: Jerin Jacob, dpdk-dev
On Mon, Apr 26, 2021 at 11:14 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> This patchset adds support for Marvell CN106XX SoC based on 'common/cnxk'
> driver. In future, CN9K a.k.a octeontx2 will also be supported by same
> driver when code is ready and 'event/octeontx2' will be deprecated.
>
> v2 Changes:
> - Split Rx/Tx adapter into separate patch set to remove dependency on net/cnxk
> - Add missing xStats patch.
> - Fix incorrect head wait operation.
# Please check the following checkpatch issues[1]
# Please fix the copyright to Copyright(c) 2021 Marvell[2]
# Please fix documentation to have synergy with rest of doc[2]
[1]
[for-main][dpdk-next-eventdev] $ ./devtools/checkpatches.sh -n 33
### event/cnxk: add devargs to control SSO HWGRP QoS
WARNING:TYPO_SPELLING: 'Aka' may be misspelled - perhaps 'A.k.a.'?
#17:
Qx -> Event queue Aka SSO GGRP.
^^^
total: 0 errors, 1 warnings, 154 lines checked
### event/cnxk: add port config functions
WARNING:TYPO_SPELLING: 'aka' may be misspelled - perhaps 'a.k.a.'?
#6:
Add SSO HWS aka event port setup and release functions.
^^^
WARNING:TYPO_SPELLING: 'aka' may be misspelled - perhaps 'a.k.a.'?
#500: FILE: drivers/event/cnxk/cnxk_eventdev.h:103:
+/* Event port aka GWS */
^^^
total: 0 errors, 2 warnings, 526 lines checked
### event/cnxk: support event timer
WARNING:TYPO_SPELLING: 'aka' may be misspelled - perhaps 'a.k.a.'?
#6:
Add event timer adapter aka TIM initialization on SSO probe.
^^^
WARNING:TYPO_SPELLING: 'aka' may be misspelled - perhaps 'a.k.a.'?
#21: FILE: doc/guides/eventdevs/cnxk.rst:40:
+- Up to 256 TIM rings aka event timer adapters.
^^^
total: 0 errors, 2 warnings, 136 lines checked
[2]
[for-main]dell[dpdk-next-eventdev] $ git diff
diff --git a/MAINTAINERS b/MAINTAINERS
index d1ae33d48f..5a2297e999 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1224,7 +1224,7 @@ M: Jerin Jacob <jerinj@marvell.com>
F: drivers/event/octeontx2/
F: doc/guides/eventdevs/octeontx2.rst
-Marvell OCTEON CNXK
+Marvell cnxk
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
M: Shijith Thotton <sthotton@marvell.com>
F: drivers/event/cnxk/
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index c42784a3be..27cc32e26e 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -1,8 +1,8 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2021 Marvell International Ltd.
+ Copyright(c) 2021 Marvell.
-OCTEON CNXK SSO Eventdev Driver
-==========================
+Marvell cnxk SSO Eventdev Driver
+================================
The SSO PMD (**librte_event_cnxk**) and provides poll mode
eventdev driver support for the inbuilt event device found in the
@@ -11,7 +11,7 @@ eventdev driver support for the inbuilt event device found in the
More information about OCTEON CNXK SoC can be found at `Marvell
Official Website
<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-Supported OCTEON CNXK SoCs
+Supported OCTEON cnxk SoCs
--------------------------
- CN9XX
>
> Pavan Nikhilesh (16):
> event/cnxk: add build infra and device setup
> event/cnxk: add platform specific device probe
> event/cnxk: add common configuration validation
> event/cnxk: allocate event inflight buffers
> event/cnxk: add devargs to configure getwork mode
> event/cnxk: add SSO HW device operations
> event/cnxk: add SSO GWS fastpath enqueue functions
> event/cnxk: add SSO GWS dequeue fastpath functions
> event/cnxk: add SSO selftest and dump
> event/cnxk: add event port and queue xstats
> event/cnxk: add devargs to disable NPA
> event/cnxk: allow adapters to resize inflights
> event/cnxk: add TIM bucket operations
> event/cnxk: add timer arm routine
> event/cnxk: add timer arm timeout burst
> event/cnxk: add timer cancel function
>
> Shijith Thotton (17):
> event/cnxk: add device capabilities function
> event/cnxk: add platform specific device config
> event/cnxk: add event queue config functions
> event/cnxk: add devargs for inflight buffer count
> event/cnxk: add devargs to control SSO HWGRP QoS
> event/cnxk: add port config functions
> event/cnxk: add event port link and unlink
> event/cnxk: add device start function
> event/cnxk: add device stop and close functions
> event/cnxk: support event timer
> event/cnxk: add timer adapter capabilities
> event/cnxk: create and free timer adapter
> event/cnxk: add timer adapter info function
> event/cnxk: add devargs for chunk size and rings
> event/cnxk: add timer stats get and reset
> event/cnxk: add timer adapter start and stop
> event/cnxk: add devargs to control timer adapters
>
> MAINTAINERS | 6 +
> app/test/test_eventdev.c | 14 +
> doc/guides/eventdevs/cnxk.rst | 162 ++
> doc/guides/eventdevs/index.rst | 1 +
> drivers/common/cnxk/roc_sso.c | 63 +
> drivers/common/cnxk/roc_sso.h | 19 +
> drivers/event/cnxk/cn10k_eventdev.c | 509 ++++++
> drivers/event/cnxk/cn10k_worker.c | 115 ++
> drivers/event/cnxk/cn10k_worker.h | 175 +++
> drivers/event/cnxk/cn9k_eventdev.c | 578 +++++++
> drivers/event/cnxk/cn9k_worker.c | 236 +++
> drivers/event/cnxk/cn9k_worker.h | 297 ++++
> drivers/event/cnxk/cnxk_eventdev.c | 647 ++++++++
> drivers/event/cnxk/cnxk_eventdev.h | 253 +++
> drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 +
> drivers/event/cnxk/cnxk_eventdev_selftest.c | 1570 +++++++++++++++++++
> drivers/event/cnxk/cnxk_eventdev_stats.c | 289 ++++
> drivers/event/cnxk/cnxk_tim_evdev.c | 538 +++++++
> drivers/event/cnxk/cnxk_tim_evdev.h | 275 ++++
> drivers/event/cnxk/cnxk_tim_worker.c | 191 +++
> drivers/event/cnxk/cnxk_tim_worker.h | 601 +++++++
> drivers/event/cnxk/cnxk_worker.h | 101 ++
> drivers/event/cnxk/meson.build | 23 +
> drivers/event/cnxk/version.map | 3 +
> drivers/event/meson.build | 1 +
> 25 files changed, 6734 insertions(+)
> create mode 100644 doc/guides/eventdevs/cnxk.rst
> create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
> create mode 100644 drivers/event/cnxk/cn10k_worker.c
> create mode 100644 drivers/event/cnxk/cn10k_worker.h
> create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
> create mode 100644 drivers/event/cnxk/cn9k_worker.c
> create mode 100644 drivers/event/cnxk/cn9k_worker.h
> create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
> create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
> create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
> create mode 100644 drivers/event/cnxk/cnxk_eventdev_selftest.c
> create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
> create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
> create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
> create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
> create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
> create mode 100644 drivers/event/cnxk/cnxk_worker.h
> create mode 100644 drivers/event/cnxk/meson.build
> create mode 100644 drivers/event/cnxk/version.map
>
> --
> 2.17.1
>
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v3 00/33] Marvell CNXK Event device Driver
2021-04-26 17:44 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver pbhagavatula
` (33 preceding siblings ...)
2021-04-30 5:11 ` [dpdk-dev] [PATCH v2 00/33] Marvell CNXK Event device Driver Jerin Jacob
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 01/33] event/cnxk: add build infra and device setup pbhagavatula
` (34 more replies)
34 siblings, 35 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
This patchset adds support for Marvell CN106XX SoC based on 'common/cnxk'
driver. In the future, CN9K (a.k.a. octeontx2) will also be supported by the
same driver when the code is ready, and 'event/octeontx2' will be deprecated.
v3 Changes:
- Fix documentation, copyright.
- Update release notes.
v2 Changes:
- Split Rx/Tx adapter into separate patch set to remove dependency on net/cnxk
- Add missing xStats patch.
- Fix incorrect head wait operation.
Pavan Nikhilesh (16):
event/cnxk: add build infra and device setup
event/cnxk: add platform specific device probe
event/cnxk: add common configuration validation
event/cnxk: allocate event inflight buffers
event/cnxk: add devargs to configure getwork mode
event/cnxk: add SSO HW device operations
event/cnxk: add SSO GWS fastpath enqueue functions
event/cnxk: add SSO GWS dequeue fastpath functions
event/cnxk: add SSO selftest and dump
event/cnxk: add event port and queue xstats
event/cnxk: add devargs to disable NPA
event/cnxk: allow adapters to resize inflights
event/cnxk: add TIM bucket operations
event/cnxk: add timer arm routine
event/cnxk: add timer arm timeout burst
event/cnxk: add timer cancel function
Shijith Thotton (17):
event/cnxk: add device capabilities function
event/cnxk: add platform specific device config
event/cnxk: add event queue config functions
event/cnxk: add devargs for inflight buffer count
event/cnxk: add devargs to control SSO HWGRP QoS
event/cnxk: add port config functions
event/cnxk: add event port link and unlink
event/cnxk: add device start function
event/cnxk: add device stop and close functions
event/cnxk: support event timer
event/cnxk: add timer adapter capabilities
event/cnxk: create and free timer adapter
event/cnxk: add timer adapter info function
event/cnxk: add devargs for chunk size and rings
event/cnxk: add timer stats get and reset
event/cnxk: add timer adapter start and stop
event/cnxk: add devargs to control timer adapters
MAINTAINERS | 6 +
app/test/test_eventdev.c | 14 +
doc/guides/eventdevs/cnxk.rst | 162 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_21_05.rst | 2 +
drivers/common/cnxk/roc_sso.c | 63 +
drivers/common/cnxk/roc_sso.h | 19 +
drivers/event/cnxk/cn10k_eventdev.c | 509 ++++++
drivers/event/cnxk/cn10k_worker.c | 115 ++
drivers/event/cnxk/cn10k_worker.h | 175 +++
drivers/event/cnxk/cn9k_eventdev.c | 578 +++++++
drivers/event/cnxk/cn9k_worker.c | 236 +++
drivers/event/cnxk/cn9k_worker.h | 297 ++++
drivers/event/cnxk/cnxk_eventdev.c | 647 ++++++++
drivers/event/cnxk/cnxk_eventdev.h | 253 +++
drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 +
drivers/event/cnxk/cnxk_eventdev_selftest.c | 1570 +++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev_stats.c | 289 ++++
drivers/event/cnxk/cnxk_tim_evdev.c | 538 +++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 275 ++++
drivers/event/cnxk/cnxk_tim_worker.c | 191 +++
drivers/event/cnxk/cnxk_tim_worker.h | 601 +++++++
drivers/event/cnxk/cnxk_worker.h | 101 ++
drivers/event/cnxk/meson.build | 23 +
drivers/event/cnxk/version.map | 3 +
drivers/event/meson.build | 1 +
26 files changed, 6736 insertions(+)
create mode 100644 doc/guides/eventdevs/cnxk.rst
create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
create mode 100644 drivers/event/cnxk/cn10k_worker.c
create mode 100644 drivers/event/cnxk/cn10k_worker.h
create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
create mode 100644 drivers/event/cnxk/cn9k_worker.c
create mode 100644 drivers/event/cnxk/cn9k_worker.h
create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev_selftest.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
create mode 100644 drivers/event/cnxk/cnxk_worker.h
create mode 100644 drivers/event/cnxk/meson.build
create mode 100644 drivers/event/cnxk/version.map
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v3 01/33] event/cnxk: add build infra and device setup
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-05-03 9:41 ` Jerin Jacob
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 02/33] event/cnxk: add device capabilities function pbhagavatula
` (33 subsequent siblings)
34 siblings, 1 reply; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Thomas Monjalon, Pavan Nikhilesh, Shijith Thotton,
Ray Kinsella, Neil Horman, Anatoly Burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add meson build infrastructure along with the event device
SSO initialization and teardown functions.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 6 +++
doc/guides/eventdevs/cnxk.rst | 55 +++++++++++++++++++++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_21_05.rst | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 68 ++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 39 +++++++++++++++
drivers/event/cnxk/meson.build | 13 +++++
drivers/event/cnxk/version.map | 3 ++
drivers/event/meson.build | 1 +
9 files changed, 188 insertions(+)
create mode 100644 doc/guides/eventdevs/cnxk.rst
create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
create mode 100644 drivers/event/cnxk/meson.build
create mode 100644 drivers/event/cnxk/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 44f3d322e..5a2297e99 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1224,6 +1224,12 @@ M: Jerin Jacob <jerinj@marvell.com>
F: drivers/event/octeontx2/
F: doc/guides/eventdevs/octeontx2.rst
+Marvell cnxk
+M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+M: Shijith Thotton <sthotton@marvell.com>
+F: drivers/event/cnxk/
+F: doc/guides/eventdevs/cnxk.rst
+
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Nipun Gupta <nipun.gupta@nxp.com>
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
new file mode 100644
index 000000000..148280b85
--- /dev/null
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -0,0 +1,55 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2021 Marvell.
+
+Marvell cnxk SSO Eventdev Driver
+================================
+
+The SSO PMD (**librte_event_cnxk**) provides poll mode
+eventdev driver support for the inbuilt event device found in the
+**Marvell OCTEON cnxk** SoC family.
+
+More information about OCTEON cnxk SoC can be found at `Marvell Official Website
+<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
+
+Supported OCTEON cnxk SoCs
+--------------------------
+
+- CN9XX
+- CN10XX
+
+Features
+--------
+
+Features of the OCTEON cnxk SSO PMD are:
+
+- 256 Event queues
+- 26 (dual) and 52 (single) Event ports on CN9XX
+- 52 Event ports on CN10XX
+- HW event scheduler
+- Supports 1M flows per event queue
+- Flow based event pipelining
+- Flow pinning support in flow based event pipelining
+- Queue based event pipelining
+- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
+- Event scheduling QoS based on event queue priority
+- Open system with configurable amount of outstanding events limited only by
+ DRAM
+- HW accelerated dequeue timeout support to enable power management
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+ See :doc:`../platform/cnxk` for setup information.
+
+Debugging Options
+-----------------
+
+.. _table_octeon_cnxk_event_debug_options:
+
+.. table:: OCTEON cnxk event device debug options
+
+ +---+------------+-------------------------------------------------------+
+ | # | Component | EAL log command |
+ +===+============+=======================================================+
+ | 1 | SSO | --log-level='pmd\.event\.cnxk,8' |
+ +---+------------+-------------------------------------------------------+
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index 738788d9e..214302539 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -11,6 +11,7 @@ application through the eventdev API.
:maxdepth: 2
:numbered:
+ cnxk
dlb2
dpaa
dpaa2
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index b3224dc33..428615e4f 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -75,6 +75,8 @@ New Features
net, crypto and event PMD's.
* Added mempool/cnxk driver which provides the support for the integrated
mempool device.
+ * Added event/cnxk driver which provides the support for integrated event
+ device.
* **Enhanced ethdev representor syntax.**
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
new file mode 100644
index 000000000..7ea782eaa
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+int
+cnxk_sso_init(struct rte_eventdev *event_dev)
+{
+ const struct rte_memzone *mz = NULL;
+ struct rte_pci_device *pci_dev;
+ struct cnxk_sso_evdev *dev;
+ int rc;
+
+ mz = rte_memzone_reserve(CNXK_SSO_MZ_NAME, sizeof(uint64_t),
+ SOCKET_ID_ANY, 0);
+ if (mz == NULL) {
+ plt_err("Failed to create eventdev memzone");
+ return -ENOMEM;
+ }
+
+ dev = cnxk_sso_pmd_priv(event_dev);
+ pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
+ dev->sso.pci_dev = pci_dev;
+
+ *(uint64_t *)mz->addr = (uint64_t)dev;
+
+ /* Initialize the base cnxk_dev object */
+ rc = roc_sso_dev_init(&dev->sso);
+ if (rc < 0) {
+ plt_err("Failed to initialize RoC SSO rc=%d", rc);
+ goto error;
+ }
+
+ dev->is_timeout_deq = 0;
+ dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+ dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
+ dev->max_num_events = -1;
+ dev->nb_event_queues = 0;
+ dev->nb_event_ports = 0;
+
+ return 0;
+
+error:
+ rte_memzone_free(mz);
+ return rc;
+}
+
+int
+cnxk_sso_fini(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ /* For secondary processes, nothing to be done */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ roc_sso_rsrc_fini(&dev->sso);
+ roc_sso_dev_fini(&dev->sso);
+
+ return 0;
+}
+
+int
+cnxk_sso_remove(struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_remove(pci_dev, cnxk_sso_fini);
+}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
new file mode 100644
index 000000000..74d0990fa
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_EVENTDEV_H__
+#define __CNXK_EVENTDEV_H__
+
+#include <rte_pci.h>
+
+#include <eventdev_pmd_pci.h>
+
+#include "roc_api.h"
+
+#define USEC2NSEC(__us) ((__us)*1E3)
+
+#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+
+struct cnxk_sso_evdev {
+ struct roc_sso sso;
+ uint8_t is_timeout_deq;
+ uint8_t nb_event_queues;
+ uint8_t nb_event_ports;
+ uint32_t min_dequeue_timeout_ns;
+ uint32_t max_dequeue_timeout_ns;
+ int32_t max_num_events;
+} __rte_cache_aligned;
+
+static inline struct cnxk_sso_evdev *
+cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
+{
+ return event_dev->data->dev_private;
+}
+
+/* Common ops API. */
+int cnxk_sso_init(struct rte_eventdev *event_dev);
+int cnxk_sso_fini(struct rte_eventdev *event_dev);
+int cnxk_sso_remove(struct rte_pci_device *pci_dev);
+
+#endif /* __CNXK_EVENTDEV_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
new file mode 100644
index 000000000..88cb472f0
--- /dev/null
+++ b/drivers/event/cnxk/meson.build
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2021 Marvell.
+#
+
+if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on 64-bit Linux'
+ subdir_done()
+endif
+
+sources = files('cnxk_eventdev.c')
+
+deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
diff --git a/drivers/event/cnxk/version.map b/drivers/event/cnxk/version.map
new file mode 100644
index 000000000..ee80c5172
--- /dev/null
+++ b/drivers/event/cnxk/version.map
@@ -0,0 +1,3 @@
+INTERNAL {
+ local: *;
+};
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index 539c5aeb9..63d6b410b 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -6,6 +6,7 @@ if is_windows
endif
drivers = [
+ 'cnxk',
'dlb2',
'dpaa',
'dpaa2',
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* Re: [dpdk-dev] [PATCH v3 01/33] event/cnxk: add build infra and device setup
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 01/33] event/cnxk: add build infra and device setup pbhagavatula
@ 2021-05-03 9:41 ` Jerin Jacob
0 siblings, 0 replies; 185+ messages in thread
From: Jerin Jacob @ 2021-05-03 9:41 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob, Thomas Monjalon, Shijith Thotton, Ray Kinsella,
Neil Horman, Anatoly Burakov, dpdk-dev
On Fri, Apr 30, 2021 at 7:23 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add meson build infrastructure along with the event device
> SSO initialization and teardown functions.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>
> ---
> +
> +deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
Due to 'net_cnxk' dependency, this driver was not building at all.
Please remove this and send the updated series. I will make it as
"Change requested" in PW from accepted.
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v3 02/33] event/cnxk: add device capabilities function
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 01/33] event/cnxk: add build infra and device setup pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 03/33] event/cnxk: add platform specific device probe pbhagavatula
` (32 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add the info_get function to return details on the queue, flow, and
prioritization capabilities, etc. that this device supports.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 24 ++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 4 ++++
2 files changed, 28 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 7ea782eaa..3a7053af6 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -4,6 +4,30 @@
#include "cnxk_eventdev.h"
+void
+cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
+ struct rte_event_dev_info *dev_info)
+{
+
+ dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
+ dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
+ dev_info->max_event_queues = dev->max_event_queues;
+ dev_info->max_event_queue_flows = (1ULL << 20);
+ dev_info->max_event_queue_priority_levels = 8;
+ dev_info->max_event_priority_levels = 1;
+ dev_info->max_event_ports = dev->max_event_ports;
+ dev_info->max_event_port_dequeue_depth = 1;
+ dev_info->max_event_port_enqueue_depth = 1;
+ dev_info->max_num_events = dev->max_num_events;
+ dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
+ RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
+ RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+ RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
+ RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 74d0990fa..9745bfd3e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,6 +17,8 @@
struct cnxk_sso_evdev {
struct roc_sso sso;
+ uint8_t max_event_queues;
+ uint8_t max_event_ports;
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
@@ -35,5 +37,7 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
int cnxk_sso_init(struct rte_eventdev *event_dev);
int cnxk_sso_fini(struct rte_eventdev *event_dev);
int cnxk_sso_remove(struct rte_pci_device *pci_dev);
+void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
+ struct rte_event_dev_info *dev_info);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v3 03/33] event/cnxk: add platform specific device probe
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 01/33] event/cnxk: add build infra and device setup pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 02/33] event/cnxk: add device capabilities function pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 04/33] event/cnxk: add common configuration validation pbhagavatula
` (31 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add platform-specific event device probe and remove functions, and an
event device info get function.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 101 +++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 102 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 2 +
drivers/event/cnxk/meson.build | 5 +-
4 files changed, 209 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
new file mode 100644
index 000000000..f5caa5537
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+static void
+cn10k_sso_set_rsrc(void *arg)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ dev->max_event_ports = dev->sso.max_hws;
+ dev->max_event_queues =
+ dev->sso.max_hwgrp > RTE_EVENT_MAX_QUEUES_PER_DEV ?
+ RTE_EVENT_MAX_QUEUES_PER_DEV :
+ dev->sso.max_hwgrp;
+}
+
+static void
+cn10k_sso_info_get(struct rte_eventdev *event_dev,
+ struct rte_event_dev_info *dev_info)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ dev_info->driver_name = RTE_STR(EVENTDEV_NAME_CN10K_PMD);
+ cnxk_sso_info_get(dev, dev_info);
+}
+
+static struct rte_eventdev_ops cn10k_sso_dev_ops = {
+ .dev_infos_get = cn10k_sso_info_get,
+};
+
+static int
+cn10k_sso_init(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ if (RTE_CACHE_LINE_SIZE != 64) {
+ plt_err("Driver not compiled for CN10K");
+ return -EFAULT;
+ }
+
+ rc = plt_init();
+ if (rc < 0) {
+ plt_err("Failed to initialize platform model");
+ return rc;
+ }
+
+ event_dev->dev_ops = &cn10k_sso_dev_ops;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = cnxk_sso_init(event_dev);
+ if (rc < 0)
+ return rc;
+
+ cn10k_sso_set_rsrc(cnxk_sso_pmd_priv(event_dev));
+ if (!dev->max_event_ports || !dev->max_event_queues) {
+ plt_err("Not enough eventdev resource queues=%d ports=%d",
+ dev->max_event_queues, dev->max_event_ports);
+ cnxk_sso_fini(event_dev);
+ return -ENODEV;
+ }
+
+ plt_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
+ event_dev->data->name, dev->max_event_queues,
+ dev->max_event_ports);
+
+ return 0;
+}
+
+static int
+cn10k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_probe(pci_drv, pci_dev,
+ sizeof(struct cnxk_sso_evdev),
+ cn10k_sso_init);
+}
+
+static const struct rte_pci_id cn10k_pci_sso_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver cn10k_pci_sso = {
+ .id_table = cn10k_pci_sso_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+ .probe = cn10k_sso_probe,
+ .remove = cnxk_sso_remove,
+};
+
+RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
+RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
+RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
new file mode 100644
index 000000000..d02c0c9bd
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+#define CN9K_DUAL_WS_NB_WS 2
+#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x) * CN9K_DUAL_WS_NB_WS) + (id))
+
+static void
+cn9k_sso_set_rsrc(void *arg)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ if (dev->dual_ws)
+ dev->max_event_ports = dev->sso.max_hws / CN9K_DUAL_WS_NB_WS;
+ else
+ dev->max_event_ports = dev->sso.max_hws;
+ dev->max_event_queues =
+ dev->sso.max_hwgrp > RTE_EVENT_MAX_QUEUES_PER_DEV ?
+ RTE_EVENT_MAX_QUEUES_PER_DEV :
+ dev->sso.max_hwgrp;
+}
+
+static void
+cn9k_sso_info_get(struct rte_eventdev *event_dev,
+ struct rte_event_dev_info *dev_info)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ dev_info->driver_name = RTE_STR(EVENTDEV_NAME_CN9K_PMD);
+ cnxk_sso_info_get(dev, dev_info);
+}
+
+static struct rte_eventdev_ops cn9k_sso_dev_ops = {
+ .dev_infos_get = cn9k_sso_info_get,
+};
+
+static int
+cn9k_sso_init(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ if (RTE_CACHE_LINE_SIZE != 128) {
+ plt_err("Driver not compiled for CN9K");
+ return -EFAULT;
+ }
+
+ rc = plt_init();
+ if (rc < 0) {
+ plt_err("Failed to initialize platform model");
+ return rc;
+ }
+
+ event_dev->dev_ops = &cn9k_sso_dev_ops;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = cnxk_sso_init(event_dev);
+ if (rc < 0)
+ return rc;
+
+ cn9k_sso_set_rsrc(cnxk_sso_pmd_priv(event_dev));
+ if (!dev->max_event_ports || !dev->max_event_queues) {
+ plt_err("Not enough eventdev resource queues=%d ports=%d",
+ dev->max_event_queues, dev->max_event_ports);
+ cnxk_sso_fini(event_dev);
+ return -ENODEV;
+ }
+
+ plt_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
+ event_dev->data->name, dev->max_event_queues,
+ dev->max_event_ports);
+
+ return 0;
+}
+
+static int
+cn9k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_probe(
+ pci_drv, pci_dev, sizeof(struct cnxk_sso_evdev), cn9k_sso_init);
+}
+
+static const struct rte_pci_id cn9k_pci_sso_map[] = {
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver cn9k_pci_sso = {
+ .id_table = cn9k_pci_sso_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+ .probe = cn9k_sso_probe,
+ .remove = cnxk_sso_remove,
+};
+
+RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
+RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
+RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 9745bfd3e..6bdf0b347 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -25,6 +25,8 @@ struct cnxk_sso_evdev {
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
+ /* CN9K */
+ uint8_t dual_ws;
} __rte_cache_aligned;
static inline struct cnxk_sso_evdev *
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 88cb472f0..888aeb064 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -8,6 +8,9 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
subdir_done()
endif
-sources = files('cnxk_eventdev.c')
+sources = files('cn10k_eventdev.c',
+ 'cn9k_eventdev.c',
+ 'cnxk_eventdev.c',
+ )
deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
--
2.17.1
* [dpdk-dev] [PATCH v3 04/33] event/cnxk: add common configuration validation
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (2 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 03/33] event/cnxk: add platform specific device probe pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 05/33] event/cnxk: add platform specific device config pbhagavatula
` (30 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add configuration validation, port and queue configuration
functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 70 ++++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 +++
2 files changed, 76 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 3a7053af6..3eab1ed29 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,6 +28,76 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
+int
+cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
+{
+ struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint32_t deq_tmo_ns;
+
+ deq_tmo_ns = conf->dequeue_timeout_ns;
+
+ if (deq_tmo_ns == 0)
+ deq_tmo_ns = dev->min_dequeue_timeout_ns;
+ if (deq_tmo_ns < dev->min_dequeue_timeout_ns ||
+ deq_tmo_ns > dev->max_dequeue_timeout_ns) {
+ plt_err("Unsupported dequeue timeout requested");
+ return -EINVAL;
+ }
+
+ if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
+ dev->is_timeout_deq = 1;
+
+ dev->deq_tmo_ns = deq_tmo_ns;
+
+ if (!conf->nb_event_queues || !conf->nb_event_ports ||
+ conf->nb_event_ports > dev->max_event_ports ||
+ conf->nb_event_queues > dev->max_event_queues) {
+ plt_err("Unsupported event queues/ports requested");
+ return -EINVAL;
+ }
+
+ if (conf->nb_event_port_dequeue_depth > 1) {
+ plt_err("Unsupported event port deq depth requested");
+ return -EINVAL;
+ }
+
+ if (conf->nb_event_port_enqueue_depth > 1) {
+ plt_err("Unsupported event port enq depth requested");
+ return -EINVAL;
+ }
+
+ dev->nb_event_queues = conf->nb_event_queues;
+ dev->nb_event_ports = conf->nb_event_ports;
+
+ return 0;
+}
+
+void
+cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+ struct rte_event_queue_conf *queue_conf)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(queue_id);
+
+ queue_conf->nb_atomic_flows = (1ULL << 20);
+ queue_conf->nb_atomic_order_sequences = (1ULL << 20);
+ queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+ queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+}
+
+void
+cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+ struct rte_event_port_conf *port_conf)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ RTE_SET_USED(port_id);
+ port_conf->new_event_threshold = dev->max_num_events;
+ port_conf->dequeue_depth = 1;
+ port_conf->enqueue_depth = 1;
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 6bdf0b347..59d96a08f 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -22,6 +22,7 @@ struct cnxk_sso_evdev {
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
+ uint32_t deq_tmo_ns;
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
@@ -41,5 +42,10 @@ int cnxk_sso_fini(struct rte_eventdev *event_dev);
int cnxk_sso_remove(struct rte_pci_device *pci_dev);
void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
struct rte_event_dev_info *dev_info);
+int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
+void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+ struct rte_event_queue_conf *queue_conf);
+void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+ struct rte_event_port_conf *port_conf);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v3 05/33] event/cnxk: add platform specific device config
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (3 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 04/33] event/cnxk: add common configuration validation pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 06/33] event/cnxk: add event queue config functions pbhagavatula
` (29 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add platform-specific event device configuration that attaches the
requested number of SSO HWS (event ports) and HWGRP (event queues) LFs
to the RVU PF/VF.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 35 +++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 37 +++++++++++++++++++++++++++++
2 files changed, 72 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index f5caa5537..ab1c98946 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -16,6 +16,14 @@ cn10k_sso_set_rsrc(void *arg)
dev->sso.max_hwgrp;
}
+static int
+cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
+}
+
static void
cn10k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -26,8 +34,35 @@ cn10k_sso_info_get(struct rte_eventdev *event_dev,
cnxk_sso_info_get(dev, dev_info);
}
+static int
+cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ rc = cnxk_sso_dev_validate(event_dev);
+ if (rc < 0) {
+ plt_err("Invalid event device configuration");
+ return -EINVAL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+
+ rc = cn10k_sso_rsrc_init(dev, dev->nb_event_ports,
+ dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to initialize SSO resources");
+ return -ENODEV;
+ }
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
+ .dev_configure = cn10k_sso_dev_configure,
+ .queue_def_conf = cnxk_sso_queue_def_conf,
+ .port_def_conf = cnxk_sso_port_def_conf,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index d02c0c9bd..66aea70be 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -22,6 +22,17 @@ cn9k_sso_set_rsrc(void *arg)
dev->sso.max_hwgrp;
}
+static int
+cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ if (dev->dual_ws)
+ hws = hws * CN9K_DUAL_WS_NB_WS;
+
+ return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
+}
+
static void
cn9k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -32,8 +43,34 @@ cn9k_sso_info_get(struct rte_eventdev *event_dev,
cnxk_sso_info_get(dev, dev_info);
}
+static int
+cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ rc = cnxk_sso_dev_validate(event_dev);
+ if (rc < 0) {
+ plt_err("Invalid event device configuration");
+ return -EINVAL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+
+ rc = cn9k_sso_rsrc_init(dev, dev->nb_event_ports, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to initialize SSO resources");
+ return -ENODEV;
+ }
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
+ .dev_configure = cn9k_sso_dev_configure,
+ .queue_def_conf = cnxk_sso_queue_def_conf,
+ .port_def_conf = cnxk_sso_port_def_conf,
};
static int
--
2.17.1
* [dpdk-dev] [PATCH v3 06/33] event/cnxk: add event queue config functions
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (4 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 05/33] event/cnxk: add platform specific device config pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 07/33] event/cnxk: allocate event inflight buffers pbhagavatula
` (28 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add setup and release functions for event queues, i.e., SSO HWGRPs.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 2 ++
drivers/event/cnxk/cn9k_eventdev.c | 2 ++
drivers/event/cnxk/cnxk_eventdev.c | 19 +++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 3 +++
4 files changed, 26 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index ab1c98946..fe562d65b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -62,6 +62,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
.queue_def_conf = cnxk_sso_queue_def_conf,
+ .queue_setup = cnxk_sso_queue_setup,
+ .queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
};
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 66aea70be..d49acad83 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -70,6 +70,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
.queue_def_conf = cnxk_sso_queue_def_conf,
+ .queue_setup = cnxk_sso_queue_setup,
+ .queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
};
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 3eab1ed29..e22479a19 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -86,6 +86,25 @@ cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
}
+int
+cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+ const struct rte_event_queue_conf *queue_conf)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority);
+ /* Normalize <0-255> to <0-7> */
+ return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF,
+ queue_conf->priority / 32);
+}
+
+void
+cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(queue_id);
+}
+
void
cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 59d96a08f..426219c85 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -45,6 +45,9 @@ void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
+int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+ const struct rte_event_queue_conf *queue_conf);
+void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
--
2.17.1
* [dpdk-dev] [PATCH v3 07/33] event/cnxk: allocate event inflight buffers
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (5 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 06/33] event/cnxk: add event queue config functions pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 08/33] event/cnxk: add devargs for inflight buffer count pbhagavatula
` (27 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Allocate buffers in DRAM to hold in-flight events.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 7 ++
drivers/event/cnxk/cn9k_eventdev.c | 7 ++
drivers/event/cnxk/cnxk_eventdev.c | 105 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 14 +++-
4 files changed, 132 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index fe562d65b..20d141f16 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -55,6 +55,13 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
return -ENODEV;
}
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
+ return 0;
+cnxk_rsrc_fini:
+ roc_sso_rsrc_fini(&dev->sso);
return rc;
}
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index d49acad83..5f93fc2a7 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -63,6 +63,13 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
return -ENODEV;
}
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
+ return 0;
+cnxk_rsrc_fini:
+ roc_sso_rsrc_fini(&dev->sso);
return rc;
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index e22479a19..34a8bce05 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,12 +28,107 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
+int
+cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
+{
+ char pool_name[RTE_MEMZONE_NAMESIZE];
+ uint32_t xaq_cnt, npa_aura_id;
+ const struct rte_memzone *mz;
+ struct npa_aura_s *aura;
+ static int reconfig_cnt;
+ int rc;
+
+ if (dev->xaq_pool) {
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ }
+
+ /*
+ * Allocate memory for Add work backpressure.
+ */
+ mz = rte_memzone_lookup(CNXK_SSO_FC_NAME);
+ if (mz == NULL)
+ mz = rte_memzone_reserve_aligned(CNXK_SSO_FC_NAME,
+ sizeof(struct npa_aura_s) +
+ RTE_CACHE_LINE_SIZE,
+ 0, 0, RTE_CACHE_LINE_SIZE);
+ if (mz == NULL) {
+ plt_err("Failed to allocate mem for fcmem");
+ return -ENOMEM;
+ }
+
+ dev->fc_iova = mz->iova;
+ dev->fc_mem = mz->addr;
+
+ aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem +
+ RTE_CACHE_LINE_SIZE);
+ memset(aura, 0, sizeof(struct npa_aura_s));
+
+ aura->fc_ena = 1;
+ aura->fc_addr = dev->fc_iova;
+ aura->fc_hyst_bits = 0; /* Store count on all updates */
+
+ /* Taken from HRM 14.3.3(4) */
+ xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
+ xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+
+ plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
+ /* Setup XAQ based on number of nb queues. */
+ snprintf(pool_name, 30, "cnxk_xaq_buf_pool_%d", reconfig_cnt);
+ dev->xaq_pool = (void *)rte_mempool_create_empty(
+ pool_name, xaq_cnt, dev->sso.xaq_buf_size, 0, 0,
+ rte_socket_id(), 0);
+
+ if (dev->xaq_pool == NULL) {
+ plt_err("Unable to create empty mempool.");
+ rte_memzone_free(mz);
+ return -ENOMEM;
+ }
+
+ rc = rte_mempool_set_ops_byname(dev->xaq_pool,
+ rte_mbuf_platform_mempool_ops(), aura);
+ if (rc != 0) {
+ plt_err("Unable to set xaqpool ops.");
+ goto alloc_fail;
+ }
+
+ rc = rte_mempool_populate_default(dev->xaq_pool);
+ if (rc < 0) {
+ plt_err("Unable to populate xaqpool.");
+ goto alloc_fail;
+ }
+ reconfig_cnt++;
+ /* When SW does addwork (enqueue) check if there is space in XAQ by
+ * comparing fc_addr above against the xaq_lmt calculated below.
+ * There should be a minimum headroom (CNXK_SSO_XAQ_SLACK / 2) for SSO
+ * to request XAQ to cache them even before enqueue is called.
+ */
+ dev->xaq_lmt =
+ xaq_cnt - (CNXK_SSO_XAQ_SLACK / 2 * dev->nb_event_queues);
+ dev->nb_xaq_cfg = xaq_cnt;
+
+ npa_aura_id = roc_npa_aura_handle_to_aura(dev->xaq_pool->pool_id);
+ return roc_sso_hwgrp_alloc_xaq(&dev->sso, npa_aura_id,
+ dev->nb_event_queues);
+alloc_fail:
+ rte_mempool_free(dev->xaq_pool);
+ rte_memzone_free(mz);
+ return rc;
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint32_t deq_tmo_ns;
+ int rc;
deq_tmo_ns = conf->dequeue_timeout_ns;
@@ -67,6 +162,16 @@ cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
return -EINVAL;
}
+ if (dev->xaq_pool) {
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ }
+
dev->nb_event_queues = conf->nb_event_queues;
dev->nb_event_ports = conf->nb_event_ports;
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 426219c85..4abe4548d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -5,6 +5,7 @@
#ifndef __CNXK_EVENTDEV_H__
#define __CNXK_EVENTDEV_H__
+#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
#include <eventdev_pmd_pci.h>
@@ -13,7 +14,10 @@
#define USEC2NSEC(__us) ((__us)*1E3)
-#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
+#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
+#define CNXK_SSO_XAQ_SLACK (8)
struct cnxk_sso_evdev {
struct roc_sso sso;
@@ -26,6 +30,11 @@ struct cnxk_sso_evdev {
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
+ uint64_t *fc_mem;
+ uint64_t xaq_lmt;
+ uint64_t nb_xaq_cfg;
+ rte_iova_t fc_iova;
+ struct rte_mempool *xaq_pool;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
@@ -36,6 +45,9 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
return event_dev->data->dev_private;
}
+/* Configuration functions */
+int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
+
/* Common ops API. */
int cnxk_sso_init(struct rte_eventdev *event_dev);
int cnxk_sso_fini(struct rte_eventdev *event_dev);
--
2.17.1
* [dpdk-dev] [PATCH v3 08/33] event/cnxk: add devargs for inflight buffer count
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (6 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 07/33] event/cnxk: allocate event inflight buffers pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 09/33] event/cnxk: add devargs to control SSO HWGRP QoS pbhagavatula
` (26 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
The number of events for an *open system* event device is specified
as -1 as per the eventdev specification.
Since SSO in-flight events are only limited by DRAM size, the
xae_cnt devargs parameter is introduced to provide an upper limit for
in-flight events.
Example:
--dev "0002:0e:00.0,xae_cnt=8192"
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 14 ++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 1 +
drivers/event/cnxk/cnxk_eventdev.c | 24 ++++++++++++++++++++++--
drivers/event/cnxk/cnxk_eventdev.h | 15 +++++++++++++++
5 files changed, 53 insertions(+), 2 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 148280b85..b556681ff 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -41,6 +41,20 @@ Prerequisites and Compilation procedure
See :doc:`../platform/cnxk` for setup information.
+
+Runtime Config Options
+----------------------
+
+- ``Maximum number of in-flight events`` (default ``8192``)
+
+ In **Marvell OCTEON cnxk** the maximum number of in-flight events is only
+ limited by DRAM size; the ``xae_cnt`` devargs parameter is introduced to
+ provide an upper limit for in-flight events.
+
+ For example::
+
+ -a 0002:0e:00.0,xae_cnt=16384
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 20d141f16..f2f597dce 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -143,3 +143,4 @@ static struct rte_pci_driver cn10k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 5f93fc2a7..866c38035 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -146,3 +146,4 @@ static struct rte_pci_driver cn9k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 34a8bce05..fddd71a8d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -75,8 +75,11 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
/* Taken from HRM 14.3.3(4) */
xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
- xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
- (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+ if (dev->xae_cnt)
+ xaq_cnt += dev->xae_cnt / dev->sso.xae_waes;
+ else
+ xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
/* Setup XAQ based on number of nb queues. */
@@ -222,6 +225,22 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+static void
+cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
+{
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ return;
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, CNXK_SSO_XAE_CNT, &parse_kvargs_value,
+ &dev->xae_cnt);
+ rte_kvargs_free(kvlist);
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
@@ -242,6 +261,7 @@ cnxk_sso_init(struct rte_eventdev *event_dev)
dev->sso.pci_dev = pci_dev;
*(uint64_t *)mz->addr = (uint64_t)dev;
+ cnxk_sso_parse_devargs(dev, pci_dev->device.devargs);
/* Initialize the base cnxk_dev object */
rc = roc_sso_dev_init(&dev->sso);
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 4abe4548d..202c6e6a7 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -5,6 +5,8 @@
#ifndef __CNXK_EVENTDEV_H__
#define __CNXK_EVENTDEV_H__
+#include <rte_devargs.h>
+#include <rte_kvargs.h>
#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
@@ -12,6 +14,8 @@
#include "roc_api.h"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+
#define USEC2NSEC(__us) ((__us)*1E3)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
@@ -35,10 +39,21 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ /* Dev args */
+ uint32_t xae_cnt;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
+static inline int
+parse_kvargs_value(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ *(uint32_t *)opaque = (uint32_t)atoi(value);
+ return 0;
+}
+
static inline struct cnxk_sso_evdev *
cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
{
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v3 09/33] event/cnxk: add devargs to control SSO HWGRP QoS
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (7 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 08/33] event/cnxk: add devargs for inflight buffer count pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 10/33] event/cnxk: add port config functions pbhagavatula
` (25 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
SSO HWGRPs, i.e. event queues, use DRAM and SRAM buffers to hold
in-flight events. By default, the buffers are assigned to the SSO
HWGRPs so as to satisfy the minimum HW requirements. SSO is free to
assign the remaining buffers to HWGRPs based on preconfigured
thresholds.
The QoS of an SSO HWGRP can be controlled by modifying these
thresholds: HWGRPs of higher importance can be assigned higher
thresholds than the rest.
Example:
--dev "0002:0e:00.0,qos=[1-50-50-50]" // [Qx-XAQ-TAQ-IAQ]
Qx -> Event queue Aka SSO GGRP.
XAQ -> DRAM In-flights.
TAQ & IAQ -> SRAM In-flights.
The values are expressed as percentages; 0 selects the default.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 16 ++++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_eventdev.c | 78 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 12 ++++-
5 files changed, 109 insertions(+), 3 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index b556681ff..0583e5fdd 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -55,6 +55,22 @@ Runtime Config Options
-a 0002:0e:00.0,xae_cnt=16384
+- ``Event Group QoS support``
+
+ SSO GGRPs, i.e. event queues, use DRAM and SRAM buffers to hold
+ in-flight events. By default, the buffers are assigned to the SSO
+ GGRPs so as to satisfy the minimum HW requirements. SSO is free to
+ assign the remaining buffers to GGRPs based on preconfigured
+ thresholds. The QoS of an SSO GGRP can be controlled by modifying
+ these thresholds; GGRPs of higher importance can be assigned higher
+ thresholds than the rest. The dictionary format is
+ ``[Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ]``, with values expressed as
+ percentages and 0 selecting the default.
+
+ For example::
+
+ -a 0002:0e:00.0,qos=[1-50-50-50]
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index f2f597dce..207560d5c 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -143,4 +143,5 @@ static struct rte_pci_driver cn10k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
+ CNXK_SSO_GGRP_QOS "=<string>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 866c38035..46618df37 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -146,4 +146,5 @@ static struct rte_pci_driver cn9k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
+ CNXK_SSO_GGRP_QOS "=<string>");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index fddd71a8d..e93aaccd8 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -225,6 +225,82 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+static void
+parse_queue_param(char *value, void *opaque)
+{
+ struct cnxk_sso_qos queue_qos = {0};
+ uint8_t *val = (uint8_t *)&queue_qos;
+ struct cnxk_sso_evdev *dev = opaque;
+ char *tok = strtok(value, "-");
+ struct cnxk_sso_qos *old_ptr;
+
+ if (!strlen(value))
+ return;
+
+ while (tok != NULL) {
+ *val = atoi(tok);
+ tok = strtok(NULL, "-");
+ val++;
+ }
+
+ if (val != (&queue_qos.iaq_prcnt + 1)) {
+ plt_err("Invalid QoS parameter expected [Qx-XAQ-TAQ-IAQ]");
+ return;
+ }
+
+ dev->qos_queue_cnt++;
+ old_ptr = dev->qos_parse_data;
+ dev->qos_parse_data = rte_realloc(
+ dev->qos_parse_data,
+ sizeof(struct cnxk_sso_qos) * dev->qos_queue_cnt, 0);
+ if (dev->qos_parse_data == NULL) {
+ dev->qos_parse_data = old_ptr;
+ dev->qos_queue_cnt--;
+ return;
+ }
+ dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos;
+}
+
+static void
+parse_qos_list(const char *value, void *opaque)
+{
+ char *s = strdup(value);
+ char *start = NULL;
+ char *end = NULL;
+ char *f = s;
+
+ while (*s) {
+ if (*s == '[')
+ start = s;
+ else if (*s == ']')
+ end = s;
+
+ if (start && start < end) {
+ *end = 0;
+ parse_queue_param(start + 1, opaque);
+ s = end;
+ start = end;
+ }
+ s++;
+ }
+
+ free(f);
+}
+
+static int
+parse_sso_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ /* Dict format [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] use '-' cause ','
+ * isn't allowed. Everything is expressed in percentages, 0 represents
+ * default.
+ */
+ parse_qos_list(value, opaque);
+
+ return 0;
+}
+
static void
cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
{
@@ -238,6 +314,8 @@ cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
rte_kvargs_process(kvlist, CNXK_SSO_XAE_CNT, &parse_kvargs_value,
&dev->xae_cnt);
+ rte_kvargs_process(kvlist, CNXK_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
+ dev);
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 202c6e6a7..b96a6a908 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,7 +14,8 @@
#include "roc_api.h"
-#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_GGRP_QOS "qos"
#define USEC2NSEC(__us) ((__us)*1E3)
@@ -23,6 +24,13 @@
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+struct cnxk_sso_qos {
+ uint16_t queue;
+ uint8_t xaq_prcnt;
+ uint8_t taq_prcnt;
+ uint8_t iaq_prcnt;
+};
+
struct cnxk_sso_evdev {
struct roc_sso sso;
uint8_t max_event_queues;
@@ -41,6 +49,8 @@ struct cnxk_sso_evdev {
struct rte_mempool *xaq_pool;
/* Dev args */
uint32_t xae_cnt;
+ uint8_t qos_queue_cnt;
+ struct cnxk_sso_qos *qos_parse_data;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
--
2.17.1
* [dpdk-dev] [PATCH v3 10/33] event/cnxk: add port config functions
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (8 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 09/33] event/cnxk: add devargs to control SSO HWGRP QoS pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 11/33] event/cnxk: add event port link and unlink pbhagavatula
` (24 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add SSO HWS (a.k.a. event port) setup and release functions.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 121 +++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 147 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 65 ++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 91 +++++++++++++++++
4 files changed, 424 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 207560d5c..11eaef380 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -4,6 +4,91 @@
#include "cnxk_eventdev.h"
+static void
+cn10k_init_hws_ops(struct cn10k_sso_hws *ws, uintptr_t base)
+{
+ ws->tag_wqe_op = base + SSOW_LF_GWS_WQE0;
+ ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK0;
+ ws->updt_wqe_op = base + SSOW_LF_GWS_OP_UPD_WQP_GRP1;
+ ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+ ws->swtag_untag_op = base + SSOW_LF_GWS_OP_SWTAG_UNTAG;
+ ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
+ ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
+static uint32_t
+cn10k_sso_gw_mode_wdata(struct cnxk_sso_evdev *dev)
+{
+ uint32_t wdata = BIT(16) | 1;
+
+ switch (dev->gw_mode) {
+ case CN10K_GW_MODE_NONE:
+ default:
+ break;
+ case CN10K_GW_MODE_PREF:
+ wdata |= BIT(19);
+ break;
+ case CN10K_GW_MODE_PREF_WFE:
+ wdata |= BIT(20) | BIT(19);
+ break;
+ }
+
+ return wdata;
+}
+
+static void *
+cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws;
+
+ /* Allocate event port memory */
+ ws = rte_zmalloc("cn10k_ws",
+ sizeof(struct cn10k_sso_hws) + RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (ws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ /* First cache line is reserved for cookie */
+ ws = (struct cn10k_sso_hws *)((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
+ ws->base = roc_sso_hws_base_get(&dev->sso, port_id);
+ cn10k_init_hws_ops(ws, ws->base);
+ ws->hws_id = port_id;
+ ws->swtag_req = 0;
+ ws->gw_wdata = cn10k_sso_gw_mode_wdata(dev);
+ ws->lmt_base = dev->sso.lmt_base;
+
+ return ws;
+}
+
+static void
+cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = hws;
+ uint64_t val;
+
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+
+ /* Set get_work timeout for HWS */
+ val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+ plt_write64(val, ws->base + SSOW_LF_GWS_NW_TIM);
+}
+
+static void
+cn10k_sso_hws_release(void *arg, void *hws)
+{
+ struct cn10k_sso_hws *ws = hws;
+
+ RTE_SET_USED(arg);
+ memset(ws, 0, sizeof(*ws));
+}
+
static void
cn10k_sso_set_rsrc(void *arg)
{
@@ -59,12 +144,46 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ rc = cnxk_setup_event_ports(event_dev, cn10k_sso_init_hws_mem,
+ cn10k_sso_hws_setup);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
+ dev->nb_event_ports = 0;
return rc;
}
+static int
+cn10k_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ const struct rte_event_port_conf *port_conf)
+{
+
+ RTE_SET_USED(port_conf);
+ return cnxk_sso_port_setup(event_dev, port_id, cn10k_sso_hws_setup);
+}
+
+static void
+cn10k_sso_port_release(void *port)
+{
+ struct cnxk_sso_hws_cookie *gws_cookie = cnxk_sso_hws_get_cookie(port);
+ struct cnxk_sso_evdev *dev;
+
+ if (port == NULL)
+ return;
+
+ dev = cnxk_sso_pmd_priv(gws_cookie->event_dev);
+ if (!gws_cookie->configured)
+ goto free;
+
+ cn10k_sso_hws_release(dev, port);
+ memset(gws_cookie, 0, sizeof(*gws_cookie));
+free:
+ rte_free(gws_cookie);
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -72,6 +191,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.queue_setup = cnxk_sso_queue_setup,
.queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
+ .port_setup = cn10k_sso_port_setup,
+ .port_release = cn10k_sso_port_release,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 46618df37..2daa14b50 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -7,6 +7,63 @@
#define CN9K_DUAL_WS_NB_WS 2
#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
+static void
+cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t base)
+{
+ ws->tag_op = base + SSOW_LF_GWS_TAG;
+ ws->wqp_op = base + SSOW_LF_GWS_WQP;
+ ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK0;
+ ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
+ ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+ ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
+static void
+cn9k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ uint64_t val;
+
+ /* Set get_work tmo for HWS */
+ val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+ if (dev->dual_ws) {
+ dws = hws;
+ rte_memcpy(dws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ dws->fc_mem = dev->fc_mem;
+ dws->xaq_lmt = dev->xaq_lmt;
+
+ plt_write64(val, dws->base[0] + SSOW_LF_GWS_NW_TIM);
+ plt_write64(val, dws->base[1] + SSOW_LF_GWS_NW_TIM);
+ } else {
+ ws = hws;
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+
+ plt_write64(val, ws->base + SSOW_LF_GWS_NW_TIM);
+ }
+}
+
+static void
+cn9k_sso_hws_release(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+
+ if (dev->dual_ws) {
+ dws = hws;
+ memset(dws, 0, sizeof(*dws));
+ } else {
+ ws = hws;
+ memset(ws, 0, sizeof(*ws));
+ }
+}
+
static void
cn9k_sso_set_rsrc(void *arg)
{
@@ -33,6 +90,60 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void *
+cn9k_sso_init_hws_mem(void *arg, uint8_t port_id)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ void *data;
+
+ if (dev->dual_ws) {
+ dws = rte_zmalloc("cn9k_dual_ws",
+ sizeof(struct cn9k_sso_hws_dual) +
+ RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (dws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ dws = RTE_PTR_ADD(dws, sizeof(struct cnxk_sso_hws_cookie));
+ dws->base[0] = roc_sso_hws_base_get(
+ &dev->sso, CN9K_DUAL_WS_PAIR_ID(port_id, 0));
+ dws->base[1] = roc_sso_hws_base_get(
+ &dev->sso, CN9K_DUAL_WS_PAIR_ID(port_id, 1));
+ cn9k_init_hws_ops(&dws->ws_state[0], dws->base[0]);
+ cn9k_init_hws_ops(&dws->ws_state[1], dws->base[1]);
+ dws->hws_id = port_id;
+ dws->swtag_req = 0;
+ dws->vws = 0;
+
+ data = dws;
+ } else {
+ /* Allocate event port memory */
+ ws = rte_zmalloc("cn9k_ws",
+ sizeof(struct cn9k_sso_hws) +
+ RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (ws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ /* First cache line is reserved for cookie */
+ ws = RTE_PTR_ADD(ws, sizeof(struct cnxk_sso_hws_cookie));
+ ws->base = roc_sso_hws_base_get(&dev->sso, port_id);
+ cn9k_init_hws_ops((struct cn9k_sso_hws_state *)ws, ws->base);
+ ws->hws_id = port_id;
+ ws->swtag_req = 0;
+
+ data = ws;
+ }
+
+ return data;
+}
+
static void
cn9k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -67,12 +178,46 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ rc = cnxk_setup_event_ports(event_dev, cn9k_sso_init_hws_mem,
+ cn9k_sso_hws_setup);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
+ dev->nb_event_ports = 0;
return rc;
}
+static int
+cn9k_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ const struct rte_event_port_conf *port_conf)
+{
+
+ RTE_SET_USED(port_conf);
+ return cnxk_sso_port_setup(event_dev, port_id, cn9k_sso_hws_setup);
+}
+
+static void
+cn9k_sso_port_release(void *port)
+{
+ struct cnxk_sso_hws_cookie *gws_cookie = cnxk_sso_hws_get_cookie(port);
+ struct cnxk_sso_evdev *dev;
+
+ if (port == NULL)
+ return;
+
+ dev = cnxk_sso_pmd_priv(gws_cookie->event_dev);
+ if (!gws_cookie->configured)
+ goto free;
+
+ cn9k_sso_hws_release(dev, port);
+ memset(gws_cookie, 0, sizeof(*gws_cookie));
+free:
+ rte_free(gws_cookie);
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -80,6 +225,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.queue_setup = cnxk_sso_queue_setup,
.queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
+ .port_setup = cn9k_sso_port_setup,
+ .port_release = cn9k_sso_port_release,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index e93aaccd8..daf24d84a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -125,6 +125,42 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
return rc;
}
+int
+cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
+ cnxk_sso_init_hws_mem_t init_hws_fn,
+ cnxk_sso_hws_setup_t setup_hws_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ struct cnxk_sso_hws_cookie *ws_cookie;
+ void *ws;
+
+ /* Reuse the existing port memory if it was already allocated */
+ if (event_dev->data->ports[i] != NULL)
+ ws = event_dev->data->ports[i];
+ else
+ ws = init_hws_fn(dev, i);
+ if (ws == NULL)
+ goto hws_fini;
+ ws_cookie = cnxk_sso_hws_get_cookie(ws);
+ ws_cookie->event_dev = event_dev;
+ ws_cookie->configured = 1;
+ event_dev->data->ports[i] = ws;
+ cnxk_sso_port_setup((struct rte_eventdev *)(uintptr_t)event_dev,
+ i, setup_hws_fn);
+ }
+
+ return 0;
+hws_fini:
+ for (i = i - 1; i >= 0; i--) {
+ rte_free(cnxk_sso_hws_get_cookie(event_dev->data->ports[i]));
+ event_dev->data->ports[i] = NULL;
+ }
+ return -ENOMEM;
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
@@ -225,6 +261,35 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+int
+cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ cnxk_sso_hws_setup_t hws_setup_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP] = {0};
+ uint16_t q;
+
+ plt_sso_dbg("Port=%d", port_id);
+ if (event_dev->data->ports[port_id] == NULL) {
+ plt_err("Invalid port Id %d", port_id);
+ return -EINVAL;
+ }
+
+ for (q = 0; q < dev->nb_event_queues; q++) {
+ grps_base[q] = roc_sso_hwgrp_base_get(&dev->sso, q);
+ if (grps_base[q] == 0) {
+ plt_err("Failed to get grp[%d] base addr", q);
+ return -EINVAL;
+ }
+ }
+
+ hws_setup_fn(dev, event_dev->data->ports[port_id], grps_base);
+ plt_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
+ rte_mb();
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index b96a6a908..79eab1829 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,13 +17,23 @@
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
+#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
+#define CNXK_SSO_MAX_HWGRP (RTE_EVENT_MAX_QUEUES_PER_DEV + 1)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+#define CN10K_GW_MODE_NONE 0
+#define CN10K_GW_MODE_PREF 1
+#define CN10K_GW_MODE_PREF_WFE 2
+
+typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
+typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
+typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
+
struct cnxk_sso_qos {
uint16_t queue;
uint8_t xaq_prcnt;
@@ -53,6 +63,76 @@ struct cnxk_sso_evdev {
struct cnxk_sso_qos *qos_parse_data;
/* CN9K */
uint8_t dual_ws;
+ /* CN10K */
+ uint8_t gw_mode;
+} __rte_cache_aligned;
+
+/* CN10K HWS ops */
+#define CN10K_SSO_HWS_OPS \
+ uintptr_t swtag_desched_op; \
+ uintptr_t swtag_flush_op; \
+ uintptr_t swtag_untag_op; \
+ uintptr_t swtag_norm_op; \
+ uintptr_t updt_wqe_op; \
+ uintptr_t tag_wqe_op; \
+ uintptr_t getwrk_op
+
+struct cn10k_sso_hws {
+ /* Get Work Fastpath data */
+ CN10K_SSO_HWS_OPS;
+ uint32_t gw_wdata;
+ uint8_t swtag_req;
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base;
+ uintptr_t lmt_base;
+} __rte_cache_aligned;
+
+/* CN9K HWS ops */
+#define CN9K_SSO_HWS_OPS \
+ uintptr_t swtag_desched_op; \
+ uintptr_t swtag_flush_op; \
+ uintptr_t swtag_norm_op; \
+ uintptr_t getwrk_op; \
+ uintptr_t tag_op; \
+ uintptr_t wqp_op
+
+/* Event port a.k.a GWS */
+struct cn9k_sso_hws {
+ /* Get Work Fastpath data */
+ CN9K_SSO_HWS_OPS;
+ uint8_t swtag_req;
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base;
+} __rte_cache_aligned;
+
+struct cn9k_sso_hws_state {
+ CN9K_SSO_HWS_OPS;
+};
+
+struct cn9k_sso_hws_dual {
+ /* Get Work Fastpath data */
+ struct cn9k_sso_hws_state ws_state[2]; /* Ping and Pong */
+ uint8_t swtag_req;
+ uint8_t vws; /* Ping pong bit */
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base[2];
+} __rte_cache_aligned;
+
+struct cnxk_sso_hws_cookie {
+ const struct rte_eventdev *event_dev;
+ bool configured;
} __rte_cache_aligned;
static inline int
@@ -70,6 +150,12 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
return event_dev->data->dev_private;
}
+static inline struct cnxk_sso_hws_cookie *
+cnxk_sso_hws_get_cookie(void *ws)
+{
+ return RTE_PTR_SUB(ws, sizeof(struct cnxk_sso_hws_cookie));
+}
+
/* Configuration functions */
int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
@@ -80,6 +166,9 @@ int cnxk_sso_remove(struct rte_pci_device *pci_dev);
void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
struct rte_event_dev_info *dev_info);
int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
+int cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
+ cnxk_sso_init_hws_mem_t init_hws_mem,
+ cnxk_sso_hws_setup_t hws_setup);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
@@ -87,5 +176,7 @@ int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
+int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ cnxk_sso_hws_setup_t hws_setup_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v3 11/33] event/cnxk: add event port link and unlink
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (9 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 10/33] event/cnxk: add port config functions pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 12/33] event/cnxk: add devargs to configure getwork mode pbhagavatula
` (23 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add platform-specific event port and queue link/unlink APIs.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 64 +++++++++++++++++-
drivers/event/cnxk/cn9k_eventdev.c | 101 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 36 ++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 12 +++-
4 files changed, 210 insertions(+), 3 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 11eaef380..b149b7831 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -63,6 +63,24 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
return ws;
}
+static int
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = port;
+
+ return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+}
+
+static int
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = port;
+
+ return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+}
+
static void
cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
{
@@ -83,9 +101,12 @@ cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
static void
cn10k_sso_hws_release(void *arg, void *hws)
{
+ struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
+ int i;
- RTE_SET_USED(arg);
+ for (i = 0; i < dev->nb_event_queues; i++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, (uint16_t *)&i, 1);
memset(ws, 0, sizeof(*ws));
}
@@ -149,6 +170,12 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ /* Restore any prior port-queue mapping. */
+ cnxk_sso_restore_links(event_dev, cn10k_sso_hws_link);
+
+ dev->configured = 1;
+ rte_mb();
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
@@ -184,6 +211,38 @@ cn10k_sso_port_release(void *port)
rte_free(gws_cookie);
}
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_links];
+ uint16_t link;
+
+ RTE_SET_USED(priorities);
+ for (link = 0; link < nb_links; link++)
+ hwgrp_ids[link] = queues[link];
+ nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+
+ return (int)nb_links;
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_unlinks];
+ uint16_t unlink;
+
+ for (unlink = 0; unlink < nb_unlinks; unlink++)
+ hwgrp_ids[unlink] = queues[unlink];
+ nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+
+ return (int)nb_unlinks;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -193,6 +252,9 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_def_conf = cnxk_sso_port_def_conf,
.port_setup = cn10k_sso_port_setup,
.port_release = cn10k_sso_port_release,
+ .port_link = cn10k_sso_port_link,
+ .port_unlink = cn10k_sso_port_unlink,
+ .timeout_ticks = cnxk_sso_timeout_ticks,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 2daa14b50..b26fc0eae 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -18,6 +18,54 @@ cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t base)
ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
}
+static int
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ int rc;
+
+ if (dev->dual_ws) {
+ dws = port;
+ rc = roc_sso_hws_link(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link);
+ rc |= roc_sso_hws_link(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ map, nb_link);
+ } else {
+ ws = port;
+ rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ }
+
+ return rc;
+}
+
+static int
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ int rc;
+
+ if (dev->dual_ws) {
+ dws = port;
+ rc = roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ map, nb_link);
+ rc |= roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ map, nb_link);
+ } else {
+ ws = port;
+ rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ }
+
+ return rc;
+}
+
static void
cn9k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
{
@@ -54,12 +102,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
struct cn9k_sso_hws *ws;
+ int i;
if (dev->dual_ws) {
dws = hws;
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ (uint16_t *)&i, 1);
+ roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ (uint16_t *)&i, 1);
+ }
memset(dws, 0, sizeof(*dws));
} else {
ws = hws;
+ for (i = 0; i < dev->nb_event_queues; i++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id,
+ (uint16_t *)&i, 1);
memset(ws, 0, sizeof(*ws));
}
}
@@ -183,6 +243,12 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ /* Restore any prior port-queue mapping. */
+ cnxk_sso_restore_links(event_dev, cn9k_sso_hws_link);
+
+ dev->configured = 1;
+ rte_mb();
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
@@ -218,6 +284,38 @@ cn9k_sso_port_release(void *port)
rte_free(gws_cookie);
}
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_links];
+ uint16_t link;
+
+ RTE_SET_USED(priorities);
+ for (link = 0; link < nb_links; link++)
+ hwgrp_ids[link] = queues[link];
+ nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+
+ return (int)nb_links;
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_unlinks];
+ uint16_t unlink;
+
+ for (unlink = 0; unlink < nb_unlinks; unlink++)
+ hwgrp_ids[unlink] = queues[unlink];
+ nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+
+ return (int)nb_unlinks;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -227,6 +325,9 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_def_conf = cnxk_sso_port_def_conf,
.port_setup = cn9k_sso_port_setup,
.port_release = cn9k_sso_port_release,
+ .port_link = cn9k_sso_port_link,
+ .port_unlink = cn9k_sso_port_unlink,
+ .timeout_ticks = cnxk_sso_timeout_ticks,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index daf24d84a..e68079997 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -161,6 +161,32 @@ cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
return -ENOMEM;
}
+void
+cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
+ cnxk_sso_link_t link_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
+ int i, j;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ uint16_t nb_hwgrp = 0;
+
+ links_map = event_dev->data->links_map;
+ /* Point links_map to this port specific area */
+ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ if (links_map[j] == 0xdead)
+ continue;
+ hwgrp[nb_hwgrp] = j;
+ nb_hwgrp++;
+ }
+
+ link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
+ }
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
@@ -290,6 +316,16 @@ cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
return 0;
}
+int
+cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
+ uint64_t *tmo_ticks)
+{
+ RTE_SET_USED(event_dev);
+ *tmo_ticks = NSEC2TICK(ns, rte_get_timer_hz());
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 79eab1829..97a944d88 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,8 +17,9 @@
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
-#define NSEC2USEC(__ns) ((__ns) / 1E3)
-#define USEC2NSEC(__us) ((__us)*1E3)
+#define NSEC2USEC(__ns) ((__ns) / 1E3)
+#define USEC2NSEC(__us) ((__us)*1E3)
+#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
#define CNXK_SSO_MAX_HWGRP (RTE_EVENT_MAX_QUEUES_PER_DEV + 1)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
@@ -33,6 +34,8 @@
typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
+ uint16_t nb_link);
struct cnxk_sso_qos {
uint16_t queue;
@@ -48,6 +51,7 @@ struct cnxk_sso_evdev {
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
+ uint8_t configured;
uint32_t deq_tmo_ns;
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
@@ -169,6 +173,8 @@ int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
int cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
cnxk_sso_init_hws_mem_t init_hws_mem,
cnxk_sso_hws_setup_t hws_setup);
+void cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
+ cnxk_sso_link_t link_fn);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
@@ -178,5 +184,7 @@ void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
cnxk_sso_hws_setup_t hws_setup_fn);
+int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
+ uint64_t *tmo_ticks);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v3 12/33] event/cnxk: add devargs to configure getwork mode
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (10 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 11/33] event/cnxk: add event port link and unlink pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 13/33] event/cnxk: add SSO HW device operations pbhagavatula
` (22 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add devargs to configure the platform-specific getwork mode.
CN9K getwork mode defaults to dual workslot mode.
Add an option to force single workslot mode.
Example:
--dev "0002:0e:00.0,single_ws=1"
CN10K supports multiple getwork prefetch modes, by default the
prefetch mode is set to none.
Add option to select getwork prefetch mode
Example:
--dev "0002:1e:00.0,gw_mode=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 18 ++++++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 3 ++-
drivers/event/cnxk/cn9k_eventdev.c | 3 ++-
drivers/event/cnxk/cnxk_eventdev.c | 6 ++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 ++++--
5 files changed, 32 insertions(+), 4 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 0583e5fdd..f48452982 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -55,6 +55,24 @@ Runtime Config Options
-a 0002:0e:00.0,xae_cnt=16384
+- ``CN9K Getwork mode``
+
+ CN9K ``single_ws`` devargs parameter is introduced to select single workslot
+ mode in SSO and disable the default dual workslot mode.
+
+ For example::
+
+ -a 0002:0e:00.0,single_ws=1
+
+- ``CN10K Getwork mode``
+
+ CN10K supports multiple getwork prefetch modes; by default the prefetch
+ mode is set to none.
+
+ For example::
+
+ -a 0002:0e:00.0,gw_mode=1
+
- ``Event Group QoS support``
SSO GGRPs i.e. queue uses DRAM & SRAM buffers to hold in-flight
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index b149b7831..fe1bbd2d4 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -327,4 +327,5 @@ RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
- CNXK_SSO_GGRP_QOS "=<string>");
+ CNXK_SSO_GGRP_QOS "=<string>"
+ CN10K_SSO_GW_MODE "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index b26fc0eae..29d1de09e 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -395,4 +395,5 @@ RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
- CNXK_SSO_GGRP_QOS "=<string>");
+ CNXK_SSO_GGRP_QOS "=<string>"
+ CN9K_SSO_SINGLE_WS "=1");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index e68079997..2a387ff95 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -406,6 +406,7 @@ static void
cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
{
struct rte_kvargs *kvlist;
+ uint8_t single_ws = 0;
if (devargs == NULL)
return;
@@ -417,6 +418,11 @@ cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
&dev->xae_cnt);
rte_kvargs_process(kvlist, CNXK_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
dev);
+ rte_kvargs_process(kvlist, CN9K_SSO_SINGLE_WS, &parse_kvargs_value,
+ &single_ws);
+ rte_kvargs_process(kvlist, CN10K_SSO_GW_MODE, &parse_kvargs_value,
+ &dev->gw_mode);
+ dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 97a944d88..437cdf3db 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,8 +14,10 @@
#include "roc_api.h"
-#define CNXK_SSO_XAE_CNT "xae_cnt"
-#define CNXK_SSO_GGRP_QOS "qos"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_GGRP_QOS "qos"
+#define CN9K_SSO_SINGLE_WS "single_ws"
+#define CN10K_SSO_GW_MODE "gw_mode"
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
--
2.17.1
* [dpdk-dev] [PATCH v3 13/33] event/cnxk: add SSO HW device operations
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (11 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 12/33] event/cnxk: add devargs to configure getwork mode pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 14/33] event/cnxk: add SSO GWS fastpath enqueue functions pbhagavatula
` (21 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO HW device operations used for enqueue/dequeue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_worker.c | 7 +
drivers/event/cnxk/cn10k_worker.h | 151 +++++++++++++++++
drivers/event/cnxk/cn9k_worker.c | 7 +
drivers/event/cnxk/cn9k_worker.h | 249 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 10 ++
drivers/event/cnxk/cnxk_worker.h | 101 ++++++++++++
drivers/event/cnxk/meson.build | 4 +-
7 files changed, 528 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cn10k_worker.c
create mode 100644 drivers/event/cnxk/cn10k_worker.h
create mode 100644 drivers/event/cnxk/cn9k_worker.c
create mode 100644 drivers/event/cnxk/cn9k_worker.h
create mode 100644 drivers/event/cnxk/cnxk_worker.h
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
new file mode 100644
index 000000000..63b587301
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cn10k_worker.h"
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
new file mode 100644
index 000000000..04517055d
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CN10K_WORKER_H__
+#define __CN10K_WORKER_H__
+
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+/* SSO Operations */
+
+static __rte_always_inline uint8_t
+cn10k_sso_hws_new_event(struct cn10k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, ws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_fwd_swtag(struct cn10k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint8_t cur_tt = CNXK_TT_FROM_TAG(plt_read64(ws->tag_wqe_op));
+
+ /* CNXK model
+ * cur_tt/new_tt SSO_TT_ORDERED SSO_TT_ATOMIC SSO_TT_UNTAGGED
+ *
+ * SSO_TT_ORDERED norm norm untag
+ * SSO_TT_ATOMIC norm norm untag
+ * SSO_TT_UNTAGGED norm norm NOOP
+ */
+
+ if (new_tt == SSO_TT_UNTAGGED) {
+ if (cur_tt != SSO_TT_UNTAGGED)
+ cnxk_sso_hws_swtag_untag(ws->swtag_untag_op);
+ } else {
+ cnxk_sso_hws_swtag_norm(tag, new_tt, ws->swtag_norm_op);
+ }
+ ws->swtag_req = 1;
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_fwd_group(struct cn10k_sso_hws *ws, const struct rte_event *ev,
+ const uint16_t grp)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+
+ plt_write64(ev->u64, ws->updt_wqe_op);
+ cnxk_sso_hws_swtag_desched(tag, new_tt, grp, ws->swtag_desched_op);
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_forward_event(struct cn10k_sso_hws *ws,
+ const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed; use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(ws->tag_wqe_op)) == grp)
+ cn10k_sso_hws_fwd_swtag(ws, ev);
+ else
+ /*
+ * Group has changed for group-based work pipelining;
+ * use the deschedule/add_work operation to transfer the
+ * event to the new group/core.
+ */
+ cn10k_sso_hws_fwd_group(ws, ev, grp);
+}
+
+static __rte_always_inline uint16_t
+cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+ gw.get_work = ws->gw_wdata;
+#if defined(RTE_ARCH_ARM64) && !defined(__clang__)
+ asm volatile(
+ PLT_CPU_FEATURE_PREAMBLE
+ "caspl %[wdata], %H[wdata], %[wdata], %H[wdata], [%[gw_loc]]\n"
+ : [wdata] "+r"(gw.get_work)
+ : [gw_loc] "r"(ws->getwrk_op)
+ : "memory");
+#else
+ plt_write64(gw.u64[0], ws->getwrk_op);
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+#endif
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+/* Used in cleaning up workslot. */
+static __rte_always_inline uint16_t
+cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldp %[tag], %[wqp], [%[tag_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldp %[tag], %[wqp], [%[tag_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_wqe_op)
+ : "memory");
+#else
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+#endif
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
new file mode 100644
index 000000000..836914163
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+
+#include "cn9k_worker.h"
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
new file mode 100644
index 000000000..85be742c1
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -0,0 +1,249 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CN9K_WORKER_H__
+#define __CN9K_WORKER_H__
+
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+/* SSO Operations */
+
+static __rte_always_inline uint8_t
+cn9k_sso_hws_new_event(struct cn9k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, ws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_fwd_swtag(struct cn9k_sso_hws_state *vws,
+ const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint8_t cur_tt = CNXK_TT_FROM_TAG(plt_read64(vws->tag_op));
+
+ /* CNXK model
+ * cur_tt/new_tt SSO_TT_ORDERED SSO_TT_ATOMIC SSO_TT_UNTAGGED
+ *
+ * SSO_TT_ORDERED norm norm untag
+ * SSO_TT_ATOMIC norm norm untag
+ * SSO_TT_UNTAGGED norm norm NOOP
+ */
+
+ if (new_tt == SSO_TT_UNTAGGED) {
+ if (cur_tt != SSO_TT_UNTAGGED)
+ cnxk_sso_hws_swtag_untag(
+ CN9K_SSOW_GET_BASE_ADDR(vws->getwrk_op) +
+ SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ } else {
+ cnxk_sso_hws_swtag_norm(tag, new_tt, vws->swtag_norm_op);
+ }
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_fwd_group(struct cn9k_sso_hws_state *ws,
+ const struct rte_event *ev, const uint16_t grp)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+
+ plt_write64(ev->u64, CN9K_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
+ SSOW_LF_GWS_OP_UPD_WQP_GRP1);
+ cnxk_sso_hws_swtag_desched(tag, new_tt, grp, ws->swtag_desched_op);
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_forward_event(struct cn9k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed; use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(ws->tag_op)) == grp) {
+ cn9k_sso_hws_fwd_swtag((struct cn9k_sso_hws_state *)ws, ev);
+ ws->swtag_req = 1;
+ } else {
+ /*
+ * Group has changed for group-based work pipelining;
+ * use the deschedule/add_work operation to transfer the
+ * event to the new group/core.
+ */
+ cn9k_sso_hws_fwd_group((struct cn9k_sso_hws_state *)ws, ev,
+ grp);
+ }
+}
+
+/* Dual ws ops. */
+
+static __rte_always_inline uint8_t
+cn9k_sso_hws_dual_new_event(struct cn9k_sso_hws_dual *dws,
+ const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (dws->xaq_lmt <= *dws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, dws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws,
+ struct cn9k_sso_hws_state *vws,
+ const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed; use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(vws->tag_op)) == grp) {
+ cn9k_sso_hws_fwd_swtag(vws, ev);
+ dws->swtag_req = 1;
+ } else {
+ /*
+ * Group has changed for group-based work pipelining;
+ * use the deschedule/add_work operation to transfer the
+ * event to the new group/core.
+ */
+ cn9k_sso_hws_fwd_group(vws, ev, grp);
+ }
+}
+
+static __rte_always_inline uint16_t
+cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws,
+ struct cn9k_sso_hws_state *ws_pair,
+ struct rte_event *ev)
+{
+ const uint64_t set_gw = BIT_ULL(16) | 1;
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ "rty%=: \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: str %[gw], [%[pong]] \n"
+ " dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op),
+ [gw] "r"(set_gw), [pong] "r"(ws_pair->getwrk_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+ gw.u64[1] = plt_read64(ws->wqp_op);
+ plt_write64(set_gw, ws_pair->getwrk_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+static __rte_always_inline uint16_t
+cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+ plt_write64(BIT_ULL(16) | /* wait for work. */
+ 1, /* Use Mask set 0. */
+ ws->getwrk_op);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+
+ gw.u64[1] = plt_read64(ws->wqp_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+/* Used in cleaning up workslot. */
+static __rte_always_inline uint16_t
+cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+
+ gw.u64[1] = plt_read64(ws->wqp_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+#endif
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 437cdf3db..0a3ab71e4 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -29,6 +29,16 @@
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+#define CNXK_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY)
+#define CNXK_TT_FROM_EVENT(x) (((x) >> 38) & SSO_TT_EMPTY)
+#define CNXK_EVENT_TYPE_FROM_TAG(x) (((x) >> 28) & 0xf)
+#define CNXK_SUB_EVENT_FROM_TAG(x) (((x) >> 20) & 0xff)
+#define CNXK_CLR_SUB_EVENT(x) (~(0xffu << 20) & x)
+#define CNXK_GRP_FROM_TAG(x) (((x) >> 36) & 0x3ff)
+#define CNXK_SWTAG_PEND(x) (BIT_ULL(62) & x)
+
+#define CN9K_SSOW_GET_BASE_ADDR(_GW) ((_GW)-SSOW_LF_GWS_OP_GET_WORK0)
+
#define CN10K_GW_MODE_NONE 0
#define CN10K_GW_MODE_PREF 1
#define CN10K_GW_MODE_PREF_WFE 2
diff --git a/drivers/event/cnxk/cnxk_worker.h b/drivers/event/cnxk/cnxk_worker.h
new file mode 100644
index 000000000..4eb46ae16
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_worker.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_WORKER_H__
+#define __CNXK_WORKER_H__
+
+#include "cnxk_eventdev.h"
+
+/* SSO Operations */
+
+static __rte_always_inline void
+cnxk_sso_hws_add_work(const uint64_t event_ptr, const uint32_t tag,
+ const uint8_t new_tt, const uintptr_t grp_base)
+{
+ uint64_t add_work0;
+
+ add_work0 = tag | ((uint64_t)(new_tt) << 32);
+ roc_store_pair(add_work0, event_ptr, grp_base);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_desched(uint32_t tag, uint8_t new_tt, uint16_t grp,
+ uintptr_t swtag_desched_op)
+{
+ uint64_t val;
+
+ val = tag | ((uint64_t)(new_tt & 0x3) << 32) | ((uint64_t)grp << 34);
+ __atomic_store_n((uint64_t *)swtag_desched_op, val, __ATOMIC_RELEASE);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_norm(uint32_t tag, uint8_t new_tt, uintptr_t swtag_norm_op)
+{
+ uint64_t val;
+
+ val = tag | ((uint64_t)(new_tt & 0x3) << 32);
+ plt_write64(val, swtag_norm_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_untag(uintptr_t swtag_untag_op)
+{
+ plt_write64(0, swtag_untag_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_flush(uint64_t tag_op, uint64_t flush_op)
+{
+ if (CNXK_TT_FROM_TAG(plt_read64(tag_op)) == SSO_TT_EMPTY)
+ return;
+ plt_write64(0, flush_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_wait(uintptr_t tag_op)
+{
+#ifdef RTE_ARCH_ARM64
+ uint64_t swtp;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbz %[swtb], 62, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbnz %[swtb], 62, rty%= \n"
+ "done%=: \n"
+ : [swtb] "=&r"(swtp)
+ : [swtp_loc] "r"(tag_op));
+#else
+ /* Wait for the SWTAG/SWTAG_FULL operation */
+ while (plt_read64(tag_op) & BIT_ULL(62))
+ ;
+#endif
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_head_wait(uintptr_t tag_op)
+{
+#ifdef RTE_ARCH_ARM64
+ uint64_t swtp;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbz %[swtb], 35, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbnz %[swtb], 35, rty%= \n"
+ "done%=: \n"
+ : [swtb] "=&r"(swtp)
+ : [swtp_loc] "r"(tag_op));
+#else
+ /* Wait for the SWTAG/SWTAG_FULL operation */
+ while (plt_read64(tag_op) & BIT_ULL(35))
+ ;
+#endif
+}
+
+#endif
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 888aeb064..3617d707b 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -8,7 +8,9 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
subdir_done()
endif
-sources = files('cn10k_eventdev.c',
+sources = files('cn10k_worker.c',
+ 'cn10k_eventdev.c',
+ 'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
)
--
2.17.1
* [dpdk-dev] [PATCH v3 14/33] event/cnxk: add SSO GWS fastpath enqueue functions
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (12 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 13/33] event/cnxk: add SSO HW device operations pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 15/33] event/cnxk: add SSO GWS dequeue fastpath functions pbhagavatula
` (20 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO GWS fastpath event device enqueue functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 16 +++-
drivers/event/cnxk/cn10k_worker.c | 54 ++++++++++++++
drivers/event/cnxk/cn10k_worker.h | 12 +++
drivers/event/cnxk/cn9k_eventdev.c | 25 ++++++-
drivers/event/cnxk/cn9k_worker.c | 112 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_worker.h | 24 ++++++
6 files changed, 241 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index fe1bbd2d4..c75fcf791 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -2,7 +2,9 @@
* Copyright(C) 2021 Marvell.
*/
+#include "cn10k_worker.h"
#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
static void
cn10k_init_hws_ops(struct cn10k_sso_hws *ws, uintptr_t base)
@@ -130,6 +132,16 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void
+cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
+{
+ PLT_SET_USED(event_dev);
+ event_dev->enqueue = cn10k_sso_hws_enq;
+ event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
+ event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
+ event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
+}
+
static void
cn10k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -276,8 +288,10 @@ cn10k_sso_init(struct rte_eventdev *event_dev)
event_dev->dev_ops = &cn10k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ cn10k_sso_fp_fns_set(event_dev);
return 0;
+ }
rc = cnxk_sso_init(event_dev);
if (rc < 0)
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 63b587301..9b5cb7be6 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -5,3 +5,57 @@
#include "cn10k_worker.h"
#include "cnxk_eventdev.h"
#include "cnxk_worker.h"
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq(void *port, const struct rte_event *ev)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn10k_sso_hws_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn10k_sso_hws_forward_event(ws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(ws->tag_wqe_op, ws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn10k_sso_hws_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn10k_sso_hws *ws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn10k_sso_hws_new_event(ws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ RTE_SET_USED(nb_events);
+ cn10k_sso_hws_forward_event(ws, ev);
+
+ return 1;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 04517055d..48158b320 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -148,4 +148,16 @@ cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev)
return !!gw.u64[1];
}
+/* CN10K Fastpath functions. */
+uint16_t __rte_hot cn10k_sso_hws_enq(void *port, const struct rte_event *ev);
+uint16_t __rte_hot cn10k_sso_hws_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
#endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 29d1de09e..4d2af070d 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -2,7 +2,9 @@
* Copyright(C) 2021 Marvell.
*/
+#include "cn9k_worker.h"
#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
#define CN9K_DUAL_WS_NB_WS 2
#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
@@ -150,6 +152,25 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void
+cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ event_dev->enqueue = cn9k_sso_hws_enq;
+ event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
+ event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
+ event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+
+ if (dev->dual_ws) {
+ event_dev->enqueue = cn9k_sso_hws_dual_enq;
+ event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
+ event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
+ event_dev->enqueue_forward_burst =
+ cn9k_sso_hws_dual_enq_fwd_burst;
+ }
+}
+
static void *
cn9k_sso_init_hws_mem(void *arg, uint8_t port_id)
{
@@ -349,8 +370,10 @@ cn9k_sso_init(struct rte_eventdev *event_dev)
event_dev->dev_ops = &cn9k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ cn9k_sso_fp_fns_set(event_dev);
return 0;
+ }
rc = cnxk_sso_init(event_dev);
if (rc < 0)
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index 836914163..538bc4b0b 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -5,3 +5,115 @@
#include "roc_api.h"
#include "cn9k_worker.h"
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq(void *port, const struct rte_event *ev)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn9k_sso_hws_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn9k_sso_hws_forward_event(ws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn9k_sso_hws_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws *ws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn9k_sso_hws_new_event(ws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ RTE_SET_USED(nb_events);
+ cn9k_sso_hws_forward_event(ws, ev);
+
+ return 1;
+}
+
+/* Dual ws ops. */
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq(void *port, const struct rte_event *ev)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ struct cn9k_sso_hws_state *vws;
+
+ vws = &dws->ws_state[!dws->vws];
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn9k_sso_hws_dual_new_event(dws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn9k_sso_hws_dual_forward_event(dws, vws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(vws->tag_op, vws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn9k_sso_hws_dual_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn9k_sso_hws_dual_new_event(dws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+
+ RTE_SET_USED(nb_events);
+ cn9k_sso_hws_dual_forward_event(dws, &dws->ws_state[!dws->vws], ev);
+
+ return 1;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 85be742c1..aa321d0e4 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -246,4 +246,28 @@ cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
return !!gw.u64[1];
}
+/* CN9K Fastpath functions. */
+uint16_t __rte_hot cn9k_sso_hws_enq(void *port, const struct rte_event *ev);
+uint16_t __rte_hot cn9k_sso_hws_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
+uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
+ const struct rte_event *ev);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
#endif
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v3 15/33] event/cnxk: add SSO GWS dequeue fastpath functions
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (13 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 14/33] event/cnxk: add SSO GWS fastpath enqueue functions pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 16/33] event/cnxk: add device start function pbhagavatula
` (19 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO GWS event dequeue fastpath functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 10 ++-
drivers/event/cnxk/cn10k_worker.c | 54 +++++++++++++
drivers/event/cnxk/cn10k_worker.h | 12 +++
drivers/event/cnxk/cn9k_eventdev.c | 15 ++++
drivers/event/cnxk/cn9k_worker.c | 117 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_worker.h | 24 ++++++
6 files changed, 231 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index c75fcf791..7095ea13e 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -135,11 +135,19 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
static void
cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
{
- PLT_SET_USED(event_dev);
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
event_dev->enqueue = cn10k_sso_hws_enq;
event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
+
+ event_dev->dequeue = cn10k_sso_hws_deq;
+ event_dev->dequeue_burst = cn10k_sso_hws_deq_burst;
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = cn10k_sso_hws_tmo_deq;
+ event_dev->dequeue_burst = cn10k_sso_hws_tmo_deq_burst;
+ }
}
static void
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 9b5cb7be6..e2aa534c6 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -59,3 +59,57 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+uint16_t __rte_hot
+cn10k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ RTE_SET_USED(timeout_ticks);
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
+ return 1;
+ }
+
+ return cn10k_sso_hws_get_work(ws, ev);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
+ uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn10k_sso_hws_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn10k_sso_hws *ws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
+ return ret;
+ }
+
+ ret = cn10k_sso_hws_get_work(ws, ev);
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
+ ret = cn10k_sso_hws_get_work(ws, ev);
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn10k_sso_hws_tmo_deq(port, ev, timeout_ticks);
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 48158b320..2f093a8dd 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -160,4 +160,16 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_tmo_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
#endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 4d2af070d..5506ea320 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -162,12 +162,27 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+ event_dev->dequeue = cn9k_sso_hws_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_deq_burst;
+ if (dev->deq_tmo_ns) {
+ event_dev->dequeue = cn9k_sso_hws_tmo_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_tmo_deq_burst;
+ }
+
if (dev->dual_ws) {
event_dev->enqueue = cn9k_sso_hws_dual_enq;
event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
event_dev->enqueue_forward_burst =
cn9k_sso_hws_dual_enq_fwd_burst;
+
+ event_dev->dequeue = cn9k_sso_hws_dual_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_dual_deq_burst;
+ if (dev->deq_tmo_ns) {
+ event_dev->dequeue = cn9k_sso_hws_dual_tmo_deq;
+ event_dev->dequeue_burst =
+ cn9k_sso_hws_dual_tmo_deq_burst;
+ }
}
}
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index 538bc4b0b..9ceacc98d 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -60,6 +60,60 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+uint16_t __rte_hot
+cn9k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ RTE_SET_USED(timeout_ticks);
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_op);
+ return 1;
+ }
+
+ return cn9k_sso_hws_get_work(ws, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
+ uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws *ws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_op);
+ return ret;
+ }
+
+ ret = cn9k_sso_hws_get_work(ws, ev);
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
+ ret = cn9k_sso_hws_get_work(ws, ev);
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_tmo_deq(port, ev, timeout_ticks);
+}
+
/* Dual ws ops. */
uint16_t __rte_hot
@@ -117,3 +171,66 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t gw;
+
+ RTE_SET_USED(timeout_ticks);
+ if (dws->swtag_req) {
+ dws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(dws->ws_state[!dws->vws].tag_op);
+ return 1;
+ }
+
+ gw = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ return gw;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_dual_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (dws->swtag_req) {
+ dws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(dws->ws_state[!dws->vws].tag_op);
+ return ret;
+ }
+
+ ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) {
+ ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ }
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_dual_tmo_deq(port, ev, timeout_ticks);
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index aa321d0e4..38fca08fb 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -270,4 +270,28 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
+uint16_t __rte_hot cn9k_sso_hws_dual_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
#endif
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v3 16/33] event/cnxk: add device start function
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (14 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 15/33] event/cnxk: add SSO GWS dequeue fastpath functions pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 17/33] event/cnxk: add device stop and close functions pbhagavatula
` (18 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add the eventdev start function along with a few cleanup APIs needed to
bring the device to a sane state before starting.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 127 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 113 +++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 64 ++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 7 ++
4 files changed, 311 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 7095ea13e..bc873eada 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -112,6 +112,117 @@ cn10k_sso_hws_release(void *arg, void *hws)
memset(ws, 0, sizeof(*ws));
}
+static void
+cn10k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg)
+{
+ struct cn10k_sso_hws *ws = hws;
+ uint64_t cq_ds_cnt = 1;
+ uint64_t aq_cnt = 1;
+ uint64_t ds_cnt = 1;
+ struct rte_event ev;
+ uint64_t val, req;
+
+ plt_write64(0, base + SSO_LF_GGRP_QCTL);
+
+ req = queue_id; /* GGRP ID */
+ req |= BIT_ULL(18); /* Grouped */
+ req |= BIT_ULL(16); /* WAIT */
+
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+
+ while (aq_cnt || cq_ds_cnt || ds_cnt) {
+ plt_write64(req, ws->getwrk_op);
+ cn10k_sso_hws_get_work_empty(ws, &ev);
+ if (fn != NULL && ev.u64 != 0)
+ fn(arg, ev);
+ if (ev.sched_type != SSO_TT_EMPTY)
+ cnxk_sso_hws_swtag_flush(ws->tag_wqe_op,
+ ws->swtag_flush_op);
+ do {
+ val = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+ } while (val & BIT_ULL(56));
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ /* Extract cq and ds count */
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+ }
+
+ plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+ rte_mb();
+}
+
+static void
+cn10k_sso_hws_reset(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = hws;
+ uintptr_t base = ws->base;
+ uint64_t pend_state;
+ union {
+ __uint128_t wdata;
+ uint64_t u64[2];
+ } gw;
+ uint8_t pend_tt;
+
+ /* Wait till getwork/swtp/waitw/desched completes. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+ BIT_ULL(56) | BIT_ULL(54)));
+ pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(base +
+ SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+
+ /* Wait for desched to complete. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & BIT_ULL(58));
+
+ switch (dev->gw_mode) {
+ case CN10K_GW_MODE_PREF:
+ while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) & BIT_ULL(63))
+ ;
+ break;
+ case CN10K_GW_MODE_PREF_WFE:
+ while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) &
+ SSOW_LF_GWS_TAG_PEND_GET_WORK_BIT)
+ continue;
+ plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+ break;
+ case CN10K_GW_MODE_NONE:
+ default:
+ break;
+ }
+
+ if (CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_PRF_WQE0)) !=
+ SSO_TT_EMPTY) {
+ plt_write64(BIT_ULL(16) | 1, ws->getwrk_op);
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+ pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC ||
+ pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(
+ base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+ }
+
+ plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+ rte_mb();
+}
+
static void
cn10k_sso_set_rsrc(void *arg)
{
@@ -263,6 +374,20 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
return (int)nb_unlinks;
}
+static int
+cn10k_sso_start(struct rte_eventdev *event_dev)
+{
+ int rc;
+
+ rc = cnxk_sso_start(event_dev, cn10k_sso_hws_reset,
+ cn10k_sso_hws_flush_events);
+ if (rc < 0)
+ return rc;
+ cn10k_sso_fp_fns_set(event_dev);
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -275,6 +400,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_link = cn10k_sso_port_link,
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+
+ .dev_start = cn10k_sso_start,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 5506ea320..a4613e410 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -126,6 +126,102 @@ cn9k_sso_hws_release(void *arg, void *hws)
}
}
+static void
+cn9k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(arg);
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws_state *st;
+ struct cn9k_sso_hws *ws;
+ uint64_t cq_ds_cnt = 1;
+ uint64_t aq_cnt = 1;
+ uint64_t ds_cnt = 1;
+ struct rte_event ev;
+ uintptr_t ws_base;
+ uint64_t val, req;
+
+ plt_write64(0, base + SSO_LF_GGRP_QCTL);
+
+ req = queue_id; /* GGRP ID */
+ req |= BIT_ULL(18); /* Grouped */
+ req |= BIT_ULL(16); /* WAIT */
+
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+
+ if (dev->dual_ws) {
+ dws = hws;
+ st = &dws->ws_state[0];
+ ws_base = dws->base[0];
+ } else {
+ ws = hws;
+ st = (struct cn9k_sso_hws_state *)ws;
+ ws_base = ws->base;
+ }
+
+ while (aq_cnt || cq_ds_cnt || ds_cnt) {
+ plt_write64(req, st->getwrk_op);
+ cn9k_sso_hws_get_work_empty(st, &ev);
+ if (fn != NULL && ev.u64 != 0)
+ fn(arg, ev);
+ if (ev.sched_type != SSO_TT_EMPTY)
+ cnxk_sso_hws_swtag_flush(st->tag_op,
+ st->swtag_flush_op);
+ do {
+ val = plt_read64(ws_base + SSOW_LF_GWS_PENDSTATE);
+ } while (val & BIT_ULL(56));
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ /* Extract cq and ds count */
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+ }
+
+ plt_write64(0, ws_base + SSOW_LF_GWS_OP_GWC_INVAL);
+}
+
+static void
+cn9k_sso_hws_reset(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ uint64_t pend_state;
+ uint8_t pend_tt;
+ uintptr_t base;
+ uint64_t tag;
+ uint8_t i;
+
+ dws = hws;
+ ws = hws;
+ for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
+ base = dev->dual_ws ? dws->base[i] : ws->base;
+ /* Wait till getwork/swtp/waitw/desched completes. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+ BIT_ULL(56)));
+
+ tag = plt_read64(base + SSOW_LF_GWS_TAG);
+ pend_tt = (tag >> 32) & 0x3;
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC ||
+ pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(
+ base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+
+ /* Wait for desched to complete. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & BIT_ULL(58));
+ }
+}
+
static void
cn9k_sso_set_rsrc(void *arg)
{
@@ -352,6 +448,21 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
return (int)nb_unlinks;
}
+static int
+cn9k_sso_start(struct rte_eventdev *event_dev)
+{
+ int rc;
+
+ rc = cnxk_sso_start(event_dev, cn9k_sso_hws_reset,
+ cn9k_sso_hws_flush_events);
+ if (rc < 0)
+ return rc;
+
+ cn9k_sso_fp_fns_set(event_dev);
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -364,6 +475,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_link = cn9k_sso_port_link,
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+
+ .dev_start = cn9k_sso_start,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 2a387ff95..5feae5288 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -326,6 +326,70 @@ cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
return 0;
}
+static void
+cnxk_handle_event(void *arg, struct rte_event event)
+{
+ struct rte_eventdev *event_dev = arg;
+
+ if (event_dev->dev_ops->dev_stop_flush != NULL)
+ event_dev->dev_ops->dev_stop_flush(
+ event_dev->data->dev_id, event,
+ event_dev->data->dev_stop_flush_arg);
+}
+
+static void
+cnxk_sso_cleanup(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn, uint8_t enable)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uintptr_t hwgrp_base;
+ uint16_t i;
+ void *ws;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ ws = event_dev->data->ports[i];
+ reset_fn(dev, ws);
+ }
+
+ rte_mb();
+ ws = event_dev->data->ports[0];
+
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ /* Consume all the events through HWS0 */
+ hwgrp_base = roc_sso_hwgrp_base_get(&dev->sso, i);
+ flush_fn(ws, i, hwgrp_base, cnxk_handle_event, event_dev);
+ /* Enable/Disable SSO GGRP */
+ plt_write64(enable, hwgrp_base + SSO_LF_GGRP_QCTL);
+ }
+}
+
+int
+cnxk_sso_start(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_qos qos[dev->qos_queue_cnt];
+ int i, rc;
+
+ plt_sso_dbg();
+ for (i = 0; i < dev->qos_queue_cnt; i++) {
+ qos->hwgrp = dev->qos_parse_data[i].queue;
+ qos->iaq_prcnt = dev->qos_parse_data[i].iaq_prcnt;
+ qos->taq_prcnt = dev->qos_parse_data[i].taq_prcnt;
+ qos->xaq_prcnt = dev->qos_parse_data[i].xaq_prcnt;
+ }
+ rc = roc_sso_hwgrp_qos_config(&dev->sso, qos, dev->qos_queue_cnt,
+ dev->xae_cnt);
+ if (rc < 0) {
+ plt_sso_dbg("failed to configure HWGRP QoS rc = %d", rc);
+ return -EINVAL;
+ }
+ cnxk_sso_cleanup(event_dev, reset_fn, flush_fn, true);
+ rte_mb();
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 0a3ab71e4..f175c23bb 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -48,6 +48,10 @@ typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
uint16_t nb_link);
+typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
+typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
+typedef void (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg);
struct cnxk_sso_qos {
uint16_t queue;
@@ -198,5 +202,8 @@ int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
cnxk_sso_hws_setup_t hws_setup_fn);
int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
uint64_t *tmo_ticks);
+int cnxk_sso_start(struct rte_eventdev *event_dev,
+ cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v3 17/33] event/cnxk: add device stop and close functions
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (15 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 16/33] event/cnxk: add device start function pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 18/33] event/cnxk: add SSO selftest and dump pbhagavatula
` (17 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event device stop and close callback functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 15 +++++++++
drivers/event/cnxk/cn9k_eventdev.c | 14 +++++++++
drivers/event/cnxk/cnxk_eventdev.c | 48 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 ++++
4 files changed, 83 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index bc873eada..e35dd45b0 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -388,6 +388,19 @@ cn10k_sso_start(struct rte_eventdev *event_dev)
return rc;
}
+static void
+cn10k_sso_stop(struct rte_eventdev *event_dev)
+{
+ cnxk_sso_stop(event_dev, cn10k_sso_hws_reset,
+ cn10k_sso_hws_flush_events);
+}
+
+static int
+cn10k_sso_close(struct rte_eventdev *event_dev)
+{
+ return cnxk_sso_close(event_dev, cn10k_sso_hws_unlink);
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -402,6 +415,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.timeout_ticks = cnxk_sso_timeout_ticks,
.dev_start = cn10k_sso_start,
+ .dev_stop = cn10k_sso_stop,
+ .dev_close = cn10k_sso_close,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index a4613e410..7383b7e92 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -463,6 +463,18 @@ cn9k_sso_start(struct rte_eventdev *event_dev)
return rc;
}
+static void
+cn9k_sso_stop(struct rte_eventdev *event_dev)
+{
+ cnxk_sso_stop(event_dev, cn9k_sso_hws_reset, cn9k_sso_hws_flush_events);
+}
+
+static int
+cn9k_sso_close(struct rte_eventdev *event_dev)
+{
+ return cnxk_sso_close(event_dev, cn9k_sso_hws_unlink);
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -477,6 +489,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.timeout_ticks = cnxk_sso_timeout_ticks,
.dev_start = cn9k_sso_start,
+ .dev_stop = cn9k_sso_stop,
+ .dev_close = cn9k_sso_close,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 5feae5288..a3900315a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -390,6 +390,54 @@ cnxk_sso_start(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
return 0;
}
+void
+cnxk_sso_stop(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn)
+{
+ plt_sso_dbg();
+ cnxk_sso_cleanup(event_dev, reset_fn, flush_fn, false);
+ rte_mb();
+}
+
+int
+cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
+ uint16_t i;
+ void *ws;
+
+ if (!dev->configured)
+ return 0;
+
+ for (i = 0; i < dev->nb_event_queues; i++)
+ all_queues[i] = i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ ws = event_dev->data->ports[i];
+ unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+ rte_free(cnxk_sso_hws_get_cookie(ws));
+ event_dev->data->ports[i] = NULL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+ rte_mempool_free(dev->xaq_pool);
+ rte_memzone_free(rte_memzone_lookup(CNXK_SSO_FC_NAME));
+
+ dev->fc_iova = 0;
+ dev->fc_mem = NULL;
+ dev->xaq_pool = NULL;
+ dev->configured = false;
+ dev->is_timeout_deq = 0;
+ dev->nb_event_ports = 0;
+ dev->max_num_events = -1;
+ dev->nb_event_queues = 0;
+ dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+ dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index f175c23bb..3011af153 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -48,6 +48,8 @@ typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
uint16_t nb_link);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
+ uint16_t nb_link);
typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
typedef void (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
@@ -205,5 +207,9 @@ int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
int cnxk_sso_start(struct rte_eventdev *event_dev,
cnxk_sso_hws_reset_t reset_fn,
cnxk_sso_hws_flush_t flush_fn);
+void cnxk_sso_stop(struct rte_eventdev *event_dev,
+ cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn);
+int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v3 18/33] event/cnxk: add SSO selftest and dump
2021-04-30 13:53 ` pbhagavatula
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a selftest to verify the sanity of the SSO, and a function to
dump the internal state of the SSO.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 14 +
drivers/event/cnxk/cn10k_eventdev.c | 8 +
drivers/event/cnxk/cn9k_eventdev.c | 10 +-
drivers/event/cnxk/cnxk_eventdev.c | 8 +
drivers/event/cnxk/cnxk_eventdev.h | 5 +
drivers/event/cnxk/cnxk_eventdev_selftest.c | 1570 +++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
7 files changed, 1615 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_selftest.c
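The selftests registered below are exposed through the dpdk-test application. A minimal sketch of how they might be invoked (the binary path and EAL coremask here are assumptions for illustration, not taken from the patch):

```shell
# Illustrative invocation: run the cn9k SSO selftest through dpdk-test.
# The test name comes from REGISTER_TEST_COMMAND() in this patch; the
# binary path and -c mask are assumed and platform-dependent.
CMD='echo "eventdev_selftest_cn9k" | ./build/app/test/dpdk-test -c 0xff'
echo "$CMD"
```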
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index bcfaa53cb..843d9766b 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1036,6 +1036,18 @@ test_eventdev_selftest_dlb2(void)
return test_eventdev_selftest_impl("dlb2_event", "");
}
+static int
+test_eventdev_selftest_cn9k(void)
+{
+ return test_eventdev_selftest_impl("event_cn9k", "");
+}
+
+static int
+test_eventdev_selftest_cn10k(void)
+{
+ return test_eventdev_selftest_impl("event_cn10k", "");
+}
+
REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw);
REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
@@ -1044,3 +1056,5 @@ REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
test_eventdev_selftest_octeontx2);
REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
+REGISTER_TEST_COMMAND(eventdev_selftest_cn9k, test_eventdev_selftest_cn9k);
+REGISTER_TEST_COMMAND(eventdev_selftest_cn10k, test_eventdev_selftest_cn10k);
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index e35dd45b0..17bed4c9a 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -401,6 +401,12 @@ cn10k_sso_close(struct rte_eventdev *event_dev)
return cnxk_sso_close(event_dev, cn10k_sso_hws_unlink);
}
+static int
+cn10k_sso_selftest(void)
+{
+ return cnxk_sso_selftest(RTE_STR(event_cn10k));
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -414,9 +420,11 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .dump = cnxk_sso_dump,
.dev_start = cn10k_sso_start,
.dev_stop = cn10k_sso_stop,
.dev_close = cn10k_sso_close,
+ .dev_selftest = cn10k_sso_selftest,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 7383b7e92..e39d04fdd 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -222,7 +222,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
}
}
-static void
+void
cn9k_sso_set_rsrc(void *arg)
{
struct cnxk_sso_evdev *dev = arg;
@@ -475,6 +475,12 @@ cn9k_sso_close(struct rte_eventdev *event_dev)
return cnxk_sso_close(event_dev, cn9k_sso_hws_unlink);
}
+static int
+cn9k_sso_selftest(void)
+{
+ return cnxk_sso_selftest(RTE_STR(event_cn9k));
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -488,9 +494,11 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .dump = cnxk_sso_dump,
.dev_start = cn9k_sso_start,
.dev_stop = cn9k_sso_stop,
.dev_close = cn9k_sso_close,
+ .dev_selftest = cn9k_sso_selftest,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index a3900315a..0f084176c 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -326,6 +326,14 @@ cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
return 0;
}
+void
+cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ roc_sso_dump(&dev->sso, dev->sso.nb_hws, dev->sso.nb_hwgrp, f);
+}
+
static void
cnxk_handle_event(void *arg, struct rte_event event)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 3011af153..9af04bc3d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -211,5 +211,10 @@ void cnxk_sso_stop(struct rte_eventdev *event_dev,
cnxk_sso_hws_reset_t reset_fn,
cnxk_sso_hws_flush_t flush_fn);
int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
+int cnxk_sso_selftest(const char *dev_name);
+void cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f);
+
+/* CN9K */
+void cn9k_sso_set_rsrc(void *arg);
#endif /* __CNXK_EVENTDEV_H__ */
diff --git a/drivers/event/cnxk/cnxk_eventdev_selftest.c b/drivers/event/cnxk/cnxk_eventdev_selftest.c
new file mode 100644
index 000000000..69c15b1d0
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_selftest.c
@@ -0,0 +1,1570 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_hexdump.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_memcpy.h>
+#include <rte_per_lcore.h>
+#include <rte_random.h>
+#include <rte_test.h>
+
+#include "cnxk_eventdev.h"
+
+#define NUM_PACKETS (1024)
+#define MAX_EVENTS (1024)
+#define MAX_STAGES (255)
+
+#define CNXK_TEST_RUN(setup, teardown, test) \
+ cnxk_test_run(setup, teardown, test, #test)
+
+static int total;
+static int passed;
+static int failed;
+static int unsupported;
+
+static int evdev;
+static struct rte_mempool *eventdev_test_mempool;
+
+struct event_attr {
+ uint32_t flow_id;
+ uint8_t event_type;
+ uint8_t sub_event_type;
+ uint8_t sched_type;
+ uint8_t queue;
+ uint8_t port;
+};
+
+static uint32_t seqn_list_index;
+static int seqn_list[NUM_PACKETS];
+
+static inline void
+seqn_list_init(void)
+{
+ RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
+ memset(seqn_list, 0, sizeof(seqn_list));
+ seqn_list_index = 0;
+}
+
+static inline int
+seqn_list_update(int val)
+{
+ if (seqn_list_index >= NUM_PACKETS)
+ return -1;
+
+ seqn_list[seqn_list_index++] = val;
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ return 0;
+}
+
+static inline int
+seqn_list_check(int limit)
+{
+ int i;
+
+ for (i = 0; i < limit; i++) {
+ if (seqn_list[i] != i) {
+ plt_err("Seqn mismatch %d %d", seqn_list[i], i);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+struct test_core_param {
+ uint32_t *total_events;
+ uint64_t dequeue_tmo_ticks;
+ uint8_t port;
+ uint8_t sched_type;
+};
+
+static int
+testsuite_setup(const char *eventdev_name)
+{
+ evdev = rte_event_dev_get_dev_id(eventdev_name);
+ if (evdev < 0) {
+ plt_err("%d: Eventdev %s not found", __LINE__, eventdev_name);
+ return -1;
+ }
+ return 0;
+}
+
+static void
+testsuite_teardown(void)
+{
+ rte_event_dev_close(evdev);
+ total = 0;
+ passed = 0;
+ failed = 0;
+ unsupported = 0;
+}
+
+static inline void
+devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
+ struct rte_event_dev_info *info)
+{
+ memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
+ dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
+ dev_conf->nb_event_ports = info->max_event_ports;
+ dev_conf->nb_event_queues = info->max_event_queues;
+ dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
+ dev_conf->nb_event_port_dequeue_depth =
+ info->max_event_port_dequeue_depth;
+ dev_conf->nb_event_port_enqueue_depth =
+ info->max_event_port_enqueue_depth;
+ dev_conf->nb_events_limit = info->max_num_events;
+}
+
+enum {
+ TEST_EVENTDEV_SETUP_DEFAULT,
+ TEST_EVENTDEV_SETUP_PRIORITY,
+ TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT,
+};
+
+static inline int
+_eventdev_setup(int mode)
+{
+ const char *pool_name = "evdev_cnxk_test_pool";
+ struct rte_event_dev_config dev_conf;
+ struct rte_event_dev_info info;
+ int i, ret;
+
+ /* Create and destroy a pool for each test case to make it standalone */
+ eventdev_test_mempool = rte_pktmbuf_pool_create(
+ pool_name, MAX_EVENTS, 0, 0, 512, rte_socket_id());
+ if (!eventdev_test_mempool) {
+ plt_err("ERROR creating mempool");
+ return -1;
+ }
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+ devconf_set_default_sane_values(&dev_conf, &info);
+ if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
+ dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
+
+ ret = rte_event_dev_configure(evdev, &dev_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+ uint32_t queue_count;
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
+ if (queue_count > 8)
+ queue_count = 8;
+
+ /* Configure event queues(0 to n) with
+ * RTE_EVENT_DEV_PRIORITY_HIGHEST to
+ * RTE_EVENT_DEV_PRIORITY_LOWEST
+ */
+ uint8_t step =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) / queue_count;
+ for (i = 0; i < (int)queue_count; i++) {
+ struct rte_event_queue_conf queue_conf;
+
+ ret = rte_event_queue_default_conf_get(evdev, i,
+ &queue_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
+ i);
+ queue_conf.priority = i * step;
+ ret = rte_event_queue_setup(evdev, i, &queue_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+ i);
+ }
+
+ } else {
+ /* Configure event queues with default priority */
+ for (i = 0; i < (int)queue_count; i++) {
+ ret = rte_event_queue_setup(evdev, i, NULL);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+ i);
+ }
+ }
+ /* Configure event ports */
+ uint32_t port_count;
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &port_count),
+ "Port count get failed");
+ for (i = 0; i < (int)port_count; i++) {
+ ret = rte_event_port_setup(evdev, i, NULL);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
+ ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
+ i);
+ }
+
+ ret = rte_event_dev_start(evdev);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");
+
+ return 0;
+}
+
+static inline int
+eventdev_setup(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT);
+}
+
+static inline int
+eventdev_setup_priority(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_PRIORITY);
+}
+
+static inline int
+eventdev_setup_dequeue_timeout(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT);
+}
+
+static inline void
+eventdev_teardown(void)
+{
+ rte_event_dev_stop(evdev);
+ rte_mempool_free(eventdev_test_mempool);
+}
+
+static inline void
+update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev,
+ uint32_t flow_id, uint8_t event_type,
+ uint8_t sub_event_type, uint8_t sched_type,
+ uint8_t queue, uint8_t port)
+{
+ struct event_attr *attr;
+
+ /* Store the event attributes in mbuf for future reference */
+ attr = rte_pktmbuf_mtod(m, struct event_attr *);
+ attr->flow_id = flow_id;
+ attr->event_type = event_type;
+ attr->sub_event_type = sub_event_type;
+ attr->sched_type = sched_type;
+ attr->queue = queue;
+ attr->port = port;
+
+ ev->flow_id = flow_id;
+ ev->sub_event_type = sub_event_type;
+ ev->event_type = event_type;
+ /* Inject the new event */
+ ev->op = RTE_EVENT_OP_NEW;
+ ev->sched_type = sched_type;
+ ev->queue_id = queue;
+ ev->mbuf = m;
+}
+
+static inline int
+inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
+ uint8_t sched_type, uint8_t queue, uint8_t port,
+ unsigned int events)
+{
+ struct rte_mbuf *m;
+ unsigned int i;
+
+ for (i = 0; i < events; i++) {
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+ *rte_event_pmd_selftest_seqn(m) = i;
+ update_event_and_validation_attr(m, &ev, flow_id, event_type,
+ sub_event_type, sched_type,
+ queue, port);
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ return 0;
+}
+
+static inline int
+check_excess_events(uint8_t port)
+{
+ uint16_t valid_event;
+ struct rte_event ev;
+ int i;
+
+ /* Check for excess events; try a few times then exit */
+ for (i = 0; i < 32; i++) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+
+ RTE_TEST_ASSERT_SUCCESS(valid_event,
+ "Unexpected valid event=%d",
+ *rte_event_pmd_selftest_seqn(ev.mbuf));
+ }
+ return 0;
+}
+
+static inline int
+generate_random_events(const unsigned int total_events)
+{
+ struct rte_event_dev_info info;
+ uint32_t queue_count;
+ unsigned int i;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+ for (i = 0; i < total_events; i++) {
+ ret = inject_events(
+ rte_rand() % info.max_event_queue_flows /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ rte_rand() % queue_count /* queue */, 0 /* port */,
+ 1 /* events */);
+ if (ret)
+ return -1;
+ }
+ return ret;
+}
+
+static inline int
+validate_event(struct rte_event *ev)
+{
+ struct event_attr *attr;
+
+ attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
+ RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
+ "flow_id mismatch enq=%d deq =%d", attr->flow_id,
+ ev->flow_id);
+ RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
+ "event_type mismatch enq=%d deq =%d",
+ attr->event_type, ev->event_type);
+ RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
+ "sub_event_type mismatch enq=%d deq =%d",
+ attr->sub_event_type, ev->sub_event_type);
+ RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
+ "sched_type mismatch enq=%d deq =%d",
+ attr->sched_type, ev->sched_type);
+ RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
+ "queue mismatch enq=%d deq =%d", attr->queue,
+ ev->queue_id);
+ return 0;
+}
+
+typedef int (*validate_event_cb)(uint32_t index, uint8_t port,
+ struct rte_event *ev);
+
+static inline int
+consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
+{
+ uint32_t events = 0, forward_progress_cnt = 0, index = 0;
+ uint16_t valid_event;
+ struct rte_event ev;
+ int ret;
+
+ while (1) {
+ if (++forward_progress_cnt > UINT16_MAX) {
+ plt_err("Detected deadlock");
+ return -1;
+ }
+
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ forward_progress_cnt = 0;
+ ret = validate_event(&ev);
+ if (ret)
+ return -1;
+
+ if (fn != NULL) {
+ ret = fn(index, port, &ev);
+ RTE_TEST_ASSERT_SUCCESS(
+ ret, "Failed to validate test specific event");
+ }
+
+ ++index;
+
+ rte_pktmbuf_free(ev.mbuf);
+ if (++events >= total_events)
+ break;
+ }
+
+ return check_excess_events(port);
+}
+
+static int
+validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
+{
+ RTE_SET_USED(port);
+ RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf),
+ "index=%d != seqn=%d", index,
+ *rte_event_pmd_selftest_seqn(ev->mbuf));
+ return 0;
+}
+
+static inline int
+test_simple_enqdeq(uint8_t sched_type)
+{
+ int ret;
+
+ ret = inject_events(0 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type */, sched_type, 0 /* queue */,
+ 0 /* port */, MAX_EVENTS);
+ if (ret)
+ return -1;
+
+ return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
+}
+
+static int
+test_simple_enqdeq_ordered(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_simple_enqdeq_atomic(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_simple_enqdeq_parallel(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL);
+}
+
+/*
+ * Generate a prescribed number of events and spread them across available
+ * queues. On dequeue, verify the enqueued event attributes using a single
+ * event port (port 0).
+ */
+static int
+test_multi_queue_enq_single_port_deq(void)
+{
+ int ret;
+
+ ret = generate_random_events(MAX_EVENTS);
+ if (ret)
+ return -1;
+
+ return consume_events(0 /* port */, MAX_EVENTS, NULL);
+}
+
+/*
+ * Inject events 0..MAX_EVENTS-1 across queues 0..queue_count-1 using a
+ * modulus operation.
+ *
+ * For example, injecting 32 events over queues 0..7:
+ * enqueue events 0, 8, 16, 24 in queue 0
+ * enqueue events 1, 9, 17, 25 in queue 1
+ * ..
+ * ..
+ * enqueue events 7, 15, 23, 31 in queue 7
+ *
+ * On dequeue, validate that the events come in 0,8,16,24,1,9,17,25,..,7,15,23,31
+ * order, from queue 0 (highest priority) to queue 7 (lowest priority).
+ */
+static int
+validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
+{
+ uint32_t queue_count;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count > 8)
+ queue_count = 8;
+ uint32_t range = MAX_EVENTS / queue_count;
+ uint32_t expected_val = (index % range) * queue_count;
+
+ expected_val += ev->queue_id;
+ RTE_SET_USED(port);
+ RTE_TEST_ASSERT_EQUAL(
+ *rte_event_pmd_selftest_seqn(ev->mbuf), expected_val,
+ "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
+ *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val,
+ range, queue_count, MAX_EVENTS);
+ return 0;
+}
+
+static int
+test_multi_queue_priority(void)
+{
+ int i, max_evts_roundoff;
+ /* See validate_queue_priority() comments for priority validate logic */
+ uint32_t queue_count;
+ struct rte_mbuf *m;
+ uint8_t queue;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count > 8)
+ queue_count = 8;
+ max_evts_roundoff = MAX_EVENTS / queue_count;
+ max_evts_roundoff *= queue_count;
+
+ for (i = 0; i < max_evts_roundoff; i++) {
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+ *rte_event_pmd_selftest_seqn(m) = i;
+ queue = i % queue_count;
+ update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,
+ 0, RTE_SCHED_TYPE_PARALLEL,
+ queue, 0);
+ rte_event_enqueue_burst(evdev, 0, &ev, 1);
+ }
+
+ return consume_events(0, max_evts_roundoff, validate_queue_priority);
+}
+
+static int
+worker_multi_port_fn(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint32_t *total_events = param->total_events;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+ int ret;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ ret = validate_event(&ev);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ }
+
+ return 0;
+}
+
+static inline int
+wait_workers_to_join(const uint32_t *count)
+{
+ uint64_t cycles, print_cycles;
+
+ cycles = rte_get_timer_cycles();
+ print_cycles = cycles;
+ while (__atomic_load_n(count, __ATOMIC_RELAXED)) {
+ uint64_t new_cycles = rte_get_timer_cycles();
+
+ if (new_cycles - print_cycles > rte_get_timer_hz()) {
+ plt_info("Events %d",
+ __atomic_load_n(count, __ATOMIC_RELAXED));
+ print_cycles = new_cycles;
+ }
+ if (new_cycles - cycles > rte_get_timer_hz() * 10) {
+ plt_err("No schedules for 10 seconds, deadlock (%d)",
+ __atomic_load_n(count, __ATOMIC_RELAXED));
+ rte_event_dev_dump(evdev, stdout);
+ cycles = new_cycles;
+ return -1;
+ }
+ }
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+static inline int
+launch_workers_and_wait(int (*main_thread)(void *),
+ int (*worker_thread)(void *), uint32_t total_events,
+ uint8_t nb_workers, uint8_t sched_type)
+{
+ uint32_t atomic_total_events;
+ struct test_core_param *param;
+ uint64_t dequeue_tmo_ticks;
+ uint8_t port = 0;
+ int w_lcore;
+ int ret;
+
+ if (!nb_workers)
+ return 0;
+
+ __atomic_store_n(&atomic_total_events, total_events, __ATOMIC_RELAXED);
+ seqn_list_init();
+
+ param = malloc(sizeof(struct test_core_param) * nb_workers);
+ if (!param)
+ return -1;
+
+ ret = rte_event_dequeue_timeout_ticks(
+ evdev, rte_rand() % 10000000 /* 10ms */, &dequeue_tmo_ticks);
+ if (ret) {
+ free(param);
+ return -1;
+ }
+
+ param[0].total_events = &atomic_total_events;
+ param[0].sched_type = sched_type;
+ param[0].port = 0;
+ param[0].dequeue_tmo_ticks = dequeue_tmo_ticks;
+ rte_wmb();
+
+ w_lcore = rte_get_next_lcore(
+ /* start core */ -1,
+ /* skip main */ 1,
+ /* wrap */ 0);
+ rte_eal_remote_launch(main_thread, &param[0], w_lcore);
+
+ for (port = 1; port < nb_workers; port++) {
+ param[port].total_events = &atomic_total_events;
+ param[port].sched_type = sched_type;
+ param[port].port = port;
+ param[port].dequeue_tmo_ticks = dequeue_tmo_ticks;
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ w_lcore = rte_get_next_lcore(w_lcore, 1, 0);
+ rte_eal_remote_launch(worker_thread, &param[port], w_lcore);
+ }
+
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ ret = wait_workers_to_join(&atomic_total_events);
+ free(param);
+
+ return ret;
+}
+
+/*
+ * Generate a prescribed number of events and spread them across available
+ * queues. Dequeue the events through multiple ports and verify the enqueued
+ * event attributes
+ */
+static int
+test_multi_queue_enq_multi_port_deq(void)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t nr_ports;
+ int ret;
+
+ ret = generate_random_events(total_events);
+ if (ret)
+ return -1;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ return launch_workers_and_wait(worker_multi_port_fn,
+ worker_multi_port_fn, total_events,
+ nr_ports, 0xff /* invalid */);
+}
+
+static void
+flush(uint8_t dev_id, struct rte_event event, void *arg)
+{
+ unsigned int *count = arg;
+
+ RTE_SET_USED(dev_id);
+ if (event.event_type == RTE_EVENT_TYPE_CPU)
+ *count = *count + 1;
+}
+
+static int
+test_dev_stop_flush(void)
+{
+ unsigned int total_events = MAX_EVENTS, count = 0;
+ int ret;
+
+ ret = generate_random_events(total_events);
+ if (ret)
+ return -1;
+
+ ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count);
+ if (ret)
+ return -2;
+ rte_event_dev_stop(evdev);
+ ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL);
+ if (ret)
+ return -3;
+ RTE_TEST_ASSERT_EQUAL(total_events, count,
+ "count mismatch total_events=%d count=%d",
+ total_events, count);
+
+ return 0;
+}
+
+static int
+validate_queue_to_port_single_link(uint32_t index, uint8_t port,
+ struct rte_event *ev)
+{
+ RTE_SET_USED(index);
+ RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
+ "queue mismatch enq=%d deq =%d", port,
+ ev->queue_id);
+
+ return 0;
+}
+
+/*
+ * Link queue x to port x and check correctness of link by checking
+ * queue_id == x on dequeue on the specific port x
+ */
+static int
+test_queue_to_port_single_link(void)
+{
+ int i, nr_links, ret;
+ uint32_t queue_count;
+ uint32_t port_count;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &port_count),
+ "Port count get failed");
+
+ /* Unlink all connections that were created in eventdev_setup */
+ for (i = 0; i < (int)port_count; i++) {
+ ret = rte_event_port_unlink(evdev, i, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+ i);
+ }
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ nr_links = RTE_MIN(port_count, queue_count);
+ const unsigned int total_events = MAX_EVENTS / nr_links;
+
+ /* Link queue x to port x and inject events to queue x through port x */
+ for (i = 0; i < nr_links; i++) {
+ uint8_t queue = (uint8_t)i;
+
+ ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
+ RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
+
+ ret = inject_events(0x100 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ queue /* queue */, i /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+ }
+
+ /* Verify the events generated from correct queue */
+ for (i = 0; i < nr_links; i++) {
+ ret = consume_events(i /* port */, total_events,
+ validate_queue_to_port_single_link);
+ if (ret)
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
+ struct rte_event *ev)
+{
+ RTE_SET_USED(index);
+ RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
+ "queue mismatch enq=%d deq =%d", port,
+ ev->queue_id);
+
+ return 0;
+}
+
+/*
+ * Link all even-numbered queues to port 0 and all odd-numbered queues to
+ * port 1, and verify the links on dequeue
+ */
+static int
+test_queue_to_port_multi_link(void)
+{
+ int ret, port0_events = 0, port1_events = 0;
+ uint32_t nr_queues = 0;
+ uint32_t nr_ports = 0;
+ uint8_t queue, port;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &nr_queues),
+ "Queue count get failed");
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+
+ if (nr_ports < 2) {
+ plt_err("Not enough ports to test ports=%d", nr_ports);
+ return 0;
+ }
+
+ /* Unlink all connections that were created in eventdev_setup */
+ for (port = 0; port < nr_ports; port++) {
+ ret = rte_event_port_unlink(evdev, port, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+ port);
+ }
+
+ unsigned int total_events = MAX_EVENTS / nr_queues;
+ if (!total_events) {
+ nr_queues = MAX_EVENTS;
+ total_events = MAX_EVENTS / nr_queues;
+ }
+
+ /* Link all even-numbered queues to port 0 and odd-numbered queues to port 1 */
+ for (queue = 0; queue < nr_queues; queue++) {
+ port = queue & 0x1;
+ ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
+ RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
+ queue, port);
+
+ ret = inject_events(0x100 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ queue /* queue */, port /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+
+ if (port == 0)
+ port0_events += total_events;
+ else
+ port1_events += total_events;
+ }
+
+ ret = consume_events(0 /* port */, port0_events,
+ validate_queue_to_port_multi_link);
+ if (ret)
+ return -1;
+ ret = consume_events(1 /* port */, port1_events,
+ validate_queue_to_port_multi_link);
+ if (ret)
+ return -1;
+
+ return 0;
+}
+
+static int
+worker_flow_based_pipeline(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
+ uint32_t *total_events = param->total_events;
+ uint8_t new_sched_type = param->sched_type;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
+ dequeue_tmo_ticks);
+ if (!valid_event)
+ continue;
+
+ /* Events from stage 0 */
+ if (ev.sub_event_type == 0) {
+ /* Move to atomic flow to maintain the ordering */
+ ev.flow_id = 0x2;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sub_event_type = 1; /* stage 1 */
+ ev.sched_type = new_sched_type;
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ } else if (ev.sub_event_type == 1) { /* Events from stage 1*/
+ uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
+
+ if (seqn_list_update(seqn) == 0) {
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1,
+ __ATOMIC_RELAXED);
+ } else {
+ plt_err("Failed to update seqn_list");
+ return -1;
+ }
+ } else {
+ plt_err("Invalid ev.sub_event_type = %d",
+ ev.sub_event_type);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int
+test_multiport_flow_sched_type_test(uint8_t in_sched_type,
+ uint8_t out_sched_type)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject events with sequence numbers 0 to total_events - 1 */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */, in_sched_type, 0 /* queue */,
+ 0 /* port */, total_events /* events */);
+ if (ret)
+ return -1;
+
+ rte_mb();
+ ret = launch_workers_and_wait(worker_flow_based_pipeline,
+ worker_flow_based_pipeline, total_events,
+ nr_ports, out_sched_type);
+ if (ret)
+ return -1;
+
+ if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
+ out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
+ /* Check the events order maintained or not */
+ return seqn_list_check(total_events);
+ }
+
+ return 0;
+}
+
+/* Multi port ordered to atomic transaction */
+static int
+test_multi_port_flow_ordered_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_ordered_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_ordered_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_flow_atomic_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_atomic_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_atomic_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_flow_parallel_to_atomic(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_parallel_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_parallel_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+worker_group_based_pipeline(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
+ uint32_t *total_events = param->total_events;
+ uint8_t new_sched_type = param->sched_type;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
+ dequeue_tmo_ticks);
+ if (!valid_event)
+ continue;
+
+ /* Events from stage 0 (group 0) */
+ if (ev.queue_id == 0) {
+ /* Move to atomic flow to maintain the ordering */
+ ev.flow_id = 0x2;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sched_type = new_sched_type;
+ ev.queue_id = 1; /* Stage 1 */
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ } else if (ev.queue_id == 1) { /* Events from stage 1 (group 1) */
+ uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
+
+ if (seqn_list_update(seqn) == 0) {
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1,
+ __ATOMIC_RELAXED);
+ } else {
+ plt_err("Failed to update seqn_list");
+ return -1;
+ }
+ } else {
+ plt_err("Invalid ev.queue_id = %d", ev.queue_id);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int
+test_multiport_queue_sched_type_test(uint8_t in_sched_type,
+ uint8_t out_sched_type)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t queue_count;
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count < 2 || !nr_ports) {
+ plt_err("Not enough queues=%d ports=%d or workers=%d",
+ queue_count, nr_ports, rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject events with sequence numbers 0..total_events */
+ ret = inject_events(
+ 0x1 /* flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */, in_sched_type, 0 /* queue */,
+ 0 /* port */, total_events /* events */);
+ if (ret)
+ return -1;
+
+ ret = launch_workers_and_wait(worker_group_based_pipeline,
+ worker_group_based_pipeline, total_events,
+ nr_ports, out_sched_type);
+ if (ret)
+ return -1;
+
+ if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
+ out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
+ /* Check whether event order was maintained */
+ return seqn_list_check(total_events);
+ }
+
+ return 0;
+}
+
+static int
+test_multi_port_queue_ordered_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_ordered_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_ordered_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_queue_atomic_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_atomic_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_atomic_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_queue_parallel_to_atomic(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_parallel_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_parallel_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+worker_flow_based_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint32_t *total_events = param->total_events;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.sub_event_type == MAX_STAGES) { /* last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sub_event_type++;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+static int
+launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
+{
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject events with sequence numbers 0..MAX_EVENTS */
+ ret = inject_events(
+ 0x1 /* flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1) /* sched_type */,
+ 0 /* queue */, 0 /* port */, MAX_EVENTS /* events */);
+ if (ret)
+ return -1;
+
+ return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
+ 0xff /* invalid */);
+}
+
+/* Flow based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_flow_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_flow_based_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ uint32_t queue_count;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ uint8_t nr_queues = queue_count;
+ uint32_t *total_events = param->total_events;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.queue_id == nr_queues - 1) { /* last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id++;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+/* Queue based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_queue_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_queue_based_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ uint32_t queue_count;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ uint8_t nr_queues = queue_count;
+ uint32_t *total_events = param->total_events;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.queue_id == nr_queues - 1) { /* Last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id++;
+ ev.sub_event_type = rte_rand() % 256;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+/* Queue and flow based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_mixed_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_mixed_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_ordered_flow_producer(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ struct rte_mbuf *m;
+ int counter = 0;
+
+ while (counter < NUM_PACKETS) {
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ if (m == NULL)
+ continue;
+
+ *rte_event_pmd_selftest_seqn(m) = counter++;
+
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ ev.flow_id = 0x1; /* Generate a fat flow */
+ ev.sub_event_type = 0;
+ /* Inject the new event */
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sched_type = RTE_SCHED_TYPE_ORDERED;
+ ev.queue_id = 0;
+ ev.mbuf = m;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+
+ return 0;
+}
+
+static inline int
+test_producer_consumer_ingress_order_test(int (*fn)(void *))
+{
+ uint32_t nr_ports;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (rte_lcore_count() < 3 || nr_ports < 2) {
+ plt_err("Not enough cores or ports for test.");
+ return 0;
+ }
+
+ launch_workers_and_wait(worker_ordered_flow_producer, fn, NUM_PACKETS,
+ nr_ports, RTE_SCHED_TYPE_ATOMIC);
+ /* Check whether event order was maintained */
+ return seqn_list_check(NUM_PACKETS);
+}
+
+/* Flow based producer consumer ingress order test */
+static int
+test_flow_producer_consumer_ingress_order_test(void)
+{
+ return test_producer_consumer_ingress_order_test(
+ worker_flow_based_pipeline);
+}
+
+/* Queue based producer consumer ingress order test */
+static int
+test_queue_producer_consumer_ingress_order_test(void)
+{
+ return test_producer_consumer_ingress_order_test(
+ worker_group_based_pipeline);
+}
+
+static void
+cnxk_test_run(int (*setup)(void), void (*tdown)(void), int (*test)(void),
+ const char *name)
+{
+ if (setup() < 0) {
+ printf("Error setting up test %s\n", name);
+ unsupported++;
+ } else {
+ if (test() < 0) {
+ failed++;
+ printf("+ TestCase [%2d] : %s failed\n", total, name);
+ } else {
+ passed++;
+ printf("+ TestCase [%2d] : %s succeeded\n", total,
+ name);
+ }
+ }
+
+ total++;
+ tdown();
+}
+
+static int
+cnxk_sso_testsuite_run(const char *dev_name)
+{
+ int rc;
+
+ testsuite_setup(dev_name);
+
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_queue_enq_single_port_deq);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown, test_dev_stop_flush);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_queue_enq_multi_port_deq);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_to_port_single_link);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_to_port_multi_link);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_mixed_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_flow_producer_consumer_ingress_order_test);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_producer_consumer_ingress_order_test);
+ CNXK_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
+ test_multi_queue_priority);
+ CNXK_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+ test_multi_port_flow_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+ test_multi_port_queue_ordered_to_atomic);
+ printf("Total tests : %d\n", total);
+ printf("Passed : %d\n", passed);
+ printf("Failed : %d\n", failed);
+ printf("Not supported : %d\n", unsupported);
+
+ rc = failed;
+ testsuite_teardown();
+
+ return rc;
+}
+
+int
+cnxk_sso_selftest(const char *dev_name)
+{
+ const struct rte_memzone *mz;
+ struct cnxk_sso_evdev *dev;
+ int rc = -1;
+
+ mz = rte_memzone_lookup(CNXK_SSO_MZ_NAME);
+ if (mz == NULL)
+ return rc;
+
+ dev = (void *)*((uint64_t *)mz->addr);
+ if (roc_model_runtime_is_cn9k()) {
+ /* Verify single ws mode. */
+ printf("Verifying CN9K Single workslot mode\n");
+ dev->dual_ws = 0;
+ cn9k_sso_set_rsrc(dev);
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ /* Verify dual ws mode. */
+ printf("Verifying CN9K Dual workslot mode\n");
+ dev->dual_ws = 1;
+ cn9k_sso_set_rsrc(dev);
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ }
+
+ if (roc_model_runtime_is_cn10k()) {
+ printf("Verifying CN10K workslot getwork mode none\n");
+ dev->gw_mode = CN10K_GW_MODE_NONE;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ printf("Verifying CN10K workslot getwork mode prefetch\n");
+ dev->gw_mode = CN10K_GW_MODE_PREF;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ printf("Verifying CN10K workslot getwork mode smart prefetch\n");
+ dev->gw_mode = CN10K_GW_MODE_PREF_WFE;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ }
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 3617d707b..a81ae134d 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -13,6 +13,7 @@ sources = files('cn10k_worker.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
+ 'cnxk_eventdev_selftest.c'
)
deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v3 19/33] event/cnxk: add event port and queue xstats
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (17 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 18/33] event/cnxk: add SSO selftest and dump pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 20/33] event/cnxk: support event timer pbhagavatula
` (15 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
Satha Rao, Pavan Nikhilesh, Shijith Thotton
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add support for retrieving statistics from SSO HWS and HWGRP.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/common/cnxk/roc_sso.c | 63 +++++
drivers/common/cnxk/roc_sso.h | 19 ++
drivers/event/cnxk/cnxk_eventdev.h | 15 ++
drivers/event/cnxk/cnxk_eventdev_stats.c | 289 +++++++++++++++++++++++
drivers/event/cnxk/meson.build | 3 +-
5 files changed, 388 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 80d032039..1ccf2626b 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -279,6 +279,69 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
return nb_hwgrp;
}
+int
+roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
+ struct roc_sso_hws_stats *stats)
+{
+ struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+ struct sso_hws_stats *req_rsp;
+ int rc;
+
+ req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats(
+ dev->mbox);
+ if (req_rsp == NULL) {
+ rc = mbox_process(dev->mbox);
+ if (rc < 0)
+ return rc;
+ req_rsp = (struct sso_hws_stats *)
+ mbox_alloc_msg_sso_hws_get_stats(dev->mbox);
+ if (req_rsp == NULL)
+ return -ENOSPC;
+ }
+ req_rsp->hws = hws;
+ rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
+ if (rc)
+ return rc;
+
+ stats->arbitration = req_rsp->arbitration;
+ return 0;
+}
+
+int
+roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
+ struct roc_sso_hwgrp_stats *stats)
+{
+ struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+ struct sso_grp_stats *req_rsp;
+ int rc;
+
+ req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats(
+ dev->mbox);
+ if (req_rsp == NULL) {
+ rc = mbox_process(dev->mbox);
+ if (rc < 0)
+ return rc;
+ req_rsp = (struct sso_grp_stats *)
+ mbox_alloc_msg_sso_grp_get_stats(dev->mbox);
+ if (req_rsp == NULL)
+ return -ENOSPC;
+ }
+ req_rsp->grp = hwgrp;
+ rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
+ if (rc)
+ return rc;
+
+ stats->aw_status = req_rsp->aw_status;
+ stats->dq_pc = req_rsp->dq_pc;
+ stats->ds_pc = req_rsp->ds_pc;
+ stats->ext_pc = req_rsp->ext_pc;
+ stats->page_cnt = req_rsp->page_cnt;
+ stats->ts_pc = req_rsp->ts_pc;
+ stats->wa_pc = req_rsp->wa_pc;
+ stats->ws_pc = req_rsp->ws_pc;
+ return 0;
+}
+
int
roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso, uint8_t hws,
uint16_t hwgrp)
diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
index f85799ba8..c07ff50de 100644
--- a/drivers/common/cnxk/roc_sso.h
+++ b/drivers/common/cnxk/roc_sso.h
@@ -12,6 +12,21 @@ struct roc_sso_hwgrp_qos {
uint8_t taq_prcnt;
};
+struct roc_sso_hws_stats {
+ uint64_t arbitration;
+};
+
+struct roc_sso_hwgrp_stats {
+ uint64_t ws_pc;
+ uint64_t ext_pc;
+ uint64_t wa_pc;
+ uint64_t ts_pc;
+ uint64_t ds_pc;
+ uint64_t dq_pc;
+ uint64_t aw_status;
+ uint64_t page_cnt;
+};
+
struct roc_sso {
struct plt_pci_device *pci_dev;
/* Public data. */
@@ -61,5 +76,9 @@ uintptr_t __roc_api roc_sso_hwgrp_base_get(struct roc_sso *roc_sso,
/* Debug */
void __roc_api roc_sso_dump(struct roc_sso *roc_sso, uint8_t nb_hws,
uint16_t hwgrp, FILE *f);
+int roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
+ struct roc_sso_hwgrp_stats *stats);
+int roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
+ struct roc_sso_hws_stats *stats);
#endif /* _ROC_SSOW_H_ */
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 9af04bc3d..abe36f21f 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -214,6 +214,21 @@ int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
int cnxk_sso_selftest(const char *dev_name);
void cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f);
+/* Stats API. */
+int cnxk_sso_xstats_get_names(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id,
+ struct rte_event_dev_xstats_name *xstats_names,
+ unsigned int *ids, unsigned int size);
+int cnxk_sso_xstats_get(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id, const unsigned int ids[],
+ uint64_t values[], unsigned int n);
+int cnxk_sso_xstats_reset(struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ int16_t queue_port_id, const uint32_t ids[],
+ uint32_t n);
+
/* CN9K */
void cn9k_sso_set_rsrc(void *arg);
diff --git a/drivers/event/cnxk/cnxk_eventdev_stats.c b/drivers/event/cnxk/cnxk_eventdev_stats.c
new file mode 100644
index 000000000..7abe1c9c7
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_stats.c
@@ -0,0 +1,289 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+struct cnxk_sso_xstats_name {
+ const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
+ const size_t offset;
+ const uint64_t mask;
+ const uint8_t shift;
+ uint64_t reset_snap[CNXK_SSO_MAX_HWGRP];
+};
+
+static struct cnxk_sso_xstats_name sso_hws_xstats[] = {
+ {
+ "last_grp_serviced",
+ offsetof(struct roc_sso_hws_stats, arbitration),
+ 0x3FF,
+ 0,
+ {0},
+ },
+ {
+ "affinity_arbitration_credits",
+ offsetof(struct roc_sso_hws_stats, arbitration),
+ 0xF,
+ 16,
+ {0},
+ },
+};
+
+static struct cnxk_sso_xstats_name sso_hwgrp_xstats[] = {
+ {
+ "wrk_sched",
+ offsetof(struct roc_sso_hwgrp_stats, ws_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "xaq_dram",
+ offsetof(struct roc_sso_hwgrp_stats, ext_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "add_wrk",
+ offsetof(struct roc_sso_hwgrp_stats, wa_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "tag_switch_req",
+ offsetof(struct roc_sso_hwgrp_stats, ts_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "desched_req",
+ offsetof(struct roc_sso_hwgrp_stats, ds_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "desched_wrk",
+ offsetof(struct roc_sso_hwgrp_stats, dq_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "xaq_cached",
+ offsetof(struct roc_sso_hwgrp_stats, aw_status),
+ 0x3,
+ 0,
+ {0},
+ },
+ {
+ "work_inflight",
+ offsetof(struct roc_sso_hwgrp_stats, aw_status),
+ 0x3F,
+ 16,
+ {0},
+ },
+ {
+ "inuse_pages",
+ offsetof(struct roc_sso_hwgrp_stats, page_cnt),
+ 0xFFFFFFFF,
+ 0,
+ {0},
+ },
+};
+
+#define CNXK_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats)
+#define CNXK_SSO_NUM_GRP_XSTATS RTE_DIM(sso_hwgrp_xstats)
+
+#define CNXK_SSO_NUM_XSTATS (CNXK_SSO_NUM_HWS_XSTATS + CNXK_SSO_NUM_GRP_XSTATS)
+
+int
+cnxk_sso_xstats_get(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
+ const unsigned int ids[], uint64_t values[], unsigned int n)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_stats hwgrp_stats;
+ struct cnxk_sso_xstats_name *xstats;
+ struct cnxk_sso_xstats_name *xstat;
+ struct roc_sso_hws_stats hws_stats;
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int i;
+ uint64_t value;
+ void *rsp;
+ int rc;
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ return 0;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hws_xstats;
+
+ rc = roc_sso_hws_stats_get(&dev->sso, queue_port_id,
+ &hws_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hws_stats;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
+ start_offset = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hwgrp_xstats;
+
+ rc = roc_sso_hwgrp_stats_get(&dev->sso, queue_port_id,
+ &hwgrp_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hwgrp_stats;
+
+ break;
+ default:
+ plt_err("Invalid mode received");
+ goto invalid_value;
+ }
+
+ for (i = 0; i < n && i < xstats_mode_count; i++) {
+ xstat = &xstats[ids[i] - start_offset];
+ value = *(uint64_t *)((char *)rsp + xstat->offset);
+ value = (value >> xstat->shift) & xstat->mask;
+
+ values[i] = value;
+ values[i] -= xstat->reset_snap[queue_port_id];
+ }
+
+ return i;
+invalid_value:
+ return -EINVAL;
+}
+
+int
+cnxk_sso_xstats_reset(struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ int16_t queue_port_id, const uint32_t ids[], uint32_t n)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_stats hwgrp_stats;
+ struct cnxk_sso_xstats_name *xstats;
+ struct cnxk_sso_xstats_name *xstat;
+ struct roc_sso_hws_stats hws_stats;
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int i;
+ uint64_t value;
+ void *rsp;
+ int rc;
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ return 0;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hws_xstats;
+ rc = roc_sso_hws_stats_get(&dev->sso, queue_port_id,
+ &hws_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hws_stats;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
+ start_offset = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hwgrp_xstats;
+
+ rc = roc_sso_hwgrp_stats_get(&dev->sso, queue_port_id,
+ &hwgrp_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hwgrp_stats;
+ break;
+ default:
+ plt_err("Invalid mode received");
+ goto invalid_value;
+ }
+
+ for (i = 0; i < n && i < xstats_mode_count; i++) {
+ xstat = &xstats[ids[i] - start_offset];
+ value = *(uint64_t *)((char *)rsp + xstat->offset);
+ value = (value >> xstat->shift) & xstat->mask;
+
+ xstat->reset_snap[queue_port_id] = value;
+ }
+ return i;
+invalid_value:
+ return -EINVAL;
+}
+
+int
+cnxk_sso_xstats_get_names(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id,
+ struct rte_event_dev_xstats_name *xstats_names,
+ unsigned int *ids, unsigned int size)
+{
+ struct rte_event_dev_xstats_name xstats_names_copy[CNXK_SSO_NUM_XSTATS];
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int xidx = 0;
+ unsigned int i;
+
+ for (i = 0; i < CNXK_SSO_NUM_HWS_XSTATS; i++) {
+ snprintf(xstats_names_copy[i].name,
+ sizeof(xstats_names_copy[i].name), "%s",
+ sso_hws_xstats[i].name);
+ }
+
+ for (; i < CNXK_SSO_NUM_XSTATS; i++) {
+ snprintf(xstats_names_copy[i].name,
+ sizeof(xstats_names_copy[i].name), "%s",
+ sso_hwgrp_xstats[i - CNXK_SSO_NUM_HWS_XSTATS].name);
+ }
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ break;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ break;
+ xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ break;
+ xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
+ start_offset = CNXK_SSO_NUM_HWS_XSTATS;
+ break;
+ default:
+ plt_err("Invalid mode received");
+ return -EINVAL;
+ }
+
+ if (xstats_mode_count > size || !ids || !xstats_names)
+ return xstats_mode_count;
+
+ for (i = 0; i < xstats_mode_count; i++) {
+ xidx = i + start_offset;
+ strncpy(xstats_names[i].name, xstats_names_copy[xidx].name,
+ sizeof(xstats_names[i].name));
+ ids[i] = xidx;
+ }
+
+ return i;
+}
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index a81ae134d..3830c7236 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -13,7 +13,8 @@ sources = files('cn10k_worker.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
- 'cnxk_eventdev_selftest.c'
+ 'cnxk_eventdev_selftest.c',
+ 'cnxk_eventdev_stats.c',
)
deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v3 20/33] event/cnxk: support event timer
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (18 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 19/33] event/cnxk: add event port and queue xstats pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 21/33] event/cnxk: add timer adapter capabilities pbhagavatula
` (14 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter (a.k.a. TIM) initialization on SSO probe.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 6 ++++
drivers/event/cnxk/cnxk_eventdev.c | 3 ++
drivers/event/cnxk/cnxk_eventdev.h | 2 ++
drivers/event/cnxk/cnxk_tim_evdev.c | 47 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 44 +++++++++++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
6 files changed, 103 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index f48452982..e6f81f8b1 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -35,6 +35,10 @@ Features of the OCTEON cnxk SSO PMD are:
- Open system with configurable amount of outstanding events limited only by
DRAM
- HW accelerated dequeue timeout support to enable power management
+- HW-managed event timer support through TIM, with high precision and a
+ time granularity of 2.5 us on CN9K and 1 us on CN10K.
+- Up to 256 TIM rings, a.k.a. event timer adapters.
+- Up to 8 rings traversed in parallel.
Prerequisites and Compilation procedure
---------------------------------------
@@ -101,3 +105,5 @@ Debugging Options
+===+============+=======================================================+
| 1 | SSO | --log-level='pmd\.event\.cnxk,8' |
+---+------------+-------------------------------------------------------+
+ | 2 | TIM | --log-level='pmd\.event\.cnxk\.timer,8' |
+ +---+------------+-------------------------------------------------------+
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 0f084176c..85bb12e00 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -582,6 +582,8 @@ cnxk_sso_init(struct rte_eventdev *event_dev)
dev->nb_event_queues = 0;
dev->nb_event_ports = 0;
+ cnxk_tim_init(&dev->sso);
+
return 0;
error:
@@ -598,6 +600,7 @@ cnxk_sso_fini(struct rte_eventdev *event_dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ cnxk_tim_fini();
roc_sso_rsrc_fini(&dev->sso);
roc_sso_dev_fini(&dev->sso);
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index abe36f21f..1c61063c9 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,6 +14,8 @@
#include "roc_api.h"
+#include "cnxk_tim_evdev.h"
+
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
#define CN9K_SSO_SINGLE_WS "single_ws"
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
new file mode 100644
index 000000000..46461b885
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+#include "cnxk_tim_evdev.h"
+
+void
+cnxk_tim_init(struct roc_sso *sso)
+{
+ const struct rte_memzone *mz;
+ struct cnxk_tim_evdev *dev;
+ int rc;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ mz = rte_memzone_reserve(RTE_STR(CNXK_TIM_EVDEV_NAME),
+ sizeof(struct cnxk_tim_evdev), 0, 0);
+ if (mz == NULL) {
+ plt_tim_dbg("Unable to allocate memory for TIM Event device");
+ return;
+ }
+ dev = mz->addr;
+
+ dev->tim.roc_sso = sso;
+ rc = roc_tim_init(&dev->tim);
+ if (rc < 0) {
+ plt_err("Failed to initialize roc tim resources");
+ rte_memzone_free(mz);
+ return;
+ }
+ dev->nb_rings = rc;
+ dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+}
+
+void
+cnxk_tim_fini(void)
+{
+ struct cnxk_tim_evdev *dev = tim_priv_get();
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ roc_tim_fini(&dev->tim);
+ rte_memzone_free(rte_memzone_lookup(RTE_STR(CNXK_TIM_EVDEV_NAME)));
+}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
new file mode 100644
index 000000000..5ddc94ed4
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_TIM_EVDEV_H__
+#define __CNXK_TIM_EVDEV_H__
+
+#include <stddef.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <eventdev_pmd_pci.h>
+#include <rte_event_timer_adapter.h>
+#include <rte_memzone.h>
+
+#include "roc_api.h"
+
+#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
+#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+
+struct cnxk_tim_evdev {
+ struct roc_tim tim;
+ struct rte_eventdev *event_dev;
+ uint16_t nb_rings;
+ uint32_t chunk_sz;
+};
+
+static inline struct cnxk_tim_evdev *
+tim_priv_get(void)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(RTE_STR(CNXK_TIM_EVDEV_NAME));
+ if (mz == NULL)
+ return NULL;
+
+ return mz->addr;
+}
+
+void cnxk_tim_init(struct roc_sso *sso);
+void cnxk_tim_fini(void);
+
+#endif /* __CNXK_TIM_EVDEV_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 3830c7236..34e99d5bc 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -15,6 +15,7 @@ sources = files('cn10k_worker.c',
'cnxk_eventdev.c',
'cnxk_eventdev_selftest.c',
'cnxk_eventdev_stats.c',
+ 'cnxk_tim_evdev.c',
)
deps += ['bus_pci', 'common_cnxk', 'net_cnxk']
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v3 21/33] event/cnxk: add timer adapter capabilities
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (19 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 20/33] event/cnxk: support event timer pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 22/33] event/cnxk: create and free timer adapter pbhagavatula
` (13 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add function to retrieve event timer adapter capabilities.
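The capability query is a small callback that reports what the hardware timer can do. A minimal sketch of that pattern, outside DPDK (the `TIM_CAP_INTERNAL_PORT` flag, `tim_dev`, and `tim_caps_get()` names are illustrative, not the driver's actual symbols):

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative capability bit: timer expiry events are injected
 * directly into the event device, no service core needed. */
#define TIM_CAP_INTERNAL_PORT (1U << 0)

struct tim_dev {
	int initialized;
};

/* Fill *caps and return 0 on success, -ENODEV when no timer
 * device has been probed (mirrors the NULL check in the patch). */
int
tim_caps_get(const struct tim_dev *dev, uint32_t *caps)
{
	if (dev == NULL || !dev->initialized)
		return -ENODEV;

	*caps = TIM_CAP_INTERNAL_PORT;
	return 0;
}
```

The real callback additionally stashes the eventdev pointer for later use, which is why the driver casts away `const` via `uintptr_t`.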
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 2 ++
drivers/event/cnxk/cn9k_eventdev.c | 2 ++
drivers/event/cnxk/cnxk_tim_evdev.c | 22 +++++++++++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 6 +++++-
4 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 17bed4c9a..c6fde7232 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -420,6 +420,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .timer_adapter_caps_get = cnxk_tim_caps_get,
+
.dump = cnxk_sso_dump,
.dev_start = cn10k_sso_start,
.dev_stop = cn10k_sso_stop,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index e39d04fdd..4211610c6 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -494,6 +494,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .timer_adapter_caps_get = cnxk_tim_caps_get,
+
.dump = cnxk_sso_dump,
.dev_start = cn9k_sso_start,
.dev_stop = cn9k_sso_stop,
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 46461b885..265bee533 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -5,6 +5,26 @@
#include "cnxk_eventdev.h"
#include "cnxk_tim_evdev.h"
+int
+cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops)
+{
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+
+ RTE_SET_USED(flags);
+ RTE_SET_USED(ops);
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ /* Store evdev pointer for later use. */
+ dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
+ *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+
+ return 0;
+}
+
void
cnxk_tim_init(struct roc_sso *sso)
{
@@ -37,7 +57,7 @@ cnxk_tim_init(struct roc_sso *sso)
void
cnxk_tim_fini(void)
{
- struct cnxk_tim_evdev *dev = tim_priv_get();
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return;
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 5ddc94ed4..ece66ab25 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -27,7 +27,7 @@ struct cnxk_tim_evdev {
};
static inline struct cnxk_tim_evdev *
-tim_priv_get(void)
+cnxk_tim_priv_get(void)
{
const struct rte_memzone *mz;
@@ -38,6 +38,10 @@ tim_priv_get(void)
return mz->addr;
}
+int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops);
+
void cnxk_tim_init(struct roc_sso *sso);
void cnxk_tim_fini(void);
--
2.17.1
* [dpdk-dev] [PATCH v3 22/33] event/cnxk: create and free timer adapter
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (20 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 21/33] event/cnxk: add timer adapter capabilities pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 23/33] event/cnxk: add devargs to disable NPA pbhagavatula
` (12 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
When the application creates a timer adapter, the driver does the following:
- Allocate a TIM LF based on the number of LFs provisioned.
- Verify the supplied configuration parameters.
- Allocate the memory required for:
  * Buckets, based on the min and max timeout supplied.
  * The chunk pool, based on the number of timers.
On free:
- Free the allocated bucket and chunk memory.
- Free the allocated TIM LF.
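The bucket count in the create path follows directly from the configured tick width and the maximum timeout: the tick is first rounded up to the hardware's minimum resolution, then the timeout window is divided by it. A minimal sketch of that arithmetic, with illustrative helper names (the values in the test are made up, not hardware numbers):

```c
#include <stdint.h>

/* Round v up to the next multiple of mul
 * (mirrors RTE_ALIGN_MUL_CEIL as used in cnxk_tim_ring_create). */
uint64_t
align_mul_ceil(uint64_t v, uint64_t mul)
{
	return ((v + mul - 1) / mul) * mul;
}

/* Bucket count for a ring: the total timeout window divided by the
 * effective tick, where the tick is aligned up to the minimum
 * resolution the hardware supports. */
uint64_t
tim_nb_buckets(uint64_t max_tmo_ns, uint64_t tick_ns, uint64_t min_res_ns)
{
	uint64_t tck_nsec = align_mul_ceil(tick_ns, min_res_ns);

	return max_tmo_ns / tck_nsec;
}
```

If the aligned tick ends up below the minimum timeout ticks, the real driver either bumps `timer_tick_ns` (when `RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES` is set) or fails with `-ERANGE`.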
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 174 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 128 +++++++++++++++++++-
2 files changed, 300 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 265bee533..655540a72 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -5,6 +5,177 @@
#include "cnxk_eventdev.h"
#include "cnxk_tim_evdev.h"
+static struct rte_event_timer_adapter_ops cnxk_tim_ops;
+
+static int
+cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
+ struct rte_event_timer_adapter_conf *rcfg)
+{
+ unsigned int cache_sz = (tim_ring->nb_chunks / 1.5);
+ unsigned int mp_flags = 0;
+ char pool_name[25];
+ int rc;
+
+ cache_sz /= rte_lcore_count();
+ /* Create chunk pool. */
+ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
+ mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+ plt_tim_dbg("Using single producer mode");
+ tim_ring->prod_type_sp = true;
+ }
+
+ snprintf(pool_name, sizeof(pool_name), "cnxk_tim_chunk_pool%d",
+ tim_ring->ring_id);
+
+ if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
+ cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
+ cache_sz = cache_sz != 0 ? cache_sz : 2;
+ tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
+ tim_ring->chunk_pool = rte_mempool_create_empty(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz, cache_sz, 0,
+ rte_socket_id(), mp_flags);
+
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
+
+ rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+ rte_mbuf_platform_mempool_ops(), NULL);
+ if (rc < 0) {
+ plt_err("Unable to set chunkpool ops");
+ goto free;
+ }
+
+ rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+ if (rc < 0) {
+ plt_err("Unable to set populate chunkpool.");
+ goto free;
+ }
+ tim_ring->aura =
+ roc_npa_aura_handle_to_aura(tim_ring->chunk_pool->pool_id);
+ tim_ring->ena_dfb = 0;
+
+ return 0;
+
+free:
+ rte_mempool_free(tim_ring->chunk_pool);
+ return rc;
+}
+
+static int
+cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
+{
+ struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ struct cnxk_tim_ring *tim_ring;
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ if (adptr->data->id >= dev->nb_rings)
+ return -ENODEV;
+
+ tim_ring = rte_zmalloc("cnxk_tim_prv", sizeof(struct cnxk_tim_ring), 0);
+ if (tim_ring == NULL)
+ return -ENOMEM;
+
+ rc = roc_tim_lf_alloc(&dev->tim, adptr->data->id, NULL);
+ if (rc < 0) {
+ plt_err("Failed to create timer ring");
+ goto tim_ring_free;
+ }
+
+ if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(
+ rcfg->timer_tick_ns,
+ cnxk_tim_min_resolution_ns(cnxk_tim_cntfrq())),
+ cnxk_tim_cntfrq()) <
+ cnxk_tim_min_tmo_ticks(cnxk_tim_cntfrq())) {
+ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
+ rcfg->timer_tick_ns = TICK2NSEC(
+ cnxk_tim_min_tmo_ticks(cnxk_tim_cntfrq()),
+ cnxk_tim_cntfrq());
+ else {
+ rc = -ERANGE;
+ goto tim_hw_free;
+ }
+ }
+ tim_ring->ring_id = adptr->data->id;
+ tim_ring->clk_src = (int)rcfg->clk_src;
+ tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(
+ rcfg->timer_tick_ns,
+ cnxk_tim_min_resolution_ns(cnxk_tim_cntfrq()));
+ tim_ring->max_tout = rcfg->max_tmo_ns;
+ tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
+ tim_ring->nb_timers = rcfg->nb_timers;
+ tim_ring->chunk_sz = dev->chunk_sz;
+
+ tim_ring->nb_chunks = tim_ring->nb_timers;
+ tim_ring->nb_chunk_slots = CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+ /* Create buckets. */
+ tim_ring->bkt =
+ rte_zmalloc("cnxk_tim_bucket",
+ (tim_ring->nb_bkts) * sizeof(struct cnxk_tim_bkt),
+ RTE_CACHE_LINE_SIZE);
+ if (tim_ring->bkt == NULL)
+ goto tim_hw_free;
+
+ rc = cnxk_tim_chnk_pool_create(tim_ring, rcfg);
+ if (rc < 0)
+ goto tim_bkt_free;
+
+ rc = roc_tim_lf_config(
+ &dev->tim, tim_ring->ring_id,
+ cnxk_tim_convert_clk_src(tim_ring->clk_src), 0, 0,
+ tim_ring->nb_bkts, tim_ring->chunk_sz,
+ NSEC2TICK(tim_ring->tck_nsec, cnxk_tim_cntfrq()));
+ if (rc < 0) {
+ plt_err("Failed to configure timer ring");
+ goto tim_chnk_free;
+ }
+
+ tim_ring->base = roc_tim_lf_base_get(&dev->tim, tim_ring->ring_id);
+ plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
+ plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+
+ plt_tim_dbg(
+ "Total memory used %" PRIu64 "MB\n",
+ (uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
+ (tim_ring->nb_bkts * sizeof(struct cnxk_tim_bkt))) /
+ BIT_ULL(20)));
+
+ adptr->data->adapter_priv = tim_ring;
+ return rc;
+
+tim_chnk_free:
+ rte_mempool_free(tim_ring->chunk_pool);
+tim_bkt_free:
+ rte_free(tim_ring->bkt);
+tim_hw_free:
+ roc_tim_lf_free(&dev->tim, tim_ring->ring_id);
+tim_ring_free:
+ rte_free(tim_ring);
+ return rc;
+}
+
+static int
+cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ roc_tim_lf_free(&dev->tim, tim_ring->ring_id);
+ rte_free(tim_ring->bkt);
+ rte_mempool_free(tim_ring->chunk_pool);
+ rte_free(tim_ring);
+
+ return 0;
+}
+
int
cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
uint32_t *caps,
@@ -18,6 +189,9 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
if (dev == NULL)
return -ENODEV;
+ cnxk_tim_ops.init = cnxk_tim_ring_create;
+ cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index ece66ab25..2335707cd 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -12,12 +12,26 @@
#include <eventdev_pmd_pci.h>
#include <rte_event_timer_adapter.h>
+#include <rte_malloc.h>
#include <rte_memzone.h>
#include "roc_api.h"
-#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
-#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+#define NSECPERSEC 1E9
+#define USECPERSEC 1E6
+#define TICK2NSEC(__tck, __freq) (((__tck)*NSECPERSEC) / (__freq))
+
+#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
+#define CNXK_TIM_MAX_BUCKETS (0xFFFFF)
+#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+#define CNXK_TIM_CHUNK_ALIGNMENT (16)
+#define CNXK_TIM_MAX_BURST \
+ (RTE_CACHE_LINE_SIZE / CNXK_TIM_CHUNK_ALIGNMENT)
+#define CNXK_TIM_NB_CHUNK_SLOTS(sz) (((sz) / CNXK_TIM_CHUNK_ALIGNMENT) - 1)
+#define CNXK_TIM_MIN_CHUNK_SLOTS (0x1)
+#define CNXK_TIM_MAX_CHUNK_SLOTS (0x1FFE)
+
+#define CN9K_TIM_MIN_TMO_TKS (256)
struct cnxk_tim_evdev {
struct roc_tim tim;
@@ -26,6 +40,57 @@ struct cnxk_tim_evdev {
uint32_t chunk_sz;
};
+enum cnxk_tim_clk_src {
+ CNXK_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
+ CNXK_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0,
+ CNXK_TIM_CLK_SRC_GTI = RTE_EVENT_TIMER_ADAPTER_EXT_CLK1,
+ CNXK_TIM_CLK_SRC_PTP = RTE_EVENT_TIMER_ADAPTER_EXT_CLK2,
+};
+
+struct cnxk_tim_bkt {
+ uint64_t first_chunk;
+ union {
+ uint64_t w1;
+ struct {
+ uint32_t nb_entry;
+ uint8_t sbt : 1;
+ uint8_t hbt : 1;
+ uint8_t bsk : 1;
+ uint8_t rsvd : 5;
+ uint8_t lock;
+ int16_t chunk_remainder;
+ };
+ };
+ uint64_t current_chunk;
+ uint64_t pad;
+};
+
+struct cnxk_tim_ring {
+ uintptr_t base;
+ uint16_t nb_chunk_slots;
+ uint32_t nb_bkts;
+ uint64_t tck_int;
+ uint64_t tot_int;
+ struct cnxk_tim_bkt *bkt;
+ struct rte_mempool *chunk_pool;
+ uint64_t arm_cnt;
+ uint8_t prod_type_sp;
+ uint8_t ena_dfb;
+ uint16_t ring_id;
+ uint32_t aura;
+ uint64_t nb_timers;
+ uint64_t tck_nsec;
+ uint64_t max_tout;
+ uint64_t nb_chunks;
+ uint64_t chunk_sz;
+ enum cnxk_tim_clk_src clk_src;
+} __rte_cache_aligned;
+
+struct cnxk_tim_ent {
+ uint64_t w0;
+ uint64_t wqe;
+};
+
static inline struct cnxk_tim_evdev *
cnxk_tim_priv_get(void)
{
@@ -38,6 +103,65 @@ cnxk_tim_priv_get(void)
return mz->addr;
}
+static inline uint64_t
+cnxk_tim_min_tmo_ticks(uint64_t freq)
+{
+ if (roc_model_runtime_is_cn9k())
+ return CN9K_TIM_MIN_TMO_TKS;
+ else /* CN10K min tick is of 1us */
+ return freq / USECPERSEC;
+}
+
+static inline uint64_t
+cnxk_tim_min_resolution_ns(uint64_t freq)
+{
+ return NSECPERSEC / freq;
+}
+
+static inline enum roc_tim_clk_src
+cnxk_tim_convert_clk_src(enum cnxk_tim_clk_src clk_src)
+{
+ switch (clk_src) {
+ case RTE_EVENT_TIMER_ADAPTER_CPU_CLK:
+ return roc_model_runtime_is_cn9k() ? ROC_TIM_CLK_SRC_10NS :
+ ROC_TIM_CLK_SRC_GTI;
+ default:
+ return ROC_TIM_CLK_SRC_INVALID;
+ }
+}
+
+#ifdef RTE_ARCH_ARM64
+static inline uint64_t
+cnxk_tim_cntvct(void)
+{
+ uint64_t tsc;
+
+ asm volatile("mrs %0, cntvct_el0" : "=r"(tsc));
+ return tsc;
+}
+
+static inline uint64_t
+cnxk_tim_cntfrq(void)
+{
+ uint64_t freq;
+
+ asm volatile("mrs %0, cntfrq_el0" : "=r"(freq));
+ return freq;
+}
+#else
+static inline uint64_t
+cnxk_tim_cntvct(void)
+{
+ return 0;
+}
+
+static inline uint64_t
+cnxk_tim_cntfrq(void)
+{
+ return 0;
+}
+#endif
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
--
2.17.1
* [dpdk-dev] [PATCH v3 23/33] event/cnxk: add devargs to disable NPA
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (21 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 22/33] event/cnxk: create and free timer adapter pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 24/33] event/cnxk: allow adapters to resize inflights pbhagavatula
` (11 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
If the chunks are allocated from NPA, TIM can automatically free them
while traversing the list of chunks.
Add a devargs parameter to disable NPA and use a software mempool to
manage the chunks instead.
Example:
--dev "0002:0e:00.0,tim_disable_npa=1"
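The devarg is a boolean flag: any non-zero value string enables it. A minimal stand-in for the kvargs flag handler added in this patch (`parse_flag_value` is an illustrative name; the real handler, `parse_kvargs_flag`, receives the key as well and is invoked through `rte_kvargs_process`):

```c
#include <stdint.h>
#include <stdlib.h>

/* Convert a devarg value string such as "1" into a 0/1 flag.
 * Any non-zero numeric value sets the flag, mirroring the
 * `!!atoi(value)` idiom used by the patch. */
int
parse_flag_value(const char *value, void *opaque)
{
	if (value == NULL || opaque == NULL)
		return -1;

	*(uint8_t *)opaque = !!atoi(value);
	return 0;
}
```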
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 10 ++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_eventdev.h | 9 +++
drivers/event/cnxk/cnxk_tim_evdev.c | 86 +++++++++++++++++++++--------
drivers/event/cnxk/cnxk_tim_evdev.h | 5 ++
6 files changed, 92 insertions(+), 24 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index e6f81f8b1..c2d6ed2fb 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -93,6 +93,16 @@ Runtime Config Options
-a 0002:0e:00.0,qos=[1-50-50-50]
+- ``TIM disable NPA``
+
+ By default, chunks are allocated from NPA, so TIM can automatically free
+ them while traversing the list of chunks. The ``tim_disable_npa`` devargs
+ parameter disables NPA and uses a software mempool to manage chunks.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_disable_npa=1
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index c6fde7232..d0ab0c853 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -502,4 +502,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
- CN10K_SSO_GW_MODE "=<int>");
+ CN10K_SSO_GW_MODE "=<int>"
+ CNXK_TIM_DISABLE_NPA "=1");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 4211610c6..6b406daf0 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -571,4 +571,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
- CN9K_SSO_SINGLE_WS "=1");
+ CN9K_SSO_SINGLE_WS "=1"
+ CNXK_TIM_DISABLE_NPA "=1");
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 1c61063c9..77835e463 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -159,6 +159,15 @@ struct cnxk_sso_hws_cookie {
bool configured;
} __rte_cache_aligned;
+static inline int
+parse_kvargs_flag(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ *(uint8_t *)opaque = !!atoi(value);
+ return 0;
+}
+
static inline int
parse_kvargs_value(const char *key, const char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 655540a72..d93b37e4f 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -31,30 +31,43 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
cache_sz = cache_sz != 0 ? cache_sz : 2;
tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
- tim_ring->chunk_pool = rte_mempool_create_empty(
- pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz, cache_sz, 0,
- rte_socket_id(), mp_flags);
-
- if (tim_ring->chunk_pool == NULL) {
- plt_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
+ if (!tim_ring->disable_npa) {
+ tim_ring->chunk_pool = rte_mempool_create_empty(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz,
+ cache_sz, 0, rte_socket_id(), mp_flags);
+
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
- rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
- rte_mbuf_platform_mempool_ops(), NULL);
- if (rc < 0) {
- plt_err("Unable to set chunkpool ops");
- goto free;
- }
+ rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+ rte_mbuf_platform_mempool_ops(),
+ NULL);
+ if (rc < 0) {
+ plt_err("Unable to set chunkpool ops");
+ goto free;
+ }
- rc = rte_mempool_populate_default(tim_ring->chunk_pool);
- if (rc < 0) {
- plt_err("Unable to set populate chunkpool.");
- goto free;
+ rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+ if (rc < 0) {
+ plt_err("Unable to set populate chunkpool.");
+ goto free;
+ }
+ tim_ring->aura = roc_npa_aura_handle_to_aura(
+ tim_ring->chunk_pool->pool_id);
+ tim_ring->ena_dfb = 0;
+ } else {
+ tim_ring->chunk_pool = rte_mempool_create(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz,
+ cache_sz, 0, NULL, NULL, NULL, NULL, rte_socket_id(),
+ mp_flags);
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
+ tim_ring->ena_dfb = 1;
}
- tim_ring->aura =
- roc_npa_aura_handle_to_aura(tim_ring->chunk_pool->pool_id);
- tim_ring->ena_dfb = 0;
return 0;
@@ -110,8 +123,17 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
tim_ring->nb_timers = rcfg->nb_timers;
tim_ring->chunk_sz = dev->chunk_sz;
+ tim_ring->disable_npa = dev->disable_npa;
+
+ if (tim_ring->disable_npa) {
+ tim_ring->nb_chunks =
+ tim_ring->nb_timers /
+ CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+ tim_ring->nb_chunks = tim_ring->nb_chunks * tim_ring->nb_bkts;
+ } else {
+ tim_ring->nb_chunks = tim_ring->nb_timers;
+ }
- tim_ring->nb_chunks = tim_ring->nb_timers;
tim_ring->nb_chunk_slots = CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
/* Create buckets. */
tim_ring->bkt =
@@ -199,6 +221,24 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
return 0;
}
+static void
+cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
+{
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ return;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, CNXK_TIM_DISABLE_NPA, &parse_kvargs_flag,
+ &dev->disable_npa);
+
+ rte_kvargs_free(kvlist);
+}
+
void
cnxk_tim_init(struct roc_sso *sso)
{
@@ -217,6 +257,8 @@ cnxk_tim_init(struct roc_sso *sso)
}
dev = mz->addr;
+ cnxk_tim_parse_devargs(sso->pci_dev->device.devargs, dev);
+
dev->tim.roc_sso = sso;
rc = roc_tim_init(&dev->tim);
if (rc < 0) {
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 2335707cd..4896ed67a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -33,11 +33,15 @@
#define CN9K_TIM_MIN_TMO_TKS (256)
+#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
uint16_t nb_rings;
uint32_t chunk_sz;
+ /* Dev args */
+ uint8_t disable_npa;
};
enum cnxk_tim_clk_src {
@@ -75,6 +79,7 @@ struct cnxk_tim_ring {
struct rte_mempool *chunk_pool;
uint64_t arm_cnt;
uint8_t prod_type_sp;
+ uint8_t disable_npa;
uint8_t ena_dfb;
uint16_t ring_id;
uint32_t aura;
--
2.17.1
* [dpdk-dev] [PATCH v3 24/33] event/cnxk: allow adapters to resize inflights
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (22 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 23/33] event/cnxk: add devargs to disable NPA pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 25/33] event/cnxk: add timer adapter info function pbhagavatula
` (10 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add internal SSO functions to allow event adapters to resize SSO buffers
that are used to hold in-flight events in DRAM.
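The resize boils down to recomputing the XAQ buffer count from whoever is demanding in-flight events: an explicit `xae_cnt` devarg wins, otherwise the adapters' aggregate demand, otherwise the hardware's in-unit entry count. A sketch of that selection logic (the constants below are illustrative; the real values come from the SSO capabilities and `CNXK_SSO_XAQ_*` macros):

```c
#include <stdint.h>

/* Illustrative per-queue cache and slack values. */
#define XAQ_CACHE_CNT 7
#define XAQ_SLACK     16

/* XAQ buffer count mirroring the branch order in
 * cnxk_sso_xaq_allocate(): user devarg > adapter demand > HW default.
 * xae_waes is the number of work-queue entries per XAQ buffer. */
uint32_t
sso_xaq_count(uint32_t nb_queues, uint32_t xae_waes, uint32_t xae_cnt,
	      uint32_t adptr_xae_cnt, uint32_t iue)
{
	uint32_t xaq_cnt = nb_queues * XAQ_CACHE_CNT;

	if (xae_cnt)
		xaq_cnt += xae_cnt / xae_waes;
	else if (adptr_xae_cnt)
		xaq_cnt += (adptr_xae_cnt / xae_waes) +
			   (XAQ_SLACK * nb_queues);
	else
		xaq_cnt += (iue / xae_waes) + (XAQ_SLACK * nb_queues);

	return xaq_cnt;
}
```

On reconfigure, the driver stops the device if running, releases and reallocates the XAQ pool with the new count, then restarts the device.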
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 33 ++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 7 +++
drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 ++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.c | 5 ++
drivers/event/cnxk/meson.build | 1 +
5 files changed, 113 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 85bb12e00..7189ee3a7 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -77,6 +77,9 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
if (dev->xae_cnt)
xaq_cnt += dev->xae_cnt / dev->sso.xae_waes;
+ else if (dev->adptr_xae_cnt)
+ xaq_cnt += (dev->adptr_xae_cnt / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
else
xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
(CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
@@ -125,6 +128,36 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
return rc;
}
+int
+cnxk_sso_xae_reconfigure(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc = 0;
+
+ if (event_dev->data->dev_started)
+ event_dev->dev_ops->dev_stop(event_dev);
+
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0) {
+ plt_err("Failed to alloc XAQ %d", rc);
+ return rc;
+ }
+
+ rte_mb();
+ if (event_dev->data->dev_started)
+ event_dev->dev_ops->dev_start(event_dev);
+
+ return 0;
+}
+
int
cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
cnxk_sso_init_hws_mem_t init_hws_fn,
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 77835e463..668e51d62 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -81,6 +81,10 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ uint64_t adptr_xae_cnt;
+ uint16_t tim_adptr_ring_cnt;
+ uint16_t *timer_adptr_rings;
+ uint64_t *timer_adptr_sz;
/* Dev args */
uint32_t xae_cnt;
uint8_t qos_queue_cnt;
@@ -190,7 +194,10 @@ cnxk_sso_hws_get_cookie(void *ws)
}
/* Configuration functions */
+int cnxk_sso_xae_reconfigure(struct rte_eventdev *event_dev);
int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
+void cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
+ uint32_t event_type);
/* Common ops API. */
int cnxk_sso_init(struct rte_eventdev *event_dev);
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
new file mode 100644
index 000000000..89a1d82c1
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+void
+cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
+ uint32_t event_type)
+{
+ int i;
+
+ switch (event_type) {
+ case RTE_EVENT_TYPE_TIMER: {
+ struct cnxk_tim_ring *timr = data;
+ uint16_t *old_ring_ptr;
+ uint64_t *old_sz_ptr;
+
+ for (i = 0; i < dev->tim_adptr_ring_cnt; i++) {
+ if (timr->ring_id != dev->timer_adptr_rings[i])
+ continue;
+ if (timr->nb_timers == dev->timer_adptr_sz[i])
+ return;
+ dev->adptr_xae_cnt -= dev->timer_adptr_sz[i];
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_sz[i] = timr->nb_timers;
+
+ return;
+ }
+
+ dev->tim_adptr_ring_cnt++;
+ old_ring_ptr = dev->timer_adptr_rings;
+ old_sz_ptr = dev->timer_adptr_sz;
+
+ dev->timer_adptr_rings = rte_realloc(
+ dev->timer_adptr_rings,
+ sizeof(uint16_t) * dev->tim_adptr_ring_cnt, 0);
+ if (dev->timer_adptr_rings == NULL) {
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_rings = old_ring_ptr;
+ dev->tim_adptr_ring_cnt--;
+ return;
+ }
+
+ dev->timer_adptr_sz = rte_realloc(
+ dev->timer_adptr_sz,
+ sizeof(uint64_t) * dev->tim_adptr_ring_cnt, 0);
+
+ if (dev->timer_adptr_sz == NULL) {
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_sz = old_sz_ptr;
+ dev->tim_adptr_ring_cnt--;
+ return;
+ }
+
+ dev->timer_adptr_rings[dev->tim_adptr_ring_cnt - 1] =
+ timr->ring_id;
+ dev->timer_adptr_sz[dev->tim_adptr_ring_cnt - 1] =
+ timr->nb_timers;
+
+ dev->adptr_xae_cnt += timr->nb_timers;
+ break;
+ }
+ default:
+ break;
+ }
+}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index d93b37e4f..1eb39a789 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -161,6 +161,11 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+ /* Update SSO xae count. */
+ cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
+ RTE_EVENT_TYPE_TIMER);
+ cnxk_sso_xae_reconfigure(dev->event_dev);
+
plt_tim_dbg(
"Total memory used %" PRIu64 "MB\n",
(uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 34e99d5bc..4b1aef0b4 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -12,6 +12,7 @@ sources = files('cn10k_worker.c',
'cn10k_eventdev.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
+ 'cnxk_eventdev_adptr.c',
'cnxk_eventdev.c',
'cnxk_eventdev_selftest.c',
'cnxk_eventdev_stats.c',
--
2.17.1
* [dpdk-dev] [PATCH v3 25/33] event/cnxk: add timer adapter info function
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (23 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 24/33] event/cnxk: allow adapters to resize inflights pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 26/33] event/cnxk: add devargs for chunk size and rings pbhagavatula
` (9 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add TIM event timer adapter info get function.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 1eb39a789..2fefa56f5 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -76,6 +76,18 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
return rc;
}
+static void
+cnxk_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer_adapter_info *adptr_info)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+
+ adptr_info->max_tmo_ns = tim_ring->max_tout;
+ adptr_info->min_resolution_ns = tim_ring->tck_nsec;
+ rte_memcpy(&adptr_info->conf, &adptr->data->conf,
+ sizeof(struct rte_event_timer_adapter_conf));
+}
+
static int
cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
{
@@ -218,6 +230,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
cnxk_tim_ops.init = cnxk_tim_ring_create;
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+ cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
--
2.17.1
* [dpdk-dev] [PATCH v3 26/33] event/cnxk: add devargs for chunk size and rings
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (24 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 25/33] event/cnxk: add timer adapter info function pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 27/33] event/cnxk: add TIM bucket operations pbhagavatula
` (8 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add devargs to control the default chunk size and the maximum number
of timer rings that can be attached to a given RVU PF.
Example:
--dev "0002:1e:00.0,tim_chnk_slots=1024"
--dev "0002:1e:00.0,tim_rings_lmt=4"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 23 +++++++++++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 4 +++-
drivers/event/cnxk/cn9k_eventdev.c | 4 +++-
drivers/event/cnxk/cnxk_tim_evdev.c | 14 +++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 4 ++++
5 files changed, 46 insertions(+), 3 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index c2d6ed2fb..a8199aac7 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -103,6 +103,29 @@ Runtime Config Options
-a 0002:0e:00.0,tim_disable_npa=1
+- ``TIM modify chunk slots``
+
+ The ``tim_chnk_slots`` devargs can be used to modify the number of chunk slots.
+ Chunks are used to store event timers; a chunk can be visualised as an array
+ in which the last element points to the next chunk and the remaining elements
+ store events. TIM traverses the list of chunks and enqueues the event timers
+ to SSO. The default value is 255 and the maximum value is 4095.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_chnk_slots=1023
+
+- ``TIM limit max rings reserved``
+
+ The ``tim_rings_lmt`` devargs can be used to limit the maximum number of TIM
+ rings, i.e. event timer adapters, reserved on probe. Since TIM rings are HW
+ resources, not grabbing all of them avoids starving other applications of
+ rings.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_rings_lmt=5
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index d0ab0c853..9fdb66343 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -503,4 +503,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
CN10K_SSO_GW_MODE "=<int>"
- CNXK_TIM_DISABLE_NPA "=1");
+ CNXK_TIM_DISABLE_NPA "=1"
+ CNXK_TIM_CHNK_SLOTS "=<int>"
+ CNXK_TIM_RINGS_LMT "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 6b406daf0..a68502703 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -572,4 +572,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
CN9K_SSO_SINGLE_WS "=1"
- CNXK_TIM_DISABLE_NPA "=1");
+ CNXK_TIM_DISABLE_NPA "=1"
+ CNXK_TIM_CHNK_SLOTS "=<int>"
+ CNXK_TIM_RINGS_LMT "=<int>");
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 2fefa56f5..e06fe2f52 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -253,6 +253,10 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
rte_kvargs_process(kvlist, CNXK_TIM_DISABLE_NPA, &parse_kvargs_flag,
&dev->disable_npa);
+ rte_kvargs_process(kvlist, CNXK_TIM_CHNK_SLOTS, &parse_kvargs_value,
+ &dev->chunk_slots);
+ rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
+ &dev->min_ring_cnt);
rte_kvargs_free(kvlist);
}
@@ -278,6 +282,7 @@ cnxk_tim_init(struct roc_sso *sso)
cnxk_tim_parse_devargs(sso->pci_dev->device.devargs, dev);
dev->tim.roc_sso = sso;
+ dev->tim.nb_lfs = dev->min_ring_cnt;
rc = roc_tim_init(&dev->tim);
if (rc < 0) {
plt_err("Failed to initialize roc tim resources");
@@ -285,7 +290,14 @@ cnxk_tim_init(struct roc_sso *sso)
return;
}
dev->nb_rings = rc;
- dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+
+ if (dev->chunk_slots && dev->chunk_slots <= CNXK_TIM_MAX_CHUNK_SLOTS &&
+ dev->chunk_slots >= CNXK_TIM_MIN_CHUNK_SLOTS) {
+ dev->chunk_sz =
+ (dev->chunk_slots + 1) * CNXK_TIM_CHUNK_ALIGNMENT;
+ } else {
+ dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+ }
}
void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 4896ed67a..9496634c8 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -34,6 +34,8 @@
#define CN9K_TIM_MIN_TMO_TKS (256)
#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
+#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
+#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
struct cnxk_tim_evdev {
struct roc_tim tim;
@@ -42,6 +44,8 @@ struct cnxk_tim_evdev {
uint32_t chunk_sz;
/* Dev args */
uint8_t disable_npa;
+ uint16_t chunk_slots;
+ uint16_t min_ring_cnt;
};
enum cnxk_tim_clk_src {
--
2.17.1
* [dpdk-dev] [PATCH v3 27/33] event/cnxk: add TIM bucket operations
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (25 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 26/33] event/cnxk: add devargs for chunk size and rings pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 28/33] event/cnxk: add timer arm routine pbhagavatula
` (7 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add TIM bucket operations used for event timer arm and cancel.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.h | 30 +++++++
drivers/event/cnxk/cnxk_tim_worker.c | 6 ++
drivers/event/cnxk/cnxk_tim_worker.h | 123 +++++++++++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
4 files changed, 160 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 9496634c8..f6895417a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -37,6 +37,36 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
+#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
+ ((1ULL << (64 - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
+#define TIM_BUCKET_W1_S_LOCK (40)
+#define TIM_BUCKET_W1_M_LOCK \
+ ((1ULL << (TIM_BUCKET_W1_S_CHUNK_REMAINDER - TIM_BUCKET_W1_S_LOCK)) - 1)
+#define TIM_BUCKET_W1_S_RSVD (35)
+#define TIM_BUCKET_W1_S_BSK (34)
+#define TIM_BUCKET_W1_M_BSK \
+ ((1ULL << (TIM_BUCKET_W1_S_RSVD - TIM_BUCKET_W1_S_BSK)) - 1)
+#define TIM_BUCKET_W1_S_HBT (33)
+#define TIM_BUCKET_W1_M_HBT \
+ ((1ULL << (TIM_BUCKET_W1_S_BSK - TIM_BUCKET_W1_S_HBT)) - 1)
+#define TIM_BUCKET_W1_S_SBT (32)
+#define TIM_BUCKET_W1_M_SBT \
+ ((1ULL << (TIM_BUCKET_W1_S_HBT - TIM_BUCKET_W1_S_SBT)) - 1)
+#define TIM_BUCKET_W1_S_NUM_ENTRIES (0)
+#define TIM_BUCKET_W1_M_NUM_ENTRIES \
+ ((1ULL << (TIM_BUCKET_W1_S_SBT - TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1)
+
+#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN)
+
+#define TIM_BUCKET_CHUNK_REMAIN \
+ (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER)
+
+#define TIM_BUCKET_LOCK (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK)
+
+#define TIM_BUCKET_SEMA_WLOCK \
+ (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
new file mode 100644
index 000000000..49ee85245
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_tim_evdev.h"
+#include "cnxk_tim_worker.h"
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
new file mode 100644
index 000000000..d56e67360
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_TIM_WORKER_H__
+#define __CNXK_TIM_WORKER_H__
+
+#include "cnxk_tim_evdev.h"
+
+static inline uint8_t
+cnxk_tim_bkt_fetch_lock(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK;
+}
+
+static inline int16_t
+cnxk_tim_bkt_fetch_rem(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) &
+ TIM_BUCKET_W1_M_CHUNK_REMAINDER;
+}
+
+static inline int16_t
+cnxk_tim_bkt_get_rem(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+cnxk_tim_bkt_set_rem(struct cnxk_tim_bkt *bktp, uint16_t v)
+{
+ __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline void
+cnxk_tim_bkt_sub_rem(struct cnxk_tim_bkt *bktp, uint16_t v)
+{
+ __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline uint8_t
+cnxk_tim_bkt_get_hbt(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT;
+}
+
+static inline uint8_t
+cnxk_tim_bkt_get_bsk(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK;
+}
+
+static inline uint64_t
+cnxk_tim_bkt_clr_bsk(struct cnxk_tim_bkt *bktp)
+{
+ /* Clear everything except lock. */
+ const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK;
+
+ return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_fetch_sema_lock(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
+ __ATOMIC_ACQUIRE);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_fetch_sema(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA, __ATOMIC_RELAXED);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_inc_lock(struct cnxk_tim_bkt *bktp)
+{
+ const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK;
+
+ return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+cnxk_tim_bkt_dec_lock(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELEASE);
+}
+
+static inline void
+cnxk_tim_bkt_dec_lock_relaxed(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELAXED);
+}
+
+static inline uint32_t
+cnxk_tim_bkt_get_nent(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_NUM_ENTRIES) &
+ TIM_BUCKET_W1_M_NUM_ENTRIES;
+}
+
+static inline void
+cnxk_tim_bkt_inc_nent(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_add_fetch(&bktp->nb_entry, 1, __ATOMIC_RELAXED);
+}
+
+static inline void
+cnxk_tim_bkt_add_nent(struct cnxk_tim_bkt *bktp, uint32_t v)
+{
+ __atomic_add_fetch(&bktp->nb_entry, v, __ATOMIC_RELAXED);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_clr_nent(struct cnxk_tim_bkt *bktp)
+{
+ const uint64_t v =
+ ~(TIM_BUCKET_W1_M_NUM_ENTRIES << TIM_BUCKET_W1_S_NUM_ENTRIES);
+
+ return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+#endif /* __CNXK_TIM_WORKER_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 4b1aef0b4..098b0db09 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -16,6 +16,7 @@ sources = files('cn10k_worker.c',
'cnxk_eventdev.c',
'cnxk_eventdev_selftest.c',
'cnxk_eventdev_stats.c',
+ 'cnxk_tim_worker.c',
'cnxk_tim_evdev.c',
)
--
2.17.1
* [dpdk-dev] [PATCH v3 28/33] event/cnxk: add timer arm routine
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (26 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 27/33] event/cnxk: add TIM bucket operations pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 29/33] event/cnxk: add timer arm timeout burst pbhagavatula
` (6 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer arm routine.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 18 ++
drivers/event/cnxk/cnxk_tim_evdev.h | 23 ++
drivers/event/cnxk/cnxk_tim_worker.c | 95 +++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 300 +++++++++++++++++++++++++++
4 files changed, 436 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index e06fe2f52..ecc952a6a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -76,6 +76,21 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
return rc;
}
+static void
+cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
+{
+ uint8_t prod_flag = !tim_ring->prod_type_sp;
+
+ /* [DFB/FB] [SP][MP]*/
+ const rte_event_timer_arm_burst_t arm_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) [_f2][_f1] = cnxk_tim_arm_burst_##_name,
+ TIM_ARM_FASTPATH_MODES
+#undef FP
+ };
+
+ cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
+}
+
static void
cnxk_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
struct rte_event_timer_adapter_info *adptr_info)
@@ -173,6 +188,9 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+ /* Set fastpath ops. */
+ cnxk_tim_set_fp_ops(tim_ring);
+
/* Update SSO xae count. */
cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
RTE_EVENT_TYPE_TIMER);
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index f6895417a..1f2aad17a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -14,6 +14,7 @@
#include <rte_event_timer_adapter.h>
#include <rte_malloc.h>
#include <rte_memzone.h>
+#include <rte_reciprocal.h>
#include "roc_api.h"
@@ -37,6 +38,11 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define CNXK_TIM_SP 0x1
+#define CNXK_TIM_MP 0x2
+#define CNXK_TIM_ENA_FB 0x10
+#define CNXK_TIM_ENA_DFB 0x20
+
#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
((1ULL << (64 - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
@@ -107,10 +113,14 @@ struct cnxk_tim_ring {
uintptr_t base;
uint16_t nb_chunk_slots;
uint32_t nb_bkts;
+ uint64_t last_updt_cyc;
+ uint64_t ring_start_cyc;
uint64_t tck_int;
uint64_t tot_int;
struct cnxk_tim_bkt *bkt;
struct rte_mempool *chunk_pool;
+ struct rte_reciprocal_u64 fast_div;
+ struct rte_reciprocal_u64 fast_bkt;
uint64_t arm_cnt;
uint8_t prod_type_sp;
uint8_t disable_npa;
@@ -201,6 +211,19 @@ cnxk_tim_cntfrq(void)
}
#endif
+#define TIM_ARM_FASTPATH_MODES \
+ FP(sp, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(mp, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+
+#define FP(_name, _f2, _f1, flags) \
+ uint16_t cnxk_tim_arm_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint16_t nb_timers);
+TIM_ARM_FASTPATH_MODES
+#undef FP
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 49ee85245..268f845c8 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -4,3 +4,98 @@
#include "cnxk_tim_evdev.h"
#include "cnxk_tim_worker.h"
+
+static inline int
+cnxk_tim_arm_checks(const struct cnxk_tim_ring *const tim_ring,
+ struct rte_event_timer *const tim)
+{
+ if (unlikely(tim->state)) {
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ rte_errno = EALREADY;
+ goto fail;
+ }
+
+ if (unlikely(!tim->timeout_ticks ||
+ tim->timeout_ticks > tim_ring->nb_bkts)) {
+ tim->state = tim->timeout_ticks ?
+ RTE_EVENT_TIMER_ERROR_TOOLATE :
+ RTE_EVENT_TIMER_ERROR_TOOEARLY;
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ return -EINVAL;
+}
+
+static inline void
+cnxk_tim_format_event(const struct rte_event_timer *const tim,
+ struct cnxk_tim_ent *const entry)
+{
+ entry->w0 = (tim->ev.event & 0xFFC000000000) >> 6 |
+ (tim->ev.event & 0xFFFFFFFFF);
+ entry->wqe = tim->ev.u64;
+}
+
+static inline void
+cnxk_tim_sync_start_cyc(struct cnxk_tim_ring *tim_ring)
+{
+ uint64_t cur_cyc = cnxk_tim_cntvct();
+ uint32_t real_bkt;
+
+ if (cur_cyc - tim_ring->last_updt_cyc > tim_ring->tot_int) {
+ real_bkt = plt_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
+ cur_cyc = cnxk_tim_cntvct();
+
+ tim_ring->ring_start_cyc =
+ cur_cyc - (real_bkt * tim_ring->tck_int);
+ tim_ring->last_updt_cyc = cur_cyc;
+ }
+}
+
+static __rte_always_inline uint16_t
+cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim, const uint16_t nb_timers,
+ const uint8_t flags)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_ent entry;
+ uint16_t index;
+ int ret;
+
+ cnxk_tim_sync_start_cyc(tim_ring);
+ for (index = 0; index < nb_timers; index++) {
+ if (cnxk_tim_arm_checks(tim_ring, tim[index]))
+ break;
+
+ cnxk_tim_format_event(tim[index], &entry);
+ if (flags & CNXK_TIM_SP)
+ ret = cnxk_tim_add_entry_sp(tim_ring,
+ tim[index]->timeout_ticks,
+ tim[index], &entry, flags);
+ if (flags & CNXK_TIM_MP)
+ ret = cnxk_tim_add_entry_mp(tim_ring,
+ tim[index]->timeout_ticks,
+ tim[index], &entry, flags);
+
+ if (unlikely(ret)) {
+ rte_errno = -ret;
+ break;
+ }
+ }
+
+ return index;
+}
+
+#define FP(_name, _f2, _f1, _flags) \
+ uint16_t __rte_noinline cnxk_tim_arm_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint16_t nb_timers) \
+ { \
+ return cnxk_tim_timer_arm_burst(adptr, tim, nb_timers, \
+ _flags); \
+ }
+TIM_ARM_FASTPATH_MODES
+#undef FP
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index d56e67360..de8464e33 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -120,4 +120,304 @@ cnxk_tim_bkt_clr_nent(struct cnxk_tim_bkt *bktp)
return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
}
+static inline uint64_t
+cnxk_tim_bkt_fast_mod(uint64_t n, uint64_t d, struct rte_reciprocal_u64 R)
+{
+ return (n - (d * rte_reciprocal_divide_u64(n, &R)));
+}
+
+static __rte_always_inline void
+cnxk_tim_get_target_bucket(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct cnxk_tim_bkt **bkt,
+ struct cnxk_tim_bkt **mirr_bkt)
+{
+ const uint64_t bkt_cyc = cnxk_tim_cntvct() - tim_ring->ring_start_cyc;
+ uint64_t bucket =
+ rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div) +
+ rel_bkt;
+ uint64_t mirr_bucket = 0;
+
+ bucket = cnxk_tim_bkt_fast_mod(bucket, tim_ring->nb_bkts,
+ tim_ring->fast_bkt);
+ mirr_bucket =
+ cnxk_tim_bkt_fast_mod(bucket + (tim_ring->nb_bkts >> 1),
+ tim_ring->nb_bkts, tim_ring->fast_bkt);
+ *bkt = &tim_ring->bkt[bucket];
+ *mirr_bkt = &tim_ring->bkt[mirr_bucket];
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_clr_bkt(struct cnxk_tim_ring *const tim_ring,
+ struct cnxk_tim_bkt *const bkt)
+{
+#define TIM_MAX_OUTSTANDING_OBJ 64
+ void *pend_chunks[TIM_MAX_OUTSTANDING_OBJ];
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_ent *pnext;
+ uint8_t objs = 0;
+
+ chunk = ((struct cnxk_tim_ent *)(uintptr_t)bkt->first_chunk);
+ chunk = (struct cnxk_tim_ent *)(uintptr_t)(chunk +
+ tim_ring->nb_chunk_slots)
+ ->w0;
+ while (chunk) {
+ pnext = (struct cnxk_tim_ent *)(uintptr_t)(
+ (chunk + tim_ring->nb_chunk_slots)->w0);
+ if (objs == TIM_MAX_OUTSTANDING_OBJ) {
+ rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
+ objs);
+ objs = 0;
+ }
+ pend_chunks[objs++] = chunk;
+ chunk = pnext;
+ }
+
+ if (objs)
+ rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks, objs);
+
+ return (struct cnxk_tim_ent *)(uintptr_t)bkt->first_chunk;
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_refill_chunk(struct cnxk_tim_bkt *const bkt,
+ struct cnxk_tim_bkt *const mirr_bkt,
+ struct cnxk_tim_ring *const tim_ring)
+{
+ struct cnxk_tim_ent *chunk;
+
+ if (bkt->nb_entry || !bkt->first_chunk) {
+ if (unlikely(rte_mempool_get(tim_ring->chunk_pool,
+ (void **)&chunk)))
+ return NULL;
+ if (bkt->nb_entry) {
+ *(uint64_t *)(((struct cnxk_tim_ent *)
+ mirr_bkt->current_chunk) +
+ tim_ring->nb_chunk_slots) =
+ (uintptr_t)chunk;
+ } else {
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ } else {
+ chunk = cnxk_tim_clr_bkt(tim_ring, bkt);
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+
+ return chunk;
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_insert_chunk(struct cnxk_tim_bkt *const bkt,
+ struct cnxk_tim_bkt *const mirr_bkt,
+ struct cnxk_tim_ring *const tim_ring)
+{
+ struct cnxk_tim_ent *chunk;
+
+ if (unlikely(rte_mempool_get(tim_ring->chunk_pool, (void **)&chunk)))
+ return NULL;
+
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+ if (bkt->nb_entry) {
+ *(uint64_t *)(((struct cnxk_tim_ent *)(uintptr_t)
+ mirr_bkt->current_chunk) +
+ tim_ring->nb_chunk_slots) = (uintptr_t)chunk;
+ } else {
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ return chunk;
+}
+
+static __rte_always_inline int
+cnxk_tim_add_entry_sp(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct rte_event_timer *const tim,
+ const struct cnxk_tim_ent *const pent,
+ const uint8_t flags)
+{
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+ int16_t rem;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+
+ /* Get bucket sema. */
+ lock_sema = cnxk_tim_bkt_fetch_sema_lock(bkt);
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+ /* Insert the work. */
+ rem = cnxk_tim_bkt_fetch_rem(lock_sema);
+
+ if (!rem) {
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ bkt->chunk_remainder = 0;
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOMEM;
+ }
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ bkt->chunk_remainder = tim_ring->nb_chunk_slots - 1;
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += tim_ring->nb_chunk_slots - rem;
+ }
+
+ /* Copy work entry. */
+ *chunk = *pent;
+
+ tim->impl_opaque[0] = (uintptr_t)chunk;
+ tim->impl_opaque[1] = (uintptr_t)bkt;
+ __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
+ cnxk_tim_bkt_inc_nent(bkt);
+ cnxk_tim_bkt_dec_lock_relaxed(bkt);
+
+ return 0;
+}
+
+static __rte_always_inline int
+cnxk_tim_add_entry_mp(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct rte_event_timer *const tim,
+ const struct cnxk_tim_ent *const pent,
+ const uint8_t flags)
+{
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+ int16_t rem;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+ /* Get bucket sema. */
+ lock_sema = cnxk_tim_bkt_fetch_sema_lock(bkt);
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+
+ rem = cnxk_tim_bkt_fetch_rem(lock_sema);
+ if (rem < 0) {
+ cnxk_tim_bkt_dec_lock(bkt);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[rem], [%[crem]] \n"
+ " tbz %[rem], 63, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[rem], [%[crem]] \n"
+ " tbnz %[rem], 63, rty%= \n"
+ "dne%=: \n"
+ : [rem] "=&r"(rem)
+ : [crem] "r"(&bkt->w1)
+ : "memory");
+#else
+ while (__atomic_load_n((int64_t *)&bkt->w1, __ATOMIC_RELAXED) <
+ 0)
+ ;
+#endif
+ goto __retry;
+ } else if (!rem) {
+ /* Only one thread can be here. */
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ cnxk_tim_bkt_set_rem(bkt, 0);
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOMEM;
+ }
+ *chunk = *pent;
+ if (cnxk_tim_bkt_fetch_lock(lock_sema)) {
+ do {
+ lock_sema = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (cnxk_tim_bkt_fetch_lock(lock_sema) - 1);
+ }
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ __atomic_store_n(&bkt->chunk_remainder,
+ tim_ring->nb_chunk_slots - 1,
+ __ATOMIC_RELEASE);
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += tim_ring->nb_chunk_slots - rem;
+ *chunk = *pent;
+ }
+
+ tim->impl_opaque[0] = (uintptr_t)chunk;
+ tim->impl_opaque[1] = (uintptr_t)bkt;
+ __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
+ cnxk_tim_bkt_inc_nent(bkt);
+ cnxk_tim_bkt_dec_lock_relaxed(bkt);
+
+ return 0;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v3 29/33] event/cnxk: add timer arm timeout burst
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (27 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 28/33] event/cnxk: add timer arm routine pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 30/33] event/cnxk: add timer cancel function pbhagavatula
` (5 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer arm timeout burst function.
All the timers requested to be armed have the same timeout.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 7 ++
drivers/event/cnxk/cnxk_tim_evdev.h | 12 +++
drivers/event/cnxk/cnxk_tim_worker.c | 53 ++++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 141 +++++++++++++++++++++++++++
4 files changed, 213 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index ecc952a6a..68c3b3049 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -88,7 +88,14 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
#undef FP
};
+ const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2] = {
+#define FP(_name, _f1, flags) [_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
+ TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+ };
+
cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
+ cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
}
static void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 1f2aad17a..b66aac17c 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -217,6 +217,10 @@ cnxk_tim_cntfrq(void)
FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+#define TIM_ARM_TMO_FASTPATH_MODES \
+ FP(dfb, 0, CNXK_TIM_ENA_DFB) \
+ FP(fb, 1, CNXK_TIM_ENA_FB)
+
#define FP(_name, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
@@ -224,6 +228,14 @@ cnxk_tim_cntfrq(void)
TIM_ARM_FASTPATH_MODES
#undef FP
+#define FP(_name, _f1, flags) \
+ uint16_t cnxk_tim_arm_tmo_tick_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint64_t timeout_tick, \
+ const uint16_t nb_timers);
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 268f845c8..717c53fb7 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -99,3 +99,56 @@ cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
}
TIM_ARM_FASTPATH_MODES
#undef FP
+
+static __rte_always_inline uint16_t
+cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint64_t timeout_tick,
+ const uint16_t nb_timers, const uint8_t flags)
+{
+ struct cnxk_tim_ent entry[CNXK_TIM_MAX_BURST] __rte_cache_aligned;
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ uint16_t set_timers = 0;
+ uint16_t arr_idx = 0;
+ uint16_t idx;
+ int ret;
+
+ if (unlikely(!timeout_tick || timeout_tick > tim_ring->nb_bkts)) {
+ const enum rte_event_timer_state state =
+ timeout_tick ? RTE_EVENT_TIMER_ERROR_TOOLATE :
+ RTE_EVENT_TIMER_ERROR_TOOEARLY;
+ for (idx = 0; idx < nb_timers; idx++)
+ tim[idx]->state = state;
+
+ rte_errno = EINVAL;
+ return 0;
+ }
+
+ cnxk_tim_sync_start_cyc(tim_ring);
+ while (arr_idx < nb_timers) {
+ for (idx = 0; idx < CNXK_TIM_MAX_BURST && (arr_idx < nb_timers);
+ idx++, arr_idx++) {
+ cnxk_tim_format_event(tim[arr_idx], &entry[idx]);
+ }
+ ret = cnxk_tim_add_entry_brst(tim_ring, timeout_tick,
+ &tim[set_timers], entry, idx,
+ flags);
+ set_timers += ret;
+ if (ret != idx)
+ break;
+ }
+
+ return set_timers;
+}
+
+#define FP(_name, _f1, _flags) \
+ uint16_t __rte_noinline cnxk_tim_arm_tmo_tick_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint64_t timeout_tick, \
+ const uint16_t nb_timers) \
+ { \
+ return cnxk_tim_timer_arm_tmo_brst(adptr, tim, timeout_tick, \
+ nb_timers, _flags); \
+ }
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index de8464e33..56cb4cdd9 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -420,4 +420,145 @@ cnxk_tim_add_entry_mp(struct cnxk_tim_ring *const tim_ring,
return 0;
}
+static inline uint16_t
+cnxk_tim_cpy_wrk(uint16_t index, uint16_t cpy_lmt, struct cnxk_tim_ent *chunk,
+ struct rte_event_timer **const tim,
+ const struct cnxk_tim_ent *const ents,
+ const struct cnxk_tim_bkt *const bkt)
+{
+ for (; index < cpy_lmt; index++) {
+ *chunk = *(ents + index);
+ tim[index]->impl_opaque[0] = (uintptr_t)chunk++;
+ tim[index]->impl_opaque[1] = (uintptr_t)bkt;
+ tim[index]->state = RTE_EVENT_TIMER_ARMED;
+ }
+
+ return index;
+}
+
+/* Burst mode functions */
+static inline int
+cnxk_tim_add_entry_brst(struct cnxk_tim_ring *const tim_ring,
+ const uint16_t rel_bkt,
+ struct rte_event_timer **const tim,
+ const struct cnxk_tim_ent *ents,
+ const uint16_t nb_timers, const uint8_t flags)
+{
+ struct cnxk_tim_ent *chunk = NULL;
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_bkt *bkt;
+ uint16_t chunk_remainder;
+ uint16_t index = 0;
+ uint64_t lock_sema;
+ int16_t rem, crem;
+ uint8_t lock_cnt;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+
+ /* Only one thread beyond this. */
+ lock_sema = cnxk_tim_bkt_inc_lock(bkt);
+ lock_cnt = (uint8_t)((lock_sema >> TIM_BUCKET_W1_S_LOCK) &
+ TIM_BUCKET_W1_M_LOCK);
+
+ if (lock_cnt) {
+ cnxk_tim_bkt_dec_lock(bkt);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxrb %w[lock_cnt], [%[lock]] \n"
+ " tst %w[lock_cnt], 255 \n"
+ " beq dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxrb %w[lock_cnt], [%[lock]] \n"
+ " tst %w[lock_cnt], 255 \n"
+ " bne rty%= \n"
+ "dne%=: \n"
+ : [lock_cnt] "=&r"(lock_cnt)
+ : [lock] "r"(&bkt->lock)
+ : "memory");
+#else
+ while (__atomic_load_n(&bkt->lock, __ATOMIC_RELAXED))
+ ;
+#endif
+ goto __retry;
+ }
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+
+ chunk_remainder = cnxk_tim_bkt_fetch_rem(lock_sema);
+ rem = chunk_remainder - nb_timers;
+ if (rem < 0) {
+ crem = tim_ring->nb_chunk_slots - chunk_remainder;
+ if (chunk_remainder && crem) {
+ chunk = ((struct cnxk_tim_ent *)
+ mirr_bkt->current_chunk) +
+ crem;
+
+ index = cnxk_tim_cpy_wrk(index, chunk_remainder, chunk,
+ tim, ents, bkt);
+ cnxk_tim_bkt_sub_rem(bkt, chunk_remainder);
+ cnxk_tim_bkt_add_nent(bkt, chunk_remainder);
+ }
+
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ rte_errno = ENOMEM;
+ tim[index]->state = RTE_EVENT_TIMER_ERROR;
+ return crem;
+ }
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ cnxk_tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
+
+ rem = nb_timers - chunk_remainder;
+ cnxk_tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem);
+ cnxk_tim_bkt_add_nent(bkt, rem);
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += (tim_ring->nb_chunk_slots - chunk_remainder);
+
+ cnxk_tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
+ cnxk_tim_bkt_sub_rem(bkt, nb_timers);
+ cnxk_tim_bkt_add_nent(bkt, nb_timers);
+ }
+
+ cnxk_tim_bkt_dec_lock(bkt);
+
+ return nb_timers;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
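The splitting loop in `cnxk_tim_timer_arm_tmo_brst()` above walks the request in CNXK_TIM_MAX_BURST-sized chunks and stops at the first chunk the ring only partially accepts. A minimal standalone sketch of just that control flow (hypothetical names; the real driver formats hardware entries into an on-stack array and takes bucket locks, which are stubbed out here):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_BURST 8 /* stands in for CNXK_TIM_MAX_BURST */

/* Hypothetical backend: attempts to insert 'count' timers, returns how many
 * actually fit (the role of cnxk_tim_add_entry_brst() in the patch). */
typedef uint16_t (*add_brst_fn)(uint16_t count, void *ctx);

uint16_t accept_all(uint16_t count, void *ctx)
{
	(void)ctx;
	return count;
}

uint16_t accept_capped(uint16_t count, void *ctx)
{
	uint16_t *cap = ctx;
	uint16_t take = count < *cap ? count : *cap;

	*cap -= take;
	return take;
}

/* Mirrors the while/for structure of cnxk_tim_timer_arm_tmo_brst(): gather at
 * most MAX_BURST entries per round, stop as soon as a round is only partially
 * accepted, and report the total number of timers actually set. */
uint16_t
arm_tmo_burst_sketch(uint16_t nb_timers, add_brst_fn add, void *ctx)
{
	uint16_t set_timers = 0;
	uint16_t arr_idx = 0;
	uint16_t idx, ret;

	while (arr_idx < nb_timers) {
		for (idx = 0; idx < MAX_BURST && arr_idx < nb_timers;
		     idx++, arr_idx++)
			; /* the driver formats entry[idx] here */
		ret = add(idx, ctx);
		set_timers += ret;
		if (ret != idx) /* partial acceptance: stop early */
			break;
	}
	return set_timers;
}
```

With a backend that runs out of space mid-burst, the caller sees exactly how many timers were armed, matching the `set_timers` accounting in the patch.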
* [dpdk-dev] [PATCH v3 30/33] event/cnxk: add timer cancel function
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (28 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 29/33] event/cnxk: add timer arm timeout burst pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 31/33] event/cnxk: add timer stats get and reset pbhagavatula
` (4 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a function to cancel an event timer that has been armed.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 1 +
drivers/event/cnxk/cnxk_tim_evdev.h | 5 ++++
drivers/event/cnxk/cnxk_tim_worker.c | 30 ++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 37 ++++++++++++++++++++++++++++
4 files changed, 73 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 68c3b3049..62a15a4a1 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -96,6 +96,7 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
+ cnxk_tim_ops.cancel_burst = cnxk_tim_timer_cancel_burst;
}
static void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index b66aac17c..001f448d5 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -236,6 +236,11 @@ TIM_ARM_FASTPATH_MODES
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
+uint16_t
+cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint16_t nb_timers);
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 717c53fb7..98ff143c3 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -152,3 +152,33 @@ cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
}
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
+
+uint16_t
+cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint16_t nb_timers)
+{
+ uint16_t index;
+ int ret;
+
+ RTE_SET_USED(adptr);
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ for (index = 0; index < nb_timers; index++) {
+ if (tim[index]->state == RTE_EVENT_TIMER_CANCELED) {
+ rte_errno = EALREADY;
+ break;
+ }
+
+ if (tim[index]->state != RTE_EVENT_TIMER_ARMED) {
+ rte_errno = EINVAL;
+ break;
+ }
+ ret = cnxk_tim_rm_entry(tim[index]);
+ if (ret) {
+ rte_errno = -ret;
+ break;
+ }
+ }
+
+ return index;
+}
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index 56cb4cdd9..7caeb1a8f 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -561,4 +561,41 @@ cnxk_tim_add_entry_brst(struct cnxk_tim_ring *const tim_ring,
return nb_timers;
}
+static int
+cnxk_tim_rm_entry(struct rte_event_timer *tim)
+{
+ struct cnxk_tim_ent *entry;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+
+ if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0)
+ return -ENOENT;
+
+ entry = (struct cnxk_tim_ent *)(uintptr_t)tim->impl_opaque[0];
+ if (entry->wqe != tim->ev.u64) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ return -ENOENT;
+ }
+
+ bkt = (struct cnxk_tim_bkt *)(uintptr_t)tim->impl_opaque[1];
+ lock_sema = cnxk_tim_bkt_inc_lock(bkt);
+ if (cnxk_tim_bkt_get_hbt(lock_sema) ||
+ !cnxk_tim_bkt_get_nent(lock_sema)) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOENT;
+ }
+
+ entry->w0 = 0;
+ entry->wqe = 0;
+ tim->state = RTE_EVENT_TIMER_CANCELED;
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ cnxk_tim_bkt_dec_lock(bkt);
+
+ return 0;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
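The per-timer validation order in `cnxk_tim_timer_cancel_burst()` above determines both the return value and the `rte_errno` the application sees. A standalone sketch of that state machine (hypothetical names; `sketch_errno` stands in for `rte_errno`, and the hardware entry removal done by `cnxk_tim_rm_entry()` is reduced to a state change):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

enum timer_state { TIM_ARMED, TIM_CANCELED, TIM_NOT_ARMED };

int sketch_errno; /* stands in for rte_errno */

/* Mirrors cnxk_tim_timer_cancel_burst(): stop at the first timer that is
 * already canceled (EALREADY) or was never armed (EINVAL), and return how
 * many timers were successfully canceled before that point. */
uint16_t
cancel_burst_sketch(enum timer_state *tim, uint16_t nb_timers)
{
	uint16_t index;

	for (index = 0; index < nb_timers; index++) {
		if (tim[index] == TIM_CANCELED) {
			sketch_errno = EALREADY;
			break;
		}
		if (tim[index] != TIM_ARMED) {
			sketch_errno = EINVAL;
			break;
		}
		tim[index] = TIM_CANCELED; /* driver removes the HW entry */
	}
	return index;
}
```

As in the patch, a partial return value plus `rte_errno` tells the caller which timer in the burst failed and why.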
* [dpdk-dev] [PATCH v3 31/33] event/cnxk: add timer stats get and reset
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (29 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 30/33] event/cnxk: add timer cancel function pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 32/33] event/cnxk: add timer adapter start and stop pbhagavatula
` (3 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter statistics get and reset functions.
Stats are disabled by default and can be enabled through devargs.
Example:
-a "0002:1e:00.0,tim_stats_ena=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 9 +++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_tim_evdev.c | 50 ++++++++++++++++++++++++----
drivers/event/cnxk/cnxk_tim_evdev.h | 38 ++++++++++++++-------
drivers/event/cnxk/cnxk_tim_worker.c | 11 ++++--
6 files changed, 91 insertions(+), 23 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index a8199aac7..11145dd7d 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -115,6 +115,15 @@ Runtime Config Options
-a 0002:0e:00.0,tim_chnk_slots=1023
+- ``TIM enable arm/cancel statistics``
+
+ The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of
+ the event timer adapter.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_stats_ena=1
+
- ``TIM limit max rings reserved``
The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9fdb66343..07bdb9d85 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -505,4 +505,5 @@ RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CN10K_SSO_GW_MODE "=<int>"
CNXK_TIM_DISABLE_NPA "=1"
CNXK_TIM_CHNK_SLOTS "=<int>"
- CNXK_TIM_RINGS_LMT "=<int>");
+ CNXK_TIM_RINGS_LMT "=<int>"
+ CNXK_TIM_STATS_ENA "=1");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index a68502703..f72b3b11a 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -574,4 +574,5 @@ RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CN9K_SSO_SINGLE_WS "=1"
CNXK_TIM_DISABLE_NPA "=1"
CNXK_TIM_CHNK_SLOTS "=<int>"
- CNXK_TIM_RINGS_LMT "=<int>");
+ CNXK_TIM_RINGS_LMT "=<int>"
+ CNXK_TIM_STATS_ENA "=1");
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 62a15a4a1..a73ca33d8 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -81,21 +81,25 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
{
uint8_t prod_flag = !tim_ring->prod_type_sp;
- /* [DFB/FB] [SP][MP]*/
- const rte_event_timer_arm_burst_t arm_burst[2][2] = {
-#define FP(_name, _f2, _f1, flags) [_f2][_f1] = cnxk_tim_arm_burst_##_name,
+ /* [STATS] [DFB/FB] [SP][MP]*/
+ const rte_event_timer_arm_burst_t arm_burst[2][2][2] = {
+#define FP(_name, _f3, _f2, _f1, flags) \
+ [_f3][_f2][_f1] = cnxk_tim_arm_burst_##_name,
TIM_ARM_FASTPATH_MODES
#undef FP
};
- const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2] = {
-#define FP(_name, _f1, flags) [_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
+ const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) \
+ [_f2][_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
};
- cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
- cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
+ cnxk_tim_ops.arm_burst =
+ arm_burst[tim_ring->enable_stats][tim_ring->ena_dfb][prod_flag];
+ cnxk_tim_ops.arm_tmo_tick_burst =
+ arm_tmo_burst[tim_ring->enable_stats][tim_ring->ena_dfb];
cnxk_tim_ops.cancel_burst = cnxk_tim_timer_cancel_burst;
}
@@ -159,6 +163,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->nb_timers = rcfg->nb_timers;
tim_ring->chunk_sz = dev->chunk_sz;
tim_ring->disable_npa = dev->disable_npa;
+ tim_ring->enable_stats = dev->enable_stats;
if (tim_ring->disable_npa) {
tim_ring->nb_chunks =
@@ -241,6 +246,30 @@ cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
}
+static int
+cnxk_tim_stats_get(const struct rte_event_timer_adapter *adapter,
+ struct rte_event_timer_adapter_stats *stats)
+{
+ struct cnxk_tim_ring *tim_ring = adapter->data->adapter_priv;
+ uint64_t bkt_cyc = cnxk_tim_cntvct() - tim_ring->ring_start_cyc;
+
+ stats->evtim_exp_count =
+ __atomic_load_n(&tim_ring->arm_cnt, __ATOMIC_RELAXED);
+ stats->ev_enq_count = stats->evtim_exp_count;
+ stats->adapter_tick_count =
+ rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div);
+ return 0;
+}
+
+static int
+cnxk_tim_stats_reset(const struct rte_event_timer_adapter *adapter)
+{
+ struct cnxk_tim_ring *tim_ring = adapter->data->adapter_priv;
+
+ __atomic_store_n(&tim_ring->arm_cnt, 0, __ATOMIC_RELAXED);
+ return 0;
+}
+
int
cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
uint32_t *caps,
@@ -258,6 +287,11 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
+ if (dev->enable_stats) {
+ cnxk_tim_ops.stats_get = cnxk_tim_stats_get;
+ cnxk_tim_ops.stats_reset = cnxk_tim_stats_reset;
+ }
+
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
@@ -281,6 +315,8 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
&dev->disable_npa);
rte_kvargs_process(kvlist, CNXK_TIM_CHNK_SLOTS, &parse_kvargs_value,
&dev->chunk_slots);
+ rte_kvargs_process(kvlist, CNXK_TIM_STATS_ENA, &parse_kvargs_flag,
+ &dev->enable_stats);
rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
&dev->min_ring_cnt);
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 001f448d5..b5e4cfc9e 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -36,12 +36,14 @@
#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
+#define CNXK_TIM_STATS_ENA "tim_stats_ena"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
-#define CNXK_TIM_SP 0x1
-#define CNXK_TIM_MP 0x2
-#define CNXK_TIM_ENA_FB 0x10
-#define CNXK_TIM_ENA_DFB 0x20
+#define CNXK_TIM_SP 0x1
+#define CNXK_TIM_MP 0x2
+#define CNXK_TIM_ENA_FB 0x10
+#define CNXK_TIM_ENA_DFB 0x20
+#define CNXK_TIM_ENA_STATS 0x40
#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
@@ -82,6 +84,7 @@ struct cnxk_tim_evdev {
uint8_t disable_npa;
uint16_t chunk_slots;
uint16_t min_ring_cnt;
+ uint8_t enable_stats;
};
enum cnxk_tim_clk_src {
@@ -123,6 +126,7 @@ struct cnxk_tim_ring {
struct rte_reciprocal_u64 fast_bkt;
uint64_t arm_cnt;
uint8_t prod_type_sp;
+ uint8_t enable_stats;
uint8_t disable_npa;
uint8_t ena_dfb;
uint16_t ring_id;
@@ -212,23 +216,33 @@ cnxk_tim_cntfrq(void)
#endif
#define TIM_ARM_FASTPATH_MODES \
- FP(sp, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
- FP(mp, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
- FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
- FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+ FP(sp, 0, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(mp, 0, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(fb_sp, 0, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(fb_mp, 0, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP) \
+ FP(stats_sp, 1, 0, 0, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(stats_mp, 1, 0, 1, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(stats_fb_sp, 1, 1, 0, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(stats_fb_mp, 1, 1, 1, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB | CNXK_TIM_MP)
#define TIM_ARM_TMO_FASTPATH_MODES \
- FP(dfb, 0, CNXK_TIM_ENA_DFB) \
- FP(fb, 1, CNXK_TIM_ENA_FB)
+ FP(dfb, 0, 0, CNXK_TIM_ENA_DFB) \
+ FP(fb, 0, 1, CNXK_TIM_ENA_FB) \
+ FP(stats_dfb, 1, 0, CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB) \
+ FP(stats_fb, 1, 1, CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB)
-#define FP(_name, _f2, _f1, flags) \
+#define FP(_name, _f3, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint16_t nb_timers);
TIM_ARM_FASTPATH_MODES
#undef FP
-#define FP(_name, _f1, flags) \
+#define FP(_name, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_tmo_tick_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint64_t timeout_tick, \
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 98ff143c3..3ce99864a 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -86,10 +86,13 @@ cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
}
}
+ if (flags & CNXK_TIM_ENA_STATS)
+ __atomic_fetch_add(&tim_ring->arm_cnt, index, __ATOMIC_RELAXED);
+
return index;
}
-#define FP(_name, _f2, _f1, _flags) \
+#define FP(_name, _f3, _f2, _f1, _flags) \
uint16_t __rte_noinline cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint16_t nb_timers) \
@@ -138,10 +141,14 @@ cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
break;
}
+ if (flags & CNXK_TIM_ENA_STATS)
+ __atomic_fetch_add(&tim_ring->arm_cnt, set_timers,
+ __ATOMIC_RELAXED);
+
return set_timers;
}
-#define FP(_name, _f1, _flags) \
+#define FP(_name, _f2, _f1, _flags) \
uint16_t __rte_noinline cnxk_tim_arm_tmo_tick_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint64_t timeout_tick, \
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
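The `adapter_tick_count` computed in `cnxk_tim_stats_get()` above is simply the cycle delta since ring start divided by the cycles-per-bucket interval. A tiny standalone sketch of that math (hypothetical name; the driver uses `rte_reciprocal_divide_u64()` to avoid a hardware divide, plain 64-bit division is used here for clarity):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors cnxk_tim_stats_get(): ticks elapsed = (now - ring start) / tck_int,
 * where tck_int is the bucket interval in timer cycles. */
uint64_t
adapter_tick_count_sketch(uint64_t now_cyc, uint64_t ring_start_cyc,
			  uint64_t tck_int)
{
	return (now_cyc - ring_start_cyc) / tck_int;
}
```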
* [dpdk-dev] [PATCH v3 32/33] event/cnxk: add timer adapter start and stop
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (30 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 31/33] event/cnxk: add timer stats get and reset pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 33/33] event/cnxk: add devargs to control timer adapters pbhagavatula
` (2 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter start and stop functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 71 ++++++++++++++++++++++++++++-
1 file changed, 70 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index a73ca33d8..19b71b4f5 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -246,6 +246,73 @@ cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
}
+static void
+cnxk_tim_calibrate_start_tsc(struct cnxk_tim_ring *tim_ring)
+{
+#define CNXK_TIM_CALIB_ITER 1E6
+ uint32_t real_bkt, bucket;
+ int icount, ecount = 0;
+ uint64_t bkt_cyc;
+
+ for (icount = 0; icount < CNXK_TIM_CALIB_ITER; icount++) {
+ real_bkt = plt_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
+ bkt_cyc = cnxk_tim_cntvct();
+ bucket = (bkt_cyc - tim_ring->ring_start_cyc) /
+ tim_ring->tck_int;
+ bucket = bucket % (tim_ring->nb_bkts);
+ tim_ring->ring_start_cyc =
+ bkt_cyc - (real_bkt * tim_ring->tck_int);
+ if (bucket != real_bkt)
+ ecount++;
+ }
+ tim_ring->last_updt_cyc = bkt_cyc;
+ plt_tim_dbg("Bucket mispredict %3.2f distance %d\n",
+ 100 - (((double)(icount - ecount) / (double)icount) * 100),
+ bucket - real_bkt);
+}
+
+static int
+cnxk_tim_ring_start(const struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ rc = roc_tim_lf_enable(&dev->tim, tim_ring->ring_id,
+ &tim_ring->ring_start_cyc, NULL);
+ if (rc < 0)
+ return rc;
+
+ tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, cnxk_tim_cntfrq());
+ tim_ring->tot_int = tim_ring->tck_int * tim_ring->nb_bkts;
+ tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int);
+ tim_ring->fast_bkt = rte_reciprocal_value_u64(tim_ring->nb_bkts);
+
+ cnxk_tim_calibrate_start_tsc(tim_ring);
+
+ return rc;
+}
+
+static int
+cnxk_tim_ring_stop(const struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ rc = roc_tim_lf_disable(&dev->tim, tim_ring->ring_id);
+ if (rc < 0)
+ plt_err("Failed to disable timer ring");
+
+ return rc;
+}
+
static int
cnxk_tim_stats_get(const struct rte_event_timer_adapter *adapter,
struct rte_event_timer_adapter_stats *stats)
@@ -278,13 +345,14 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
RTE_SET_USED(flags);
- RTE_SET_USED(ops);
if (dev == NULL)
return -ENODEV;
cnxk_tim_ops.init = cnxk_tim_ring_create;
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+ cnxk_tim_ops.start = cnxk_tim_ring_start;
+ cnxk_tim_ops.stop = cnxk_tim_ring_stop;
cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
if (dev->enable_stats) {
@@ -295,6 +363,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+ *ops = &cnxk_tim_ops;
return 0;
}
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
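Each iteration of `cnxk_tim_calibrate_start_tsc()` above predicts the current bucket from the cycle counter, then re-anchors `ring_start_cyc` so the prediction matches the bucket index the hardware reports. The two halves of that computation can be sketched standalone (hypothetical names; the hardware register read is replaced by a parameter):

```c
#include <assert.h>
#include <stdint.h>

/* Re-anchor the ring start timestamp so that 'real_bkt' buckets worth of
 * tck_int cycles have elapsed at 'now_cyc', as one calibration iteration
 * in cnxk_tim_calibrate_start_tsc() does. */
uint64_t
recalib_start_cyc(uint64_t now_cyc, uint32_t real_bkt, uint64_t tck_int)
{
	return now_cyc - (uint64_t)real_bkt * tck_int;
}

/* Predict the current bucket from the cycle counter: elapsed cycles divided
 * by the bucket interval, wrapped to the number of buckets. */
uint32_t
predict_bucket(uint64_t now_cyc, uint64_t ring_start_cyc, uint64_t tck_int,
	       uint32_t nb_bkts)
{
	return (uint32_t)(((now_cyc - ring_start_cyc) / tck_int) % nb_bkts);
}
```

After re-anchoring, the software prediction agrees with the hardware bucket, which is exactly the mispredict condition the calibration loop counts.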
* [dpdk-dev] [PATCH v3 33/33] event/cnxk: add devargs to control timer adapters
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (31 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 32/33] event/cnxk: add timer adapter start and stop pbhagavatula
@ 2021-04-30 13:53 ` pbhagavatula
2021-05-01 12:03 ` [dpdk-dev] [PATCH v3 00/33] Marvell CNXK Event device Driver Jerin Jacob
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-04-30 13:53 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add devargs to control each event timer adapter's (i.e. TIM ring's) internal
parameters uniquely. The expected dict format is
[ring-chnk_slots-disable_npa-stats_ena]; 0 represents the default value.
Example:
-a "0002:1e:00.0,tim_ring_ctl=[2-1023-1-0]"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 11 ++++
drivers/event/cnxk/cnxk_tim_evdev.c | 96 ++++++++++++++++++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 10 +++
3 files changed, 116 insertions(+), 1 deletion(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 11145dd7d..1bd935abc 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -135,6 +135,17 @@ Runtime Config Options
-a 0002:0e:00.0,tim_rings_lmt=5
+- ``TIM ring control internal parameters``
+
+ When using multiple TIM rings the ``tim_ring_ctl`` devargs can be used to
+ control each TIM ring's internal parameters uniquely. The following dict
+ format is expected [ring-chnk_slots-disable_npa-stats_ena]. 0 represents
+ default values.
+
+ For Example::
+
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 19b71b4f5..9d40e336d 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -121,7 +121,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
struct cnxk_tim_ring *tim_ring;
- int rc;
+ int i, rc;
if (dev == NULL)
return -ENODEV;
@@ -165,6 +165,20 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->disable_npa = dev->disable_npa;
tim_ring->enable_stats = dev->enable_stats;
+ for (i = 0; i < dev->ring_ctl_cnt; i++) {
+ struct cnxk_tim_ctl *ring_ctl = &dev->ring_ctl_data[i];
+
+ if (ring_ctl->ring == tim_ring->ring_id) {
+ tim_ring->chunk_sz =
+ ring_ctl->chunk_slots ?
+ ((uint32_t)(ring_ctl->chunk_slots + 1) *
+ CNXK_TIM_CHUNK_ALIGNMENT) :
+ tim_ring->chunk_sz;
+ tim_ring->enable_stats = ring_ctl->enable_stats;
+ tim_ring->disable_npa = ring_ctl->disable_npa;
+ }
+ }
+
if (tim_ring->disable_npa) {
tim_ring->nb_chunks =
tim_ring->nb_timers /
@@ -368,6 +382,84 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
return 0;
}
+static void
+cnxk_tim_parse_ring_param(char *value, void *opaque)
+{
+ struct cnxk_tim_evdev *dev = opaque;
+ struct cnxk_tim_ctl ring_ctl = {0};
+ char *tok = strtok(value, "-");
+ struct cnxk_tim_ctl *old_ptr;
+ uint16_t *val;
+
+ val = (uint16_t *)&ring_ctl;
+
+ if (!strlen(value))
+ return;
+
+ while (tok != NULL) {
+ *val = atoi(tok);
+ tok = strtok(NULL, "-");
+ val++;
+ }
+
+ if (val != (&ring_ctl.enable_stats + 1)) {
+ plt_err("Invalid ring param expected [ring-chunk_sz-disable_npa-enable_stats]");
+ return;
+ }
+
+ dev->ring_ctl_cnt++;
+ old_ptr = dev->ring_ctl_data;
+ dev->ring_ctl_data =
+ rte_realloc(dev->ring_ctl_data,
+ sizeof(struct cnxk_tim_ctl) * dev->ring_ctl_cnt, 0);
+ if (dev->ring_ctl_data == NULL) {
+ dev->ring_ctl_data = old_ptr;
+ dev->ring_ctl_cnt--;
+ return;
+ }
+
+ dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl;
+}
+
+static void
+cnxk_tim_parse_ring_ctl_list(const char *value, void *opaque)
+{
+ char *s = strdup(value);
+ char *start = NULL;
+ char *end = NULL;
+ char *f = s;
+
+ while (*s) {
+ if (*s == '[')
+ start = s;
+ else if (*s == ']')
+ end = s;
+
+ if (start && start < end) {
+ *end = 0;
+ cnxk_tim_parse_ring_param(start + 1, opaque);
+ start = end;
+ s = end;
+ }
+ s++;
+ }
+
+ free(f);
+}
+
+static int
+cnxk_tim_parse_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ /* Dict format [ring-chunk_sz-disable_npa-enable_stats] use '-' as ','
+ * isn't allowed. 0 represents default.
+ */
+ cnxk_tim_parse_ring_ctl_list(value, opaque);
+
+ return 0;
+}
+
static void
cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
{
@@ -388,6 +480,8 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
&dev->enable_stats);
rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
&dev->min_ring_cnt);
+ rte_kvargs_process(kvlist, CNXK_TIM_RING_CTL,
+ &cnxk_tim_parse_kvargs_dict, dev);
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index b5e4cfc9e..c369f6f47 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -38,6 +38,7 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_STATS_ENA "tim_stats_ena"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define CNXK_TIM_RING_CTL "tim_ring_ctl"
#define CNXK_TIM_SP 0x1
#define CNXK_TIM_MP 0x2
@@ -75,6 +76,13 @@
#define TIM_BUCKET_SEMA_WLOCK \
(TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
+struct cnxk_tim_ctl {
+ uint16_t ring;
+ uint16_t chunk_slots;
+ uint16_t disable_npa;
+ uint16_t enable_stats;
+};
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
@@ -85,6 +93,8 @@ struct cnxk_tim_evdev {
uint16_t chunk_slots;
uint16_t min_ring_cnt;
uint8_t enable_stats;
+ uint16_t ring_ctl_cnt;
+ struct cnxk_tim_ctl *ring_ctl_data;
};
enum cnxk_tim_clk_src {
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
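The `tim_ring_ctl` parsing in `cnxk_tim_parse_ring_param()` above splits one `[ring-chnk_slots-disable_npa-stats_ena]` body on `-` (a `,` is not usable inside devargs) and fills four consecutive `uint16_t` fields through a walking pointer. A self-contained sketch of that approach (hypothetical `parse_ring_ctl`; it relies, as the driver does, on the struct being exactly four contiguous `uint16_t` fields):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ring_ctl {
	uint16_t ring;
	uint16_t chunk_slots;
	uint16_t disable_npa;
	uint16_t enable_stats;
};

/* Mirrors cnxk_tim_parse_ring_param(): tokenize on '-', assign fields in
 * declaration order, and reject inputs that do not yield exactly four
 * fields. Returns 0 on success, -1 on a malformed entry. */
int
parse_ring_ctl(const char *value, struct ring_ctl *out)
{
	uint16_t *val = (uint16_t *)out;
	char buf[64];
	char *tok;

	memset(out, 0, sizeof(*out));
	snprintf(buf, sizeof(buf), "%s", value);
	tok = strtok(buf, "-");
	while (tok != NULL) {
		if (val > &out->enable_stats)
			return -1; /* too many fields */
		*val++ = (uint16_t)atoi(tok);
		tok = strtok(NULL, "-");
	}
	return val == &out->enable_stats + 1 ? 0 : -1;
}
```

For the documented example `[2-1023-1-0]`, this yields ring 2 with 1023 chunk slots, NPA disabled, and stats disabled.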
* Re: [dpdk-dev] [PATCH v3 00/33] Marvell CNXK Event device Driver
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (32 preceding siblings ...)
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 33/33] event/cnxk: add devargs to control timer adapters pbhagavatula
@ 2021-05-01 12:03 ` Jerin Jacob
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
34 siblings, 0 replies; 185+ messages in thread
From: Jerin Jacob @ 2021-05-01 12:03 UTC (permalink / raw)
To: Pavan Nikhilesh; +Cc: Jerin Jacob, dpdk-dev
On Fri, Apr 30, 2021 at 7:23 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> This patchset adds support for Marvell CN106XX SoC based on 'common/cnxk'
> driver. In future, CN9K a.k.a octeontx2 will also be supported by same
> driver when code is ready and 'event/octeontx2' will be deprecated.
>
> v3 Changes:
> - Fix documentation, copyright.
> - Update release notes.
Series applied to dpdk-next-eventdev/for-main. Thanks.
>
> v2 Changes:
> - Split Rx/Tx adapter into seperate patch set to remove dependency on net/cnxk
> - Add missing xStats patch.
> - Fix incorrect head wait operation.
>
> Pavan Nikhilesh (16):
> event/cnxk: add build infra and device setup
> event/cnxk: add platform specific device probe
> event/cnxk: add common configuration validation
> event/cnxk: allocate event inflight buffers
> event/cnxk: add devargs to configure getwork mode
> event/cnxk: add SSO HW device operations
> event/cnxk: add SSO GWS fastpath enqueue functions
> event/cnxk: add SSO GWS dequeue fastpath functions
> event/cnxk: add SSO selftest and dump
> event/cnxk: add event port and queue xstats
> event/cnxk: add devargs to disable NPA
> event/cnxk: allow adapters to resize inflights
> event/cnxk: add TIM bucket operations
> event/cnxk: add timer arm routine
> event/cnxk: add timer arm timeout burst
> event/cnxk: add timer cancel function
>
> Shijith Thotton (17):
> event/cnxk: add device capabilities function
> event/cnxk: add platform specific device config
> event/cnxk: add event queue config functions
> event/cnxk: add devargs for inflight buffer count
> event/cnxk: add devargs to control SSO HWGRP QoS
> event/cnxk: add port config functions
> event/cnxk: add event port link and unlink
> event/cnxk: add device start function
> event/cnxk: add device stop and close functions
> event/cnxk: support event timer
> event/cnxk: add timer adapter capabilities
> event/cnxk: create and free timer adapter
> event/cnxk: add timer adapter info function
> event/cnxk: add devargs for chunk size and rings
> event/cnxk: add timer stats get and reset
> event/cnxk: add timer adapter start and stop
> event/cnxk: add devargs to control timer adapters
>
> MAINTAINERS | 6 +
> app/test/test_eventdev.c | 14 +
> doc/guides/eventdevs/cnxk.rst | 162 ++
> doc/guides/eventdevs/index.rst | 1 +
> doc/guides/rel_notes/release_21_05.rst | 2 +
> drivers/common/cnxk/roc_sso.c | 63 +
> drivers/common/cnxk/roc_sso.h | 19 +
> drivers/event/cnxk/cn10k_eventdev.c | 509 ++++++
> drivers/event/cnxk/cn10k_worker.c | 115 ++
> drivers/event/cnxk/cn10k_worker.h | 175 +++
> drivers/event/cnxk/cn9k_eventdev.c | 578 +++++++
> drivers/event/cnxk/cn9k_worker.c | 236 +++
> drivers/event/cnxk/cn9k_worker.h | 297 ++++
> drivers/event/cnxk/cnxk_eventdev.c | 647 ++++++++
> drivers/event/cnxk/cnxk_eventdev.h | 253 +++
> drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 +
> drivers/event/cnxk/cnxk_eventdev_selftest.c | 1570 +++++++++++++++++++
> drivers/event/cnxk/cnxk_eventdev_stats.c | 289 ++++
> drivers/event/cnxk/cnxk_tim_evdev.c | 538 +++++++
> drivers/event/cnxk/cnxk_tim_evdev.h | 275 ++++
> drivers/event/cnxk/cnxk_tim_worker.c | 191 +++
> drivers/event/cnxk/cnxk_tim_worker.h | 601 +++++++
> drivers/event/cnxk/cnxk_worker.h | 101 ++
> drivers/event/cnxk/meson.build | 23 +
> drivers/event/cnxk/version.map | 3 +
> drivers/event/meson.build | 1 +
> 26 files changed, 6736 insertions(+)
> create mode 100644 doc/guides/eventdevs/cnxk.rst
> create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
> create mode 100644 drivers/event/cnxk/cn10k_worker.c
> create mode 100644 drivers/event/cnxk/cn10k_worker.h
> create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
> create mode 100644 drivers/event/cnxk/cn9k_worker.c
> create mode 100644 drivers/event/cnxk/cn9k_worker.h
> create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
> create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
> create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
> create mode 100644 drivers/event/cnxk/cnxk_eventdev_selftest.c
> create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
> create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
> create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
> create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
> create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
> create mode 100644 drivers/event/cnxk/cnxk_worker.h
> create mode 100644 drivers/event/cnxk/meson.build
> create mode 100644 drivers/event/cnxk/version.map
>
> --
> 2.17.1
>
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v4 00/34] Marvell CNXK Event device Driver
2021-04-30 13:53 ` [dpdk-dev] [PATCH v3 " pbhagavatula
` (33 preceding siblings ...)
2021-05-01 12:03 ` [dpdk-dev] [PATCH v3 00/33] Marvell CNXK Event device Driver Jerin Jacob
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 01/34] common/cnxk: rename deprecated constant pbhagavatula
` (34 more replies)
34 siblings, 35 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
This patchset adds support for Marvell CN106XX SoC based on 'common/cnxk'
driver. In the future, CN9K a.k.a. octeontx2 will also be supported by the same
driver when the code is ready, and 'event/octeontx2' will be deprecated.
v4 Changes:
- s/PCI_ANY_ID/RTE_PCI_ANY_ID.
- Remove dependency on net_cnxk
- Fix compilation issues with xstats patch.
v3 Changes:
- Fix documentation, copyright.
- Update release notes.
v2 Changes:
- Split Rx/Tx adapter into separate patch set to remove dependency on net/cnxk
- Add missing xStats patch.
- Fix incorrect head wait operation.
Pavan Nikhilesh (17):
common/cnxk: rename deprecated constant
event/cnxk: add build infra and device setup
event/cnxk: add platform specific device probe
event/cnxk: add common configuration validation
event/cnxk: allocate event inflight buffers
event/cnxk: add devargs to configure getwork mode
event/cnxk: add SSO HW device operations
event/cnxk: add SSO GWS fastpath enqueue functions
event/cnxk: add SSO GWS dequeue fastpath functions
event/cnxk: add SSO selftest and dump
event/cnxk: add event port and queue xstats
event/cnxk: add devargs to disable NPA
event/cnxk: allow adapters to resize inflights
event/cnxk: add TIM bucket operations
event/cnxk: add timer arm routine
event/cnxk: add timer arm timeout burst
event/cnxk: add timer cancel function
Shijith Thotton (17):
event/cnxk: add device capabilities function
event/cnxk: add platform specific device config
event/cnxk: add event queue config functions
event/cnxk: add devargs for inflight buffer count
event/cnxk: add devargs to control SSO HWGRP QoS
event/cnxk: add port config functions
event/cnxk: add event port link and unlink
event/cnxk: add device start function
event/cnxk: add device stop and close functions
event/cnxk: support event timer
event/cnxk: add timer adapter capabilities
event/cnxk: create and free timer adapter
event/cnxk: add timer adapter info function
event/cnxk: add devargs for chunk size and rings
event/cnxk: add timer stats get and reset
event/cnxk: add timer adapter start and stop
event/cnxk: add devargs to control timer adapters
MAINTAINERS | 6 +
app/test/test_eventdev.c | 14 +
doc/guides/eventdevs/cnxk.rst | 162 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_21_05.rst | 2 +
drivers/common/cnxk/roc_platform.h | 24 +-
drivers/common/cnxk/roc_sso.c | 63 +
drivers/common/cnxk/roc_sso.h | 19 +
drivers/common/cnxk/version.map | 2 +
drivers/event/cnxk/cn10k_eventdev.c | 509 ++++++
drivers/event/cnxk/cn10k_worker.c | 115 ++
drivers/event/cnxk/cn10k_worker.h | 175 +++
drivers/event/cnxk/cn9k_eventdev.c | 578 +++++++
drivers/event/cnxk/cn9k_worker.c | 236 +++
drivers/event/cnxk/cn9k_worker.h | 297 ++++
drivers/event/cnxk/cnxk_eventdev.c | 647 ++++++++
drivers/event/cnxk/cnxk_eventdev.h | 253 +++
drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 +
drivers/event/cnxk/cnxk_eventdev_selftest.c | 1570 +++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev_stats.c | 289 ++++
drivers/event/cnxk/cnxk_tim_evdev.c | 538 +++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 275 ++++
drivers/event/cnxk/cnxk_tim_worker.c | 191 +++
drivers/event/cnxk/cnxk_tim_worker.h | 601 +++++++
drivers/event/cnxk/cnxk_worker.h | 101 ++
drivers/event/cnxk/meson.build | 23 +
drivers/event/cnxk/version.map | 3 +
drivers/event/meson.build | 1 +
28 files changed, 6748 insertions(+), 14 deletions(-)
create mode 100644 doc/guides/eventdevs/cnxk.rst
create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
create mode 100644 drivers/event/cnxk/cn10k_worker.c
create mode 100644 drivers/event/cnxk/cn10k_worker.h
create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
create mode 100644 drivers/event/cnxk/cn9k_worker.c
create mode 100644 drivers/event/cnxk/cn9k_worker.h
create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev_selftest.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
create mode 100644 drivers/event/cnxk/cnxk_worker.h
create mode 100644 drivers/event/cnxk/meson.build
create mode 100644 drivers/event/cnxk/version.map
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v4 01/34] common/cnxk: rename deprecated constant
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 02/34] event/cnxk: add build infra and device setup pbhagavatula
` (33 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
The PCI_ANY_ID constant is deprecated and renamed as RTE_PCI_ANY_ID.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/common/cnxk/roc_platform.h | 24 ++++++++++--------------
1 file changed, 10 insertions(+), 14 deletions(-)
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 97600e56f..29ab71240 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -165,22 +165,18 @@ extern int cnxk_logtype_tm;
#define plt_tm_dbg(fmt, ...) plt_dbg(tm, fmt, ##__VA_ARGS__)
#ifdef __cplusplus
-#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- RTE_CLASS_ANY_ID, \
- PCI_VENDOR_ID_CAVIUM, \
- (dev), \
- PCI_ANY_ID, \
- (subsystem_dev), \
+#define CNXK_PCI_ID(subsystem_dev, dev) \
+ { \
+ RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
+ (subsystem_dev), \
}
#else
-#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- .class_id = RTE_CLASS_ANY_ID, \
- .vendor_id = PCI_VENDOR_ID_CAVIUM, \
- .device_id = (dev), \
- .subsystem_vendor_id = PCI_ANY_ID, \
- .subsystem_device_id = (subsystem_dev), \
+#define CNXK_PCI_ID(subsystem_dev, dev) \
+ { \
+ .class_id = RTE_CLASS_ANY_ID, \
+ .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
+ .subsystem_vendor_id = RTE_PCI_ANY_ID, \
+ .subsystem_device_id = (subsystem_dev), \
}
#endif
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
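The patch above keeps two bodies for CNXK_PCI_ID because C++ (before C++20) does not accept designated initializers, while the C branch uses them for clarity. A minimal standalone sketch of the C branch, using a toy struct whose field names mirror rte_pci_id (the wildcard constants and the subsystem/device ID values below are made up for illustration, not real Marvell IDs):

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for rte_pci_id; the real layout lives in rte_pci.h and the
 * wildcard values below are assumptions for illustration only. */
struct toy_pci_id {
	uint32_t class_id;
	uint16_t vendor_id;
	uint16_t device_id;
	uint16_t subsystem_vendor_id;
	uint16_t subsystem_device_id;
};

#define TOY_CLASS_ANY_ID  0xffffffffu
#define TOY_PCI_ANY_ID    0xffffu
#define TOY_VENDOR_CAVIUM 0x177du

/* Designated-initializer form, matching the shape of the C branch of
 * CNXK_PCI_ID after this patch. */
#define TOY_CNXK_PCI_ID(subsystem_dev, dev)                                  \
	{                                                                    \
		.class_id = TOY_CLASS_ANY_ID,                                \
		.vendor_id = TOY_VENDOR_CAVIUM, .device_id = (dev),          \
		.subsystem_vendor_id = TOY_PCI_ANY_ID,                       \
		.subsystem_device_id = (subsystem_dev),                      \
	}

/* PCI ID tables are conventionally terminated by a zeroed sentinel entry. */
static const struct toy_pci_id toy_map[] = {
	TOY_CNXK_PCI_ID(0xb100, 0xa0f9), /* hypothetical IDs */
	{ .vendor_id = 0 },
};
```

The designated form makes the subsystem_dev/dev argument order explicit at each field, which is exactly the readability the positional C++ branch cannot offer.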
* [dpdk-dev] [PATCH v4 02/34] event/cnxk: add build infra and device setup
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 01/34] common/cnxk: rename deprecated constant pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 03/34] event/cnxk: add device capabilities function pbhagavatula
` (32 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Thomas Monjalon, Pavan Nikhilesh, Shijith Thotton,
Ray Kinsella, Neil Horman, Anatoly Burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add the meson build infrastructure along with the event device
SSO initialization and teardown functions.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 6 +++
doc/guides/eventdevs/cnxk.rst | 55 +++++++++++++++++++++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_21_05.rst | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 68 ++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 39 +++++++++++++++
drivers/event/cnxk/meson.build | 13 +++++
drivers/event/cnxk/version.map | 3 ++
drivers/event/meson.build | 1 +
9 files changed, 188 insertions(+)
create mode 100644 doc/guides/eventdevs/cnxk.rst
create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
create mode 100644 drivers/event/cnxk/meson.build
create mode 100644 drivers/event/cnxk/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 44f3d322e..5a2297e99 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1224,6 +1224,12 @@ M: Jerin Jacob <jerinj@marvell.com>
F: drivers/event/octeontx2/
F: doc/guides/eventdevs/octeontx2.rst
+Marvell cnxk
+M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+M: Shijith Thotton <sthotton@marvell.com>
+F: drivers/event/cnxk/
+F: doc/guides/eventdevs/cnxk.rst
+
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Nipun Gupta <nipun.gupta@nxp.com>
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
new file mode 100644
index 000000000..148280b85
--- /dev/null
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -0,0 +1,55 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2021 Marvell.
+
+Marvell cnxk SSO Eventdev Driver
+================================
+
+The SSO PMD (**librte_event_cnxk**) provides poll mode
+eventdev driver support for the inbuilt event device found in the
+**Marvell OCTEON cnxk** SoC family.
+
+More information about the OCTEON cnxk SoC can be found at the `Marvell Official Website
+<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
+
+Supported OCTEON cnxk SoCs
+--------------------------
+
+- CN9XX
+- CN10XX
+
+Features
+--------
+
+Features of the OCTEON cnxk SSO PMD are:
+
+- 256 Event queues
+- 26 (dual) and 52 (single) Event ports on CN9XX
+- 52 Event ports on CN10XX
+- HW event scheduler
+- Supports 1M flows per event queue
+- Flow based event pipelining
+- Flow pinning support in flow based event pipelining
+- Queue based event pipelining
+- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
+- Event scheduling QoS based on event queue priority
+- Open system with configurable amount of outstanding events limited only by
+ DRAM
+- HW accelerated dequeue timeout support to enable power management
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+ See :doc:`../platform/cnxk` for setup information.
+
+Debugging Options
+-----------------
+
+.. _table_octeon_cnxk_event_debug_options:
+
+.. table:: OCTEON cnxk event device debug options
+
+ +---+------------+-------------------------------------------------------+
+ | # | Component | EAL log command |
+ +===+============+=======================================================+
+ | 1 | SSO | --log-level='pmd\.event\.cnxk,8' |
+ +---+------------+-------------------------------------------------------+
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index 738788d9e..214302539 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -11,6 +11,7 @@ application through the eventdev API.
:maxdepth: 2
:numbered:
+ cnxk
dlb2
dpaa
dpaa2
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index b3224dc33..428615e4f 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -75,6 +75,8 @@ New Features
net, crypto and event PMD's.
* Added mempool/cnxk driver which provides the support for the integrated
mempool device.
+ * Added event/cnxk driver which provides the support for integrated event
+ device.
* **Enhanced ethdev representor syntax.**
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
new file mode 100644
index 000000000..7ea782eaa
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+int
+cnxk_sso_init(struct rte_eventdev *event_dev)
+{
+ const struct rte_memzone *mz = NULL;
+ struct rte_pci_device *pci_dev;
+ struct cnxk_sso_evdev *dev;
+ int rc;
+
+ mz = rte_memzone_reserve(CNXK_SSO_MZ_NAME, sizeof(uint64_t),
+ SOCKET_ID_ANY, 0);
+ if (mz == NULL) {
+ plt_err("Failed to create eventdev memzone");
+ return -ENOMEM;
+ }
+
+ dev = cnxk_sso_pmd_priv(event_dev);
+ pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
+ dev->sso.pci_dev = pci_dev;
+
+ *(uint64_t *)mz->addr = (uint64_t)dev;
+
+ /* Initialize the base cnxk_dev object */
+ rc = roc_sso_dev_init(&dev->sso);
+ if (rc < 0) {
+ plt_err("Failed to initialize RoC SSO rc=%d", rc);
+ goto error;
+ }
+
+ dev->is_timeout_deq = 0;
+ dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+ dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
+ dev->max_num_events = -1;
+ dev->nb_event_queues = 0;
+ dev->nb_event_ports = 0;
+
+ return 0;
+
+error:
+ rte_memzone_free(mz);
+ return rc;
+}
+
+int
+cnxk_sso_fini(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ /* For secondary processes, nothing to be done */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ roc_sso_rsrc_fini(&dev->sso);
+ roc_sso_dev_fini(&dev->sso);
+
+ return 0;
+}
+
+int
+cnxk_sso_remove(struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_remove(pci_dev, cnxk_sso_fini);
+}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
new file mode 100644
index 000000000..74d0990fa
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_EVENTDEV_H__
+#define __CNXK_EVENTDEV_H__
+
+#include <rte_pci.h>
+
+#include <eventdev_pmd_pci.h>
+
+#include "roc_api.h"
+
+#define USEC2NSEC(__us) ((__us)*1E3)
+
+#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+
+struct cnxk_sso_evdev {
+ struct roc_sso sso;
+ uint8_t is_timeout_deq;
+ uint8_t nb_event_queues;
+ uint8_t nb_event_ports;
+ uint32_t min_dequeue_timeout_ns;
+ uint32_t max_dequeue_timeout_ns;
+ int32_t max_num_events;
+} __rte_cache_aligned;
+
+static inline struct cnxk_sso_evdev *
+cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
+{
+ return event_dev->data->dev_private;
+}
+
+/* Common ops API. */
+int cnxk_sso_init(struct rte_eventdev *event_dev);
+int cnxk_sso_fini(struct rte_eventdev *event_dev);
+int cnxk_sso_remove(struct rte_pci_device *pci_dev);
+
+#endif /* __CNXK_EVENTDEV_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
new file mode 100644
index 000000000..fbe245fca
--- /dev/null
+++ b/drivers/event/cnxk/meson.build
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2021 Marvell.
+#
+
+if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on 64-bit Linux'
+ subdir_done()
+endif
+
+sources = files('cnxk_eventdev.c')
+
+deps += ['bus_pci', 'common_cnxk']
diff --git a/drivers/event/cnxk/version.map b/drivers/event/cnxk/version.map
new file mode 100644
index 000000000..ee80c5172
--- /dev/null
+++ b/drivers/event/cnxk/version.map
@@ -0,0 +1,3 @@
+INTERNAL {
+ local: *;
+};
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index 539c5aeb9..63d6b410b 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -6,6 +6,7 @@ if is_windows
endif
drivers = [
+ 'cnxk',
'dlb2',
'dpaa',
'dpaa2',
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
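The dequeue-timeout bounds that cnxk_sso_init() stores come from the USEC2NSEC helper defined in cnxk_eventdev.h above. A small self-contained sketch of that arithmetic (note that 1E3 is a double constant, so the product is a double that is truncated when assigned to the uint32_t fields):

```c
#include <assert.h>
#include <stdint.h>

/* Same shape as the driver's helper in cnxk_eventdev.h. */
#define USEC2NSEC(__us) ((__us) * 1E3)

/* Bounds as initialized in cnxk_sso_init(): 1 us minimum, 0x3FF us maximum. */
static uint32_t min_deq_tmo_ns(void)
{
	return (uint32_t)USEC2NSEC(1);
}

static uint32_t max_deq_tmo_ns(void)
{
	return (uint32_t)USEC2NSEC(0x3FF);
}
```

So the device advertises a dequeue timeout range of 1000 ns to 1023000 ns (0x3FF = 1023 us).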
* [dpdk-dev] [PATCH v4 03/34] event/cnxk: add device capabilities function
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 01/34] common/cnxk: rename deprecated constant pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 02/34] event/cnxk: add build infra and device setup pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 04/34] event/cnxk: add platform specific device probe pbhagavatula
` (31 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add the info_get function to return details on the queues, flows,
prioritization capabilities, etc., that this device supports.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 24 ++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 4 ++++
2 files changed, 28 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 7ea782eaa..3a7053af6 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -4,6 +4,30 @@
#include "cnxk_eventdev.h"
+void
+cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
+ struct rte_event_dev_info *dev_info)
+{
+
+ dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
+ dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
+ dev_info->max_event_queues = dev->max_event_queues;
+ dev_info->max_event_queue_flows = (1ULL << 20);
+ dev_info->max_event_queue_priority_levels = 8;
+ dev_info->max_event_priority_levels = 1;
+ dev_info->max_event_ports = dev->max_event_ports;
+ dev_info->max_event_port_dequeue_depth = 1;
+ dev_info->max_event_port_enqueue_depth = 1;
+ dev_info->max_num_events = dev->max_num_events;
+ dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
+ RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
+ RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+ RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
+ RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 74d0990fa..9745bfd3e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,6 +17,8 @@
struct cnxk_sso_evdev {
struct roc_sso sso;
+ uint8_t max_event_queues;
+ uint8_t max_event_ports;
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
@@ -35,5 +37,7 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
int cnxk_sso_init(struct rte_eventdev *event_dev);
int cnxk_sso_fini(struct rte_eventdev *event_dev);
int cnxk_sso_remove(struct rte_pci_device *pci_dev);
+void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
+ struct rte_event_dev_info *dev_info);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
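Some of the fixed capability values reported by cnxk_sso_info_get() map directly onto the feature list in cnxk.rst; a quick sketch checking that the (1ULL << 20) flow count is the "1M flows per event queue" the documentation advertises, and that the per-port depths of 1 mean no enqueue/dequeue burst depth beyond a single event:

```c
#include <assert.h>
#include <stdint.h>

/* Values copied from cnxk_sso_info_get() in this patch. */
static const uint64_t max_queue_flows = 1ULL << 20;
static const uint8_t queue_prio_levels = 8; /* event queue priority levels */
static const uint8_t port_deq_depth = 1;    /* single-event dequeue */
static const uint8_t port_enq_depth = 1;    /* single-event enqueue */
```

1 << 20 is 1048576, i.e. the "1M" figure is a power-of-two megaflow count, not a decimal million.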
* [dpdk-dev] [PATCH v4 04/34] event/cnxk: add platform specific device probe
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (2 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 03/34] event/cnxk: add device capabilities function pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 05/34] event/cnxk: add common configuration validation pbhagavatula
` (30 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add platform-specific event device probe and remove callbacks, and
add the event device info get function.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 101 +++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 102 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 2 +
drivers/event/cnxk/meson.build | 5 +-
4 files changed, 209 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
new file mode 100644
index 000000000..1216acaad
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+static void
+cn10k_sso_set_rsrc(void *arg)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ dev->max_event_ports = dev->sso.max_hws;
+ dev->max_event_queues =
+ dev->sso.max_hwgrp > RTE_EVENT_MAX_QUEUES_PER_DEV ?
+ RTE_EVENT_MAX_QUEUES_PER_DEV :
+ dev->sso.max_hwgrp;
+}
+
+static void
+cn10k_sso_info_get(struct rte_eventdev *event_dev,
+ struct rte_event_dev_info *dev_info)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ dev_info->driver_name = RTE_STR(EVENTDEV_NAME_CN10K_PMD);
+ cnxk_sso_info_get(dev, dev_info);
+}
+
+static struct rte_eventdev_ops cn10k_sso_dev_ops = {
+ .dev_infos_get = cn10k_sso_info_get,
+};
+
+static int
+cn10k_sso_init(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ if (RTE_CACHE_LINE_SIZE != 64) {
+ plt_err("Driver not compiled for CN10K");
+ return -EFAULT;
+ }
+
+ rc = roc_plt_init();
+ if (rc < 0) {
+ plt_err("Failed to initialize platform model");
+ return rc;
+ }
+
+ event_dev->dev_ops = &cn10k_sso_dev_ops;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = cnxk_sso_init(event_dev);
+ if (rc < 0)
+ return rc;
+
+ cn10k_sso_set_rsrc(cnxk_sso_pmd_priv(event_dev));
+ if (!dev->max_event_ports || !dev->max_event_queues) {
+ plt_err("Not enough eventdev resource queues=%d ports=%d",
+ dev->max_event_queues, dev->max_event_ports);
+ cnxk_sso_fini(event_dev);
+ return -ENODEV;
+ }
+
+ plt_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
+ event_dev->data->name, dev->max_event_queues,
+ dev->max_event_ports);
+
+ return 0;
+}
+
+static int
+cn10k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_probe(pci_drv, pci_dev,
+ sizeof(struct cnxk_sso_evdev),
+ cn10k_sso_init);
+}
+
+static const struct rte_pci_id cn10k_pci_sso_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver cn10k_pci_sso = {
+ .id_table = cn10k_pci_sso_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+ .probe = cn10k_sso_probe,
+ .remove = cnxk_sso_remove,
+};
+
+RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
+RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
+RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
new file mode 100644
index 000000000..988d2425f
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+#define CN9K_DUAL_WS_NB_WS 2
+#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
+
+static void
+cn9k_sso_set_rsrc(void *arg)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ if (dev->dual_ws)
+ dev->max_event_ports = dev->sso.max_hws / CN9K_DUAL_WS_NB_WS;
+ else
+ dev->max_event_ports = dev->sso.max_hws;
+ dev->max_event_queues =
+ dev->sso.max_hwgrp > RTE_EVENT_MAX_QUEUES_PER_DEV ?
+ RTE_EVENT_MAX_QUEUES_PER_DEV :
+ dev->sso.max_hwgrp;
+}
+
+static void
+cn9k_sso_info_get(struct rte_eventdev *event_dev,
+ struct rte_event_dev_info *dev_info)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ dev_info->driver_name = RTE_STR(EVENTDEV_NAME_CN9K_PMD);
+ cnxk_sso_info_get(dev, dev_info);
+}
+
+static struct rte_eventdev_ops cn9k_sso_dev_ops = {
+ .dev_infos_get = cn9k_sso_info_get,
+};
+
+static int
+cn9k_sso_init(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ if (RTE_CACHE_LINE_SIZE != 128) {
+ plt_err("Driver not compiled for CN9K");
+ return -EFAULT;
+ }
+
+ rc = roc_plt_init();
+ if (rc < 0) {
+ plt_err("Failed to initialize platform model");
+ return rc;
+ }
+
+ event_dev->dev_ops = &cn9k_sso_dev_ops;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = cnxk_sso_init(event_dev);
+ if (rc < 0)
+ return rc;
+
+ cn9k_sso_set_rsrc(cnxk_sso_pmd_priv(event_dev));
+ if (!dev->max_event_ports || !dev->max_event_queues) {
+ plt_err("Not enough eventdev resource queues=%d ports=%d",
+ dev->max_event_queues, dev->max_event_ports);
+ cnxk_sso_fini(event_dev);
+ return -ENODEV;
+ }
+
+ plt_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
+ event_dev->data->name, dev->max_event_queues,
+ dev->max_event_ports);
+
+ return 0;
+}
+
+static int
+cn9k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_probe(
+ pci_drv, pci_dev, sizeof(struct cnxk_sso_evdev), cn9k_sso_init);
+}
+
+static const struct rte_pci_id cn9k_pci_sso_map[] = {
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver cn9k_pci_sso = {
+ .id_table = cn9k_pci_sso_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+ .probe = cn9k_sso_probe,
+ .remove = cnxk_sso_remove,
+};
+
+RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
+RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
+RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 9745bfd3e..6bdf0b347 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -25,6 +25,8 @@ struct cnxk_sso_evdev {
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
+ /* CN9K */
+ uint8_t dual_ws;
} __rte_cache_aligned;
static inline struct cnxk_sso_evdev *
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index fbe245fca..22eb28345 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -8,6 +8,9 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
subdir_done()
endif
-sources = files('cnxk_eventdev.c')
+sources = files('cn10k_eventdev.c',
+ 'cn9k_eventdev.c',
+ 'cnxk_eventdev.c',
+ )
deps += ['bus_pci', 'common_cnxk']
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
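The CN9K dual-workslot macros above encode how each event port is backed by a pair of hardware workslots: workslot id of pair x lives at index x*2 + id, and cn9k_sso_set_rsrc() halves the usable port count in dual mode. A pure-C restatement of that mapping:

```c
#include <assert.h>

/* Mirrors CN9K_DUAL_WS_NB_WS / CN9K_DUAL_WS_PAIR_ID from cn9k_eventdev.c. */
#define NB_WS 2
#define PAIR_ID(x, id) (((x) * NB_WS) + (id))

/* Mirrors the port-count computation in cn9k_sso_set_rsrc(): in dual-workslot
 * mode two hardware workslots back one event port. */
static int max_ports_from_hws(int max_hws, int dual_ws)
{
	return dual_ws ? max_hws / NB_WS : max_hws;
}
```

With 52 hardware workslots this reproduces the "26 (dual) and 52 (single) Event ports on CN9XX" figures from the cnxk.rst feature list.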
* [dpdk-dev] [PATCH v4 05/34] event/cnxk: add common configuration validation
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (3 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 04/34] event/cnxk: add platform specific device probe pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 06/34] event/cnxk: add platform specific device config pbhagavatula
` (29 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add common device configuration validation along with default port and
queue configuration functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 70 ++++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 +++
2 files changed, 76 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 3a7053af6..3eab1ed29 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,6 +28,76 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
+int
+cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
+{
+ struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint32_t deq_tmo_ns;
+
+ deq_tmo_ns = conf->dequeue_timeout_ns;
+
+ if (deq_tmo_ns == 0)
+ deq_tmo_ns = dev->min_dequeue_timeout_ns;
+ if (deq_tmo_ns < dev->min_dequeue_timeout_ns ||
+ deq_tmo_ns > dev->max_dequeue_timeout_ns) {
+ plt_err("Unsupported dequeue timeout requested");
+ return -EINVAL;
+ }
+
+ if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
+ dev->is_timeout_deq = 1;
+
+ dev->deq_tmo_ns = deq_tmo_ns;
+
+ if (!conf->nb_event_queues || !conf->nb_event_ports ||
+ conf->nb_event_ports > dev->max_event_ports ||
+ conf->nb_event_queues > dev->max_event_queues) {
+ plt_err("Unsupported event queues/ports requested");
+ return -EINVAL;
+ }
+
+ if (conf->nb_event_port_dequeue_depth > 1) {
+ plt_err("Unsupported event port deq depth requested");
+ return -EINVAL;
+ }
+
+ if (conf->nb_event_port_enqueue_depth > 1) {
+ plt_err("Unsupported event port enq depth requested");
+ return -EINVAL;
+ }
+
+ dev->nb_event_queues = conf->nb_event_queues;
+ dev->nb_event_ports = conf->nb_event_ports;
+
+ return 0;
+}
+
+void
+cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+ struct rte_event_queue_conf *queue_conf)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(queue_id);
+
+ queue_conf->nb_atomic_flows = (1ULL << 20);
+ queue_conf->nb_atomic_order_sequences = (1ULL << 20);
+ queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+ queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+}
+
+void
+cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+ struct rte_event_port_conf *port_conf)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ RTE_SET_USED(port_id);
+ port_conf->new_event_threshold = dev->max_num_events;
+ port_conf->dequeue_depth = 1;
+ port_conf->enqueue_depth = 1;
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 6bdf0b347..59d96a08f 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -22,6 +22,7 @@ struct cnxk_sso_evdev {
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
+ uint32_t deq_tmo_ns;
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
@@ -41,5 +42,10 @@ int cnxk_sso_fini(struct rte_eventdev *event_dev);
int cnxk_sso_remove(struct rte_pci_device *pci_dev);
void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
struct rte_event_dev_info *dev_info);
+int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
+void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+ struct rte_event_queue_conf *queue_conf);
+void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+ struct rte_event_port_conf *port_conf);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v4 06/34] event/cnxk: add platform specific device config
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (4 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 05/34] event/cnxk: add common configuration validation pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 07/34] event/cnxk: add event queue config functions pbhagavatula
` (28 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add platform-specific event device configuration that attaches the
requested number of SSO HWS (event port) and HWGRP (event queue) LFs
to the RVU PF/VF.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 35 +++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 37 +++++++++++++++++++++++++++++
2 files changed, 72 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 1216acaad..779a2e026 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -16,6 +16,14 @@ cn10k_sso_set_rsrc(void *arg)
dev->sso.max_hwgrp;
}
+static int
+cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
+}
+
static void
cn10k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -26,8 +34,35 @@ cn10k_sso_info_get(struct rte_eventdev *event_dev,
cnxk_sso_info_get(dev, dev_info);
}
+static int
+cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ rc = cnxk_sso_dev_validate(event_dev);
+ if (rc < 0) {
+ plt_err("Invalid event device configuration");
+ return -EINVAL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+
+ rc = cn10k_sso_rsrc_init(dev, dev->nb_event_ports,
+ dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to initialize SSO resources");
+ return -ENODEV;
+ }
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
+ .dev_configure = cn10k_sso_dev_configure,
+ .queue_def_conf = cnxk_sso_queue_def_conf,
+ .port_def_conf = cnxk_sso_port_def_conf,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 988d2425f..d042f58da 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -22,6 +22,17 @@ cn9k_sso_set_rsrc(void *arg)
dev->sso.max_hwgrp;
}
+static int
+cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ if (dev->dual_ws)
+ hws = hws * CN9K_DUAL_WS_NB_WS;
+
+ return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
+}
+
static void
cn9k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -32,8 +43,34 @@ cn9k_sso_info_get(struct rte_eventdev *event_dev,
cnxk_sso_info_get(dev, dev_info);
}
+static int
+cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ rc = cnxk_sso_dev_validate(event_dev);
+ if (rc < 0) {
+ plt_err("Invalid event device configuration");
+ return -EINVAL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+
+ rc = cn9k_sso_rsrc_init(dev, dev->nb_event_ports, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to initialize SSO resources");
+ return -ENODEV;
+ }
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
+ .dev_configure = cn9k_sso_dev_configure,
+ .queue_def_conf = cnxk_sso_queue_def_conf,
+ .port_def_conf = cnxk_sso_port_def_conf,
};
static int
--
2.17.1
* [dpdk-dev] [PATCH v4 07/34] event/cnxk: add event queue config functions
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (5 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 06/34] event/cnxk: add platform specific device config pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 08/34] event/cnxk: allocate event inflight buffers pbhagavatula
` (27 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add setup and release functions for event queues, i.e.
SSO HWGRPs.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 2 ++
drivers/event/cnxk/cn9k_eventdev.c | 2 ++
drivers/event/cnxk/cnxk_eventdev.c | 19 +++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 3 +++
4 files changed, 26 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 779a2e026..557f26b8f 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -62,6 +62,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
.queue_def_conf = cnxk_sso_queue_def_conf,
+ .queue_setup = cnxk_sso_queue_setup,
+ .queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
};
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index d042f58da..eba1bfbf0 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -70,6 +70,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
.queue_def_conf = cnxk_sso_queue_def_conf,
+ .queue_setup = cnxk_sso_queue_setup,
+ .queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
};
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 3eab1ed29..e22479a19 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -86,6 +86,25 @@ cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
}
+int
+cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+ const struct rte_event_queue_conf *queue_conf)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority);
+ /* Normalize <0-255> to <0-7> */
+ return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF,
+ queue_conf->priority / 32);
+}
+
+void
+cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(queue_id);
+}
+
void
cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 59d96a08f..426219c85 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -45,6 +45,9 @@ void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
+int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+ const struct rte_event_queue_conf *queue_conf);
+void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
--
2.17.1
* [dpdk-dev] [PATCH v4 08/34] event/cnxk: allocate event inflight buffers
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (6 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 07/34] event/cnxk: add event queue config functions pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 09/34] event/cnxk: add devargs for inflight buffer count pbhagavatula
` (26 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Allocate buffers in DRAM that hold inflight events.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 7 ++
drivers/event/cnxk/cn9k_eventdev.c | 7 ++
drivers/event/cnxk/cnxk_eventdev.c | 105 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 14 +++-
4 files changed, 132 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 557f26b8f..9c5ddea76 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -55,6 +55,13 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
return -ENODEV;
}
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
+ return 0;
+cnxk_rsrc_fini:
+ roc_sso_rsrc_fini(&dev->sso);
return rc;
}
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index eba1bfbf0..954fea01f 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -63,6 +63,13 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
return -ENODEV;
}
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
+ return 0;
+cnxk_rsrc_fini:
+ roc_sso_rsrc_fini(&dev->sso);
return rc;
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index e22479a19..34a8bce05 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,12 +28,107 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
+int
+cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
+{
+ char pool_name[RTE_MEMZONE_NAMESIZE];
+ uint32_t xaq_cnt, npa_aura_id;
+ const struct rte_memzone *mz;
+ struct npa_aura_s *aura;
+ static int reconfig_cnt;
+ int rc;
+
+ if (dev->xaq_pool) {
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ }
+
+ /*
+ * Allocate memory for Add work backpressure.
+ */
+ mz = rte_memzone_lookup(CNXK_SSO_FC_NAME);
+ if (mz == NULL)
+ mz = rte_memzone_reserve_aligned(CNXK_SSO_FC_NAME,
+ sizeof(struct npa_aura_s) +
+ RTE_CACHE_LINE_SIZE,
+ 0, 0, RTE_CACHE_LINE_SIZE);
+ if (mz == NULL) {
+ plt_err("Failed to allocate mem for fcmem");
+ return -ENOMEM;
+ }
+
+ dev->fc_iova = mz->iova;
+ dev->fc_mem = mz->addr;
+
+ aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem +
+ RTE_CACHE_LINE_SIZE);
+ memset(aura, 0, sizeof(struct npa_aura_s));
+
+ aura->fc_ena = 1;
+ aura->fc_addr = dev->fc_iova;
+ aura->fc_hyst_bits = 0; /* Store count on all updates */
+
+ /* Taken from HRM 14.3.3(4) */
+ xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
+ xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+
+ plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
+ /* Setup XAQ based on number of nb queues. */
+ snprintf(pool_name, 30, "cnxk_xaq_buf_pool_%d", reconfig_cnt);
+ dev->xaq_pool = (void *)rte_mempool_create_empty(
+ pool_name, xaq_cnt, dev->sso.xaq_buf_size, 0, 0,
+ rte_socket_id(), 0);
+
+ if (dev->xaq_pool == NULL) {
+ plt_err("Unable to create empty mempool.");
+ rte_memzone_free(mz);
+ return -ENOMEM;
+ }
+
+ rc = rte_mempool_set_ops_byname(dev->xaq_pool,
+ rte_mbuf_platform_mempool_ops(), aura);
+ if (rc != 0) {
+ plt_err("Unable to set xaqpool ops.");
+ goto alloc_fail;
+ }
+
+ rc = rte_mempool_populate_default(dev->xaq_pool);
+ if (rc < 0) {
+ plt_err("Unable to set populate xaqpool.");
+ goto alloc_fail;
+ }
+ reconfig_cnt++;
+ /* When SW does addwork (enqueue) check if there is space in XAQ by
+ * comparing fc_addr above against the xaq_lmt calculated below.
+ * There should be a minimum headroom (CNXK_SSO_XAQ_SLACK / 2) for SSO
+ * to request XAQ to cache them even before enqueue is called.
+ */
+ dev->xaq_lmt =
+ xaq_cnt - (CNXK_SSO_XAQ_SLACK / 2 * dev->nb_event_queues);
+ dev->nb_xaq_cfg = xaq_cnt;
+
+ npa_aura_id = roc_npa_aura_handle_to_aura(dev->xaq_pool->pool_id);
+ return roc_sso_hwgrp_alloc_xaq(&dev->sso, npa_aura_id,
+ dev->nb_event_queues);
+alloc_fail:
+ rte_mempool_free(dev->xaq_pool);
+ rte_memzone_free(mz);
+ return rc;
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint32_t deq_tmo_ns;
+ int rc;
deq_tmo_ns = conf->dequeue_timeout_ns;
@@ -67,6 +162,16 @@ cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
return -EINVAL;
}
+ if (dev->xaq_pool) {
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ }
+
dev->nb_event_queues = conf->nb_event_queues;
dev->nb_event_ports = conf->nb_event_ports;
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 426219c85..4abe4548d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -5,6 +5,7 @@
#ifndef __CNXK_EVENTDEV_H__
#define __CNXK_EVENTDEV_H__
+#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
#include <eventdev_pmd_pci.h>
@@ -13,7 +14,10 @@
#define USEC2NSEC(__us) ((__us)*1E3)
-#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
+#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
+#define CNXK_SSO_XAQ_SLACK (8)
struct cnxk_sso_evdev {
struct roc_sso sso;
@@ -26,6 +30,11 @@ struct cnxk_sso_evdev {
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
+ uint64_t *fc_mem;
+ uint64_t xaq_lmt;
+ uint64_t nb_xaq_cfg;
+ rte_iova_t fc_iova;
+ struct rte_mempool *xaq_pool;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
@@ -36,6 +45,9 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
return event_dev->data->dev_private;
}
+/* Configuration functions */
+int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
+
/* Common ops API. */
int cnxk_sso_init(struct rte_eventdev *event_dev);
int cnxk_sso_fini(struct rte_eventdev *event_dev);
--
2.17.1
* [dpdk-dev] [PATCH v4 09/34] event/cnxk: add devargs for inflight buffer count
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (7 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 08/34] event/cnxk: allocate event inflight buffers pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 10/34] event/cnxk: add devargs to control SSO HWGRP QoS pbhagavatula
` (25 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
The number of events for an *open system* event device is specified
as -1 as per the eventdev specification.
Since SSO inflight events are limited only by DRAM size, the
xae_cnt devargs parameter is introduced to provide an upper limit for
in-flight events.
Example:
--dev "0002:0e:00.0,xae_cnt=8192"
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 14 ++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 1 +
drivers/event/cnxk/cnxk_eventdev.c | 24 ++++++++++++++++++++++--
drivers/event/cnxk/cnxk_eventdev.h | 15 +++++++++++++++
5 files changed, 53 insertions(+), 2 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 148280b85..b556681ff 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -41,6 +41,20 @@ Prerequisites and Compilation procedure
See :doc:`../platform/cnxk` for setup information.
+
+Runtime Config Options
+----------------------
+
+- ``Maximum number of in-flight events`` (default ``8192``)
+
+ In **Marvell OCTEON cnxk** the max number of in-flight events are only limited
+ by DRAM size, the ``xae_cnt`` devargs parameter is introduced to provide
+ upper limit for in-flight events.
+
+ For example::
+
+ -a 0002:0e:00.0,xae_cnt=16384
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9c5ddea76..020905290 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -143,3 +143,4 @@ static struct rte_pci_driver cn10k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 954fea01f..50f6fef01 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -146,3 +146,4 @@ static struct rte_pci_driver cn9k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 34a8bce05..fddd71a8d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -75,8 +75,11 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
/* Taken from HRM 14.3.3(4) */
xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
- xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
- (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+ if (dev->xae_cnt)
+ xaq_cnt += dev->xae_cnt / dev->sso.xae_waes;
+ else
+ xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
/* Setup XAQ based on number of nb queues. */
@@ -222,6 +225,22 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+static void
+cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
+{
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ return;
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, CNXK_SSO_XAE_CNT, &parse_kvargs_value,
+ &dev->xae_cnt);
+ rte_kvargs_free(kvlist);
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
@@ -242,6 +261,7 @@ cnxk_sso_init(struct rte_eventdev *event_dev)
dev->sso.pci_dev = pci_dev;
*(uint64_t *)mz->addr = (uint64_t)dev;
+ cnxk_sso_parse_devargs(dev, pci_dev->device.devargs);
/* Initialize the base cnxk_dev object */
rc = roc_sso_dev_init(&dev->sso);
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 4abe4548d..202c6e6a7 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -5,6 +5,8 @@
#ifndef __CNXK_EVENTDEV_H__
#define __CNXK_EVENTDEV_H__
+#include <rte_devargs.h>
+#include <rte_kvargs.h>
#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
@@ -12,6 +14,8 @@
#include "roc_api.h"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+
#define USEC2NSEC(__us) ((__us)*1E3)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
@@ -35,10 +39,21 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ /* Dev args */
+ uint32_t xae_cnt;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
+static inline int
+parse_kvargs_value(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ *(uint32_t *)opaque = (uint32_t)atoi(value);
+ return 0;
+}
+
static inline struct cnxk_sso_evdev *
cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
{
--
2.17.1
* [dpdk-dev] [PATCH v4 10/34] event/cnxk: add devargs to control SSO HWGRP QoS
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (8 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 09/34] event/cnxk: add devargs for inflight buffer count pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 11/34] event/cnxk: add port config functions pbhagavatula
` (24 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
SSO HWGRPs, i.e. event queues, use DRAM & SRAM buffers to hold
in-flight events. By default the buffers are assigned to the SSO
HWGRPs to satisfy minimum HW requirements. SSO is free to assign the
remaining buffers to HWGRPs based on a preconfigured threshold.
We can control the QoS of an SSO HWGRP by modifying the above-mentioned
thresholds. HWGRPs that have higher importance can be assigned higher
thresholds than the rest.
Example:
--dev "0002:0e:00.0,qos=[1-50-50-50]" // [Qx-XAQ-TAQ-IAQ]
Qx -> Event queue Aka SSO GGRP.
XAQ -> DRAM In-flights.
TAQ & IAQ -> SRAM In-flights.
The values are expressed as percentages; 0 selects the
default.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 16 ++++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_eventdev.c | 78 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 12 ++++-
5 files changed, 109 insertions(+), 3 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index b556681ff..0583e5fdd 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -55,6 +55,22 @@ Runtime Config Options
-a 0002:0e:00.0,xae_cnt=16384
+- ``Event Group QoS support``
+
+ SSO GGRPs i.e. queue uses DRAM & SRAM buffers to hold in-flight
+ events. By default the buffers are assigned to the SSO GGRPs to
+ satisfy minimum HW requirements. SSO is free to assign the remaining
+ buffers to GGRPs based on a preconfigured threshold.
+ We can control the QoS of SSO GGRP by modifying the above mentioned
+ thresholds. GGRPs that have higher importance can be assigned higher
+ thresholds than the rest. The dictionary format is as follows
+ [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] expressed in percentages, 0 represents
+ default.
+
+ For example::
+
+ -a 0002:0e:00.0,qos=[1-50-50-50]
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 020905290..0b39c6c09 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -143,4 +143,5 @@ static struct rte_pci_driver cn10k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
+ CNXK_SSO_GGRP_QOS "=<string>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 50f6fef01..ab165c850 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -146,4 +146,5 @@ static struct rte_pci_driver cn9k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
+ CNXK_SSO_GGRP_QOS "=<string>");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index fddd71a8d..e93aaccd8 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -225,6 +225,82 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+static void
+parse_queue_param(char *value, void *opaque)
+{
+ struct cnxk_sso_qos queue_qos = {0};
+ uint8_t *val = (uint8_t *)&queue_qos;
+ struct cnxk_sso_evdev *dev = opaque;
+ char *tok = strtok(value, "-");
+ struct cnxk_sso_qos *old_ptr;
+
+ if (!strlen(value))
+ return;
+
+ while (tok != NULL) {
+ *val = atoi(tok);
+ tok = strtok(NULL, "-");
+ val++;
+ }
+
+ if (val != (&queue_qos.iaq_prcnt + 1)) {
+ plt_err("Invalid QoS parameter expected [Qx-XAQ-TAQ-IAQ]");
+ return;
+ }
+
+ dev->qos_queue_cnt++;
+ old_ptr = dev->qos_parse_data;
+ dev->qos_parse_data = rte_realloc(
+ dev->qos_parse_data,
+ sizeof(struct cnxk_sso_qos) * dev->qos_queue_cnt, 0);
+ if (dev->qos_parse_data == NULL) {
+ dev->qos_parse_data = old_ptr;
+ dev->qos_queue_cnt--;
+ return;
+ }
+ dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos;
+}
+
+static void
+parse_qos_list(const char *value, void *opaque)
+{
+ char *s = strdup(value);
+ char *start = NULL;
+ char *end = NULL;
+ char *f = s;
+
+ while (*s) {
+ if (*s == '[')
+ start = s;
+ else if (*s == ']')
+ end = s;
+
+ if (start && start < end) {
+ *end = 0;
+ parse_queue_param(start + 1, opaque);
+ s = end;
+ start = end;
+ }
+ s++;
+ }
+
+ free(f);
+}
+
+static int
+parse_sso_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ /* Dict format [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] use '-' cause ','
+ * isn't allowed. Everything is expressed in percentages, 0 represents
+ * default.
+ */
+ parse_qos_list(value, opaque);
+
+ return 0;
+}
+
static void
cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
{
@@ -238,6 +314,8 @@ cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
rte_kvargs_process(kvlist, CNXK_SSO_XAE_CNT, &parse_kvargs_value,
&dev->xae_cnt);
+ rte_kvargs_process(kvlist, CNXK_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
+ dev);
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 202c6e6a7..b96a6a908 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,7 +14,8 @@
#include "roc_api.h"
-#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_GGRP_QOS "qos"
#define USEC2NSEC(__us) ((__us)*1E3)
@@ -23,6 +24,13 @@
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+struct cnxk_sso_qos {
+ uint16_t queue;
+ uint8_t xaq_prcnt;
+ uint8_t taq_prcnt;
+ uint8_t iaq_prcnt;
+};
+
struct cnxk_sso_evdev {
struct roc_sso sso;
uint8_t max_event_queues;
@@ -41,6 +49,8 @@ struct cnxk_sso_evdev {
struct rte_mempool *xaq_pool;
/* Dev args */
uint32_t xae_cnt;
+ uint8_t qos_queue_cnt;
+ struct cnxk_sso_qos *qos_parse_data;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
--
2.17.1
* [dpdk-dev] [PATCH v4 11/34] event/cnxk: add port config functions
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (9 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 10/34] event/cnxk: add devargs to control SSO HWGRP QoS pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 12/34] event/cnxk: add event port link and unlink pbhagavatula
` (23 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add setup and release functions for SSO HWS, a.k.a. event ports.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 121 +++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 147 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 65 ++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 91 +++++++++++++++++
4 files changed, 424 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 0b39c6c09..fcdc1cf84 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -4,6 +4,91 @@
#include "cnxk_eventdev.h"
+static void
+cn10k_init_hws_ops(struct cn10k_sso_hws *ws, uintptr_t base)
+{
+ ws->tag_wqe_op = base + SSOW_LF_GWS_WQE0;
+ ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK0;
+ ws->updt_wqe_op = base + SSOW_LF_GWS_OP_UPD_WQP_GRP1;
+ ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+ ws->swtag_untag_op = base + SSOW_LF_GWS_OP_SWTAG_UNTAG;
+ ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
+ ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
+static uint32_t
+cn10k_sso_gw_mode_wdata(struct cnxk_sso_evdev *dev)
+{
+ uint32_t wdata = BIT(16) | 1;
+
+ switch (dev->gw_mode) {
+ case CN10K_GW_MODE_NONE:
+ default:
+ break;
+ case CN10K_GW_MODE_PREF:
+ wdata |= BIT(19);
+ break;
+ case CN10K_GW_MODE_PREF_WFE:
+ wdata |= BIT(20) | BIT(19);
+ break;
+ }
+
+ return wdata;
+}
+
+static void *
+cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws;
+
+ /* Allocate event port memory */
+ ws = rte_zmalloc("cn10k_ws",
+ sizeof(struct cn10k_sso_hws) + RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (ws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ /* First cache line is reserved for cookie */
+ ws = (struct cn10k_sso_hws *)((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
+ ws->base = roc_sso_hws_base_get(&dev->sso, port_id);
+ cn10k_init_hws_ops(ws, ws->base);
+ ws->hws_id = port_id;
+ ws->swtag_req = 0;
+ ws->gw_wdata = cn10k_sso_gw_mode_wdata(dev);
+ ws->lmt_base = dev->sso.lmt_base;
+
+ return ws;
+}
+
+static void
+cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = hws;
+ uint64_t val;
+
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+
+ /* Set get_work timeout for HWS */
+ val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+ plt_write64(val, ws->base + SSOW_LF_GWS_NW_TIM);
+}
+
+static void
+cn10k_sso_hws_release(void *arg, void *hws)
+{
+ struct cn10k_sso_hws *ws = hws;
+
+ RTE_SET_USED(arg);
+ memset(ws, 0, sizeof(*ws));
+}
+
static void
cn10k_sso_set_rsrc(void *arg)
{
@@ -59,12 +144,46 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ rc = cnxk_setup_event_ports(event_dev, cn10k_sso_init_hws_mem,
+ cn10k_sso_hws_setup);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
+ dev->nb_event_ports = 0;
return rc;
}
+static int
+cn10k_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ const struct rte_event_port_conf *port_conf)
+{
+
+ RTE_SET_USED(port_conf);
+ return cnxk_sso_port_setup(event_dev, port_id, cn10k_sso_hws_setup);
+}
+
+static void
+cn10k_sso_port_release(void *port)
+{
+ struct cnxk_sso_hws_cookie *gws_cookie = cnxk_sso_hws_get_cookie(port);
+ struct cnxk_sso_evdev *dev;
+
+ if (port == NULL)
+ return;
+
+ dev = cnxk_sso_pmd_priv(gws_cookie->event_dev);
+ if (!gws_cookie->configured)
+ goto free;
+
+ cn10k_sso_hws_release(dev, port);
+ memset(gws_cookie, 0, sizeof(*gws_cookie));
+free:
+ rte_free(gws_cookie);
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -72,6 +191,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.queue_setup = cnxk_sso_queue_setup,
.queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
+ .port_setup = cn10k_sso_port_setup,
+ .port_release = cn10k_sso_port_release,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index ab165c850..b8c74633b 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -7,6 +7,63 @@
#define CN9K_DUAL_WS_NB_WS 2
#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
+static void
+cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t base)
+{
+ ws->tag_op = base + SSOW_LF_GWS_TAG;
+ ws->wqp_op = base + SSOW_LF_GWS_WQP;
+ ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK0;
+ ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
+ ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+ ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
+static void
+cn9k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ uint64_t val;
+
+ /* Set get_work tmo for HWS */
+ val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+ if (dev->dual_ws) {
+ dws = hws;
+ rte_memcpy(dws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ dws->fc_mem = dev->fc_mem;
+ dws->xaq_lmt = dev->xaq_lmt;
+
+ plt_write64(val, dws->base[0] + SSOW_LF_GWS_NW_TIM);
+ plt_write64(val, dws->base[1] + SSOW_LF_GWS_NW_TIM);
+ } else {
+ ws = hws;
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+
+ plt_write64(val, ws->base + SSOW_LF_GWS_NW_TIM);
+ }
+}
+
+static void
+cn9k_sso_hws_release(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+
+ if (dev->dual_ws) {
+ dws = hws;
+ memset(dws, 0, sizeof(*dws));
+ } else {
+ ws = hws;
+ memset(ws, 0, sizeof(*ws));
+ }
+}
+
static void
cn9k_sso_set_rsrc(void *arg)
{
@@ -33,6 +90,60 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void *
+cn9k_sso_init_hws_mem(void *arg, uint8_t port_id)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ void *data;
+
+ if (dev->dual_ws) {
+ dws = rte_zmalloc("cn9k_dual_ws",
+ sizeof(struct cn9k_sso_hws_dual) +
+ RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (dws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ dws = RTE_PTR_ADD(dws, sizeof(struct cnxk_sso_hws_cookie));
+ dws->base[0] = roc_sso_hws_base_get(
+ &dev->sso, CN9K_DUAL_WS_PAIR_ID(port_id, 0));
+ dws->base[1] = roc_sso_hws_base_get(
+ &dev->sso, CN9K_DUAL_WS_PAIR_ID(port_id, 1));
+ cn9k_init_hws_ops(&dws->ws_state[0], dws->base[0]);
+ cn9k_init_hws_ops(&dws->ws_state[1], dws->base[1]);
+ dws->hws_id = port_id;
+ dws->swtag_req = 0;
+ dws->vws = 0;
+
+ data = dws;
+ } else {
+ /* Allocate event port memory */
+ ws = rte_zmalloc("cn9k_ws",
+ sizeof(struct cn9k_sso_hws) +
+ RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (ws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ /* First cache line is reserved for cookie */
+ ws = RTE_PTR_ADD(ws, sizeof(struct cnxk_sso_hws_cookie));
+ ws->base = roc_sso_hws_base_get(&dev->sso, port_id);
+ cn9k_init_hws_ops((struct cn9k_sso_hws_state *)ws, ws->base);
+ ws->hws_id = port_id;
+ ws->swtag_req = 0;
+
+ data = ws;
+ }
+
+ return data;
+}
+
static void
cn9k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -67,12 +178,46 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ rc = cnxk_setup_event_ports(event_dev, cn9k_sso_init_hws_mem,
+ cn9k_sso_hws_setup);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
+ dev->nb_event_ports = 0;
return rc;
}
+static int
+cn9k_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ const struct rte_event_port_conf *port_conf)
+{
+
+ RTE_SET_USED(port_conf);
+ return cnxk_sso_port_setup(event_dev, port_id, cn9k_sso_hws_setup);
+}
+
+static void
+cn9k_sso_port_release(void *port)
+{
+ struct cnxk_sso_hws_cookie *gws_cookie = cnxk_sso_hws_get_cookie(port);
+ struct cnxk_sso_evdev *dev;
+
+ if (port == NULL)
+ return;
+
+ dev = cnxk_sso_pmd_priv(gws_cookie->event_dev);
+ if (!gws_cookie->configured)
+ goto free;
+
+ cn9k_sso_hws_release(dev, port);
+ memset(gws_cookie, 0, sizeof(*gws_cookie));
+free:
+ rte_free(gws_cookie);
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -80,6 +225,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.queue_setup = cnxk_sso_queue_setup,
.queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
+ .port_setup = cn9k_sso_port_setup,
+ .port_release = cn9k_sso_port_release,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index e93aaccd8..daf24d84a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -125,6 +125,42 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
return rc;
}
+int
+cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
+ cnxk_sso_init_hws_mem_t init_hws_fn,
+ cnxk_sso_hws_setup_t setup_hws_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ struct cnxk_sso_hws_cookie *ws_cookie;
+ void *ws;
+
+ /* Reuse port memory from a previous configure, if any */
+ if (event_dev->data->ports[i] != NULL)
+ ws = event_dev->data->ports[i];
+ else
+ ws = init_hws_fn(dev, i);
+ if (ws == NULL)
+ goto hws_fini;
+ ws_cookie = cnxk_sso_hws_get_cookie(ws);
+ ws_cookie->event_dev = event_dev;
+ ws_cookie->configured = 1;
+ event_dev->data->ports[i] = ws;
+ cnxk_sso_port_setup((struct rte_eventdev *)(uintptr_t)event_dev,
+ i, setup_hws_fn);
+ }
+
+ return 0;
+hws_fini:
+ for (i = i - 1; i >= 0; i--) {
+ rte_free(cnxk_sso_hws_get_cookie(event_dev->data->ports[i]));
+ event_dev->data->ports[i] = NULL;
+ }
+ return -ENOMEM;
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
@@ -225,6 +261,35 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+int
+cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ cnxk_sso_hws_setup_t hws_setup_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP] = {0};
+ uint16_t q;
+
+ plt_sso_dbg("Port=%d", port_id);
+ if (event_dev->data->ports[port_id] == NULL) {
+ plt_err("Invalid port Id %d", port_id);
+ return -EINVAL;
+ }
+
+ for (q = 0; q < dev->nb_event_queues; q++) {
+ grps_base[q] = roc_sso_hwgrp_base_get(&dev->sso, q);
+ if (grps_base[q] == 0) {
+ plt_err("Failed to get grp[%d] base addr", q);
+ return -EINVAL;
+ }
+ }
+
+ hws_setup_fn(dev, event_dev->data->ports[port_id], grps_base);
+ plt_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
+ rte_mb();
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index b96a6a908..79eab1829 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,13 +17,23 @@
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
+#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
+#define CNXK_SSO_MAX_HWGRP (RTE_EVENT_MAX_QUEUES_PER_DEV + 1)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+#define CN10K_GW_MODE_NONE 0
+#define CN10K_GW_MODE_PREF 1
+#define CN10K_GW_MODE_PREF_WFE 2
+
+typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
+typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
+typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
+
struct cnxk_sso_qos {
uint16_t queue;
uint8_t xaq_prcnt;
@@ -53,6 +63,76 @@ struct cnxk_sso_evdev {
struct cnxk_sso_qos *qos_parse_data;
/* CN9K */
uint8_t dual_ws;
+ /* CN10K */
+ uint8_t gw_mode;
+} __rte_cache_aligned;
+
+/* CN10K HWS ops */
+#define CN10K_SSO_HWS_OPS \
+ uintptr_t swtag_desched_op; \
+ uintptr_t swtag_flush_op; \
+ uintptr_t swtag_untag_op; \
+ uintptr_t swtag_norm_op; \
+ uintptr_t updt_wqe_op; \
+ uintptr_t tag_wqe_op; \
+ uintptr_t getwrk_op
+
+struct cn10k_sso_hws {
+ /* Get Work Fastpath data */
+ CN10K_SSO_HWS_OPS;
+ uint32_t gw_wdata;
+ uint8_t swtag_req;
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base;
+ uintptr_t lmt_base;
+} __rte_cache_aligned;
+
+/* CN9K HWS ops */
+#define CN9K_SSO_HWS_OPS \
+ uintptr_t swtag_desched_op; \
+ uintptr_t swtag_flush_op; \
+ uintptr_t swtag_norm_op; \
+ uintptr_t getwrk_op; \
+ uintptr_t tag_op; \
+ uintptr_t wqp_op
+
+/* Event port a.k.a GWS */
+struct cn9k_sso_hws {
+ /* Get Work Fastpath data */
+ CN9K_SSO_HWS_OPS;
+ uint8_t swtag_req;
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base;
+} __rte_cache_aligned;
+
+struct cn9k_sso_hws_state {
+ CN9K_SSO_HWS_OPS;
+};
+
+struct cn9k_sso_hws_dual {
+ /* Get Work Fastpath data */
+ struct cn9k_sso_hws_state ws_state[2]; /* Ping and Pong */
+ uint8_t swtag_req;
+ uint8_t vws; /* Ping pong bit */
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base[2];
+} __rte_cache_aligned;
+
+struct cnxk_sso_hws_cookie {
+ const struct rte_eventdev *event_dev;
+ bool configured;
} __rte_cache_aligned;
static inline int
@@ -70,6 +150,12 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
return event_dev->data->dev_private;
}
+static inline struct cnxk_sso_hws_cookie *
+cnxk_sso_hws_get_cookie(void *ws)
+{
+ return RTE_PTR_SUB(ws, sizeof(struct cnxk_sso_hws_cookie));
+}
+
/* Configuration functions */
int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
@@ -80,6 +166,9 @@ int cnxk_sso_remove(struct rte_pci_device *pci_dev);
void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
struct rte_event_dev_info *dev_info);
int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
+int cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
+ cnxk_sso_init_hws_mem_t init_hws_mem,
+ cnxk_sso_hws_setup_t hws_setup);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
@@ -87,5 +176,7 @@ int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
+int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ cnxk_sso_hws_setup_t hws_setup_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
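Reading aid (not part of the patch): the cookie scheme above places a struct cnxk_sso_hws_cookie immediately in front of each event-port allocation and recovers it later with cnxk_sso_hws_get_cookie(). A minimal standalone sketch of that pattern follows; the cookie/ws_alloc names are illustrative, and the driver's extra cache-line padding for alignment is elided:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Bookkeeping kept immediately in front of the user-visible object,
 * mirroring struct cnxk_sso_hws_cookie (field names are illustrative). */
struct cookie {
	const void *owner; /* the owning event_dev in the driver */
	int configured;
};

/* Allocate an object with room for the cookie in front and return a
 * pointer just past it, as cn9k_sso_init_hws_mem() does with
 * RTE_PTR_ADD(ws, sizeof(struct cnxk_sso_hws_cookie)). */
static void *ws_alloc(size_t obj_size)
{
	uint8_t *p = calloc(1, sizeof(struct cookie) + obj_size);

	return p ? p + sizeof(struct cookie) : NULL;
}

/* Walk back from the object to its cookie, like cnxk_sso_hws_get_cookie(). */
static struct cookie *ws_get_cookie(void *ws)
{
	return (struct cookie *)((uint8_t *)ws - sizeof(struct cookie));
}

/* Release frees the cookie pointer, which is the original allocation
 * base, matching rte_free(gws_cookie) in the port_release callbacks. */
static void ws_free(void *ws)
{
	if (ws != NULL)
		free(ws_get_cookie(ws));
}
```

The point of the design is that fastpath code only ever sees the object pointer, while slowpath teardown can still find the device that owns it.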
* [dpdk-dev] [PATCH v4 12/34] event/cnxk: add event port link and unlink
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (10 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 11/34] event/cnxk: add port config functions pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 13/34] event/cnxk: add devargs to configure getwork mode pbhagavatula
` (22 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add platform-specific APIs to link and unlink event queues to/from
event ports.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 64 +++++++++++++++++-
drivers/event/cnxk/cn9k_eventdev.c | 101 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 36 ++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 12 +++-
4 files changed, 210 insertions(+), 3 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index fcdc1cf84..db8fe8169 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -63,6 +63,24 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
return ws;
}
+static int
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = port;
+
+ return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+}
+
+static int
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = port;
+
+ return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+}
+
static void
cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
{
@@ -83,9 +101,12 @@ cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
static void
cn10k_sso_hws_release(void *arg, void *hws)
{
+ struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
+ int i;
- RTE_SET_USED(arg);
+ for (i = 0; i < dev->nb_event_queues; i++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, (uint16_t *)&i, 1);
memset(ws, 0, sizeof(*ws));
}
@@ -149,6 +170,12 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ /* Restore any prior port-queue mapping. */
+ cnxk_sso_restore_links(event_dev, cn10k_sso_hws_link);
+
+ dev->configured = 1;
+ rte_mb();
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
@@ -184,6 +211,38 @@ cn10k_sso_port_release(void *port)
rte_free(gws_cookie);
}
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_links];
+ uint16_t link;
+
+ RTE_SET_USED(priorities);
+ for (link = 0; link < nb_links; link++)
+ hwgrp_ids[link] = queues[link];
+ nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+
+ return (int)nb_links;
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_unlinks];
+ uint16_t unlink;
+
+ for (unlink = 0; unlink < nb_unlinks; unlink++)
+ hwgrp_ids[unlink] = queues[unlink];
+ nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+
+ return (int)nb_unlinks;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -193,6 +252,9 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_def_conf = cnxk_sso_port_def_conf,
.port_setup = cn10k_sso_port_setup,
.port_release = cn10k_sso_port_release,
+ .port_link = cn10k_sso_port_link,
+ .port_unlink = cn10k_sso_port_unlink,
+ .timeout_ticks = cnxk_sso_timeout_ticks,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index b8c74633b..a0d76335f 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -18,6 +18,54 @@ cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t base)
ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
}
+static int
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ int rc;
+
+ if (dev->dual_ws) {
+ dws = port;
+ rc = roc_sso_hws_link(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link);
+ rc |= roc_sso_hws_link(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ map, nb_link);
+ } else {
+ ws = port;
+ rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ }
+
+ return rc;
+}
+
+static int
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ int rc;
+
+ if (dev->dual_ws) {
+ dws = port;
+ rc = roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ map, nb_link);
+ rc |= roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ map, nb_link);
+ } else {
+ ws = port;
+ rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ }
+
+ return rc;
+}
+
static void
cn9k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
{
@@ -54,12 +102,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
struct cn9k_sso_hws *ws;
+ int i;
if (dev->dual_ws) {
dws = hws;
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ (uint16_t *)&i, 1);
+ roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ (uint16_t *)&i, 1);
+ }
memset(dws, 0, sizeof(*dws));
} else {
ws = hws;
+ for (i = 0; i < dev->nb_event_queues; i++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id,
+ (uint16_t *)&i, 1);
memset(ws, 0, sizeof(*ws));
}
}
@@ -183,6 +243,12 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ /* Restore any prior port-queue mapping. */
+ cnxk_sso_restore_links(event_dev, cn9k_sso_hws_link);
+
+ dev->configured = 1;
+ rte_mb();
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
@@ -218,6 +284,38 @@ cn9k_sso_port_release(void *port)
rte_free(gws_cookie);
}
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_links];
+ uint16_t link;
+
+ RTE_SET_USED(priorities);
+ for (link = 0; link < nb_links; link++)
+ hwgrp_ids[link] = queues[link];
+ nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+
+ return (int)nb_links;
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_unlinks];
+ uint16_t unlink;
+
+ for (unlink = 0; unlink < nb_unlinks; unlink++)
+ hwgrp_ids[unlink] = queues[unlink];
+ nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+
+ return (int)nb_unlinks;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -227,6 +325,9 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_def_conf = cnxk_sso_port_def_conf,
.port_setup = cn9k_sso_port_setup,
.port_release = cn9k_sso_port_release,
+ .port_link = cn9k_sso_port_link,
+ .port_unlink = cn9k_sso_port_unlink,
+ .timeout_ticks = cnxk_sso_timeout_ticks,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index daf24d84a..e68079997 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -161,6 +161,32 @@ cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
return -ENOMEM;
}
+void
+cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
+ cnxk_sso_link_t link_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
+ int i, j;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ uint16_t nb_hwgrp = 0;
+
+ links_map = event_dev->data->links_map;
+ /* Point links_map to this port specific area */
+ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ if (links_map[j] == 0xdead)
+ continue;
+ hwgrp[nb_hwgrp] = j;
+ nb_hwgrp++;
+ }
+
+ link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
+ }
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
@@ -290,6 +316,16 @@ cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
return 0;
}
+int
+cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
+ uint64_t *tmo_ticks)
+{
+ RTE_SET_USED(event_dev);
+ *tmo_ticks = NSEC2TICK(ns, rte_get_timer_hz());
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 79eab1829..97a944d88 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,8 +17,9 @@
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
-#define NSEC2USEC(__ns) ((__ns) / 1E3)
-#define USEC2NSEC(__us) ((__us)*1E3)
+#define NSEC2USEC(__ns) ((__ns) / 1E3)
+#define USEC2NSEC(__us) ((__us)*1E3)
+#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
#define CNXK_SSO_MAX_HWGRP (RTE_EVENT_MAX_QUEUES_PER_DEV + 1)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
@@ -33,6 +34,8 @@
typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
+ uint16_t nb_link);
struct cnxk_sso_qos {
uint16_t queue;
@@ -48,6 +51,7 @@ struct cnxk_sso_evdev {
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
+ uint8_t configured;
uint32_t deq_tmo_ns;
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
@@ -169,6 +173,8 @@ int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
int cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
cnxk_sso_init_hws_mem_t init_hws_mem,
cnxk_sso_hws_setup_t hws_setup);
+void cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
+ cnxk_sso_link_t link_fn);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
@@ -178,5 +184,7 @@ void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
cnxk_sso_hws_setup_t hws_setup_fn);
+int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
+ uint64_t *tmo_ticks);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
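Reading aid (not part of the patch): cnxk_sso_restore_links() above walks each port's slice of the eventdev links_map, in which unused slots are marked with 0xdead, and rebuilds the list of hwgrp (queue) ids to re-link after a reconfigure. A standalone sketch of that scan, with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

/* The eventdev layer marks unused links_map slots with this value. */
#define UNLINKED 0xdead

/* Collect the still-linked queue ids for one port from its links_map
 * slice, as cnxk_sso_restore_links() does before calling link_fn().
 * Returns the number of queue ids written to hwgrp. */
static uint16_t
collect_links(const uint16_t *links_map, uint16_t nb_queues, uint16_t *hwgrp)
{
	uint16_t nb_hwgrp = 0;
	uint16_t q;

	for (q = 0; q < nb_queues; q++) {
		if (links_map[q] == UNLINKED)
			continue;
		hwgrp[nb_hwgrp++] = q;
	}

	return nb_hwgrp;
}
```

In the driver the resulting array is handed to the per-platform link_fn, which programs the SSO HWS-to-HWGRP mapping in one call per port.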
* [dpdk-dev] [PATCH v4 13/34] event/cnxk: add devargs to configure getwork mode
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (11 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 12/34] event/cnxk: add event port link and unlink pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 14/34] event/cnxk: add SSO HW device operations pbhagavatula
` (21 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add devargs to configure the platform-specific getwork mode.
CN9K defaults to dual workslot mode; add the single_ws option
to force single workslot mode.
Example:
--dev "0002:0e:00.0,single_ws=1"
CN10K supports multiple getwork prefetch modes and defaults to
no prefetch; add the gw_mode option to select a prefetch mode.
Example:
--dev "0002:1e:00.0,gw_mode=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 18 ++++++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 3 ++-
drivers/event/cnxk/cn9k_eventdev.c | 3 ++-
drivers/event/cnxk/cnxk_eventdev.c | 6 ++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 ++++--
5 files changed, 32 insertions(+), 4 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 0583e5fdd..f48452982 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -55,6 +55,24 @@ Runtime Config Options
-a 0002:0e:00.0,xae_cnt=16384
+- ``CN9K Getwork mode``
+
+ The CN9K ``single_ws`` devargs parameter selects single workslot mode in
+ SSO, disabling the default dual workslot mode.
+
+ For example::
+
+ -a 0002:0e:00.0,single_ws=1
+
+- ``CN10K Getwork mode``
+
+ CN10K supports multiple getwork prefetch modes; by default the prefetch
+ mode is set to none.
+
+ For example::
+
+ -a 0002:1e:00.0,gw_mode=1
+
- ``Event Group QoS support``
SSO GGRPs i.e. queue uses DRAM & SRAM buffers to hold in-flight
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index db8fe8169..6522351ca 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -327,4 +327,5 @@ RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
- CNXK_SSO_GGRP_QOS "=<string>");
+ CNXK_SSO_GGRP_QOS "=<string>"
+ CN10K_SSO_GW_MODE "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index a0d76335f..00c5565e7 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -395,4 +395,5 @@ RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
- CNXK_SSO_GGRP_QOS "=<string>");
+ CNXK_SSO_GGRP_QOS "=<string>"
+ CN9K_SSO_SINGLE_WS "=1");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index e68079997..2a387ff95 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -406,6 +406,7 @@ static void
cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
{
struct rte_kvargs *kvlist;
+ uint8_t single_ws = 0;
if (devargs == NULL)
return;
@@ -417,6 +418,11 @@ cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
&dev->xae_cnt);
rte_kvargs_process(kvlist, CNXK_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
dev);
+ rte_kvargs_process(kvlist, CN9K_SSO_SINGLE_WS, &parse_kvargs_value,
+ &single_ws);
+ rte_kvargs_process(kvlist, CN10K_SSO_GW_MODE, &parse_kvargs_value,
+ &dev->gw_mode);
+ dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 97a944d88..437cdf3db 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,8 +14,10 @@
#include "roc_api.h"
-#define CNXK_SSO_XAE_CNT "xae_cnt"
-#define CNXK_SSO_GGRP_QOS "qos"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_GGRP_QOS "qos"
+#define CN9K_SSO_SINGLE_WS "single_ws"
+#define CN10K_SSO_GW_MODE "gw_mode"
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
--
2.17.1
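Reading aid (not part of the patch): the gw_mode devarg parsed here feeds cn10k_sso_gw_mode_wdata() from the earlier port-setup patch, which ORs prefetch-control bits into the GETWORK request word. A standalone sketch of that mapping; the enum names are illustrative and the exact bit semantics are hardware-defined:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1U << (n))

/* Mirrors CN10K_GW_MODE_NONE/PREF/PREF_WFE in cnxk_eventdev.h. */
enum gw_mode { GW_NONE = 0, GW_PREF = 1, GW_PREF_WFE = 2 };

/* Mirror of cn10k_sso_gw_mode_wdata(): start from the base request
 * word (BIT(16) | 1) and OR in the prefetch bits selected by gw_mode. */
static uint32_t gw_mode_wdata(int gw_mode)
{
	uint32_t wdata = BIT(16) | 1;

	switch (gw_mode) {
	case GW_PREF:
		wdata |= BIT(19);
		break;
	case GW_PREF_WFE:
		wdata |= BIT(20) | BIT(19);
		break;
	default: /* GW_NONE and unknown values leave the base word alone */
		break;
	}

	return wdata;
}
```

The computed word is stored once in ws->gw_wdata at port init, so the fastpath getwork doorbell needs no per-call branching on the mode.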
* [dpdk-dev] [PATCH v4 14/34] event/cnxk: add SSO HW device operations
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (12 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 13/34] event/cnxk: add devargs to configure getwork mode pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 15/34] event/cnxk: add SSO GWS fastpath enqueue functions pbhagavatula
` (20 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO HW device operations used for enqueue/dequeue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_worker.c | 7 +
drivers/event/cnxk/cn10k_worker.h | 151 +++++++++++++++++
drivers/event/cnxk/cn9k_worker.c | 7 +
drivers/event/cnxk/cn9k_worker.h | 249 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 10 ++
drivers/event/cnxk/cnxk_worker.h | 101 ++++++++++++
drivers/event/cnxk/meson.build | 4 +-
7 files changed, 528 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cn10k_worker.c
create mode 100644 drivers/event/cnxk/cn10k_worker.h
create mode 100644 drivers/event/cnxk/cn9k_worker.c
create mode 100644 drivers/event/cnxk/cn9k_worker.h
create mode 100644 drivers/event/cnxk/cnxk_worker.h
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
new file mode 100644
index 000000000..63b587301
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cn10k_worker.h"
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
new file mode 100644
index 000000000..04517055d
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CN10K_WORKER_H__
+#define __CN10K_WORKER_H__
+
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+/* SSO Operations */
+
+static __rte_always_inline uint8_t
+cn10k_sso_hws_new_event(struct cn10k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, ws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_fwd_swtag(struct cn10k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint8_t cur_tt = CNXK_TT_FROM_TAG(plt_read64(ws->tag_wqe_op));
+
+ /* CNXK model
+ * cur_tt/new_tt SSO_TT_ORDERED SSO_TT_ATOMIC SSO_TT_UNTAGGED
+ *
+ * SSO_TT_ORDERED norm norm untag
+ * SSO_TT_ATOMIC norm norm untag
+ * SSO_TT_UNTAGGED norm norm NOOP
+ */
+
+ if (new_tt == SSO_TT_UNTAGGED) {
+ if (cur_tt != SSO_TT_UNTAGGED)
+ cnxk_sso_hws_swtag_untag(ws->swtag_untag_op);
+ } else {
+ cnxk_sso_hws_swtag_norm(tag, new_tt, ws->swtag_norm_op);
+ }
+ ws->swtag_req = 1;
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_fwd_group(struct cn10k_sso_hws *ws, const struct rte_event *ev,
+ const uint16_t grp)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+
+ plt_write64(ev->u64, ws->updt_wqe_op);
+ cnxk_sso_hws_swtag_desched(tag, new_tt, grp, ws->swtag_desched_op);
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_forward_event(struct cn10k_sso_hws *ws,
+ const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed; use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(ws->tag_wqe_op)) == grp)
+ cn10k_sso_hws_fwd_swtag(ws, ev);
+ else
+ /*
+ * Group has changed for group-based work pipelining;
+ * use the deschedule/add_work operation to transfer the
+ * event to the new group/core.
+ */
+ cn10k_sso_hws_fwd_group(ws, ev, grp);
+}
+
+static __rte_always_inline uint16_t
+cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+ gw.get_work = ws->gw_wdata;
+#if defined(RTE_ARCH_ARM64) && !defined(__clang__)
+ asm volatile(
+ PLT_CPU_FEATURE_PREAMBLE
+ "caspl %[wdata], %H[wdata], %[wdata], %H[wdata], [%[gw_loc]]\n"
+ : [wdata] "+r"(gw.get_work)
+ : [gw_loc] "r"(ws->getwrk_op)
+ : "memory");
+#else
+ plt_write64(gw.u64[0], ws->getwrk_op);
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+#endif
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+/* Used when cleaning up the workslot. */
+static __rte_always_inline uint16_t
+cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldp %[tag], %[wqp], [%[tag_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldp %[tag], %[wqp], [%[tag_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_wqe_op)
+ : "memory");
+#else
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+#endif
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
new file mode 100644
index 000000000..836914163
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+
+#include "cn9k_worker.h"
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
new file mode 100644
index 000000000..85be742c1
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -0,0 +1,249 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CN9K_WORKER_H__
+#define __CN9K_WORKER_H__
+
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+/* SSO Operations */
+
+static __rte_always_inline uint8_t
+cn9k_sso_hws_new_event(struct cn9k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, ws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_fwd_swtag(struct cn9k_sso_hws_state *vws,
+ const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint8_t cur_tt = CNXK_TT_FROM_TAG(plt_read64(vws->tag_op));
+
+ /* CNXK model
+ * cur_tt/new_tt SSO_TT_ORDERED SSO_TT_ATOMIC SSO_TT_UNTAGGED
+ *
+ * SSO_TT_ORDERED norm norm untag
+ * SSO_TT_ATOMIC norm norm untag
+ * SSO_TT_UNTAGGED norm norm NOOP
+ */
+
+ if (new_tt == SSO_TT_UNTAGGED) {
+ if (cur_tt != SSO_TT_UNTAGGED)
+ cnxk_sso_hws_swtag_untag(
+ CN9K_SSOW_GET_BASE_ADDR(vws->getwrk_op) +
+ SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ } else {
+ cnxk_sso_hws_swtag_norm(tag, new_tt, vws->swtag_norm_op);
+ }
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_fwd_group(struct cn9k_sso_hws_state *ws,
+ const struct rte_event *ev, const uint16_t grp)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+
+ plt_write64(ev->u64, CN9K_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
+ SSOW_LF_GWS_OP_UPD_WQP_GRP1);
+ cnxk_sso_hws_swtag_desched(tag, new_tt, grp, ws->swtag_desched_op);
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_forward_event(struct cn9k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed; use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(ws->tag_op)) == grp) {
+ cn9k_sso_hws_fwd_swtag((struct cn9k_sso_hws_state *)ws, ev);
+ ws->swtag_req = 1;
+ } else {
+ /*
+ * Group has changed for group-based work pipelining;
+ * use the deschedule/add_work operation to transfer the
+ * event to the new group/core.
+ */
+ cn9k_sso_hws_fwd_group((struct cn9k_sso_hws_state *)ws, ev,
+ grp);
+ }
+}
+
+/* Dual ws ops. */
+
+static __rte_always_inline uint8_t
+cn9k_sso_hws_dual_new_event(struct cn9k_sso_hws_dual *dws,
+ const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (dws->xaq_lmt <= *dws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, dws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws,
+ struct cn9k_sso_hws_state *vws,
+ const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed; use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(vws->tag_op)) == grp) {
+ cn9k_sso_hws_fwd_swtag(vws, ev);
+ dws->swtag_req = 1;
+ } else {
+ /*
+ * Group has changed for group-based work pipelining;
+ * use the deschedule/add_work operation to transfer the
+ * event to the new group/core.
+ */
+ cn9k_sso_hws_fwd_group(vws, ev, grp);
+ }
+}
+
+static __rte_always_inline uint16_t
+cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws,
+ struct cn9k_sso_hws_state *ws_pair,
+ struct rte_event *ev)
+{
+ const uint64_t set_gw = BIT_ULL(16) | 1;
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ "rty%=: \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: str %[gw], [%[pong]] \n"
+ " dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op),
+ [gw] "r"(set_gw), [pong] "r"(ws_pair->getwrk_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+ gw.u64[1] = plt_read64(ws->wqp_op);
+ plt_write64(set_gw, ws_pair->getwrk_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+static __rte_always_inline uint16_t
+cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+ plt_write64(BIT_ULL(16) | /* wait for work. */
+ 1, /* Use Mask set 0. */
+ ws->getwrk_op);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+
+ gw.u64[1] = plt_read64(ws->wqp_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+/* Used when cleaning up the workslot. */
+static __rte_always_inline uint16_t
+cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+
+ gw.u64[1] = plt_read64(ws->wqp_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+#endif
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 437cdf3db..0a3ab71e4 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -29,6 +29,16 @@
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+#define CNXK_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY)
+#define CNXK_TT_FROM_EVENT(x) (((x) >> 38) & SSO_TT_EMPTY)
+#define CNXK_EVENT_TYPE_FROM_TAG(x) (((x) >> 28) & 0xf)
+#define CNXK_SUB_EVENT_FROM_TAG(x) (((x) >> 20) & 0xff)
+#define CNXK_CLR_SUB_EVENT(x) (~(0xffu << 20) & x)
+#define CNXK_GRP_FROM_TAG(x) (((x) >> 36) & 0x3ff)
+#define CNXK_SWTAG_PEND(x) (BIT_ULL(62) & x)
+
+#define CN9K_SSOW_GET_BASE_ADDR(_GW) ((_GW)-SSOW_LF_GWS_OP_GET_WORK0)
+
#define CN10K_GW_MODE_NONE 0
#define CN10K_GW_MODE_PREF 1
#define CN10K_GW_MODE_PREF_WFE 2
diff --git a/drivers/event/cnxk/cnxk_worker.h b/drivers/event/cnxk/cnxk_worker.h
new file mode 100644
index 000000000..4eb46ae16
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_worker.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_WORKER_H__
+#define __CNXK_WORKER_H__
+
+#include "cnxk_eventdev.h"
+
+/* SSO Operations */
+
+static __rte_always_inline void
+cnxk_sso_hws_add_work(const uint64_t event_ptr, const uint32_t tag,
+ const uint8_t new_tt, const uintptr_t grp_base)
+{
+ uint64_t add_work0;
+
+ add_work0 = tag | ((uint64_t)(new_tt) << 32);
+ roc_store_pair(add_work0, event_ptr, grp_base);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_desched(uint32_t tag, uint8_t new_tt, uint16_t grp,
+ uintptr_t swtag_desched_op)
+{
+ uint64_t val;
+
+ val = tag | ((uint64_t)(new_tt & 0x3) << 32) | ((uint64_t)grp << 34);
+ __atomic_store_n((uint64_t *)swtag_desched_op, val, __ATOMIC_RELEASE);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_norm(uint32_t tag, uint8_t new_tt, uintptr_t swtag_norm_op)
+{
+ uint64_t val;
+
+ val = tag | ((uint64_t)(new_tt & 0x3) << 32);
+ plt_write64(val, swtag_norm_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_untag(uintptr_t swtag_untag_op)
+{
+ plt_write64(0, swtag_untag_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_flush(uint64_t tag_op, uint64_t flush_op)
+{
+ if (CNXK_TT_FROM_TAG(plt_read64(tag_op)) == SSO_TT_EMPTY)
+ return;
+ plt_write64(0, flush_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_wait(uintptr_t tag_op)
+{
+#ifdef RTE_ARCH_ARM64
+ uint64_t swtp;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbz %[swtb], 62, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbnz %[swtb], 62, rty%= \n"
+ "done%=: \n"
+ : [swtb] "=&r"(swtp)
+ : [swtp_loc] "r"(tag_op));
+#else
+ /* Wait for the SWTAG/SWTAG_FULL operation */
+ while (plt_read64(tag_op) & BIT_ULL(62))
+ ;
+#endif
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_head_wait(uintptr_t tag_op)
+{
+#ifdef RTE_ARCH_ARM64
+ uint64_t swtp;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbz %[swtb], 35, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbnz %[swtb], 35, rty%= \n"
+ "done%=: \n"
+ : [swtb] "=&r"(swtp)
+ : [swtp_loc] "r"(tag_op));
+#else
+ /* Wait for the HEAD bit to be set */
+ while (plt_read64(tag_op) & BIT_ULL(35))
+ ;
+#endif
+}
+
+#endif
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 22eb28345..57b3f66ea 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -8,7 +8,9 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
subdir_done()
endif
-sources = files('cn10k_eventdev.c',
+sources = files('cn10k_worker.c',
+ 'cn10k_eventdev.c',
+ 'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
)
--
2.17.1
* [dpdk-dev] [PATCH v4 15/34] event/cnxk: add SSO GWS fastpath enqueue functions
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (13 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 14/34] event/cnxk: add SSO HW device operations pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 16/34] event/cnxk: add SSO GWS dequeue fastpath functions pbhagavatula
` (19 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO GWS fastpath event device enqueue functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 16 +++-
drivers/event/cnxk/cn10k_worker.c | 54 ++++++++++++++
drivers/event/cnxk/cn10k_worker.h | 12 +++
drivers/event/cnxk/cn9k_eventdev.c | 25 ++++++-
drivers/event/cnxk/cn9k_worker.c | 112 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_worker.h | 24 ++++++
6 files changed, 241 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 6522351ca..a1b44744b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -2,7 +2,9 @@
* Copyright(C) 2021 Marvell.
*/
+#include "cn10k_worker.h"
#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
static void
cn10k_init_hws_ops(struct cn10k_sso_hws *ws, uintptr_t base)
@@ -130,6 +132,16 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void
+cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
+{
+ PLT_SET_USED(event_dev);
+ event_dev->enqueue = cn10k_sso_hws_enq;
+ event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
+ event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
+ event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
+}
+
static void
cn10k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -276,8 +288,10 @@ cn10k_sso_init(struct rte_eventdev *event_dev)
event_dev->dev_ops = &cn10k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ cn10k_sso_fp_fns_set(event_dev);
return 0;
+ }
rc = cnxk_sso_init(event_dev);
if (rc < 0)
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 63b587301..9b5cb7be6 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -5,3 +5,57 @@
#include "cn10k_worker.h"
#include "cnxk_eventdev.h"
#include "cnxk_worker.h"
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq(void *port, const struct rte_event *ev)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn10k_sso_hws_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn10k_sso_hws_forward_event(ws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(ws->tag_wqe_op, ws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn10k_sso_hws_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn10k_sso_hws *ws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn10k_sso_hws_new_event(ws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ RTE_SET_USED(nb_events);
+ cn10k_sso_hws_forward_event(ws, ev);
+
+ return 1;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 04517055d..48158b320 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -148,4 +148,16 @@ cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev)
return !!gw.u64[1];
}
+/* CN10K Fastpath functions. */
+uint16_t __rte_hot cn10k_sso_hws_enq(void *port, const struct rte_event *ev);
+uint16_t __rte_hot cn10k_sso_hws_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
#endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 00c5565e7..61a4d0823 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -2,7 +2,9 @@
* Copyright(C) 2021 Marvell.
*/
+#include "cn9k_worker.h"
#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
#define CN9K_DUAL_WS_NB_WS 2
#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
@@ -150,6 +152,25 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void
+cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ event_dev->enqueue = cn9k_sso_hws_enq;
+ event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
+ event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
+ event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+
+ if (dev->dual_ws) {
+ event_dev->enqueue = cn9k_sso_hws_dual_enq;
+ event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
+ event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
+ event_dev->enqueue_forward_burst =
+ cn9k_sso_hws_dual_enq_fwd_burst;
+ }
+}
+
static void *
cn9k_sso_init_hws_mem(void *arg, uint8_t port_id)
{
@@ -349,8 +370,10 @@ cn9k_sso_init(struct rte_eventdev *event_dev)
event_dev->dev_ops = &cn9k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ cn9k_sso_fp_fns_set(event_dev);
return 0;
+ }
rc = cnxk_sso_init(event_dev);
if (rc < 0)
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index 836914163..538bc4b0b 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -5,3 +5,115 @@
#include "roc_api.h"
#include "cn9k_worker.h"
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq(void *port, const struct rte_event *ev)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn9k_sso_hws_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn9k_sso_hws_forward_event(ws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn9k_sso_hws_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws *ws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn9k_sso_hws_new_event(ws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ RTE_SET_USED(nb_events);
+ cn9k_sso_hws_forward_event(ws, ev);
+
+ return 1;
+}
+
+/* Dual ws ops. */
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq(void *port, const struct rte_event *ev)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ struct cn9k_sso_hws_state *vws;
+
+ vws = &dws->ws_state[!dws->vws];
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn9k_sso_hws_dual_new_event(dws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn9k_sso_hws_dual_forward_event(dws, vws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(vws->tag_op, vws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn9k_sso_hws_dual_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn9k_sso_hws_dual_new_event(dws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+
+ RTE_SET_USED(nb_events);
+ cn9k_sso_hws_dual_forward_event(dws, &dws->ws_state[!dws->vws], ev);
+
+ return 1;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 85be742c1..aa321d0e4 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -246,4 +246,28 @@ cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
return !!gw.u64[1];
}
+/* CN9K Fastpath functions. */
+uint16_t __rte_hot cn9k_sso_hws_enq(void *port, const struct rte_event *ev);
+uint16_t __rte_hot cn9k_sso_hws_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
+uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
+ const struct rte_event *ev);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
#endif
--
2.17.1
* [dpdk-dev] [PATCH v4 16/34] event/cnxk: add SSO GWS dequeue fastpath functions
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (14 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 15/34] event/cnxk: add SSO GWS fastpath enqueue functions pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 17/34] event/cnxk: add device start function pbhagavatula
` (18 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO GWS event dequeue fastpath functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 10 ++-
drivers/event/cnxk/cn10k_worker.c | 54 +++++++++++++
drivers/event/cnxk/cn10k_worker.h | 12 +++
drivers/event/cnxk/cn9k_eventdev.c | 15 ++++
drivers/event/cnxk/cn9k_worker.c | 117 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_worker.h | 24 ++++++
6 files changed, 231 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index a1b44744b..37a7c8a8e 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -135,11 +135,19 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
static void
cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
{
- PLT_SET_USED(event_dev);
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
event_dev->enqueue = cn10k_sso_hws_enq;
event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
+
+ event_dev->dequeue = cn10k_sso_hws_deq;
+ event_dev->dequeue_burst = cn10k_sso_hws_deq_burst;
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = cn10k_sso_hws_tmo_deq;
+ event_dev->dequeue_burst = cn10k_sso_hws_tmo_deq_burst;
+ }
}
static void
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 9b5cb7be6..e2aa534c6 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -59,3 +59,57 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+uint16_t __rte_hot
+cn10k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ RTE_SET_USED(timeout_ticks);
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
+ return 1;
+ }
+
+ return cn10k_sso_hws_get_work(ws, ev);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
+ uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn10k_sso_hws_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn10k_sso_hws *ws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
+ return ret;
+ }
+
+ ret = cn10k_sso_hws_get_work(ws, ev);
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
+ ret = cn10k_sso_hws_get_work(ws, ev);
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn10k_sso_hws_tmo_deq(port, ev, timeout_ticks);
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 48158b320..2f093a8dd 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -160,4 +160,16 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_tmo_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
#endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 61a4d0823..6ba3d1466 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -162,12 +162,27 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+ event_dev->dequeue = cn9k_sso_hws_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_deq_burst;
+ if (dev->deq_tmo_ns) {
+ event_dev->dequeue = cn9k_sso_hws_tmo_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_tmo_deq_burst;
+ }
+
if (dev->dual_ws) {
event_dev->enqueue = cn9k_sso_hws_dual_enq;
event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
event_dev->enqueue_forward_burst =
cn9k_sso_hws_dual_enq_fwd_burst;
+
+ event_dev->dequeue = cn9k_sso_hws_dual_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_dual_deq_burst;
+ if (dev->deq_tmo_ns) {
+ event_dev->dequeue = cn9k_sso_hws_dual_tmo_deq;
+ event_dev->dequeue_burst =
+ cn9k_sso_hws_dual_tmo_deq_burst;
+ }
}
}
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index 538bc4b0b..9ceacc98d 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -60,6 +60,60 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+uint16_t __rte_hot
+cn9k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ RTE_SET_USED(timeout_ticks);
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_op);
+ return 1;
+ }
+
+ return cn9k_sso_hws_get_work(ws, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
+ uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws *ws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_op);
+ return ret;
+ }
+
+ ret = cn9k_sso_hws_get_work(ws, ev);
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
+ ret = cn9k_sso_hws_get_work(ws, ev);
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_tmo_deq(port, ev, timeout_ticks);
+}
+
/* Dual ws ops. */
uint16_t __rte_hot
@@ -117,3 +171,66 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t gw;
+
+ RTE_SET_USED(timeout_ticks);
+ if (dws->swtag_req) {
+ dws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(dws->ws_state[!dws->vws].tag_op);
+ return 1;
+ }
+
+ gw = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ return gw;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_dual_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (dws->swtag_req) {
+ dws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(dws->ws_state[!dws->vws].tag_op);
+ return ret;
+ }
+
+ ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) {
+ ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ }
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_dual_tmo_deq(port, ev, timeout_ticks);
+}
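The tmo_deq variants above implement the dequeue timeout by busy-polling get_work: one unconditional attempt, then up to timeout_ticks - 1 retries while the queue stays empty. A minimal stand-alone sketch of that retry shape (mock_get_work() is a hypothetical stand-in for cn9k_sso_hws_get_work(), not a driver API):

```c
#include <stdint.h>

/* Hypothetical stand-in for cn9k_sso_hws_get_work(): returns 1 once
 * *budget attempts have been consumed, 0 (empty) before that. */
static uint16_t mock_get_work(uint64_t *budget)
{
	if (*budget == 0)
		return 1;
	(*budget)--;
	return 0;
}

/* Mirrors the cn9k_sso_hws_tmo_deq() retry shape: one unconditional
 * attempt, then up to timeout_ticks - 1 further attempts while empty. */
static uint16_t tmo_deq_retry(uint64_t *budget, uint64_t timeout_ticks)
{
	uint64_t iter;
	uint16_t ret;

	ret = mock_get_work(budget);
	for (iter = 1; iter < timeout_ticks && ret == 0; iter++)
		ret = mock_get_work(budget);

	return ret;
}
```

With a budget smaller than timeout_ticks the loop returns 1 (work found); with a larger budget the loop gives up after timeout_ticks attempts and returns 0, matching the non-blocking semantics of the real dequeue.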
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index aa321d0e4..38fca08fb 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -270,4 +270,28 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
+uint16_t __rte_hot cn9k_sso_hws_dual_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
#endif
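The dual-workslot dequeue above alternates between ws_state[0] and ws_state[1] by flipping dws->vws on every call, so getwork is always issued on the slot opposite to the one last used. A toy model of just that alternation (the struct and counters are illustrative, not driver state):

```c
#include <stdint.h>

/* Minimal model of the dual-workslot alternation: each dequeue issues
 * getwork on the active slot, then flips vws so the next call uses the
 * opposite slot, as cn9k_sso_hws_dual_deq() does. */
struct dual_ws_model {
	uint32_t issued[2];	/* getwork count per slot */
	uint8_t vws;		/* active slot index */
};

static void dual_deq_step(struct dual_ws_model *dws)
{
	dws->issued[dws->vws]++;
	dws->vws = !dws->vws;	/* alternate slots */
}
```

After an even number of steps both slots have issued the same number of getwork requests, which is the load-spreading property the dual workslot relies on.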
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v4 17/34] event/cnxk: add device start function
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (15 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 16/34] event/cnxk: add SSO GWS dequeue fastpath functions pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 18/34] event/cnxk: add device stop and close functions pbhagavatula
` (17 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add the eventdev start function along with a few cleanup APIs to
maintain sanity.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 127 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 113 +++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 64 ++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 7 ++
4 files changed, 311 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 37a7c8a8e..8d6b1e48a 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -112,6 +112,117 @@ cn10k_sso_hws_release(void *arg, void *hws)
memset(ws, 0, sizeof(*ws));
}
+static void
+cn10k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg)
+{
+ struct cn10k_sso_hws *ws = hws;
+ uint64_t cq_ds_cnt = 1;
+ uint64_t aq_cnt = 1;
+ uint64_t ds_cnt = 1;
+ struct rte_event ev;
+ uint64_t val, req;
+
+ plt_write64(0, base + SSO_LF_GGRP_QCTL);
+
+ req = queue_id; /* GGRP ID */
+ req |= BIT_ULL(18); /* Grouped */
+ req |= BIT_ULL(16); /* WAIT */
+
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+
+ while (aq_cnt || cq_ds_cnt || ds_cnt) {
+ plt_write64(req, ws->getwrk_op);
+ cn10k_sso_hws_get_work_empty(ws, &ev);
+ if (fn != NULL && ev.u64 != 0)
+ fn(arg, ev);
+ if (ev.sched_type != SSO_TT_EMPTY)
+ cnxk_sso_hws_swtag_flush(ws->tag_wqe_op,
+ ws->swtag_flush_op);
+ do {
+ val = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+ } while (val & BIT_ULL(56));
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ /* Extract cq and ds count */
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+ }
+
+ plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+ rte_mb();
+}
+
+static void
+cn10k_sso_hws_reset(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = hws;
+ uintptr_t base = ws->base;
+ uint64_t pend_state;
+ union {
+ __uint128_t wdata;
+ uint64_t u64[2];
+ } gw;
+ uint8_t pend_tt;
+
+ /* Wait till getwork/swtp/waitw/desched completes. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+ BIT_ULL(56) | BIT_ULL(54)));
+ pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(base +
+ SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+
+ /* Wait for desched to complete. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & BIT_ULL(58));
+
+ switch (dev->gw_mode) {
+ case CN10K_GW_MODE_PREF:
+ while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) & BIT_ULL(63))
+ ;
+ break;
+ case CN10K_GW_MODE_PREF_WFE:
+ while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) &
+ SSOW_LF_GWS_TAG_PEND_GET_WORK_BIT)
+ continue;
+ plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+ break;
+ case CN10K_GW_MODE_NONE:
+ default:
+ break;
+ }
+
+ if (CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_PRF_WQE0)) !=
+ SSO_TT_EMPTY) {
+ plt_write64(BIT_ULL(16) | 1, ws->getwrk_op);
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+ pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC ||
+ pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(
+ base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+ }
+
+ plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+ rte_mb();
+}
+
static void
cn10k_sso_set_rsrc(void *arg)
{
@@ -263,6 +374,20 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
return (int)nb_unlinks;
}
+static int
+cn10k_sso_start(struct rte_eventdev *event_dev)
+{
+ int rc;
+
+ rc = cnxk_sso_start(event_dev, cn10k_sso_hws_reset,
+ cn10k_sso_hws_flush_events);
+ if (rc < 0)
+ return rc;
+ cn10k_sso_fp_fns_set(event_dev);
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -275,6 +400,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_link = cn10k_sso_port_link,
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+
+ .dev_start = cn10k_sso_start,
};
static int
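Two magic values recur in cn10k_sso_hws_flush_events() above: the getwork request word written to getwrk_op (GGRP ID plus the Grouped and WAIT flag bits), and the 0x3FFF3FFF0000 mask that extracts the CQ and DS counters from SSO_LF_GGRP_INT_CNT. A self-contained sketch of both, with bit positions taken from the patch (the helper names are illustrative):

```c
#include <stdint.h>

#define BIT_ULL(n) (1ULL << (n))

/* Getwork request word as composed in the flush loop: GGRP ID in the
 * low bits, Grouped flag at bit 18, WAIT flag at bit 16. */
static uint64_t make_getwork_req(uint8_t queue_id)
{
	uint64_t req = queue_id;	/* GGRP ID */

	req |= BIT_ULL(18);		/* Grouped */
	req |= BIT_ULL(16);		/* WAIT */
	return req;
}

/* Mask applied to SSO_LF_GGRP_INT_CNT: keeps the two 14-bit CQ/DS
 * counter fields (bits 16..29 and 32..45), discarding the rest. */
static uint64_t extract_cq_ds(uint64_t int_cnt)
{
	return int_cnt & 0x3FFF3FFF0000ULL;
}
```

The flush loop keeps polling as long as either extracted counter (or the AQ/MISC counts read separately) is non-zero, i.e. while the group still holds undelivered work.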
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 6ba3d1466..20919e3a0 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -126,6 +126,102 @@ cn9k_sso_hws_release(void *arg, void *hws)
}
}
+static void
+cn9k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(arg);
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws_state *st;
+ struct cn9k_sso_hws *ws;
+ uint64_t cq_ds_cnt = 1;
+ uint64_t aq_cnt = 1;
+ uint64_t ds_cnt = 1;
+ struct rte_event ev;
+ uintptr_t ws_base;
+ uint64_t val, req;
+
+ plt_write64(0, base + SSO_LF_GGRP_QCTL);
+
+ req = queue_id; /* GGRP ID */
+ req |= BIT_ULL(18); /* Grouped */
+ req |= BIT_ULL(16); /* WAIT */
+
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+
+ if (dev->dual_ws) {
+ dws = hws;
+ st = &dws->ws_state[0];
+ ws_base = dws->base[0];
+ } else {
+ ws = hws;
+ st = (struct cn9k_sso_hws_state *)ws;
+ ws_base = ws->base;
+ }
+
+ while (aq_cnt || cq_ds_cnt || ds_cnt) {
+ plt_write64(req, st->getwrk_op);
+ cn9k_sso_hws_get_work_empty(st, &ev);
+ if (fn != NULL && ev.u64 != 0)
+ fn(arg, ev);
+ if (ev.sched_type != SSO_TT_EMPTY)
+ cnxk_sso_hws_swtag_flush(st->tag_op,
+ st->swtag_flush_op);
+ do {
+ val = plt_read64(ws_base + SSOW_LF_GWS_PENDSTATE);
+ } while (val & BIT_ULL(56));
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ /* Extract cq and ds count */
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+ }
+
+ plt_write64(0, ws_base + SSOW_LF_GWS_OP_GWC_INVAL);
+}
+
+static void
+cn9k_sso_hws_reset(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ uint64_t pend_state;
+ uint8_t pend_tt;
+ uintptr_t base;
+ uint64_t tag;
+ uint8_t i;
+
+ dws = hws;
+ ws = hws;
+ for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
+ base = dev->dual_ws ? dws->base[i] : ws->base;
+ /* Wait till getwork/swtp/waitw/desched completes. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+ BIT_ULL(56)));
+
+ tag = plt_read64(base + SSOW_LF_GWS_TAG);
+ pend_tt = (tag >> 32) & 0x3;
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC ||
+ pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(
+ base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+
+ /* Wait for desched to complete. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & BIT_ULL(58));
+ }
+}
+
static void
cn9k_sso_set_rsrc(void *arg)
{
@@ -352,6 +448,21 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
return (int)nb_unlinks;
}
+static int
+cn9k_sso_start(struct rte_eventdev *event_dev)
+{
+ int rc;
+
+ rc = cnxk_sso_start(event_dev, cn9k_sso_hws_reset,
+ cn9k_sso_hws_flush_events);
+ if (rc < 0)
+ return rc;
+
+ cn9k_sso_fp_fns_set(event_dev);
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -364,6 +475,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_link = cn9k_sso_port_link,
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+
+ .dev_start = cn9k_sso_start,
};
static int
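Both reset paths above spin on SSOW_LF_GWS_PENDSTATE until the pending getwork/swtp/waitw/desched operations drain. The busy condition reduces to a single mask test; sketched below with the bit positions used by cn9k_sso_hws_reset() (the helper is illustrative, and the bit-to-operation mapping follows the comment in the patch):

```c
#include <stdint.h>

#define BIT_ULL(n) (1ULL << (n))

/* Busy test from the cn9k reset loop: PENDSTATE bits 63, 62, 58 and 56
 * flag in-flight getwork/swtp/waitw/desched operations, per the
 * "Wait till getwork/swtp/waitw/desched completes" comment. */
static int hws_pend_busy(uint64_t pend_state)
{
	const uint64_t busy = BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
			      BIT_ULL(56);

	return (pend_state & busy) != 0;
}
```

The driver loops `do { pend_state = read(PENDSTATE); } while (hws_pend_busy(pend_state));` before it is safe to untag or deschedule pending work.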
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 2a387ff95..5feae5288 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -326,6 +326,70 @@ cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
return 0;
}
+static void
+cnxk_handle_event(void *arg, struct rte_event event)
+{
+ struct rte_eventdev *event_dev = arg;
+
+ if (event_dev->dev_ops->dev_stop_flush != NULL)
+ event_dev->dev_ops->dev_stop_flush(
+ event_dev->data->dev_id, event,
+ event_dev->data->dev_stop_flush_arg);
+}
+
+static void
+cnxk_sso_cleanup(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn, uint8_t enable)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uintptr_t hwgrp_base;
+ uint16_t i;
+ void *ws;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ ws = event_dev->data->ports[i];
+ reset_fn(dev, ws);
+ }
+
+ rte_mb();
+ ws = event_dev->data->ports[0];
+
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ /* Consume all the events through HWS0 */
+ hwgrp_base = roc_sso_hwgrp_base_get(&dev->sso, i);
+ flush_fn(ws, i, hwgrp_base, cnxk_handle_event, event_dev);
+ /* Enable/Disable SSO GGRP */
+ plt_write64(enable, hwgrp_base + SSO_LF_GGRP_QCTL);
+ }
+}
+
+int
+cnxk_sso_start(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_qos qos[dev->qos_queue_cnt];
+ int i, rc;
+
+ plt_sso_dbg();
+ for (i = 0; i < dev->qos_queue_cnt; i++) {
+ qos[i].hwgrp = dev->qos_parse_data[i].queue;
+ qos[i].iaq_prcnt = dev->qos_parse_data[i].iaq_prcnt;
+ qos[i].taq_prcnt = dev->qos_parse_data[i].taq_prcnt;
+ qos[i].xaq_prcnt = dev->qos_parse_data[i].xaq_prcnt;
+ }
+ rc = roc_sso_hwgrp_qos_config(&dev->sso, qos, dev->qos_queue_cnt,
+ dev->xae_cnt);
+ if (rc < 0) {
+ plt_sso_dbg("failed to configure HWGRP QoS rc = %d", rc);
+ return -EINVAL;
+ }
+ cnxk_sso_cleanup(event_dev, reset_fn, flush_fn, true);
+ rte_mb();
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
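cnxk_sso_start() above copies the parsed per-queue QoS percentages into a stack array before handing the whole array to roc_sso_hwgrp_qos_config(). The copy must index each element individually, otherwise every iteration would overwrite entry 0. A minimal sketch of that fill loop (struct and names simplified from the patch):

```c
#include <stdint.h>

/* Simplified stand-in for struct roc_sso_hwgrp_qos: one entry per
 * hardware group, holding its in-unit/transition/external admission
 * queue percentages. */
struct qos_entry {
	uint16_t hwgrp;
	uint8_t iaq_prcnt;
	uint8_t taq_prcnt;
	uint8_t xaq_prcnt;
};

/* Copy parsed per-queue percentages into the array, one element per
 * queue; note the per-element qos[i] indexing. */
static void fill_qos(struct qos_entry *qos, const uint16_t *queues,
		     uint8_t prcnt, int cnt)
{
	for (int i = 0; i < cnt; i++) {
		qos[i].hwgrp = queues[i];
		qos[i].iaq_prcnt = prcnt;
		qos[i].taq_prcnt = prcnt;
		qos[i].xaq_prcnt = prcnt;
	}
}
```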
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 0a3ab71e4..f175c23bb 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -48,6 +48,10 @@ typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
uint16_t nb_link);
+typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
+typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
+typedef void (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg);
struct cnxk_sso_qos {
uint16_t queue;
@@ -198,5 +202,8 @@ int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
cnxk_sso_hws_setup_t hws_setup_fn);
int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
uint64_t *tmo_ticks);
+int cnxk_sso_start(struct rte_eventdev *event_dev,
+ cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v4 18/34] event/cnxk: add device stop and close functions
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (16 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 17/34] event/cnxk: add device start function pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 19/34] event/cnxk: add SSO selftest and dump pbhagavatula
` (16 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event device stop and close callback functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 15 +++++++++
drivers/event/cnxk/cn9k_eventdev.c | 14 +++++++++
drivers/event/cnxk/cnxk_eventdev.c | 48 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 ++++
4 files changed, 83 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 8d6b1e48a..5b7cd672c 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -388,6 +388,19 @@ cn10k_sso_start(struct rte_eventdev *event_dev)
return rc;
}
+static void
+cn10k_sso_stop(struct rte_eventdev *event_dev)
+{
+ cnxk_sso_stop(event_dev, cn10k_sso_hws_reset,
+ cn10k_sso_hws_flush_events);
+}
+
+static int
+cn10k_sso_close(struct rte_eventdev *event_dev)
+{
+ return cnxk_sso_close(event_dev, cn10k_sso_hws_unlink);
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -402,6 +415,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.timeout_ticks = cnxk_sso_timeout_ticks,
.dev_start = cn10k_sso_start,
+ .dev_stop = cn10k_sso_stop,
+ .dev_close = cn10k_sso_close,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 20919e3a0..f13f50f42 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -463,6 +463,18 @@ cn9k_sso_start(struct rte_eventdev *event_dev)
return rc;
}
+static void
+cn9k_sso_stop(struct rte_eventdev *event_dev)
+{
+ cnxk_sso_stop(event_dev, cn9k_sso_hws_reset, cn9k_sso_hws_flush_events);
+}
+
+static int
+cn9k_sso_close(struct rte_eventdev *event_dev)
+{
+ return cnxk_sso_close(event_dev, cn9k_sso_hws_unlink);
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -477,6 +489,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.timeout_ticks = cnxk_sso_timeout_ticks,
.dev_start = cn9k_sso_start,
+ .dev_stop = cn9k_sso_stop,
+ .dev_close = cn9k_sso_close,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 5feae5288..a3900315a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -390,6 +390,54 @@ cnxk_sso_start(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
return 0;
}
+void
+cnxk_sso_stop(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn)
+{
+ plt_sso_dbg();
+ cnxk_sso_cleanup(event_dev, reset_fn, flush_fn, false);
+ rte_mb();
+}
+
+int
+cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
+ uint16_t i;
+ void *ws;
+
+ if (!dev->configured)
+ return 0;
+
+ for (i = 0; i < dev->nb_event_queues; i++)
+ all_queues[i] = i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ ws = event_dev->data->ports[i];
+ unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+ rte_free(cnxk_sso_hws_get_cookie(ws));
+ event_dev->data->ports[i] = NULL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+ rte_mempool_free(dev->xaq_pool);
+ rte_memzone_free(rte_memzone_lookup(CNXK_SSO_FC_NAME));
+
+ dev->fc_iova = 0;
+ dev->fc_mem = NULL;
+ dev->xaq_pool = NULL;
+ dev->configured = false;
+ dev->is_timeout_deq = 0;
+ dev->nb_event_ports = 0;
+ dev->max_num_events = -1;
+ dev->nb_event_queues = 0;
+ dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+ dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index f175c23bb..3011af153 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -48,6 +48,8 @@ typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
uint16_t nb_link);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
+ uint16_t nb_link);
typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
typedef void (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
@@ -205,5 +207,9 @@ int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
int cnxk_sso_start(struct rte_eventdev *event_dev,
cnxk_sso_hws_reset_t reset_fn,
cnxk_sso_hws_flush_t flush_fn);
+void cnxk_sso_stop(struct rte_eventdev *event_dev,
+ cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn);
+int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
#endif /* __CNXK_EVENTDEV_H__ */
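cnxk_sso_close() above restores the dequeue-timeout bounds via USEC2NSEC(1) and USEC2NSEC(0x3FF). Assuming USEC2NSEC is the usual microsecond-to-nanosecond conversion (its definition is outside this patch), the restored defaults work out as follows:

```c
#include <stdint.h>

/* Assumed definition of USEC2NSEC: microseconds to nanoseconds.
 * The 0x3FF (1023 us) upper bound suggests a 10-bit hardware
 * timeout field, but that is an inference, not stated in the patch. */
static uint64_t usec2nsec(uint64_t us)
{
	return us * 1000ULL;
}
```

So after close, min_dequeue_timeout_ns is 1000 ns and max_dequeue_timeout_ns is 1023000 ns, the same defaults a freshly probed device advertises.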
--
2.17.1
* [dpdk-dev] [PATCH v4 19/34] event/cnxk: add SSO selftest and dump
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (17 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 18/34] event/cnxk: add device stop and close functions pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 20/34] event/cnxk: add event port and queue xstats pbhagavatula
` (15 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a selftest to verify the sanity of SSO, and a function to
dump its internal state.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 14 +
drivers/event/cnxk/cn10k_eventdev.c | 8 +
drivers/event/cnxk/cn9k_eventdev.c | 10 +-
drivers/event/cnxk/cnxk_eventdev.c | 8 +
drivers/event/cnxk/cnxk_eventdev.h | 5 +
drivers/event/cnxk/cnxk_eventdev_selftest.c | 1570 +++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
7 files changed, 1615 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_selftest.c
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index bcfaa53cb..843d9766b 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1036,6 +1036,18 @@ test_eventdev_selftest_dlb2(void)
return test_eventdev_selftest_impl("dlb2_event", "");
}
+static int
+test_eventdev_selftest_cn9k(void)
+{
+ return test_eventdev_selftest_impl("event_cn9k", "");
+}
+
+static int
+test_eventdev_selftest_cn10k(void)
+{
+ return test_eventdev_selftest_impl("event_cn10k", "");
+}
+
REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw);
REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
@@ -1044,3 +1056,5 @@ REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
test_eventdev_selftest_octeontx2);
REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
+REGISTER_TEST_COMMAND(eventdev_selftest_cn9k, test_eventdev_selftest_cn9k);
+REGISTER_TEST_COMMAND(eventdev_selftest_cn10k, test_eventdev_selftest_cn10k);
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 5b7cd672c..a0c6d32cc 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -401,6 +401,12 @@ cn10k_sso_close(struct rte_eventdev *event_dev)
return cnxk_sso_close(event_dev, cn10k_sso_hws_unlink);
}
+static int
+cn10k_sso_selftest(void)
+{
+ return cnxk_sso_selftest(RTE_STR(event_cn10k));
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -414,9 +420,11 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .dump = cnxk_sso_dump,
.dev_start = cn10k_sso_start,
.dev_stop = cn10k_sso_stop,
.dev_close = cn10k_sso_close,
+ .dev_selftest = cn10k_sso_selftest,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index f13f50f42..48991e522 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -222,7 +222,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
}
}
-static void
+void
cn9k_sso_set_rsrc(void *arg)
{
struct cnxk_sso_evdev *dev = arg;
@@ -475,6 +475,12 @@ cn9k_sso_close(struct rte_eventdev *event_dev)
return cnxk_sso_close(event_dev, cn9k_sso_hws_unlink);
}
+static int
+cn9k_sso_selftest(void)
+{
+ return cnxk_sso_selftest(RTE_STR(event_cn9k));
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -488,9 +494,11 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .dump = cnxk_sso_dump,
.dev_start = cn9k_sso_start,
.dev_stop = cn9k_sso_stop,
.dev_close = cn9k_sso_close,
+ .dev_selftest = cn9k_sso_selftest,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index a3900315a..0f084176c 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -326,6 +326,14 @@ cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
return 0;
}
+void
+cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ roc_sso_dump(&dev->sso, dev->sso.nb_hws, dev->sso.nb_hwgrp, f);
+}
+
static void
cnxk_handle_event(void *arg, struct rte_event event)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 3011af153..9af04bc3d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -211,5 +211,10 @@ void cnxk_sso_stop(struct rte_eventdev *event_dev,
cnxk_sso_hws_reset_t reset_fn,
cnxk_sso_hws_flush_t flush_fn);
int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
+int cnxk_sso_selftest(const char *dev_name);
+void cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f);
+
+/* CN9K */
+void cn9k_sso_set_rsrc(void *arg);
#endif /* __CNXK_EVENTDEV_H__ */
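The selftest file added below stamps a per-event sequence number into each mbuf and later verifies ordered delivery with seqn_list_check(): the run passes only if the recorded numbers form 0, 1, ..., limit-1 in order. That check reduces to:

```c
/* Ordered-delivery check, mirroring seqn_list_check() in the selftest:
 * returns 0 when seqn[] holds 0..limit-1 in order, -1 otherwise. */
static int seqn_check(const int *seqn, int limit)
{
	int i;

	for (i = 0; i < limit; i++)
		if (seqn[i] != i)
			return -1;
	return 0;
}
```

Any out-of-order or dropped event breaks the 0..limit-1 run, which is how the atomic/ordered scheduling tests detect a misbehaving SSO.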
diff --git a/drivers/event/cnxk/cnxk_eventdev_selftest.c b/drivers/event/cnxk/cnxk_eventdev_selftest.c
new file mode 100644
index 000000000..69c15b1d0
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_selftest.c
@@ -0,0 +1,1570 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_hexdump.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_memcpy.h>
+#include <rte_per_lcore.h>
+#include <rte_random.h>
+#include <rte_test.h>
+
+#include "cnxk_eventdev.h"
+
+#define NUM_PACKETS (1024)
+#define MAX_EVENTS (1024)
+#define MAX_STAGES (255)
+
+#define CNXK_TEST_RUN(setup, teardown, test) \
+ cnxk_test_run(setup, teardown, test, #test)
+
+static int total;
+static int passed;
+static int failed;
+static int unsupported;
+
+static int evdev;
+static struct rte_mempool *eventdev_test_mempool;
+
+struct event_attr {
+ uint32_t flow_id;
+ uint8_t event_type;
+ uint8_t sub_event_type;
+ uint8_t sched_type;
+ uint8_t queue;
+ uint8_t port;
+};
+
+static uint32_t seqn_list_index;
+static int seqn_list[NUM_PACKETS];
+
+static inline void
+seqn_list_init(void)
+{
+ RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
+ memset(seqn_list, 0, sizeof(seqn_list));
+ seqn_list_index = 0;
+}
+
+static inline int
+seqn_list_update(int val)
+{
+ if (seqn_list_index >= NUM_PACKETS)
+ return -1;
+
+ seqn_list[seqn_list_index++] = val;
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ return 0;
+}
+
+static inline int
+seqn_list_check(int limit)
+{
+ int i;
+
+ for (i = 0; i < limit; i++) {
+ if (seqn_list[i] != i) {
+ plt_err("Seqn mismatch %d %d", seqn_list[i], i);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+struct test_core_param {
+ uint32_t *total_events;
+ uint64_t dequeue_tmo_ticks;
+ uint8_t port;
+ uint8_t sched_type;
+};
+
+static int
+testsuite_setup(const char *eventdev_name)
+{
+ evdev = rte_event_dev_get_dev_id(eventdev_name);
+ if (evdev < 0) {
+ plt_err("%d: Eventdev %s not found", __LINE__, eventdev_name);
+ return -1;
+ }
+ return 0;
+}
+
+static void
+testsuite_teardown(void)
+{
+ rte_event_dev_close(evdev);
+ total = 0;
+ passed = 0;
+ failed = 0;
+ unsupported = 0;
+}
+
+static inline void
+devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
+ struct rte_event_dev_info *info)
+{
+ memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
+ dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
+ dev_conf->nb_event_ports = info->max_event_ports;
+ dev_conf->nb_event_queues = info->max_event_queues;
+ dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
+ dev_conf->nb_event_port_dequeue_depth =
+ info->max_event_port_dequeue_depth;
+ dev_conf->nb_event_port_enqueue_depth =
+ info->max_event_port_enqueue_depth;
+ dev_conf->nb_events_limit = info->max_num_events;
+}
+
+enum {
+ TEST_EVENTDEV_SETUP_DEFAULT,
+ TEST_EVENTDEV_SETUP_PRIORITY,
+ TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT,
+};
+
+static inline int
+_eventdev_setup(int mode)
+{
+ const char *pool_name = "evdev_cnxk_test_pool";
+ struct rte_event_dev_config dev_conf;
+ struct rte_event_dev_info info;
+ int i, ret;
+
+ /* Create and destroy a pool for each test case to make it standalone */
+ eventdev_test_mempool = rte_pktmbuf_pool_create(
+ pool_name, MAX_EVENTS, 0, 0, 512, rte_socket_id());
+ if (!eventdev_test_mempool) {
+ plt_err("ERROR creating mempool");
+ return -1;
+ }
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+ devconf_set_default_sane_values(&dev_conf, &info);
+ if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
+ dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
+
+ ret = rte_event_dev_configure(evdev, &dev_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+ uint32_t queue_count;
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
+ if (queue_count > 8)
+ queue_count = 8;
+
+ /* Configure event queues(0 to n) with
+ * RTE_EVENT_DEV_PRIORITY_HIGHEST to
+ * RTE_EVENT_DEV_PRIORITY_LOWEST
+ */
+ uint8_t step =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) / queue_count;
+ for (i = 0; i < (int)queue_count; i++) {
+ struct rte_event_queue_conf queue_conf;
+
+ ret = rte_event_queue_default_conf_get(evdev, i,
+ &queue_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
+ i);
+ queue_conf.priority = i * step;
+ ret = rte_event_queue_setup(evdev, i, &queue_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+ i);
+ }
+
+ } else {
+ /* Configure event queues with default priority */
+ for (i = 0; i < (int)queue_count; i++) {
+ ret = rte_event_queue_setup(evdev, i, NULL);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+ i);
+ }
+ }
+ /* Configure event ports */
+ uint32_t port_count;
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &port_count),
+ "Port count get failed");
+ for (i = 0; i < (int)port_count; i++) {
+ ret = rte_event_port_setup(evdev, i, NULL);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
+ ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
+ i);
+ }
+
+ ret = rte_event_dev_start(evdev);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");
+
+ return 0;
+}
+
+static inline int
+eventdev_setup(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT);
+}
+
+static inline int
+eventdev_setup_priority(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_PRIORITY);
+}
+
+static inline int
+eventdev_setup_dequeue_timeout(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT);
+}
+
+static inline void
+eventdev_teardown(void)
+{
+ rte_event_dev_stop(evdev);
+ rte_mempool_free(eventdev_test_mempool);
+}
+
+static inline void
+update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev,
+ uint32_t flow_id, uint8_t event_type,
+ uint8_t sub_event_type, uint8_t sched_type,
+ uint8_t queue, uint8_t port)
+{
+ struct event_attr *attr;
+
+ /* Store the event attributes in mbuf for future reference */
+ attr = rte_pktmbuf_mtod(m, struct event_attr *);
+ attr->flow_id = flow_id;
+ attr->event_type = event_type;
+ attr->sub_event_type = sub_event_type;
+ attr->sched_type = sched_type;
+ attr->queue = queue;
+ attr->port = port;
+
+ ev->flow_id = flow_id;
+ ev->sub_event_type = sub_event_type;
+ ev->event_type = event_type;
+ /* Inject the new event */
+ ev->op = RTE_EVENT_OP_NEW;
+ ev->sched_type = sched_type;
+ ev->queue_id = queue;
+ ev->mbuf = m;
+}
+
+static inline int
+inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
+ uint8_t sched_type, uint8_t queue, uint8_t port,
+ unsigned int events)
+{
+ struct rte_mbuf *m;
+ unsigned int i;
+
+ for (i = 0; i < events; i++) {
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+ *rte_event_pmd_selftest_seqn(m) = i;
+ update_event_and_validation_attr(m, &ev, flow_id, event_type,
+ sub_event_type, sched_type,
+ queue, port);
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ return 0;
+}
+
+static inline int
+check_excess_events(uint8_t port)
+{
+ uint16_t valid_event;
+ struct rte_event ev;
+ int i;
+
+ /* Check for excess events, try for a few times and exit */
+ for (i = 0; i < 32; i++) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+
+ RTE_TEST_ASSERT_SUCCESS(valid_event,
+ "Unexpected valid event=%d",
+ *rte_event_pmd_selftest_seqn(ev.mbuf));
+ }
+ return 0;
+}
+
+static inline int
+generate_random_events(const unsigned int total_events)
+{
+ struct rte_event_dev_info info;
+ uint32_t queue_count;
+ unsigned int i;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+ for (i = 0; i < total_events; i++) {
+ ret = inject_events(
+ rte_rand() % info.max_event_queue_flows /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ rte_rand() % queue_count /* queue */, 0 /* port */,
+ 1 /* events */);
+ if (ret)
+ return -1;
+ }
+ return ret;
+}
+
+static inline int
+validate_event(struct rte_event *ev)
+{
+ struct event_attr *attr;
+
+ attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
+ RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
+ "flow_id mismatch enq=%d deq =%d", attr->flow_id,
+ ev->flow_id);
+ RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
+ "event_type mismatch enq=%d deq =%d",
+ attr->event_type, ev->event_type);
+ RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
+ "sub_event_type mismatch enq=%d deq =%d",
+ attr->sub_event_type, ev->sub_event_type);
+ RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
+ "sched_type mismatch enq=%d deq =%d",
+ attr->sched_type, ev->sched_type);
+ RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
+ "queue mismatch enq=%d deq =%d", attr->queue,
+ ev->queue_id);
+ return 0;
+}
+
+typedef int (*validate_event_cb)(uint32_t index, uint8_t port,
+ struct rte_event *ev);
+
+static inline int
+consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
+{
+ uint32_t events = 0, forward_progress_cnt = 0, index = 0;
+ uint16_t valid_event;
+ struct rte_event ev;
+ int ret;
+
+ while (1) {
+ if (++forward_progress_cnt > UINT16_MAX) {
+ plt_err("Detected deadlock");
+ return -1;
+ }
+
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ forward_progress_cnt = 0;
+ ret = validate_event(&ev);
+ if (ret)
+ return -1;
+
+ if (fn != NULL) {
+ ret = fn(index, port, &ev);
+ RTE_TEST_ASSERT_SUCCESS(
+ ret, "Failed to validate test specific event");
+ }
+
+ ++index;
+
+ rte_pktmbuf_free(ev.mbuf);
+ if (++events >= total_events)
+ break;
+ }
+
+ return check_excess_events(port);
+}
+
+static int
+validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
+{
+ RTE_SET_USED(port);
+ RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf),
+ "index=%d != seqn=%d", index,
+ *rte_event_pmd_selftest_seqn(ev->mbuf));
+ return 0;
+}
+
+static inline int
+test_simple_enqdeq(uint8_t sched_type)
+{
+ int ret;
+
+ ret = inject_events(0 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type */, sched_type, 0 /* queue */,
+ 0 /* port */, MAX_EVENTS);
+ if (ret)
+ return -1;
+
+ return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
+}
+
+static int
+test_simple_enqdeq_ordered(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_simple_enqdeq_atomic(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_simple_enqdeq_parallel(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL);
+}
+
+/*
+ * Generate a prescribed number of events and spread them across available
+ * queues. On dequeue, verify the enqueued event attributes using a single
+ * event port (port 0)
+ */
+static int
+test_multi_queue_enq_single_port_deq(void)
+{
+ int ret;
+
+ ret = generate_random_events(MAX_EVENTS);
+ if (ret)
+ return -1;
+
+ return consume_events(0 /* port */, MAX_EVENTS, NULL);
+}
+
+/*
+ * Inject 0..MAX_EVENTS events over 0..queue_count with modulus
+ * operation
+ *
+ * For example, Inject 32 events over 0..7 queues
+ * enqueue events 0, 8, 16, 24 in queue 0
+ * enqueue events 1, 9, 17, 25 in queue 1
+ * ..
+ * ..
+ * enqueue events 7, 15, 23, 31 in queue 7
+ *
+ * On dequeue, validate that the events arrive in 0,8,16,24,1,9,17,25..,7,15,23,31
+ * order from queue 0 (highest priority) to queue 7 (lowest priority)
+ */
+static int
+validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
+{
+ uint32_t queue_count;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count > 8)
+ queue_count = 8;
+ uint32_t range = MAX_EVENTS / queue_count;
+ uint32_t expected_val = (index % range) * queue_count;
+
+ expected_val += ev->queue_id;
+ RTE_SET_USED(port);
+ RTE_TEST_ASSERT_EQUAL(
+ *rte_event_pmd_selftest_seqn(ev->mbuf), expected_val,
+ "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
+ *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val,
+ range, queue_count, MAX_EVENTS);
+ return 0;
+}
+
+static int
+test_multi_queue_priority(void)
+{
+ int i, max_evts_roundoff;
+ /* See validate_queue_priority() comments for priority validate logic */
+ uint32_t queue_count;
+ struct rte_mbuf *m;
+ uint8_t queue;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count > 8)
+ queue_count = 8;
+ max_evts_roundoff = MAX_EVENTS / queue_count;
+ max_evts_roundoff *= queue_count;
+
+ for (i = 0; i < max_evts_roundoff; i++) {
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+ *rte_event_pmd_selftest_seqn(m) = i;
+ queue = i % queue_count;
+ update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,
+ 0, RTE_SCHED_TYPE_PARALLEL,
+ queue, 0);
+ rte_event_enqueue_burst(evdev, 0, &ev, 1);
+ }
+
+ return consume_events(0, max_evts_roundoff, validate_queue_priority);
+}
+
+static int
+worker_multi_port_fn(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint32_t *total_events = param->total_events;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+ int ret;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ ret = validate_event(&ev);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ }
+
+ return 0;
+}
+
+static inline int
+wait_workers_to_join(const uint32_t *count)
+{
+ uint64_t cycles, print_cycles;
+
+ cycles = rte_get_timer_cycles();
+ print_cycles = cycles;
+ while (__atomic_load_n(count, __ATOMIC_RELAXED)) {
+ uint64_t new_cycles = rte_get_timer_cycles();
+
+ if (new_cycles - print_cycles > rte_get_timer_hz()) {
+ plt_info("Events %d",
+ __atomic_load_n(count, __ATOMIC_RELAXED));
+ print_cycles = new_cycles;
+ }
+ if (new_cycles - cycles > rte_get_timer_hz() * 10) {
+ plt_err("No schedules for seconds, deadlock (%d)",
+ __atomic_load_n(count, __ATOMIC_RELAXED));
+ rte_event_dev_dump(evdev, stdout);
+ cycles = new_cycles;
+ return -1;
+ }
+ }
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+static inline int
+launch_workers_and_wait(int (*main_thread)(void *),
+ int (*worker_thread)(void *), uint32_t total_events,
+ uint8_t nb_workers, uint8_t sched_type)
+{
+ uint32_t atomic_total_events;
+ struct test_core_param *param;
+ uint64_t dequeue_tmo_ticks;
+ uint8_t port = 0;
+ int w_lcore;
+ int ret;
+
+ if (!nb_workers)
+ return 0;
+
+ __atomic_store_n(&atomic_total_events, total_events, __ATOMIC_RELAXED);
+ seqn_list_init();
+
+ param = malloc(sizeof(struct test_core_param) * nb_workers);
+ if (!param)
+ return -1;
+
+ ret = rte_event_dequeue_timeout_ticks(
+ evdev, rte_rand() % 10000000 /* 10ms */, &dequeue_tmo_ticks);
+ if (ret) {
+ free(param);
+ return -1;
+ }
+
+ param[0].total_events = &atomic_total_events;
+ param[0].sched_type = sched_type;
+ param[0].port = 0;
+ param[0].dequeue_tmo_ticks = dequeue_tmo_ticks;
+ rte_wmb();
+
+ w_lcore = rte_get_next_lcore(
+ /* start core */ -1,
+ /* skip main */ 1,
+ /* wrap */ 0);
+ rte_eal_remote_launch(main_thread, &param[0], w_lcore);
+
+ for (port = 1; port < nb_workers; port++) {
+ param[port].total_events = &atomic_total_events;
+ param[port].sched_type = sched_type;
+ param[port].port = port;
+ param[port].dequeue_tmo_ticks = dequeue_tmo_ticks;
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ w_lcore = rte_get_next_lcore(w_lcore, 1, 0);
+ rte_eal_remote_launch(worker_thread, &param[port], w_lcore);
+ }
+
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ ret = wait_workers_to_join(&atomic_total_events);
+ free(param);
+
+ return ret;
+}
+
+/*
+ * Generate a prescribed number of events and spread them across available
+ * queues. Dequeue the events through multiple ports and verify the enqueued
+ * event attributes
+ */
+static int
+test_multi_queue_enq_multi_port_deq(void)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t nr_ports;
+ int ret;
+
+ ret = generate_random_events(total_events);
+ if (ret)
+ return -1;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ return launch_workers_and_wait(worker_multi_port_fn,
+ worker_multi_port_fn, total_events,
+ nr_ports, 0xff /* invalid */);
+}
+
+static void
+flush(uint8_t dev_id, struct rte_event event, void *arg)
+{
+ unsigned int *count = arg;
+
+ RTE_SET_USED(dev_id);
+ if (event.event_type == RTE_EVENT_TYPE_CPU)
+ *count = *count + 1;
+}
+
+static int
+test_dev_stop_flush(void)
+{
+ unsigned int total_events = MAX_EVENTS, count = 0;
+ int ret;
+
+ ret = generate_random_events(total_events);
+ if (ret)
+ return -1;
+
+ ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count);
+ if (ret)
+ return -2;
+ rte_event_dev_stop(evdev);
+ ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL);
+ if (ret)
+ return -3;
+ RTE_TEST_ASSERT_EQUAL(total_events, count,
+ "count mismatch total_events=%d count=%d",
+ total_events, count);
+
+ return 0;
+}
+
+static int
+validate_queue_to_port_single_link(uint32_t index, uint8_t port,
+ struct rte_event *ev)
+{
+ RTE_SET_USED(index);
+ RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
+ "queue mismatch enq=%d deq =%d", port,
+ ev->queue_id);
+
+ return 0;
+}
+
+/*
+ * Link queue x to port x and check correctness of link by checking
+ * queue_id == x on dequeue on the specific port x
+ */
+static int
+test_queue_to_port_single_link(void)
+{
+ int i, nr_links, ret;
+ uint32_t queue_count;
+ uint32_t port_count;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &port_count),
+ "Port count get failed");
+
+ /* Unlink all connections that were created in eventdev_setup */
+ for (i = 0; i < (int)port_count; i++) {
+ ret = rte_event_port_unlink(evdev, i, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+ i);
+ }
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ nr_links = RTE_MIN(port_count, queue_count);
+ const unsigned int total_events = MAX_EVENTS / nr_links;
+
+ /* Link queue x to port x and inject events to queue x through port x */
+ for (i = 0; i < nr_links; i++) {
+ uint8_t queue = (uint8_t)i;
+
+ ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
+ RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
+
+ ret = inject_events(0x100 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ queue /* queue */, i /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+ }
+
+ /* Verify that the events were generated from the correct queue */
+ for (i = 0; i < nr_links; i++) {
+ ret = consume_events(i /* port */, total_events,
+ validate_queue_to_port_single_link);
+ if (ret)
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
+ struct rte_event *ev)
+{
+ RTE_SET_USED(index);
+ RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
+ "queue mismatch enq=%d deq =%d", port,
+ ev->queue_id);
+
+ return 0;
+}
+
+/*
+ * Link all even number of queues to port 0 and all odd number of queues to
+ * port 1 and verify the link connection on dequeue
+ */
+static int
+test_queue_to_port_multi_link(void)
+{
+ int ret, port0_events = 0, port1_events = 0;
+ uint32_t nr_queues = 0;
+ uint32_t nr_ports = 0;
+ uint8_t queue, port;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &nr_queues),
+ "Queue count get failed");
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+
+ if (nr_ports < 2) {
+ plt_err("Not enough ports to test ports=%d", nr_ports);
+ return 0;
+ }
+
+ /* Unlink all connections that were created in eventdev_setup */
+ for (port = 0; port < nr_ports; port++) {
+ ret = rte_event_port_unlink(evdev, port, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+ port);
+ }
+
+ unsigned int total_events = MAX_EVENTS / nr_queues;
+ if (!total_events) {
+ nr_queues = MAX_EVENTS;
+ total_events = MAX_EVENTS / nr_queues;
+ }
+
+ /* Link all even number of queues to port0 and odd numbers to port 1*/
+ for (queue = 0; queue < nr_queues; queue++) {
+ port = queue & 0x1;
+ ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
+ RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
+ queue, port);
+
+ ret = inject_events(0x100 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ queue /* queue */, port /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+
+ if (port == 0)
+ port0_events += total_events;
+ else
+ port1_events += total_events;
+ }
+
+ ret = consume_events(0 /* port */, port0_events,
+ validate_queue_to_port_multi_link);
+ if (ret)
+ return -1;
+ ret = consume_events(1 /* port */, port1_events,
+ validate_queue_to_port_multi_link);
+ if (ret)
+ return -1;
+
+ return 0;
+}
+
+static int
+worker_flow_based_pipeline(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
+ uint32_t *total_events = param->total_events;
+ uint8_t new_sched_type = param->sched_type;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
+ dequeue_tmo_ticks);
+ if (!valid_event)
+ continue;
+
+ /* Events from stage 0 */
+ if (ev.sub_event_type == 0) {
+ /* Move to atomic flow to maintain the ordering */
+ ev.flow_id = 0x2;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sub_event_type = 1; /* stage 1 */
+ ev.sched_type = new_sched_type;
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ } else if (ev.sub_event_type == 1) { /* Events from stage 1*/
+ uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
+
+ if (seqn_list_update(seqn) == 0) {
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1,
+ __ATOMIC_RELAXED);
+ } else {
+ plt_err("Failed to update seqn_list");
+ return -1;
+ }
+ } else {
+ plt_err("Invalid ev.sub_event_type = %d",
+ ev.sub_event_type);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int
+test_multiport_flow_sched_type_test(uint8_t in_sched_type,
+ uint8_t out_sched_type)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Injects events with a 0 sequence number to total_events */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */, in_sched_type, 0 /* queue */,
+ 0 /* port */, total_events /* events */);
+ if (ret)
+ return -1;
+
+ rte_mb();
+ ret = launch_workers_and_wait(worker_flow_based_pipeline,
+ worker_flow_based_pipeline, total_events,
+ nr_ports, out_sched_type);
+ if (ret)
+ return -1;
+
+ if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
+ out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
+ /* Check whether event ordering was maintained */
+ return seqn_list_check(total_events);
+ }
+
+ return 0;
+}
+
+/* Multi port ordered to atomic transaction */
+static int
+test_multi_port_flow_ordered_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_ordered_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_ordered_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_flow_atomic_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_atomic_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_atomic_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_flow_parallel_to_atomic(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_parallel_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_parallel_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+worker_group_based_pipeline(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
+ uint32_t *total_events = param->total_events;
+ uint8_t new_sched_type = param->sched_type;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
+ dequeue_tmo_ticks);
+ if (!valid_event)
+ continue;
+
+ /* Events from stage 0(group 0) */
+ if (ev.queue_id == 0) {
+ /* Move to atomic flow to maintain the ordering */
+ ev.flow_id = 0x2;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sched_type = new_sched_type;
+ ev.queue_id = 1; /* Stage 1*/
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/
+ uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
+
+ if (seqn_list_update(seqn) == 0) {
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1,
+ __ATOMIC_RELAXED);
+ } else {
+ plt_err("Failed to update seqn_list");
+ return -1;
+ }
+ } else {
+ plt_err("Invalid ev.queue_id = %d", ev.queue_id);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int
+test_multiport_queue_sched_type_test(uint8_t in_sched_type,
+ uint8_t out_sched_type)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t queue_count;
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count < 2 || !nr_ports) {
+ plt_err("Not enough queues=%d ports=%d or workers=%d",
+ queue_count, nr_ports, rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Injects events with a 0 sequence number to total_events */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */, in_sched_type, 0 /* queue */,
+ 0 /* port */, total_events /* events */);
+ if (ret)
+ return -1;
+
+ ret = launch_workers_and_wait(worker_group_based_pipeline,
+ worker_group_based_pipeline, total_events,
+ nr_ports, out_sched_type);
+ if (ret)
+ return -1;
+
+ if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
+ out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
+ /* Check whether event ordering was maintained */
+ return seqn_list_check(total_events);
+ }
+
+ return 0;
+}
+
+static int
+test_multi_port_queue_ordered_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_ordered_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_ordered_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_queue_atomic_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_atomic_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_atomic_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_queue_parallel_to_atomic(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_parallel_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_parallel_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+worker_flow_based_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint32_t *total_events = param->total_events;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.sub_event_type == MAX_STAGES) { /* last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sub_event_type++;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+static int
+launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
+{
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Injects events with a 0 sequence number to total_events */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1) /* sched_type */,
+ 0 /* queue */, 0 /* port */, MAX_EVENTS /* events */);
+ if (ret)
+ return -1;
+
+ return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
+ 0xff /* invalid */);
+}
+
+/* Flow based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_flow_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_flow_based_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ uint32_t queue_count;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ uint8_t nr_queues = queue_count;
+ uint32_t *total_events = param->total_events;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.queue_id == nr_queues - 1) { /* last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id++;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+/* Queue based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_queue_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_queue_based_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ uint32_t queue_count;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ uint8_t nr_queues = queue_count;
+ uint32_t *total_events = param->total_events;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.queue_id == nr_queues - 1) { /* Last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id++;
+ ev.sub_event_type = rte_rand() % 256;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+/* Queue and flow based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_mixed_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_mixed_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_ordered_flow_producer(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ struct rte_mbuf *m;
+ int counter = 0;
+
+ while (counter < NUM_PACKETS) {
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ if (m == NULL)
+ continue;
+
+ *rte_event_pmd_selftest_seqn(m) = counter++;
+
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ ev.flow_id = 0x1; /* Generate a fat flow */
+ ev.sub_event_type = 0;
+ /* Inject the new event */
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sched_type = RTE_SCHED_TYPE_ORDERED;
+ ev.queue_id = 0;
+ ev.mbuf = m;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+
+ return 0;
+}
+
+static inline int
+test_producer_consumer_ingress_order_test(int (*fn)(void *))
+{
+ uint32_t nr_ports;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (rte_lcore_count() < 3 || nr_ports < 2) {
+ plt_err("### Not enough cores for test.");
+ return 0;
+ }
+
+ launch_workers_and_wait(worker_ordered_flow_producer, fn, NUM_PACKETS,
+ nr_ports, RTE_SCHED_TYPE_ATOMIC);
+ /* Check whether event ordering was maintained */
+ return seqn_list_check(NUM_PACKETS);
+}
+
+/* Flow based producer consumer ingress order test */
+static int
+test_flow_producer_consumer_ingress_order_test(void)
+{
+ return test_producer_consumer_ingress_order_test(
+ worker_flow_based_pipeline);
+}
+
+/* Queue based producer consumer ingress order test */
+static int
+test_queue_producer_consumer_ingress_order_test(void)
+{
+ return test_producer_consumer_ingress_order_test(
+ worker_group_based_pipeline);
+}
+
+static void
+cnxk_test_run(int (*setup)(void), void (*tdown)(void), int (*test)(void),
+ const char *name)
+{
+ if (setup() < 0) {
+ printf("Error setting up test %s\n", name);
+ unsupported++;
+ } else {
+ if (test() < 0) {
+ failed++;
+ printf("+ TestCase [%2d] : %s failed\n", total, name);
+ } else {
+ passed++;
+ printf("+ TestCase [%2d] : %s succeeded\n", total,
+ name);
+ }
+ }
+
+ total++;
+ tdown();
+}
+
+static int
+cnxk_sso_testsuite_run(const char *dev_name)
+{
+ int rc;
+
+ testsuite_setup(dev_name);
+
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_queue_enq_single_port_deq);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown, test_dev_stop_flush);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_queue_enq_multi_port_deq);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_to_port_single_link);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_to_port_multi_link);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_mixed_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_flow_producer_consumer_ingress_order_test);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_producer_consumer_ingress_order_test);
+ CNXK_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
+ test_multi_queue_priority);
+ CNXK_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+ test_multi_port_flow_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+ test_multi_port_queue_ordered_to_atomic);
+ printf("Total tests : %d\n", total);
+ printf("Passed : %d\n", passed);
+ printf("Failed : %d\n", failed);
+ printf("Not supported : %d\n", unsupported);
+
+ rc = failed;
+ testsuite_teardown();
+
+ return rc;
+}
+
+int
+cnxk_sso_selftest(const char *dev_name)
+{
+ const struct rte_memzone *mz;
+ struct cnxk_sso_evdev *dev;
+ int rc = -1;
+
+ mz = rte_memzone_lookup(CNXK_SSO_MZ_NAME);
+ if (mz == NULL)
+ return rc;
+
+ dev = (void *)*((uint64_t *)mz->addr);
+ if (roc_model_runtime_is_cn9k()) {
+ /* Verify single ws mode. */
+ printf("Verifying CN9K Single workslot mode\n");
+ dev->dual_ws = 0;
+ cn9k_sso_set_rsrc(dev);
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ /* Verify dual ws mode. */
+ printf("Verifying CN9K Dual workslot mode\n");
+ dev->dual_ws = 1;
+ cn9k_sso_set_rsrc(dev);
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ }
+
+ if (roc_model_runtime_is_cn10k()) {
+ printf("Verifying CN10K workslot getwork mode none\n");
+ dev->gw_mode = CN10K_GW_MODE_NONE;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ printf("Verifying CN10K workslot getwork mode prefetch\n");
+ dev->gw_mode = CN10K_GW_MODE_PREF;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ printf("Verifying CN10K workslot getwork mode smart prefetch\n");
+ dev->gw_mode = CN10K_GW_MODE_PREF_WFE;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ }
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 57b3f66ea..e37ea3478 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -13,6 +13,7 @@ sources = files('cn10k_worker.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
+ 'cnxk_eventdev_selftest.c'
)
deps += ['bus_pci', 'common_cnxk']
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v4 20/34] event/cnxk: add event port and queue xstats
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (18 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 19/34] event/cnxk: add SSO selftest and dump pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 21/34] event/cnxk: support event timer pbhagavatula
` (14 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
Satha Rao, Ray Kinsella, Neil Horman, Pavan Nikhilesh,
Shijith Thotton
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add support for retrieving statistics from SSO HWS and HWGRP.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/common/cnxk/roc_sso.c | 63 +++++
drivers/common/cnxk/roc_sso.h | 19 ++
drivers/common/cnxk/version.map | 2 +
drivers/event/cnxk/cnxk_eventdev.h | 15 ++
drivers/event/cnxk/cnxk_eventdev_stats.c | 289 +++++++++++++++++++++++
drivers/event/cnxk/meson.build | 3 +-
6 files changed, 390 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 80d032039..1ccf2626b 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -279,6 +279,69 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
return nb_hwgrp;
}
+int
+roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
+ struct roc_sso_hws_stats *stats)
+{
+ struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+ struct sso_hws_stats *req_rsp;
+ int rc;
+
+ req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats(
+ dev->mbox);
+ if (req_rsp == NULL) {
+ rc = mbox_process(dev->mbox);
+ if (rc < 0)
+ return rc;
+ req_rsp = (struct sso_hws_stats *)
+ mbox_alloc_msg_sso_hws_get_stats(dev->mbox);
+ if (req_rsp == NULL)
+ return -ENOSPC;
+ }
+ req_rsp->hws = hws;
+ rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
+ if (rc)
+ return rc;
+
+ stats->arbitration = req_rsp->arbitration;
+ return 0;
+}
+
+int
+roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
+ struct roc_sso_hwgrp_stats *stats)
+{
+ struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+ struct sso_grp_stats *req_rsp;
+ int rc;
+
+ req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats(
+ dev->mbox);
+ if (req_rsp == NULL) {
+ rc = mbox_process(dev->mbox);
+ if (rc < 0)
+ return rc;
+ req_rsp = (struct sso_grp_stats *)
+ mbox_alloc_msg_sso_grp_get_stats(dev->mbox);
+ if (req_rsp == NULL)
+ return -ENOSPC;
+ }
+ req_rsp->grp = hwgrp;
+ rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
+ if (rc)
+ return rc;
+
+ stats->aw_status = req_rsp->aw_status;
+ stats->dq_pc = req_rsp->dq_pc;
+ stats->ds_pc = req_rsp->ds_pc;
+ stats->ext_pc = req_rsp->ext_pc;
+ stats->page_cnt = req_rsp->page_cnt;
+ stats->ts_pc = req_rsp->ts_pc;
+ stats->wa_pc = req_rsp->wa_pc;
+ stats->ws_pc = req_rsp->ws_pc;
+ return 0;
+}
+
int
roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso, uint8_t hws,
uint16_t hwgrp)
diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
index f85799ba8..a6030e7d8 100644
--- a/drivers/common/cnxk/roc_sso.h
+++ b/drivers/common/cnxk/roc_sso.h
@@ -12,6 +12,21 @@ struct roc_sso_hwgrp_qos {
uint8_t taq_prcnt;
};
+struct roc_sso_hws_stats {
+ uint64_t arbitration;
+};
+
+struct roc_sso_hwgrp_stats {
+ uint64_t ws_pc;
+ uint64_t ext_pc;
+ uint64_t wa_pc;
+ uint64_t ts_pc;
+ uint64_t ds_pc;
+ uint64_t dq_pc;
+ uint64_t aw_status;
+ uint64_t page_cnt;
+};
+
struct roc_sso {
struct plt_pci_device *pci_dev;
/* Public data. */
@@ -61,5 +76,9 @@ uintptr_t __roc_api roc_sso_hwgrp_base_get(struct roc_sso *roc_sso,
/* Debug */
void __roc_api roc_sso_dump(struct roc_sso *roc_sso, uint8_t nb_hws,
uint16_t hwgrp, FILE *f);
+int __roc_api roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
+ struct roc_sso_hwgrp_stats *stats);
+int __roc_api roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
+ struct roc_sso_hws_stats *stats);
#endif /* _ROC_SSOW_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 5f2264f23..8e67c83a6 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -183,8 +183,10 @@ INTERNAL {
roc_sso_hwgrp_qos_config;
roc_sso_hwgrp_release_xaq;
roc_sso_hwgrp_set_priority;
+ roc_sso_hwgrp_stats_get;
roc_sso_hws_base_get;
roc_sso_hws_link;
+ roc_sso_hws_stats_get;
roc_sso_hws_unlink;
roc_sso_ns_to_gw;
roc_sso_rsrc_fini;
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 9af04bc3d..abe36f21f 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -214,6 +214,21 @@ int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
int cnxk_sso_selftest(const char *dev_name);
void cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f);
+/* Stats API. */
+int cnxk_sso_xstats_get_names(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id,
+ struct rte_event_dev_xstats_name *xstats_names,
+ unsigned int *ids, unsigned int size);
+int cnxk_sso_xstats_get(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id, const unsigned int ids[],
+ uint64_t values[], unsigned int n);
+int cnxk_sso_xstats_reset(struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ int16_t queue_port_id, const uint32_t ids[],
+ uint32_t n);
+
/* CN9K */
void cn9k_sso_set_rsrc(void *arg);
diff --git a/drivers/event/cnxk/cnxk_eventdev_stats.c b/drivers/event/cnxk/cnxk_eventdev_stats.c
new file mode 100644
index 000000000..a3b548f46
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_stats.c
@@ -0,0 +1,289 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+struct cnxk_sso_xstats_name {
+ const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
+ const size_t offset;
+ const uint64_t mask;
+ const uint8_t shift;
+ uint64_t reset_snap[CNXK_SSO_MAX_HWGRP];
+};
+
+static struct cnxk_sso_xstats_name sso_hws_xstats[] = {
+ {
+ "last_grp_serviced",
+ offsetof(struct roc_sso_hws_stats, arbitration),
+ 0x3FF,
+ 0,
+ {0},
+ },
+ {
+ "affinity_arbitration_credits",
+ offsetof(struct roc_sso_hws_stats, arbitration),
+ 0xF,
+ 16,
+ {0},
+ },
+};
+
+static struct cnxk_sso_xstats_name sso_hwgrp_xstats[] = {
+ {
+ "wrk_sched",
+ offsetof(struct roc_sso_hwgrp_stats, ws_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "xaq_dram",
+ offsetof(struct roc_sso_hwgrp_stats, ext_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "add_wrk",
+ offsetof(struct roc_sso_hwgrp_stats, wa_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "tag_switch_req",
+ offsetof(struct roc_sso_hwgrp_stats, ts_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "desched_req",
+ offsetof(struct roc_sso_hwgrp_stats, ds_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "desched_wrk",
+ offsetof(struct roc_sso_hwgrp_stats, dq_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "xaq_cached",
+ offsetof(struct roc_sso_hwgrp_stats, aw_status),
+ 0x3,
+ 0,
+ {0},
+ },
+ {
+ "work_inflight",
+ offsetof(struct roc_sso_hwgrp_stats, aw_status),
+ 0x3F,
+ 16,
+ {0},
+ },
+ {
+ "inuse_pages",
+ offsetof(struct roc_sso_hwgrp_stats, page_cnt),
+ 0xFFFFFFFF,
+ 0,
+ {0},
+ },
+};
+
+#define CNXK_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats)
+#define CNXK_SSO_NUM_GRP_XSTATS RTE_DIM(sso_hwgrp_xstats)
+
+#define CNXK_SSO_NUM_XSTATS (CNXK_SSO_NUM_HWS_XSTATS + CNXK_SSO_NUM_GRP_XSTATS)
+
+int
+cnxk_sso_xstats_get(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
+ const unsigned int ids[], uint64_t values[], unsigned int n)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_stats hwgrp_stats;
+ struct cnxk_sso_xstats_name *xstats;
+ struct cnxk_sso_xstats_name *xstat;
+ struct roc_sso_hws_stats hws_stats;
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int i;
+ uint64_t value;
+ void *rsp;
+ int rc;
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ return 0;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hws_xstats;
+
+ rc = roc_sso_hws_stats_get(&dev->sso, queue_port_id,
+ &hws_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hws_stats;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
+ start_offset = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hwgrp_xstats;
+
+ rc = roc_sso_hwgrp_stats_get(&dev->sso, queue_port_id,
+ &hwgrp_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hwgrp_stats;
+
+ break;
+ default:
+ plt_err("Invalid mode received");
+ goto invalid_value;
+ };
+
+ for (i = 0; i < n && i < xstats_mode_count; i++) {
+ xstat = &xstats[ids[i] - start_offset];
+ value = *(uint64_t *)((char *)rsp + xstat->offset);
+ value = (value >> xstat->shift) & xstat->mask;
+
+ values[i] = value;
+ values[i] -= xstat->reset_snap[queue_port_id];
+ }
+
+ return i;
+invalid_value:
+ return -EINVAL;
+}
+
+int
+cnxk_sso_xstats_reset(struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ int16_t queue_port_id, const uint32_t ids[], uint32_t n)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_stats hwgrp_stats;
+ struct cnxk_sso_xstats_name *xstats;
+ struct cnxk_sso_xstats_name *xstat;
+ struct roc_sso_hws_stats hws_stats;
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int i;
+ uint64_t value;
+ void *rsp;
+ int rc;
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ return 0;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hws_xstats;
+ rc = roc_sso_hws_stats_get(&dev->sso, queue_port_id,
+ &hws_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hws_stats;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
+ start_offset = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hwgrp_xstats;
+
+ rc = roc_sso_hwgrp_stats_get(&dev->sso, queue_port_id,
+ &hwgrp_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hwgrp_stats;
+ break;
+ default:
+ plt_err("Invalid mode received");
+ goto invalid_value;
+ };
+
+ for (i = 0; i < n && i < xstats_mode_count; i++) {
+ xstat = &xstats[ids[i] - start_offset];
+ value = *(uint64_t *)((char *)rsp + xstat->offset);
+ value = (value >> xstat->shift) & xstat->mask;
+
+ xstat->reset_snap[queue_port_id] = value;
+ }
+ return i;
+invalid_value:
+ return -EINVAL;
+}
+
+int
+cnxk_sso_xstats_get_names(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id,
+ struct rte_event_dev_xstats_name *xstats_names,
+ unsigned int *ids, unsigned int size)
+{
+ struct rte_event_dev_xstats_name xstats_names_copy[CNXK_SSO_NUM_XSTATS];
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int xidx = 0;
+ unsigned int i;
+
+ for (i = 0; i < CNXK_SSO_NUM_HWS_XSTATS; i++) {
+ snprintf(xstats_names_copy[i].name,
+ sizeof(xstats_names_copy[i].name), "%s",
+ sso_hws_xstats[i].name);
+ }
+
+ for (; i < CNXK_SSO_NUM_XSTATS; i++) {
+ snprintf(xstats_names_copy[i].name,
+ sizeof(xstats_names_copy[i].name), "%s",
+ sso_hwgrp_xstats[i - CNXK_SSO_NUM_HWS_XSTATS].name);
+ }
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ break;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ break;
+ xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ break;
+ xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
+ start_offset = CNXK_SSO_NUM_HWS_XSTATS;
+ break;
+ default:
+ plt_err("Invalid mode received");
+ return -EINVAL;
+ };
+
+ if (xstats_mode_count > size || !ids || !xstats_names)
+ return xstats_mode_count;
+
+ for (i = 0; i < xstats_mode_count; i++) {
+ xidx = i + start_offset;
+ strncpy(xstats_names[i].name, xstats_names_copy[xidx].name,
+ sizeof(xstats_names[i].name));
+ ids[i] = xidx;
+ }
+
+ return i;
+}
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index e37ea3478..5b215b73f 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -13,7 +13,8 @@ sources = files('cn10k_worker.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
- 'cnxk_eventdev_selftest.c'
+ 'cnxk_eventdev_selftest.c',
+ 'cnxk_eventdev_stats.c',
)
deps += ['bus_pci', 'common_cnxk']
--
2.17.1
* [dpdk-dev] [PATCH v4 21/34] event/cnxk: support event timer
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (19 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 20/34] event/cnxk: add event port and queue xstats pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 22/34] event/cnxk: add timer adapter capabilities pbhagavatula
` (13 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter (a.k.a. TIM) initialization on SSO probe.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 6 ++++
drivers/event/cnxk/cnxk_eventdev.c | 3 ++
drivers/event/cnxk/cnxk_eventdev.h | 2 ++
drivers/event/cnxk/cnxk_tim_evdev.c | 47 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 44 +++++++++++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
6 files changed, 103 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index f48452982..e6f81f8b1 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -35,6 +35,10 @@ Features of the OCTEON cnxk SSO PMD are:
- Open system with configurable amount of outstanding events limited only by
DRAM
- HW accelerated dequeue timeout support to enable power management
+- HW managed event timer support through TIM, with high precision and
+ time granularity of 2.5us on CN9K and 1us on CN10K.
+- Up to 256 TIM rings, a.k.a. event timer adapters.
+- Up to 8 rings traversed in parallel.
Prerequisites and Compilation procedure
---------------------------------------
@@ -101,3 +105,5 @@ Debugging Options
+===+============+=======================================================+
| 1 | SSO | --log-level='pmd\.event\.cnxk,8' |
+---+------------+-------------------------------------------------------+
+ | 2 | TIM | --log-level='pmd\.event\.cnxk\.timer,8' |
+ +---+------------+-------------------------------------------------------+
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 0f084176c..85bb12e00 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -582,6 +582,8 @@ cnxk_sso_init(struct rte_eventdev *event_dev)
dev->nb_event_queues = 0;
dev->nb_event_ports = 0;
+ cnxk_tim_init(&dev->sso);
+
return 0;
error:
@@ -598,6 +600,7 @@ cnxk_sso_fini(struct rte_eventdev *event_dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ cnxk_tim_fini();
roc_sso_rsrc_fini(&dev->sso);
roc_sso_dev_fini(&dev->sso);
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index abe36f21f..1c61063c9 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,6 +14,8 @@
#include "roc_api.h"
+#include "cnxk_tim_evdev.h"
+
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
#define CN9K_SSO_SINGLE_WS "single_ws"
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
new file mode 100644
index 000000000..46461b885
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+#include "cnxk_tim_evdev.h"
+
+void
+cnxk_tim_init(struct roc_sso *sso)
+{
+ const struct rte_memzone *mz;
+ struct cnxk_tim_evdev *dev;
+ int rc;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ mz = rte_memzone_reserve(RTE_STR(CNXK_TIM_EVDEV_NAME),
+ sizeof(struct cnxk_tim_evdev), 0, 0);
+ if (mz == NULL) {
+ plt_tim_dbg("Unable to allocate memory for TIM Event device");
+ return;
+ }
+ dev = mz->addr;
+
+ dev->tim.roc_sso = sso;
+ rc = roc_tim_init(&dev->tim);
+ if (rc < 0) {
+ plt_err("Failed to initialize roc tim resources");
+ rte_memzone_free(mz);
+ return;
+ }
+ dev->nb_rings = rc;
+ dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+}
+
+void
+cnxk_tim_fini(void)
+{
+ struct cnxk_tim_evdev *dev = tim_priv_get();
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ roc_tim_fini(&dev->tim);
+ rte_memzone_free(rte_memzone_lookup(RTE_STR(CNXK_TIM_EVDEV_NAME)));
+}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
new file mode 100644
index 000000000..5ddc94ed4
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_TIM_EVDEV_H__
+#define __CNXK_TIM_EVDEV_H__
+
+#include <stddef.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <eventdev_pmd_pci.h>
+#include <rte_event_timer_adapter.h>
+#include <rte_memzone.h>
+
+#include "roc_api.h"
+
+#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
+#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+
+struct cnxk_tim_evdev {
+ struct roc_tim tim;
+ struct rte_eventdev *event_dev;
+ uint16_t nb_rings;
+ uint32_t chunk_sz;
+};
+
+static inline struct cnxk_tim_evdev *
+tim_priv_get(void)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(RTE_STR(CNXK_TIM_EVDEV_NAME));
+ if (mz == NULL)
+ return NULL;
+
+ return mz->addr;
+}
+
+void cnxk_tim_init(struct roc_sso *sso);
+void cnxk_tim_fini(void);
+
+#endif /* __CNXK_TIM_EVDEV_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 5b215b73f..ce8764eda 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -15,6 +15,7 @@ sources = files('cn10k_worker.c',
'cnxk_eventdev.c',
'cnxk_eventdev_selftest.c',
'cnxk_eventdev_stats.c',
+ 'cnxk_tim_evdev.c',
)
deps += ['bus_pci', 'common_cnxk']
--
2.17.1
* [dpdk-dev] [PATCH v4 22/34] event/cnxk: add timer adapter capabilities
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (20 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 21/34] event/cnxk: support event timer pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 23/34] event/cnxk: create and free timer adapter pbhagavatula
` (12 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add function to retrieve event timer adapter capabilities.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 2 ++
drivers/event/cnxk/cn9k_eventdev.c | 2 ++
drivers/event/cnxk/cnxk_tim_evdev.c | 22 +++++++++++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 6 +++++-
4 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index a0c6d32cc..0981085e8 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -420,6 +420,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .timer_adapter_caps_get = cnxk_tim_caps_get,
+
.dump = cnxk_sso_dump,
.dev_start = cn10k_sso_start,
.dev_stop = cn10k_sso_stop,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 48991e522..d9882ebb9 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -494,6 +494,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .timer_adapter_caps_get = cnxk_tim_caps_get,
+
.dump = cnxk_sso_dump,
.dev_start = cn9k_sso_start,
.dev_stop = cn9k_sso_stop,
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 46461b885..265bee533 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -5,6 +5,26 @@
#include "cnxk_eventdev.h"
#include "cnxk_tim_evdev.h"
+int
+cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops)
+{
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+
+ RTE_SET_USED(flags);
+ RTE_SET_USED(ops);
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ /* Store evdev pointer for later use. */
+ dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
+ *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+
+ return 0;
+}
+
void
cnxk_tim_init(struct roc_sso *sso)
{
@@ -37,7 +57,7 @@ cnxk_tim_init(struct roc_sso *sso)
void
cnxk_tim_fini(void)
{
- struct cnxk_tim_evdev *dev = tim_priv_get();
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return;
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 5ddc94ed4..ece66ab25 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -27,7 +27,7 @@ struct cnxk_tim_evdev {
};
static inline struct cnxk_tim_evdev *
-tim_priv_get(void)
+cnxk_tim_priv_get(void)
{
const struct rte_memzone *mz;
@@ -38,6 +38,10 @@ tim_priv_get(void)
return mz->addr;
}
+int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops);
+
void cnxk_tim_init(struct roc_sso *sso);
void cnxk_tim_fini(void);
--
2.17.1
* [dpdk-dev] [PATCH v4 23/34] event/cnxk: create and free timer adapter
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (21 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 22/34] event/cnxk: add timer adapter capabilities pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 24/34] event/cnxk: add devargs to disable NPA pbhagavatula
` (11 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
When the application calls timer adapter create, the following steps are
performed:
- Allocate a TIM LF based on the number of LFs provisioned.
- Verify the supplied config parameters.
- Allocate the memory required for
* Buckets, based on the min and max timeout supplied.
* The chunk pool, based on the number of timers.
On free:
- Free the allocated bucket and chunk memory.
- Free the allocated TIM LF.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 174 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 128 +++++++++++++++++++-
2 files changed, 300 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 265bee533..655540a72 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -5,6 +5,177 @@
#include "cnxk_eventdev.h"
#include "cnxk_tim_evdev.h"
+static struct rte_event_timer_adapter_ops cnxk_tim_ops;
+
+static int
+cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
+ struct rte_event_timer_adapter_conf *rcfg)
+{
+ unsigned int cache_sz = (tim_ring->nb_chunks / 1.5);
+ unsigned int mp_flags = 0;
+ char pool_name[25];
+ int rc;
+
+ cache_sz /= rte_lcore_count();
+ /* Create chunk pool. */
+ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
+ mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+ plt_tim_dbg("Using single producer mode");
+ tim_ring->prod_type_sp = true;
+ }
+
+ snprintf(pool_name, sizeof(pool_name), "cnxk_tim_chunk_pool%d",
+ tim_ring->ring_id);
+
+ if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
+ cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
+ cache_sz = cache_sz != 0 ? cache_sz : 2;
+ tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
+ tim_ring->chunk_pool = rte_mempool_create_empty(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz, cache_sz, 0,
+ rte_socket_id(), mp_flags);
+
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
+
+ rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+ rte_mbuf_platform_mempool_ops(), NULL);
+ if (rc < 0) {
+ plt_err("Unable to set chunkpool ops");
+ goto free;
+ }
+
+ rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+ if (rc < 0) {
+ plt_err("Unable to set populate chunkpool.");
+ goto free;
+ }
+ tim_ring->aura =
+ roc_npa_aura_handle_to_aura(tim_ring->chunk_pool->pool_id);
+ tim_ring->ena_dfb = 0;
+
+ return 0;
+
+free:
+ rte_mempool_free(tim_ring->chunk_pool);
+ return rc;
+}
+
+static int
+cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
+{
+ struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ struct cnxk_tim_ring *tim_ring;
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ if (adptr->data->id >= dev->nb_rings)
+ return -ENODEV;
+
+ tim_ring = rte_zmalloc("cnxk_tim_prv", sizeof(struct cnxk_tim_ring), 0);
+ if (tim_ring == NULL)
+ return -ENOMEM;
+
+ rc = roc_tim_lf_alloc(&dev->tim, adptr->data->id, NULL);
+ if (rc < 0) {
+ plt_err("Failed to create timer ring");
+ goto tim_ring_free;
+ }
+
+ if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(
+ rcfg->timer_tick_ns,
+ cnxk_tim_min_resolution_ns(cnxk_tim_cntfrq())),
+ cnxk_tim_cntfrq()) <
+ cnxk_tim_min_tmo_ticks(cnxk_tim_cntfrq())) {
+ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
+ rcfg->timer_tick_ns = TICK2NSEC(
+ cnxk_tim_min_tmo_ticks(cnxk_tim_cntfrq()),
+ cnxk_tim_cntfrq());
+ else {
+ rc = -ERANGE;
+ goto tim_hw_free;
+ }
+ }
+ tim_ring->ring_id = adptr->data->id;
+ tim_ring->clk_src = (int)rcfg->clk_src;
+ tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(
+ rcfg->timer_tick_ns,
+ cnxk_tim_min_resolution_ns(cnxk_tim_cntfrq()));
+ tim_ring->max_tout = rcfg->max_tmo_ns;
+ tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
+ tim_ring->nb_timers = rcfg->nb_timers;
+ tim_ring->chunk_sz = dev->chunk_sz;
+
+ tim_ring->nb_chunks = tim_ring->nb_timers;
+ tim_ring->nb_chunk_slots = CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+ /* Create buckets. */
+ tim_ring->bkt =
+ rte_zmalloc("cnxk_tim_bucket",
+ (tim_ring->nb_bkts) * sizeof(struct cnxk_tim_bkt),
+ RTE_CACHE_LINE_SIZE);
+ if (tim_ring->bkt == NULL)
+ goto tim_hw_free;
+
+ rc = cnxk_tim_chnk_pool_create(tim_ring, rcfg);
+ if (rc < 0)
+ goto tim_bkt_free;
+
+ rc = roc_tim_lf_config(
+ &dev->tim, tim_ring->ring_id,
+ cnxk_tim_convert_clk_src(tim_ring->clk_src), 0, 0,
+ tim_ring->nb_bkts, tim_ring->chunk_sz,
+ NSEC2TICK(tim_ring->tck_nsec, cnxk_tim_cntfrq()));
+ if (rc < 0) {
+ plt_err("Failed to configure timer ring");
+ goto tim_chnk_free;
+ }
+
+ tim_ring->base = roc_tim_lf_base_get(&dev->tim, tim_ring->ring_id);
+ plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
+ plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+
+ plt_tim_dbg(
+ "Total memory used %" PRIu64 "MB\n",
+ (uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
+ (tim_ring->nb_bkts * sizeof(struct cnxk_tim_bkt))) /
+ BIT_ULL(20)));
+
+ adptr->data->adapter_priv = tim_ring;
+ return rc;
+
+tim_chnk_free:
+ rte_mempool_free(tim_ring->chunk_pool);
+tim_bkt_free:
+ rte_free(tim_ring->bkt);
+tim_hw_free:
+ roc_tim_lf_free(&dev->tim, tim_ring->ring_id);
+tim_ring_free:
+ rte_free(tim_ring);
+ return rc;
+}
+
+static int
+cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ roc_tim_lf_free(&dev->tim, tim_ring->ring_id);
+ rte_free(tim_ring->bkt);
+ rte_mempool_free(tim_ring->chunk_pool);
+ rte_free(tim_ring);
+
+ return 0;
+}
+
int
cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
uint32_t *caps,
@@ -18,6 +189,9 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
if (dev == NULL)
return -ENODEV;
+ cnxk_tim_ops.init = cnxk_tim_ring_create;
+ cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index ece66ab25..2335707cd 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -12,12 +12,26 @@
#include <eventdev_pmd_pci.h>
#include <rte_event_timer_adapter.h>
+#include <rte_malloc.h>
#include <rte_memzone.h>
#include "roc_api.h"
-#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
-#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+#define NSECPERSEC 1E9
+#define USECPERSEC 1E6
+#define TICK2NSEC(__tck, __freq) (((__tck)*NSECPERSEC) / (__freq))
+
+#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
+#define CNXK_TIM_MAX_BUCKETS (0xFFFFF)
+#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+#define CNXK_TIM_CHUNK_ALIGNMENT (16)
+#define CNXK_TIM_MAX_BURST \
+ (RTE_CACHE_LINE_SIZE / CNXK_TIM_CHUNK_ALIGNMENT)
+#define CNXK_TIM_NB_CHUNK_SLOTS(sz) (((sz) / CNXK_TIM_CHUNK_ALIGNMENT) - 1)
+#define CNXK_TIM_MIN_CHUNK_SLOTS (0x1)
+#define CNXK_TIM_MAX_CHUNK_SLOTS (0x1FFE)
+
+#define CN9K_TIM_MIN_TMO_TKS (256)
struct cnxk_tim_evdev {
struct roc_tim tim;
@@ -26,6 +40,57 @@ struct cnxk_tim_evdev {
uint32_t chunk_sz;
};
+enum cnxk_tim_clk_src {
+ CNXK_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
+ CNXK_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0,
+ CNXK_TIM_CLK_SRC_GTI = RTE_EVENT_TIMER_ADAPTER_EXT_CLK1,
+ CNXK_TIM_CLK_SRC_PTP = RTE_EVENT_TIMER_ADAPTER_EXT_CLK2,
+};
+
+struct cnxk_tim_bkt {
+ uint64_t first_chunk;
+ union {
+ uint64_t w1;
+ struct {
+ uint32_t nb_entry;
+ uint8_t sbt : 1;
+ uint8_t hbt : 1;
+ uint8_t bsk : 1;
+ uint8_t rsvd : 5;
+ uint8_t lock;
+ int16_t chunk_remainder;
+ };
+ };
+ uint64_t current_chunk;
+ uint64_t pad;
+};
+
+struct cnxk_tim_ring {
+ uintptr_t base;
+ uint16_t nb_chunk_slots;
+ uint32_t nb_bkts;
+ uint64_t tck_int;
+ uint64_t tot_int;
+ struct cnxk_tim_bkt *bkt;
+ struct rte_mempool *chunk_pool;
+ uint64_t arm_cnt;
+ uint8_t prod_type_sp;
+ uint8_t ena_dfb;
+ uint16_t ring_id;
+ uint32_t aura;
+ uint64_t nb_timers;
+ uint64_t tck_nsec;
+ uint64_t max_tout;
+ uint64_t nb_chunks;
+ uint64_t chunk_sz;
+ enum cnxk_tim_clk_src clk_src;
+} __rte_cache_aligned;
+
+struct cnxk_tim_ent {
+ uint64_t w0;
+ uint64_t wqe;
+};
+
static inline struct cnxk_tim_evdev *
cnxk_tim_priv_get(void)
{
@@ -38,6 +103,65 @@ cnxk_tim_priv_get(void)
return mz->addr;
}
+static inline uint64_t
+cnxk_tim_min_tmo_ticks(uint64_t freq)
+{
+ if (roc_model_runtime_is_cn9k())
+ return CN9K_TIM_MIN_TMO_TKS;
+ else /* CN10K min tick is of 1us */
+ return freq / USECPERSEC;
+}
+
+static inline uint64_t
+cnxk_tim_min_resolution_ns(uint64_t freq)
+{
+ return NSECPERSEC / freq;
+}
+
+static inline enum roc_tim_clk_src
+cnxk_tim_convert_clk_src(enum cnxk_tim_clk_src clk_src)
+{
+ switch (clk_src) {
+ case RTE_EVENT_TIMER_ADAPTER_CPU_CLK:
+ return roc_model_runtime_is_cn9k() ? ROC_TIM_CLK_SRC_10NS :
+ ROC_TIM_CLK_SRC_GTI;
+ default:
+ return ROC_TIM_CLK_SRC_INVALID;
+ }
+}
+
+#ifdef RTE_ARCH_ARM64
+static inline uint64_t
+cnxk_tim_cntvct(void)
+{
+ uint64_t tsc;
+
+ asm volatile("mrs %0, cntvct_el0" : "=r"(tsc));
+ return tsc;
+}
+
+static inline uint64_t
+cnxk_tim_cntfrq(void)
+{
+ uint64_t freq;
+
+ asm volatile("mrs %0, cntfrq_el0" : "=r"(freq));
+ return freq;
+}
+#else
+static inline uint64_t
+cnxk_tim_cntvct(void)
+{
+ return 0;
+}
+
+static inline uint64_t
+cnxk_tim_cntfrq(void)
+{
+ return 0;
+}
+#endif
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v4 24/34] event/cnxk: add devargs to disable NPA
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (22 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 23/34] event/cnxk: create and free timer adapter pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 25/34] event/cnxk: allow adapters to resize inflights pbhagavatula
` (10 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
If the chunks are allocated from NPA, TIM can automatically free
them when traversing the list of chunks.
Add a devargs option to disable NPA and use a software mempool to manage chunks instead.
Example:
--dev "0002:0e:00.0,tim_disable_npa=1"
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 10 ++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_eventdev.h | 9 +++
drivers/event/cnxk/cnxk_tim_evdev.c | 86 +++++++++++++++++++++--------
drivers/event/cnxk/cnxk_tim_evdev.h | 5 ++
6 files changed, 92 insertions(+), 24 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index e6f81f8b1..c2d6ed2fb 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -93,6 +93,16 @@ Runtime Config Options
-a 0002:0e:00.0,qos=[1-50-50-50]
+- ``TIM disable NPA``
+
+ By default, chunks are allocated from NPA so that TIM can automatically free
+ them when traversing the list of chunks. The ``tim_disable_npa`` devargs
+ parameter disables NPA and uses a software mempool to manage chunks.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_disable_npa=1
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 0981085e8..a2ef1fa73 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -502,4 +502,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
- CN10K_SSO_GW_MODE "=<int>");
+ CN10K_SSO_GW_MODE "=<int>"
+ CNXK_TIM_DISABLE_NPA "=1");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index d9882ebb9..3a0caa009 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -571,4 +571,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
- CN9K_SSO_SINGLE_WS "=1");
+ CN9K_SSO_SINGLE_WS "=1"
+ CNXK_TIM_DISABLE_NPA "=1");
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 1c61063c9..77835e463 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -159,6 +159,15 @@ struct cnxk_sso_hws_cookie {
bool configured;
} __rte_cache_aligned;
+static inline int
+parse_kvargs_flag(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ *(uint8_t *)opaque = !!atoi(value);
+ return 0;
+}
+
static inline int
parse_kvargs_value(const char *key, const char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 655540a72..d93b37e4f 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -31,30 +31,43 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
cache_sz = cache_sz != 0 ? cache_sz : 2;
tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
- tim_ring->chunk_pool = rte_mempool_create_empty(
- pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz, cache_sz, 0,
- rte_socket_id(), mp_flags);
-
- if (tim_ring->chunk_pool == NULL) {
- plt_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
+ if (!tim_ring->disable_npa) {
+ tim_ring->chunk_pool = rte_mempool_create_empty(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz,
+ cache_sz, 0, rte_socket_id(), mp_flags);
+
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
- rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
- rte_mbuf_platform_mempool_ops(), NULL);
- if (rc < 0) {
- plt_err("Unable to set chunkpool ops");
- goto free;
- }
+ rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+ rte_mbuf_platform_mempool_ops(),
+ NULL);
+ if (rc < 0) {
+ plt_err("Unable to set chunkpool ops");
+ goto free;
+ }
- rc = rte_mempool_populate_default(tim_ring->chunk_pool);
- if (rc < 0) {
- plt_err("Unable to set populate chunkpool.");
- goto free;
+ rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+ if (rc < 0) {
+ plt_err("Unable to populate chunkpool.");
+ goto free;
+ }
+ tim_ring->aura = roc_npa_aura_handle_to_aura(
+ tim_ring->chunk_pool->pool_id);
+ tim_ring->ena_dfb = 0;
+ } else {
+ tim_ring->chunk_pool = rte_mempool_create(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz,
+ cache_sz, 0, NULL, NULL, NULL, NULL, rte_socket_id(),
+ mp_flags);
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
+ tim_ring->ena_dfb = 1;
}
- tim_ring->aura =
- roc_npa_aura_handle_to_aura(tim_ring->chunk_pool->pool_id);
- tim_ring->ena_dfb = 0;
return 0;
@@ -110,8 +123,17 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
tim_ring->nb_timers = rcfg->nb_timers;
tim_ring->chunk_sz = dev->chunk_sz;
+ tim_ring->disable_npa = dev->disable_npa;
+
+ if (tim_ring->disable_npa) {
+ tim_ring->nb_chunks =
+ tim_ring->nb_timers /
+ CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+ tim_ring->nb_chunks = tim_ring->nb_chunks * tim_ring->nb_bkts;
+ } else {
+ tim_ring->nb_chunks = tim_ring->nb_timers;
+ }
- tim_ring->nb_chunks = tim_ring->nb_timers;
tim_ring->nb_chunk_slots = CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
/* Create buckets. */
tim_ring->bkt =
@@ -199,6 +221,24 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
return 0;
}
+static void
+cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
+{
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ return;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, CNXK_TIM_DISABLE_NPA, &parse_kvargs_flag,
+ &dev->disable_npa);
+
+ rte_kvargs_free(kvlist);
+}
+
void
cnxk_tim_init(struct roc_sso *sso)
{
@@ -217,6 +257,8 @@ cnxk_tim_init(struct roc_sso *sso)
}
dev = mz->addr;
+ cnxk_tim_parse_devargs(sso->pci_dev->device.devargs, dev);
+
dev->tim.roc_sso = sso;
rc = roc_tim_init(&dev->tim);
if (rc < 0) {
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 2335707cd..4896ed67a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -33,11 +33,15 @@
#define CN9K_TIM_MIN_TMO_TKS (256)
+#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
uint16_t nb_rings;
uint32_t chunk_sz;
+ /* Dev args */
+ uint8_t disable_npa;
};
enum cnxk_tim_clk_src {
@@ -75,6 +79,7 @@ struct cnxk_tim_ring {
struct rte_mempool *chunk_pool;
uint64_t arm_cnt;
uint8_t prod_type_sp;
+ uint8_t disable_npa;
uint8_t ena_dfb;
uint16_t ring_id;
uint32_t aura;
--
2.17.1
* [dpdk-dev] [PATCH v4 25/34] event/cnxk: allow adapters to resize inflights
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (23 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 24/34] event/cnxk: add devargs to disable NPA pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 26/34] event/cnxk: add timer adapter info function pbhagavatula
` (9 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add internal SSO functions to allow event adapters to resize SSO buffers
that are used to hold in-flight events in DRAM.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 33 ++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 7 +++
drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 ++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.c | 5 ++
drivers/event/cnxk/meson.build | 1 +
5 files changed, 113 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 85bb12e00..7189ee3a7 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -77,6 +77,9 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
if (dev->xae_cnt)
xaq_cnt += dev->xae_cnt / dev->sso.xae_waes;
+ else if (dev->adptr_xae_cnt)
+ xaq_cnt += (dev->adptr_xae_cnt / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
else
xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
(CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
@@ -125,6 +128,36 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
return rc;
}
+int
+cnxk_sso_xae_reconfigure(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc = 0;
+
+ if (event_dev->data->dev_started)
+ event_dev->dev_ops->dev_stop(event_dev);
+
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0) {
+ plt_err("Failed to alloc XAQ %d", rc);
+ return rc;
+ }
+
+ rte_mb();
+ if (event_dev->data->dev_started)
+ event_dev->dev_ops->dev_start(event_dev);
+
+ return 0;
+}
+
int
cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
cnxk_sso_init_hws_mem_t init_hws_fn,
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 77835e463..668e51d62 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -81,6 +81,10 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ uint64_t adptr_xae_cnt;
+ uint16_t tim_adptr_ring_cnt;
+ uint16_t *timer_adptr_rings;
+ uint64_t *timer_adptr_sz;
/* Dev args */
uint32_t xae_cnt;
uint8_t qos_queue_cnt;
@@ -190,7 +194,10 @@ cnxk_sso_hws_get_cookie(void *ws)
}
/* Configuration functions */
+int cnxk_sso_xae_reconfigure(struct rte_eventdev *event_dev);
int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
+void cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
+ uint32_t event_type);
/* Common ops API. */
int cnxk_sso_init(struct rte_eventdev *event_dev);
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
new file mode 100644
index 000000000..89a1d82c1
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+void
+cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
+ uint32_t event_type)
+{
+ int i;
+
+ switch (event_type) {
+ case RTE_EVENT_TYPE_TIMER: {
+ struct cnxk_tim_ring *timr = data;
+ uint16_t *old_ring_ptr;
+ uint64_t *old_sz_ptr;
+
+ for (i = 0; i < dev->tim_adptr_ring_cnt; i++) {
+ if (timr->ring_id != dev->timer_adptr_rings[i])
+ continue;
+ if (timr->nb_timers == dev->timer_adptr_sz[i])
+ return;
+ dev->adptr_xae_cnt -= dev->timer_adptr_sz[i];
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_sz[i] = timr->nb_timers;
+
+ return;
+ }
+
+ dev->tim_adptr_ring_cnt++;
+ old_ring_ptr = dev->timer_adptr_rings;
+ old_sz_ptr = dev->timer_adptr_sz;
+
+ dev->timer_adptr_rings = rte_realloc(
+ dev->timer_adptr_rings,
+ sizeof(uint16_t) * dev->tim_adptr_ring_cnt, 0);
+ if (dev->timer_adptr_rings == NULL) {
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_rings = old_ring_ptr;
+ dev->tim_adptr_ring_cnt--;
+ return;
+ }
+
+ dev->timer_adptr_sz = rte_realloc(
+ dev->timer_adptr_sz,
+ sizeof(uint64_t) * dev->tim_adptr_ring_cnt, 0);
+
+ if (dev->timer_adptr_sz == NULL) {
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_sz = old_sz_ptr;
+ dev->tim_adptr_ring_cnt--;
+ return;
+ }
+
+ dev->timer_adptr_rings[dev->tim_adptr_ring_cnt - 1] =
+ timr->ring_id;
+ dev->timer_adptr_sz[dev->tim_adptr_ring_cnt - 1] =
+ timr->nb_timers;
+
+ dev->adptr_xae_cnt += timr->nb_timers;
+ break;
+ }
+ default:
+ break;
+ }
+}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index d93b37e4f..1eb39a789 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -161,6 +161,11 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+ /* Update SSO xae count. */
+ cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
+ RTE_EVENT_TYPE_TIMER);
+ cnxk_sso_xae_reconfigure(dev->event_dev);
+
plt_tim_dbg(
"Total memory used %" PRIu64 "MB\n",
(uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index ce8764eda..bd52cbc68 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -12,6 +12,7 @@ sources = files('cn10k_worker.c',
'cn10k_eventdev.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
+ 'cnxk_eventdev_adptr.c',
'cnxk_eventdev.c',
'cnxk_eventdev_selftest.c',
'cnxk_eventdev_stats.c',
--
2.17.1
* [dpdk-dev] [PATCH v4 26/34] event/cnxk: add timer adapter info function
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (24 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 25/34] event/cnxk: allow adapters to resize inflights pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 27/34] event/cnxk: add devargs for chunk size and rings pbhagavatula
` (8 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add TIM event timer adapter info get function.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 1eb39a789..2fefa56f5 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -76,6 +76,18 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
return rc;
}
+static void
+cnxk_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer_adapter_info *adptr_info)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+
+ adptr_info->max_tmo_ns = tim_ring->max_tout;
+ adptr_info->min_resolution_ns = tim_ring->tck_nsec;
+ rte_memcpy(&adptr_info->conf, &adptr->data->conf,
+ sizeof(struct rte_event_timer_adapter_conf));
+}
+
static int
cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
{
@@ -218,6 +230,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
cnxk_tim_ops.init = cnxk_tim_ring_create;
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+ cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
--
2.17.1
* [dpdk-dev] [PATCH v4 27/34] event/cnxk: add devargs for chunk size and rings
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (25 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 26/34] event/cnxk: add timer adapter info function pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 28/34] event/cnxk: add TIM bucket operations pbhagavatula
` (7 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add devargs to control the default chunk size and the maximum number of
timer rings to attach to a given RVU PF.
Example:
--dev "0002:1e:00.0,tim_chnk_slots=1024"
--dev "0002:1e:00.0,tim_rings_lmt=4"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 23 +++++++++++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 4 +++-
drivers/event/cnxk/cn9k_eventdev.c | 4 +++-
drivers/event/cnxk/cnxk_tim_evdev.c | 14 +++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 4 ++++
5 files changed, 46 insertions(+), 3 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index c2d6ed2fb..a8199aac7 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -103,6 +103,29 @@ Runtime Config Options
-a 0002:0e:00.0,tim_disable_npa=1
+- ``TIM modify chunk slots``
+
+ The ``tim_chnk_slots`` devargs can be used to modify the number of chunk slots.
+ Chunks are used to store event timers; a chunk can be visualised as an array
+ where the last element points to the next chunk and the rest are used to
+ store events. TIM traverses the list of chunks and enqueues the event timers
+ to SSO. The default value is 255 and the max value is 4095.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_chnk_slots=1023
+
+- ``TIM limit max rings reserved``
+
+ The ``tim_rings_lmt`` devargs can be used to limit the maximum number of TIM
+ rings, i.e. event timer adapters, reserved on probe. Since TIM rings are HW
+ resources, reserving only a subset avoids starving other applications of the
+ remaining rings.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_rings_lmt=5
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index a2ef1fa73..cadc792a7 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -503,4 +503,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
CN10K_SSO_GW_MODE "=<int>"
- CNXK_TIM_DISABLE_NPA "=1");
+ CNXK_TIM_DISABLE_NPA "=1"
+ CNXK_TIM_CHNK_SLOTS "=<int>"
+ CNXK_TIM_RINGS_LMT "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 3a0caa009..e503f6b1c 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -572,4 +572,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
CN9K_SSO_SINGLE_WS "=1"
- CNXK_TIM_DISABLE_NPA "=1");
+ CNXK_TIM_DISABLE_NPA "=1"
+ CNXK_TIM_CHNK_SLOTS "=<int>"
+ CNXK_TIM_RINGS_LMT "=<int>");
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 2fefa56f5..e06fe2f52 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -253,6 +253,10 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
rte_kvargs_process(kvlist, CNXK_TIM_DISABLE_NPA, &parse_kvargs_flag,
&dev->disable_npa);
+ rte_kvargs_process(kvlist, CNXK_TIM_CHNK_SLOTS, &parse_kvargs_value,
+ &dev->chunk_slots);
+ rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
+ &dev->min_ring_cnt);
rte_kvargs_free(kvlist);
}
@@ -278,6 +282,7 @@ cnxk_tim_init(struct roc_sso *sso)
cnxk_tim_parse_devargs(sso->pci_dev->device.devargs, dev);
dev->tim.roc_sso = sso;
+ dev->tim.nb_lfs = dev->min_ring_cnt;
rc = roc_tim_init(&dev->tim);
if (rc < 0) {
plt_err("Failed to initialize roc tim resources");
@@ -285,7 +290,14 @@ cnxk_tim_init(struct roc_sso *sso)
return;
}
dev->nb_rings = rc;
- dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+
+ if (dev->chunk_slots && dev->chunk_slots <= CNXK_TIM_MAX_CHUNK_SLOTS &&
+ dev->chunk_slots >= CNXK_TIM_MIN_CHUNK_SLOTS) {
+ dev->chunk_sz =
+ (dev->chunk_slots + 1) * CNXK_TIM_CHUNK_ALIGNMENT;
+ } else {
+ dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+ }
}
void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 4896ed67a..9496634c8 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -34,6 +34,8 @@
#define CN9K_TIM_MIN_TMO_TKS (256)
#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
+#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
+#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
struct cnxk_tim_evdev {
struct roc_tim tim;
@@ -42,6 +44,8 @@ struct cnxk_tim_evdev {
uint32_t chunk_sz;
/* Dev args */
uint8_t disable_npa;
+ uint16_t chunk_slots;
+ uint16_t min_ring_cnt;
};
enum cnxk_tim_clk_src {
--
2.17.1
* [dpdk-dev] [PATCH v4 28/34] event/cnxk: add TIM bucket operations
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (26 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 27/34] event/cnxk: add devargs for chunk size and rings pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 29/34] event/cnxk: add timer arm routine pbhagavatula
` (6 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add TIM bucket operations used for event timer arm and cancel.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.h | 30 +++++++
drivers/event/cnxk/cnxk_tim_worker.c | 6 ++
drivers/event/cnxk/cnxk_tim_worker.h | 123 +++++++++++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
4 files changed, 160 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 9496634c8..f6895417a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -37,6 +37,36 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
+#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
+ ((1ULL << (64 - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
+#define TIM_BUCKET_W1_S_LOCK (40)
+#define TIM_BUCKET_W1_M_LOCK \
+ ((1ULL << (TIM_BUCKET_W1_S_CHUNK_REMAINDER - TIM_BUCKET_W1_S_LOCK)) - 1)
+#define TIM_BUCKET_W1_S_RSVD (35)
+#define TIM_BUCKET_W1_S_BSK (34)
+#define TIM_BUCKET_W1_M_BSK \
+ ((1ULL << (TIM_BUCKET_W1_S_RSVD - TIM_BUCKET_W1_S_BSK)) - 1)
+#define TIM_BUCKET_W1_S_HBT (33)
+#define TIM_BUCKET_W1_M_HBT \
+ ((1ULL << (TIM_BUCKET_W1_S_BSK - TIM_BUCKET_W1_S_HBT)) - 1)
+#define TIM_BUCKET_W1_S_SBT (32)
+#define TIM_BUCKET_W1_M_SBT \
+ ((1ULL << (TIM_BUCKET_W1_S_HBT - TIM_BUCKET_W1_S_SBT)) - 1)
+#define TIM_BUCKET_W1_S_NUM_ENTRIES (0)
+#define TIM_BUCKET_W1_M_NUM_ENTRIES \
+ ((1ULL << (TIM_BUCKET_W1_S_SBT - TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1)
+
+#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN)
+
+#define TIM_BUCKET_CHUNK_REMAIN \
+ (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER)
+
+#define TIM_BUCKET_LOCK (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK)
+
+#define TIM_BUCKET_SEMA_WLOCK \
+ (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
new file mode 100644
index 000000000..49ee85245
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_tim_evdev.h"
+#include "cnxk_tim_worker.h"
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
new file mode 100644
index 000000000..d56e67360
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_TIM_WORKER_H__
+#define __CNXK_TIM_WORKER_H__
+
+#include "cnxk_tim_evdev.h"
+
+static inline uint8_t
+cnxk_tim_bkt_fetch_lock(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK;
+}
+
+static inline int16_t
+cnxk_tim_bkt_fetch_rem(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) &
+ TIM_BUCKET_W1_M_CHUNK_REMAINDER;
+}
+
+static inline int16_t
+cnxk_tim_bkt_get_rem(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+cnxk_tim_bkt_set_rem(struct cnxk_tim_bkt *bktp, uint16_t v)
+{
+ __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline void
+cnxk_tim_bkt_sub_rem(struct cnxk_tim_bkt *bktp, uint16_t v)
+{
+ __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline uint8_t
+cnxk_tim_bkt_get_hbt(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT;
+}
+
+static inline uint8_t
+cnxk_tim_bkt_get_bsk(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK;
+}
+
+static inline uint64_t
+cnxk_tim_bkt_clr_bsk(struct cnxk_tim_bkt *bktp)
+{
+ /* Clear everything except lock. */
+ const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK;
+
+ return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_fetch_sema_lock(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
+ __ATOMIC_ACQUIRE);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_fetch_sema(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA, __ATOMIC_RELAXED);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_inc_lock(struct cnxk_tim_bkt *bktp)
+{
+ const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK;
+
+ return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+cnxk_tim_bkt_dec_lock(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELEASE);
+}
+
+static inline void
+cnxk_tim_bkt_dec_lock_relaxed(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELAXED);
+}
+
+static inline uint32_t
+cnxk_tim_bkt_get_nent(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_NUM_ENTRIES) &
+ TIM_BUCKET_W1_M_NUM_ENTRIES;
+}
+
+static inline void
+cnxk_tim_bkt_inc_nent(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_add_fetch(&bktp->nb_entry, 1, __ATOMIC_RELAXED);
+}
+
+static inline void
+cnxk_tim_bkt_add_nent(struct cnxk_tim_bkt *bktp, uint32_t v)
+{
+ __atomic_add_fetch(&bktp->nb_entry, v, __ATOMIC_RELAXED);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_clr_nent(struct cnxk_tim_bkt *bktp)
+{
+ const uint64_t v =
+ ~(TIM_BUCKET_W1_M_NUM_ENTRIES << TIM_BUCKET_W1_S_NUM_ENTRIES);
+
+ return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+#endif /* __CNXK_TIM_WORKER_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index bd52cbc68..e665dfc72 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -16,6 +16,7 @@ sources = files('cn10k_worker.c',
'cnxk_eventdev.c',
'cnxk_eventdev_selftest.c',
'cnxk_eventdev_stats.c',
+ 'cnxk_tim_worker.c',
'cnxk_tim_evdev.c',
)
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v4 29/34] event/cnxk: add timer arm routine
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (27 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 28/34] event/cnxk: add TIM bucket operations pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 30/34] event/cnxk: add timer arm timeout burst pbhagavatula
` (5 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer arm routine.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 18 ++
drivers/event/cnxk/cnxk_tim_evdev.h | 23 ++
drivers/event/cnxk/cnxk_tim_worker.c | 95 +++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 300 +++++++++++++++++++++++++++
4 files changed, 436 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index e06fe2f52..ecc952a6a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -76,6 +76,21 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
return rc;
}
+static void
+cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
+{
+ uint8_t prod_flag = !tim_ring->prod_type_sp;
+
+ /* [DFB/FB] [SP][MP]*/
+ const rte_event_timer_arm_burst_t arm_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) [_f2][_f1] = cnxk_tim_arm_burst_##_name,
+ TIM_ARM_FASTPATH_MODES
+#undef FP
+ };
+
+ cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
+}
+
static void
cnxk_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
struct rte_event_timer_adapter_info *adptr_info)
@@ -173,6 +188,9 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+ /* Set fastpath ops. */
+ cnxk_tim_set_fp_ops(tim_ring);
+
/* Update SSO xae count. */
cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
RTE_EVENT_TYPE_TIMER);
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index f6895417a..1f2aad17a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -14,6 +14,7 @@
#include <rte_event_timer_adapter.h>
#include <rte_malloc.h>
#include <rte_memzone.h>
+#include <rte_reciprocal.h>
#include "roc_api.h"
@@ -37,6 +38,11 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define CNXK_TIM_SP 0x1
+#define CNXK_TIM_MP 0x2
+#define CNXK_TIM_ENA_FB 0x10
+#define CNXK_TIM_ENA_DFB 0x20
+
#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
((1ULL << (64 - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
@@ -107,10 +113,14 @@ struct cnxk_tim_ring {
uintptr_t base;
uint16_t nb_chunk_slots;
uint32_t nb_bkts;
+ uint64_t last_updt_cyc;
+ uint64_t ring_start_cyc;
uint64_t tck_int;
uint64_t tot_int;
struct cnxk_tim_bkt *bkt;
struct rte_mempool *chunk_pool;
+ struct rte_reciprocal_u64 fast_div;
+ struct rte_reciprocal_u64 fast_bkt;
uint64_t arm_cnt;
uint8_t prod_type_sp;
uint8_t disable_npa;
@@ -201,6 +211,19 @@ cnxk_tim_cntfrq(void)
}
#endif
+#define TIM_ARM_FASTPATH_MODES \
+ FP(sp, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(mp, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+
+#define FP(_name, _f2, _f1, flags) \
+ uint16_t cnxk_tim_arm_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint16_t nb_timers);
+TIM_ARM_FASTPATH_MODES
+#undef FP
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 49ee85245..268f845c8 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -4,3 +4,98 @@
#include "cnxk_tim_evdev.h"
#include "cnxk_tim_worker.h"
+
+static inline int
+cnxk_tim_arm_checks(const struct cnxk_tim_ring *const tim_ring,
+ struct rte_event_timer *const tim)
+{
+ if (unlikely(tim->state)) {
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ rte_errno = EALREADY;
+ goto fail;
+ }
+
+ if (unlikely(!tim->timeout_ticks ||
+ tim->timeout_ticks > tim_ring->nb_bkts)) {
+ tim->state = tim->timeout_ticks ?
+ RTE_EVENT_TIMER_ERROR_TOOLATE :
+ RTE_EVENT_TIMER_ERROR_TOOEARLY;
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ return -EINVAL;
+}
+
+static inline void
+cnxk_tim_format_event(const struct rte_event_timer *const tim,
+ struct cnxk_tim_ent *const entry)
+{
+ entry->w0 = (tim->ev.event & 0xFFC000000000) >> 6 |
+ (tim->ev.event & 0xFFFFFFFFF);
+ entry->wqe = tim->ev.u64;
+}
+
+static inline void
+cnxk_tim_sync_start_cyc(struct cnxk_tim_ring *tim_ring)
+{
+ uint64_t cur_cyc = cnxk_tim_cntvct();
+ uint32_t real_bkt;
+
+ if (cur_cyc - tim_ring->last_updt_cyc > tim_ring->tot_int) {
+ real_bkt = plt_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
+ cur_cyc = cnxk_tim_cntvct();
+
+ tim_ring->ring_start_cyc =
+ cur_cyc - (real_bkt * tim_ring->tck_int);
+ tim_ring->last_updt_cyc = cur_cyc;
+ }
+}
+
+static __rte_always_inline uint16_t
+cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim, const uint16_t nb_timers,
+ const uint8_t flags)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_ent entry;
+ uint16_t index;
+ int ret;
+
+ cnxk_tim_sync_start_cyc(tim_ring);
+ for (index = 0; index < nb_timers; index++) {
+ if (cnxk_tim_arm_checks(tim_ring, tim[index]))
+ break;
+
+ cnxk_tim_format_event(tim[index], &entry);
+ if (flags & CNXK_TIM_SP)
+ ret = cnxk_tim_add_entry_sp(tim_ring,
+ tim[index]->timeout_ticks,
+ tim[index], &entry, flags);
+ if (flags & CNXK_TIM_MP)
+ ret = cnxk_tim_add_entry_mp(tim_ring,
+ tim[index]->timeout_ticks,
+ tim[index], &entry, flags);
+
+ if (unlikely(ret)) {
+ rte_errno = -ret;
+ break;
+ }
+ }
+
+ return index;
+}
+
+#define FP(_name, _f2, _f1, _flags) \
+ uint16_t __rte_noinline cnxk_tim_arm_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint16_t nb_timers) \
+ { \
+ return cnxk_tim_timer_arm_burst(adptr, tim, nb_timers, \
+ _flags); \
+ }
+TIM_ARM_FASTPATH_MODES
+#undef FP
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index d56e67360..de8464e33 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -120,4 +120,304 @@ cnxk_tim_bkt_clr_nent(struct cnxk_tim_bkt *bktp)
return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
}
+static inline uint64_t
+cnxk_tim_bkt_fast_mod(uint64_t n, uint64_t d, struct rte_reciprocal_u64 R)
+{
+ return (n - (d * rte_reciprocal_divide_u64(n, &R)));
+}
+
+static __rte_always_inline void
+cnxk_tim_get_target_bucket(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct cnxk_tim_bkt **bkt,
+ struct cnxk_tim_bkt **mirr_bkt)
+{
+ const uint64_t bkt_cyc = cnxk_tim_cntvct() - tim_ring->ring_start_cyc;
+ uint64_t bucket =
+ rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div) +
+ rel_bkt;
+ uint64_t mirr_bucket = 0;
+
+ bucket = cnxk_tim_bkt_fast_mod(bucket, tim_ring->nb_bkts,
+ tim_ring->fast_bkt);
+ mirr_bucket =
+ cnxk_tim_bkt_fast_mod(bucket + (tim_ring->nb_bkts >> 1),
+ tim_ring->nb_bkts, tim_ring->fast_bkt);
+ *bkt = &tim_ring->bkt[bucket];
+ *mirr_bkt = &tim_ring->bkt[mirr_bucket];
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_clr_bkt(struct cnxk_tim_ring *const tim_ring,
+ struct cnxk_tim_bkt *const bkt)
+{
+#define TIM_MAX_OUTSTANDING_OBJ 64
+ void *pend_chunks[TIM_MAX_OUTSTANDING_OBJ];
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_ent *pnext;
+ uint8_t objs = 0;
+
+ chunk = ((struct cnxk_tim_ent *)(uintptr_t)bkt->first_chunk);
+ chunk = (struct cnxk_tim_ent *)(uintptr_t)(chunk +
+ tim_ring->nb_chunk_slots)
+ ->w0;
+ while (chunk) {
+ pnext = (struct cnxk_tim_ent *)(uintptr_t)(
+ (chunk + tim_ring->nb_chunk_slots)->w0);
+ if (objs == TIM_MAX_OUTSTANDING_OBJ) {
+ rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
+ objs);
+ objs = 0;
+ }
+ pend_chunks[objs++] = chunk;
+ chunk = pnext;
+ }
+
+ if (objs)
+ rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks, objs);
+
+ return (struct cnxk_tim_ent *)(uintptr_t)bkt->first_chunk;
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_refill_chunk(struct cnxk_tim_bkt *const bkt,
+ struct cnxk_tim_bkt *const mirr_bkt,
+ struct cnxk_tim_ring *const tim_ring)
+{
+ struct cnxk_tim_ent *chunk;
+
+ if (bkt->nb_entry || !bkt->first_chunk) {
+ if (unlikely(rte_mempool_get(tim_ring->chunk_pool,
+ (void **)&chunk)))
+ return NULL;
+ if (bkt->nb_entry) {
+ *(uint64_t *)(((struct cnxk_tim_ent *)
+ mirr_bkt->current_chunk) +
+ tim_ring->nb_chunk_slots) =
+ (uintptr_t)chunk;
+ } else {
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ } else {
+ chunk = cnxk_tim_clr_bkt(tim_ring, bkt);
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+
+ return chunk;
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_insert_chunk(struct cnxk_tim_bkt *const bkt,
+ struct cnxk_tim_bkt *const mirr_bkt,
+ struct cnxk_tim_ring *const tim_ring)
+{
+ struct cnxk_tim_ent *chunk;
+
+ if (unlikely(rte_mempool_get(tim_ring->chunk_pool, (void **)&chunk)))
+ return NULL;
+
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+ if (bkt->nb_entry) {
+ *(uint64_t *)(((struct cnxk_tim_ent *)(uintptr_t)
+ mirr_bkt->current_chunk) +
+ tim_ring->nb_chunk_slots) = (uintptr_t)chunk;
+ } else {
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ return chunk;
+}
+
+static __rte_always_inline int
+cnxk_tim_add_entry_sp(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct rte_event_timer *const tim,
+ const struct cnxk_tim_ent *const pent,
+ const uint8_t flags)
+{
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+ int16_t rem;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+
+ /* Get Bucket sema*/
+ lock_sema = cnxk_tim_bkt_fetch_sema_lock(bkt);
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+ /* Insert the work. */
+ rem = cnxk_tim_bkt_fetch_rem(lock_sema);
+
+ if (!rem) {
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ bkt->chunk_remainder = 0;
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOMEM;
+ }
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ bkt->chunk_remainder = tim_ring->nb_chunk_slots - 1;
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += tim_ring->nb_chunk_slots - rem;
+ }
+
+ /* Copy work entry. */
+ *chunk = *pent;
+
+ tim->impl_opaque[0] = (uintptr_t)chunk;
+ tim->impl_opaque[1] = (uintptr_t)bkt;
+ __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
+ cnxk_tim_bkt_inc_nent(bkt);
+ cnxk_tim_bkt_dec_lock_relaxed(bkt);
+
+ return 0;
+}
+
+static __rte_always_inline int
+cnxk_tim_add_entry_mp(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct rte_event_timer *const tim,
+ const struct cnxk_tim_ent *const pent,
+ const uint8_t flags)
+{
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+ int16_t rem;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+ /* Get Bucket sema*/
+ lock_sema = cnxk_tim_bkt_fetch_sema_lock(bkt);
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+
+ rem = cnxk_tim_bkt_fetch_rem(lock_sema);
+ if (rem < 0) {
+ cnxk_tim_bkt_dec_lock(bkt);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[rem], [%[crem]] \n"
+ " tbz %[rem], 63, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[rem], [%[crem]] \n"
+ " tbnz %[rem], 63, rty%= \n"
+ "dne%=: \n"
+ : [rem] "=&r"(rem)
+ : [crem] "r"(&bkt->w1)
+ : "memory");
+#else
+ while (__atomic_load_n((int64_t *)&bkt->w1, __ATOMIC_RELAXED) <
+ 0)
+ ;
+#endif
+ goto __retry;
+ } else if (!rem) {
+ /* Only one thread can be here*/
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ cnxk_tim_bkt_set_rem(bkt, 0);
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOMEM;
+ }
+ *chunk = *pent;
+ if (cnxk_tim_bkt_fetch_lock(lock_sema)) {
+ do {
+ lock_sema = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (cnxk_tim_bkt_fetch_lock(lock_sema) - 1);
+ }
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ __atomic_store_n(&bkt->chunk_remainder,
+ tim_ring->nb_chunk_slots - 1,
+ __ATOMIC_RELEASE);
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += tim_ring->nb_chunk_slots - rem;
+ *chunk = *pent;
+ }
+
+ tim->impl_opaque[0] = (uintptr_t)chunk;
+ tim->impl_opaque[1] = (uintptr_t)bkt;
+ __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
+ cnxk_tim_bkt_inc_nent(bkt);
+ cnxk_tim_bkt_dec_lock_relaxed(bkt);
+
+ return 0;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v4 30/34] event/cnxk: add timer arm timeout burst
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (28 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 29/34] event/cnxk: add timer arm routine pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 31/34] event/cnxk: add timer cancel function pbhagavatula
` (4 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer arm timeout burst function.
All the timers requested to be armed have the same timeout.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 7 ++
drivers/event/cnxk/cnxk_tim_evdev.h | 12 +++
drivers/event/cnxk/cnxk_tim_worker.c | 53 ++++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 141 +++++++++++++++++++++++++++
4 files changed, 213 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index ecc952a6a..68c3b3049 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -88,7 +88,14 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
#undef FP
};
+ const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2] = {
+#define FP(_name, _f1, flags) [_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
+ TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+ };
+
cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
+ cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
}
static void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 1f2aad17a..b66aac17c 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -217,6 +217,10 @@ cnxk_tim_cntfrq(void)
FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+#define TIM_ARM_TMO_FASTPATH_MODES \
+ FP(dfb, 0, CNXK_TIM_ENA_DFB) \
+ FP(fb, 1, CNXK_TIM_ENA_FB)
+
#define FP(_name, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
@@ -224,6 +228,14 @@ cnxk_tim_cntfrq(void)
TIM_ARM_FASTPATH_MODES
#undef FP
+#define FP(_name, _f1, flags) \
+ uint16_t cnxk_tim_arm_tmo_tick_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint64_t timeout_tick, \
+ const uint16_t nb_timers);
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 268f845c8..717c53fb7 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -99,3 +99,56 @@ cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
}
TIM_ARM_FASTPATH_MODES
#undef FP
+
+static __rte_always_inline uint16_t
+cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint64_t timeout_tick,
+ const uint16_t nb_timers, const uint8_t flags)
+{
+ struct cnxk_tim_ent entry[CNXK_TIM_MAX_BURST] __rte_cache_aligned;
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ uint16_t set_timers = 0;
+ uint16_t arr_idx = 0;
+ uint16_t idx;
+ int ret;
+
+ if (unlikely(!timeout_tick || timeout_tick > tim_ring->nb_bkts)) {
+ const enum rte_event_timer_state state =
+ timeout_tick ? RTE_EVENT_TIMER_ERROR_TOOLATE :
+ RTE_EVENT_TIMER_ERROR_TOOEARLY;
+ for (idx = 0; idx < nb_timers; idx++)
+ tim[idx]->state = state;
+
+ rte_errno = EINVAL;
+ return 0;
+ }
+
+ cnxk_tim_sync_start_cyc(tim_ring);
+ while (arr_idx < nb_timers) {
+ for (idx = 0; idx < CNXK_TIM_MAX_BURST && (arr_idx < nb_timers);
+ idx++, arr_idx++) {
+ cnxk_tim_format_event(tim[arr_idx], &entry[idx]);
+ }
+ ret = cnxk_tim_add_entry_brst(tim_ring, timeout_tick,
+ &tim[set_timers], entry, idx,
+ flags);
+ set_timers += ret;
+ if (ret != idx)
+ break;
+ }
+
+ return set_timers;
+}
+
+#define FP(_name, _f1, _flags) \
+ uint16_t __rte_noinline cnxk_tim_arm_tmo_tick_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint64_t timeout_tick, \
+ const uint16_t nb_timers) \
+ { \
+ return cnxk_tim_timer_arm_tmo_brst(adptr, tim, timeout_tick, \
+ nb_timers, _flags); \
+ }
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index de8464e33..56cb4cdd9 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -420,4 +420,145 @@ cnxk_tim_add_entry_mp(struct cnxk_tim_ring *const tim_ring,
return 0;
}
+static inline uint16_t
+cnxk_tim_cpy_wrk(uint16_t index, uint16_t cpy_lmt, struct cnxk_tim_ent *chunk,
+ struct rte_event_timer **const tim,
+ const struct cnxk_tim_ent *const ents,
+ const struct cnxk_tim_bkt *const bkt)
+{
+ for (; index < cpy_lmt; index++) {
+ *chunk = *(ents + index);
+ tim[index]->impl_opaque[0] = (uintptr_t)chunk++;
+ tim[index]->impl_opaque[1] = (uintptr_t)bkt;
+ tim[index]->state = RTE_EVENT_TIMER_ARMED;
+ }
+
+ return index;
+}
+
+/* Burst mode functions */
+static inline int
+cnxk_tim_add_entry_brst(struct cnxk_tim_ring *const tim_ring,
+ const uint16_t rel_bkt,
+ struct rte_event_timer **const tim,
+ const struct cnxk_tim_ent *ents,
+ const uint16_t nb_timers, const uint8_t flags)
+{
+ struct cnxk_tim_ent *chunk = NULL;
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_bkt *bkt;
+ uint16_t chunk_remainder;
+ uint16_t index = 0;
+ uint64_t lock_sema;
+ int16_t rem, crem;
+ uint8_t lock_cnt;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+
+ /* Only one thread beyond this. */
+ lock_sema = cnxk_tim_bkt_inc_lock(bkt);
+ lock_cnt = (uint8_t)((lock_sema >> TIM_BUCKET_W1_S_LOCK) &
+ TIM_BUCKET_W1_M_LOCK);
+
+ if (lock_cnt) {
+ cnxk_tim_bkt_dec_lock(bkt);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxrb %w[lock_cnt], [%[lock]] \n"
+ " tst %w[lock_cnt], 255 \n"
+ " beq dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxrb %w[lock_cnt], [%[lock]] \n"
+ " tst %w[lock_cnt], 255 \n"
+ " bne rty%= \n"
+ "dne%=: \n"
+ : [lock_cnt] "=&r"(lock_cnt)
+ : [lock] "r"(&bkt->lock)
+ : "memory");
+#else
+ while (__atomic_load_n(&bkt->lock, __ATOMIC_RELAXED))
+ ;
+#endif
+ goto __retry;
+ }
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+
+ chunk_remainder = cnxk_tim_bkt_fetch_rem(lock_sema);
+ rem = chunk_remainder - nb_timers;
+ if (rem < 0) {
+ crem = tim_ring->nb_chunk_slots - chunk_remainder;
+ if (chunk_remainder && crem) {
+ chunk = ((struct cnxk_tim_ent *)
+ mirr_bkt->current_chunk) +
+ crem;
+
+ index = cnxk_tim_cpy_wrk(index, chunk_remainder, chunk,
+ tim, ents, bkt);
+ cnxk_tim_bkt_sub_rem(bkt, chunk_remainder);
+ cnxk_tim_bkt_add_nent(bkt, chunk_remainder);
+ }
+
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ rte_errno = ENOMEM;
+ tim[index]->state = RTE_EVENT_TIMER_ERROR;
+ return crem;
+ }
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ cnxk_tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
+
+ rem = nb_timers - chunk_remainder;
+ cnxk_tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem);
+ cnxk_tim_bkt_add_nent(bkt, rem);
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += (tim_ring->nb_chunk_slots - chunk_remainder);
+
+ cnxk_tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
+ cnxk_tim_bkt_sub_rem(bkt, nb_timers);
+ cnxk_tim_bkt_add_nent(bkt, nb_timers);
+ }
+
+ cnxk_tim_bkt_dec_lock(bkt);
+
+ return nb_timers;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v4 31/34] event/cnxk: add timer cancel function
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (29 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 30/34] event/cnxk: add timer arm timeout burst pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 32/34] event/cnxk: add timer stats get and reset pbhagavatula
` (3 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add function to cancel event timer that has been armed.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 1 +
drivers/event/cnxk/cnxk_tim_evdev.h | 5 ++++
drivers/event/cnxk/cnxk_tim_worker.c | 30 ++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 37 ++++++++++++++++++++++++++++
4 files changed, 73 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 68c3b3049..62a15a4a1 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -96,6 +96,7 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
+ cnxk_tim_ops.cancel_burst = cnxk_tim_timer_cancel_burst;
}
static void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index b66aac17c..001f448d5 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -236,6 +236,11 @@ TIM_ARM_FASTPATH_MODES
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
+uint16_t
+cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint16_t nb_timers);
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 717c53fb7..98ff143c3 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -152,3 +152,33 @@ cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
}
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
+
+uint16_t
+cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint16_t nb_timers)
+{
+ uint16_t index;
+ int ret;
+
+ RTE_SET_USED(adptr);
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ for (index = 0; index < nb_timers; index++) {
+ if (tim[index]->state == RTE_EVENT_TIMER_CANCELED) {
+ rte_errno = EALREADY;
+ break;
+ }
+
+ if (tim[index]->state != RTE_EVENT_TIMER_ARMED) {
+ rte_errno = EINVAL;
+ break;
+ }
+ ret = cnxk_tim_rm_entry(tim[index]);
+ if (ret) {
+ rte_errno = -ret;
+ break;
+ }
+ }
+
+ return index;
+}
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index 56cb4cdd9..7caeb1a8f 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -561,4 +561,41 @@ cnxk_tim_add_entry_brst(struct cnxk_tim_ring *const tim_ring,
return nb_timers;
}
+static int
+cnxk_tim_rm_entry(struct rte_event_timer *tim)
+{
+ struct cnxk_tim_ent *entry;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+
+ if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0)
+ return -ENOENT;
+
+ entry = (struct cnxk_tim_ent *)(uintptr_t)tim->impl_opaque[0];
+ if (entry->wqe != tim->ev.u64) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ return -ENOENT;
+ }
+
+ bkt = (struct cnxk_tim_bkt *)(uintptr_t)tim->impl_opaque[1];
+ lock_sema = cnxk_tim_bkt_inc_lock(bkt);
+ if (cnxk_tim_bkt_get_hbt(lock_sema) ||
+ !cnxk_tim_bkt_get_nent(lock_sema)) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOENT;
+ }
+
+ entry->w0 = 0;
+ entry->wqe = 0;
+ tim->state = RTE_EVENT_TIMER_CANCELED;
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ cnxk_tim_bkt_dec_lock(bkt);
+
+ return 0;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v4 32/34] event/cnxk: add timer stats get and reset
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (30 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 31/34] event/cnxk: add timer cancel function pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 33/34] event/cnxk: add timer adapter start and stop pbhagavatula
` (2 subsequent siblings)
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter statistics get and reset functions.
Stats are disabled by default and can be enabled through devargs.
Example:
--dev "0002:1e:00.0,tim_stats_ena=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 9 +++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_tim_evdev.c | 50 ++++++++++++++++++++++++----
drivers/event/cnxk/cnxk_tim_evdev.h | 38 ++++++++++++++-------
drivers/event/cnxk/cnxk_tim_worker.c | 11 ++++--
6 files changed, 91 insertions(+), 23 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index a8199aac7..11145dd7d 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -115,6 +115,15 @@ Runtime Config Options
-a 0002:0e:00.0,tim_chnk_slots=1023
+- ``TIM enable arm/cancel statistics``
+
+ The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of
+ event timer adapter.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_stats_ena=1
+
- ``TIM limit max rings reserved``
The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index cadc792a7..bf4052c76 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -505,4 +505,5 @@ RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CN10K_SSO_GW_MODE "=<int>"
CNXK_TIM_DISABLE_NPA "=1"
CNXK_TIM_CHNK_SLOTS "=<int>"
- CNXK_TIM_RINGS_LMT "=<int>");
+ CNXK_TIM_RINGS_LMT "=<int>"
+ CNXK_TIM_STATS_ENA "=1");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index e503f6b1c..0684417ea 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -574,4 +574,5 @@ RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CN9K_SSO_SINGLE_WS "=1"
CNXK_TIM_DISABLE_NPA "=1"
CNXK_TIM_CHNK_SLOTS "=<int>"
- CNXK_TIM_RINGS_LMT "=<int>");
+ CNXK_TIM_RINGS_LMT "=<int>"
+ CNXK_TIM_STATS_ENA "=1");
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 62a15a4a1..a73ca33d8 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -81,21 +81,25 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
{
uint8_t prod_flag = !tim_ring->prod_type_sp;
- /* [DFB/FB] [SP][MP]*/
- const rte_event_timer_arm_burst_t arm_burst[2][2] = {
-#define FP(_name, _f2, _f1, flags) [_f2][_f1] = cnxk_tim_arm_burst_##_name,
+ /* [STATS] [DFB/FB] [SP][MP]*/
+ const rte_event_timer_arm_burst_t arm_burst[2][2][2] = {
+#define FP(_name, _f3, _f2, _f1, flags) \
+ [_f3][_f2][_f1] = cnxk_tim_arm_burst_##_name,
TIM_ARM_FASTPATH_MODES
#undef FP
};
- const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2] = {
-#define FP(_name, _f1, flags) [_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
+ const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) \
+ [_f2][_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
};
- cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
- cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
+ cnxk_tim_ops.arm_burst =
+ arm_burst[tim_ring->enable_stats][tim_ring->ena_dfb][prod_flag];
+ cnxk_tim_ops.arm_tmo_tick_burst =
+ arm_tmo_burst[tim_ring->enable_stats][tim_ring->ena_dfb];
cnxk_tim_ops.cancel_burst = cnxk_tim_timer_cancel_burst;
}
@@ -159,6 +163,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->nb_timers = rcfg->nb_timers;
tim_ring->chunk_sz = dev->chunk_sz;
tim_ring->disable_npa = dev->disable_npa;
+ tim_ring->enable_stats = dev->enable_stats;
if (tim_ring->disable_npa) {
tim_ring->nb_chunks =
@@ -241,6 +246,30 @@ cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
}
+static int
+cnxk_tim_stats_get(const struct rte_event_timer_adapter *adapter,
+ struct rte_event_timer_adapter_stats *stats)
+{
+ struct cnxk_tim_ring *tim_ring = adapter->data->adapter_priv;
+ uint64_t bkt_cyc = cnxk_tim_cntvct() - tim_ring->ring_start_cyc;
+
+ stats->evtim_exp_count =
+ __atomic_load_n(&tim_ring->arm_cnt, __ATOMIC_RELAXED);
+ stats->ev_enq_count = stats->evtim_exp_count;
+ stats->adapter_tick_count =
+ rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div);
+ return 0;
+}
+
+static int
+cnxk_tim_stats_reset(const struct rte_event_timer_adapter *adapter)
+{
+ struct cnxk_tim_ring *tim_ring = adapter->data->adapter_priv;
+
+ __atomic_store_n(&tim_ring->arm_cnt, 0, __ATOMIC_RELAXED);
+ return 0;
+}
+
int
cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
uint32_t *caps,
@@ -258,6 +287,11 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
+ if (dev->enable_stats) {
+ cnxk_tim_ops.stats_get = cnxk_tim_stats_get;
+ cnxk_tim_ops.stats_reset = cnxk_tim_stats_reset;
+ }
+
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
@@ -281,6 +315,8 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
&dev->disable_npa);
rte_kvargs_process(kvlist, CNXK_TIM_CHNK_SLOTS, &parse_kvargs_value,
&dev->chunk_slots);
+ rte_kvargs_process(kvlist, CNXK_TIM_STATS_ENA, &parse_kvargs_flag,
+ &dev->enable_stats);
rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
&dev->min_ring_cnt);
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 001f448d5..b5e4cfc9e 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -36,12 +36,14 @@
#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
+#define CNXK_TIM_STATS_ENA "tim_stats_ena"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
-#define CNXK_TIM_SP 0x1
-#define CNXK_TIM_MP 0x2
-#define CNXK_TIM_ENA_FB 0x10
-#define CNXK_TIM_ENA_DFB 0x20
+#define CNXK_TIM_SP 0x1
+#define CNXK_TIM_MP 0x2
+#define CNXK_TIM_ENA_FB 0x10
+#define CNXK_TIM_ENA_DFB 0x20
+#define CNXK_TIM_ENA_STATS 0x40
#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
@@ -82,6 +84,7 @@ struct cnxk_tim_evdev {
uint8_t disable_npa;
uint16_t chunk_slots;
uint16_t min_ring_cnt;
+ uint8_t enable_stats;
};
enum cnxk_tim_clk_src {
@@ -123,6 +126,7 @@ struct cnxk_tim_ring {
struct rte_reciprocal_u64 fast_bkt;
uint64_t arm_cnt;
uint8_t prod_type_sp;
+ uint8_t enable_stats;
uint8_t disable_npa;
uint8_t ena_dfb;
uint16_t ring_id;
@@ -212,23 +216,33 @@ cnxk_tim_cntfrq(void)
#endif
#define TIM_ARM_FASTPATH_MODES \
- FP(sp, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
- FP(mp, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
- FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
- FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+ FP(sp, 0, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(mp, 0, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(fb_sp, 0, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(fb_mp, 0, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP) \
+ FP(stats_sp, 1, 0, 0, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(stats_mp, 1, 0, 1, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(stats_fb_sp, 1, 1, 0, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(stats_fb_mp, 1, 1, 1, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB | CNXK_TIM_MP)
#define TIM_ARM_TMO_FASTPATH_MODES \
- FP(dfb, 0, CNXK_TIM_ENA_DFB) \
- FP(fb, 1, CNXK_TIM_ENA_FB)
+ FP(dfb, 0, 0, CNXK_TIM_ENA_DFB) \
+ FP(fb, 0, 1, CNXK_TIM_ENA_FB) \
+ FP(stats_dfb, 1, 0, CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB) \
+ FP(stats_fb, 1, 1, CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB)
-#define FP(_name, _f2, _f1, flags) \
+#define FP(_name, _f3, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint16_t nb_timers);
TIM_ARM_FASTPATH_MODES
#undef FP
-#define FP(_name, _f1, flags) \
+#define FP(_name, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_tmo_tick_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint64_t timeout_tick, \
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 98ff143c3..3ce99864a 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -86,10 +86,13 @@ cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
}
}
+ if (flags & CNXK_TIM_ENA_STATS)
+ __atomic_fetch_add(&tim_ring->arm_cnt, index, __ATOMIC_RELAXED);
+
return index;
}
-#define FP(_name, _f2, _f1, _flags) \
+#define FP(_name, _f3, _f2, _f1, _flags) \
uint16_t __rte_noinline cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint16_t nb_timers) \
@@ -138,10 +141,14 @@ cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
break;
}
+ if (flags & CNXK_TIM_ENA_STATS)
+ __atomic_fetch_add(&tim_ring->arm_cnt, set_timers,
+ __ATOMIC_RELAXED);
+
return set_timers;
}
-#define FP(_name, _f1, _flags) \
+#define FP(_name, _f2, _f1, _flags) \
uint16_t __rte_noinline cnxk_tim_arm_tmo_tick_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint64_t timeout_tick, \
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v4 33/34] event/cnxk: add timer adapter start and stop
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (31 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 32/34] event/cnxk: add timer stats get and reset pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 34/34] event/cnxk: add devargs to control timer adapters pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter start and stop functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 71 ++++++++++++++++++++++++++++-
1 file changed, 70 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index a73ca33d8..19b71b4f5 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -246,6 +246,73 @@ cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
}
+static void
+cnxk_tim_calibrate_start_tsc(struct cnxk_tim_ring *tim_ring)
+{
+#define CNXK_TIM_CALIB_ITER 1E6
+ uint32_t real_bkt, bucket;
+ int icount, ecount = 0;
+ uint64_t bkt_cyc;
+
+ for (icount = 0; icount < CNXK_TIM_CALIB_ITER; icount++) {
+ real_bkt = plt_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
+ bkt_cyc = cnxk_tim_cntvct();
+ bucket = (bkt_cyc - tim_ring->ring_start_cyc) /
+ tim_ring->tck_int;
+ bucket = bucket % (tim_ring->nb_bkts);
+ tim_ring->ring_start_cyc =
+ bkt_cyc - (real_bkt * tim_ring->tck_int);
+ if (bucket != real_bkt)
+ ecount++;
+ }
+ tim_ring->last_updt_cyc = bkt_cyc;
+ plt_tim_dbg("Bucket mispredict %3.2f distance %d\n",
+ 100 - (((double)(icount - ecount) / (double)icount) * 100),
+ bucket - real_bkt);
+}
+
+static int
+cnxk_tim_ring_start(const struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ rc = roc_tim_lf_enable(&dev->tim, tim_ring->ring_id,
+ &tim_ring->ring_start_cyc, NULL);
+ if (rc < 0)
+ return rc;
+
+ tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, cnxk_tim_cntfrq());
+ tim_ring->tot_int = tim_ring->tck_int * tim_ring->nb_bkts;
+ tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int);
+ tim_ring->fast_bkt = rte_reciprocal_value_u64(tim_ring->nb_bkts);
+
+ cnxk_tim_calibrate_start_tsc(tim_ring);
+
+ return rc;
+}
+
+static int
+cnxk_tim_ring_stop(const struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ rc = roc_tim_lf_disable(&dev->tim, tim_ring->ring_id);
+ if (rc < 0)
+ plt_err("Failed to disable timer ring");
+
+ return rc;
+}
+
static int
cnxk_tim_stats_get(const struct rte_event_timer_adapter *adapter,
struct rte_event_timer_adapter_stats *stats)
@@ -278,13 +345,14 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
RTE_SET_USED(flags);
- RTE_SET_USED(ops);
if (dev == NULL)
return -ENODEV;
cnxk_tim_ops.init = cnxk_tim_ring_create;
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+ cnxk_tim_ops.start = cnxk_tim_ring_start;
+ cnxk_tim_ops.stop = cnxk_tim_ring_stop;
cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
if (dev->enable_stats) {
@@ -295,6 +363,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+ *ops = &cnxk_tim_ops;
return 0;
}
--
2.17.1
* [dpdk-dev] [PATCH v4 34/34] event/cnxk: add devargs to control timer adapters
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (32 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 33/34] event/cnxk: add timer adapter start and stop pbhagavatula
@ 2021-05-03 15:22 ` pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
34 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-03 15:22 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add devargs to control each event timer adapter's (i.e. TIM ring's)
internal parameters uniquely. The expected dict format is
[ring-chnk_slots-disable_npa-stats_ena], where 0 selects the default
value.
Example:
--dev "0002:1e:00.0,tim_ring_ctl=[2-1023-1-0]"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 11 ++++
drivers/event/cnxk/cnxk_tim_evdev.c | 96 ++++++++++++++++++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 10 +++
3 files changed, 116 insertions(+), 1 deletion(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 11145dd7d..1bd935abc 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -135,6 +135,17 @@ Runtime Config Options
-a 0002:0e:00.0,tim_rings_lmt=5
+- ``TIM ring control internal parameters``
+
+ When using multiple TIM rings, the ``tim_ring_ctl`` devargs can be used to
+ control each TIM ring's internal parameters uniquely. The expected dict
+ format is [ring-chnk_slots-disable_npa-stats_ena], where 0 selects the
+ default value.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 19b71b4f5..9d40e336d 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -121,7 +121,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
struct cnxk_tim_ring *tim_ring;
- int rc;
+ int i, rc;
if (dev == NULL)
return -ENODEV;
@@ -165,6 +165,20 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->disable_npa = dev->disable_npa;
tim_ring->enable_stats = dev->enable_stats;
+ for (i = 0; i < dev->ring_ctl_cnt; i++) {
+ struct cnxk_tim_ctl *ring_ctl = &dev->ring_ctl_data[i];
+
+ if (ring_ctl->ring == tim_ring->ring_id) {
+ tim_ring->chunk_sz =
+ ring_ctl->chunk_slots ?
+ ((uint32_t)(ring_ctl->chunk_slots + 1) *
+ CNXK_TIM_CHUNK_ALIGNMENT) :
+ tim_ring->chunk_sz;
+ tim_ring->enable_stats = ring_ctl->enable_stats;
+ tim_ring->disable_npa = ring_ctl->disable_npa;
+ }
+ }
+
if (tim_ring->disable_npa) {
tim_ring->nb_chunks =
tim_ring->nb_timers /
@@ -368,6 +382,84 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
return 0;
}
+static void
+cnxk_tim_parse_ring_param(char *value, void *opaque)
+{
+ struct cnxk_tim_evdev *dev = opaque;
+ struct cnxk_tim_ctl ring_ctl = {0};
+ char *tok = strtok(value, "-");
+ struct cnxk_tim_ctl *old_ptr;
+ uint16_t *val;
+
+ val = (uint16_t *)&ring_ctl;
+
+ if (!strlen(value))
+ return;
+
+ while (tok != NULL) {
+ *val = atoi(tok);
+ tok = strtok(NULL, "-");
+ val++;
+ }
+
+ if (val != (&ring_ctl.enable_stats + 1)) {
+ plt_err("Invalid ring param expected [ring-chunk_sz-disable_npa-enable_stats]");
+ return;
+ }
+
+ dev->ring_ctl_cnt++;
+ old_ptr = dev->ring_ctl_data;
+ dev->ring_ctl_data =
+ rte_realloc(dev->ring_ctl_data,
+ sizeof(struct cnxk_tim_ctl) * dev->ring_ctl_cnt, 0);
+ if (dev->ring_ctl_data == NULL) {
+ dev->ring_ctl_data = old_ptr;
+ dev->ring_ctl_cnt--;
+ return;
+ }
+
+ dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl;
+}
+
+static void
+cnxk_tim_parse_ring_ctl_list(const char *value, void *opaque)
+{
+ char *s = strdup(value);
+ char *start = NULL;
+ char *end = NULL;
+ char *f = s;
+
+ while (*s) {
+ if (*s == '[')
+ start = s;
+ else if (*s == ']')
+ end = s;
+
+ if (start && start < end) {
+ *end = 0;
+ cnxk_tim_parse_ring_param(start + 1, opaque);
+ start = end;
+ s = end;
+ }
+ s++;
+ }
+
+ free(f);
+}
+
+static int
+cnxk_tim_parse_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ /* Dict format [ring-chunk_sz-disable_npa-enable_stats] use '-' as ','
+ * isn't allowed. 0 represents default.
+ */
+ cnxk_tim_parse_ring_ctl_list(value, opaque);
+
+ return 0;
+}
+
static void
cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
{
@@ -388,6 +480,8 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
&dev->enable_stats);
rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
&dev->min_ring_cnt);
+ rte_kvargs_process(kvlist, CNXK_TIM_RING_CTL,
+ &cnxk_tim_parse_kvargs_dict, dev);
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index b5e4cfc9e..c369f6f47 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -38,6 +38,7 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_STATS_ENA "tim_stats_ena"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define CNXK_TIM_RING_CTL "tim_ring_ctl"
#define CNXK_TIM_SP 0x1
#define CNXK_TIM_MP 0x2
@@ -75,6 +76,13 @@
#define TIM_BUCKET_SEMA_WLOCK \
(TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
+struct cnxk_tim_ctl {
+ uint16_t ring;
+ uint16_t chunk_slots;
+ uint16_t disable_npa;
+ uint16_t enable_stats;
+};
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
@@ -85,6 +93,8 @@ struct cnxk_tim_evdev {
uint16_t chunk_slots;
uint16_t min_ring_cnt;
uint8_t enable_stats;
+ uint16_t ring_ctl_cnt;
+ struct cnxk_tim_ctl *ring_ctl_data;
};
enum cnxk_tim_clk_src {
--
2.17.1
* [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 00/34] " pbhagavatula
` (33 preceding siblings ...)
2021-05-03 15:22 ` [dpdk-dev] [PATCH v4 34/34] event/cnxk: add devargs to control timer adapters pbhagavatula
@ 2021-05-04 0:26 ` pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 01/35] common/cnxk: rename deprecated constant pbhagavatula
` (35 more replies)
34 siblings, 36 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:26 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
This patchset adds support for the Marvell CN106XX SoC based on the
'common/cnxk' driver. In the future, CN9K (a.k.a. octeontx2) will also be
supported by the same driver once its code is ready, and 'event/octeontx2'
will be deprecated.
v5 Changes:
- Update inline asm extension prefix.
v4 Changes:
- s/PCI_ANY_ID/RTE_PCI_ANY_ID.
- Remove dependency on net_cnxk
- Fix compilation issues with xstats patch.
v3 Changes:
- Fix documentation, copyright.
- Update release notes.
v2 Changes:
- Split Rx/Tx adapter into separate patch set to remove dependency on net/cnxk
- Add missing xStats patch.
- Fix incorrect head wait operation.
Pavan Nikhilesh (18):
common/cnxk: rename deprecated constant
common/cnxk: update inline asm prefix
event/cnxk: add build infra and device setup
event/cnxk: add platform specific device probe
event/cnxk: add common configuration validation
event/cnxk: allocate event inflight buffers
event/cnxk: add devargs to configure getwork mode
event/cnxk: add SSO HW device operations
event/cnxk: add SSO GWS fastpath enqueue functions
event/cnxk: add SSO GWS dequeue fastpath functions
event/cnxk: add SSO selftest and dump
event/cnxk: add event port and queue xstats
event/cnxk: add devargs to disable NPA
event/cnxk: allow adapters to resize inflights
event/cnxk: add TIM bucket operations
event/cnxk: add timer arm routine
event/cnxk: add timer arm timeout burst
event/cnxk: add timer cancel function
Shijith Thotton (17):
event/cnxk: add device capabilities function
event/cnxk: add platform specific device config
event/cnxk: add event queue config functions
event/cnxk: add devargs for inflight buffer count
event/cnxk: add devargs to control SSO HWGRP QoS
event/cnxk: add port config functions
event/cnxk: add event port link and unlink
event/cnxk: add device start function
event/cnxk: add device stop and close functions
event/cnxk: support event timer
event/cnxk: add timer adapter capabilities
event/cnxk: create and free timer adapter
event/cnxk: add timer adapter info function
event/cnxk: add devargs for chunk size and rings
event/cnxk: add timer stats get and reset
event/cnxk: add timer adapter start and stop
event/cnxk: add devargs to control timer adapters
MAINTAINERS | 6 +
app/test/test_eventdev.c | 14 +
doc/guides/eventdevs/cnxk.rst | 162 ++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_21_05.rst | 2 +
drivers/common/cnxk/roc_platform.h | 33 +-
drivers/common/cnxk/roc_sso.c | 63 +
drivers/common/cnxk/roc_sso.h | 19 +
drivers/common/cnxk/version.map | 2 +
drivers/event/cnxk/cn10k_eventdev.c | 509 ++++++
drivers/event/cnxk/cn10k_worker.c | 115 ++
drivers/event/cnxk/cn10k_worker.h | 175 +++
drivers/event/cnxk/cn9k_eventdev.c | 578 +++++++
drivers/event/cnxk/cn9k_worker.c | 236 +++
drivers/event/cnxk/cn9k_worker.h | 297 ++++
drivers/event/cnxk/cnxk_eventdev.c | 647 ++++++++
drivers/event/cnxk/cnxk_eventdev.h | 253 +++
drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 +
drivers/event/cnxk/cnxk_eventdev_selftest.c | 1570 +++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev_stats.c | 289 ++++
drivers/event/cnxk/cnxk_tim_evdev.c | 538 +++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 275 ++++
drivers/event/cnxk/cnxk_tim_worker.c | 191 +++
drivers/event/cnxk/cnxk_tim_worker.h | 601 +++++++
drivers/event/cnxk/cnxk_worker.h | 101 ++
drivers/event/cnxk/meson.build | 23 +
drivers/event/cnxk/version.map | 3 +
drivers/event/meson.build | 1 +
28 files changed, 6755 insertions(+), 16 deletions(-)
create mode 100644 doc/guides/eventdevs/cnxk.rst
create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
create mode 100644 drivers/event/cnxk/cn10k_worker.c
create mode 100644 drivers/event/cnxk/cn10k_worker.h
create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
create mode 100644 drivers/event/cnxk/cn9k_worker.c
create mode 100644 drivers/event/cnxk/cn9k_worker.h
create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev_selftest.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
create mode 100644 drivers/event/cnxk/cnxk_worker.h
create mode 100644 drivers/event/cnxk/meson.build
create mode 100644 drivers/event/cnxk/version.map
--
2.17.1
* [dpdk-dev] [PATCH v5 01/35] common/cnxk: rename deprecated constant
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
@ 2021-05-04 0:26 ` pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 02/35] common/cnxk: update inline asm prefix pbhagavatula
` (34 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:26 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
The PCI_ANY_ID constant is deprecated and has been renamed to RTE_PCI_ANY_ID.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/common/cnxk/roc_platform.h | 24 ++++++++++--------------
1 file changed, 10 insertions(+), 14 deletions(-)
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 97600e56f..29ab71240 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -165,22 +165,18 @@ extern int cnxk_logtype_tm;
#define plt_tm_dbg(fmt, ...) plt_dbg(tm, fmt, ##__VA_ARGS__)
#ifdef __cplusplus
-#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- RTE_CLASS_ANY_ID, \
- PCI_VENDOR_ID_CAVIUM, \
- (dev), \
- PCI_ANY_ID, \
- (subsystem_dev), \
+#define CNXK_PCI_ID(subsystem_dev, dev) \
+ { \
+ RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
+ (subsystem_dev), \
}
#else
-#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- .class_id = RTE_CLASS_ANY_ID, \
- .vendor_id = PCI_VENDOR_ID_CAVIUM, \
- .device_id = (dev), \
- .subsystem_vendor_id = PCI_ANY_ID, \
- .subsystem_device_id = (subsystem_dev), \
+#define CNXK_PCI_ID(subsystem_dev, dev) \
+ { \
+ .class_id = RTE_CLASS_ANY_ID, \
+ .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
+ .subsystem_vendor_id = RTE_PCI_ANY_ID, \
+ .subsystem_device_id = (subsystem_dev), \
}
#endif
--
2.17.1
* [dpdk-dev] [PATCH v5 02/35] common/cnxk: update inline asm prefix
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 01/35] common/cnxk: rename deprecated constant pbhagavatula
@ 2021-05-04 0:26 ` pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 03/35] event/cnxk: add build infra and device setup pbhagavatula
` (33 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:26 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Update the inline asm prefix to avoid overriding the CPU type; instead,
express the additional extensions required.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/common/cnxk/roc_platform.h | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 29ab71240..7864fa4ff 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -23,9 +23,14 @@
#include "roc_bits.h"
#if defined(__ARM_FEATURE_SVE)
-#define PLT_CPU_FEATURE_PREAMBLE ".cpu generic+crc+lse+sve\n"
+#define PLT_CPU_FEATURE_PREAMBLE \
+ ".arch_extension crc\n" \
+ ".arch_extension lse\n" \
+ ".arch_extension sve\n"
#else
-#define PLT_CPU_FEATURE_PREAMBLE ".cpu generic+crc+lse\n"
+#define PLT_CPU_FEATURE_PREAMBLE \
+ ".arch_extension crc\n" \
+ ".arch_extension lse\n"
#endif
#define PLT_ASSERT RTE_ASSERT
--
2.17.1
* [dpdk-dev] [PATCH v5 03/35] event/cnxk: add build infra and device setup
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 01/35] common/cnxk: rename deprecated constant pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 02/35] common/cnxk: update inline asm prefix pbhagavatula
@ 2021-05-04 0:26 ` pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 04/35] event/cnxk: add device capabilities function pbhagavatula
` (32 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:26 UTC (permalink / raw)
To: jerinj, Thomas Monjalon, Pavan Nikhilesh, Shijith Thotton,
Ray Kinsella, Neil Horman, Anatoly Burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add meson build infrastructure along with the event device
SSO initialization and teardown functions.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 6 +++
doc/guides/eventdevs/cnxk.rst | 55 +++++++++++++++++++++
doc/guides/eventdevs/index.rst | 1 +
doc/guides/rel_notes/release_21_05.rst | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 68 ++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 39 +++++++++++++++
drivers/event/cnxk/meson.build | 13 +++++
drivers/event/cnxk/version.map | 3 ++
drivers/event/meson.build | 1 +
9 files changed, 188 insertions(+)
create mode 100644 doc/guides/eventdevs/cnxk.rst
create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
create mode 100644 drivers/event/cnxk/meson.build
create mode 100644 drivers/event/cnxk/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 44f3d322e..5a2297e99 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1224,6 +1224,12 @@ M: Jerin Jacob <jerinj@marvell.com>
F: drivers/event/octeontx2/
F: doc/guides/eventdevs/octeontx2.rst
+Marvell cnxk
+M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+M: Shijith Thotton <sthotton@marvell.com>
+F: drivers/event/cnxk/
+F: doc/guides/eventdevs/cnxk.rst
+
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Nipun Gupta <nipun.gupta@nxp.com>
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
new file mode 100644
index 000000000..148280b85
--- /dev/null
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -0,0 +1,55 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2021 Marvell.
+
+Marvell cnxk SSO Eventdev Driver
+================================
+
+The SSO PMD (**librte_event_cnxk**) provides poll mode
+eventdev driver support for the inbuilt event device found in the
+**Marvell OCTEON cnxk** SoC family.
+
+More information about OCTEON cnxk SoC can be found at `Marvell Official Website
+<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
+
+Supported OCTEON cnxk SoCs
+--------------------------
+
+- CN9XX
+- CN10XX
+
+Features
+--------
+
+Features of the OCTEON cnxk SSO PMD are:
+
+- 256 Event queues
+- 26 (dual) and 52 (single) Event ports on CN9XX
+- 52 Event ports on CN10XX
+- HW event scheduler
+- Supports 1M flows per event queue
+- Flow based event pipelining
+- Flow pinning support in flow based event pipelining
+- Queue based event pipelining
+- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
+- Event scheduling QoS based on event queue priority
+- Open system with configurable amount of outstanding events limited only by
+ DRAM
+- HW accelerated dequeue timeout support to enable power management
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+ See :doc:`../platform/cnxk` for setup information.
+
+Debugging Options
+-----------------
+
+.. _table_octeon_cnxk_event_debug_options:
+
+.. table:: OCTEON cnxk event device debug options
+
+ +---+------------+-------------------------------------------------------+
+ | # | Component | EAL log command |
+ +===+============+=======================================================+
+ | 1 | SSO | --log-level='pmd\.event\.cnxk,8' |
+ +---+------------+-------------------------------------------------------+
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index 738788d9e..214302539 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -11,6 +11,7 @@ application through the eventdev API.
:maxdepth: 2
:numbered:
+ cnxk
dlb2
dpaa
dpaa2
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index b3224dc33..428615e4f 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -75,6 +75,8 @@ New Features
net, crypto and event PMD's.
* Added mempool/cnxk driver which provides the support for the integrated
mempool device.
+ * Added event/cnxk driver which provides the support for integrated event
+ device.
* **Enhanced ethdev representor syntax.**
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
new file mode 100644
index 000000000..7ea782eaa
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+int
+cnxk_sso_init(struct rte_eventdev *event_dev)
+{
+ const struct rte_memzone *mz = NULL;
+ struct rte_pci_device *pci_dev;
+ struct cnxk_sso_evdev *dev;
+ int rc;
+
+ mz = rte_memzone_reserve(CNXK_SSO_MZ_NAME, sizeof(uint64_t),
+ SOCKET_ID_ANY, 0);
+ if (mz == NULL) {
+ plt_err("Failed to create eventdev memzone");
+ return -ENOMEM;
+ }
+
+ dev = cnxk_sso_pmd_priv(event_dev);
+ pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
+ dev->sso.pci_dev = pci_dev;
+
+ *(uint64_t *)mz->addr = (uint64_t)dev;
+
+ /* Initialize the base cnxk_dev object */
+ rc = roc_sso_dev_init(&dev->sso);
+ if (rc < 0) {
+ plt_err("Failed to initialize RoC SSO rc=%d", rc);
+ goto error;
+ }
+
+ dev->is_timeout_deq = 0;
+ dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+ dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
+ dev->max_num_events = -1;
+ dev->nb_event_queues = 0;
+ dev->nb_event_ports = 0;
+
+ return 0;
+
+error:
+ rte_memzone_free(mz);
+ return rc;
+}
+
+int
+cnxk_sso_fini(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ /* For secondary processes, nothing to be done */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ roc_sso_rsrc_fini(&dev->sso);
+ roc_sso_dev_fini(&dev->sso);
+
+ return 0;
+}
+
+int
+cnxk_sso_remove(struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_remove(pci_dev, cnxk_sso_fini);
+}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
new file mode 100644
index 000000000..74d0990fa
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_EVENTDEV_H__
+#define __CNXK_EVENTDEV_H__
+
+#include <rte_pci.h>
+
+#include <eventdev_pmd_pci.h>
+
+#include "roc_api.h"
+
+#define USEC2NSEC(__us) ((__us)*1E3)
+
+#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+
+struct cnxk_sso_evdev {
+ struct roc_sso sso;
+ uint8_t is_timeout_deq;
+ uint8_t nb_event_queues;
+ uint8_t nb_event_ports;
+ uint32_t min_dequeue_timeout_ns;
+ uint32_t max_dequeue_timeout_ns;
+ int32_t max_num_events;
+} __rte_cache_aligned;
+
+static inline struct cnxk_sso_evdev *
+cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
+{
+ return event_dev->data->dev_private;
+}
+
+/* Common ops API. */
+int cnxk_sso_init(struct rte_eventdev *event_dev);
+int cnxk_sso_fini(struct rte_eventdev *event_dev);
+int cnxk_sso_remove(struct rte_pci_device *pci_dev);
+
+#endif /* __CNXK_EVENTDEV_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
new file mode 100644
index 000000000..fbe245fca
--- /dev/null
+++ b/drivers/event/cnxk/meson.build
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2021 Marvell.
+#
+
+if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on 64-bit Linux'
+ subdir_done()
+endif
+
+sources = files('cnxk_eventdev.c')
+
+deps += ['bus_pci', 'common_cnxk']
diff --git a/drivers/event/cnxk/version.map b/drivers/event/cnxk/version.map
new file mode 100644
index 000000000..ee80c5172
--- /dev/null
+++ b/drivers/event/cnxk/version.map
@@ -0,0 +1,3 @@
+INTERNAL {
+ local: *;
+};
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index 539c5aeb9..63d6b410b 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -6,6 +6,7 @@ if is_windows
endif
drivers = [
+ 'cnxk',
'dlb2',
'dpaa',
'dpaa2',
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v5 04/35] event/cnxk: add device capabilities function
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (2 preceding siblings ...)
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 03/35] event/cnxk: add build infra and device setup pbhagavatula
@ 2021-05-04 0:26 ` pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 05/35] event/cnxk: add platform specific device probe pbhagavatula
` (31 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:26 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add the info_get function to return details of the queue, flow, and
prioritization capabilities supported by this device.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 24 ++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 4 ++++
2 files changed, 28 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 7ea782eaa..3a7053af6 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -4,6 +4,30 @@
#include "cnxk_eventdev.h"
+void
+cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
+ struct rte_event_dev_info *dev_info)
+{
+
+ dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
+ dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
+ dev_info->max_event_queues = dev->max_event_queues;
+ dev_info->max_event_queue_flows = (1ULL << 20);
+ dev_info->max_event_queue_priority_levels = 8;
+ dev_info->max_event_priority_levels = 1;
+ dev_info->max_event_ports = dev->max_event_ports;
+ dev_info->max_event_port_dequeue_depth = 1;
+ dev_info->max_event_port_enqueue_depth = 1;
+ dev_info->max_num_events = dev->max_num_events;
+ dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
+ RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
+ RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+ RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
+ RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE |
+ RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 74d0990fa..9745bfd3e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,6 +17,8 @@
struct cnxk_sso_evdev {
struct roc_sso sso;
+ uint8_t max_event_queues;
+ uint8_t max_event_ports;
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
@@ -35,5 +37,7 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
int cnxk_sso_init(struct rte_eventdev *event_dev);
int cnxk_sso_fini(struct rte_eventdev *event_dev);
int cnxk_sso_remove(struct rte_pci_device *pci_dev);
+void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
+ struct rte_event_dev_info *dev_info);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v5 05/35] event/cnxk: add platform specific device probe
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (3 preceding siblings ...)
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 04/35] event/cnxk: add device capabilities function pbhagavatula
@ 2021-05-04 0:26 ` pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 06/35] event/cnxk: add common configuration validation pbhagavatula
` (30 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:26 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add platform-specific event device probe and remove functions, along
with the event device info get function.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 101 +++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 102 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 2 +
drivers/event/cnxk/meson.build | 5 +-
4 files changed, 209 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
new file mode 100644
index 000000000..1216acaad
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+static void
+cn10k_sso_set_rsrc(void *arg)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ dev->max_event_ports = dev->sso.max_hws;
+ dev->max_event_queues =
+ dev->sso.max_hwgrp > RTE_EVENT_MAX_QUEUES_PER_DEV ?
+ RTE_EVENT_MAX_QUEUES_PER_DEV :
+ dev->sso.max_hwgrp;
+}
+
+static void
+cn10k_sso_info_get(struct rte_eventdev *event_dev,
+ struct rte_event_dev_info *dev_info)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ dev_info->driver_name = RTE_STR(EVENTDEV_NAME_CN10K_PMD);
+ cnxk_sso_info_get(dev, dev_info);
+}
+
+static struct rte_eventdev_ops cn10k_sso_dev_ops = {
+ .dev_infos_get = cn10k_sso_info_get,
+};
+
+static int
+cn10k_sso_init(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ if (RTE_CACHE_LINE_SIZE != 64) {
+ plt_err("Driver not compiled for CN10K");
+ return -EFAULT;
+ }
+
+ rc = roc_plt_init();
+ if (rc < 0) {
+ plt_err("Failed to initialize platform model");
+ return rc;
+ }
+
+ event_dev->dev_ops = &cn10k_sso_dev_ops;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = cnxk_sso_init(event_dev);
+ if (rc < 0)
+ return rc;
+
+ cn10k_sso_set_rsrc(cnxk_sso_pmd_priv(event_dev));
+ if (!dev->max_event_ports || !dev->max_event_queues) {
+ plt_err("Not enough eventdev resource queues=%d ports=%d",
+ dev->max_event_queues, dev->max_event_ports);
+ cnxk_sso_fini(event_dev);
+ return -ENODEV;
+ }
+
+ plt_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
+ event_dev->data->name, dev->max_event_queues,
+ dev->max_event_ports);
+
+ return 0;
+}
+
+static int
+cn10k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_probe(pci_drv, pci_dev,
+ sizeof(struct cnxk_sso_evdev),
+ cn10k_sso_init);
+}
+
+static const struct rte_pci_id cn10k_pci_sso_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver cn10k_pci_sso = {
+ .id_table = cn10k_pci_sso_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+ .probe = cn10k_sso_probe,
+ .remove = cnxk_sso_remove,
+};
+
+RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
+RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
+RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
new file mode 100644
index 000000000..988d2425f
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+#define CN9K_DUAL_WS_NB_WS 2
+#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
+
+static void
+cn9k_sso_set_rsrc(void *arg)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ if (dev->dual_ws)
+ dev->max_event_ports = dev->sso.max_hws / CN9K_DUAL_WS_NB_WS;
+ else
+ dev->max_event_ports = dev->sso.max_hws;
+ dev->max_event_queues =
+ dev->sso.max_hwgrp > RTE_EVENT_MAX_QUEUES_PER_DEV ?
+ RTE_EVENT_MAX_QUEUES_PER_DEV :
+ dev->sso.max_hwgrp;
+}
+
+static void
+cn9k_sso_info_get(struct rte_eventdev *event_dev,
+ struct rte_event_dev_info *dev_info)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ dev_info->driver_name = RTE_STR(EVENTDEV_NAME_CN9K_PMD);
+ cnxk_sso_info_get(dev, dev_info);
+}
+
+static struct rte_eventdev_ops cn9k_sso_dev_ops = {
+ .dev_infos_get = cn9k_sso_info_get,
+};
+
+static int
+cn9k_sso_init(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ if (RTE_CACHE_LINE_SIZE != 128) {
+ plt_err("Driver not compiled for CN9K");
+ return -EFAULT;
+ }
+
+ rc = roc_plt_init();
+ if (rc < 0) {
+ plt_err("Failed to initialize platform model");
+ return rc;
+ }
+
+ event_dev->dev_ops = &cn9k_sso_dev_ops;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = cnxk_sso_init(event_dev);
+ if (rc < 0)
+ return rc;
+
+ cn9k_sso_set_rsrc(cnxk_sso_pmd_priv(event_dev));
+ if (!dev->max_event_ports || !dev->max_event_queues) {
+ plt_err("Not enough eventdev resource queues=%d ports=%d",
+ dev->max_event_queues, dev->max_event_ports);
+ cnxk_sso_fini(event_dev);
+ return -ENODEV;
+ }
+
+ plt_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
+ event_dev->data->name, dev->max_event_queues,
+ dev->max_event_ports);
+
+ return 0;
+}
+
+static int
+cn9k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_probe(
+ pci_drv, pci_dev, sizeof(struct cnxk_sso_evdev), cn9k_sso_init);
+}
+
+static const struct rte_pci_id cn9k_pci_sso_map[] = {
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver cn9k_pci_sso = {
+ .id_table = cn9k_pci_sso_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+ .probe = cn9k_sso_probe,
+ .remove = cnxk_sso_remove,
+};
+
+RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
+RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
+RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 9745bfd3e..6bdf0b347 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -25,6 +25,8 @@ struct cnxk_sso_evdev {
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
+ /* CN9K */
+ uint8_t dual_ws;
} __rte_cache_aligned;
static inline struct cnxk_sso_evdev *
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index fbe245fca..22eb28345 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -8,6 +8,9 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
subdir_done()
endif
-sources = files('cnxk_eventdev.c')
+sources = files('cn10k_eventdev.c',
+ 'cn9k_eventdev.c',
+ 'cnxk_eventdev.c',
+ )
deps += ['bus_pci', 'common_cnxk']
--
2.17.1
* [dpdk-dev] [PATCH v5 06/35] event/cnxk: add common configuration validation
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (4 preceding siblings ...)
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 05/35] event/cnxk: add platform specific device probe pbhagavatula
@ 2021-05-04 0:26 ` pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 07/35] event/cnxk: add platform specific device config pbhagavatula
` (29 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:26 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add common configuration validation along with default queue and port
configuration functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 70 ++++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 +++
2 files changed, 76 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 3a7053af6..3eab1ed29 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,6 +28,76 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
+int
+cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
+{
+ struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint32_t deq_tmo_ns;
+
+ deq_tmo_ns = conf->dequeue_timeout_ns;
+
+ if (deq_tmo_ns == 0)
+ deq_tmo_ns = dev->min_dequeue_timeout_ns;
+ if (deq_tmo_ns < dev->min_dequeue_timeout_ns ||
+ deq_tmo_ns > dev->max_dequeue_timeout_ns) {
+ plt_err("Unsupported dequeue timeout requested");
+ return -EINVAL;
+ }
+
+ if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
+ dev->is_timeout_deq = 1;
+
+ dev->deq_tmo_ns = deq_tmo_ns;
+
+ if (!conf->nb_event_queues || !conf->nb_event_ports ||
+ conf->nb_event_ports > dev->max_event_ports ||
+ conf->nb_event_queues > dev->max_event_queues) {
+ plt_err("Unsupported event queues/ports requested");
+ return -EINVAL;
+ }
+
+ if (conf->nb_event_port_dequeue_depth > 1) {
+ plt_err("Unsupported event port deq depth requested");
+ return -EINVAL;
+ }
+
+ if (conf->nb_event_port_enqueue_depth > 1) {
+ plt_err("Unsupported event port enq depth requested");
+ return -EINVAL;
+ }
+
+ dev->nb_event_queues = conf->nb_event_queues;
+ dev->nb_event_ports = conf->nb_event_ports;
+
+ return 0;
+}
+
+void
+cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+ struct rte_event_queue_conf *queue_conf)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(queue_id);
+
+ queue_conf->nb_atomic_flows = (1ULL << 20);
+ queue_conf->nb_atomic_order_sequences = (1ULL << 20);
+ queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+ queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+}
+
+void
+cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+ struct rte_event_port_conf *port_conf)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ RTE_SET_USED(port_id);
+ port_conf->new_event_threshold = dev->max_num_events;
+ port_conf->dequeue_depth = 1;
+ port_conf->enqueue_depth = 1;
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 6bdf0b347..59d96a08f 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -22,6 +22,7 @@ struct cnxk_sso_evdev {
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
+ uint32_t deq_tmo_ns;
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
@@ -41,5 +42,10 @@ int cnxk_sso_fini(struct rte_eventdev *event_dev);
int cnxk_sso_remove(struct rte_pci_device *pci_dev);
void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
struct rte_event_dev_info *dev_info);
+int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
+void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+ struct rte_event_queue_conf *queue_conf);
+void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+ struct rte_event_port_conf *port_conf);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v5 07/35] event/cnxk: add platform specific device config
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (5 preceding siblings ...)
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 06/35] event/cnxk: add common configuration validation pbhagavatula
@ 2021-05-04 0:26 ` pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 08/35] event/cnxk: add event queue config functions pbhagavatula
` (28 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:26 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add platform-specific event device configuration that attaches the
requested number of SSO HWS (event port) and HWGRP (event queue) LFs
to the RVU PF/VF.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 35 +++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 37 +++++++++++++++++++++++++++++
2 files changed, 72 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 1216acaad..779a2e026 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -16,6 +16,14 @@ cn10k_sso_set_rsrc(void *arg)
dev->sso.max_hwgrp;
}
+static int
+cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
+}
+
static void
cn10k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -26,8 +34,35 @@ cn10k_sso_info_get(struct rte_eventdev *event_dev,
cnxk_sso_info_get(dev, dev_info);
}
+static int
+cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ rc = cnxk_sso_dev_validate(event_dev);
+ if (rc < 0) {
+ plt_err("Invalid event device configuration");
+ return -EINVAL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+
+ rc = cn10k_sso_rsrc_init(dev, dev->nb_event_ports,
+ dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to initialize SSO resources");
+ return -ENODEV;
+ }
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
+ .dev_configure = cn10k_sso_dev_configure,
+ .queue_def_conf = cnxk_sso_queue_def_conf,
+ .port_def_conf = cnxk_sso_port_def_conf,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 988d2425f..d042f58da 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -22,6 +22,17 @@ cn9k_sso_set_rsrc(void *arg)
dev->sso.max_hwgrp;
}
+static int
+cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
+{
+ struct cnxk_sso_evdev *dev = arg;
+
+ if (dev->dual_ws)
+ hws = hws * CN9K_DUAL_WS_NB_WS;
+
+ return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
+}
+
static void
cn9k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -32,8 +43,34 @@ cn9k_sso_info_get(struct rte_eventdev *event_dev,
cnxk_sso_info_get(dev, dev_info);
}
+static int
+cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc;
+
+ rc = cnxk_sso_dev_validate(event_dev);
+ if (rc < 0) {
+ plt_err("Invalid event device configuration");
+ return -EINVAL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+
+ rc = cn9k_sso_rsrc_init(dev, dev->nb_event_ports, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to initialize SSO resources");
+ return -ENODEV;
+ }
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
+ .dev_configure = cn9k_sso_dev_configure,
+ .queue_def_conf = cnxk_sso_queue_def_conf,
+ .port_def_conf = cnxk_sso_port_def_conf,
};
static int
--
2.17.1
* [dpdk-dev] [PATCH v5 08/35] event/cnxk: add event queue config functions
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (6 preceding siblings ...)
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 07/35] event/cnxk: add platform specific device config pbhagavatula
@ 2021-05-04 0:26 ` pbhagavatula
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 09/35] event/cnxk: allocate event inflight buffers pbhagavatula
` (27 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:26 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add setup and release functions for event queues, i.e.
SSO HWGRPs.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 2 ++
drivers/event/cnxk/cn9k_eventdev.c | 2 ++
drivers/event/cnxk/cnxk_eventdev.c | 19 +++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 3 +++
4 files changed, 26 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 779a2e026..557f26b8f 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -62,6 +62,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
.queue_def_conf = cnxk_sso_queue_def_conf,
+ .queue_setup = cnxk_sso_queue_setup,
+ .queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
};
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index d042f58da..eba1bfbf0 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -70,6 +70,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
.queue_def_conf = cnxk_sso_queue_def_conf,
+ .queue_setup = cnxk_sso_queue_setup,
+ .queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
};
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 3eab1ed29..e22479a19 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -86,6 +86,25 @@ cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
}
+int
+cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+ const struct rte_event_queue_conf *queue_conf)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority);
+ /* Normalize <0-255> to <0-7> */
+ return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF,
+ queue_conf->priority / 32);
+}
+
+void
+cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(queue_id);
+}
+
void
cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 59d96a08f..426219c85 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -45,6 +45,9 @@ void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
+int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+ const struct rte_event_queue_conf *queue_conf);
+void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
--
2.17.1
* [dpdk-dev] [PATCH v5 09/35] event/cnxk: allocate event inflight buffers
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (7 preceding siblings ...)
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 08/35] event/cnxk: add event queue config functions pbhagavatula
@ 2021-05-04 0:26 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 10/35] event/cnxk: add devargs for inflight buffer count pbhagavatula
` (26 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:26 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Allocate buffers in DRAM that hold inflight events.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 7 ++
drivers/event/cnxk/cn9k_eventdev.c | 7 ++
drivers/event/cnxk/cnxk_eventdev.c | 105 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 14 +++-
4 files changed, 132 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 557f26b8f..9c5ddea76 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -55,6 +55,13 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
return -ENODEV;
}
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
+ return 0;
+cnxk_rsrc_fini:
+ roc_sso_rsrc_fini(&dev->sso);
return rc;
}
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index eba1bfbf0..954fea01f 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -63,6 +63,13 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
return -ENODEV;
}
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
+ return 0;
+cnxk_rsrc_fini:
+ roc_sso_rsrc_fini(&dev->sso);
return rc;
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index e22479a19..34a8bce05 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,12 +28,107 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
}
+int
+cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
+{
+ char pool_name[RTE_MEMZONE_NAMESIZE];
+ uint32_t xaq_cnt, npa_aura_id;
+ const struct rte_memzone *mz;
+ struct npa_aura_s *aura;
+ static int reconfig_cnt;
+ int rc;
+
+ if (dev->xaq_pool) {
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ }
+
+ /*
+ * Allocate memory for Add work backpressure.
+ */
+ mz = rte_memzone_lookup(CNXK_SSO_FC_NAME);
+ if (mz == NULL)
+ mz = rte_memzone_reserve_aligned(CNXK_SSO_FC_NAME,
+ sizeof(struct npa_aura_s) +
+ RTE_CACHE_LINE_SIZE,
+ 0, 0, RTE_CACHE_LINE_SIZE);
+ if (mz == NULL) {
+ plt_err("Failed to allocate mem for fcmem");
+ return -ENOMEM;
+ }
+
+ dev->fc_iova = mz->iova;
+ dev->fc_mem = mz->addr;
+
+ aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem +
+ RTE_CACHE_LINE_SIZE);
+ memset(aura, 0, sizeof(struct npa_aura_s));
+
+ aura->fc_ena = 1;
+ aura->fc_addr = dev->fc_iova;
+ aura->fc_hyst_bits = 0; /* Store count on all updates */
+
+ /* Taken from HRM 14.3.3(4) */
+ xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
+ xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+
+ plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
+ /* Setup XAQ based on number of nb queues. */
+ snprintf(pool_name, 30, "cnxk_xaq_buf_pool_%d", reconfig_cnt);
+ dev->xaq_pool = (void *)rte_mempool_create_empty(
+ pool_name, xaq_cnt, dev->sso.xaq_buf_size, 0, 0,
+ rte_socket_id(), 0);
+
+ if (dev->xaq_pool == NULL) {
+ plt_err("Unable to create empty mempool.");
+ rte_memzone_free(mz);
+ return -ENOMEM;
+ }
+
+ rc = rte_mempool_set_ops_byname(dev->xaq_pool,
+ rte_mbuf_platform_mempool_ops(), aura);
+ if (rc != 0) {
+ plt_err("Unable to set xaqpool ops.");
+ goto alloc_fail;
+ }
+
+ rc = rte_mempool_populate_default(dev->xaq_pool);
+ if (rc < 0) {
+ plt_err("Unable to set populate xaqpool.");
+ goto alloc_fail;
+ }
+ reconfig_cnt++;
+ /* When SW does addwork (enqueue) check if there is space in XAQ by
+ * comparing fc_addr above against the xaq_lmt calculated below.
+ * There should be a minimum headroom (CNXK_SSO_XAQ_SLACK / 2) for SSO
+ * to request XAQ to cache them even before enqueue is called.
+ */
+ dev->xaq_lmt =
+ xaq_cnt - (CNXK_SSO_XAQ_SLACK / 2 * dev->nb_event_queues);
+ dev->nb_xaq_cfg = xaq_cnt;
+
+ npa_aura_id = roc_npa_aura_handle_to_aura(dev->xaq_pool->pool_id);
+ return roc_sso_hwgrp_alloc_xaq(&dev->sso, npa_aura_id,
+ dev->nb_event_queues);
+alloc_fail:
+ rte_mempool_free(dev->xaq_pool);
+ rte_memzone_free(mz);
+ return rc;
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint32_t deq_tmo_ns;
+ int rc;
deq_tmo_ns = conf->dequeue_timeout_ns;
@@ -67,6 +162,16 @@ cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
return -EINVAL;
}
+ if (dev->xaq_pool) {
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ }
+
dev->nb_event_queues = conf->nb_event_queues;
dev->nb_event_ports = conf->nb_event_ports;
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 426219c85..4abe4548d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -5,6 +5,7 @@
#ifndef __CNXK_EVENTDEV_H__
#define __CNXK_EVENTDEV_H__
+#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
#include <eventdev_pmd_pci.h>
@@ -13,7 +14,10 @@
#define USEC2NSEC(__us) ((__us)*1E3)
-#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
+#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
+#define CNXK_SSO_XAQ_SLACK (8)
struct cnxk_sso_evdev {
struct roc_sso sso;
@@ -26,6 +30,11 @@ struct cnxk_sso_evdev {
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
+ uint64_t *fc_mem;
+ uint64_t xaq_lmt;
+ uint64_t nb_xaq_cfg;
+ rte_iova_t fc_iova;
+ struct rte_mempool *xaq_pool;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
@@ -36,6 +45,9 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
return event_dev->data->dev_private;
}
+/* Configuration functions */
+int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
+
/* Common ops API. */
int cnxk_sso_init(struct rte_eventdev *event_dev);
int cnxk_sso_fini(struct rte_eventdev *event_dev);
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v5 10/35] event/cnxk: add devargs for inflight buffer count
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (8 preceding siblings ...)
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 09/35] event/cnxk: allocate event inflight buffers pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 11/35] event/cnxk: add devargs to control SSO HWGRP QoS pbhagavatula
` (25 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
The number of events for an *open system* event device is specified
as -1 as per the eventdev specification.
Since SSO in-flight events are only limited by DRAM size, the
xae_cnt devargs parameter is introduced to provide an upper limit for
in-flight events.
Example:
--dev "0002:0e:00.0,xae_cnt=8192"
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 14 ++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 1 +
drivers/event/cnxk/cnxk_eventdev.c | 24 ++++++++++++++++++++++--
drivers/event/cnxk/cnxk_eventdev.h | 15 +++++++++++++++
5 files changed, 53 insertions(+), 2 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 148280b85..b556681ff 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -41,6 +41,20 @@ Prerequisites and Compilation procedure
See :doc:`../platform/cnxk` for setup information.
+
+Runtime Config Options
+----------------------
+
+- ``Maximum number of in-flight events`` (default ``8192``)
+
+  In **Marvell OCTEON cnxk** the maximum number of in-flight events is only
+  limited by DRAM size; the ``xae_cnt`` devargs parameter is introduced to
+  provide an upper limit for in-flight events.
+
+ For example::
+
+ -a 0002:0e:00.0,xae_cnt=16384
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9c5ddea76..020905290 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -143,3 +143,4 @@ static struct rte_pci_driver cn10k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 954fea01f..50f6fef01 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -146,3 +146,4 @@ static struct rte_pci_driver cn9k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 34a8bce05..fddd71a8d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -75,8 +75,11 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
/* Taken from HRM 14.3.3(4) */
xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
- xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
- (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+ if (dev->xae_cnt)
+ xaq_cnt += dev->xae_cnt / dev->sso.xae_waes;
+ else
+ xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
/* Setup XAQ based on number of nb queues. */
@@ -222,6 +225,22 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+static void
+cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
+{
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ return;
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, CNXK_SSO_XAE_CNT, &parse_kvargs_value,
+ &dev->xae_cnt);
+ rte_kvargs_free(kvlist);
+}
+
int
cnxk_sso_init(struct rte_eventdev *event_dev)
{
@@ -242,6 +261,7 @@ cnxk_sso_init(struct rte_eventdev *event_dev)
dev->sso.pci_dev = pci_dev;
*(uint64_t *)mz->addr = (uint64_t)dev;
+ cnxk_sso_parse_devargs(dev, pci_dev->device.devargs);
/* Initialize the base cnxk_dev object */
rc = roc_sso_dev_init(&dev->sso);
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 4abe4548d..202c6e6a7 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -5,6 +5,8 @@
#ifndef __CNXK_EVENTDEV_H__
#define __CNXK_EVENTDEV_H__
+#include <rte_devargs.h>
+#include <rte_kvargs.h>
#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
@@ -12,6 +14,8 @@
#include "roc_api.h"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+
#define USEC2NSEC(__us) ((__us)*1E3)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
@@ -35,10 +39,21 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ /* Dev args */
+ uint32_t xae_cnt;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
+static inline int
+parse_kvargs_value(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ *(uint32_t *)opaque = (uint32_t)atoi(value);
+ return 0;
+}
+
static inline struct cnxk_sso_evdev *
cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
{
--
2.17.1
* [dpdk-dev] [PATCH v5 11/35] event/cnxk: add devargs to control SSO HWGRP QoS
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (9 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 10/35] event/cnxk: add devargs for inflight buffer count pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 12/35] event/cnxk: add port config functions pbhagavatula
` (24 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
SSO HWGRPs i.e. queues use DRAM & SRAM buffers to hold in-flight
events. By default the buffers are assigned to the SSO HWGRPs to
satisfy minimum HW requirements. SSO is free to assign the remaining
buffers to HWGRPs based on a preconfigured threshold.
The QoS of an SSO HWGRP can be controlled by modifying the
above-mentioned thresholds. HWGRPs of higher importance can be assigned
higher thresholds than the rest.
Example:
--dev "0002:0e:00.0,qos=[1-50-50-50]" // [Qx-XAQ-TAQ-IAQ]
Qx -> Event queue Aka SSO GGRP.
XAQ -> DRAM In-flights.
TAQ & IAQ -> SRAM In-flights.
The values need to be expressed as percentages; 0 represents the
default.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 16 ++++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_eventdev.c | 78 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 12 ++++-
5 files changed, 109 insertions(+), 3 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index b556681ff..0583e5fdd 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -55,6 +55,22 @@ Runtime Config Options
-a 0002:0e:00.0,xae_cnt=16384
+- ``Event Group QoS support``
+
+  SSO GGRPs i.e. queues use DRAM & SRAM buffers to hold in-flight
+  events. By default the buffers are assigned to the SSO GGRPs to
+  satisfy minimum HW requirements. SSO is free to assign the remaining
+  buffers to GGRPs based on a preconfigured threshold.
+  The QoS of an SSO GGRP can be controlled by modifying the
+  above-mentioned thresholds. GGRPs of higher importance can be assigned
+  higher thresholds than the rest. The dictionary format is
+  [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ], expressed in percentages; 0 represents
+  the default.
+
+ For example::
+
+ -a 0002:0e:00.0,qos=[1-50-50-50]
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 020905290..0b39c6c09 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -143,4 +143,5 @@ static struct rte_pci_driver cn10k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
+ CNXK_SSO_GGRP_QOS "=<string>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 50f6fef01..ab165c850 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -146,4 +146,5 @@ static struct rte_pci_driver cn9k_pci_sso = {
RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>");
+RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
+ CNXK_SSO_GGRP_QOS "=<string>");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index fddd71a8d..e93aaccd8 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -225,6 +225,82 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+static void
+parse_queue_param(char *value, void *opaque)
+{
+ struct cnxk_sso_qos queue_qos = {0};
+ uint8_t *val = (uint8_t *)&queue_qos;
+ struct cnxk_sso_evdev *dev = opaque;
+ char *tok = strtok(value, "-");
+ struct cnxk_sso_qos *old_ptr;
+
+ if (!strlen(value))
+ return;
+
+ while (tok != NULL) {
+ *val = atoi(tok);
+ tok = strtok(NULL, "-");
+ val++;
+ }
+
+ if (val != (&queue_qos.iaq_prcnt + 1)) {
+ plt_err("Invalid QoS parameter expected [Qx-XAQ-TAQ-IAQ]");
+ return;
+ }
+
+ dev->qos_queue_cnt++;
+ old_ptr = dev->qos_parse_data;
+ dev->qos_parse_data = rte_realloc(
+ dev->qos_parse_data,
+ sizeof(struct cnxk_sso_qos) * dev->qos_queue_cnt, 0);
+ if (dev->qos_parse_data == NULL) {
+ dev->qos_parse_data = old_ptr;
+ dev->qos_queue_cnt--;
+ return;
+ }
+ dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos;
+}
+
+static void
+parse_qos_list(const char *value, void *opaque)
+{
+ char *s = strdup(value);
+ char *start = NULL;
+ char *end = NULL;
+ char *f = s;
+
+ while (*s) {
+ if (*s == '[')
+ start = s;
+ else if (*s == ']')
+ end = s;
+
+ if (start && start < end) {
+ *end = 0;
+ parse_queue_param(start + 1, opaque);
+ s = end;
+ start = end;
+ }
+ s++;
+ }
+
+ free(f);
+}
+
+static int
+parse_sso_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+	/* Dict format [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ]; use '-' because ','
+	 * isn't allowed. Everything is expressed in percentages; 0 represents
+	 * the default.
+ */
+ parse_qos_list(value, opaque);
+
+ return 0;
+}
+
static void
cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
{
@@ -238,6 +314,8 @@ cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
rte_kvargs_process(kvlist, CNXK_SSO_XAE_CNT, &parse_kvargs_value,
&dev->xae_cnt);
+ rte_kvargs_process(kvlist, CNXK_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
+ dev);
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 202c6e6a7..b96a6a908 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,7 +14,8 @@
#include "roc_api.h"
-#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_GGRP_QOS "qos"
#define USEC2NSEC(__us) ((__us)*1E3)
@@ -23,6 +24,13 @@
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+struct cnxk_sso_qos {
+ uint16_t queue;
+ uint8_t xaq_prcnt;
+ uint8_t taq_prcnt;
+ uint8_t iaq_prcnt;
+};
+
struct cnxk_sso_evdev {
struct roc_sso sso;
uint8_t max_event_queues;
@@ -41,6 +49,8 @@ struct cnxk_sso_evdev {
struct rte_mempool *xaq_pool;
/* Dev args */
uint32_t xae_cnt;
+ uint8_t qos_queue_cnt;
+ struct cnxk_sso_qos *qos_parse_data;
/* CN9K */
uint8_t dual_ws;
} __rte_cache_aligned;
--
2.17.1
* [dpdk-dev] [PATCH v5 12/35] event/cnxk: add port config functions
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (10 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 11/35] event/cnxk: add devargs to control SSO HWGRP QoS pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 13/35] event/cnxk: add event port link and unlink pbhagavatula
` (23 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add SSO HWS (a.k.a. event port) setup and release functions.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 121 +++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 147 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 65 ++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 91 +++++++++++++++++
4 files changed, 424 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 0b39c6c09..fcdc1cf84 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -4,6 +4,91 @@
#include "cnxk_eventdev.h"
+static void
+cn10k_init_hws_ops(struct cn10k_sso_hws *ws, uintptr_t base)
+{
+ ws->tag_wqe_op = base + SSOW_LF_GWS_WQE0;
+ ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK0;
+ ws->updt_wqe_op = base + SSOW_LF_GWS_OP_UPD_WQP_GRP1;
+ ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+ ws->swtag_untag_op = base + SSOW_LF_GWS_OP_SWTAG_UNTAG;
+ ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
+ ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
+static uint32_t
+cn10k_sso_gw_mode_wdata(struct cnxk_sso_evdev *dev)
+{
+ uint32_t wdata = BIT(16) | 1;
+
+ switch (dev->gw_mode) {
+ case CN10K_GW_MODE_NONE:
+ default:
+ break;
+ case CN10K_GW_MODE_PREF:
+ wdata |= BIT(19);
+ break;
+ case CN10K_GW_MODE_PREF_WFE:
+ wdata |= BIT(20) | BIT(19);
+ break;
+ }
+
+ return wdata;
+}
+
+static void *
+cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws;
+
+ /* Allocate event port memory */
+ ws = rte_zmalloc("cn10k_ws",
+ sizeof(struct cn10k_sso_hws) + RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (ws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ /* First cache line is reserved for cookie */
+ ws = (struct cn10k_sso_hws *)((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
+ ws->base = roc_sso_hws_base_get(&dev->sso, port_id);
+ cn10k_init_hws_ops(ws, ws->base);
+ ws->hws_id = port_id;
+ ws->swtag_req = 0;
+ ws->gw_wdata = cn10k_sso_gw_mode_wdata(dev);
+ ws->lmt_base = dev->sso.lmt_base;
+
+ return ws;
+}
+
+static void
+cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = hws;
+ uint64_t val;
+
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+
+ /* Set get_work timeout for HWS */
+ val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+ plt_write64(val, ws->base + SSOW_LF_GWS_NW_TIM);
+}
+
+static void
+cn10k_sso_hws_release(void *arg, void *hws)
+{
+ struct cn10k_sso_hws *ws = hws;
+
+ RTE_SET_USED(arg);
+ memset(ws, 0, sizeof(*ws));
+}
+
static void
cn10k_sso_set_rsrc(void *arg)
{
@@ -59,12 +144,46 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ rc = cnxk_setup_event_ports(event_dev, cn10k_sso_init_hws_mem,
+ cn10k_sso_hws_setup);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
+ dev->nb_event_ports = 0;
return rc;
}
+static int
+cn10k_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ const struct rte_event_port_conf *port_conf)
+{
+
+ RTE_SET_USED(port_conf);
+ return cnxk_sso_port_setup(event_dev, port_id, cn10k_sso_hws_setup);
+}
+
+static void
+cn10k_sso_port_release(void *port)
+{
+ struct cnxk_sso_hws_cookie *gws_cookie = cnxk_sso_hws_get_cookie(port);
+ struct cnxk_sso_evdev *dev;
+
+ if (port == NULL)
+ return;
+
+ dev = cnxk_sso_pmd_priv(gws_cookie->event_dev);
+ if (!gws_cookie->configured)
+ goto free;
+
+ cn10k_sso_hws_release(dev, port);
+ memset(gws_cookie, 0, sizeof(*gws_cookie));
+free:
+ rte_free(gws_cookie);
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -72,6 +191,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.queue_setup = cnxk_sso_queue_setup,
.queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
+ .port_setup = cn10k_sso_port_setup,
+ .port_release = cn10k_sso_port_release,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index ab165c850..b8c74633b 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -7,6 +7,63 @@
#define CN9K_DUAL_WS_NB_WS 2
#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
+static void
+cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t base)
+{
+ ws->tag_op = base + SSOW_LF_GWS_TAG;
+ ws->wqp_op = base + SSOW_LF_GWS_WQP;
+ ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK0;
+ ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
+ ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+ ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
+static void
+cn9k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ uint64_t val;
+
+ /* Set get_work tmo for HWS */
+ val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+ if (dev->dual_ws) {
+ dws = hws;
+ rte_memcpy(dws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ dws->fc_mem = dev->fc_mem;
+ dws->xaq_lmt = dev->xaq_lmt;
+
+ plt_write64(val, dws->base[0] + SSOW_LF_GWS_NW_TIM);
+ plt_write64(val, dws->base[1] + SSOW_LF_GWS_NW_TIM);
+ } else {
+ ws = hws;
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * CNXK_SSO_MAX_HWGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+
+ plt_write64(val, ws->base + SSOW_LF_GWS_NW_TIM);
+ }
+}
+
+static void
+cn9k_sso_hws_release(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+
+ if (dev->dual_ws) {
+ dws = hws;
+ memset(dws, 0, sizeof(*dws));
+ } else {
+ ws = hws;
+ memset(ws, 0, sizeof(*ws));
+ }
+}
+
static void
cn9k_sso_set_rsrc(void *arg)
{
@@ -33,6 +90,60 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void *
+cn9k_sso_init_hws_mem(void *arg, uint8_t port_id)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ void *data;
+
+ if (dev->dual_ws) {
+ dws = rte_zmalloc("cn9k_dual_ws",
+ sizeof(struct cn9k_sso_hws_dual) +
+ RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (dws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ dws = RTE_PTR_ADD(dws, sizeof(struct cnxk_sso_hws_cookie));
+ dws->base[0] = roc_sso_hws_base_get(
+ &dev->sso, CN9K_DUAL_WS_PAIR_ID(port_id, 0));
+ dws->base[1] = roc_sso_hws_base_get(
+ &dev->sso, CN9K_DUAL_WS_PAIR_ID(port_id, 1));
+ cn9k_init_hws_ops(&dws->ws_state[0], dws->base[0]);
+ cn9k_init_hws_ops(&dws->ws_state[1], dws->base[1]);
+ dws->hws_id = port_id;
+ dws->swtag_req = 0;
+ dws->vws = 0;
+
+ data = dws;
+ } else {
+ /* Allocate event port memory */
+ ws = rte_zmalloc("cn9k_ws",
+ sizeof(struct cn9k_sso_hws) +
+ RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (ws == NULL) {
+ plt_err("Failed to alloc memory for port=%d", port_id);
+ return NULL;
+ }
+
+ /* First cache line is reserved for cookie */
+ ws = RTE_PTR_ADD(ws, sizeof(struct cnxk_sso_hws_cookie));
+ ws->base = roc_sso_hws_base_get(&dev->sso, port_id);
+ cn9k_init_hws_ops((struct cn9k_sso_hws_state *)ws, ws->base);
+ ws->hws_id = port_id;
+ ws->swtag_req = 0;
+
+ data = ws;
+ }
+
+ return data;
+}
+
static void
cn9k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -67,12 +178,46 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ rc = cnxk_setup_event_ports(event_dev, cn9k_sso_init_hws_mem,
+ cn9k_sso_hws_setup);
+ if (rc < 0)
+ goto cnxk_rsrc_fini;
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
+ dev->nb_event_ports = 0;
return rc;
}
+static int
+cn9k_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ const struct rte_event_port_conf *port_conf)
+{
+
+ RTE_SET_USED(port_conf);
+ return cnxk_sso_port_setup(event_dev, port_id, cn9k_sso_hws_setup);
+}
+
+static void
+cn9k_sso_port_release(void *port)
+{
+ struct cnxk_sso_hws_cookie *gws_cookie = cnxk_sso_hws_get_cookie(port);
+ struct cnxk_sso_evdev *dev;
+
+ if (port == NULL)
+ return;
+
+ dev = cnxk_sso_pmd_priv(gws_cookie->event_dev);
+ if (!gws_cookie->configured)
+ goto free;
+
+ cn9k_sso_hws_release(dev, port);
+ memset(gws_cookie, 0, sizeof(*gws_cookie));
+free:
+ rte_free(gws_cookie);
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -80,6 +225,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.queue_setup = cnxk_sso_queue_setup,
.queue_release = cnxk_sso_queue_release,
.port_def_conf = cnxk_sso_port_def_conf,
+ .port_setup = cn9k_sso_port_setup,
+ .port_release = cn9k_sso_port_release,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index e93aaccd8..daf24d84a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -125,6 +125,42 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
return rc;
}
+int
+cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
+ cnxk_sso_init_hws_mem_t init_hws_fn,
+ cnxk_sso_hws_setup_t setup_hws_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ struct cnxk_sso_hws_cookie *ws_cookie;
+ void *ws;
+
+		/* Reuse the port's memory if it was already allocated. */
+ if (event_dev->data->ports[i] != NULL)
+ ws = event_dev->data->ports[i];
+ else
+ ws = init_hws_fn(dev, i);
+ if (ws == NULL)
+ goto hws_fini;
+ ws_cookie = cnxk_sso_hws_get_cookie(ws);
+ ws_cookie->event_dev = event_dev;
+ ws_cookie->configured = 1;
+ event_dev->data->ports[i] = ws;
+ cnxk_sso_port_setup((struct rte_eventdev *)(uintptr_t)event_dev,
+ i, setup_hws_fn);
+ }
+
+ return 0;
+hws_fini:
+ for (i = i - 1; i >= 0; i--) {
+		rte_free(cnxk_sso_hws_get_cookie(event_dev->data->ports[i]));
+		event_dev->data->ports[i] = NULL;
+ }
+ return -ENOMEM;
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
@@ -225,6 +261,35 @@ cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
port_conf->enqueue_depth = 1;
}
+int
+cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ cnxk_sso_hws_setup_t hws_setup_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP] = {0};
+ uint16_t q;
+
+ plt_sso_dbg("Port=%d", port_id);
+ if (event_dev->data->ports[port_id] == NULL) {
+ plt_err("Invalid port Id %d", port_id);
+ return -EINVAL;
+ }
+
+ for (q = 0; q < dev->nb_event_queues; q++) {
+ grps_base[q] = roc_sso_hwgrp_base_get(&dev->sso, q);
+ if (grps_base[q] == 0) {
+ plt_err("Failed to get grp[%d] base addr", q);
+ return -EINVAL;
+ }
+ }
+
+ hws_setup_fn(dev, event_dev->data->ports[port_id], grps_base);
+ plt_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
+ rte_mb();
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index b96a6a908..79eab1829 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,13 +17,23 @@
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
+#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
+#define CNXK_SSO_MAX_HWGRP (RTE_EVENT_MAX_QUEUES_PER_DEV + 1)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+#define CN10K_GW_MODE_NONE 0
+#define CN10K_GW_MODE_PREF 1
+#define CN10K_GW_MODE_PREF_WFE 2
+
+typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
+typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
+typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
+
struct cnxk_sso_qos {
uint16_t queue;
uint8_t xaq_prcnt;
@@ -53,6 +63,76 @@ struct cnxk_sso_evdev {
struct cnxk_sso_qos *qos_parse_data;
/* CN9K */
uint8_t dual_ws;
+ /* CN10K */
+ uint8_t gw_mode;
+} __rte_cache_aligned;
+
+/* CN10K HWS ops */
+#define CN10K_SSO_HWS_OPS \
+ uintptr_t swtag_desched_op; \
+ uintptr_t swtag_flush_op; \
+ uintptr_t swtag_untag_op; \
+ uintptr_t swtag_norm_op; \
+ uintptr_t updt_wqe_op; \
+ uintptr_t tag_wqe_op; \
+ uintptr_t getwrk_op
+
+struct cn10k_sso_hws {
+ /* Get Work Fastpath data */
+ CN10K_SSO_HWS_OPS;
+ uint32_t gw_wdata;
+ uint8_t swtag_req;
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base;
+ uintptr_t lmt_base;
+} __rte_cache_aligned;
+
+/* CN9K HWS ops */
+#define CN9K_SSO_HWS_OPS \
+ uintptr_t swtag_desched_op; \
+ uintptr_t swtag_flush_op; \
+ uintptr_t swtag_norm_op; \
+ uintptr_t getwrk_op; \
+ uintptr_t tag_op; \
+ uintptr_t wqp_op
+
+/* Event port a.k.a GWS */
+struct cn9k_sso_hws {
+ /* Get Work Fastpath data */
+ CN9K_SSO_HWS_OPS;
+ uint8_t swtag_req;
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base;
+} __rte_cache_aligned;
+
+struct cn9k_sso_hws_state {
+ CN9K_SSO_HWS_OPS;
+};
+
+struct cn9k_sso_hws_dual {
+ /* Get Work Fastpath data */
+ struct cn9k_sso_hws_state ws_state[2]; /* Ping and Pong */
+ uint8_t swtag_req;
+ uint8_t vws; /* Ping pong bit */
+ uint8_t hws_id;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[CNXK_SSO_MAX_HWGRP];
+ uint64_t base[2];
+} __rte_cache_aligned;
+
+struct cnxk_sso_hws_cookie {
+ const struct rte_eventdev *event_dev;
+ bool configured;
} __rte_cache_aligned;
static inline int
@@ -70,6 +150,12 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
return event_dev->data->dev_private;
}
+static inline struct cnxk_sso_hws_cookie *
+cnxk_sso_hws_get_cookie(void *ws)
+{
+ return RTE_PTR_SUB(ws, sizeof(struct cnxk_sso_hws_cookie));
+}
+
/* Configuration functions */
int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
@@ -80,6 +166,9 @@ int cnxk_sso_remove(struct rte_pci_device *pci_dev);
void cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
struct rte_event_dev_info *dev_info);
int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
+int cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
+ cnxk_sso_init_hws_mem_t init_hws_mem,
+ cnxk_sso_hws_setup_t hws_setup);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
@@ -87,5 +176,7 @@ int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
+int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ cnxk_sso_hws_setup_t hws_setup_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v5 13/35] event/cnxk: add event port link and unlink
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (11 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 12/35] event/cnxk: add port config functions pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 14/35] event/cnxk: add devargs to configure getwork mode pbhagavatula
` (22 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add platform-specific event port/queue link and unlink APIs.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 64 +++++++++++++++++-
drivers/event/cnxk/cn9k_eventdev.c | 101 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 36 ++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 12 +++-
4 files changed, 210 insertions(+), 3 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index fcdc1cf84..db8fe8169 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -63,6 +63,24 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
return ws;
}
+static int
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = port;
+
+ return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+}
+
+static int
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = port;
+
+ return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+}
+
static void
cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
{
@@ -83,9 +101,12 @@ cn10k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
static void
cn10k_sso_hws_release(void *arg, void *hws)
{
+ struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
+ int i;
- RTE_SET_USED(arg);
+ for (i = 0; i < dev->nb_event_queues; i++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, (uint16_t *)&i, 1);
memset(ws, 0, sizeof(*ws));
}
@@ -149,6 +170,12 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ /* Restore any prior port-queue mapping. */
+ cnxk_sso_restore_links(event_dev, cn10k_sso_hws_link);
+
+ dev->configured = 1;
+ rte_mb();
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
@@ -184,6 +211,38 @@ cn10k_sso_port_release(void *port)
rte_free(gws_cookie);
}
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_links];
+ uint16_t link;
+
+ RTE_SET_USED(priorities);
+ for (link = 0; link < nb_links; link++)
+ hwgrp_ids[link] = queues[link];
+ nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+
+ return (int)nb_links;
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_unlinks];
+ uint16_t unlink;
+
+ for (unlink = 0; unlink < nb_unlinks; unlink++)
+ hwgrp_ids[unlink] = queues[unlink];
+ nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+
+ return (int)nb_unlinks;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -193,6 +252,9 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_def_conf = cnxk_sso_port_def_conf,
.port_setup = cn10k_sso_port_setup,
.port_release = cn10k_sso_port_release,
+ .port_link = cn10k_sso_port_link,
+ .port_unlink = cn10k_sso_port_unlink,
+ .timeout_ticks = cnxk_sso_timeout_ticks,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index b8c74633b..a0d76335f 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -18,6 +18,54 @@ cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t base)
ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
}
+static int
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ int rc;
+
+ if (dev->dual_ws) {
+ dws = port;
+ rc = roc_sso_hws_link(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link);
+ rc |= roc_sso_hws_link(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ map, nb_link);
+ } else {
+ ws = port;
+ rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ }
+
+ return rc;
+}
+
+static int
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ int rc;
+
+ if (dev->dual_ws) {
+ dws = port;
+ rc = roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ map, nb_link);
+ rc |= roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ map, nb_link);
+ } else {
+ ws = port;
+ rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ }
+
+ return rc;
+}
+
static void
cn9k_sso_hws_setup(void *arg, void *hws, uintptr_t *grps_base)
{
@@ -54,12 +102,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
struct cn9k_sso_hws *ws;
+ int i;
if (dev->dual_ws) {
dws = hws;
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ (uint16_t *)&i, 1);
+ roc_sso_hws_unlink(&dev->sso,
+ CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ (uint16_t *)&i, 1);
+ }
memset(dws, 0, sizeof(*dws));
} else {
ws = hws;
+ for (i = 0; i < dev->nb_event_queues; i++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id,
+ (uint16_t *)&i, 1);
memset(ws, 0, sizeof(*ws));
}
}
@@ -183,6 +243,12 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
if (rc < 0)
goto cnxk_rsrc_fini;
+ /* Restore any prior port-queue mapping. */
+ cnxk_sso_restore_links(event_dev, cn9k_sso_hws_link);
+
+ dev->configured = 1;
+ rte_mb();
+
return 0;
cnxk_rsrc_fini:
roc_sso_rsrc_fini(&dev->sso);
@@ -218,6 +284,38 @@ cn9k_sso_port_release(void *port)
rte_free(gws_cookie);
}
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_links];
+ uint16_t link;
+
+ RTE_SET_USED(priorities);
+ for (link = 0; link < nb_links; link++)
+ hwgrp_ids[link] = queues[link];
+ nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+
+ return (int)nb_links;
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t hwgrp_ids[nb_unlinks];
+ uint16_t unlink;
+
+ for (unlink = 0; unlink < nb_unlinks; unlink++)
+ hwgrp_ids[unlink] = queues[unlink];
+ nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+
+ return (int)nb_unlinks;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -227,6 +325,9 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_def_conf = cnxk_sso_port_def_conf,
.port_setup = cn9k_sso_port_setup,
.port_release = cn9k_sso_port_release,
+ .port_link = cn9k_sso_port_link,
+ .port_unlink = cn9k_sso_port_unlink,
+ .timeout_ticks = cnxk_sso_timeout_ticks,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index daf24d84a..e68079997 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -161,6 +161,32 @@ cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
return -ENOMEM;
}
+void
+cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
+ cnxk_sso_link_t link_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
+ int i, j;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ uint16_t nb_hwgrp = 0;
+
+ links_map = event_dev->data->links_map;
+ /* Point links_map to this port specific area */
+ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ if (links_map[j] == 0xdead)
+ continue;
+ hwgrp[nb_hwgrp] = j;
+ nb_hwgrp++;
+ }
+
+ link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
+ }
+}
+
int
cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
{
@@ -290,6 +316,16 @@ cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
return 0;
}
+int
+cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
+ uint64_t *tmo_ticks)
+{
+ RTE_SET_USED(event_dev);
+ *tmo_ticks = NSEC2TICK(ns, rte_get_timer_hz());
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 79eab1829..97a944d88 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -17,8 +17,9 @@
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
-#define NSEC2USEC(__ns) ((__ns) / 1E3)
-#define USEC2NSEC(__us) ((__us)*1E3)
+#define NSEC2USEC(__ns) ((__ns) / 1E3)
+#define USEC2NSEC(__us) ((__us)*1E3)
+#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
#define CNXK_SSO_MAX_HWGRP (RTE_EVENT_MAX_QUEUES_PER_DEV + 1)
#define CNXK_SSO_FC_NAME "cnxk_evdev_xaq_fc"
@@ -33,6 +34,8 @@
typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
+ uint16_t nb_link);
struct cnxk_sso_qos {
uint16_t queue;
@@ -48,6 +51,7 @@ struct cnxk_sso_evdev {
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
+ uint8_t configured;
uint32_t deq_tmo_ns;
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
@@ -169,6 +173,8 @@ int cnxk_sso_dev_validate(const struct rte_eventdev *event_dev);
int cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
cnxk_sso_init_hws_mem_t init_hws_mem,
cnxk_sso_hws_setup_t hws_setup);
+void cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
+ cnxk_sso_link_t link_fn);
void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
struct rte_event_queue_conf *queue_conf);
int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
@@ -178,5 +184,7 @@ void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
struct rte_event_port_conf *port_conf);
int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
cnxk_sso_hws_setup_t hws_setup_fn);
+int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
+ uint64_t *tmo_ticks);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v5 14/35] event/cnxk: add devargs to configure getwork mode
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (12 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 13/35] event/cnxk: add event port link and unlink pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 15/35] event/cnxk: add SSO HW device operations pbhagavatula
` (21 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add devargs to configure the platform-specific getwork mode.
CN9K getwork mode defaults to dual workslot mode; add an option
to force single workslot mode.
Example:
--dev "0002:0e:00.0,single_ws=1"
CN10K supports multiple getwork prefetch modes; by default the
prefetch mode is set to none.
Add an option to select the getwork prefetch mode.
Example:
--dev "0002:1e:00.0,gw_mode=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 18 ++++++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 3 ++-
drivers/event/cnxk/cn9k_eventdev.c | 3 ++-
drivers/event/cnxk/cnxk_eventdev.c | 6 ++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 ++++--
5 files changed, 32 insertions(+), 4 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 0583e5fdd..f48452982 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -55,6 +55,24 @@ Runtime Config Options
-a 0002:0e:00.0,xae_cnt=16384
+- ``CN9K Getwork mode``
+
+ CN9K ``single_ws`` devargs parameter is introduced to select single workslot
+ mode in SSO and disable the default dual workslot mode.
+
+ For example::
+
+ -a 0002:0e:00.0,single_ws=1
+
+- ``CN10K Getwork mode``
+
+ CN10K supports multiple getwork prefetch modes; by default the prefetch
+ mode is set to none.
+
+ For example::
+
+ -a 0002:0e:00.0,gw_mode=1
+
- ``Event Group QoS support``
SSO GGRPs i.e. queue uses DRAM & SRAM buffers to hold in-flight
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index db8fe8169..6522351ca 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -327,4 +327,5 @@ RTE_PMD_REGISTER_PCI(event_cn10k, cn10k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
- CNXK_SSO_GGRP_QOS "=<string>");
+ CNXK_SSO_GGRP_QOS "=<string>"
+ CN10K_SSO_GW_MODE "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index a0d76335f..00c5565e7 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -395,4 +395,5 @@ RTE_PMD_REGISTER_PCI(event_cn9k, cn9k_pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
- CNXK_SSO_GGRP_QOS "=<string>");
+ CNXK_SSO_GGRP_QOS "=<string>"
+ CN9K_SSO_SINGLE_WS "=1");
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index e68079997..2a387ff95 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -406,6 +406,7 @@ static void
cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
{
struct rte_kvargs *kvlist;
+ uint8_t single_ws = 0;
if (devargs == NULL)
return;
@@ -417,6 +418,11 @@ cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs)
&dev->xae_cnt);
rte_kvargs_process(kvlist, CNXK_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
dev);
+ rte_kvargs_process(kvlist, CN9K_SSO_SINGLE_WS, &parse_kvargs_value,
+ &single_ws);
+ rte_kvargs_process(kvlist, CN10K_SSO_GW_MODE, &parse_kvargs_value,
+ &dev->gw_mode);
+ dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 97a944d88..437cdf3db 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,8 +14,10 @@
#include "roc_api.h"
-#define CNXK_SSO_XAE_CNT "xae_cnt"
-#define CNXK_SSO_GGRP_QOS "qos"
+#define CNXK_SSO_XAE_CNT "xae_cnt"
+#define CNXK_SSO_GGRP_QOS "qos"
+#define CN9K_SSO_SINGLE_WS "single_ws"
+#define CN10K_SSO_GW_MODE "gw_mode"
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
--
2.17.1
* [dpdk-dev] [PATCH v5 15/35] event/cnxk: add SSO HW device operations
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (13 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 14/35] event/cnxk: add devargs to configure getwork mode pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 16/35] event/cnxk: add SSO GWS fastpath enqueue functions pbhagavatula
` (20 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO HW device operations used for enqueue/dequeue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_worker.c | 7 +
drivers/event/cnxk/cn10k_worker.h | 151 +++++++++++++++++
drivers/event/cnxk/cn9k_worker.c | 7 +
drivers/event/cnxk/cn9k_worker.h | 249 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 10 ++
drivers/event/cnxk/cnxk_worker.h | 101 ++++++++++++
drivers/event/cnxk/meson.build | 4 +-
7 files changed, 528 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cn10k_worker.c
create mode 100644 drivers/event/cnxk/cn10k_worker.h
create mode 100644 drivers/event/cnxk/cn9k_worker.c
create mode 100644 drivers/event/cnxk/cn9k_worker.h
create mode 100644 drivers/event/cnxk/cnxk_worker.h
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
new file mode 100644
index 000000000..63b587301
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cn10k_worker.h"
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
new file mode 100644
index 000000000..04517055d
--- /dev/null
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CN10K_WORKER_H__
+#define __CN10K_WORKER_H__
+
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+/* SSO Operations */
+
+static __rte_always_inline uint8_t
+cn10k_sso_hws_new_event(struct cn10k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, ws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_fwd_swtag(struct cn10k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint8_t cur_tt = CNXK_TT_FROM_TAG(plt_read64(ws->tag_wqe_op));
+
+ /* CNXK model
+ * cur_tt/new_tt SSO_TT_ORDERED SSO_TT_ATOMIC SSO_TT_UNTAGGED
+ *
+ * SSO_TT_ORDERED norm norm untag
+ * SSO_TT_ATOMIC norm norm untag
+ * SSO_TT_UNTAGGED norm norm NOOP
+ */
+
+ if (new_tt == SSO_TT_UNTAGGED) {
+ if (cur_tt != SSO_TT_UNTAGGED)
+ cnxk_sso_hws_swtag_untag(ws->swtag_untag_op);
+ } else {
+ cnxk_sso_hws_swtag_norm(tag, new_tt, ws->swtag_norm_op);
+ }
+ ws->swtag_req = 1;
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_fwd_group(struct cn10k_sso_hws *ws, const struct rte_event *ev,
+ const uint16_t grp)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+
+ plt_write64(ev->u64, ws->updt_wqe_op);
+ cnxk_sso_hws_swtag_desched(tag, new_tt, grp, ws->swtag_desched_op);
+}
+
+static __rte_always_inline void
+cn10k_sso_hws_forward_event(struct cn10k_sso_hws *ws,
+ const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed, Use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(ws->tag_wqe_op)) == grp)
+ cn10k_sso_hws_fwd_swtag(ws, ev);
+ else
+ /*
+ * Group has been changed for group based work pipelining,
+ * Use deschedule/add_work operation to transfer the event to
+ * new group/core
+ */
+ cn10k_sso_hws_fwd_group(ws, ev, grp);
+}
+
+static __rte_always_inline uint16_t
+cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+ gw.get_work = ws->gw_wdata;
+#if defined(RTE_ARCH_ARM64) && !defined(__clang__)
+ asm volatile(
+ PLT_CPU_FEATURE_PREAMBLE
+ "caspl %[wdata], %H[wdata], %[wdata], %H[wdata], [%[gw_loc]]\n"
+ : [wdata] "+r"(gw.get_work)
+ : [gw_loc] "r"(ws->getwrk_op)
+ : "memory");
+#else
+ plt_write64(gw.u64[0], ws->getwrk_op);
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+#endif
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+/* Used in cleaning up workslot. */
+static __rte_always_inline uint16_t
+cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldp %[tag], %[wqp], [%[tag_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldp %[tag], %[wqp], [%[tag_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_wqe_op)
+ : "memory");
+#else
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+#endif
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
new file mode 100644
index 000000000..836914163
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+
+#include "cn9k_worker.h"
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
new file mode 100644
index 000000000..85be742c1
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -0,0 +1,249 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CN9K_WORKER_H__
+#define __CN9K_WORKER_H__
+
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+/* SSO Operations */
+
+static __rte_always_inline uint8_t
+cn9k_sso_hws_new_event(struct cn9k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, ws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_fwd_swtag(struct cn9k_sso_hws_state *vws,
+ const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint8_t cur_tt = CNXK_TT_FROM_TAG(plt_read64(vws->tag_op));
+
+ /* CNXK model
+ * cur_tt/new_tt SSO_TT_ORDERED SSO_TT_ATOMIC SSO_TT_UNTAGGED
+ *
+ * SSO_TT_ORDERED norm norm untag
+ * SSO_TT_ATOMIC norm norm untag
+ * SSO_TT_UNTAGGED norm norm NOOP
+ */
+
+ if (new_tt == SSO_TT_UNTAGGED) {
+ if (cur_tt != SSO_TT_UNTAGGED)
+ cnxk_sso_hws_swtag_untag(
+ CN9K_SSOW_GET_BASE_ADDR(vws->getwrk_op) +
+ SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ } else {
+ cnxk_sso_hws_swtag_norm(tag, new_tt, vws->swtag_norm_op);
+ }
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_fwd_group(struct cn9k_sso_hws_state *ws,
+ const struct rte_event *ev, const uint16_t grp)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+
+ plt_write64(ev->u64, CN9K_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
+ SSOW_LF_GWS_OP_UPD_WQP_GRP1);
+ cnxk_sso_hws_swtag_desched(tag, new_tt, grp, ws->swtag_desched_op);
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_forward_event(struct cn9k_sso_hws *ws, const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed, Use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(ws->tag_op)) == grp) {
+ cn9k_sso_hws_fwd_swtag((struct cn9k_sso_hws_state *)ws, ev);
+ ws->swtag_req = 1;
+ } else {
+ /*
+ * Group has been changed for group based work pipelining,
+ * Use deschedule/add_work operation to transfer the event to
+ * new group/core
+ */
+ cn9k_sso_hws_fwd_group((struct cn9k_sso_hws_state *)ws, ev,
+ grp);
+ }
+}
+
+/* Dual ws ops. */
+
+static __rte_always_inline uint8_t
+cn9k_sso_hws_dual_new_event(struct cn9k_sso_hws_dual *dws,
+ const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQ_REL);
+ if (dws->xaq_lmt <= *dws->fc_mem)
+ return 0;
+
+ cnxk_sso_hws_add_work(event_ptr, tag, new_tt, dws->grps_base[grp]);
+ return 1;
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws,
+ struct cn9k_sso_hws_state *vws,
+ const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed, Use SWTAG to forward the event */
+ if (CNXK_GRP_FROM_TAG(plt_read64(vws->tag_op)) == grp) {
+ cn9k_sso_hws_fwd_swtag(vws, ev);
+ dws->swtag_req = 1;
+ } else {
+ /*
+ * Group has been changed for group based work pipelining,
+ * Use deschedule/add_work operation to transfer the event to
+ * new group/core
+ */
+ cn9k_sso_hws_fwd_group(vws, ev, grp);
+ }
+}
+
+static __rte_always_inline uint16_t
+cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws,
+ struct cn9k_sso_hws_state *ws_pair,
+ struct rte_event *ev)
+{
+ const uint64_t set_gw = BIT_ULL(16) | 1;
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ "rty%=: \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: str %[gw], [%[pong]] \n"
+ " dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op),
+ [gw] "r"(set_gw), [pong] "r"(ws_pair->getwrk_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+ gw.u64[1] = plt_read64(ws->wqp_op);
+ plt_write64(set_gw, ws_pair->getwrk_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+static __rte_always_inline uint16_t
+cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+ plt_write64(BIT_ULL(16) | /* wait for work. */
+ 1, /* Use Mask set 0. */
+ ws->getwrk_op);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+
+ gw.u64[1] = plt_read64(ws->wqp_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+/* Used in cleaning up workslot. */
+static __rte_always_inline uint16_t
+cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
+{
+ union {
+ __uint128_t get_work;
+ uint64_t u64[2];
+ } gw;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1])
+ : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op));
+#else
+ gw.u64[0] = plt_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & gw.u64[0])
+ gw.u64[0] = plt_read64(ws->tag_op);
+
+ gw.u64[1] = plt_read64(ws->wqp_op);
+#endif
+
+ gw.u64[0] = (gw.u64[0] & (0x3ull << 32)) << 6 |
+ (gw.u64[0] & (0x3FFull << 36)) << 4 |
+ (gw.u64[0] & 0xffffffff);
+
+ ev->event = gw.u64[0];
+ ev->u64 = gw.u64[1];
+
+ return !!gw.u64[1];
+}
+
+#endif
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 437cdf3db..0a3ab71e4 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -29,6 +29,16 @@
#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
#define CNXK_SSO_XAQ_SLACK (8)
+#define CNXK_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY)
+#define CNXK_TT_FROM_EVENT(x) (((x) >> 38) & SSO_TT_EMPTY)
+#define CNXK_EVENT_TYPE_FROM_TAG(x) (((x) >> 28) & 0xf)
+#define CNXK_SUB_EVENT_FROM_TAG(x) (((x) >> 20) & 0xff)
+#define CNXK_CLR_SUB_EVENT(x) (~(0xffu << 20) & x)
+#define CNXK_GRP_FROM_TAG(x) (((x) >> 36) & 0x3ff)
+#define CNXK_SWTAG_PEND(x) (BIT_ULL(62) & x)
+
+#define CN9K_SSOW_GET_BASE_ADDR(_GW) ((_GW)-SSOW_LF_GWS_OP_GET_WORK0)
+
#define CN10K_GW_MODE_NONE 0
#define CN10K_GW_MODE_PREF 1
#define CN10K_GW_MODE_PREF_WFE 2
diff --git a/drivers/event/cnxk/cnxk_worker.h b/drivers/event/cnxk/cnxk_worker.h
new file mode 100644
index 000000000..4eb46ae16
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_worker.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_WORKER_H__
+#define __CNXK_WORKER_H__
+
+#include "cnxk_eventdev.h"
+
+/* SSO Operations */
+
+static __rte_always_inline void
+cnxk_sso_hws_add_work(const uint64_t event_ptr, const uint32_t tag,
+ const uint8_t new_tt, const uintptr_t grp_base)
+{
+ uint64_t add_work0;
+
+ add_work0 = tag | ((uint64_t)(new_tt) << 32);
+ roc_store_pair(add_work0, event_ptr, grp_base);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_desched(uint32_t tag, uint8_t new_tt, uint16_t grp,
+ uintptr_t swtag_desched_op)
+{
+ uint64_t val;
+
+ val = tag | ((uint64_t)(new_tt & 0x3) << 32) | ((uint64_t)grp << 34);
+ __atomic_store_n((uint64_t *)swtag_desched_op, val, __ATOMIC_RELEASE);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_norm(uint32_t tag, uint8_t new_tt, uintptr_t swtag_norm_op)
+{
+ uint64_t val;
+
+ val = tag | ((uint64_t)(new_tt & 0x3) << 32);
+ plt_write64(val, swtag_norm_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_untag(uintptr_t swtag_untag_op)
+{
+ plt_write64(0, swtag_untag_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_flush(uint64_t tag_op, uint64_t flush_op)
+{
+ if (CNXK_TT_FROM_TAG(plt_read64(tag_op)) == SSO_TT_EMPTY)
+ return;
+ plt_write64(0, flush_op);
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_swtag_wait(uintptr_t tag_op)
+{
+#ifdef RTE_ARCH_ARM64
+ uint64_t swtp;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbz %[swtb], 62, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbnz %[swtb], 62, rty%= \n"
+ "done%=: \n"
+ : [swtb] "=&r"(swtp)
+ : [swtp_loc] "r"(tag_op));
+#else
+ /* Wait for the SWTAG/SWTAG_FULL operation */
+ while (plt_read64(tag_op) & BIT_ULL(62))
+ ;
+#endif
+}
+
+static __rte_always_inline void
+cnxk_sso_hws_head_wait(uintptr_t tag_op)
+{
+#ifdef RTE_ARCH_ARM64
+ uint64_t swtp;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbz %[swtb], 35, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " tbnz %[swtb], 35, rty%= \n"
+ "done%=: \n"
+ : [swtb] "=&r"(swtp)
+ : [swtp_loc] "r"(tag_op));
+#else
+ /* Wait for the SWTAG/SWTAG_FULL operation */
+ while (plt_read64(tag_op) & BIT_ULL(35))
+ ;
+#endif
+}
+
+#endif
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 22eb28345..57b3f66ea 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -8,7 +8,9 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
subdir_done()
endif
-sources = files('cn10k_eventdev.c',
+sources = files('cn10k_worker.c',
+ 'cn10k_eventdev.c',
+ 'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
)
--
2.17.1
* [dpdk-dev] [PATCH v5 16/35] event/cnxk: add SSO GWS fastpath enqueue functions
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (14 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 15/35] event/cnxk: add SSO HW device operations pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 17/35] event/cnxk: add SSO GWS dequeue fastpath functions pbhagavatula
` (19 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO GWS fastpath event device enqueue functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 16 +++-
drivers/event/cnxk/cn10k_worker.c | 54 ++++++++++++++
drivers/event/cnxk/cn10k_worker.h | 12 +++
drivers/event/cnxk/cn9k_eventdev.c | 25 ++++++-
drivers/event/cnxk/cn9k_worker.c | 112 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_worker.h | 24 ++++++
6 files changed, 241 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 6522351ca..a1b44744b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -2,7 +2,9 @@
* Copyright(C) 2021 Marvell.
*/
+#include "cn10k_worker.h"
#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
static void
cn10k_init_hws_ops(struct cn10k_sso_hws *ws, uintptr_t base)
@@ -130,6 +132,16 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void
+cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
+{
+ PLT_SET_USED(event_dev);
+ event_dev->enqueue = cn10k_sso_hws_enq;
+ event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
+ event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
+ event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
+}
+
static void
cn10k_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -276,8 +288,10 @@ cn10k_sso_init(struct rte_eventdev *event_dev)
event_dev->dev_ops = &cn10k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ cn10k_sso_fp_fns_set(event_dev);
return 0;
+ }
rc = cnxk_sso_init(event_dev);
if (rc < 0)
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 63b587301..9b5cb7be6 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -5,3 +5,57 @@
#include "cn10k_worker.h"
#include "cnxk_eventdev.h"
#include "cnxk_worker.h"
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq(void *port, const struct rte_event *ev)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn10k_sso_hws_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn10k_sso_hws_forward_event(ws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(ws->tag_wqe_op, ws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn10k_sso_hws_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn10k_sso_hws *ws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn10k_sso_hws_new_event(ws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ RTE_SET_USED(nb_events);
+ cn10k_sso_hws_forward_event(ws, ev);
+
+ return 1;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 04517055d..48158b320 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -148,4 +148,16 @@ cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev)
return !!gw.u64[1];
}
+/* CN10K Fastpath functions. */
+uint16_t __rte_hot cn10k_sso_hws_enq(void *port, const struct rte_event *ev);
+uint16_t __rte_hot cn10k_sso_hws_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
#endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 00c5565e7..61a4d0823 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -2,7 +2,9 @@
* Copyright(C) 2021 Marvell.
*/
+#include "cn9k_worker.h"
#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
#define CN9K_DUAL_WS_NB_WS 2
#define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
@@ -150,6 +152,25 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
return roc_sso_rsrc_init(&dev->sso, hws, hwgrp);
}
+static void
+cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ event_dev->enqueue = cn9k_sso_hws_enq;
+ event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
+ event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
+ event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+
+ if (dev->dual_ws) {
+ event_dev->enqueue = cn9k_sso_hws_dual_enq;
+ event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
+ event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
+ event_dev->enqueue_forward_burst =
+ cn9k_sso_hws_dual_enq_fwd_burst;
+ }
+}
+
static void *
cn9k_sso_init_hws_mem(void *arg, uint8_t port_id)
{
@@ -349,8 +370,10 @@ cn9k_sso_init(struct rte_eventdev *event_dev)
event_dev->dev_ops = &cn9k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ cn9k_sso_fp_fns_set(event_dev);
return 0;
+ }
rc = cnxk_sso_init(event_dev);
if (rc < 0)
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index 836914163..538bc4b0b 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -5,3 +5,115 @@
#include "roc_api.h"
#include "cn9k_worker.h"
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq(void *port, const struct rte_event *ev)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn9k_sso_hws_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn9k_sso_hws_forward_event(ws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn9k_sso_hws_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws *ws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn9k_sso_hws_new_event(ws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ RTE_SET_USED(nb_events);
+ cn9k_sso_hws_forward_event(ws, ev);
+
+ return 1;
+}
+
+/* Dual ws ops. */
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq(void *port, const struct rte_event *ev)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ struct cn9k_sso_hws_state *vws;
+
+ vws = &dws->ws_state[!dws->vws];
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn9k_sso_hws_dual_new_event(dws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn9k_sso_hws_dual_forward_event(dws, vws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ cnxk_sso_hws_swtag_flush(vws->tag_op, vws->swtag_flush_op);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return cn9k_sso_hws_dual_enq(port, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t i, rc = 1;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = cn9k_sso_hws_dual_new_event(dws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+
+ RTE_SET_USED(nb_events);
+ cn9k_sso_hws_dual_forward_event(dws, &dws->ws_state[!dws->vws], ev);
+
+ return 1;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 85be742c1..aa321d0e4 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -246,4 +246,28 @@ cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev)
return !!gw.u64[1];
}
+/* CN9K Fastpath functions. */
+uint16_t __rte_hot cn9k_sso_hws_enq(void *port, const struct rte_event *ev);
+uint16_t __rte_hot cn9k_sso_hws_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
+uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
+ const struct rte_event *ev);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_new_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
+ const struct rte_event ev[],
+ uint16_t nb_events);
+
#endif
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
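The enqueue entry points above all funnel through one dispatch on `ev->op`: only `RTE_EVENT_OP_NEW` can fail (hardware admission control), while FORWARD and RELEASE always complete. A standalone mock of that dispatch (the `mock_*` names and the `new_ok` admission flag are invented for this sketch, not driver API):

```c
#include <assert.h>
#include <stdint.h>

enum mock_op { MOCK_OP_NEW, MOCK_OP_FORWARD, MOCK_OP_RELEASE, MOCK_OP_INVALID };

struct mock_event {
	enum mock_op op;
};

/* Mirror of the single-event enqueue switch: NEW may be refused,
 * FORWARD/RELEASE always "succeed", unknown ops enqueue nothing. */
static uint16_t
mock_hws_enq(const struct mock_event *ev, int new_ok)
{
	switch (ev->op) {
	case MOCK_OP_NEW:
		/* analogue of cn*_sso_hws_new_event(): 0 on back-pressure */
		return new_ok ? 1 : 0;
	case MOCK_OP_FORWARD:	/* swtag to the destination queue */
	case MOCK_OP_RELEASE:	/* flush the currently held tag */
		return 1;
	default:
		return 0;
	}
}
```

This is also why the `*_enq_burst` variants can simply ignore `nb_events` and delegate to the single-event path: the SSO work slot holds at most one in-flight event per call.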
* [dpdk-dev] [PATCH v5 17/35] event/cnxk: add SSO GWS dequeue fastpath functions
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (15 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 16/35] event/cnxk: add SSO GWS fastpath enqueue functions pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 18/35] event/cnxk: add device start function pbhagavatula
` (18 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO GWS event dequeue fastpath functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 10 ++-
drivers/event/cnxk/cn10k_worker.c | 54 +++++++++++++
drivers/event/cnxk/cn10k_worker.h | 12 +++
drivers/event/cnxk/cn9k_eventdev.c | 15 ++++
drivers/event/cnxk/cn9k_worker.c | 117 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_worker.h | 24 ++++++
6 files changed, 231 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index a1b44744b..37a7c8a8e 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -135,11 +135,19 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp)
static void
cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
{
- PLT_SET_USED(event_dev);
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
event_dev->enqueue = cn10k_sso_hws_enq;
event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
+
+ event_dev->dequeue = cn10k_sso_hws_deq;
+ event_dev->dequeue_burst = cn10k_sso_hws_deq_burst;
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = cn10k_sso_hws_tmo_deq;
+ event_dev->dequeue_burst = cn10k_sso_hws_tmo_deq_burst;
+ }
}
static void
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 9b5cb7be6..e2aa534c6 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -59,3 +59,57 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+uint16_t __rte_hot
+cn10k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ RTE_SET_USED(timeout_ticks);
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
+ return 1;
+ }
+
+ return cn10k_sso_hws_get_work(ws, ev);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
+ uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn10k_sso_hws_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn10k_sso_hws *ws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_wqe_op);
+ return ret;
+ }
+
+ ret = cn10k_sso_hws_get_work(ws, ev);
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
+ ret = cn10k_sso_hws_get_work(ws, ev);
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn10k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn10k_sso_hws_tmo_deq(port, ev, timeout_ticks);
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 48158b320..2f093a8dd 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -160,4 +160,16 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+uint16_t __rte_hot cn10k_sso_hws_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn10k_sso_hws_tmo_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
#endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 61a4d0823..6ba3d1466 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -162,12 +162,27 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+ event_dev->dequeue = cn9k_sso_hws_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_deq_burst;
+ if (dev->deq_tmo_ns) {
+ event_dev->dequeue = cn9k_sso_hws_tmo_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_tmo_deq_burst;
+ }
+
if (dev->dual_ws) {
event_dev->enqueue = cn9k_sso_hws_dual_enq;
event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
event_dev->enqueue_forward_burst =
cn9k_sso_hws_dual_enq_fwd_burst;
+
+ event_dev->dequeue = cn9k_sso_hws_dual_deq;
+ event_dev->dequeue_burst = cn9k_sso_hws_dual_deq_burst;
+ if (dev->deq_tmo_ns) {
+ event_dev->dequeue = cn9k_sso_hws_dual_tmo_deq;
+ event_dev->dequeue_burst =
+ cn9k_sso_hws_dual_tmo_deq_burst;
+ }
}
}
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index 538bc4b0b..9ceacc98d 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -60,6 +60,60 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+uint16_t __rte_hot
+cn9k_sso_hws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ RTE_SET_USED(timeout_ticks);
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_op);
+ return 1;
+ }
+
+ return cn9k_sso_hws_get_work(ws, ev);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
+ uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_tmo_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws *ws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(ws->tag_op);
+ return ret;
+ }
+
+ ret = cn9k_sso_hws_get_work(ws, ev);
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
+ ret = cn9k_sso_hws_get_work(ws, ev);
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_tmo_deq(port, ev, timeout_ticks);
+}
+
/* Dual ws ops. */
uint16_t __rte_hot
@@ -117,3 +171,66 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t gw;
+
+ RTE_SET_USED(timeout_ticks);
+ if (dws->swtag_req) {
+ dws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(dws->ws_state[!dws->vws].tag_op);
+ return 1;
+ }
+
+ gw = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ return gw;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_dual_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (dws->swtag_req) {
+ dws->swtag_req = 0;
+ cnxk_sso_hws_swtag_wait(dws->ws_state[!dws->vws].tag_op);
+ return ret;
+ }
+
+ ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) {
+ ret = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],
+ &dws->ws_state[!dws->vws], ev);
+ dws->vws = !dws->vws;
+ }
+
+ return ret;
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return cn9k_sso_hws_dual_tmo_deq(port, ev, timeout_ticks);
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index aa321d0e4..38fca08fb 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -270,4 +270,28 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_tmo_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
+uint16_t __rte_hot cn9k_sso_hws_dual_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t __rte_hot cn9k_sso_hws_dual_tmo_deq_burst(void *port,
+ struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
#endif
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
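The `*_tmo_deq` variants above share one retry pattern: attempt `get_work` once, then retry up to `timeout_ticks - 1` more times, stopping early as soon as an event arrives. A standalone mock of that loop (here `get_work` is simulated as "succeeds once the call count reaches `succeed_at`"; both names are invented for the sketch):

```c
#include <assert.h>
#include <stdint.h>

/* Retry loop shape of cn*_sso_hws_tmo_deq(): the total number of
 * get_work attempts is bounded by timeout_ticks, with early exit on
 * success. `*calls` reports how many attempts were made. */
static uint16_t
mock_tmo_deq(uint64_t timeout_ticks, uint64_t succeed_at, uint64_t *calls)
{
	uint64_t iter;
	uint16_t ret;

	*calls = 1;
	ret = (*calls >= succeed_at);	/* first attempt */
	for (iter = 1; iter < timeout_ticks && ret == 0; iter++) {
		(*calls)++;		/* retry until timeout */
		ret = (*calls >= succeed_at);
	}
	return ret;
}
```

Note that `timeout_ticks` is treated as an iteration budget, not wall-clock time; the dual-workslot version additionally flips `dws->vws` on every attempt so the two GWS halves are polled alternately.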
* [dpdk-dev] [PATCH v5 18/35] event/cnxk: add device start function
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (16 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 17/35] event/cnxk: add SSO GWS dequeue fastpath functions pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 19/35] event/cnxk: add device stop and close functions pbhagavatula
` (17 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add the eventdev start function, along with a few cleanup APIs to keep
device state sane.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 127 ++++++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 113 +++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.c | 64 ++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 7 ++
4 files changed, 311 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 37a7c8a8e..8d6b1e48a 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -112,6 +112,117 @@ cn10k_sso_hws_release(void *arg, void *hws)
memset(ws, 0, sizeof(*ws));
}
+static void
+cn10k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg)
+{
+ struct cn10k_sso_hws *ws = hws;
+ uint64_t cq_ds_cnt = 1;
+ uint64_t aq_cnt = 1;
+ uint64_t ds_cnt = 1;
+ struct rte_event ev;
+ uint64_t val, req;
+
+ plt_write64(0, base + SSO_LF_GGRP_QCTL);
+
+ req = queue_id; /* GGRP ID */
+ req |= BIT_ULL(18); /* Grouped */
+ req |= BIT_ULL(16); /* WAIT */
+
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+
+ while (aq_cnt || cq_ds_cnt || ds_cnt) {
+ plt_write64(req, ws->getwrk_op);
+ cn10k_sso_hws_get_work_empty(ws, &ev);
+ if (fn != NULL && ev.u64 != 0)
+ fn(arg, ev);
+ if (ev.sched_type != SSO_TT_EMPTY)
+ cnxk_sso_hws_swtag_flush(ws->tag_wqe_op,
+ ws->swtag_flush_op);
+ do {
+ val = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+ } while (val & BIT_ULL(56));
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ /* Extract cq and ds count */
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+ }
+
+ plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+ rte_mb();
+}
+
+static void
+cn10k_sso_hws_reset(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn10k_sso_hws *ws = hws;
+ uintptr_t base = ws->base;
+ uint64_t pend_state;
+ union {
+ __uint128_t wdata;
+ uint64_t u64[2];
+ } gw;
+ uint8_t pend_tt;
+
+ /* Wait till getwork/swtp/waitw/desched completes. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+ BIT_ULL(56) | BIT_ULL(54)));
+ pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(base +
+ SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+
+ /* Wait for desched to complete. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & BIT_ULL(58));
+
+ switch (dev->gw_mode) {
+ case CN10K_GW_MODE_PREF:
+ while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) & BIT_ULL(63))
+ ;
+ break;
+ case CN10K_GW_MODE_PREF_WFE:
+ while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) &
+ SSOW_LF_GWS_TAG_PEND_GET_WORK_BIT)
+ continue;
+ plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+ break;
+ case CN10K_GW_MODE_NONE:
+ default:
+ break;
+ }
+
+ if (CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_PRF_WQE0)) !=
+ SSO_TT_EMPTY) {
+ plt_write64(BIT_ULL(16) | 1, ws->getwrk_op);
+ do {
+ roc_load_pair(gw.u64[0], gw.u64[1], ws->tag_wqe_op);
+ } while (gw.u64[0] & BIT_ULL(63));
+ pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC ||
+ pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(
+ base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+ }
+
+ plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+ rte_mb();
+}
+
static void
cn10k_sso_set_rsrc(void *arg)
{
@@ -263,6 +374,20 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
return (int)nb_unlinks;
}
+static int
+cn10k_sso_start(struct rte_eventdev *event_dev)
+{
+ int rc;
+
+ rc = cnxk_sso_start(event_dev, cn10k_sso_hws_reset,
+ cn10k_sso_hws_flush_events);
+ if (rc < 0)
+ return rc;
+ cn10k_sso_fp_fns_set(event_dev);
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -275,6 +400,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_link = cn10k_sso_port_link,
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+
+ .dev_start = cn10k_sso_start,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 6ba3d1466..20919e3a0 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -126,6 +126,102 @@ cn9k_sso_hws_release(void *arg, void *hws)
}
}
+static void
+cn9k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(arg);
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws_state *st;
+ struct cn9k_sso_hws *ws;
+ uint64_t cq_ds_cnt = 1;
+ uint64_t aq_cnt = 1;
+ uint64_t ds_cnt = 1;
+ struct rte_event ev;
+ uintptr_t ws_base;
+ uint64_t val, req;
+
+ plt_write64(0, base + SSO_LF_GGRP_QCTL);
+
+ req = queue_id; /* GGRP ID */
+ req |= BIT_ULL(18); /* Grouped */
+ req |= BIT_ULL(16); /* WAIT */
+
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+
+ if (dev->dual_ws) {
+ dws = hws;
+ st = &dws->ws_state[0];
+ ws_base = dws->base[0];
+ } else {
+ ws = hws;
+ st = (struct cn9k_sso_hws_state *)ws;
+ ws_base = ws->base;
+ }
+
+ while (aq_cnt || cq_ds_cnt || ds_cnt) {
+ plt_write64(req, st->getwrk_op);
+ cn9k_sso_hws_get_work_empty(st, &ev);
+ if (fn != NULL && ev.u64 != 0)
+ fn(arg, ev);
+ if (ev.sched_type != SSO_TT_EMPTY)
+ cnxk_sso_hws_swtag_flush(st->tag_op,
+ st->swtag_flush_op);
+ do {
+ val = plt_read64(ws_base + SSOW_LF_GWS_PENDSTATE);
+ } while (val & BIT_ULL(56));
+ aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+ /* Extract cq and ds count */
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+ }
+
+ plt_write64(0, ws_base + SSOW_LF_GWS_OP_GWC_INVAL);
+}
+
+static void
+cn9k_sso_hws_reset(void *arg, void *hws)
+{
+ struct cnxk_sso_evdev *dev = arg;
+ struct cn9k_sso_hws_dual *dws;
+ struct cn9k_sso_hws *ws;
+ uint64_t pend_state;
+ uint8_t pend_tt;
+ uintptr_t base;
+ uint64_t tag;
+ uint8_t i;
+
+ dws = hws;
+ ws = hws;
+ for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
+ base = dev->dual_ws ? dws->base[i] : ws->base;
+ /* Wait till getwork/swtp/waitw/desched completes. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+ BIT_ULL(56)));
+
+ tag = plt_read64(base + SSOW_LF_GWS_TAG);
+ pend_tt = (tag >> 32) & 0x3;
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_TT_ATOMIC ||
+ pend_tt == SSO_TT_ORDERED)
+ cnxk_sso_hws_swtag_untag(
+ base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+ }
+
+ /* Wait for desched to complete. */
+ do {
+ pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+ } while (pend_state & BIT_ULL(58));
+ }
+}
+
static void
cn9k_sso_set_rsrc(void *arg)
{
@@ -352,6 +448,21 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
return (int)nb_unlinks;
}
+static int
+cn9k_sso_start(struct rte_eventdev *event_dev)
+{
+ int rc;
+
+ rc = cnxk_sso_start(event_dev, cn9k_sso_hws_reset,
+ cn9k_sso_hws_flush_events);
+ if (rc < 0)
+ return rc;
+
+ cn9k_sso_fp_fns_set(event_dev);
+
+ return rc;
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -364,6 +475,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_link = cn9k_sso_port_link,
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+
+ .dev_start = cn9k_sso_start,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 2a387ff95..5feae5288 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -326,6 +326,70 @@ cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
return 0;
}
+static void
+cnxk_handle_event(void *arg, struct rte_event event)
+{
+ struct rte_eventdev *event_dev = arg;
+
+ if (event_dev->dev_ops->dev_stop_flush != NULL)
+ event_dev->dev_ops->dev_stop_flush(
+ event_dev->data->dev_id, event,
+ event_dev->data->dev_stop_flush_arg);
+}
+
+static void
+cnxk_sso_cleanup(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn, uint8_t enable)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uintptr_t hwgrp_base;
+ uint16_t i;
+ void *ws;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ ws = event_dev->data->ports[i];
+ reset_fn(dev, ws);
+ }
+
+ rte_mb();
+ ws = event_dev->data->ports[0];
+
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ /* Consume all the events through HWS0 */
+ hwgrp_base = roc_sso_hwgrp_base_get(&dev->sso, i);
+ flush_fn(ws, i, hwgrp_base, cnxk_handle_event, event_dev);
+ /* Enable/Disable SSO GGRP */
+ plt_write64(enable, hwgrp_base + SSO_LF_GGRP_QCTL);
+ }
+}
+
+int
+cnxk_sso_start(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_qos qos[dev->qos_queue_cnt];
+ int i, rc;
+
+ plt_sso_dbg();
+ for (i = 0; i < dev->qos_queue_cnt; i++) {
+ qos[i].hwgrp = dev->qos_parse_data[i].queue;
+ qos[i].iaq_prcnt = dev->qos_parse_data[i].iaq_prcnt;
+ qos[i].taq_prcnt = dev->qos_parse_data[i].taq_prcnt;
+ qos[i].xaq_prcnt = dev->qos_parse_data[i].xaq_prcnt;
+ }
+ rc = roc_sso_hwgrp_qos_config(&dev->sso, qos, dev->qos_queue_cnt,
+ dev->xae_cnt);
+ if (rc < 0) {
+ plt_sso_dbg("failed to configure HWGRP QoS rc = %d", rc);
+ return -EINVAL;
+ }
+ cnxk_sso_cleanup(event_dev, reset_fn, flush_fn, true);
+ rte_mb();
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 0a3ab71e4..f175c23bb 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -48,6 +48,10 @@ typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
uint16_t nb_link);
+typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
+typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
+typedef void (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
+ cnxk_handle_event_t fn, void *arg);
struct cnxk_sso_qos {
uint16_t queue;
@@ -198,5 +202,8 @@ int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
cnxk_sso_hws_setup_t hws_setup_fn);
int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
uint64_t *tmo_ticks);
+int cnxk_sso_start(struct rte_eventdev *event_dev,
+ cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v5 19/35] event/cnxk: add device stop and close functions
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event device stop and close callback functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 15 +++++++++
drivers/event/cnxk/cn9k_eventdev.c | 14 +++++++++
drivers/event/cnxk/cnxk_eventdev.c | 48 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 6 ++++
4 files changed, 83 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 8d6b1e48a..5b7cd672c 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -388,6 +388,19 @@ cn10k_sso_start(struct rte_eventdev *event_dev)
return rc;
}
+static void
+cn10k_sso_stop(struct rte_eventdev *event_dev)
+{
+ cnxk_sso_stop(event_dev, cn10k_sso_hws_reset,
+ cn10k_sso_hws_flush_events);
+}
+
+static int
+cn10k_sso_close(struct rte_eventdev *event_dev)
+{
+ return cnxk_sso_close(event_dev, cn10k_sso_hws_unlink);
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -402,6 +415,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.timeout_ticks = cnxk_sso_timeout_ticks,
.dev_start = cn10k_sso_start,
+ .dev_stop = cn10k_sso_stop,
+ .dev_close = cn10k_sso_close,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 20919e3a0..f13f50f42 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -463,6 +463,18 @@ cn9k_sso_start(struct rte_eventdev *event_dev)
return rc;
}
+static void
+cn9k_sso_stop(struct rte_eventdev *event_dev)
+{
+ cnxk_sso_stop(event_dev, cn9k_sso_hws_reset, cn9k_sso_hws_flush_events);
+}
+
+static int
+cn9k_sso_close(struct rte_eventdev *event_dev)
+{
+ return cnxk_sso_close(event_dev, cn9k_sso_hws_unlink);
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -477,6 +489,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.timeout_ticks = cnxk_sso_timeout_ticks,
.dev_start = cn9k_sso_start,
+ .dev_stop = cn9k_sso_stop,
+ .dev_close = cn9k_sso_close,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 5feae5288..a3900315a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -390,6 +390,54 @@ cnxk_sso_start(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
return 0;
}
+void
+cnxk_sso_stop(struct rte_eventdev *event_dev, cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn)
+{
+ plt_sso_dbg();
+ cnxk_sso_cleanup(event_dev, reset_fn, flush_fn, false);
+ rte_mb();
+}
+
+int
+cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
+ uint16_t i;
+ void *ws;
+
+ if (!dev->configured)
+ return 0;
+
+ for (i = 0; i < dev->nb_event_queues; i++)
+ all_queues[i] = i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ ws = event_dev->data->ports[i];
+ unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+ rte_free(cnxk_sso_hws_get_cookie(ws));
+ event_dev->data->ports[i] = NULL;
+ }
+
+ roc_sso_rsrc_fini(&dev->sso);
+ rte_mempool_free(dev->xaq_pool);
+ rte_memzone_free(rte_memzone_lookup(CNXK_SSO_FC_NAME));
+
+ dev->fc_iova = 0;
+ dev->fc_mem = NULL;
+ dev->xaq_pool = NULL;
+ dev->configured = false;
+ dev->is_timeout_deq = 0;
+ dev->nb_event_ports = 0;
+ dev->max_num_events = -1;
+ dev->nb_event_queues = 0;
+ dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+ dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
+
+ return 0;
+}
+
static void
parse_queue_param(char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index f175c23bb..3011af153 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -48,6 +48,8 @@ typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
uint16_t nb_link);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
+ uint16_t nb_link);
typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
typedef void (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
@@ -205,5 +207,9 @@ int cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
int cnxk_sso_start(struct rte_eventdev *event_dev,
cnxk_sso_hws_reset_t reset_fn,
cnxk_sso_hws_flush_t flush_fn);
+void cnxk_sso_stop(struct rte_eventdev *event_dev,
+ cnxk_sso_hws_reset_t reset_fn,
+ cnxk_sso_hws_flush_t flush_fn);
+int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
#endif /* __CNXK_EVENTDEV_H__ */
--
2.17.1
* [dpdk-dev] [PATCH v5 20/35] event/cnxk: add SSO selftest and dump
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a selftest to verify the sanity of the SSO and a function to
dump its internal state.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 14 +
drivers/event/cnxk/cn10k_eventdev.c | 8 +
drivers/event/cnxk/cn9k_eventdev.c | 10 +-
drivers/event/cnxk/cnxk_eventdev.c | 8 +
drivers/event/cnxk/cnxk_eventdev.h | 5 +
drivers/event/cnxk/cnxk_eventdev_selftest.c | 1570 +++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
7 files changed, 1615 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_selftest.c
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index bcfaa53cb..843d9766b 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1036,6 +1036,18 @@ test_eventdev_selftest_dlb2(void)
return test_eventdev_selftest_impl("dlb2_event", "");
}
+static int
+test_eventdev_selftest_cn9k(void)
+{
+ return test_eventdev_selftest_impl("event_cn9k", "");
+}
+
+static int
+test_eventdev_selftest_cn10k(void)
+{
+ return test_eventdev_selftest_impl("event_cn10k", "");
+}
+
REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw);
REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
@@ -1044,3 +1056,5 @@ REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
test_eventdev_selftest_octeontx2);
REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
+REGISTER_TEST_COMMAND(eventdev_selftest_cn9k, test_eventdev_selftest_cn9k);
+REGISTER_TEST_COMMAND(eventdev_selftest_cn10k, test_eventdev_selftest_cn10k);
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 5b7cd672c..a0c6d32cc 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -401,6 +401,12 @@ cn10k_sso_close(struct rte_eventdev *event_dev)
return cnxk_sso_close(event_dev, cn10k_sso_hws_unlink);
}
+static int
+cn10k_sso_selftest(void)
+{
+ return cnxk_sso_selftest(RTE_STR(event_cn10k));
+}
+
static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -414,9 +420,11 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .dump = cnxk_sso_dump,
.dev_start = cn10k_sso_start,
.dev_stop = cn10k_sso_stop,
.dev_close = cn10k_sso_close,
+ .dev_selftest = cn10k_sso_selftest,
};
static int
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index f13f50f42..48991e522 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -222,7 +222,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
}
}
-static void
+void
cn9k_sso_set_rsrc(void *arg)
{
struct cnxk_sso_evdev *dev = arg;
@@ -475,6 +475,12 @@ cn9k_sso_close(struct rte_eventdev *event_dev)
return cnxk_sso_close(event_dev, cn9k_sso_hws_unlink);
}
+static int
+cn9k_sso_selftest(void)
+{
+ return cnxk_sso_selftest(RTE_STR(event_cn9k));
+}
+
static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.dev_infos_get = cn9k_sso_info_get,
.dev_configure = cn9k_sso_dev_configure,
@@ -488,9 +494,11 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .dump = cnxk_sso_dump,
.dev_start = cn9k_sso_start,
.dev_stop = cn9k_sso_stop,
.dev_close = cn9k_sso_close,
+ .dev_selftest = cn9k_sso_selftest,
};
static int
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index a3900315a..0f084176c 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -326,6 +326,14 @@ cnxk_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
return 0;
}
+void
+cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+ roc_sso_dump(&dev->sso, dev->sso.nb_hws, dev->sso.nb_hwgrp, f);
+}
+
static void
cnxk_handle_event(void *arg, struct rte_event event)
{
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 3011af153..9af04bc3d 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -211,5 +211,10 @@ void cnxk_sso_stop(struct rte_eventdev *event_dev,
cnxk_sso_hws_reset_t reset_fn,
cnxk_sso_hws_flush_t flush_fn);
int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
+int cnxk_sso_selftest(const char *dev_name);
+void cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f);
+
+/* CN9K */
+void cn9k_sso_set_rsrc(void *arg);
#endif /* __CNXK_EVENTDEV_H__ */
diff --git a/drivers/event/cnxk/cnxk_eventdev_selftest.c b/drivers/event/cnxk/cnxk_eventdev_selftest.c
new file mode 100644
index 000000000..69c15b1d0
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_selftest.c
@@ -0,0 +1,1570 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_hexdump.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_memcpy.h>
+#include <rte_per_lcore.h>
+#include <rte_random.h>
+#include <rte_test.h>
+
+#include "cnxk_eventdev.h"
+
+#define NUM_PACKETS (1024)
+#define MAX_EVENTS (1024)
+#define MAX_STAGES (255)
+
+#define CNXK_TEST_RUN(setup, teardown, test) \
+ cnxk_test_run(setup, teardown, test, #test)
+
+static int total;
+static int passed;
+static int failed;
+static int unsupported;
+
+static int evdev;
+static struct rte_mempool *eventdev_test_mempool;
+
+struct event_attr {
+ uint32_t flow_id;
+ uint8_t event_type;
+ uint8_t sub_event_type;
+ uint8_t sched_type;
+ uint8_t queue;
+ uint8_t port;
+};
+
+static uint32_t seqn_list_index;
+static int seqn_list[NUM_PACKETS];
+
+static inline void
+seqn_list_init(void)
+{
+ RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
+ memset(seqn_list, 0, sizeof(seqn_list));
+ seqn_list_index = 0;
+}
+
+static inline int
+seqn_list_update(int val)
+{
+ if (seqn_list_index >= NUM_PACKETS)
+ return -1;
+
+ seqn_list[seqn_list_index++] = val;
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ return 0;
+}
+
+static inline int
+seqn_list_check(int limit)
+{
+ int i;
+
+ for (i = 0; i < limit; i++) {
+ if (seqn_list[i] != i) {
+ plt_err("Seqn mismatch %d %d", seqn_list[i], i);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+struct test_core_param {
+ uint32_t *total_events;
+ uint64_t dequeue_tmo_ticks;
+ uint8_t port;
+ uint8_t sched_type;
+};
+
+static int
+testsuite_setup(const char *eventdev_name)
+{
+ evdev = rte_event_dev_get_dev_id(eventdev_name);
+ if (evdev < 0) {
+ plt_err("%d: Eventdev %s not found", __LINE__, eventdev_name);
+ return -1;
+ }
+ return 0;
+}
+
+static void
+testsuite_teardown(void)
+{
+ rte_event_dev_close(evdev);
+ total = 0;
+ passed = 0;
+ failed = 0;
+ unsupported = 0;
+}
+
+static inline void
+devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
+ struct rte_event_dev_info *info)
+{
+ memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
+ dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
+ dev_conf->nb_event_ports = info->max_event_ports;
+ dev_conf->nb_event_queues = info->max_event_queues;
+ dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
+ dev_conf->nb_event_port_dequeue_depth =
+ info->max_event_port_dequeue_depth;
+ dev_conf->nb_event_port_enqueue_depth =
+ info->max_event_port_enqueue_depth;
+ dev_conf->nb_events_limit = info->max_num_events;
+}
+
+enum {
+ TEST_EVENTDEV_SETUP_DEFAULT,
+ TEST_EVENTDEV_SETUP_PRIORITY,
+ TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT,
+};
+
+static inline int
+_eventdev_setup(int mode)
+{
+ const char *pool_name = "evdev_cnxk_test_pool";
+ struct rte_event_dev_config dev_conf;
+ struct rte_event_dev_info info;
+ int i, ret;
+
+ /* Create and destroy the pool for each test case to make it standalone */
+ eventdev_test_mempool = rte_pktmbuf_pool_create(
+ pool_name, MAX_EVENTS, 0, 0, 512, rte_socket_id());
+ if (!eventdev_test_mempool) {
+ plt_err("ERROR creating mempool");
+ return -1;
+ }
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+ devconf_set_default_sane_values(&dev_conf, &info);
+ if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
+ dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
+
+ ret = rte_event_dev_configure(evdev, &dev_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+ uint32_t queue_count;
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
+ if (queue_count > 8)
+ queue_count = 8;
+
+ /* Configure event queues(0 to n) with
+ * RTE_EVENT_DEV_PRIORITY_HIGHEST to
+ * RTE_EVENT_DEV_PRIORITY_LOWEST
+ */
+ uint8_t step =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) / queue_count;
+ for (i = 0; i < (int)queue_count; i++) {
+ struct rte_event_queue_conf queue_conf;
+
+ ret = rte_event_queue_default_conf_get(evdev, i,
+ &queue_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
+ i);
+ queue_conf.priority = i * step;
+ ret = rte_event_queue_setup(evdev, i, &queue_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+ i);
+ }
+
+ } else {
+ /* Configure event queues with default priority */
+ for (i = 0; i < (int)queue_count; i++) {
+ ret = rte_event_queue_setup(evdev, i, NULL);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+ i);
+ }
+ }
+ /* Configure event ports */
+ uint32_t port_count;
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &port_count),
+ "Port count get failed");
+ for (i = 0; i < (int)port_count; i++) {
+ ret = rte_event_port_setup(evdev, i, NULL);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
+ ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
+ i);
+ }
+
+ ret = rte_event_dev_start(evdev);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");
+
+ return 0;
+}
+
+static inline int
+eventdev_setup(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT);
+}
+
+static inline int
+eventdev_setup_priority(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_PRIORITY);
+}
+
+static inline int
+eventdev_setup_dequeue_timeout(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT);
+}
+
+static inline void
+eventdev_teardown(void)
+{
+ rte_event_dev_stop(evdev);
+ rte_mempool_free(eventdev_test_mempool);
+}
+
+static inline void
+update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev,
+ uint32_t flow_id, uint8_t event_type,
+ uint8_t sub_event_type, uint8_t sched_type,
+ uint8_t queue, uint8_t port)
+{
+ struct event_attr *attr;
+
+ /* Store the event attributes in mbuf for future reference */
+ attr = rte_pktmbuf_mtod(m, struct event_attr *);
+ attr->flow_id = flow_id;
+ attr->event_type = event_type;
+ attr->sub_event_type = sub_event_type;
+ attr->sched_type = sched_type;
+ attr->queue = queue;
+ attr->port = port;
+
+ ev->flow_id = flow_id;
+ ev->sub_event_type = sub_event_type;
+ ev->event_type = event_type;
+ /* Inject the new event */
+ ev->op = RTE_EVENT_OP_NEW;
+ ev->sched_type = sched_type;
+ ev->queue_id = queue;
+ ev->mbuf = m;
+}
+
+static inline int
+inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
+ uint8_t sched_type, uint8_t queue, uint8_t port,
+ unsigned int events)
+{
+ struct rte_mbuf *m;
+ unsigned int i;
+
+ for (i = 0; i < events; i++) {
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+ *rte_event_pmd_selftest_seqn(m) = i;
+ update_event_and_validation_attr(m, &ev, flow_id, event_type,
+ sub_event_type, sched_type,
+ queue, port);
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ return 0;
+}
+
+static inline int
+check_excess_events(uint8_t port)
+{
+ uint16_t valid_event;
+ struct rte_event ev;
+ int i;
+
+ /* Check for excess events; try a few times, then exit */
+ for (i = 0; i < 32; i++) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+
+ RTE_TEST_ASSERT_SUCCESS(valid_event,
+ "Unexpected valid event=%d",
+ *rte_event_pmd_selftest_seqn(ev.mbuf));
+ }
+ return 0;
+}
+
+static inline int
+generate_random_events(const unsigned int total_events)
+{
+ struct rte_event_dev_info info;
+ uint32_t queue_count;
+ unsigned int i;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+ for (i = 0; i < total_events; i++) {
+ ret = inject_events(
+ rte_rand() % info.max_event_queue_flows /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ rte_rand() % queue_count /* queue */, 0 /* port */,
+ 1 /* events */);
+ if (ret)
+ return -1;
+ }
+ return ret;
+}
+
+static inline int
+validate_event(struct rte_event *ev)
+{
+ struct event_attr *attr;
+
+ attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
+ RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
+ "flow_id mismatch enq=%d deq =%d", attr->flow_id,
+ ev->flow_id);
+ RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
+ "event_type mismatch enq=%d deq =%d",
+ attr->event_type, ev->event_type);
+ RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
+ "sub_event_type mismatch enq=%d deq =%d",
+ attr->sub_event_type, ev->sub_event_type);
+ RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
+ "sched_type mismatch enq=%d deq =%d",
+ attr->sched_type, ev->sched_type);
+ RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
+ "queue mismatch enq=%d deq =%d", attr->queue,
+ ev->queue_id);
+ return 0;
+}
+
+typedef int (*validate_event_cb)(uint32_t index, uint8_t port,
+ struct rte_event *ev);
+
+static inline int
+consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
+{
+ uint32_t events = 0, forward_progress_cnt = 0, index = 0;
+ uint16_t valid_event;
+ struct rte_event ev;
+ int ret;
+
+ while (1) {
+ if (++forward_progress_cnt > UINT16_MAX) {
+ plt_err("Detected deadlock");
+ return -1;
+ }
+
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ forward_progress_cnt = 0;
+ ret = validate_event(&ev);
+ if (ret)
+ return -1;
+
+ if (fn != NULL) {
+ ret = fn(index, port, &ev);
+ RTE_TEST_ASSERT_SUCCESS(
+ ret, "Failed to validate test specific event");
+ }
+
+ ++index;
+
+ rte_pktmbuf_free(ev.mbuf);
+ if (++events >= total_events)
+ break;
+ }
+
+ return check_excess_events(port);
+}
+
+static int
+validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
+{
+ RTE_SET_USED(port);
+ RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf),
+ "index=%d != seqn=%d", index,
+ *rte_event_pmd_selftest_seqn(ev->mbuf));
+ return 0;
+}
+
+static inline int
+test_simple_enqdeq(uint8_t sched_type)
+{
+ int ret;
+
+ ret = inject_events(0 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type */, sched_type, 0 /* queue */,
+ 0 /* port */, MAX_EVENTS);
+ if (ret)
+ return -1;
+
+ return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
+}
+
+static int
+test_simple_enqdeq_ordered(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_simple_enqdeq_atomic(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_simple_enqdeq_parallel(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL);
+}
+
+/*
+ * Generate a prescribed number of events and spread them across available
+ * queues. On dequeue, verify the enqueued event attributes using a
+ * single event port (port 0)
+ */
+static int
+test_multi_queue_enq_single_port_deq(void)
+{
+ int ret;
+
+ ret = generate_random_events(MAX_EVENTS);
+ if (ret)
+ return -1;
+
+ return consume_events(0 /* port */, MAX_EVENTS, NULL);
+}
+
+/*
+ * Inject MAX_EVENTS events across queues 0..queue_count-1 using a modulus
+ * operation
+ *
+ * For example, inject 32 events over queues 0..7:
+ * enqueue events 0, 8, 16, 24 in queue 0
+ * enqueue events 1, 9, 17, 25 in queue 1
+ * ..
+ * ..
+ * enqueue events 7, 15, 23, 31 in queue 7
+ *
+ * On dequeue, validate that the events arrive in 0,8,16,24,1,9,17,25..,7,15,23,31
+ * order from queue 0 (highest priority) to queue 7 (lowest priority)
+ */
+static int
+validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
+{
+ uint32_t queue_count;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count > 8)
+ queue_count = 8;
+ uint32_t range = MAX_EVENTS / queue_count;
+ uint32_t expected_val = (index % range) * queue_count;
+
+ expected_val += ev->queue_id;
+ RTE_SET_USED(port);
+ RTE_TEST_ASSERT_EQUAL(
+ *rte_event_pmd_selftest_seqn(ev->mbuf), expected_val,
+ "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
+ *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val,
+ range, queue_count, MAX_EVENTS);
+ return 0;
+}
+
+static int
+test_multi_queue_priority(void)
+{
+ int i, max_evts_roundoff;
+ /* See validate_queue_priority() comments for priority validate logic */
+ uint32_t queue_count;
+ struct rte_mbuf *m;
+ uint8_t queue;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count > 8)
+ queue_count = 8;
+ max_evts_roundoff = MAX_EVENTS / queue_count;
+ max_evts_roundoff *= queue_count;
+
+ for (i = 0; i < max_evts_roundoff; i++) {
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+ *rte_event_pmd_selftest_seqn(m) = i;
+ queue = i % queue_count;
+ update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,
+ 0, RTE_SCHED_TYPE_PARALLEL,
+ queue, 0);
+ rte_event_enqueue_burst(evdev, 0, &ev, 1);
+ }
+
+ return consume_events(0, max_evts_roundoff, validate_queue_priority);
+}
+
+static int
+worker_multi_port_fn(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint32_t *total_events = param->total_events;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+ int ret;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ ret = validate_event(&ev);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ }
+
+ return 0;
+}
+
+static inline int
+wait_workers_to_join(const uint32_t *count)
+{
+ uint64_t cycles, print_cycles;
+
+ cycles = rte_get_timer_cycles();
+ print_cycles = cycles;
+ while (__atomic_load_n(count, __ATOMIC_RELAXED)) {
+ uint64_t new_cycles = rte_get_timer_cycles();
+
+ if (new_cycles - print_cycles > rte_get_timer_hz()) {
+ plt_info("Events %d",
+ __atomic_load_n(count, __ATOMIC_RELAXED));
+ print_cycles = new_cycles;
+ }
+ if (new_cycles - cycles > rte_get_timer_hz() * 10) {
+ plt_err("No schedules for 10 seconds, deadlock (%d)",
+ __atomic_load_n(count, __ATOMIC_RELAXED));
+ rte_event_dev_dump(evdev, stdout);
+ cycles = new_cycles;
+ return -1;
+ }
+ }
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+static inline int
+launch_workers_and_wait(int (*main_thread)(void *),
+ int (*worker_thread)(void *), uint32_t total_events,
+ uint8_t nb_workers, uint8_t sched_type)
+{
+ uint32_t atomic_total_events;
+ struct test_core_param *param;
+ uint64_t dequeue_tmo_ticks;
+ uint8_t port = 0;
+ int w_lcore;
+ int ret;
+
+ if (!nb_workers)
+ return 0;
+
+ __atomic_store_n(&atomic_total_events, total_events, __ATOMIC_RELAXED);
+ seqn_list_init();
+
+ param = malloc(sizeof(struct test_core_param) * nb_workers);
+ if (!param)
+ return -1;
+
+ ret = rte_event_dequeue_timeout_ticks(
+ evdev, rte_rand() % 10000000 /* 10ms */, &dequeue_tmo_ticks);
+ if (ret) {
+ free(param);
+ return -1;
+ }
+
+ param[0].total_events = &atomic_total_events;
+ param[0].sched_type = sched_type;
+ param[0].port = 0;
+ param[0].dequeue_tmo_ticks = dequeue_tmo_ticks;
+ rte_wmb();
+
+ w_lcore = rte_get_next_lcore(
+ /* start core */ -1,
+ /* skip main */ 1,
+ /* wrap */ 0);
+ rte_eal_remote_launch(main_thread, ¶m[0], w_lcore);
+
+ for (port = 1; port < nb_workers; port++) {
+ param[port].total_events = &atomic_total_events;
+ param[port].sched_type = sched_type;
+ param[port].port = port;
+ param[port].dequeue_tmo_ticks = dequeue_tmo_ticks;
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ w_lcore = rte_get_next_lcore(w_lcore, 1, 0);
+ rte_eal_remote_launch(worker_thread, ¶m[port], w_lcore);
+ }
+
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ ret = wait_workers_to_join(&atomic_total_events);
+ free(param);
+
+ return ret;
+}
+
+/*
+ * Generate a prescribed number of events and spread them across available
+ * queues. Dequeue the events through multiple ports and verify the enqueued
+ * event attributes
+ */
+static int
+test_multi_queue_enq_multi_port_deq(void)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t nr_ports;
+ int ret;
+
+ ret = generate_random_events(total_events);
+ if (ret)
+ return -1;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ return launch_workers_and_wait(worker_multi_port_fn,
+ worker_multi_port_fn, total_events,
+ nr_ports, 0xff /* invalid */);
+}
+
+static void
+flush(uint8_t dev_id, struct rte_event event, void *arg)
+{
+ unsigned int *count = arg;
+
+ RTE_SET_USED(dev_id);
+ if (event.event_type == RTE_EVENT_TYPE_CPU)
+ *count = *count + 1;
+}
+
+static int
+test_dev_stop_flush(void)
+{
+ unsigned int total_events = MAX_EVENTS, count = 0;
+ int ret;
+
+ ret = generate_random_events(total_events);
+ if (ret)
+ return -1;
+
+ ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count);
+ if (ret)
+ return -2;
+ rte_event_dev_stop(evdev);
+ ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL);
+ if (ret)
+ return -3;
+ RTE_TEST_ASSERT_EQUAL(total_events, count,
+ "count mismatch total_events=%d count=%d",
+ total_events, count);
+
+ return 0;
+}
+
+static int
+validate_queue_to_port_single_link(uint32_t index, uint8_t port,
+ struct rte_event *ev)
+{
+ RTE_SET_USED(index);
+ RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
+ "queue mismatch enq=%d deq =%d", port,
+ ev->queue_id);
+
+ return 0;
+}
+
+/*
+ * Link queue x to port x and check the correctness of the link by
+ * verifying queue_id == x on dequeue from port x
+ */
+static int
+test_queue_to_port_single_link(void)
+{
+ int i, nr_links, ret;
+ uint32_t queue_count;
+ uint32_t port_count;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &port_count),
+ "Port count get failed");
+
+ /* Unlink all connections that were created in eventdev_setup */
+ for (i = 0; i < (int)port_count; i++) {
+ ret = rte_event_port_unlink(evdev, i, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+ i);
+ }
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+
+ nr_links = RTE_MIN(port_count, queue_count);
+ const unsigned int total_events = MAX_EVENTS / nr_links;
+
+ /* Link queue x to port x and inject events to queue x through port x */
+ for (i = 0; i < nr_links; i++) {
+ uint8_t queue = (uint8_t)i;
+
+ ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
+ RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
+
+ ret = inject_events(0x100 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ queue /* queue */, i /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+ }
+
+ /* Verify the events generated from correct queue */
+ for (i = 0; i < nr_links; i++) {
+ ret = consume_events(i /* port */, total_events,
+ validate_queue_to_port_single_link);
+ if (ret)
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
+ struct rte_event *ev)
+{
+ RTE_SET_USED(index);
+ RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
+ "queue mismatch enq=%d deq=%d", port,
+ ev->queue_id);
+
+ return 0;
+}
+
+/*
+ * Link all even-numbered queues to port 0 and all odd-numbered queues to
+ * port 1, and verify the link connections on dequeue
+ */
+static int
+test_queue_to_port_multi_link(void)
+{
+ int ret, port0_events = 0, port1_events = 0;
+ uint32_t nr_queues = 0;
+ uint32_t nr_ports = 0;
+ uint8_t queue, port;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &nr_queues),
+ "Queue count get failed");
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+
+ if (nr_ports < 2) {
+ plt_err("Not enough ports to test ports=%d", nr_ports);
+ return 0;
+ }
+
+ /* Unlink all connections that were created in eventdev_setup */
+ for (port = 0; port < nr_ports; port++) {
+ ret = rte_event_port_unlink(evdev, port, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+ port);
+ }
+
+ unsigned int total_events = MAX_EVENTS / nr_queues;
+ if (!total_events) {
+ nr_queues = MAX_EVENTS;
+ total_events = MAX_EVENTS / nr_queues;
+ }
+
+ /* Link all even-numbered queues to port 0 and odd-numbered queues to port 1 */
+ for (queue = 0; queue < nr_queues; queue++) {
+ port = queue & 0x1;
+ ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
+ RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
+ queue, port);
+
+ ret = inject_events(0x100 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ queue /* queue */, port /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+
+ if (port == 0)
+ port0_events += total_events;
+ else
+ port1_events += total_events;
+ }
+
+ ret = consume_events(0 /* port */, port0_events,
+ validate_queue_to_port_multi_link);
+ if (ret)
+ return -1;
+ ret = consume_events(1 /* port */, port1_events,
+ validate_queue_to_port_multi_link);
+ if (ret)
+ return -1;
+
+ return 0;
+}
+
+static int
+worker_flow_based_pipeline(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
+ uint32_t *total_events = param->total_events;
+ uint8_t new_sched_type = param->sched_type;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
+ dequeue_tmo_ticks);
+ if (!valid_event)
+ continue;
+
+ /* Events from stage 0 */
+ if (ev.sub_event_type == 0) {
+ /* Move to atomic flow to maintain the ordering */
+ ev.flow_id = 0x2;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sub_event_type = 1; /* stage 1 */
+ ev.sched_type = new_sched_type;
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ } else if (ev.sub_event_type == 1) { /* Events from stage 1*/
+ uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
+
+ if (seqn_list_update(seqn) == 0) {
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1,
+ __ATOMIC_RELAXED);
+ } else {
+ plt_err("Failed to update seqn_list");
+ return -1;
+ }
+ } else {
+ plt_err("Invalid ev.sub_event_type = %d",
+ ev.sub_event_type);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int
+test_multiport_flow_sched_type_test(uint8_t in_sched_type,
+ uint8_t out_sched_type)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject total_events events with sequence numbers starting from 0 */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */, in_sched_type, 0 /* queue */,
+ 0 /* port */, total_events /* events */);
+ if (ret)
+ return -1;
+
+ rte_mb();
+ ret = launch_workers_and_wait(worker_flow_based_pipeline,
+ worker_flow_based_pipeline, total_events,
+ nr_ports, out_sched_type);
+ if (ret)
+ return -1;
+
+ if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
+ out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
+ /* Check whether the event order was maintained */
+ return seqn_list_check(total_events);
+ }
+
+ return 0;
+}
+
+/* Multi port ordered to atomic transaction */
+static int
+test_multi_port_flow_ordered_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_ordered_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_ordered_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_flow_atomic_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_atomic_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_atomic_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_flow_parallel_to_atomic(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_parallel_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_parallel_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+worker_group_based_pipeline(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
+ uint32_t *total_events = param->total_events;
+ uint8_t new_sched_type = param->sched_type;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
+ dequeue_tmo_ticks);
+ if (!valid_event)
+ continue;
+
+ /* Events from stage 0(group 0) */
+ if (ev.queue_id == 0) {
+ /* Move to atomic flow to maintain the ordering */
+ ev.flow_id = 0x2;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sched_type = new_sched_type;
+ ev.queue_id = 1; /* Stage 1*/
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/
+ uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
+
+ if (seqn_list_update(seqn) == 0) {
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1,
+ __ATOMIC_RELAXED);
+ } else {
+ plt_err("Failed to update seqn_list");
+ return -1;
+ }
+ } else {
+ plt_err("Invalid ev.queue_id = %d", ev.queue_id);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int
+test_multiport_queue_sched_type_test(uint8_t in_sched_type,
+ uint8_t out_sched_type)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t queue_count;
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ if (queue_count < 2 || !nr_ports) {
+ plt_err("Not enough queues=%d ports=%d or workers=%d",
+ queue_count, nr_ports, rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject total_events events with sequence numbers starting from 0 */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */, in_sched_type, 0 /* queue */,
+ 0 /* port */, total_events /* events */);
+ if (ret)
+ return -1;
+
+ ret = launch_workers_and_wait(worker_group_based_pipeline,
+ worker_group_based_pipeline, total_events,
+ nr_ports, out_sched_type);
+ if (ret)
+ return -1;
+
+ if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
+ out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
+ /* Check whether the event order was maintained */
+ return seqn_list_check(total_events);
+ }
+
+ return 0;
+}
+
+static int
+test_multi_port_queue_ordered_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_ordered_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_ordered_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_queue_atomic_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_atomic_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_atomic_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_queue_parallel_to_atomic(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_parallel_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_parallel_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+worker_flow_based_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint32_t *total_events = param->total_events;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.sub_event_type == MAX_STAGES) { /* last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sub_event_type++;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+static int
+launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
+{
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ plt_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject total_events events with sequence numbers starting from 0 */
+ ret = inject_events(
+ 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1) /* sched_type */,
+ 0 /* queue */, 0 /* port */, MAX_EVENTS /* events */);
+ if (ret)
+ return -1;
+
+ return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
+ 0xff /* invalid */);
+}
+
+/* Flow based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_flow_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_flow_based_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ uint32_t queue_count;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ uint8_t nr_queues = queue_count;
+ uint32_t *total_events = param->total_events;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.queue_id == nr_queues - 1) { /* last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id++;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+/* Queue based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_queue_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_queue_based_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ uint32_t queue_count;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+ &queue_count),
+ "Queue count get failed");
+ uint8_t nr_queues = queue_count;
+ uint32_t *total_events = param->total_events;
+
+ while (__atomic_load_n(total_events, __ATOMIC_RELAXED) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.queue_id == nr_queues - 1) { /* Last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ __atomic_sub_fetch(total_events, 1, __ATOMIC_RELAXED);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id++;
+ ev.sub_event_type = rte_rand() % 256;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+/* Queue and flow based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_mixed_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_mixed_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_ordered_flow_producer(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ struct rte_mbuf *m;
+ int counter = 0;
+
+ while (counter < NUM_PACKETS) {
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ if (m == NULL)
+ continue;
+
+ *rte_event_pmd_selftest_seqn(m) = counter++;
+
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ ev.flow_id = 0x1; /* Generate a fat flow */
+ ev.sub_event_type = 0;
+ /* Inject the new event */
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sched_type = RTE_SCHED_TYPE_ORDERED;
+ ev.queue_id = 0;
+ ev.mbuf = m;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+
+ return 0;
+}
+
+static inline int
+test_producer_consumer_ingress_order_test(int (*fn)(void *))
+{
+ uint32_t nr_ports;
+
+ RTE_TEST_ASSERT_SUCCESS(
+ rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+ &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (rte_lcore_count() < 3 || nr_ports < 2) {
+ plt_err("### Not enough cores for test.");
+ return 0;
+ }
+
+ launch_workers_and_wait(worker_ordered_flow_producer, fn, NUM_PACKETS,
+ nr_ports, RTE_SCHED_TYPE_ATOMIC);
+ /* Check whether the event order was maintained */
+ return seqn_list_check(NUM_PACKETS);
+}
+
+/* Flow based producer consumer ingress order test */
+static int
+test_flow_producer_consumer_ingress_order_test(void)
+{
+ return test_producer_consumer_ingress_order_test(
+ worker_flow_based_pipeline);
+}
+
+/* Queue based producer consumer ingress order test */
+static int
+test_queue_producer_consumer_ingress_order_test(void)
+{
+ return test_producer_consumer_ingress_order_test(
+ worker_group_based_pipeline);
+}
+
+static void
+cnxk_test_run(int (*setup)(void), void (*tdown)(void), int (*test)(void),
+ const char *name)
+{
+ if (setup() < 0) {
+ printf("Error setting up test %s\n", name);
+ unsupported++;
+ } else {
+ if (test() < 0) {
+ failed++;
+ printf("+ TestCase [%2d] : %s failed\n", total, name);
+ } else {
+ passed++;
+ printf("+ TestCase [%2d] : %s succeeded\n", total,
+ name);
+ }
+ }
+
+ total++;
+ tdown();
+}
+
+static int
+cnxk_sso_testsuite_run(const char *dev_name)
+{
+ int rc;
+
+ testsuite_setup(dev_name);
+
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_queue_enq_single_port_deq);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown, test_dev_stop_flush);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_queue_enq_multi_port_deq);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_to_port_single_link);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_to_port_multi_link);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_ordered);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_parallel);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_mixed_max_stages_random_sched_type);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_flow_producer_consumer_ingress_order_test);
+ CNXK_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_producer_consumer_ingress_order_test);
+ CNXK_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
+ test_multi_queue_priority);
+ CNXK_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+ test_multi_port_flow_ordered_to_atomic);
+ CNXK_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+ test_multi_port_queue_ordered_to_atomic);
+ printf("Total tests : %d\n", total);
+ printf("Passed : %d\n", passed);
+ printf("Failed : %d\n", failed);
+ printf("Not supported : %d\n", unsupported);
+
+ rc = failed;
+ testsuite_teardown();
+
+ return rc;
+}
+
+int
+cnxk_sso_selftest(const char *dev_name)
+{
+ const struct rte_memzone *mz;
+ struct cnxk_sso_evdev *dev;
+ int rc = -1;
+
+ mz = rte_memzone_lookup(CNXK_SSO_MZ_NAME);
+ if (mz == NULL)
+ return rc;
+
+ dev = (void *)*((uint64_t *)mz->addr);
+ if (roc_model_runtime_is_cn9k()) {
+ /* Verify single ws mode. */
+ printf("Verifying CN9K Single workslot mode\n");
+ dev->dual_ws = 0;
+ cn9k_sso_set_rsrc(dev);
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ /* Verify dual ws mode. */
+ printf("Verifying CN9K Dual workslot mode\n");
+ dev->dual_ws = 1;
+ cn9k_sso_set_rsrc(dev);
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ }
+
+ if (roc_model_runtime_is_cn10k()) {
+ printf("Verifying CN10K workslot getwork mode none\n");
+ dev->gw_mode = CN10K_GW_MODE_NONE;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ printf("Verifying CN10K workslot getwork mode prefetch\n");
+ dev->gw_mode = CN10K_GW_MODE_PREF;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ printf("Verifying CN10K workslot getwork mode smart prefetch\n");
+ dev->gw_mode = CN10K_GW_MODE_PREF_WFE;
+ if (cnxk_sso_testsuite_run(dev_name))
+ return rc;
+ }
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 57b3f66ea..e37ea3478 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -13,6 +13,7 @@ sources = files('cn10k_worker.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
+ 'cnxk_eventdev_selftest.c'
)
deps += ['bus_pci', 'common_cnxk']
--
2.17.1
* [dpdk-dev] [PATCH v5 21/35] event/cnxk: add event port and queue xstats
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (19 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 20/35] event/cnxk: add SSO selftest and dump pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 9:51 ` Kinsella, Ray
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 22/35] event/cnxk: support event timer pbhagavatula
` (14 subsequent siblings)
35 siblings, 1 reply; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
Satha Rao, Ray Kinsella, Neil Horman, Pavan Nikhilesh,
Shijith Thotton
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add support for retrieving statistics from SSO HWS and HWGRP.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/common/cnxk/roc_sso.c | 63 +++++
drivers/common/cnxk/roc_sso.h | 19 ++
drivers/common/cnxk/version.map | 2 +
drivers/event/cnxk/cnxk_eventdev.h | 15 ++
drivers/event/cnxk/cnxk_eventdev_stats.c | 289 +++++++++++++++++++++++
drivers/event/cnxk/meson.build | 3 +-
6 files changed, 390 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 80d032039..1ccf2626b 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -279,6 +279,69 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
return nb_hwgrp;
}
+int
+roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
+ struct roc_sso_hws_stats *stats)
+{
+ struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+ struct sso_hws_stats *req_rsp;
+ int rc;
+
+ req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats(
+ dev->mbox);
+ if (req_rsp == NULL) {
+ rc = mbox_process(dev->mbox);
+ if (rc < 0)
+ return rc;
+ req_rsp = (struct sso_hws_stats *)
+ mbox_alloc_msg_sso_hws_get_stats(dev->mbox);
+ if (req_rsp == NULL)
+ return -ENOSPC;
+ }
+ req_rsp->hws = hws;
+ rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
+ if (rc)
+ return rc;
+
+ stats->arbitration = req_rsp->arbitration;
+ return 0;
+}
+
+int
+roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
+ struct roc_sso_hwgrp_stats *stats)
+{
+ struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+ struct sso_grp_stats *req_rsp;
+ int rc;
+
+ req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats(
+ dev->mbox);
+ if (req_rsp == NULL) {
+ rc = mbox_process(dev->mbox);
+ if (rc < 0)
+ return rc;
+ req_rsp = (struct sso_grp_stats *)
+ mbox_alloc_msg_sso_grp_get_stats(dev->mbox);
+ if (req_rsp == NULL)
+ return -ENOSPC;
+ }
+ req_rsp->grp = hwgrp;
+ rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
+ if (rc)
+ return rc;
+
+ stats->aw_status = req_rsp->aw_status;
+ stats->dq_pc = req_rsp->dq_pc;
+ stats->ds_pc = req_rsp->ds_pc;
+ stats->ext_pc = req_rsp->ext_pc;
+ stats->page_cnt = req_rsp->page_cnt;
+ stats->ts_pc = req_rsp->ts_pc;
+ stats->wa_pc = req_rsp->wa_pc;
+ stats->ws_pc = req_rsp->ws_pc;
+ return 0;
+}
+
int
roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso, uint8_t hws,
uint16_t hwgrp)
diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
index f85799ba8..a6030e7d8 100644
--- a/drivers/common/cnxk/roc_sso.h
+++ b/drivers/common/cnxk/roc_sso.h
@@ -12,6 +12,21 @@ struct roc_sso_hwgrp_qos {
uint8_t taq_prcnt;
};
+struct roc_sso_hws_stats {
+ uint64_t arbitration;
+};
+
+struct roc_sso_hwgrp_stats {
+ uint64_t ws_pc;
+ uint64_t ext_pc;
+ uint64_t wa_pc;
+ uint64_t ts_pc;
+ uint64_t ds_pc;
+ uint64_t dq_pc;
+ uint64_t aw_status;
+ uint64_t page_cnt;
+};
+
struct roc_sso {
struct plt_pci_device *pci_dev;
/* Public data. */
@@ -61,5 +76,9 @@ uintptr_t __roc_api roc_sso_hwgrp_base_get(struct roc_sso *roc_sso,
/* Debug */
void __roc_api roc_sso_dump(struct roc_sso *roc_sso, uint8_t nb_hws,
uint16_t hwgrp, FILE *f);
+int __roc_api roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
+ struct roc_sso_hwgrp_stats *stats);
+int __roc_api roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
+ struct roc_sso_hws_stats *stats);
#endif /* _ROC_SSOW_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 5f2264f23..8e67c83a6 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -183,8 +183,10 @@ INTERNAL {
roc_sso_hwgrp_qos_config;
roc_sso_hwgrp_release_xaq;
roc_sso_hwgrp_set_priority;
+ roc_sso_hwgrp_stats_get;
roc_sso_hws_base_get;
roc_sso_hws_link;
+ roc_sso_hws_stats_get;
roc_sso_hws_unlink;
roc_sso_ns_to_gw;
roc_sso_rsrc_fini;
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 9af04bc3d..abe36f21f 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -214,6 +214,21 @@ int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
int cnxk_sso_selftest(const char *dev_name);
void cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f);
+/* Stats API. */
+int cnxk_sso_xstats_get_names(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id,
+ struct rte_event_dev_xstats_name *xstats_names,
+ unsigned int *ids, unsigned int size);
+int cnxk_sso_xstats_get(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id, const unsigned int ids[],
+ uint64_t values[], unsigned int n);
+int cnxk_sso_xstats_reset(struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ int16_t queue_port_id, const uint32_t ids[],
+ uint32_t n);
+
/* CN9K */
void cn9k_sso_set_rsrc(void *arg);
diff --git a/drivers/event/cnxk/cnxk_eventdev_stats.c b/drivers/event/cnxk/cnxk_eventdev_stats.c
new file mode 100644
index 000000000..a3b548f46
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_stats.c
@@ -0,0 +1,289 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+struct cnxk_sso_xstats_name {
+ const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
+ const size_t offset;
+ const uint64_t mask;
+ const uint8_t shift;
+ uint64_t reset_snap[CNXK_SSO_MAX_HWGRP];
+};
+
+static struct cnxk_sso_xstats_name sso_hws_xstats[] = {
+ {
+ "last_grp_serviced",
+ offsetof(struct roc_sso_hws_stats, arbitration),
+ 0x3FF,
+ 0,
+ {0},
+ },
+ {
+ "affinity_arbitration_credits",
+ offsetof(struct roc_sso_hws_stats, arbitration),
+ 0xF,
+ 16,
+ {0},
+ },
+};
+
+static struct cnxk_sso_xstats_name sso_hwgrp_xstats[] = {
+ {
+ "wrk_sched",
+ offsetof(struct roc_sso_hwgrp_stats, ws_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "xaq_dram",
+ offsetof(struct roc_sso_hwgrp_stats, ext_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "add_wrk",
+ offsetof(struct roc_sso_hwgrp_stats, wa_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "tag_switch_req",
+ offsetof(struct roc_sso_hwgrp_stats, ts_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "desched_req",
+ offsetof(struct roc_sso_hwgrp_stats, ds_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "desched_wrk",
+ offsetof(struct roc_sso_hwgrp_stats, dq_pc),
+ ~0x0,
+ 0,
+ {0},
+ },
+ {
+ "xaq_cached",
+ offsetof(struct roc_sso_hwgrp_stats, aw_status),
+ 0x3,
+ 0,
+ {0},
+ },
+ {
+ "work_inflight",
+ offsetof(struct roc_sso_hwgrp_stats, aw_status),
+ 0x3F,
+ 16,
+ {0},
+ },
+ {
+ "inuse_pages",
+ offsetof(struct roc_sso_hwgrp_stats, page_cnt),
+ 0xFFFFFFFF,
+ 0,
+ {0},
+ },
+};
+
+#define CNXK_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats)
+#define CNXK_SSO_NUM_GRP_XSTATS RTE_DIM(sso_hwgrp_xstats)
+
+#define CNXK_SSO_NUM_XSTATS (CNXK_SSO_NUM_HWS_XSTATS + CNXK_SSO_NUM_GRP_XSTATS)
+
+int
+cnxk_sso_xstats_get(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
+ const unsigned int ids[], uint64_t values[], unsigned int n)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_stats hwgrp_stats;
+ struct cnxk_sso_xstats_name *xstats;
+ struct cnxk_sso_xstats_name *xstat;
+ struct roc_sso_hws_stats hws_stats;
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int i;
+ uint64_t value;
+ void *rsp;
+ int rc;
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ return 0;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hws_xstats;
+
+ rc = roc_sso_hws_stats_get(&dev->sso, queue_port_id,
+ &hws_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hws_stats;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
+ start_offset = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hwgrp_xstats;
+
+ rc = roc_sso_hwgrp_stats_get(&dev->sso, queue_port_id,
+ &hwgrp_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hwgrp_stats;
+
+ break;
+ default:
+ plt_err("Invalid mode received");
+ goto invalid_value;
+ }
+
+ for (i = 0; i < n && i < xstats_mode_count; i++) {
+ xstat = &xstats[ids[i] - start_offset];
+ value = *(uint64_t *)((char *)rsp + xstat->offset);
+ value = (value >> xstat->shift) & xstat->mask;
+
+ values[i] = value;
+ values[i] -= xstat->reset_snap[queue_port_id];
+ }
+
+ return i;
+invalid_value:
+ return -EINVAL;
+}
+
+int
+cnxk_sso_xstats_reset(struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ int16_t queue_port_id, const uint32_t ids[], uint32_t n)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ struct roc_sso_hwgrp_stats hwgrp_stats;
+ struct cnxk_sso_xstats_name *xstats;
+ struct cnxk_sso_xstats_name *xstat;
+ struct roc_sso_hws_stats hws_stats;
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int i;
+ uint64_t value;
+ void *rsp;
+ int rc;
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ return 0;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hws_xstats;
+ rc = roc_sso_hws_stats_get(&dev->sso, queue_port_id,
+ &hws_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hws_stats;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ goto invalid_value;
+
+ xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
+ start_offset = CNXK_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hwgrp_xstats;
+
+ rc = roc_sso_hwgrp_stats_get(&dev->sso, queue_port_id,
+ &hwgrp_stats);
+ if (rc < 0)
+ goto invalid_value;
+ rsp = &hwgrp_stats;
+ break;
+ default:
+ plt_err("Invalid mode received");
+ goto invalid_value;
+ }
+
+ for (i = 0; i < n && i < xstats_mode_count; i++) {
+ xstat = &xstats[ids[i] - start_offset];
+ value = *(uint64_t *)((char *)rsp + xstat->offset);
+ value = (value >> xstat->shift) & xstat->mask;
+
+ xstat->reset_snap[queue_port_id] = value;
+ }
+ return i;
+invalid_value:
+ return -EINVAL;
+}
+
+int
+cnxk_sso_xstats_get_names(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id,
+ struct rte_event_dev_xstats_name *xstats_names,
+ unsigned int *ids, unsigned int size)
+{
+ struct rte_event_dev_xstats_name xstats_names_copy[CNXK_SSO_NUM_XSTATS];
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int xidx = 0;
+ unsigned int i;
+
+ for (i = 0; i < CNXK_SSO_NUM_HWS_XSTATS; i++) {
+ snprintf(xstats_names_copy[i].name,
+ sizeof(xstats_names_copy[i].name), "%s",
+ sso_hws_xstats[i].name);
+ }
+
+ for (; i < CNXK_SSO_NUM_XSTATS; i++) {
+ snprintf(xstats_names_copy[i].name,
+ sizeof(xstats_names_copy[i].name), "%s",
+ sso_hwgrp_xstats[i - CNXK_SSO_NUM_HWS_XSTATS].name);
+ }
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ break;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ break;
+ xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ break;
+ xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
+ start_offset = CNXK_SSO_NUM_HWS_XSTATS;
+ break;
+ default:
+ plt_err("Invalid mode received");
+ return -EINVAL;
+ }
+
+ if (xstats_mode_count > size || !ids || !xstats_names)
+ return xstats_mode_count;
+
+ for (i = 0; i < xstats_mode_count; i++) {
+ xidx = i + start_offset;
+ strncpy(xstats_names[i].name, xstats_names_copy[xidx].name,
+ sizeof(xstats_names[i].name));
+ ids[i] = xidx;
+ }
+
+ return i;
+}
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index e37ea3478..5b215b73f 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -13,7 +13,8 @@ sources = files('cn10k_worker.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
'cnxk_eventdev.c',
- 'cnxk_eventdev_selftest.c'
+ 'cnxk_eventdev_selftest.c',
+ 'cnxk_eventdev_stats.c',
)
deps += ['bus_pci', 'common_cnxk']
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* Re: [dpdk-dev] [PATCH v5 21/35] event/cnxk: add event port and queue xstats
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 21/35] event/cnxk: add event port and queue xstats pbhagavatula
@ 2021-05-04 9:51 ` Kinsella, Ray
2021-05-04 10:08 ` Jerin Jacob
0 siblings, 1 reply; 185+ messages in thread
From: Kinsella, Ray @ 2021-05-04 9:51 UTC (permalink / raw)
To: pbhagavatula, jerinj, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Neil Horman, Shijith Thotton
Cc: dev
On 04/05/2021 01:27, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add support for retrieving statistics from SSO HWS and HWGRP.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> drivers/common/cnxk/roc_sso.c | 63 +++++
> drivers/common/cnxk/roc_sso.h | 19 ++
> drivers/common/cnxk/version.map | 2 +
> drivers/event/cnxk/cnxk_eventdev.h | 15 ++
> drivers/event/cnxk/cnxk_eventdev_stats.c | 289 +++++++++++++++++++++++
> drivers/event/cnxk/meson.build | 3 +-
> 6 files changed, 390 insertions(+), 1 deletion(-)
> create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
>
> diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
> index 80d032039..1ccf2626b 100644
> --- a/drivers/common/cnxk/roc_sso.c
> +++ b/drivers/common/cnxk/roc_sso.c
> @@ -279,6 +279,69 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
> return nb_hwgrp;
> }
>
> +int
> +roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
> + struct roc_sso_hws_stats *stats)
> +{
> + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
> + struct sso_hws_stats *req_rsp;
> + int rc;
> +
> + req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats(
> + dev->mbox);
> + if (req_rsp == NULL) {
> + rc = mbox_process(dev->mbox);
> + if (rc < 0)
> + return rc;
> + req_rsp = (struct sso_hws_stats *)
> + mbox_alloc_msg_sso_hws_get_stats(dev->mbox);
> + if (req_rsp == NULL)
> + return -ENOSPC;
> + }
> + req_rsp->hws = hws;
> + rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
> + if (rc)
> + return rc;
> +
> + stats->arbitration = req_rsp->arbitration;
> + return 0;
> +}
> +
> +int
> +roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
> + struct roc_sso_hwgrp_stats *stats)
> +{
> + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
> + struct sso_grp_stats *req_rsp;
> + int rc;
> +
> + req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats(
> + dev->mbox);
> + if (req_rsp == NULL) {
> + rc = mbox_process(dev->mbox);
> + if (rc < 0)
> + return rc;
> + req_rsp = (struct sso_grp_stats *)
> + mbox_alloc_msg_sso_grp_get_stats(dev->mbox);
> + if (req_rsp == NULL)
> + return -ENOSPC;
> + }
> + req_rsp->grp = hwgrp;
> + rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
> + if (rc)
> + return rc;
> +
> + stats->aw_status = req_rsp->aw_status;
> + stats->dq_pc = req_rsp->dq_pc;
> + stats->ds_pc = req_rsp->ds_pc;
> + stats->ext_pc = req_rsp->ext_pc;
> + stats->page_cnt = req_rsp->page_cnt;
> + stats->ts_pc = req_rsp->ts_pc;
> + stats->wa_pc = req_rsp->wa_pc;
> + stats->ws_pc = req_rsp->ws_pc;
> + return 0;
> +}
> +
> int
> roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso, uint8_t hws,
> uint16_t hwgrp)
> diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
> index f85799ba8..a6030e7d8 100644
> --- a/drivers/common/cnxk/roc_sso.h
> +++ b/drivers/common/cnxk/roc_sso.h
> @@ -12,6 +12,21 @@ struct roc_sso_hwgrp_qos {
> uint8_t taq_prcnt;
> };
>
> +struct roc_sso_hws_stats {
> + uint64_t arbitration;
> +};
> +
> +struct roc_sso_hwgrp_stats {
> + uint64_t ws_pc;
> + uint64_t ext_pc;
> + uint64_t wa_pc;
> + uint64_t ts_pc;
> + uint64_t ds_pc;
> + uint64_t dq_pc;
> + uint64_t aw_status;
> + uint64_t page_cnt;
> +};
> +
> struct roc_sso {
> struct plt_pci_device *pci_dev;
> /* Public data. */
> @@ -61,5 +76,9 @@ uintptr_t __roc_api roc_sso_hwgrp_base_get(struct roc_sso *roc_sso,
> /* Debug */
> void __roc_api roc_sso_dump(struct roc_sso *roc_sso, uint8_t nb_hws,
> uint16_t hwgrp, FILE *f);
> +int __roc_api roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
> + struct roc_sso_hwgrp_stats *stats);
Missing rte_internal?
> +int __roc_api roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
> + struct roc_sso_hws_stats *stats);
Missing rte_internal?
>
> #endif /* _ROC_SSOW_H_ */
> diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
> index 5f2264f23..8e67c83a6 100644
> --- a/drivers/common/cnxk/version.map
> +++ b/drivers/common/cnxk/version.map
> @@ -183,8 +183,10 @@ INTERNAL {
> roc_sso_hwgrp_qos_config;
> roc_sso_hwgrp_release_xaq;
> roc_sso_hwgrp_set_priority;
> + roc_sso_hwgrp_stats_get;
> roc_sso_hws_base_get;
> roc_sso_hws_link;
> + roc_sso_hws_stats_get;
> roc_sso_hws_unlink;
> roc_sso_ns_to_gw;
> roc_sso_rsrc_fini;
> diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
> index 9af04bc3d..abe36f21f 100644
> --- a/drivers/event/cnxk/cnxk_eventdev.h
> +++ b/drivers/event/cnxk/cnxk_eventdev.h
> @@ -214,6 +214,21 @@ int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
> int cnxk_sso_selftest(const char *dev_name);
> void cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f);
>
> +/* Stats API. */
> +int cnxk_sso_xstats_get_names(const struct rte_eventdev *event_dev,
> + enum rte_event_dev_xstats_mode mode,
> + uint8_t queue_port_id,
> + struct rte_event_dev_xstats_name *xstats_names,
> + unsigned int *ids, unsigned int size);
> +int cnxk_sso_xstats_get(const struct rte_eventdev *event_dev,
> + enum rte_event_dev_xstats_mode mode,
> + uint8_t queue_port_id, const unsigned int ids[],
> + uint64_t values[], unsigned int n);
> +int cnxk_sso_xstats_reset(struct rte_eventdev *event_dev,
> + enum rte_event_dev_xstats_mode mode,
> + int16_t queue_port_id, const uint32_t ids[],
> + uint32_t n);
> +
> /* CN9K */
> void cn9k_sso_set_rsrc(void *arg);
>
> diff --git a/drivers/event/cnxk/cnxk_eventdev_stats.c b/drivers/event/cnxk/cnxk_eventdev_stats.c
> new file mode 100644
> index 000000000..a3b548f46
> --- /dev/null
> +++ b/drivers/event/cnxk/cnxk_eventdev_stats.c
> @@ -0,0 +1,289 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2021 Marvell.
> + */
> +
> +#include "cnxk_eventdev.h"
> +
> +struct cnxk_sso_xstats_name {
> + const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
> + const size_t offset;
> + const uint64_t mask;
> + const uint8_t shift;
> + uint64_t reset_snap[CNXK_SSO_MAX_HWGRP];
> +};
> +
> +static struct cnxk_sso_xstats_name sso_hws_xstats[] = {
> + {
> + "last_grp_serviced",
> + offsetof(struct roc_sso_hws_stats, arbitration),
> + 0x3FF,
> + 0,
> + {0},
> + },
> + {
> + "affinity_arbitration_credits",
> + offsetof(struct roc_sso_hws_stats, arbitration),
> + 0xF,
> + 16,
> + {0},
> + },
> +};
> +
> +static struct cnxk_sso_xstats_name sso_hwgrp_xstats[] = {
> + {
> + "wrk_sched",
> + offsetof(struct roc_sso_hwgrp_stats, ws_pc),
> + ~0x0,
> + 0,
> + {0},
> + },
> + {
> + "xaq_dram",
> + offsetof(struct roc_sso_hwgrp_stats, ext_pc),
> + ~0x0,
> + 0,
> + {0},
> + },
> + {
> + "add_wrk",
> + offsetof(struct roc_sso_hwgrp_stats, wa_pc),
> + ~0x0,
> + 0,
> + {0},
> + },
> + {
> + "tag_switch_req",
> + offsetof(struct roc_sso_hwgrp_stats, ts_pc),
> + ~0x0,
> + 0,
> + {0},
> + },
> + {
> + "desched_req",
> + offsetof(struct roc_sso_hwgrp_stats, ds_pc),
> + ~0x0,
> + 0,
> + {0},
> + },
> + {
> + "desched_wrk",
> + offsetof(struct roc_sso_hwgrp_stats, dq_pc),
> + ~0x0,
> + 0,
> + {0},
> + },
> + {
> + "xaq_cached",
> + offsetof(struct roc_sso_hwgrp_stats, aw_status),
> + 0x3,
> + 0,
> + {0},
> + },
> + {
> + "work_inflight",
> + offsetof(struct roc_sso_hwgrp_stats, aw_status),
> + 0x3F,
> + 16,
> + {0},
> + },
> + {
> + "inuse_pages",
> + offsetof(struct roc_sso_hwgrp_stats, page_cnt),
> + 0xFFFFFFFF,
> + 0,
> + {0},
> + },
> +};
> +
> +#define CNXK_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats)
> +#define CNXK_SSO_NUM_GRP_XSTATS RTE_DIM(sso_hwgrp_xstats)
> +
> +#define CNXK_SSO_NUM_XSTATS (CNXK_SSO_NUM_HWS_XSTATS + CNXK_SSO_NUM_GRP_XSTATS)
> +
> +int
> +cnxk_sso_xstats_get(const struct rte_eventdev *event_dev,
> + enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
> + const unsigned int ids[], uint64_t values[], unsigned int n)
> +{
> + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
> + struct roc_sso_hwgrp_stats hwgrp_stats;
> + struct cnxk_sso_xstats_name *xstats;
> + struct cnxk_sso_xstats_name *xstat;
> + struct roc_sso_hws_stats hws_stats;
> + uint32_t xstats_mode_count = 0;
> + uint32_t start_offset = 0;
> + unsigned int i;
> + uint64_t value;
> + void *rsp;
> + int rc;
> +
> + switch (mode) {
> + case RTE_EVENT_DEV_XSTATS_DEVICE:
> + return 0;
> + case RTE_EVENT_DEV_XSTATS_PORT:
> + if (queue_port_id >= (signed int)dev->nb_event_ports)
> + goto invalid_value;
> +
> + xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
> + xstats = sso_hws_xstats;
> +
> + rc = roc_sso_hws_stats_get(&dev->sso, queue_port_id,
> + &hws_stats);
> + if (rc < 0)
> + goto invalid_value;
> + rsp = &hws_stats;
> + break;
> + case RTE_EVENT_DEV_XSTATS_QUEUE:
> + if (queue_port_id >= (signed int)dev->nb_event_queues)
> + goto invalid_value;
> +
> + xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
> + start_offset = CNXK_SSO_NUM_HWS_XSTATS;
> + xstats = sso_hwgrp_xstats;
> +
> + rc = roc_sso_hwgrp_stats_get(&dev->sso, queue_port_id,
> + &hwgrp_stats);
> + if (rc < 0)
> + goto invalid_value;
> + rsp = &hwgrp_stats;
> +
> + break;
> + default:
> + plt_err("Invalid mode received");
> + goto invalid_value;
> + };
> +
> + for (i = 0; i < n && i < xstats_mode_count; i++) {
> + xstat = &xstats[ids[i] - start_offset];
> + value = *(uint64_t *)((char *)rsp + xstat->offset);
> + value = (value >> xstat->shift) & xstat->mask;
> +
> + values[i] = value;
> + values[i] -= xstat->reset_snap[queue_port_id];
> + }
> +
> + return i;
> +invalid_value:
> + return -EINVAL;
> +}
> +
> +int
> +cnxk_sso_xstats_reset(struct rte_eventdev *event_dev,
> + enum rte_event_dev_xstats_mode mode,
> + int16_t queue_port_id, const uint32_t ids[], uint32_t n)
> +{
> + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
> + struct roc_sso_hwgrp_stats hwgrp_stats;
> + struct cnxk_sso_xstats_name *xstats;
> + struct cnxk_sso_xstats_name *xstat;
> + struct roc_sso_hws_stats hws_stats;
> + uint32_t xstats_mode_count = 0;
> + uint32_t start_offset = 0;
> + unsigned int i;
> + uint64_t value;
> + void *rsp;
> + int rc;
> +
> + switch (mode) {
> + case RTE_EVENT_DEV_XSTATS_DEVICE:
> + return 0;
> + case RTE_EVENT_DEV_XSTATS_PORT:
> + if (queue_port_id >= (signed int)dev->nb_event_ports)
> + goto invalid_value;
> +
> + xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
> + xstats = sso_hws_xstats;
> + rc = roc_sso_hws_stats_get(&dev->sso, queue_port_id,
> + &hws_stats);
> + if (rc < 0)
> + goto invalid_value;
> + rsp = &hws_stats;
> + break;
> + case RTE_EVENT_DEV_XSTATS_QUEUE:
> + if (queue_port_id >= (signed int)dev->nb_event_queues)
> + goto invalid_value;
> +
> + xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
> + start_offset = CNXK_SSO_NUM_HWS_XSTATS;
> + xstats = sso_hwgrp_xstats;
> +
> + rc = roc_sso_hwgrp_stats_get(&dev->sso, queue_port_id,
> + &hwgrp_stats);
> + if (rc < 0)
> + goto invalid_value;
> + rsp = &hwgrp_stats;
> + break;
> + default:
> + plt_err("Invalid mode received");
> + goto invalid_value;
> + };
> +
> + for (i = 0; i < n && i < xstats_mode_count; i++) {
> + xstat = &xstats[ids[i] - start_offset];
> + value = *(uint64_t *)((char *)rsp + xstat->offset);
> + value = (value >> xstat->shift) & xstat->mask;
> +
> + xstat->reset_snap[queue_port_id] = value;
> + }
> + return i;
> +invalid_value:
> + return -EINVAL;
> +}
> +
> +int
> +cnxk_sso_xstats_get_names(const struct rte_eventdev *event_dev,
> + enum rte_event_dev_xstats_mode mode,
> + uint8_t queue_port_id,
> + struct rte_event_dev_xstats_name *xstats_names,
> + unsigned int *ids, unsigned int size)
> +{
> + struct rte_event_dev_xstats_name xstats_names_copy[CNXK_SSO_NUM_XSTATS];
> + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
> + uint32_t xstats_mode_count = 0;
> + uint32_t start_offset = 0;
> + unsigned int xidx = 0;
> + unsigned int i;
> +
> + for (i = 0; i < CNXK_SSO_NUM_HWS_XSTATS; i++) {
> + snprintf(xstats_names_copy[i].name,
> + sizeof(xstats_names_copy[i].name), "%s",
> + sso_hws_xstats[i].name);
> + }
> +
> + for (; i < CNXK_SSO_NUM_XSTATS; i++) {
> + snprintf(xstats_names_copy[i].name,
> + sizeof(xstats_names_copy[i].name), "%s",
> + sso_hwgrp_xstats[i - CNXK_SSO_NUM_HWS_XSTATS].name);
> + }
> +
> + switch (mode) {
> + case RTE_EVENT_DEV_XSTATS_DEVICE:
> + break;
> + case RTE_EVENT_DEV_XSTATS_PORT:
> + if (queue_port_id >= (signed int)dev->nb_event_ports)
> + break;
> + xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
> + break;
> + case RTE_EVENT_DEV_XSTATS_QUEUE:
> + if (queue_port_id >= (signed int)dev->nb_event_queues)
> + break;
> + xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
> + start_offset = CNXK_SSO_NUM_HWS_XSTATS;
> + break;
> + default:
> + plt_err("Invalid mode received");
> + return -EINVAL;
> + };
> +
> + if (xstats_mode_count > size || !ids || !xstats_names)
> + return xstats_mode_count;
> +
> + for (i = 0; i < xstats_mode_count; i++) {
> + xidx = i + start_offset;
> + strncpy(xstats_names[i].name, xstats_names_copy[xidx].name,
> + sizeof(xstats_names[i].name));
> + ids[i] = xidx;
> + }
> +
> + return i;
> +}
> diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
> index e37ea3478..5b215b73f 100644
> --- a/drivers/event/cnxk/meson.build
> +++ b/drivers/event/cnxk/meson.build
> @@ -13,7 +13,8 @@ sources = files('cn10k_worker.c',
> 'cn9k_worker.c',
> 'cn9k_eventdev.c',
> 'cnxk_eventdev.c',
> - 'cnxk_eventdev_selftest.c'
> + 'cnxk_eventdev_selftest.c',
> + 'cnxk_eventdev_stats.c',
> )
>
> deps += ['bus_pci', 'common_cnxk']
>
* Re: [dpdk-dev] [PATCH v5 21/35] event/cnxk: add event port and queue xstats
2021-05-04 9:51 ` Kinsella, Ray
@ 2021-05-04 10:08 ` Jerin Jacob
0 siblings, 0 replies; 185+ messages in thread
From: Jerin Jacob @ 2021-05-04 10:08 UTC (permalink / raw)
To: Kinsella, Ray
Cc: Pavan Nikhilesh, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Neil Horman, Shijith Thotton,
dpdk-dev
On Tue, May 4, 2021 at 3:21 PM Kinsella, Ray <mdr@ashroe.eu> wrote:
>
>
>
> On 04/05/2021 01:27, pbhagavatula@marvell.com wrote:
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > Add support for retrieving statistics from SSO HWS and HWGRP.
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > ---
> > drivers/common/cnxk/roc_sso.c | 63 +++++
> > drivers/common/cnxk/roc_sso.h | 19 ++
> > drivers/common/cnxk/version.map | 2 +
> > drivers/event/cnxk/cnxk_eventdev.h | 15 ++
> > drivers/event/cnxk/cnxk_eventdev_stats.c | 289 +++++++++++++++++++++++
> > drivers/event/cnxk/meson.build | 3 +-
> > 6 files changed, 390 insertions(+), 1 deletion(-)
> > create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
> >
> > diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
> > index 80d032039..1ccf2626b 100644
> > --- a/drivers/common/cnxk/roc_sso.c
> > +++ b/drivers/common/cnxk/roc_sso.c
> > @@ -279,6 +279,69 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
> > return nb_hwgrp;
> > }
> >
> > +int
> > +roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
> > + struct roc_sso_hws_stats *stats)
> > +{
> > + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
> > + struct sso_hws_stats *req_rsp;
> > + int rc;
> > +
> > + req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats(
> > + dev->mbox);
> > + if (req_rsp == NULL) {
> > + rc = mbox_process(dev->mbox);
> > + if (rc < 0)
> > + return rc;
> > + req_rsp = (struct sso_hws_stats *)
> > + mbox_alloc_msg_sso_hws_get_stats(dev->mbox);
> > + if (req_rsp == NULL)
> > + return -ENOSPC;
> > + }
> > + req_rsp->hws = hws;
> > + rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
> > + if (rc)
> > + return rc;
> > +
> > + stats->arbitration = req_rsp->arbitration;
> > + return 0;
> > +}
> > +
> > +int
> > +roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
> > + struct roc_sso_hwgrp_stats *stats)
> > +{
> > + struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
> > + struct sso_grp_stats *req_rsp;
> > + int rc;
> > +
> > + req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats(
> > + dev->mbox);
> > + if (req_rsp == NULL) {
> > + rc = mbox_process(dev->mbox);
> > + if (rc < 0)
> > + return rc;
> > + req_rsp = (struct sso_grp_stats *)
> > + mbox_alloc_msg_sso_grp_get_stats(dev->mbox);
> > + if (req_rsp == NULL)
> > + return -ENOSPC;
> > + }
> > + req_rsp->grp = hwgrp;
> > + rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
> > + if (rc)
> > + return rc;
> > +
> > + stats->aw_status = req_rsp->aw_status;
> > + stats->dq_pc = req_rsp->dq_pc;
> > + stats->ds_pc = req_rsp->ds_pc;
> > + stats->ext_pc = req_rsp->ext_pc;
> > + stats->page_cnt = req_rsp->page_cnt;
> > + stats->ts_pc = req_rsp->ts_pc;
> > + stats->wa_pc = req_rsp->wa_pc;
> > + stats->ws_pc = req_rsp->ws_pc;
> > + return 0;
> > +}
> > +
> > int
> > roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso, uint8_t hws,
> > uint16_t hwgrp)
> > diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
> > index f85799ba8..a6030e7d8 100644
> > --- a/drivers/common/cnxk/roc_sso.h
> > +++ b/drivers/common/cnxk/roc_sso.h
> > @@ -12,6 +12,21 @@ struct roc_sso_hwgrp_qos {
> > uint8_t taq_prcnt;
> > };
> >
> > +struct roc_sso_hws_stats {
> > + uint64_t arbitration;
> > +};
> > +
> > +struct roc_sso_hwgrp_stats {
> > + uint64_t ws_pc;
> > + uint64_t ext_pc;
> > + uint64_t wa_pc;
> > + uint64_t ts_pc;
> > + uint64_t ds_pc;
> > + uint64_t dq_pc;
> > + uint64_t aw_status;
> > + uint64_t page_cnt;
> > +};
> > +
> > struct roc_sso {
> > struct plt_pci_device *pci_dev;
> > /* Public data. */
> > @@ -61,5 +76,9 @@ uintptr_t __roc_api roc_sso_hwgrp_base_get(struct roc_sso *roc_sso,
> > /* Debug */
> > void __roc_api roc_sso_dump(struct roc_sso *roc_sso, uint8_t nb_hws,
> > uint16_t hwgrp, FILE *f);
> > +int __roc_api roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
> > + struct roc_sso_hwgrp_stats *stats);
> Missing rte_internal?
> > +int __roc_api roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
> > + struct roc_sso_hws_stats *stats);
> Missing rte_internal?
In order to avoid changes in the common code (which is used by other
environments), drivers/common/cnxk/roc_platform.h has
#define __roc_api __rte_internal
to meet the above use case.
> >
> > #endif /* _ROC_SSOW_H_ */
> > diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
> > index 5f2264f23..8e67c83a6 100644
> > --- a/drivers/common/cnxk/version.map
> > +++ b/drivers/common/cnxk/version.map
> > @@ -183,8 +183,10 @@ INTERNAL {
> > roc_sso_hwgrp_qos_config;
> > roc_sso_hwgrp_release_xaq;
> > roc_sso_hwgrp_set_priority;
> > + roc_sso_hwgrp_stats_get;
> > roc_sso_hws_base_get;
> > roc_sso_hws_link;
> > + roc_sso_hws_stats_get;
> > roc_sso_hws_unlink;
> > roc_sso_ns_to_gw;
> > roc_sso_rsrc_fini;
> > diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
> > index 9af04bc3d..abe36f21f 100644
> > --- a/drivers/event/cnxk/cnxk_eventdev.h
> > +++ b/drivers/event/cnxk/cnxk_eventdev.h
> > @@ -214,6 +214,21 @@ int cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn);
> > int cnxk_sso_selftest(const char *dev_name);
> > void cnxk_sso_dump(struct rte_eventdev *event_dev, FILE *f);
> >
> > +/* Stats API. */
> > +int cnxk_sso_xstats_get_names(const struct rte_eventdev *event_dev,
> > + enum rte_event_dev_xstats_mode mode,
> > + uint8_t queue_port_id,
> > + struct rte_event_dev_xstats_name *xstats_names,
> > + unsigned int *ids, unsigned int size);
> > +int cnxk_sso_xstats_get(const struct rte_eventdev *event_dev,
> > + enum rte_event_dev_xstats_mode mode,
> > + uint8_t queue_port_id, const unsigned int ids[],
> > + uint64_t values[], unsigned int n);
> > +int cnxk_sso_xstats_reset(struct rte_eventdev *event_dev,
> > + enum rte_event_dev_xstats_mode mode,
> > + int16_t queue_port_id, const uint32_t ids[],
> > + uint32_t n);
> > +
> > /* CN9K */
> > void cn9k_sso_set_rsrc(void *arg);
> >
> > diff --git a/drivers/event/cnxk/cnxk_eventdev_stats.c b/drivers/event/cnxk/cnxk_eventdev_stats.c
> > new file mode 100644
> > + "inuse_pages",
> > + offsetof(struct roc_sso_hwgrp_stats, page_cnt),
> > + 0xFFFFFFFF,
> > + 0,
> > + {0},
> > + },
> > +};
> > +
> > +#define CNXK_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats)
> > +#define CNXK_SSO_NUM_GRP_XSTATS RTE_DIM(sso_hwgrp_xstats)
> > +
> > +#define CNXK_SSO_NUM_XSTATS (CNXK_SSO_NUM_HWS_XSTATS + CNXK_SSO_NUM_GRP_XSTATS)
> > +
> > +int
> > +cnxk_sso_xstats_get(const struct rte_eventdev *event_dev,
> > + enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
> > + const unsigned int ids[], uint64_t values[], unsigned int n)
> > +{
> > + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
> > + struct roc_sso_hwgrp_stats hwgrp_stats;
> > + struct cnxk_sso_xstats_name *xstats;
> > + struct cnxk_sso_xstats_name *xstat;
> > + struct roc_sso_hws_stats hws_stats;
> > + uint32_t xstats_mode_count = 0;
> > + uint32_t start_offset = 0;
> > + unsigned int i;
> > + uint64_t value;
> > + void *rsp;
> > + int rc;
> > +
> > + switch (mode) {
> > + case RTE_EVENT_DEV_XSTATS_DEVICE:
> > + return 0;
> > + case RTE_EVENT_DEV_XSTATS_PORT:
> > + if (queue_port_id >= (signed int)dev->nb_event_ports)
> > + goto invalid_value;
> > +
> > + xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
> > + xstats = sso_hws_xstats;
> > +
> > + rc = roc_sso_hws_stats_get(&dev->sso, queue_port_id,
> > + &hws_stats);
> > + if (rc < 0)
> > + goto invalid_value;
> > + rsp = &hws_stats;
> > + break;
> > + case RTE_EVENT_DEV_XSTATS_QUEUE:
> > + if (queue_port_id >= (signed int)dev->nb_event_queues)
> > + goto invalid_value;
> > +
> > + xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
> > + start_offset = CNXK_SSO_NUM_HWS_XSTATS;
> > + xstats = sso_hwgrp_xstats;
> > +
> > + rc = roc_sso_hwgrp_stats_get(&dev->sso, queue_port_id,
> > + &hwgrp_stats);
> > + if (rc < 0)
> > + goto invalid_value;
> > + rsp = &hwgrp_stats;
> > +
> > + break;
> > + default:
> > + plt_err("Invalid mode received");
> > + goto invalid_value;
> > + };
> > +
> > + for (i = 0; i < n && i < xstats_mode_count; i++) {
> > + xstat = &xstats[ids[i] - start_offset];
> > + value = *(uint64_t *)((char *)rsp + xstat->offset);
> > + value = (value >> xstat->shift) & xstat->mask;
> > +
> > + values[i] = value;
> > + values[i] -= xstat->reset_snap[queue_port_id];
> > + }
> > +
> > + return i;
> > +invalid_value:
> > + return -EINVAL;
> > +}
> > +
> > +int
> > +cnxk_sso_xstats_reset(struct rte_eventdev *event_dev,
> > + enum rte_event_dev_xstats_mode mode,
> > + int16_t queue_port_id, const uint32_t ids[], uint32_t n)
> > +{
> > + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
> > + struct roc_sso_hwgrp_stats hwgrp_stats;
> > + struct cnxk_sso_xstats_name *xstats;
> > + struct cnxk_sso_xstats_name *xstat;
> > + struct roc_sso_hws_stats hws_stats;
> > + uint32_t xstats_mode_count = 0;
> > + uint32_t start_offset = 0;
> > + unsigned int i;
> > + uint64_t value;
> > + void *rsp;
> > + int rc;
> > +
> > + switch (mode) {
> > + case RTE_EVENT_DEV_XSTATS_DEVICE:
> > + return 0;
> > + case RTE_EVENT_DEV_XSTATS_PORT:
> > + if (queue_port_id >= (signed int)dev->nb_event_ports)
> > + goto invalid_value;
> > +
> > + xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
> > + xstats = sso_hws_xstats;
> > + rc = roc_sso_hws_stats_get(&dev->sso, queue_port_id,
> > + &hws_stats);
> > + if (rc < 0)
> > + goto invalid_value;
> > + rsp = &hws_stats;
> > + break;
> > + case RTE_EVENT_DEV_XSTATS_QUEUE:
> > + if (queue_port_id >= (signed int)dev->nb_event_queues)
> > + goto invalid_value;
> > +
> > + xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
> > + start_offset = CNXK_SSO_NUM_HWS_XSTATS;
> > + xstats = sso_hwgrp_xstats;
> > +
> > + rc = roc_sso_hwgrp_stats_get(&dev->sso, queue_port_id,
> > + &hwgrp_stats);
> > + if (rc < 0)
> > + goto invalid_value;
> > + rsp = &hwgrp_stats;
> > + break;
> > + default:
> > + plt_err("Invalid mode received");
> > + goto invalid_value;
> > + };
> > +
> > + for (i = 0; i < n && i < xstats_mode_count; i++) {
> > + xstat = &xstats[ids[i] - start_offset];
> > + value = *(uint64_t *)((char *)rsp + xstat->offset);
> > + value = (value >> xstat->shift) & xstat->mask;
> > +
> > + xstat->reset_snap[queue_port_id] = value;
> > + }
> > + return i;
> > +invalid_value:
> > + return -EINVAL;
> > +}
> > +
> > +int
> > +cnxk_sso_xstats_get_names(const struct rte_eventdev *event_dev,
> > + enum rte_event_dev_xstats_mode mode,
> > + uint8_t queue_port_id,
> > + struct rte_event_dev_xstats_name *xstats_names,
> > + unsigned int *ids, unsigned int size)
> > +{
> > + struct rte_event_dev_xstats_name xstats_names_copy[CNXK_SSO_NUM_XSTATS];
> > + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
> > + uint32_t xstats_mode_count = 0;
> > + uint32_t start_offset = 0;
> > + unsigned int xidx = 0;
> > + unsigned int i;
> > +
> > + for (i = 0; i < CNXK_SSO_NUM_HWS_XSTATS; i++) {
> > + snprintf(xstats_names_copy[i].name,
> > + sizeof(xstats_names_copy[i].name), "%s",
> > + sso_hws_xstats[i].name);
> > + }
> > +
> > + for (; i < CNXK_SSO_NUM_XSTATS; i++) {
> > + snprintf(xstats_names_copy[i].name,
> > + sizeof(xstats_names_copy[i].name), "%s",
> > + sso_hwgrp_xstats[i - CNXK_SSO_NUM_HWS_XSTATS].name);
> > + }
> > +
> > + switch (mode) {
> > + case RTE_EVENT_DEV_XSTATS_DEVICE:
> > + break;
> > + case RTE_EVENT_DEV_XSTATS_PORT:
> > + if (queue_port_id >= (signed int)dev->nb_event_ports)
> > + break;
> > + xstats_mode_count = CNXK_SSO_NUM_HWS_XSTATS;
> > + break;
> > + case RTE_EVENT_DEV_XSTATS_QUEUE:
> > + if (queue_port_id >= (signed int)dev->nb_event_queues)
> > + break;
> > + xstats_mode_count = CNXK_SSO_NUM_GRP_XSTATS;
> > + start_offset = CNXK_SSO_NUM_HWS_XSTATS;
> > + break;
> > + default:
> > + plt_err("Invalid mode received");
> > + return -EINVAL;
> > + };
> > +
> > + if (xstats_mode_count > size || !ids || !xstats_names)
> > + return xstats_mode_count;
> > +
> > + for (i = 0; i < xstats_mode_count; i++) {
> > + xidx = i + start_offset;
> > + strncpy(xstats_names[i].name, xstats_names_copy[xidx].name,
> > + sizeof(xstats_names[i].name));
> > + ids[i] = xidx;
> > + }
> > +
> > + return i;
> > +}
> > diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
> > index e37ea3478..5b215b73f 100644
> > --- a/drivers/event/cnxk/meson.build
> > +++ b/drivers/event/cnxk/meson.build
> > @@ -13,7 +13,8 @@ sources = files('cn10k_worker.c',
> > 'cn9k_worker.c',
> > 'cn9k_eventdev.c',
> > 'cnxk_eventdev.c',
> > - 'cnxk_eventdev_selftest.c'
> > + 'cnxk_eventdev_selftest.c',
> > + 'cnxk_eventdev_stats.c',
> > )
> >
> > deps += ['bus_pci', 'common_cnxk']
> >
* [dpdk-dev] [PATCH v5 22/35] event/cnxk: support event timer
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (20 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 21/35] event/cnxk: add event port and queue xstats pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 23/35] event/cnxk: add timer adapter capabilities pbhagavatula
` (13 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter (a.k.a. TIM) initialization on SSO probe.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 6 ++++
drivers/event/cnxk/cnxk_eventdev.c | 3 ++
drivers/event/cnxk/cnxk_eventdev.h | 2 ++
drivers/event/cnxk/cnxk_tim_evdev.c | 47 +++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 44 +++++++++++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
6 files changed, 103 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index f48452982..e6f81f8b1 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -35,6 +35,10 @@ Features of the OCTEON cnxk SSO PMD are:
- Open system with configurable amount of outstanding events limited only by
DRAM
- HW accelerated dequeue timeout support to enable power management
+- HW-managed event timer support through TIM, with high precision and a
+ time granularity of 2.5 us on CN9K and 1 us on CN10K.
+- Up to 256 TIM rings, a.k.a. event timer adapters.
+- Up to 8 rings traversed in parallel.
Prerequisites and Compilation procedure
---------------------------------------
@@ -101,3 +105,5 @@ Debugging Options
+===+============+=======================================================+
| 1 | SSO | --log-level='pmd\.event\.cnxk,8' |
+---+------------+-------------------------------------------------------+
+ | 2 | TIM | --log-level='pmd\.event\.cnxk\.timer,8' |
+ +---+------------+-------------------------------------------------------+
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 0f084176c..85bb12e00 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -582,6 +582,8 @@ cnxk_sso_init(struct rte_eventdev *event_dev)
dev->nb_event_queues = 0;
dev->nb_event_ports = 0;
+ cnxk_tim_init(&dev->sso);
+
return 0;
error:
@@ -598,6 +600,7 @@ cnxk_sso_fini(struct rte_eventdev *event_dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ cnxk_tim_fini();
roc_sso_rsrc_fini(&dev->sso);
roc_sso_dev_fini(&dev->sso);
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index abe36f21f..1c61063c9 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -14,6 +14,8 @@
#include "roc_api.h"
+#include "cnxk_tim_evdev.h"
+
#define CNXK_SSO_XAE_CNT "xae_cnt"
#define CNXK_SSO_GGRP_QOS "qos"
#define CN9K_SSO_SINGLE_WS "single_ws"
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
new file mode 100644
index 000000000..46461b885
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+#include "cnxk_tim_evdev.h"
+
+void
+cnxk_tim_init(struct roc_sso *sso)
+{
+ const struct rte_memzone *mz;
+ struct cnxk_tim_evdev *dev;
+ int rc;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ mz = rte_memzone_reserve(RTE_STR(CNXK_TIM_EVDEV_NAME),
+ sizeof(struct cnxk_tim_evdev), 0, 0);
+ if (mz == NULL) {
+ plt_tim_dbg("Unable to allocate memory for TIM Event device");
+ return;
+ }
+ dev = mz->addr;
+
+ dev->tim.roc_sso = sso;
+ rc = roc_tim_init(&dev->tim);
+ if (rc < 0) {
+ plt_err("Failed to initialize roc tim resources");
+ rte_memzone_free(mz);
+ return;
+ }
+ dev->nb_rings = rc;
+ dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+}
+
+void
+cnxk_tim_fini(void)
+{
+ struct cnxk_tim_evdev *dev = tim_priv_get();
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ roc_tim_fini(&dev->tim);
+ rte_memzone_free(rte_memzone_lookup(RTE_STR(CNXK_TIM_EVDEV_NAME)));
+}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
new file mode 100644
index 000000000..5ddc94ed4
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_TIM_EVDEV_H__
+#define __CNXK_TIM_EVDEV_H__
+
+#include <stddef.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <eventdev_pmd_pci.h>
+#include <rte_event_timer_adapter.h>
+#include <rte_memzone.h>
+
+#include "roc_api.h"
+
+#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
+#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+
+struct cnxk_tim_evdev {
+ struct roc_tim tim;
+ struct rte_eventdev *event_dev;
+ uint16_t nb_rings;
+ uint32_t chunk_sz;
+};
+
+static inline struct cnxk_tim_evdev *
+tim_priv_get(void)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(RTE_STR(CNXK_TIM_EVDEV_NAME));
+ if (mz == NULL)
+ return NULL;
+
+ return mz->addr;
+}
+
+void cnxk_tim_init(struct roc_sso *sso);
+void cnxk_tim_fini(void);
+
+#endif /* __CNXK_TIM_EVDEV_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 5b215b73f..ce8764eda 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -15,6 +15,7 @@ sources = files('cn10k_worker.c',
'cnxk_eventdev.c',
'cnxk_eventdev_selftest.c',
'cnxk_eventdev_stats.c',
+ 'cnxk_tim_evdev.c',
)
deps += ['bus_pci', 'common_cnxk']
--
2.17.1
* [dpdk-dev] [PATCH v5 23/35] event/cnxk: add timer adapter capabilities
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (21 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 22/35] event/cnxk: support event timer pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 24/35] event/cnxk: create and free timer adapter pbhagavatula
` (12 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add function to retrieve event timer adapter capabilities.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 2 ++
drivers/event/cnxk/cn9k_eventdev.c | 2 ++
drivers/event/cnxk/cnxk_tim_evdev.c | 22 +++++++++++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 6 +++++-
4 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index a0c6d32cc..0981085e8 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -420,6 +420,8 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = {
.port_unlink = cn10k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .timer_adapter_caps_get = cnxk_tim_caps_get,
+
.dump = cnxk_sso_dump,
.dev_start = cn10k_sso_start,
.dev_stop = cn10k_sso_stop,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 48991e522..d9882ebb9 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -494,6 +494,8 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = {
.port_unlink = cn9k_sso_port_unlink,
.timeout_ticks = cnxk_sso_timeout_ticks,
+ .timer_adapter_caps_get = cnxk_tim_caps_get,
+
.dump = cnxk_sso_dump,
.dev_start = cn9k_sso_start,
.dev_stop = cn9k_sso_stop,
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 46461b885..265bee533 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -5,6 +5,26 @@
#include "cnxk_eventdev.h"
#include "cnxk_tim_evdev.h"
+int
+cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops)
+{
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+
+ RTE_SET_USED(flags);
+ RTE_SET_USED(ops);
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ /* Store evdev pointer for later use. */
+ dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
+ *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+
+ return 0;
+}
+
void
cnxk_tim_init(struct roc_sso *sso)
{
@@ -37,7 +57,7 @@ cnxk_tim_init(struct roc_sso *sso)
void
cnxk_tim_fini(void)
{
- struct cnxk_tim_evdev *dev = tim_priv_get();
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return;
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 5ddc94ed4..ece66ab25 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -27,7 +27,7 @@ struct cnxk_tim_evdev {
};
static inline struct cnxk_tim_evdev *
-tim_priv_get(void)
+cnxk_tim_priv_get(void)
{
const struct rte_memzone *mz;
@@ -38,6 +38,10 @@ tim_priv_get(void)
return mz->addr;
}
+int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops);
+
void cnxk_tim_init(struct roc_sso *sso);
void cnxk_tim_fini(void);
--
2.17.1
* [dpdk-dev] [PATCH v5 24/35] event/cnxk: create and free timer adapter
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (22 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 23/35] event/cnxk: add timer adapter capabilities pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 25/35] event/cnxk: add devargs to disable NPA pbhagavatula
` (11 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
When the application creates a timer adapter, the following steps are taken:
- Allocate a TIM LF based on the number of LFs provisioned.
- Verify the supplied config parameters.
- Allocate the memory required for:
* Buckets, based on the min and max timeout supplied.
* The chunk pool, based on the number of timers.
On free:
- Free the allocated bucket and chunk memory.
- Free the allocated TIM LF.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 174 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.h | 128 +++++++++++++++++++-
2 files changed, 300 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 265bee533..655540a72 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -5,6 +5,177 @@
#include "cnxk_eventdev.h"
#include "cnxk_tim_evdev.h"
+static struct rte_event_timer_adapter_ops cnxk_tim_ops;
+
+static int
+cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
+ struct rte_event_timer_adapter_conf *rcfg)
+{
+ unsigned int cache_sz = (tim_ring->nb_chunks / 1.5);
+ unsigned int mp_flags = 0;
+ char pool_name[25];
+ int rc;
+
+ cache_sz /= rte_lcore_count();
+ /* Create chunk pool. */
+ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
+ mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+ plt_tim_dbg("Using single producer mode");
+ tim_ring->prod_type_sp = true;
+ }
+
+ snprintf(pool_name, sizeof(pool_name), "cnxk_tim_chunk_pool%d",
+ tim_ring->ring_id);
+
+ if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
+ cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
+ cache_sz = cache_sz != 0 ? cache_sz : 2;
+ tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
+ tim_ring->chunk_pool = rte_mempool_create_empty(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz, cache_sz, 0,
+ rte_socket_id(), mp_flags);
+
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
+
+ rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+ rte_mbuf_platform_mempool_ops(), NULL);
+ if (rc < 0) {
+ plt_err("Unable to set chunkpool ops");
+ goto free;
+ }
+
+ rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+ if (rc < 0) {
+ plt_err("Unable to set populate chunkpool.");
+ goto free;
+ }
+ tim_ring->aura =
+ roc_npa_aura_handle_to_aura(tim_ring->chunk_pool->pool_id);
+ tim_ring->ena_dfb = 0;
+
+ return 0;
+
+free:
+ rte_mempool_free(tim_ring->chunk_pool);
+ return rc;
+}
+
+static int
+cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
+{
+ struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ struct cnxk_tim_ring *tim_ring;
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ if (adptr->data->id >= dev->nb_rings)
+ return -ENODEV;
+
+ tim_ring = rte_zmalloc("cnxk_tim_prv", sizeof(struct cnxk_tim_ring), 0);
+ if (tim_ring == NULL)
+ return -ENOMEM;
+
+ rc = roc_tim_lf_alloc(&dev->tim, adptr->data->id, NULL);
+ if (rc < 0) {
+ plt_err("Failed to create timer ring");
+ goto tim_ring_free;
+ }
+
+ if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(
+ rcfg->timer_tick_ns,
+ cnxk_tim_min_resolution_ns(cnxk_tim_cntfrq())),
+ cnxk_tim_cntfrq()) <
+ cnxk_tim_min_tmo_ticks(cnxk_tim_cntfrq())) {
+ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
+ rcfg->timer_tick_ns = TICK2NSEC(
+ cnxk_tim_min_tmo_ticks(cnxk_tim_cntfrq()),
+ cnxk_tim_cntfrq());
+ else {
+ rc = -ERANGE;
+ goto tim_hw_free;
+ }
+ }
+ tim_ring->ring_id = adptr->data->id;
+ tim_ring->clk_src = (int)rcfg->clk_src;
+ tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(
+ rcfg->timer_tick_ns,
+ cnxk_tim_min_resolution_ns(cnxk_tim_cntfrq()));
+ tim_ring->max_tout = rcfg->max_tmo_ns;
+ tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
+ tim_ring->nb_timers = rcfg->nb_timers;
+ tim_ring->chunk_sz = dev->chunk_sz;
+
+ tim_ring->nb_chunks = tim_ring->nb_timers;
+ tim_ring->nb_chunk_slots = CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+ /* Create buckets. */
+ tim_ring->bkt =
+ rte_zmalloc("cnxk_tim_bucket",
+ (tim_ring->nb_bkts) * sizeof(struct cnxk_tim_bkt),
+ RTE_CACHE_LINE_SIZE);
+ if (tim_ring->bkt == NULL)
+ goto tim_hw_free;
+
+ rc = cnxk_tim_chnk_pool_create(tim_ring, rcfg);
+ if (rc < 0)
+ goto tim_bkt_free;
+
+ rc = roc_tim_lf_config(
+ &dev->tim, tim_ring->ring_id,
+ cnxk_tim_convert_clk_src(tim_ring->clk_src), 0, 0,
+ tim_ring->nb_bkts, tim_ring->chunk_sz,
+ NSEC2TICK(tim_ring->tck_nsec, cnxk_tim_cntfrq()));
+ if (rc < 0) {
+ plt_err("Failed to configure timer ring");
+ goto tim_chnk_free;
+ }
+
+ tim_ring->base = roc_tim_lf_base_get(&dev->tim, tim_ring->ring_id);
+ plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
+ plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+
+ plt_tim_dbg(
+ "Total memory used %" PRIu64 "MB\n",
+ (uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
+ (tim_ring->nb_bkts * sizeof(struct cnxk_tim_bkt))) /
+ BIT_ULL(20)));
+
+ adptr->data->adapter_priv = tim_ring;
+ return rc;
+
+tim_chnk_free:
+ rte_mempool_free(tim_ring->chunk_pool);
+tim_bkt_free:
+ rte_free(tim_ring->bkt);
+tim_hw_free:
+ roc_tim_lf_free(&dev->tim, tim_ring->ring_id);
+tim_ring_free:
+ rte_free(tim_ring);
+ return rc;
+}
+
+static int
+cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ roc_tim_lf_free(&dev->tim, tim_ring->ring_id);
+ rte_free(tim_ring->bkt);
+ rte_mempool_free(tim_ring->chunk_pool);
+ rte_free(tim_ring);
+
+ return 0;
+}
+
int
cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
uint32_t *caps,
@@ -18,6 +189,9 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
if (dev == NULL)
return -ENODEV;
+ cnxk_tim_ops.init = cnxk_tim_ring_create;
+ cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index ece66ab25..2335707cd 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -12,12 +12,26 @@
#include <eventdev_pmd_pci.h>
#include <rte_event_timer_adapter.h>
+#include <rte_malloc.h>
#include <rte_memzone.h>
#include "roc_api.h"
-#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
-#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+#define NSECPERSEC 1E9
+#define USECPERSEC 1E6
+#define TICK2NSEC(__tck, __freq) (((__tck)*NSECPERSEC) / (__freq))
+
+#define CNXK_TIM_EVDEV_NAME cnxk_tim_eventdev
+#define CNXK_TIM_MAX_BUCKETS (0xFFFFF)
+#define CNXK_TIM_RING_DEF_CHUNK_SZ (4096)
+#define CNXK_TIM_CHUNK_ALIGNMENT (16)
+#define CNXK_TIM_MAX_BURST \
+ (RTE_CACHE_LINE_SIZE / CNXK_TIM_CHUNK_ALIGNMENT)
+#define CNXK_TIM_NB_CHUNK_SLOTS(sz) (((sz) / CNXK_TIM_CHUNK_ALIGNMENT) - 1)
+#define CNXK_TIM_MIN_CHUNK_SLOTS (0x1)
+#define CNXK_TIM_MAX_CHUNK_SLOTS (0x1FFE)
+
+#define CN9K_TIM_MIN_TMO_TKS (256)
struct cnxk_tim_evdev {
struct roc_tim tim;
@@ -26,6 +40,57 @@ struct cnxk_tim_evdev {
uint32_t chunk_sz;
};
+enum cnxk_tim_clk_src {
+ CNXK_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
+ CNXK_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0,
+ CNXK_TIM_CLK_SRC_GTI = RTE_EVENT_TIMER_ADAPTER_EXT_CLK1,
+ CNXK_TIM_CLK_SRC_PTP = RTE_EVENT_TIMER_ADAPTER_EXT_CLK2,
+};
+
+struct cnxk_tim_bkt {
+ uint64_t first_chunk;
+ union {
+ uint64_t w1;
+ struct {
+ uint32_t nb_entry;
+ uint8_t sbt : 1;
+ uint8_t hbt : 1;
+ uint8_t bsk : 1;
+ uint8_t rsvd : 5;
+ uint8_t lock;
+ int16_t chunk_remainder;
+ };
+ };
+ uint64_t current_chunk;
+ uint64_t pad;
+};
+
+struct cnxk_tim_ring {
+ uintptr_t base;
+ uint16_t nb_chunk_slots;
+ uint32_t nb_bkts;
+ uint64_t tck_int;
+ uint64_t tot_int;
+ struct cnxk_tim_bkt *bkt;
+ struct rte_mempool *chunk_pool;
+ uint64_t arm_cnt;
+ uint8_t prod_type_sp;
+ uint8_t ena_dfb;
+ uint16_t ring_id;
+ uint32_t aura;
+ uint64_t nb_timers;
+ uint64_t tck_nsec;
+ uint64_t max_tout;
+ uint64_t nb_chunks;
+ uint64_t chunk_sz;
+ enum cnxk_tim_clk_src clk_src;
+} __rte_cache_aligned;
+
+struct cnxk_tim_ent {
+ uint64_t w0;
+ uint64_t wqe;
+};
+
static inline struct cnxk_tim_evdev *
cnxk_tim_priv_get(void)
{
@@ -38,6 +103,65 @@ cnxk_tim_priv_get(void)
return mz->addr;
}
+static inline uint64_t
+cnxk_tim_min_tmo_ticks(uint64_t freq)
+{
+ if (roc_model_runtime_is_cn9k())
+ return CN9K_TIM_MIN_TMO_TKS;
+ else /* CN10K min tick is of 1us */
+ return freq / USECPERSEC;
+}
+
+static inline uint64_t
+cnxk_tim_min_resolution_ns(uint64_t freq)
+{
+ return NSECPERSEC / freq;
+}
+
+static inline enum roc_tim_clk_src
+cnxk_tim_convert_clk_src(enum cnxk_tim_clk_src clk_src)
+{
+ switch (clk_src) {
+ case RTE_EVENT_TIMER_ADAPTER_CPU_CLK:
+ return roc_model_runtime_is_cn9k() ? ROC_TIM_CLK_SRC_10NS :
+ ROC_TIM_CLK_SRC_GTI;
+ default:
+ return ROC_TIM_CLK_SRC_INVALID;
+ }
+}
+
+#ifdef RTE_ARCH_ARM64
+static inline uint64_t
+cnxk_tim_cntvct(void)
+{
+ uint64_t tsc;
+
+ asm volatile("mrs %0, cntvct_el0" : "=r"(tsc));
+ return tsc;
+}
+
+static inline uint64_t
+cnxk_tim_cntfrq(void)
+{
+ uint64_t freq;
+
+ asm volatile("mrs %0, cntfrq_el0" : "=r"(freq));
+ return freq;
+}
+#else
+static inline uint64_t
+cnxk_tim_cntvct(void)
+{
+ return 0;
+}
+
+static inline uint64_t
+cnxk_tim_cntfrq(void)
+{
+ return 0;
+}
+#endif
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
--
2.17.1
* [dpdk-dev] [PATCH v5 25/35] event/cnxk: add devargs to disable NPA
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (23 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 24/35] event/cnxk: create and free timer adapter pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 26/35] event/cnxk: allow adapters to resize inflights pbhagavatula
` (10 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
If the chunks are allocated from the NPA, then TIM can automatically free
them while traversing the list of chunks.
Add a devargs option to disable the NPA and use a software mempool to manage chunks instead.
Example:
--dev "0002:0e:00.0,tim_disable_npa=1"
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 10 ++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_eventdev.h | 9 +++
drivers/event/cnxk/cnxk_tim_evdev.c | 86 +++++++++++++++++++++--------
drivers/event/cnxk/cnxk_tim_evdev.h | 5 ++
6 files changed, 92 insertions(+), 24 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index e6f81f8b1..c2d6ed2fb 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -93,6 +93,16 @@ Runtime Config Options
-a 0002:0e:00.0,qos=[1-50-50-50]
+- ``TIM disable NPA``
+
+ By default, chunks are allocated from the NPA, allowing TIM to free them
+ automatically while traversing the list of chunks. The ``tim_disable_npa``
+ devargs parameter disables the NPA and uses a software mempool to manage chunks.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_disable_npa=1
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 0981085e8..a2ef1fa73 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -502,4 +502,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_cn10k, cn10k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
- CN10K_SSO_GW_MODE "=<int>");
+ CN10K_SSO_GW_MODE "=<int>"
+ CNXK_TIM_DISABLE_NPA "=1");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index d9882ebb9..3a0caa009 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -571,4 +571,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_cn9k, cn9k_pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
- CN9K_SSO_SINGLE_WS "=1");
+ CN9K_SSO_SINGLE_WS "=1"
+ CNXK_TIM_DISABLE_NPA "=1");
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 1c61063c9..77835e463 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -159,6 +159,15 @@ struct cnxk_sso_hws_cookie {
bool configured;
} __rte_cache_aligned;
+static inline int
+parse_kvargs_flag(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ *(uint8_t *)opaque = !!atoi(value);
+ return 0;
+}
+
static inline int
parse_kvargs_value(const char *key, const char *value, void *opaque)
{
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 655540a72..d93b37e4f 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -31,30 +31,43 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
cache_sz = cache_sz != 0 ? cache_sz : 2;
tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
- tim_ring->chunk_pool = rte_mempool_create_empty(
- pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz, cache_sz, 0,
- rte_socket_id(), mp_flags);
-
- if (tim_ring->chunk_pool == NULL) {
- plt_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
+ if (!tim_ring->disable_npa) {
+ tim_ring->chunk_pool = rte_mempool_create_empty(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz,
+ cache_sz, 0, rte_socket_id(), mp_flags);
+
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
- rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
- rte_mbuf_platform_mempool_ops(), NULL);
- if (rc < 0) {
- plt_err("Unable to set chunkpool ops");
- goto free;
- }
+ rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+ rte_mbuf_platform_mempool_ops(),
+ NULL);
+ if (rc < 0) {
+ plt_err("Unable to set chunkpool ops");
+ goto free;
+ }
- rc = rte_mempool_populate_default(tim_ring->chunk_pool);
- if (rc < 0) {
- plt_err("Unable to set populate chunkpool.");
- goto free;
+ rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+ if (rc < 0) {
+ plt_err("Unable to set populate chunkpool.");
+ goto free;
+ }
+ tim_ring->aura = roc_npa_aura_handle_to_aura(
+ tim_ring->chunk_pool->pool_id);
+ tim_ring->ena_dfb = 0;
+ } else {
+ tim_ring->chunk_pool = rte_mempool_create(
+ pool_name, tim_ring->nb_chunks, tim_ring->chunk_sz,
+ cache_sz, 0, NULL, NULL, NULL, NULL, rte_socket_id(),
+ mp_flags);
+ if (tim_ring->chunk_pool == NULL) {
+ plt_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
+ tim_ring->ena_dfb = 1;
}
- tim_ring->aura =
- roc_npa_aura_handle_to_aura(tim_ring->chunk_pool->pool_id);
- tim_ring->ena_dfb = 0;
return 0;
@@ -110,8 +123,17 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
tim_ring->nb_timers = rcfg->nb_timers;
tim_ring->chunk_sz = dev->chunk_sz;
+ tim_ring->disable_npa = dev->disable_npa;
+
+ if (tim_ring->disable_npa) {
+ tim_ring->nb_chunks =
+ tim_ring->nb_timers /
+ CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+ tim_ring->nb_chunks = tim_ring->nb_chunks * tim_ring->nb_bkts;
+ } else {
+ tim_ring->nb_chunks = tim_ring->nb_timers;
+ }
- tim_ring->nb_chunks = tim_ring->nb_timers;
tim_ring->nb_chunk_slots = CNXK_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
/* Create buckets. */
tim_ring->bkt =
@@ -199,6 +221,24 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
return 0;
}
+static void
+cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
+{
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ return;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, CNXK_TIM_DISABLE_NPA, &parse_kvargs_flag,
+ &dev->disable_npa);
+
+ rte_kvargs_free(kvlist);
+}
+
void
cnxk_tim_init(struct roc_sso *sso)
{
@@ -217,6 +257,8 @@ cnxk_tim_init(struct roc_sso *sso)
}
dev = mz->addr;
+ cnxk_tim_parse_devargs(sso->pci_dev->device.devargs, dev);
+
dev->tim.roc_sso = sso;
rc = roc_tim_init(&dev->tim);
if (rc < 0) {
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 2335707cd..4896ed67a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -33,11 +33,15 @@
#define CN9K_TIM_MIN_TMO_TKS (256)
+#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
uint16_t nb_rings;
uint32_t chunk_sz;
+ /* Dev args */
+ uint8_t disable_npa;
};
enum cnxk_tim_clk_src {
@@ -75,6 +79,7 @@ struct cnxk_tim_ring {
struct rte_mempool *chunk_pool;
uint64_t arm_cnt;
uint8_t prod_type_sp;
+ uint8_t disable_npa;
uint8_t ena_dfb;
uint16_t ring_id;
uint32_t aura;
--
2.17.1
* [dpdk-dev] [PATCH v5 26/35] event/cnxk: allow adapters to resize inflights
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (24 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 25/35] event/cnxk: add devargs to disable NPA pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 27/35] event/cnxk: add timer adapter info function pbhagavatula
` (9 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add internal SSO functions to allow event adapters to resize SSO buffers
that are used to hold in-flight events in DRAM.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 33 ++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 7 +++
drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 ++++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_evdev.c | 5 ++
drivers/event/cnxk/meson.build | 1 +
5 files changed, 113 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 85bb12e00..7189ee3a7 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -77,6 +77,9 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
if (dev->xae_cnt)
xaq_cnt += dev->xae_cnt / dev->sso.xae_waes;
+ else if (dev->adptr_xae_cnt)
+ xaq_cnt += (dev->adptr_xae_cnt / dev->sso.xae_waes) +
+ (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
else
xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
(CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
@@ -125,6 +128,36 @@ cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
return rc;
}
+int
+cnxk_sso_xae_reconfigure(struct rte_eventdev *event_dev)
+{
+ struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+ int rc = 0;
+
+ if (event_dev->data->dev_started)
+ event_dev->dev_ops->dev_stop(event_dev);
+
+ rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+ if (rc < 0) {
+ plt_err("Failed to release XAQ %d", rc);
+ return rc;
+ }
+
+ rte_mempool_free(dev->xaq_pool);
+ dev->xaq_pool = NULL;
+ rc = cnxk_sso_xaq_allocate(dev);
+ if (rc < 0) {
+ plt_err("Failed to alloc XAQ %d", rc);
+ return rc;
+ }
+
+ rte_mb();
+ if (event_dev->data->dev_started)
+ event_dev->dev_ops->dev_start(event_dev);
+
+ return 0;
+}
+
int
cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
cnxk_sso_init_hws_mem_t init_hws_fn,
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 77835e463..668e51d62 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -81,6 +81,10 @@ struct cnxk_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ uint64_t adptr_xae_cnt;
+ uint16_t tim_adptr_ring_cnt;
+ uint16_t *timer_adptr_rings;
+ uint64_t *timer_adptr_sz;
/* Dev args */
uint32_t xae_cnt;
uint8_t qos_queue_cnt;
@@ -190,7 +194,10 @@ cnxk_sso_hws_get_cookie(void *ws)
}
/* Configuration functions */
+int cnxk_sso_xae_reconfigure(struct rte_eventdev *event_dev);
int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
+void cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
+ uint32_t event_type);
/* Common ops API. */
int cnxk_sso_init(struct rte_eventdev *event_dev);
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
new file mode 100644
index 000000000..89a1d82c1
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_eventdev.h"
+
+void
+cnxk_sso_updt_xae_cnt(struct cnxk_sso_evdev *dev, void *data,
+ uint32_t event_type)
+{
+ int i;
+
+ switch (event_type) {
+ case RTE_EVENT_TYPE_TIMER: {
+ struct cnxk_tim_ring *timr = data;
+ uint16_t *old_ring_ptr;
+ uint64_t *old_sz_ptr;
+
+ for (i = 0; i < dev->tim_adptr_ring_cnt; i++) {
+ if (timr->ring_id != dev->timer_adptr_rings[i])
+ continue;
+ if (timr->nb_timers == dev->timer_adptr_sz[i])
+ return;
+ dev->adptr_xae_cnt -= dev->timer_adptr_sz[i];
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_sz[i] = timr->nb_timers;
+
+ return;
+ }
+
+ dev->tim_adptr_ring_cnt++;
+ old_ring_ptr = dev->timer_adptr_rings;
+ old_sz_ptr = dev->timer_adptr_sz;
+
+ dev->timer_adptr_rings = rte_realloc(
+ dev->timer_adptr_rings,
+ sizeof(uint16_t) * dev->tim_adptr_ring_cnt, 0);
+ if (dev->timer_adptr_rings == NULL) {
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_rings = old_ring_ptr;
+ dev->tim_adptr_ring_cnt--;
+ return;
+ }
+
+ dev->timer_adptr_sz = rte_realloc(
+ dev->timer_adptr_sz,
+ sizeof(uint64_t) * dev->tim_adptr_ring_cnt, 0);
+
+ if (dev->timer_adptr_sz == NULL) {
+ dev->adptr_xae_cnt += timr->nb_timers;
+ dev->timer_adptr_sz = old_sz_ptr;
+ dev->tim_adptr_ring_cnt--;
+ return;
+ }
+
+ dev->timer_adptr_rings[dev->tim_adptr_ring_cnt - 1] =
+ timr->ring_id;
+ dev->timer_adptr_sz[dev->tim_adptr_ring_cnt - 1] =
+ timr->nb_timers;
+
+ dev->adptr_xae_cnt += timr->nb_timers;
+ break;
+ }
+ default:
+ break;
+ }
+}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index d93b37e4f..1eb39a789 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -161,6 +161,11 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+ /* Update SSO xae count. */
+ cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
+ RTE_EVENT_TYPE_TIMER);
+ cnxk_sso_xae_reconfigure(dev->event_dev);
+
plt_tim_dbg(
"Total memory used %" PRIu64 "MB\n",
(uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index ce8764eda..bd52cbc68 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -12,6 +12,7 @@ sources = files('cn10k_worker.c',
'cn10k_eventdev.c',
'cn9k_worker.c',
'cn9k_eventdev.c',
+ 'cnxk_eventdev_adptr.c',
'cnxk_eventdev.c',
'cnxk_eventdev_selftest.c',
'cnxk_eventdev_stats.c',
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
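The adapter code above grows two parallel arrays (ring ids and per-ring timer counts) with rte_realloc, restoring the old pointer and rolling back the count when allocation fails. A simplified, self-contained sketch of that grow-with-rollback pattern, using plain realloc in place of rte_realloc; the struct and function names are hypothetical, and the failure-path accounting is simplified relative to the patch:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical tracker mirroring the patch's parallel-array growth:
 * ring ids and per-ring sizes grow together; on allocation failure the
 * old pointer is restored and the count rolled back. */
struct tim_tracker {
	uint16_t *rings;
	uint64_t *sizes;
	int cnt;
	uint64_t xae_cnt;
};

/* Returns 0 on success, -1 if allocation failed (state left consistent). */
static int
tracker_update(struct tim_tracker *t, uint16_t ring_id, uint64_t nb_timers)
{
	uint16_t *old_rings;
	uint64_t *old_sizes;
	int i;

	/* Known ring: adjust the aggregate count by the delta only. */
	for (i = 0; i < t->cnt; i++) {
		if (t->rings[i] != ring_id)
			continue;
		if (t->sizes[i] == nb_timers)
			return 0;
		t->xae_cnt += nb_timers - t->sizes[i];
		t->sizes[i] = nb_timers;
		return 0;
	}

	/* New ring: grow both arrays, rolling back on failure. */
	t->cnt++;
	old_rings = t->rings;
	old_sizes = t->sizes;

	t->rings = realloc(t->rings, sizeof(uint16_t) * t->cnt);
	if (t->rings == NULL) {
		t->rings = old_rings;
		t->cnt--;
		return -1;
	}
	t->sizes = realloc(t->sizes, sizeof(uint64_t) * t->cnt);
	if (t->sizes == NULL) {
		t->sizes = old_sizes;
		t->cnt--;
		return -1;
	}

	t->rings[t->cnt - 1] = ring_id;
	t->sizes[t->cnt - 1] = nb_timers;
	t->xae_cnt += nb_timers;
	return 0;
}
```

Note the second realloc failure leaves the rings array one element larger than cnt; that is harmless since cnt is the only length the code trusts, which is also how the patch gets away with it.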
* [dpdk-dev] [PATCH v5 27/35] event/cnxk: add timer adapter info function
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (25 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 26/35] event/cnxk: allow adapters to resize inflights pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 28/35] event/cnxk: add devargs for chunk size and rings pbhagavatula
` (8 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add TIM event timer adapter info get function.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 1eb39a789..2fefa56f5 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -76,6 +76,18 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
return rc;
}
+static void
+cnxk_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer_adapter_info *adptr_info)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+
+ adptr_info->max_tmo_ns = tim_ring->max_tout;
+ adptr_info->min_resolution_ns = tim_ring->tck_nsec;
+ rte_memcpy(&adptr_info->conf, &adptr->data->conf,
+ sizeof(struct rte_event_timer_adapter_conf));
+}
+
static int
cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
{
@@ -218,6 +230,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
cnxk_tim_ops.init = cnxk_tim_ring_create;
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+ cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
--
2.17.1
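The info-get callback above simply copies the ring limits and the stored adapter configuration into the caller's struct. A minimal stand-alone sketch of that shape, with hypothetical stand-in structs replacing the rte_event_timer_adapter types so it compiles outside DPDK:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for the adapter/info structures. */
struct adapter_conf { uint64_t timer_tick_ns; uint64_t nb_timers; };
struct adapter_info {
	uint64_t min_resolution_ns;
	uint64_t max_tmo_ns;
	struct adapter_conf conf;
};
struct tim_ring { uint64_t max_tout; uint64_t tck_nsec; };
struct adapter {
	struct tim_ring *priv;
	struct adapter_conf conf;
};

/* Mirrors the shape of cnxk_tim_ring_info_get(): report the ring's
 * maximum timeout and tick resolution, and copy the stored config. */
static void
ring_info_get(const struct adapter *adptr, struct adapter_info *info)
{
	const struct tim_ring *r = adptr->priv;

	info->max_tmo_ns = r->max_tout;
	info->min_resolution_ns = r->tck_nsec;
	memcpy(&info->conf, &adptr->conf, sizeof(info->conf));
}
```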
* [dpdk-dev] [PATCH v5 28/35] event/cnxk: add devargs for chunk size and rings
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (26 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 27/35] event/cnxk: add timer adapter info function pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 29/35] event/cnxk: add TIM bucket operations pbhagavatula
` (7 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add devargs to control the default chunk size and the maximum number
of timer rings to attach to a given RVU PF.
Example:
--dev "0002:1e:00.0,tim_chnk_slots=1024"
--dev "0002:1e:00.0,tim_rings_lmt=4"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 23 +++++++++++++++++++++++
drivers/event/cnxk/cn10k_eventdev.c | 4 +++-
drivers/event/cnxk/cn9k_eventdev.c | 4 +++-
drivers/event/cnxk/cnxk_tim_evdev.c | 14 +++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 4 ++++
5 files changed, 46 insertions(+), 3 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index c2d6ed2fb..a8199aac7 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -103,6 +103,29 @@ Runtime Config Options
-a 0002:0e:00.0,tim_disable_npa=1
+- ``TIM modify chunk slots``
+
+ The ``tim_chnk_slots`` devargs can be used to modify the number of chunk
+ slots. Chunks are used to store event timers; a chunk can be visualised as
+ an array whose last element points to the next chunk and whose remaining
+ elements store events. TIM traverses the list of chunks and enqueues the
+ event timers to SSO. The default value is 255 and the maximum is 4095.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_chnk_slots=1023
+
+- ``TIM limit max rings reserved``
+
+ The ``tim_rings_lmt`` devargs can be used to limit the maximum number of
+ TIM rings (i.e. event timer adapters) reserved on probe. Since TIM rings
+ are HW resources, not reserving all of them avoids starving other
+ applications.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_rings_lmt=5
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index a2ef1fa73..cadc792a7 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -503,4 +503,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
CN10K_SSO_GW_MODE "=<int>"
- CNXK_TIM_DISABLE_NPA "=1");
+ CNXK_TIM_DISABLE_NPA "=1"
+ CNXK_TIM_CHNK_SLOTS "=<int>"
+ CNXK_TIM_RINGS_LMT "=<int>");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 3a0caa009..e503f6b1c 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -572,4 +572,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn9k, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CNXK_SSO_GGRP_QOS "=<string>"
CN9K_SSO_SINGLE_WS "=1"
- CNXK_TIM_DISABLE_NPA "=1");
+ CNXK_TIM_DISABLE_NPA "=1"
+ CNXK_TIM_CHNK_SLOTS "=<int>"
+ CNXK_TIM_RINGS_LMT "=<int>");
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 2fefa56f5..e06fe2f52 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -253,6 +253,10 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
rte_kvargs_process(kvlist, CNXK_TIM_DISABLE_NPA, &parse_kvargs_flag,
&dev->disable_npa);
+ rte_kvargs_process(kvlist, CNXK_TIM_CHNK_SLOTS, &parse_kvargs_value,
+ &dev->chunk_slots);
+ rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
+ &dev->min_ring_cnt);
rte_kvargs_free(kvlist);
}
@@ -278,6 +282,7 @@ cnxk_tim_init(struct roc_sso *sso)
cnxk_tim_parse_devargs(sso->pci_dev->device.devargs, dev);
dev->tim.roc_sso = sso;
+ dev->tim.nb_lfs = dev->min_ring_cnt;
rc = roc_tim_init(&dev->tim);
if (rc < 0) {
plt_err("Failed to initialize roc tim resources");
@@ -285,7 +290,14 @@ cnxk_tim_init(struct roc_sso *sso)
return;
}
dev->nb_rings = rc;
- dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+
+ if (dev->chunk_slots && dev->chunk_slots <= CNXK_TIM_MAX_CHUNK_SLOTS &&
+ dev->chunk_slots >= CNXK_TIM_MIN_CHUNK_SLOTS) {
+ dev->chunk_sz =
+ (dev->chunk_slots + 1) * CNXK_TIM_CHUNK_ALIGNMENT;
+ } else {
+ dev->chunk_sz = CNXK_TIM_RING_DEF_CHUNK_SZ;
+ }
}
void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 4896ed67a..9496634c8 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -34,6 +34,8 @@
#define CN9K_TIM_MIN_TMO_TKS (256)
#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
+#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
+#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
struct cnxk_tim_evdev {
struct roc_tim tim;
@@ -42,6 +44,8 @@ struct cnxk_tim_evdev {
uint32_t chunk_sz;
/* Dev args */
uint8_t disable_npa;
+ uint16_t chunk_slots;
+ uint16_t min_ring_cnt;
};
enum cnxk_tim_clk_src {
--
2.17.1
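The chunk-size computation in cnxk_tim_init() honours the user-supplied slot count only inside the [min, max] window and reserves one extra slot for the next-chunk pointer. A sketch of that clamping, with illustrative constants standing in for CNXK_TIM_MIN_CHUNK_SLOTS / CNXK_TIM_MAX_CHUNK_SLOTS / CNXK_TIM_CHUNK_ALIGNMENT (the doc states a default of 255 slots and a maximum of 4095; the 16-byte alignment and the minimum are assumptions here):

```c
#include <stdint.h>

/* Illustrative stand-ins for the driver's constants. */
#define CHUNK_ALIGNMENT 16   /* bytes per chunk slot (assumed) */
#define MIN_CHUNK_SLOTS 255  /* assumed minimum */
#define MAX_CHUNK_SLOTS 4095 /* maximum per the driver doc */
#define DEF_CHUNK_SZ ((MIN_CHUNK_SLOTS + 1) * CHUNK_ALIGNMENT)

/* Mirrors the devargs handling: an out-of-range tim_chnk_slots value
 * falls back to the default; one slot is reserved for the next-chunk
 * pointer, hence the "+ 1". */
static uint32_t
chunk_sz_from_slots(uint16_t chunk_slots)
{
	if (chunk_slots && chunk_slots >= MIN_CHUNK_SLOTS &&
	    chunk_slots <= MAX_CHUNK_SLOTS)
		return (uint32_t)(chunk_slots + 1) * CHUNK_ALIGNMENT;
	return DEF_CHUNK_SZ;
}
```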
* [dpdk-dev] [PATCH v5 29/35] event/cnxk: add TIM bucket operations
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (27 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 28/35] event/cnxk: add devargs for chunk size and rings pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 30/35] event/cnxk: add timer arm routine pbhagavatula
` (6 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add TIM bucket operations used for event timer arm and cancel.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.h | 30 +++++++
drivers/event/cnxk/cnxk_tim_worker.c | 6 ++
drivers/event/cnxk/cnxk_tim_worker.h | 123 +++++++++++++++++++++++++++
drivers/event/cnxk/meson.build | 1 +
4 files changed, 160 insertions(+)
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 9496634c8..f6895417a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -37,6 +37,36 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
+#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
+ ((1ULL << (64 - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
+#define TIM_BUCKET_W1_S_LOCK (40)
+#define TIM_BUCKET_W1_M_LOCK \
+ ((1ULL << (TIM_BUCKET_W1_S_CHUNK_REMAINDER - TIM_BUCKET_W1_S_LOCK)) - 1)
+#define TIM_BUCKET_W1_S_RSVD (35)
+#define TIM_BUCKET_W1_S_BSK (34)
+#define TIM_BUCKET_W1_M_BSK \
+ ((1ULL << (TIM_BUCKET_W1_S_RSVD - TIM_BUCKET_W1_S_BSK)) - 1)
+#define TIM_BUCKET_W1_S_HBT (33)
+#define TIM_BUCKET_W1_M_HBT \
+ ((1ULL << (TIM_BUCKET_W1_S_BSK - TIM_BUCKET_W1_S_HBT)) - 1)
+#define TIM_BUCKET_W1_S_SBT (32)
+#define TIM_BUCKET_W1_M_SBT \
+ ((1ULL << (TIM_BUCKET_W1_S_HBT - TIM_BUCKET_W1_S_SBT)) - 1)
+#define TIM_BUCKET_W1_S_NUM_ENTRIES (0)
+#define TIM_BUCKET_W1_M_NUM_ENTRIES \
+ ((1ULL << (TIM_BUCKET_W1_S_SBT - TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1)
+
+#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN)
+
+#define TIM_BUCKET_CHUNK_REMAIN \
+ (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER)
+
+#define TIM_BUCKET_LOCK (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK)
+
+#define TIM_BUCKET_SEMA_WLOCK \
+ (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
new file mode 100644
index 000000000..49ee85245
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cnxk_tim_evdev.h"
+#include "cnxk_tim_worker.h"
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
new file mode 100644
index 000000000..d56e67360
--- /dev/null
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_TIM_WORKER_H__
+#define __CNXK_TIM_WORKER_H__
+
+#include "cnxk_tim_evdev.h"
+
+static inline uint8_t
+cnxk_tim_bkt_fetch_lock(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK;
+}
+
+static inline int16_t
+cnxk_tim_bkt_fetch_rem(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) &
+ TIM_BUCKET_W1_M_CHUNK_REMAINDER;
+}
+
+static inline int16_t
+cnxk_tim_bkt_get_rem(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+cnxk_tim_bkt_set_rem(struct cnxk_tim_bkt *bktp, uint16_t v)
+{
+ __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline void
+cnxk_tim_bkt_sub_rem(struct cnxk_tim_bkt *bktp, uint16_t v)
+{
+ __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline uint8_t
+cnxk_tim_bkt_get_hbt(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT;
+}
+
+static inline uint8_t
+cnxk_tim_bkt_get_bsk(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK;
+}
+
+static inline uint64_t
+cnxk_tim_bkt_clr_bsk(struct cnxk_tim_bkt *bktp)
+{
+ /* Clear everything except lock. */
+ const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK;
+
+ return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_fetch_sema_lock(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
+ __ATOMIC_ACQUIRE);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_fetch_sema(struct cnxk_tim_bkt *bktp)
+{
+ return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA, __ATOMIC_RELAXED);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_inc_lock(struct cnxk_tim_bkt *bktp)
+{
+ const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK;
+
+ return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+cnxk_tim_bkt_dec_lock(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELEASE);
+}
+
+static inline void
+cnxk_tim_bkt_dec_lock_relaxed(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELAXED);
+}
+
+static inline uint32_t
+cnxk_tim_bkt_get_nent(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_NUM_ENTRIES) &
+ TIM_BUCKET_W1_M_NUM_ENTRIES;
+}
+
+static inline void
+cnxk_tim_bkt_inc_nent(struct cnxk_tim_bkt *bktp)
+{
+ __atomic_add_fetch(&bktp->nb_entry, 1, __ATOMIC_RELAXED);
+}
+
+static inline void
+cnxk_tim_bkt_add_nent(struct cnxk_tim_bkt *bktp, uint32_t v)
+{
+ __atomic_add_fetch(&bktp->nb_entry, v, __ATOMIC_RELAXED);
+}
+
+static inline uint64_t
+cnxk_tim_bkt_clr_nent(struct cnxk_tim_bkt *bktp)
+{
+ const uint64_t v =
+ ~(TIM_BUCKET_W1_M_NUM_ENTRIES << TIM_BUCKET_W1_S_NUM_ENTRIES);
+
+ return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+#endif /* __CNXK_TIM_WORKER_H__ */
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index bd52cbc68..e665dfc72 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -16,6 +16,7 @@ sources = files('cn10k_worker.c',
'cnxk_eventdev.c',
'cnxk_eventdev_selftest.c',
'cnxk_eventdev_stats.c',
+ 'cnxk_tim_worker.c',
'cnxk_tim_evdev.c',
)
--
2.17.1
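The TIM_BUCKET_W1_* macros above carve one 64-bit bucket word into fields so that a single atomic fetch-add can take the lock, bump the entry count, and read the chunk remainder in one shot. A self-contained sketch of the shift/mask layout for three of those fields (bits [63:48] chunk remainder, [47:40] lock, [31:0] number of entries, matching the patch's constants):

```c
#include <stdint.h>

/* Field layout copied from the patch's TIM_BUCKET_W1_* constants. */
#define W1_S_CHUNK_REM 48
#define W1_M_CHUNK_REM ((1ULL << (64 - W1_S_CHUNK_REM)) - 1)
#define W1_S_LOCK      40
#define W1_M_LOCK      ((1ULL << (W1_S_CHUNK_REM - W1_S_LOCK)) - 1)
#define W1_S_SBT       32
#define W1_M_NUM       ((1ULL << W1_S_SBT) - 1)

/* Pack the three fields into one bucket word (helper for the demo;
 * the driver never builds w1 this way, it updates it atomically). */
static inline uint64_t
w1_pack(uint16_t rem, uint8_t lock, uint32_t nent)
{
	return ((uint64_t)rem << W1_S_CHUNK_REM) |
	       ((uint64_t)lock << W1_S_LOCK) | nent;
}

/* Extractors mirroring cnxk_tim_bkt_fetch_rem()/_fetch_lock()/_get_nent(). */
static inline uint16_t
w1_get_rem(uint64_t w1)
{
	return (w1 >> W1_S_CHUNK_REM) & W1_M_CHUNK_REM;
}

static inline uint8_t
w1_get_lock(uint64_t w1)
{
	return (w1 >> W1_S_LOCK) & W1_M_LOCK;
}

static inline uint32_t
w1_get_nent(uint64_t w1)
{
	return w1 & W1_M_NUM;
}
```

Because the lock sits below the chunk remainder, adding TIM_BUCKET_SEMA_WLOCK (all-ones in the remainder field plus one in the lock field) atomically decrements the remainder and increments the lock count in a single __atomic_fetch_add.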
* [dpdk-dev] [PATCH v5 30/35] event/cnxk: add timer arm routine
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (28 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 29/35] event/cnxk: add TIM bucket operations pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 31/35] event/cnxk: add timer arm timeout burst pbhagavatula
` (5 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer arm routine.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 18 ++
drivers/event/cnxk/cnxk_tim_evdev.h | 23 ++
drivers/event/cnxk/cnxk_tim_worker.c | 95 +++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 300 +++++++++++++++++++++++++++
4 files changed, 436 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index e06fe2f52..ecc952a6a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -76,6 +76,21 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
return rc;
}
+static void
+cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
+{
+ uint8_t prod_flag = !tim_ring->prod_type_sp;
+
+ /* [DFB/FB] [SP][MP]*/
+ const rte_event_timer_arm_burst_t arm_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) [_f2][_f1] = cnxk_tim_arm_burst_##_name,
+ TIM_ARM_FASTPATH_MODES
+#undef FP
+ };
+
+ cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
+}
+
static void
cnxk_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
struct rte_event_timer_adapter_info *adptr_info)
@@ -173,6 +188,9 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
plt_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE);
plt_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+ /* Set fastpath ops. */
+ cnxk_tim_set_fp_ops(tim_ring);
+
/* Update SSO xae count. */
cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
RTE_EVENT_TYPE_TIMER);
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index f6895417a..1f2aad17a 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -14,6 +14,7 @@
#include <rte_event_timer_adapter.h>
#include <rte_malloc.h>
#include <rte_memzone.h>
+#include <rte_reciprocal.h>
#include "roc_api.h"
@@ -37,6 +38,11 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define CNXK_TIM_SP 0x1
+#define CNXK_TIM_MP 0x2
+#define CNXK_TIM_ENA_FB 0x10
+#define CNXK_TIM_ENA_DFB 0x20
+
#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
((1ULL << (64 - TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
@@ -107,10 +113,14 @@ struct cnxk_tim_ring {
uintptr_t base;
uint16_t nb_chunk_slots;
uint32_t nb_bkts;
+ uint64_t last_updt_cyc;
+ uint64_t ring_start_cyc;
uint64_t tck_int;
uint64_t tot_int;
struct cnxk_tim_bkt *bkt;
struct rte_mempool *chunk_pool;
+ struct rte_reciprocal_u64 fast_div;
+ struct rte_reciprocal_u64 fast_bkt;
uint64_t arm_cnt;
uint8_t prod_type_sp;
uint8_t disable_npa;
@@ -201,6 +211,19 @@ cnxk_tim_cntfrq(void)
}
#endif
+#define TIM_ARM_FASTPATH_MODES \
+ FP(sp, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(mp, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+
+#define FP(_name, _f2, _f1, flags) \
+ uint16_t cnxk_tim_arm_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint16_t nb_timers);
+TIM_ARM_FASTPATH_MODES
+#undef FP
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 49ee85245..268f845c8 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -4,3 +4,98 @@
#include "cnxk_tim_evdev.h"
#include "cnxk_tim_worker.h"
+
+static inline int
+cnxk_tim_arm_checks(const struct cnxk_tim_ring *const tim_ring,
+ struct rte_event_timer *const tim)
+{
+ if (unlikely(tim->state)) {
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ rte_errno = EALREADY;
+ goto fail;
+ }
+
+ if (unlikely(!tim->timeout_ticks ||
+ tim->timeout_ticks > tim_ring->nb_bkts)) {
+ tim->state = tim->timeout_ticks ?
+ RTE_EVENT_TIMER_ERROR_TOOLATE :
+ RTE_EVENT_TIMER_ERROR_TOOEARLY;
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ return -EINVAL;
+}
+
+static inline void
+cnxk_tim_format_event(const struct rte_event_timer *const tim,
+ struct cnxk_tim_ent *const entry)
+{
+ entry->w0 = (tim->ev.event & 0xFFC000000000) >> 6 |
+ (tim->ev.event & 0xFFFFFFFFF);
+ entry->wqe = tim->ev.u64;
+}
+
+static inline void
+cnxk_tim_sync_start_cyc(struct cnxk_tim_ring *tim_ring)
+{
+ uint64_t cur_cyc = cnxk_tim_cntvct();
+ uint32_t real_bkt;
+
+ if (cur_cyc - tim_ring->last_updt_cyc > tim_ring->tot_int) {
+ real_bkt = plt_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
+ cur_cyc = cnxk_tim_cntvct();
+
+ tim_ring->ring_start_cyc =
+ cur_cyc - (real_bkt * tim_ring->tck_int);
+ tim_ring->last_updt_cyc = cur_cyc;
+ }
+}
+
+static __rte_always_inline uint16_t
+cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim, const uint16_t nb_timers,
+ const uint8_t flags)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_ent entry;
+ uint16_t index;
+ int ret;
+
+ cnxk_tim_sync_start_cyc(tim_ring);
+ for (index = 0; index < nb_timers; index++) {
+ if (cnxk_tim_arm_checks(tim_ring, tim[index]))
+ break;
+
+ cnxk_tim_format_event(tim[index], &entry);
+ if (flags & CNXK_TIM_SP)
+ ret = cnxk_tim_add_entry_sp(tim_ring,
+ tim[index]->timeout_ticks,
+ tim[index], &entry, flags);
+ if (flags & CNXK_TIM_MP)
+ ret = cnxk_tim_add_entry_mp(tim_ring,
+ tim[index]->timeout_ticks,
+ tim[index], &entry, flags);
+
+ if (unlikely(ret)) {
+ rte_errno = -ret;
+ break;
+ }
+ }
+
+ return index;
+}
+
+#define FP(_name, _f2, _f1, _flags) \
+ uint16_t __rte_noinline cnxk_tim_arm_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint16_t nb_timers) \
+ { \
+ return cnxk_tim_timer_arm_burst(adptr, tim, nb_timers, \
+ _flags); \
+ }
+TIM_ARM_FASTPATH_MODES
+#undef FP
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index d56e67360..de8464e33 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -120,4 +120,304 @@ cnxk_tim_bkt_clr_nent(struct cnxk_tim_bkt *bktp)
return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
}
+static inline uint64_t
+cnxk_tim_bkt_fast_mod(uint64_t n, uint64_t d, struct rte_reciprocal_u64 R)
+{
+ return (n - (d * rte_reciprocal_divide_u64(n, &R)));
+}
+
+static __rte_always_inline void
+cnxk_tim_get_target_bucket(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct cnxk_tim_bkt **bkt,
+ struct cnxk_tim_bkt **mirr_bkt)
+{
+ const uint64_t bkt_cyc = cnxk_tim_cntvct() - tim_ring->ring_start_cyc;
+ uint64_t bucket =
+ rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div) +
+ rel_bkt;
+ uint64_t mirr_bucket = 0;
+
+ bucket = cnxk_tim_bkt_fast_mod(bucket, tim_ring->nb_bkts,
+ tim_ring->fast_bkt);
+ mirr_bucket =
+ cnxk_tim_bkt_fast_mod(bucket + (tim_ring->nb_bkts >> 1),
+ tim_ring->nb_bkts, tim_ring->fast_bkt);
+ *bkt = &tim_ring->bkt[bucket];
+ *mirr_bkt = &tim_ring->bkt[mirr_bucket];
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_clr_bkt(struct cnxk_tim_ring *const tim_ring,
+ struct cnxk_tim_bkt *const bkt)
+{
+#define TIM_MAX_OUTSTANDING_OBJ 64
+ void *pend_chunks[TIM_MAX_OUTSTANDING_OBJ];
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_ent *pnext;
+ uint8_t objs = 0;
+
+ chunk = ((struct cnxk_tim_ent *)(uintptr_t)bkt->first_chunk);
+ chunk = (struct cnxk_tim_ent *)(uintptr_t)(chunk +
+ tim_ring->nb_chunk_slots)
+ ->w0;
+ while (chunk) {
+ pnext = (struct cnxk_tim_ent *)(uintptr_t)(
+ (chunk + tim_ring->nb_chunk_slots)->w0);
+ if (objs == TIM_MAX_OUTSTANDING_OBJ) {
+ rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
+ objs);
+ objs = 0;
+ }
+ pend_chunks[objs++] = chunk;
+ chunk = pnext;
+ }
+
+ if (objs)
+ rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks, objs);
+
+ return (struct cnxk_tim_ent *)(uintptr_t)bkt->first_chunk;
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_refill_chunk(struct cnxk_tim_bkt *const bkt,
+ struct cnxk_tim_bkt *const mirr_bkt,
+ struct cnxk_tim_ring *const tim_ring)
+{
+ struct cnxk_tim_ent *chunk;
+
+ if (bkt->nb_entry || !bkt->first_chunk) {
+ if (unlikely(rte_mempool_get(tim_ring->chunk_pool,
+ (void **)&chunk)))
+ return NULL;
+ if (bkt->nb_entry) {
+ *(uint64_t *)(((struct cnxk_tim_ent *)
+ mirr_bkt->current_chunk) +
+ tim_ring->nb_chunk_slots) =
+ (uintptr_t)chunk;
+ } else {
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ } else {
+ chunk = cnxk_tim_clr_bkt(tim_ring, bkt);
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+
+ return chunk;
+}
+
+static struct cnxk_tim_ent *
+cnxk_tim_insert_chunk(struct cnxk_tim_bkt *const bkt,
+ struct cnxk_tim_bkt *const mirr_bkt,
+ struct cnxk_tim_ring *const tim_ring)
+{
+ struct cnxk_tim_ent *chunk;
+
+ if (unlikely(rte_mempool_get(tim_ring->chunk_pool, (void **)&chunk)))
+ return NULL;
+
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+ if (bkt->nb_entry) {
+ *(uint64_t *)(((struct cnxk_tim_ent *)(uintptr_t)
+ mirr_bkt->current_chunk) +
+ tim_ring->nb_chunk_slots) = (uintptr_t)chunk;
+ } else {
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ return chunk;
+}
+
+static __rte_always_inline int
+cnxk_tim_add_entry_sp(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct rte_event_timer *const tim,
+ const struct cnxk_tim_ent *const pent,
+ const uint8_t flags)
+{
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+ int16_t rem;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+
+ /* Get Bucket sema*/
+ lock_sema = cnxk_tim_bkt_fetch_sema_lock(bkt);
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+ /* Insert the work. */
+ rem = cnxk_tim_bkt_fetch_rem(lock_sema);
+
+ if (!rem) {
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ bkt->chunk_remainder = 0;
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOMEM;
+ }
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ bkt->chunk_remainder = tim_ring->nb_chunk_slots - 1;
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += tim_ring->nb_chunk_slots - rem;
+ }
+
+ /* Copy work entry. */
+ *chunk = *pent;
+
+ tim->impl_opaque[0] = (uintptr_t)chunk;
+ tim->impl_opaque[1] = (uintptr_t)bkt;
+ __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
+ cnxk_tim_bkt_inc_nent(bkt);
+ cnxk_tim_bkt_dec_lock_relaxed(bkt);
+
+ return 0;
+}
+
+static __rte_always_inline int
+cnxk_tim_add_entry_mp(struct cnxk_tim_ring *const tim_ring,
+ const uint32_t rel_bkt, struct rte_event_timer *const tim,
+ const struct cnxk_tim_ent *const pent,
+ const uint8_t flags)
+{
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_ent *chunk;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+ int16_t rem;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+ /* Get Bucket sema*/
+ lock_sema = cnxk_tim_bkt_fetch_sema_lock(bkt);
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+
+ rem = cnxk_tim_bkt_fetch_rem(lock_sema);
+ if (rem < 0) {
+ cnxk_tim_bkt_dec_lock(bkt);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[rem], [%[crem]] \n"
+ " tbz %[rem], 63, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[rem], [%[crem]] \n"
+ " tbnz %[rem], 63, rty%= \n"
+ "dne%=: \n"
+ : [rem] "=&r"(rem)
+ : [crem] "r"(&bkt->w1)
+ : "memory");
+#else
+ while (__atomic_load_n((int64_t *)&bkt->w1, __ATOMIC_RELAXED) <
+ 0)
+ ;
+#endif
+ goto __retry;
+ } else if (!rem) {
+ /* Only one thread can be here*/
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ cnxk_tim_bkt_set_rem(bkt, 0);
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOMEM;
+ }
+ *chunk = *pent;
+ if (cnxk_tim_bkt_fetch_lock(lock_sema)) {
+ do {
+ lock_sema = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (cnxk_tim_bkt_fetch_lock(lock_sema) - 1);
+ }
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ __atomic_store_n(&bkt->chunk_remainder,
+ tim_ring->nb_chunk_slots - 1,
+ __ATOMIC_RELEASE);
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += tim_ring->nb_chunk_slots - rem;
+ *chunk = *pent;
+ }
+
+ tim->impl_opaque[0] = (uintptr_t)chunk;
+ tim->impl_opaque[1] = (uintptr_t)bkt;
+ __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
+ cnxk_tim_bkt_inc_nent(bkt);
+ cnxk_tim_bkt_dec_lock_relaxed(bkt);
+
+ return 0;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
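cnxk_tim_get_target_bucket() above avoids hardware division on the fast path by using rte_reciprocal_divide_u64 plus the cnxk_tim_bkt_fast_mod() identity n mod d = n - d*(n/d). The arithmetic it computes is equivalent to the sketch below, where plain division stands in for the reciprocal fast path: the target bucket is (elapsed ticks + relative offset) mod ring size, and the mirror bucket sits half a ring ahead.

```c
#include <stdint.h>

/* Equivalent arithmetic to cnxk_tim_get_target_bucket(), with plain
 * division standing in for rte_reciprocal_u64. bkt_cyc is cycles since
 * ring start, tck_int is cycles per bucket tick. */
static void
target_bucket(uint64_t bkt_cyc, uint64_t tck_int, uint32_t rel_bkt,
	      uint64_t nb_bkts, uint64_t *bucket, uint64_t *mirror)
{
	uint64_t b = bkt_cyc / tck_int + rel_bkt;

	*bucket = b % nb_bkts;
	/* Mirror bucket: half the ring ahead, wrapping at nb_bkts. */
	*mirror = (*bucket + (nb_bkts >> 1)) % nb_bkts;
}
```

The reciprocal form trades two multiplies and a shift for each divide, which matters because this runs on every timer arm; precomputing fast_div and fast_bkt at ring-create time is what makes that possible.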
* [dpdk-dev] [PATCH v5 31/35] event/cnxk: add timer arm timeout burst
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (29 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 30/35] event/cnxk: add timer arm routine pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 32/35] event/cnxk: add timer cancel function pbhagavatula
` (4 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer arm timeout burst function.
All the timers requested to be armed have the same timeout.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 7 ++
drivers/event/cnxk/cnxk_tim_evdev.h | 12 +++
drivers/event/cnxk/cnxk_tim_worker.c | 53 ++++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 141 +++++++++++++++++++++++++++
4 files changed, 213 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index ecc952a6a..68c3b3049 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -88,7 +88,14 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
#undef FP
};
+ const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2] = {
+#define FP(_name, _f1, flags) [_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
+ TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+ };
+
cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
+ cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
}
static void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 1f2aad17a..b66aac17c 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -217,6 +217,10 @@ cnxk_tim_cntfrq(void)
FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+#define TIM_ARM_TMO_FASTPATH_MODES \
+ FP(dfb, 0, CNXK_TIM_ENA_DFB) \
+ FP(fb, 1, CNXK_TIM_ENA_FB)
+
#define FP(_name, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
@@ -224,6 +228,14 @@ cnxk_tim_cntfrq(void)
TIM_ARM_FASTPATH_MODES
#undef FP
+#define FP(_name, _f1, flags) \
+ uint16_t cnxk_tim_arm_tmo_tick_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint64_t timeout_tick, \
+ const uint16_t nb_timers);
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 268f845c8..717c53fb7 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -99,3 +99,56 @@ cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
}
TIM_ARM_FASTPATH_MODES
#undef FP
+
+static __rte_always_inline uint16_t
+cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint64_t timeout_tick,
+ const uint16_t nb_timers, const uint8_t flags)
+{
+ struct cnxk_tim_ent entry[CNXK_TIM_MAX_BURST] __rte_cache_aligned;
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ uint16_t set_timers = 0;
+ uint16_t arr_idx = 0;
+ uint16_t idx;
+ int ret;
+
+ if (unlikely(!timeout_tick || timeout_tick > tim_ring->nb_bkts)) {
+ const enum rte_event_timer_state state =
+ timeout_tick ? RTE_EVENT_TIMER_ERROR_TOOLATE :
+ RTE_EVENT_TIMER_ERROR_TOOEARLY;
+ for (idx = 0; idx < nb_timers; idx++)
+ tim[idx]->state = state;
+
+ rte_errno = EINVAL;
+ return 0;
+ }
+
+ cnxk_tim_sync_start_cyc(tim_ring);
+ while (arr_idx < nb_timers) {
+ for (idx = 0; idx < CNXK_TIM_MAX_BURST && (arr_idx < nb_timers);
+ idx++, arr_idx++) {
+ cnxk_tim_format_event(tim[arr_idx], &entry[idx]);
+ }
+ ret = cnxk_tim_add_entry_brst(tim_ring, timeout_tick,
+ &tim[set_timers], entry, idx,
+ flags);
+ set_timers += ret;
+ if (ret != idx)
+ break;
+ }
+
+ return set_timers;
+}
+
+#define FP(_name, _f1, _flags) \
+ uint16_t __rte_noinline cnxk_tim_arm_tmo_tick_burst_##_name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, const uint64_t timeout_tick, \
+ const uint16_t nb_timers) \
+ { \
+ return cnxk_tim_timer_arm_tmo_brst(adptr, tim, timeout_tick, \
+ nb_timers, _flags); \
+ }
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index de8464e33..56cb4cdd9 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -420,4 +420,145 @@ cnxk_tim_add_entry_mp(struct cnxk_tim_ring *const tim_ring,
return 0;
}
+static inline uint16_t
+cnxk_tim_cpy_wrk(uint16_t index, uint16_t cpy_lmt, struct cnxk_tim_ent *chunk,
+ struct rte_event_timer **const tim,
+ const struct cnxk_tim_ent *const ents,
+ const struct cnxk_tim_bkt *const bkt)
+{
+ for (; index < cpy_lmt; index++) {
+ *chunk = *(ents + index);
+ tim[index]->impl_opaque[0] = (uintptr_t)chunk++;
+ tim[index]->impl_opaque[1] = (uintptr_t)bkt;
+ tim[index]->state = RTE_EVENT_TIMER_ARMED;
+ }
+
+ return index;
+}
+
+/* Burst mode functions */
+static inline int
+cnxk_tim_add_entry_brst(struct cnxk_tim_ring *const tim_ring,
+ const uint16_t rel_bkt,
+ struct rte_event_timer **const tim,
+ const struct cnxk_tim_ent *ents,
+ const uint16_t nb_timers, const uint8_t flags)
+{
+ struct cnxk_tim_ent *chunk = NULL;
+ struct cnxk_tim_bkt *mirr_bkt;
+ struct cnxk_tim_bkt *bkt;
+ uint16_t chunk_remainder;
+ uint16_t index = 0;
+ uint64_t lock_sema;
+ int16_t rem, crem;
+ uint8_t lock_cnt;
+
+__retry:
+ cnxk_tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
+
+ /* Only one thread beyond this. */
+ lock_sema = cnxk_tim_bkt_inc_lock(bkt);
+ lock_cnt = (uint8_t)((lock_sema >> TIM_BUCKET_W1_S_LOCK) &
+ TIM_BUCKET_W1_M_LOCK);
+
+ if (lock_cnt) {
+ cnxk_tim_bkt_dec_lock(bkt);
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxrb %w[lock_cnt], [%[lock]] \n"
+ " tst %w[lock_cnt], 255 \n"
+ " beq dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxrb %w[lock_cnt], [%[lock]] \n"
+ " tst %w[lock_cnt], 255 \n"
+ " bne rty%= \n"
+ "dne%=: \n"
+ : [lock_cnt] "=&r"(lock_cnt)
+ : [lock] "r"(&bkt->lock)
+ : "memory");
+#else
+ while (__atomic_load_n(&bkt->lock, __ATOMIC_RELAXED))
+ ;
+#endif
+ goto __retry;
+ }
+
+ /* Bucket related checks. */
+ if (unlikely(cnxk_tim_bkt_get_hbt(lock_sema))) {
+ if (cnxk_tim_bkt_get_nent(lock_sema) != 0) {
+ uint64_t hbt_state;
+#ifdef RTE_ARCH_ARM64
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbz %[hbt], 33, dne%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldxr %[hbt], [%[w1]] \n"
+ " tbnz %[hbt], 33, rty%= \n"
+ "dne%=: \n"
+ : [hbt] "=&r"(hbt_state)
+ : [w1] "r"((&bkt->w1))
+ : "memory");
+#else
+ do {
+ hbt_state = __atomic_load_n(&bkt->w1,
+ __ATOMIC_RELAXED);
+ } while (hbt_state & BIT_ULL(33));
+#endif
+
+ if (!(hbt_state & BIT_ULL(34))) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+ }
+ }
+
+ chunk_remainder = cnxk_tim_bkt_fetch_rem(lock_sema);
+ rem = chunk_remainder - nb_timers;
+ if (rem < 0) {
+ crem = tim_ring->nb_chunk_slots - chunk_remainder;
+ if (chunk_remainder && crem) {
+ chunk = ((struct cnxk_tim_ent *)
+ mirr_bkt->current_chunk) +
+ crem;
+
+ index = cnxk_tim_cpy_wrk(index, chunk_remainder, chunk,
+ tim, ents, bkt);
+ cnxk_tim_bkt_sub_rem(bkt, chunk_remainder);
+ cnxk_tim_bkt_add_nent(bkt, chunk_remainder);
+ }
+
+ if (flags & CNXK_TIM_ENA_FB)
+ chunk = cnxk_tim_refill_chunk(bkt, mirr_bkt, tim_ring);
+ if (flags & CNXK_TIM_ENA_DFB)
+ chunk = cnxk_tim_insert_chunk(bkt, mirr_bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ cnxk_tim_bkt_dec_lock(bkt);
+ rte_errno = ENOMEM;
+ tim[index]->state = RTE_EVENT_TIMER_ERROR;
+ return crem;
+ }
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+ mirr_bkt->current_chunk = (uintptr_t)chunk;
+ cnxk_tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
+
+ rem = nb_timers - chunk_remainder;
+ cnxk_tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem);
+ cnxk_tim_bkt_add_nent(bkt, rem);
+ } else {
+ chunk = (struct cnxk_tim_ent *)mirr_bkt->current_chunk;
+ chunk += (tim_ring->nb_chunk_slots - chunk_remainder);
+
+ cnxk_tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
+ cnxk_tim_bkt_sub_rem(bkt, nb_timers);
+ cnxk_tim_bkt_add_nent(bkt, nb_timers);
+ }
+
+ cnxk_tim_bkt_dec_lock(bkt);
+
+ return nb_timers;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v5 32/35] event/cnxk: add timer cancel function
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (30 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 31/35] event/cnxk: add timer arm timeout burst pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 33/35] event/cnxk: add timer stats get and reset pbhagavatula
` (3 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a function to cancel an event timer that has been armed.
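The cancel path walks the burst and stops at the first timer that is not in the armed state, distinguishing an already-canceled timer from one that was never armed. A hedged, self-contained sketch of that state machine (struct and enum names are illustrative; the driver operates on `struct rte_event_timer` and also removes the hardware bucket entry):

```c
#include <stdint.h>
#include <errno.h>

enum timer_state { TIMER_ARMED, TIMER_CANCELED, TIMER_NOT_ARMED };

struct timer { enum timer_state state; };

/* Cancel timers until the first failure; *err reports why the
 * walk stopped. Returns the count of timers actually canceled. */
static uint16_t
cancel_burst(struct timer **tim, uint16_t nb_timers, int *err)
{
	uint16_t i;

	for (i = 0; i < nb_timers; i++) {
		if (tim[i]->state == TIMER_CANCELED) {
			*err = EALREADY; /* already canceled */
			break;
		}
		if (tim[i]->state != TIMER_ARMED) {
			*err = EINVAL;   /* never armed or expired */
			break;
		}
		tim[i]->state = TIMER_CANCELED;
	}
	return i;
}
```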
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 1 +
drivers/event/cnxk/cnxk_tim_evdev.h | 5 ++++
drivers/event/cnxk/cnxk_tim_worker.c | 30 ++++++++++++++++++++++
drivers/event/cnxk/cnxk_tim_worker.h | 37 ++++++++++++++++++++++++++++
4 files changed, 73 insertions(+)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 68c3b3049..62a15a4a1 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -96,6 +96,7 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
+ cnxk_tim_ops.cancel_burst = cnxk_tim_timer_cancel_burst;
}
static void
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index b66aac17c..001f448d5 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -236,6 +236,11 @@ TIM_ARM_FASTPATH_MODES
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
+uint16_t
+cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint16_t nb_timers);
+
int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 717c53fb7..98ff143c3 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -152,3 +152,33 @@ cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
}
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
+
+uint16_t
+cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint16_t nb_timers)
+{
+ uint16_t index;
+ int ret;
+
+ RTE_SET_USED(adptr);
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ for (index = 0; index < nb_timers; index++) {
+ if (tim[index]->state == RTE_EVENT_TIMER_CANCELED) {
+ rte_errno = EALREADY;
+ break;
+ }
+
+ if (tim[index]->state != RTE_EVENT_TIMER_ARMED) {
+ rte_errno = EINVAL;
+ break;
+ }
+ ret = cnxk_tim_rm_entry(tim[index]);
+ if (ret) {
+ rte_errno = -ret;
+ break;
+ }
+ }
+
+ return index;
+}
diff --git a/drivers/event/cnxk/cnxk_tim_worker.h b/drivers/event/cnxk/cnxk_tim_worker.h
index 56cb4cdd9..7caeb1a8f 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.h
+++ b/drivers/event/cnxk/cnxk_tim_worker.h
@@ -561,4 +561,41 @@ cnxk_tim_add_entry_brst(struct cnxk_tim_ring *const tim_ring,
return nb_timers;
}
+static int
+cnxk_tim_rm_entry(struct rte_event_timer *tim)
+{
+ struct cnxk_tim_ent *entry;
+ struct cnxk_tim_bkt *bkt;
+ uint64_t lock_sema;
+
+ if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0)
+ return -ENOENT;
+
+ entry = (struct cnxk_tim_ent *)(uintptr_t)tim->impl_opaque[0];
+ if (entry->wqe != tim->ev.u64) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ return -ENOENT;
+ }
+
+ bkt = (struct cnxk_tim_bkt *)(uintptr_t)tim->impl_opaque[1];
+ lock_sema = cnxk_tim_bkt_inc_lock(bkt);
+ if (cnxk_tim_bkt_get_hbt(lock_sema) ||
+ !cnxk_tim_bkt_get_nent(lock_sema)) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ cnxk_tim_bkt_dec_lock(bkt);
+ return -ENOENT;
+ }
+
+ entry->w0 = 0;
+ entry->wqe = 0;
+ tim->state = RTE_EVENT_TIMER_CANCELED;
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ cnxk_tim_bkt_dec_lock(bkt);
+
+ return 0;
+}
+
#endif /* __CNXK_TIM_WORKER_H__ */
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v5 33/35] event/cnxk: add timer stats get and reset
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (31 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 32/35] event/cnxk: add timer cancel function pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 34/35] event/cnxk: add timer adapter start and stop pbhagavatula
` (2 subsequent siblings)
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter statistics get and reset functions.
Stats are disabled by default and can be enabled through devargs.
Example:
--dev "0002:1e:00.0,tim_stats_ena=1"
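The `stats_get` callback in this patch derives the adapter tick count by dividing elapsed timer-counter cycles by the cycles-per-tick interval (the driver does this with a precomputed `rte_reciprocal_u64` rather than a hardware divide). A minimal sketch of the underlying arithmetic, with illustrative names:

```c
#include <stdint.h>

/* adapter_tick_count = elapsed cycles / cycles per bucket tick.
 * The driver replaces this divide with a reciprocal multiply. */
static uint64_t
adapter_tick_count(uint64_t now_cyc, uint64_t start_cyc,
		   uint64_t cyc_per_tick)
{
	return (now_cyc - start_cyc) / cyc_per_tick;
}
```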
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 9 +++++
drivers/event/cnxk/cn10k_eventdev.c | 3 +-
drivers/event/cnxk/cn9k_eventdev.c | 3 +-
drivers/event/cnxk/cnxk_tim_evdev.c | 50 ++++++++++++++++++++++++----
drivers/event/cnxk/cnxk_tim_evdev.h | 38 ++++++++++++++-------
drivers/event/cnxk/cnxk_tim_worker.c | 11 ++++--
6 files changed, 91 insertions(+), 23 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index a8199aac7..11145dd7d 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -115,6 +115,15 @@ Runtime Config Options
-a 0002:0e:00.0,tim_chnk_slots=1023
+- ``TIM enable arm/cancel statistics``
+
+ The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of
+ event timer adapter.
+
+ For example::
+
+ -a 0002:0e:00.0,tim_stats_ena=1
+
- ``TIM limit max rings reserved``
The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index cadc792a7..bf4052c76 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -505,4 +505,5 @@ RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=<int>"
CN10K_SSO_GW_MODE "=<int>"
CNXK_TIM_DISABLE_NPA "=1"
CNXK_TIM_CHNK_SLOTS "=<int>"
- CNXK_TIM_RINGS_LMT "=<int>");
+ CNXK_TIM_RINGS_LMT "=<int>"
+ CNXK_TIM_STATS_ENA "=1");
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index e503f6b1c..0684417ea 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -574,4 +574,5 @@ RTE_PMD_REGISTER_PARAM_STRING(event_cn9k, CNXK_SSO_XAE_CNT "=<int>"
CN9K_SSO_SINGLE_WS "=1"
CNXK_TIM_DISABLE_NPA "=1"
CNXK_TIM_CHNK_SLOTS "=<int>"
- CNXK_TIM_RINGS_LMT "=<int>");
+ CNXK_TIM_RINGS_LMT "=<int>"
+ CNXK_TIM_STATS_ENA "=1");
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 62a15a4a1..a73ca33d8 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -81,21 +81,25 @@ cnxk_tim_set_fp_ops(struct cnxk_tim_ring *tim_ring)
{
uint8_t prod_flag = !tim_ring->prod_type_sp;
- /* [DFB/FB] [SP][MP]*/
- const rte_event_timer_arm_burst_t arm_burst[2][2] = {
-#define FP(_name, _f2, _f1, flags) [_f2][_f1] = cnxk_tim_arm_burst_##_name,
+ /* [STATS] [DFB/FB] [SP][MP]*/
+ const rte_event_timer_arm_burst_t arm_burst[2][2][2] = {
+#define FP(_name, _f3, _f2, _f1, flags) \
+ [_f3][_f2][_f1] = cnxk_tim_arm_burst_##_name,
TIM_ARM_FASTPATH_MODES
#undef FP
};
- const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2] = {
-#define FP(_name, _f1, flags) [_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
+ const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) \
+ [_f2][_f1] = cnxk_tim_arm_tmo_tick_burst_##_name,
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
};
- cnxk_tim_ops.arm_burst = arm_burst[tim_ring->ena_dfb][prod_flag];
- cnxk_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->ena_dfb];
+ cnxk_tim_ops.arm_burst =
+ arm_burst[tim_ring->enable_stats][tim_ring->ena_dfb][prod_flag];
+ cnxk_tim_ops.arm_tmo_tick_burst =
+ arm_tmo_burst[tim_ring->enable_stats][tim_ring->ena_dfb];
cnxk_tim_ops.cancel_burst = cnxk_tim_timer_cancel_burst;
}
@@ -159,6 +163,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->nb_timers = rcfg->nb_timers;
tim_ring->chunk_sz = dev->chunk_sz;
tim_ring->disable_npa = dev->disable_npa;
+ tim_ring->enable_stats = dev->enable_stats;
if (tim_ring->disable_npa) {
tim_ring->nb_chunks =
@@ -241,6 +246,30 @@ cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
}
+static int
+cnxk_tim_stats_get(const struct rte_event_timer_adapter *adapter,
+ struct rte_event_timer_adapter_stats *stats)
+{
+ struct cnxk_tim_ring *tim_ring = adapter->data->adapter_priv;
+ uint64_t bkt_cyc = cnxk_tim_cntvct() - tim_ring->ring_start_cyc;
+
+ stats->evtim_exp_count =
+ __atomic_load_n(&tim_ring->arm_cnt, __ATOMIC_RELAXED);
+ stats->ev_enq_count = stats->evtim_exp_count;
+ stats->adapter_tick_count =
+ rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div);
+ return 0;
+}
+
+static int
+cnxk_tim_stats_reset(const struct rte_event_timer_adapter *adapter)
+{
+ struct cnxk_tim_ring *tim_ring = adapter->data->adapter_priv;
+
+ __atomic_store_n(&tim_ring->arm_cnt, 0, __ATOMIC_RELAXED);
+ return 0;
+}
+
int
cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
uint32_t *caps,
@@ -258,6 +287,11 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
+ if (dev->enable_stats) {
+ cnxk_tim_ops.stats_get = cnxk_tim_stats_get;
+ cnxk_tim_ops.stats_reset = cnxk_tim_stats_reset;
+ }
+
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
@@ -281,6 +315,8 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
&dev->disable_npa);
rte_kvargs_process(kvlist, CNXK_TIM_CHNK_SLOTS, &parse_kvargs_value,
&dev->chunk_slots);
+ rte_kvargs_process(kvlist, CNXK_TIM_STATS_ENA, &parse_kvargs_flag,
+ &dev->enable_stats);
rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
&dev->min_ring_cnt);
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 001f448d5..b5e4cfc9e 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -36,12 +36,14 @@
#define CNXK_TIM_DISABLE_NPA "tim_disable_npa"
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
+#define CNXK_TIM_STATS_ENA "tim_stats_ena"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
-#define CNXK_TIM_SP 0x1
-#define CNXK_TIM_MP 0x2
-#define CNXK_TIM_ENA_FB 0x10
-#define CNXK_TIM_ENA_DFB 0x20
+#define CNXK_TIM_SP 0x1
+#define CNXK_TIM_MP 0x2
+#define CNXK_TIM_ENA_FB 0x10
+#define CNXK_TIM_ENA_DFB 0x20
+#define CNXK_TIM_ENA_STATS 0x40
#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
#define TIM_BUCKET_W1_M_CHUNK_REMAINDER \
@@ -82,6 +84,7 @@ struct cnxk_tim_evdev {
uint8_t disable_npa;
uint16_t chunk_slots;
uint16_t min_ring_cnt;
+ uint8_t enable_stats;
};
enum cnxk_tim_clk_src {
@@ -123,6 +126,7 @@ struct cnxk_tim_ring {
struct rte_reciprocal_u64 fast_bkt;
uint64_t arm_cnt;
uint8_t prod_type_sp;
+ uint8_t enable_stats;
uint8_t disable_npa;
uint8_t ena_dfb;
uint16_t ring_id;
@@ -212,23 +216,33 @@ cnxk_tim_cntfrq(void)
#endif
#define TIM_ARM_FASTPATH_MODES \
- FP(sp, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
- FP(mp, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
- FP(fb_sp, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
- FP(fb_mp, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP)
+ FP(sp, 0, 0, 0, CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(mp, 0, 0, 1, CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(fb_sp, 0, 1, 0, CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(fb_mp, 0, 1, 1, CNXK_TIM_ENA_FB | CNXK_TIM_MP) \
+ FP(stats_sp, 1, 0, 0, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB | CNXK_TIM_SP) \
+ FP(stats_mp, 1, 0, 1, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB | CNXK_TIM_MP) \
+ FP(stats_fb_sp, 1, 1, 0, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB | CNXK_TIM_SP) \
+ FP(stats_fb_mp, 1, 1, 1, \
+ CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB | CNXK_TIM_MP)
#define TIM_ARM_TMO_FASTPATH_MODES \
- FP(dfb, 0, CNXK_TIM_ENA_DFB) \
- FP(fb, 1, CNXK_TIM_ENA_FB)
+ FP(dfb, 0, 0, CNXK_TIM_ENA_DFB) \
+ FP(fb, 0, 1, CNXK_TIM_ENA_FB) \
+ FP(stats_dfb, 1, 0, CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_DFB) \
+ FP(stats_fb, 1, 1, CNXK_TIM_ENA_STATS | CNXK_TIM_ENA_FB)
-#define FP(_name, _f2, _f1, flags) \
+#define FP(_name, _f3, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint16_t nb_timers);
TIM_ARM_FASTPATH_MODES
#undef FP
-#define FP(_name, _f1, flags) \
+#define FP(_name, _f2, _f1, flags) \
uint16_t cnxk_tim_arm_tmo_tick_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint64_t timeout_tick, \
diff --git a/drivers/event/cnxk/cnxk_tim_worker.c b/drivers/event/cnxk/cnxk_tim_worker.c
index 98ff143c3..3ce99864a 100644
--- a/drivers/event/cnxk/cnxk_tim_worker.c
+++ b/drivers/event/cnxk/cnxk_tim_worker.c
@@ -86,10 +86,13 @@ cnxk_tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
}
}
+ if (flags & CNXK_TIM_ENA_STATS)
+ __atomic_fetch_add(&tim_ring->arm_cnt, index, __ATOMIC_RELAXED);
+
return index;
}
-#define FP(_name, _f2, _f1, _flags) \
+#define FP(_name, _f3, _f2, _f1, _flags) \
uint16_t __rte_noinline cnxk_tim_arm_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint16_t nb_timers) \
@@ -138,10 +141,14 @@ cnxk_tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
break;
}
+ if (flags & CNXK_TIM_ENA_STATS)
+ __atomic_fetch_add(&tim_ring->arm_cnt, set_timers,
+ __ATOMIC_RELAXED);
+
return set_timers;
}
-#define FP(_name, _f1, _flags) \
+#define FP(_name, _f2, _f1, _flags) \
uint16_t __rte_noinline cnxk_tim_arm_tmo_tick_burst_##_name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, const uint64_t timeout_tick, \
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v5 34/35] event/cnxk: add timer adapter start and stop
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (32 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 33/35] event/cnxk: add timer stats get and reset pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 35/35] event/cnxk: add devargs to control timer adapters pbhagavatula
2021-05-04 8:30 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver Jerin Jacob
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add event timer adapter start and stop functions.
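At start, the patch calibrates the software view of time against hardware: it repeatedly compares the bucket predicted from the cycle counter with the bucket reported by the TIM hardware, re-anchoring the ring start cycle so both agree. A self-contained sketch of that bucket math under illustrative names (the driver reads the real bucket from `TIM_LF_RING_REL`):

```c
#include <stdint.h>

/* Bucket the software expects at time now_cyc. */
static uint32_t
predict_bucket(uint64_t now_cyc, uint64_t start_cyc,
	       uint64_t cyc_per_tick, uint32_t nb_bkts)
{
	return (uint32_t)(((now_cyc - start_cyc) / cyc_per_tick) % nb_bkts);
}

/* Re-anchor the ring start so the prediction matches the bucket
 * the hardware reported at now_cyc. */
static uint64_t
recalibrate_start(uint64_t now_cyc, uint32_t real_bkt,
		  uint64_t cyc_per_tick)
{
	return now_cyc - (uint64_t)real_bkt * cyc_per_tick;
}
```

After recalibration, predicting the bucket at the same instant yields the hardware's bucket, which is exactly the convergence condition the calibration loop measures.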
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/event/cnxk/cnxk_tim_evdev.c | 71 ++++++++++++++++++++++++++++-
1 file changed, 70 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index a73ca33d8..19b71b4f5 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -246,6 +246,73 @@ cnxk_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
}
+static void
+cnxk_tim_calibrate_start_tsc(struct cnxk_tim_ring *tim_ring)
+{
+#define CNXK_TIM_CALIB_ITER 1E6
+ uint32_t real_bkt, bucket;
+ int icount, ecount = 0;
+ uint64_t bkt_cyc;
+
+ for (icount = 0; icount < CNXK_TIM_CALIB_ITER; icount++) {
+ real_bkt = plt_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
+ bkt_cyc = cnxk_tim_cntvct();
+ bucket = (bkt_cyc - tim_ring->ring_start_cyc) /
+ tim_ring->tck_int;
+ bucket = bucket % (tim_ring->nb_bkts);
+ tim_ring->ring_start_cyc =
+ bkt_cyc - (real_bkt * tim_ring->tck_int);
+ if (bucket != real_bkt)
+ ecount++;
+ }
+ tim_ring->last_updt_cyc = bkt_cyc;
+ plt_tim_dbg("Bucket mispredict %3.2f distance %d\n",
+ 100 - (((double)(icount - ecount) / (double)icount) * 100),
+ bucket - real_bkt);
+}
+
+static int
+cnxk_tim_ring_start(const struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ rc = roc_tim_lf_enable(&dev->tim, tim_ring->ring_id,
+ &tim_ring->ring_start_cyc, NULL);
+ if (rc < 0)
+ return rc;
+
+ tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, cnxk_tim_cntfrq());
+ tim_ring->tot_int = tim_ring->tck_int * tim_ring->nb_bkts;
+ tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int);
+ tim_ring->fast_bkt = rte_reciprocal_value_u64(tim_ring->nb_bkts);
+
+ cnxk_tim_calibrate_start_tsc(tim_ring);
+
+ return rc;
+}
+
+static int
+cnxk_tim_ring_stop(const struct rte_event_timer_adapter *adptr)
+{
+ struct cnxk_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ rc = roc_tim_lf_disable(&dev->tim, tim_ring->ring_id);
+ if (rc < 0)
+ plt_err("Failed to disable timer ring");
+
+ return rc;
+}
+
static int
cnxk_tim_stats_get(const struct rte_event_timer_adapter *adapter,
struct rte_event_timer_adapter_stats *stats)
@@ -278,13 +345,14 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
RTE_SET_USED(flags);
- RTE_SET_USED(ops);
if (dev == NULL)
return -ENODEV;
cnxk_tim_ops.init = cnxk_tim_ring_create;
cnxk_tim_ops.uninit = cnxk_tim_ring_free;
+ cnxk_tim_ops.start = cnxk_tim_ring_start;
+ cnxk_tim_ops.stop = cnxk_tim_ring_stop;
cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
if (dev->enable_stats) {
@@ -295,6 +363,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+ *ops = &cnxk_tim_ops;
return 0;
}
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* [dpdk-dev] [PATCH v5 35/35] event/cnxk: add devargs to control timer adapters
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (33 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 34/35] event/cnxk: add timer adapter start and stop pbhagavatula
@ 2021-05-04 0:27 ` pbhagavatula
2021-05-04 8:30 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver Jerin Jacob
35 siblings, 0 replies; 185+ messages in thread
From: pbhagavatula @ 2021-05-04 0:27 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, Shijith Thotton; +Cc: dev
From: Shijith Thotton <sthotton@marvell.com>
Add devargs to control each event timer adapter's (i.e. TIM ring's)
internal parameters uniquely. The expected dict format is
[ring-chnk_slots-disable_npa-stats_ena]; 0 represents the default value.
Example:
--dev "0002:1e:00.0,tim_ring_ctl=[2-1023-1-0]"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 11 ++++
drivers/event/cnxk/cnxk_tim_evdev.c | 96 ++++++++++++++++++++++++++++-
drivers/event/cnxk/cnxk_tim_evdev.h | 10 +++
3 files changed, 116 insertions(+), 1 deletion(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 11145dd7d..1bd935abc 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -135,6 +135,17 @@ Runtime Config Options
-a 0002:0e:00.0,tim_rings_lmt=5
+- ``TIM ring control internal parameters``
+
+ When using multiple TIM rings the ``tim_ring_ctl`` devargs can be used to
+ control each TIM rings internal parameters uniquely. The following dict
+ format is expected [ring-chnk_slots-disable_npa-stats_ena]. 0 represents
+ default values.
+
+ For Example::
+
+ -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+
Debugging Options
-----------------
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 19b71b4f5..9d40e336d 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -121,7 +121,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
struct cnxk_tim_ring *tim_ring;
- int rc;
+ int i, rc;
if (dev == NULL)
return -ENODEV;
@@ -165,6 +165,20 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->disable_npa = dev->disable_npa;
tim_ring->enable_stats = dev->enable_stats;
+ for (i = 0; i < dev->ring_ctl_cnt; i++) {
+ struct cnxk_tim_ctl *ring_ctl = &dev->ring_ctl_data[i];
+
+ if (ring_ctl->ring == tim_ring->ring_id) {
+ tim_ring->chunk_sz =
+ ring_ctl->chunk_slots ?
+ ((uint32_t)(ring_ctl->chunk_slots + 1) *
+ CNXK_TIM_CHUNK_ALIGNMENT) :
+ tim_ring->chunk_sz;
+ tim_ring->enable_stats = ring_ctl->enable_stats;
+ tim_ring->disable_npa = ring_ctl->disable_npa;
+ }
+ }
+
if (tim_ring->disable_npa) {
tim_ring->nb_chunks =
tim_ring->nb_timers /
@@ -368,6 +382,84 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
return 0;
}
+static void
+cnxk_tim_parse_ring_param(char *value, void *opaque)
+{
+ struct cnxk_tim_evdev *dev = opaque;
+ struct cnxk_tim_ctl ring_ctl = {0};
+ char *tok = strtok(value, "-");
+ struct cnxk_tim_ctl *old_ptr;
+ uint16_t *val;
+
+ val = (uint16_t *)&ring_ctl;
+
+ if (!strlen(value))
+ return;
+
+ while (tok != NULL) {
+ *val = atoi(tok);
+ tok = strtok(NULL, "-");
+ val++;
+ }
+
+ if (val != (&ring_ctl.enable_stats + 1)) {
+ plt_err("Invalid ring param expected [ring-chunk_sz-disable_npa-enable_stats]");
+ return;
+ }
+
+ dev->ring_ctl_cnt++;
+ old_ptr = dev->ring_ctl_data;
+ dev->ring_ctl_data =
+ rte_realloc(dev->ring_ctl_data,
+ sizeof(struct cnxk_tim_ctl) * dev->ring_ctl_cnt, 0);
+ if (dev->ring_ctl_data == NULL) {
+ dev->ring_ctl_data = old_ptr;
+ dev->ring_ctl_cnt--;
+ return;
+ }
+
+ dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl;
+}
+
+static void
+cnxk_tim_parse_ring_ctl_list(const char *value, void *opaque)
+{
+ char *s = strdup(value);
+ char *start = NULL;
+ char *end = NULL;
+ char *f = s;
+
+ while (*s) {
+ if (*s == '[')
+ start = s;
+ else if (*s == ']')
+ end = s;
+
+ if (start && start < end) {
+ *end = 0;
+ cnxk_tim_parse_ring_param(start + 1, opaque);
+ start = end;
+ s = end;
+ }
+ s++;
+ }
+
+ free(f);
+}
+
+static int
+cnxk_tim_parse_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ /* Dict format [ring-chunk_sz-disable_npa-enable_stats] use '-' as ','
+ * isn't allowed. 0 represents default.
+ */
+ cnxk_tim_parse_ring_ctl_list(value, opaque);
+
+ return 0;
+}
+
static void
cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
{
@@ -388,6 +480,8 @@ cnxk_tim_parse_devargs(struct rte_devargs *devargs, struct cnxk_tim_evdev *dev)
&dev->enable_stats);
rte_kvargs_process(kvlist, CNXK_TIM_RINGS_LMT, &parse_kvargs_value,
&dev->min_ring_cnt);
+ rte_kvargs_process(kvlist, CNXK_TIM_RING_CTL,
+ &cnxk_tim_parse_kvargs_dict, &dev);
rte_kvargs_free(kvlist);
}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index b5e4cfc9e..c369f6f47 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -38,6 +38,7 @@
#define CNXK_TIM_CHNK_SLOTS "tim_chnk_slots"
#define CNXK_TIM_STATS_ENA "tim_stats_ena"
#define CNXK_TIM_RINGS_LMT "tim_rings_lmt"
+#define CNXK_TIM_RING_CTL "tim_ring_ctl"
#define CNXK_TIM_SP 0x1
#define CNXK_TIM_MP 0x2
@@ -75,6 +76,13 @@
#define TIM_BUCKET_SEMA_WLOCK \
(TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
+struct cnxk_tim_ctl {
+ uint16_t ring;
+ uint16_t chunk_slots;
+ uint16_t disable_npa;
+ uint16_t enable_stats;
+};
+
struct cnxk_tim_evdev {
struct roc_tim tim;
struct rte_eventdev *event_dev;
@@ -85,6 +93,8 @@ struct cnxk_tim_evdev {
uint16_t chunk_slots;
uint16_t min_ring_cnt;
uint8_t enable_stats;
+ uint16_t ring_ctl_cnt;
+ struct cnxk_tim_ctl *ring_ctl_data;
};
enum cnxk_tim_clk_src {
--
2.17.1
^ permalink raw reply [flat|nested] 185+ messages in thread
* Re: [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver
2021-05-04 0:26 ` [dpdk-dev] [PATCH v5 00/35] Marvell CNXK Event device Driver pbhagavatula
` (34 preceding siblings ...)
2021-05-04 0:27 ` [dpdk-dev] [PATCH v5 35/35] event/cnxk: add devargs to control timer adapters pbhagavatula
@ 2021-05-04 8:30 ` Jerin Jacob
35 siblings, 0 replies; 185+ messages in thread
From: Jerin Jacob @ 2021-05-04 8:30 UTC (permalink / raw)
To: Pavan Nikhilesh; +Cc: Jerin Jacob, dpdk-dev
On Tue, May 4, 2021 at 5:58 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> This patchset adds support for the Marvell CN106XX SoC, based on the
> 'common/cnxk' driver. In the future, CN9K (a.k.a. octeontx2) will also be
> supported by the same driver once the code is ready, and 'event/octeontx2'
> will be deprecated.
Series applied to dpdk-next-eventdev/for-main. Thanks.
> v5 Changes:
> - Update inline asm extension prefix.
>
> v4 Changes:
> - s/PCI_ANY_ID/RTE_PCI_ANY_ID.
> - Remove dependency on net_cnxk
> - Fix compilation issues with xstats patch.
>
> v3 Changes:
> - Fix documentation, copyright.
> - Update release notes.
>
> v2 Changes:
> - Split Rx/Tx adapter into a separate patch set to remove the dependency on net/cnxk.
> - Add missing xstats patch.
> - Fix incorrect head wait operation.
>
> Pavan Nikhilesh (18):
> common/cnxk: rename deprecated constant
> common/cnxk: update inline asm prefix
> event/cnxk: add build infra and device setup
> event/cnxk: add platform specific device probe
> event/cnxk: add common configuration validation
> event/cnxk: allocate event inflight buffers
> event/cnxk: add devargs to configure getwork mode
> event/cnxk: add SSO HW device operations
> event/cnxk: add SSO GWS fastpath enqueue functions
> event/cnxk: add SSO GWS dequeue fastpath functions
> event/cnxk: add SSO selftest and dump
> event/cnxk: add event port and queue xstats
> event/cnxk: add devargs to disable NPA
> event/cnxk: allow adapters to resize inflights
> event/cnxk: add TIM bucket operations
> event/cnxk: add timer arm routine
> event/cnxk: add timer arm timeout burst
> event/cnxk: add timer cancel function
>
> Shijith Thotton (17):
> event/cnxk: add device capabilities function
> event/cnxk: add platform specific device config
> event/cnxk: add event queue config functions
> event/cnxk: add devargs for inflight buffer count
> event/cnxk: add devargs to control SSO HWGRP QoS
> event/cnxk: add port config functions
> event/cnxk: add event port link and unlink
> event/cnxk: add device start function
> event/cnxk: add device stop and close functions
> event/cnxk: support event timer
> event/cnxk: add timer adapter capabilities
> event/cnxk: create and free timer adapter
> event/cnxk: add timer adapter info function
> event/cnxk: add devargs for chunk size and rings
> event/cnxk: add timer stats get and reset
> event/cnxk: add timer adapter start and stop
> event/cnxk: add devargs to control timer adapters
>
> MAINTAINERS | 6 +
> app/test/test_eventdev.c | 14 +
> doc/guides/eventdevs/cnxk.rst | 162 ++
> doc/guides/eventdevs/index.rst | 1 +
> doc/guides/rel_notes/release_21_05.rst | 2 +
> drivers/common/cnxk/roc_platform.h | 33 +-
> drivers/common/cnxk/roc_sso.c | 63 +
> drivers/common/cnxk/roc_sso.h | 19 +
> drivers/common/cnxk/version.map | 2 +
> drivers/event/cnxk/cn10k_eventdev.c | 509 ++++++
> drivers/event/cnxk/cn10k_worker.c | 115 ++
> drivers/event/cnxk/cn10k_worker.h | 175 +++
> drivers/event/cnxk/cn9k_eventdev.c | 578 +++++++
> drivers/event/cnxk/cn9k_worker.c | 236 +++
> drivers/event/cnxk/cn9k_worker.h | 297 ++++
> drivers/event/cnxk/cnxk_eventdev.c | 647 ++++++++
> drivers/event/cnxk/cnxk_eventdev.h | 253 +++
> drivers/event/cnxk/cnxk_eventdev_adptr.c | 67 +
> drivers/event/cnxk/cnxk_eventdev_selftest.c | 1570 +++++++++++++++++++
> drivers/event/cnxk/cnxk_eventdev_stats.c | 289 ++++
> drivers/event/cnxk/cnxk_tim_evdev.c | 538 +++++++
> drivers/event/cnxk/cnxk_tim_evdev.h | 275 ++++
> drivers/event/cnxk/cnxk_tim_worker.c | 191 +++
> drivers/event/cnxk/cnxk_tim_worker.h | 601 +++++++
> drivers/event/cnxk/cnxk_worker.h | 101 ++
> drivers/event/cnxk/meson.build | 23 +
> drivers/event/cnxk/version.map | 3 +
> drivers/event/meson.build | 1 +
> 28 files changed, 6755 insertions(+), 16 deletions(-)
> create mode 100644 doc/guides/eventdevs/cnxk.rst
> create mode 100644 drivers/event/cnxk/cn10k_eventdev.c
> create mode 100644 drivers/event/cnxk/cn10k_worker.c
> create mode 100644 drivers/event/cnxk/cn10k_worker.h
> create mode 100644 drivers/event/cnxk/cn9k_eventdev.c
> create mode 100644 drivers/event/cnxk/cn9k_worker.c
> create mode 100644 drivers/event/cnxk/cn9k_worker.h
> create mode 100644 drivers/event/cnxk/cnxk_eventdev.c
> create mode 100644 drivers/event/cnxk/cnxk_eventdev.h
> create mode 100644 drivers/event/cnxk/cnxk_eventdev_adptr.c
> create mode 100644 drivers/event/cnxk/cnxk_eventdev_selftest.c
> create mode 100644 drivers/event/cnxk/cnxk_eventdev_stats.c
> create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.c
> create mode 100644 drivers/event/cnxk/cnxk_tim_evdev.h
> create mode 100644 drivers/event/cnxk/cnxk_tim_worker.c
> create mode 100644 drivers/event/cnxk/cnxk_tim_worker.h
> create mode 100644 drivers/event/cnxk/cnxk_worker.h
> create mode 100644 drivers/event/cnxk/meson.build
> create mode 100644 drivers/event/cnxk/version.map
>
> --
> 2.17.1
>