* [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver
@ 2019-06-28 7:49 pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 01/44] event/octeontx2: add build infra and device probe pbhagavatula
` (43 more replies)
0 siblings, 44 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
This patchset adds OCTEON TX2 event device driver and event timer
adapter driver.
More details can be found in
[PATCH v2 26/44] doc: add Marvell OCTEON TX2 event device
[PATCH v2 44/44] doc: update Marvell OCTEON TX2 eventdev
under doc/guides/eventdevs/octeontx2
v2 Changes:
- Add TIM fini to teardown on exit.
- Add TIM chunk size limits.
- Add missing commit messages.
- Add selftest to unit test suite.
- Fix log format.
- Remove panic from IRQ.
- Remove shared lib update from releases.
Pavan Nikhilesh (44):
event/octeontx2: add build infra and device probe
event/octeontx2: add init and fini for octeontx2 SSO object
event/octeontx2: add device capabilities function
event/octeontx2: add device configure function
event/octeontx2: add event queue config functions
event/octeontx2: allocate event inflight buffers
event/octeontx2: add devargs for inflight buffer count
event/octeontx2: add event port config functions
event/octeontx2: support linking queues to ports
event/octeontx2: support dequeue timeout tick conversion
event/octeontx2: add SSO GWS and GGRP IRQ handlers
event/octeontx2: add register dump functions
event/octeontx2: add xstats support
event/octeontx2: add SSO HW device operations
event/octeontx2: add worker enqueue functions
event/octeontx2: add worker dequeue functions
event/octeontx2: add octeontx2 SSO dual workslot mode
event/octeontx2: add SSO dual GWS HW device operations
event/octeontx2: add worker dual GWS enqueue functions
event/octeontx2: add worker dual GWS dequeue functions
event/octeontx2: add devargs to force legacy mode
event/octeontx2: add device start function
event/octeontx2: add devargs to control SSO GGRP QoS
event/octeontx2: add device stop and close functions
event/octeontx2: add SSO selftest
doc: add Marvell OCTEON TX2 event device documentation
event/octeontx2: add event timer support
event/octeontx2: add timer adapter capabilities
event/octeontx2: create and free timer adapter
event/octeontx2: allow TIM to optimize config
event/octeontx2: add devargs to disable NPA
event/octeontx2: add devargs to modify chunk slots
event/octeontx2: add TIM IRQ handlers
event/octeontx2: allow adapters to resize inflight buffers
event/octeontx2: add timer adapter info get function
event/octeontx2: add TIM bucket operations
event/octeontx2: add event timer arm routine
event/octeontx2: add event timer arm timeout burst
event/octeontx2: add event timer cancel function
event/octeontx2: add event timer stats get and reset
event/octeontx2: add event timer adapter start and stop
event/octeontx2: add devargs to limit timer adapters
event/octeontx2: add devargs to control adapter parameters
doc: update Marvell OCTEON TX2 eventdev documentation
MAINTAINERS | 6 +
app/test/test_eventdev.c | 8 +
config/common_base | 5 +
doc/guides/eventdevs/index.rst | 1 +
doc/guides/eventdevs/octeontx2.rst | 158 ++
doc/guides/platform/octeontx2.rst | 3 +
drivers/event/Makefile | 1 +
drivers/event/meson.build | 2 +-
drivers/event/octeontx2/Makefile | 46 +
drivers/event/octeontx2/meson.build | 29 +
drivers/event/octeontx2/otx2_evdev.c | 1395 +++++++++++++++
drivers/event/octeontx2/otx2_evdev.h | 266 +++
drivers/event/octeontx2/otx2_evdev_adptr.c | 19 +
drivers/event/octeontx2/otx2_evdev_irq.c | 272 +++
drivers/event/octeontx2/otx2_evdev_selftest.c | 1511 +++++++++++++++++
drivers/event/octeontx2/otx2_evdev_stats.h | 286 ++++
drivers/event/octeontx2/otx2_tim_evdev.c | 734 ++++++++
drivers/event/octeontx2/otx2_tim_evdev.h | 249 +++
drivers/event/octeontx2/otx2_tim_worker.c | 171 ++
drivers/event/octeontx2/otx2_tim_worker.h | 450 +++++
drivers/event/octeontx2/otx2_worker.c | 270 +++
drivers/event/octeontx2/otx2_worker.h | 187 ++
drivers/event/octeontx2/otx2_worker_dual.c | 207 +++
drivers/event/octeontx2/otx2_worker_dual.h | 76 +
.../rte_pmd_octeontx2_event_version.map | 4 +
mk/rte.app.mk | 2 +
26 files changed, 6357 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/eventdevs/octeontx2.rst
create mode 100644 drivers/event/octeontx2/Makefile
create mode 100644 drivers/event/octeontx2/meson.build
create mode 100644 drivers/event/octeontx2/otx2_evdev.c
create mode 100644 drivers/event/octeontx2/otx2_evdev.h
create mode 100644 drivers/event/octeontx2/otx2_evdev_adptr.c
create mode 100644 drivers/event/octeontx2/otx2_evdev_irq.c
create mode 100644 drivers/event/octeontx2/otx2_evdev_selftest.c
create mode 100644 drivers/event/octeontx2/otx2_evdev_stats.h
create mode 100644 drivers/event/octeontx2/otx2_tim_evdev.c
create mode 100644 drivers/event/octeontx2/otx2_tim_evdev.h
create mode 100644 drivers/event/octeontx2/otx2_tim_worker.c
create mode 100644 drivers/event/octeontx2/otx2_tim_worker.h
create mode 100644 drivers/event/octeontx2/otx2_worker.c
create mode 100644 drivers/event/octeontx2/otx2_worker.h
create mode 100644 drivers/event/octeontx2/otx2_worker_dual.c
create mode 100644 drivers/event/octeontx2/otx2_worker_dual.h
create mode 100644 drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
--
2.22.0
^ permalink raw reply [flat|nested] 48+ messages in thread
* [dpdk-dev] [PATCH v2 01/44] event/octeontx2: add build infra and device probe
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 8:55 ` Thomas Monjalon
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 02/44] event/octeontx2: add init and fini for octeontx2 SSO object pbhagavatula
` (42 subsequent siblings)
43 siblings, 1 reply; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj, Thomas Monjalon, Anatoly Burakov
Cc: dev, Pavan Nikhilesh, Nithin Dabilpuram
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add the make- and meson-based build infrastructure along with the
eventdev (SSO) device probe.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
config/common_base | 5 ++
drivers/event/Makefile | 1 +
drivers/event/meson.build | 2 +-
drivers/event/octeontx2/Makefile | 39 +++++++++++
drivers/event/octeontx2/meson.build | 21 ++++++
drivers/event/octeontx2/otx2_evdev.c | 70 +++++++++++++++++++
drivers/event/octeontx2/otx2_evdev.h | 26 +++++++
.../rte_pmd_octeontx2_event_version.map | 4 ++
mk/rte.app.mk | 2 +
9 files changed, 169 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/octeontx2/Makefile
create mode 100644 drivers/event/octeontx2/meson.build
create mode 100644 drivers/event/octeontx2/otx2_evdev.c
create mode 100644 drivers/event/octeontx2/otx2_evdev.h
create mode 100644 drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
diff --git a/config/common_base b/config/common_base
index fa1ae249a..e85991e87 100644
--- a/config/common_base
+++ b/config/common_base
@@ -747,6 +747,11 @@ CONFIG_RTE_LIBRTE_PMD_DPAA2_QDMA_RAWDEV=n
#
CONFIG_RTE_LIBRTE_PMD_IFPGA_RAWDEV=y
+#
+# Compile PMD for octeontx2 SSO event device
+#
+CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV=y
+
#
# Compile librte_ring
#
diff --git a/drivers/event/Makefile b/drivers/event/Makefile
index 03ad1b6cb..e4e7eff37 100644
--- a/drivers/event/Makefile
+++ b/drivers/event/Makefile
@@ -15,5 +15,6 @@ ifeq ($(CONFIG_RTE_EAL_VFIO)$(CONFIG_RTE_LIBRTE_FSLMC_BUS),yy)
DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_EVENTDEV) += dpaa2
endif
DIRS-$(CONFIG_RTE_LIBRTE_PMD_OPDL_EVENTDEV) += opdl
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += octeontx2
include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index fb723f727..b204a9f8d 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2017 Intel Corporation
-drivers = ['dpaa', 'dpaa2', 'opdl', 'skeleton', 'sw', 'dsw']
+drivers = ['dpaa', 'dpaa2', 'opdl', 'skeleton', 'sw', 'dsw', 'octeontx2']
if not (toolchain == 'gcc' and cc.version().version_compare('<4.8.6') and
dpdk_conf.has('RTE_ARCH_ARM64'))
drivers += 'octeontx'
diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
new file mode 100644
index 000000000..dbf6ec22d
--- /dev/null
+++ b/drivers/event/octeontx2/Makefile
@@ -0,0 +1,39 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_octeontx2_event.a
+
+CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/event/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
+CFLAGS += -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+
+ifneq ($(CONFIG_RTE_ARCH_64),y)
+CFLAGS += -Wno-int-to-pointer-cast
+CFLAGS += -Wno-pointer-to-int-cast
+endif
+
+EXPORT_MAP := rte_pmd_octeontx2_event_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c
+
+LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci
+LDLIBS += -lrte_eventdev
+LDLIBS += -lrte_common_octeontx2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
new file mode 100644
index 000000000..c4f442174
--- /dev/null
+++ b/drivers/event/octeontx2/meson.build
@@ -0,0 +1,21 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+sources = files('otx2_evdev.c')
+
+allow_experimental_apis = true
+
+extra_flags = []
+# This integrated controller runs only on an arm64 machine, remove 32-bit warnings
+if not dpdk_conf.get('RTE_ARCH_64')
+ extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast']
+endif
+
+foreach flag: extra_flags
+ if cc.has_argument(flag)
+ cflags += flag
+ endif
+endforeach
+
+deps += ['bus_pci', 'common_octeontx2']
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
new file mode 100644
index 000000000..faffd3f0c
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -0,0 +1,70 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <inttypes.h>
+
+#include <rte_bus_pci.h>
+#include <rte_common.h>
+#include <rte_eal.h>
+#include <rte_eventdev_pmd_pci.h>
+#include <rte_pci.h>
+
+#include "otx2_evdev.h"
+
+static int
+otx2_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_probe(pci_drv, pci_dev,
+ sizeof(struct otx2_sso_evdev),
+ otx2_sso_init);
+}
+
+static int
+otx2_sso_remove(struct rte_pci_device *pci_dev)
+{
+ return rte_event_pmd_pci_remove(pci_dev, otx2_sso_fini);
+}
+
+static const struct rte_pci_id pci_sso_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
+ PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver pci_sso = {
+ .id_table = pci_sso_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+ .probe = otx2_sso_probe,
+ .remove = otx2_sso_remove,
+};
+
+int
+otx2_sso_init(struct rte_eventdev *event_dev)
+{
+ RTE_SET_USED(event_dev);
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ return 0;
+}
+
+int
+otx2_sso_fini(struct rte_eventdev *event_dev)
+{
+ RTE_SET_USED(event_dev);
+ /* For secondary processes, nothing to be done */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ return 0;
+}
+
+RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso);
+RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map);
+RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
new file mode 100644
index 000000000..1df233293
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_EVDEV_H__
+#define __OTX2_EVDEV_H__
+
+#include <rte_eventdev.h>
+
+#include "otx2_common.h"
+
+#define EVENTDEV_NAME_OCTEONTX2_PMD otx2_eventdev
+
+#define sso_func_trace otx2_sso_dbg
+
+#define OTX2_SSO_MAX_VHGRP RTE_EVENT_MAX_QUEUES_PER_DEV
+#define OTX2_SSO_MAX_VHWS (UINT8_MAX)
+
+struct otx2_sso_evdev {
+};
+
+/* Init and Fini API's */
+int otx2_sso_init(struct rte_eventdev *event_dev);
+int otx2_sso_fini(struct rte_eventdev *event_dev);
+
+#endif /* __OTX2_EVDEV_H__ */
diff --git a/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map b/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
new file mode 100644
index 000000000..41c65c8c9
--- /dev/null
+++ b/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
@@ -0,0 +1,4 @@
+DPDK_19.08 {
+ local: *;
+};
+
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 81be289a8..449589ad7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -109,6 +109,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF)$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOO
_LDLIBS-y += -lrte_common_octeontx
endif
OCTEONTX2-y := $(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL)
+OCTEONTX2-y += $(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV)
ifeq ($(findstring y,$(OCTEONTX2-y)),y)
_LDLIBS-y += -lrte_common_octeontx2
endif
@@ -293,6 +294,7 @@ endif # CONFIG_RTE_LIBRTE_FSLMC_BUS
_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += -lrte_mempool_octeontx
_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX_PMD) += -lrte_pmd_octeontx
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OPDL_EVENTDEV) += -lrte_pmd_opdl_event
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += -lrte_pmd_octeontx2_event
endif # CONFIG_RTE_LIBRTE_EVENTDEV
ifeq ($(CONFIG_RTE_LIBRTE_RAWDEV),y)
--
2.22.0
* [dpdk-dev] [PATCH v2 02/44] event/octeontx2: add init and fini for octeontx2 SSO object
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 01/44] event/octeontx2: add build infra and device probe pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 03/44] event/octeontx2: add device capabilities function pbhagavatula
` (41 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh, Nithin Dabilpuram
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
The SSO object needs to be initialized to communicate with the kernel AF
driver through mbox using the common APIs.
Also, initialize the internal eventdev structure to defaults and attach
the NPA LF to the PF if needed.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/event/octeontx2/Makefile | 2 +-
drivers/event/octeontx2/meson.build | 2 +-
drivers/event/octeontx2/otx2_evdev.c | 84 +++++++++++++++++++++++++++-
drivers/event/octeontx2/otx2_evdev.h | 22 +++++++-
4 files changed, 105 insertions(+), 5 deletions(-)
diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index dbf6ec22d..36f0b2b12 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -34,6 +34,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c
LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci
LDLIBS += -lrte_eventdev
-LDLIBS += -lrte_common_octeontx2
+LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
index c4f442174..3fc96421d 100644
--- a/drivers/event/octeontx2/meson.build
+++ b/drivers/event/octeontx2/meson.build
@@ -18,4 +18,4 @@ foreach flag: extra_flags
endif
endforeach
-deps += ['bus_pci', 'common_octeontx2']
+deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2']
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index faffd3f0c..08ae820b9 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -46,22 +46,102 @@ static struct rte_pci_driver pci_sso = {
int
otx2_sso_init(struct rte_eventdev *event_dev)
{
- RTE_SET_USED(event_dev);
+ struct free_rsrcs_rsp *rsrc_cnt;
+ struct rte_pci_device *pci_dev;
+ struct otx2_sso_evdev *dev;
+ int rc;
+
/* For secondary processes, the primary has done all the work */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ dev = sso_pmd_priv(event_dev);
+
+ pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
+
+ /* Initialize the base otx2_dev object */
+ rc = otx2_dev_init(pci_dev, dev);
+ if (rc < 0) {
+ otx2_err("Failed to initialize otx2_dev rc=%d", rc);
+ goto error;
+ }
+
+ /* Get SSO and SSOW MSIX rsrc cnt */
+ otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
+ rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
+ if (rc < 0) {
+ otx2_err("Unable to get free rsrc count");
+ goto otx2_dev_uninit;
+ }
+ otx2_sso_dbg("SSO %d SSOW %d NPA %d provisioned", rsrc_cnt->sso,
+ rsrc_cnt->ssow, rsrc_cnt->npa);
+
+ dev->max_event_ports = RTE_MIN(rsrc_cnt->ssow, OTX2_SSO_MAX_VHWS);
+ dev->max_event_queues = RTE_MIN(rsrc_cnt->sso, OTX2_SSO_MAX_VHGRP);
+ /* Grab the NPA LF if required */
+ rc = otx2_npa_lf_init(pci_dev, dev);
+ if (rc < 0) {
+ otx2_err("Unable to init NPA lf. It might not be provisioned");
+ goto otx2_dev_uninit;
+ }
+
+ dev->drv_inited = true;
+ dev->is_timeout_deq = 0;
+ dev->min_dequeue_timeout_ns = USEC2NSEC(1);
+ dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
+ dev->max_num_events = -1;
+ dev->nb_event_queues = 0;
+ dev->nb_event_ports = 0;
+
+ if (!dev->max_event_ports || !dev->max_event_queues) {
+ otx2_err("Not enough eventdev resource queues=%d ports=%d",
+ dev->max_event_queues, dev->max_event_ports);
+ rc = -ENODEV;
+ goto otx2_npa_lf_uninit;
+ }
+
+ otx2_sso_pf_func_set(dev->pf_func);
+ otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
+ event_dev->data->name, dev->max_event_queues,
+ dev->max_event_ports);
+
+
return 0;
+
+otx2_npa_lf_uninit:
+ otx2_npa_lf_fini();
+otx2_dev_uninit:
+ otx2_dev_fini(pci_dev, dev);
+error:
+ return rc;
}
int
otx2_sso_fini(struct rte_eventdev *event_dev)
{
- RTE_SET_USED(event_dev);
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ struct rte_pci_device *pci_dev;
+
/* For secondary processes, nothing to be done */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
+
+ if (!dev->drv_inited)
+ goto dev_fini;
+
+ dev->drv_inited = false;
+ otx2_npa_lf_fini();
+
+dev_fini:
+ if (otx2_npa_lf_active(dev)) {
+ otx2_info("Common resource in use by other devices");
+ return -EAGAIN;
+ }
+
+ otx2_dev_fini(pci_dev, dev);
+
return 0;
}
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 1df233293..4427efcad 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -8,6 +8,8 @@
#include <rte_eventdev.h>
#include "otx2_common.h"
+#include "otx2_dev.h"
+#include "otx2_mempool.h"
#define EVENTDEV_NAME_OCTEONTX2_PMD otx2_eventdev
@@ -16,8 +18,26 @@
#define OTX2_SSO_MAX_VHGRP RTE_EVENT_MAX_QUEUES_PER_DEV
#define OTX2_SSO_MAX_VHWS (UINT8_MAX)
+#define USEC2NSEC(__us) ((__us) * 1E3)
+
struct otx2_sso_evdev {
-};
+ OTX2_DEV; /* Base class */
+ uint8_t max_event_queues;
+ uint8_t max_event_ports;
+ uint8_t is_timeout_deq;
+ uint8_t nb_event_queues;
+ uint8_t nb_event_ports;
+ uint32_t deq_tmo_ns;
+ uint32_t min_dequeue_timeout_ns;
+ uint32_t max_dequeue_timeout_ns;
+ int32_t max_num_events;
+} __rte_cache_aligned;
+
+static inline struct otx2_sso_evdev *
+sso_pmd_priv(const struct rte_eventdev *event_dev)
+{
+ return event_dev->data->dev_private;
+}
/* Init and Fini API's */
int otx2_sso_init(struct rte_eventdev *event_dev);
--
2.22.0
* [dpdk-dev] [PATCH v2 03/44] event/octeontx2: add device capabilities function
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 01/44] event/octeontx2: add build infra and device probe pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 02/44] event/octeontx2: add init and fini for octeontx2 SSO object pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 04/44] event/octeontx2: add device configure function pbhagavatula
` (40 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add the info_get function to return device capability details such as the
supported queue count, flow limits, and prioritization levels.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 31 ++++++++++++++++++++++++++++
1 file changed, 31 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 08ae820b9..839a5ccaa 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -12,6 +12,36 @@
#include "otx2_evdev.h"
+static void
+otx2_sso_info_get(struct rte_eventdev *event_dev,
+ struct rte_event_dev_info *dev_info)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+
+ dev_info->driver_name = RTE_STR(EVENTDEV_NAME_OCTEONTX2_PMD);
+ dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
+ dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
+ dev_info->max_event_queues = dev->max_event_queues;
+ dev_info->max_event_queue_flows = (1ULL << 20);
+ dev_info->max_event_queue_priority_levels = 8;
+ dev_info->max_event_priority_levels = 1;
+ dev_info->max_event_ports = dev->max_event_ports;
+ dev_info->max_event_port_dequeue_depth = 1;
+ dev_info->max_event_port_enqueue_depth = 1;
+ dev_info->max_num_events = dev->max_num_events;
+ dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
+ RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
+ RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+ RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
+ RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
+ RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+}
+
+/* Initialize and register event driver with DPDK Application */
+static struct rte_eventdev_ops otx2_sso_ops = {
+ .dev_infos_get = otx2_sso_info_get,
+};
+
static int
otx2_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
{
@@ -51,6 +81,7 @@ otx2_sso_init(struct rte_eventdev *event_dev)
struct otx2_sso_evdev *dev;
int rc;
+ event_dev->dev_ops = &otx2_sso_ops;
/* For secondary processes, the primary has done all the work */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
--
2.22.0
* [dpdk-dev] [PATCH v2 04/44] event/octeontx2: add device configure function
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (2 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 03/44] event/octeontx2: add device capabilities function pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 05/44] event/octeontx2: add event queue config functions pbhagavatula
` (39 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add the device configure function that attaches the requested number of
SSO GWS (event ports) and GGRP (event queues) LFs to the PF.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 258 +++++++++++++++++++++++++++
drivers/event/octeontx2/otx2_evdev.h | 10 ++
2 files changed, 268 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 839a5ccaa..00996578a 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -37,9 +37,267 @@ otx2_sso_info_get(struct rte_eventdev *event_dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE;
}
+static int
+sso_hw_lf_cfg(struct otx2_mbox *mbox, enum otx2_sso_lf_type type,
+ uint16_t nb_lf, uint8_t attach)
+{
+ if (attach) {
+ struct rsrc_attach_req *req;
+
+ req = otx2_mbox_alloc_msg_attach_resources(mbox);
+ switch (type) {
+ case SSO_LF_GGRP:
+ req->sso = nb_lf;
+ break;
+ case SSO_LF_GWS:
+ req->ssow = nb_lf;
+ break;
+ default:
+ return -EINVAL;
+ }
+ req->modify = true;
+ if (otx2_mbox_process(mbox) < 0)
+ return -EIO;
+ } else {
+ struct rsrc_detach_req *req;
+
+ req = otx2_mbox_alloc_msg_detach_resources(mbox);
+ switch (type) {
+ case SSO_LF_GGRP:
+ req->sso = true;
+ break;
+ case SSO_LF_GWS:
+ req->ssow = true;
+ break;
+ default:
+ return -EINVAL;
+ }
+ req->partial = true;
+ if (otx2_mbox_process(mbox) < 0)
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int
+sso_lf_cfg(struct otx2_sso_evdev *dev, struct otx2_mbox *mbox,
+ enum otx2_sso_lf_type type, uint16_t nb_lf, uint8_t alloc)
+{
+ void *rsp;
+ int rc;
+
+ if (alloc) {
+ switch (type) {
+ case SSO_LF_GGRP:
+ {
+ struct sso_lf_alloc_req *req_ggrp;
+ req_ggrp = otx2_mbox_alloc_msg_sso_lf_alloc(mbox);
+ req_ggrp->hwgrps = nb_lf;
+ }
+ break;
+ case SSO_LF_GWS:
+ {
+ struct ssow_lf_alloc_req *req_hws;
+ req_hws = otx2_mbox_alloc_msg_ssow_lf_alloc(mbox);
+ req_hws->hws = nb_lf;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+ } else {
+ switch (type) {
+ case SSO_LF_GGRP:
+ {
+ struct sso_lf_free_req *req_ggrp;
+ req_ggrp = otx2_mbox_alloc_msg_sso_lf_free(mbox);
+ req_ggrp->hwgrps = nb_lf;
+ }
+ break;
+ case SSO_LF_GWS:
+ {
+ struct ssow_lf_free_req *req_hws;
+ req_hws = otx2_mbox_alloc_msg_ssow_lf_free(mbox);
+ req_hws->hws = nb_lf;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+ }
+
+ rc = otx2_mbox_process_msg_tmo(mbox, (void **)&rsp, ~0);
+ if (rc < 0)
+ return rc;
+
+ if (alloc && type == SSO_LF_GGRP) {
+ struct sso_lf_alloc_rsp *rsp_ggrp = rsp;
+
+ dev->xaq_buf_size = rsp_ggrp->xaq_buf_size;
+ dev->xae_waes = rsp_ggrp->xaq_wq_entries;
+ dev->iue = rsp_ggrp->in_unit_entries;
+ }
+
+ return 0;
+}
+
+static int
+sso_configure_ports(const struct rte_eventdev *event_dev)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ uint8_t nb_lf;
+ int rc;
+
+ otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
+
+ nb_lf = dev->nb_event_ports;
+ /* Ask AF to attach required LFs. */
+ rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
+ if (rc < 0) {
+ otx2_err("Failed to attach SSO GWS LF");
+ return -ENODEV;
+ }
+
+ if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
+ sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
+ otx2_err("Failed to init SSO GWS LF");
+ return -ENODEV;
+ }
+
+ return rc;
+}
+
+static int
+sso_configure_queues(const struct rte_eventdev *event_dev)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ uint8_t nb_lf;
+ int rc;
+
+ otx2_sso_dbg("Configuring event queues %d", dev->nb_event_queues);
+
+ nb_lf = dev->nb_event_queues;
+ /* Ask AF to attach required LFs. */
+ rc = sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, true);
+ if (rc < 0) {
+ otx2_err("Failed to attach SSO GGRP LF");
+ return -ENODEV;
+ }
+
+ if (sso_lf_cfg(dev, mbox, SSO_LF_GGRP, nb_lf, true) < 0) {
+ sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, false);
+ otx2_err("Failed to init SSO GGRP LF");
+ return -ENODEV;
+ }
+
+ return rc;
+}
+
+static void
+sso_lf_teardown(struct otx2_sso_evdev *dev,
+ enum otx2_sso_lf_type lf_type)
+{
+ uint8_t nb_lf;
+
+ switch (lf_type) {
+ case SSO_LF_GGRP:
+ nb_lf = dev->nb_event_queues;
+ break;
+ case SSO_LF_GWS:
+ nb_lf = dev->nb_event_ports;
+ break;
+ default:
+ return;
+ }
+
+ sso_lf_cfg(dev, dev->mbox, lf_type, nb_lf, false);
+ sso_hw_lf_cfg(dev->mbox, lf_type, nb_lf, false);
+}
+
+static int
+otx2_sso_configure(const struct rte_eventdev *event_dev)
+{
+ struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ uint32_t deq_tmo_ns;
+ int rc;
+
+ sso_func_trace();
+ deq_tmo_ns = conf->dequeue_timeout_ns;
+
+ if (deq_tmo_ns == 0)
+ deq_tmo_ns = dev->min_dequeue_timeout_ns;
+
+ if (deq_tmo_ns < dev->min_dequeue_timeout_ns ||
+ deq_tmo_ns > dev->max_dequeue_timeout_ns) {
+ otx2_err("Unsupported dequeue timeout requested");
+ return -EINVAL;
+ }
+
+ if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
+ dev->is_timeout_deq = 1;
+
+ dev->deq_tmo_ns = deq_tmo_ns;
+
+ if (conf->nb_event_ports > dev->max_event_ports ||
+ conf->nb_event_queues > dev->max_event_queues) {
+ otx2_err("Unsupported event queues/ports requested");
+ return -EINVAL;
+ }
+
+ if (conf->nb_event_port_dequeue_depth > 1) {
+ otx2_err("Unsupported event port deq depth requested");
+ return -EINVAL;
+ }
+
+ if (conf->nb_event_port_enqueue_depth > 1) {
+ otx2_err("Unsupported event port enq depth requested");
+ return -EINVAL;
+ }
+
+ if (dev->nb_event_queues) {
+ /* Fini any previously configured queues. */
+ sso_lf_teardown(dev, SSO_LF_GGRP);
+ }
+ if (dev->nb_event_ports) {
+ /* Fini any previously configured ports. */
+ sso_lf_teardown(dev, SSO_LF_GWS);
+ }
+
+ dev->nb_event_queues = conf->nb_event_queues;
+ dev->nb_event_ports = conf->nb_event_ports;
+
+ if (sso_configure_ports(event_dev)) {
+ otx2_err("Failed to configure event ports");
+ return -ENODEV;
+ }
+
+ if (sso_configure_queues(event_dev) < 0) {
+ otx2_err("Failed to configure event queues");
+ rc = -ENODEV;
+ goto teardown_hws;
+ }
+
+ dev->configured = 1;
+ rte_mb();
+
+ return 0;
+
+teardown_hws:
+ sso_lf_teardown(dev, SSO_LF_GWS);
+ dev->nb_event_queues = 0;
+ dev->nb_event_ports = 0;
+ dev->configured = 0;
+ return rc;
+}
+
/* Initialize and register event driver with DPDK Application */
static struct rte_eventdev_ops otx2_sso_ops = {
.dev_infos_get = otx2_sso_info_get,
+ .dev_configure = otx2_sso_configure,
};
static int
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 4427efcad..feb4ed6f4 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -20,6 +20,11 @@
#define USEC2NSEC(__us) ((__us) * 1E3)
+enum otx2_sso_lf_type {
+ SSO_LF_GGRP,
+ SSO_LF_GWS
+};
+
struct otx2_sso_evdev {
OTX2_DEV; /* Base class */
uint8_t max_event_queues;
@@ -27,10 +32,15 @@ struct otx2_sso_evdev {
uint8_t is_timeout_deq;
uint8_t nb_event_queues;
uint8_t nb_event_ports;
+ uint8_t configured;
uint32_t deq_tmo_ns;
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
+ /* HW const */
+ uint32_t xae_waes;
+ uint32_t xaq_buf_size;
+ uint32_t iue;
} __rte_cache_aligned;
static inline struct otx2_sso_evdev *
--
2.22.0
* [dpdk-dev] [PATCH v2 05/44] event/octeontx2: add event queue config functions
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (3 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 04/44] event/octeontx2: add device configure function pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 06/44] event/octeontx2: allocate event inflight buffers pbhagavatula
` (38 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add default config, setup, and release functions for event queues, i.e.
SSO GGRPs.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 50 ++++++++++++++++++++++++++++
drivers/event/octeontx2/otx2_evdev.h | 17 ++++++++++
2 files changed, 67 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 00996578a..2290598d0 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -142,6 +142,13 @@ sso_lf_cfg(struct otx2_sso_evdev *dev, struct otx2_mbox *mbox,
return 0;
}
+static void
+otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(queue_id);
+}
+
static int
sso_configure_ports(const struct rte_eventdev *event_dev)
{
@@ -294,10 +301,53 @@ otx2_sso_configure(const struct rte_eventdev *event_dev)
return rc;
}
+static void
+otx2_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
+ struct rte_event_queue_conf *queue_conf)
+{
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(queue_id);
+
+ queue_conf->nb_atomic_flows = (1ULL << 20);
+ queue_conf->nb_atomic_order_sequences = (1ULL << 20);
+ queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+ queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+}
+
+static int
+otx2_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
+ const struct rte_event_queue_conf *queue_conf)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct sso_grp_priority *req;
+ int rc;
+
+ sso_func_trace("Queue=%d prio=%d", queue_id, queue_conf->priority);
+
+ req = otx2_mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
+ req->grp = queue_id;
+ req->weight = 0xFF;
+ req->affinity = 0xFF;
+ /* Normalize <0-255> to <0-7> */
+ req->priority = queue_conf->priority / 32;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to set priority queue=%d", queue_id);
+ return rc;
+ }
+
+ return 0;
+}
+
/* Initialize and register event driver with DPDK Application */
static struct rte_eventdev_ops otx2_sso_ops = {
.dev_infos_get = otx2_sso_info_get,
.dev_configure = otx2_sso_configure,
+ .queue_def_conf = otx2_sso_queue_def_conf,
+ .queue_setup = otx2_sso_queue_setup,
+ .queue_release = otx2_sso_queue_release,
};
static int
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index feb4ed6f4..b46402771 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -18,6 +18,23 @@
#define OTX2_SSO_MAX_VHGRP RTE_EVENT_MAX_QUEUES_PER_DEV
#define OTX2_SSO_MAX_VHWS (UINT8_MAX)
+/* SSO LF register offsets (BAR2) */
+#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull)
+#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull)
+
+#define SSO_LF_GGRP_QCTL (0x20ull)
+#define SSO_LF_GGRP_EXE_DIS (0x80ull)
+#define SSO_LF_GGRP_INT (0x100ull)
+#define SSO_LF_GGRP_INT_W1S (0x108ull)
+#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull)
+#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull)
+#define SSO_LF_GGRP_INT_THR (0x140ull)
+#define SSO_LF_GGRP_INT_CNT (0x180ull)
+#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull)
+#define SSO_LF_GGRP_AQ_CNT (0x1c0ull)
+#define SSO_LF_GGRP_AQ_THR (0x1e0ull)
+#define SSO_LF_GGRP_MISC_CNT (0x200ull)
+
#define USEC2NSEC(__us) ((__us) * 1E3)
enum otx2_sso_lf_type {
--
2.22.0
^ permalink raw reply [flat|nested] 48+ messages in thread
* [dpdk-dev] [PATCH v2 06/44] event/octeontx2: allocate event inflight buffers
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (4 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 05/44] event/octeontx2: add event queue config functions pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 07/44] event/octeontx2: add devargs for inflight buffer count pbhagavatula
` (37 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Allocate buffers in DRAM to hold in-flight events.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/Makefile | 2 +-
drivers/event/octeontx2/otx2_evdev.c | 116 ++++++++++++++++++++++++++-
drivers/event/octeontx2/otx2_evdev.h | 8 ++
3 files changed, 124 insertions(+), 2 deletions(-)
diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index 36f0b2b12..b3c3beccb 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -33,7 +33,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c
LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci
-LDLIBS += -lrte_eventdev
+LDLIBS += -lrte_mempool -lrte_eventdev -lrte_mbuf
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 2290598d0..fc4dbda0a 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -8,6 +8,7 @@
#include <rte_common.h>
#include <rte_eal.h>
#include <rte_eventdev_pmd_pci.h>
+#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
#include "otx2_evdev.h"
@@ -203,6 +204,107 @@ sso_configure_queues(const struct rte_eventdev *event_dev)
return rc;
}
+static int
+sso_xaq_allocate(struct otx2_sso_evdev *dev)
+{
+ const struct rte_memzone *mz;
+ struct npa_aura_s *aura;
+ static int reconfig_cnt;
+ char pool_name[RTE_MEMZONE_NAMESIZE];
+ uint32_t xaq_cnt;
+ int rc;
+
+ if (dev->xaq_pool)
+ rte_mempool_free(dev->xaq_pool);
+
+ /*
+ * Allocate memory for Add work backpressure.
+ */
+ mz = rte_memzone_lookup(OTX2_SSO_FC_NAME);
+ if (mz == NULL)
+ mz = rte_memzone_reserve_aligned(OTX2_SSO_FC_NAME,
+ OTX2_ALIGN +
+ sizeof(struct npa_aura_s),
+ rte_socket_id(),
+ RTE_MEMZONE_IOVA_CONTIG,
+ OTX2_ALIGN);
+ if (mz == NULL) {
+ otx2_err("Failed to allocate mem for fcmem");
+ return -ENOMEM;
+ }
+
+ dev->fc_iova = mz->iova;
+ dev->fc_mem = mz->addr;
+
+ aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem + OTX2_ALIGN);
+ memset(aura, 0, sizeof(struct npa_aura_s));
+
+ aura->fc_ena = 1;
+ aura->fc_addr = dev->fc_iova;
+ aura->fc_hyst_bits = 0; /* Store count on all updates */
+
+ /* Taken from HRM 14.3.3(4) */
+ xaq_cnt = dev->nb_event_queues * OTX2_SSO_XAQ_CACHE_CNT;
+ xaq_cnt += (dev->iue / dev->xae_waes) +
+ (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
+
+ otx2_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
+ /* Setup XAQ based on number of nb queues. */
+ snprintf(pool_name, 30, "otx2_xaq_buf_pool_%d", reconfig_cnt);
+ dev->xaq_pool = (void *)rte_mempool_create_empty(pool_name,
+ xaq_cnt, dev->xaq_buf_size, 0, 0,
+ rte_socket_id(), 0);
+
+ if (dev->xaq_pool == NULL) {
+ otx2_err("Unable to create empty mempool.");
+ rte_memzone_free(mz);
+ return -ENOMEM;
+ }
+
+ rc = rte_mempool_set_ops_byname(dev->xaq_pool,
+ rte_mbuf_platform_mempool_ops(), aura);
+ if (rc != 0) {
+ otx2_err("Unable to set xaqpool ops.");
+ goto alloc_fail;
+ }
+
+ rc = rte_mempool_populate_default(dev->xaq_pool);
+ if (rc < 0) {
+ otx2_err("Unable to set populate xaqpool.");
+ goto alloc_fail;
+ }
+ reconfig_cnt++;
+ /* When SW does addwork (enqueue) check if there is space in XAQ by
+ * comparing fc_addr above against the xaq_lmt calculated below.
+ * There should be a minimum headroom (OTX2_SSO_XAQ_SLACK / 2) for SSO
+ * to request XAQ to cache them even before enqueue is called.
+ */
+ dev->xaq_lmt = xaq_cnt - (OTX2_SSO_XAQ_SLACK / 2 *
+ dev->nb_event_queues);
+ dev->nb_xaq_cfg = xaq_cnt;
+
+ return 0;
+alloc_fail:
+ rte_mempool_free(dev->xaq_pool);
+ rte_memzone_free(mz);
+ return rc;
+}
+
+static int
+sso_ggrp_alloc_xaq(struct otx2_sso_evdev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct sso_hw_setconfig *req;
+
+ otx2_sso_dbg("Configuring XAQ for GGRPs");
+ req = otx2_mbox_alloc_msg_sso_hw_setconfig(mbox);
+ req->npa_pf_func = otx2_npa_pf_func_get();
+ req->npa_aura_id = npa_lf_aura_handle_to_aura(dev->xaq_pool->pool_id);
+ req->hwgrps = dev->nb_event_queues;
+
+ return otx2_mbox_process(mbox);
+}
+
static void
sso_lf_teardown(struct otx2_sso_evdev *dev,
enum otx2_sso_lf_type lf_type)
@@ -288,11 +390,23 @@ otx2_sso_configure(const struct rte_eventdev *event_dev)
goto teardown_hws;
}
+ if (sso_xaq_allocate(dev) < 0) {
+ rc = -ENOMEM;
+ goto teardown_hwggrp;
+ }
+
+ rc = sso_ggrp_alloc_xaq(dev);
+ if (rc < 0) {
+ otx2_err("Failed to alloc xaq to ggrp %d", rc);
+ goto teardown_hwggrp;
+ }
+
dev->configured = 1;
rte_mb();
return 0;
-
+teardown_hwggrp:
+ sso_lf_teardown(dev, SSO_LF_GGRP);
teardown_hws:
sso_lf_teardown(dev, SSO_LF_GWS);
dev->nb_event_queues = 0;
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index b46402771..375640bca 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -17,6 +17,9 @@
#define OTX2_SSO_MAX_VHGRP RTE_EVENT_MAX_QUEUES_PER_DEV
#define OTX2_SSO_MAX_VHWS (UINT8_MAX)
+#define OTX2_SSO_FC_NAME "otx2_evdev_xaq_fc"
+#define OTX2_SSO_XAQ_SLACK (8)
+#define OTX2_SSO_XAQ_CACHE_CNT (0x7)
/* SSO LF register offsets (BAR2) */
#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull)
@@ -54,6 +57,11 @@ struct otx2_sso_evdev {
uint32_t min_dequeue_timeout_ns;
uint32_t max_dequeue_timeout_ns;
int32_t max_num_events;
+ uint64_t *fc_mem;
+ uint64_t xaq_lmt;
+ uint64_t nb_xaq_cfg;
+ rte_iova_t fc_iova;
+ struct rte_mempool *xaq_pool;
/* HW const */
uint32_t xae_waes;
uint32_t xaq_buf_size;
--
2.22.0
^ permalink raw reply [flat|nested] 48+ messages in thread
* [dpdk-dev] [PATCH v2 07/44] event/octeontx2: add devargs for inflight buffer count
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (5 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 06/44] event/octeontx2: allocate event inflight buffers pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 08/44] event/octeontx2: add event port config functions pbhagavatula
` (36 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
The number of events for an *open system* event device is specified
as -1 as per the eventdev specification.
Since OCTEON TX2 SSO in-flight events are limited only by DRAM size, the
xae_cnt devargs parameter is introduced to provide an upper limit for
in-flight events.
Example:
--dev "0002:0e:00.0,xae_cnt=8192"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/event/octeontx2/Makefile | 2 +-
drivers/event/octeontx2/otx2_evdev.c | 28 +++++++++++++++++++++++++++-
drivers/event/octeontx2/otx2_evdev.h | 11 +++++++++++
3 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index b3c3beccb..58853e1b9 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -32,7 +32,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c
-LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci
+LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci -lrte_kvargs
LDLIBS += -lrte_mempool -lrte_eventdev -lrte_mbuf
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index fc4dbda0a..94c97fc9e 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -8,6 +8,7 @@
#include <rte_common.h>
#include <rte_eal.h>
#include <rte_eventdev_pmd_pci.h>
+#include <rte_kvargs.h>
#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
@@ -245,7 +246,10 @@ sso_xaq_allocate(struct otx2_sso_evdev *dev)
/* Taken from HRM 14.3.3(4) */
xaq_cnt = dev->nb_event_queues * OTX2_SSO_XAQ_CACHE_CNT;
- xaq_cnt += (dev->iue / dev->xae_waes) +
+ if (dev->xae_cnt)
+ xaq_cnt += dev->xae_cnt / dev->xae_waes;
+ else
+ xaq_cnt += (dev->iue / dev->xae_waes) +
(OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
otx2_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
@@ -464,6 +468,25 @@ static struct rte_eventdev_ops otx2_sso_ops = {
.queue_release = otx2_sso_queue_release,
};
+#define OTX2_SSO_XAE_CNT "xae_cnt"
+
+static void
+sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
+{
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ return;
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, OTX2_SSO_XAE_CNT, &parse_kvargs_value,
+ &dev->xae_cnt);
+
+ rte_kvargs_free(kvlist);
+}
+
static int
otx2_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
{
@@ -553,6 +576,8 @@ otx2_sso_init(struct rte_eventdev *event_dev)
goto otx2_npa_lf_uninit;
}
+ sso_parse_devargs(dev, pci_dev->device.devargs);
+
otx2_sso_pf_func_set(dev->pf_func);
otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
event_dev->data->name, dev->max_event_queues,
@@ -601,3 +626,4 @@ otx2_sso_fini(struct rte_eventdev *event_dev)
RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>");
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 375640bca..acc8b6b3e 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -62,6 +62,8 @@ struct otx2_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ /* Dev args */
+ uint32_t xae_cnt;
/* HW const */
uint32_t xae_waes;
uint32_t xaq_buf_size;
@@ -74,6 +76,15 @@ sso_pmd_priv(const struct rte_eventdev *event_dev)
return event_dev->data->dev_private;
}
+static inline int
+parse_kvargs_value(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ *(uint32_t *)opaque = (uint32_t)atoi(value);
+ return 0;
+}
+
/* Init and Fini API's */
int otx2_sso_init(struct rte_eventdev *event_dev);
int otx2_sso_fini(struct rte_eventdev *event_dev);
--
2.22.0
^ permalink raw reply [flat|nested] 48+ messages in thread
* [dpdk-dev] [PATCH v2 08/44] event/octeontx2: add event port config functions
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (6 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 07/44] event/octeontx2: add devargs for inflight buffer count pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 09/44] event/octeontx2: support linking queues to ports pbhagavatula
` (35 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add default config, setup and release functions for event ports,
i.e. SSO GWS.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 110 ++++++++++++++++++++++++++-
drivers/event/octeontx2/otx2_evdev.h | 59 ++++++++++++++
2 files changed, 168 insertions(+), 1 deletion(-)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 94c97fc9e..a6bf861fb 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -144,6 +144,12 @@ sso_lf_cfg(struct otx2_sso_evdev *dev, struct otx2_mbox *mbox,
return 0;
}
+static void
+otx2_sso_port_release(void *port)
+{
+ rte_free(port);
+}
+
static void
otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
{
@@ -151,13 +157,24 @@ otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
RTE_SET_USED(queue_id);
}
+static void
+sso_set_port_ops(struct otx2_ssogws *ws, uintptr_t base)
+{
+ ws->tag_op = base + SSOW_LF_GWS_TAG;
+ ws->wqp_op = base + SSOW_LF_GWS_WQP;
+ ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK;
+ ws->swtp_op = base + SSOW_LF_GWS_SWTP;
+ ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+ ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
static int
sso_configure_ports(const struct rte_eventdev *event_dev)
{
struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
struct otx2_mbox *mbox = dev->mbox;
uint8_t nb_lf;
- int rc;
+ int i, rc;
otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
@@ -175,6 +192,40 @@ sso_configure_ports(const struct rte_eventdev *event_dev)
return -ENODEV;
}
+ for (i = 0; i < nb_lf; i++) {
+ struct otx2_ssogws *ws;
+ uintptr_t base;
+
+ /* Free memory prior to re-allocation if needed */
+ if (event_dev->data->ports[i] != NULL) {
+ ws = event_dev->data->ports[i];
+ rte_free(ws);
+ ws = NULL;
+ }
+
+ /* Allocate event port memory */
+ ws = rte_zmalloc_socket("otx2_sso_ws",
+ sizeof(struct otx2_ssogws),
+ RTE_CACHE_LINE_SIZE,
+ event_dev->data->socket_id);
+ if (ws == NULL) {
+ otx2_err("Failed to alloc memory for port=%d", i);
+ rc = -ENOMEM;
+ break;
+ }
+
+ ws->port = i;
+ base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | i << 12);
+ sso_set_port_ops(ws, base);
+
+ event_dev->data->ports[i] = ws;
+ }
+
+ if (rc < 0) {
+ sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
+ sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
+ }
+
return rc;
}
@@ -459,6 +510,60 @@ otx2_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
return 0;
}
+static void
+otx2_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+ struct rte_event_port_conf *port_conf)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+
+ RTE_SET_USED(port_id);
+ port_conf->new_event_threshold = dev->max_num_events;
+ port_conf->dequeue_depth = 1;
+ port_conf->enqueue_depth = 1;
+}
+
+static int
+otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+ const struct rte_event_port_conf *port_conf)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ uintptr_t grps_base[OTX2_SSO_MAX_VHGRP] = {0};
+ uint64_t val;
+ uint16_t q;
+
+ sso_func_trace("Port=%d", port_id);
+ RTE_SET_USED(port_conf);
+
+ if (event_dev->data->ports[port_id] == NULL) {
+ otx2_err("Invalid port Id %d", port_id);
+ return -EINVAL;
+ }
+
+ for (q = 0; q < dev->nb_event_queues; q++) {
+ grps_base[q] = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | q << 12);
+ if (grps_base[q] == 0) {
+ otx2_err("Failed to get grp[%d] base addr", q);
+ return -EINVAL;
+ }
+ }
+
+ /* Set get_work timeout for HWS */
+ val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+
+ struct otx2_ssogws *ws = event_dev->data->ports[port_id];
+ uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
+
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+ otx2_write64(val, base + SSOW_LF_GWS_NW_TIM);
+
+ otx2_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
+
+ return 0;
+}
+
/* Initialize and register event driver with DPDK Application */
static struct rte_eventdev_ops otx2_sso_ops = {
.dev_infos_get = otx2_sso_info_get,
@@ -466,6 +571,9 @@ static struct rte_eventdev_ops otx2_sso_ops = {
.queue_def_conf = otx2_sso_queue_def_conf,
.queue_setup = otx2_sso_queue_setup,
.queue_release = otx2_sso_queue_release,
+ .port_def_conf = otx2_sso_port_def_conf,
+ .port_setup = otx2_sso_port_setup,
+ .port_release = otx2_sso_port_release,
};
#define OTX2_SSO_XAE_CNT "xae_cnt"
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index acc8b6b3e..3f4931ff1 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -38,6 +38,42 @@
#define SSO_LF_GGRP_AQ_THR (0x1e0ull)
#define SSO_LF_GGRP_MISC_CNT (0x200ull)
+/* SSOW LF register offsets (BAR2) */
+#define SSOW_LF_GWS_LINKS (0x10ull)
+#define SSOW_LF_GWS_PENDWQP (0x40ull)
+#define SSOW_LF_GWS_PENDSTATE (0x50ull)
+#define SSOW_LF_GWS_NW_TIM (0x70ull)
+#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull)
+#define SSOW_LF_GWS_INT (0x100ull)
+#define SSOW_LF_GWS_INT_W1S (0x108ull)
+#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull)
+#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull)
+#define SSOW_LF_GWS_TAG (0x200ull)
+#define SSOW_LF_GWS_WQP (0x210ull)
+#define SSOW_LF_GWS_SWTP (0x220ull)
+#define SSOW_LF_GWS_PENDTAG (0x230ull)
+#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull)
+#define SSOW_LF_GWS_OP_GET_WORK (0x600ull)
+#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull)
+#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull)
+#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull)
+#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull)
+#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull)
+#define SSOW_LF_GWS_OP_DESCHED (0x880ull)
+#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull)
+#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull)
+#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull)
+#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull)
+#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull)
+#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull)
+#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull)
+#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull)
+#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull)
+#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull)
+
+#define OTX2_SSOW_GET_BASE_ADDR(_GW) ((_GW) - SSOW_LF_GWS_OP_GET_WORK)
+
+#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us) * 1E3)
enum otx2_sso_lf_type {
@@ -70,6 +106,29 @@ struct otx2_sso_evdev {
uint32_t iue;
} __rte_cache_aligned;
+#define OTX2_SSOGWS_OPS \
+ /* WS ops */ \
+ uintptr_t getwrk_op; \
+ uintptr_t tag_op; \
+ uintptr_t wqp_op; \
+ uintptr_t swtp_op; \
+ uintptr_t swtag_norm_op; \
+ uintptr_t swtag_desched_op; \
+ uint8_t cur_tt; \
+ uint8_t cur_grp
+
+/* Event port aka GWS */
+struct otx2_ssogws {
+ /* Get Work Fastpath data */
+ OTX2_SSOGWS_OPS;
+ uint8_t swtag_req;
+ uint8_t port;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
+} __rte_cache_aligned;
+
static inline struct otx2_sso_evdev *
sso_pmd_priv(const struct rte_eventdev *event_dev)
{
--
2.22.0
^ permalink raw reply [flat|nested] 48+ messages in thread
* [dpdk-dev] [PATCH v2 09/44] event/octeontx2: support linking queues to ports
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (7 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 08/44] event/octeontx2: add event port config functions pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 10/44] event/octeontx2: support dequeue timeout tick conversion pbhagavatula
` (34 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Links between queues and ports are controlled by setting/clearing GGRP
membership in SSOW_LF_GWS_GRPMSK_CHG.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 73 ++++++++++++++++++++++++++++
1 file changed, 73 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index a6bf861fb..53e68902a 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -39,6 +39,60 @@ otx2_sso_info_get(struct rte_eventdev *event_dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE;
}
+static void
+sso_port_link_modify(struct otx2_ssogws *ws, uint8_t queue, uint8_t enable)
+{
+ uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
+ uint64_t val;
+
+ val = queue;
+ val |= 0ULL << 12; /* SET 0 */
+ val |= 0x8000800080000000; /* Dont modify rest of the masks */
+ val |= (uint64_t)enable << 14; /* Enable/Disable Membership. */
+
+ otx2_write64(val, base + SSOW_LF_GWS_GRPMSK_CHG);
+}
+
+static int
+otx2_sso_port_link(struct rte_eventdev *event_dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links)
+{
+ uint8_t port_id = 0;
+ uint16_t link;
+
+ RTE_SET_USED(event_dev);
+ RTE_SET_USED(priorities);
+ for (link = 0; link < nb_links; link++) {
+ struct otx2_ssogws *ws = port;
+
+ port_id = ws->port;
+ sso_port_link_modify(ws, queues[link], true);
+ }
+ sso_func_trace("Port=%d nb_links=%d", port_id, nb_links);
+
+ return (int)nb_links;
+}
+
+static int
+otx2_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks)
+{
+ uint8_t port_id = 0;
+ uint16_t unlink;
+
+ RTE_SET_USED(event_dev);
+ for (unlink = 0; unlink < nb_unlinks; unlink++) {
+ struct otx2_ssogws *ws = port;
+
+ port_id = ws->port;
+ sso_port_link_modify(ws, queues[unlink], false);
+ }
+ sso_func_trace("Port=%d nb_unlinks=%d", port_id, nb_unlinks);
+
+ return (int)nb_unlinks;
+}
+
static int
sso_hw_lf_cfg(struct otx2_mbox *mbox, enum otx2_sso_lf_type type,
uint16_t nb_lf, uint8_t attach)
@@ -157,6 +211,21 @@ otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
RTE_SET_USED(queue_id);
}
+static void
+sso_clr_links(const struct rte_eventdev *event_dev)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ int i, j;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ struct otx2_ssogws *ws;
+
+ ws = event_dev->data->ports[i];
+ for (j = 0; j < dev->nb_event_queues; j++)
+ sso_port_link_modify(ws, j, false);
+ }
+}
+
static void
sso_set_port_ops(struct otx2_ssogws *ws, uintptr_t base)
{
@@ -450,6 +519,8 @@ otx2_sso_configure(const struct rte_eventdev *event_dev)
goto teardown_hwggrp;
}
+ /* Clear any prior port-queue mapping. */
+ sso_clr_links(event_dev);
rc = sso_ggrp_alloc_xaq(dev);
if (rc < 0) {
otx2_err("Failed to alloc xaq to ggrp %d", rc);
@@ -574,6 +645,8 @@ static struct rte_eventdev_ops otx2_sso_ops = {
.port_def_conf = otx2_sso_port_def_conf,
.port_setup = otx2_sso_port_setup,
.port_release = otx2_sso_port_release,
+ .port_link = otx2_sso_port_link,
+ .port_unlink = otx2_sso_port_unlink,
};
#define OTX2_SSO_XAE_CNT "xae_cnt"
--
2.22.0
^ permalink raw reply [flat|nested] 48+ messages in thread
* [dpdk-dev] [PATCH v2 10/44] event/octeontx2: support dequeue timeout tick conversion
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (8 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 09/44] event/octeontx2: support linking queues to ports pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 11/44] event/octeontx2: add SSO GWS and GGRP IRQ handlers pbhagavatula
` (33 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a function to convert the dequeue timeout from nanoseconds to timer
ticks.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 11 +++++++++++
drivers/event/octeontx2/otx2_evdev.h | 1 +
2 files changed, 12 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 53e68902a..ef6693bc5 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -635,6 +635,16 @@ otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
return 0;
}
+static int
+otx2_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
+ uint64_t *tmo_ticks)
+{
+ RTE_SET_USED(event_dev);
+ *tmo_ticks = NSEC2TICK(ns, rte_get_timer_hz());
+
+ return 0;
+}
+
/* Initialize and register event driver with DPDK Application */
static struct rte_eventdev_ops otx2_sso_ops = {
.dev_infos_get = otx2_sso_info_get,
@@ -647,6 +657,7 @@ static struct rte_eventdev_ops otx2_sso_ops = {
.port_release = otx2_sso_port_release,
.port_link = otx2_sso_port_link,
.port_unlink = otx2_sso_port_unlink,
+ .timeout_ticks = otx2_sso_timeout_ticks,
};
#define OTX2_SSO_XAE_CNT "xae_cnt"
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 3f4931ff1..1a9de1b86 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -75,6 +75,7 @@
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us) * 1E3)
+#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
enum otx2_sso_lf_type {
SSO_LF_GGRP,
--
2.22.0
^ permalink raw reply [flat|nested] 48+ messages in thread
* [dpdk-dev] [PATCH v2 11/44] event/octeontx2: add SSO GWS and GGRP IRQ handlers
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (9 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 10/44] event/octeontx2: support dequeue timeout tick conversion pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 12/44] event/octeontx2: add register dump functions pbhagavatula
` (32 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Register and implement SSO GWS and GGRP IRQ handlers for error
interrupts.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/event/octeontx2/Makefile | 1 +
drivers/event/octeontx2/meson.build | 4 +-
drivers/event/octeontx2/otx2_evdev.c | 38 +++++
drivers/event/octeontx2/otx2_evdev.h | 6 +
drivers/event/octeontx2/otx2_evdev_irq.c | 175 +++++++++++++++++++++++
5 files changed, 223 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/octeontx2/otx2_evdev_irq.c
diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index 58853e1b9..4f09c1fc8 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -31,6 +31,7 @@ LIBABIVER := 1
#
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c
LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci -lrte_kvargs
LDLIBS += -lrte_mempool -lrte_eventdev -lrte_mbuf
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
index 3fc96421d..5aa8113bd 100644
--- a/drivers/event/octeontx2/meson.build
+++ b/drivers/event/octeontx2/meson.build
@@ -2,7 +2,9 @@
# Copyright(C) 2019 Marvell International Ltd.
#
-sources = files('otx2_evdev.c')
+sources = files('otx2_evdev.c',
+ 'otx2_evdev_irq.c',
+ )
allow_experimental_apis = true
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index ef6693bc5..b92bf0407 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -13,6 +13,29 @@
#include <rte_pci.h>
#include "otx2_evdev.h"
+#include "otx2_irq.h"
+
+static inline int
+sso_get_msix_offsets(const struct rte_eventdev *event_dev)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ uint8_t nb_ports = dev->nb_event_ports;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct msix_offset_rsp *msix_rsp;
+ int i, rc;
+
+ /* Get SSO and SSOW MSIX vector offsets */
+ otx2_mbox_alloc_msg_msix_offset(mbox);
+ rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
+
+ for (i = 0; i < nb_ports; i++)
+ dev->ssow_msixoff[i] = msix_rsp->ssow_msixoff[i];
+
+ for (i = 0; i < dev->nb_event_queues; i++)
+ dev->sso_msixoff[i] = msix_rsp->sso_msixoff[i];
+
+ return rc;
+}
static void
otx2_sso_info_get(struct rte_eventdev *event_dev,
@@ -491,6 +514,9 @@ otx2_sso_configure(const struct rte_eventdev *event_dev)
return -EINVAL;
}
+ if (dev->configured)
+ sso_unregister_irqs(event_dev);
+
if (dev->nb_event_queues) {
/* Finit any previous queues. */
sso_lf_teardown(dev, SSO_LF_GGRP);
@@ -527,6 +553,18 @@ otx2_sso_configure(const struct rte_eventdev *event_dev)
goto teardown_hwggrp;
}
+ rc = sso_get_msix_offsets(event_dev);
+ if (rc < 0) {
+ otx2_err("Failed to get msix offsets %d", rc);
+ goto teardown_hwggrp;
+ }
+
+ rc = sso_register_irqs(event_dev);
+ if (rc < 0) {
+ otx2_err("Failed to register irq %d", rc);
+ goto teardown_hwggrp;
+ }
+
dev->configured = 1;
rte_mb();
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 1a9de1b86..e1d2dcc69 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -105,6 +105,9 @@ struct otx2_sso_evdev {
uint32_t xae_waes;
uint32_t xaq_buf_size;
uint32_t iue;
+ /* MSIX offsets */
+ uint16_t sso_msixoff[OTX2_SSO_MAX_VHGRP];
+ uint16_t ssow_msixoff[OTX2_SSO_MAX_VHWS];
} __rte_cache_aligned;
#define OTX2_SSOGWS_OPS \
@@ -148,5 +151,8 @@ parse_kvargs_value(const char *key, const char *value, void *opaque)
/* Init and Fini API's */
int otx2_sso_init(struct rte_eventdev *event_dev);
int otx2_sso_fini(struct rte_eventdev *event_dev);
+/* IRQ handlers */
+int sso_register_irqs(const struct rte_eventdev *event_dev);
+void sso_unregister_irqs(const struct rte_eventdev *event_dev);
#endif /* __OTX2_EVDEV_H__ */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
new file mode 100644
index 000000000..7df21cc24
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_evdev.h"
+
+static void
+sso_lf_irq(void *param)
+{
+ uintptr_t base = (uintptr_t)param;
+ uint64_t intr;
+ uint8_t ggrp;
+
+ ggrp = (base >> 12) & 0xFF;
+
+ intr = otx2_read64(base + SSO_LF_GGRP_INT);
+ if (intr == 0)
+ return;
+
+ otx2_err("GGRP %d GGRP_INT=0x%" PRIx64 "", ggrp, intr);
+
+ /* Clear interrupt */
+ otx2_write64(intr, base + SSO_LF_GGRP_INT);
+}
+
+static int
+sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
+ uintptr_t base)
+{
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ int rc, vec;
+
+ vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C);
+ /* Set used interrupt vectors */
+ rc = otx2_register_irq(handle, sso_lf_irq, (void *)base, vec);
+ /* Enable hw interrupt */
+ otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1S);
+
+ return rc;
+}
+
+static void
+ssow_lf_irq(void *param)
+{
+ uintptr_t base = (uintptr_t)param;
+ uint8_t gws = (base >> 12) & 0xFF;
+ uint64_t intr;
+
+ intr = otx2_read64(base + SSOW_LF_GWS_INT);
+ if (intr == 0)
+ return;
+
+ otx2_err("GWS %d GWS_INT=0x%" PRIx64 "", gws, intr);
+
+ /* Clear interrupt */
+ otx2_write64(intr, base + SSOW_LF_GWS_INT);
+}
+
+static int
+ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
+ uintptr_t base)
+{
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ int rc, vec;
+
+ vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C);
+ /* Set used interrupt vectors */
+ rc = otx2_register_irq(handle, ssow_lf_irq, (void *)base, vec);
+ /* Enable hw interrupt */
+ otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1S);
+
+ return rc;
+}
+
+static void
+sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
+ uint16_t ggrp_msixoff, uintptr_t base)
+{
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ int vec;
+
+ vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C);
+ otx2_unregister_irq(handle, sso_lf_irq, (void *)base, vec);
+}
+
+static void
+ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
+ uint16_t gws_msixoff, uintptr_t base)
+{
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ int vec;
+
+ vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C);
+ otx2_unregister_irq(handle, ssow_lf_irq, (void *)base, vec);
+}
+
+int
+sso_register_irqs(const struct rte_eventdev *event_dev)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ int i, rc = -EINVAL;
+ uint8_t nb_ports;
+
+ nb_ports = dev->nb_event_ports;
+
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ if (dev->sso_msixoff[i] == MSIX_VECTOR_INVALID) {
+ otx2_err("Invalid SSOLF MSIX offset[%d] vector: 0x%x",
+ i, dev->sso_msixoff[i]);
+ goto fail;
+ }
+ }
+
+ for (i = 0; i < nb_ports; i++) {
+ if (dev->ssow_msixoff[i] == MSIX_VECTOR_INVALID) {
+ otx2_err("Invalid SSOWLF MSIX offset[%d] vector: 0x%x",
+ i, dev->ssow_msixoff[i]);
+ goto fail;
+ }
+ }
+
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
+ i << 12);
+ rc = sso_lf_register_irq(event_dev, dev->sso_msixoff[i], base);
+ }
+
+ for (i = 0; i < nb_ports; i++) {
+ uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 |
+ i << 12);
+ rc = ssow_lf_register_irq(event_dev, dev->ssow_msixoff[i],
+ base);
+ }
+
+fail:
+ return rc;
+}
+
+void
+sso_unregister_irqs(const struct rte_eventdev *event_dev)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ uint8_t nb_ports;
+ int i;
+
+ nb_ports = dev->nb_event_ports;
+
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
+ i << 12);
+ sso_lf_unregister_irq(event_dev, dev->sso_msixoff[i], base);
+ }
+
+ for (i = 0; i < nb_ports; i++) {
+ uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 |
+ i << 12);
+ ssow_lf_unregister_irq(event_dev, dev->ssow_msixoff[i], base);
+ }
+}
--
2.22.0
* [dpdk-dev] [PATCH v2 12/44] event/octeontx2: add register dump functions
@ 2019-06-28 7:49 ` pbhagavatula
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO GWS and GGRP register dump functions to aid debugging.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 68 ++++++++++++++++++++++++++++
1 file changed, 68 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index b92bf0407..6c37c5b5c 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -683,6 +683,72 @@ otx2_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
return 0;
}
+static void
+ssogws_dump(struct otx2_ssogws *ws, FILE *f)
+{
+ uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
+
+ fprintf(f, "SSOW_LF_GWS Base addr 0x%" PRIx64 "\n", (uint64_t)base);
+ fprintf(f, "SSOW_LF_GWS_LINKS 0x%" PRIx64 "\n",
+ otx2_read64(base + SSOW_LF_GWS_LINKS));
+ fprintf(f, "SSOW_LF_GWS_PENDWQP 0x%" PRIx64 "\n",
+ otx2_read64(base + SSOW_LF_GWS_PENDWQP));
+ fprintf(f, "SSOW_LF_GWS_PENDSTATE 0x%" PRIx64 "\n",
+ otx2_read64(base + SSOW_LF_GWS_PENDSTATE));
+ fprintf(f, "SSOW_LF_GWS_NW_TIM 0x%" PRIx64 "\n",
+ otx2_read64(base + SSOW_LF_GWS_NW_TIM));
+ fprintf(f, "SSOW_LF_GWS_TAG 0x%" PRIx64 "\n",
+ otx2_read64(base + SSOW_LF_GWS_TAG));
+ fprintf(f, "SSOW_LF_GWS_WQP 0x%" PRIx64 "\n",
+ otx2_read64(base + SSOW_LF_GWS_WQP));
+ fprintf(f, "SSOW_LF_GWS_SWTP 0x%" PRIx64 "\n",
+ otx2_read64(base + SSOW_LF_GWS_SWTP));
+ fprintf(f, "SSOW_LF_GWS_PENDTAG 0x%" PRIx64 "\n",
+ otx2_read64(base + SSOW_LF_GWS_PENDTAG));
+}
+
+static void
+ssoggrp_dump(uintptr_t base, FILE *f)
+{
+ fprintf(f, "SSO_LF_GGRP Base addr 0x%" PRIx64 "\n", (uint64_t)base);
+ fprintf(f, "SSO_LF_GGRP_QCTL 0x%" PRIx64 "\n",
+ otx2_read64(base + SSO_LF_GGRP_QCTL));
+ fprintf(f, "SSO_LF_GGRP_XAQ_CNT 0x%" PRIx64 "\n",
+ otx2_read64(base + SSO_LF_GGRP_XAQ_CNT));
+ fprintf(f, "SSO_LF_GGRP_INT_THR 0x%" PRIx64 "\n",
+ otx2_read64(base + SSO_LF_GGRP_INT_THR));
+ fprintf(f, "SSO_LF_GGRP_INT_CNT 0x%" PRIx64 "\n",
+ otx2_read64(base + SSO_LF_GGRP_INT_CNT));
+ fprintf(f, "SSO_LF_GGRP_AQ_CNT 0x%" PRIx64 "\n",
+ otx2_read64(base + SSO_LF_GGRP_AQ_CNT));
+ fprintf(f, "SSO_LF_GGRP_AQ_THR 0x%" PRIx64 "\n",
+ otx2_read64(base + SSO_LF_GGRP_AQ_THR));
+ fprintf(f, "SSO_LF_GGRP_MISC_CNT 0x%" PRIx64 "\n",
+ otx2_read64(base + SSO_LF_GGRP_MISC_CNT));
+}
+
+static void
+otx2_sso_dump(struct rte_eventdev *event_dev, FILE *f)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ uint8_t queue;
+ uint8_t port;
+
+ /* Dump SSOW registers */
+ for (port = 0; port < dev->nb_event_ports; port++) {
+ fprintf(f, "[%s]SSO single workslot[%d] dump\n",
+ __func__, port);
+ ssogws_dump(event_dev->data->ports[port], f);
+ }
+
+ /* Dump SSO registers */
+ for (queue = 0; queue < dev->nb_event_queues; queue++) {
+ fprintf(f, "[%s]SSO group[%d] dump\n", __func__, queue);
+ struct otx2_ssogws *ws = event_dev->data->ports[0];
+ ssoggrp_dump(ws->grps_base[queue], f);
+ }
+}
+
/* Initialize and register event driver with DPDK Application */
static struct rte_eventdev_ops otx2_sso_ops = {
.dev_infos_get = otx2_sso_info_get,
@@ -696,6 +762,8 @@ static struct rte_eventdev_ops otx2_sso_ops = {
.port_link = otx2_sso_port_link,
.port_unlink = otx2_sso_port_unlink,
.timeout_ticks = otx2_sso_timeout_ticks,
+
+ .dump = otx2_sso_dump,
};
#define OTX2_SSO_XAE_CNT "xae_cnt"
--
2.22.0
* [dpdk-dev] [PATCH v2 13/44] event/octeontx2: add xstats support
@ 2019-06-28 7:49 ` pbhagavatula
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh, Nithin Dabilpuram
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add support for retrieving statistics from SSO GWS and GGRP.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 5 +
drivers/event/octeontx2/otx2_evdev_stats.h | 242 +++++++++++++++++++++
2 files changed, 247 insertions(+)
create mode 100644 drivers/event/octeontx2/otx2_evdev_stats.h
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 6c37c5b5c..51220f447 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -12,6 +12,7 @@
#include <rte_mbuf_pool_ops.h>
#include <rte_pci.h>
+#include "otx2_evdev_stats.h"
#include "otx2_evdev.h"
#include "otx2_irq.h"
@@ -763,6 +764,10 @@ static struct rte_eventdev_ops otx2_sso_ops = {
.port_unlink = otx2_sso_port_unlink,
.timeout_ticks = otx2_sso_timeout_ticks,
+ .xstats_get = otx2_sso_xstats_get,
+ .xstats_reset = otx2_sso_xstats_reset,
+ .xstats_get_names = otx2_sso_xstats_get_names,
+
.dump = otx2_sso_dump,
};
diff --git a/drivers/event/octeontx2/otx2_evdev_stats.h b/drivers/event/octeontx2/otx2_evdev_stats.h
new file mode 100644
index 000000000..df76a1333
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_evdev_stats.h
@@ -0,0 +1,242 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_EVDEV_STATS_H__
+#define __OTX2_EVDEV_STATS_H__
+
+#include "otx2_evdev.h"
+
+struct otx2_sso_xstats_name {
+ const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
+ const size_t offset;
+ const uint64_t mask;
+ const uint8_t shift;
+ uint64_t reset_snap[OTX2_SSO_MAX_VHGRP];
+};
+
+static struct otx2_sso_xstats_name sso_hws_xstats[] = {
+ {"last_grp_serviced", offsetof(struct sso_hws_stats, arbitration),
+ 0x3FF, 0, {0} },
+ {"affinity_arbitration_credits",
+ offsetof(struct sso_hws_stats, arbitration),
+ 0xF, 16, {0} },
+};
+
+static struct otx2_sso_xstats_name sso_grp_xstats[] = {
+ {"wrk_sched", offsetof(struct sso_grp_stats, ws_pc), ~0x0, 0,
+ {0} },
+ {"xaq_dram", offsetof(struct sso_grp_stats, ext_pc), ~0x0,
+ 0, {0} },
+ {"add_wrk", offsetof(struct sso_grp_stats, wa_pc), ~0x0, 0,
+ {0} },
+ {"tag_switch_req", offsetof(struct sso_grp_stats, ts_pc), ~0x0, 0,
+ {0} },
+ {"desched_req", offsetof(struct sso_grp_stats, ds_pc), ~0x0, 0,
+ {0} },
+ {"desched_wrk", offsetof(struct sso_grp_stats, dq_pc), ~0x0, 0,
+ {0} },
+ {"xaq_cached", offsetof(struct sso_grp_stats, aw_status), 0x3,
+ 0, {0} },
+ {"work_inflight", offsetof(struct sso_grp_stats, aw_status), 0x3F,
+ 16, {0} },
+ {"inuse_pages", offsetof(struct sso_grp_stats, page_cnt),
+ 0xFFFFFFFF, 0, {0} },
+};
+
+#define OTX2_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats)
+#define OTX2_SSO_NUM_GRP_XSTATS RTE_DIM(sso_grp_xstats)
+
+#define OTX2_SSO_NUM_XSTATS (OTX2_SSO_NUM_HWS_XSTATS + OTX2_SSO_NUM_GRP_XSTATS)
+
+static int
+otx2_sso_xstats_get(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
+ const unsigned int ids[], uint64_t values[], unsigned int n)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ struct otx2_sso_xstats_name *xstats;
+ struct otx2_sso_xstats_name *xstat;
+ struct otx2_mbox *mbox = dev->mbox;
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int i;
+ uint64_t value;
+ void *req_rsp;
+ int rc;
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ break;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ goto invalid_value;
+
+ xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hws_xstats;
+
+ req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
+ ((struct sso_info_req *)req_rsp)->hws = queue_port_id;
+ rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
+ if (rc < 0)
+ goto invalid_value;
+
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ goto invalid_value;
+
+ xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
+ start_offset = OTX2_SSO_NUM_HWS_XSTATS;
+ xstats = sso_grp_xstats;
+
+ req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox);
+ ((struct sso_info_req *)req_rsp)->grp = queue_port_id;
+ rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
+ if (rc < 0)
+ goto invalid_value;
+
+ break;
+ default:
+ otx2_err("Invalid mode received");
+ goto invalid_value;
+ }
+
+ for (i = 0; i < n && i < xstats_mode_count; i++) {
+ xstat = &xstats[ids[i] - start_offset];
+ value = *(uint64_t *)((char *)req_rsp + xstat->offset);
+ value = (value >> xstat->shift) & xstat->mask;
+
+ values[i] = value;
+ values[i] -= xstat->reset_snap[queue_port_id];
+ }
+
+ return i;
+invalid_value:
+ return -EINVAL;
+}
+
+static int
+otx2_sso_xstats_reset(struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ int16_t queue_port_id, const uint32_t ids[], uint32_t n)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ struct otx2_sso_xstats_name *xstats;
+ struct otx2_sso_xstats_name *xstat;
+ struct otx2_mbox *mbox = dev->mbox;
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int i;
+ uint64_t value;
+ void *req_rsp;
+ int rc;
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ return 0;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ goto invalid_value;
+
+ xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
+ xstats = sso_hws_xstats;
+
+ req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
+ ((struct sso_info_req *)req_rsp)->hws = queue_port_id;
+ rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
+ if (rc < 0)
+ goto invalid_value;
+
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ goto invalid_value;
+
+ xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
+ start_offset = OTX2_SSO_NUM_HWS_XSTATS;
+ xstats = sso_grp_xstats;
+
+ req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox);
+ ((struct sso_info_req *)req_rsp)->grp = queue_port_id;
+ rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
+ if (rc < 0)
+ goto invalid_value;
+
+ break;
+ default:
+ otx2_err("Invalid mode received");
+ goto invalid_value;
+ }
+
+ for (i = 0; i < n && i < xstats_mode_count; i++) {
+ xstat = &xstats[ids[i] - start_offset];
+ value = *(uint64_t *)((char *)req_rsp + xstat->offset);
+ value = (value >> xstat->shift) & xstat->mask;
+
+ xstat->reset_snap[queue_port_id] = value;
+ }
+ return i;
+invalid_value:
+ return -EINVAL;
+}
+
+static int
+otx2_sso_xstats_get_names(const struct rte_eventdev *event_dev,
+ enum rte_event_dev_xstats_mode mode,
+ uint8_t queue_port_id,
+ struct rte_event_dev_xstats_name *xstats_names,
+ unsigned int *ids, unsigned int size)
+{
+ struct rte_event_dev_xstats_name xstats_names_copy[OTX2_SSO_NUM_XSTATS];
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ uint32_t xstats_mode_count = 0;
+ uint32_t start_offset = 0;
+ unsigned int xidx = 0;
+ unsigned int i;
+
+ for (i = 0; i < OTX2_SSO_NUM_HWS_XSTATS; i++) {
+ snprintf(xstats_names_copy[i].name,
+ sizeof(xstats_names_copy[i].name), "%s",
+ sso_hws_xstats[i].name);
+ }
+
+ for (; i < OTX2_SSO_NUM_XSTATS; i++) {
+ snprintf(xstats_names_copy[i].name,
+ sizeof(xstats_names_copy[i].name), "%s",
+ sso_grp_xstats[i - OTX2_SSO_NUM_HWS_XSTATS].name);
+ }
+
+ switch (mode) {
+ case RTE_EVENT_DEV_XSTATS_DEVICE:
+ break;
+ case RTE_EVENT_DEV_XSTATS_PORT:
+ if (queue_port_id >= (signed int)dev->nb_event_ports)
+ break;
+ xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
+ break;
+ case RTE_EVENT_DEV_XSTATS_QUEUE:
+ if (queue_port_id >= (signed int)dev->nb_event_queues)
+ break;
+ xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
+ start_offset = OTX2_SSO_NUM_HWS_XSTATS;
+ break;
+ default:
+ otx2_err("Invalid mode received");
+ return -EINVAL;
+ }
+
+ if (xstats_mode_count > size || !ids || !xstats_names)
+ return xstats_mode_count;
+
+ for (i = 0; i < xstats_mode_count; i++) {
+ xidx = i + start_offset;
+ strncpy(xstats_names[i].name, xstats_names_copy[xidx].name,
+ sizeof(xstats_names[i].name));
+ ids[i] = xidx;
+ }
+
+ return i;
+}
+
+#endif /* __OTX2_EVDEV_STATS_H__ */
--
2.22.0
* [dpdk-dev] [PATCH v2 14/44] event/octeontx2: add SSO HW device operations
@ 2019-06-28 7:49 ` pbhagavatula
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO HW device operations used for enqueue/dequeue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/event/octeontx2/Makefile | 1 +
drivers/event/octeontx2/meson.build | 3 +-
drivers/event/octeontx2/otx2_evdev.h | 22 +++
drivers/event/octeontx2/otx2_worker.c | 5 +
drivers/event/octeontx2/otx2_worker.h | 187 ++++++++++++++++++++++++++
5 files changed, 217 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/octeontx2/otx2_worker.c
create mode 100644 drivers/event/octeontx2/otx2_worker.h
diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index 4f09c1fc8..a3de5ca23 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -30,6 +30,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
index 5aa8113bd..1d2080b6d 100644
--- a/drivers/event/octeontx2/meson.build
+++ b/drivers/event/octeontx2/meson.build
@@ -2,7 +2,8 @@
# Copyright(C) 2019 Marvell International Ltd.
#
-sources = files('otx2_evdev.c',
+sources = files('otx2_worker.c',
+ 'otx2_evdev.c',
'otx2_evdev_irq.c',
)
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index e1d2dcc69..cccce1dea 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -82,6 +82,28 @@ enum otx2_sso_lf_type {
SSO_LF_GWS
};
+union otx2_sso_event {
+ uint64_t get_work0;
+ struct {
+ uint32_t flow_id:20;
+ uint32_t sub_event_type:8;
+ uint32_t event_type:4;
+ uint8_t op:2;
+ uint8_t rsvd:4;
+ uint8_t sched_type:2;
+ uint8_t queue_id;
+ uint8_t priority;
+ uint8_t impl_opaque;
+ };
+} __rte_aligned(64);
+
+enum {
+ SSO_SYNC_ORDERED,
+ SSO_SYNC_ATOMIC,
+ SSO_SYNC_UNTAGGED,
+ SSO_SYNC_EMPTY
+};
+
struct otx2_sso_evdev {
OTX2_DEV; /* Base class */
uint8_t max_event_queues;
diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c
new file mode 100644
index 000000000..83f535d05
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_worker.c
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_worker.h"
diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
new file mode 100644
index 000000000..f06ff064e
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_worker.h
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_WORKER_H__
+#define __OTX2_WORKER_H__
+
+#include <rte_common.h>
+#include <rte_branch_prediction.h>
+
+#include <otx2_common.h>
+#include "otx2_evdev.h"
+
+/* SSO Operations */
+
+static __rte_always_inline uint16_t
+otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev)
+{
+ union otx2_sso_event event;
+ uint64_t get_work1;
+
+ otx2_write64(BIT_ULL(16) | /* wait for work. */
+ 1, /* Use Mask set 0. */
+ ws->getwrk_op);
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ " prfm pldl1keep, [%[wqp]] \n"
+ : [tag] "=&r" (event.get_work0),
+ [wqp] "=&r" (get_work1)
+ : [tag_loc] "r" (ws->tag_op),
+ [wqp_loc] "r" (ws->wqp_op)
+ );
+#else
+ event.get_work0 = otx2_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & event.get_work0)
+ event.get_work0 = otx2_read64(ws->tag_op);
+
+ get_work1 = otx2_read64(ws->wqp_op);
+ rte_prefetch0((const void *)get_work1);
+#endif
+
+ event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
+ (event.get_work0 & (0x3FFull << 36)) << 4 |
+ (event.get_work0 & 0xffffffff);
+ ws->cur_tt = event.sched_type;
+ ws->cur_grp = event.queue_id;
+
+
+ ev->event = event.get_work0;
+ ev->u64 = get_work1;
+
+ return !!get_work1;
+}
+
+/* Used in cleaning up workslot. */
+static __rte_always_inline uint16_t
+otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev)
+{
+ union otx2_sso_event event;
+ uint64_t get_work1;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: dmb ld \n"
+ " prfm pldl1keep, [%[wqp]] \n"
+ : [tag] "=&r" (event.get_work0),
+ [wqp] "=&r" (get_work1)
+ : [tag_loc] "r" (ws->tag_op),
+ [wqp_loc] "r" (ws->wqp_op)
+ );
+#else
+ event.get_work0 = otx2_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & event.get_work0)
+ event.get_work0 = otx2_read64(ws->tag_op);
+
+ get_work1 = otx2_read64(ws->wqp_op);
+ rte_prefetch0((const void *)get_work1);
+#endif
+
+ event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
+ (event.get_work0 & (0x3FFull << 36)) << 4 |
+ (event.get_work0 & 0xffffffff);
+ ws->cur_tt = event.sched_type;
+ ws->cur_grp = event.queue_id;
+
+ ev->event = event.get_work0;
+ ev->u64 = get_work1;
+
+ return !!get_work1;
+}
+
+static __rte_always_inline void
+otx2_ssogws_add_work(struct otx2_ssogws *ws, const uint64_t event_ptr,
+ const uint32_t tag, const uint8_t new_tt,
+ const uint16_t grp)
+{
+ uint64_t add_work0;
+
+ add_work0 = tag | ((uint64_t)(new_tt) << 32);
+ otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]);
+}
+
+static __rte_always_inline void
+otx2_ssogws_swtag_desched(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt,
+ uint16_t grp)
+{
+ uint64_t val;
+
+ val = tag | ((uint64_t)(new_tt & 0x3) << 32) | ((uint64_t)grp << 34);
+ otx2_write64(val, ws->swtag_desched_op);
+}
+
+static __rte_always_inline void
+otx2_ssogws_swtag_norm(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt)
+{
+ uint64_t val;
+
+ val = tag | ((uint64_t)(new_tt & 0x3) << 32);
+ otx2_write64(val, ws->swtag_norm_op);
+}
+
+static __rte_always_inline void
+otx2_ssogws_swtag_untag(struct otx2_ssogws *ws)
+{
+ otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
+ SSOW_LF_GWS_OP_SWTAG_UNTAG);
+ ws->cur_tt = SSO_SYNC_UNTAGGED;
+}
+
+static __rte_always_inline void
+otx2_ssogws_swtag_flush(struct otx2_ssogws *ws)
+{
+ otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
+ SSOW_LF_GWS_OP_SWTAG_FLUSH);
+ ws->cur_tt = SSO_SYNC_EMPTY;
+}
+
+static __rte_always_inline void
+otx2_ssogws_desched(struct otx2_ssogws *ws)
+{
+ otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
+ SSOW_LF_GWS_OP_DESCHED);
+}
+
+static __rte_always_inline void
+otx2_ssogws_swtag_wait(struct otx2_ssogws *ws)
+{
+#ifdef RTE_ARCH_ARM64
+ uint64_t swtp;
+
+ asm volatile (
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " cbz %[swtb], done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[swtb], [%[swtp_loc]] \n"
+ " cbnz %[swtb], rty%= \n"
+ "done%=: \n"
+ : [swtb] "=&r" (swtp)
+ : [swtp_loc] "r" (ws->swtp_op)
+ );
+#else
+ /* Wait for the SWTAG/SWTAG_FULL operation */
+ while (otx2_read64(ws->swtp_op))
+ ;
+#endif
+}
+
+#endif /* __OTX2_WORKER_H__ */
--
2.22.0
* [dpdk-dev] [PATCH v2 15/44] event/octeontx2: add worker enqueue functions
@ 2019-06-28 7:49 ` pbhagavatula
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add worker event enqueue functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.h | 8 ++
drivers/event/octeontx2/otx2_worker.c | 136 ++++++++++++++++++++++++++
2 files changed, 144 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index cccce1dea..4f2fd33df 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -170,6 +170,14 @@ parse_kvargs_value(const char *key, const char *value, void *opaque)
return 0;
}
+uint16_t otx2_ssogws_enq(void *port, const struct rte_event *ev);
+uint16_t otx2_ssogws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events);
+
/* Init and Fini API's */
int otx2_sso_init(struct rte_eventdev *event_dev);
int otx2_sso_fini(struct rte_eventdev *event_dev);
diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c
index 83f535d05..044c5f132 100644
--- a/drivers/event/octeontx2/otx2_worker.c
+++ b/drivers/event/octeontx2/otx2_worker.c
@@ -3,3 +3,139 @@
*/
#include "otx2_worker.h"
+
+static __rte_noinline uint8_t
+otx2_ssogws_new_event(struct otx2_ssogws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ otx2_ssogws_add_work(ws, event_ptr, tag, new_tt, grp);
+
+ return 1;
+}
+
+static __rte_always_inline void
+otx2_ssogws_fwd_swtag(struct otx2_ssogws *ws, const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint8_t cur_tt = ws->cur_tt;
+
+ /* 96XX model
+ * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED
+ *
+ * SSO_SYNC_ORDERED norm norm untag
+ * SSO_SYNC_ATOMIC norm norm untag
+ * SSO_SYNC_UNTAGGED norm norm NOOP
+ */
+
+ if (new_tt == SSO_SYNC_UNTAGGED) {
+ if (cur_tt != SSO_SYNC_UNTAGGED)
+ otx2_ssogws_swtag_untag(ws);
+ } else {
+ otx2_ssogws_swtag_norm(ws, tag, new_tt);
+ }
+
+ ws->swtag_req = 1;
+}
+
+static __rte_always_inline void
+otx2_ssogws_fwd_group(struct otx2_ssogws *ws, const struct rte_event *ev,
+ const uint16_t grp)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+
+ otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
+ SSOW_LF_GWS_OP_UPD_WQP_GRP1);
+ rte_smp_wmb();
+ otx2_ssogws_swtag_desched(ws, tag, new_tt, grp);
+}
+
+static __rte_always_inline void
+otx2_ssogws_forward_event(struct otx2_ssogws *ws, const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed, Use SWTAG to forward the event */
+ if (ws->cur_grp == grp)
+ otx2_ssogws_fwd_swtag(ws, ev);
+ else
+ /*
+ * Group has been changed for group based work pipelining,
+ * Use deschedule/add_work operation to transfer the event to
+ * new group/core
+ */
+ otx2_ssogws_fwd_group(ws, ev, grp);
+}
+
+static __rte_always_inline void
+otx2_ssogws_release_event(struct otx2_ssogws *ws)
+{
+ otx2_ssogws_swtag_flush(ws);
+}
+
+uint16_t __hot
+otx2_ssogws_enq(void *port, const struct rte_event *ev)
+{
+ struct otx2_ssogws *ws = port;
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ rte_smp_mb();
+ return otx2_ssogws_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ otx2_ssogws_forward_event(ws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ otx2_ssogws_release_event(ws);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __hot
+otx2_ssogws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return otx2_ssogws_enq(port, ev);
+}
+
+uint16_t __hot
+otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct otx2_ssogws *ws = port;
+ uint16_t i, rc = 1;
+
+ rte_smp_mb();
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = otx2_ssogws_new_event(ws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __hot
+otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct otx2_ssogws *ws = port;
+
+ RTE_SET_USED(nb_events);
+ otx2_ssogws_forward_event(ws, ev);
+
+ return 1;
+}
--
2.22.0
* [dpdk-dev] [PATCH v2 16/44] event/octeontx2: add worker dequeue functions
@ 2019-06-28 7:49 ` pbhagavatula
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add worker event dequeue functions.
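The timeout variant added by this patch makes one GET_WORK attempt and then retries for up to `timeout_ticks` iterations. That control flow can be sketched independently of the SSO hardware (an illustrative model only — `try_get_work` and `fake_get_work` are stand-ins invented here, not the driver's `otx2_ssogws_get_work`):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the hardware get-work primitive: returns 1 and fills *ev
 * when work is available, 0 otherwise. */
typedef uint16_t (*get_work_fn)(uint64_t *ev);

/* Mirrors the shape of otx2_ssogws_deq_timeout: one attempt up front,
 * then keep retrying until work arrives or timeout_ticks is exhausted. */
static uint16_t deq_timeout(get_work_fn try_get_work, uint64_t *ev,
			    uint64_t timeout_ticks)
{
	uint16_t ret = try_get_work(ev);
	uint64_t iter;

	for (iter = 1; iter < timeout_ticks && ret == 0; iter++)
		ret = try_get_work(ev);

	return ret;
}

/* Demo stub: succeeds on the third call, so the retry loop is exercised. */
static int calls;
static uint16_t fake_get_work(uint64_t *ev)
{
	if (++calls >= 3) {
		*ev = 42;
		return 1;
	}
	return 0;
}
```

Note the `timeout_ticks` here is an iteration budget, not wall-clock time; the real driver converts a nanosecond timeout into such ticks beforehand.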
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.h | 10 +++++
drivers/event/octeontx2/otx2_worker.c | 55 +++++++++++++++++++++++++++
2 files changed, 65 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 4f2fd33df..6f8d709b6 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -178,6 +178,16 @@ uint16_t otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[],
uint16_t otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
uint16_t nb_events);
+uint16_t otx2_ssogws_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t otx2_ssogws_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks);
+uint16_t otx2_ssogws_deq_timeout(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t otx2_ssogws_deq_timeout_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
+
/* Init and Fini API's */
int otx2_sso_init(struct rte_eventdev *event_dev);
int otx2_sso_fini(struct rte_eventdev *event_dev);
diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c
index 044c5f132..edc574673 100644
--- a/drivers/event/octeontx2/otx2_worker.c
+++ b/drivers/event/octeontx2/otx2_worker.c
@@ -81,6 +81,61 @@ otx2_ssogws_release_event(struct otx2_ssogws *ws)
otx2_ssogws_swtag_flush(ws);
}
+uint16_t __hot
+otx2_ssogws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct otx2_ssogws *ws = port;
+
+ RTE_SET_USED(timeout_ticks);
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ otx2_ssogws_swtag_wait(ws);
+ return 1;
+ }
+
+ return otx2_ssogws_get_work(ws, ev);
+}
+
+uint16_t __hot
+otx2_ssogws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events,
+ uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return otx2_ssogws_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __hot
+otx2_ssogws_deq_timeout(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks)
+{
+ struct otx2_ssogws *ws = port;
+ uint16_t ret = 1;
+ uint64_t iter;
+
+ if (ws->swtag_req) {
+ ws->swtag_req = 0;
+ otx2_ssogws_swtag_wait(ws);
+ return ret;
+ }
+
+ ret = otx2_ssogws_get_work(ws, ev);
+ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++)
+ ret = otx2_ssogws_get_work(ws, ev);
+
+ return ret;
+}
+
+uint16_t __hot
+otx2_ssogws_deq_timeout_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return otx2_ssogws_deq_timeout(port, ev, timeout_ticks);
+}
+
uint16_t __hot
otx2_ssogws_enq(void *port, const struct rte_event *ev)
{
--
2.22.0
* [dpdk-dev] [PATCH v2 17/44] event/octeontx2: add octeontx2 SSO dual workslot mode
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (15 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 16/44] event/octeontx2: add worker dequeue functions pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 18/44] event/octeontx2: add SSO dual GWS HW device operations pbhagavatula
` (26 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
The OCTEON TX2 AP core SSO cache contains two entries; each entry caches the
state of a single GWS, a.k.a. event port.
The AP core requests events from the SSO using the following sequence:
1. Write to SSOW_LF_GWS_OP_GET_WORK.
2. Wait for the SSO to complete scheduling by polling on SSOW_LF_GWS_TAG[63].
3. The SSO notifies the core by clearing SSOW_LF_GWS_TAG[63]; if the work is
valid, SSOW_LF_GWS_WQP is non-zero.
The above sequence uses only one in-core cache entry.
In dual workslot mode we use both in-core cache entries by triggering
GET_WORK on the second workslot as soon as the above sequence completes.
This effectively hides the SSO scheduling latency when enough events with
unique flow tags are in flight.
This mode reserves two SSO GWS LFs per event port, effectively doubling
single-core performance.
Dual workslot mode is the default mode of operation on OCTEON TX2.
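The ping-pong alternation described above can be modeled in plain C, independent of the hardware (a minimal sketch; `workslot`, `issue_get_work`, and `dual_get_work` are illustrative names invented here, not the driver's API):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of one in-core SSO cache entry (one GWS). */
struct workslot {
	int get_work_pending;	/* GET_WORK issued, result not yet consumed */
	uint64_t event;		/* event delivered by the scheduler */
};

/* Models the write to SSOW_LF_GWS_OP_GET_WORK on a workslot. */
static void issue_get_work(struct workslot *ws, uint64_t next_event)
{
	ws->get_work_pending = 1;
	ws->event = next_event;
}

/*
 * Consume the result on the current workslot (models polling TAG[63] and
 * reading WQP), then immediately issue GET_WORK on the pair workslot so
 * the scheduler works in the background while this event is processed.
 */
static uint64_t dual_get_work(struct workslot *cur, struct workslot *pair,
			      uint64_t next_event)
{
	uint64_t ev = cur->event;

	cur->get_work_pending = 0;
	issue_get_work(pair, next_event);
	return ev;
}
```

A worker then flips between the two slots on every dequeue (`vws = !vws`), so one slot always has a GET_WORK in flight while the other's event is being processed.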
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 204 ++++++++++++++++++---
drivers/event/octeontx2/otx2_evdev.h | 17 ++
drivers/event/octeontx2/otx2_evdev_irq.c | 4 +-
drivers/event/octeontx2/otx2_evdev_stats.h | 52 +++++-
4 files changed, 242 insertions(+), 35 deletions(-)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 51220f447..16d5e7dfa 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -20,7 +20,7 @@ static inline int
sso_get_msix_offsets(const struct rte_eventdev *event_dev)
{
struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t nb_ports = dev->nb_event_ports;
+ uint8_t nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
struct otx2_mbox *mbox = dev->mbox;
struct msix_offset_rsp *msix_rsp;
int i, rc;
@@ -82,16 +82,26 @@ otx2_sso_port_link(struct rte_eventdev *event_dev, void *port,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links)
{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
uint8_t port_id = 0;
uint16_t link;
- RTE_SET_USED(event_dev);
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++) {
- struct otx2_ssogws *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify(ws, queues[link], true);
+ if (dev->dual_ws) {
+ struct otx2_ssogws_dual *ws = port;
+
+ port_id = ws->port;
+ sso_port_link_modify((struct otx2_ssogws *)
+ &ws->ws_state[0], queues[link], true);
+ sso_port_link_modify((struct otx2_ssogws *)
+ &ws->ws_state[1], queues[link], true);
+ } else {
+ struct otx2_ssogws *ws = port;
+
+ port_id = ws->port;
+ sso_port_link_modify(ws, queues[link], true);
+ }
}
sso_func_trace("Port=%d nb_links=%d", port_id, nb_links);
@@ -102,15 +112,27 @@ static int
otx2_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
uint8_t queues[], uint16_t nb_unlinks)
{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
uint8_t port_id = 0;
uint16_t unlink;
- RTE_SET_USED(event_dev);
for (unlink = 0; unlink < nb_unlinks; unlink++) {
- struct otx2_ssogws *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify(ws, queues[unlink], false);
+ if (dev->dual_ws) {
+ struct otx2_ssogws_dual *ws = port;
+
+ port_id = ws->port;
+ sso_port_link_modify((struct otx2_ssogws *)
+ &ws->ws_state[0], queues[unlink],
+ false);
+ sso_port_link_modify((struct otx2_ssogws *)
+ &ws->ws_state[1], queues[unlink],
+ false);
+ } else {
+ struct otx2_ssogws *ws = port;
+
+ port_id = ws->port;
+ sso_port_link_modify(ws, queues[unlink], false);
+ }
}
sso_func_trace("Port=%d nb_unlinks=%d", port_id, nb_unlinks);
@@ -242,11 +264,23 @@ sso_clr_links(const struct rte_eventdev *event_dev)
int i, j;
for (i = 0; i < dev->nb_event_ports; i++) {
- struct otx2_ssogws *ws;
+ if (dev->dual_ws) {
+ struct otx2_ssogws_dual *ws;
- ws = event_dev->data->ports[i];
- for (j = 0; j < dev->nb_event_queues; j++)
- sso_port_link_modify(ws, j, false);
+ ws = event_dev->data->ports[i];
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ sso_port_link_modify((struct otx2_ssogws *)
+ &ws->ws_state[0], j, false);
+ sso_port_link_modify((struct otx2_ssogws *)
+ &ws->ws_state[1], j, false);
+ }
+ } else {
+ struct otx2_ssogws *ws;
+
+ ws = event_dev->data->ports[i];
+ for (j = 0; j < dev->nb_event_queues; j++)
+ sso_port_link_modify(ws, j, false);
+ }
}
}
@@ -261,6 +295,73 @@ sso_set_port_ops(struct otx2_ssogws *ws, uintptr_t base)
ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
}
+static int
+sso_configure_dual_ports(const struct rte_eventdev *event_dev)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ uint8_t vws = 0;
+ uint8_t nb_lf;
+ int i, rc;
+
+ otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
+
+ nb_lf = dev->nb_event_ports * 2;
+ /* Ask AF to attach required LFs. */
+ rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
+ if (rc < 0) {
+ otx2_err("Failed to attach SSO GWS LF");
+ return -ENODEV;
+ }
+
+ if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
+ sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
+ otx2_err("Failed to init SSO GWS LF");
+ return -ENODEV;
+ }
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ struct otx2_ssogws_dual *ws;
+ uintptr_t base;
+
+ /* Free memory prior to re-allocation if needed */
+ if (event_dev->data->ports[i] != NULL) {
+ ws = event_dev->data->ports[i];
+ rte_free(ws);
+ ws = NULL;
+ }
+
+ /* Allocate event port memory */
+ ws = rte_zmalloc_socket("otx2_sso_ws",
+ sizeof(struct otx2_ssogws_dual),
+ RTE_CACHE_LINE_SIZE,
+ event_dev->data->socket_id);
+ if (ws == NULL) {
+ otx2_err("Failed to alloc memory for port=%d", i);
+ rc = -ENOMEM;
+ break;
+ }
+
+ ws->port = i;
+ base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12);
+ sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[0], base);
+ vws++;
+
+ base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12);
+ sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[1], base);
+ vws++;
+
+ event_dev->data->ports[i] = ws;
+ }
+
+ if (rc < 0) {
+ sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
+ sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
+ }
+
+ return rc;
+}
+
static int
sso_configure_ports(const struct rte_eventdev *event_dev)
{
@@ -465,6 +566,7 @@ sso_lf_teardown(struct otx2_sso_evdev *dev,
break;
case SSO_LF_GWS:
nb_lf = dev->nb_event_ports;
+ nb_lf *= dev->dual_ws ? 2 : 1;
break;
default:
return;
@@ -530,7 +632,12 @@ otx2_sso_configure(const struct rte_eventdev *event_dev)
dev->nb_event_queues = conf->nb_event_queues;
dev->nb_event_ports = conf->nb_event_ports;
- if (sso_configure_ports(event_dev)) {
+ if (dev->dual_ws)
+ rc = sso_configure_dual_ports(event_dev);
+ else
+ rc = sso_configure_ports(event_dev);
+
+ if (rc < 0) {
otx2_err("Failed to configure event ports");
return -ENODEV;
}
@@ -660,14 +767,27 @@ otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
/* Set get_work timeout for HWS */
val = NSEC2USEC(dev->deq_tmo_ns) - 1;
- struct otx2_ssogws *ws = event_dev->data->ports[port_id];
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
-
- rte_memcpy(ws->grps_base, grps_base,
- sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- otx2_write64(val, base + SSOW_LF_GWS_NW_TIM);
+ if (dev->dual_ws) {
+ struct otx2_ssogws_dual *ws = event_dev->data->ports[port_id];
+
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+ otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR(
+ ws->ws_state[0].getwrk_op) + SSOW_LF_GWS_NW_TIM);
+ otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR(
+ ws->ws_state[1].getwrk_op) + SSOW_LF_GWS_NW_TIM);
+ } else {
+ struct otx2_ssogws *ws = event_dev->data->ports[port_id];
+ uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
+
+ rte_memcpy(ws->grps_base, grps_base,
+ sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
+ ws->fc_mem = dev->fc_mem;
+ ws->xaq_lmt = dev->xaq_lmt;
+ otx2_write64(val, base + SSOW_LF_GWS_NW_TIM);
+ }
otx2_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
@@ -735,18 +855,37 @@ otx2_sso_dump(struct rte_eventdev *event_dev, FILE *f)
uint8_t queue;
uint8_t port;
+ fprintf(f, "[%s] SSO running in [%s] mode\n", __func__, dev->dual_ws ?
+ "dual_ws" : "single_ws");
/* Dump SSOW registers */
for (port = 0; port < dev->nb_event_ports; port++) {
- fprintf(f, "[%s]SSO single workslot[%d] dump\n",
- __func__, port);
- ssogws_dump(event_dev->data->ports[port], f);
+ if (dev->dual_ws) {
+ struct otx2_ssogws_dual *ws =
+ event_dev->data->ports[port];
+
+ fprintf(f, "[%s] SSO dual workslot[%d] vws[%d] dump\n",
+ __func__, port, 0);
+ ssogws_dump((struct otx2_ssogws *)&ws->ws_state[0], f);
+ fprintf(f, "[%s]SSO dual workslot[%d] vws[%d] dump\n",
+ __func__, port, 1);
+ ssogws_dump((struct otx2_ssogws *)&ws->ws_state[1], f);
+ } else {
+ fprintf(f, "[%s]SSO single workslot[%d] dump\n",
+ __func__, port);
+ ssogws_dump(event_dev->data->ports[port], f);
+ }
}
/* Dump SSO registers */
for (queue = 0; queue < dev->nb_event_queues; queue++) {
fprintf(f, "[%s]SSO group[%d] dump\n", __func__, queue);
- struct otx2_ssogws *ws = event_dev->data->ports[0];
- ssoggrp_dump(ws->grps_base[queue], f);
+ if (dev->dual_ws) {
+ struct otx2_ssogws_dual *ws = event_dev->data->ports[0];
+ ssoggrp_dump(ws->grps_base[queue], f);
+ } else {
+ struct otx2_ssogws *ws = event_dev->data->ports[0];
+ ssoggrp_dump(ws->grps_base[queue], f);
+ }
}
}
@@ -879,7 +1018,14 @@ otx2_sso_init(struct rte_eventdev *event_dev)
goto otx2_npa_lf_uninit;
}
+ dev->dual_ws = 1;
sso_parse_devargs(dev, pci_dev->device.devargs);
+ if (dev->dual_ws) {
+ otx2_sso_dbg("Using dual workslot mode");
+ dev->max_event_ports = dev->max_event_ports / 2;
+ } else {
+ otx2_sso_dbg("Using single workslot mode");
+ }
otx2_sso_pf_func_set(dev->pf_func);
otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 6f8d709b6..72de9ace5 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -121,6 +121,7 @@ struct otx2_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ uint8_t dual_ws;
/* Dev args */
uint32_t xae_cnt;
/* HW const */
@@ -155,6 +156,22 @@ struct otx2_ssogws {
uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
} __rte_cache_aligned;
+struct otx2_ssogws_state {
+ OTX2_SSOGWS_OPS;
+};
+
+struct otx2_ssogws_dual {
+ /* Get Work Fastpath data */
+ struct otx2_ssogws_state ws_state[2]; /* Ping and Pong */
+ uint8_t swtag_req;
+ uint8_t vws; /* Ping pong bit */
+ uint8_t port;
+ /* Add Work Fastpath data */
+ uint64_t xaq_lmt __rte_cache_aligned;
+ uint64_t *fc_mem;
+ uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
+} __rte_cache_aligned;
+
static inline struct otx2_sso_evdev *
sso_pmd_priv(const struct rte_eventdev *event_dev)
{
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
index 7df21cc24..7379bb17f 100644
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -117,7 +117,7 @@ sso_register_irqs(const struct rte_eventdev *event_dev)
int i, rc = -EINVAL;
uint8_t nb_ports;
- nb_ports = dev->nb_event_ports;
+ nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
for (i = 0; i < dev->nb_event_queues; i++) {
if (dev->sso_msixoff[i] == MSIX_VECTOR_INVALID) {
@@ -159,7 +159,7 @@ sso_unregister_irqs(const struct rte_eventdev *event_dev)
uint8_t nb_ports;
int i;
- nb_ports = dev->nb_event_ports;
+ nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
for (i = 0; i < dev->nb_event_queues; i++) {
uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
diff --git a/drivers/event/octeontx2/otx2_evdev_stats.h b/drivers/event/octeontx2/otx2_evdev_stats.h
index df76a1333..9d7c694ee 100644
--- a/drivers/event/octeontx2/otx2_evdev_stats.h
+++ b/drivers/event/octeontx2/otx2_evdev_stats.h
@@ -76,11 +76,29 @@ otx2_sso_xstats_get(const struct rte_eventdev *event_dev,
xstats = sso_hws_xstats;
req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws = queue_port_id;
+ ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ?
+ 2 * queue_port_id : queue_port_id;
rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
if (rc < 0)
goto invalid_value;
+ if (dev->dual_ws) {
+ for (i = 0; i < n && i < xstats_mode_count; i++) {
+ xstat = &xstats[ids[i] - start_offset];
+ values[i] = *(uint64_t *)
+ ((char *)req_rsp + xstat->offset);
+ values[i] = (values[i] >> xstat->shift) &
+ xstat->mask;
+ }
+
+ req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
+ ((struct sso_info_req *)req_rsp)->hws =
+ (2 * queue_port_id) + 1;
+ rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
+ if (rc < 0)
+ goto invalid_value;
+ }
+
break;
case RTE_EVENT_DEV_XSTATS_QUEUE:
if (queue_port_id >= (signed int)dev->nb_event_queues)
@@ -107,7 +125,11 @@ otx2_sso_xstats_get(const struct rte_eventdev *event_dev,
value = *(uint64_t *)((char *)req_rsp + xstat->offset);
value = (value >> xstat->shift) & xstat->mask;
- values[i] = value;
+ if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws)
+ values[i] += value;
+ else
+ values[i] = value;
+
values[i] -= xstat->reset_snap[queue_port_id];
}
@@ -143,11 +165,30 @@ otx2_sso_xstats_reset(struct rte_eventdev *event_dev,
xstats = sso_hws_xstats;
req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws = queue_port_id;
+ ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ?
+ 2 * queue_port_id : queue_port_id;
rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
if (rc < 0)
goto invalid_value;
+ if (dev->dual_ws) {
+ for (i = 0; i < n && i < xstats_mode_count; i++) {
+ xstat = &xstats[ids[i] - start_offset];
+ xstat->reset_snap[queue_port_id] = *(uint64_t *)
+ ((char *)req_rsp + xstat->offset);
+ xstat->reset_snap[queue_port_id] =
+ (xstat->reset_snap[queue_port_id] >>
+ xstat->shift) & xstat->mask;
+ }
+
+ req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
+ ((struct sso_info_req *)req_rsp)->hws =
+ (2 * queue_port_id) + 1;
+ rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
+ if (rc < 0)
+ goto invalid_value;
+ }
+
break;
case RTE_EVENT_DEV_XSTATS_QUEUE:
if (queue_port_id >= (signed int)dev->nb_event_queues)
@@ -174,7 +215,10 @@ otx2_sso_xstats_reset(struct rte_eventdev *event_dev,
value = *(uint64_t *)((char *)req_rsp + xstat->offset);
value = (value >> xstat->shift) & xstat->mask;
- xstat->reset_snap[queue_port_id] = value;
+ if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws)
+ xstat->reset_snap[queue_port_id] += value;
+ else
+ xstat->reset_snap[queue_port_id] = value;
}
return i;
invalid_value:
--
2.22.0
* [dpdk-dev] [PATCH v2 18/44] event/octeontx2: add SSO dual GWS HW device operations
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (16 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 17/44] event/octeontx2: add octeontx2 SSO dual workslot mode pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 19/44] event/octeontx2: add worker dual GWS enqueue functions pbhagavatula
` (25 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add SSO dual workslot mode GWS HW device operations.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/event/octeontx2/Makefile | 1 +
drivers/event/octeontx2/meson.build | 1 +
drivers/event/octeontx2/otx2_worker_dual.c | 6 ++
drivers/event/octeontx2/otx2_worker_dual.h | 76 ++++++++++++++++++++++
4 files changed, 84 insertions(+)
create mode 100644 drivers/event/octeontx2/otx2_worker_dual.c
create mode 100644 drivers/event/octeontx2/otx2_worker_dual.h
diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index a3de5ca23..dfecda599 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -30,6 +30,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker_dual.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
index 1d2080b6d..c2a5f3e3d 100644
--- a/drivers/event/octeontx2/meson.build
+++ b/drivers/event/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files('otx2_worker.c',
+ 'otx2_worker_dual.c',
'otx2_evdev.c',
'otx2_evdev_irq.c',
)
diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c
new file mode 100644
index 000000000..f762436aa
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_worker_dual.c
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_worker_dual.h"
+#include "otx2_worker.h"
diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h
new file mode 100644
index 000000000..d8453d1f7
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_worker_dual.h
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_WORKER_DUAL_H__
+#define __OTX2_WORKER_DUAL_H__
+
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+
+#include <otx2_common.h>
+#include "otx2_evdev.h"
+
+/* SSO Operations */
+static __rte_always_inline uint16_t
+otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws,
+ struct otx2_ssogws_state *ws_pair,
+ struct rte_event *ev)
+{
+ const uint64_t set_gw = BIT_ULL(16) | 1;
+ union otx2_sso_event event;
+ uint64_t get_work1;
+
+#ifdef RTE_ARCH_ARM64
+ asm volatile(
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbz %[tag], 63, done%= \n"
+ " sevl \n"
+ "rty%=: wfe \n"
+ " ldr %[tag], [%[tag_loc]] \n"
+ " ldr %[wqp], [%[wqp_loc]] \n"
+ " tbnz %[tag], 63, rty%= \n"
+ "done%=: str %[gw], [%[pong]] \n"
+ " dmb ld \n"
+ " prfm pldl1keep, [%[wqp]] \n"
+ : [tag] "=&r" (event.get_work0),
+ [wqp] "=&r" (get_work1)
+ : [tag_loc] "r" (ws->tag_op),
+ [wqp_loc] "r" (ws->wqp_op),
+ [gw] "r" (set_gw),
+ [pong] "r" (ws_pair->getwrk_op)
+ );
+#else
+ event.get_work0 = otx2_read64(ws->tag_op);
+ while ((BIT_ULL(63)) & event.get_work0)
+ event.get_work0 = otx2_read64(ws->tag_op);
+ get_work1 = otx2_read64(ws->wqp_op);
+ otx2_write64(set_gw, ws_pair->getwrk_op);
+
+ rte_prefetch0((const void *)get_work1);
+#endif
+ event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
+ (event.get_work0 & (0x3FFull << 36)) << 4 |
+ (event.get_work0 & 0xffffffff);
+ ws->cur_tt = event.sched_type;
+ ws->cur_grp = event.queue_id;
+
+ ev->event = event.get_work0;
+ ev->u64 = get_work1;
+
+ return !!get_work1;
+}
+
+static __rte_always_inline void
+otx2_ssogws_dual_add_work(struct otx2_ssogws_dual *ws, const uint64_t event_ptr,
+ const uint32_t tag, const uint8_t new_tt,
+ const uint16_t grp)
+{
+ uint64_t add_work0;
+
+ add_work0 = tag | ((uint64_t)(new_tt) << 32);
+ otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]);
+}
+
+#endif
--
2.22.0
* [dpdk-dev] [PATCH v2 19/44] event/octeontx2: add worker dual GWS enqueue functions
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (17 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 18/44] event/octeontx2: add SSO dual GWS HW device operations pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 20/44] event/octeontx2: add worker dual GWS dequeue functions pbhagavatula
` (24 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add dual workslot mode event enqueue functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.h | 9 ++
drivers/event/octeontx2/otx2_worker_dual.c | 135 +++++++++++++++++++++
2 files changed, 144 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 72de9ace5..fd2a4c330 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -187,6 +187,7 @@ parse_kvargs_value(const char *key, const char *value, void *opaque)
return 0;
}
+/* Single WS API's */
uint16_t otx2_ssogws_enq(void *port, const struct rte_event *ev);
uint16_t otx2_ssogws_enq_burst(void *port, const struct rte_event ev[],
uint16_t nb_events);
@@ -204,6 +205,14 @@ uint16_t otx2_ssogws_deq_timeout(void *port, struct rte_event *ev,
uint16_t otx2_ssogws_deq_timeout_burst(void *port, struct rte_event ev[],
uint16_t nb_events,
uint64_t timeout_ticks);
+/* Dual WS API's */
+uint16_t otx2_ssogws_dual_enq(void *port, const struct rte_event *ev);
+uint16_t otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events);
+uint16_t otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events);
/* Init and Fini API's */
int otx2_sso_init(struct rte_eventdev *event_dev);
diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c
index f762436aa..661c78c23 100644
--- a/drivers/event/octeontx2/otx2_worker_dual.c
+++ b/drivers/event/octeontx2/otx2_worker_dual.c
@@ -4,3 +4,138 @@
#include "otx2_worker_dual.h"
#include "otx2_worker.h"
+
+static __rte_noinline uint8_t
+otx2_ssogws_dual_new_event(struct otx2_ssogws_dual *ws,
+ const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint64_t event_ptr = ev->u64;
+ const uint16_t grp = ev->queue_id;
+
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ otx2_ssogws_dual_add_work(ws, event_ptr, tag, new_tt, grp);
+
+ return 1;
+}
+
+static __rte_always_inline void
+otx2_ssogws_dual_fwd_swtag(struct otx2_ssogws_state *ws,
+ const struct rte_event *ev)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+ const uint8_t cur_tt = ws->cur_tt;
+
+ /* 96XX model
+ * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED
+ *
+ * SSO_SYNC_ORDERED norm norm untag
+ * SSO_SYNC_ATOMIC norm norm untag
+ * SSO_SYNC_UNTAGGED norm norm NOOP
+ */
+ if (new_tt == SSO_SYNC_UNTAGGED) {
+ if (cur_tt != SSO_SYNC_UNTAGGED)
+ otx2_ssogws_swtag_untag((struct otx2_ssogws *)ws);
+ } else {
+ otx2_ssogws_swtag_norm((struct otx2_ssogws *)ws, tag, new_tt);
+ }
+}
+
+static __rte_always_inline void
+otx2_ssogws_dual_fwd_group(struct otx2_ssogws_state *ws,
+ const struct rte_event *ev, const uint16_t grp)
+{
+ const uint32_t tag = (uint32_t)ev->event;
+ const uint8_t new_tt = ev->sched_type;
+
+ otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
+ SSOW_LF_GWS_OP_UPD_WQP_GRP1);
+ rte_smp_wmb();
+ otx2_ssogws_swtag_desched((struct otx2_ssogws *)ws, tag, new_tt, grp);
+}
+
+static __rte_always_inline void
+otx2_ssogws_dual_forward_event(struct otx2_ssogws_dual *ws,
+ struct otx2_ssogws_state *vws,
+ const struct rte_event *ev)
+{
+ const uint8_t grp = ev->queue_id;
+
+ /* Group hasn't changed, Use SWTAG to forward the event */
+ if (vws->cur_grp == grp) {
+ otx2_ssogws_dual_fwd_swtag(vws, ev);
+ ws->swtag_req = 1;
+ } else {
+ /*
+ * Group has been changed for group based work pipelining,
+ * Use deschedule/add_work operation to transfer the event to
+ * new group/core
+ */
+ otx2_ssogws_dual_fwd_group(vws, ev, grp);
+ }
+}
+
+uint16_t __hot
+otx2_ssogws_dual_enq(void *port, const struct rte_event *ev)
+{
+ struct otx2_ssogws_dual *ws = port;
+ struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws];
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ rte_smp_mb();
+ return otx2_ssogws_dual_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ otx2_ssogws_dual_forward_event(ws, vws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ otx2_ssogws_swtag_flush((struct otx2_ssogws *)vws);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+uint16_t __hot
+otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ RTE_SET_USED(nb_events);
+ return otx2_ssogws_dual_enq(port, ev);
+}
+
+uint16_t __hot
+otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct otx2_ssogws_dual *ws = port;
+ uint16_t i, rc = 1;
+
+ rte_smp_mb();
+ if (ws->xaq_lmt <= *ws->fc_mem)
+ return 0;
+
+ for (i = 0; i < nb_events && rc; i++)
+ rc = otx2_ssogws_dual_new_event(ws, &ev[i]);
+
+ return nb_events;
+}
+
+uint16_t __hot
+otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ struct otx2_ssogws_dual *ws = port;
+ struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws];
+
+ RTE_SET_USED(nb_events);
+ otx2_ssogws_dual_forward_event(ws, vws, ev);
+
+ return 1;
+}
--
2.22.0
* [dpdk-dev] [PATCH v2 20/44] event/octeontx2: add worker dual GWS dequeue functions
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (18 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 19/44] event/octeontx2: add worker dual GWS enqueue functions pbhagavatula
@ 2019-06-28 7:49 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 21/44] event/octeontx2: add devargs to force legacy mode pbhagavatula
` (23 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:49 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add worker dual workslot mode event dequeue functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.h | 9 +++
drivers/event/octeontx2/otx2_worker_dual.c | 66 ++++++++++++++++++++++
2 files changed, 75 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index fd2a4c330..30b5d2c32 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -214,6 +214,15 @@ uint16_t otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
uint16_t otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
uint16_t nb_events);
+uint16_t otx2_ssogws_dual_deq(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t otx2_ssogws_dual_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks);
+uint16_t otx2_ssogws_dual_deq_timeout(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks);
+uint16_t otx2_ssogws_dual_deq_timeout_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events,
+ uint64_t timeout_ticks);
/* Init and Fini API's */
int otx2_sso_init(struct rte_eventdev *event_dev);
int otx2_sso_fini(struct rte_eventdev *event_dev);
diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c
index 661c78c23..58fd588f6 100644
--- a/drivers/event/octeontx2/otx2_worker_dual.c
+++ b/drivers/event/octeontx2/otx2_worker_dual.c
@@ -139,3 +139,69 @@ otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+uint16_t __hot
+otx2_ssogws_dual_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ struct otx2_ssogws_dual *ws = port;
+ uint8_t gw;
+
+ RTE_SET_USED(timeout_ticks);
+ if (ws->swtag_req) {
+ otx2_ssogws_swtag_wait((struct otx2_ssogws *)
+ &ws->ws_state[!ws->vws]);
+ ws->swtag_req = 0;
+ return 1;
+ }
+
+ gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws],
+ &ws->ws_state[!ws->vws], ev);
+ ws->vws = !ws->vws;
+
+ return gw;
+}
+
+uint16_t __hot
+otx2_ssogws_dual_deq_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return otx2_ssogws_dual_deq(port, ev, timeout_ticks);
+}
+
+uint16_t __hot
+otx2_ssogws_dual_deq_timeout(void *port, struct rte_event *ev,
+ uint64_t timeout_ticks)
+{
+ struct otx2_ssogws_dual *ws = port;
+ uint64_t iter;
+ uint8_t gw;
+
+ if (ws->swtag_req) {
+ otx2_ssogws_swtag_wait((struct otx2_ssogws *)
+ &ws->ws_state[!ws->vws]);
+ ws->swtag_req = 0;
+ return 1;
+ }
+
+ gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws],
+ &ws->ws_state[!ws->vws], ev);
+ ws->vws = !ws->vws;
+ for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) {
+ gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws],
+ &ws->ws_state[!ws->vws], ev);
+ ws->vws = !ws->vws;
+ }
+
+ return gw;
+}
+
+uint16_t __hot
+otx2_ssogws_dual_deq_timeout_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ RTE_SET_USED(nb_events);
+
+ return otx2_ssogws_dual_deq_timeout(port, ev, timeout_ticks);
+}
--
2.22.0
^ permalink raw reply [flat|nested] 48+ messages in thread
* [dpdk-dev] [PATCH v2 21/44] event/octeontx2: add devargs to force legacy mode
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (19 preceding siblings ...)
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 20/44] event/octeontx2: add worker dual GWS dequeue functions pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 22/44] event/octeontx2: add device start function pbhagavatula
` (22 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
OCTEON TX2 SSO uses dual workslot mode by default.
Add a devargs option to force legacy mode, i.e. single workslot mode.
Example:
--dev "0002:0e:00.0,single_ws=1"
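For reference, the flag handling added below reduces to the following standalone sketch. The names `parse_flag` and `resolve_dual_ws` are illustrative only; the driver routes the value through `rte_kvargs_process()` into its own `parse_kvargs_flag()` callback:

```c
#include <stdlib.h>

/* Illustrative reduction of the single_ws devargs handling: any
 * non-zero value sets the flag, and dual workslot mode is simply
 * its inverse. These helpers are hypothetical stand-ins for the
 * rte_kvargs callback used by the driver. */
static int
parse_flag(const char *value, unsigned char *out)
{
	*out = !!atoi(value); /* "1" (or any non-zero) enables the flag */
	return 0;
}

static unsigned char
resolve_dual_ws(const char *single_ws_value)
{
	unsigned char single_ws = 0;

	if (single_ws_value != NULL)
		parse_flag(single_ws_value, &single_ws);
	return !single_ws; /* dual workslot mode is the default */
}
```

So omitting the devarg, or passing `single_ws=0`, leaves dual workslot mode enabled.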
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 8 +++++++-
drivers/event/octeontx2/otx2_evdev.h | 11 ++++++++++-
2 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 16d5e7dfa..5dc39f029 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -911,11 +911,13 @@ static struct rte_eventdev_ops otx2_sso_ops = {
};
#define OTX2_SSO_XAE_CNT "xae_cnt"
+#define OTX2_SSO_SINGLE_WS "single_ws"
static void
sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
{
struct rte_kvargs *kvlist;
+ uint8_t single_ws = 0;
if (devargs == NULL)
return;
@@ -925,7 +927,10 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
rte_kvargs_process(kvlist, OTX2_SSO_XAE_CNT, &parse_kvargs_value,
&dev->xae_cnt);
+ rte_kvargs_process(kvlist, OTX2_SSO_SINGLE_WS, &parse_kvargs_flag,
+ &single_ws);
+ dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
}
@@ -1075,4 +1080,5 @@ otx2_sso_fini(struct rte_eventdev *event_dev)
RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>");
+RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
+ OTX2_SSO_SINGLE_WS "=1");
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 30b5d2c32..8e614b109 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -121,8 +121,8 @@ struct otx2_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
- uint8_t dual_ws;
/* Dev args */
+ uint8_t dual_ws;
uint32_t xae_cnt;
/* HW const */
uint32_t xae_waes;
@@ -178,6 +178,15 @@ sso_pmd_priv(const struct rte_eventdev *event_dev)
return event_dev->data->dev_private;
}
+static inline int
+parse_kvargs_flag(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ *(uint8_t *)opaque = !!atoi(value);
+ return 0;
+}
+
static inline int
parse_kvargs_value(const char *key, const char *value, void *opaque)
{
--
2.22.0
* [dpdk-dev] [PATCH v2 22/44] event/octeontx2: add device start function
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (20 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 21/44] event/octeontx2: add devargs to force legacy mode pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 23/44] event/octeontx2: add devargs to control SSO GGRP QoS pbhagavatula
` (21 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj, Anatoly Burakov; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add the eventdev start function along with a few cleanup APIs to maintain
sanity.
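The flush logic introduced here follows a common drain pattern: poll the GGRP occupancy counters and keep pulling work until they all reach zero. A minimal, hardware-free sketch of that loop (all names are hypothetical stand-ins for the `SSO_LF_GGRP_*_CNT` reads and the GETWORK operation in `ssogws_flush_events()`):

```c
/* Hardware-free sketch of the drain loop: keep consuming events
 * while any occupancy counter is non-zero. fake_ggrp, fake_get_work
 * and drain_all are illustrative, not driver code. */
struct fake_ggrp {
	unsigned long long aq_cnt;    /* one occupancy counter */
	unsigned long long ds_cnt;    /* another occupancy counter */
	unsigned long long cq_ds_cnt; /* combined CQ/DS counter */
};

static int
fake_get_work(struct fake_ggrp *g)
{
	/* Consume one pending event, wherever it is accounted */
	if (g->aq_cnt) { g->aq_cnt--; return 1; }
	if (g->ds_cnt) { g->ds_cnt--; return 1; }
	if (g->cq_ds_cnt) { g->cq_ds_cnt--; return 1; }
	return 0; /* nothing pending */
}

static unsigned int
drain_all(struct fake_ggrp *g)
{
	unsigned int handled = 0;

	while (g->aq_cnt || g->ds_cnt || g->cq_ds_cnt)
		handled += fake_get_work(g);
	return handled;
}
```

In the real driver the counters are re-read from the GGRP registers on every iteration, since hardware may still be retiring work while the loop runs.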
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 127 +++++++++++++++++++++++++-
drivers/event/octeontx2/otx2_evdev.h | 6 ++
drivers/event/octeontx2/otx2_worker.c | 74 +++++++++++++++
3 files changed, 206 insertions(+), 1 deletion(-)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 5dc39f029..d6ddee1cd 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -38,6 +38,41 @@ sso_get_msix_offsets(const struct rte_eventdev *event_dev)
return rc;
}
+void
+sso_fastpath_fns_set(struct rte_eventdev *event_dev)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+
+ event_dev->enqueue = otx2_ssogws_enq;
+ event_dev->enqueue_burst = otx2_ssogws_enq_burst;
+ event_dev->enqueue_new_burst = otx2_ssogws_enq_new_burst;
+ event_dev->enqueue_forward_burst = otx2_ssogws_enq_fwd_burst;
+
+ event_dev->dequeue = otx2_ssogws_deq;
+ event_dev->dequeue_burst = otx2_ssogws_deq_burst;
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = otx2_ssogws_deq_timeout;
+ event_dev->dequeue_burst = otx2_ssogws_deq_timeout_burst;
+ }
+
+ if (dev->dual_ws) {
+ event_dev->enqueue = otx2_ssogws_dual_enq;
+ event_dev->enqueue_burst = otx2_ssogws_dual_enq_burst;
+ event_dev->enqueue_new_burst =
+ otx2_ssogws_dual_enq_new_burst;
+ event_dev->enqueue_forward_burst =
+ otx2_ssogws_dual_enq_fwd_burst;
+ event_dev->dequeue = otx2_ssogws_dual_deq;
+ event_dev->dequeue_burst = otx2_ssogws_dual_deq_burst;
+ if (dev->is_timeout_deq) {
+ event_dev->dequeue = otx2_ssogws_dual_deq_timeout;
+ event_dev->dequeue_burst =
+ otx2_ssogws_dual_deq_timeout_burst;
+ }
+ }
+ rte_mb();
+}
+
static void
otx2_sso_info_get(struct rte_eventdev *event_dev,
struct rte_event_dev_info *dev_info)
@@ -889,6 +924,93 @@ otx2_sso_dump(struct rte_eventdev *event_dev, FILE *f)
}
}
+static void
+otx2_handle_event(void *arg, struct rte_event event)
+{
+ struct rte_eventdev *event_dev = arg;
+
+ if (event_dev->dev_ops->dev_stop_flush != NULL)
+ event_dev->dev_ops->dev_stop_flush(event_dev->data->dev_id,
+ event, event_dev->data->dev_stop_flush_arg);
+}
+
+static void
+sso_cleanup(struct rte_eventdev *event_dev, uint8_t enable)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ uint16_t i;
+
+ for (i = 0; i < dev->nb_event_ports; i++) {
+ if (dev->dual_ws) {
+ struct otx2_ssogws_dual *ws;
+
+ ws = event_dev->data->ports[i];
+ ssogws_reset((struct otx2_ssogws *)&ws->ws_state[0]);
+ ssogws_reset((struct otx2_ssogws *)&ws->ws_state[1]);
+ ws->swtag_req = 0;
+ ws->vws = 0;
+ ws->ws_state[0].cur_grp = 0;
+ ws->ws_state[0].cur_tt = SSO_SYNC_EMPTY;
+ ws->ws_state[1].cur_grp = 0;
+ ws->ws_state[1].cur_tt = SSO_SYNC_EMPTY;
+ } else {
+ struct otx2_ssogws *ws;
+
+ ws = event_dev->data->ports[i];
+ ssogws_reset(ws);
+ ws->swtag_req = 0;
+ ws->cur_grp = 0;
+ ws->cur_tt = SSO_SYNC_EMPTY;
+ }
+ }
+
+ rte_mb();
+ if (dev->dual_ws) {
+ struct otx2_ssogws_dual *ws = event_dev->data->ports[0];
+ struct otx2_ssogws temp_ws;
+
+ memcpy(&temp_ws, &ws->ws_state[0],
+ sizeof(struct otx2_ssogws_state));
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ /* Consume all the events through HWS0 */
+ ssogws_flush_events(&temp_ws, i, ws->grps_base[i],
+ otx2_handle_event, event_dev);
+ /* Enable/Disable SSO GGRP */
+ otx2_write64(enable, ws->grps_base[i] +
+ SSO_LF_GGRP_QCTL);
+ }
+ ws->ws_state[0].cur_grp = 0;
+ ws->ws_state[0].cur_tt = SSO_SYNC_EMPTY;
+ } else {
+ struct otx2_ssogws *ws = event_dev->data->ports[0];
+
+ for (i = 0; i < dev->nb_event_queues; i++) {
+ /* Consume all the events through HWS0 */
+ ssogws_flush_events(ws, i, ws->grps_base[i],
+ otx2_handle_event, event_dev);
+ /* Enable/Disable SSO GGRP */
+ otx2_write64(enable, ws->grps_base[i] +
+ SSO_LF_GGRP_QCTL);
+ }
+ ws->cur_grp = 0;
+ ws->cur_tt = SSO_SYNC_EMPTY;
+ }
+
+ /* reset SSO GWS cache */
+ otx2_mbox_alloc_msg_sso_ws_cache_inv(dev->mbox);
+ otx2_mbox_process(dev->mbox);
+}
+
+static int
+otx2_sso_start(struct rte_eventdev *event_dev)
+{
+ sso_func_trace();
+ sso_cleanup(event_dev, 1);
+ sso_fastpath_fns_set(event_dev);
+
+ return 0;
+}
+
/* Initialize and register event driver with DPDK Application */
static struct rte_eventdev_ops otx2_sso_ops = {
.dev_infos_get = otx2_sso_info_get,
@@ -908,6 +1030,7 @@ static struct rte_eventdev_ops otx2_sso_ops = {
.xstats_get_names = otx2_sso_xstats_get_names,
.dump = otx2_sso_dump,
+ .dev_start = otx2_sso_start,
};
#define OTX2_SSO_XAE_CNT "xae_cnt"
@@ -975,8 +1098,10 @@ otx2_sso_init(struct rte_eventdev *event_dev)
event_dev->dev_ops = &otx2_sso_ops;
/* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ sso_fastpath_fns_set(event_dev);
return 0;
+ }
dev = sso_pmd_priv(event_dev);
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 8e614b109..4428abcfa 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -232,6 +232,12 @@ uint16_t otx2_ssogws_dual_deq_timeout(void *port, struct rte_event *ev,
uint16_t otx2_ssogws_dual_deq_timeout_burst(void *port, struct rte_event ev[],
uint16_t nb_events,
uint64_t timeout_ticks);
+void sso_fastpath_fns_set(struct rte_eventdev *event_dev);
+/* Clean up API's */
+typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev);
+void ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id,
+ uintptr_t base, otx2_handle_event_t fn, void *arg);
+void ssogws_reset(struct otx2_ssogws *ws);
/* Init and Fini API's */
int otx2_sso_init(struct rte_eventdev *event_dev);
int otx2_sso_fini(struct rte_eventdev *event_dev);
diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c
index edc574673..7a6d4cad2 100644
--- a/drivers/event/octeontx2/otx2_worker.c
+++ b/drivers/event/octeontx2/otx2_worker.c
@@ -194,3 +194,77 @@ otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+void
+ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, uintptr_t base,
+ otx2_handle_event_t fn, void *arg)
+{
+ uint64_t cq_ds_cnt = 1;
+ uint64_t aq_cnt = 1;
+ uint64_t ds_cnt = 1;
+ struct rte_event ev;
+ uint64_t enable;
+ uint64_t val;
+
+ enable = otx2_read64(base + SSO_LF_GGRP_QCTL);
+ if (!enable)
+ return;
+
+ val = queue_id; /* GGRP ID */
+ val |= BIT_ULL(18); /* Grouped */
+ val |= BIT_ULL(16); /* WAIT */
+
+ aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT);
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+
+ while (aq_cnt || cq_ds_cnt || ds_cnt) {
+ otx2_write64(val, ws->getwrk_op);
+ otx2_ssogws_get_work_empty(ws, &ev);
+ if (fn != NULL && ev.u64 != 0)
+ fn(arg, ev);
+ if (ev.sched_type != SSO_TT_EMPTY)
+ otx2_ssogws_swtag_flush(ws);
+ rte_mb();
+ aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT);
+ ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT);
+ cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT);
+ /* Extract cq and ds count */
+ cq_ds_cnt &= 0x3FFF3FFF0000;
+ }
+
+ otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
+ SSOW_LF_GWS_OP_GWC_INVAL);
+ rte_mb();
+}
+
+void
+ssogws_reset(struct otx2_ssogws *ws)
+{
+ uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
+ uint64_t pend_state;
+ uint8_t pend_tt;
+ uint64_t tag;
+
+ /* Wait till getwork/swtp/waitw/desched completes. */
+ do {
+ pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE);
+ rte_mb();
+ } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58)));
+
+ tag = otx2_read64(base + SSOW_LF_GWS_TAG);
+ pend_tt = (tag >> 32) & 0x3;
+ if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+ if (pend_tt == SSO_SYNC_ATOMIC || pend_tt == SSO_SYNC_ORDERED)
+ otx2_ssogws_swtag_untag(ws);
+ otx2_ssogws_desched(ws);
+ }
+ rte_mb();
+
+ /* Wait for desched to complete. */
+ do {
+ pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE);
+ rte_mb();
+ } while (pend_state & BIT_ULL(58));
+}
--
2.22.0
* [dpdk-dev] [PATCH v2 23/44] event/octeontx2: add devargs to control SSO GGRP QoS
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (21 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 22/44] event/octeontx2: add device start function pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 24/44] event/octeontx2: add device stop and close functions pbhagavatula
` (20 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
SSO GGRPs, i.e. event queues, use DRAM and SRAM buffers to hold in-flight
events. By default, buffers are assigned to the SSO GGRPs to
satisfy the minimum HW requirements. SSO is free to assign the remaining
buffers to GGRPs based on a preconfigured threshold.
We can control the QoS of an SSO GGRP by modifying the above-mentioned
thresholds. GGRPs of higher importance can be assigned higher
thresholds than the rest.
Example:
--dev "0002:0e:00.0,qos=[1-50-50-50]" // [Qx-XAQ-TAQ-IAQ]
Qx -> Event queue Aka SSO GGRP.
XAQ -> DRAM In-flights.
TAQ & IAQ -> SRAM In-flights.
The values are expressed as percentages; 0 selects the
default.
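As a rough sketch (hypothetical helper names, not driver code), the tuple parsing and the percentage-to-threshold arithmetic behave like:

```c
#include <stdio.h>

/* parse_qos_tuple() splits one "Qx-XAQ-TAQ-IAQ" tuple; the driver
 * itself uses strtok() inside parse_queue_param(). '-' separates
 * fields because ',' already delimits kvargs pairs. */
static int
parse_qos_tuple(const char *s, unsigned int out[4])
{
	if (sscanf(s, "%u-%u-%u-%u",
		   &out[0], &out[1], &out[2], &out[3]) != 4)
		return -1; /* malformed tuple */
	return 0;
}

/* A percentage of 0 means "default", i.e. 100% of the HW maximum */
static unsigned int
qos_threshold(unsigned int hw_max, unsigned int prcnt)
{
	return (hw_max * (prcnt ? prcnt : 100)) / 100;
}
```

With `qos=[1-50-50-50]`, queue 1 is capped at half of each maximum, while unlisted queues keep the defaults.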
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 104 ++++++++++++++++++++++++++-
drivers/event/octeontx2/otx2_evdev.h | 9 +++
2 files changed, 112 insertions(+), 1 deletion(-)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index d6ddee1cd..786772ba9 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -934,6 +934,34 @@ otx2_handle_event(void *arg, struct rte_event event)
event, event_dev->data->dev_stop_flush_arg);
}
+static void
+sso_qos_cfg(struct rte_eventdev *event_dev)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ struct sso_grp_qos_cfg *req;
+ uint16_t i;
+
+ for (i = 0; i < dev->qos_queue_cnt; i++) {
+ uint8_t xaq_prcnt = dev->qos_parse_data[i].xaq_prcnt;
+ uint8_t iaq_prcnt = dev->qos_parse_data[i].iaq_prcnt;
+ uint8_t taq_prcnt = dev->qos_parse_data[i].taq_prcnt;
+
+ if (dev->qos_parse_data[i].queue >= dev->nb_event_queues)
+ continue;
+
+ req = otx2_mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
+ req->xaq_limit = (dev->nb_xaq_cfg *
+ (xaq_prcnt ? xaq_prcnt : 100)) / 100;
+ req->taq_thr = (SSO_HWGRP_TAQ_MAX_THR_MASK *
+ (taq_prcnt ? taq_prcnt : 100)) / 100;
+ req->iaq_thr = (SSO_HWGRP_IAQ_MAX_THR_MASK *
+ (iaq_prcnt ? iaq_prcnt : 100)) / 100;
+ }
+
+ if (dev->qos_queue_cnt)
+ otx2_mbox_process(dev->mbox);
+}
+
static void
sso_cleanup(struct rte_eventdev *event_dev, uint8_t enable)
{
@@ -1005,6 +1033,7 @@ static int
otx2_sso_start(struct rte_eventdev *event_dev)
{
sso_func_trace();
+ sso_qos_cfg(event_dev);
sso_cleanup(event_dev, 1);
sso_fastpath_fns_set(event_dev);
@@ -1035,6 +1064,76 @@ static struct rte_eventdev_ops otx2_sso_ops = {
#define OTX2_SSO_XAE_CNT "xae_cnt"
#define OTX2_SSO_SINGLE_WS "single_ws"
+#define OTX2_SSO_GGRP_QOS "qos"
+
+static void
+parse_queue_param(char *value, void *opaque)
+{
+ struct otx2_sso_qos queue_qos = {0};
+ uint8_t *val = (uint8_t *)&queue_qos;
+ struct otx2_sso_evdev *dev = opaque;
+ char *tok = strtok(value, "-");
+
+ if (!strlen(value))
+ return;
+
+ while (tok != NULL) {
+ *val = atoi(tok);
+ tok = strtok(NULL, "-");
+ val++;
+ }
+
+ if (val != (&queue_qos.iaq_prcnt + 1)) {
+ otx2_err("Invalid QoS parameter expected [Qx-XAQ-TAQ-IAQ]");
+ return;
+ }
+
+ dev->qos_queue_cnt++;
+ dev->qos_parse_data = rte_realloc(dev->qos_parse_data,
+ sizeof(struct otx2_sso_qos) *
+ dev->qos_queue_cnt, 0);
+ dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos;
+}
+
+static void
+parse_qos_list(const char *value, void *opaque)
+{
+ char *s = strdup(value);
+ char *start = NULL;
+ char *end = NULL;
+ char *f = s;
+
+ while (*s) {
+ if (*s == '[')
+ start = s;
+ else if (*s == ']')
+ end = s;
+
+ if (start < end && *start) {
+ *end = 0;
+ parse_queue_param(start + 1, opaque);
+ s = end;
+ start = end;
+ }
+ s++;
+ }
+
+ free(f);
+}
+
+static int
+parse_sso_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ /* Dict format [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] use '-' cause ','
+ * isn't allowed. Everything is expressed in percentages, 0 represents
+ * default.
+ */
+ parse_qos_list(value, opaque);
+
+ return 0;
+}
static void
sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
@@ -1052,6 +1151,8 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
&dev->xae_cnt);
rte_kvargs_process(kvlist, OTX2_SSO_SINGLE_WS, &parse_kvargs_flag,
&single_ws);
+ rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
+ dev);
dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
@@ -1206,4 +1307,5 @@ RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso);
RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
- OTX2_SSO_SINGLE_WS "=1");
+ OTX2_SSO_SINGLE_WS "=1"
+ OTX2_SSO_GGRP_QOS "=<string>");
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 4428abcfa..2aa742184 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -104,6 +104,13 @@ enum {
SSO_SYNC_EMPTY
};
+struct otx2_sso_qos {
+ uint8_t queue;
+ uint8_t xaq_prcnt;
+ uint8_t taq_prcnt;
+ uint8_t iaq_prcnt;
+};
+
struct otx2_sso_evdev {
OTX2_DEV; /* Base class */
uint8_t max_event_queues;
@@ -124,6 +131,8 @@ struct otx2_sso_evdev {
/* Dev args */
uint8_t dual_ws;
uint32_t xae_cnt;
+ uint8_t qos_queue_cnt;
+ struct otx2_sso_qos *qos_parse_data;
/* HW const */
uint32_t xae_waes;
uint32_t xaq_buf_size;
--
2.22.0
* [dpdk-dev] [PATCH v2 24/44] event/octeontx2: add device stop and close functions
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (22 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 23/44] event/octeontx2: add devargs to control SSO GGRP QoS pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 25/44] event/octeontx2: add SSO selftest pbhagavatula
` (19 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 39 ++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 786772ba9..5004fe2de 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -1040,6 +1040,43 @@ otx2_sso_start(struct rte_eventdev *event_dev)
return 0;
}
+static void
+otx2_sso_stop(struct rte_eventdev *event_dev)
+{
+ sso_func_trace();
+ sso_cleanup(event_dev, 0);
+ rte_mb();
+}
+
+static int
+otx2_sso_close(struct rte_eventdev *event_dev)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint16_t i;
+
+ if (!dev->configured)
+ return 0;
+
+ sso_unregister_irqs(event_dev);
+
+ for (i = 0; i < dev->nb_event_queues; i++)
+ all_queues[i] = i;
+
+ for (i = 0; i < dev->nb_event_ports; i++)
+ otx2_sso_port_unlink(event_dev, event_dev->data->ports[i],
+ all_queues, dev->nb_event_queues);
+
+ sso_lf_teardown(dev, SSO_LF_GGRP);
+ sso_lf_teardown(dev, SSO_LF_GWS);
+ dev->nb_event_ports = 0;
+ dev->nb_event_queues = 0;
+ rte_mempool_free(dev->xaq_pool);
+ rte_memzone_free(rte_memzone_lookup(OTX2_SSO_FC_NAME));
+
+ return 0;
+}
+
/* Initialize and register event driver with DPDK Application */
static struct rte_eventdev_ops otx2_sso_ops = {
.dev_infos_get = otx2_sso_info_get,
@@ -1060,6 +1097,8 @@ static struct rte_eventdev_ops otx2_sso_ops = {
.dump = otx2_sso_dump,
.dev_start = otx2_sso_start,
+ .dev_stop = otx2_sso_stop,
+ .dev_close = otx2_sso_close,
};
#define OTX2_SSO_XAE_CNT "xae_cnt"
--
2.22.0
* [dpdk-dev] [PATCH v2 25/44] event/octeontx2: add SSO selftest
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (23 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 24/44] event/octeontx2: add device stop and close functions pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 26/44] doc: add Marvell OCTEON TX2 event device documentation pbhagavatula
` (18 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a selftest to verify the sanity of SSO.
It can be run by passing the following devargs to the SSO PF:
--dev "0002:0e:00.0,selftest=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
app/test/test_eventdev.c | 8 +
drivers/event/octeontx2/Makefile | 1 +
drivers/event/octeontx2/meson.build | 1 +
drivers/event/octeontx2/otx2_evdev.c | 11 +-
drivers/event/octeontx2/otx2_evdev.h | 3 +
drivers/event/octeontx2/otx2_evdev_selftest.c | 1511 +++++++++++++++++
6 files changed, 1534 insertions(+), 1 deletion(-)
create mode 100644 drivers/event/octeontx2/otx2_evdev_selftest.c
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index c745e997e..783140dfe 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1014,7 +1014,15 @@ test_eventdev_selftest_octeontx(void)
return test_eventdev_selftest_impl("event_octeontx", "");
}
+static int
+test_eventdev_selftest_octeontx2(void)
+{
+ return test_eventdev_selftest_impl("event_octeontx2", "");
+}
+
REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw);
REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
test_eventdev_selftest_octeontx);
+REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
+ test_eventdev_selftest_octeontx2);
diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index dfecda599..d6cffc1f6 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -33,6 +33,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker_dual.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_selftest.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c
LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci -lrte_kvargs
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
index c2a5f3e3d..470564b08 100644
--- a/drivers/event/octeontx2/meson.build
+++ b/drivers/event/octeontx2/meson.build
@@ -6,6 +6,7 @@ sources = files('otx2_worker.c',
'otx2_worker_dual.c',
'otx2_evdev.c',
'otx2_evdev_irq.c',
+ 'otx2_evdev_selftest.c',
)
allow_experimental_apis = true
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 5004fe2de..c5a150954 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -1099,11 +1099,13 @@ static struct rte_eventdev_ops otx2_sso_ops = {
.dev_start = otx2_sso_start,
.dev_stop = otx2_sso_stop,
.dev_close = otx2_sso_close,
+ .dev_selftest = otx2_sso_selftest,
};
#define OTX2_SSO_XAE_CNT "xae_cnt"
#define OTX2_SSO_SINGLE_WS "single_ws"
#define OTX2_SSO_GGRP_QOS "qos"
+#define OTX2_SSO_SELFTEST "selftest"
static void
parse_queue_param(char *value, void *opaque)
@@ -1186,6 +1188,8 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
if (kvlist == NULL)
return;
+ rte_kvargs_process(kvlist, OTX2_SSO_SELFTEST, &parse_kvargs_flag,
+ &dev->selftest);
rte_kvargs_process(kvlist, OTX2_SSO_XAE_CNT, &parse_kvargs_value,
&dev->xae_cnt);
rte_kvargs_process(kvlist, OTX2_SSO_SINGLE_WS, &parse_kvargs_flag,
@@ -1301,6 +1305,10 @@ otx2_sso_init(struct rte_eventdev *event_dev)
otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
event_dev->data->name, dev->max_event_queues,
dev->max_event_ports);
+ if (dev->selftest) {
+ event_dev->dev->driver = &pci_sso.driver;
+ event_dev->dev_ops->dev_selftest();
+ }
return 0;
@@ -1347,4 +1355,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map);
RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
OTX2_SSO_SINGLE_WS "=1"
- OTX2_SSO_GGRP_QOS "=<string>");
+ OTX2_SSO_GGRP_QOS "=<string>"
+ OTX2_SSO_SELFTEST "=1");
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 2aa742184..fc8dde416 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -130,6 +130,7 @@ struct otx2_sso_evdev {
struct rte_mempool *xaq_pool;
/* Dev args */
uint8_t dual_ws;
+ uint8_t selftest;
uint32_t xae_cnt;
uint8_t qos_queue_cnt;
struct otx2_sso_qos *qos_parse_data;
@@ -247,6 +248,8 @@ typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev);
void ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id,
uintptr_t base, otx2_handle_event_t fn, void *arg);
void ssogws_reset(struct otx2_ssogws *ws);
+/* Selftest */
+int otx2_sso_selftest(void);
/* Init and Fini API's */
int otx2_sso_init(struct rte_eventdev *event_dev);
int otx2_sso_fini(struct rte_eventdev *event_dev);
diff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c
new file mode 100644
index 000000000..8440a50aa
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_evdev_selftest.c
@@ -0,0 +1,1511 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_atomic.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_hexdump.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_per_lcore.h>
+#include <rte_random.h>
+#include <rte_test.h>
+
+#include "otx2_evdev.h"
+
+#define NUM_PACKETS (1024)
+#define MAX_EVENTS (1024)
+
+#define OCTEONTX2_TEST_RUN(setup, teardown, test) \
+ octeontx_test_run(setup, teardown, test, #test)
+
+static int total;
+static int passed;
+static int failed;
+static int unsupported;
+
+static int evdev;
+static struct rte_mempool *eventdev_test_mempool;
+
+struct event_attr {
+ uint32_t flow_id;
+ uint8_t event_type;
+ uint8_t sub_event_type;
+ uint8_t sched_type;
+ uint8_t queue;
+ uint8_t port;
+};
+
+static uint32_t seqn_list_index;
+static int seqn_list[NUM_PACKETS];
+
+static inline void
+seqn_list_init(void)
+{
+ RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
+ memset(seqn_list, 0, sizeof(seqn_list));
+ seqn_list_index = 0;
+}
+
+static inline int
+seqn_list_update(int val)
+{
+ if (seqn_list_index >= NUM_PACKETS)
+ return -1;
+
+ seqn_list[seqn_list_index++] = val;
+ rte_smp_wmb();
+ return 0;
+}
+
+static inline int
+seqn_list_check(int limit)
+{
+ int i;
+
+ for (i = 0; i < limit; i++) {
+ if (seqn_list[i] != i) {
+ otx2_err("Seqn mismatch %d %d", seqn_list[i], i);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+struct test_core_param {
+ rte_atomic32_t *total_events;
+ uint64_t dequeue_tmo_ticks;
+ uint8_t port;
+ uint8_t sched_type;
+};
+
+static int
+testsuite_setup(void)
+{
+ const char *eventdev_name = "event_octeontx2";
+
+ evdev = rte_event_dev_get_dev_id(eventdev_name);
+ if (evdev < 0) {
+ otx2_err("%d: Eventdev %s not found", __LINE__, eventdev_name);
+ return -1;
+ }
+ return 0;
+}
+
+static void
+testsuite_teardown(void)
+{
+ rte_event_dev_close(evdev);
+}
+
+static inline void
+devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
+ struct rte_event_dev_info *info)
+{
+ memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
+ dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
+ dev_conf->nb_event_ports = info->max_event_ports;
+ dev_conf->nb_event_queues = info->max_event_queues;
+ dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
+ dev_conf->nb_event_port_dequeue_depth =
+ info->max_event_port_dequeue_depth;
+ dev_conf->nb_event_port_enqueue_depth =
+ info->max_event_port_enqueue_depth;
+ dev_conf->nb_event_port_enqueue_depth =
+ info->max_event_port_enqueue_depth;
+ dev_conf->nb_events_limit =
+ info->max_num_events;
+}
+
+enum {
+ TEST_EVENTDEV_SETUP_DEFAULT,
+ TEST_EVENTDEV_SETUP_PRIORITY,
+ TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT,
+};
+
+static inline int
+_eventdev_setup(int mode)
+{
+ const char *pool_name = "evdev_octeontx_test_pool";
+ struct rte_event_dev_config dev_conf;
+ struct rte_event_dev_info info;
+ int i, ret;
+
+ /* Create and destroy pool for each test case to make it standalone */
+ eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS,
+ 0, 0, 512,
+ rte_socket_id());
+ if (!eventdev_test_mempool) {
+ otx2_err("ERROR creating mempool");
+ return -1;
+ }
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+ devconf_set_default_sane_values(&dev_conf, &info);
+ if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
+ dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
+
+ ret = rte_event_dev_configure(evdev, &dev_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+ uint32_t queue_count;
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
+ "Queue count get failed");
+
+ if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
+ if (queue_count > 8)
+ queue_count = 8;
+
+ /* Configure event queues(0 to n) with
+ * RTE_EVENT_DEV_PRIORITY_HIGHEST to
+ * RTE_EVENT_DEV_PRIORITY_LOWEST
+ */
+ uint8_t step = (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) /
+ queue_count;
+ for (i = 0; i < (int)queue_count; i++) {
+ struct rte_event_queue_conf queue_conf;
+
+ ret = rte_event_queue_default_conf_get(evdev, i,
+ &queue_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
+ i);
+ queue_conf.priority = i * step;
+ ret = rte_event_queue_setup(evdev, i, &queue_conf);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+ i);
+ }
+
+ } else {
+ /* Configure event queues with default priority */
+ for (i = 0; i < (int)queue_count; i++) {
+ ret = rte_event_queue_setup(evdev, i, NULL);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+ i);
+ }
+ }
+ /* Configure event ports */
+ uint32_t port_count;
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count),
+ "Port count get failed");
+ for (i = 0; i < (int)port_count; i++) {
+ ret = rte_event_port_setup(evdev, i, NULL);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
+ ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
+ i);
+ }
+
+ ret = rte_event_dev_start(evdev);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");
+
+ return 0;
+}
+
+static inline int
+eventdev_setup(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT);
+}
+
+static inline int
+eventdev_setup_priority(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_PRIORITY);
+}
+
+static inline int
+eventdev_setup_dequeue_timeout(void)
+{
+ return _eventdev_setup(TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT);
+}
+
+static inline void
+eventdev_teardown(void)
+{
+ rte_event_dev_stop(evdev);
+ rte_mempool_free(eventdev_test_mempool);
+}
+
+static inline void
+update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev,
+ uint32_t flow_id, uint8_t event_type,
+ uint8_t sub_event_type, uint8_t sched_type,
+ uint8_t queue, uint8_t port)
+{
+ struct event_attr *attr;
+
+ /* Store the event attributes in mbuf for future reference */
+ attr = rte_pktmbuf_mtod(m, struct event_attr *);
+ attr->flow_id = flow_id;
+ attr->event_type = event_type;
+ attr->sub_event_type = sub_event_type;
+ attr->sched_type = sched_type;
+ attr->queue = queue;
+ attr->port = port;
+
+ ev->flow_id = flow_id;
+ ev->sub_event_type = sub_event_type;
+ ev->event_type = event_type;
+ /* Inject the new event */
+ ev->op = RTE_EVENT_OP_NEW;
+ ev->sched_type = sched_type;
+ ev->queue_id = queue;
+ ev->mbuf = m;
+}
+
+static inline int
+inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
+ uint8_t sched_type, uint8_t queue, uint8_t port,
+ unsigned int events)
+{
+ struct rte_mbuf *m;
+ unsigned int i;
+
+ for (i = 0; i < events; i++) {
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+ m->seqn = i;
+ update_event_and_validation_attr(m, &ev, flow_id, event_type,
+ sub_event_type, sched_type,
+ queue, port);
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ return 0;
+}
+
+static inline int
+check_excess_events(uint8_t port)
+{
+ uint16_t valid_event;
+ struct rte_event ev;
+ int i;
+
+ /* Check for excess events: try a few times and exit */
+ for (i = 0; i < 32; i++) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+
+ RTE_TEST_ASSERT_SUCCESS(valid_event,
+ "Unexpected valid event=%d",
+ ev.mbuf->seqn);
+ }
+ return 0;
+}
+
+static inline int
+generate_random_events(const unsigned int total_events)
+{
+ struct rte_event_dev_info info;
+ uint32_t queue_count;
+ unsigned int i;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
+ "Queue count get failed");
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+ for (i = 0; i < total_events; i++) {
+ ret = inject_events(
+ rte_rand() % info.max_event_queue_flows /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ rte_rand() % queue_count /* queue */,
+ 0 /* port */,
+ 1 /* events */);
+ if (ret)
+ return -1;
+ }
+ return ret;
+}
+
+static inline int
+validate_event(struct rte_event *ev)
+{
+ struct event_attr *attr;
+
+ attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
+ RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
+ "flow_id mismatch enq=%d deq=%d",
+ attr->flow_id, ev->flow_id);
+ RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
+ "event_type mismatch enq=%d deq=%d",
+ attr->event_type, ev->event_type);
+ RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
+ "sub_event_type mismatch enq=%d deq=%d",
+ attr->sub_event_type, ev->sub_event_type);
+ RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
+ "sched_type mismatch enq=%d deq=%d",
+ attr->sched_type, ev->sched_type);
+ RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
+ "queue mismatch enq=%d deq=%d",
+ attr->queue, ev->queue_id);
+ return 0;
+}
+
+typedef int (*validate_event_cb)(uint32_t index, uint8_t port,
+ struct rte_event *ev);
+
+static inline int
+consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
+{
+ uint32_t events = 0, forward_progress_cnt = 0, index = 0;
+ uint16_t valid_event;
+ struct rte_event ev;
+ int ret;
+
+ while (1) {
+ if (++forward_progress_cnt > UINT16_MAX) {
+ otx2_err("Detected deadlock");
+ return -1;
+ }
+
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ forward_progress_cnt = 0;
+ ret = validate_event(&ev);
+ if (ret)
+ return -1;
+
+ if (fn != NULL) {
+ ret = fn(index, port, &ev);
+ RTE_TEST_ASSERT_SUCCESS(ret,
+ "Failed to validate test specific event");
+ }
+
+ ++index;
+
+ rte_pktmbuf_free(ev.mbuf);
+ if (++events >= total_events)
+ break;
+ }
+
+ return check_excess_events(port);
+}
+
+static int
+validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
+{
+ RTE_SET_USED(port);
+ RTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d",
+ index, ev->mbuf->seqn);
+ return 0;
+}
+
+static inline int
+test_simple_enqdeq(uint8_t sched_type)
+{
+ int ret;
+
+ ret = inject_events(0 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type */,
+ sched_type,
+ 0 /* queue */,
+ 0 /* port */,
+ MAX_EVENTS);
+ if (ret)
+ return -1;
+
+ return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
+}
+
+static int
+test_simple_enqdeq_ordered(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_simple_enqdeq_atomic(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_simple_enqdeq_parallel(void)
+{
+ return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL);
+}
+
+/*
+ * Generate a prescribed number of events and spread them across available
+ * queues. On dequeue, using a single event port (port 0), verify the
+ * enqueued event attributes
+ */
+static int
+test_multi_queue_enq_single_port_deq(void)
+{
+ int ret;
+
+ ret = generate_random_events(MAX_EVENTS);
+ if (ret)
+ return -1;
+
+ return consume_events(0 /* port */, MAX_EVENTS, NULL);
+}
+
+/*
+ * Inject 0..MAX_EVENTS events over 0..queue_count queues with a modulo
+ * operation
+ *
+ * For example, inject 32 events over queues 0..7
+ * enqueue events 0, 8, 16, 24 in queue 0
+ * enqueue events 1, 9, 17, 25 in queue 1
+ * ..
+ * ..
+ * enqueue events 7, 15, 23, 31 in queue 7
+ *
+ * On dequeue, validate that the events come in
+ * 0,8,16,24,1,9,17,25,..,7,15,23,31 order, from queue 0 (highest
+ * priority) to queue 7 (lowest priority)
+ */
+static int
+validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
+{
+ uint32_t queue_count;
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
+ "Queue count get failed");
+ if (queue_count > 8)
+ queue_count = 8;
+ uint32_t range = MAX_EVENTS / queue_count;
+ uint32_t expected_val = (index % range) * queue_count;
+
+ expected_val += ev->queue_id;
+ RTE_SET_USED(port);
+ RTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val,
+ "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
+ ev->mbuf->seqn, index, expected_val, range,
+ queue_count, MAX_EVENTS);
+ return 0;
+}
+
+static int
+test_multi_queue_priority(void)
+{
+ int i, max_evts_roundoff;
+ /* See validate_queue_priority() comments for priority validate logic */
+ uint32_t queue_count;
+ struct rte_mbuf *m;
+ uint8_t queue;
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
+ "Queue count get failed");
+ if (queue_count > 8)
+ queue_count = 8;
+ max_evts_roundoff = MAX_EVENTS / queue_count;
+ max_evts_roundoff *= queue_count;
+
+ for (i = 0; i < max_evts_roundoff; i++) {
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+ m->seqn = i;
+ queue = i % queue_count;
+ update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,
+ 0, RTE_SCHED_TYPE_PARALLEL,
+ queue, 0);
+ rte_event_enqueue_burst(evdev, 0, &ev, 1);
+ }
+
+ return consume_events(0, max_evts_roundoff, validate_queue_priority);
+}
+
+static int
+worker_multi_port_fn(void *arg)
+{
+ struct test_core_param *param = arg;
+ rte_atomic32_t *total_events = param->total_events;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+ int ret;
+
+ while (rte_atomic32_read(total_events) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ ret = validate_event(&ev);
+ RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
+ rte_pktmbuf_free(ev.mbuf);
+ rte_atomic32_sub(total_events, 1);
+ }
+
+ return 0;
+}
+
+static inline int
+wait_workers_to_join(const rte_atomic32_t *count)
+{
+ uint64_t cycles, print_cycles;
+
+ cycles = rte_get_timer_cycles();
+ print_cycles = cycles;
+ while (rte_atomic32_read(count)) {
+ uint64_t new_cycles = rte_get_timer_cycles();
+
+ if (new_cycles - print_cycles > rte_get_timer_hz()) {
+ otx2_err("Events %d", rte_atomic32_read(count));
+ print_cycles = new_cycles;
+ }
+ if (new_cycles - cycles > rte_get_timer_hz() * 10) {
+ otx2_err("No schedules for 10 seconds, deadlock (%d)",
+ rte_atomic32_read(count));
+ rte_event_dev_dump(evdev, stdout);
+ cycles = new_cycles;
+ return -1;
+ }
+ }
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+static inline int
+launch_workers_and_wait(int (*master_worker)(void *),
+ int (*slave_workers)(void *), uint32_t total_events,
+ uint8_t nb_workers, uint8_t sched_type)
+{
+ rte_atomic32_t atomic_total_events;
+ struct test_core_param *param;
+ uint64_t dequeue_tmo_ticks;
+ uint8_t port = 0;
+ int w_lcore;
+ int ret;
+
+ if (!nb_workers)
+ return 0;
+
+ rte_atomic32_set(&atomic_total_events, total_events);
+ seqn_list_init();
+
+ param = malloc(sizeof(struct test_core_param) * nb_workers);
+ if (!param)
+ return -1;
+
+ ret = rte_event_dequeue_timeout_ticks(evdev,
+ rte_rand() % 10000000 /* 10ms */,
+ &dequeue_tmo_ticks);
+ if (ret) {
+ free(param);
+ return -1;
+ }
+
+ param[0].total_events = &atomic_total_events;
+ param[0].sched_type = sched_type;
+ param[0].port = 0;
+ param[0].dequeue_tmo_ticks = dequeue_tmo_ticks;
+ rte_wmb();
+
+ w_lcore = rte_get_next_lcore(
+ /* start core */ -1,
+ /* skip master */ 1,
+ /* wrap */ 0);
+ rte_eal_remote_launch(master_worker, &param[0], w_lcore);
+
+ for (port = 1; port < nb_workers; port++) {
+ param[port].total_events = &atomic_total_events;
+ param[port].sched_type = sched_type;
+ param[port].port = port;
+ param[port].dequeue_tmo_ticks = dequeue_tmo_ticks;
+ rte_smp_wmb();
+ w_lcore = rte_get_next_lcore(w_lcore, 1, 0);
+ rte_eal_remote_launch(slave_workers, &param[port], w_lcore);
+ }
+
+ rte_smp_wmb();
+ ret = wait_workers_to_join(&atomic_total_events);
+ free(param);
+
+ return ret;
+}
+
+/*
+ * Generate a prescribed number of events and spread them across available
+ * queues. Dequeue the events through multiple ports and verify the enqueued
+ * event attributes
+ */
+static int
+test_multi_queue_enq_multi_port_deq(void)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t nr_ports;
+ int ret;
+
+ ret = generate_random_events(total_events);
+ if (ret)
+ return -1;
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ otx2_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ return launch_workers_and_wait(worker_multi_port_fn,
+ worker_multi_port_fn, total_events,
+ nr_ports, 0xff /* invalid */);
+}
+
+static void
+flush(uint8_t dev_id, struct rte_event event, void *arg)
+{
+ unsigned int *count = arg;
+
+ RTE_SET_USED(dev_id);
+ if (event.event_type == RTE_EVENT_TYPE_CPU)
+ *count = *count + 1;
+}
+
+static int
+test_dev_stop_flush(void)
+{
+ unsigned int total_events = MAX_EVENTS, count = 0;
+ int ret;
+
+ ret = generate_random_events(total_events);
+ if (ret)
+ return -1;
+
+ ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count);
+ if (ret)
+ return -2;
+ rte_event_dev_stop(evdev);
+ ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL);
+ if (ret)
+ return -3;
+ RTE_TEST_ASSERT_EQUAL(total_events, count,
+ "count mismatch total_events=%d count=%d",
+ total_events, count);
+
+ return 0;
+}
+
+static int
+validate_queue_to_port_single_link(uint32_t index, uint8_t port,
+ struct rte_event *ev)
+{
+ RTE_SET_USED(index);
+ RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
+ "queue mismatch enq=%d deq =%d",
+ port, ev->queue_id);
+
+ return 0;
+}
+
+/*
+ * Link queue x to port x and check correctness of link by checking
+ * queue_id == x on dequeue on the specific port x
+ */
+static int
+test_queue_to_port_single_link(void)
+{
+ int i, nr_links, ret;
+ uint32_t queue_count;
+ uint32_t port_count;
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count),
+ "Port count get failed");
+
+ /* Unlink all connections that were created in eventdev_setup */
+ for (i = 0; i < (int)port_count; i++) {
+ ret = rte_event_port_unlink(evdev, i, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0,
+ "Failed to unlink all queues port=%d", i);
+ }
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
+ "Queue count get failed");
+
+ nr_links = RTE_MIN(port_count, queue_count);
+ const unsigned int total_events = MAX_EVENTS / nr_links;
+
+ /* Link queue x to port x and inject events to queue x through port x */
+ for (i = 0; i < nr_links; i++) {
+ uint8_t queue = (uint8_t)i;
+
+ ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
+ RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
+
+ ret = inject_events(0x100 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ queue /* queue */, i /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+ }
+
+ /* Verify that the events were generated from the correct queue */
+ for (i = 0; i < nr_links; i++) {
+ ret = consume_events(i /* port */, total_events,
+ validate_queue_to_port_single_link);
+ if (ret)
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
+ struct rte_event *ev)
+{
+ RTE_SET_USED(index);
+ RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
+ "queue mismatch enq=%d deq =%d",
+ port, ev->queue_id);
+
+ return 0;
+}
+
+/*
+ * Link all even-numbered queues to port 0 and all odd-numbered queues to
+ * port 1, then verify the link connection on dequeue
+ */
+static int
+test_queue_to_port_multi_link(void)
+{
+ int ret, port0_events = 0, port1_events = 0;
+ uint32_t nr_queues = 0;
+ uint32_t nr_ports = 0;
+ uint8_t queue, port;
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues),
+ "Queue count get failed");
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
+ "Port count get failed");
+
+ if (nr_ports < 2) {
+ otx2_err("Not enough ports to test ports=%d", nr_ports);
+ return 0;
+ }
+
+ /* Unlink all connections that were created in eventdev_setup */
+ for (port = 0; port < nr_ports; port++) {
+ ret = rte_event_port_unlink(evdev, port, NULL, 0);
+ RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+ port);
+ }
+
+ const unsigned int total_events = MAX_EVENTS / nr_queues;
+
+ /* Link all even-numbered queues to port 0 and odd-numbered queues to port 1 */
+ for (queue = 0; queue < nr_queues; queue++) {
+ port = queue & 0x1;
+ ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
+ RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
+ queue, port);
+
+ ret = inject_events(0x100 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ rte_rand() % 256 /* sub_event_type */,
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
+ queue /* queue */, port /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+
+ if (port == 0)
+ port0_events += total_events;
+ else
+ port1_events += total_events;
+ }
+
+ ret = consume_events(0 /* port */, port0_events,
+ validate_queue_to_port_multi_link);
+ if (ret)
+ return -1;
+ ret = consume_events(1 /* port */, port1_events,
+ validate_queue_to_port_multi_link);
+ if (ret)
+ return -1;
+
+ return 0;
+}
+
+static int
+worker_flow_based_pipeline(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
+ rte_atomic32_t *total_events = param->total_events;
+ uint8_t new_sched_type = param->sched_type;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (rte_atomic32_read(total_events) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
+ dequeue_tmo_ticks);
+ if (!valid_event)
+ continue;
+
+ /* Events from stage 0 */
+ if (ev.sub_event_type == 0) {
+ /* Move to atomic flow to maintain the ordering */
+ ev.flow_id = 0x2;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sub_event_type = 1; /* stage 1 */
+ ev.sched_type = new_sched_type;
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ } else if (ev.sub_event_type == 1) { /* Events from stage 1*/
+ if (seqn_list_update(ev.mbuf->seqn) == 0) {
+ rte_pktmbuf_free(ev.mbuf);
+ rte_atomic32_sub(total_events, 1);
+ } else {
+ otx2_err("Failed to update seqn_list");
+ return -1;
+ }
+ } else {
+ otx2_err("Invalid ev.sub_event_type = %d",
+ ev.sub_event_type);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int
+test_multiport_flow_sched_type_test(uint8_t in_sched_type,
+ uint8_t out_sched_type)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ otx2_err("Not enough ports=%d or workers=%d", nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject events with m->seqn from 0 to total_events - 1 */
+ ret = inject_events(0x1 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */,
+ in_sched_type,
+ 0 /* queue */,
+ 0 /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+
+ rte_mb();
+ ret = launch_workers_and_wait(worker_flow_based_pipeline,
+ worker_flow_based_pipeline, total_events,
+ nr_ports, out_sched_type);
+ if (ret)
+ return -1;
+
+ if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
+ out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
+ /* Check whether the event order was maintained */
+ return seqn_list_check(total_events);
+ }
+
+ return 0;
+}
+
+/* Multi port ordered to atomic transaction */
+static int
+test_multi_port_flow_ordered_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_ordered_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_ordered_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_flow_atomic_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_atomic_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_atomic_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_flow_parallel_to_atomic(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_flow_parallel_to_ordered(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_flow_parallel_to_parallel(void)
+{
+ return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+worker_group_based_pipeline(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
+ rte_atomic32_t *total_events = param->total_events;
+ uint8_t new_sched_type = param->sched_type;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (rte_atomic32_read(total_events) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
+ dequeue_tmo_ticks);
+ if (!valid_event)
+ continue;
+
+ /* Events from stage 0(group 0) */
+ if (ev.queue_id == 0) {
+ /* Move to atomic flow to maintain the ordering */
+ ev.flow_id = 0x2;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sched_type = new_sched_type;
+ ev.queue_id = 1; /* Stage 1*/
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/
+ if (seqn_list_update(ev.mbuf->seqn) == 0) {
+ rte_pktmbuf_free(ev.mbuf);
+ rte_atomic32_sub(total_events, 1);
+ } else {
+ otx2_err("Failed to update seqn_list");
+ return -1;
+ }
+ } else {
+ otx2_err("Invalid ev.queue_id = %d", ev.queue_id);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int
+test_multiport_queue_sched_type_test(uint8_t in_sched_type,
+ uint8_t out_sched_type)
+{
+ const unsigned int total_events = MAX_EVENTS;
+ uint32_t queue_count;
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
+ "Port count get failed");
+
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
+ "Queue count get failed");
+ if (queue_count < 2 || !nr_ports) {
+ otx2_err("Not enough queues=%d ports=%d or workers=%d",
+ queue_count, nr_ports,
+ rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject events with m->seqn from 0 to total_events - 1 */
+ ret = inject_events(0x1 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */,
+ in_sched_type,
+ 0 /* queue */,
+ 0 /* port */,
+ total_events /* events */);
+ if (ret)
+ return -1;
+
+ ret = launch_workers_and_wait(worker_group_based_pipeline,
+ worker_group_based_pipeline, total_events,
+ nr_ports, out_sched_type);
+ if (ret)
+ return -1;
+
+ if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
+ out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
+ /* Check whether the event order was maintained */
+ return seqn_list_check(total_events);
+ }
+
+ return 0;
+}
+
+static int
+test_multi_port_queue_ordered_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_ordered_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_ordered_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_queue_atomic_to_atomic(void)
+{
+ /* Ingress event order test */
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_atomic_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_atomic_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+test_multi_port_queue_parallel_to_atomic(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ATOMIC);
+}
+
+static int
+test_multi_port_queue_parallel_to_ordered(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_ORDERED);
+}
+
+static int
+test_multi_port_queue_parallel_to_parallel(void)
+{
+ return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
+ RTE_SCHED_TYPE_PARALLEL);
+}
+
+static int
+worker_flow_based_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ rte_atomic32_t *total_events = param->total_events;
+ uint8_t port = param->port;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ while (rte_atomic32_read(total_events) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.sub_event_type == 255) { /* last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ rte_atomic32_sub(total_events, 1);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sub_event_type++;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+static int
+launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
+{
+ uint32_t nr_ports;
+ int ret;
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (!nr_ports) {
+ otx2_err("Not enough ports=%d or workers=%d",
+ nr_ports, rte_lcore_count() - 1);
+ return 0;
+ }
+
+ /* Inject events with m->seqn from 0 to MAX_EVENTS - 1 */
+ ret = inject_events(0x1 /*flow_id */,
+ RTE_EVENT_TYPE_CPU /* event_type */,
+ 0 /* sub_event_type (stage 0) */,
+ rte_rand() %
+ (RTE_SCHED_TYPE_PARALLEL + 1) /* sched_type */,
+ 0 /* queue */,
+ 0 /* port */,
+ MAX_EVENTS /* events */);
+ if (ret)
+ return -1;
+
+ return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
+ 0xff /* invalid */);
+}
+
+/* Flow based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_flow_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_flow_based_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ uint32_t queue_count;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
+ "Queue count get failed");
+ uint8_t nr_queues = queue_count;
+ rte_atomic32_t *total_events = param->total_events;
+
+ while (rte_atomic32_read(total_events) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.queue_id == nr_queues - 1) { /* last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ rte_atomic32_sub(total_events, 1);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id++;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+/* Queue based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_queue_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_queue_based_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ uint32_t queue_count;
+ uint16_t valid_event;
+ struct rte_event ev;
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
+ "Queue count get failed");
+ uint8_t nr_queues = queue_count;
+ rte_atomic32_t *total_events = param->total_events;
+
+ while (rte_atomic32_read(total_events) > 0) {
+ valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+ if (!valid_event)
+ continue;
+
+ if (ev.queue_id == nr_queues - 1) { /* Last stage */
+ rte_pktmbuf_free(ev.mbuf);
+ rte_atomic32_sub(total_events, 1);
+ } else {
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id++;
+ ev.sub_event_type = rte_rand() % 256;
+ ev.sched_type =
+ rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
+ ev.op = RTE_EVENT_OP_FORWARD;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+ }
+
+ return 0;
+}
+
+/* Queue and flow based pipeline with maximum stages with random sched type */
+static int
+test_multi_port_mixed_max_stages_random_sched_type(void)
+{
+ return launch_multi_port_max_stages_random_sched_type(
+ worker_mixed_pipeline_max_stages_rand_sched_type);
+}
+
+static int
+worker_ordered_flow_producer(void *arg)
+{
+ struct test_core_param *param = arg;
+ uint8_t port = param->port;
+ struct rte_mbuf *m;
+ int counter = 0;
+
+ while (counter < NUM_PACKETS) {
+ m = rte_pktmbuf_alloc(eventdev_test_mempool);
+ if (m == NULL)
+ continue;
+
+ m->seqn = counter++;
+
+ struct rte_event ev = {.event = 0, .u64 = 0};
+
+ ev.flow_id = 0x1; /* Generate a fat flow */
+ ev.sub_event_type = 0;
+ /* Inject the new event */
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.sched_type = RTE_SCHED_TYPE_ORDERED;
+ ev.queue_id = 0;
+ ev.mbuf = m;
+ rte_event_enqueue_burst(evdev, port, &ev, 1);
+ }
+
+ return 0;
+}
+
+static inline int
+test_producer_consumer_ingress_order_test(int (*fn)(void *))
+{
+ uint32_t nr_ports;
+
+ RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+ RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
+ "Port count get failed");
+ nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
+
+ if (rte_lcore_count() < 3 || nr_ports < 2) {
+ otx2_err("Not enough cores for test.");
+ return 0;
+ }
+
+ launch_workers_and_wait(worker_ordered_flow_producer, fn,
+ NUM_PACKETS, nr_ports, RTE_SCHED_TYPE_ATOMIC);
+ /* Check whether the event order was maintained */
+ return seqn_list_check(NUM_PACKETS);
+}
+
+/* Flow based producer consumer ingress order test */
+static int
+test_flow_producer_consumer_ingress_order_test(void)
+{
+ return test_producer_consumer_ingress_order_test(
+ worker_flow_based_pipeline);
+}
+
+/* Queue based producer consumer ingress order test */
+static int
+test_queue_producer_consumer_ingress_order_test(void)
+{
+ return test_producer_consumer_ingress_order_test(
+ worker_group_based_pipeline);
+}
+
+static void octeontx_test_run(int (*setup)(void), void (*tdown)(void),
+ int (*test)(void), const char *name)
+{
+ if (setup() < 0) {
+ printf("Error setting up test %s\n", name);
+ unsupported++;
+ } else {
+ if (test() < 0) {
+ failed++;
+ printf("+ TestCase [%2d] : %s failed\n", total, name);
+ } else {
+ passed++;
+ printf("+ TestCase [%2d] : %s succeeded\n", total,
+ name);
+ }
+ }
+
+ total++;
+ tdown();
+}
+
+int
+otx2_sso_selftest(void)
+{
+ testsuite_setup();
+
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_ordered);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_atomic);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_simple_enqdeq_parallel);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_queue_enq_single_port_deq);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_dev_stop_flush);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_queue_enq_multi_port_deq);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_to_port_single_link);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_to_port_multi_link);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_atomic);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_ordered);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_ordered_to_parallel);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_atomic);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_ordered);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_atomic_to_parallel);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_atomic);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_ordered);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_parallel_to_parallel);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_atomic);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_ordered);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_ordered_to_parallel);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_atomic);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_ordered);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_atomic_to_parallel);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_atomic);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_ordered);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_parallel_to_parallel);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_flow_max_stages_random_sched_type);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_queue_max_stages_random_sched_type);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_multi_port_mixed_max_stages_random_sched_type);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_flow_producer_consumer_ingress_order_test);
+ OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
+ test_queue_producer_consumer_ingress_order_test);
+ OCTEONTX2_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
+ test_multi_queue_priority);
+ OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+ test_multi_port_flow_ordered_to_atomic);
+ OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+ test_multi_port_queue_ordered_to_atomic);
+ printf("Total tests : %d\n", total);
+ printf("Passed : %d\n", passed);
+ printf("Failed : %d\n", failed);
+ printf("Not supported : %d\n", unsupported);
+
+ testsuite_teardown();
+
+ if (failed)
+ return -1;
+
+ return 0;
+}
--
2.22.0
^ permalink raw reply [flat|nested] 48+ messages in thread
* [dpdk-dev] [PATCH v2 26/44] doc: add Marvell OCTEON TX2 event device documentation
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (24 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 25/44] event/octeontx2: add SSO selftest pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 27/44] event/octeontx2: add event timer support pbhagavatula
` (17 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj, John McNamara, Marko Kovacevic, Nithin Dabilpuram,
Vamsi Attunuru
Cc: dev, Pavan Nikhilesh, Thomas Monjalon
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add Marvell OCTEON TX2 event device documentation.
This patch also updates the MAINTAINERS file and the
shared library versions in release_19_08.rst.
Cc: John McNamara <john.mcnamara@intel.com>
Cc: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
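[Editor's note: the runtime devargs documented in this patch combine into a
single ``--dev`` string. A hypothetical invocation, for illustration only
(the application name and core list are placeholders, not from the patch):

```shell
# Combine the documented devargs (xae_cnt, single_ws, selftest) with the
# SSO debug log level; BDF and application name are illustrative.
./your-eventdev-app -l 0-3 \
    --dev "0002:0e:00.0,xae_cnt=16384,single_ws=1,selftest=1" \
    --log-level='pmd.event.octeontx2,8'
```
]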
doc/guides/eventdevs/index.rst | 1 +
doc/guides/eventdevs/octeontx2.rst | 104 +++++++++++++++++++++++++++++
doc/guides/platform/octeontx2.rst | 3 +
3 files changed, 108 insertions(+)
create mode 100644 doc/guides/eventdevs/octeontx2.rst
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index f7382dc8a..570905b81 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -16,4 +16,5 @@ application trough the eventdev API.
dsw
sw
octeontx
+ octeontx2
opdl
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
new file mode 100644
index 000000000..928251aa6
--- /dev/null
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -0,0 +1,104 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Marvell International Ltd.
+
+OCTEON TX2 SSO Eventdev Driver
+===============================
+
+The OCTEON TX2 SSO PMD (**librte_pmd_octeontx2_event**) provides poll mode
+eventdev driver support for the inbuilt event device found in the **Marvell OCTEON TX2**
+SoC family.
+
+More information about OCTEON TX2 SoC can be found at `Marvell Official Website
+<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
+
+Features
+--------
+
+Features of the OCTEON TX2 SSO PMD are:
+
+- 256 Event queues
+- 26 (dual) and 52 (single) Event ports
+- HW event scheduler
+- Supports 1M flows per event queue
+- Flow based event pipelining
+- Flow pinning support in flow based event pipelining
+- Queue based event pipelining
+- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
+- Event scheduling QoS based on event queue priority
+- Open system with configurable amount of outstanding events limited only by
+ DRAM
+- HW accelerated dequeue timeout support to enable power management
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+ See :doc:`../platform/octeontx2` for setup information.
+
+Pre-Installation Configuration
+------------------------------
+
+Compile time Config Options
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following option can be modified in the ``config`` file.
+
+- ``CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV`` (default ``y``)
+
+ Toggle compilation of the ``librte_pmd_octeontx2_event`` driver.
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``Maximum number of in-flight events`` (default ``8192``)
+
+ In **Marvell OCTEON TX2** the maximum number of in-flight events is limited
+ only by DRAM size. The ``xae_cnt`` devargs parameter can be used to set an
+ upper limit on the number of in-flight events.
+ For example::
+
+ --dev "0002:0e:00.0,xae_cnt=16384"
+
+- ``Force legacy mode``
+
+ The ``single_ws`` devargs parameter forces legacy mode, i.e. single
+ workslot mode in SSO, and disables the default dual workslot mode.
+ For example::
+
+ --dev "0002:0e:00.0,single_ws=1"
+
+- ``Event Group QoS support``
+
+ SSO GGRPs (i.e. event queues) use DRAM and SRAM buffers to hold in-flight
+ events. By default, buffers are assigned to the SSO GGRPs to
+ satisfy the minimum HW requirements. SSO is free to assign the remaining
+ buffers to GGRPs based on a preconfigured threshold.
+ The QoS of an SSO GGRP can be controlled by modifying these
+ thresholds; GGRPs of higher importance can be assigned higher
+ thresholds than the rest. The format is
+ [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ], expressed in percentages, where 0
+ selects the default.
+ For example::
+
+ --dev "0002:0e:00.0,qos=[1-50-50-50]"
+
+- ``Selftest``
+
+ The functionality of the OCTEON TX2 eventdev can be verified using this
+ option; various unit and functional tests are run to check sanity.
+ The tests are run once device creation completes successfully.
+ For example::
+
+ --dev "0002:0e:00.0,selftest=1"
+
+Debugging Options
+~~~~~~~~~~~~~~~~~
+
+.. _table_octeontx2_event_debug_options:
+
+.. table:: OCTEON TX2 event device debug options
+
+ +---+------------+-------------------------------------------------------+
+ | # | Component | EAL log command |
+ +===+============+=======================================================+
+ | 1 | SSO | --log-level='pmd\.event\.octeontx2,8' |
+ +---+------------+-------------------------------------------------------+
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
index c9ea45647..fbf1193e7 100644
--- a/doc/guides/platform/octeontx2.rst
+++ b/doc/guides/platform/octeontx2.rst
@@ -101,6 +101,9 @@ This section lists dataplane H/W block(s) available in OCTEON TX2 SoC.
#. **Mempool Driver**
See :doc:`../mempool/octeontx2` for NPA mempool driver information.
+#. **Event Device Driver**
+ See :doc:`../eventdevs/octeontx2` for SSO event device driver information.
+
Procedure to Setup Platform
---------------------------
--
2.22.0
* [dpdk-dev] [PATCH v2 27/44] event/octeontx2: add event timer support
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (25 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 26/44] doc: add Marvell OCTEON TX2 event device documentation pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 28/44] event/octeontx2: add timer adapter capabilities pbhagavatula
` (16 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj, Anatoly Burakov; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer adapter (aka TIM) initialization on SSO probe.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
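[Editor's note: the driver keeps its TIM state in a named memzone reserved by
the primary process, and any part of the PMD can retrieve it later via
tim_priv_get(). Conceptually this is a named-singleton registry; a minimal
Python model of that pattern, with illustrative names (not driver code):

```python
# Model of the named-memzone singleton used by the TIM PMD:
# otx2_tim_init() reserves a zone under a fixed name, tim_priv_get()
# looks it up by the same name, otx2_tim_fini() frees it.
_memzones = {}

OTX2_TIM_EVDEV_NAME = "otx2_tim_eventdev"

def memzone_reserve(name, init):
    # rte_memzone_reserve() fails if the name already exists.
    if name in _memzones:
        return None
    _memzones[name] = init
    return _memzones[name]

def memzone_lookup(name):
    return _memzones.get(name)

def tim_init(pci_dev):
    return memzone_reserve(OTX2_TIM_EVDEV_NAME,
                           {"pci_dev": pci_dev, "nb_rings": 0})

def tim_priv_get():
    return memzone_lookup(OTX2_TIM_EVDEV_NAME)

def tim_fini():
    _memzones.pop(OTX2_TIM_EVDEV_NAME, None)

tim_init("0002:0e:00.0")
print(tim_priv_get()["pci_dev"])
tim_fini()
print(tim_priv_get())
```
]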
Cc: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
drivers/event/octeontx2/Makefile | 1 +
drivers/event/octeontx2/meson.build | 1 +
drivers/event/octeontx2/otx2_evdev.c | 3 +
drivers/event/octeontx2/otx2_tim_evdev.c | 78 ++++++++++++++++++++++++
drivers/event/octeontx2/otx2_tim_evdev.h | 36 +++++++++++
5 files changed, 119 insertions(+)
create mode 100644 drivers/event/octeontx2/otx2_tim_evdev.c
create mode 100644 drivers/event/octeontx2/otx2_tim_evdev.h
diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index d6cffc1f6..2290622dd 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -33,6 +33,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker_dual.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_tim_evdev.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_selftest.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
index 470564b08..ad7f2e084 100644
--- a/drivers/event/octeontx2/meson.build
+++ b/drivers/event/octeontx2/meson.build
@@ -7,6 +7,7 @@ sources = files('otx2_worker.c',
'otx2_evdev.c',
'otx2_evdev_irq.c',
'otx2_evdev_selftest.c',
+ 'otx2_tim_evdev.c',
)
allow_experimental_apis = true
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index c5a150954..a716167b3 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -15,6 +15,7 @@
#include "otx2_evdev_stats.h"
#include "otx2_evdev.h"
#include "otx2_irq.h"
+#include "otx2_tim_evdev.h"
static inline int
sso_get_msix_offsets(const struct rte_eventdev *event_dev)
@@ -1310,6 +1311,7 @@ otx2_sso_init(struct rte_eventdev *event_dev)
event_dev->dev_ops->dev_selftest();
}
+ otx2_tim_init(pci_dev, (struct otx2_dev *)dev);
return 0;
@@ -1345,6 +1347,7 @@ otx2_sso_fini(struct rte_eventdev *event_dev)
return -EAGAIN;
}
+ otx2_tim_fini();
otx2_dev_fini(pci_dev, dev);
return 0;
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
new file mode 100644
index 000000000..004701f64
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_evdev.h"
+#include "otx2_tim_evdev.h"
+
+void
+otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
+{
+ struct rsrc_attach_req *atch_req;
+ struct free_rsrcs_rsp *rsrc_cnt;
+ const struct rte_memzone *mz;
+ struct otx2_tim_evdev *dev;
+ int rc;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ mz = rte_memzone_reserve(RTE_STR(OTX2_TIM_EVDEV_NAME),
+ sizeof(struct otx2_tim_evdev),
+ rte_socket_id(), 0);
+ if (mz == NULL) {
+ otx2_tim_dbg("Unable to allocate memory for TIM Event device");
+ return;
+ }
+
+ dev = mz->addr;
+ dev->pci_dev = pci_dev;
+ dev->mbox = cmn_dev->mbox;
+ dev->bar2 = cmn_dev->bar2;
+
+ otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
+ rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
+ if (rc < 0) {
+ otx2_err("Unable to get free rsrc count.");
+ goto mz_free;
+ }
+
+ dev->nb_rings = rsrc_cnt->tim;
+
+ if (!dev->nb_rings) {
+ otx2_tim_dbg("No TIM Logical functions provisioned.");
+ goto mz_free;
+ }
+
+ atch_req = otx2_mbox_alloc_msg_attach_resources(dev->mbox);
+ atch_req->modify = true;
+ atch_req->timlfs = dev->nb_rings;
+
+ rc = otx2_mbox_process(dev->mbox);
+ if (rc < 0) {
+ otx2_err("Unable to attach TIM rings.");
+ goto mz_free;
+ }
+
+ return;
+
+mz_free:
+ rte_memzone_free(mz);
+}
+
+void
+otx2_tim_fini(void)
+{
+ struct otx2_tim_evdev *dev = tim_priv_get();
+ struct rsrc_detach_req *dtch_req;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox);
+ dtch_req->partial = true;
+ dtch_req->timlfs = true;
+
+ otx2_mbox_process(dev->mbox);
+ rte_memzone_free(rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME)));
+}
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
new file mode 100644
index 000000000..9f7aeb7df
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_TIM_EVDEV_H__
+#define __OTX2_TIM_EVDEV_H__
+
+#include <rte_event_timer_adapter.h>
+
+#include "otx2_dev.h"
+
+#define OTX2_TIM_EVDEV_NAME otx2_tim_eventdev
+
+struct otx2_tim_evdev {
+ struct rte_pci_device *pci_dev;
+ struct otx2_mbox *mbox;
+ uint16_t nb_rings;
+ uintptr_t bar2;
+};
+
+static inline struct otx2_tim_evdev *
+tim_priv_get(void)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME));
+ if (mz == NULL)
+ return NULL;
+
+ return mz->addr;
+}
+
+void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev);
+void otx2_tim_fini(void);
+
+#endif /* __OTX2_TIM_EVDEV_H__ */
--
2.22.0
* [dpdk-dev] [PATCH v2 28/44] event/octeontx2: add timer adapter capabilities
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (26 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 27/44] event/octeontx2: add event timer support pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 29/44] event/octeontx2: create and free timer adapter pbhagavatula
` (15 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add function to retrieve event timer adapter capabilities.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev.c | 2 ++
drivers/event/octeontx2/otx2_tim_evdev.c | 19 +++++++++++++++++++
drivers/event/octeontx2/otx2_tim_evdev.h | 5 +++++
3 files changed, 26 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index a716167b3..a1222b3cf 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -1092,6 +1092,8 @@ static struct rte_eventdev_ops otx2_sso_ops = {
.port_unlink = otx2_sso_port_unlink,
.timeout_ticks = otx2_sso_timeout_ticks,
+ .timer_adapter_caps_get = otx2_tim_caps_get,
+
.xstats_get = otx2_sso_xstats_get,
.xstats_reset = otx2_sso_xstats_reset,
.xstats_get_names = otx2_sso_xstats_get_names,
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index 004701f64..0f20c163b 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -5,6 +5,25 @@
#include "otx2_evdev.h"
#include "otx2_tim_evdev.h"
+int
+otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops)
+{
+ struct otx2_tim_evdev *dev = tim_priv_get();
+
+ RTE_SET_USED(flags);
+ RTE_SET_USED(ops);
+ if (dev == NULL)
+ return -ENODEV;
+
+ /* Store evdev pointer for later use. */
+ dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
+ *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+
+ return 0;
+}
+
void
otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
{
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index 9f7aeb7df..e94c61b1a 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -13,6 +13,7 @@
struct otx2_tim_evdev {
struct rte_pci_device *pci_dev;
+ struct rte_eventdev *event_dev;
struct otx2_mbox *mbox;
uint16_t nb_rings;
uintptr_t bar2;
@@ -30,6 +31,10 @@ tim_priv_get(void)
return mz->addr;
}
+int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
+ uint32_t *caps,
+ const struct rte_event_timer_adapter_ops **ops);
+
void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev);
void otx2_tim_fini(void);
--
2.22.0
* [dpdk-dev] [PATCH v2 29/44] event/octeontx2: create and free timer adapter
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (27 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 28/44] event/octeontx2: add timer adapter capabilities pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 30/44] event/octeontx2: allow TIM to optimize config pbhagavatula
` (14 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
When the application creates a timer adapter, the driver does the following:
- Allocate a TIM LF based on the number of LFs provisioned.
- Verify the supplied config parameters.
- Allocate the memory required for:
* Buckets, based on the min and max timeout supplied.
* The chunk pool, based on the number of timers.
On free:
- Free the allocated bucket and chunk memory.
- Free the allocated TIM LF.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
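[Editor's note: the bucket and chunk sizing performed in otx2_tim_ring_create()
can be modeled as below. This is an illustrative Python sketch, not driver
code; the constants mirror OTX2_TIM_RING_DEF_CHUNK_SZ and
OTX2_TIM_NB_CHUNK_SLOTS from the patch:

```python
# Sketch of the TIM ring sizing math from otx2_tim_ring_create().
# Chunks hold 16-byte timer entries; the last slot of each chunk is
# reserved as a next-chunk link pointer.
OTX2_TIM_RING_DEF_CHUNK_SZ = 4096
OTX2_TIM_CHUNK_ALIGNMENT = 16

def nb_chunk_slots(chunk_sz):
    # One slot per 16-byte entry, minus one slot used as a link pointer.
    return chunk_sz // OTX2_TIM_CHUNK_ALIGNMENT - 1

def ring_sizing(timer_tick_ns, max_tmo_ns, nb_timers,
                chunk_sz=OTX2_TIM_RING_DEF_CHUNK_SZ):
    # The tick is rounded up to a multiple of 10 ns
    # (RTE_ALIGN_MUL_CEIL(timer_tick_ns, 10) in the patch).
    tck_nsec = -(-timer_tick_ns // 10) * 10
    nb_bkts = max_tmo_ns // tck_nsec
    nb_chunks = nb_timers // nb_chunk_slots(chunk_sz)
    return tck_nsec, nb_bkts, nb_chunks

print(ring_sizing(timer_tick_ns=1000, max_tmo_ns=1000000, nb_timers=10200))
```
]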
drivers/event/octeontx2/otx2_tim_evdev.c | 259 ++++++++++++++++++++++-
drivers/event/octeontx2/otx2_tim_evdev.h | 55 +++++
2 files changed, 313 insertions(+), 1 deletion(-)
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index 0f20c163b..e24f7ce9e 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -2,9 +2,263 @@
* Copyright(C) 2019 Marvell International Ltd.
*/
+#include <rte_malloc.h>
+#include <rte_mbuf_pool_ops.h>
+
#include "otx2_evdev.h"
#include "otx2_tim_evdev.h"
+static struct rte_event_timer_adapter_ops otx2_tim_ops;
+
+static int
+tim_chnk_pool_create(struct otx2_tim_ring *tim_ring,
+ struct rte_event_timer_adapter_conf *rcfg)
+{
+ unsigned int cache_sz = (tim_ring->nb_chunks / 1.5);
+ unsigned int mp_flags = 0;
+ char pool_name[25];
+ int rc;
+
+ /* Create chunk pool. */
+ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
+ mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+ otx2_tim_dbg("Using single producer mode");
+ tim_ring->prod_type_sp = true;
+ }
+
+ snprintf(pool_name, sizeof(pool_name), "otx2_tim_chunk_pool%d",
+ tim_ring->ring_id);
+
+ if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
+ cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
+
+ /* NPA need not have cache as free is not visible to SW */
+ tim_ring->chunk_pool = rte_mempool_create_empty(pool_name,
+ tim_ring->nb_chunks,
+ tim_ring->chunk_sz,
+ 0, 0, rte_socket_id(),
+ mp_flags);
+
+ if (tim_ring->chunk_pool == NULL) {
+ otx2_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
+
+ rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+ rte_mbuf_platform_mempool_ops(), NULL);
+ if (rc < 0) {
+ otx2_err("Unable to set chunkpool ops");
+ goto free;
+ }
+
+ rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+ if (rc < 0) {
+ otx2_err("Unable to populate chunkpool.");
+ goto free;
+ }
+ tim_ring->aura = npa_lf_aura_handle_to_aura(
+ tim_ring->chunk_pool->pool_id);
+ tim_ring->ena_dfb = 0;
+
+ return 0;
+
+free:
+ rte_mempool_free(tim_ring->chunk_pool);
+ return rc;
+}
+
+static void
+tim_err_desc(int rc)
+{
+ switch (rc) {
+ case TIM_AF_NO_RINGS_LEFT:
+ otx2_err("Unable to allocate new TIM ring.");
+ break;
+ case TIM_AF_INVALID_NPA_PF_FUNC:
+ otx2_err("Invalid NPA pf func.");
+ break;
+ case TIM_AF_INVALID_SSO_PF_FUNC:
+ otx2_err("Invalid SSO pf func.");
+ break;
+ case TIM_AF_RING_STILL_RUNNING:
+ otx2_tim_dbg("Ring busy.");
+ break;
+ case TIM_AF_LF_INVALID:
+ otx2_err("Invalid Ring id.");
+ break;
+ case TIM_AF_CSIZE_NOT_ALIGNED:
+ otx2_err("Chunk size specified needs to be multiple of 16.");
+ break;
+ case TIM_AF_CSIZE_TOO_SMALL:
+ otx2_err("Chunk size too small.");
+ break;
+ case TIM_AF_CSIZE_TOO_BIG:
+ otx2_err("Chunk size too big.");
+ break;
+ case TIM_AF_INTERVAL_TOO_SMALL:
+ otx2_err("Bucket traversal interval too small.");
+ break;
+ case TIM_AF_INVALID_BIG_ENDIAN_VALUE:
+ otx2_err("Invalid Big endian value.");
+ break;
+ case TIM_AF_INVALID_CLOCK_SOURCE:
+ otx2_err("Invalid Clock source specified.");
+ break;
+ case TIM_AF_GPIO_CLK_SRC_NOT_ENABLED:
+ otx2_err("GPIO clock source not enabled.");
+ break;
+ case TIM_AF_INVALID_BSIZE:
+ otx2_err("Invalid bucket size.");
+ break;
+ case TIM_AF_INVALID_ENABLE_PERIODIC:
+ otx2_err("Invalid periodic enable value.");
+ break;
+ case TIM_AF_INVALID_ENABLE_DONTFREE:
+ otx2_err("Invalid Don't free value.");
+ break;
+ case TIM_AF_ENA_DONTFRE_NSET_PERIODIC:
+ otx2_err("Don't free bit not set when periodic is enabled.");
+ break;
+ case TIM_AF_RING_ALREADY_DISABLED:
+ otx2_err("Ring already stopped");
+ break;
+ default:
+ otx2_err("Unknown Error.");
+ }
+}
+
+static int
+otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
+{
+ struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
+ struct otx2_tim_evdev *dev = tim_priv_get();
+ struct otx2_tim_ring *tim_ring;
+ struct tim_config_req *cfg_req;
+ struct tim_ring_req *free_req;
+ struct tim_lf_alloc_req *req;
+ struct tim_lf_alloc_rsp *rsp;
+ uint64_t nb_timers;
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ if (adptr->data->id >= dev->nb_rings)
+ return -ENODEV;
+
+ req = otx2_mbox_alloc_msg_tim_lf_alloc(dev->mbox);
+ req->npa_pf_func = otx2_npa_pf_func_get();
+ req->sso_pf_func = otx2_sso_pf_func_get();
+ req->ring = adptr->data->id;
+
+ rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
+ if (rc < 0) {
+ tim_err_desc(rc);
+ return -ENODEV;
+ }
+
+ if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10),
+ rsp->tenns_clk) < OTX2_TIM_MIN_TMO_TKS) {
+ rc = -ERANGE;
+ goto rng_mem_err;
+ }
+
+ tim_ring = rte_zmalloc("otx2_tim_prv", sizeof(struct otx2_tim_ring), 0);
+ if (tim_ring == NULL) {
+ rc = -ENOMEM;
+ goto rng_mem_err;
+ }
+
+ adptr->data->adapter_priv = tim_ring;
+
+ tim_ring->tenns_clk_freq = rsp->tenns_clk;
+ tim_ring->clk_src = (int)rcfg->clk_src;
+ tim_ring->ring_id = adptr->data->id;
+ tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10);
+ tim_ring->max_tout = rcfg->max_tmo_ns;
+ tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
+ tim_ring->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ;
+ nb_timers = rcfg->nb_timers;
+ tim_ring->nb_chunks = nb_timers / OTX2_TIM_NB_CHUNK_SLOTS(
+ tim_ring->chunk_sz);
+ tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+
+ /* Create buckets. */
+ tim_ring->bkt = rte_zmalloc("otx2_tim_bucket", (tim_ring->nb_bkts) *
+ sizeof(struct otx2_tim_bkt),
+ RTE_CACHE_LINE_SIZE);
+ if (tim_ring->bkt == NULL)
+ goto bkt_mem_err;
+
+ rc = tim_chnk_pool_create(tim_ring, rcfg);
+ if (rc < 0)
+ goto chnk_mem_err;
+
+ cfg_req = otx2_mbox_alloc_msg_tim_config_ring(dev->mbox);
+
+ cfg_req->ring = tim_ring->ring_id;
+ cfg_req->bigendian = false;
+ cfg_req->clocksource = tim_ring->clk_src;
+ cfg_req->enableperiodic = false;
+ cfg_req->enabledontfreebuffer = tim_ring->ena_dfb;
+ cfg_req->bucketsize = tim_ring->nb_bkts;
+ cfg_req->chunksize = tim_ring->chunk_sz;
+ cfg_req->interval = NSEC2TICK(tim_ring->tck_nsec,
+ tim_ring->tenns_clk_freq);
+
+ rc = otx2_mbox_process(dev->mbox);
+ if (rc < 0) {
+ tim_err_desc(rc);
+ goto chnk_mem_err;
+ }
+
+ tim_ring->base = dev->bar2 +
+ (RVU_BLOCK_ADDR_TIM << 20 | tim_ring->ring_id << 12);
+
+ otx2_write64((uint64_t)tim_ring->bkt,
+ tim_ring->base + TIM_LF_RING_BASE);
+ otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+
+ return rc;
+
+chnk_mem_err:
+ rte_free(tim_ring->bkt);
+bkt_mem_err:
+ rte_free(tim_ring);
+rng_mem_err:
+ free_req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox);
+ free_req->ring = adptr->data->id;
+ otx2_mbox_process(dev->mbox);
+ return rc;
+}
+
+static int
+otx2_tim_ring_free(struct rte_event_timer_adapter *adptr)
+{
+ struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct otx2_tim_evdev *dev = tim_priv_get();
+ struct tim_ring_req *req;
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox);
+ req->ring = tim_ring->ring_id;
+
+ rc = otx2_mbox_process(dev->mbox);
+ if (rc < 0) {
+ tim_err_desc(rc);
+ return -EBUSY;
+ }
+
+ rte_free(tim_ring->bkt);
+ rte_mempool_free(tim_ring->chunk_pool);
+ rte_free(adptr->data->adapter_priv);
+
+ return 0;
+}
+
int
otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
uint32_t *caps,
@@ -13,13 +267,16 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
struct otx2_tim_evdev *dev = tim_priv_get();
RTE_SET_USED(flags);
- RTE_SET_USED(ops);
if (dev == NULL)
return -ENODEV;
+ otx2_tim_ops.init = otx2_tim_ring_create;
+ otx2_tim_ops.uninit = otx2_tim_ring_free;
+
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+ *ops = &otx2_tim_ops;
return 0;
}
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index e94c61b1a..aaa4d93f5 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -6,11 +6,47 @@
#define __OTX2_TIM_EVDEV_H__
#include <rte_event_timer_adapter.h>
+#include <rte_event_timer_adapter_pmd.h>
#include "otx2_dev.h"
#define OTX2_TIM_EVDEV_NAME otx2_tim_eventdev
+#define otx2_tim_func_trace otx2_tim_dbg
+
+#define TIM_LF_RING_AURA (0x0)
+#define TIM_LF_RING_BASE (0x130)
+
+#define OTX2_TIM_RING_DEF_CHUNK_SZ (4096)
+#define OTX2_TIM_CHUNK_ALIGNMENT (16)
+#define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1)
+#define OTX2_TIM_MIN_TMO_TKS (256)
+
+enum otx2_tim_clk_src {
+ OTX2_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
+ OTX2_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0,
+ OTX2_TIM_CLK_SRC_GTI = RTE_EVENT_TIMER_ADAPTER_EXT_CLK1,
+ OTX2_TIM_CLK_SRC_PTP = RTE_EVENT_TIMER_ADAPTER_EXT_CLK2,
+};
+
+struct otx2_tim_bkt {
+ uint64_t first_chunk;
+ union {
+ uint64_t w1;
+ struct {
+ uint32_t nb_entry;
+ uint8_t sbt:1;
+ uint8_t hbt:1;
+ uint8_t bsk:1;
+ uint8_t rsvd:5;
+ uint8_t lock;
+ int16_t chunk_remainder;
+ };
+ };
+ uint64_t current_chunk;
+ uint64_t pad;
+} __rte_packed __rte_aligned(32);
+
struct otx2_tim_evdev {
struct rte_pci_device *pci_dev;
struct rte_eventdev *event_dev;
@@ -19,6 +55,25 @@ struct otx2_tim_evdev {
uintptr_t bar2;
};
+struct otx2_tim_ring {
+ uintptr_t base;
+ uint16_t nb_chunk_slots;
+ uint32_t nb_bkts;
+ struct otx2_tim_bkt *bkt;
+ struct rte_mempool *chunk_pool;
+ uint64_t tck_int;
+ uint8_t prod_type_sp;
+ uint8_t ena_dfb;
+ uint16_t ring_id;
+ uint32_t aura;
+ uint64_t tck_nsec;
+ uint64_t max_tout;
+ uint64_t nb_chunks;
+ uint64_t chunk_sz;
+ uint64_t tenns_clk_freq;
+ enum otx2_tim_clk_src clk_src;
+} __rte_cache_aligned;
+
static inline struct otx2_tim_evdev *
tim_priv_get(void)
{
--
2.22.0
* [dpdk-dev] [PATCH v2 30/44] event/octeontx2: allow TIM to optimize config
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (28 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 29/44] event/octeontx2: create and free timer adapter pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 31/44] event/octeontx2: add devargs to disable NPA pbhagavatula
` (13 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Allow TIM to optimize user supplied configuration based on
RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES flag.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
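[Editor's note: the resolution-adjustment logic in this patch snaps the bucket
count to whichever neighboring power of two is nearer, then re-derives the
tick from the max timeout. A rough Python model, with the bounds checks
elided and illustrative helper names:

```python
# Model of tim_optimze_bkt_param(): when the application sets
# RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES, pick the nearer power-of-two
# bucket count and recompute the per-bucket tick.
def align32pow2(x):
    # Next power of two >= x (rte_align32pow2).
    return 1 << (x - 1).bit_length()

def align32prevpow2(x):
    # Previous power of two <= x (rte_align32prevpow2).
    return 1 << (x.bit_length() - 1)

def optimize_buckets(nb_bkts, max_tout_ns):
    hbkts = align32pow2(nb_bkts)
    lbkts = align32prevpow2(nb_bkts)
    # Pick whichever candidate is closer to the requested count.
    nb_bkts = hbkts if (hbkts - nb_bkts) < (nb_bkts - lbkts) else lbkts
    # Re-derive the tick, rounded up to a 10 ns multiple
    # (RTE_ALIGN_MUL_CEIL(max_tout / (nb_bkts - 1), 10) in the patch).
    tck_nsec = -(-(max_tout_ns // (nb_bkts - 1)) // 10) * 10
    return nb_bkts, tck_nsec

print(optimize_buckets(1000, 1_000_000_000))
```
]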
drivers/event/octeontx2/otx2_evdev.h | 1 +
drivers/event/octeontx2/otx2_tim_evdev.c | 62 +++++++++++++++++++++++-
drivers/event/octeontx2/otx2_tim_evdev.h | 3 ++
3 files changed, 64 insertions(+), 2 deletions(-)
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index fc8dde416..1e15b7e1c 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -76,6 +76,7 @@
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us) * 1E3)
#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
+#define TICK2NSEC(__tck, __freq) (((__tck) * 1E9) / (__freq))
enum otx2_sso_lf_type {
SSO_LF_GGRP,
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index e24f7ce9e..a0953bb49 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -10,6 +10,51 @@
static struct rte_event_timer_adapter_ops otx2_tim_ops;
+static void
+tim_optimze_bkt_param(struct otx2_tim_ring *tim_ring)
+{
+ uint64_t tck_nsec;
+ uint32_t hbkts;
+ uint32_t lbkts;
+
+ hbkts = rte_align32pow2(tim_ring->nb_bkts);
+ tck_nsec = RTE_ALIGN_MUL_CEIL(tim_ring->max_tout / (hbkts - 1), 10);
+
+ if ((tck_nsec < TICK2NSEC(OTX2_TIM_MIN_TMO_TKS,
+ tim_ring->tenns_clk_freq) ||
+ hbkts > OTX2_TIM_MAX_BUCKETS))
+ hbkts = 0;
+
+ lbkts = rte_align32prevpow2(tim_ring->nb_bkts);
+ tck_nsec = RTE_ALIGN_MUL_CEIL((tim_ring->max_tout / (lbkts - 1)), 10);
+
+ if ((tck_nsec < TICK2NSEC(OTX2_TIM_MIN_TMO_TKS,
+ tim_ring->tenns_clk_freq) ||
+ lbkts > OTX2_TIM_MAX_BUCKETS))
+ lbkts = 0;
+
+ if (!hbkts && !lbkts)
+ return;
+
+ if (!hbkts) {
+ tim_ring->nb_bkts = lbkts;
+ goto end;
+ } else if (!lbkts) {
+ tim_ring->nb_bkts = hbkts;
+ goto end;
+ }
+
+ tim_ring->nb_bkts = (hbkts - tim_ring->nb_bkts) <
+ (tim_ring->nb_bkts - lbkts) ? hbkts : lbkts;
+end:
+ tim_ring->optimized = true;
+ tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL((tim_ring->max_tout /
+ (tim_ring->nb_bkts - 1)), 10);
+ otx2_tim_dbg("Optimized configured values");
+ otx2_tim_dbg("Nb_bkts : %" PRIu32 "", tim_ring->nb_bkts);
+ otx2_tim_dbg("Tck_nsec : %" PRIu64 "", tim_ring->tck_nsec);
+}
+
static int
tim_chnk_pool_create(struct otx2_tim_ring *tim_ring,
struct rte_event_timer_adapter_conf *rcfg)
@@ -159,8 +204,13 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10),
rsp->tenns_clk) < OTX2_TIM_MIN_TMO_TKS) {
- rc = -ERANGE;
- goto rng_mem_err;
+ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
+ rcfg->timer_tick_ns = TICK2NSEC(OTX2_TIM_MIN_TMO_TKS,
+ rsp->tenns_clk);
+ else {
+ rc = -ERANGE;
+ goto rng_mem_err;
+ }
}
tim_ring = rte_zmalloc("otx2_tim_prv", sizeof(struct otx2_tim_ring), 0);
@@ -183,6 +233,14 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->chunk_sz);
tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
+ /* Try to optimize the bucket parameters. */
+ if ((rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)) {
+ if (rte_is_power_of_2(tim_ring->nb_bkts))
+ tim_ring->optimized = true;
+ else
+ tim_optimze_bkt_param(tim_ring);
+ }
+
/* Create buckets. */
tim_ring->bkt = rte_zmalloc("otx2_tim_bucket", (tim_ring->nb_bkts) *
sizeof(struct otx2_tim_bkt),
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index aaa4d93f5..fdd076ebd 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -17,6 +17,8 @@
#define TIM_LF_RING_AURA (0x0)
#define TIM_LF_RING_BASE (0x130)
+#define OTX2_MAX_TIM_RINGS (256)
+#define OTX2_TIM_MAX_BUCKETS (0xFFFFF)
#define OTX2_TIM_RING_DEF_CHUNK_SZ (4096)
#define OTX2_TIM_CHUNK_ALIGNMENT (16)
#define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1)
@@ -63,6 +65,7 @@ struct otx2_tim_ring {
struct rte_mempool *chunk_pool;
uint64_t tck_int;
uint8_t prod_type_sp;
+ uint8_t optimized;
uint8_t ena_dfb;
uint16_t ring_id;
uint32_t aura;
--
2.22.0
^ permalink raw reply [flat|nested] 48+ messages in thread
* [dpdk-dev] [PATCH v2 31/44] event/octeontx2: add devargs to disable NPA
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (29 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 30/44] event/octeontx2: allow TIM to optimize config pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 32/44] event/octeontx2: add devargs to modify chunk slots pbhagavatula
` (12 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
If the chunks are allocated from the NPA, TIM can free them automatically
while traversing the list of chunks.
Add a devarg to disable the NPA and manage chunks with a software mempool
instead.
Example:
--dev "0002:0e:00.0,tim_disable_npa=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_tim_evdev.c | 81 +++++++++++++++++-------
drivers/event/octeontx2/otx2_tim_evdev.h | 3 +
2 files changed, 61 insertions(+), 23 deletions(-)
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index a0953bb49..4b9816676 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -2,6 +2,7 @@
* Copyright(C) 2019 Marvell International Ltd.
*/
+#include <rte_kvargs.h>
#include <rte_malloc.h>
#include <rte_mbuf_pool_ops.h>
@@ -77,33 +78,45 @@ tim_chnk_pool_create(struct otx2_tim_ring *tim_ring,
if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
- /* NPA need not have cache as free is not visible to SW */
- tim_ring->chunk_pool = rte_mempool_create_empty(pool_name,
- tim_ring->nb_chunks,
- tim_ring->chunk_sz,
- 0, 0, rte_socket_id(),
- mp_flags);
+ if (!tim_ring->disable_npa) {
+ /* NPA need not have cache as free is not visible to SW */
+ tim_ring->chunk_pool = rte_mempool_create_empty(pool_name,
+ tim_ring->nb_chunks, tim_ring->chunk_sz,
+ 0, 0, rte_socket_id(), mp_flags);
- if (tim_ring->chunk_pool == NULL) {
- otx2_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
+ if (tim_ring->chunk_pool == NULL) {
+ otx2_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
- rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
- rte_mbuf_platform_mempool_ops(), NULL);
- if (rc < 0) {
- otx2_err("Unable to set chunkpool ops");
- goto free;
- }
+ rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+ rte_mbuf_platform_mempool_ops(),
+ NULL);
+ if (rc < 0) {
+ otx2_err("Unable to set chunkpool ops");
+ goto free;
+ }
- rc = rte_mempool_populate_default(tim_ring->chunk_pool);
- if (rc < 0) {
- otx2_err("Unable to set populate chunkpool.");
- goto free;
+ rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+ if (rc < 0) {
+ otx2_err("Unable to set populate chunkpool.");
+ goto free;
+ }
+ tim_ring->aura = npa_lf_aura_handle_to_aura(
+ tim_ring->chunk_pool->pool_id);
+ tim_ring->ena_dfb = 0;
+ } else {
+ tim_ring->chunk_pool = rte_mempool_create(pool_name,
+ tim_ring->nb_chunks, tim_ring->chunk_sz,
+ cache_sz, 0, NULL, NULL, NULL, NULL,
+ rte_socket_id(),
+ mp_flags);
+ if (tim_ring->chunk_pool == NULL) {
+ otx2_err("Unable to create chunkpool.");
+ return -ENOMEM;
+ }
+ tim_ring->ena_dfb = 1;
}
- tim_ring->aura = npa_lf_aura_handle_to_aura(
- tim_ring->chunk_pool->pool_id);
- tim_ring->ena_dfb = 0;
return 0;
@@ -229,6 +242,8 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
tim_ring->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ;
nb_timers = rcfg->nb_timers;
+ tim_ring->disable_npa = dev->disable_npa;
+
tim_ring->nb_chunks = nb_timers / OTX2_TIM_NB_CHUNK_SLOTS(
tim_ring->chunk_sz);
tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
@@ -339,6 +354,24 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
return 0;
}
+#define OTX2_TIM_DISABLE_NPA "tim_disable_npa"
+
+static void
+tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
+{
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ return;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, OTX2_TIM_DISABLE_NPA,
+ &parse_kvargs_flag, &dev->disable_npa);
+}
+
void
otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
{
@@ -364,6 +397,8 @@ otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
dev->mbox = cmn_dev->mbox;
dev->bar2 = cmn_dev->bar2;
+ tim_parse_devargs(pci_dev->device.devargs, dev);
+
otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
if (rc < 0) {
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index fdd076ebd..0a0a0b4d8 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -55,6 +55,8 @@ struct otx2_tim_evdev {
struct otx2_mbox *mbox;
uint16_t nb_rings;
uintptr_t bar2;
+ /* Dev args */
+ uint8_t disable_npa;
};
struct otx2_tim_ring {
@@ -65,6 +67,7 @@ struct otx2_tim_ring {
struct rte_mempool *chunk_pool;
uint64_t tck_int;
uint8_t prod_type_sp;
+ uint8_t disable_npa;
uint8_t optimized;
uint8_t ena_dfb;
uint16_t ring_id;
--
2.22.0
* [dpdk-dev] [PATCH v2 32/44] event/octeontx2: add devargs to modify chunk slots
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (30 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 31/44] event/octeontx2: add devargs to disable NPA pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 33/44] event/octeontx2: add TIM IRQ handlers pbhagavatula
` (11 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add devargs support to modify the number of chunk slots. Chunks are used
to store event timers; a chunk can be visualized as an array whose last
element points to the next chunk while the remaining elements store
events. TIM traverses the list of chunks and enqueues the event timers
to SSO.
If no argument is passed, a default value of 255 slots is used.
Example:
--dev "0002:0e:00.0,tim_chnk_slots=511"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_tim_evdev.c | 14 +++++++++++++-
drivers/event/octeontx2/otx2_tim_evdev.h | 4 ++++
2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index 4b9816676..c0a692bb5 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -240,7 +240,7 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10);
tim_ring->max_tout = rcfg->max_tmo_ns;
tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
- tim_ring->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ;
+ tim_ring->chunk_sz = dev->chunk_sz;
nb_timers = rcfg->nb_timers;
tim_ring->disable_npa = dev->disable_npa;
@@ -355,6 +355,7 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
}
#define OTX2_TIM_DISABLE_NPA "tim_disable_npa"
+#define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots"
static void
tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
@@ -370,6 +371,8 @@ tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
rte_kvargs_process(kvlist, OTX2_TIM_DISABLE_NPA,
&parse_kvargs_flag, &dev->disable_npa);
+ rte_kvargs_process(kvlist, OTX2_TIM_CHNK_SLOTS,
+ &parse_kvargs_value, &dev->chunk_slots);
}
void
@@ -423,6 +426,15 @@ otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
goto mz_free;
}
+ if (dev->chunk_slots &&
+ dev->chunk_slots <= OTX2_TIM_MAX_CHUNK_SLOTS &&
+ dev->chunk_slots >= OTX2_TIM_MIN_CHUNK_SLOTS) {
+ dev->chunk_sz = (dev->chunk_slots + 1) *
+ OTX2_TIM_CHUNK_ALIGNMENT;
+ } else {
+ dev->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ;
+ }
+
return;
mz_free:
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index 0a0a0b4d8..9636d8414 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -22,6 +22,8 @@
#define OTX2_TIM_RING_DEF_CHUNK_SZ (4096)
#define OTX2_TIM_CHUNK_ALIGNMENT (16)
#define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1)
+#define OTX2_TIM_MIN_CHUNK_SLOTS (0x1)
+#define OTX2_TIM_MAX_CHUNK_SLOTS (0x1FFE)
#define OTX2_TIM_MIN_TMO_TKS (256)
enum otx2_tim_clk_src {
@@ -54,9 +56,11 @@ struct otx2_tim_evdev {
struct rte_eventdev *event_dev;
struct otx2_mbox *mbox;
uint16_t nb_rings;
+ uint32_t chunk_sz;
uintptr_t bar2;
/* Dev args */
uint8_t disable_npa;
+ uint16_t chunk_slots;
};
struct otx2_tim_ring {
--
2.22.0
* [dpdk-dev] [PATCH v2 33/44] event/octeontx2: add TIM IRQ handlers
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (31 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 32/44] event/octeontx2: add devargs to modify chunk slots pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 34/44] event/octeontx2: allow adapters to resize inflight buffers pbhagavatula
` (10 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Register and implement TIM IRQ handlers for error interrupts.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_evdev_irq.c | 97 ++++++++++++++++++++++++
drivers/event/octeontx2/otx2_tim_evdev.c | 37 +++++++++
drivers/event/octeontx2/otx2_tim_evdev.h | 14 ++++
3 files changed, 148 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
index 7379bb17f..a2033646e 100644
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -3,6 +3,7 @@
*/
#include "otx2_evdev.h"
+#include "otx2_tim_evdev.h"
static void
sso_lf_irq(void *param)
@@ -173,3 +174,99 @@ sso_unregister_irqs(const struct rte_eventdev *event_dev)
ssow_lf_unregister_irq(event_dev, dev->ssow_msixoff[i], base);
}
}
+
+static void
+tim_lf_irq(void *param)
+{
+ uintptr_t base = (uintptr_t)param;
+ uint64_t intr;
+ uint8_t ring;
+
+ ring = (base >> 12) & 0xFF;
+
+ intr = otx2_read64(base + TIM_LF_NRSPERR_INT);
+ otx2_err("TIM RING %d TIM_LF_NRSPERR_INT=0x%" PRIx64 "", ring, intr);
+ intr = otx2_read64(base + TIM_LF_RAS_INT);
+ otx2_err("TIM RING %d TIM_LF_RAS_INT=0x%" PRIx64 "", ring, intr);
+
+ /* Clear interrupt */
+ otx2_write64(intr, base + TIM_LF_NRSPERR_INT);
+ otx2_write64(intr, base + TIM_LF_RAS_INT);
+}
+
+static int
+tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
+ uintptr_t base)
+{
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ int rc, vec;
+
+ vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT);
+ /* Set used interrupt vectors */
+ rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec);
+ /* Enable hw interrupt */
+ otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1S);
+
+ vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, base + TIM_LF_RAS_INT);
+ /* Set used interrupt vectors */
+ rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec);
+ /* Enable hw interrupt */
+ otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1S);
+
+ return rc;
+}
+
+static void
+tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
+ uintptr_t base)
+{
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ int vec;
+
+ vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1C);
+ otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec);
+
+ vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1C);
+ otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec);
+}
+
+int
+tim_register_irq(uint16_t ring_id)
+{
+ struct otx2_tim_evdev *dev = tim_priv_get();
+ int rc = -EINVAL;
+ uintptr_t base;
+
+ if (dev->tim_msixoff[ring_id] == MSIX_VECTOR_INVALID) {
+ otx2_err("Invalid TIMLF MSIX offset[%d] vector: 0x%x",
+ ring_id, dev->tim_msixoff[ring_id]);
+ goto fail;
+ }
+
+ base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12);
+ rc = tim_lf_register_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base);
+fail:
+ return rc;
+}
+
+void
+tim_unregister_irq(uint16_t ring_id)
+{
+ struct otx2_tim_evdev *dev = tim_priv_get();
+ uintptr_t base;
+
+ base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12);
+ tim_lf_unregister_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base);
+}
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index c0a692bb5..8324ded51 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -11,6 +11,24 @@
static struct rte_event_timer_adapter_ops otx2_tim_ops;
+static inline int
+tim_get_msix_offsets(void)
+{
+ struct otx2_tim_evdev *dev = tim_priv_get();
+ struct otx2_mbox *mbox = dev->mbox;
+ struct msix_offset_rsp *msix_rsp;
+ int i, rc;
+
+ /* Get TIM MSIX vector offsets */
+ otx2_mbox_alloc_msg_msix_offset(mbox);
+ rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
+
+ for (i = 0; i < dev->nb_rings; i++)
+ dev->tim_msixoff[i] = msix_rsp->timlf_msixoff[i];
+
+ return rc;
+}
+
static void
tim_optimze_bkt_param(struct otx2_tim_ring *tim_ring)
{
@@ -288,6 +306,10 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->base = dev->bar2 +
(RVU_BLOCK_ADDR_TIM << 20 | tim_ring->ring_id << 12);
+ rc = tim_register_irq(tim_ring->ring_id);
+ if (rc < 0)
+ goto chnk_mem_err;
+
otx2_write64((uint64_t)tim_ring->bkt,
tim_ring->base + TIM_LF_RING_BASE);
otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
@@ -316,6 +338,8 @@ otx2_tim_ring_free(struct rte_event_timer_adapter *adptr)
if (dev == NULL)
return -ENODEV;
+ tim_unregister_irq(tim_ring->ring_id);
+
req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox);
req->ring = tim_ring->ring_id;
@@ -379,6 +403,7 @@ void
otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
{
struct rsrc_attach_req *atch_req;
+ struct rsrc_detach_req *dtch_req;
struct free_rsrcs_rsp *rsrc_cnt;
const struct rte_memzone *mz;
struct otx2_tim_evdev *dev;
@@ -426,6 +451,12 @@ otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
goto mz_free;
}
+ rc = tim_get_msix_offsets();
+ if (rc < 0) {
+ otx2_err("Unable to get MSIX offsets for TIM.");
+ goto detach;
+ }
+
if (dev->chunk_slots &&
dev->chunk_slots <= OTX2_TIM_MAX_CHUNK_SLOTS &&
dev->chunk_slots >= OTX2_TIM_MIN_CHUNK_SLOTS) {
@@ -437,6 +468,12 @@ otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
return;
+detach:
+ dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox);
+ dtch_req->partial = true;
+ dtch_req->timlfs = true;
+
+ otx2_mbox_process(dev->mbox);
mz_free:
rte_memzone_free(mz);
}
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index 9636d8414..aac7dc711 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -16,6 +16,14 @@
#define TIM_LF_RING_AURA (0x0)
#define TIM_LF_RING_BASE (0x130)
+#define TIM_LF_NRSPERR_INT (0x200)
+#define TIM_LF_NRSPERR_INT_W1S (0x208)
+#define TIM_LF_NRSPERR_INT_ENA_W1S (0x210)
+#define TIM_LF_NRSPERR_INT_ENA_W1C (0x218)
+#define TIM_LF_RAS_INT (0x300)
+#define TIM_LF_RAS_INT_W1S (0x308)
+#define TIM_LF_RAS_INT_ENA_W1S (0x310)
+#define TIM_LF_RAS_INT_ENA_W1C (0x318)
#define OTX2_MAX_TIM_RINGS (256)
#define OTX2_TIM_MAX_BUCKETS (0xFFFFF)
@@ -61,6 +69,8 @@ struct otx2_tim_evdev {
/* Dev args */
uint8_t disable_npa;
uint16_t chunk_slots;
+ /* MSIX offsets */
+ uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS];
};
struct otx2_tim_ring {
@@ -103,4 +113,8 @@ int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev);
void otx2_tim_fini(void);
+/* TIM IRQ */
+int tim_register_irq(uint16_t ring_id);
+void tim_unregister_irq(uint16_t ring_id);
+
#endif /* __OTX2_TIM_EVDEV_H__ */
--
2.22.0
* [dpdk-dev] [PATCH v2 34/44] event/octeontx2: allow adapters to resize inflight buffers
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (32 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 33/44] event/octeontx2: add TIM IRQ handlers pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 35/44] event/octeontx2: add timer adapter info get function pbhagavatula
` (9 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add internal SSO functions to allow event adapters to resize SSO buffers
that are used to hold in-flight events in DRAM.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/Makefile | 1 +
drivers/event/octeontx2/meson.build | 1 +
drivers/event/octeontx2/otx2_evdev.c | 31 ++++++++++++++++++++++
drivers/event/octeontx2/otx2_evdev.h | 5 ++++
drivers/event/octeontx2/otx2_evdev_adptr.c | 19 +++++++++++++
drivers/event/octeontx2/otx2_tim_evdev.c | 5 ++++
6 files changed, 62 insertions(+)
create mode 100644 drivers/event/octeontx2/otx2_evdev_adptr.c
diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index 2290622dd..6f8d9fe2f 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -33,6 +33,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker_dual.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_adptr.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_tim_evdev.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_selftest.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
index ad7f2e084..c709b5e69 100644
--- a/drivers/event/octeontx2/meson.build
+++ b/drivers/event/octeontx2/meson.build
@@ -5,6 +5,7 @@
sources = files('otx2_worker.c',
'otx2_worker_dual.c',
'otx2_evdev.c',
+ 'otx2_evdev_adptr.c',
'otx2_evdev_irq.c',
'otx2_evdev_selftest.c',
'otx2_tim_evdev.c',
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index a1222b3cf..914869b6c 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -529,6 +529,9 @@ sso_xaq_allocate(struct otx2_sso_evdev *dev)
xaq_cnt = dev->nb_event_queues * OTX2_SSO_XAQ_CACHE_CNT;
if (dev->xae_cnt)
xaq_cnt += dev->xae_cnt / dev->xae_waes;
+ else if (dev->adptr_xae_cnt)
+ xaq_cnt += (dev->adptr_xae_cnt / dev->xae_waes) +
+ (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
else
xaq_cnt += (dev->iue / dev->xae_waes) +
(OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
@@ -1030,6 +1033,34 @@ sso_cleanup(struct rte_eventdev *event_dev, uint8_t enable)
otx2_mbox_process(dev->mbox);
}
+int
+sso_xae_reconfigure(struct rte_eventdev *event_dev)
+{
+ struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+ struct rte_mempool *prev_xaq_pool;
+ int rc = 0;
+
+ if (event_dev->data->dev_started)
+ sso_cleanup(event_dev, 0);
+
+ prev_xaq_pool = dev->xaq_pool;
+ dev->xaq_pool = NULL;
+ sso_xaq_allocate(dev);
+ rc = sso_ggrp_alloc_xaq(dev);
+ if (rc < 0) {
+ otx2_err("Failed to alloc xaq to ggrp %d", rc);
+ rte_mempool_free(prev_xaq_pool);
+ return rc;
+ }
+
+ rte_mempool_free(prev_xaq_pool);
+ rte_mb();
+ if (event_dev->data->dev_started)
+ sso_cleanup(event_dev, 1);
+
+ return 0;
+}
+
static int
otx2_sso_start(struct rte_eventdev *event_dev)
{
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 1e15b7e1c..ba3aae5ba 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -129,6 +129,7 @@ struct otx2_sso_evdev {
uint64_t nb_xaq_cfg;
rte_iova_t fc_iova;
struct rte_mempool *xaq_pool;
+ uint32_t adptr_xae_cnt;
/* Dev args */
uint8_t dual_ws;
uint8_t selftest;
@@ -243,6 +244,10 @@ uint16_t otx2_ssogws_dual_deq_timeout(void *port, struct rte_event *ev,
uint16_t otx2_ssogws_dual_deq_timeout_burst(void *port, struct rte_event ev[],
uint16_t nb_events,
uint64_t timeout_ticks);
+
+void sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data,
+ uint32_t event_type);
+int sso_xae_reconfigure(struct rte_eventdev *event_dev);
void sso_fastpath_fns_set(struct rte_eventdev *event_dev);
/* Clean up API's */
typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev);
diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c
new file mode 100644
index 000000000..810722f89
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_evdev_adptr.c
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_evdev.h"
+
+void
+sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, uint32_t event_type)
+{
+ switch (event_type) {
+ case RTE_EVENT_TYPE_TIMER:
+ {
+ dev->adptr_xae_cnt += (*(uint64_t *)data);
+ break;
+ }
+ default:
+ break;
+ }
+}
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index 8324ded51..186c5d483 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -314,6 +314,11 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->base + TIM_LF_RING_BASE);
otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+ /* Update SSO xae count. */
+ sso_updt_xae_cnt(sso_pmd_priv(dev->event_dev), (void *)&nb_timers,
+ RTE_EVENT_TYPE_TIMER);
+ sso_xae_reconfigure(dev->event_dev);
+
return rc;
chnk_mem_err:
--
2.22.0
* [dpdk-dev] [PATCH v2 35/44] event/octeontx2: add timer adapter info get function
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (33 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 34/44] event/octeontx2: allow adapters to resize inflight buffers pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 36/44] event/octeontx2: add TIM bucket operations pbhagavatula
` (8 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add TIM event timer adapter info get function.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_tim_evdev.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index 186c5d483..f2c14faaa 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -29,6 +29,18 @@ tim_get_msix_offsets(void)
return rc;
}
+static void
+otx2_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer_adapter_info *adptr_info)
+{
+ struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
+
+ adptr_info->max_tmo_ns = tim_ring->max_tout;
+ adptr_info->min_resolution_ns = tim_ring->tck_nsec;
+ rte_memcpy(&adptr_info->conf, &adptr->data->conf,
+ sizeof(struct rte_event_timer_adapter_conf));
+}
+
static void
tim_optimze_bkt_param(struct otx2_tim_ring *tim_ring)
{
@@ -374,6 +386,7 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
otx2_tim_ops.init = otx2_tim_ring_create;
otx2_tim_ops.uninit = otx2_tim_ring_free;
+ otx2_tim_ops.get_info = otx2_tim_ring_info_get;
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
--
2.22.0
* [dpdk-dev] [PATCH v2 36/44] event/octeontx2: add TIM bucket operations
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (34 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 35/44] event/octeontx2: add timer adapter info get function pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 37/44] event/octeontx2: add event timer arm routine pbhagavatula
` (7 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add TIM bucket operations used for event timer arm and cancel.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/Makefile | 1 +
drivers/event/octeontx2/meson.build | 1 +
drivers/event/octeontx2/otx2_tim_evdev.h | 36 +++++++
drivers/event/octeontx2/otx2_tim_worker.c | 7 ++
drivers/event/octeontx2/otx2_tim_worker.h | 111 ++++++++++++++++++++++
5 files changed, 156 insertions(+)
create mode 100644 drivers/event/octeontx2/otx2_tim_worker.c
create mode 100644 drivers/event/octeontx2/otx2_tim_worker.h
diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index 6f8d9fe2f..d01da6b11 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -32,6 +32,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker_dual.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_tim_worker.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_adptr.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_tim_evdev.c
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
index c709b5e69..bdb5beed6 100644
--- a/drivers/event/octeontx2/meson.build
+++ b/drivers/event/octeontx2/meson.build
@@ -9,6 +9,7 @@ sources = files('otx2_worker.c',
'otx2_evdev_irq.c',
'otx2_evdev_selftest.c',
'otx2_tim_evdev.c',
+ 'otx2_tim_worker.c'
)
allow_experimental_apis = true
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index aac7dc711..2be5d5f07 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -25,6 +25,42 @@
#define TIM_LF_RAS_INT_ENA_W1S (0x310)
#define TIM_LF_RAS_INT_ENA_W1C (0x318)
+#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
+#define TIM_BUCKET_W1_M_CHUNK_REMAINDER ((1ULL << (64 - \
+ TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
+#define TIM_BUCKET_W1_S_LOCK (40)
+#define TIM_BUCKET_W1_M_LOCK ((1ULL << \
+ (TIM_BUCKET_W1_S_CHUNK_REMAINDER - \
+ TIM_BUCKET_W1_S_LOCK)) - 1)
+#define TIM_BUCKET_W1_S_RSVD (35)
+#define TIM_BUCKET_W1_S_BSK (34)
+#define TIM_BUCKET_W1_M_BSK ((1ULL << \
+ (TIM_BUCKET_W1_S_RSVD - \
+ TIM_BUCKET_W1_S_BSK)) - 1)
+#define TIM_BUCKET_W1_S_HBT (33)
+#define TIM_BUCKET_W1_M_HBT ((1ULL << \
+ (TIM_BUCKET_W1_S_BSK - \
+ TIM_BUCKET_W1_S_HBT)) - 1)
+#define TIM_BUCKET_W1_S_SBT (32)
+#define TIM_BUCKET_W1_M_SBT ((1ULL << \
+ (TIM_BUCKET_W1_S_HBT - \
+ TIM_BUCKET_W1_S_SBT)) - 1)
+#define TIM_BUCKET_W1_S_NUM_ENTRIES (0)
+#define TIM_BUCKET_W1_M_NUM_ENTRIES ((1ULL << \
+ (TIM_BUCKET_W1_S_SBT - \
+ TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1)
+
+#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN)
+
+#define TIM_BUCKET_CHUNK_REMAIN \
+ (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER)
+
+#define TIM_BUCKET_LOCK \
+ (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK)
+
+#define TIM_BUCKET_SEMA_WLOCK \
+ (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
+
#define OTX2_MAX_TIM_RINGS (256)
#define OTX2_TIM_MAX_BUCKETS (0xFFFFF)
#define OTX2_TIM_RING_DEF_CHUNK_SZ (4096)
diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c
new file mode 100644
index 000000000..29ed1fd5a
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_tim_worker.c
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_tim_evdev.h"
+#include "otx2_tim_worker.h"
+
diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h
new file mode 100644
index 000000000..ccb137d13
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_tim_worker.h
@@ -0,0 +1,111 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_TIM_WORKER_H__
+#define __OTX2_TIM_WORKER_H__
+
+#include "otx2_tim_evdev.h"
+
+static inline int16_t
+tim_bkt_fetch_rem(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) &
+ TIM_BUCKET_W1_M_CHUNK_REMAINDER;
+}
+
+static inline int16_t
+tim_bkt_get_rem(struct otx2_tim_bkt *bktp)
+{
+ return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+tim_bkt_set_rem(struct otx2_tim_bkt *bktp, uint16_t v)
+{
+ __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline void
+tim_bkt_sub_rem(struct otx2_tim_bkt *bktp, uint16_t v)
+{
+ __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
+}
+
+static inline uint8_t
+tim_bkt_get_hbt(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT;
+}
+
+static inline uint8_t
+tim_bkt_get_bsk(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK;
+}
+
+static inline uint64_t
+tim_bkt_clr_bsk(struct otx2_tim_bkt *bktp)
+{
+ /* Clear everything except lock. */
+ const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK;
+
+ return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+static inline uint64_t
+tim_bkt_fetch_sema_lock(struct otx2_tim_bkt *bktp)
+{
+ return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
+ __ATOMIC_ACQUIRE);
+}
+
+static inline uint64_t
+tim_bkt_fetch_sema(struct otx2_tim_bkt *bktp)
+{
+ return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA, __ATOMIC_RELAXED);
+}
+
+static inline uint64_t
+tim_bkt_inc_lock(struct otx2_tim_bkt *bktp)
+{
+ const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK;
+
+ return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQUIRE);
+}
+
+static inline void
+tim_bkt_dec_lock(struct otx2_tim_bkt *bktp)
+{
+ __atomic_add_fetch(&bktp->lock, 0xff, __ATOMIC_RELEASE);
+}
+
+static inline uint32_t
+tim_bkt_get_nent(uint64_t w1)
+{
+ return (w1 >> TIM_BUCKET_W1_S_NUM_ENTRIES) &
+ TIM_BUCKET_W1_M_NUM_ENTRIES;
+}
+
+static inline void
+tim_bkt_inc_nent(struct otx2_tim_bkt *bktp)
+{
+ __atomic_add_fetch(&bktp->nb_entry, 1, __ATOMIC_RELAXED);
+}
+
+static inline void
+tim_bkt_add_nent(struct otx2_tim_bkt *bktp, uint32_t v)
+{
+ __atomic_add_fetch(&bktp->nb_entry, v, __ATOMIC_RELAXED);
+}
+
+static inline uint64_t
+tim_bkt_clr_nent(struct otx2_tim_bkt *bktp)
+{
+ const uint64_t v = ~(TIM_BUCKET_W1_M_NUM_ENTRIES <<
+ TIM_BUCKET_W1_S_NUM_ENTRIES);
+
+ return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
+}
+
+#endif /* __OTX2_TIM_WORKER_H__ */
--
2.22.0
^ permalink raw reply [flat|nested] 48+ messages in thread
* [dpdk-dev] [PATCH v2 37/44] event/octeontx2: add event timer arm routine
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (35 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 36/44] event/octeontx2: add TIM bucket operations pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 38/44] event/octeontx2: add event timer arm timeout burst pbhagavatula
` (6 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer arm routine.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_tim_evdev.c | 20 +++
drivers/event/octeontx2/otx2_tim_evdev.h | 33 ++++
drivers/event/octeontx2/otx2_tim_worker.c | 77 ++++++++
drivers/event/octeontx2/otx2_tim_worker.h | 204 ++++++++++++++++++++++
4 files changed, 334 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index f2c14faaa..f4651c281 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -29,6 +29,23 @@ tim_get_msix_offsets(void)
return rc;
}
+static void
+tim_set_fp_ops(struct otx2_tim_ring *tim_ring)
+{
+ uint8_t prod_flag = !tim_ring->prod_type_sp;
+
+ /* [MOD/AND] [DFB/FB] [SP/MP] */
+ const rte_event_timer_arm_burst_t arm_burst[2][2][2] = {
+#define FP(_name, _f3, _f2, _f1, flags) \
+ [_f3][_f2][_f1] = otx2_tim_arm_burst_ ## _name,
+TIM_ARM_FASTPATH_MODES
+#undef FP
+ };
+
+ otx2_tim_ops.arm_burst = arm_burst[tim_ring->optimized]
+ [tim_ring->ena_dfb][prod_flag];
+}
+
static void
otx2_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
struct rte_event_timer_adapter_info *adptr_info)
@@ -326,6 +343,9 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->base + TIM_LF_RING_BASE);
otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
+ /* Set fastpath ops. */
+ tim_set_fp_ops(tim_ring);
+
/* Update SSO xae count. */
sso_updt_xae_cnt(sso_pmd_priv(dev->event_dev), (void *)&nb_timers,
RTE_EVENT_TYPE_TIMER);
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index 2be5d5f07..01b271507 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -7,6 +7,7 @@
#include <rte_event_timer_adapter.h>
#include <rte_event_timer_adapter_pmd.h>
+#include <rte_reciprocal.h>
#include "otx2_dev.h"
@@ -70,6 +71,13 @@
#define OTX2_TIM_MAX_CHUNK_SLOTS (0x1FFE)
#define OTX2_TIM_MIN_TMO_TKS (256)
+#define OTX2_TIM_SP 0x1
+#define OTX2_TIM_MP 0x2
+#define OTX2_TIM_BKT_AND 0x4
+#define OTX2_TIM_BKT_MOD 0x8
+#define OTX2_TIM_ENA_FB 0x10
+#define OTX2_TIM_ENA_DFB 0x20
+
enum otx2_tim_clk_src {
OTX2_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
OTX2_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0,
@@ -95,6 +103,11 @@ struct otx2_tim_bkt {
uint64_t pad;
} __rte_packed __rte_aligned(32);
+struct otx2_tim_ent {
+ uint64_t w0;
+ uint64_t wqe;
+} __rte_packed;
+
struct otx2_tim_evdev {
struct rte_pci_device *pci_dev;
struct rte_eventdev *event_dev;
@@ -111,8 +124,10 @@ struct otx2_tim_evdev {
struct otx2_tim_ring {
uintptr_t base;
+ struct rte_reciprocal_u64 fast_div;
uint16_t nb_chunk_slots;
uint32_t nb_bkts;
+ uint64_t ring_start_cyc;
struct otx2_tim_bkt *bkt;
struct rte_mempool *chunk_pool;
uint64_t tck_int;
@@ -142,6 +157,24 @@ tim_priv_get(void)
return mz->addr;
}
+#define TIM_ARM_FASTPATH_MODES \
+FP(mod_sp, 0, 0, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
+FP(mod_mp, 0, 0, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
+FP(mod_fb_sp, 0, 1, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
+FP(mod_fb_mp, 0, 1, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB | OTX2_TIM_MP) \
+FP(and_sp, 1, 0, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
+FP(and_mp, 1, 0, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
+FP(and_fb_sp, 1, 1, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
+FP(and_fb_mp, 1, 1, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB | OTX2_TIM_MP) \
+
+#define FP(_name, _f3, _f2, _f1, flags) \
+uint16_t otx2_tim_arm_burst_ ## _name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, \
+ const uint16_t nb_timers);
+TIM_ARM_FASTPATH_MODES
+#undef FP
+
int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c
index 29ed1fd5a..409575ec4 100644
--- a/drivers/event/octeontx2/otx2_tim_worker.c
+++ b/drivers/event/octeontx2/otx2_tim_worker.c
@@ -5,3 +5,80 @@
#include "otx2_tim_evdev.h"
#include "otx2_tim_worker.h"
+static inline int
+tim_arm_checks(const struct otx2_tim_ring * const tim_ring,
+ struct rte_event_timer * const tim)
+{
+ if (unlikely(tim->state)) {
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ rte_errno = EALREADY;
+ goto fail;
+ }
+
+ if (unlikely(!tim->timeout_ticks ||
+ tim->timeout_ticks >= tim_ring->nb_bkts)) {
+ tim->state = tim->timeout_ticks ? RTE_EVENT_TIMER_ERROR_TOOLATE
+ : RTE_EVENT_TIMER_ERROR_TOOEARLY;
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ return -EINVAL;
+}
+
+static inline void
+tim_format_event(const struct rte_event_timer * const tim,
+ struct otx2_tim_ent * const entry)
+{
+ entry->w0 = (tim->ev.event & 0xFFC000000000) >> 6 |
+ (tim->ev.event & 0xFFFFFFFFF);
+ entry->wqe = tim->ev.u64;
+}
+
+static __rte_always_inline uint16_t
+tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint16_t nb_timers,
+ const uint8_t flags)
+{
+ struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct otx2_tim_ent entry;
+ uint16_t index;
+ int ret;
+
+ for (index = 0; index < nb_timers; index++) {
+ if (tim_arm_checks(tim_ring, tim[index]))
+ break;
+
+ tim_format_event(tim[index], &entry);
+ if (flags & OTX2_TIM_SP)
+ ret = tim_add_entry_sp(tim_ring,
+ tim[index]->timeout_ticks,
+ tim[index], &entry, flags);
+ if (flags & OTX2_TIM_MP)
+ ret = tim_add_entry_mp(tim_ring,
+ tim[index]->timeout_ticks,
+ tim[index], &entry, flags);
+
+ if (unlikely(ret)) {
+ rte_errno = -ret;
+ break;
+ }
+ }
+
+ return index;
+}
+
+#define FP(_name, _f3, _f2, _f1, _flags) \
+uint16_t __rte_noinline \
+otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, \
+ const uint16_t nb_timers) \
+{ \
+ return tim_timer_arm_burst(adptr, tim, nb_timers, _flags); \
+}
+TIM_ARM_FASTPATH_MODES
+#undef FP
diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h
index ccb137d13..a5e0d56bc 100644
--- a/drivers/event/octeontx2/otx2_tim_worker.h
+++ b/drivers/event/octeontx2/otx2_tim_worker.h
@@ -108,4 +108,208 @@ tim_bkt_clr_nent(struct otx2_tim_bkt *bktp)
return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
}
+static __rte_always_inline struct otx2_tim_bkt *
+tim_get_target_bucket(struct otx2_tim_ring * const tim_ring,
+ const uint32_t rel_bkt, const uint8_t flag)
+{
+ const uint64_t bkt_cyc = rte_rdtsc() - tim_ring->ring_start_cyc;
+ uint32_t bucket = rte_reciprocal_divide_u64(bkt_cyc,
+ &tim_ring->fast_div) + rel_bkt;
+
+ if (flag & OTX2_TIM_BKT_MOD)
+ bucket = bucket % tim_ring->nb_bkts;
+ if (flag & OTX2_TIM_BKT_AND)
+ bucket = bucket & (tim_ring->nb_bkts - 1);
+
+ return &tim_ring->bkt[bucket];
+}
+
+static struct otx2_tim_ent *
+tim_clr_bkt(struct otx2_tim_ring * const tim_ring,
+ struct otx2_tim_bkt * const bkt)
+{
+ struct otx2_tim_ent *chunk;
+ struct otx2_tim_ent *pnext;
+
+ chunk = ((struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk);
+ chunk = (struct otx2_tim_ent *)(uintptr_t)(chunk +
+ tim_ring->nb_chunk_slots)->w0;
+ while (chunk) {
+ pnext = (struct otx2_tim_ent *)(uintptr_t)
+ ((chunk + tim_ring->nb_chunk_slots)->w0);
+ rte_mempool_put(tim_ring->chunk_pool, chunk);
+ chunk = pnext;
+ }
+
+ return (struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk;
+}
+
+static struct otx2_tim_ent *
+tim_refill_chunk(struct otx2_tim_bkt * const bkt,
+ struct otx2_tim_ring * const tim_ring)
+{
+ struct otx2_tim_ent *chunk;
+
+ if (bkt->nb_entry || !bkt->first_chunk) {
+ if (unlikely(rte_mempool_get(tim_ring->chunk_pool,
+ (void **)&chunk)))
+ return NULL;
+ if (bkt->nb_entry) {
+ *(uint64_t *)(((struct otx2_tim_ent *)(uintptr_t)
+ bkt->current_chunk) +
+ tim_ring->nb_chunk_slots) =
+ (uintptr_t)chunk;
+ } else {
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ } else {
+ chunk = tim_clr_bkt(tim_ring, bkt);
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+
+ return chunk;
+}
+
+static struct otx2_tim_ent *
+tim_insert_chunk(struct otx2_tim_bkt * const bkt,
+ struct otx2_tim_ring * const tim_ring)
+{
+ struct otx2_tim_ent *chunk;
+
+ if (unlikely(rte_mempool_get(tim_ring->chunk_pool, (void **)&chunk)))
+ return NULL;
+
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+ if (bkt->nb_entry) {
+ *(uint64_t *)(((struct otx2_tim_ent *)(uintptr_t)
+ bkt->current_chunk) +
+ tim_ring->nb_chunk_slots) = (uintptr_t)chunk;
+ } else {
+ bkt->first_chunk = (uintptr_t)chunk;
+ }
+
+ return chunk;
+}
+
+static __rte_always_inline int
+tim_add_entry_sp(struct otx2_tim_ring * const tim_ring,
+ const uint32_t rel_bkt,
+ struct rte_event_timer * const tim,
+ const struct otx2_tim_ent * const pent,
+ const uint8_t flags)
+{
+ struct otx2_tim_ent *chunk;
+ struct otx2_tim_bkt *bkt;
+ uint64_t lock_sema;
+ int16_t rem;
+
+ bkt = tim_get_target_bucket(tim_ring, rel_bkt, flags);
+
+__retry:
+ /* Get bucket sema. */
+ lock_sema = tim_bkt_fetch_sema(bkt);
+
+ /* Bucket related checks. */
+ if (unlikely(tim_bkt_get_hbt(lock_sema)))
+ goto __retry;
+
+ /* Insert the work. */
+ rem = tim_bkt_fetch_rem(lock_sema);
+
+ if (!rem) {
+ if (flags & OTX2_TIM_ENA_FB)
+ chunk = tim_refill_chunk(bkt, tim_ring);
+ if (flags & OTX2_TIM_ENA_DFB)
+ chunk = tim_insert_chunk(bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ tim_bkt_set_rem(bkt, 0);
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ return -ENOMEM;
+ }
+ bkt->current_chunk = (uintptr_t)chunk;
+ tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - 1);
+ } else {
+ chunk = (struct otx2_tim_ent *)(uintptr_t)bkt->current_chunk;
+ chunk += tim_ring->nb_chunk_slots - rem;
+ }
+
+ /* Copy work entry. */
+ *chunk = *pent;
+
+ tim_bkt_inc_nent(bkt);
+
+ tim->impl_opaque[0] = (uintptr_t)chunk;
+ tim->impl_opaque[1] = (uintptr_t)bkt;
+ tim->state = RTE_EVENT_TIMER_ARMED;
+
+ return 0;
+}
+
+static __rte_always_inline int
+tim_add_entry_mp(struct otx2_tim_ring * const tim_ring,
+ const uint32_t rel_bkt,
+ struct rte_event_timer * const tim,
+ const struct otx2_tim_ent * const pent,
+ const uint8_t flags)
+{
+ struct otx2_tim_ent *chunk;
+ struct otx2_tim_bkt *bkt;
+ uint64_t lock_sema;
+ int16_t rem;
+
+__retry:
+ bkt = tim_get_target_bucket(tim_ring, rel_bkt, flags);
+
+ /* Get bucket sema. */
+ lock_sema = tim_bkt_fetch_sema_lock(bkt);
+
+ /* Bucket related checks. */
+ if (unlikely(tim_bkt_get_hbt(lock_sema))) {
+ tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+
+ rem = tim_bkt_fetch_rem(lock_sema);
+
+ if (rem < 0) {
+ /* Goto diff bucket. */
+ tim_bkt_dec_lock(bkt);
+ goto __retry;
+ } else if (!rem) {
+ /* Only one thread can be here. */
+ if (flags & OTX2_TIM_ENA_FB)
+ chunk = tim_refill_chunk(bkt, tim_ring);
+ if (flags & OTX2_TIM_ENA_DFB)
+ chunk = tim_insert_chunk(bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ tim_bkt_set_rem(bkt, 0);
+ tim_bkt_dec_lock(bkt);
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ tim->state = RTE_EVENT_TIMER_ERROR;
+ return -ENOMEM;
+ }
+ bkt->current_chunk = (uintptr_t)chunk;
+ tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - 1);
+ } else {
+ chunk = (struct otx2_tim_ent *)(uintptr_t)bkt->current_chunk;
+ chunk += tim_ring->nb_chunk_slots - rem;
+ }
+
+ /* Copy work entry. */
+ *chunk = *pent;
+ tim_bkt_dec_lock(bkt);
+ tim_bkt_inc_nent(bkt);
+ tim->impl_opaque[0] = (uintptr_t)chunk;
+ tim->impl_opaque[1] = (uintptr_t)bkt;
+ tim->state = RTE_EVENT_TIMER_ARMED;
+
+ return 0;
+}
+
#endif /* __OTX2_TIM_WORKER_H__ */
--
2.22.0
* [dpdk-dev] [PATCH v2 38/44] event/octeontx2: add event timer arm timeout burst
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (36 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 37/44] event/octeontx2: add event timer arm routine pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 39/44] event/octeontx2: add event timer cancel function pbhagavatula
` (5 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer arm timeout burst function.
All the timers requested to be armed have the same timeout.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_tim_evdev.c | 9 +++
drivers/event/octeontx2/otx2_tim_evdev.h | 16 ++++
drivers/event/octeontx2/otx2_tim_worker.c | 53 ++++++++++++
drivers/event/octeontx2/otx2_tim_worker.h | 98 +++++++++++++++++++++++
4 files changed, 176 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index f4651c281..fabcd3d0a 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -42,8 +42,17 @@ TIM_ARM_FASTPATH_MODES
#undef FP
};
+ const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = {
+#define FP(_name, _f2, _f1, flags) \
+ [_f2][_f1] = otx2_tim_arm_tmo_tick_burst_ ## _name,
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+ };
+
otx2_tim_ops.arm_burst = arm_burst[tim_ring->optimized]
[tim_ring->ena_dfb][prod_flag];
+ otx2_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->optimized]
+ [tim_ring->ena_dfb];
}
static void
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index 01b271507..751659719 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -66,6 +66,8 @@
#define OTX2_TIM_MAX_BUCKETS (0xFFFFF)
#define OTX2_TIM_RING_DEF_CHUNK_SZ (4096)
#define OTX2_TIM_CHUNK_ALIGNMENT (16)
+#define OTX2_TIM_MAX_BURST (RTE_CACHE_LINE_SIZE / \
+ OTX2_TIM_CHUNK_ALIGNMENT)
#define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1)
#define OTX2_TIM_MIN_CHUNK_SLOTS (0x1)
#define OTX2_TIM_MAX_CHUNK_SLOTS (0x1FFE)
@@ -175,6 +177,20 @@ uint16_t otx2_tim_arm_burst_ ## _name( \
TIM_ARM_FASTPATH_MODES
#undef FP
+#define TIM_ARM_TMO_FASTPATH_MODES \
+FP(mod, 0, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB) \
+FP(mod_fb, 0, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB) \
+FP(and, 1, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB) \
+FP(and_fb, 1, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB) \
+
+#define FP(_name, _f2, _f1, flags) \
+uint16_t otx2_tim_arm_tmo_tick_burst_ ## _name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, \
+ const uint64_t timeout_tick, const uint16_t nb_timers);
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
+
int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c
index 409575ec4..737b167d1 100644
--- a/drivers/event/octeontx2/otx2_tim_worker.c
+++ b/drivers/event/octeontx2/otx2_tim_worker.c
@@ -72,6 +72,45 @@ tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
return index;
}
+static __rte_always_inline uint16_t
+tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint64_t timeout_tick,
+ const uint16_t nb_timers, const uint8_t flags)
+{
+ struct otx2_tim_ent entry[OTX2_TIM_MAX_BURST] __rte_cache_aligned;
+ struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
+ uint16_t set_timers = 0;
+ uint16_t arr_idx = 0;
+ uint16_t idx;
+ int ret;
+
+ if (unlikely(!timeout_tick || timeout_tick >= tim_ring->nb_bkts)) {
+ const enum rte_event_timer_state state = timeout_tick ?
+ RTE_EVENT_TIMER_ERROR_TOOLATE :
+ RTE_EVENT_TIMER_ERROR_TOOEARLY;
+ for (idx = 0; idx < nb_timers; idx++)
+ tim[idx]->state = state;
+
+ rte_errno = EINVAL;
+ return 0;
+ }
+
+ while (arr_idx < nb_timers) {
+ for (idx = 0; idx < OTX2_TIM_MAX_BURST && (arr_idx < nb_timers);
+ idx++, arr_idx++) {
+ tim_format_event(tim[arr_idx], &entry[idx]);
+ }
+ ret = tim_add_entry_brst(tim_ring, timeout_tick,
+ &tim[set_timers], entry, idx, flags);
+ set_timers += ret;
+ if (ret != idx)
+ break;
+ }
+
+ return set_timers;
+}
+
#define FP(_name, _f3, _f2, _f1, _flags) \
uint16_t __rte_noinline \
otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \
@@ -82,3 +121,17 @@ otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \
}
TIM_ARM_FASTPATH_MODES
#undef FP
+
+#define FP(_name, _f2, _f1, _flags) \
+uint16_t __rte_noinline \
+otx2_tim_arm_tmo_tick_burst_ ## _name( \
+ const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, \
+ const uint64_t timeout_tick, \
+ const uint16_t nb_timers) \
+{ \
+ return tim_timer_arm_tmo_brst(adptr, tim, timeout_tick, \
+ nb_timers, _flags); \
+}
+TIM_ARM_TMO_FASTPATH_MODES
+#undef FP
diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h
index a5e0d56bc..da8c93ff2 100644
--- a/drivers/event/octeontx2/otx2_tim_worker.h
+++ b/drivers/event/octeontx2/otx2_tim_worker.h
@@ -312,4 +312,102 @@ tim_add_entry_mp(struct otx2_tim_ring * const tim_ring,
return 0;
}
+static inline uint16_t
+tim_cpy_wrk(uint16_t index, uint16_t cpy_lmt,
+ struct otx2_tim_ent *chunk,
+ struct rte_event_timer ** const tim,
+ const struct otx2_tim_ent * const ents,
+ const struct otx2_tim_bkt * const bkt)
+{
+ for (; index < cpy_lmt; index++) {
+ *chunk = *(ents + index);
+ tim[index]->impl_opaque[0] = (uintptr_t)chunk++;
+ tim[index]->impl_opaque[1] = (uintptr_t)bkt;
+ tim[index]->state = RTE_EVENT_TIMER_ARMED;
+ }
+
+ return index;
+}
+
+/* Burst mode functions */
+static inline int
+tim_add_entry_brst(struct otx2_tim_ring * const tim_ring,
+ const uint16_t rel_bkt,
+ struct rte_event_timer ** const tim,
+ const struct otx2_tim_ent *ents,
+ const uint16_t nb_timers, const uint8_t flags)
+{
+ struct otx2_tim_ent *chunk;
+ struct otx2_tim_bkt *bkt;
+ uint16_t chunk_remainder;
+ uint16_t index = 0;
+ uint64_t lock_sema;
+ int16_t rem, crem;
+ uint8_t lock_cnt;
+
+__retry:
+ bkt = tim_get_target_bucket(tim_ring, rel_bkt, flags);
+
+ /* Only one thread beyond this. */
+ lock_sema = tim_bkt_inc_lock(bkt);
+ lock_cnt = (uint8_t)
+ ((lock_sema >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK);
+
+ if (lock_cnt) {
+ tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+
+ /* Bucket related checks. */
+ if (unlikely(tim_bkt_get_hbt(lock_sema))) {
+ tim_bkt_dec_lock(bkt);
+ goto __retry;
+ }
+
+ chunk_remainder = tim_bkt_fetch_rem(lock_sema);
+ rem = chunk_remainder - nb_timers;
+ if (rem < 0) {
+ crem = tim_ring->nb_chunk_slots - chunk_remainder;
+ if (chunk_remainder && crem) {
+ chunk = ((struct otx2_tim_ent *)
+ (uintptr_t)bkt->current_chunk) + crem;
+
+ index = tim_cpy_wrk(index, chunk_remainder, chunk, tim,
+ ents, bkt);
+ tim_bkt_sub_rem(bkt, chunk_remainder);
+ tim_bkt_add_nent(bkt, chunk_remainder);
+ }
+
+ if (flags & OTX2_TIM_ENA_FB)
+ chunk = tim_refill_chunk(bkt, tim_ring);
+ if (flags & OTX2_TIM_ENA_DFB)
+ chunk = tim_insert_chunk(bkt, tim_ring);
+
+ if (unlikely(chunk == NULL)) {
+ tim_bkt_dec_lock(bkt);
+ rte_errno = ENOMEM;
+ tim[index]->state = RTE_EVENT_TIMER_ERROR;
+ return crem;
+ }
+ *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
+ bkt->current_chunk = (uintptr_t)chunk;
+ tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
+
+ rem = nb_timers - chunk_remainder;
+ tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem);
+ tim_bkt_add_nent(bkt, rem);
+ } else {
+ chunk = (struct otx2_tim_ent *)(uintptr_t)bkt->current_chunk;
+ chunk += (tim_ring->nb_chunk_slots - chunk_remainder);
+
+ tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
+ tim_bkt_sub_rem(bkt, nb_timers);
+ tim_bkt_add_nent(bkt, nb_timers);
+ }
+
+ tim_bkt_dec_lock(bkt);
+
+ return nb_timers;
+}
+
#endif /* __OTX2_TIM_WORKER_H__ */
--
2.22.0
* [dpdk-dev] [PATCH v2 39/44] event/octeontx2: add event timer cancel function
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (37 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 38/44] event/octeontx2: add event timer arm timeout burst pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 40/44] event/octeontx2: add event timer stats get and reset pbhagavatula
` (4 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a function to cancel an event timer that has been armed.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_tim_evdev.c | 1 +
drivers/event/octeontx2/otx2_tim_evdev.h | 4 +++
drivers/event/octeontx2/otx2_tim_worker.c | 29 ++++++++++++++++++
drivers/event/octeontx2/otx2_tim_worker.h | 37 +++++++++++++++++++++++
4 files changed, 71 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index fabcd3d0a..d95be66c6 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -53,6 +53,7 @@ TIM_ARM_TMO_FASTPATH_MODES
[tim_ring->ena_dfb][prod_flag];
otx2_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->optimized]
[tim_ring->ena_dfb];
+ otx2_tim_ops.cancel_burst = otx2_tim_timer_cancel_burst;
}
static void
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index 751659719..7bdd5c8db 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -191,6 +191,10 @@ uint16_t otx2_tim_arm_tmo_tick_burst_ ## _name( \
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
+uint16_t otx2_tim_timer_cancel_burst(
+ const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim, const uint16_t nb_timers);
+
int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
uint32_t *caps,
const struct rte_event_timer_adapter_ops **ops);
diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c
index 737b167d1..fd1f02630 100644
--- a/drivers/event/octeontx2/otx2_tim_worker.c
+++ b/drivers/event/octeontx2/otx2_tim_worker.c
@@ -135,3 +135,32 @@ otx2_tim_arm_tmo_tick_burst_ ## _name( \
}
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
+
+uint16_t
+otx2_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
+ struct rte_event_timer **tim,
+ const uint16_t nb_timers)
+{
+ uint16_t index;
+ int ret;
+
+ RTE_SET_USED(adptr);
+ for (index = 0; index < nb_timers; index++) {
+ if (tim[index]->state == RTE_EVENT_TIMER_CANCELED) {
+ rte_errno = EALREADY;
+ break;
+ }
+
+ if (tim[index]->state != RTE_EVENT_TIMER_ARMED) {
+ rte_errno = EINVAL;
+ break;
+ }
+ ret = tim_rm_entry(tim[index]);
+ if (ret) {
+ rte_errno = -ret;
+ break;
+ }
+ }
+
+ return index;
+}
diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h
index da8c93ff2..b193e2cab 100644
--- a/drivers/event/octeontx2/otx2_tim_worker.h
+++ b/drivers/event/octeontx2/otx2_tim_worker.h
@@ -410,4 +410,41 @@ tim_add_entry_brst(struct otx2_tim_ring * const tim_ring,
return nb_timers;
}
+static int
+tim_rm_entry(struct rte_event_timer *tim)
+{
+ struct otx2_tim_ent *entry;
+ struct otx2_tim_bkt *bkt;
+ uint64_t lock_sema;
+
+ if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0)
+ return -ENOENT;
+
+ entry = (struct otx2_tim_ent *)(uintptr_t)tim->impl_opaque[0];
+ if (entry->wqe != tim->ev.u64) {
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ return -ENOENT;
+ }
+
+ bkt = (struct otx2_tim_bkt *)(uintptr_t)tim->impl_opaque[1];
+ lock_sema = tim_bkt_inc_lock(bkt);
+ if (tim_bkt_get_hbt(lock_sema) || !tim_bkt_get_nent(lock_sema)) {
+ tim_bkt_dec_lock(bkt);
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+ return -ENOENT;
+ }
+
+ entry->w0 = 0;
+ entry->wqe = 0;
+ tim_bkt_dec_lock(bkt);
+
+ tim->state = RTE_EVENT_TIMER_CANCELED;
+ tim->impl_opaque[0] = 0;
+ tim->impl_opaque[1] = 0;
+
+ return 0;
+}
+
#endif /* __OTX2_TIM_WORKER_H__ */
--
2.22.0
* [dpdk-dev] [PATCH v2 40/44] event/octeontx2: add event timer stats get and reset
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (38 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 39/44] event/octeontx2: add event timer cancel function pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 41/44] event/octeontx2: add event timer adapter start and stop pbhagavatula
` (3 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer adapter statistics get and reset functions.
Stats are disabled by default and can be enabled through devargs.
Example:
--dev "0002:0e:00.0,tim_stats_ena=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_tim_evdev.c | 55 ++++++++++++++---
drivers/event/octeontx2/otx2_tim_evdev.h | 75 ++++++++++++++++-------
drivers/event/octeontx2/otx2_tim_worker.c | 9 ++-
3 files changed, 104 insertions(+), 35 deletions(-)
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index d95be66c6..af68254f5 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -35,24 +35,26 @@ tim_set_fp_ops(struct otx2_tim_ring *tim_ring)
uint8_t prod_flag = !tim_ring->prod_type_sp;
/* [MOD/AND] [DFB/FB] [SP/MP] */
- const rte_event_timer_arm_burst_t arm_burst[2][2][2] = {
-#define FP(_name, _f3, _f2, _f1, flags) \
- [_f3][_f2][_f1] = otx2_tim_arm_burst_ ## _name,
+ const rte_event_timer_arm_burst_t arm_burst[2][2][2][2] = {
+#define FP(_name, _f4, _f3, _f2, _f1, flags) \
+ [_f4][_f3][_f2][_f1] = otx2_tim_arm_burst_ ## _name,
TIM_ARM_FASTPATH_MODES
#undef FP
};
- const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = {
-#define FP(_name, _f2, _f1, flags) \
- [_f2][_f1] = otx2_tim_arm_tmo_tick_burst_ ## _name,
+ const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2][2] = {
+#define FP(_name, _f3, _f2, _f1, flags) \
+ [_f3][_f2][_f1] = otx2_tim_arm_tmo_tick_burst_ ## _name,
TIM_ARM_TMO_FASTPATH_MODES
#undef FP
};
- otx2_tim_ops.arm_burst = arm_burst[tim_ring->optimized]
- [tim_ring->ena_dfb][prod_flag];
- otx2_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->optimized]
- [tim_ring->ena_dfb];
+ otx2_tim_ops.arm_burst =
+ arm_burst[tim_ring->enable_stats][tim_ring->optimized]
+ [tim_ring->ena_dfb][prod_flag];
+ otx2_tim_ops.arm_tmo_tick_burst =
+ arm_tmo_burst[tim_ring->enable_stats][tim_ring->optimized]
+ [tim_ring->ena_dfb];
otx2_tim_ops.cancel_burst = otx2_tim_timer_cancel_burst;
}
@@ -300,6 +302,7 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->chunk_sz = dev->chunk_sz;
nb_timers = rcfg->nb_timers;
tim_ring->disable_npa = dev->disable_npa;
+ tim_ring->enable_stats = dev->enable_stats;
tim_ring->nb_chunks = nb_timers / OTX2_TIM_NB_CHUNK_SLOTS(
tim_ring->chunk_sz);
@@ -403,6 +406,30 @@ otx2_tim_ring_free(struct rte_event_timer_adapter *adptr)
return 0;
}
+static int
+otx2_tim_stats_get(const struct rte_event_timer_adapter *adapter,
+ struct rte_event_timer_adapter_stats *stats)
+{
+ struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv;
+ uint64_t bkt_cyc = rte_rdtsc() - tim_ring->ring_start_cyc;
+
+ stats->evtim_exp_count = rte_atomic64_read(&tim_ring->arm_cnt);
+ stats->ev_enq_count = stats->evtim_exp_count;
+ stats->adapter_tick_count = rte_reciprocal_divide_u64(bkt_cyc,
+ &tim_ring->fast_div);
+ return 0;
+}
+
+static int
+otx2_tim_stats_reset(const struct rte_event_timer_adapter *adapter)
+{
+ struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv;
+
+ rte_atomic64_clear(&tim_ring->arm_cnt);
+ return 0;
+}
+
int
otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
uint32_t *caps,
@@ -418,6 +445,11 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
otx2_tim_ops.uninit = otx2_tim_ring_free;
otx2_tim_ops.get_info = otx2_tim_ring_info_get;
+ if (dev->enable_stats) {
+ otx2_tim_ops.stats_get = otx2_tim_stats_get;
+ otx2_tim_ops.stats_reset = otx2_tim_stats_reset;
+ }
+
/* Store evdev pointer for later use. */
dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
@@ -428,6 +460,7 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
#define OTX2_TIM_DISABLE_NPA "tim_disable_npa"
#define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots"
+#define OTX2_TIM_STATS_ENA "tim_stats_ena"
static void
tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
@@ -445,6 +478,8 @@ tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
&parse_kvargs_flag, &dev->disable_npa);
rte_kvargs_process(kvlist, OTX2_TIM_CHNK_SLOTS,
&parse_kvargs_value, &dev->chunk_slots);
+ rte_kvargs_process(kvlist, OTX2_TIM_STATS_ENA, &parse_kvargs_flag,
+ &dev->enable_stats);
}
void
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index 7bdd5c8db..c8d16b03f 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -79,6 +79,7 @@
#define OTX2_TIM_BKT_MOD 0x8
#define OTX2_TIM_ENA_FB 0x10
#define OTX2_TIM_ENA_DFB 0x20
+#define OTX2_TIM_ENA_STATS 0x40
enum otx2_tim_clk_src {
OTX2_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
@@ -120,6 +121,7 @@ struct otx2_tim_evdev {
/* Dev args */
uint8_t disable_npa;
uint16_t chunk_slots;
+ uint8_t enable_stats;
/* MSIX offsets */
uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS];
};
@@ -133,7 +135,9 @@ struct otx2_tim_ring {
struct otx2_tim_bkt *bkt;
struct rte_mempool *chunk_pool;
uint64_t tck_int;
+ rte_atomic64_t arm_cnt;
uint8_t prod_type_sp;
+ uint8_t enable_stats;
uint8_t disable_npa;
uint8_t optimized;
uint8_t ena_dfb;
@@ -159,32 +163,57 @@ tim_priv_get(void)
return mz->addr;
}
-#define TIM_ARM_FASTPATH_MODES \
-FP(mod_sp, 0, 0, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
-FP(mod_mp, 0, 0, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
-FP(mod_fb_sp, 0, 1, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
-FP(mod_fb_mp, 0, 1, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB | OTX2_TIM_MP) \
-FP(and_sp, 1, 0, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
-FP(and_mp, 1, 0, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
-FP(and_fb_sp, 1, 1, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
-FP(and_fb_mp, 1, 1, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB | OTX2_TIM_MP) \
-
-#define FP(_name, _f3, _f2, _f1, flags) \
-uint16_t otx2_tim_arm_burst_ ## _name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, \
- const uint16_t nb_timers);
+#define TIM_ARM_FASTPATH_MODES \
+FP(mod_sp, 0, 0, 0, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
+FP(mod_mp, 0, 0, 0, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
+FP(mod_fb_sp, 0, 0, 1, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
+FP(mod_fb_mp, 0, 0, 1, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB | OTX2_TIM_MP) \
+FP(and_sp, 0, 1, 0, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
+FP(and_mp, 0, 1, 0, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
+FP(and_fb_sp, 0, 1, 1, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
+FP(and_fb_mp, 0, 1, 1, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB | OTX2_TIM_MP) \
+FP(stats_mod_sp, 1, 0, 0, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_MOD | \
+ OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
+FP(stats_mod_mp, 1, 0, 0, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_MOD | \
+ OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
+FP(stats_mod_fb_sp, 1, 0, 1, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_MOD | \
+ OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
+FP(stats_mod_fb_mp, 1, 0, 1, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_MOD | \
+ OTX2_TIM_ENA_FB | OTX2_TIM_MP) \
+FP(stats_and_sp, 1, 1, 0, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_AND | \
+ OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
+FP(stats_and_mp, 1, 1, 0, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_AND | \
+ OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
+FP(stats_and_fb_sp, 1, 1, 1, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_AND | \
+ OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
+FP(stats_and_fb_mp, 1, 1, 1, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_AND | \
+ OTX2_TIM_ENA_FB | OTX2_TIM_MP)
+
+#define TIM_ARM_TMO_FASTPATH_MODES \
+FP(mod, 0, 0, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB) \
+FP(mod_fb, 0, 0, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB) \
+FP(and, 0, 1, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB) \
+FP(and_fb, 0, 1, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB) \
+FP(stats_mod, 1, 0, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_MOD | \
+ OTX2_TIM_ENA_DFB) \
+FP(stats_mod_fb, 1, 0, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_MOD | \
+ OTX2_TIM_ENA_FB) \
+FP(stats_and, 1, 1, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_AND | \
+ OTX2_TIM_ENA_DFB) \
+FP(stats_and_fb, 1, 1, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_AND | \
+ OTX2_TIM_ENA_FB)
+
+#define FP(_name, _f4, _f3, _f2, _f1, flags) \
+uint16_t \
+otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \
+ struct rte_event_timer **tim, \
+ const uint16_t nb_timers);
TIM_ARM_FASTPATH_MODES
#undef FP
-#define TIM_ARM_TMO_FASTPATH_MODES \
-FP(mod, 0, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB) \
-FP(mod_fb, 0, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB) \
-FP(and, 1, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB) \
-FP(and_fb, 1, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB) \
-
-#define FP(_name, _f2, _f1, flags) \
-uint16_t otx2_tim_arm_tmo_tick_burst_ ## _name( \
+#define FP(_name, _f3, _f2, _f1, flags) \
+uint16_t \
+otx2_tim_arm_tmo_tick_burst_ ## _name( \
const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, \
const uint64_t timeout_tick, const uint16_t nb_timers);
diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c
index fd1f02630..feba61cd4 100644
--- a/drivers/event/octeontx2/otx2_tim_worker.c
+++ b/drivers/event/octeontx2/otx2_tim_worker.c
@@ -69,6 +69,9 @@ tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
}
}
+ if (flags & OTX2_TIM_ENA_STATS)
+ rte_atomic64_add(&tim_ring->arm_cnt, index);
+
return index;
}
@@ -107,11 +110,13 @@ tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
if (ret != idx)
break;
}
+ if (flags & OTX2_TIM_ENA_STATS)
+ rte_atomic64_add(&tim_ring->arm_cnt, set_timers);
return set_timers;
}
-#define FP(_name, _f3, _f2, _f1, _flags) \
+#define FP(_name, _f4, _f3, _f2, _f1, _flags) \
uint16_t __rte_noinline \
otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \
struct rte_event_timer **tim, \
@@ -122,7 +127,7 @@ otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \
TIM_ARM_FASTPATH_MODES
#undef FP
-#define FP(_name, _f2, _f1, _flags) \
+#define FP(_name, _f3, _f2, _f1, _flags) \
uint16_t __rte_noinline \
otx2_tim_arm_tmo_tick_burst_ ## _name( \
const struct rte_event_timer_adapter *adptr, \
--
2.22.0
* [dpdk-dev] [PATCH v2 41/44] event/octeontx2: add event timer adapter start and stop
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (39 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 40/44] event/octeontx2: add event timer stats get and reset pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 42/44] event/octeontx2: add devargs to limit timer adapters pbhagavatula
` (2 subsequent siblings)
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event timer adapter start and stop functions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_tim_evdev.c | 66 ++++++++++++++++++++++++
1 file changed, 66 insertions(+)
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index af68254f5..cd9a679fb 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -377,6 +377,69 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
return rc;
}
+static int
+otx2_tim_ring_start(const struct rte_event_timer_adapter *adptr)
+{
+ struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct otx2_tim_evdev *dev = tim_priv_get();
+ struct tim_enable_rsp *rsp;
+ struct tim_ring_req *req;
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ req = otx2_mbox_alloc_msg_tim_enable_ring(dev->mbox);
+ req->ring = tim_ring->ring_id;
+
+ rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
+ if (rc < 0) {
+ tim_err_desc(rc);
+ goto fail;
+ }
+#ifdef RTE_ARM_EAL_RDTSC_USE_PMU
+ uint64_t tenns_stmp, tenns_diff;
+ uint64_t pmu_stmp;
+
+ pmu_stmp = rte_rdtsc();
+ asm volatile("mrs %0, cntvct_el0" : "=r" (tenns_stmp));
+
+ tenns_diff = tenns_stmp - rsp->timestarted;
+ pmu_stmp = pmu_stmp - (NSEC2TICK(tenns_diff * 10, rte_get_timer_hz()));
+ tim_ring->ring_start_cyc = pmu_stmp;
+#else
+ tim_ring->ring_start_cyc = rsp->timestarted;
+#endif
+ tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, rte_get_timer_hz());
+ tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int);
+
+fail:
+ return rc;
+}
+
+static int
+otx2_tim_ring_stop(const struct rte_event_timer_adapter *adptr)
+{
+ struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
+ struct otx2_tim_evdev *dev = tim_priv_get();
+ struct tim_ring_req *req;
+ int rc;
+
+ if (dev == NULL)
+ return -ENODEV;
+
+ req = otx2_mbox_alloc_msg_tim_disable_ring(dev->mbox);
+ req->ring = tim_ring->ring_id;
+
+ rc = otx2_mbox_process(dev->mbox);
+ if (rc < 0) {
+ tim_err_desc(rc);
+ rc = -EBUSY;
+ }
+
+ return rc;
+}
+
static int
otx2_tim_ring_free(struct rte_event_timer_adapter *adptr)
{
@@ -438,11 +501,14 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
struct otx2_tim_evdev *dev = tim_priv_get();
RTE_SET_USED(flags);
+
if (dev == NULL)
return -ENODEV;
otx2_tim_ops.init = otx2_tim_ring_create;
otx2_tim_ops.uninit = otx2_tim_ring_free;
+ otx2_tim_ops.start = otx2_tim_ring_start;
+ otx2_tim_ops.stop = otx2_tim_ring_stop;
otx2_tim_ops.get_info = otx2_tim_ring_info_get;
if (dev->enable_stats) {
--
2.22.0
* [dpdk-dev] [PATCH v2 42/44] event/octeontx2: add devargs to limit timer adapters
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (40 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 41/44] event/octeontx2: add event timer adapter start and stop pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 43/44] event/octeontx2: add devargs to control adapter parameters pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 44/44] doc: update Marvell OCTEON TX2 eventdev documentation pbhagavatula
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add devargs to limit the maximum number of TIM rings reserved on probe.
Since TIM rings are HW resources, reserving only as many as needed
avoids starving other applications of rings.
Example:
--dev "0002:0e:00.0,tim_rings_lmt=2"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_tim_evdev.c | 6 +++++-
drivers/event/octeontx2/otx2_tim_evdev.h | 1 +
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index cd9a679fb..c312bd541 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -527,6 +527,7 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
#define OTX2_TIM_DISABLE_NPA "tim_disable_npa"
#define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots"
#define OTX2_TIM_STATS_ENA "tim_stats_ena"
+#define OTX2_TIM_RINGS_LMT "tim_rings_lmt"
static void
tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
@@ -546,6 +547,8 @@ tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
&parse_kvargs_value, &dev->chunk_slots);
rte_kvargs_process(kvlist, OTX2_TIM_STATS_ENA, &parse_kvargs_flag,
&dev->enable_stats);
+ rte_kvargs_process(kvlist, OTX2_TIM_RINGS_LMT, &parse_kvargs_value,
+ &dev->min_ring_cnt);
}
void
@@ -583,7 +586,8 @@ otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
goto mz_free;
}
- dev->nb_rings = rsrc_cnt->tim;
+ dev->nb_rings = dev->min_ring_cnt ?
+ RTE_MIN(dev->min_ring_cnt, rsrc_cnt->tim) : rsrc_cnt->tim;
if (!dev->nb_rings) {
otx2_tim_dbg("No TIM Logical functions provisioned.");
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index c8d16b03f..5af724ef9 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -121,6 +121,7 @@ struct otx2_tim_evdev {
/* Dev args */
uint8_t disable_npa;
uint16_t chunk_slots;
+ uint16_t min_ring_cnt;
uint8_t enable_stats;
/* MSIX offsets */
uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS];
--
2.22.0
* [dpdk-dev] [PATCH v2 43/44] event/octeontx2: add devargs to control adapter parameters
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (41 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 42/44] event/octeontx2: add devargs to limit timer adapters pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 44/44] doc: update Marvell OCTEON TX2 eventdev documentation pbhagavatula
43 siblings, 0 replies; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add devargs to uniquely control the internal parameters of each event
timer adapter, i.e. TIM ring. The expected dict format is
[ring-chnk_slots-disable_npa-stats_ena], where 0 selects the default value.
Example:
--dev "0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/event/octeontx2/otx2_tim_evdev.c | 87 +++++++++++++++++++++++-
drivers/event/octeontx2/otx2_tim_evdev.h | 10 +++
2 files changed, 96 insertions(+), 1 deletion(-)
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index c312bd541..446807606 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -255,7 +255,7 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
struct tim_lf_alloc_req *req;
struct tim_lf_alloc_rsp *rsp;
uint64_t nb_timers;
- int rc;
+ int i, rc;
if (dev == NULL)
return -ENODEV;
@@ -304,6 +304,18 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
tim_ring->disable_npa = dev->disable_npa;
tim_ring->enable_stats = dev->enable_stats;
+ for (i = 0; i < dev->ring_ctl_cnt ; i++) {
+ struct otx2_tim_ctl *ring_ctl = &dev->ring_ctl_data[i];
+
+ if (ring_ctl->ring == tim_ring->ring_id) {
+ tim_ring->chunk_sz = ring_ctl->chunk_slots ?
+ ((uint32_t)(ring_ctl->chunk_slots + 1) *
+ OTX2_TIM_CHUNK_ALIGNMENT) : tim_ring->chunk_sz;
+ tim_ring->enable_stats = ring_ctl->enable_stats;
+ tim_ring->disable_npa = ring_ctl->disable_npa;
+ }
+ }
+
tim_ring->nb_chunks = nb_timers / OTX2_TIM_NB_CHUNK_SLOTS(
tim_ring->chunk_sz);
tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
@@ -528,6 +540,77 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
#define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots"
#define OTX2_TIM_STATS_ENA "tim_stats_ena"
#define OTX2_TIM_RINGS_LMT "tim_rings_lmt"
+#define OTX2_TIM_RING_CTL "tim_ring_ctl"
+
+static void
+tim_parse_ring_param(char *value, void *opaque)
+{
+ struct otx2_tim_evdev *dev = opaque;
+ struct otx2_tim_ctl ring_ctl = {0};
+ char *tok = strtok(value, "-");
+ uint16_t *val;
+
+ val = (uint16_t *)&ring_ctl;
+
+ if (!strlen(value))
+ return;
+
+ while (tok != NULL) {
+ *val = atoi(tok);
+ tok = strtok(NULL, "-");
+ val++;
+ }
+
+ if (val != (&ring_ctl.enable_stats + 1)) {
+ otx2_err(
+ "Invalid ring param expected [ring-chunk_sz-disable_npa-enable_stats]");
+ return;
+ }
+
+ dev->ring_ctl_cnt++;
+ dev->ring_ctl_data = rte_realloc(dev->ring_ctl_data,
+ sizeof(struct otx2_tim_ctl), 0);
+ dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl;
+}
+
+static void
+tim_parse_ring_ctl_list(const char *value, void *opaque)
+{
+ char *s = strdup(value);
+ char *start = NULL;
+ char *end = NULL;
+ char *f = s;
+
+ while (*s) {
+ if (*s == '[')
+ start = s;
+ else if (*s == ']')
+ end = s;
+
+ if (start < end && *start) {
+ *end = 0;
+ tim_parse_ring_param(start + 1, opaque);
+ start = end;
+ s = end;
+ }
+ s++;
+ }
+
+ free(f);
+}
+
+static int
+tim_parse_kvargs_dict(const char *key, const char *value, void *opaque)
+{
+ RTE_SET_USED(key);
+
+ /* Dict format [ring-chunk_sz-disable_npa-enable_stats] use '-' as ','
+ * isn't allowed. 0 represents default.
+ */
+ tim_parse_ring_ctl_list(value, opaque);
+
+ return 0;
+}
static void
tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
@@ -549,6 +632,8 @@ tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
&dev->enable_stats);
rte_kvargs_process(kvlist, OTX2_TIM_RINGS_LMT, &parse_kvargs_value,
&dev->min_ring_cnt);
+ rte_kvargs_process(kvlist, OTX2_TIM_RING_CTL,
+ &tim_parse_kvargs_dict, &dev);
}
void
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index 5af724ef9..eec0189c1 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -111,6 +111,13 @@ struct otx2_tim_ent {
uint64_t wqe;
} __rte_packed;
+struct otx2_tim_ctl {
+ uint16_t ring;
+ uint16_t chunk_slots;
+ uint16_t disable_npa;
+ uint16_t enable_stats;
+};
+
struct otx2_tim_evdev {
struct rte_pci_device *pci_dev;
struct rte_eventdev *event_dev;
@@ -123,6 +130,9 @@ struct otx2_tim_evdev {
uint16_t chunk_slots;
uint16_t min_ring_cnt;
uint8_t enable_stats;
+ uint16_t ring_ctl_cnt;
+ struct otx2_tim_ctl *ring_ctl_data;
+ /* HW const */
/* MSIX offsets */
uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS];
};
--
2.22.0
* [dpdk-dev] [PATCH v2 44/44] doc: update Marvell OCTEON TX2 eventdev documentation
2019-06-28 7:49 [dpdk-dev] [PATCH v2 00/44] OCTEONTX2 event device driver pbhagavatula
` (42 preceding siblings ...)
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 43/44] event/octeontx2: add devargs to control adapter parameters pbhagavatula
@ 2019-06-28 7:50 ` pbhagavatula
2019-06-28 9:00 ` Thomas Monjalon
43 siblings, 1 reply; 48+ messages in thread
From: pbhagavatula @ 2019-06-28 7:50 UTC (permalink / raw)
To: jerinj, Thomas Monjalon, John McNamara, Marko Kovacevic
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Update the Marvell OCTEON TX2 eventdev documentation with event timer
adapter, i.e. TIM, capabilities.
Claim maintainership of the OCTEON TX2 eventdev.
Cc: John McNamara <john.mcnamara@intel.com>
Cc: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
MAINTAINERS | 6 ++++
doc/guides/eventdevs/octeontx2.rst | 54 ++++++++++++++++++++++++++++++
2 files changed, 60 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 0c3b48920..dfd1f77d6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1058,6 +1058,12 @@ M: Peter Mccarthy <peter.mccarthy@intel.com>
F: drivers/event/opdl/
F: doc/guides/eventdevs/opdl.rst
+Marvell OCTEON TX2
+M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+M: Jerin Jacob <jerinj@marvell.com>
+F: drivers/event/octeontx2/
+F: doc/guides/eventdevs/octeontx2.rst
+
Rawdev Drivers
--------------
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 928251aa6..b88d9cf7a 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -28,6 +28,10 @@ Features of the OCTEON TX2 SSO PMD are:
- Open system with configurable amount of outstanding events limited only by
DRAM
- HW accelerated dequeue timeout support to enable power management
+- HW managed event timer support through TIM, with high precision and a
+ time granularity of 2.5 us.
+- Up to 256 TIM rings, i.e. event timer adapters.
+- Up to 8 rings traversed in parallel.
Prerequisites and Compilation procedure
---------------------------------------
@@ -90,6 +94,54 @@ Runtime Config Options
--dev "0002:0e:00.0,selftest=1"
+- ``TIM disable NPA``
+
+ By default, chunks are allocated from the NPA so that TIM can automatically
+ free them while traversing the list of chunks. The ``tim_disable_npa``
+ devargs parameter disables the NPA and uses a software mempool to manage chunks.
+ For example::
+
+ --dev "0002:0e:00.0,tim_disable_npa=1"
+
+- ``TIM modify chunk slots``
+
+ The ``tim_chnk_slots`` devargs can be used to modify the number of chunk
+ slots. Chunks are used to store event timers; a chunk can be visualised as
+ an array whose last element points to the next chunk while the rest store
+ events. TIM traverses the list of chunks and enqueues the event timers to
+ SSO. The default value is 255 and the maximum is 4095.
+ For example::
+
+ --dev "0002:0e:00.0,tim_chnk_slots=1023"
+
+- ``TIM enable arm/cancel statistics``
+
+ The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of
+ the event timer adapter.
+ For example::
+
+ --dev "0002:0e:00.0,tim_stats_ena=1"
+
+- ``TIM limit max rings reserved``
+
+ The ``tim_rings_lmt`` devargs can be used to limit the maximum number of
+ TIM rings, i.e. event timer adapters, reserved on probe. Since TIM rings
+ are HW resources, reserving only as many as needed avoids starving other
+ applications.
+ For example::
+
+ --dev "0002:0e:00.0,tim_rings_lmt=5"
+
+- ``TIM ring control internal parameters``
+
+ When using multiple TIM rings, the ``tim_ring_ctl`` devargs can be used to
+ control each TIM ring's internal parameters uniquely. The expected dict
+ format is [ring-chnk_slots-disable_npa-stats_ena], where 0 selects the
+ default value.
+ For example::
+
+ --dev "0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]"
+
Debugging Options
~~~~~~~~~~~~~~~~~
@@ -102,3 +154,5 @@ Debugging Options
+===+============+=======================================================+
| 1 | SSO | --log-level='pmd\.event\.octeontx2,8' |
+---+------------+-------------------------------------------------------+
+ | 2 | TIM | --log-level='pmd\.event\.octeontx2\.timer,8' |
+ +---+------------+-------------------------------------------------------+
--
2.22.0
* Re: [dpdk-dev] [PATCH v2 01/44] event/octeontx2: add build infra and device probe
2019-06-28 7:49 ` [dpdk-dev] [PATCH v2 01/44] event/octeontx2: add build infra and device probe pbhagavatula
@ 2019-06-28 8:55 ` Thomas Monjalon
2019-06-28 9:01 ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 48+ messages in thread
From: Thomas Monjalon @ 2019-06-28 8:55 UTC (permalink / raw)
To: pbhagavatula; +Cc: jerinj, Anatoly Burakov, dev, Nithin Dabilpuram
Hi,
I'm checking some ordering in this patch:
28/06/2019 09:49, pbhagavatula@marvell.com:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> --- a/config/common_base
> +++ b/config/common_base
> @@ -747,6 +747,11 @@ CONFIG_RTE_LIBRTE_PMD_DPAA2_QDMA_RAWDEV=n
> #
> CONFIG_RTE_LIBRTE_PMD_IFPGA_RAWDEV=y
>
> +#
> +# Compile PMD for octeontx sso event device
> +#
> +CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV=y
Looks like you chose this line randomly?
Would be better to add after CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF.
> #
> # Compile librte_ring
> #
> diff --git a/drivers/event/Makefile b/drivers/event/Makefile
> index 03ad1b6cb..e4e7eff37 100644
> --- a/drivers/event/Makefile
> +++ b/drivers/event/Makefile
> @@ -15,5 +15,6 @@ ifeq ($(CONFIG_RTE_EAL_VFIO)$(CONFIG_RTE_LIBRTE_FSLMC_BUS),yy)
> DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_EVENTDEV) += dpaa2
> endif
> DIRS-$(CONFIG_RTE_LIBRTE_PMD_OPDL_EVENTDEV) += opdl
> +DIRS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += octeontx2
Same here, should be after octeontx.
> --- a/drivers/event/meson.build
> +++ b/drivers/event/meson.build
> @@ -1,7 +1,7 @@
> # SPDX-License-Identifier: BSD-3-Clause
> # Copyright(c) 2017 Intel Corporation
>
> -drivers = ['dpaa', 'dpaa2', 'opdl', 'skeleton', 'sw', 'dsw']
> +drivers = ['dpaa', 'dpaa2', 'opdl', 'skeleton', 'sw', 'dsw', 'octeontx2']
Could be before opdl (and keep SW ones at the end).
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -109,6 +109,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF)$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOO
> _LDLIBS-y += -lrte_common_octeontx
> endif
> OCTEONTX2-y := $(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL)
> +OCTEONTX2-y += $(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV)
> ifeq ($(findstring y,$(OCTEONTX2-y)),y)
> _LDLIBS-y += -lrte_common_octeontx2
> endif
> @@ -293,6 +294,7 @@ endif # CONFIG_RTE_LIBRTE_FSLMC_BUS
> _LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += -lrte_mempool_octeontx
> _LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX_PMD) += -lrte_pmd_octeontx
> _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OPDL_EVENTDEV) += -lrte_pmd_opdl_event
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += -lrte_pmd_octeontx2_event
Should be after OCTEONTX.
> endif # CONFIG_RTE_LIBRTE_EVENTDEV
* Re: [dpdk-dev] [PATCH v2 44/44] doc: update Marvell OCTEON TX2 eventdev documentation
2019-06-28 7:50 ` [dpdk-dev] [PATCH v2 44/44] doc: update Marvell OCTEON TX2 eventdev documentation pbhagavatula
@ 2019-06-28 9:00 ` Thomas Monjalon
0 siblings, 0 replies; 48+ messages in thread
From: Thomas Monjalon @ 2019-06-28 9:00 UTC (permalink / raw)
To: pbhagavatula; +Cc: jerinj, John McNamara, Marko Kovacevic, dev
28/06/2019 09:50, pbhagavatula@marvell.com:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Update Marvell OCTEON TX2 eventdev with event timer adapter i.e. TIM
> capabilities.
> Claim Maintainership of OCTEON TX2 eventdev.
It would be more meaningful to update the documentation in the patches
adding the related code.
Same for MAINTAINERS, please update it when creating files.
Thanks
* Re: [dpdk-dev] [PATCH v2 01/44] event/octeontx2: add build infra and device probe
2019-06-28 8:55 ` Thomas Monjalon
@ 2019-06-28 9:01 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 48+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2019-06-28 9:01 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Jerin Jacob Kollanukkaran, Anatoly Burakov, dev, Nithin Kumar Dabilpuram
Hi Thomas,
Will correct the flag order in v3.
Thanks,
Pavan.