* [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver
@ 2019-06-02 15:23 jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 01/58] net/octeontx2: add build infrastructure jerinj
` (59 more replies)
0 siblings, 60 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC
To: dev; +Cc: ferruh.yigit, Jerin Jacob
From: Jerin Jacob <jerinj@marvell.com>
This patchset adds support for OCTEON TX2 ethdev driver.
This patch set depends on the "OCTEON TX2 common and mempool driver" series:
http://mails.dpdk.org/archives/dev/2019-June/133329.html
This patch series is also available at https://github.com/jerinjacobk/dpdk-octeontx2-nix,
including the dependency patches, for quick download and review.
Harman Kalra (2):
net/octeontx2: add PTP base support
net/octeontx2: add remaining PTP operations
Jerin Jacob (17):
net/octeontx2: add build infrastructure
net/octeontx2: add ethdev probe and remove
net/octeontx2: add device init and uninit
net/octeontx2: add devargs parsing functions
net/octeontx2: handle device error interrupts
net/octeontx2: add info get operation
net/octeontx2: add device configure operation
net/octeontx2: handle queue specific error interrupts
net/octeontx2: add context debug utils
net/octeontx2: add Rx queue setup and release
net/octeontx2: add Tx queue setup and release
net/octeontx2: add ptype support
net/octeontx2: add Rx and Tx descriptor operations
net/octeontx2: add Rx burst support
net/octeontx2: add Rx vector version
net/octeontx2: add Tx burst support
doc: add Marvell OCTEON TX2 ethdev documentation
Kiran Kumar K (13):
net/octeontx2: add register dump support
net/octeontx2: add basic stats operation
net/octeontx2: add extended stats operations
net/octeontx2: introduce flow driver
net/octeontx2: flow utility functions
net/octeontx2: flow mailbox utility
net/octeontx2: add flow MCAM utility functions
net/octeontx2: add flow parsing for outer layers
net/octeontx2: add flow parsing for inner layers
net/octeontx2: add flow actions support
net/octeontx2: add flow operations
net/octeontx2: add additional flow operations
net/octeontx2: add flow init and fini
Krzysztof Kanas (2):
net/octeontx2: alloc and free TM HW resources
net/octeontx2: enable Tx through traffic manager
Nithin Dabilpuram (9):
net/octeontx2: add queue start and stop operations
net/octeontx2: introduce traffic manager
net/octeontx2: configure TM HW resources
net/octeontx2: add queue info and pool supported operations
net/octeontx2: add Rx multi segment version
net/octeontx2: add Tx multi segment version
net/octeontx2: add Tx vector version
net/octeontx2: add device start operation
net/octeontx2: add device stop and close operations
Sunil Kumar Kori (1):
net/octeontx2: add unicast MAC filter
Vamsi Attunuru (9):
net/octeontx2: add link stats operations
net/octeontx2: add promiscuous and allmulticast mode
net/octeontx2: add RSS support
net/octeontx2: handle port reconfigure
net/octeontx2: add link status set operations
net/octeontx2: add module EEPROM dump
net/octeontx2: add flow control support
net/octeontx2: add FW version get operation
net/octeontx2: add MTU set operation
Vivek Sharma (5):
net/octeontx2: connect flow API to ethdev ops
net/octeontx2: implement VLAN utility functions
net/octeontx2: support VLAN offloads
net/octeontx2: support VLAN filters
net/octeontx2: support VLAN TPID and PVID for Tx
MAINTAINERS | 8 +
config/common_base | 5 +
doc/guides/nics/features/octeontx2.ini | 47 +
doc/guides/nics/features/octeontx2_vec.ini | 44 +
doc/guides/nics/features/octeontx2_vf.ini | 39 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/octeontx2.rst | 289 +++
doc/guides/platform/octeontx2.rst | 3 +
doc/guides/rel_notes/release_19_05.rst | 1 +
drivers/net/Makefile | 1 +
drivers/net/meson.build | 2 +-
drivers/net/octeontx2/Makefile | 56 +
drivers/net/octeontx2/meson.build | 41 +
drivers/net/octeontx2/otx2_ethdev.c | 1959 +++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 496 +++++
drivers/net/octeontx2/otx2_ethdev_debug.c | 500 +++++
drivers/net/octeontx2/otx2_ethdev_devargs.c | 143 ++
drivers/net/octeontx2/otx2_ethdev_irq.c | 346 +++
drivers/net/octeontx2/otx2_ethdev_ops.c | 454 ++++
drivers/net/octeontx2/otx2_flow.c | 951 ++++++++
drivers/net/octeontx2/otx2_flow.h | 384 ++++
drivers/net/octeontx2/otx2_flow_ctrl.c | 230 ++
drivers/net/octeontx2/otx2_flow_parse.c | 944 ++++++++
drivers/net/octeontx2/otx2_flow_utils.c | 884 ++++++++
drivers/net/octeontx2/otx2_link.c | 157 ++
drivers/net/octeontx2/otx2_lookup.c | 279 +++
drivers/net/octeontx2/otx2_mac.c | 149 ++
drivers/net/octeontx2/otx2_ptp.c | 273 +++
drivers/net/octeontx2/otx2_rss.c | 378 ++++
drivers/net/octeontx2/otx2_rx.c | 410 ++++
drivers/net/octeontx2/otx2_rx.h | 333 +++
drivers/net/octeontx2/otx2_stats.c | 387 ++++
drivers/net/octeontx2/otx2_tm.c | 1396 ++++++++++++
drivers/net/octeontx2/otx2_tm.h | 153 ++
drivers/net/octeontx2/otx2_tx.c | 1033 +++++++++
drivers/net/octeontx2/otx2_tx.h | 370 ++++
drivers/net/octeontx2/otx2_vlan.c | 933 ++++++++
.../octeontx2/rte_pmd_octeontx2_version.map | 7 +
mk/rte.app.mk | 2 +
39 files changed, 14087 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/nics/features/octeontx2.ini
create mode 100644 doc/guides/nics/features/octeontx2_vec.ini
create mode 100644 doc/guides/nics/features/octeontx2_vf.ini
create mode 100644 doc/guides/nics/octeontx2.rst
create mode 100644 drivers/net/octeontx2/Makefile
create mode 100644 drivers/net/octeontx2/meson.build
create mode 100644 drivers/net/octeontx2/otx2_ethdev.c
create mode 100644 drivers/net/octeontx2/otx2_ethdev.h
create mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c
create mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
create mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
create mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c
create mode 100644 drivers/net/octeontx2/otx2_flow.c
create mode 100644 drivers/net/octeontx2/otx2_flow.h
create mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
create mode 100644 drivers/net/octeontx2/otx2_flow_parse.c
create mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
create mode 100644 drivers/net/octeontx2/otx2_link.c
create mode 100644 drivers/net/octeontx2/otx2_lookup.c
create mode 100644 drivers/net/octeontx2/otx2_mac.c
create mode 100644 drivers/net/octeontx2/otx2_ptp.c
create mode 100644 drivers/net/octeontx2/otx2_rss.c
create mode 100644 drivers/net/octeontx2/otx2_rx.c
create mode 100644 drivers/net/octeontx2/otx2_rx.h
create mode 100644 drivers/net/octeontx2/otx2_stats.c
create mode 100644 drivers/net/octeontx2/otx2_tm.c
create mode 100644 drivers/net/octeontx2/otx2_tm.h
create mode 100644 drivers/net/octeontx2/otx2_tx.c
create mode 100644 drivers/net/octeontx2/otx2_tx.h
create mode 100644 drivers/net/octeontx2/otx2_vlan.c
create mode 100644 drivers/net/octeontx2/rte_pmd_octeontx2_version.map
--
2.21.0
* [dpdk-dev] [PATCH v1 01/58] net/octeontx2: add build infrastructure
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-06 15:33 ` Ferruh Yigit
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 02/58] net/octeontx2: add ethdev probe and remove jerinj
` (58 subsequent siblings)
59 siblings, 1 reply; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC
To: dev, Thomas Monjalon, John McNamara, Marko Kovacevic,
Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Pavan Nikhilesh
From: Jerin Jacob <jerinj@marvell.com>
Add the bare minimum PMD library and documentation build infrastructure.
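As a quick sanity check (an illustrative workflow, not part of this
patch), the PMD builds along with the rest of DPDK through the meson
build:

  meson build
  ninja -C build

The legacy make build picks it up via the new
CONFIG_RTE_LIBRTE_OCTEONTX2_PMD=y option added to config/common_base.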
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
config/common_base | 5 +++
doc/guides/nics/features/octeontx2.ini | 8 ++++
doc/guides/nics/features/octeontx2_vec.ini | 8 ++++
doc/guides/nics/features/octeontx2_vf.ini | 8 ++++
drivers/net/Makefile | 1 +
drivers/net/meson.build | 2 +-
drivers/net/octeontx2/Makefile | 38 +++++++++++++++++++
drivers/net/octeontx2/meson.build | 24 ++++++++++++
drivers/net/octeontx2/otx2_ethdev.c | 3 ++
.../octeontx2/rte_pmd_octeontx2_version.map | 4 ++
mk/rte.app.mk | 2 +
11 files changed, 102 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/nics/features/octeontx2.ini
create mode 100644 doc/guides/nics/features/octeontx2_vec.ini
create mode 100644 doc/guides/nics/features/octeontx2_vf.ini
create mode 100644 drivers/net/octeontx2/Makefile
create mode 100644 drivers/net/octeontx2/meson.build
create mode 100644 drivers/net/octeontx2/otx2_ethdev.c
create mode 100644 drivers/net/octeontx2/rte_pmd_octeontx2_version.map
diff --git a/config/common_base b/config/common_base
index 4a3de0360..38edad355 100644
--- a/config/common_base
+++ b/config/common_base
@@ -405,6 +405,11 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
#
CONFIG_RTE_LIBRTE_OCTEONTX_PMD=y
+#
+# Compile burst-oriented Cavium OCTEONTX2 network PMD driver
+#
+CONFIG_RTE_LIBRTE_OCTEONTX2_PMD=y
+
#
# Compile WRS accelerated virtual port (AVP) guest PMD driver
#
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
new file mode 100644
index 000000000..0ec3b6983
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'octeontx2' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO = Y
+ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
new file mode 100644
index 000000000..774f136c1
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'octeontx2_vec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO = Y
+ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
new file mode 100644
index 000000000..36642354e
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'octeontx2_vf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO = Y
+ARMv8 = Y
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 3a72cf38c..5bb618b21 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -45,6 +45,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += nfp
DIRS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt
DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += null
DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_PMD) += octeontx
+DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += octeontx2
DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index ed99896c3..086a2f4cd 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -31,7 +31,7 @@ drivers = ['af_packet',
'netvsc',
'nfb',
'nfp',
- 'null', 'octeontx', 'pcap', 'qede', 'ring',
+ 'null', 'octeontx', 'octeontx2', 'pcap', 'qede', 'ring',
'sfc',
'softnic',
'szedata2',
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
new file mode 100644
index 000000000..0a606d27b
--- /dev/null
+++ b/drivers/net/octeontx2/Makefile
@@ -0,0 +1,38 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_octeontx2.a
+
+CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
+CFLAGS += -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += -flax-vector-conversions
+
+ifneq ($(CONFIG_RTE_ARCH_64),y)
+CFLAGS += -Wno-int-to-pointer-cast
+CFLAGS += -Wno-pointer-to-int-cast
+endif
+
+EXPORT_MAP := rte_pmd_octeontx2_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_ethdev.c
+
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
+LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_bus_pci -lrte_mempool_octeontx2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
new file mode 100644
index 000000000..0bd32446b
--- /dev/null
+++ b/drivers/net/octeontx2/meson.build
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+sources = files(
+ 'otx2_ethdev.c',
+ )
+
+allow_experimental_apis = true
+deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2']
+
+cflags += ['-flax-vector-conversions','-DALLOW_EXPERIMENTAL_API']
+
+extra_flags = []
+# This integrated controller runs only on an arm64 machine; suppress 32-bit cast warnings
+if not dpdk_conf.get('RTE_ARCH_64')
+ extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast']
+endif
+
+foreach flag: extra_flags
+ if cc.has_argument(flag)
+ cflags += flag
+ endif
+endforeach
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
new file mode 100644
index 000000000..d26535dee
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -0,0 +1,3 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
new file mode 100644
index 000000000..fc8c95e91
--- /dev/null
+++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
@@ -0,0 +1,4 @@
+DPDK_19.05 {
+
+ local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index cd89ccfd5..3dff91190 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -127,6 +127,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_COMMON_DPAAX) += -lrte_common_dpaax
endif
OCTEONTX2-y := $(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL)
+OCTEONTX2-y += $(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD)
ifeq ($(findstring y,$(OCTEONTX2-y)),y)
_LDLIBS-y += -lrte_common_octeontx2
endif
@@ -197,6 +198,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += -lrte_pmd_mvpp2
_LDLIBS-$(CONFIG_RTE_LIBRTE_MVNETA_PMD) += -lrte_pmd_mvneta
_LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += -lrte_pmd_nfp
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null
+_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += -lrte_pmd_octeontx2 -lm
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += -lrte_pmd_pcap -lpcap
_LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += -lrte_pmd_qede
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING) += -lrte_pmd_ring
--
2.21.0
* [dpdk-dev] [PATCH v1 02/58] net/octeontx2: add ethdev probe and remove
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 01/58] net/octeontx2: add build infrastructure jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 03/58] net/octeontx2: add device init and uninit jerinj
` (57 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, Anatoly Burakov
Cc: ferruh.yigit
From: Jerin Jacob <jerinj@marvell.com>
Add basic PCIe ethdev probe and remove operations.
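Since the PMD declares a vfio-pci kernel module dependency, the device
must be bound to vfio-pci before the probe path can run, for example
(0002:02:00.0 is a placeholder BDF):

  usertools/dpdk-devbind.py -b vfio-pci 0002:02:00.0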
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 93 +++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 27 +++++++++
2 files changed, 120 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_ethdev.h
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index d26535dee..05fa8988e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1,3 +1,96 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(C) 2019 Marvell International Ltd.
*/
+
+#include <rte_ethdev_pci.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+
+#include "otx2_ethdev.h"
+
+static int
+otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+
+ return -ENODEV;
+}
+
+static int
+otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
+{
+ RTE_SET_USED(eth_dev);
+ RTE_SET_USED(mbox_close);
+
+ return -ENODEV;
+}
+
+static int
+nix_remove(struct rte_pci_device *pci_dev)
+{
+ struct rte_eth_dev *eth_dev;
+ int rc;
+
+ eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+ if (eth_dev) {
+ /* Cleanup eth dev */
+ rc = otx2_eth_dev_uninit(eth_dev, true);
+ if (rc)
+ return rc;
+
+ rte_eth_dev_pci_release(eth_dev);
+ }
+
+ /* Nothing to be done for secondary processes */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ return 0;
+}
+
+static int
+nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ int rc;
+
+ RTE_SET_USED(pci_drv);
+
+ rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct otx2_eth_dev),
+ otx2_eth_dev_init);
+
+ /* On error in a secondary process, recheck whether the port exists
+ * in the primary process or is in the middle of detaching.
+ */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY && rc)
+ if (!rte_eth_dev_allocated(pci_dev->device.name))
+ return 0;
+ return rc;
+}
+
+static const struct rte_pci_id pci_nix_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_PF)
+ },
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_VF)
+ },
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
+ PCI_DEVID_OCTEONTX2_RVU_AF_VF)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver pci_nix = {
+ .id_table = pci_nix_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA |
+ RTE_PCI_DRV_INTR_LSC,
+ .probe = nix_probe,
+ .remove = nix_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_octeontx2, pci_nix);
+RTE_PMD_REGISTER_PCI_TABLE(net_octeontx2, pci_nix_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_octeontx2, "vfio-pci");
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
new file mode 100644
index 000000000..fd01a3254
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_ETHDEV_H__
+#define __OTX2_ETHDEV_H__
+
+#include <stdint.h>
+
+#include <rte_common.h>
+
+#include "otx2_common.h"
+#include "otx2_dev.h"
+#include "otx2_irq.h"
+#include "otx2_mempool.h"
+
+struct otx2_eth_dev {
+ OTX2_DEV; /* Base class */
+} __rte_cache_aligned;
+
+static inline struct otx2_eth_dev *
+otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
+{
+ return eth_dev->data->dev_private;
+}
+
+#endif /* __OTX2_ETHDEV_H__ */
--
2.21.0
* [dpdk-dev] [PATCH v1 03/58] net/octeontx2: add device init and uninit
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 01/58] net/octeontx2: add build infrastructure jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 02/58] net/octeontx2: add ethdev probe and remove jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 04/58] net/octeontx2: add devargs parsing functions jerinj
` (56 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, Anatoly Burakov
Cc: ferruh.yigit, Sunil Kumar Kori, Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add basic init and uninit functions, which include attaching the
LF device to the probed PCIe device. Initialization proceeds as
otx2_dev_init (mbox setup), NPA LF init, NIX LF attach, MSIX offset
query and MAC address setup; uninit unwinds these in reverse order.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 277 +++++++++++++++++++++++++++-
drivers/net/octeontx2/otx2_ethdev.h | 72 ++++++++
drivers/net/octeontx2/otx2_mac.c | 72 ++++++++
5 files changed, 418 insertions(+), 5 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_mac.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 0a606d27b..9ca1eea99 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -30,6 +30,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_mac.c \
otx2_ethdev.c
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 0bd32446b..6cdd036e9 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files(
+ 'otx2_mac.c',
'otx2_ethdev.c',
)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 05fa8988e..08f03b4c3 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -8,27 +8,277 @@
#include "otx2_ethdev.h"
+static inline void
+otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+}
+
+static inline void
+otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+}
+
+static inline uint64_t
+nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
+{
+ uint64_t capa = NIX_RX_OFFLOAD_CAPA;
+
+ if (otx2_dev_is_vf(dev))
+ capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+
+ return capa;
+}
+
+static inline uint64_t
+nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
+{
+ RTE_SET_USED(dev);
+
+ return NIX_TX_OFFLOAD_CAPA;
+}
+
+static int
+nix_lf_free(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_lf_free_req *req;
+ struct ndc_sync_op *ndc_req;
+ int rc;
+
+ /* Sync NDC-NIX for LF */
+ ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
+ ndc_req->nix_lf_tx_sync = 1;
+ ndc_req->nix_lf_rx_sync = 1;
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ otx2_err("Error on NDC-NIX-[TX, RX] LF sync, rc %d", rc);
+
+ req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
+ /* Let the AF driver free all of this NIX LF's
+ * NPC entries allocated using the NPC mailbox.
+ */
+ req->flags = 0;
+
+ return otx2_mbox_process(mbox);
+}
+
+static inline int
+nix_lf_attach(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct rsrc_attach_req *req;
+
+ /* Attach NIX(lf) */
+ req = otx2_mbox_alloc_msg_attach_resources(mbox);
+ req->modify = true;
+ req->nixlf = true;
+
+ return otx2_mbox_process(mbox);
+}
+
+static inline int
+nix_lf_get_msix_offset(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct msix_offset_rsp *msix_rsp;
+ int rc;
+
+ /* Get NPA and NIX MSIX vector offsets */
+ otx2_mbox_alloc_msg_msix_offset(mbox);
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
+
+ dev->nix_msixoff = msix_rsp->nix_msixoff;
+
+ return rc;
+}
+
+static inline int
+otx2_eth_dev_lf_detach(struct otx2_mbox *mbox)
+{
+ struct rsrc_detach_req *req;
+
+ req = otx2_mbox_alloc_msg_detach_resources(mbox);
+
+ /* Detach all except npa lf */
+ req->partial = true;
+ req->nixlf = true;
+ req->sso = true;
+ req->ssow = true;
+ req->timlfs = true;
+ req->cptlfs = true;
+
+ return otx2_mbox_process(mbox);
+}
+
static int
otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
{
- RTE_SET_USED(eth_dev);
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_pci_device *pci_dev;
+ int rc, max_entries;
- return -ENODEV;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ /* Setup callbacks for secondary process */
+ otx2_eth_set_tx_function(eth_dev);
+ otx2_eth_set_rx_function(eth_dev);
+ return 0;
+ }
+
+ pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ rte_eth_copy_pci_info(eth_dev, pci_dev);
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
+
+ /* Zero out everything after OTX2_DEV to allow proper dev_reset() */
+ memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
+ offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
+
+ if (!dev->mbox_active) {
+ /* Initialize the base otx2_dev object only if it is
+ * not already initialized (mbox inactive)
+ */
+ rc = otx2_dev_init(pci_dev, dev);
+ if (rc) {
+ otx2_err("Failed to initialize otx2_dev rc=%d", rc);
+ goto error;
+ }
+ }
+
+ /* Grab the NPA LF if required */
+ rc = otx2_npa_lf_init(pci_dev, dev);
+ if (rc)
+ goto otx2_dev_uninit;
+
+ dev->configured = 0;
+ dev->drv_inited = true;
+ dev->base = dev->bar2 + (RVU_BLOCK_ADDR_NIX0 << 20);
+ dev->lmt_addr = dev->bar2 + (RVU_BLOCK_ADDR_LMT << 20);
+
+ /* Attach NIX LF */
+ rc = nix_lf_attach(dev);
+ if (rc)
+ goto otx2_npa_uninit;
+
+ /* Get NIX MSIX offset */
+ rc = nix_lf_get_msix_offset(dev);
+ if (rc)
+ goto otx2_npa_uninit;
+
+ /* Get maximum number of supported MAC entries */
+ max_entries = otx2_cgx_mac_max_entries_get(dev);
+ if (max_entries < 0) {
+ otx2_err("Failed to get max entries for mac addr");
+ rc = -ENOTSUP;
+ goto mbox_detach;
+ }
+
+ /* For VFs, returned max_entries will be 0. But to keep default MAC
+ * address, one entry must be allocated, so set it to 1.
+ */
+ if (max_entries == 0)
+ max_entries = 1;
+
+ eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", max_entries *
+ RTE_ETHER_ADDR_LEN, 0);
+ if (eth_dev->data->mac_addrs == NULL) {
+ otx2_err("Failed to allocate memory for mac addr");
+ rc = -ENOMEM;
+ goto mbox_detach;
+ }
+
+ dev->max_mac_entries = max_entries;
+
+ rc = otx2_nix_mac_addr_get(eth_dev, dev->mac_addr);
+ if (rc)
+ goto free_mac_addrs;
+
+ /* Update the mac address */
+ memcpy(eth_dev->data->mac_addrs, dev->mac_addr, RTE_ETHER_ADDR_LEN);
+
+ /* Also sync same MAC address to CGX table */
+ otx2_cgx_mac_addr_set(eth_dev, &eth_dev->data->mac_addrs[0]);
+
+ dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
+ dev->rx_offload_capa = nix_get_rx_offload_capa(dev);
+
+ if (otx2_dev_is_A0(dev)) {
+ dev->hwcap |= OTX2_FIXUP_F_MIN_4K_Q;
+ dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL;
+ }
+
+ otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64
+ " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64,
+ eth_dev->data->port_id, dev->pf, dev->vf,
+ OTX2_ETH_DEV_PMD_VERSION, dev->nix_msixoff, dev->hwcap,
+ dev->rx_offload_capa, dev->tx_offload_capa);
+ return 0;
+
+free_mac_addrs:
+ rte_free(eth_dev->data->mac_addrs);
+mbox_detach:
+ otx2_eth_dev_lf_detach(dev->mbox);
+otx2_npa_uninit:
+ otx2_npa_lf_fini();
+otx2_dev_uninit:
+ otx2_dev_fini(pci_dev, dev);
+error:
+ otx2_err("Failed to init nix eth_dev rc=%d", rc);
+ return rc;
}
static int
otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
{
- RTE_SET_USED(eth_dev);
- RTE_SET_USED(mbox_close);
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_pci_device *pci_dev;
+ int rc;
- return -ENODEV;
+ /* Nothing to be done for secondary processes */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = nix_lf_free(dev);
+ if (rc)
+ otx2_err("Failed to free nix lf, rc=%d", rc);
+
+ rc = otx2_npa_lf_fini();
+ if (rc)
+ otx2_err("Failed to cleanup npa lf, rc=%d", rc);
+
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
+ dev->drv_inited = false;
+
+ pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ rc = otx2_eth_dev_lf_detach(dev->mbox);
+ if (rc)
+ otx2_err("Failed to detach resources, rc=%d", rc);
+
+ /* Check if mbox close is needed */
+ if (!mbox_close)
+ return 0;
+
+ if (otx2_npa_lf_active(dev) || otx2_dev_active_vfs(dev)) {
+ /* Will be freed later by PMD */
+ eth_dev->data->dev_private = NULL;
+ return 0;
+ }
+
+ otx2_dev_fini(pci_dev, dev);
+ return 0;
}
static int
nix_remove(struct rte_pci_device *pci_dev)
{
struct rte_eth_dev *eth_dev;
+ struct otx2_idev_cfg *idev;
+ struct otx2_dev *otx2_dev;
int rc;
eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
@@ -45,7 +295,24 @@ nix_remove(struct rte_pci_device *pci_dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Check for common resources */
+ idev = otx2_intra_dev_get_cfg();
+ if (!idev || !idev->npa_lf || idev->npa_lf->pci_dev != pci_dev)
+ return 0;
+
+ otx2_dev = container_of(idev->npa_lf, struct otx2_dev, npalf);
+
+ if (otx2_npa_lf_active(otx2_dev) || otx2_dev_active_vfs(otx2_dev))
+ goto exit;
+
+ /* Safe to cleanup mbox as no more users */
+ otx2_dev_fini(pci_dev, otx2_dev);
+ rte_free(otx2_dev);
return 0;
+
+exit:
+ otx2_info("%s: common resource in use by other devices", pci_dev->name);
+ return -EAGAIN;
}
static int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index fd01a3254..d9f72686a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -8,14 +8,76 @@
#include <stdint.h>
#include <rte_common.h>
+#include <rte_ethdev.h>
#include "otx2_common.h"
#include "otx2_dev.h"
#include "otx2_irq.h"
#include "otx2_mempool.h"
+#define OTX2_ETH_DEV_PMD_VERSION "1.0"
+
+/* Ethdev HWCAP and fixup flags. Use MSB bits to avoid conflicts with dev flags */
+
+/* Minimum CQ size should be 4K */
+#define OTX2_FIXUP_F_MIN_4K_Q BIT_ULL(63)
+#define otx2_ethdev_fixup_is_min_4k_q(dev) \
+ ((dev)->hwcap & OTX2_FIXUP_F_MIN_4K_Q)
+/* Limit CQ being full */
+#define OTX2_FIXUP_F_LIMIT_CQ_FULL BIT_ULL(62)
+#define otx2_ethdev_fixup_is_limit_cq_full(dev) \
+ ((dev)->hwcap & OTX2_FIXUP_F_LIMIT_CQ_FULL)
+
+/* Used for struct otx2_eth_dev::flags */
+#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
+
+#define NIX_TX_OFFLOAD_CAPA ( \
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
+ DEV_TX_OFFLOAD_MT_LOCKFREE | \
+ DEV_TX_OFFLOAD_VLAN_INSERT | \
+ DEV_TX_OFFLOAD_QINQ_INSERT | \
+ DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ DEV_TX_OFFLOAD_TCP_CKSUM | \
+ DEV_TX_OFFLOAD_UDP_CKSUM | \
+ DEV_TX_OFFLOAD_SCTP_CKSUM | \
+ DEV_TX_OFFLOAD_MULTI_SEGS | \
+ DEV_TX_OFFLOAD_IPV4_CKSUM)
+
+#define NIX_RX_OFFLOAD_CAPA ( \
+ DEV_RX_OFFLOAD_CHECKSUM | \
+ DEV_RX_OFFLOAD_SCTP_CKSUM | \
+ DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ DEV_RX_OFFLOAD_SCATTER | \
+ DEV_RX_OFFLOAD_JUMBO_FRAME | \
+ DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ DEV_RX_OFFLOAD_VLAN_STRIP | \
+ DEV_RX_OFFLOAD_VLAN_FILTER | \
+ DEV_RX_OFFLOAD_QINQ_STRIP | \
+ DEV_RX_OFFLOAD_TIMESTAMP)
+
struct otx2_eth_dev {
OTX2_DEV; /* Base class */
+ MARKER otx2_eth_dev_data_start;
+ uint16_t sqb_size;
+ uint16_t rx_chan_base;
+ uint16_t tx_chan_base;
+ uint8_t rx_chan_cnt;
+ uint8_t tx_chan_cnt;
+ uint8_t lso_tsov4_idx;
+ uint8_t lso_tsov6_idx;
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+ uint8_t max_mac_entries;
+ uint8_t configured;
+ uint16_t nix_msixoff;
+ uintptr_t base;
+ uintptr_t lmt_addr;
+ uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
+ uint64_t rx_offloads;
+ uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
+ uint64_t tx_offloads;
+ uint64_t rx_offload_capa;
+ uint64_t tx_offload_capa;
} __rte_cache_aligned;
static inline struct otx2_eth_dev *
@@ -24,4 +86,14 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
return eth_dev->data->dev_private;
}
+/* CGX */
+int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
+int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
+int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
+ struct rte_ether_addr *addr);
+
+/* Mac address handling */
+int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
+int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c
new file mode 100644
index 000000000..89b0ca6b0
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_mac.c
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+
+#include "otx2_dev.h"
+#include "otx2_ethdev.h"
+
+int
+otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct cgx_mac_addr_set_or_get *req;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ if (otx2_dev_active_vfs(dev))
+ return -ENOTSUP;
+
+ req = otx2_mbox_alloc_msg_cgx_mac_addr_set(mbox);
+ otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ otx2_err("Failed to set mac address in CGX, rc=%d", rc);
+
+ return 0;
+}
+
+int
+otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev)
+{
+ struct cgx_max_dmac_entries_get_rsp *rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_mac_max_entries_get(mbox);
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ return rsp->max_dmac_filters;
+}
+
+int
+otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_get_mac_addr_rsp *rsp;
+ int rc;
+
+ otx2_mbox_alloc_msg_nix_get_mac_addr(mbox);
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to get mac address, rc=%d", rc);
+ goto done;
+ }
+
+ otx2_mbox_memcpy(addr, rsp->mac_addr, RTE_ETHER_ADDR_LEN);
+
+done:
+ return rc;
+}
--
2.21.0
* [dpdk-dev] [PATCH v1 04/58] net/octeontx2: add devargs parsing functions
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (2 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 03/58] net/octeontx2: add device init and uninit jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 05/58] net/octeontx2: handle device error interrupts jerinj
` (55 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Pavan Nikhilesh
From: Jerin Jacob <jerinj@marvell.com>
Add the various devargs command line options supported by
this driver.
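For example (0002:02:00.0 is a placeholder BDF), the options can be
passed to an application as EAL device arguments:

  -w 0002:02:00.0,reta_size=256,scalar_enable=1,flow_max_priority=10

Options left unspecified keep their defaults: reta_size=64,
flow_prealloc_size=8, flow_max_priority=3 and ptype parsing enabled.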
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
drivers/net/octeontx2/Makefile | 3 +-
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 7 +
drivers/net/octeontx2/otx2_ethdev.h | 20 +++
drivers/net/octeontx2/otx2_ethdev_devargs.c | 143 ++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 10 ++
6 files changed, 183 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
create mode 100644 drivers/net/octeontx2/otx2_rx.h
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 9ca1eea99..dbcfec5b4 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -31,7 +31,8 @@ LIBABIVER := 1
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
- otx2_ethdev.c
+ otx2_ethdev.c \
+ otx2_ethdev_devargs.c
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_bus_pci -lrte_mempool_octeontx2
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 6cdd036e9..57657de3d 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
sources = files(
'otx2_mac.c',
'otx2_ethdev.c',
+ 'otx2_ethdev_devargs.c'
)
allow_experimental_apis = true
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 08f03b4c3..eeba0c2c6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -137,6 +137,13 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
+ /* Parse devargs string */
+ rc = otx2_ethdev_parse_devargs(eth_dev->device->devargs, dev);
+ if (rc) {
+ otx2_err("Failed to parse devargs rc=%d", rc);
+ goto error;
+ }
+
if (!dev->mbox_active) {
/* Initialize the base otx2_dev object
* only if already present
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index d9f72686a..f91e5fcac 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -9,11 +9,13 @@
#include <rte_common.h>
#include <rte_ethdev.h>
+#include <rte_kvargs.h>
#include "otx2_common.h"
#include "otx2_dev.h"
#include "otx2_irq.h"
#include "otx2_mempool.h"
+#include "otx2_rx.h"
#define OTX2_ETH_DEV_PMD_VERSION "1.0"
@@ -31,6 +33,8 @@
/* Used for struct otx2_eth_dev::flags */
#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
+#define NIX_RSS_RETA_SIZE 64
+
#define NIX_TX_OFFLOAD_CAPA ( \
DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
DEV_TX_OFFLOAD_MT_LOCKFREE | \
@@ -56,6 +60,15 @@
DEV_RX_OFFLOAD_QINQ_STRIP | \
DEV_RX_OFFLOAD_TIMESTAMP)
+struct otx2_rss_info {
+ uint16_t rss_size;
+};
+
+struct otx2_npc_flow_info {
+ uint16_t flow_prealloc_size;
+ uint16_t flow_max_priority;
+};
+
struct otx2_eth_dev {
OTX2_DEV; /* Base class */
MARKER otx2_eth_dev_data_start;
@@ -72,12 +85,15 @@ struct otx2_eth_dev {
uint16_t nix_msixoff;
uintptr_t base;
uintptr_t lmt_addr;
+ uint16_t scalar_ena;
uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
uint64_t rx_offloads;
uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
uint64_t tx_offloads;
uint64_t rx_offload_capa;
uint64_t tx_offload_capa;
+ struct otx2_rss_info rss_info;
+ struct otx2_npc_flow_info npc_flow;
} __rte_cache_aligned;
static inline struct otx2_eth_dev *
@@ -96,4 +112,8 @@ int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
+/* Devargs */
+int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
+ struct otx2_eth_dev *dev);
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
new file mode 100644
index 000000000..0b3e7c145
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -0,0 +1,143 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <inttypes.h>
+#include <math.h>
+
+#include "otx2_ethdev.h"
+
+static int
+parse_flow_max_priority(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint16_t val;
+
+ val = atoi(value);
+
+ /* Limit the max priority to 32 */
+ if (val < 1 || val > 32)
+ return -EINVAL;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+static int
+parse_flow_prealloc_size(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint16_t val;
+
+ val = atoi(value);
+
+ /* Limit the prealloc size to 32 */
+ if (val < 1 || val > 32)
+ return -EINVAL;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+static int
+parse_reta_size(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint32_t val;
+
+ val = atoi(value);
+
+ if (val <= ETH_RSS_RETA_SIZE_64)
+ val = ETH_RSS_RETA_SIZE_64;
+ else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+ val = ETH_RSS_RETA_SIZE_128;
+ else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+ val = ETH_RSS_RETA_SIZE_256;
+ else
+ val = NIX_RSS_RETA_SIZE;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+static int
+parse_ptype_flag(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint32_t val;
+
+ val = atoi(value);
+ if (val)
+ val = 0; /* Disable NIX_RX_OFFLOAD_PTYPE_F */
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+static int
+parse_flag(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+
+ *(uint16_t *)extra_args = atoi(value);
+
+ return 0;
+}
+
+#define OTX2_RSS_RETA_SIZE "reta_size"
+#define OTX2_PTYPE_DISABLE "ptype_disable"
+#define OTX2_SCL_ENABLE "scalar_enable"
+#define OTX2_FLOW_PREALLOC_SIZE "flow_prealloc_size"
+#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
+
+int
+otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
+{
+ uint16_t offload_flag = NIX_RX_OFFLOAD_PTYPE_F;
+ uint16_t rss_size = NIX_RSS_RETA_SIZE;
+ uint16_t flow_prealloc_size = 8;
+ uint16_t flow_max_priority = 3;
+ uint16_t scalar_enable = 0;
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ goto null_devargs;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ goto exit;
+
+ rte_kvargs_process(kvlist, OTX2_PTYPE_DISABLE,
+ &parse_ptype_flag, &offload_flag);
+ rte_kvargs_process(kvlist, OTX2_RSS_RETA_SIZE,
+ &parse_reta_size, &rss_size);
+ rte_kvargs_process(kvlist, OTX2_SCL_ENABLE,
+ &parse_flag, &scalar_enable);
+ rte_kvargs_process(kvlist, OTX2_FLOW_PREALLOC_SIZE,
+ &parse_flow_prealloc_size, &flow_prealloc_size);
+ rte_kvargs_process(kvlist, OTX2_FLOW_MAX_PRIORITY,
+ &parse_flow_max_priority, &flow_max_priority);
+ rte_kvargs_free(kvlist);
+
+null_devargs:
+ dev->rx_offload_flags = offload_flag;
+ dev->scalar_ena = scalar_enable;
+ dev->rss_info.rss_size = rss_size;
+ dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
+ dev->npc_flow.flow_max_priority = flow_max_priority;
+ return 0;
+
+exit:
+ return -EINVAL;
+}
+
+RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
+ OTX2_RSS_RETA_SIZE "=<64|128|256>"
+ OTX2_PTYPE_DISABLE "=1"
+ OTX2_SCL_ENABLE "=1"
+ OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
+ OTX2_FLOW_MAX_PRIORITY "=<1-32>");
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
new file mode 100644
index 000000000..1749c43ff
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_RX_H__
+#define __OTX2_RX_H__
+
+#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+
+#endif /* __OTX2_RX_H__ */
--
2.21.0
* [dpdk-dev] [PATCH v1 05/58] net/octeontx2: handle device error interrupts
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (3 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 04/58] net/octeontx2: add devargs parsing functions jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 06/58] net/octeontx2: add info get operation jerinj
` (54 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Handle device-specific error and RAS interrupts.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 12 +-
drivers/net/octeontx2/otx2_ethdev.h | 4 +
drivers/net/octeontx2/otx2_ethdev_irq.c | 140 ++++++++++++++++++++++++
5 files changed, 156 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index dbcfec5b4..a56143dcd 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -32,6 +32,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
otx2_ethdev.c \
+ otx2_ethdev_irq.c \
otx2_ethdev_devargs.c
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 57657de3d..c49e1cb80 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
sources = files(
'otx2_mac.c',
'otx2_ethdev.c',
+ 'otx2_ethdev_irq.c',
'otx2_ethdev_devargs.c'
)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index eeba0c2c6..67a7ebb36 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -175,12 +175,17 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
if (rc)
goto otx2_npa_uninit;
+ /* Register LF irq handlers */
+ rc = otx2_nix_register_irqs(eth_dev);
+ if (rc)
+ goto mbox_detach;
+
/* Get maximum number of supported MAC entries */
max_entries = otx2_cgx_mac_max_entries_get(dev);
if (max_entries < 0) {
otx2_err("Failed to get max entries for mac addr");
rc = -ENOTSUP;
- goto mbox_detach;
+ goto unregister_irq;
}
/* For VFs, returned max_entries will be 0. But to keep default MAC
@@ -194,7 +199,7 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
if (eth_dev->data->mac_addrs == NULL) {
otx2_err("Failed to allocate memory for mac addr");
rc = -ENOMEM;
- goto mbox_detach;
+ goto unregister_irq;
}
dev->max_mac_entries = max_entries;
@@ -226,6 +231,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
free_mac_addrs:
rte_free(eth_dev->data->mac_addrs);
+unregister_irq:
+ otx2_nix_unregister_irqs(eth_dev);
mbox_detach:
otx2_eth_dev_lf_detach(dev->mbox);
otx2_npa_uninit:
@@ -261,6 +268,7 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
dev->drv_inited = false;
pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ otx2_nix_unregister_irqs(eth_dev);
rc = otx2_eth_dev_lf_detach(dev->mbox);
if (rc)
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index f91e5fcac..670d1ff0b 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -102,6 +102,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
return eth_dev->data->dev_private;
}
+/* IRQ */
+int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
+void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
+
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
new file mode 100644
index 000000000..33fed93c4
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <inttypes.h>
+
+#include <rte_bus_pci.h>
+
+#include "otx2_ethdev.h"
+
+static void
+nix_lf_err_irq(void *param)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t intr;
+
+ intr = otx2_read64(dev->base + NIX_LF_ERR_INT);
+ if (intr == 0)
+ return;
+
+ otx2_err("Err_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+
+ /* Clear interrupt */
+ otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
+}
+
+static int
+nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc, vec;
+
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C);
+ /* Set used interrupt vectors */
+ rc = otx2_register_irq(handle, nix_lf_err_irq, eth_dev, vec);
+ /* Enable all dev interrupts except for RQ_DISABLED */
+ otx2_write64(~BIT_ULL(11), dev->base + NIX_LF_ERR_INT_ENA_W1S);
+
+ return rc;
+}
+
+static void
+nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec;
+
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C);
+ otx2_unregister_irq(handle, nix_lf_err_irq, eth_dev, vec);
+}
+
+static void
+nix_lf_ras_irq(void *param)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t intr;
+
+ intr = otx2_read64(dev->base + NIX_LF_RAS);
+ if (intr == 0)
+ return;
+
+ otx2_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+
+ /* Clear interrupt */
+ otx2_write64(intr, dev->base + NIX_LF_RAS);
+}
+
+static int
+nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc, vec;
+
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C);
+ /* Set used interrupt vectors */
+ rc = otx2_register_irq(handle, nix_lf_ras_irq, eth_dev, vec);
+ /* Enable dev interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1S);
+
+ return rc;
+}
+
+static void
+nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec;
+
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C);
+ otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec);
+}
+
+int
+otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc;
+
+ if (dev->nix_msixoff == MSIX_VECTOR_INVALID) {
+ otx2_err("Invalid NIXLF MSIX vector offset vector: 0x%x",
+ dev->nix_msixoff);
+ return -EINVAL;
+ }
+
+ /* Register lf err interrupt */
+ rc = nix_lf_register_err_irq(eth_dev);
+ /* Register RAS interrupt */
+ rc |= nix_lf_register_ras_irq(eth_dev);
+
+ return rc;
+}
+
+void
+otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev)
+{
+ nix_lf_unregister_err_irq(eth_dev);
+ nix_lf_unregister_ras_irq(eth_dev);
+}
--
2.21.0
* [dpdk-dev] [PATCH v1 06/58] net/octeontx2: add info get operation
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (4 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 05/58] net/octeontx2: handle device error interrupts jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 07/58] net/octeontx2: add device configure operation jerinj
` (53 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru, Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Add device information get operation.
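As a usage note (not part of this patch), the new callback can be
exercised from testpmd once a port is probed:

  testpmd> show port info 0

which reports the MTU limits, offload capabilities and descriptor
limits filled in by this operation.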
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 4 ++
doc/guides/nics/features/octeontx2_vec.ini | 4 ++
doc/guides/nics/features/octeontx2_vf.ini | 3 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 7 +++
drivers/net/octeontx2/otx2_ethdev.h | 27 +++++++++
drivers/net/octeontx2/otx2_ethdev_ops.c | 64 ++++++++++++++++++++++
8 files changed, 111 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 0ec3b6983..1f0148669 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -4,5 +4,9 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
Linux VFIO = Y
ARMv8 = Y
+Lock-free Tx queue = Y
+SR-IOV = Y
+Multiprocess aware = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 774f136c1..2b0644ee5 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -4,5 +4,9 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
Linux VFIO = Y
ARMv8 = Y
+Lock-free Tx queue = Y
+SR-IOV = Y
+Multiprocess aware = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 36642354e..80f0d5c95 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -4,5 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
Linux VFIO = Y
ARMv8 = Y
+Lock-free Tx queue = Y
+Multiprocess aware = Y
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index a56143dcd..820202eb2 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -33,6 +33,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
otx2_ethdev.c \
otx2_ethdev_irq.c \
+ otx2_ethdev_ops.c \
otx2_ethdev_devargs.c
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index c49e1cb80..a2dc983e3 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -6,6 +6,7 @@ sources = files(
'otx2_mac.c',
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
+ 'otx2_ethdev_ops.c',
'otx2_ethdev_devargs.c'
)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 67a7ebb36..6e3c70559 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -64,6 +64,11 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+/* Initialize and register driver with DPDK Application */
+static const struct eth_dev_ops otx2_eth_dev_ops = {
+ .dev_infos_get = otx2_nix_info_get,
+};
+
static inline int
nix_lf_attach(struct otx2_eth_dev *dev)
{
@@ -120,6 +125,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
struct rte_pci_device *pci_dev;
int rc, max_entries;
+ eth_dev->dev_ops = &otx2_eth_dev_ops;
+
/* For secondary processes, the primary has done all the work */
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
/* Setup callbacks for secondary process */
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 670d1ff0b..00baabaac 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -33,7 +33,30 @@
/* Used for struct otx2_eth_dev::flags */
#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
+#define VLAN_TAG_SIZE 4
+#define NIX_HW_L2_OVERHEAD 22 /* ETH_HLEN + 2 * VLAN_HLEN */
+#define NIX_MAX_HW_MTU 9190
+#define NIX_MAX_HW_FRS (NIX_MAX_HW_MTU + NIX_HW_L2_OVERHEAD)
+#define NIX_MIN_HW_FRS 60
+#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
#define NIX_RSS_RETA_SIZE 64
+#define NIX_RX_MIN_DESC 16
+#define NIX_RX_MIN_DESC_ALIGN 16
+#define NIX_RX_NB_SEG_MAX 6
+
+/* If PTP is enabled, an additional SEND MEM DESC is required, which
+ * takes 2 words; hence a max of 7 IOVA addresses is possible
+ */
+#if defined(RTE_LIBRTE_IEEE1588)
+#define NIX_TX_NB_SEG_MAX 7
+#else
+#define NIX_TX_NB_SEG_MAX 9
+#endif
+
+#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
+ ETH_RSS_TCP | ETH_RSS_SCTP | \
+ ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD)
#define NIX_TX_OFFLOAD_CAPA ( \
DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
@@ -102,6 +125,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
return eth_dev->data->dev_private;
}
+/* Ops */
+void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_info *dev_info);
+
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
new file mode 100644
index 000000000..9f86635d4
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+void
+otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ devinfo->min_rx_bufsize = NIX_MIN_HW_FRS;
+ devinfo->max_rx_pktlen = NIX_MAX_HW_FRS;
+ devinfo->max_rx_queues = RTE_MAX_QUEUES_PER_PORT;
+ devinfo->max_tx_queues = RTE_MAX_QUEUES_PER_PORT;
+ devinfo->max_mac_addrs = dev->max_mac_entries;
+ devinfo->max_vfs = pci_dev->max_vfs;
+ devinfo->max_mtu = devinfo->max_rx_pktlen - NIX_HW_L2_OVERHEAD;
+ devinfo->min_mtu = devinfo->min_rx_bufsize - NIX_HW_L2_OVERHEAD;
+
+ devinfo->rx_offload_capa = dev->rx_offload_capa;
+ devinfo->tx_offload_capa = dev->tx_offload_capa;
+ devinfo->rx_queue_offload_capa = 0;
+ devinfo->tx_queue_offload_capa = 0;
+
+ devinfo->reta_size = dev->rss_info.rss_size;
+ devinfo->hash_key_size = NIX_HASH_KEY_SIZE;
+ devinfo->flow_type_rss_offloads = NIX_RSS_OFFLOAD;
+
+ devinfo->default_rxconf = (struct rte_eth_rxconf) {
+ .rx_drop_en = 0,
+ .offloads = 0,
+ };
+
+ devinfo->default_txconf = (struct rte_eth_txconf) {
+ .offloads = 0,
+ };
+
+ devinfo->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = UINT16_MAX,
+ .nb_min = NIX_RX_MIN_DESC,
+ .nb_align = NIX_RX_MIN_DESC_ALIGN,
+ .nb_seg_max = NIX_RX_NB_SEG_MAX,
+ .nb_mtu_seg_max = NIX_RX_NB_SEG_MAX,
+ };
+ devinfo->rx_desc_lim.nb_max =
+ RTE_ALIGN_MUL_FLOOR(devinfo->rx_desc_lim.nb_max,
+ NIX_RX_MIN_DESC_ALIGN);
+
+ devinfo->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = UINT16_MAX,
+ .nb_min = 1,
+ .nb_align = 1,
+ .nb_seg_max = NIX_TX_NB_SEG_MAX,
+ .nb_mtu_seg_max = NIX_TX_NB_SEG_MAX,
+ };
+
+ /* Auto negotiation disabled */
+ devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+ devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
+ ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
+ ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+}
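For illustration, the limits filled in above surface to applications
through rte_eth_dev_info_get(); a minimal sketch (the
app_print_rx_desc_limits name is hypothetical, not part of the patch):

#include <stdio.h>
#include <rte_ethdev.h>

static void
app_print_rx_desc_limits(uint16_t port_id)
{
        struct rte_eth_dev_info info;

        rte_eth_dev_info_get(port_id, &info);
        printf("rx desc: max=%u min=%u align=%u\n",
               info.rx_desc_lim.nb_max,   /* 65520: UINT16_MAX floored to
                                           * a multiple of 16 */
               info.rx_desc_lim.nb_min,   /* NIX_RX_MIN_DESC (16) */
               info.rx_desc_lim.nb_align); /* NIX_RX_MIN_DESC_ALIGN (16) */
}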
--
2.21.0
* [dpdk-dev] [PATCH v1 07/58] net/octeontx2: add device configure operation
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (5 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 06/58] net/octeontx2: add info get operation jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 08/58] net/octeontx2: handle queue specific error interrupts jerinj
` (52 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add the device configure operation. This issues the lf_alloc
mailbox message to allocate a NIX LF and, upon return, the AF
provides the attributes of the selected LF.
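For illustration, the new op is reached through the standard ethdev
configure path; a minimal sketch (the app_configure_port name is
hypothetical, not part of the patch):

#include <string.h>
#include <rte_ethdev.h>

static int
app_configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.mq_mode = ETH_MQ_RX_RSS;  /* or ETH_MQ_RX_NONE */
        conf.txmode.mq_mode = ETH_MQ_TX_NONE; /* only accepted Tx mode */

        /* Reaches otx2_nix_configure(), which sends the NIX LF alloc
         * mailbox message to the AF. */
        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}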
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 151 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 11 ++
2 files changed, 162 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 6e3c70559..65d72a47f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -39,6 +39,52 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
return NIX_TX_OFFLOAD_CAPA;
}
+static int
+nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_lf_alloc_req *req;
+ struct nix_lf_alloc_rsp *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_nix_lf_alloc(mbox);
+ req->rq_cnt = nb_rxq;
+ req->sq_cnt = nb_txq;
+ req->cq_cnt = nb_rxq;
+ /* XQE_SZ should be in sync with NIX_CQ_ENTRY_SZ */
+ RTE_BUILD_BUG_ON(NIX_CQ_ENTRY_SZ != 128);
+ req->xqe_sz = NIX_XQESZ_W16;
+ req->rss_sz = dev->rss_info.rss_size;
+ req->rss_grps = NIX_RSS_GRPS;
+ req->npa_func = otx2_npa_pf_func_get();
+ req->sso_func = otx2_sso_pf_func_get();
+ req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
+ if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
+ req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
+ }
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ dev->sqb_size = rsp->sqb_size;
+ dev->tx_chan_base = rsp->tx_chan_base;
+ dev->rx_chan_base = rsp->rx_chan_base;
+ dev->rx_chan_cnt = rsp->rx_chan_cnt;
+ dev->tx_chan_cnt = rsp->tx_chan_cnt;
+ dev->lso_tsov4_idx = rsp->lso_tsov4_idx;
+ dev->lso_tsov6_idx = rsp->lso_tsov6_idx;
+ dev->lf_tx_stats = rsp->lf_tx_stats;
+ dev->lf_rx_stats = rsp->lf_rx_stats;
+ dev->cints = rsp->cints;
+ dev->qints = rsp->qints;
+ dev->npc_flow.channel = dev->rx_chan_base;
+
+ return 0;
+}
+
static int
nix_lf_free(struct otx2_eth_dev *dev)
{
@@ -64,9 +110,114 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+static int
+otx2_nix_configure(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rte_eth_conf *conf = &data->dev_conf;
+ struct rte_eth_rxmode *rxmode = &conf->rxmode;
+ struct rte_eth_txmode *txmode = &conf->txmode;
+ char ea_fmt[RTE_ETHER_ADDR_FMT_SIZE];
+ struct rte_ether_addr *ea;
+ uint8_t nb_rxq, nb_txq;
+ int rc;
+
+ rc = -EINVAL;
+
+ /* Sanity checks */
+ if (rte_eal_has_hugepages() == 0) {
+ otx2_err("Huge page is not configured");
+ goto fail;
+ }
+
+ if (rte_eal_iova_mode() != RTE_IOVA_VA) {
+ otx2_err("iova mode should be va");
+ goto fail;
+ }
+
+ if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ otx2_err("Setting link speed/duplex not supported");
+ goto fail;
+ }
+
+ if (conf->dcb_capability_en == 1) {
+ otx2_err("dcb enable is not supported");
+ goto fail;
+ }
+
+ if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
+ otx2_err("Flow director is not supported");
+ goto fail;
+ }
+
+ if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
+ goto fail;
+ }
+
+ if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
+ goto fail;
+ }
+
+ /* Free the resources allocated from the previous configure */
+ if (dev->configured == 1)
+ nix_lf_free(dev);
+
+ if (otx2_dev_is_A0(dev) &&
+ (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ otx2_err("Outer IP and SCTP checksum unsupported");
+ rc = -EINVAL;
+ goto fail;
+ }
+
+ dev->rx_offloads = rxmode->offloads;
+ dev->tx_offloads = txmode->offloads;
+ dev->rss_info.rss_grps = NIX_RSS_GRPS;
+
+ nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
+ nb_txq = RTE_MAX(data->nb_tx_queues, 1);
+
+ /* Alloc a nix lf */
+ rc = nix_lf_alloc(dev, nb_rxq, nb_txq);
+ if (rc) {
+ otx2_err("Failed to init nix_lf rc=%d", rc);
+ goto fail;
+ }
+
+ /* Update the mac address */
+ ea = eth_dev->data->mac_addrs;
+ memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
+ if (rte_is_zero_ether_addr(ea))
+ rte_eth_random_addr((uint8_t *)ea);
+
+ rte_ether_format_addr(ea_fmt, RTE_ETHER_ADDR_FMT_SIZE, ea);
+
+ otx2_nix_dbg("Configured port%d mac=%s nb_rxq=%d nb_txq=%d"
+ " rx_offloads=0x%" PRIx64 " tx_offloads=0x%" PRIx64 ""
+ " rx_flags=0x%x tx_flags=0x%x",
+ eth_dev->data->port_id, ea_fmt, nb_rxq,
+ nb_txq, dev->rx_offloads, dev->tx_offloads,
+ dev->rx_offload_flags, dev->tx_offload_flags);
+
+ /* All good */
+ dev->configured = 1;
+ dev->configured_nb_rx_qs = data->nb_rx_queues;
+ dev->configured_nb_tx_qs = data->nb_tx_queues;
+ return 0;
+
+fail:
+ return rc;
+}
+
/* Initialize and register driver with DPDK Application */
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
+ .dev_configure = otx2_nix_configure,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 00baabaac..27cad971c 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -39,11 +39,14 @@
#define NIX_MAX_HW_MTU 9190
#define NIX_MAX_HW_FRS (NIX_MAX_HW_MTU + NIX_HW_L2_OVERHEAD)
#define NIX_MIN_HW_FRS 60
+/* Group 0 is used for RSS; groups 1-7 are used for rte_flow RSS action */
+#define NIX_RSS_GRPS 8
#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
#define NIX_RSS_RETA_SIZE 64
#define NIX_RX_MIN_DESC 16
#define NIX_RX_MIN_DESC_ALIGN 16
#define NIX_RX_NB_SEG_MAX 6
+#define NIX_CQ_ENTRY_SZ 128
/* If PTP is enabled additional SEND MEM DESC is required which
* takes 2 words, hence max 7 iova address are possible
@@ -85,9 +88,11 @@
struct otx2_rss_info {
uint16_t rss_size;
+ uint8_t rss_grps;
};
struct otx2_npc_flow_info {
+ uint16_t channel; /* Rx channel */
uint16_t flow_prealloc_size;
uint16_t flow_max_priority;
};
@@ -104,7 +109,13 @@ struct otx2_eth_dev {
uint8_t lso_tsov6_idx;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t max_mac_entries;
+ uint8_t lf_tx_stats;
+ uint8_t lf_rx_stats;
+ uint16_t cints;
+ uint16_t qints;
uint8_t configured;
+ uint8_t configured_nb_rx_qs;
+ uint8_t configured_nb_tx_qs;
uint16_t nix_msixoff;
uintptr_t base;
uintptr_t lmt_addr;
--
2.21.0
* [dpdk-dev] [PATCH v1 08/58] net/octeontx2: handle queue specific error interrupts
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (6 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 07/58] net/octeontx2: add device configure operation jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 09/58] net/octeontx2: add context debug utils jerinj
` (51 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: ferruh.yigit
From: Jerin Jacob <jerinj@marvell.com>
Handle queue specific error interrupts.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 16 +-
drivers/net/octeontx2/otx2_ethdev.h | 9 ++
drivers/net/octeontx2/otx2_ethdev_irq.c | 191 ++++++++++++++++++++++++
3 files changed, 215 insertions(+), 1 deletion(-)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 65d72a47f..045855c2e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -163,8 +163,10 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
}
/* Free the resources allocated from the previous configure */
- if (dev->configured == 1)
+ if (dev->configured == 1) {
+ oxt2_nix_unregister_queue_irqs(eth_dev);
nix_lf_free(dev);
+ }
if (otx2_dev_is_A0(dev) &&
(txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
@@ -189,6 +191,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Register queue IRQs */
+ rc = oxt2_nix_register_queue_irqs(eth_dev);
+ if (rc) {
+ otx2_err("Failed to register queue interrupts rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/* Update the mac address */
ea = eth_dev->data->mac_addrs;
memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
@@ -210,6 +219,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
dev->configured_nb_tx_qs = data->nb_tx_queues;
return 0;
+free_nix_lf:
+ nix_lf_free(dev);
fail:
return rc;
}
@@ -413,6 +424,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Unregister queue irqs */
+ oxt2_nix_unregister_queue_irqs(eth_dev);
+
rc = nix_lf_free(dev);
if (rc)
otx2_err("Failed to free nix lf, rc=%d", rc);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 27cad971c..ca0587a63 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -86,6 +86,11 @@
DEV_RX_OFFLOAD_QINQ_STRIP | \
DEV_RX_OFFLOAD_TIMESTAMP)
+struct otx2_qint {
+ struct rte_eth_dev *eth_dev;
+ uint8_t qintx;
+};
+
struct otx2_rss_info {
uint16_t rss_size;
uint8_t rss_grps;
@@ -114,6 +119,7 @@ struct otx2_eth_dev {
uint16_t cints;
uint16_t qints;
uint8_t configured;
+ uint8_t configured_qints;
uint8_t configured_nb_rx_qs;
uint8_t configured_nb_tx_qs;
uint16_t nix_msixoff;
@@ -126,6 +132,7 @@ struct otx2_eth_dev {
uint64_t tx_offloads;
uint64_t rx_offload_capa;
uint64_t tx_offload_capa;
+ struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
struct otx2_rss_info rss_info;
struct otx2_npc_flow_info npc_flow;
} __rte_cache_aligned;
@@ -142,7 +149,9 @@ void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
+int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
+void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index 33fed93c4..476c7ea78 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -112,6 +112,197 @@ nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec);
}
+static inline uint8_t
+nix_lf_q_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t q,
+ uint32_t off, uint64_t mask)
+{
+ uint64_t reg, wdata;
+ uint8_t qint;
+
+ wdata = (uint64_t)q << 44;
+ reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(dev->base + off));
+
+ if (reg & BIT_ULL(42) /* OP_ERR */) {
+ otx2_err("Failed execute irq get off=0x%x", off);
+ return 0;
+ }
+
+ qint = reg & 0xff;
+ wdata &= mask;
+ otx2_write64(wdata, dev->base + off);
+
+ return qint;
+}
+
+static inline uint8_t
+nix_lf_rq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t rq)
+{
+ return nix_lf_q_irq_get_and_clear(dev, rq, NIX_LF_RQ_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+nix_lf_cq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t cq)
+{
+ return nix_lf_q_irq_get_and_clear(dev, cq, NIX_LF_CQ_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+nix_lf_sq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t sq)
+{
+ return nix_lf_q_irq_get_and_clear(dev, sq, NIX_LF_SQ_OP_INT, ~0x1ff00);
+}
+
+static inline void
+nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off)
+{
+ uint64_t reg;
+
+ reg = otx2_read64(dev->base + off);
+ if (reg & BIT_ULL(44))
+ otx2_err("SQ=%d err_code=0x%x",
+ (int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff));
+}
+
+static void
+nix_lf_q_irq(void *param)
+{
+ struct otx2_qint *qint = (struct otx2_qint *)param;
+ struct rte_eth_dev *eth_dev = qint->eth_dev;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint8_t irq, qintx = qint->qintx;
+ int q, cq, rq, sq;
+ uint64_t intr;
+
+ intr = otx2_read64(dev->base + NIX_LF_QINTX_INT(qintx));
+ if (intr == 0)
+ return;
+
+ otx2_err("Queue_intr=0x%" PRIx64 " qintx=%d pf=%d, vf=%d",
+ intr, qintx, dev->pf, dev->vf);
+
+ /* Handle RQ interrupts */
+ for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
+ rq = q % dev->qints;
+ irq = nix_lf_rq_irq_get_and_clear(dev, rq);
+
+ if (irq & BIT_ULL(NIX_RQINT_DROP))
+ otx2_err("RQ=%d NIX_RQINT_DROP", rq);
+
+ if (irq & BIT_ULL(NIX_RQINT_RED))
+ otx2_err("RQ=%d NIX_RQINT_RED", rq);
+ }
+
+ /* Handle CQ interrupts */
+ for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
+ cq = q % dev->qints;
+ irq = nix_lf_cq_irq_get_and_clear(dev, cq);
+
+ if (irq & BIT_ULL(NIX_CQERRINT_DOOR_ERR))
+ otx2_err("CQ=%d NIX_CQERRINT_DOOR_ERR", cq);
+
+ if (irq & BIT_ULL(NIX_CQERRINT_WR_FULL))
+ otx2_err("CQ=%d NIX_CQERRINT_WR_FULL", cq);
+
+ if (irq & BIT_ULL(NIX_CQERRINT_CQE_FAULT))
+ otx2_err("CQ=%d NIX_CQERRINT_CQE_FAULT", cq);
+ }
+
+ /* Handle SQ interrupts */
+ for (q = 0; q < eth_dev->data->nb_tx_queues; q++) {
+ sq = q % dev->qints;
+ irq = nix_lf_sq_irq_get_and_clear(dev, sq);
+
+ if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) {
+ otx2_err("SQ=%d NIX_SQINT_LMT_ERR", sq);
+ nix_lf_sq_debug_reg(dev, NIX_LF_SQ_OP_ERR_DBG);
+ }
+ if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) {
+ otx2_err("SQ=%d NIX_SQINT_MNQ_ERR", sq);
+ nix_lf_sq_debug_reg(dev, NIX_LF_MNQ_ERR_DBG);
+ }
+ if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) {
+ otx2_err("SQ=%d NIX_SQINT_SEND_ERR", sq);
+ nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
+ }
+ if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) {
+ otx2_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq);
+ nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
+ }
+ }
+
+ /* Clear interrupt */
+ otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
+}
+
+int
+oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec, q, sqs, rqs, qs, rc = 0;
+
+ /* Figure out max qintx required */
+ rqs = RTE_MIN(dev->qints, eth_dev->data->nb_rx_queues);
+ sqs = RTE_MIN(dev->qints, eth_dev->data->nb_tx_queues);
+ qs = RTE_MAX(rqs, sqs);
+
+ dev->configured_qints = qs;
+
+ for (q = 0; q < qs; q++) {
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
+
+ /* Clear QINT CNT */
+ otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
+
+ /* Clear interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
+
+ dev->qints_mem[q].eth_dev = eth_dev;
+ dev->qints_mem[q].qintx = q;
+
+ /* Sync qints_mem update */
+ rte_smp_wmb();
+
+ /* Register queue irq vector */
+ rc = otx2_register_irq(handle, nix_lf_q_irq,
+ &dev->qints_mem[q], vec);
+ if (rc)
+ break;
+
+ otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
+ otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
+ /* Enable QINT interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1S(q));
+ }
+
+ return rc;
+}
+
+void
+oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec, q;
+
+ for (q = 0; q < dev->configured_qints; q++) {
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
+
+ /* Clear QINT CNT */
+ otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
+ otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
+
+ /* Clear interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
+
+ /* Unregister queue irq vector */
+ otx2_unregister_irq(handle, nix_lf_q_irq,
+ &dev->qints_mem[q], vec);
+ }
+}
+
int
otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
{
--
2.21.0
* [dpdk-dev] [PATCH v1 09/58] net/octeontx2: add context debug utils
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (7 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 08/58] net/octeontx2: handle queue specific error interrupts jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-06 15:41 ` Ferruh Yigit
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 10/58] net/octeontx2: add register dump support jerinj
` (50 subsequent siblings)
59 siblings, 1 reply; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Jerin Jacob <jerinj@marvell.com>
Add RQ, SQ, CQ context and CQE structure dump utilities.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 4 +
drivers/net/octeontx2/otx2_ethdev_debug.c | 272 ++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev_irq.c | 9 +
5 files changed, 287 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 820202eb2..0dfd43f4f 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -34,6 +34,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_ethdev.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
+ otx2_ethdev_debug.c \
otx2_ethdev_devargs.c
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index a2dc983e3..1c010c342 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -7,6 +7,7 @@ sources = files(
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
+ 'otx2_ethdev_debug.c',
'otx2_ethdev_devargs.c'
)
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index ca0587a63..ff14a0129 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -153,6 +153,10 @@ int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
+/* Debug */
+int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
+void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
+
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
new file mode 100644
index 000000000..39cda7637
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_debug.c
@@ -0,0 +1,272 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
+
+static inline void
+nix_lf_sq_dump(struct nix_sq_ctx_s *ctx)
+{
+ nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d",
+ ctx->sqe_way_mask, ctx->cq);
+ nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x",
+ ctx->sdp_mcast, ctx->substream);
+ nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n",
+ ctx->qint_idx, ctx->ena);
+
+ nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d",
+ ctx->sqb_count, ctx->default_chan);
+ nix_dump("W1: smq_rr_quantum \t\t%d\nW1: sso_ena \t\t\t%d",
+ ctx->smq_rr_quantum, ctx->sso_ena);
+ nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n",
+ ctx->xoff, ctx->cq_ena, ctx->smq);
+
+ nix_dump("W2: sqe_stype \t\t\t%d\nW2: sq_int_ena \t\t\t%d",
+ ctx->sqe_stype, ctx->sq_int_ena);
+ nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d",
+ ctx->sq_int, ctx->sqb_aura);
+ nix_dump("W2: smq_rr_count \t\t%d\n", ctx->smq_rr_count);
+
+ nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d",
+ ctx->smq_next_sq_vld, ctx->smq_pend);
+ nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d",
+ ctx->smenq_next_sqb_vld, ctx->head_offset);
+ nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d",
+ ctx->smenq_offset, ctx->tail_offset);
+ nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d",
+ ctx->smq_lso_segnum, ctx->smq_next_sq);
+ nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d",
+ ctx->mnq_dis, ctx->lmt_dis);
+ nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n",
+ ctx->cq_limit, ctx->max_sqe_size);
+
+ nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb);
+ nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb);
+ nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb);
+ nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb);
+ nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb);
+
+ nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d",
+ ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena);
+ nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d",
+ ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps);
+ nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d",
+ ctx->vfi_lso_sb, ctx->vfi_lso_sizem1);
+ nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total);
+
+ nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->scm_lso_rem);
+ nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
+ nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
+ nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->drop_octs);
+ nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->drop_pkts);
+}
+
+static inline void
+nix_lf_rq_dump(struct nix_rq_ctx_s *ctx)
+{
+ nix_dump("W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x",
+ ctx->wqe_aura, ctx->substream);
+ nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd \t\t\t%d",
+ ctx->cq, ctx->ena_wqwd);
+ nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d",
+ ctx->ipsech_ena, ctx->sso_ena);
+ nix_dump("W0: ena \t\t\t%d\n", ctx->ena);
+
+ nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d",
+ ctx->lpb_drop_ena, ctx->spb_drop_ena);
+ nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d",
+ ctx->xqe_drop_ena, ctx->wqe_caching);
+ nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d",
+ ctx->pb_caching, ctx->sso_tt);
+ nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d",
+ ctx->sso_grp, ctx->lpb_aura);
+ nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura);
+
+ nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d",
+ ctx->xqe_hdr_split, ctx->xqe_imm_copy);
+ nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d",
+ ctx->xqe_imm_size, ctx->later_skip);
+ nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d",
+ ctx->first_skip, ctx->lpb_sizem1);
+ nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d",
+ ctx->spb_ena, ctx->wqe_skip);
+ nix_dump("W2: spb_sizem1 \t\t\t%d\n", ctx->spb_sizem1);
+
+ nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d",
+ ctx->spb_pool_pass, ctx->spb_pool_drop);
+ nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d",
+ ctx->spb_aura_pass, ctx->spb_aura_drop);
+ nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d",
+ ctx->wqe_pool_pass, ctx->wqe_pool_drop);
+ nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n",
+ ctx->xqe_pass, ctx->xqe_drop);
+
+ nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d",
+ ctx->qint_idx, ctx->rq_int_ena);
+ nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d",
+ ctx->rq_int, ctx->lpb_pool_pass);
+ nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d",
+ ctx->lpb_pool_drop, ctx->lpb_aura_pass);
+ nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop);
+
+ nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d",
+ ctx->flow_tagw, ctx->bad_utag);
+ nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n",
+ ctx->good_utag, ctx->ltag);
+
+ nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
+ nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
+ nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs);
+ nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_pkts);
+ nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts);
+}
+
+static inline void
+nix_lf_cq_dump(struct nix_cq_ctx_s *ctx)
+{
+ nix_dump("W0: base \t\t\t0x%" PRIx64 "\n", ctx->base);
+
+ nix_dump("W1: wrptr \t\t\t%" PRIx64 "", (uint64_t)ctx->wrptr);
+ nix_dump("W1: avg_con \t\t\t%d\nW1: cint_idx \t\t\t%d",
+ ctx->avg_con, ctx->cint_idx);
+ nix_dump("W1: cq_err \t\t\t%d\nW1: qint_idx \t\t\t%d",
+ ctx->cq_err, ctx->qint_idx);
+ nix_dump("W1: bpid \t\t\t%d\nW1: bp_ena \t\t\t%d\n",
+ ctx->bpid, ctx->bp_ena);
+
+ nix_dump("W2: update_time \t\t%d\nW2: avg_level \t\t\t%d",
+ ctx->update_time, ctx->avg_level);
+ nix_dump("W2: head \t\t\t%d\nW2: tail \t\t\t%d\n",
+ ctx->head, ctx->tail);
+
+ nix_dump("W3: cq_err_int_ena \t\t%d\nW3: cq_err_int \t\t\t%d",
+ ctx->cq_err_int_ena, ctx->cq_err_int);
+ nix_dump("W3: qsize \t\t\t%d\nW3: caching \t\t\t%d",
+ ctx->qsize, ctx->caching);
+ nix_dump("W3: substream \t\t\t0x%03x\nW3: ena \t\t\t%d",
+ ctx->substream, ctx->ena);
+ nix_dump("W3: drop_ena \t\t\t%d\nW3: drop \t\t\t%d",
+ ctx->drop_ena, ctx->drop);
+ nix_dump("W3: bp \t\t\t\t%d\n", ctx->bp);
+}
+
+int
+otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc = 0, q, rq = eth_dev->data->nb_rx_queues;
+ int sq = eth_dev->data->nb_tx_queues;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_rsp *rsp;
+ struct nix_aq_enq_req *aq;
+
+ for (q = 0; q < rq; q++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = q;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to get cq context");
+ goto fail;
+ }
+ nix_dump("============== port=%d cq=%d ===============",
+ eth_dev->data->port_id, q);
+ nix_lf_cq_dump(&rsp->cq);
+ }
+
+ for (q = 0; q < rq; q++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = q;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
+ if (rc) {
+ otx2_err("Failed to get rq context");
+ goto fail;
+ }
+ nix_dump("============== port=%d rq=%d ===============",
+ eth_dev->data->port_id, q);
+ nix_lf_rq_dump(&rsp->rq);
+ }
+ for (q = 0; q < sq; q++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = q;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to get sq context");
+ goto fail;
+ }
+ nix_dump("============== port=%d sq=%d ===============",
+ eth_dev->data->port_id, q);
+ nix_lf_sq_dump(&rsp->sq);
+ }
+
+fail:
+ return rc;
+}
+
+/* Dumps struct nix_cqe_hdr_s and struct nix_rx_parse_s */
+void
+otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
+{
+ const struct nix_rx_parse_s *rx =
+ (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
+
+ nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
+ cq->tag, cq->q, cq->node, cq->cqe_type);
+
+ nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d",
+ rx->chan, rx->desc_sizem1);
+ nix_dump("W0: imm_copy \t%d\t\texpress \t%d",
+ rx->imm_copy, rx->express);
+ nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d",
+ rx->wqwd, rx->errlev, rx->errcode);
+ nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d",
+ rx->latype, rx->lbtype, rx->lctype);
+ nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d",
+ rx->ldtype, rx->letype, rx->lftype);
+ nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d",
+ rx->lgtype, rx->lhtype);
+
+ nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1);
+ nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d",
+ rx->l2m, rx->l2b, rx->l3m, rx->l3b);
+ nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d",
+ rx->vtag0_valid, rx->vtag0_gone);
+ nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d",
+ rx->vtag1_valid, rx->vtag1_gone);
+ nix_dump("W1: pkind \t%d", rx->pkind);
+ nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d",
+ rx->vtag0_tci, rx->vtag1_tci);
+
+ nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d",
+ rx->laflags, rx->lbflags, rx->lcflags);
+ nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d",
+ rx->ldflags, rx->leflags, rx->lfflags);
+ nix_dump("W2: lgflags \t%d\t\tlhflags \t%d",
+ rx->lgflags, rx->lhflags);
+
+ nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d",
+ rx->eoh_ptr, rx->wqe_aura, rx->pb_aura);
+ nix_dump("W3: match_id \t%d", rx->match_id);
+
+ nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d",
+ rx->laptr, rx->lbptr, rx->lcptr);
+ nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d",
+ rx->ldptr, rx->leptr, rx->lfptr);
+ nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr);
+
+ nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
+ rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
+}
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index 476c7ea78..9bc9d99f8 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -23,6 +23,9 @@ nix_lf_err_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
+
+ otx2_nix_queues_ctx_dump(eth_dev);
+ rte_panic("nix_lf_error_interrupt\n");
}
static int
@@ -75,6 +78,9 @@ nix_lf_ras_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_RAS);
+
+ otx2_nix_queues_ctx_dump(eth_dev);
+ rte_panic("nix_lf_ras_interrupt\n");
}
static int
@@ -232,6 +238,9 @@ nix_lf_q_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
+
+ otx2_nix_queues_ctx_dump(eth_dev);
+ rte_panic("nix_lf_q_interrupt\n");
}
int
--
2.21.0
* [dpdk-dev] [PATCH v1 10/58] net/octeontx2: add register dump support
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (8 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 09/58] net/octeontx2: add context debug utils jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 11/58] net/octeontx2: add link stats operations jerinj
` (49 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit
From: Kiran Kumar K <kirankumark@marvell.com>
Add register dump support and mark the 'Registers dump' item in the features list.
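For illustration, the op follows the usual two-call get_reg contract:
a first call with a NULL data pointer reports the count and width, a
second call fills the buffer. A minimal sketch (the app_dump_regs name
is hypothetical, not part of the patch):

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

static int
app_dump_regs(uint16_t port_id)
{
        struct rte_dev_reg_info info;
        int rc;

        memset(&info, 0, sizeof(info));

        /* First call, data == NULL: driver reports count and width only */
        rc = rte_eth_dev_get_reg_info(port_id, &info);
        if (rc)
                return rc;

        info.data = calloc(info.length, info.width);
        if (info.data == NULL)
                return -ENOMEM;

        /* Second call fills the buffer via otx2_nix_reg_dump() */
        rc = rte_eth_dev_get_reg_info(port_id, &info);

        free(info.data);
        return rc;
}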
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 3 +
drivers/net/octeontx2/otx2_ethdev_debug.c | 228 +++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev_irq.c | 6 +
7 files changed, 241 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 1f0148669..ce3067596 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -10,3 +10,4 @@ ARMv8 = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
+Registers dump = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 2b0644ee5..b2be52ccb 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -10,3 +10,4 @@ ARMv8 = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
+Registers dump = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 80f0d5c95..76b0c3c10 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -9,3 +9,4 @@ Linux VFIO = Y
ARMv8 = Y
Lock-free Tx queue = Y
Multiprocess aware = Y
+Registers dump = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 045855c2e..48d5a15d6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -229,6 +229,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
+ .get_reg = otx2_nix_dev_get_reg,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index ff14a0129..c01fe0211 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -154,6 +154,9 @@ void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
/* Debug */
+int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data);
+int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
+ struct rte_dev_reg_info *regs);
int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
index 39cda7637..9f06e5505 100644
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ b/drivers/net/octeontx2/otx2_ethdev_debug.c
@@ -5,6 +5,234 @@
#include "otx2_ethdev.h"
#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
+#define NIX_REG_INFO(reg) {reg, #reg}
+
+struct nix_lf_reg_info {
+ uint32_t offset;
+ const char *name;
+};
+
+static const struct
+nix_lf_reg_info nix_lf_reg[] = {
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(0)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(1)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(2)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(3)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(4)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(5)),
+ NIX_REG_INFO(NIX_LF_CFG),
+ NIX_REG_INFO(NIX_LF_GINT),
+ NIX_REG_INFO(NIX_LF_GINT_W1S),
+ NIX_REG_INFO(NIX_LF_GINT_ENA_W1C),
+ NIX_REG_INFO(NIX_LF_GINT_ENA_W1S),
+ NIX_REG_INFO(NIX_LF_ERR_INT),
+ NIX_REG_INFO(NIX_LF_ERR_INT_W1S),
+ NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1C),
+ NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1S),
+ NIX_REG_INFO(NIX_LF_RAS),
+ NIX_REG_INFO(NIX_LF_RAS_W1S),
+ NIX_REG_INFO(NIX_LF_RAS_ENA_W1C),
+ NIX_REG_INFO(NIX_LF_RAS_ENA_W1S),
+ NIX_REG_INFO(NIX_LF_SQ_OP_ERR_DBG),
+ NIX_REG_INFO(NIX_LF_MNQ_ERR_DBG),
+ NIX_REG_INFO(NIX_LF_SEND_ERR_DBG),
+};
+
+static int
+nix_lf_get_reg_count(struct otx2_eth_dev *dev)
+{
+ int reg_count = 0;
+
+ reg_count = RTE_DIM(nix_lf_reg);
+ /* NIX_LF_TX_STATX */
+ reg_count += dev->lf_tx_stats;
+ /* NIX_LF_RX_STATX */
+ reg_count += dev->lf_rx_stats;
+ /* NIX_LF_QINTX_CNT*/
+ reg_count += dev->qints;
+ /* NIX_LF_QINTX_INT */
+ reg_count += dev->qints;
+ /* NIX_LF_QINTX_ENA_W1S */
+ reg_count += dev->qints;
+ /* NIX_LF_QINTX_ENA_W1C */
+ reg_count += dev->qints;
+ /* NIX_LF_CINTX_CNT */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_WAIT */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_INT */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_INT_W1S */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_ENA_W1S */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_ENA_W1C */
+ reg_count += dev->cints;
+
+ return reg_count;
+}
+
+int
+otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data)
+{
+ uintptr_t nix_lf_base = dev->base;
+ bool dump_stdout;
+ uint64_t reg;
+ uint32_t i;
+
+ dump_stdout = data ? 0 : 1;
+
+ for (i = 0; i < RTE_DIM(nix_lf_reg); i++) {
+ reg = otx2_read64(nix_lf_base + nix_lf_reg[i].offset);
+ if (dump_stdout && reg)
+ nix_dump("%32s = 0x%" PRIx64,
+ nix_lf_reg[i].name, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_TX_STATX */
+ for (i = 0; i < dev->lf_tx_stats; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_TX_STATX(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_TX_STATX", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_RX_STATX */
+ for (i = 0; i < dev->lf_rx_stats; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_RX_STATX(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_RX_STATX", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_CNT*/
+ for (i = 0; i < dev->qints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_CNT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_QINTX_CNT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_INT */
+ for (i = 0; i < dev->qints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_INT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_QINTX_INT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_ENA_W1S */
+ for (i = 0; i < dev->qints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_QINTX_ENA_W1S", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_ENA_W1C */
+ for (i = 0; i < dev->qints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_QINTX_ENA_W1C", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_CNT */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_CNT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_CNT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_WAIT */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_WAIT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_INT */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_INT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_INT_W1S */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_INT_W1S", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_ENA_W1S */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_ENA_W1S", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_ENA_W1C */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_ENA_W1C", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+ return 0;
+}
+
+int
+otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t *data = regs->data;
+
+ if (data == NULL) {
+ regs->length = nix_lf_get_reg_count(dev);
+ regs->width = 8;
+ return 0;
+ }
+
+ if (!regs->length ||
+ regs->length == (uint32_t)nix_lf_get_reg_count(dev)) {
+ otx2_nix_reg_dump(dev, data);
+ return 0;
+ }
+
+ return -ENOTSUP;
+}
static inline void
nix_lf_sq_dump(struct nix_sq_ctx_s *ctx)
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index 9bc9d99f8..7bb0ef35e 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -24,6 +24,8 @@ nix_lf_err_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
+ /* Dump registers to std out */
+ otx2_nix_reg_dump(dev, NULL);
otx2_nix_queues_ctx_dump(eth_dev);
rte_panic("nix_lf_error_interrupt\n");
}
@@ -79,6 +81,8 @@ nix_lf_ras_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_RAS);
+ /* Dump registers to std out */
+ otx2_nix_reg_dump(dev, NULL);
otx2_nix_queues_ctx_dump(eth_dev);
rte_panic("nix_lf_ras_interrupt\n");
}
@@ -239,6 +243,8 @@ nix_lf_q_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
+ /* Dump registers to std out */
+ otx2_nix_reg_dump(dev, NULL);
otx2_nix_queues_ctx_dump(eth_dev);
rte_panic("nix_lf_q_interrupt\n");
}
--
2.21.0
* [dpdk-dev] [PATCH v1 11/58] net/octeontx2: add link stats operations
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (9 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 10/58] net/octeontx2: add register dump support jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 12/58] net/octeontx2: add basic stats operation jerinj
` (48 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add link stats related operations and mark respective
items in the documentation.
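For illustration, the LSC events raised by
otx2_eth_dev_link_status_update() reach applications through the
standard ethdev callback mechanism; a minimal sketch (the
app_lsc_event_cb name is hypothetical, not part of the patch):

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static int
app_lsc_event_cb(uint16_t port_id, enum rte_eth_event_type type,
                 void *cb_arg, void *ret_param)
{
        struct rte_eth_link link;

        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);

        if (type == RTE_ETH_EVENT_INTR_LSC) {
                rte_eth_link_get_nowait(port_id, &link);
                printf("port %u: link %s\n", port_id,
                       link.link_status ? "up" : "down");
        }
        return 0;
}

/* Registered once, e.g. right after probe:
 * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
 *                               app_lsc_event_cb, NULL);
 */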
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 8 ++
drivers/net/octeontx2/otx2_ethdev.h | 8 ++
drivers/net/octeontx2/otx2_link.c | 108 +++++++++++++++++++++
8 files changed, 132 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_link.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index ce3067596..60009ab6d 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -10,4 +10,6 @@ ARMv8 = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
+Link status = Y
+Link status event = Y
Registers dump = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index b2be52ccb..3a859edd1 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -10,4 +10,6 @@ ARMv8 = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
+Link status = Y
+Link status event = Y
Registers dump = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 76b0c3c10..e1cbd18b1 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -9,4 +9,6 @@ Linux VFIO = Y
ARMv8 = Y
Lock-free Tx queue = Y
Multiprocess aware = Y
+Link status = Y
+Link status event = Y
Registers dump = Y
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 0dfd43f4f..aa428fe6a 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -31,6 +31,7 @@ LIBABIVER := 1
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
+ otx2_link.c \
otx2_ethdev.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 1c010c342..117d038ab 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -4,6 +4,7 @@
sources = files(
'otx2_mac.c',
+ 'otx2_link.c',
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 48d5a15d6..cb4f6ebb9 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -39,6 +39,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
return NIX_TX_OFFLOAD_CAPA;
}
+static const struct otx2_dev_ops otx2_dev_ops = {
+ .link_status_update = otx2_eth_dev_link_status_update,
+};
+
static int
nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
{
@@ -229,6 +233,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
+ .link_update = otx2_nix_link_update,
.get_reg = otx2_nix_dev_get_reg,
};
@@ -324,6 +329,9 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
goto error;
}
}
+ /* Device generic callbacks */
+ dev->ops = &otx2_dev_ops;
+ dev->eth_dev = eth_dev;
/* Grab the NPA LF if required */
rc = otx2_npa_lf_init(pci_dev, dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index c01fe0211..8a099817d 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -116,6 +116,7 @@ struct otx2_eth_dev {
uint8_t max_mac_entries;
uint8_t lf_tx_stats;
uint8_t lf_rx_stats;
+ uint16_t flags;
uint16_t cints;
uint16_t qints;
uint8_t configured;
@@ -135,6 +136,7 @@ struct otx2_eth_dev {
struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
struct otx2_rss_info rss_info;
struct otx2_npc_flow_info npc_flow;
+ struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
static inline struct otx2_eth_dev *
@@ -147,6 +149,12 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+/* Link */
+void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
+int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
+void otx2_eth_dev_link_status_update(struct otx2_dev *dev,
+ struct cgx_link_user_info *link);
+
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
new file mode 100644
index 000000000..228a0cd8e
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_ethdev_pci.h>
+
+#include "otx2_ethdev.h"
+
+void
+otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set)
+{
+ if (set)
+ dev->flags |= OTX2_LINK_CFG_IN_PROGRESS_F;
+ else
+ dev->flags &= ~OTX2_LINK_CFG_IN_PROGRESS_F;
+
+ rte_wmb();
+}
+
+static inline int
+nix_wait_for_link_cfg(struct otx2_eth_dev *dev)
+{
+ uint16_t wait = 1000;
+
+ do {
+ rte_rmb();
+ if (!(dev->flags & OTX2_LINK_CFG_IN_PROGRESS_F))
+ break;
+ wait--;
+ rte_delay_ms(1);
+ } while (wait);
+
+ return wait ? 0 : -1;
+}
+
+static void
+nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
+{
+ if (link && link->link_status)
+ otx2_info("Port %d: Link Up - speed %u Mbps - %s",
+ (int)(eth_dev->data->port_id),
+ (uint32_t)link->link_speed,
+ link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ "full-duplex" : "half-duplex");
+ else
+ otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
+}
+
+void
+otx2_eth_dev_link_status_update(struct otx2_dev *dev,
+ struct cgx_link_user_info *link)
+{
+ struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
+ struct rte_eth_dev *eth_dev = otx2_dev->eth_dev;
+ struct rte_eth_link eth_link;
+
+ if (!link || !dev || !eth_dev->data->dev_conf.intr_conf.lsc)
+ return;
+
+ if (nix_wait_for_link_cfg(otx2_dev)) {
+ otx2_err("Timeout waiting for link_cfg to complete");
+ return;
+ }
+
+ eth_link.link_status = link->link_up;
+ eth_link.link_speed = link->speed;
+ eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_duplex = link->full_duplex;
+
+ /* Print link info */
+ nix_link_status_print(eth_dev, &eth_link);
+
+ /* Update link info */
+ rte_eth_linkstatus_set(eth_dev, &eth_link);
+
+ /* Set the flag and execute application callbacks */
+ _rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+}
+
+int
+otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_link_info_msg *rsp;
+ struct rte_eth_link link;
+ int rc;
+
+ RTE_SET_USED(wait_to_complete);
+
+ if (otx2_dev_is_lbk(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_get_linkinfo(mbox);
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ link.link_status = rsp->link_info.link_up;
+ link.link_speed = rsp->link_info.speed;
+ link.link_autoneg = ETH_LINK_AUTONEG;
+
+ if (rsp->link_info.full_duplex)
+ link.link_duplex = rsp->link_info.full_duplex;
+
+ return rte_eth_linkstatus_set(eth_dev, &link);
+}
--
2.21.0
* [dpdk-dev] [PATCH v1 12/58] net/octeontx2: add basic stats operation
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (10 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 11/58] net/octeontx2: add link stats operations jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 13/58] net/octeontx2: add extended stats operations jerinj
` (47 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Kiran Kumar K <kirankumark@marvell.com>
Add basic stats operations and update the feature list.
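For illustration, the per-queue hardware counters are only read for
queues the application has mapped to a stats index, which lands in
otx2_nix_queue_stats_mapping(). A minimal sketch (the
app_read_queue0_stats name is hypothetical, not part of the patch):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static int
app_read_queue0_stats(uint16_t port_id)
{
        struct rte_eth_stats stats;
        int rc;

        /* Map Rx/Tx queue 0 onto stats index 0 first */
        rc = rte_eth_dev_set_rx_queue_stats_mapping(port_id, 0, 0);
        if (rc == 0)
                rc = rte_eth_dev_set_tx_queue_stats_mapping(port_id, 0, 0);
        if (rc)
                return rc;

        rc = rte_eth_stats_get(port_id, &stats);
        if (rc == 0)
                printf("q0: rx %" PRIu64 " pkts, tx %" PRIu64 " pkts\n",
                       stats.q_ipackets[0], stats.q_opackets[0]);
        return rc;
}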
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 3 +
drivers/net/octeontx2/otx2_ethdev.h | 17 +++
drivers/net/octeontx2/otx2_stats.c | 117 +++++++++++++++++++++
8 files changed, 145 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_stats.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 60009ab6d..72336ae15 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,4 +12,6 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Basic stats = Y
+Stats per queue = Y
Registers dump = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 3a859edd1..0f3850188 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,4 +12,6 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Basic stats = Y
+Stats per queue = Y
Registers dump = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index e1cbd18b1..8bc72c4fb 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -11,4 +11,6 @@ Lock-free Tx queue = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Basic stats = Y
+Stats per queue = Y
Registers dump = Y
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index aa428fe6a..dcd692b7b 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -32,6 +32,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
otx2_link.c \
+ otx2_stats.c \
otx2_ethdev.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 117d038ab..384237104 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
sources = files(
'otx2_mac.c',
'otx2_link.c',
+ 'otx2_stats.c',
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index cb4f6ebb9..5787029d9 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -234,7 +234,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
.link_update = otx2_nix_link_update,
+ .stats_get = otx2_nix_dev_stats_get,
+ .stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
+ .queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 8a099817d..c9366a9ed 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -57,6 +57,12 @@
#define NIX_TX_NB_SEG_MAX 9
#endif
+#define CQ_OP_STAT_OP_ERR 63
+#define CQ_OP_STAT_CQ_ERR 46
+
+#define OP_ERR BIT_ULL(CQ_OP_STAT_OP_ERR)
+#define CQ_ERR BIT_ULL(CQ_OP_STAT_CQ_ERR)
+
#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
ETH_RSS_TCP | ETH_RSS_SCTP | \
ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD)
@@ -135,6 +141,8 @@ struct otx2_eth_dev {
uint64_t tx_offload_capa;
struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
struct otx2_rss_info rss_info;
+ uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+ uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
struct otx2_npc_flow_info npc_flow;
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
@@ -168,6 +176,15 @@ int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
+/* Stats */
+int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_stats *stats);
+void otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev);
+
+int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev,
+ uint16_t queue_id, uint8_t stat_idx,
+ uint8_t is_rx);
+
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c
new file mode 100644
index 000000000..ade0f6ad6
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_stats.c
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <inttypes.h>
+
+#include "otx2_ethdev.h"
+
+int
+otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_stats *stats)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t reg, val;
+ uint32_t qidx, i;
+ int64_t *addr;
+
+ stats->opackets = otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_UCAST));
+ stats->opackets += otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_MCAST));
+ stats->opackets += otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_BCAST));
+ stats->oerrors = otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_DROP));
+ stats->obytes = otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_OCTS));
+
+ stats->ipackets = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_UCAST));
+ stats->ipackets += otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_MCAST));
+ stats->ipackets += otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_BCAST));
+ stats->imissed = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_DROP));
+ stats->ibytes = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_OCTS));
+ stats->ierrors = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_ERR));
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+ if (dev->txmap[i] & (1U << 31)) {
+ qidx = dev->txmap[i] & 0xFFFF;
+ reg = (((uint64_t)qidx) << 32);
+
+ addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_opackets[i] = val;
+
+ addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_obytes[i] = val;
+
+ addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_DROP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_errors[i] = val;
+ }
+ }
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+ if (dev->rxmap[i] & (1U << 31)) {
+ qidx = dev->rxmap[i] & 0xFFFF;
+ reg = (((uint64_t)qidx) << 32);
+
+ addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_ipackets[i] = val;
+
+ addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_OCTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_ibytes[i] = val;
+
+ addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_DROP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_errors[i] += val;
+ }
+ }
+
+ return 0;
+}
+
+void
+otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+
+ otx2_mbox_alloc_msg_nix_stats_rst(mbox);
+ otx2_mbox_process(mbox);
+}
+
+int
+otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ uint8_t stat_idx, uint8_t is_rx)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ if (is_rx)
+ dev->rxmap[stat_idx] = ((1U << 31) | queue_id);
+ else
+ dev->txmap[stat_idx] = ((1U << 31) | queue_id);
+
+ return 0;
+}
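For reference, a minimal usage sketch of the per-queue counters from the application side (hypothetical code, not part of this patch; assumes <rte_ethdev.h>, <stdio.h> and <inttypes.h>, and a configured, started port):

    /* Map Tx queue 0 onto per-queue stats counter 0 and read it back.
     * Error handling omitted for brevity.
     */
    static void
    show_txq0_stats(uint16_t port_id)
    {
        struct rte_eth_stats stats;

        rte_eth_dev_set_tx_queue_stats_mapping(port_id, 0, 0);
        if (rte_eth_stats_get(port_id, &stats) == 0)
            printf("txq0 pkts=%" PRIu64 " bytes=%" PRIu64 "\n",
                   stats.q_opackets[0], stats.q_obytes[0]);
    }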
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 13/58] net/octeontx2: add extended stats operations
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (11 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 12/58] net/octeontx2: add basic stats operation jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 14/58] net/octeontx2: add promiscuous and allmulticast mode jerinj
` (46 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Kiran Kumar K <kirankumark@marvell.com>
Add extended stats operations and update the feature list.
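A minimal sketch of how an application would consume these (hypothetical code, not part of this patch; assumes <rte_ethdev.h>, <stdio.h>, <stdlib.h> and <inttypes.h>, and a started port; allocation error handling trimmed):

    /* Dump every extended statistic exposed by the port by name. */
    static void
    dump_xstats(uint16_t port_id)
    {
        struct rte_eth_xstat_name *names;
        struct rte_eth_xstat *xstats;
        int i, n;

        n = rte_eth_xstats_get_names(port_id, NULL, 0);
        if (n <= 0)
            return;
        names = calloc(n, sizeof(*names));
        xstats = calloc(n, sizeof(*xstats));
        rte_eth_xstats_get_names(port_id, names, n);
        n = rte_eth_xstats_get(port_id, xstats, n);
        for (i = 0; i < n; i++)
            printf("%s: %" PRIu64 "\n",
                   names[xstats[i].id].name, xstats[i].value);
        free(names);
        free(xstats);
    }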
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 5 +
drivers/net/octeontx2/otx2_ethdev.h | 13 +
drivers/net/octeontx2/otx2_stats.c | 270 +++++++++++++++++++++
6 files changed, 291 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 72336ae15..3835b5069 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -14,4 +14,5 @@ Link status = Y
Link status event = Y
Basic stats = Y
Stats per queue = Y
+Extended stats = Y
Registers dump = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 0f3850188..e18443742 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -13,5 +13,6 @@ Multiprocess aware = Y
Link status = Y
Link status event = Y
Basic stats = Y
+Extended stats = Y
Stats per queue = Y
Registers dump = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 8bc72c4fb..89df760b3 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -12,5 +12,6 @@ Multiprocess aware = Y
Link status = Y
Link status event = Y
Basic stats = Y
+Extended stats = Y
Stats per queue = Y
Registers dump = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 5787029d9..937ba6399 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -238,6 +238,11 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
.queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
+ .xstats_get = otx2_nix_xstats_get,
+ .xstats_get_names = otx2_nix_xstats_get_names,
+ .xstats_reset = otx2_nix_xstats_reset,
+ .xstats_get_by_id = otx2_nix_xstats_get_by_id,
+ .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index c9366a9ed..223dd5a5a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -184,6 +184,19 @@ void otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev);
int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev,
uint16_t queue_id, uint8_t stat_idx,
uint8_t is_rx);
+int otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat *xstats, unsigned int n);
+int otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int limit);
+void otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev);
+
+int otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ uint64_t *values, unsigned int n);
+int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids, unsigned int limit);
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c
index ade0f6ad6..deb83b704 100644
--- a/drivers/net/octeontx2/otx2_stats.c
+++ b/drivers/net/octeontx2/otx2_stats.c
@@ -6,6 +6,45 @@
#include "otx2_ethdev.h"
+struct otx2_nix_xstats_name {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ uint32_t offset;
+};
+
+static const struct otx2_nix_xstats_name nix_tx_xstats[] = {
+ {"tx_ucast", NIX_STAT_LF_TX_TX_UCAST},
+ {"tx_bcast", NIX_STAT_LF_TX_TX_BCAST},
+ {"tx_mcast", NIX_STAT_LF_TX_TX_MCAST},
+ {"tx_drop", NIX_STAT_LF_TX_TX_DROP},
+ {"tx_octs", NIX_STAT_LF_TX_TX_OCTS},
+};
+
+static const struct otx2_nix_xstats_name nix_rx_xstats[] = {
+ {"rx_octs", NIX_STAT_LF_RX_RX_OCTS},
+ {"rx_ucast", NIX_STAT_LF_RX_RX_UCAST},
+ {"rx_bcast", NIX_STAT_LF_RX_RX_BCAST},
+ {"rx_mcast", NIX_STAT_LF_RX_RX_MCAST},
+ {"rx_drop", NIX_STAT_LF_RX_RX_DROP},
+ {"rx_drop_octs", NIX_STAT_LF_RX_RX_DROP_OCTS},
+ {"rx_fcs", NIX_STAT_LF_RX_RX_FCS},
+ {"rx_err", NIX_STAT_LF_RX_RX_ERR},
+ {"rx_drp_bcast", NIX_STAT_LF_RX_RX_DRP_BCAST},
+ {"rx_drp_mcast", NIX_STAT_LF_RX_RX_DRP_MCAST},
+ {"rx_drp_l3bcast", NIX_STAT_LF_RX_RX_DRP_L3BCAST},
+ {"rx_drp_l3mcast", NIX_STAT_LF_RX_RX_DRP_L3MCAST},
+};
+
+static const struct otx2_nix_xstats_name nix_q_xstats[] = {
+ {"rq_op_re_pkts", NIX_LF_RQ_OP_RE_PKTS},
+};
+
+#define OTX2_NIX_NUM_RX_XSTATS RTE_DIM(nix_rx_xstats)
+#define OTX2_NIX_NUM_TX_XSTATS RTE_DIM(nix_tx_xstats)
+#define OTX2_NIX_NUM_QUEUE_XSTATS RTE_DIM(nix_q_xstats)
+
+#define OTX2_NIX_NUM_XSTATS_REG (OTX2_NIX_NUM_RX_XSTATS + \
+ OTX2_NIX_NUM_TX_XSTATS + OTX2_NIX_NUM_QUEUE_XSTATS)
+
int
otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
struct rte_eth_stats *stats)
@@ -115,3 +154,234 @@ otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id,
return 0;
}
+
+int
+otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ unsigned int i, count = 0;
+ uint64_t reg, val;
+
+ if (n < OTX2_NIX_NUM_XSTATS_REG)
+ return OTX2_NIX_NUM_XSTATS_REG;
+
+ if (xstats == NULL)
+ return 0;
+
+ for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
+ xstats[count].value = otx2_read64(dev->base +
+ NIX_LF_TX_STATX(nix_tx_xstats[i].offset));
+ xstats[count].id = count;
+ count++;
+ }
+
+ for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
+ xstats[count].value = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(nix_rx_xstats[i].offset));
+ xstats[count].id = count;
+ count++;
+ }
+
+ xstats[count].value = 0;
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ reg = (((uint64_t)i) << 32);
+ val = otx2_atomic64_add_nosync(reg, (int64_t *)(dev->base +
+ nix_q_xstats[0].offset));
+ if (val & OP_ERR)
+ val = 0;
+ xstats[count].value += val;
+ }
+ xstats[count].id = count;
+ count++;
+
+ return count;
+}
+
+int
+otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int limit)
+{
+ unsigned int i, count = 0;
+
+ RTE_SET_USED(eth_dev);
+
+ if (limit < OTX2_NIX_NUM_XSTATS_REG && xstats_names != NULL)
+ return -ENOMEM;
+
+ if (xstats_names) {
+ for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", nix_tx_xstats[i].name);
+ count++;
+ }
+
+ for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", nix_rx_xstats[i].name);
+ count++;
+ }
+
+ for (i = 0; i < OTX2_NIX_NUM_QUEUE_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", nix_q_xstats[i].name);
+ count++;
+ }
+ }
+
+ return OTX2_NIX_NUM_XSTATS_REG;
+}
+
+int
+otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids, unsigned int limit)
+{
+ struct rte_eth_xstat_name xstats_names_copy[OTX2_NIX_NUM_XSTATS_REG];
+ uint16_t i;
+
+ if (limit < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
+ return OTX2_NIX_NUM_XSTATS_REG;
+
+ if (limit > OTX2_NIX_NUM_XSTATS_REG)
+ return -EINVAL;
+
+ if (xstats_names == NULL)
+ return -ENOMEM;
+
+ otx2_nix_xstats_get_names(eth_dev, xstats_names_copy, limit);
+
+ for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
+ if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
+ otx2_err("Invalid id value");
+ return -EINVAL;
+ }
+ strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name,
+ sizeof(xstats_names[i].name));
+ }
+
+ return limit;
+}
+
+int
+otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids,
+ uint64_t *values, unsigned int n)
+{
+ struct rte_eth_xstat xstats[OTX2_NIX_NUM_XSTATS_REG];
+ uint16_t i;
+
+ if (n < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
+ return OTX2_NIX_NUM_XSTATS_REG;
+
+ if (n > OTX2_NIX_NUM_XSTATS_REG)
+ return -EINVAL;
+
+ if (values == NULL)
+ return -ENOMEM;
+
+ otx2_nix_xstats_get(eth_dev, xstats, n);
+
+ for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
+ if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
+ otx2_err("Invalid id value");
+ return -EINVAL;
+ }
+ values[i] = xstats[ids[i]].value;
+ }
+
+ return n;
+}
+
+static void
+nix_queue_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_rsp *rsp;
+ struct nix_aq_enq_req *aq;
+ uint32_t i;
+ int rc;
+
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = i;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to read rq context");
+ return;
+ }
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = i;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+ otx2_mbox_memcpy(&aq->rq, &rsp->rq, sizeof(rsp->rq));
+ otx2_mbox_memset(&aq->rq_mask, 0, sizeof(aq->rq_mask));
+ aq->rq.octs = 0;
+ aq->rq.pkts = 0;
+ aq->rq.drop_octs = 0;
+ aq->rq.drop_pkts = 0;
+ aq->rq.re_pkts = 0;
+
+ aq->rq_mask.octs = ~(aq->rq_mask.octs);
+ aq->rq_mask.pkts = ~(aq->rq_mask.pkts);
+ aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs);
+ aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts);
+ aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts);
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to write rq context");
+ return;
+ }
+ }
+
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = i;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to read sq context");
+ return;
+ }
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = i;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+ otx2_mbox_memcpy(&aq->sq, &rsp->sq, sizeof(rsp->sq));
+ otx2_mbox_memset(&aq->sq_mask, 0, sizeof(aq->sq_mask));
+ aq->sq.octs = 0;
+ aq->sq.pkts = 0;
+ aq->sq.drop_octs = 0;
+ aq->sq.drop_pkts = 0;
+
+ aq->sq_mask.octs = ~(aq->sq_mask.octs);
+ aq->sq_mask.pkts = ~(aq->sq_mask.pkts);
+ aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs);
+ aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts);
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to write sq context");
+ return;
+ }
+ }
+}
+
+void
+otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+
+ otx2_mbox_alloc_msg_nix_stats_rst(mbox);
+ otx2_mbox_process(mbox);
+
+ /* Reset queue stats */
+ nix_queue_stats_reset(eth_dev);
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 14/58] net/octeontx2: add promiscuous and allmulticast mode
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (12 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 13/58] net/octeontx2: add extended stats operations jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 15/58] net/octeontx2: add unicast MAC filter jerinj
` (45 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru, Sunil Kumar Kori
From: Vamsi Attunuru <vattunuru@marvell.com>
Add promiscuous and allmulticast mode for PF devices and
update the respective feature list.
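A minimal usage sketch (hypothetical code, not part of this patch; assumes <rte_ethdev.h> and a started port; both ethdev calls return void in this DPDK release):

    /* Accept all unicast and all multicast traffic on the port. */
    static void
    accept_all_traffic(uint16_t port_id)
    {
        rte_eth_promiscuous_enable(port_id);
        rte_eth_allmulticast_enable(port_id);
    }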
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
drivers/net/octeontx2/otx2_ethdev.c | 4 ++
drivers/net/octeontx2/otx2_ethdev.h | 6 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 82 ++++++++++++++++++++++
5 files changed, 96 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 3835b5069..40da1bb68 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,8 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Promiscuous mode = Y
+Allmulticast mode = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index e18443742..1b89be452 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,8 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Promiscuous mode = Y
+Allmulticast mode = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 937ba6399..826ce7f4e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -237,6 +237,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
+ .promiscuous_enable = otx2_nix_promisc_enable,
+ .promiscuous_disable = otx2_nix_promisc_disable,
+ .allmulticast_enable = otx2_nix_allmulticast_enable,
+ .allmulticast_disable = otx2_nix_allmulticast_disable,
.queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
.xstats_get = otx2_nix_xstats_get,
.xstats_get_names = otx2_nix_xstats_get_names,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 223dd5a5a..549bc26e4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -157,6 +157,12 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
+void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
+void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
+void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
+void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
+
/* Link */
void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 9f86635d4..77cfa2cec 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -4,6 +4,88 @@
#include "otx2_ethdev.h"
+static void
+nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return;
+
+ if (en)
+ otx2_mbox_alloc_msg_cgx_promisc_enable(mbox);
+ else
+ otx2_mbox_alloc_msg_cgx_promisc_disable(mbox);
+
+ otx2_mbox_process(mbox);
+}
+
+void
+otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_rx_mode *req;
+
+ if (otx2_dev_is_vf(dev))
+ return;
+
+ req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
+
+ if (en)
+ req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
+
+ otx2_mbox_process(mbox);
+ eth_dev->data->promiscuous = en;
+}
+
+void
+otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev)
+{
+ otx2_nix_promisc_config(eth_dev, 1);
+ nix_cgx_promisc_config(eth_dev, 1);
+}
+
+void
+otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev)
+{
+ otx2_nix_promisc_config(eth_dev, 0);
+ nix_cgx_promisc_config(eth_dev, 0);
+}
+
+static void
+nix_allmulticast_config(struct rte_eth_dev *eth_dev, int en)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_rx_mode *req;
+
+ if (otx2_dev_is_vf(dev))
+ return;
+
+ req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
+
+ if (en)
+ req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_ALLMULTI;
+ else if (eth_dev->data->promiscuous)
+ req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
+
+ otx2_mbox_process(mbox);
+}
+
+void
+otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev)
+{
+ nix_allmulticast_config(eth_dev, 1);
+}
+
+void
+otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev)
+{
+ nix_allmulticast_config(eth_dev, 0);
+}
+
void
otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 15/58] net/octeontx2: add unicast MAC filter
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (13 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 14/58] net/octeontx2: add promiscuous and allmulticast mode jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 16/58] net/octeontx2: add RSS support jerinj
` (44 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Sunil Kumar Kori, Vamsi Attunuru
From: Sunil Kumar Kori <skori@marvell.com>
Add unicast MAC filter for PF device and
update the respective feature list.
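A minimal usage sketch (hypothetical code, not part of this patch; assumes <rte_ethdev.h>; per the implementation below, the filter is only installed on a PF with no active VFs, otherwise -ENOTSUP is returned):

    /* Install one extra unicast DMAC filter (locally administered MAC). */
    static int
    add_dmac_filter(uint16_t port_id)
    {
        struct rte_ether_addr mac = {
            .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
        };

        return rte_eth_dev_mac_addr_add(port_id, &mac, 0 /* pool unused */);
    }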
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 3 +
drivers/net/octeontx2/otx2_ethdev.h | 6 ++
drivers/net/octeontx2/otx2_mac.c | 77 ++++++++++++++++++++++
5 files changed, 88 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 40da1bb68..cb77ab0fc 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -14,6 +14,7 @@ Link status = Y
Link status event = Y
Promiscuous mode = Y
Allmulticast mode = Y
+Unicast MAC filter = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 1b89be452..a51291158 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -14,6 +14,7 @@ Link status = Y
Link status event = Y
Promiscuous mode = Y
Allmulticast mode = Y
+Unicast MAC filter = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 826ce7f4e..a72c901f4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -237,6 +237,9 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
+ .mac_addr_add = otx2_nix_mac_addr_add,
+ .mac_addr_remove = otx2_nix_mac_addr_del,
+ .mac_addr_set = otx2_nix_mac_addr_set,
.promiscuous_enable = otx2_nix_promisc_enable,
.promiscuous_disable = otx2_nix_promisc_disable,
.allmulticast_enable = otx2_nix_allmulticast_enable,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 549bc26e4..8d0147afb 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -211,7 +211,13 @@ int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
/* Mac address handling */
+int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
+ struct rte_ether_addr *addr);
int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
+int otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev,
+ struct rte_ether_addr *addr,
+ uint32_t index, uint32_t pool);
+void otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index);
int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
/* Devargs */
diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c
index 89b0ca6b0..b4bcc61f8 100644
--- a/drivers/net/octeontx2/otx2_mac.c
+++ b/drivers/net/octeontx2/otx2_mac.c
@@ -49,6 +49,83 @@ otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev)
return rsp->max_dmac_filters;
}
+int
+otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr,
+ uint32_t index __rte_unused, uint32_t pool __rte_unused)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_mac_addr_add_req *req;
+ struct cgx_mac_addr_add_rsp *rsp;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ if (otx2_dev_active_vfs(dev))
+ return -ENOTSUP;
+
+ req = otx2_mbox_alloc_msg_cgx_mac_addr_add(mbox);
+ otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to add mac address, rc=%d", rc);
+ goto done;
+ }
+
+ /* Enable promiscuous mode at NIX level */
+ otx2_nix_promisc_config(eth_dev, 1);
+
+done:
+ return rc;
+}
+
+void
+otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_mac_addr_del_req *req;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return;
+
+ req = otx2_mbox_alloc_msg_cgx_mac_addr_del(mbox);
+ req->index = index;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ otx2_err("Failed to delete mac address, rc=%d", rc);
+}
+
+int
+otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_set_mac_addr *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_nix_set_mac_addr(mbox);
+ otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to set mac address, rc=%d", rc);
+ goto done;
+ }
+
+ otx2_mbox_memcpy(dev->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ /* Install the same entry into CGX DMAC filter table too. */
+ otx2_cgx_mac_addr_set(eth_dev, addr);
+
+done:
+ return rc;
+}
+
int
otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 16/58] net/octeontx2: add RSS support
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (14 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 15/58] net/octeontx2: add unicast MAC filter jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 17/58] net/octeontx2: add Rx queue setup and release jerinj
` (43 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add RSS support and expose RSS-related functions
needed to implement the RSS action in the rte_flow driver.
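A minimal configuration sketch from the application side (hypothetical code, not part of this patch; assumes <rte_ethdev.h> and <string.h>, and queue counts valid for the port):

    /* Enable RSS over IP/TCP/UDP; the driver derives the NIX flowkey
     * and default RETA from this configuration in otx2_nix_rss_config().
     */
    static int
    configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
        conf.rx_adv_conf.rss_conf.rss_hf =
            ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP;

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }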
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 4 +
doc/guides/nics/features/octeontx2_vec.ini | 4 +
doc/guides/nics/features/octeontx2_vf.ini | 4 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 11 +
drivers/net/octeontx2/otx2_ethdev.h | 35 ++
drivers/net/octeontx2/otx2_rss.c | 378 +++++++++++++++++++++
8 files changed, 438 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_rss.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index cb77ab0fc..48ac58b3a 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -15,6 +15,10 @@ Link status event = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Inner RSS = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index a51291158..6fc647af4 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -15,6 +15,10 @@ Link status event = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Inner RSS = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 89df760b3..af3c70269 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -11,6 +11,10 @@ Lock-free Tx queue = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Inner RSS = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index dcd692b7b..67352ec81 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -30,6 +30,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_rss.c \
otx2_mac.c \
otx2_link.c \
otx2_stats.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 384237104..b7e56e2ca 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files(
+ 'otx2_rss.c',
'otx2_mac.c',
'otx2_link.c',
'otx2_stats.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index a72c901f4..5289c79e8 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -195,6 +195,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Configure RSS */
+ rc = otx2_nix_rss_config(eth_dev);
+ if (rc) {
+ otx2_err("Failed to configure rss rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/* Register queue IRQs */
rc = oxt2_nix_register_queue_irqs(eth_dev);
if (rc) {
@@ -245,6 +252,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.allmulticast_enable = otx2_nix_allmulticast_enable,
.allmulticast_disable = otx2_nix_allmulticast_disable,
.queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
+ .reta_update = otx2_nix_dev_reta_update,
+ .reta_query = otx2_nix_dev_reta_query,
+ .rss_hash_update = otx2_nix_rss_hash_update,
+ .rss_hash_conf_get = otx2_nix_rss_hash_conf_get,
.xstats_get = otx2_nix_xstats_get,
.xstats_get_names = otx2_nix_xstats_get_names,
.xstats_reset = otx2_nix_xstats_reset,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 8d0147afb..67b164740 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -39,6 +39,9 @@
#define NIX_MAX_HW_MTU 9190
#define NIX_MAX_HW_FRS (NIX_MAX_HW_MTU + NIX_HW_L2_OVERHEAD)
#define NIX_MIN_HW_FRS 60
+#define NIX_MIN_SQB 512
+#define NIX_SQB_LIST_SPACE 2
+#define NIX_RSS_RETA_SIZE_MAX 256
/* Group 0 will be used for RSS, 1 -7 will be used for rte_flow RSS action*/
#define NIX_RSS_GRPS 8
#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
@@ -92,14 +95,22 @@
DEV_RX_OFFLOAD_QINQ_STRIP | \
DEV_RX_OFFLOAD_TIMESTAMP)
+#define NIX_DEFAULT_RSS_CTX_GROUP 0
+#define NIX_DEFAULT_RSS_MCAM_IDX -1
+
struct otx2_qint {
struct rte_eth_dev *eth_dev;
uint8_t qintx;
};
struct otx2_rss_info {
+ uint64_t nix_rss;
+ uint32_t flowkey_cfg;
uint16_t rss_size;
uint8_t rss_grps;
+ uint8_t alg_idx; /* Selected algo index */
+ uint16_t ind_tbl[NIX_RSS_RETA_SIZE_MAX];
+ uint8_t key[NIX_HASH_KEY_SIZE];
};
struct otx2_npc_flow_info {
@@ -204,6 +215,30 @@ int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
struct rte_eth_xstat_name *xstats_names,
const uint64_t *ids, unsigned int limit);
+/* RSS */
+void otx2_nix_rss_set_key(struct otx2_eth_dev *dev,
+ uint8_t *key, uint32_t key_len);
+uint32_t otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev,
+ uint64_t ethdev_rss, uint8_t rss_level);
+int otx2_rss_set_hf(struct otx2_eth_dev *dev,
+ uint32_t flowkey_cfg, uint8_t *alg_idx,
+ uint8_t group, int mcam_index);
+int otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev, uint8_t group,
+ uint16_t *ind_tbl);
+int otx2_nix_rss_config(struct rte_eth_dev *eth_dev);
+
+int otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+int otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+int otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf);
+
+int otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf);
+
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
new file mode 100644
index 000000000..089846da7
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -0,0 +1,378 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+int
+otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
+ uint8_t group, uint16_t *ind_tbl)
+{
+ struct otx2_rss_info *rss = &dev->rss_info;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *req;
+ int rc, idx;
+
+ for (idx = 0; idx < rss->rss_size; idx++) {
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req)
+ return -ENOMEM;
+ }
+ req->rss.rq = ind_tbl[idx];
+ /* Fill AQ info */
+ req->qidx = (group * rss->rss_size) + idx;
+ req->ctype = NIX_AQ_CTYPE_RSS;
+ req->op = NIX_AQ_INSTOP_INIT;
+ }
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ return 0;
+}
+
+int
+otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_rss_info *rss = &dev->rss_info;
+ int rc, i, j;
+ int idx = 0;
+
+ rc = -EINVAL;
+ if (reta_size != dev->rss_info.rss_size) {
+ otx2_err("Size of hash lookup table configured "
+ "(%d) doesn't match the number hardware can supported "
+ "(%d)", reta_size, dev->rss_info.rss_size);
+ goto fail;
+ }
+
+ /* Copy RETA table */
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ if ((reta_conf[i].mask >> j) & 0x01)
+ rss->ind_tbl[idx] = reta_conf[i].reta[j];
+ idx++;
+ }
+ }
+
+ return otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
+
+fail:
+ return rc;
+}
+
+int
+otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_rss_info *rss = &dev->rss_info;
+ int rc, i, j;
+
+ rc = -EINVAL;
+
+ if (reta_size != dev->rss_info.rss_size) {
+ otx2_err("Size of hash lookup table configured "
+ "(%d) doesn't match the number hardware can supported "
+ "(%d)", reta_size, dev->rss_info.rss_size);
+ goto fail;
+ }
+
+ /* Copy RETA table */
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ if ((reta_conf[i].mask >> j) & 0x01)
+ reta_conf[i].reta[j] = rss->ind_tbl[j];
+ }
+
+ return 0;
+
+fail:
+ return rc;
+}
+
+void
+otx2_nix_rss_set_key(struct otx2_eth_dev *dev, uint8_t *key,
+ uint32_t key_len)
+{
+ const uint8_t default_key[NIX_HASH_KEY_SIZE] = {
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
+ };
+ struct otx2_rss_info *rss = &dev->rss_info;
+ uint64_t *keyptr;
+ uint64_t val;
+ uint32_t idx;
+
+ if (key == NULL) {
+ keyptr = (uint64_t *)(uintptr_t)default_key;
+ key_len = NIX_HASH_KEY_SIZE;
+ memset(rss->key, 0, key_len);
+ } else {
+ memcpy(rss->key, key, key_len);
+ keyptr = (uint64_t *)rss->key;
+ }
+
+ for (idx = 0; idx < (key_len >> 3); idx++) {
+ val = rte_cpu_to_be_64(*keyptr);
+ otx2_write64(val, dev->base + NIX_LF_RX_SECRETX(idx));
+ keyptr++;
+ }
+}
+
+static void
+rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
+{
+ uint64_t *keyptr = (uint64_t *)key;
+ uint64_t val;
+ int idx;
+
+ for (idx = 0; idx < (NIX_HASH_KEY_SIZE >> 3); idx++) {
+ val = otx2_read64(dev->base + NIX_LF_RX_SECRETX(idx));
+ *keyptr = rte_be_to_cpu_64(val);
+ keyptr++;
+ }
+}
+
+#define RSS_IPV4_ENABLE ( \
+ ETH_RSS_IPV4 | \
+ ETH_RSS_FRAG_IPV4 | \
+ ETH_RSS_NONFRAG_IPV4_UDP | \
+ ETH_RSS_NONFRAG_IPV4_TCP | \
+ ETH_RSS_NONFRAG_IPV4_SCTP)
+
+#define RSS_IPV6_ENABLE ( \
+ ETH_RSS_IPV6 | \
+ ETH_RSS_FRAG_IPV6 | \
+ ETH_RSS_NONFRAG_IPV6_UDP | \
+ ETH_RSS_NONFRAG_IPV6_TCP | \
+ ETH_RSS_NONFRAG_IPV6_SCTP)
+
+#define RSS_IPV6_EX_ENABLE ( \
+ ETH_RSS_IPV6_EX | \
+ ETH_RSS_IPV6_TCP_EX | \
+ ETH_RSS_IPV6_UDP_EX)
+
+#define RSS_MAX_LEVELS 3
+
+#define RSS_IPV4_INDEX 0
+#define RSS_IPV6_INDEX 1
+#define RSS_TCP_INDEX 2
+#define RSS_UDP_INDEX 3
+#define RSS_SCTP_INDEX 4
+#define RSS_DMAC_INDEX 5
+
+uint32_t
+otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
+ uint8_t rss_level)
+{
+ uint32_t flow_key_type[RSS_MAX_LEVELS][6] = {
+ {
+ FLOW_KEY_TYPE_IPV4, FLOW_KEY_TYPE_IPV6,
+ FLOW_KEY_TYPE_TCP, FLOW_KEY_TYPE_UDP,
+ FLOW_KEY_TYPE_SCTP, FLOW_KEY_TYPE_ETH_DMAC
+ },
+ {
+ FLOW_KEY_TYPE_INNR_IPV4, FLOW_KEY_TYPE_INNR_IPV6,
+ FLOW_KEY_TYPE_INNR_TCP, FLOW_KEY_TYPE_INNR_UDP,
+ FLOW_KEY_TYPE_INNR_SCTP, FLOW_KEY_TYPE_INNR_ETH_DMAC
+ },
+ {
+ FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_INNR_IPV4,
+ FLOW_KEY_TYPE_IPV6 | FLOW_KEY_TYPE_INNR_IPV6,
+ FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_INNR_TCP,
+ FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_INNR_UDP,
+ FLOW_KEY_TYPE_SCTP | FLOW_KEY_TYPE_INNR_SCTP,
+ FLOW_KEY_TYPE_ETH_DMAC | FLOW_KEY_TYPE_INNR_ETH_DMAC
+ }
+ };
+ uint32_t flowkey_cfg = 0;
+
+ dev->rss_info.nix_rss = ethdev_rss;
+
+ if (ethdev_rss & RSS_IPV4_ENABLE)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_IPV4_INDEX];
+
+ if (ethdev_rss & RSS_IPV6_ENABLE)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
+
+ if (ethdev_rss & ETH_RSS_TCP)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
+
+ if (ethdev_rss & ETH_RSS_UDP)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
+
+ if (ethdev_rss & ETH_RSS_SCTP)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
+
+ if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
+
+ if (ethdev_rss & RSS_IPV6_EX_ENABLE)
+ flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
+
+ if (ethdev_rss & ETH_RSS_PORT)
+ flowkey_cfg |= FLOW_KEY_TYPE_PORT;
+
+ if (ethdev_rss & ETH_RSS_NVGRE)
+ flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
+
+ if (ethdev_rss & ETH_RSS_VXLAN) {
+ flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
+ if (flowkey_cfg & FLOW_KEY_TYPE_UDP)
+ flowkey_cfg |= FLOW_KEY_TYPE_UDP_VXLAN;
+ }
+
+ if (ethdev_rss & ETH_RSS_GENEVE) {
+ flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
+ if (flowkey_cfg & FLOW_KEY_TYPE_UDP)
+ flowkey_cfg |= FLOW_KEY_TYPE_UDP_GENEVE;
+ }
+
+ return flowkey_cfg;
+}
+
+int
+otx2_rss_set_hf(struct otx2_eth_dev *dev, uint32_t flowkey_cfg,
+ uint8_t *alg_idx, uint8_t group, int mcam_index)
+{
+ struct nix_rss_flowkey_cfg_rsp *rss_rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_rss_flowkey_cfg *cfg;
+ int rc;
+
+ rc = -EINVAL;
+
+ dev->rss_info.flowkey_cfg = flowkey_cfg;
+
+ cfg = otx2_mbox_alloc_msg_nix_rss_flowkey_cfg(mbox);
+
+ cfg->flowkey_cfg = flowkey_cfg;
+ cfg->mcam_index = mcam_index; /* -1 indicates default group */
+ cfg->group = group; /* 0 is default group */
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rss_rsp);
+ if (rc)
+ return rc;
+
+ if (alg_idx)
+ *alg_idx = rss_rsp->alg_idx;
+
+ return rc;
+}
+
+int
+otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint32_t flowkey_cfg;
+ uint8_t alg_idx;
+ int rc;
+
+ rc = -EINVAL;
+
+ if (rss_conf->rss_key && rss_conf->rss_key_len != NIX_HASH_KEY_SIZE) {
+ otx2_err("Hash key size mismatch %d vs %d",
+ rss_conf->rss_key_len, NIX_HASH_KEY_SIZE);
+ goto fail;
+ }
+
+ if (rss_conf->rss_key)
+ otx2_nix_rss_set_key(dev, rss_conf->rss_key,
+ (uint32_t)rss_conf->rss_key_len);
+
+ flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_conf->rss_hf, 0);
+
+ rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
+ NIX_DEFAULT_RSS_CTX_GROUP,
+ NIX_DEFAULT_RSS_MCAM_IDX);
+ if (rc) {
+ otx2_err("Failed to set RSS hash function rc=%d", rc);
+ return rc;
+ }
+
+ dev->rss_info.alg_idx = alg_idx;
+
+fail:
+ return rc;
+}
+
+int
+otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ if (rss_conf->rss_key)
+ rss_get_key(dev, rss_conf->rss_key);
+
+ rss_conf->rss_key_len = NIX_HASH_KEY_SIZE;
+ rss_conf->rss_hf = dev->rss_info.nix_rss;
+
+ return 0;
+}
+
+int
+otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint32_t idx, qcnt = eth_dev->data->nb_rx_queues;
+ uint32_t flowkey_cfg;
+ uint64_t rss_hf;
+ uint8_t alg_idx;
+ int rc;
+
+ /* Skip further configuration if selected mode is not RSS */
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+ return 0;
+
+ /* Update default RSS key and cfg */
+ otx2_nix_rss_set_key(dev, NULL, 0);
+
+ /* Update default RSS RETA */
+ for (idx = 0; idx < dev->rss_info.rss_size; idx++)
+ dev->rss_info.ind_tbl[idx] = idx % qcnt;
+
+ /* Init RSS table context */
+ rc = otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
+ if (rc) {
+ otx2_err("Failed to init RSS table rc=%d", rc);
+ return rc;
+ }
+
+ rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
+ flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, 0);
+
+ rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
+ NIX_DEFAULT_RSS_CTX_GROUP,
+ NIX_DEFAULT_RSS_MCAM_IDX);
+ if (rc) {
+ otx2_err("Failed to set RSS hash function rc=%d", rc);
+ return rc;
+ }
+
+ dev->rss_info.alg_idx = alg_idx;
+
+ return 0;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 17/58] net/octeontx2: add Rx queue setup and release
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (15 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 16/58] net/octeontx2: add RSS support jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 18/58] net/octeontx2: add Tx " jerinj
` (42 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add Rx queue setup and release.
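A minimal setup sketch from the application side (hypothetical code, not part of this patch; assumes <rte_ethdev.h>, <rte_mbuf.h> and <errno.h>, and a configured port; on octeontx2 the platform mempool ops defaults to octeontx2_npa, which this driver requires):

    /* Create a pktmbuf pool and bind it to Rx queue 0. */
    static int
    setup_rxq0(uint16_t port_id)
    {
        struct rte_mempool *mp;

        mp = rte_pktmbuf_pool_create("rxq0_pool", 8192, 256, 0,
                                     RTE_MBUF_DEFAULT_BUF_SIZE,
                                     rte_socket_id());
        if (mp == NULL)
            return -ENOMEM;

        return rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
                                      NULL /* default rx_conf */, mp);
    }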
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 310 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 51 +++++
2 files changed, 361 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 5289c79e8..dbbc2263d 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2,9 +2,15 @@
* Copyright(C) 2019 Marvell International Ltd.
*/
+#include <inttypes.h>
+#include <math.h>
+
#include <rte_ethdev_pci.h>
#include <rte_io.h>
#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_pool_ops.h>
+#include <rte_mempool.h>
#include "otx2_ethdev.h"
@@ -114,6 +120,308 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+static inline void
+nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
+{
+ rxq->head = 0;
+ rxq->available = 0;
+}
+
+static inline uint32_t
+nix_qsize_to_val(enum nix_q_size_e qsize)
+{
+ return (16UL << (qsize * 2));
+}
+
+static inline enum nix_q_size_e
+nix_qsize_clampup_get(struct otx2_eth_dev *dev, uint32_t val)
+{
+ int i;
+
+ if (otx2_ethdev_fixup_is_min_4k_q(dev))
+ i = nix_q_size_4K;
+ else
+ i = nix_q_size_16;
+
+ for (; i < nix_q_size_max; i++)
+ if (val <= nix_qsize_to_val(i))
+ break;
+
+ if (i >= nix_q_size_max)
+ i = nix_q_size_max - 1;
+
+ return i;
+}
+
+static int
+nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
+ uint16_t qid, struct otx2_eth_rxq *rxq, struct rte_mempool *mp)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ const struct rte_memzone *rz;
+ uint32_t ring_size, cq_size;
+ struct nix_aq_enq_req *aq;
+ uint16_t first_skip;
+ int rc;
+
+ cq_size = rxq->qlen;
+ ring_size = cq_size * NIX_CQ_ENTRY_SZ;
+ rz = rte_eth_dma_zone_reserve(eth_dev, "cq", qid, ring_size,
+ NIX_CQ_ALIGN, dev->node);
+ if (rz == NULL) {
+ otx2_err("Failed to allocate mem for cq hw ring");
+ rc = -ENOMEM;
+ goto fail;
+ }
+ memset(rz->addr, 0, rz->len);
+ rxq->desc = (uintptr_t)rz->addr;
+ rxq->qmask = cq_size - 1;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_INIT;
+
+ aq->cq.ena = 1;
+ aq->cq.caching = 1;
+ aq->cq.qsize = rxq->qsize;
+ aq->cq.base = rz->iova;
+ aq->cq.avg_level = 0xff;
+ aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
+ aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
+
+ /* Many to one reduction */
+ aq->cq.qint_idx = qid % dev->qints;
+
+ if (otx2_ethdev_fixup_is_limit_cq_full(dev)) {
+ uint16_t min_rx_drop;
+ const float rx_cq_skid = 1024 * 256;
+
+ min_rx_drop = ceil(rx_cq_skid / (float)cq_size);
+ aq->cq.drop = min_rx_drop;
+ aq->cq.drop_ena = 1;
+ }
+
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to init cq context");
+ goto fail;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_INIT;
+
+ aq->rq.sso_ena = 0;
+ aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
+ aq->rq.spb_ena = 0;
+ aq->rq.lpb_aura = npa_lf_aura_handle_to_aura(mp->pool_id);
+ first_skip = (sizeof(struct rte_mbuf));
+ first_skip += RTE_PKTMBUF_HEADROOM;
+ first_skip += rte_pktmbuf_priv_size(mp);
+ rxq->data_off = first_skip;
+
+ first_skip /= 8; /* Expressed in number of dwords */
+ aq->rq.first_skip = first_skip;
+ aq->rq.later_skip = (sizeof(struct rte_mbuf) / 8);
+ aq->rq.flow_tagw = 32; /* 32-bits */
+ aq->rq.lpb_sizem1 = rte_pktmbuf_data_room_size(mp);
+ aq->rq.lpb_sizem1 += rte_pktmbuf_priv_size(mp);
+ aq->rq.lpb_sizem1 += sizeof(struct rte_mbuf);
+ aq->rq.lpb_sizem1 /= 8;
+ aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */
+ aq->rq.ena = 1;
+ aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
+ aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
+ aq->rq.rq_int_ena = 0;
+ /* Many to one reduction */
+ aq->rq.qint_idx = qid % dev->qints;
+
+ if (otx2_ethdev_fixup_is_limit_cq_full(dev))
+ aq->rq.xqe_drop_ena = 1;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to init rq context");
+ goto fail;
+ }
+
+ return 0;
+fail:
+ return rc;
+}
+
+static int
+nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *aq;
+ int rc;
+
+ /* RQ is already disabled */
+ /* Disable CQ */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->cq.ena = 0;
+ aq->cq_mask.ena = ~(aq->cq_mask.ena);
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to disable cq context");
+ return rc;
+ }
+
+ return 0;
+}
+
+static inline int
+nix_get_data_off(struct otx2_eth_dev *dev)
+{
+ RTE_SET_USED(dev);
+
+ return 0;
+}
+
+uint64_t
+otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id)
+{
+ struct rte_mbuf mb_def;
+ uint64_t *tmp;
+
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) % 8 != 0);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) -
+ offsetof(struct rte_mbuf, data_off) != 2);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, nb_segs) -
+ offsetof(struct rte_mbuf, data_off) != 4);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) -
+ offsetof(struct rte_mbuf, data_off) != 6);
+ mb_def.nb_segs = 1;
+ mb_def.data_off = RTE_PKTMBUF_HEADROOM + nix_get_data_off(dev);
+ mb_def.port = port_id;
+ rte_mbuf_refcnt_set(&mb_def, 1);
+
+ /* Prevent compiler reordering: rearm_data covers previous fields */
+ rte_compiler_barrier();
+ tmp = (uint64_t *)&mb_def.rearm_data;
+
+ return *tmp;
+}
+
+static void
+otx2_nix_rx_queue_release(void *rx_queue)
+{
+ struct otx2_eth_rxq *rxq = rx_queue;
+
+ if (!rxq)
+ return;
+
+ otx2_nix_dbg("Releasing rxq %u", rxq->rq);
+ nix_cq_rq_uninit(rxq->eth_dev, rxq);
+ rte_free(rx_queue);
+}
+
+static int
+otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
+ uint16_t nb_desc, unsigned int socket,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_mempool_ops *ops;
+ struct otx2_eth_rxq *rxq;
+ const char *platform_ops;
+ enum nix_q_size_e qsize;
+ uint64_t offloads;
+ int rc;
+
+ rc = -EINVAL;
+
+ /* Compile time check to make sure all fast path elements in a CL */
+ RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_rxq, slow_path_start) >= 128);
+
+ /* Sanity checks */
+ if (rx_conf->rx_deferred_start == 1) {
+ otx2_err("Deferred Rx start is not supported");
+ goto fail;
+ }
+
+ platform_ops = rte_mbuf_platform_mempool_ops();
+ /* This driver needs octeontx2_npa mempool ops to work */
+ ops = rte_mempool_get_ops(mp->ops_index);
+ if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
+ otx2_err("mempool ops should be of octeontx2_npa type");
+ goto fail;
+ }
+
+ if (mp->pool_id == 0) {
+ otx2_err("Invalid pool_id");
+ goto fail;
+ }
+
+ /* Free memory prior to re-allocation if needed */
+ if (eth_dev->data->rx_queues[rq] != NULL) {
+ otx2_nix_dbg("Freeing memory prior to re-allocation %d", rq);
+ otx2_nix_rx_queue_release(eth_dev->data->rx_queues[rq]);
+ eth_dev->data->rx_queues[rq] = NULL;
+ }
+
+ offloads = rx_conf->offloads | eth_dev->data->dev_conf.rxmode.offloads;
+ dev->rx_offloads |= offloads;
+
+ /* Find the CQ queue size */
+ qsize = nix_qsize_clampup_get(dev, nb_desc);
+ /* Allocate rxq memory */
+ rxq = rte_zmalloc_socket("otx2 rxq", sizeof(*rxq), OTX2_ALIGN, socket);
+ if (rxq == NULL) {
+ otx2_err("Failed to allocate rq=%d", rq);
+ rc = -ENOMEM;
+ goto fail;
+ }
+
+ rxq->eth_dev = eth_dev;
+ rxq->rq = rq;
+ rxq->cq_door = dev->base + NIX_LF_CQ_OP_DOOR;
+ rxq->cq_status = (int64_t *)(dev->base + NIX_LF_CQ_OP_STATUS);
+ rxq->wdata = (uint64_t)rq << 32;
+ rxq->aura = npa_lf_aura_handle_to_aura(mp->pool_id);
+ rxq->mbuf_initializer = otx2_nix_rxq_mbuf_setup(dev,
+ eth_dev->data->port_id);
+ rxq->offloads = offloads;
+ rxq->pool = mp;
+ rxq->qlen = nix_qsize_to_val(qsize);
+ rxq->qsize = qsize;
+
+ /* Alloc completion queue */
+ rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
+ if (rc) {
+ otx2_err("Failed to allocate rxq=%u", rq);
+ goto free_rxq;
+ }
+
+ rxq->qconf.socket_id = socket;
+ rxq->qconf.nb_desc = nb_desc;
+ rxq->qconf.mempool = mp;
+ memcpy(&rxq->qconf.conf.rx, rx_conf, sizeof(struct rte_eth_rxconf));
+
+ nix_rx_queue_reset(rxq);
+ otx2_nix_dbg("rq=%d pool=%s qsize=%d nb_desc=%d->%d",
+ rq, mp->name, qsize, nb_desc, rxq->qlen);
+
+ eth_dev->data->rx_queues[rq] = rxq;
+ eth_dev->data->rx_queue_state[rq] = RTE_ETH_QUEUE_STATE_STOPPED;
+ return 0;
+
+free_rxq:
+ otx2_nix_rx_queue_release(rxq);
+fail:
+ return rc;
+}
+
static int
otx2_nix_configure(struct rte_eth_dev *eth_dev)
{
@@ -241,6 +549,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
.link_update = otx2_nix_link_update,
+ .rx_queue_setup = otx2_nix_rx_queue_setup,
+ .rx_queue_release = otx2_nix_rx_queue_release,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 67b164740..562724b4e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -10,6 +10,9 @@
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_kvargs.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_string_fns.h>
#include "otx2_common.h"
#include "otx2_dev.h"
@@ -50,6 +53,7 @@
#define NIX_RX_MIN_DESC_ALIGN 16
#define NIX_RX_NB_SEG_MAX 6
#define NIX_CQ_ENTRY_SZ 128
+#define NIX_CQ_ALIGN 512
/* If PTP is enabled additional SEND MEM DESC is required which
* takes 2 words, hence max 7 iova address are possible
@@ -98,6 +102,19 @@
#define NIX_DEFAULT_RSS_CTX_GROUP 0
#define NIX_DEFAULT_RSS_MCAM_IDX -1
+enum nix_q_size_e {
+ nix_q_size_16, /* 16 entries */
+ nix_q_size_64, /* 64 entries */
+ nix_q_size_256,
+ nix_q_size_1K,
+ nix_q_size_4K,
+ nix_q_size_16K,
+ nix_q_size_64K,
+ nix_q_size_256K,
+ nix_q_size_1M, /* Million entries */
+ nix_q_size_max
+};
+
struct otx2_qint {
struct rte_eth_dev *eth_dev;
uint8_t qintx;
@@ -113,6 +130,16 @@ struct otx2_rss_info {
uint8_t key[NIX_HASH_KEY_SIZE];
};
+struct otx2_eth_qconf {
+ union {
+ struct rte_eth_txconf tx;
+ struct rte_eth_rxconf rx;
+ } conf;
+ void *mempool;
+ uint32_t socket_id;
+ uint16_t nb_desc;
+};
+
struct otx2_npc_flow_info {
uint16_t channel; /*rx channel */
uint16_t flow_prealloc_size;
@@ -158,6 +185,29 @@ struct otx2_eth_dev {
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
+struct otx2_eth_rxq {
+ uint64_t mbuf_initializer;
+ uint64_t data_off;
+ uintptr_t desc;
+ void *lookup_mem;
+ uintptr_t cq_door;
+ uint64_t wdata;
+ int64_t *cq_status;
+ uint32_t head;
+ uint32_t qmask;
+ uint32_t available;
+ uint16_t rq;
+ struct otx2_timesync_info *tstamp;
+ MARKER slow_path_start;
+ uint64_t aura;
+ uint64_t offloads;
+ uint32_t qlen;
+ struct rte_mempool *pool;
+ enum nix_q_size_e qsize;
+ struct rte_eth_dev *eth_dev;
+ struct otx2_eth_qconf qconf;
+} __rte_cache_aligned;
+
static inline struct otx2_eth_dev *
otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
{
@@ -173,6 +223,7 @@ void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
+uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
/* Link */
void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 18/58] net/octeontx2: add Tx queue setup and release
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (16 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 17/58] net/octeontx2: add Rx queue setup and release jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 19/58] net/octeontx2: handle port reconfigure jerinj
` (41 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: ferruh.yigit
From: Jerin Jacob <jerinj@marvell.com>
Add Tx queue setup and release.
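A minimal setup sketch from the application side (hypothetical code, not part of this patch; assumes <rte_ethdev.h> and a configured port; nb_desc drives the SQB pool sizing in the driver):

    /* Bind Tx queue 0 with 1K descriptors and the default Tx config. */
    static int
    setup_txq0(uint16_t port_id)
    {
        return rte_eth_tx_queue_setup(port_id, 0, 1024, rte_socket_id(),
                                      NULL /* default tx_conf */);
    }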
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 384 +++++++++++++++++++++++++++-
drivers/net/octeontx2/otx2_ethdev.h | 24 ++
drivers/net/octeontx2/otx2_tx.h | 28 ++
3 files changed, 435 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/octeontx2/otx2_tx.h
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index dbbc2263d..b501ba865 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -422,6 +422,372 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
return rc;
}
+static inline uint8_t
+nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
+{
+ /*
+ * Maximum three segments can be supported with W8; choose
+ * NIX_MAXSQESZ_W16 for multi-segment offload.
+ */
+ if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ return NIX_MAXSQESZ_W16;
+ else
+ return NIX_MAXSQESZ_W8;
+}
+
+static int
+nix_sq_init(struct otx2_eth_txq *txq)
+{
+ struct otx2_eth_dev *dev = txq->dev;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *sq;
+
+ if (txq->sqb_pool->pool_id == 0)
+ return -EINVAL;
+
+ sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ sq->qidx = txq->sq;
+ sq->ctype = NIX_AQ_CTYPE_SQ;
+ sq->op = NIX_AQ_INSTOP_INIT;
+ sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq);
+
+ sq->sq.default_chan = dev->tx_chan_base;
+ sq->sq.sqe_stype = NIX_STYPE_STF;
+ sq->sq.ena = 1;
+ if (sq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
+ sq->sq.sqe_stype = NIX_STYPE_STP;
+ sq->sq.sqb_aura =
+ npa_lf_aura_handle_to_aura(txq->sqb_pool->pool_id);
+ sq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR);
+ sq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL);
+ sq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR);
+ sq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR);
+
+ /* Many to one reduction */
+ sq->sq.qint_idx = txq->sq % dev->qints;
+
+ return otx2_mbox_process(mbox);
+}
+
+static int
+nix_sq_uninit(struct otx2_eth_txq *txq)
+{
+ struct otx2_eth_dev *dev = txq->dev;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct ndc_sync_op *ndc_req;
+ struct nix_aq_enq_rsp *rsp;
+ struct nix_aq_enq_req *aq;
+ uint16_t sqes_per_sqb;
+ void *sqb_buf;
+ int rc, count;
+
+ otx2_nix_dbg("Cleaning up sq %u", txq->sq);
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Check if sq is already cleaned up */
+ if (!rsp->sq.ena)
+ return 0;
+
+ /* Disable sq */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->sq_mask.ena = ~aq->sq_mask.ena;
+ aq->sq.ena = 0;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Read SQ and free sqb's */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->sq.smq_pend)
+ rte_panic("otx2: sq has pending sqe's");
+
+ count = rsp->sq.sqb_count;
+ sqes_per_sqb = 1 << txq->sqes_per_sqb_log2;
+ /* Free SQB's that are used */
+ sqb_buf = (void *)rsp->sq.head_sqb;
+ while (count) {
+ void *next_sqb;
+
+ next_sqb = *(void **)((uintptr_t)sqb_buf + ((sqes_per_sqb - 1) *
+ nix_sq_max_sqe_sz(txq)));
+ npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
+ (uint64_t)sqb_buf);
+ sqb_buf = next_sqb;
+ count--;
+ }
+
+ /* Free next to use sqb */
+ if (rsp->sq.next_sqb)
+ npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
+ rsp->sq.next_sqb);
+
+ /* Sync NDC-NIX-TX for LF */
+ ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
+ ndc_req->nix_lf_tx_sync = 1;
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ otx2_err("Error on NDC-NIX-TX LF sync, rc %d", rc);
+
+ return rc;
+}
+
+static int
+nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
+{
+ struct otx2_eth_dev *dev = txq->dev;
+ uint16_t sqes_per_sqb, nb_sqb_bufs;
+ char name[RTE_MEMPOOL_NAMESIZE];
+ struct rte_mempool_objsz sz;
+ struct npa_aura_s *aura;
+ uint32_t blk_sz;
+
+ aura = (struct npa_aura_s *)((uintptr_t)txq->fc_mem + OTX2_ALIGN);
+ snprintf(name, sizeof(name), "otx2_sqb_pool_%d_%d", port, txq->sq);
+ blk_sz = dev->sqb_size;
+
+ if (nix_sq_max_sqe_sz(txq) == NIX_MAXSQESZ_W16)
+ sqes_per_sqb = (dev->sqb_size / 8) / 16;
+ else
+ sqes_per_sqb = (dev->sqb_size / 8) / 8;
+
+ nb_sqb_bufs = nb_desc / sqes_per_sqb;
+ /* Clamp up to minimum SQB buffers */
+ nb_sqb_bufs = RTE_MAX(NIX_MIN_SQB, nb_sqb_bufs + NIX_SQB_LIST_SPACE);
+
+ txq->sqb_pool = rte_mempool_create_empty(name, nb_sqb_bufs, blk_sz,
+ 0, 0, dev->node,
+ MEMPOOL_F_NO_SPREAD);
+ txq->nb_sqb_bufs = nb_sqb_bufs;
+ txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb);
+ txq->nb_sqb_bufs_adj = nb_sqb_bufs -
+ RTE_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb;
+ txq->nb_sqb_bufs_adj =
+ (NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100;
+
+ if (txq->sqb_pool == NULL) {
+ otx2_err("Failed to allocate sqe mempool");
+ goto fail;
+ }
+
+ memset(aura, 0, sizeof(*aura));
+ aura->fc_ena = 1;
+ aura->fc_addr = txq->fc_iova;
+ aura->fc_hyst_bits = 0; /* Store count on all updates */
+ if (rte_mempool_set_ops_byname(txq->sqb_pool, "octeontx2_npa", aura)) {
+ otx2_err("Failed to set ops for sqe mempool");
+ goto fail;
+ }
+ if (rte_mempool_populate_default(txq->sqb_pool) < 0) {
+ otx2_err("Failed to populate sqe mempool");
+ goto fail;
+ }
+
+ rte_mempool_calc_obj_size(blk_sz, MEMPOOL_F_NO_SPREAD, &sz);
+ if (dev->sqb_size != sz.elt_size) {
+ otx2_err("SQE pool block size is not as expected %d != %d",
+ dev->sqb_size, sz.elt_size);
+ goto fail;
+ }
+
+ return 0;
+fail:
+ return -ENOMEM;
+}
+
+void
+otx2_nix_form_default_desc(struct otx2_eth_txq *txq)
+{
+ struct nix_send_ext_s *send_hdr_ext;
+ struct nix_send_hdr_s *send_hdr;
+ struct nix_send_mem_s *send_mem;
+ union nix_send_sg_s *sg;
+
+ /* Initialize the fields based on basic single segment packet */
+ memset(&txq->cmd, 0, sizeof(txq->cmd));
+
+ if (txq->dev->tx_offload_flags & NIX_TX_NEED_EXT_HDR) {
+ send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
+ /* 2(HDR) + 2(EXT_HDR) + 1(SG) + 1(IOVA) = 6/2 - 1 = 2 */
+ send_hdr->w0.sizem1 = 2;
+
+ send_hdr_ext = (struct nix_send_ext_s *)&txq->cmd[2];
+ send_hdr_ext->w0.subdc = NIX_SUBDC_EXT;
+ if (txq->dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F) {
+ /* Default: one seg packet would have:
+ * 2(HDR) + 2(EXT) + 1(SG) + 1(IOVA) + 2(MEM)
+ * => 8/2 - 1 = 3
+ */
+ send_hdr->w0.sizem1 = 3;
+ send_hdr_ext->w0.tstmp = 1;
+
+ /* To calculate the offset for send_mem,
+ * send_hdr->w0.sizem1 * 2
+ */
+ send_mem = (struct nix_send_mem_s *)(txq->cmd +
+ (send_hdr->w0.sizem1 << 1));
+ send_mem->subdc = NIX_SUBDC_MEM;
+ send_mem->dsz = 0x0;
+ send_mem->wmem = 0x1;
+ send_mem->alg = NIX_SENDMEMALG_SETTSTMP;
+ }
+ sg = (union nix_send_sg_s *)&txq->cmd[4];
+ } else {
+ send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
+ /* 2(HDR) + 1(SG) + 1(IOVA) = 4/2 - 1 = 1 */
+ send_hdr->w0.sizem1 = 1;
+ sg = (union nix_send_sg_s *)&txq->cmd[2];
+ }
+
+ send_hdr->w0.sq = txq->sq;
+ sg->subdc = NIX_SUBDC_SG;
+ sg->segs = 1;
+ sg->ld_type = NIX_SENDLDTYPE_LDD;
+
+ rte_smp_wmb();
+}
+
+static void
+otx2_nix_tx_queue_release(void *_txq)
+{
+ struct otx2_eth_txq *txq = _txq;
+
+ if (!txq)
+ return;
+
+ otx2_nix_dbg("Releasing txq %u", txq->sq);
+
+ /* Free sqb's and disable sq */
+ nix_sq_uninit(txq);
+
+ if (txq->sqb_pool) {
+ rte_mempool_free(txq->sqb_pool);
+ txq->sqb_pool = NULL;
+ }
+ rte_free(txq);
+}
+
+
+static int
+otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ const struct rte_memzone *fc;
+ struct otx2_eth_txq *txq;
+ uint64_t offloads;
+ int rc;
+
+ rc = -EINVAL;
+
+ /* Compile time check to make sure all fast path elements in a CL */
+ RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_txq, slow_path_start) >= 128);
+
+ if (tx_conf->tx_deferred_start) {
+ otx2_err("Tx deferred start is not supported");
+ goto fail;
+ }
+
+ /* Free memory prior to re-allocation if needed. */
+ if (eth_dev->data->tx_queues[sq] != NULL) {
+ otx2_nix_dbg("Freeing memory prior to re-allocation %d", sq);
+ otx2_nix_tx_queue_release(eth_dev->data->tx_queues[sq]);
+ eth_dev->data->tx_queues[sq] = NULL;
+ }
+
+ /* Find the expected offloads for this queue */
+ offloads = tx_conf->offloads | eth_dev->data->dev_conf.txmode.offloads;
+
+ /* Allocating tx queue data structure */
+ txq = rte_zmalloc_socket("otx2_ethdev TX queue", sizeof(*txq),
+ OTX2_ALIGN, socket_id);
+ if (txq == NULL) {
+ otx2_err("Failed to alloc txq=%d", sq);
+ rc = -ENOMEM;
+ goto fail;
+ }
+ txq->sq = sq;
+ txq->dev = dev;
+ txq->sqb_pool = NULL;
+ txq->offloads = offloads;
+ dev->tx_offloads |= offloads;
+
+ /*
+ * Allocate memory for flow control updates from HW.
+ * Alloc one cache line, so that fits all FC_STYPE modes.
+ */
+ fc = rte_eth_dma_zone_reserve(eth_dev, "fcmem", sq,
+ OTX2_ALIGN + sizeof(struct npa_aura_s),
+ OTX2_ALIGN, dev->node);
+ if (fc == NULL) {
+ otx2_err("Failed to allocate mem for fcmem");
+ rc = -ENOMEM;
+ goto free_txq;
+ }
+ txq->fc_iova = fc->iova;
+ txq->fc_mem = fc->addr;
+
+ /* Initialize the aura sqb pool */
+ rc = nix_alloc_sqb_pool(eth_dev->data->port_id, txq, nb_desc);
+ if (rc) {
+ otx2_err("Failed to alloc sqe pool rc=%d", rc);
+ goto free_txq;
+ }
+
+ /* Initialize the SQ */
+ rc = nix_sq_init(txq);
+ if (rc) {
+ otx2_err("Failed to init sq=%d context", sq);
+ goto free_txq;
+ }
+
+ txq->fc_cache_pkts = 0;
+ txq->io_addr = dev->base + NIX_LF_OP_SENDX(0);
+ /* Evenly distribute LMT slot for each sq */
+ txq->lmt_addr = (void *)(dev->lmt_addr + ((sq & LMT_SLOT_MASK) << 12));
+
+ txq->qconf.socket_id = socket_id;
+ txq->qconf.nb_desc = nb_desc;
+ memcpy(&txq->qconf.conf.tx, tx_conf, sizeof(struct rte_eth_txconf));
+
+ otx2_nix_form_default_desc(txq);
+
+ otx2_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " sqb=0x%" PRIx64 ""
+ " lmt_addr=%p nb_sqb_bufs=%d sqes_per_sqb_log2=%d", sq,
+ fc->addr, offloads, txq->sqb_pool->pool_id, txq->lmt_addr,
+ txq->nb_sqb_bufs, txq->sqes_per_sqb_log2);
+ eth_dev->data->tx_queues[sq] = txq;
+ eth_dev->data->tx_queue_state[sq] = RTE_ETH_QUEUE_STATE_STOPPED;
+ return 0;
+
+free_txq:
+ otx2_nix_tx_queue_release(txq);
+fail:
+ return rc;
+}
+
+
static int
otx2_nix_configure(struct rte_eth_dev *eth_dev)
{
@@ -549,6 +915,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
.link_update = otx2_nix_link_update,
+ .tx_queue_setup = otx2_nix_tx_queue_setup,
+ .tx_queue_release = otx2_nix_tx_queue_release,
.rx_queue_setup = otx2_nix_rx_queue_setup,
.rx_queue_release = otx2_nix_rx_queue_release,
.stats_get = otx2_nix_dev_stats_get,
@@ -763,12 +1131,26 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
struct rte_pci_device *pci_dev;
- int rc;
+ int rc, i;
/* Nothing to be done for secondary processes */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Free up SQs */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
+ eth_dev->data->tx_queues[i] = NULL;
+ }
+ eth_dev->data->nb_tx_queues = 0;
+
+ /* Free up RQ's and CQ's */
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ otx2_nix_rx_queue_release(eth_dev->data->rx_queues[i]);
+ eth_dev->data->rx_queues[i] = NULL;
+ }
+ eth_dev->data->nb_rx_queues = 0;
+
/* Unregister queue irqs */
oxt2_nix_unregister_queue_irqs(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 562724b4e..4ec950100 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -19,6 +19,7 @@
#include "otx2_irq.h"
#include "otx2_mempool.h"
#include "otx2_rx.h"
+#include "otx2_tx.h"
#define OTX2_ETH_DEV_PMD_VERSION "1.0"
@@ -54,6 +55,8 @@
#define NIX_RX_NB_SEG_MAX 6
#define NIX_CQ_ENTRY_SZ 128
#define NIX_CQ_ALIGN 512
+#define NIX_SQB_LOWER_THRESH 90
+#define LMT_SLOT_MASK 0x7f
/* If PTP is enabled additional SEND MEM DESC is required which
* takes 2 words, hence max 7 iova address are possible
@@ -185,6 +188,24 @@ struct otx2_eth_dev {
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
+struct otx2_eth_txq {
+ uint64_t cmd[8];
+ int64_t fc_cache_pkts;
+ uint64_t *fc_mem;
+ void *lmt_addr;
+ rte_iova_t io_addr;
+ rte_iova_t fc_iova;
+ uint16_t sqes_per_sqb_log2;
+ int16_t nb_sqb_bufs_adj;
+ MARKER slow_path_start;
+ uint16_t nb_sqb_bufs;
+ uint16_t sq;
+ uint64_t offloads;
+ struct otx2_eth_dev *dev;
+ struct rte_mempool *sqb_pool;
+ struct otx2_eth_qconf qconf;
+} __rte_cache_aligned;
+
struct otx2_eth_rxq {
uint64_t mbuf_initializer;
uint64_t data_off;
@@ -310,4 +331,7 @@ int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
struct otx2_eth_dev *dev);
+/* Rx and Tx routines */
+void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
new file mode 100644
index 000000000..4d0993f87
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_TX_H__
+#define __OTX2_TX_H__
+
+#define NIX_TX_OFFLOAD_NONE (0)
+#define NIX_TX_OFFLOAD_L3_L4_CSUM_F BIT(0)
+#define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1)
+#define NIX_TX_OFFLOAD_VLAN_QINQ_F BIT(2)
+#define NIX_TX_OFFLOAD_MBUF_NOFF_F BIT(3)
+#define NIX_TX_OFFLOAD_TSTAMP_F BIT(4)
+
+/* Flag to control the xmit_prepare function.
+ * Defined from the last bit backwards to denote that it is
+ * not used as an offload flag to pick the burst function.
+ */
+#define NIX_TX_MULTI_SEG_F BIT(15)
+
+#define NIX_TX_NEED_SEND_HDR_W1 \
+ (NIX_TX_OFFLOAD_L3_L4_CSUM_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F | \
+ NIX_TX_OFFLOAD_VLAN_QINQ_F)
+
+#define NIX_TX_NEED_EXT_HDR \
+ (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F)
+
+#endif /* __OTX2_TX_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 19/58] net/octeontx2: handle port reconfigure
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (17 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 18/58] net/octeontx2: add Tx " jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 20/58] net/octeontx2: add queue start and stop operations jerinj
` (40 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Set up Tx and Rx queues with the previous configuration during
port reconfiguration. This handles cases where the port is
reconfigured without setting up the Tx and Rx queues again.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
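For reference, a sketch of the application-side sequence this patch
enables; the second configure is deliberately not followed by fresh
queue setup calls (conf, nb_rxq and nb_txq are assumed to be prepared by
the caller):

	#include <rte_ethdev.h>

	static int
	reconfigure_port(uint16_t port_id, const struct rte_eth_conf *conf,
			 uint16_t nb_rxq, uint16_t nb_txq)
	{
		int rc;

		rte_eth_dev_stop(port_id);
		rc = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, conf);
		if (rc)
			return rc;
		/* No rte_eth_rx/tx_queue_setup() calls here on purpose;
		 * the PMD restores the queues from the saved qconf.
		 */
		return rte_eth_dev_start(port_id);
	}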
---
drivers/net/octeontx2/otx2_ethdev.c | 180 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 2 +
2 files changed, 182 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index b501ba865..6e14e12f0 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -787,6 +787,172 @@ otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq,
return rc;
}
+static int
+nix_store_queue_cfg_and_then_release(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_eth_qconf *tx_qconf = NULL;
+ struct otx2_eth_qconf *rx_qconf = NULL;
+ struct otx2_eth_txq **txq;
+ struct otx2_eth_rxq **rxq;
+ int i, nb_rxq, nb_txq;
+
+ nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
+ nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
+
+ tx_qconf = malloc(nb_txq * sizeof(*tx_qconf));
+ if (tx_qconf == NULL) {
+ otx2_err("Failed to allocate memory for tx_qconf");
+ goto fail;
+ }
+
+ rx_qconf = malloc(nb_rxq * sizeof(*rx_qconf));
+ if (rx_qconf == NULL) {
+ otx2_err("Failed to allocate memory for rx_qconf");
+ goto fail;
+ }
+
+ txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
+ for (i = 0; i < nb_txq; i++) {
+ if (txq[i] == NULL) {
+ otx2_err("txq[%d] is already released", i);
+ goto fail;
+ }
+ memcpy(&tx_qconf[i], &txq[i]->qconf, sizeof(*tx_qconf));
+ otx2_nix_tx_queue_release(txq[i]);
+ eth_dev->data->tx_queues[i] = NULL;
+ }
+
+ rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues;
+ for (i = 0; i < nb_rxq; i++) {
+ if (rxq[i] == NULL) {
+ otx2_err("rxq[%d] is already released", i);
+ goto fail;
+ }
+ memcpy(&rx_qconf[i], &rxq[i]->qconf, sizeof(*rx_qconf));
+ otx2_nix_rx_queue_release(rxq[i]);
+ eth_dev->data->rx_queues[i] = NULL;
+ }
+
+ dev->tx_qconf = tx_qconf;
+ dev->rx_qconf = rx_qconf;
+ return 0;
+
+fail:
+ if (tx_qconf)
+ free(tx_qconf);
+ if (rx_qconf)
+ free(rx_qconf);
+
+ return -ENOMEM;
+}
+
+static int
+nix_restore_queue_cfg(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_eth_qconf *tx_qconf = dev->tx_qconf;
+ struct otx2_eth_qconf *rx_qconf = dev->rx_qconf;
+ struct otx2_eth_txq **txq;
+ struct otx2_eth_rxq **rxq;
+ int rc, i, nb_rxq, nb_txq;
+
+ nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
+ nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
+
+ rc = -ENOMEM;
+ /* Set up Tx & Rx queues with the previous configuration so
+ * that the queues can remain functional in cases where ports
+ * are started without reconfiguring the queues.
+ *
+ * The usual reconfig sequence looks like this:
+ * port_configure() {
+ * if(reconfigure) {
+ * queue_release()
+ * queue_setup()
+ * }
+ * queue_configure() {
+ * queue_release()
+ * queue_setup()
+ * }
+ * }
+ * port_start()
+ *
+ * In some applications' control paths, queue_configure() is
+ * NOT invoked for TXQs/RXQs during port_configure().
+ * In such cases, restoring the saved queue configuration here
+ * keeps the queues functional after start.
+ */
+ for (i = 0; i < nb_txq; i++) {
+ rc = otx2_nix_tx_queue_setup(eth_dev, i, tx_qconf[i].nb_desc,
+ tx_qconf[i].socket_id,
+ &tx_qconf[i].conf.tx);
+ if (rc) {
+ otx2_err("Failed to setup tx queue rc=%d", rc);
+ txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
+ for (i -= 1; i >= 0; i--)
+ otx2_nix_tx_queue_release(txq[i]);
+ goto fail;
+ }
+ }
+
+ free(tx_qconf); tx_qconf = NULL;
+
+ for (i = 0; i < nb_rxq; i++) {
+ rc = otx2_nix_rx_queue_setup(eth_dev, i, rx_qconf[i].nb_desc,
+ rx_qconf[i].socket_id,
+ &rx_qconf[i].conf.rx,
+ rx_qconf[i].mempool);
+ if (rc) {
+ otx2_err("Failed to setup rx queue rc=%d", rc);
+ rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues;
+ for (i -= 1; i >= 0; i--)
+ otx2_nix_rx_queue_release(rxq[i]);
+ goto release_tx_queues;
+ }
+ }
+
+ free(rx_qconf); rx_qconf = NULL;
+
+ return 0;
+
+release_tx_queues:
+ txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+ otx2_nix_tx_queue_release(txq[i]);
+fail:
+ if (tx_qconf)
+ free(tx_qconf);
+ if (rx_qconf)
+ free(rx_qconf);
+
+ return rc;
+}
+
+static uint16_t
+nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
+{
+ RTE_SET_USED(queue);
+ RTE_SET_USED(mbufs);
+ RTE_SET_USED(pkts);
+
+ return 0;
+}
+
+static void
+nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
+{
+ /* These dummy functions are required to support
+ * applications which reconfigure queues without
+ * stopping the Tx and Rx burst threads (e.g. the KNI app).
+ * When the queue context is saved, the txq/rxq structures are
+ * released, which would crash the application if Rx/Tx burst
+ * were still running on other lcores.
+ */
+ eth_dev->tx_pkt_burst = nix_eth_nop_burst;
+ eth_dev->rx_pkt_burst = nix_eth_nop_burst;
+ rte_mb();
+}
static int
otx2_nix_configure(struct rte_eth_dev *eth_dev)
@@ -843,6 +1009,10 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
/* Free the resources allocated from the previous configure */
if (dev->configured == 1) {
oxt2_nix_unregister_queue_irqs(eth_dev);
+ nix_set_nop_rxtx_function(eth_dev);
+ rc = nix_store_queue_cfg_and_then_release(eth_dev);
+ if (rc)
+ goto fail;
nix_lf_free(dev);
}
@@ -883,6 +1053,16 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /*
+ * Restore the queue configuration when a reconfigure follows an
+ * earlier configure and the application has not invoked queue setup.
+ */
+ if (dev->configured == 1) {
+ rc = nix_restore_queue_cfg(eth_dev);
+ if (rc)
+ goto free_nix_lf;
+ }
+
/* Update the mac address */
ea = eth_dev->data->mac_addrs;
memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 4ec950100..c0568dcd1 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -185,6 +185,8 @@ struct otx2_eth_dev {
uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
struct otx2_npc_flow_info npc_flow;
+ struct otx2_eth_qconf *tx_qconf;
+ struct otx2_eth_qconf *rx_qconf;
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 20/58] net/octeontx2: add queue start and stop operations
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (18 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 19/58] net/octeontx2: handle port reconfigure jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 21/58] net/octeontx2: introduce traffic manager jerinj
` (39 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add queue start and stop operations. The Tx queue also needs
to update the flow control value, which will be
added in a subsequent patch.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
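For reference, a minimal sketch of how an application drives these ops
on a started port (port_id and qid are placeholders):

	#include <rte_ethdev.h>

	static int
	toggle_rxq(uint16_t port_id, uint16_t qid)
	{
		int rc;

		/* Stop: the RQ is disabled in HW and incoming packets
		 * for it are silently dropped.
		 */
		rc = rte_eth_dev_rx_queue_stop(port_id, qid);
		if (rc)
			return rc;

		/* Start: the RQ is enabled again */
		return rte_eth_dev_rx_queue_start(port_id, qid);
	}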
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 92 ++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 2 +
5 files changed, 97 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 48ac58b3a..31816a183 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 6fc647af4..d79428652 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,7 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index af3c70269..d4deb52af 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -11,6 +11,7 @@ Lock-free Tx queue = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Queue start/stop = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 6e14e12f0..04a953441 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -252,6 +252,26 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
return rc;
}
+static int
+nix_rq_enb_dis(struct rte_eth_dev *eth_dev,
+ struct otx2_eth_rxq *rxq, const bool enb)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *aq;
+
+ /* Pkts will be dropped silently if RQ is disabled */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->rq.ena = enb;
+ aq->rq_mask.ena = ~(aq->rq_mask.ena);
+
+ return otx2_mbox_process(mbox);
+}
+
static int
nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
{
@@ -1090,6 +1110,74 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
return rc;
}
+int
+otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct rte_eth_dev_data *data = eth_dev->data;
+
+ if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+ return 0;
+
+ data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+ return 0;
+}
+
+int
+otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct rte_eth_dev_data *data = eth_dev->data;
+
+ if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+ return 0;
+
+ data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+ return 0;
+}
+
+static int
+otx2_nix_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
+ struct rte_eth_dev_data *data = eth_dev->data;
+ int rc;
+
+ if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+ return 0;
+
+ rc = nix_rq_enb_dis(rxq->eth_dev, rxq, true);
+ if (rc) {
+ otx2_err("Failed to enable rxq=%u, rc=%d", qidx, rc);
+ goto done;
+ }
+
+ data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+
+done:
+ return rc;
+}
+
+static int
+otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
+ struct rte_eth_dev_data *data = eth_dev->data;
+ int rc;
+
+ if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+ return 0;
+
+ rc = nix_rq_enb_dis(rxq->eth_dev, rxq, false);
+ if (rc) {
+ otx2_err("Failed to disable rxq=%u, rc=%d", qidx, rc);
+ goto done;
+ }
+
+ data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+done:
+ return rc;
+}
+
/* Initialize and register driver with DPDK Application */
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
@@ -1099,6 +1187,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.tx_queue_release = otx2_nix_tx_queue_release,
.rx_queue_setup = otx2_nix_rx_queue_setup,
.rx_queue_release = otx2_nix_rx_queue_release,
+ .tx_queue_start = otx2_nix_tx_queue_start,
+ .tx_queue_stop = otx2_nix_tx_queue_stop,
+ .rx_queue_start = otx2_nix_rx_queue_start,
+ .rx_queue_stop = otx2_nix_rx_queue_stop,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index c0568dcd1..7b8c7e1e5 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -246,6 +246,8 @@ void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
+int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
+int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
/* Link */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 21/58] net/octeontx2: introduce traffic manager
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (19 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 20/58] net/octeontx2: add queue start and stop operations jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 22/58] net/octeontx2: alloc and free TM HW resources jerinj
` (38 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Krzysztof Kanas
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Introduce the traffic manager infrastructure and default hierarchy
creation.
Upon ethdev configure, a default hierarchy is
created with one-to-one mapped TM nodes. This topology
is overridden when the user explicitly creates and commits
a new hierarchy through the rte_tm interface.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
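For reference, a sketch of how a user would override the default tree
through the rte_tm interface; the two-level shape and the node IDs below
are illustrative only, a real hierarchy would follow the levels reported
by rte_tm_capabilities_get():

	#include <string.h>
	#include <rte_tm.h>

	static int
	commit_user_tree(uint16_t port_id, uint16_t nb_txq)
	{
		struct rte_tm_node_params np;
		struct rte_tm_error err;
		uint32_t root = 1000; /* arbitrary non-leaf node id */
		uint16_t q;
		int rc;

		memset(&np, 0, sizeof(np));
		np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;

		rc = rte_tm_node_add(port_id, root, RTE_TM_NODE_ID_NULL,
				     0, 1, 0 /* level */, &np, &err);
		if (rc)
			return rc;

		/* One leaf per Tx queue, all in one round-robin group */
		for (q = 0; q < nb_txq; q++) {
			rc = rte_tm_node_add(port_id, q, root, 0, 1,
					     1 /* level */, &np, &err);
			if (rc)
				return rc;
		}

		return rte_tm_hierarchy_commit(port_id,
					       1 /* clear_on_fail */, &err);
	}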
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 16 ++
drivers/net/octeontx2/otx2_ethdev.h | 14 ++
drivers/net/octeontx2/otx2_tm.c | 252 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_tm.h | 67 ++++++++
6 files changed, 351 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_tm.c
create mode 100644 drivers/net/octeontx2/otx2_tm.h
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 67352ec81..cf2ba0e0e 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -30,6 +30,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_tm.c \
otx2_rss.c \
otx2_mac.c \
otx2_link.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index b7e56e2ca..14e8e78f8 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files(
+ 'otx2_tm.c',
'otx2_rss.c',
'otx2_mac.c',
'otx2_link.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 04a953441..2808058a8 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1033,6 +1033,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
rc = nix_store_queue_cfg_and_then_release(eth_dev);
if (rc)
goto fail;
+ otx2_nix_tm_fini(eth_dev);
nix_lf_free(dev);
}
@@ -1066,6 +1067,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /* Init the default TM scheduler hierarchy */
+ rc = otx2_nix_tm_init_default(eth_dev);
+ if (rc) {
+ otx2_err("Failed to init traffic manager rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/* Register queue IRQs */
rc = oxt2_nix_register_queue_irqs(eth_dev);
if (rc) {
@@ -1368,6 +1376,9 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
/* Also sync same MAC address to CGX table */
otx2_cgx_mac_addr_set(eth_dev, &eth_dev->data->mac_addrs[0]);
+ /* Initialize the tm data structures */
+ otx2_nix_tm_conf_init(eth_dev);
+
dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
dev->rx_offload_capa = nix_get_rx_offload_capa(dev);
@@ -1423,6 +1434,11 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
}
eth_dev->data->nb_rx_queues = 0;
+ /* Free tm resources */
+ rc = otx2_nix_tm_fini(eth_dev);
+ if (rc)
+ otx2_err("Failed to cleanup tm, rc=%d", rc);
+
/* Unregister queue irqs */
oxt2_nix_unregister_queue_irqs(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7b8c7e1e5..b2b7d4186 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -19,6 +19,7 @@
#include "otx2_irq.h"
#include "otx2_mempool.h"
#include "otx2_rx.h"
+#include "otx2_tm.h"
#include "otx2_tx.h"
#define OTX2_ETH_DEV_PMD_VERSION "1.0"
@@ -181,6 +182,19 @@ struct otx2_eth_dev {
uint64_t rx_offload_capa;
uint64_t tx_offload_capa;
struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
+ uint16_t txschq[NIX_TXSCH_LVL_CNT];
+ uint16_t txschq_contig[NIX_TXSCH_LVL_CNT];
+ uint16_t txschq_index[NIX_TXSCH_LVL_CNT];
+ uint16_t txschq_contig_index[NIX_TXSCH_LVL_CNT];
+ /* Dis-contiguous queues */
+ uint16_t txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+ /* Contiguous queues */
+ uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+ uint16_t otx2_tm_root_lvl;
+ uint16_t tm_flags;
+ uint16_t tm_leaf_cnt;
+ struct otx2_nix_tm_node_list node_list;
+ struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
struct otx2_rss_info rss_info;
uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
new file mode 100644
index 000000000..bc0474242
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -0,0 +1,252 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_malloc.h>
+
+#include "otx2_ethdev.h"
+#include "otx2_tm.h"
+
+/* Use last LVL_CNT nodes as default nodes */
+#define NIX_DEFAULT_NODE_ID_START (RTE_TM_NODE_ID_NULL - NIX_TXSCH_LVL_CNT)
+
+enum otx2_tm_node_level {
+ OTX2_TM_LVL_ROOT = 0,
+ OTX2_TM_LVL_SCH1,
+ OTX2_TM_LVL_SCH2,
+ OTX2_TM_LVL_SCH3,
+ OTX2_TM_LVL_SCH4,
+ OTX2_TM_LVL_QUEUE,
+ OTX2_TM_LVL_MAX,
+};
+
+static bool
+nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
+{
+ bool is_lbk = otx2_dev_is_lbk(dev);
+ return otx2_dev_is_pf(dev) && !otx2_dev_is_A0(dev) &&
+ !is_lbk && !dev->maxvf;
+}
+
+static struct otx2_nix_tm_shaper_profile *
+nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
+{
+ struct otx2_nix_tm_shaper_profile *tm_shaper_profile;
+
+ TAILQ_FOREACH(tm_shaper_profile, &dev->shaper_profile_list, shaper) {
+ if (tm_shaper_profile->shaper_profile_id == shaper_id)
+ return tm_shaper_profile;
+ }
+ return NULL;
+}
+
+static struct otx2_nix_tm_node *
+nix_tm_node_search(struct otx2_eth_dev *dev,
+ uint32_t node_id, bool user)
+{
+ struct otx2_nix_tm_node *tm_node;
+
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (tm_node->id == node_id &&
+ (user == !!(tm_node->flags & NIX_TM_NODE_USER)))
+ return tm_node;
+ }
+ return NULL;
+}
+
+static int
+nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint16_t hw_lvl_id,
+ uint16_t level_id, bool user,
+ struct rte_tm_node_params *params)
+{
+ struct otx2_nix_tm_shaper_profile *shaper_profile;
+ struct otx2_nix_tm_node *tm_node, *parent_node;
+ uint32_t shaper_profile_id;
+
+ shaper_profile_id = params->shaper_profile_id;
+ shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
+
+ parent_node = nix_tm_node_search(dev, parent_node_id, user);
+
+ tm_node = rte_zmalloc("otx2_nix_tm_node",
+ sizeof(struct otx2_nix_tm_node), 0);
+ if (!tm_node)
+ return -ENOMEM;
+
+ tm_node->level_id = level_id;
+ tm_node->hw_lvl_id = hw_lvl_id;
+
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->rr_prio = 0xf;
+ tm_node->max_prio = UINT32_MAX;
+ tm_node->hw_id = UINT32_MAX;
+ tm_node->flags = 0;
+ if (user)
+ tm_node->flags = NIX_TM_NODE_USER;
+ rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
+
+ if (shaper_profile)
+ shaper_profile->reference_count++;
+ tm_node->parent = parent_node;
+ tm_node->parent_hw_id = UINT32_MAX;
+
+ TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node);
+
+ return 0;
+}
+
+static int
+nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
+{
+ struct otx2_nix_tm_shaper_profile *shaper_profile;
+
+ while ((shaper_profile = TAILQ_FIRST(&dev->shaper_profile_list))) {
+ if (shaper_profile->reference_count)
+ otx2_tm_dbg("Shaper profile %u has non zero references",
+ shaper_profile->shaper_profile_id);
+ TAILQ_REMOVE(&dev->shaper_profile_list, shaper_profile, shaper);
+ rte_free(shaper_profile);
+ }
+
+ return 0;
+}
+
+static int
+nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint32_t def = eth_dev->data->nb_tx_queues;
+ struct rte_tm_node_params params;
+ uint32_t leaf_parent, i;
+ int rc = 0;
+
+ /* Default params */
+ memset(&params, 0, sizeof(params));
+ params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
+
+ if (nix_tm_have_tl1_access(dev)) {
+ dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
+ rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL1,
+ OTX2_TM_LVL_ROOT, false, &params);
+ if (rc)
+ goto exit;
+ rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL2,
+ OTX2_TM_LVL_SCH1, false, &params);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL3,
+ OTX2_TM_LVL_SCH2, false, &params);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL4,
+ OTX2_TM_LVL_SCH3, false, &params);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 4, def + 3, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_SMQ,
+ OTX2_TM_LVL_SCH4, false, &params);
+ if (rc)
+ goto exit;
+
+ leaf_parent = def + 4;
+ } else {
+ dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
+ rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL2,
+ OTX2_TM_LVL_ROOT, false, &params);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL3,
+ OTX2_TM_LVL_SCH1, false, &params);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL4,
+ OTX2_TM_LVL_SCH2, false, &params);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_SMQ,
+ OTX2_TM_LVL_SCH3, false, &params);
+ if (rc)
+ goto exit;
+
+ leaf_parent = def + 3;
+ }
+
+ /* Add leaf nodes */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_CNT,
+ OTX2_TM_LVL_QUEUE, false, &params);
+ if (rc)
+ break;
+ }
+
+exit:
+ return rc;
+}
+
+void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ TAILQ_INIT(&dev->node_list);
+ TAILQ_INIT(&dev->shaper_profile_list);
+}
+
+int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
+ int rc;
+
+ /* Clear shaper profiles */
+ nix_tm_clear_shaper_profiles(dev);
+ dev->tm_flags = NIX_TM_DEFAULT_TREE;
+
+ rc = nix_tm_prepare_default_tree(eth_dev);
+ if (rc != 0)
+ return rc;
+
+ dev->tm_leaf_cnt = sq_cnt;
+
+ return 0;
+}
+
+int
+otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* Clear shaper profiles */
+ nix_tm_clear_shaper_profiles(dev);
+
+ dev->tm_flags = 0;
+ return 0;
+}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
new file mode 100644
index 000000000..94023fa99
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_TM_H__
+#define __OTX2_TM_H__
+
+#include <stdbool.h>
+
+#include <rte_tm_driver.h>
+
+#define NIX_TM_DEFAULT_TREE BIT_ULL(0)
+
+struct otx2_eth_dev;
+
+void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
+
+struct otx2_nix_tm_node {
+ TAILQ_ENTRY(otx2_nix_tm_node) node;
+ uint32_t id;
+ uint32_t hw_id;
+ uint32_t priority;
+ uint32_t weight;
+ uint16_t level_id;
+ uint16_t hw_lvl_id;
+ uint32_t rr_prio;
+ uint32_t rr_num;
+ uint32_t max_prio;
+ uint32_t parent_hw_id;
+ uint32_t flags;
+#define NIX_TM_NODE_HWRES BIT_ULL(0)
+#define NIX_TM_NODE_ENABLED BIT_ULL(1)
+#define NIX_TM_NODE_USER BIT_ULL(2)
+ struct otx2_nix_tm_node *parent;
+ struct rte_tm_node_params params;
+};
+
+struct otx2_nix_tm_shaper_profile {
+ TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+struct shaper_params {
+ uint64_t burst_exponent;
+ uint64_t burst_mantissa;
+ uint64_t div_exp;
+ uint64_t exponent;
+ uint64_t mantissa;
+ uint64_t burst;
+ uint64_t rate;
+};
+
+TAILQ_HEAD(otx2_nix_tm_node_list, otx2_nix_tm_node);
+TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
+
+#define MAX_SCHED_WEIGHT ((uint8_t)~0)
+#define NIX_TM_RR_QUANTUM_MAX ((1 << 24) - 1)
+
+/* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT */
+/* = NIX_MAX_HW_MTU */
+#define DEFAULT_RR_WEIGHT 71
+
+#endif /* __OTX2_TM_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 22/58] net/octeontx2: alloc and free TM HW resources
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (20 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 21/58] net/octeontx2: introduce traffic manager jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 23/58] net/octeontx2: configure " jerinj
` (37 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Krzysztof Kanas
From: Krzysztof Kanas <kkanas@marvell.com>
Allocate and free shaper/scheduler hardware resources for
the nodes at each hierarchy level maintained in software.
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
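For reference, a worked miniature of the per-level counting done in
nix_tm_count_req_schq(): children that share a priority form one
round-robin group and may use discontiguous scheduler queues, while each
distinct strict priority needs a slot in a contiguous run (the struct
below is hypothetical, for illustration only):

	struct parent_view {
		unsigned int rr_num;   /* children at the RR priority */
		unsigned int max_prio; /* highest strict priority in use */
	};

	static void
	count_schq(const struct parent_view *p,
		   unsigned int *schq, unsigned int *schq_contig)
	{
		/* Discontiguous queue IDs suffice for the RR group */
		*schq += p->rr_num;
		/* Priorities 0..max_prio must map to a contiguous run */
		*schq_contig += p->max_prio + 1;
	}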
---
drivers/net/octeontx2/otx2_tm.c | 350 ++++++++++++++++++++++++++++++++
1 file changed, 350 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index bc0474242..91f31df05 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -54,6 +54,69 @@ nix_tm_node_search(struct otx2_eth_dev *dev,
return NULL;
}
+static uint32_t
+check_rr(struct otx2_eth_dev *dev, uint32_t priority, uint32_t parent_id)
+{
+ struct otx2_nix_tm_node *tm_node;
+ uint32_t rr_num = 0;
+
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (!tm_node->parent)
+ continue;
+
+ if (!(tm_node->parent->id == parent_id))
+ continue;
+
+ if (tm_node->priority == priority)
+ rr_num++;
+ }
+ return rr_num;
+}
+
+static int
+nix_tm_update_parent_info(struct otx2_eth_dev *dev)
+{
+ struct otx2_nix_tm_node *tm_node_child;
+ struct otx2_nix_tm_node *tm_node;
+ struct otx2_nix_tm_node *parent;
+ uint32_t rr_num = 0;
+ uint32_t priority;
+
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (!tm_node->parent)
+ continue;
+ /* Count the group of children with the same priority, i.e. the RR group */
+ parent = tm_node->parent;
+ priority = tm_node->priority;
+ rr_num = check_rr(dev, priority, parent->id);
+
+ /* Assuming that multiple RR groups are
+ * not configured based on capability.
+ */
+ if (rr_num > 1) {
+ parent->rr_prio = priority;
+ parent->rr_num = rr_num;
+ }
+
+ /* Find out static priority children that are not in RR */
+ TAILQ_FOREACH(tm_node_child, &dev->node_list, node) {
+ if (!tm_node_child->parent)
+ continue;
+ if (parent->id != tm_node_child->parent->id)
+ continue;
+ if (parent->max_prio == UINT32_MAX &&
+ tm_node_child->priority != parent->rr_prio)
+ parent->max_prio = 0;
+
+ if (parent->max_prio < tm_node_child->priority &&
+ parent->rr_prio != tm_node_child->priority)
+ parent->max_prio = tm_node_child->priority;
+ }
+ }
+
+ return 0;
+}
+
static int
nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
uint32_t parent_node_id, uint32_t priority,
@@ -115,6 +178,274 @@ nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
return 0;
}
+static int
+nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
+ uint32_t flags, bool hw_only)
+{
+ struct otx2_nix_tm_shaper_profile *shaper_profile;
+ struct otx2_nix_tm_node *tm_node, *next_node;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_txsch_free_req *req;
+ uint32_t shaper_profile_id;
+ bool skip_node = false;
+ int rc = 0;
+
+ next_node = TAILQ_FIRST(&dev->node_list);
+ while (next_node) {
+ tm_node = next_node;
+ next_node = TAILQ_NEXT(tm_node, node);
+
+ /* Check for only requested nodes */
+ if ((tm_node->flags & flags_mask) != flags)
+ continue;
+
+ if (nix_tm_have_tl1_access(dev) &&
+ tm_node->hw_lvl_id == NIX_TXSCH_LVL_TL1)
+ skip_node = true;
+
+ otx2_tm_dbg("Free hwres for node %u, hwlvl %u, hw_id %u (%p)",
+ tm_node->id, tm_node->hw_lvl_id,
+ tm_node->hw_id, tm_node);
+ /* Free specific HW resource if requested */
+ if (!skip_node && flags_mask &&
+ tm_node->flags & NIX_TM_NODE_HWRES) {
+ req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
+ req->flags = 0;
+ req->schq_lvl = tm_node->hw_lvl_id;
+ req->schq = tm_node->hw_id;
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ break;
+ } else {
+ skip_node = false;
+ }
+ tm_node->flags &= ~NIX_TM_NODE_HWRES;
+
+ /* Leave software elements if needed */
+ if (hw_only)
+ continue;
+
+ shaper_profile_id = tm_node->params.shaper_profile_id;
+ shaper_profile =
+ nix_tm_shaper_profile_search(dev, shaper_profile_id);
+ if (shaper_profile)
+ shaper_profile->reference_count--;
+
+ TAILQ_REMOVE(&dev->node_list, tm_node, node);
+ rte_free(tm_node);
+ }
+
+ if (!flags_mask) {
+ /* Free all hw resources */
+ req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
+ req->flags = TXSCHQ_FREE_ALL;
+
+ return otx2_mbox_process(mbox);
+ }
+
+ return rc;
+}
+
+static uint8_t
+nix_tm_copy_rsp_to_dev(struct otx2_eth_dev *dev,
+ struct nix_txsch_alloc_rsp *rsp)
+{
+ uint16_t schq;
+ uint8_t lvl;
+
+ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+ for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++) {
+ dev->txschq_list[lvl][schq] = rsp->schq_list[lvl][schq];
+ dev->txschq_contig_list[lvl][schq] =
+ rsp->schq_contig_list[lvl][schq];
+ }
+
+ dev->txschq[lvl] = rsp->schq[lvl];
+ dev->txschq_contig[lvl] = rsp->schq_contig[lvl];
+ }
+ return 0;
+}
+
+static int
+nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
+ struct otx2_nix_tm_node *child,
+ struct otx2_nix_tm_node *parent)
+{
+ uint32_t hw_id, schq_con_index, prio_offset;
+ uint32_t l_id, schq_index;
+
+ otx2_tm_dbg("Assign hw id for child node %u, lvl %u, hw_lvl %u (%p)",
+ child->id, child->level_id, child->hw_lvl_id, child);
+
+ child->flags |= NIX_TM_NODE_HWRES;
+
+ /* Process root nodes */
+ if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
+ child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) {
+ int idx = 0;
+ uint32_t tschq_con_index;
+
+ l_id = child->hw_lvl_id;
+ tschq_con_index = dev->txschq_contig_index[l_id];
+ hw_id = dev->txschq_contig_list[l_id][tschq_con_index];
+ child->hw_id = hw_id;
+ dev->txschq_contig_index[l_id]++;
+ /* Update TL1 hw_id for its parent for config purpose */
+ idx = dev->txschq_index[NIX_TXSCH_LVL_TL1]++;
+ hw_id = dev->txschq_list[NIX_TXSCH_LVL_TL1][idx];
+ child->parent_hw_id = hw_id;
+ return 0;
+ }
+ if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL1 &&
+ child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) {
+ uint32_t tschq_con_index;
+
+ l_id = child->hw_lvl_id;
+ tschq_con_index = dev->txschq_index[l_id];
+ hw_id = dev->txschq_list[l_id][tschq_con_index];
+ child->hw_id = hw_id;
+ dev->txschq_index[l_id]++;
+ return 0;
+ }
+
+ /* Process children with parents */
+ l_id = child->hw_lvl_id;
+ schq_index = dev->txschq_index[l_id];
+ schq_con_index = dev->txschq_contig_index[l_id];
+
+ if (child->priority == parent->rr_prio) {
+ hw_id = dev->txschq_list[l_id][schq_index];
+ child->hw_id = hw_id;
+ child->parent_hw_id = parent->hw_id;
+ dev->txschq_index[l_id]++;
+ } else {
+ prio_offset = schq_con_index + child->priority;
+ hw_id = dev->txschq_contig_list[l_id][prio_offset];
+ child->hw_id = hw_id;
+ }
+ return 0;
+}
+
+static int
+nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
+{
+ struct otx2_nix_tm_node *parent, *child;
+ uint32_t child_hw_lvl, con_index_inc, i;
+
+ for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) {
+ TAILQ_FOREACH(parent, &dev->node_list, node) {
+ child_hw_lvl = parent->hw_lvl_id - 1;
+ if (parent->hw_lvl_id != i)
+ continue;
+ TAILQ_FOREACH(child, &dev->node_list, node) {
+ if (!child->parent)
+ continue;
+ if (child->parent->id != parent->id)
+ continue;
+ nix_tm_assign_id_to_node(dev, child, parent);
+ }
+
+ con_index_inc = parent->max_prio + 1;
+ dev->txschq_contig_index[child_hw_lvl] += con_index_inc;
+
+ /*
+ * Explicitly assign an id to the parent node if it has
+ * no parent itself (i.e. it is a root node)
+ */
+ if (parent->hw_lvl_id == dev->otx2_tm_root_lvl)
+ nix_tm_assign_id_to_node(dev, parent, NULL);
+ }
+ }
+ return 0;
+}
+
+static uint8_t
+nix_tm_count_req_schq(struct otx2_eth_dev *dev,
+ struct nix_txsch_alloc_req *req, uint8_t lvl)
+{
+ struct otx2_nix_tm_node *tm_node;
+ uint8_t contig_count;
+
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (lvl == tm_node->hw_lvl_id) {
+ req->schq[lvl - 1] += tm_node->rr_num;
+ if (tm_node->max_prio != UINT32_MAX) {
+ contig_count = tm_node->max_prio + 1;
+ req->schq_contig[lvl - 1] += contig_count;
+ }
+ }
+ if (lvl == dev->otx2_tm_root_lvl &&
+ dev->otx2_tm_root_lvl && lvl == NIX_TXSCH_LVL_TL2 &&
+ tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) {
+ req->schq_contig[dev->otx2_tm_root_lvl]++;
+ }
+ }
+
+ req->schq[NIX_TXSCH_LVL_TL1] = 1;
+ req->schq_contig[NIX_TXSCH_LVL_TL1] = 0;
+
+ return 0;
+}
+
+static int
+nix_tm_prepare_txschq_req(struct otx2_eth_dev *dev,
+ struct nix_txsch_alloc_req *req)
+{
+ uint8_t i;
+
+ for (i = NIX_TXSCH_LVL_TL1; i > 0; i--)
+ nix_tm_count_req_schq(dev, req, i);
+
+ for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) {
+ dev->txschq_index[i] = 0;
+ dev->txschq_contig_index[i] = 0;
+ }
+ return 0;
+}
+
+static int
+nix_tm_send_txsch_alloc_msg(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_txsch_alloc_req *req;
+ struct nix_txsch_alloc_rsp *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_nix_txsch_alloc(mbox);
+
+ rc = nix_tm_prepare_txschq_req(dev, req);
+ if (rc)
+ return rc;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ nix_tm_copy_rsp_to_dev(dev, rsp);
+
+ nix_tm_assign_hw_id(dev);
+ return 0;
+}
+
+static int
+nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc;
+
+ RTE_SET_USED(xmit_enable);
+
+ nix_tm_update_parent_info(dev);
+
+ rc = nix_tm_send_txsch_alloc_msg(dev);
+ if (rc) {
+ otx2_err("TM failed to alloc tm resources=%d", rc);
+ return rc;
+ }
+
+ return 0;
+}
+
static int
nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
{
@@ -226,6 +557,13 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
int rc;
+ /* Free up all resources already held */
+ rc = nix_tm_free_resources(dev, 0, 0, false);
+ if (rc) {
+ otx2_err("Failed to freeup existing resources,rc=%d", rc);
+ return rc;
+ }
+
/* Clear shaper profiles */
nix_tm_clear_shaper_profiles(dev);
dev->tm_flags = NIX_TM_DEFAULT_TREE;
@@ -234,6 +572,9 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
if (rc != 0)
return rc;
+ rc = nix_tm_alloc_resources(eth_dev, false);
+ if (rc != 0)
+ return rc;
dev->tm_leaf_cnt = sq_cnt;
return 0;
@@ -243,6 +584,15 @@ int
otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc;
+
+ /* Xmit is assumed to be disabled */
+ /* Free up resources already held */
+ rc = nix_tm_free_resources(dev, 0, 0, false);
+ if (rc) {
+ otx2_err("Failed to freeup existing resources,rc=%d", rc);
+ return rc;
+ }
/* Clear shaper profiles */
nix_tm_clear_shaper_profiles(dev);
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 23/58] net/octeontx2: configure TM HW resources
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (21 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 22/58] net/octeontx2: alloc and free TM HW resources jerinj
@ 2019-06-02 15:23 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 24/58] net/octeontx2: enable Tx through traffic manager jerinj
` (36 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:23 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Krzysztof Kanas
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
This patch sets up and configures the hierarchy in HW
nodes. Since all the registers are owned by the RVU AF,
register configuration is also done using mbox
communication.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
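For reference, the rate encoding used by the shaper registers can be
summarized as below; this mirrors the formula in shaper_rate_to_nix(),
and the helper is a sketch for illustration, not part of the driver:

	#include <stdint.h>

	/* rate = cclk_hz * ((256 + mantissa) << exponent)
	 *        / ((cclk_ticks << div_exp) * 256)
	 */
	static uint64_t
	nix_shaper_rate(uint64_t cclk_hz, uint64_t cclk_ticks,
			uint64_t exponent, uint64_t mantissa,
			uint64_t div_exp)
	{
		return (cclk_hz * ((256 + mantissa) << exponent)) /
		       ((cclk_ticks << div_exp) * 256);
	}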
---
drivers/net/octeontx2/otx2_tm.c | 504 ++++++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_tm.h | 82 ++++++
2 files changed, 586 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 91f31df05..463f90acd 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -20,6 +20,41 @@ enum otx2_tm_node_level {
OTX2_TM_LVL_MAX,
};
+static inline
+uint64_t shaper2regval(struct shaper_params *shaper)
+{
+ return (shaper->burst_exponent << 37) | (shaper->burst_mantissa << 29) |
+ (shaper->div_exp << 13) | (shaper->exponent << 9) |
+ (shaper->mantissa << 1);
+}
+
+static int
+nix_get_link(struct otx2_eth_dev *dev)
+{
+ int link = 13 /* SDP */;
+ uint16_t lmac_chan;
+ uint16_t map;
+
+ lmac_chan = dev->tx_chan_base;
+
+ /* CGX lmac link */
+ if (lmac_chan >= 0x800) {
+ map = lmac_chan & 0x7FF;
+ link = 4 * ((map >> 8) & 0xF) + ((map >> 4) & 0xF);
+ } else if (lmac_chan < 0x700) {
+ /* LBK channel */
+ link = 12;
+ }
+
+ return link;
+}
+
+static uint8_t
+nix_get_relchan(struct otx2_eth_dev *dev)
+{
+ return dev->tx_chan_base & 0xff;
+}
+
static bool
nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
{
@@ -28,6 +63,24 @@ nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
!is_lbk && !dev->maxvf;
}
+static int
+find_prio_anchor(struct otx2_eth_dev *dev, uint32_t node_id)
+{
+ struct otx2_nix_tm_node *child_node;
+
+ TAILQ_FOREACH(child_node, &dev->node_list, node) {
+ if (!child_node->parent)
+ continue;
+ if (!(child_node->parent->id == node_id))
+ continue;
+ if (child_node->priority == child_node->parent->rr_prio)
+ continue;
+ return child_node->hw_id - child_node->priority;
+ }
+ return 0;
+}
+
+
static struct otx2_nix_tm_shaper_profile *
nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
{
@@ -40,6 +93,451 @@ nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
return NULL;
}
+static inline uint64_t
+shaper_rate_to_nix(uint64_t cclk_hz, uint64_t cclk_ticks,
+ uint64_t value, uint64_t *exponent_p,
+ uint64_t *mantissa_p, uint64_t *div_exp_p)
+{
+ uint64_t div_exp, exponent, mantissa;
+
+ /* Boundary checks */
+ if (value < MIN_SHAPER_RATE(cclk_hz, cclk_ticks) ||
+ value > MAX_SHAPER_RATE(cclk_hz, cclk_ticks))
+ return 0;
+
+ if (value <= SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, 0)) {
+ /* Calculate rate div_exp and mantissa using
+ * the following formula:
+ *
+ * value = (cclk_hz * (256 + mantissa)
+ * / ((cclk_ticks << div_exp) * 256)
+ */
+ div_exp = 0;
+ exponent = 0;
+ mantissa = MAX_RATE_MANTISSA;
+
+ while (value < (cclk_hz / (cclk_ticks << div_exp)))
+ div_exp += 1;
+
+ while (value <
+ ((cclk_hz * (256 + mantissa)) /
+ ((cclk_ticks << div_exp) * 256)))
+ mantissa -= 1;
+ } else {
+ /* Calculate rate exponent and mantissa using
+ * the following formula:
+ *
+ * value = (cclk_hz * ((256 + mantissa) << exponent)
+ * / (cclk_ticks * 256)
+ *
+ */
+ div_exp = 0;
+ exponent = MAX_RATE_EXPONENT;
+ mantissa = MAX_RATE_MANTISSA;
+
+ while (value < (cclk_hz * (1 << exponent)) / cclk_ticks)
+ exponent -= 1;
+
+ while (value < (cclk_hz * ((256 + mantissa) << exponent)) /
+ (cclk_ticks * 256))
+ mantissa -= 1;
+ }
+
+ if (div_exp > MAX_RATE_DIV_EXP ||
+ exponent > MAX_RATE_EXPONENT || mantissa > MAX_RATE_MANTISSA)
+ return 0;
+
+ if (div_exp_p)
+ *div_exp_p = div_exp;
+ if (exponent_p)
+ *exponent_p = exponent;
+ if (mantissa_p)
+ *mantissa_p = mantissa;
+
+ /* Calculate real rate value */
+ return SHAPER_RATE(cclk_hz, cclk_ticks, exponent, mantissa, div_exp);
+}
+
+static inline uint64_t
+lx_shaper_rate_to_nix(uint64_t cclk_hz, uint32_t hw_lvl,
+ uint64_t value, uint64_t *exponent,
+ uint64_t *mantissa, uint64_t *div_exp)
+{
+ if (hw_lvl == NIX_TXSCH_LVL_TL1)
+ return shaper_rate_to_nix(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS,
+ value, exponent, mantissa, div_exp);
+ else
+ return shaper_rate_to_nix(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS,
+ value, exponent, mantissa, div_exp);
+}
+
+static inline uint64_t
+shaper_burst_to_nix(uint64_t value, uint64_t *exponent_p,
+ uint64_t *mantissa_p)
+{
+ uint64_t exponent, mantissa;
+
+ if (value < MIN_SHAPER_BURST || value > MAX_SHAPER_BURST)
+ return 0;
+
+ /* Calculate burst exponent and mantissa using
+ * the following formula:
+ *
+ * value = (((256 + mantissa) << (exponent + 1)
+ / 256)
+ *
+ */
+ exponent = MAX_BURST_EXPONENT;
+ mantissa = MAX_BURST_MANTISSA;
+
+ while (value < (1ull << (exponent + 1)))
+ exponent -= 1;
+
+ while (value < ((256 + mantissa) << (exponent + 1)) / 256)
+ mantissa -= 1;
+
+ if (exponent > MAX_BURST_EXPONENT || mantissa > MAX_BURST_MANTISSA)
+ return 0;
+
+ if (exponent_p)
+ *exponent_p = exponent;
+ if (mantissa_p)
+ *mantissa_p = mantissa;
+
+ return SHAPER_BURST(exponent, mantissa);
+}
+
+static int
+configure_shaper_cir_pir_reg(struct otx2_eth_dev *dev,
+ struct otx2_nix_tm_node *tm_node,
+ struct shaper_params *cir,
+ struct shaper_params *pir)
+{
+ uint32_t shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
+ struct otx2_nix_tm_shaper_profile *shaper_profile = NULL;
+ struct rte_tm_shaper_params *param;
+
+ shaper_profile_id = tm_node->params.shaper_profile_id;
+
+ shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
+ if (shaper_profile) {
+ param = &shaper_profile->profile;
+ /* Calculate CIR exponent and mantissa */
+ if (param->committed.rate)
+ cir->rate = lx_shaper_rate_to_nix(CCLK_HZ,
+ tm_node->hw_lvl_id,
+ param->committed.rate,
+ &cir->exponent,
+ &cir->mantissa,
+ &cir->div_exp);
+
+ /* Calculate PIR exponent and mantissa */
+ if (param->peak.rate)
+ pir->rate = lx_shaper_rate_to_nix(CCLK_HZ,
+ tm_node->hw_lvl_id,
+ param->peak.rate,
+ &pir->exponent,
+ &pir->mantissa,
+ &pir->div_exp);
+
+ /* Calculate CIR burst exponent and mantissa */
+ if (param->committed.size)
+ cir->burst = shaper_burst_to_nix(param->committed.size,
+ &cir->burst_exponent,
+ &cir->burst_mantissa);
+
+ /* Calculate PIR burst exponent and mantissa */
+ if (param->peak.size)
+ pir->burst = shaper_burst_to_nix(param->peak.size,
+ &pir->burst_exponent,
+ &pir->burst_mantissa);
+ }
+
+ return 0;
+}
+
+static int
+send_tm_reqval(struct otx2_mbox *mbox, struct nix_txschq_config *req)
+{
+ int rc;
+
+ if (req->num_regs > MAX_REGS_PER_MBOX_MSG)
+ return -ERANGE;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ req->num_regs = 0;
+ return 0;
+}
+
+static int
+populate_tm_registers(struct otx2_eth_dev *dev,
+ struct otx2_nix_tm_node *tm_node)
+{
+ uint64_t strict_sched_prio, rr_prio;
+ struct otx2_mbox *mbox = dev->mbox;
+ volatile uint64_t *reg, *regval;
+ uint64_t parent = 0, child = 0;
+ struct shaper_params cir, pir;
+ struct nix_txschq_config *req;
+ uint64_t rr_quantum;
+ uint32_t hw_lvl;
+ uint32_t schq;
+ int rc;
+
+ memset(&cir, 0, sizeof(cir));
+ memset(&pir, 0, sizeof(pir));
+
+ /* Skip leaf nodes */
+ if (tm_node->hw_lvl_id == NIX_TXSCH_LVL_CNT)
+ return 0;
+
+ /* The root node has no parent node; its parent hw id is stored directly */
+ if (tm_node->hw_lvl_id == dev->otx2_tm_root_lvl)
+ parent = tm_node->parent_hw_id;
+ else
+ parent = tm_node->parent->hw_id;
+
+ /* Configure default TL1 settings when TL2 is the root level */
+ if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
+ tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) {
+ schq = parent;
+ /*
+ * Default config for TL1.
+ * For VF this is always ignored.
+ */
+
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = NIX_TXSCH_LVL_TL1;
+
+ /* Set DWRR quantum */
+ req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
+ req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
+ req->num_regs++;
+
+ req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
+ req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
+ req->num_regs++;
+
+ req->reg[2] = NIX_AF_TL1X_CIR(schq);
+ req->regval[2] = 0;
+ req->num_regs++;
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ }
+
+ if (tm_node->hw_lvl_id != NIX_TXSCH_LVL_SMQ)
+ child = find_prio_anchor(dev, tm_node->id);
+
+ rr_prio = tm_node->rr_prio;
+ hw_lvl = tm_node->hw_lvl_id;
+ strict_sched_prio = tm_node->priority;
+ schq = tm_node->hw_id;
+ rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX) /
+ MAX_SCHED_WEIGHT;
+
+ configure_shaper_cir_pir_reg(dev, tm_node, &cir, &pir);
+
+ otx2_tm_dbg("Configure node %p, lvl %u hw_lvl %u, id %u, hw_id %u,"
+ "parent_hw_id %" PRIx64 ", pir %" PRIx64 ", cir %" PRIx64,
+ tm_node, tm_node->level_id, hw_lvl,
+ tm_node->id, schq, parent, pir.rate, cir.rate);
+
+ rc = -EFAULT;
+
+ switch (hw_lvl) {
+ case NIX_TXSCH_LVL_SMQ:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ reg = req->reg;
+ regval = req->regval;
+ req->num_regs = 0;
+
+ /* Set xoff which will be cleared later */
+ *reg++ = NIX_AF_SMQX_CFG(schq);
+ *regval++ = BIT_ULL(50) |
+ (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
+ req->num_regs++;
+ *reg++ = NIX_AF_MDQX_PARENT(schq);
+ *regval++ = parent << 16;
+ req->num_regs++;
+ *reg++ = NIX_AF_MDQX_SCHEDULE(schq);
+ *regval++ = (strict_sched_prio << 24) | rr_quantum;
+ req->num_regs++;
+ if (pir.rate && pir.burst) {
+ *reg++ = NIX_AF_MDQX_PIR(schq);
+ *regval++ = shaper2regval(&pir) | 1;
+ req->num_regs++;
+ }
+
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_MDQX_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ case NIX_TXSCH_LVL_TL4:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = 0;
+ reg = req->reg;
+ regval = req->regval;
+
+ *reg++ = NIX_AF_TL4X_PARENT(schq);
+ *regval++ = parent << 16;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL4X_TOPOLOGY(schq);
+ *regval++ = (child << 32) | (rr_prio << 1);
+ req->num_regs++;
+ *reg++ = NIX_AF_TL4X_SCHEDULE(schq);
+ *regval++ = (strict_sched_prio << 24) | rr_quantum;
+ req->num_regs++;
+ if (pir.rate && pir.burst) {
+ *reg++ = NIX_AF_TL4X_PIR(schq);
+ *regval++ = shaper2regval(&pir) | 1;
+ req->num_regs++;
+ }
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_TL4X_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ case NIX_TXSCH_LVL_TL3:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = 0;
+ reg = req->reg;
+ regval = req->regval;
+
+ *reg++ = NIX_AF_TL3X_PARENT(schq);
+ *regval++ = parent << 16;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL3X_TOPOLOGY(schq);
+ *regval++ = (child << 32) | (rr_prio << 1);
+ req->num_regs++;
+ *reg++ = NIX_AF_TL3X_SCHEDULE(schq);
+ *regval++ = (strict_sched_prio << 24) | rr_quantum;
+ req->num_regs++;
+ if (pir.rate && pir.burst) {
+ *reg++ = NIX_AF_TL3X_PIR(schq);
+ *regval++ = shaper2regval(&pir) | 1;
+ req->num_regs++;
+ }
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_TL3X_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ case NIX_TXSCH_LVL_TL2:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = 0;
+ reg = req->reg;
+ regval = req->regval;
+
+ *reg++ = NIX_AF_TL2X_PARENT(schq);
+ *regval++ = parent << 16;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL2X_TOPOLOGY(schq);
+ *regval++ = (child << 32) | (rr_prio << 1);
+ req->num_regs++;
+ *reg++ = NIX_AF_TL2X_SCHEDULE(schq);
+ if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2)
+ *regval++ = (1 << 24) | rr_quantum;
+ else
+ *regval++ = (strict_sched_prio << 24) | rr_quantum;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq, nix_get_link(dev));
+ *regval++ = BIT_ULL(12) | nix_get_relchan(dev);
+ req->num_regs++;
+ if (pir.rate && pir.burst) {
+ *reg++ = NIX_AF_TL2X_PIR(schq);
+ *regval++ = shaper2regval(&pir) | 1;
+ req->num_regs++;
+ }
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_TL2X_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ case NIX_TXSCH_LVL_TL1:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = 0;
+ reg = req->reg;
+ regval = req->regval;
+
+ *reg++ = NIX_AF_TL1X_SCHEDULE(schq);
+ *regval++ = rr_quantum;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL1X_TOPOLOGY(schq);
+ *regval++ = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
+ req->num_regs++;
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_TL1X_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ }
+
+ return 0;
+error:
+ otx2_err("Txschq cfg request failed for node %p, rc=%d", tm_node, rc);
+ return rc;
+}
+
+
+static int
+nix_tm_txsch_reg_config(struct otx2_eth_dev *dev)
+{
+ struct otx2_nix_tm_node *tm_node;
+ uint32_t lvl;
+ int rc = 0;
+
+ if (nix_get_link(dev) == 13)
+ return -EPERM;
+
+ for (lvl = 0; lvl < (uint32_t)dev->otx2_tm_root_lvl + 1; lvl++) {
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (tm_node->hw_lvl_id == lvl) {
+ rc = populate_tm_registers(dev, tm_node);
+ if (rc)
+ goto exit;
+ }
+ }
+ }
+exit:
+ return rc;
+}
+
static struct otx2_nix_tm_node *
nix_tm_node_search(struct otx2_eth_dev *dev,
uint32_t node_id, bool user)
@@ -443,6 +941,12 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
return rc;
}
+ rc = nix_tm_txsch_reg_config(dev);
+ if (rc) {
+ otx2_err("TM failed to configure sched registers=%d", rc);
+ return rc;
+ }
+
return 0;
}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 94023fa99..af1bb1862 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -64,4 +64,86 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
/* = NIX_MAX_HW_MTU */
#define DEFAULT_RR_WEIGHT 71
+/** NIX rate limits */
+#define MAX_RATE_DIV_EXP 12
+#define MAX_RATE_EXPONENT 0xf
+#define MAX_RATE_MANTISSA 0xff
+
+/** NIX rate limiter time-wheel resolution */
+#define L1_TIME_WHEEL_CCLK_TICKS 240
+#define LX_TIME_WHEEL_CCLK_TICKS 860
+
+#define CCLK_HZ 1000000000
+
+/* NIX rate calculation
+ * CCLK = coprocessor-clock frequency in Hz (CCLK_HZ)
+ * CCLK_TICKS = rate limiter time-wheel resolution
+ *
+ * PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA])
+ * << NIX_*_PIR[RATE_EXPONENT]) / 256
+ * PIR = (CCLK / (CCLK_TICKS << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
+ * * PIR_ADD
+ *
+ * CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA])
+ * << NIX_*_CIR[RATE_EXPONENT]) / 256
+ * CIR = (CCLK / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
+ * * CIR_ADD
+ */
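+/* Worked example (illustrative only): with CCLK = 10^9 Hz, CCLK_TICKS = 860
+ * and RATE_{MANTISSA,EXPONENT,DIVIDER_EXPONENT} all zero, the rate evaluates
+ * to (10^9 * 256) / (860 * 256) = 10^9 / 860, i.e. ~1.16e6.
+ */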
+#define SHAPER_RATE(cclk_hz, cclk_ticks, \
+ exponent, mantissa, div_exp) \
+ (((uint64_t)(cclk_hz) * ((256 + (mantissa)) << (exponent))) \
+ / (((cclk_ticks) << (div_exp)) * 256))
+
+#define L1_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \
+ SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS, \
+ exponent, mantissa, div_exp)
+
+#define LX_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \
+ SHAPER_RATE(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS, \
+ exponent, mantissa, div_exp)
+
+/* Shaper rate limits */
+#define MIN_SHAPER_RATE(cclk_hz, cclk_ticks) \
+ SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, MAX_RATE_DIV_EXP)
+
+#define MAX_SHAPER_RATE(cclk_hz, cclk_ticks) \
+ SHAPER_RATE(cclk_hz, cclk_ticks, MAX_RATE_EXPONENT, \
+ MAX_RATE_MANTISSA, 0)
+
+#define MIN_L1_SHAPER_RATE(cclk_hz) \
+ MIN_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS)
+
+#define MAX_L1_SHAPER_RATE(cclk_hz) \
+ MAX_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS)
+
+/** TM Shaper - low level operations */
+
+/** NIX burst limits */
+#define MAX_BURST_EXPONENT 0xf
+#define MAX_BURST_MANTISSA 0xff
+
+/* NIX burst calculation
+ * PIR_BURST = ((256 + NIX_*_PIR[BURST_MANTISSA])
+ * << (NIX_*_PIR[BURST_EXPONENT] + 1))
+ * / 256
+ *
+ * CIR_BURST = ((256 + NIX_*_CIR[BURST_MANTISSA])
+ * << (NIX_*_CIR[BURST_EXPONENT] + 1))
+ * / 256
+ */
+#define SHAPER_BURST(exponent, mantissa) \
+ (((256 + (mantissa)) << ((exponent) + 1)) / 256)
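+/* Illustrative bounds: SHAPER_BURST(0, 0) = (256 << 1) / 256 = 2 and
+ * SHAPER_BURST(0xf, 0xff) = (511 << 16) / 256 = 130816.
+ */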
+
+/** Shaper burst limits */
+#define MIN_SHAPER_BURST \
+ SHAPER_BURST(0, 0)
+
+#define MAX_SHAPER_BURST \
+ SHAPER_BURST(MAX_BURST_EXPONENT,\
+ MAX_BURST_MANTISSA)
+
+/* Default TL1 priority and Quantum from AF */
+#define TXSCH_TL1_DFLT_RR_QTM ((1 << 24) - 1)
+#define TXSCH_TL1_DFLT_RR_PRIO 1
+
#endif /* __OTX2_TM_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 24/58] net/octeontx2: enable Tx through traffic manager
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (22 preceding siblings ...)
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 23/58] net/octeontx2: configure " jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype support jerinj
` (35 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Krzysztof Kanas, Vamsi Attunuru
From: Krzysztof Kanas <kkanas@marvell.com>
This patch enables packet transmission through the traffic manager
hierarchy by clearing software XOFF on the nodes and linking Tx
queues to their corresponding leaf nodes.
It also adds support for starting and stopping Tx queues through
the traffic manager.
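For reference, a minimal application-side sketch (not part of this
patch; assumes a configured and started port, and toggle_txq is a
hypothetical helper) of how these callbacks are reached through the
generic ethdev API:

#include <stdbool.h>
#include <rte_ethdev.h>

/* Start or stop one Tx queue at runtime */
static int
toggle_txq(uint16_t port_id, uint16_t qid, bool up)
{
        if (up)
                return rte_eth_dev_tx_queue_start(port_id, qid);
        return rte_eth_dev_tx_queue_stop(port_id, qid);
}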
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 72 ++++++-
drivers/net/octeontx2/otx2_tm.c | 295 +++++++++++++++++++++++++++-
drivers/net/octeontx2/otx2_tm.h | 4 +
3 files changed, 366 insertions(+), 5 deletions(-)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 2808058a8..a269e1be6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -120,6 +120,32 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+int
+otx2_cgx_rxtx_start(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_start_rxtx(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
+int
+otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_stop_rxtx(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
static inline void
nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
{
@@ -461,16 +487,27 @@ nix_sq_init(struct otx2_eth_txq *txq)
struct otx2_eth_dev *dev = txq->dev;
struct otx2_mbox *mbox = dev->mbox;
struct nix_aq_enq_req *sq;
+ uint32_t rr_quantum;
+ uint16_t smq;
+ int rc;
if (txq->sqb_pool->pool_id == 0)
return -EINVAL;
+ rc = otx2_nix_tm_get_leaf_data(dev, txq->sq, &rr_quantum, &smq);
+ if (rc) {
+ otx2_err("Failed to get sq->smq(leaf node), rc=%d", rc);
+ return rc;
+ }
+
sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
sq->qidx = txq->sq;
sq->ctype = NIX_AQ_CTYPE_SQ;
sq->op = NIX_AQ_INSTOP_INIT;
sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq);
+ sq->sq.smq = smq;
+ sq->sq.smq_rr_quantum = rr_quantum;
sq->sq.default_chan = dev->tx_chan_base;
sq->sq.sqe_stype = NIX_STYPE_STF;
sq->sq.ena = 1;
@@ -697,6 +734,9 @@ otx2_nix_tx_queue_release(void *_txq)
otx2_nix_dbg("Releasing txq %u", txq->sq);
+ /* Flush and disable tm */
+ otx2_nix_tm_sw_xoff(txq, false);
+
/* Free sqb's and disable sq */
nix_sq_uninit(txq);
@@ -1122,24 +1162,52 @@ int
otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
{
struct rte_eth_dev_data *data = eth_dev->data;
+ struct otx2_eth_txq *txq;
+ int rc = -EINVAL;
+
+ txq = eth_dev->data->tx_queues[qidx];
if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
return 0;
+ rc = otx2_nix_sq_sqb_aura_fc(txq, true);
+ if (rc) {
+ otx2_err("Failed to enable sqb aura fc, txq=%u, rc=%d",
+ qidx, rc);
+ goto done;
+ }
+
data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
- return 0;
+
+done:
+ return rc;
}
int
otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
{
struct rte_eth_dev_data *data = eth_dev->data;
+ struct otx2_eth_txq *txq;
+ int rc;
+
+ txq = eth_dev->data->tx_queues[qidx];
if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
return 0;
+ txq->fc_cache_pkts = 0;
+
+ rc = otx2_nix_sq_sqb_aura_fc(txq, false);
+ if (rc) {
+ otx2_err("Failed to disable sqb aura fc, txq=%u, rc=%d",
+ qidx, rc);
+ goto done;
+ }
+
data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
- return 0;
+
+done:
+ return rc;
}
static int
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 463f90acd..4439389b8 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -676,6 +676,223 @@ nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
return 0;
}
+static int
+nix_smq_xoff(struct otx2_eth_dev *dev, uint16_t smq, bool enable)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_txschq_config *req;
+
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = NIX_TXSCH_LVL_SMQ;
+ req->num_regs = 1;
+
+ req->reg[0] = NIX_AF_SMQX_CFG(smq);
+ /* Unmodified fields */
+ req->regval[0] = (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
+
+ if (enable)
+ req->regval[0] |= BIT_ULL(50) | BIT_ULL(49);
+
+ return otx2_mbox_process(mbox);
+}
+
+int
+otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
+{
+ struct otx2_eth_txq *txq = __txq;
+ struct npa_aq_enq_req *req;
+ struct npa_aq_enq_rsp *rsp;
+ struct otx2_npa_lf *lf;
+ struct otx2_mbox *mbox;
+ uint64_t aura_handle;
+ int rc;
+
+ lf = otx2_npa_lf_obj_get();
+ if (!lf)
+ return -EFAULT;
+ mbox = lf->mbox;
+ /* Set/clear sqb aura fc_ena */
+ aura_handle = txq->sqb_pool->pool_id;
+ req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+
+ req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_WRITE;
+ /* Below is not needed for aura writes but AF driver needs it */
+ /* AF will translate to associated poolctx */
+ req->aura.pool_addr = req->aura_id;
+
+ req->aura.fc_ena = enable;
+ req->aura_mask.fc_ena = 1;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Read back npa aura ctx */
+ req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+
+ req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Init when enabled as there might be no triggers */
+ if (enable)
+ *(volatile uint64_t *)txq->fc_mem = rsp->aura.count;
+ else
+ *(volatile uint64_t *)txq->fc_mem = txq->nb_sqb_bufs;
+ /* Sync write barrier */
+ rte_wmb();
+
+ return 0;
+}
+
+static void
+nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
+{
+ uint16_t sqb_cnt, head_off, tail_off;
+ struct otx2_eth_dev *dev = txq->dev;
+ uint16_t sq = txq->sq;
+ uint64_t reg, val;
+ int64_t *regaddr;
+
+ while (true) {
+ reg = ((uint64_t)sq << 32);
+ regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, regaddr);
+
+ regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
+ val = otx2_atomic64_add_nosync(reg, regaddr);
+ sqb_cnt = val & 0xFFFF;
+ head_off = (val >> 20) & 0x3F;
+ tail_off = (val >> 28) & 0x3F;
+
+ /* SQ reached quiescent state */
+ if (sqb_cnt <= 1 && head_off == tail_off &&
+ (*txq->fc_mem == txq->nb_sqb_bufs)) {
+ break;
+ }
+
+ rte_pause();
+ }
+}
+
+int
+otx2_nix_tm_sw_xoff(void *__txq, bool dev_started)
+{
+ struct otx2_eth_txq *txq = __txq;
+ struct otx2_eth_dev *dev = txq->dev;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *req;
+ struct nix_aq_enq_rsp *rsp;
+ uint16_t smq;
+ int rc;
+
+ /* Get smq from sq */
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ req->qidx = txq->sq;
+ req->ctype = NIX_AQ_CTYPE_SQ;
+ req->op = NIX_AQ_INSTOP_READ;
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to get smq, rc=%d", rc);
+ return -EIO;
+ }
+
+ /* Check if sq is enabled */
+ if (!rsp->sq.ena)
+ return 0;
+
+ smq = rsp->sq.smq;
+
+ /* Enable CGX RXTX to drain pkts */
+ if (!dev_started) {
+ rc = otx2_cgx_rxtx_start(dev);
+ if (rc)
+ return rc;
+ }
+
+ rc = otx2_nix_sq_sqb_aura_fc(txq, false);
+ if (rc < 0) {
+ otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
+ goto cleanup;
+ }
+
+ /* Disable smq xoff in case it was enabled earlier */
+ rc = nix_smq_xoff(dev, smq, false);
+ if (rc) {
+ otx2_err("Failed to enable smq for sq %u, rc=%d", txq->sq, rc);
+ goto cleanup;
+ }
+
+ /* Wait for sq entries to be flushed */
+ nix_txq_flush_sq_spin(txq);
+
+ /* Flush and enable smq xoff */
+ rc = nix_smq_xoff(dev, smq, true);
+ if (rc) {
+ otx2_err("Failed to disable smq for sq %u, rc=%d", txq->sq, rc);
+ return rc;
+ }
+
+cleanup:
+ /* Restore cgx state */
+ if (!dev_started)
+ rc |= otx2_cgx_rxtx_stop(dev);
+
+ return rc;
+}
+
+static int
+nix_tm_sw_xon(struct otx2_eth_txq *txq,
+ uint16_t smq, uint32_t rr_quantum)
+{
+ struct otx2_eth_dev *dev = txq->dev;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *req;
+ int rc;
+
+ otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum %u",
+ txq->sq, txq->sq, rr_quantum);
+ /* Set smq from sq */
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ req->qidx = txq->sq;
+ req->ctype = NIX_AQ_CTYPE_SQ;
+ req->op = NIX_AQ_INSTOP_WRITE;
+ req->sq.smq = smq;
+ req->sq.smq_rr_quantum = rr_quantum;
+ req->sq_mask.smq = ~req->sq_mask.smq;
+ req->sq_mask.smq_rr_quantum = ~req->sq_mask.smq_rr_quantum;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to set smq, rc=%d", rc);
+ return -EIO;
+ }
+
+ /* Enable sqb_aura fc */
+ rc = otx2_nix_sq_sqb_aura_fc(txq, true);
+ if (rc < 0) {
+ otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
+ return rc;
+ }
+
+ /* Disable smq xoff */
+ rc = nix_smq_xoff(dev, smq, false);
+ if (rc) {
+ otx2_err("Failed to enable smq for sq %u", txq->sq);
+ return rc;
+ }
+
+ return 0;
+}
+
static int
nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
uint32_t flags, bool hw_only)
@@ -929,10 +1146,11 @@ static int
nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_nix_tm_node *tm_node;
+ uint16_t sq, smq, rr_quantum;
+ struct otx2_eth_txq *txq;
int rc;
- RTE_SET_USED(xmit_enable);
-
nix_tm_update_parent_info(dev);
rc = nix_tm_send_txsch_alloc_msg(dev);
@@ -947,7 +1165,43 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
return rc;
}
- return 0;
+ /* Enable xmit as all the topology is ready */
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (tm_node->flags & NIX_TM_NODE_ENABLED)
+ continue;
+
+ /* Enable xmit on sq */
+ if (tm_node->level_id != OTX2_TM_LVL_QUEUE) {
+ tm_node->flags |= NIX_TM_NODE_ENABLED;
+ continue;
+ }
+
+ /* Don't enable SMQ or mark the node as enabled */
+ if (!xmit_enable)
+ continue;
+
+ sq = tm_node->id;
+ if (sq >= eth_dev->data->nb_tx_queues) {
+ rc = -EFAULT;
+ break;
+ }
+
+ txq = eth_dev->data->tx_queues[sq];
+
+ smq = tm_node->parent->hw_id;
+ rr_quantum = (tm_node->weight *
+ NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT;
+
+ rc = nix_tm_sw_xon(txq, smq, rr_quantum);
+ if (rc)
+ break;
+ tm_node->flags |= NIX_TM_NODE_ENABLED;
+ }
+
+ if (rc)
+ otx2_err("TM failed to enable xmit on sq %u, rc=%d", sq, rc);
+
+ return rc;
}
static int
@@ -1104,3 +1358,38 @@ otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
dev->tm_flags = 0;
return 0;
}
+
+int
+otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
+ uint32_t *rr_quantum, uint16_t *smq)
+{
+ struct otx2_nix_tm_node *tm_node;
+ int rc;
+
+ /* 0..sq_cnt-1 are leaf nodes */
+ if (sq >= dev->tm_leaf_cnt)
+ return -EINVAL;
+
+ /* Search for internal node first */
+ tm_node = nix_tm_node_search(dev, sq, false);
+ if (!tm_node)
+ tm_node = nix_tm_node_search(dev, sq, true);
+
+ /* Check if we found a valid leaf node */
+ if (!tm_node || tm_node->level_id != OTX2_TM_LVL_QUEUE ||
+ !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
+ return -EIO;
+ }
+
+ /* Get SMQ Id of leaf node's parent */
+ *smq = tm_node->parent->hw_id;
+ *rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX)
+ / MAX_SCHED_WEIGHT;
+
+ rc = nix_smq_xoff(dev, *smq, false);
+ if (rc)
+ return rc;
+ tm_node->flags |= NIX_TM_NODE_ENABLED;
+
+ return 0;
+}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index af1bb1862..2a009eece 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -16,6 +16,10 @@ struct otx2_eth_dev;
void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
+ uint32_t *rr_quantum, uint16_t *smq);
+int otx2_nix_tm_sw_xoff(void *_txq, bool dev_started);
+int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
struct otx2_nix_tm_node {
TAILQ_ENTRY(otx2_nix_tm_node) node;
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype support
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (23 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 24/58] net/octeontx2: enable Tx through traffic manager jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-06 15:50 ` Ferruh Yigit
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 26/58] net/octeontx2: add link status set operations jerinj
` (34 subsequent siblings)
59 siblings, 1 reply; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
The fields from the CQE need to be converted to the packet type and
Rx offload flags in the mbuf. This patch creates the lookup memory
for those items so that it can be used in the fast path.
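As a reference, a minimal sketch (not part of this patch;
dump_l3_ptypes is a hypothetical helper) of how an application can
query the parsing capability exposed here through the generic
ethdev API:

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf_ptype.h>

/* Print the L3 ptypes the PMD reports as parseable */
static void
dump_l3_ptypes(uint16_t port_id)
{
        uint32_t ptypes[32];
        int i, num;

        num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_L3_MASK,
                                                ptypes, RTE_DIM(ptypes));
        for (i = 0; i < num; i++)
                printf("supported: %s\n", rte_get_ptype_l3_name(ptypes[i]));
}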
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 2 +
drivers/net/octeontx2/otx2_ethdev.h | 6 +
drivers/net/octeontx2/otx2_lookup.c | 279 ++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 7 +
.../octeontx2/rte_pmd_octeontx2_version.map | 3 +
10 files changed, 302 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_lookup.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 31816a183..221fc84d8 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -20,6 +20,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Packet type parsing = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index d79428652..e11327c7a 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -20,6 +20,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Packet type parsing = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index d4deb52af..b2115cea4 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -16,6 +16,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Packet type parsing = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index cf2ba0e0e..00f61c354 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -35,6 +35,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
otx2_link.c \
otx2_stats.c \
+ otx2_lookup.c \
otx2_ethdev.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 14e8e78f8..eb5206ea1 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -8,6 +8,7 @@ sources = files(
'otx2_mac.c',
'otx2_link.c',
'otx2_stats.c',
+ 'otx2_lookup.c',
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index a269e1be6..9fbade075 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -441,6 +441,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
rxq->pool = mp;
rxq->qlen = nix_qsize_to_val(qsize);
rxq->qsize = qsize;
+ rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get();
/* Alloc completion queue */
rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
@@ -1267,6 +1268,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.tx_queue_stop = otx2_nix_tx_queue_stop,
.rx_queue_start = otx2_nix_rx_queue_start,
.rx_queue_stop = otx2_nix_rx_queue_stop,
+ .dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index b2b7d4186..83d6b2dc2 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -335,6 +335,12 @@ int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
+/* Lookup configuration */
+void *otx2_nix_fastpath_lookup_mem_get(void);
+
+/* PTYPES */
+const uint32_t *otx2_nix_supported_ptypes_get(struct rte_eth_dev *dev);
+
/* Mac address handling */
int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c
new file mode 100644
index 000000000..025933efa
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_lookup.c
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_memzone.h>
+
+#include "otx2_ethdev.h"
+
+/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
+#define ERRCODE_ERRLEN_WIDTH 12
+#define ERR_ARRAY_SZ ((BIT(ERRCODE_ERRLEN_WIDTH)) *\
+ sizeof(uint32_t))
+
+#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ)
+
+const uint32_t *
+otx2_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER_QINQ, /* LB */
+ RTE_PTYPE_L2_ETHER_VLAN, /* LB */
+ RTE_PTYPE_L2_ETHER_TIMESYNC, /* LB */
+ RTE_PTYPE_L2_ETHER_ARP, /* LC */
+ RTE_PTYPE_L2_ETHER_NSH, /* LC */
+ RTE_PTYPE_L2_ETHER_FCOE, /* LC */
+ RTE_PTYPE_L2_ETHER_MPLS, /* LC */
+ RTE_PTYPE_L3_IPV4, /* LC */
+ RTE_PTYPE_L3_IPV4_EXT, /* LC */
+ RTE_PTYPE_L3_IPV6, /* LC */
+ RTE_PTYPE_L3_IPV6_EXT, /* LC */
+ RTE_PTYPE_L4_TCP, /* LD */
+ RTE_PTYPE_L4_UDP, /* LD */
+ RTE_PTYPE_L4_SCTP, /* LD */
+ RTE_PTYPE_L4_ICMP, /* LD */
+ RTE_PTYPE_L4_IGMP, /* LD */
+ RTE_PTYPE_TUNNEL_GRE, /* LD */
+ RTE_PTYPE_TUNNEL_ESP, /* LD */
+ RTE_PTYPE_INNER_L2_ETHER,/* LE */
+ RTE_PTYPE_INNER_L3_IPV4, /* LF */
+ RTE_PTYPE_INNER_L3_IPV6, /* LF */
+ RTE_PTYPE_INNER_L4_TCP, /* LG */
+ RTE_PTYPE_INNER_L4_UDP, /* LG */
+ RTE_PTYPE_INNER_L4_SCTP, /* LG */
+ RTE_PTYPE_INNER_L4_ICMP, /* LG */
+ };
+
+ if (dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)
+ return ptypes;
+ else
+ return NULL;
+}
+
+/*
+ * +-------------------+-------------------+
+ * |  | IL4 | IL3 | IL2 | TU | L4 | L3 | L2 |
+ * +-------------------+-------------------+
+ *
+ * +-------------------+-------------------+
+ * |  | LG | LF | LE | LD | LC | LB |  |
+ * +-------------------+-------------------+
+ *
+ * ptype       [LD - LC - LB] = TU  - L4  - L3  - L2
+ * ptype_tunnel[LG - LF - LE] = IL4 - IL3 - IL2 - TU
+ *
+ */
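+/* Illustrative mapping: an index built from lb = NPC_LT_LB_CTAG,
+ * lc = NPC_LT_LC_IP and ld = NPC_LT_LD_TCP resolves below to
+ * RTE_PTYPE_L2_ETHER_VLAN | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP.
+ */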
+static void
+nix_create_non_tunnel_ptype_array(uint16_t *ptype)
+{
+ uint8_t lb, lc, ld;
+ uint16_t idx, val;
+
+ for (idx = 0; idx < PTYPE_NON_TUNNEL_ARRAY_SZ; idx++) {
+ lb = idx & 0xF;
+ lc = (idx & 0xF0) >> 4;
+ ld = (idx & 0xF00) >> 8;
+ val = RTE_PTYPE_UNKNOWN;
+
+ switch (lb) {
+ case NPC_LT_LB_QINQ:
+ val |= RTE_PTYPE_L2_ETHER_QINQ;
+ break;
+ case NPC_LT_LB_CTAG:
+ val |= RTE_PTYPE_L2_ETHER_VLAN;
+ break;
+ }
+
+ switch (lc) {
+ case NPC_LT_LC_ARP:
+ val |= RTE_PTYPE_L2_ETHER_ARP;
+ break;
+ case NPC_LT_LC_NSH:
+ val |= RTE_PTYPE_L2_ETHER_NSH;
+ break;
+ case NPC_LT_LC_FCOE:
+ val |= RTE_PTYPE_L2_ETHER_FCOE;
+ break;
+ case NPC_LT_LC_MPLS:
+ val |= RTE_PTYPE_L2_ETHER_MPLS;
+ break;
+ case NPC_LT_LC_IP:
+ val |= RTE_PTYPE_L3_IPV4;
+ break;
+ case NPC_LT_LC_IP_OPT:
+ val |= RTE_PTYPE_L3_IPV4_EXT;
+ break;
+ case NPC_LT_LC_IP6:
+ val |= RTE_PTYPE_L3_IPV6;
+ break;
+ case NPC_LT_LC_IP6_EXT:
+ val |= RTE_PTYPE_L3_IPV6_EXT;
+ break;
+ case NPC_LT_LC_PTP:
+ val |= RTE_PTYPE_L2_ETHER_TIMESYNC;
+ break;
+ }
+
+ switch (ld) {
+ case NPC_LT_LD_TCP:
+ val |= RTE_PTYPE_L4_TCP;
+ break;
+ case NPC_LT_LD_UDP:
+ val |= RTE_PTYPE_L4_UDP;
+ break;
+ case NPC_LT_LD_SCTP:
+ val |= RTE_PTYPE_L4_SCTP;
+ break;
+ case NPC_LT_LD_ICMP:
+ val |= RTE_PTYPE_L4_ICMP;
+ break;
+ case NPC_LT_LD_IGMP:
+ val |= RTE_PTYPE_L4_IGMP;
+ break;
+ case NPC_LT_LD_GRE:
+ val |= RTE_PTYPE_TUNNEL_GRE;
+ break;
+ case NPC_LT_LD_ESP:
+ val |= RTE_PTYPE_TUNNEL_ESP;
+ break;
+ }
+ ptype[idx] = val;
+ }
+}
+
+#define TU_SHIFT(x) ((x) >> PTYPE_WIDTH)
+static void
+nix_create_tunnel_ptype_array(uint16_t *ptype)
+{
+ uint8_t le, lf, lg;
+ uint16_t idx, val;
+
+ /* Skip non tunnel ptype array memory */
+ ptype = ptype + PTYPE_NON_TUNNEL_ARRAY_SZ;
+
+ for (idx = 0; idx < PTYPE_TUNNEL_ARRAY_SZ; idx++) {
+ le = idx & 0xF;
+ lf = (idx & 0xF0) >> 4;
+ lg = (idx & 0xF00) >> 8;
+ val = RTE_PTYPE_UNKNOWN;
+
+ switch (le) {
+ case NPC_LT_LE_TU_ETHER:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L2_ETHER);
+ break;
+ }
+ switch (lf) {
+ case NPC_LT_LF_TU_IP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV4);
+ break;
+ case NPC_LT_LF_TU_IP6:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV6);
+ break;
+ }
+ switch (lg) {
+ case NPC_LT_LG_TU_TCP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L4_TCP);
+ break;
+ case NPC_LT_LG_TU_UDP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L4_UDP);
+ break;
+ case NPC_LT_LG_TU_SCTP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L4_SCTP);
+ break;
+ case NPC_LT_LG_TU_ICMP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L4_ICMP);
+ break;
+ }
+
+ ptype[idx] = val;
+ }
+}
+
+static void
+nix_create_rx_ol_flags_array(void *mem)
+{
+ uint16_t idx, errcode, errlev;
+ uint32_t val, *ol_flags;
+
+ /* Skip ptype array memory */
+ ol_flags = (uint32_t *)((uint8_t *)mem + PTYPE_ARRAY_SZ);
+
+ for (idx = 0; idx < BIT(ERRCODE_ERRLEN_WIDTH); idx++) {
+ errlev = idx & 0xf;
+ errcode = (idx & 0xff0) >> 4;
+
+ val = PKT_RX_IP_CKSUM_UNKNOWN;
+ val |= PKT_RX_L4_CKSUM_UNKNOWN;
+ val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+
+ switch (errlev) {
+ case NPC_ERRLEV_RE:
+ /* Mark all errors as BAD checksum errors */
+ if (errcode) {
+ val |= PKT_RX_IP_CKSUM_BAD;
+ val |= PKT_RX_L4_CKSUM_BAD;
+ } else {
+ val |= PKT_RX_IP_CKSUM_GOOD;
+ val |= PKT_RX_L4_CKSUM_GOOD;
+ }
+ break;
+ case NPC_ERRLEV_LC:
+ if (errcode == NPC_EC_OIP4_CSUM ||
+ errcode == NPC_EC_IP_FRAG_OFFSET_1) {
+ val |= PKT_RX_IP_CKSUM_BAD;
+ val |= PKT_RX_EIP_CKSUM_BAD;
+ } else {
+ val |= PKT_RX_IP_CKSUM_GOOD;
+ }
+ break;
+ case NPC_ERRLEV_LF:
+ if (errcode == NPC_EC_IIP4_CSUM)
+ val |= PKT_RX_IP_CKSUM_BAD;
+ else
+ val |= PKT_RX_IP_CKSUM_GOOD;
+ break;
+ case NPC_ERRLEV_NIX:
+ if (errcode == NIX_RX_PERRCODE_OL4_CHK) {
+ val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+ val |= PKT_RX_L4_CKSUM_BAD;
+ } else if (errcode == NIX_RX_PERRCODE_IL4_CHK) {
+ val |= PKT_RX_L4_CKSUM_BAD;
+ } else {
+ val |= PKT_RX_IP_CKSUM_GOOD;
+ val |= PKT_RX_L4_CKSUM_GOOD;
+ }
+ break;
+ }
+
+ ol_flags[idx] = val;
+ }
+}
+
+void *
+otx2_nix_fastpath_lookup_mem_get(void)
+{
+ const char name[] = "otx2_nix_fastpath_lookup_mem";
+ const struct rte_memzone *mz;
+ void *mem;
+
+ mz = rte_memzone_lookup(name);
+ if (mz != NULL)
+ return mz->addr;
+
+ /* Request for the first time */
+ mz = rte_memzone_reserve_aligned(name, LOOKUP_ARRAY_SZ,
+ SOCKET_ID_ANY, 0, OTX2_ALIGN);
+ if (mz != NULL) {
+ mem = mz->addr;
+ /* Form the ptype array lookup memory */
+ nix_create_non_tunnel_ptype_array(mem);
+ nix_create_tunnel_ptype_array(mem);
+ /* Form the rx ol_flags based on errcode */
+ nix_create_rx_ol_flags_array(mem);
+ return mem;
+ }
+ return NULL;
+}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 1749c43ff..1283fdf37 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -5,6 +5,13 @@
#ifndef __OTX2_RX_H__
#define __OTX2_RX_H__
+#define PTYPE_WIDTH 12
+#define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH)
+#define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH)
+#define PTYPE_ARRAY_SZ ((PTYPE_NON_TUNNEL_ARRAY_SZ +\
+ PTYPE_TUNNEL_ARRAY_SZ) *\
+ sizeof(uint16_t))
+
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
#endif /* __OTX2_RX_H__ */
diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
index fc8c95e91..3cfd37715 100644
--- a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
+++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
@@ -1,4 +1,7 @@
DPDK_19.05 {
+ global:
+
+ otx2_nix_fastpath_lookup_mem_get;
local: *;
};
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 26/58] net/octeontx2: add link status set operations
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (24 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype support jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 27/58] net/octeontx2: add queue info and pool supported operations jerinj
` (33 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add support for setting the link up and down.
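For context, a minimal sketch (not part of this patch; bounce_link is
a hypothetical helper) of exercising these operations from an
application; note the PMD returns -ENOTSUP for VFs:

#include <rte_ethdev.h>

/* Bounce the link, e.g. to force renegotiation */
static int
bounce_link(uint16_t port_id)
{
        int rc;

        rc = rte_eth_dev_set_link_down(port_id);
        if (rc)
                return rc;
        return rte_eth_dev_set_link_up(port_id);
}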
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 2 ++
drivers/net/octeontx2/otx2_ethdev.h | 2 ++
drivers/net/octeontx2/otx2_link.c | 49 +++++++++++++++++++++++++++++
3 files changed, 53 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 9fbade075..9ceeb6ffa 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1268,6 +1268,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.tx_queue_stop = otx2_nix_tx_queue_stop,
.rx_queue_start = otx2_nix_rx_queue_start,
.rx_queue_stop = otx2_nix_rx_queue_stop,
+ .dev_set_link_up = otx2_nix_dev_set_link_up,
+ .dev_set_link_down = otx2_nix_dev_set_link_down,
.dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 83d6b2dc2..7bd3e83e4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -269,6 +269,8 @@ void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
void otx2_eth_dev_link_status_update(struct otx2_dev *dev,
struct cgx_link_user_info *link);
+int otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev);
+int otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev);
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 228a0cd8e..8fcbdc9b7 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -106,3 +106,52 @@ otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
return rte_eth_linkstatus_set(eth_dev, &link);
}
+
+static int
+nix_dev_set_link_state(struct rte_eth_dev *eth_dev, uint8_t enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_set_link_state_msg *req;
+
+ req = otx2_mbox_alloc_msg_cgx_set_link_state(mbox);
+ req->enable = enable;
+ return otx2_mbox_process(mbox);
+}
+
+int
+otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc, i;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ rc = nix_dev_set_link_state(eth_dev, 1);
+ if (rc)
+ goto done;
+
+ /* Start tx queues */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+ otx2_nix_tx_queue_start(eth_dev, i);
+
+done:
+ return rc;
+}
+
+int
+otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int i;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ /* Stop tx queues */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+ otx2_nix_tx_queue_stop(eth_dev, i);
+
+ return nix_dev_set_link_state(eth_dev, 0);
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 27/58] net/octeontx2: add queue info and pool supported operations
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (25 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 26/58] net/octeontx2: add link status set operations jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 28/58] net/octeontx2: add Rx and Tx descriptor operations jerinj
` (32 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: ferruh.yigit
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add the Rx and Tx queue info get operations and the
pool ops supported check.
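A minimal usage sketch (not part of this patch; show_rxq is a
hypothetical helper) of the generic ethdev entry point these
callbacks serve:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print basic info about one Rx queue */
static void
show_rxq(uint16_t port_id, uint16_t qid)
{
        struct rte_eth_rxq_info qinfo;

        if (rte_eth_rx_queue_info_get(port_id, qid, &qinfo) == 0)
                printf("rxq %u: %u descs, mempool %s\n",
                       qid, qinfo.nb_desc, qinfo.mp->name);
}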
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 3 ++
drivers/net/octeontx2/otx2_ethdev.h | 5 +++
drivers/net/octeontx2/otx2_ethdev_ops.c | 51 +++++++++++++++++++++++++
3 files changed, 59 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 9ceeb6ffa..e9af48c8d 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1291,6 +1291,9 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.xstats_reset = otx2_nix_xstats_reset,
.xstats_get_by_id = otx2_nix_xstats_get_by_id,
.xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
+ .rxq_info_get = otx2_nix_rxq_info_get,
+ .txq_info_get = otx2_nix_txq_info_get,
+ .pool_ops_supported = otx2_nix_pool_ops_supported,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7bd3e83e4..594021285 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -254,6 +254,11 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
/* Ops */
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
+void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 77cfa2cec..95a5eb6ed 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -2,6 +2,8 @@
* Copyright(C) 2019 Marvell International Ltd.
*/
+#include <rte_mbuf_pool_ops.h>
+
#include "otx2_ethdev.h"
static void
@@ -86,6 +88,55 @@ otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev)
nix_allmulticast_config(eth_dev, 0);
}
+void
+otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct otx2_eth_rxq *rxq;
+
+ rxq = eth_dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->pool;
+ qinfo->scattered_rx = eth_dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->qconf.nb_desc;
+
+ qinfo->conf.rx_free_thresh = 0;
+ qinfo->conf.rx_drop_en = 0;
+ qinfo->conf.rx_deferred_start = 0;
+ qinfo->conf.offloads = rxq->offloads;
+}
+
+void
+otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct otx2_eth_txq *txq;
+
+ txq = eth_dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->qconf.nb_desc;
+
+ qinfo->conf.tx_thresh.pthresh = 0;
+ qinfo->conf.tx_thresh.hthresh = 0;
+ qinfo->conf.tx_thresh.wthresh = 0;
+
+ qinfo->conf.tx_free_thresh = 0;
+ qinfo->conf.tx_rs_thresh = 0;
+ qinfo->conf.offloads = txq->offloads;
+ qinfo->conf.tx_deferred_start = 0;
+}
+
+int
+otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
+{
+ RTE_SET_USED(eth_dev);
+
+ if (!strcmp(pool, rte_mbuf_platform_mempool_ops()))
+ return 0;
+
+ return -ENOTSUP;
+}
+
void
otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 28/58] net/octeontx2: add Rx and Tx descriptor operations
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (26 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 27/58] net/octeontx2: add queue info and pool supported operations jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 29/58] net/octeontx2: add module EEPROM dump jerinj
` (31 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit
From: Jerin Jacob <jerinj@marvell.com>
Add Rx and Tx queue descriptor-related operations.
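For reference, a small sketch (not part of this patch; rxq_used_descs
is a hypothetical helper) of how an application can use the
descriptor status callback added here:

#include <rte_ethdev.h>

/* Count Rx descriptors already filled by HW, scanning the first
 * qlen offsets without dequeuing anything.
 */
static uint16_t
rxq_used_descs(uint16_t port_id, uint16_t qid, uint16_t qlen)
{
        uint16_t off, used = 0;

        for (off = 0; off < qlen; off++)
                if (rte_eth_rx_descriptor_status(port_id, qid, off) ==
                    RTE_ETH_RX_DESC_DONE)
                        used++;
        return used;
}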
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
drivers/net/octeontx2/otx2_ethdev.c | 4 ++
drivers/net/octeontx2/otx2_ethdev.h | 4 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 83 ++++++++++++++++++++++
6 files changed, 97 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 221fc84d8..79b49bf66 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Free Tx mbuf on demand = Y
Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
@@ -21,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Packet type parsing = Y
+Rx descriptor status = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index e11327c7a..fc0390dac 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,7 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Free Tx mbuf on demand = Y
Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
@@ -21,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Packet type parsing = Y
+Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index b2115cea4..6c63e12d0 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -11,12 +11,14 @@ Lock-free Tx queue = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Free Tx mbuf on demand = Y
Queue start/stop = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Packet type parsing = Y
+Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e9af48c8d..41adc6858 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1293,6 +1293,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
.rxq_info_get = otx2_nix_rxq_info_get,
.txq_info_get = otx2_nix_txq_info_get,
+ .rx_queue_count = otx2_nix_rx_queue_count,
+ .rx_descriptor_done = otx2_nix_rx_descriptor_done,
+ .rx_descriptor_status = otx2_nix_rx_descriptor_status,
+ .tx_done_cleanup = otx2_nix_tx_done_cleanup,
.pool_ops_supported = otx2_nix_pool_ops_supported,
};
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 594021285..c849231d0 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -259,6 +259,10 @@ void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
+uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
+int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
+int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 95a5eb6ed..627f20cf5 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -126,6 +126,89 @@ otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
qinfo->conf.tx_deferred_start = 0;
}
+static void
+nix_rx_head_tail_get(struct otx2_eth_dev *dev,
+ uint32_t *head, uint32_t *tail, uint16_t queue_idx)
+{
+ uint64_t reg, val;
+
+ if (head == NULL || tail == NULL)
+ return;
+
+ reg = (((uint64_t)queue_idx) << 32);
+ val = otx2_atomic64_add_nosync(reg, (int64_t *)
+ (dev->base + NIX_LF_CQ_OP_STATUS));
+ if (val & (OP_ERR | CQ_ERR))
+ val = 0;
+
+ *tail = (uint32_t)(val & 0xFFFFF);
+ *head = (uint32_t)((val >> 20) & 0xFFFFF);
+}
+
+uint32_t
+otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+{
+ struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint32_t head, tail;
+
+ nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+ return (tail - head) % rxq->qlen;
+}
+
+static inline int
+nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
+{
+ /* Check if the given offset (ring index) has a packet filled by HW */
+ if (tail > head && offset <= tail && offset >= head)
+ return 1;
+ /* Wrap around case */
+ if (head > tail && (offset >= head || offset <= tail))
+ return 1;
+
+ return 0;
+}
+
+int
+otx2_nix_rx_descriptor_done(void *rx_queue, uint16_t offset)
+{
+ struct otx2_eth_rxq *rxq = rx_queue;
+ uint32_t head, tail;
+
+ nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
+ &head, &tail, rxq->rq);
+
+ return nix_offset_has_packet(head, tail, offset);
+}
+
+int
+otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+ struct otx2_eth_rxq *rxq = rx_queue;
+ uint32_t head, tail;
+
+ if (offset >= rxq->qlen)
+ return -EINVAL;
+
+ nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
+ &head, &tail, rxq->rq);
+
+ if (nix_offset_has_packet(head, tail, offset))
+ return RTE_ETH_RX_DESC_DONE;
+ else
+ return RTE_ETH_RX_DESC_AVAIL;
+}
+
+/* It is a NOP for octeontx2 as HW frees the buffer on xmit */
+int
+otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+ RTE_SET_USED(txq);
+ RTE_SET_USED(free_cnt);
+
+ return 0;
+}
+
int
otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 29/58] net/octeontx2: add module EEPROM dump
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (27 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 28/58] net/octeontx2: add Rx and Tx descriptor operations jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 30/58] net/octeontx2: add flow control support jerinj
` (30 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add the module EEPROM dump operation.
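A minimal sketch (not part of this patch; read_module_eeprom is a
hypothetical helper) of reading the module EEPROM through the generic
ethdev API:

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_dev_info.h>

/* Fetch the whole module EEPROM into a temporary buffer */
static int
read_module_eeprom(uint16_t port_id)
{
        struct rte_eth_dev_module_info modinfo;
        struct rte_dev_eeprom_info info;
        int rc;

        rc = rte_eth_dev_get_module_info(port_id, &modinfo);
        if (rc)
                return rc;

        memset(&info, 0, sizeof(info));
        info.length = modinfo.eeprom_len;
        info.data = calloc(1, info.length);
        if (info.data == NULL)
                return -ENOMEM;

        rc = rte_eth_dev_get_module_eeprom(port_id, &info);
        free(info.data);
        return rc;
}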
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 2 +
drivers/net/octeontx2/otx2_ethdev.h | 4 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 51 ++++++++++++++++++++++
6 files changed, 60 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 79b49bf66..18daccc49 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -26,4 +26,5 @@ Rx descriptor status = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
+Module EEPROM dump = Y
Registers dump = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index fc0390dac..ccf4dac42 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -26,4 +26,5 @@ Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+Module EEPROM dump = Y
Registers dump = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 6c63e12d0..812d5d649 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -22,4 +22,5 @@ Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+Module EEPROM dump = Y
Registers dump = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 41adc6858..0df487983 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1298,6 +1298,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.rx_descriptor_status = otx2_nix_rx_descriptor_status,
.tx_done_cleanup = otx2_nix_tx_done_cleanup,
.pool_ops_supported = otx2_nix_pool_ops_supported,
+ .get_module_info = otx2_nix_get_module_info,
+ .get_module_eeprom = otx2_nix_get_module_eeprom,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index c849231d0..8fbd4532e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -254,6 +254,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
/* Ops */
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_module_info *modinfo);
+int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
+ struct rte_dev_eeprom_info *info);
int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 627f20cf5..51c156786 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -220,6 +220,57 @@ otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
return -ENOTSUP;
}
+static struct cgx_fw_data *
+nix_get_fwdata(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_fw_data *rsp = NULL;
+
+ otx2_mbox_alloc_msg_cgx_get_aux_link_info(mbox);
+
+ otx2_mbox_process_msg(mbox, (void *)&rsp);
+
+ return rsp;
+}
+
+int
+otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_module_info *modinfo)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct cgx_fw_data *rsp;
+
+ rsp = nix_get_fwdata(dev);
+ if (rsp == NULL)
+ return -EIO;
+
+ modinfo->type = rsp->fwdata.sfp_eeprom.sff_id;
+ modinfo->eeprom_len = SFP_EEPROM_SIZE;
+
+ return 0;
+}
+
+int
+otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
+ struct rte_dev_eeprom_info *info)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct cgx_fw_data *rsp;
+
+ if (!info->data || !info->length ||
+ (info->offset + info->length > SFP_EEPROM_SIZE))
+ return -EINVAL;
+
+ rsp = nix_get_fwdata(dev);
+ if (rsp == NULL)
+ return -EIO;
+
+ otx2_mbox_memcpy(info->data, rsp->fwdata.sfp_eeprom.buf + info->offset,
+ info->length);
+
+ return 0;
+}
+
void
otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 30/58] net/octeontx2: add flow control support
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (28 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 29/58] net/octeontx2: add module EEPROM dump jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 31/58] net/octeontx2: add PTP base support jerinj
` (29 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add flow control operations and expose
otx2_nix_update_flow_ctrl_mode() to apply the configured
mode in dev_start().
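For context, a minimal sketch (not part of this patch; enable_pause
is a hypothetical helper) of configuring flow control from an
application:

#include <string.h>
#include <rte_ethdev.h>

/* Enable full (Rx + Tx) pause frame handling on a port */
static int
enable_pause(uint16_t port_id)
{
        struct rte_eth_fc_conf fc_conf;
        int rc;

        memset(&fc_conf, 0, sizeof(fc_conf));
        rc = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
        if (rc)
                return rc;

        fc_conf.mode = RTE_FC_FULL;
        return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}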
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 20 ++
drivers/net/octeontx2/otx2_ethdev.h | 23 +++
drivers/net/octeontx2/otx2_flow_ctrl.c | 230 +++++++++++++++++++++
7 files changed, 277 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 18daccc49..ba7fdc868 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -21,6 +21,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Flow control = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index ccf4dac42..b909918ce 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -21,6 +21,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Flow control = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 00f61c354..1d3788466 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -37,6 +37,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_stats.c \
otx2_lookup.c \
otx2_ethdev.c \
+ otx2_flow_ctrl.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
otx2_ethdev_debug.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index eb5206ea1..e4fcac763 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -10,6 +10,7 @@ sources = files(
'otx2_stats.c',
'otx2_lookup.c',
'otx2_ethdev.c',
+ 'otx2_flow_ctrl.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
'otx2_ethdev_debug.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 0df487983..97e0e3465 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -216,6 +216,14 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
+ /* TX pause frames enable flowctrl on RX side */
+ if (dev->fc_info.tx_pause) {
+ /* Single bpid is allocated for all rx channels for now */
+ aq->cq.bpid = dev->fc_info.bpid[0];
+ aq->cq.bp = NIX_CQ_BP_LEVEL;
+ aq->cq.bp_ena = 1;
+ }
+
/* Many to one reduction */
aq->cq.qint_idx = qid % dev->qints;
@@ -1069,6 +1077,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
/* Free the resources allocated from the previous configure */
if (dev->configured == 1) {
+ otx2_nix_rxchan_bpid_cfg(eth_dev, false);
oxt2_nix_unregister_queue_irqs(eth_dev);
nix_set_nop_rxtx_function(eth_dev);
rc = nix_store_queue_cfg_and_then_release(eth_dev);
@@ -1122,6 +1131,12 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true);
+ if (rc) {
+ otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/*
* Restore queue config when reconfigure followed by
* reconfigure and no queue configure invoked from application case.
@@ -1300,6 +1315,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.pool_ops_supported = otx2_nix_pool_ops_supported,
.get_module_info = otx2_nix_get_module_info,
.get_module_eeprom = otx2_nix_get_module_eeprom,
+ .flow_ctrl_get = otx2_nix_flow_ctrl_get,
+ .flow_ctrl_set = otx2_nix_flow_ctrl_set,
};
static inline int
@@ -1501,6 +1518,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Disable nix bpid config */
+ otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+
/* Free up SQs */
for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 8fbd4532e..fad151b54 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -68,6 +68,9 @@
#define NIX_TX_NB_SEG_MAX 9
#endif
+/* Apply BP when CQ is 75% full */
+#define NIX_CQ_BP_LEVEL (25 * 256 / 100)
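+/* i.e. assert backpressure once only 64/256 (25%) of the CQ is left free */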
+
#define CQ_OP_STAT_OP_ERR 63
#define CQ_OP_STAT_CQ_ERR 46
@@ -150,6 +153,14 @@ struct otx2_npc_flow_info {
uint16_t flow_max_priority;
};
+struct otx2_fc_info {
+ enum rte_eth_fc_mode mode; /**< Link flow control mode */
+ uint8_t rx_pause;
+ uint8_t tx_pause;
+ uint8_t chan_cnt;
+ uint16_t bpid[NIX_MAX_CHAN];
+};
+
struct otx2_eth_dev {
OTX2_DEV; /* Base class */
MARKER otx2_eth_dev_data_start;
@@ -196,6 +207,7 @@ struct otx2_eth_dev {
struct otx2_nix_tm_node_list node_list;
struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
struct otx2_rss_info rss_info;
+ struct otx2_fc_info fc_info;
uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
struct otx2_npc_flow_info npc_flow;
@@ -350,6 +362,17 @@ int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
+/* Flow Control */
+int otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fc_conf *fc_conf);
+
+int otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fc_conf *fc_conf);
+
+int otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb);
+
+int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev);
+
/* Lookup configuration */
void *otx2_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
new file mode 100644
index 000000000..bd3cda594
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -0,0 +1,230 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+int
+otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_fc_info *fc = &dev->fc_info;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_bp_cfg_req *req;
+ struct nix_bp_cfg_rsp *rsp;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ if (enb) {
+ req = otx2_mbox_alloc_msg_nix_bp_enable(mbox);
+ req->chan_base = 0;
+ req->chan_cnt = 1;
+ req->bpid_per_chan = 0;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc || req->chan_cnt != rsp->chan_cnt) {
+ otx2_err("Insufficient BPIDs, alloc=%u < req=%u rc=%d",
+ rsp->chan_cnt, req->chan_cnt, rc);
+ return rc;
+ }
+
+ fc->bpid[0] = rsp->chan_bpid[0];
+ } else {
+ req = otx2_mbox_alloc_msg_nix_bp_disable(mbox);
+ req->chan_base = 0;
+ req->chan_cnt = 1;
+
+ rc = otx2_mbox_process(mbox);
+
+ memset(fc->bpid, 0, sizeof(uint16_t) * NIX_MAX_CHAN);
+ }
+
+ return rc;
+}
+
+int
+otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fc_conf *fc_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct cgx_pause_frm_cfg *req, *rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
+ req->set = 0;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ goto done;
+
+ if (rsp->rx_pause && rsp->tx_pause)
+ fc_conf->mode = RTE_FC_FULL;
+ else if (rsp->rx_pause)
+ fc_conf->mode = RTE_FC_RX_PAUSE;
+ else if (rsp->tx_pause)
+ fc_conf->mode = RTE_FC_TX_PAUSE;
+ else
+ fc_conf->mode = RTE_FC_NONE;
+
+done:
+ return rc;
+}
+
+static int
+otx2_nix_cq_bp_cfg(struct rte_eth_dev *eth_dev, bool enb)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_fc_info *fc = &dev->fc_info;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *aq;
+ struct otx2_eth_rxq *rxq;
+ int i, rc;
+
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rxq = eth_dev->data->rx_queues[i];
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq)
+ return -ENOMEM;
+ }
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ if (enb) {
+ aq->cq.bpid = fc->bpid[0];
+ aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
+ aq->cq.bp = NIX_CQ_BP_LEVEL;
+ aq->cq_mask.bp = ~(aq->cq_mask.bp);
+ }
+
+ aq->cq.bp_ena = !!enb;
+ aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena);
+ }
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+otx2_nix_rx_fc_cfg(struct rte_eth_dev *eth_dev, bool enb)
+{
+ return otx2_nix_cq_bp_cfg(eth_dev, enb);
+}
+
+int
+otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fc_conf *fc_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_fc_info *fc = &dev->fc_info;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_pause_frm_cfg *req;
+ uint8_t tx_pause, rx_pause;
+ int rc = 0;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ if (fc_conf->high_water || fc_conf->low_water || fc_conf->pause_time ||
+ fc_conf->mac_ctrl_frame_fwd || fc_conf->autoneg) {
+ otx2_info("Flowctrl parameter is not supported");
+ return -EINVAL;
+ }
+
+ if (fc_conf->mode == fc->mode)
+ return 0;
+
+ rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
+ (fc_conf->mode == RTE_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
+ (fc_conf->mode == RTE_FC_TX_PAUSE);
+
+ /* Check if TX pause frame is already enabled or not */
+ if (fc->tx_pause ^ tx_pause) {
+ if (otx2_dev_is_A0(dev) && eth_dev->data->dev_started) {
+ /* On A0, the CQ must be in a disabled state while the
+ * flow control configuration is changed.
+ */
+ otx2_info("Stop the port=%d for setting flow control\n",
+ eth_dev->data->port_id);
+ return 0;
+ }
+ /* TX pause frames, enable/disable flowctrl on RX side. */
+ rc = otx2_nix_rx_fc_cfg(eth_dev, tx_pause);
+ if (rc)
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
+ req->set = 1;
+ req->rx_pause = rx_pause;
+ req->tx_pause = tx_pause;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ fc->tx_pause = tx_pause;
+ fc->rx_pause = rx_pause;
+ fc->mode = fc_conf->mode;
+
+ return rc;
+}
+
+int
+otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_fc_info *fc = &dev->fc_info;
+ struct rte_eth_fc_conf fc_conf;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
+ /* Both Rx & Tx flow control are enabled (RTE_FC_FULL) in HW
+ * by the AF driver by default; reflect that state in the PMD
+ * structure.
+ */
+ otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
+
+ if (fc_conf.mode != fc->mode && fc->mode == RTE_FC_NONE) {
+ /* The PMD disables HW flow control on the application's first
+ * call to dev_start(); the application can enable it later via
+ * the flow_ctrl_set() API.
+ */
+ fc->mode = fc_conf.mode;
+ fc_conf.mode = RTE_FC_NONE;
+ }
+
+ /* To avoid Link credit deadlock on A0, disable Tx FC if it's enabled */
+ if (otx2_dev_is_A0(dev) &&
+ (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+ fc_conf.mode =
+ (fc_conf.mode == RTE_FC_FULL ||
+ fc_conf.mode == RTE_FC_TX_PAUSE) ?
+ RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ }
+
+ return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 31/58] net/octeontx2: add PTP base support
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (29 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 30/58] net/octeontx2: add flow control support jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 32/58] net/octeontx2: add remaining PTP operations jerinj
` (28 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Harman Kalra, Zyta Szpak
From: Harman Kalra <hkalra@marvell.com>
Add PTP enable and disable operations.
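A usage sketch, assuming a configured port with the ptype Rx offload
enabled (port_id and the helper are illustrative; on this PMD both calls
also require a PF device):
#include <stdbool.h>
#include <rte_ethdev.h>
static int
toggle_ptp(uint16_t port_id, bool enable)
{
	/* Dispatches to otx2_nix_timesync_enable()/_disable() */
	return enable ? rte_eth_timesync_enable(port_id) :
			rte_eth_timesync_disable(port_id);
}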
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Zyta Szpak <zyta@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 22 ++++-
drivers/net/octeontx2/otx2_ethdev.h | 17 ++++
drivers/net/octeontx2/otx2_ptp.c | 135 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 11 +++
6 files changed, 184 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_ptp.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 1d3788466..b1c8e4e52 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -33,6 +33,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_tm.c \
otx2_rss.c \
otx2_mac.c \
+ otx2_ptp.c \
otx2_link.c \
otx2_stats.c \
otx2_lookup.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index e4fcac763..57d6c0a58 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -6,6 +6,7 @@ sources = files(
'otx2_tm.c',
'otx2_rss.c',
'otx2_mac.c',
+ 'otx2_ptp.c',
'otx2_link.c',
'otx2_stats.c',
'otx2_lookup.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 97e0e3465..683aecd4e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -336,9 +336,7 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
static inline int
nix_get_data_off(struct otx2_eth_dev *dev)
{
- RTE_SET_USED(dev);
-
- return 0;
+ return otx2_ethdev_is_ptp_en(dev) ? NIX_TIMESYNC_RX_OFFSET : 0;
}
uint64_t
@@ -450,6 +448,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
rxq->qlen = nix_qsize_to_val(qsize);
rxq->qsize = qsize;
rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get();
+ rxq->tstamp = &dev->tstamp;
/* Alloc completion queue */
rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
@@ -716,6 +715,7 @@ otx2_nix_form_default_desc(struct otx2_eth_txq *txq)
send_mem->dsz = 0x0;
send_mem->wmem = 0x1;
send_mem->alg = NIX_SENDMEMALG_SETTSTMP;
+ send_mem->addr = txq->dev->tstamp.tx_tstamp_iova;
}
sg = (union nix_send_sg_s *)&txq->cmd[4];
} else {
@@ -1137,6 +1137,16 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /* Enable PTP if it was requested by the app or if it is already
+ * enabled in PF owning this VF
+ */
+ memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
+ if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ otx2_ethdev_is_ptp_en(dev))
+ otx2_nix_timesync_enable(eth_dev);
+ else
+ otx2_nix_timesync_disable(eth_dev);
+
/*
* Restore queue config when reconfigure followed by
* reconfigure and no queue configure invoked from application case.
@@ -1317,6 +1327,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.get_module_eeprom = otx2_nix_get_module_eeprom,
.flow_ctrl_get = otx2_nix_flow_ctrl_get,
.flow_ctrl_set = otx2_nix_flow_ctrl_set,
+ .timesync_enable = otx2_nix_timesync_enable,
+ .timesync_disable = otx2_nix_timesync_disable,
};
static inline int
@@ -1521,6 +1533,10 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
/* Disable nix bpid config */
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ /* Disable PTP if already enabled */
+ if (otx2_ethdev_is_ptp_en(dev))
+ otx2_nix_timesync_disable(eth_dev);
+
/* Free up SQs */
for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index fad151b54..809a9656f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -13,6 +13,7 @@
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_string_fns.h>
+#include <rte_time.h>
#include "otx2_common.h"
#include "otx2_dev.h"
@@ -109,6 +110,12 @@
#define NIX_DEFAULT_RSS_CTX_GROUP 0
#define NIX_DEFAULT_RSS_MCAM_IDX -1
+#define otx2_ethdev_is_ptp_en(dev) ((dev)->ptp_en)
+
+#define NIX_TIMESYNC_TX_CMD_LEN 8
+/* Additional timesync values. */
+#define OTX2_CYCLECOUNTER_MASK 0xffffffffffffffffULL
+
enum nix_q_size_e {
nix_q_size_16, /* 16 entries */
nix_q_size_64, /* 64 entries */
@@ -214,6 +221,12 @@ struct otx2_eth_dev {
struct otx2_eth_qconf *tx_qconf;
struct otx2_eth_qconf *rx_qconf;
struct rte_eth_dev *eth_dev;
+ /* PTP counters */
+ bool ptp_en;
+ struct otx2_timesync_info tstamp;
+ struct rte_timecounter systime_tc;
+ struct rte_timecounter rx_tstamp_tc;
+ struct rte_timecounter tx_tstamp_tc;
} __rte_cache_aligned;
struct otx2_eth_txq {
@@ -396,4 +409,8 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
/* Rx and Tx routines */
void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
+/* Timesync - PTP routines */
+int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev);
+int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev);
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
new file mode 100644
index 000000000..105067949
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_ethdev_driver.h>
+
+#include "otx2_ethdev.h"
+
+#define PTP_FREQ_ADJUST (1 << 9)
+
+static void
+nix_start_timecounters(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ memset(&dev->systime_tc, 0, sizeof(struct rte_timecounter));
+ memset(&dev->rx_tstamp_tc, 0, sizeof(struct rte_timecounter));
+ memset(&dev->tx_tstamp_tc, 0, sizeof(struct rte_timecounter));
+
+ dev->systime_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
+ dev->rx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
+ dev->tx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
+}
+
+static int
+nix_ptp_config(struct rte_eth_dev *eth_dev, int en)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc = 0;
+
+ if (otx2_dev_is_vf(dev))
+ return rc;
+
+ if (en) {
+ /* Enable time stamping of sent PTP packets. */
+ otx2_mbox_alloc_msg_nix_lf_ptp_tx_enable(mbox);
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("MBOX ptp tx conf enable failed: err %d", rc);
+ return rc;
+ }
+ /* Enable time stamping of received PTP packets. */
+ otx2_mbox_alloc_msg_cgx_ptp_rx_enable(mbox);
+ } else {
+ /* Disable time stamping of sent PTP packets. */
+ otx2_mbox_alloc_msg_nix_lf_ptp_tx_disable(mbox);
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("MBOX ptp tx conf disable failed: err %d", rc);
+ return rc;
+ }
+ /* Disable time stamping of received PTP packets. */
+ otx2_mbox_alloc_msg_cgx_ptp_rx_disable(mbox);
+ }
+
+ return otx2_mbox_process(mbox);
+}
+
+int
+otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int i, rc = 0;
+
+ if (otx2_ethdev_is_ptp_en(dev)) {
+ otx2_info("PTP mode is already enabled ");
+ return -EINVAL;
+ }
+
+ /* If we are VF, no further action can be taken */
+ if (otx2_dev_is_vf(dev))
+ return -EINVAL;
+
+ if (!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)) {
+ otx2_err("Ptype offload is disabled, it should be enabled");
+ return -EINVAL;
+ }
+
+ /* Allocate an IOVA address for the Tx timestamp */
+ const struct rte_memzone *ts;
+ ts = rte_eth_dma_zone_reserve(eth_dev, "otx2_ts",
+ 0, OTX2_ALIGN, OTX2_ALIGN,
+ dev->node);
+ if (ts == NULL) {
+ otx2_err("Failed to allocate mem for tx tstamp addr");
+ return -ENOMEM;
+ }
+
+ dev->tstamp.tx_tstamp_iova = ts->iova;
+ dev->tstamp.tx_tstamp = ts->addr;
+
+ /* System time should be already on by default */
+ nix_start_timecounters(eth_dev);
+
+ dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
+ dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
+
+ rc = nix_ptp_config(eth_dev, 1);
+ if (!rc) {
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
+ otx2_nix_form_default_desc(txq);
+ }
+ }
+ return rc;
+}
+
+int
+otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int i, rc = 0;
+
+ if (!otx2_ethdev_is_ptp_en(dev)) {
+ otx2_nix_dbg("PTP mode is disabled");
+ return -EINVAL;
+ }
+
+ /* If we are VF, nothing else can be done */
+ if (otx2_dev_is_vf(dev))
+ return -EINVAL;
+
+ dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
+ dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
+
+ rc = nix_ptp_config(eth_dev, 0);
+ if (!rc) {
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
+ otx2_nix_form_default_desc(txq);
+ }
+ }
+ return rc;
+}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 1283fdf37..0c3627c12 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -13,5 +13,16 @@
sizeof(uint16_t))
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
+
+#define NIX_TIMESYNC_RX_OFFSET 8
+
+struct otx2_timesync_info {
+ uint64_t rx_tstamp;
+ rte_iova_t tx_tstamp_iova;
+ uint64_t *tx_tstamp;
+ uint8_t tx_ready;
+ uint8_t rx_ready;
+} __rte_cache_aligned;
#endif /* __OTX2_RX_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 32/58] net/octeontx2: add remaining PTP operations
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (30 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 31/58] net/octeontx2: add PTP base support jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 33/58] net/octeontx2: introducing flow driver jerinj
` (27 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Harman Kalra, Zyta Szpak
From: Harman Kalra <hkalra@marvell.com>
Add remaining PTP configuration/slowpath operations.
The timesync feature is available only on PF devices.
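A sketch of one clock-servo step built on the new slowpath ops (port_id
and the 1000 ns delta are illustrative):
#include <time.h>
#include <rte_ethdev.h>
static int
ptp_servo_step(uint16_t port_id)
{
	struct timespec ts;
	int rc;
	/* Dispatches to otx2_nix_timesync_read_time(), which issues a
	 * PTP_OP_GET_CLOCK mailbox request.
	 */
	rc = rte_eth_timesync_read_time(port_id, &ts);
	if (rc)
		return rc;
	/* Small deltas additionally program PTP_OP_ADJFINE */
	return rte_eth_timesync_adjust_time(port_id, 1000);
}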
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Zyta Szpak <zyta@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
drivers/net/octeontx2/otx2_ethdev.c | 6 ++
drivers/net/octeontx2/otx2_ethdev.h | 11 +++
drivers/net/octeontx2/otx2_ptp.c | 130 +++++++++++++++++++++++++
4 files changed, 149 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index ba7fdc868..0f416ee4b 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
Inner RSS = Y
Flow control = Y
Packet type parsing = Y
+Timesync = Y
+Timestamp offload = Y
Rx descriptor status = Y
Basic stats = Y
Stats per queue = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 683aecd4e..9cd3ce407 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -47,6 +47,7 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
static const struct otx2_dev_ops otx2_dev_ops = {
.link_status_update = otx2_eth_dev_link_status_update,
+ .ptp_info_update = otx2_eth_dev_ptp_info_update
};
static int
@@ -1329,6 +1330,11 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.flow_ctrl_set = otx2_nix_flow_ctrl_set,
.timesync_enable = otx2_nix_timesync_enable,
.timesync_disable = otx2_nix_timesync_disable,
+ .timesync_read_rx_timestamp = otx2_nix_timesync_read_rx_timestamp,
+ .timesync_read_tx_timestamp = otx2_nix_timesync_read_tx_timestamp,
+ .timesync_adjust_time = otx2_nix_timesync_adjust_time,
+ .timesync_read_time = otx2_nix_timesync_read_time,
+ .timesync_write_time = otx2_nix_timesync_write_time,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 809a9656f..ba6d1736e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -412,5 +412,16 @@ void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
/* Timesync - PTP routines */
int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev);
int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev);
+int otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
+ struct timespec *timestamp,
+ uint32_t flags);
+int otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
+ struct timespec *timestamp);
+int otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta);
+int otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
+ const struct timespec *ts);
+int otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev,
+ struct timespec *ts);
+int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en);
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 105067949..5291da241 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -57,6 +57,23 @@ nix_ptp_config(struct rte_eth_dev *eth_dev, int en)
return otx2_mbox_process(mbox);
}
+int
+otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en)
+{
+ struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
+ struct rte_eth_dev *eth_dev = otx2_dev->eth_dev;
+ int i;
+
+ otx2_dev->ptp_en = ptp_en;
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[i];
+ rxq->mbuf_initializer =
+ otx2_nix_rxq_mbuf_setup(otx2_dev,
+ eth_dev->data->port_id);
+ }
+ return 0;
+}
+
int
otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
{
@@ -133,3 +150,116 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
}
return rc;
}
+
+int
+otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
+ struct timespec *timestamp,
+ uint32_t __rte_unused flags)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_timesync_info *tstamp = &dev->tstamp;
+ uint64_t ns;
+
+ if (!tstamp->rx_ready)
+ return -EINVAL;
+
+ ns = rte_timecounter_update(&dev->rx_tstamp_tc, tstamp->rx_tstamp);
+ *timestamp = rte_ns_to_timespec(ns);
+ tstamp->rx_ready = 0;
+
+ otx2_nix_dbg("rx timestamp: %llu sec: %lu nsec %lu",
+ (unsigned long long)tstamp->rx_tstamp, timestamp->tv_sec,
+ timestamp->tv_nsec);
+
+ return 0;
+}
+
+int
+otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
+ struct timespec *timestamp)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_timesync_info *tstamp = &dev->tstamp;
+ uint64_t ns;
+
+ if (*tstamp->tx_tstamp == 0)
+ return -EINVAL;
+
+ ns = rte_timecounter_update(&dev->tx_tstamp_tc, *tstamp->tx_tstamp);
+ *timestamp = rte_ns_to_timespec(ns);
+
+ otx2_nix_dbg("tx timestamp: %llu sec: %lu nsec %lu",
+ *(unsigned long long *)tstamp->tx_tstamp,
+ timestamp->tv_sec, timestamp->tv_nsec);
+
+ *tstamp->tx_tstamp = 0;
+ rte_wmb();
+
+ return 0;
+}
+
+int
+otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct ptp_req *req;
+ struct ptp_rsp *rsp;
+ int rc;
+
+ /* Adjust the frequency so that ticks increment at 10^9 ticks per sec */
+ if (delta < PTP_FREQ_ADJUST && delta > -PTP_FREQ_ADJUST) {
+ req = otx2_mbox_alloc_msg_ptp_op(mbox);
+ req->op = PTP_OP_ADJFINE;
+ req->scaled_ppm = delta;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+ }
+ dev->systime_tc.nsec += delta;
+ dev->rx_tstamp_tc.nsec += delta;
+ dev->tx_tstamp_tc.nsec += delta;
+
+ return 0;
+}
+
+int
+otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
+ const struct timespec *ts)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t ns;
+
+ ns = rte_timespec_to_ns(ts);
+ /* Set the time counters to a new value. */
+ dev->systime_tc.nsec = ns;
+ dev->rx_tstamp_tc.nsec = ns;
+ dev->tx_tstamp_tc.nsec = ns;
+
+ return 0;
+}
+
+int
+otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev, struct timespec *ts)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct ptp_req *req;
+ struct ptp_rsp *rsp;
+ uint64_t ns;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_ptp_op(mbox);
+ req->op = PTP_OP_GET_CLOCK;
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ ns = rte_timecounter_update(&dev->systime_tc, rsp->clk);
+ *ts = rte_ns_to_timespec(ns);
+
+ otx2_nix_dbg("PTP time read: %ld.%09ld", ts->tv_sec, ts->tv_nsec);
+
+ return 0;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 33/58] net/octeontx2: introducing flow driver
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (31 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 32/58] net/octeontx2: add remaining PTP operations jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 34/58] net/octeontx2: flow utility functions jerinj
` (26 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Introduce the flow infrastructure for octeontx2.
It will be used to maintain rte_flow rules.
The create, destroy, validate, query, flush and isolate flow
operations will be supported.
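Once a later patch in this series connects otx2_flow_ops to the ethdev,
rules will be installed through the generic rte_flow API; a sketch of a
catch-all ingress drop rule (port_id is illustrative):
#include <rte_flow.h>
static struct rte_flow *
install_drop_all(uint16_t port_id, struct rte_flow_error *err)
{
	const struct rte_flow_attr attr = { .ingress = 1 };
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	return rte_flow_create(port_id, &attr, pattern, actions, err);
}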
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.h | 7 +-
drivers/net/octeontx2/otx2_flow.h | 384 ++++++++++++++++++++++++++++
2 files changed, 385 insertions(+), 6 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_flow.h
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index ba6d1736e..1edc7da29 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -17,6 +17,7 @@
#include "otx2_common.h"
#include "otx2_dev.h"
+#include "otx2_flow.h"
#include "otx2_irq.h"
#include "otx2_mempool.h"
#include "otx2_rx.h"
@@ -154,12 +155,6 @@ struct otx2_eth_qconf {
uint16_t nb_desc;
};
-struct otx2_npc_flow_info {
- uint16_t channel; /*rx channel */
- uint16_t flow_prealloc_size;
- uint16_t flow_max_priority;
-};
-
struct otx2_fc_info {
enum rte_eth_fc_mode mode; /**< Link flow control mode */
uint8_t rx_pause;
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
new file mode 100644
index 000000000..07d9e9fd6
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -0,0 +1,384 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_FLOW_H__
+#define __OTX2_FLOW_H__
+
+#include <stdint.h>
+
+#include <rte_flow_driver.h>
+#include <rte_malloc.h>
+#include <rte_tailq.h>
+
+#include "otx2_common.h"
+#include "otx2_ethdev.h"
+#include "otx2_mbox.h"
+
+struct otx2_eth_dev;
+
+int otx2_flow_init(struct otx2_eth_dev *hw);
+int otx2_flow_fini(struct otx2_eth_dev *hw);
+extern const struct rte_flow_ops otx2_flow_ops;
+
+enum {
+ OTX2_INTF_RX = 0,
+ OTX2_INTF_TX = 1,
+ OTX2_INTF_MAX = 2,
+};
+
+#define NPC_COUNTER_NONE (-1)
+/* 32 bytes from LDATA_CFG & 32 bytes from FLAGS_CFG */
+#define NPC_MAX_EXTRACT_DATA_LEN (64)
+#define NPC_LDATA_LFLAG_LEN (16)
+#define NPC_MCAM_TOT_ENTRIES (4096)
+#define NPC_MAX_KEY_NIBBLES (31)
+/* Bit offsets */
+#define NPC_LAYER_KEYX_SZ (12)
+#define NPC_PARSE_KEX_S_LA_OFFSET (28)
+#define NPC_PARSE_KEX_S_LID_OFFSET(lid) \
+ ((((lid) - NPC_LID_LA) * NPC_LAYER_KEYX_SZ) \
+ + NPC_PARSE_KEX_S_LA_OFFSET)
+
+
+/* supported flow actions flags */
+#define OTX2_FLOW_ACT_MARK (1 << 0)
+#define OTX2_FLOW_ACT_FLAG (1 << 1)
+#define OTX2_FLOW_ACT_DROP (1 << 2)
+#define OTX2_FLOW_ACT_QUEUE (1 << 3)
+#define OTX2_FLOW_ACT_RSS (1 << 4)
+#define OTX2_FLOW_ACT_DUP (1 << 5)
+#define OTX2_FLOW_ACT_SEC (1 << 6)
+#define OTX2_FLOW_ACT_COUNT (1 << 7)
+
+/* terminating actions */
+#define OTX2_FLOW_ACT_TERM (OTX2_FLOW_ACT_DROP | \
+ OTX2_FLOW_ACT_QUEUE | \
+ OTX2_FLOW_ACT_RSS | \
+ OTX2_FLOW_ACT_DUP | \
+ OTX2_FLOW_ACT_SEC)
+
+/* This mark value indicates flag action */
+#define OTX2_FLOW_FLAG_VAL (0xffff)
+
+#define NIX_RX_ACT_MATCH_OFFSET (40)
+#define NIX_RX_ACT_MATCH_MASK (0xFFFF)
+
+#define NIX_RSS_ACT_GRP_OFFSET (20)
+#define NIX_RSS_ACT_ALG_OFFSET (56)
+#define NIX_RSS_ACT_GRP_MASK (0xFFFFF)
+#define NIX_RSS_ACT_ALG_MASK (0x1F)
+
+/* PMD-specific definition of the opaque struct rte_flow */
+#define OTX2_MAX_MCAM_WIDTH_DWORDS 7
+
+enum npc_mcam_intf {
+ NPC_MCAM_RX,
+ NPC_MCAM_TX
+};
+
+struct npc_xtract_info {
+ /* Length in bytes of pkt data extracted. len = 0
+ * indicates that extraction is disabled.
+ */
+ uint8_t len;
+ uint8_t hdr_off; /* Byte offset of proto hdr: extract_src */
+ uint8_t key_off; /* Byte offset in MCAM key where data is placed */
+ uint8_t enable; /* Extraction enabled or disabled */
+};
+
+/* Information for a given {LAYER, LTYPE} */
+struct npc_lid_lt_xtract_info {
+ /* Info derived from parser configuration */
+ uint16_t npc_proto; /* Network protocol identified */
+ uint8_t valid_flags_mask; /* Flags applicable */
+ uint8_t is_terminating:1; /* No more parsing */
+ struct npc_xtract_info xtract[NPC_MAX_LD];
+};
+
+union npc_kex_ldata_flags_cfg {
+ struct {
+ #if defined(__BIG_ENDIAN_BITFIELD)
+ uint64_t rvsd_62_1 : 61;
+ uint64_t lid : 3;
+ #else
+ uint64_t lid : 3;
+ uint64_t rvsd_62_1 : 61;
+ #endif
+ } s;
+
+ uint64_t i;
+};
+
+typedef struct npc_lid_lt_xtract_info
+ otx2_dxcfg_t[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT];
+typedef struct npc_lid_lt_xtract_info
+ otx2_fxcfg_t[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
+typedef union npc_kex_ldata_flags_cfg otx2_ld_flags_t[NPC_MAX_LD];
+
+
+/* MBOX_MSG_NPC_GET_DATAX_CFG Response */
+struct npc_get_datax_cfg {
+ /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
+ union npc_kex_ldata_flags_cfg ld_flags[NPC_MAX_LD];
+ /* Extract information indexed with [LID][LTYPE] */
+ struct npc_lid_lt_xtract_info lid_lt_xtract[NPC_MAX_LID][NPC_MAX_LT];
+ /* Flags based extract indexed with [LDATA][FLAGS_LOWER_NIBBLE]
+ * Fields flags_ena_ld0, flags_ena_ld1 in
+ * struct npc_lid_lt_xtract_info indicate if this is applicable
+ * for a given {LAYER, LTYPE}
+ */
+ struct npc_xtract_info flag_xtract[NPC_MAX_LD][NPC_MAX_LT];
+};
+
+struct otx2_mcam_ents_info {
+ /* Current max & min values of mcam index */
+ uint32_t max_id;
+ uint32_t min_id;
+ uint32_t free_ent;
+ uint32_t live_ent;
+};
+
+struct rte_flow {
+ uint8_t nix_intf;
+ uint32_t mcam_id;
+ int32_t ctr_id;
+ uint32_t priority;
+ /* Contiguous match string */
+ uint64_t mcam_data[OTX2_MAX_MCAM_WIDTH_DWORDS];
+ uint64_t mcam_mask[OTX2_MAX_MCAM_WIDTH_DWORDS];
+ uint64_t npc_action;
+ TAILQ_ENTRY(rte_flow) next;
+};
+
+TAILQ_HEAD(otx2_flow_list, rte_flow);
+
+/* Accessed from ethdev private - otx2_eth_dev */
+struct otx2_npc_flow_info {
+ rte_atomic32_t mark_actions;
+ uint32_t keyx_supp_nmask[NPC_MAX_INTF];/* nibble mask */
+ uint32_t keyx_len[NPC_MAX_INTF]; /* per intf key len in bits */
+ uint32_t datax_len[NPC_MAX_INTF]; /* per intf data len in bits */
+ uint32_t keyw[NPC_MAX_INTF]; /* max key + data len bits */
+ uint32_t mcam_entries; /* mcam entries supported */
+ otx2_dxcfg_t prx_dxcfg; /* intf, lid, lt, extract */
+ otx2_fxcfg_t prx_fxcfg; /* Flag extract */
+ otx2_ld_flags_t prx_lfcfg; /* KEX LD_Flags CFG */
+ /* mcam entry info per priority level: both free & in-use */
+ struct otx2_mcam_ents_info *flow_entry_info;
+ /* Bitmap of free preallocated entries in ascending index &
+ * descending priority
+ */
+ struct rte_bitmap **free_entries;
+ /* Bitmap of free preallocated entries in descending index &
+ * ascending priority
+ */
+ struct rte_bitmap **free_entries_rev;
+ /* Bitmap of live entries in ascending index & descending priority */
+ struct rte_bitmap **live_entries;
+ /* Bitmap of live entries in descending index & ascending priority */
+ struct rte_bitmap **live_entries_rev;
+ /* Priority bucket wise tail queue of all rte_flow resources */
+ struct otx2_flow_list *flow_list;
+ uint32_t rss_grps; /* rss groups supported */
+ struct rte_bitmap *rss_grp_entries;
+ uint16_t channel; /*rx channel */
+ uint16_t flow_prealloc_size;
+ uint16_t flow_max_priority;
+};
+
+struct otx2_parse_state {
+ struct otx2_npc_flow_info *npc;
+ const struct rte_flow_item *pattern;
+ const struct rte_flow_item *last_pattern; /* Temp usage */
+ struct rte_flow_error *error;
+ struct rte_flow *flow;
+ uint8_t tunnel;
+ uint8_t terminate;
+ uint8_t layer_mask;
+ uint8_t lt[NPC_MAX_LID];
+ uint8_t flags[NPC_MAX_LID];
+ uint8_t *mcam_data; /* point to flow->mcam_data + key_len */
+ uint8_t *mcam_mask; /* point to flow->mcam_mask + key_len */
+};
+
+struct otx2_flow_item_info {
+ const void *def_mask; /* rte_flow default mask */
+ void *hw_mask; /* hardware supported mask */
+ int len; /* length of item */
+ const void *spec; /* spec to use, NULL implies match any */
+ const void *mask; /* mask to use */
+};
+
+struct otx2_idev_kex_cfg {
+ struct npc_get_kex_cfg_rsp kex_cfg;
+ rte_atomic16_t kex_refcnt;
+};
+
+enum npc_kpu_parser_flag {
+ NPC_F_NA = 0,
+ NPC_F_PKI,
+ NPC_F_PKI_VLAN,
+ NPC_F_PKI_ETAG,
+ NPC_F_PKI_ITAG,
+ NPC_F_PKI_MPLS,
+ NPC_F_PKI_NSH,
+ NPC_F_ETYPE_UNK,
+ NPC_F_ETHER_VLAN,
+ NPC_F_ETHER_ETAG,
+ NPC_F_ETHER_ITAG,
+ NPC_F_ETHER_MPLS,
+ NPC_F_ETHER_NSH,
+ NPC_F_STAG_CTAG,
+ NPC_F_STAG_CTAG_UNK,
+ NPC_F_STAG_STAG_CTAG,
+ NPC_F_STAG_STAG_STAG,
+ NPC_F_QINQ_CTAG,
+ NPC_F_QINQ_CTAG_UNK,
+ NPC_F_QINQ_QINQ_CTAG,
+ NPC_F_QINQ_QINQ_QINQ,
+ NPC_F_BTAG_ITAG,
+ NPC_F_BTAG_ITAG_STAG,
+ NPC_F_BTAG_ITAG_CTAG,
+ NPC_F_BTAG_ITAG_UNK,
+ NPC_F_ETAG_CTAG,
+ NPC_F_ETAG_BTAG_ITAG,
+ NPC_F_ETAG_STAG,
+ NPC_F_ETAG_QINQ,
+ NPC_F_ETAG_ITAG,
+ NPC_F_ETAG_ITAG_STAG,
+ NPC_F_ETAG_ITAG_CTAG,
+ NPC_F_ETAG_ITAG_UNK,
+ NPC_F_ITAG_STAG_CTAG,
+ NPC_F_ITAG_STAG,
+ NPC_F_ITAG_CTAG,
+ NPC_F_MPLS_4_LABELS,
+ NPC_F_MPLS_3_LABELS,
+ NPC_F_MPLS_2_LABELS,
+ NPC_F_IP_HAS_OPTIONS,
+ NPC_F_IP_IP_IN_IP,
+ NPC_F_IP_6TO4,
+ NPC_F_IP_MPLS_IN_IP,
+ NPC_F_IP_UNK_PROTO,
+ NPC_F_IP_IP_IN_IP_HAS_OPTIONS,
+ NPC_F_IP_6TO4_HAS_OPTIONS,
+ NPC_F_IP_MPLS_IN_IP_HAS_OPTIONS,
+ NPC_F_IP_UNK_PROTO_HAS_OPTIONS,
+ NPC_F_IP6_HAS_EXT,
+ NPC_F_IP6_TUN_IP6,
+ NPC_F_IP6_MPLS_IN_IP,
+ NPC_F_TCP_HAS_OPTIONS,
+ NPC_F_TCP_HTTP,
+ NPC_F_TCP_HTTPS,
+ NPC_F_TCP_PPTP,
+ NPC_F_TCP_UNK_PORT,
+ NPC_F_TCP_HTTP_HAS_OPTIONS,
+ NPC_F_TCP_HTTPS_HAS_OPTIONS,
+ NPC_F_TCP_PPTP_HAS_OPTIONS,
+ NPC_F_TCP_UNK_PORT_HAS_OPTIONS,
+ NPC_F_UDP_VXLAN,
+ NPC_F_UDP_VXLAN_NOVNI,
+ NPC_F_UDP_VXLAN_NOVNI_NSH,
+ NPC_F_UDP_VXLANGPE,
+ NPC_F_UDP_VXLANGPE_NSH,
+ NPC_F_UDP_VXLANGPE_MPLS,
+ NPC_F_UDP_VXLANGPE_NOVNI,
+ NPC_F_UDP_VXLANGPE_NOVNI_NSH,
+ NPC_F_UDP_VXLANGPE_NOVNI_MPLS,
+ NPC_F_UDP_VXLANGPE_UNK,
+ NPC_F_UDP_VXLANGPE_NONP,
+ NPC_F_UDP_GTP_GTPC,
+ NPC_F_UDP_GTP_GTPU_G_PDU,
+ NPC_F_UDP_GTP_GTPU_UNK,
+ NPC_F_UDP_UNK_PORT,
+ NPC_F_UDP_GENEVE,
+ NPC_F_UDP_GENEVE_OAM,
+ NPC_F_UDP_GENEVE_CRI_OPT,
+ NPC_F_UDP_GENEVE_OAM_CRI_OPT,
+ NPC_F_GRE_NVGRE,
+ NPC_F_GRE_HAS_SRE,
+ NPC_F_GRE_HAS_CSUM,
+ NPC_F_GRE_HAS_KEY,
+ NPC_F_GRE_HAS_SEQ,
+ NPC_F_GRE_HAS_CSUM_KEY,
+ NPC_F_GRE_HAS_CSUM_SEQ,
+ NPC_F_GRE_HAS_KEY_SEQ,
+ NPC_F_GRE_HAS_CSUM_KEY_SEQ,
+ NPC_F_GRE_HAS_ROUTE,
+ NPC_F_GRE_UNK_PROTO,
+ NPC_F_GRE_VER1,
+ NPC_F_GRE_VER1_HAS_SEQ,
+ NPC_F_GRE_VER1_HAS_ACK,
+ NPC_F_GRE_VER1_HAS_SEQ_ACK,
+ NPC_F_GRE_VER1_UNK_PROTO,
+ NPC_F_TU_ETHER_UNK,
+ NPC_F_TU_ETHER_CTAG,
+ NPC_F_TU_ETHER_CTAG_UNK,
+ NPC_F_TU_ETHER_STAG_CTAG,
+ NPC_F_TU_ETHER_STAG_CTAG_UNK,
+ NPC_F_TU_ETHER_STAG,
+ NPC_F_TU_ETHER_STAG_UNK,
+ NPC_F_TU_ETHER_QINQ_CTAG,
+ NPC_F_TU_ETHER_QINQ_CTAG_UNK,
+ NPC_F_TU_ETHER_QINQ,
+ NPC_F_TU_ETHER_QINQ_UNK,
+ NPC_F_LAST /* has to be the last item */
+};
+
+
+int otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id);
+
+int otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
+ uint64_t *count);
+
+int otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id);
+
+int otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry);
+
+int otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox);
+
+int otx2_flow_update_parse_state(struct otx2_parse_state *pst,
+ struct otx2_flow_item_info *info,
+ int lid, int lt, uint8_t flags);
+
+int otx2_flow_parse_item_basic(const struct rte_flow_item *item,
+ struct otx2_flow_item_info *info,
+ struct rte_flow_error *error);
+
+void otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask);
+
+int otx2_flow_mcam_alloc_and_write(struct rte_flow *flow,
+ struct otx2_mbox *mbox,
+ struct otx2_parse_state *pst,
+ struct otx2_npc_flow_info *flow_info);
+
+void otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
+ struct otx2_flow_item_info *info,
+ int lid, int lt);
+
+const struct rte_flow_item *
+otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern);
+
+int otx2_flow_parse_lh(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_lg(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_lf(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_le(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_ld(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_lc(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_lb(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_la(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_actions(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct rte_flow *flow);
+
+#endif /* __OTX2_FLOW_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 34/58] net/octeontx2: flow utility functions
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (32 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 33/58] net/octeontx2: introducing flow driver jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 35/58] net/octeontx2: flow mailbox utility jerinj
` (25 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
First pass of rte_flow utility functions for octeontx2.
These will be used to communicate with the AF driver.
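The central helper below is otx2_flow_parse_item_basic(); its calling
contract, sketched from a parser's point of view (the def_mask and hw_mask
values here are illustrative placeholders):
struct otx2_flow_item_info info = {
	.def_mask = &rte_flow_item_eth_mask, /* used when item->mask is NULL */
	.hw_mask = &hw_eth_mask, /* filled via otx2_flow_get_hw_supp_mask() */
	.len = sizeof(struct rte_flow_item_eth),
};
/* Succeeds iff mask | hw_mask == hw_mask and any 'last' value
 * describes a contiguous range under the mask.
 */
rc = otx2_flow_parse_item_basic(item, &info, error);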
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_flow_utils.c | 369 ++++++++++++++++++++++++
3 files changed, 371 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index b1c8e4e52..7773643af 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -39,6 +39,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_lookup.c \
otx2_ethdev.c \
otx2_flow_ctrl.c \
+ otx2_flow_utils.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
otx2_ethdev_debug.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 57d6c0a58..cd168c32f 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -12,6 +12,7 @@ sources = files(
'otx2_lookup.c',
'otx2_ethdev.c',
'otx2_flow_ctrl.c',
+ 'otx2_flow_utils.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
'otx2_ethdev_debug.c',
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
new file mode 100644
index 000000000..bf20d7319
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow_utils.c
@@ -0,0 +1,369 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+int
+otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id)
+{
+ struct npc_mcam_oper_counter_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_free_counter(mbox);
+ req->cntr = ctr_id;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, NULL);
+
+ return rc;
+}
+
+int
+otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
+ uint64_t *count)
+{
+ struct npc_mcam_oper_counter_req *req;
+ struct npc_mcam_oper_counter_rsp *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_counter_stats(mbox);
+ req->cntr = ctr_id;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+
+ *count = rsp->stat;
+ return rc;
+}
+
+int
+otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id)
+{
+ struct npc_mcam_oper_counter_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_clear_counter(mbox);
+ req->cntr = ctr_id;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, NULL);
+
+ return rc;
+}
+
+int
+otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry)
+{
+ struct npc_mcam_free_entry_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ req->entry = entry;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, NULL);
+
+ return rc;
+}
+
+int
+otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox)
+{
+ struct npc_mcam_free_entry_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ req->all = 1;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, NULL);
+
+ return rc;
+}
+
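+/* Copy 'len' bytes of data into ptr in reversed byte order, since the
+ * MCAM match string is programmed with the destination byte at the MSB.
+ */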
+static void
+flow_prep_mcam_ldata(uint8_t *ptr, const uint8_t *data, int len)
+{
+ int idx;
+
+ for (idx = 0; idx < len; idx++)
+ ptr[idx] = data[len - 1 - idx];
+}
+
+static size_t
+flow_check_copysz(size_t size, size_t len)
+{
+ if (len <= size)
+ return len;
+
+ rte_panic("String op-overflow");
+}
+
+static inline int
+flow_mem_is_zero(const void *mem, int len)
+{
+ const char *m = mem;
+ int i;
+
+ for (i = 0; i < len; i++) {
+ if (m[i] != 0)
+ return 0;
+ }
+ return 1;
+}
+
+void
+otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
+ struct otx2_flow_item_info *info, int lid, int lt)
+{
+ struct npc_xtract_info *xinfo;
+ char *hw_mask = info->hw_mask;
+ int i, j;
+ int intf;
+
+ intf = pst->flow->nix_intf;
+ xinfo = pst->npc->prx_dxcfg[intf][lid][lt].xtract;
+ memset(hw_mask, 0, info->len);
+
+ for (i = 0; i < NPC_MAX_LD; i++) {
+ int max_off = xinfo[i].hdr_off + xinfo[i].len;
+
+ if (xinfo[i].enable == 0)
+ continue;
+
+ if (max_off > info->len)
+ max_off = info->len;
+
+ for (j = xinfo[i].hdr_off; j < max_off; j++)
+ hw_mask[j] = 0xff;
+ }
+}
+
+int
+otx2_flow_update_parse_state(struct otx2_parse_state *pst,
+ struct otx2_flow_item_info *info, int lid, int lt,
+ uint8_t flags)
+{
+ uint8_t int_info_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ uint8_t int_info[NPC_MAX_EXTRACT_DATA_LEN];
+ struct npc_lid_lt_xtract_info *xinfo;
+ int len = 0;
+ int intf;
+ int i;
+
+ otx2_npc_dbg("Parse state function info mask total %s",
+ (const uint8_t *)info->mask);
+
+ pst->layer_mask |= lid;
+ pst->lt[lid] = lt;
+ pst->flags[lid] = flags;
+
+ intf = pst->flow->nix_intf;
+ xinfo = &pst->npc->prx_dxcfg[intf][lid][lt];
+ otx2_npc_dbg("Is_terminating = %d", xinfo->is_terminating);
+ if (xinfo->is_terminating)
+ pst->terminate = 1;
+
+ /* Flags should be validated here, but in the latest KPU
+ * profile flags are used as an enumeration, so there is no
+ * way to validate them unless the MBOX is changed to return
+ * the set of valid values out of the 2**8 possibilities.
+ */
+ if (info->spec == NULL) { /* Nothing to match */
+ otx2_npc_dbg("Info spec NULL");
+ goto done;
+ }
+
+ /* Copy spec and mask into mcam match string, mask.
+ * Since both RTE FLOW and OTX2 MCAM use network-endianness
+ * for data, we are saved from nasty conversions.
+ */
+ for (i = 0; i < NPC_MAX_LD; i++) {
+ struct npc_xtract_info *x;
+ int k, idx;
+
+ x = &xinfo->xtract[i];
+ len = x->len;
+
+ if (x->enable == 0)
+ continue;
+
+ otx2_npc_dbg("x->hdr_off = %d, len = %d, info->len = %d,"
+ "x->key_off = %d", x->hdr_off, len, info->len,
+ x->key_off);
+
+ if (x->hdr_off + len > info->len)
+ len = info->len - x->hdr_off;
+
+ /* Check for over-write of previous layer */
+ if (!flow_mem_is_zero(pst->mcam_mask + x->key_off,
+ len)) {
+ /* Cannot support this data match */
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ pst->pattern,
+ "Extraction unsupported");
+ return -rte_errno;
+ }
+
+ len = flow_check_copysz((OTX2_MAX_MCAM_WIDTH_DWORDS * 8)
+ - x->key_off,
+ len);
+ /* Need to reverse complete structure so that dest addr is at
+ * MSB so as to program the MCAM using mcam_data & mcam_mask
+ * arrays
+ */
+ flow_prep_mcam_ldata(int_info,
+ (const uint8_t *)info->spec + x->hdr_off,
+ x->len);
+ flow_prep_mcam_ldata(int_info_mask,
+ (const uint8_t *)info->mask + x->hdr_off,
+ x->len);
+
+ otx2_npc_dbg("Spec: ");
+ for (k = 0; k < info->len; k++)
+ otx2_npc_dbg("0x%.2x ",
+ ((const uint8_t *)info->spec)[k]);
+
+ otx2_npc_dbg("Int_info: ");
+ for (k = 0; k < info->len; k++)
+ otx2_npc_dbg("0x%.2x ", int_info[k]);
+
+ memcpy(pst->mcam_mask + x->key_off, int_info_mask, len);
+ memcpy(pst->mcam_data + x->key_off, int_info, len);
+
+ otx2_npc_dbg("Parse state mcam data & mask");
+ for (idx = 0; idx < len ; idx++)
+ otx2_npc_dbg("data[%d]: 0x%x, mask[%d]: 0x%x", idx,
+ *(pst->mcam_data + idx + x->key_off), idx,
+ *(pst->mcam_mask + idx + x->key_off));
+ }
+
+done:
+ /* Next pattern to parse by subsequent layers */
+ pst->pattern++;
+ return 0;
+}
+
+static inline int
+flow_range_is_valid(const char *spec, const char *last, const char *mask,
+ int len)
+{
+ /* Under the mask, 'last' must be zero or match 'spec', as we
+ * do not support non-contiguous ranges.
+ */
+ while (len--) {
+ if (last[len] &&
+ (spec[len] & mask[len]) != (last[len] & mask[len]))
+ return 0; /* False */
+ }
+ return 1;
+}
+
+
+static inline int
+flow_mask_is_supported(const char *mask, const char *hw_mask, int len)
+{
+ /*
+ * If no hw_mask, assume nothing is supported.
+ * mask is never NULL
+ */
+ if (hw_mask == NULL)
+ return flow_mem_is_zero(mask, len);
+
+ while (len--) {
+ if ((mask[len] | hw_mask[len]) != hw_mask[len])
+ return 0; /* False */
+ }
+ return 1;
+}
+
+int
+otx2_flow_parse_item_basic(const struct rte_flow_item *item,
+ struct otx2_flow_item_info *info,
+ struct rte_flow_error *error)
+{
+ /* Item must not be NULL */
+ if (item == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Item is NULL");
+ return -rte_errno;
+ }
+ /* If spec is NULL, both mask and last must be NULL, this
+ * makes it to match ANY value (eq to mask = 0).
+ * Setting either mask or last without spec is an error
+ */
+ if (item->spec == NULL) {
+ if (item->last == NULL && item->mask == NULL) {
+ info->spec = NULL;
+ return 0;
+ }
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "mask or last set without spec");
+ return -rte_errno;
+ }
+
+ /* We have valid spec */
+ info->spec = item->spec;
+
+ /* If mask is not set, use default mask, err if default mask is
+ * also NULL.
+ */
+ if (item->mask == NULL) {
+ otx2_npc_dbg("Item mask null, using default mask");
+ if (info->def_mask == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "No mask or default mask given");
+ return -rte_errno;
+ }
+ info->mask = info->def_mask;
+ } else {
+ info->mask = item->mask;
+ }
+
+ /* mask specified must be subset of hw supported mask
+ * mask | hw_mask == hw_mask
+ */
+ if (!flow_mask_is_supported(info->mask, info->hw_mask, info->len)) {
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Unsupported field in the mask");
+ return -rte_errno;
+ }
+
+ /* Now we have spec and mask. OTX2 does not support non-contiguous
+ * range. We should have either:
+ * - spec & mask == last & mask or,
+ * - last == 0 or,
+ * - last == NULL
+ */
+ if (item->last != NULL && !flow_mem_is_zero(item->last, info->len)) {
+ if (!flow_range_is_valid(item->spec, item->last, info->mask,
+ info->len)) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Unsupported range for match");
+ return -rte_errno;
+ }
+ }
+
+ return 0;
+}
+
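+/* Compact the nibbles selected by nibble_mask out of the 128-bit key in
+ * data[] into consecutive low-order nibbles, in ascending bit order.
+ */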
+void
+otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask)
+{
+ uint64_t cdata[2] = {0ULL, 0ULL}, nibble;
+ int i, j = 0;
+
+ for (i = 0; i < NPC_MAX_KEY_NIBBLES; i++) {
+ if (nibble_mask & (1 << i)) {
+ nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
+ cdata[j / 16] |= (nibble << ((j & 0xf) * 4));
+ j += 1;
+ }
+ }
+
+ data[0] = cdata[0];
+ data[1] = cdata[1];
+}
+
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 35/58] net/octeontx2: flow mailbox utility
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (33 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 34/58] net/octeontx2: flow utility functions jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 36/58] net/octeontx2: add flow MCAM utility functions jerinj
` (24 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add mailbox utility functions for rte_flow. These will be used
to allocate, reserve and write MCAM entries to the device on request.
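As a sketch of the intended call pattern (the caller shown lands in a
later patch of this series; mbox, flow, flow_info, rsp and prio are
illustrative):
/* rsp comes from an NPC_MCAM_ALLOC_ENTRY mailbox response; shift
 * entries if the allocation landed outside this priority zone.
 */
rc = flow_validate_and_shift_prio_ent(mbox, flow, flow_info, rsp, prio);
if (rc)
	return rc;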
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow.h | 6 +-
drivers/net/octeontx2/otx2_flow_utils.c | 259 ++++++++++++++++++++++++
2 files changed, 264 insertions(+), 1 deletion(-)
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
index 07d9e9fd6..04c5e487f 100644
--- a/drivers/net/octeontx2/otx2_flow.h
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -380,5 +380,9 @@ int otx2_flow_parse_actions(struct rte_eth_dev *dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error,
struct rte_flow *flow);
-
+int
+flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info,
+ struct npc_mcam_alloc_entry_rsp *rsp,
+ int req_prio);
#endif /* __OTX2_FLOW_H__ */
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
index bf20d7319..288f5776e 100644
--- a/drivers/net/octeontx2/otx2_flow_utils.c
+++ b/drivers/net/octeontx2/otx2_flow_utils.c
@@ -367,3 +367,262 @@ otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask)
data[1] = cdata[1];
}
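+
+/* Return the bit position of the lowest set bit in slab; callers
+ * guarantee slab is non-zero.
+ */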
+static int
+flow_first_set_bit(uint64_t slab)
+{
+ int num = 0;
+
+ if ((slab & 0xffffffff) == 0) {
+ num += 32;
+ slab >>= 32;
+ }
+ if ((slab & 0xffff) == 0) {
+ num += 16;
+ slab >>= 16;
+ }
+ if ((slab & 0xff) == 0) {
+ num += 8;
+ slab >>= 8;
+ }
+ if ((slab & 0xf) == 0) {
+ num += 4;
+ slab >>= 4;
+ }
+ if ((slab & 0x3) == 0) {
+ num += 2;
+ slab >>= 2;
+ }
+ if ((slab & 0x1) == 0)
+ num += 1;
+
+ return num;
+}
+
+static int
+flow_shift_lv_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info,
+ uint32_t old_ent, uint32_t new_ent)
+{
+ struct npc_mcam_shift_entry_req *req;
+ struct npc_mcam_shift_entry_rsp *rsp;
+ struct otx2_flow_list *list;
+ struct rte_flow *flow_iter;
+ int rc = 0;
+
+ otx2_npc_dbg("Old ent:%u new ent:%u priority:%u", old_ent, new_ent,
+ flow->priority);
+
+ list = &flow_info->flow_list[flow->priority];
+
+ /* The old entry is disabled and its contents are moved to
+ * new_entry; the new entry is then enabled.
+ */
+ req = otx2_mbox_alloc_msg_npc_mcam_shift_entry(mbox);
+ req->curr_entry[0] = old_ent;
+ req->new_entry[0] = new_ent;
+ req->shift_count = 1;
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Remove old node from list */
+ TAILQ_FOREACH(flow_iter, list, next) {
+ if (flow_iter->mcam_id == old_ent)
+ TAILQ_REMOVE(list, flow_iter, next);
+ }
+
+ /* Insert node with new mcam id at right place */
+ TAILQ_FOREACH(flow_iter, list, next) {
+ if (flow_iter->mcam_id > new_ent)
+ TAILQ_INSERT_BEFORE(flow_iter, flow, next);
+ }
+ return rc;
+}
+
+/* Exchange all required entries with a given priority level */
+static int
+flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info,
+ struct npc_mcam_alloc_entry_rsp *rsp, int dir, int prio_lvl)
+{
+ struct rte_bitmap *fr_bmp, *fr_bmp_rev, *lv_bmp, *lv_bmp_rev, *bmp;
+ uint32_t e_fr = 0, e_lv = 0, e, e_id = 0, mcam_entries;
+ uint64_t fr_bit_pos = 0, lv_bit_pos = 0, bit_pos = 0;
+ /* Bit position within the slab */
+ uint32_t sl_fr_bit_off = 0, sl_lv_bit_off = 0;
+ /* Overall bit position of the start of slab */
+ /* free & live entry index */
+ int rc_fr = 0, rc_lv = 0, rc = 0, idx = 0;
+ struct otx2_mcam_ents_info *ent_info;
+ /* free & live bitmap slab */
+ uint64_t sl_fr = 0, sl_lv = 0, *sl;
+
+ fr_bmp = flow_info->free_entries[prio_lvl];
+ fr_bmp_rev = flow_info->free_entries_rev[prio_lvl];
+ lv_bmp = flow_info->live_entries[prio_lvl];
+ lv_bmp_rev = flow_info->live_entries_rev[prio_lvl];
+ ent_info = &flow_info->flow_entry_info[prio_lvl];
+ mcam_entries = flow_info->mcam_entries;
+
+
+ /* Newly allocated entries are always contiguous, but older entries
+ * already in the free/live bitmaps can be non-contiguous, so the
+ * returned shifted entries are kept in non-contiguous format.
+ */
+ while (idx <= rsp->count) {
+ if (!sl_fr && !sl_lv) {
+ /* Lower index elements to be exchanged */
+ if (dir < 0) {
+ rc_fr = rte_bitmap_scan(fr_bmp, &e_fr, &sl_fr);
+ rc_lv = rte_bitmap_scan(lv_bmp, &e_lv, &sl_lv);
+ otx2_npc_dbg("Fwd slab rc fr %u rc lv %u "
+ "e_fr %u e_lv %u", rc_fr, rc_lv,
+ e_fr, e_lv);
+ } else {
+ rc_fr = rte_bitmap_scan(fr_bmp_rev,
+ &sl_fr_bit_off,
+ &sl_fr);
+ rc_lv = rte_bitmap_scan(lv_bmp_rev,
+ &sl_lv_bit_off,
+ &sl_lv);
+
+ otx2_npc_dbg("Rev slab rc fr %u rc lv %u "
+ "e_fr %u e_lv %u", rc_fr, rc_lv,
+ e_fr, e_lv);
+ }
+ }
+
+ if (rc_fr) {
+ fr_bit_pos = flow_first_set_bit(sl_fr);
+ e_fr = sl_fr_bit_off + fr_bit_pos;
+ otx2_npc_dbg("Fr_bit_pos 0x%" PRIx64, fr_bit_pos);
+ } else {
+ e_fr = ~(0);
+ }
+
+ if (rc_lv) {
+ lv_bit_pos = flow_first_set_bit(sl_lv);
+ e_lv = sl_lv_bit_off + lv_bit_pos;
+ otx2_npc_dbg("Lv_bit_pos 0x%" PRIx64, lv_bit_pos);
+ } else {
+ e_lv = ~(0);
+ }
+
+ /* First entry is from free_bmap */
+ if (e_fr < e_lv) {
+ bmp = fr_bmp;
+ e = e_fr;
+ sl = &sl_fr;
+ bit_pos = fr_bit_pos;
+ if (dir > 0)
+ e_id = mcam_entries - e - 1;
+ else
+ e_id = e;
+ otx2_npc_dbg("Fr e %u e_id %u", e, e_id);
+ } else {
+ bmp = lv_bmp;
+ e = e_lv;
+ sl = &sl_lv;
+ bit_pos = lv_bit_pos;
+ if (dir > 0)
+ e_id = mcam_entries - e - 1;
+ else
+ e_id = e;
+
+ otx2_npc_dbg("Lv e %u e_id %u", e, e_id);
+ if (idx < rsp->count)
+ rc =
+ flow_shift_lv_ent(mbox, flow,
+ flow_info, e_id,
+ rsp->entry + idx);
+ }
+
+ rte_bitmap_clear(bmp, e);
+ rte_bitmap_set(bmp, rsp->entry + idx);
+ /* Update entry list, use non-contiguous
+ * list now.
+ */
+ rsp->entry_list[idx] = e_id;
+ *sl &= ~(1ULL << bit_pos); /* 64-bit mask; bit_pos may exceed 31 */
+
+ /* Update min & max entry identifiers in current
+ * priority level.
+ */
+ if (dir < 0) {
+ ent_info->max_id = rsp->entry + idx;
+ ent_info->min_id = e_id;
+ } else {
+ ent_info->max_id = e_id;
+ ent_info->min_id = rsp->entry;
+ }
+
+ idx++;
+ }
+ return rc;
+}
+
+/* Validate that newly allocated entries lie in the correct priority zone,
+ * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't guarantee zone
+ * accuracy. If they are not properly placed, shift entries until they are.
+ */
+int
+flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info,
+ struct npc_mcam_alloc_entry_rsp *rsp,
+ int req_prio)
+{
+ int prio_idx = 0, rc = 0, needs_shift = 0, idx, prio = flow->priority;
+ struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
+ int dir = (req_prio == NPC_MCAM_HIGHER_PRIO) ? 1 : -1;
+ uint32_t tot_ent = 0;
+
+ otx2_npc_dbg("Dir %d, priority = %d", dir, prio);
+
+ if (dir < 0)
+ prio_idx = flow_info->flow_max_priority - 1;
+
+ /* Only live entries need to be shifted; free entries can be
+ * moved by bit manipulation alone.
+ */
+
+ /* For dir = -1 (NPC_MCAM_LOWER_PRIO), when shifting,
+ * NPC_MAX_PREALLOC_ENT entries are exchanged with the adjoining
+ * higher priority level entries (lower indexes).
+ *
+ * For dir = +1 (NPC_MCAM_HIGHER_PRIO), during the shift,
+ * NPC_MAX_PREALLOC_ENT entries are exchanged with the adjoining
+ * lower priority level entries (higher indexes).
+ */
+ do {
+ tot_ent = info[prio_idx].free_ent + info[prio_idx].live_ent;
+
+ if (dir < 0 && prio_idx != prio &&
+ rsp->entry > info[prio_idx].max_id && tot_ent) {
+ otx2_npc_dbg("Rsp entry %u prio idx %u "
+ "max id %u", rsp->entry, prio_idx,
+ info[prio_idx].max_id);
+
+ needs_shift = 1;
+ } else if ((dir > 0) && (prio_idx != prio) &&
+ (rsp->entry < info[prio_idx].min_id) && tot_ent) {
+ otx2_npc_dbg("Rsp entry %u prio idx %u "
+ "min id %u", rsp->entry, prio_idx,
+ info[prio_idx].min_id);
+ needs_shift = 1;
+ }
+
+ otx2_npc_dbg("Needs_shift = %d", needs_shift);
+ if (needs_shift) {
+ needs_shift = 0;
+ rc = flow_shift_ent(mbox, flow, flow_info, rsp, dir,
+ prio_idx);
+ } else {
+ for (idx = 0; idx < rsp->count; idx++)
+ rsp->entry_list[idx] = rsp->entry + idx;
+ }
+ } while ((prio_idx != prio) && (prio_idx += dir));
+
+ return rc;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 36/58] net/octeontx2: add flow MCAM utility functions
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (34 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 35/58] net/octeontx2: flow mailbox utility jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 37/58] net/octeontx2: add flow parsing for outer layers jerinj
` (23 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Adding MCAM utility functions to alloc and write the entries.
These will be used to arrange the flow rules based on priority.
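For context, a minimal sketch (not part of this patch) of how the
allocation path is exercised through the generic rte_flow API; the
port id, queue index and pattern below are illustrative assumptions:

	#include <rte_flow.h>

	/* Two rules at different priorities: the PMD must keep the
	 * higher-priority rule at a lower MCAM index, shifting already
	 * allocated entries when the preallocated zone is misaligned.
	 */
	static int
	add_two_prio_rules(uint16_t port_id)
	{
		struct rte_flow_attr attr = { .ingress = 1, .priority = 0 };
		struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		struct rte_flow_action_queue queue = { .index = 0 };
		struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		struct rte_flow_error err;

		if (rte_flow_create(port_id, &attr, pattern, actions, &err) == NULL)
			return -1;

		attr.priority = 1;	/* lower priority, higher MCAM index */
		if (rte_flow_create(port_id, &attr, pattern, actions, &err) == NULL)
			return -1;

		return 0;
	}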
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow.h | 6 +-
drivers/net/octeontx2/otx2_flow_utils.c | 258 +++++++++++++++++++++++-
2 files changed, 258 insertions(+), 6 deletions(-)
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
index 04c5e487f..07d9e9fd6 100644
--- a/drivers/net/octeontx2/otx2_flow.h
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -380,9 +380,5 @@ int otx2_flow_parse_actions(struct rte_eth_dev *dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error,
struct rte_flow *flow);
-int
-flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- struct npc_mcam_alloc_entry_rsp *rsp,
- int req_prio);
+
#endif /* __OTX2_FLOW_H__ */
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
index 288f5776e..1dd57cc0f 100644
--- a/drivers/net/octeontx2/otx2_flow_utils.c
+++ b/drivers/net/octeontx2/otx2_flow_utils.c
@@ -5,6 +5,22 @@
#include "otx2_ethdev.h"
#include "otx2_flow.h"
+static int
+flow_mcam_alloc_counter(struct otx2_mbox *mbox, uint16_t *ctr)
+{
+ struct npc_mcam_alloc_counter_req *req;
+ struct npc_mcam_alloc_counter_rsp *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_alloc_counter(mbox);
+ req->count = 1;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ *ctr = rsp->cntr_list[0];
+ return 0;
+}
+
int
otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id)
{
@@ -567,7 +583,7 @@ flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
* since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy.
* If not properly aligned, shift entries to do so
*/
-int
+static int
flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
struct otx2_npc_flow_info *flow_info,
struct npc_mcam_alloc_entry_rsp *rsp,
@@ -626,3 +642,243 @@ flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
return rc;
}
+
+static int
+flow_find_ref_entry(struct otx2_npc_flow_info *flow_info, int *prio,
+ int prio_lvl)
+{
+ struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
+ int step = 1;
+
+ while (step < flow_info->flow_max_priority) {
+ if (((prio_lvl + step) < flow_info->flow_max_priority) &&
+ info[prio_lvl + step].live_ent) {
+ *prio = NPC_MCAM_HIGHER_PRIO;
+ return info[prio_lvl + step].min_id;
+ }
+
+ if (((prio_lvl - step) >= 0) &&
+ info[prio_lvl - step].live_ent) {
+ otx2_npc_dbg("Prio_lvl %u live %u", prio_lvl - step,
+ info[prio_lvl - step].live_ent);
+ *prio = NPC_MCAM_LOWER_PRIO;
+ return info[prio_lvl - step].max_id;
+ }
+ step++;
+ }
+ *prio = NPC_MCAM_ANY_PRIO;
+ return 0;
+}
+
+static int
+flow_fill_entry_cache(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info, uint32_t *free_ent)
+{
+ struct rte_bitmap *free_bmp, *free_bmp_rev, *live_bmp, *live_bmp_rev;
+ struct npc_mcam_alloc_entry_rsp rsp_local;
+ struct npc_mcam_alloc_entry_rsp *rsp_cmd;
+ struct npc_mcam_alloc_entry_req *req;
+ struct npc_mcam_alloc_entry_rsp *rsp;
+ struct otx2_mcam_ents_info *info;
+ uint16_t ref_ent, idx;
+ int rc, prio;
+
+ info = &flow_info->flow_entry_info[flow->priority];
+ free_bmp = flow_info->free_entries[flow->priority];
+ free_bmp_rev = flow_info->free_entries_rev[flow->priority];
+ live_bmp = flow_info->live_entries[flow->priority];
+ live_bmp_rev = flow_info->live_entries_rev[flow->priority];
+
+ ref_ent = flow_find_ref_entry(flow_info, &prio, flow->priority);
+
+ req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
+ req->contig = 1;
+ req->count = flow_info->flow_prealloc_size;
+ req->priority = prio;
+ req->ref_entry = ref_ent;
+
+ otx2_npc_dbg("Fill cache ref entry %u prio %u", ref_ent, prio);
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp_cmd);
+ if (rc)
+ return rc;
+
+ rsp = &rsp_local;
+ memcpy(rsp, rsp_cmd, sizeof(*rsp));
+
+ otx2_npc_dbg("Alloc entry %u count %u , prio = %d", rsp->entry,
+ rsp->count, prio);
+
+ /* Non-first ent cache fill */
+ if (prio != NPC_MCAM_ANY_PRIO) {
+ flow_validate_and_shift_prio_ent(mbox, flow, flow_info, rsp,
+ prio);
+ } else {
+ /* Copy into response entry list */
+ for (idx = 0; idx < rsp->count; idx++)
+ rsp->entry_list[idx] = rsp->entry + idx;
+ }
+
+ otx2_npc_dbg("Fill entry cache rsp count %u", rsp->count);
+ /* Update free entries, reverse free entries list,
+ * min & max entry ids.
+ */
+ for (idx = 0; idx < rsp->count; idx++) {
+ if (unlikely(rsp->entry_list[idx] < info->min_id))
+ info->min_id = rsp->entry_list[idx];
+
+ if (unlikely(rsp->entry_list[idx] > info->max_id))
+ info->max_id = rsp->entry_list[idx];
+
+ /* Skip entry to be returned, not to be part of free
+ * list.
+ */
+ if (prio == NPC_MCAM_HIGHER_PRIO) {
+ if (unlikely(idx == (rsp->count - 1))) {
+ *free_ent = rsp->entry_list[idx];
+ continue;
+ }
+ } else {
+ if (unlikely(!idx)) {
+ *free_ent = rsp->entry_list[idx];
+ continue;
+ }
+ }
+ info->free_ent++;
+ rte_bitmap_set(free_bmp, rsp->entry_list[idx]);
+ rte_bitmap_set(free_bmp_rev, flow_info->mcam_entries -
+ rsp->entry_list[idx] - 1);
+
+ otx2_npc_dbg("Final rsp entry %u rsp entry rev %u",
+ rsp->entry_list[idx],
+ flow_info->mcam_entries - rsp->entry_list[idx] - 1);
+ }
+
+ otx2_npc_dbg("Cache free entry %u, rev = %u", *free_ent,
+ flow_info->mcam_entries - *free_ent - 1);
+ info->live_ent++;
+ rte_bitmap_set(live_bmp, *free_ent);
+ rte_bitmap_set(live_bmp_rev, flow_info->mcam_entries - *free_ent - 1);
+
+ return 0;
+}
+
+static int
+flow_check_preallocated_entry_cache(struct otx2_mbox *mbox,
+ struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info)
+{
+ struct rte_bitmap *free, *free_rev, *live, *live_rev;
+ uint32_t pos = 0, free_ent = 0, mcam_entries;
+ struct otx2_mcam_ents_info *info;
+ uint64_t slab = 0;
+ int rc;
+
+ otx2_npc_dbg("Flow priority %u", flow->priority);
+
+ info = &flow_info->flow_entry_info[flow->priority];
+
+ free_rev = flow_info->free_entries_rev[flow->priority];
+ free = flow_info->free_entries[flow->priority];
+ live_rev = flow_info->live_entries_rev[flow->priority];
+ live = flow_info->live_entries[flow->priority];
+ mcam_entries = flow_info->mcam_entries;
+
+ if (info->free_ent) {
+ rc = rte_bitmap_scan(free, &pos, &slab);
+ if (rc) {
+ /* Get free_ent from free entry bitmap */
+ free_ent = pos + __builtin_ctzll(slab);
+ otx2_npc_dbg("Allocated from cache entry %u", free_ent);
+ /* Remove from free bitmaps and add to live ones */
+ rte_bitmap_clear(free, free_ent);
+ rte_bitmap_set(live, free_ent);
+ rte_bitmap_clear(free_rev,
+ mcam_entries - free_ent - 1);
+ rte_bitmap_set(live_rev,
+ mcam_entries - free_ent - 1);
+
+ info->free_ent--;
+ info->live_ent++;
+ return free_ent;
+ }
+
+ otx2_npc_dbg("No free entry:its a mess");
+ return -1;
+ }
+
+ rc = flow_fill_entry_cache(mbox, flow, flow_info, &free_ent);
+ if (rc)
+ return rc;
+
+ return free_ent;
+}
+
+int
+otx2_flow_mcam_alloc_and_write(struct rte_flow *flow, struct otx2_mbox *mbox,
+ __rte_unused struct otx2_parse_state *pst,
+ struct otx2_npc_flow_info *flow_info)
+{
+ int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1);
+ struct npc_mcam_write_entry_req *req;
+ struct mbox_msghdr *rsp;
+ uint16_t ctr = ~(0);
+ int rc, idx;
+ int entry;
+
+ if (use_ctr) {
+ rc = flow_mcam_alloc_counter(mbox, &ctr);
+ if (rc)
+ return rc;
+ }
+
+ entry = flow_check_preallocated_entry_cache(mbox, flow, flow_info);
+ if (entry < 0) {
+ otx2_err("Prealloc failed");
+ if (use_ctr)
+ otx2_flow_mcam_free_counter(mbox, ctr);
+ return NPC_MCAM_ALLOC_FAILED;
+ }
+ req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
+ req->set_cntr = use_ctr;
+ req->cntr = ctr;
+ req->entry = entry;
+ otx2_npc_dbg("Alloc & write entry %u", entry);
+
+ req->intf =
+ (flow->nix_intf == OTX2_INTF_RX) ? NPC_MCAM_RX : NPC_MCAM_TX;
+ req->enable_entry = 1;
+ req->entry_data.action = flow->npc_action;
+
+ /*
+ * DPDK sets vtag action on per interface basis, not
+ * per flow basis. It is a matter of how we decide to support
+ * this pmd specific behavior. There are two ways:
+ * 1. Inherit the vtag action from the one configured
+ * for this interface. This can be read from the
+ * vtag_action configured for default mcam entry of
+ * this pf_func.
+ * 2. Do not support vtag action with rte_flow.
+ *
+ * Second approach is used now.
+ */
+ req->entry_data.vtag_action = 0ULL;
+
+ for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
+ req->entry_data.kw[idx] = flow->mcam_data[idx];
+ req->entry_data.kw_mask[idx] = flow->mcam_mask[idx];
+ }
+
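+ /* The low 12 bits of KW0 carry the RX channel; match them exactly */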
+ req->entry_data.kw[0] |= flow_info->channel;
+ req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+ if (rc != 0)
+ return rc;
+
+ flow->mcam_id = entry;
+ if (use_ctr)
+ flow->ctr_id = ctr;
+ return 0;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 37/58] net/octeontx2: add flow parsing for outer layers
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (35 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 36/58] net/octeontx2: add flow MCAM utility functions jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 38/58] net/octeontx2: adding flow parsing for inner layers jerinj
` (22 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Adding functionality to parse the outer layers, from ld to lh.
These will be used to parse the outer L2, L3 and L4 layers and the
tunnel types.
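As an illustration (not part of the patch), a pattern the LD stage
would walk for a UDP-encapsulated tunnel: UDP itself carries no match
data, the look-ahead finds the VXLAN item, sets the NPC_F_UDP_VXLAN
flag and marks the remaining items as tunneled. The VNI value is an
arbitrary example:

	struct rte_flow_item_vxlan vxlan_spec = {
		.vni = { 0x00, 0x00, 0x2a },		/* example VNI 42 */
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* outer L2 */
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },	/* outer L3 (LC) */
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },	/* LD, nothing matched */
		{ .type = RTE_FLOW_ITEM_TYPE_VXLAN,	/* lflags = NPC_F_UDP_VXLAN */
		  .spec = &vxlan_spec,
		  .mask = &rte_flow_item_vxlan_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};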
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_flow_parse.c | 463 ++++++++++++++++++++++++
3 files changed, 465 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_flow_parse.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 7773643af..f38901b89 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -39,6 +39,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_lookup.c \
otx2_ethdev.c \
otx2_flow_ctrl.c \
+ otx2_flow_parse.c \
otx2_flow_utils.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index cd168c32f..cbab77f7b 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -12,6 +12,7 @@ sources = files(
'otx2_lookup.c',
'otx2_ethdev.c',
'otx2_flow_ctrl.c',
+ 'otx2_flow_parse.c',
'otx2_flow_utils.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
new file mode 100644
index 000000000..2d0fa439a
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -0,0 +1,463 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+const struct rte_flow_item *
+otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern)
+{
+ while ((pattern->type == RTE_FLOW_ITEM_TYPE_VOID) ||
+ (pattern->type == RTE_FLOW_ITEM_TYPE_ANY))
+ pattern++;
+
+ return pattern;
+}
+
+int
+otx2_flow_parse_lh(struct otx2_parse_state *pst __rte_unused)
+{
+ return 0;
+}
+
+/*
+ * Tunnel+ESP, Tunnel+ICMP4/6, Tunnel+TCP, Tunnel+UDP,
+ * Tunnel+SCTP
+ */
+int
+otx2_flow_parse_lg(struct otx2_parse_state *pst)
+{
+ struct otx2_flow_item_info info;
+ char hw_mask[64];
+ int lid, lt;
+ int rc;
+
+ if (!pst->tunnel)
+ return 0;
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ lid = NPC_LID_LG;
+
+ switch (pst->pattern->type) {
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ lt = NPC_LT_LG_TU_UDP;
+ info.def_mask = &rte_flow_item_udp_mask;
+ info.len = sizeof(struct rte_flow_item_udp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ lt = NPC_LT_LG_TU_TCP;
+ info.def_mask = &rte_flow_item_tcp_mask;
+ info.len = sizeof(struct rte_flow_item_tcp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ lt = NPC_LT_LG_TU_SCTP;
+ info.def_mask = &rte_flow_item_sctp_mask;
+ info.len = sizeof(struct rte_flow_item_sctp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_ESP:
+ lt = NPC_LT_LG_TU_ESP;
+ info.def_mask = &rte_flow_item_esp_mask;
+ info.len = sizeof(struct rte_flow_item_esp);
+ break;
+ default:
+ return 0;
+ }
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+/* Tunnel+IPv4, Tunnel+IPv6 */
+int
+otx2_flow_parse_lf(struct otx2_parse_state *pst)
+{
+ struct otx2_flow_item_info info;
+ char hw_mask[64];
+ int lid, lt;
+ int rc;
+
+ if (!pst->tunnel)
+ return 0;
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ lid = NPC_LID_LF;
+
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+ lt = NPC_LT_LF_TU_IP;
+ info.def_mask = &rte_flow_item_ipv4_mask;
+ info.len = sizeof(struct rte_flow_item_ipv4);
+ } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+ lt = NPC_LT_LF_TU_IP6;
+ info.def_mask = &rte_flow_item_ipv6_mask;
+ info.len = sizeof(struct rte_flow_item_ipv6);
+ } else {
+ /* There is no tunneled IP header */
+ return 0;
+ }
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+/* Tunnel+Ether */
+int
+otx2_flow_parse_le(struct otx2_parse_state *pst)
+{
+ const struct rte_flow_item *pattern, *last_pattern;
+ struct rte_flow_item_eth hw_mask;
+ struct otx2_flow_item_info info;
+ int lid, lt, lflags;
+ int nr_vlans = 0;
+ int rc;
+
+ /* We hit this layer if there is a tunneling protocol */
+ if (!pst->tunnel)
+ return 0;
+
+ if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
+ return 0;
+
+ lid = NPC_LID_LE;
+ lt = NPC_LT_LE_TU_ETHER;
+ lflags = 0;
+
+ info.def_mask = &rte_flow_item_vlan_mask;
+ /* No match support for vlan tags */
+ info.hw_mask = NULL;
+ info.len = sizeof(struct rte_flow_item_vlan);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ /* Look ahead and find out any VLAN tags. These can be
+ * detected but no data matching is available.
+ */
+ last_pattern = pst->pattern;
+ pattern = pst->pattern + 1;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+ nr_vlans++;
+ rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+ last_pattern = pattern;
+ pattern++;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ }
+ otx2_npc_dbg("Nr_vlans = %d", nr_vlans);
+ switch (nr_vlans) {
+ case 0:
+ break;
+ case 1:
+ lflags = NPC_F_TU_ETHER_CTAG;
+ break;
+ case 2:
+ lflags = NPC_F_TU_ETHER_STAG_CTAG;
+ break;
+ default:
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ last_pattern,
+ "more than 2 vlans with tunneled Ethernet "
+ "not supported");
+ return -rte_errno;
+ }
+
+ info.def_mask = &rte_flow_item_eth_mask;
+ info.hw_mask = &hw_mask;
+ info.len = sizeof(struct rte_flow_item_eth);
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ pst->pattern = last_pattern;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+static int
+otx2_flow_parse_ld_udp_tunnel(struct otx2_parse_state *pst)
+{
+ /*
+ * We are positioned at UDP. Scan ahead and look for
+ * UDP encapsulated tunnel protocols. If available,
+ * parse them. In that case handle this:
+ * - RTE spec assumes we point to tunnel header.
+ * - NPC parser provides offset from UDP header.
+ */
+
+ /*
+ * Note: Add support to GENEVE, VXLAN_GPE when we
+ * upgrade DPDK
+ *
+ * Note: Better to split flags into two nibbles:
+ * - Higher nibble can have flags
+ * - Lower nibble to further enumerate protocols
+ * and have flags based extraction
+ */
+ const struct rte_flow_item *pattern = pst->pattern + 1;
+ struct otx2_flow_item_info info;
+ int lid, lt, lflags;
+ char hw_mask[64];
+ int rc;
+
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_mask = NULL;
+ info.def_mask = NULL;
+ info.len = 0;
+ lid = NPC_LID_LD;
+ lt = NPC_LT_LD_UDP;
+ lflags = 0;
+
+ /* Ensure we are not matching anything in UDP */
+ rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
+ if (rc)
+ return rc;
+
+ info.hw_mask = &hw_mask;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ otx2_npc_dbg("Pattern->type = %d", pattern->type);
+ switch (pattern->type) {
+ case RTE_FLOW_ITEM_TYPE_VXLAN:
+ lflags = NPC_F_UDP_VXLAN;
+ info.def_mask = &rte_flow_item_vxlan_mask;
+ info.len = sizeof(struct rte_flow_item_vxlan);
+ lt = NPC_LT_LD_UDP_VXLAN;
+ break;
+ case RTE_FLOW_ITEM_TYPE_GTPC:
+ lflags = NPC_F_UDP_GTP_GTPC;
+ info.def_mask = &rte_flow_item_gtp_mask;
+ info.len = sizeof(struct rte_flow_item_gtp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_GTPU:
+ lflags = NPC_F_UDP_GTP_GTPU_G_PDU;
+ info.def_mask = &rte_flow_item_gtp_mask;
+ info.len = sizeof(struct rte_flow_item_gtp);
+ break;
+ default:
+ return 0;
+ }
+
+ /* Now pst->pattern must point to tunnel header */
+ pst->pattern = pattern;
+ pst->tunnel = 1;
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ /* Get past UDP header */
+ rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+static int
+flow_parse_mpls_label_stack(struct otx2_parse_state *pst, int *flag)
+{
+ int nr_labels = 0;
+ const struct rte_flow_item *pattern = pst->pattern;
+ struct otx2_flow_item_info info;
+ int rc;
+ uint8_t flag_list[] = {0, NPC_F_MPLS_2_LABELS,
+ NPC_F_MPLS_3_LABELS, NPC_F_MPLS_4_LABELS};
+
+ /*
+ * pst->pattern points to first MPLS label. We only check
+ * that subsequent labels do not have anything to match.
+ */
+ info.def_mask = &rte_flow_item_mpls_mask;
+ info.hw_mask = NULL;
+ info.len = sizeof(struct rte_flow_item_mpls);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ while (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) {
+ nr_labels++;
+
+ /* Basic validation of 2nd/3rd/4th mpls item */
+ if (nr_labels > 1) {
+ rc = otx2_flow_parse_item_basic(pattern, &info,
+ pst->error);
+ if (rc != 0)
+ return rc;
+ }
+ pst->last_pattern = pattern;
+ pattern++;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ }
+
+ if (nr_labels > 4) {
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ pst->last_pattern,
+ "more than 4 mpls labels not supported");
+ return -rte_errno;
+ }
+
+ *flag = flag_list[nr_labels - 1];
+ return 0;
+}
+
+static int
+otx2_flow_parse_lc_ld_mpls(struct otx2_parse_state *pst, int lid)
+{
+ /* Find number of MPLS labels */
+ struct rte_flow_item_mpls hw_mask;
+ struct otx2_flow_item_info info;
+ int lt, lflags;
+ int rc;
+
+ lflags = 0;
+
+ if (lid == NPC_LID_LC)
+ lt = NPC_LT_LC_MPLS;
+ else
+ lt = NPC_LT_LD_TU_MPLS;
+
+ /* Prepare for parsing the first item */
+ info.def_mask = &rte_flow_item_mpls_mask;
+ info.hw_mask = &hw_mask;
+ info.len = sizeof(struct rte_flow_item_mpls);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ /*
+ * Parse for more labels.
+ * This sets lflags and pst->last_pattern correctly.
+ */
+ rc = flow_parse_mpls_label_stack(pst, &lflags);
+ if (rc != 0)
+ return rc;
+
+ pst->tunnel = 1;
+ pst->pattern = pst->last_pattern;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+/*
+ * ICMP, ICMP6, UDP, TCP, SCTP, VXLAN, GRE, NVGRE,
+ * GTP, GTPC, GTPU, ESP
+ *
+ * Note: UDP tunnel protocols are identified by flags.
+ * LPTR for these protocol still points to UDP
+ * header. Need flag based extraction to support
+ * this.
+ */
+int
+otx2_flow_parse_ld(struct otx2_parse_state *pst)
+{
+ char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ struct otx2_flow_item_info info;
+ int lid, lt, lflags;
+ int rc;
+
+ if (pst->tunnel) {
+ /* We have already parsed MPLS or IPv4/v6 followed
+ * by MPLS or IPv4/v6. Subsequent TCP/UDP etc
+ * would be parsed as tunneled versions. Skip
+ * this layer, except for tunneled MPLS. If LC is
+ * MPLS, we have anyway skipped all stacked MPLS
+ * labels.
+ */
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
+ return otx2_flow_parse_lc_ld_mpls(pst, NPC_LID_LD);
+ return 0;
+ }
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.def_mask = NULL;
+ info.len = 0;
+
+ lid = NPC_LID_LD;
+ lflags = 0;
+
+ otx2_npc_dbg("Pst->pattern->type = %d", pst->pattern->type);
+ switch (pst->pattern->type) {
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ if (pst->lt[NPC_LID_LC] == NPC_LT_LC_IP6)
+ lt = NPC_LT_LD_ICMP6;
+ else
+ lt = NPC_LT_LD_ICMP;
+ info.def_mask = &rte_flow_item_icmp_mask;
+ info.len = sizeof(struct rte_flow_item_icmp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ /* Check if a tunnel follows. If yes, we do not
+ * match anything in UDP spec but process the
+ * tunnel spec.
+ */
+ rc = otx2_flow_parse_ld_udp_tunnel(pst);
+ if (rc != 0)
+ return rc;
+
+ /* If tunnel was present and processed, we are done. */
+ if (pst->tunnel)
+ return 0;
+
+ /* This is UDP without tunnel */
+ lt = NPC_LT_LD_UDP;
+ info.def_mask = &rte_flow_item_udp_mask;
+ info.len = sizeof(struct rte_flow_item_udp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ lt = NPC_LT_LD_TCP;
+ info.def_mask = &rte_flow_item_tcp_mask;
+ info.len = sizeof(struct rte_flow_item_tcp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ lt = NPC_LT_LD_SCTP;
+ info.def_mask = &rte_flow_item_sctp_mask;
+ info.len = sizeof(struct rte_flow_item_sctp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_ESP:
+ lt = NPC_LT_LD_ESP;
+ info.def_mask = &rte_flow_item_esp_mask;
+ info.len = sizeof(struct rte_flow_item_esp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_GRE:
+ lt = NPC_LT_LD_GRE;
+ info.def_mask = &rte_flow_item_gre_mask;
+ info.len = sizeof(struct rte_flow_item_gre);
+ break;
+ case RTE_FLOW_ITEM_TYPE_NVGRE:
+ lt = NPC_LT_LD_GRE;
+ lflags = NPC_F_GRE_NVGRE;
+ info.def_mask = &rte_flow_item_nvgre_mask;
+ info.len = sizeof(struct rte_flow_item_nvgre);
+ /* Further IP/Ethernet are parsed as tunneled */
+ pst->tunnel = 1;
+ break;
+ default:
+ return 0;
+ }
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 38/58] net/octeontx2: adding flow parsing for inner layers
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (36 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 37/58] net/octeontx2: add flow parsing for outer layers jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 39/58] net/octeontx2: add flow actions support jerinj
` (21 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: ferruh.yigit
From: Kiran Kumar K <kirankumark@marvell.com>
Adding functionality to parse the inner layers, from la to lc.
These will be used to parse the inner L2, L3 and L4 layer types.
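As an illustration (the addresses are arbitrary assumptions, not from
this patch), a pattern that walks all three stages: the Ethernet item
lands in LA, the VLAN tag in LB (as CTAG) and the IPv4 item in LC:

	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.dst_addr = RTE_BE32(0xc0a80101),	/* 192.168.1.1 */
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr.dst_addr = RTE_BE32(0xffffffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* LA: NPC_LT_LA_ETHER */
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN },	/* LB: NPC_LT_LB_CTAG */
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,	/* LC: NPC_LT_LC_IP */
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};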
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
drivers/net/octeontx2/otx2_flow_parse.c | 202 ++++++++++++++++++++++++
1 file changed, 202 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 2d0fa439a..1351dff4c 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -461,3 +461,205 @@ otx2_flow_parse_ld(struct otx2_parse_state *pst)
return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
}
+
+static inline void
+flow_check_lc_ip_tunnel(struct otx2_parse_state *pst)
+{
+ const struct rte_flow_item *pattern = pst->pattern + 1;
+
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ if (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS ||
+ pattern->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+ pattern->type == RTE_FLOW_ITEM_TYPE_IPV6)
+ pst->tunnel = 1;
+}
+
+/* Outer IPv4, Outer IPv6, MPLS, ARP */
+int
+otx2_flow_parse_lc(struct otx2_parse_state *pst)
+{
+ uint8_t hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ struct otx2_flow_item_info info;
+ int lid, lt;
+ int rc;
+
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
+ return otx2_flow_parse_lc_ld_mpls(pst, NPC_LID_LC);
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ lid = NPC_LID_LC;
+
+ switch (pst->pattern->type) {
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ lt = NPC_LT_LC_IP;
+ info.def_mask = &rte_flow_item_ipv4_mask;
+ info.len = sizeof(struct rte_flow_item_ipv4);
+ break;
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ lid = NPC_LID_LC;
+ lt = NPC_LT_LC_IP6;
+ info.def_mask = &rte_flow_item_ipv6_mask;
+ info.len = sizeof(struct rte_flow_item_ipv6);
+ break;
+ case RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4:
+ lt = NPC_LT_LC_ARP;
+ info.def_mask = &rte_flow_item_arp_eth_ipv4_mask;
+ info.len = sizeof(struct rte_flow_item_arp_eth_ipv4);
+ break;
+ default:
+ /* No match at this layer */
+ return 0;
+ }
+
+ /* Identify if IP tunnels MPLS or IPv4/v6 */
+ flow_check_lc_ip_tunnel(pst);
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+/* VLAN, ETAG */
+int
+otx2_flow_parse_lb(struct otx2_parse_state *pst)
+{
+ const struct rte_flow_item *pattern = pst->pattern;
+ const struct rte_flow_item *last_pattern;
+ char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ struct otx2_flow_item_info info;
+ int lid, lt, lflags;
+ int nr_vlans = 0;
+ int rc;
+
+ info.spec = NULL;
+ info.mask = NULL;
+
+ lid = NPC_LID_LB;
+ lflags = 0;
+ last_pattern = pattern;
+
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+ /* An RTE vlan item is either 802.1q or 802.1ad; this
+ * maps to either CTAG or STAG. We decide based on the
+ * number of VLANs present. Matching is supported on the
+ * first tag only.
+ */
+ info.def_mask = &rte_flow_item_vlan_mask;
+ info.hw_mask = NULL;
+ info.len = sizeof(struct rte_flow_item_vlan);
+
+ pattern = pst->pattern;
+ while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+ nr_vlans++;
+
+ /* Basic validation of 2nd/3rd vlan item */
+ if (nr_vlans > 1) {
+ otx2_npc_dbg("Vlans = %d", nr_vlans);
+ rc = otx2_flow_parse_item_basic(pattern, &info,
+ pst->error);
+ if (rc != 0)
+ return rc;
+ }
+ last_pattern = pattern;
+ pattern++;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ }
+
+ switch (nr_vlans) {
+ case 1:
+ lt = NPC_LT_LB_CTAG;
+ break;
+ case 2:
+ lt = NPC_LT_LB_STAG;
+ lflags = NPC_F_STAG_CTAG;
+ break;
+ case 3:
+ lt = NPC_LT_LB_STAG;
+ lflags = NPC_F_STAG_STAG_CTAG;
+ break;
+ default:
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ last_pattern,
+ "more than 3 vlans not supported");
+ return -rte_errno;
+ }
+ } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_E_TAG) {
+ /* We can support ETAG and detect a subsequent CTAG,
+ * but without any data matching support for the CTAG.
+ */
+ lt = NPC_LT_LB_ETAG;
+ lflags = 0;
+
+ last_pattern = pst->pattern;
+ pattern = otx2_flow_skip_void_and_any_items(pst->pattern + 1);
+ if (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+ info.def_mask = &rte_flow_item_vlan_mask;
+ /* set supported mask to NULL for vlan tag */
+ info.hw_mask = NULL;
+ info.len = sizeof(struct rte_flow_item_vlan);
+ rc = otx2_flow_parse_item_basic(pattern, &info,
+ pst->error);
+ if (rc != 0)
+ return rc;
+
+ lflags = NPC_F_ETAG_CTAG;
+ last_pattern = pattern;
+ }
+
+ info.def_mask = &rte_flow_item_e_tag_mask;
+ info.len = sizeof(struct rte_flow_item_e_tag);
+ } else {
+ return 0;
+ }
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ /* Point pattern to last item consumed */
+ pst->pattern = last_pattern;
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+int
+otx2_flow_parse_la(struct otx2_parse_state *pst)
+{
+ struct rte_flow_item_eth hw_mask;
+ struct otx2_flow_item_info info;
+ int lid, lt;
+ int rc;
+
+ /* Identify the pattern type into lid, lt */
+ if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
+ return 0;
+
+ lid = NPC_LID_LA;
+ lt = NPC_LT_LA_ETHER;
+
+ /* Prepare for parsing the item */
+ info.def_mask = &rte_flow_item_eth_mask;
+ info.hw_mask = &hw_mask;
+ info.len = sizeof(struct rte_flow_item_eth);
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ /* Basic validation of item parameters */
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc)
+ return rc;
+
+ /* Update pst if not validate only? clash check? */
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 39/58] net/octeontx2: add flow actions support
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (37 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 38/58] net/octeontx2: adding flow parsing for inner layers jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 40/58] net/octeontx2: add flow operations jerinj
` (20 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Adding support to parse flow actions like drop, count, mark, rss, queue.
On the egress side, only the drop and count actions are supported.
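A sketch of an ingress action list this parser accepts (the values are
illustrative assumptions): MARK and COUNT are non-terminating, QUEUE is
the single terminating action, and the mark id must stay below 0xfffe:

	struct rte_flow_action_mark mark = { .id = 0xbeef };
	struct rte_flow_action_queue queue = { .index = 2 };
	struct rte_flow_action_count count = { .shared = 0, .id = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};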
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow_parse.c | 276 ++++++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 1 +
2 files changed, 277 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 1351dff4c..cf13813d8 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -663,3 +663,279 @@ otx2_flow_parse_la(struct otx2_parse_state *pst)
/* Update pst if not validate only? clash check? */
return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
}
+
+static int
+parse_rss_action(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action *act,
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct otx2_rss_info *rss_info = &hw->rss_info;
+ const struct rte_flow_action_rss *rss;
+ uint32_t i;
+
+ rss = (const struct rte_flow_action_rss *)act->conf;
+
+ /* Not supported */
+ if (attr->egress) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+ attr, "No support of RSS in egress");
+ }
+
+ if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "multi-queue mode is disabled");
+
+ /* Parse RSS related parameters from configuration */
+ if (!rss || !rss->queue_num)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "no valid queues");
+
+ if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "non-default RSS hash functions"
+ " are not supported");
+
+ if (rss->key_len && rss->key_len > RTE_DIM(rss_info->key))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "RSS hash key too large");
+
+ if (rss->queue_num > rss_info->rss_size)
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "too many queues for RSS context");
+
+ for (i = 0; i < rss->queue_num; i++) {
+ if (rss->queue[i] >= dev->data->nb_rx_queues)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act,
+ "queue id > max number"
+ " of queues");
+ }
+
+ return 0;
+}
+
+int
+otx2_flow_parse_actions(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct rte_flow *flow)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ const struct rte_flow_action_count *act_count;
+ const struct rte_flow_action_mark *act_mark;
+ const struct rte_flow_action_queue *act_q;
+ const char *errmsg = NULL;
+ int sel_act, req_act = 0;
+ uint16_t pf_func;
+ int errcode = 0;
+ int mark = 0;
+ int rq = 0;
+
+ /* Initialize actions */
+ flow->ctr_id = NPC_COUNTER_NONE;
+
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ otx2_npc_dbg("Action type = %d", actions->type);
+
+ switch (actions->type) {
+ case RTE_FLOW_ACTION_TYPE_VOID:
+ break;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ act_mark =
+ (const struct rte_flow_action_mark *)actions->conf;
+
+ /* We have only 16 bits. Use highest val for flag */
+ if (act_mark->id > (OTX2_FLOW_FLAG_VAL - 2)) {
+ errmsg = "mark value must be < 0xfffe";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ mark = act_mark->id + 1;
+ req_act |= OTX2_FLOW_ACT_MARK;
+ rte_atomic32_inc(&npc->mark_actions);
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_FLAG:
+ mark = OTX2_FLOW_FLAG_VAL;
+ req_act |= OTX2_FLOW_ACT_FLAG;
+ rte_atomic32_inc(&npc->mark_actions);
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_COUNT:
+ act_count =
+ (const struct rte_flow_action_count *)
+ actions->conf;
+
+ if (act_count->shared == 1) {
+ errmsg = "Shared Counters not supported";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ /* Indicates, need a counter */
+ flow->ctr_id = 1;
+ req_act |= OTX2_FLOW_ACT_COUNT;
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_DROP:
+ req_act |= OTX2_FLOW_ACT_DROP;
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ /* Applicable only to ingress flow */
+ act_q = (const struct rte_flow_action_queue *)
+ actions->conf;
+ rq = act_q->index;
+ if (rq >= dev->data->nb_rx_queues) {
+ errmsg = "invalid queue index";
+ errcode = EINVAL;
+ goto err_exit;
+ }
+ req_act |= OTX2_FLOW_ACT_QUEUE;
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ errcode = parse_rss_action(dev, attr, actions, error);
+ if (errcode)
+ return -rte_errno;
+
+ req_act |= OTX2_FLOW_ACT_RSS;
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_SECURITY:
+ /* Assumes user has already configured security
+ * session for this flow. Associated conf is
+ * opaque. When RTE security is implemented for otx2,
+ * we need to verify that for specified security
+ * session:
+ * action_type ==
+ * RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL &&
+ * session_protocol ==
+ * RTE_SECURITY_PROTOCOL_IPSEC
+ *
+ * RSS is not supported with inline ipsec. Get the
+ * rq from associated conf, or make
+ * RTE_FLOW_ACTION_TYPE_QUEUE compulsory with this
+ * action.
+ * Currently, rq = 0 is assumed.
+ */
+ req_act |= OTX2_FLOW_ACT_SEC;
+ rq = 0;
+ break;
+ default:
+ errmsg = "Unsupported action specified";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ }
+
+ /* Check if actions specified are compatible */
+ if (attr->egress) {
+ /* Only DROP/COUNT is supported */
+ if (!(req_act & OTX2_FLOW_ACT_DROP)) {
+ errmsg = "DROP is required action for egress";
+ errcode = EINVAL;
+ goto err_exit;
+ } else if (req_act & ~(OTX2_FLOW_ACT_DROP |
+ OTX2_FLOW_ACT_COUNT)) {
+ errmsg = "Unsupported action specified";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ flow->npc_action = NIX_TX_ACTIONOP_DROP;
+ return 0;
+ }
+
+ /* We have already verified the attr, this is ingress.
+ * - Exactly one terminating action is supported
+ * - Exactly one of MARK or FLAG is supported
+ * - If terminating action is DROP, only count is valid.
+ */
+ sel_act = req_act & OTX2_FLOW_ACT_TERM;
+ if ((sel_act & (sel_act - 1)) != 0) {
+ errmsg = "Only one terminating action supported";
+ errcode = EINVAL;
+ goto err_exit;
+ }
+
+ if (req_act & OTX2_FLOW_ACT_DROP) {
+ sel_act = req_act & ~OTX2_FLOW_ACT_COUNT;
+ if ((sel_act & (sel_act - 1)) != 0) {
+ errmsg = "Only COUNT action is supported "
+ "with DROP ingress action";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ }
+
+ if ((req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK))
+ == (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
+ errmsg = "Only one of FLAG or MARK action is supported";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+
+ /* Set NIX_RX_ACTIONOP */
+ if (req_act & OTX2_FLOW_ACT_DROP) {
+ flow->npc_action = NIX_RX_ACTIONOP_DROP;
+ } else if (req_act & OTX2_FLOW_ACT_QUEUE) {
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+ flow->npc_action |= (uint64_t)rq << 20;
+ } else if (req_act & OTX2_FLOW_ACT_RSS) {
+ /* When the user adds an RSS rule, we first add the rule
+ * to the MCAM and update the action only once we have the
+ * FLOW_KEY_ALG index. Until the action is updated with the
+ * flow_key_alg index, set the action to drop.
+ */
+ if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ flow->npc_action = NIX_RX_ACTIONOP_DROP;
+ else
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+ } else if (req_act & OTX2_FLOW_ACT_SEC) {
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST_IPSEC;
+ flow->npc_action |= (uint64_t)rq << 20;
+ } else if (req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+ } else if (req_act & OTX2_FLOW_ACT_COUNT) {
+ /* Keep OTX2_FLOW_ACT_COUNT always at the end
+ * This is default action, when user specify only
+ * COUNT ACTION
+ */
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+ } else {
+ /* Should never reach here */
+ errmsg = "Invalid action specified";
+ errcode = EINVAL;
+ goto err_exit;
+ }
+
+ if (mark)
+ flow->npc_action |= (uint64_t)mark << 40;
+
+ if (rte_atomic32_read(&npc->mark_actions) == 1)
+ hw->rx_offload_flags |= NIX_RX_OFFLOAD_MARK_UPDATE_F;
+
+ /* Ideally AF must ensure that correct pf_func is set */
+ pf_func = otx2_pfvf_func(hw->pf, hw->vf);
+ flow->npc_action |= (uint64_t)pf_func << 4;
+
+ return 0;
+
+err_exit:
+ rte_flow_error_set(error, errcode,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
+ errmsg);
+ return -rte_errno;
+}
+
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 0c3627c12..b9c9ff3cc 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -14,6 +14,7 @@
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
+#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
#define NIX_TIMESYNC_RX_OFFSET 8
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 40/58] net/octeontx2: add flow operations
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (38 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 39/58] net/octeontx2: add flow actions support jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 41/58] net/octeontx2: add additional " jerinj
` (19 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Adding the initial flow ops like flow_create and flow_validate.
These will be used to allocate and write flow rules to the device
and to validate flow rules.
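A minimal sketch (not from this patch) of the intended call sequence
from an application; create_checked is a hypothetical helper name, and
the rule definition is assumed to come from the earlier examples:

	static struct rte_flow *
	create_checked(uint16_t port_id, const struct rte_flow_attr *attr,
		       const struct rte_flow_item pattern[],
		       const struct rte_flow_action actions[])
	{
		struct rte_flow_error err;

		/* Dry run: attr, action and pattern parsing only */
		if (rte_flow_validate(port_id, attr, pattern, actions, &err) != 0)
			return NULL;

		/* Same parse, then MCAM allocation/write and RSS programming */
		return rte_flow_create(port_id, attr, pattern, actions, &err);
	}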
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_flow.c | 430 ++++++++++++++++++++++++++++++
3 files changed, 432 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_flow.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index f38901b89..d651c8c50 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -34,6 +34,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_rss.c \
otx2_mac.c \
otx2_ptp.c \
+ otx2_flow.c \
otx2_link.c \
otx2_stats.c \
otx2_lookup.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index cbab77f7b..a2c494bb4 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -7,6 +7,7 @@ sources = files(
'otx2_rss.c',
'otx2_mac.c',
'otx2_ptp.c',
+ 'otx2_flow.c',
'otx2_link.c',
'otx2_stats.c',
'otx2_lookup.c',
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
new file mode 100644
index 000000000..d1e1c4411
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -0,0 +1,430 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+static int
+flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox,
+ struct otx2_npc_flow_info *flow_info)
+{
+ /* This is non-LDATA part in search key */
+ uint64_t key_data[2] = {0ULL, 0ULL};
+ uint64_t key_mask[2] = {0ULL, 0ULL};
+ int intf = pst->flow->nix_intf;
+ uint64_t lt, flags;
+ int off, idx;
+ uint64_t val;
+ int key_len;
+ uint8_t lid;
+
+ for (lid = 0; lid < NPC_MAX_LID; lid++) {
+ /* Offset in key */
+ off = NPC_PARSE_KEX_S_LID_OFFSET(lid);
+ lt = pst->lt[lid] & 0xf;
+ flags = pst->flags[lid] & 0xff;
+ /* NPC_LAYER_KEX_S */
+ val = (lt << 8) | flags;
+ key_data[off / UINT64_BIT] |= (val << (off & 0x3f));
+ val = (flags == 0 ? 0 : 0xffULL);
+ if (lt)
+ val |= 0xf00ULL;
+ key_mask[off / UINT64_BIT] |= (val << (off & 0x3f));
+ }
+
+ otx2_npc_dbg("Npc prog key data0: 0x%" PRIx64 ", data1: 0x%" PRIx64,
+ key_data[0], key_data[1]);
+ /*
+ * Channel, errlev, errcode, l2_l3_bc_mc:
+ * AF must set the channel; for the time being it can be
+ * hard-coded. The rest of the fields are zero for now.
+ */
+
+ /*
+ * Compress key_data and key_mask, skipping any disabled
+ * nibbles.
+ */
+ otx2_flow_keyx_compress(key_data, pst->npc->keyx_supp_nmask[intf]);
+ otx2_flow_keyx_compress(key_mask, pst->npc->keyx_supp_nmask[intf]);
+
+ /* Copy this into mcam string */
+ key_len = (pst->npc->keyx_len[intf] + 7) / 8;
+ otx2_npc_dbg("Key_len = %d", key_len);
+ memcpy(pst->flow->mcam_data, key_data, key_len);
+ memcpy(pst->flow->mcam_mask, key_mask, key_len);
+
+ otx2_npc_dbg("Final flow data");
+ for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
+ otx2_npc_dbg("data[%d]: 0x%" PRIx64 ", mask[%d]: 0x%" PRIx64,
+ idx, pst->flow->mcam_data[idx],
+ idx, pst->flow->mcam_mask[idx]);
+ }
+
+ /*
+ * Now we have the mcam data and mask formatted as
+ * [Key_len/4 nibbles][0 or 1 nibble hole][data];
+ * the hole is present if key_len is an odd number of nibbles.
+ * The mcam data must be split into 64-bit + 48-bit segments
+ * for each bank, W0 and W1.
+ */
+
+ return otx2_flow_mcam_alloc_and_write(pst->flow, mbox, pst, flow_info);
+}
+
+static int
+flow_parse_attr(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error,
+ struct rte_flow *flow)
+{
+ struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+ const char *errmsg = NULL;
+
+ if (attr == NULL)
+ errmsg = "Attribute can't be empty";
+ else if (attr->group)
+ errmsg = "Groups are not supported";
+ else if (attr->priority >= dev->npc_flow.flow_max_priority)
+ errmsg = "Priority should be with in specified range";
+ else if ((!attr->egress && !attr->ingress) ||
+ (attr->egress && attr->ingress))
+ errmsg = "Exactly one of ingress or egress must be set";
+
+ if (errmsg != NULL) {
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR,
+ attr, errmsg);
+ return -ENOTSUP;
+ }
+
+ if (attr->ingress)
+ flow->nix_intf = OTX2_INTF_RX;
+ else
+ flow->nix_intf = OTX2_INTF_TX;
+
+ flow->priority = attr->priority;
+ return 0;
+}
+
+static inline int
+flow_get_free_rss_grp(struct rte_bitmap *bmap,
+ uint32_t size, uint32_t *pos)
+{
+ for (*pos = 0; *pos < size; ++*pos) {
+ if (!rte_bitmap_get(bmap, *pos))
+ break;
+ }
+
+ return *pos < size ? 0 : -1;
+}
+
+static int
+flow_configure_rss_action(struct otx2_eth_dev *dev,
+ const struct rte_flow_action_rss *rss,
+ uint8_t *alg_idx, uint32_t *rss_grp,
+ int mcam_index)
+{
+ struct otx2_npc_flow_info *flow_info = &dev->npc_flow;
+ uint16_t reta[NIX_RSS_RETA_SIZE_MAX];
+ uint32_t flowkey_cfg, grp_aval, i;
+ uint16_t *ind_tbl = NULL;
+ uint8_t flowkey_algx;
+ int rc;
+
+ rc = flow_get_free_rss_grp(flow_info->rss_grp_entries,
+ flow_info->rss_grps, &grp_aval);
+ /* RSS group 0 is not usable for the flow RSS action */
+ if (rc < 0 || grp_aval == 0)
+ return -ENOSPC;
+
+ *rss_grp = grp_aval;
+
+ otx2_nix_rss_set_key(dev, (uint8_t *)(uintptr_t)rss->key,
+ rss->key_len);
+
+ /* If queue count passed in the rss action is less than
+ * HW configured reta size, replicate rss action reta
+ * across HW reta table.
+ */
+ if (dev->rss_info.rss_size > rss->queue_num) {
+ ind_tbl = reta;
+
+ for (i = 0; i < (dev->rss_info.rss_size / rss->queue_num); i++)
+ memcpy(reta + i * rss->queue_num, rss->queue,
+ sizeof(uint16_t) * rss->queue_num);
+
+ i = dev->rss_info.rss_size % rss->queue_num;
+ if (i)
+ memcpy(&reta[dev->rss_info.rss_size] - i,
+ rss->queue, i * sizeof(uint16_t));
+ } else {
+ ind_tbl = (uint16_t *)(uintptr_t)rss->queue;
+ }
+
+ rc = otx2_nix_rss_tbl_init(dev, *rss_grp, ind_tbl);
+ if (rc) {
+ otx2_err("Failed to init rss table rc = %d", rc);
+ return rc;
+ }
+
+ flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss->types, rss->level);
+
+ rc = otx2_rss_set_hf(dev, flowkey_cfg, &flowkey_algx,
+ *rss_grp, mcam_index);
+ if (rc) {
+ otx2_err("Failed to set rss hash function rc = %d", rc);
+ return rc;
+ }
+
+ *alg_idx = flowkey_algx;
+
+ rte_bitmap_set(flow_info->rss_grp_entries, *rss_grp);
+
+ return 0;
+}
+
+static int
+flow_program_rss_action(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_action actions[],
+ struct rte_flow *flow)
+{
+ struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+ const struct rte_flow_action_rss *rss;
+ uint32_t rss_grp;
+ uint8_t alg_idx;
+ int rc;
+
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ if (actions->type == RTE_FLOW_ACTION_TYPE_RSS) {
+ rss = (const struct rte_flow_action_rss *)actions->conf;
+
+ rc = flow_configure_rss_action(dev,
+ rss, &alg_idx, &rss_grp,
+ flow->mcam_id);
+ if (rc)
+ return rc;
+
+ flow->npc_action |=
+ ((uint64_t)(alg_idx & NIX_RSS_ACT_ALG_MASK) <<
+ NIX_RSS_ACT_ALG_OFFSET) |
+ ((uint64_t)(rss_grp & NIX_RSS_ACT_GRP_MASK) <<
+ NIX_RSS_ACT_GRP_OFFSET);
+ }
+ }
+ return 0;
+}
+
+static int
+flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst)
+{
+ otx2_npc_dbg("Meta Item");
+ return 0;
+}
+
+/*
+ * Parse function of each layer:
+ * - Consume one or more patterns that are relevant.
+ * - Update parse_state
+ * - Set parse_state.pattern = last item consumed
+ * - Set appropriate error code/message when returning error.
+ */
+typedef int (*flow_parse_stage_func_t)(struct otx2_parse_state *pst);
+
+static int
+flow_parse_pattern(struct rte_eth_dev *dev,
+ const struct rte_flow_item pattern[],
+ struct rte_flow_error *error,
+ struct rte_flow *flow,
+ struct otx2_parse_state *pst)
+{
+ flow_parse_stage_func_t parse_stage_funcs[] = {
+ flow_parse_meta_items,
+ otx2_flow_parse_la,
+ otx2_flow_parse_lb,
+ otx2_flow_parse_lc,
+ otx2_flow_parse_ld,
+ otx2_flow_parse_le,
+ otx2_flow_parse_lf,
+ otx2_flow_parse_lg,
+ otx2_flow_parse_lh,
+ };
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ uint8_t layer = 0;
+ int key_offset;
+ int rc;
+
+ if (pattern == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_NUM, NULL,
+ "pattern is NULL");
+ return -EINVAL;
+ }
+
+ memset(pst, 0, sizeof(*pst));
+ pst->npc = &hw->npc_flow;
+ pst->error = error;
+ pst->flow = flow;
+
+ /* Use integral byte offset */
+ key_offset = pst->npc->keyx_len[flow->nix_intf];
+ key_offset = (key_offset + 7) / 8;
+
+ /* Location where LDATA would begin */
+ pst->mcam_data = (uint8_t *)flow->mcam_data;
+ pst->mcam_mask = (uint8_t *)flow->mcam_mask;
+
+ while (pattern->type != RTE_FLOW_ITEM_TYPE_END &&
+ layer < RTE_DIM(parse_stage_funcs)) {
+ otx2_npc_dbg("Pattern type = %d", pattern->type);
+
+ /* Skip place-holders */
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+
+ pst->pattern = pattern;
+ otx2_npc_dbg("Is tunnel = %d, layer = %d", pst->tunnel, layer);
+ rc = parse_stage_funcs[layer](pst);
+ if (rc != 0)
+ return -rte_errno;
+
+ layer++;
+
+ /*
+ * Parse stage function sets pst->pattern to
+ * 1 past the last item it consumed.
+ */
+ pattern = pst->pattern;
+
+ if (pst->terminate)
+ break;
+ }
+
+ /* Skip trailing place-holders */
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+
+ /* Are there more items than what we can handle? */
+ if (pattern->type != RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, pattern,
+ "unsupported item in the sequence");
+ return -ENOTSUP;
+ }
+
+ return 0;
+}
+
+static int
+flow_parse_rule(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct rte_flow *flow,
+ struct otx2_parse_state *pst)
+{
+ int err;
+
+ /* Check attributes */
+ err = flow_parse_attr(dev, attr, error, flow);
+ if (err)
+ return err;
+
+ /* Check actions */
+ err = otx2_flow_parse_actions(dev, attr, actions, error, flow);
+ if (err)
+ return err;
+
+ /* Check pattern */
+ err = flow_parse_pattern(dev, pattern, error, flow, pst);
+ if (err)
+ return err;
+
+ /* Check for overlaps? */
+ return 0;
+}
+
+static int
+otx2_flow_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct otx2_parse_state parse_state;
+ struct rte_flow flow;
+
+ memset(&flow, 0, sizeof(flow));
+ return flow_parse_rule(dev, attr, pattern, actions, error, &flow,
+ &parse_state);
+}
+
+static struct rte_flow *
+otx2_flow_create(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct otx2_parse_state parse_state;
+ struct otx2_mbox *mbox = hw->mbox;
+ struct rte_flow *flow, *flow_iter;
+ struct otx2_flow_list *list;
+ int rc;
+
+ flow = rte_zmalloc("otx2_rte_flow", sizeof(*flow), 0);
+ if (flow == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Memory allocation failed");
+ return NULL;
+ }
+ memset(flow, 0, sizeof(*flow));
+
+ rc = flow_parse_rule(dev, attr, pattern, actions, error, flow,
+ &parse_state);
+ if (rc != 0)
+ goto err_exit;
+
+ rc = flow_program_npc(&parse_state, mbox, &hw->npc_flow);
+ if (rc != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to insert filter");
+ goto err_exit;
+ }
+
+ rc = flow_program_rss_action(dev, actions, flow);
+ if (rc != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to program rss action");
+ goto err_exit;
+ }
+
+ list = &hw->npc_flow.flow_list[flow->priority];
+ /* List in ascending order of mcam entries */
+ TAILQ_FOREACH(flow_iter, list, next) {
+ if (flow_iter->mcam_id > flow->mcam_id) {
+ TAILQ_INSERT_BEFORE(flow_iter, flow, next);
+ return flow;
+ }
+ }
+
+ TAILQ_INSERT_TAIL(list, flow, next);
+ return flow;
+
+err_exit:
+ rte_free(flow);
+ return NULL;
+}
+
+const struct rte_flow_ops otx2_flow_ops = {
+ .validate = otx2_flow_validate,
+ .create = otx2_flow_create,
+};
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 41/58] net/octeontx2: add additional flow operations
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (39 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 40/58] net/octeontx2: add flow operations jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 42/58] net/octeontx2: add flow init and fini jerinj
` (18 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add the remaining flow ops: flow_destroy, flow_flush, flow_query and
flow_isolate. These are used to delete and flush the flow rules
programmed into the device and to query the MCAM counter of a rule.
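For reference, a minimal application-side sketch of how these ops are
reached through the generic rte_flow API (port_id and the previously
created flow handle are assumptions for illustration):

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_flow.h>

    struct rte_flow_query_count query = { .reset = 1 };
    const struct rte_flow_action count_action = {
        .type = RTE_FLOW_ACTION_TYPE_COUNT,
    };
    struct rte_flow_error flow_err;

    /* Read (and clear) the hit counter of an existing rule */
    if (rte_flow_query(port_id, flow, &count_action, &query,
                       &flow_err) == 0 && query.hits_set)
        printf("hits: %" PRIu64 "\n", query.hits);

    /* Tear down one rule, then any remaining rules on the port */
    rte_flow_destroy(port_id, flow, &flow_err);
    rte_flow_flush(port_id, &flow_err);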
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow.c | 197 ++++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 3 +
2 files changed, 200 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index d1e1c4411..33fdafeb7 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -5,6 +5,39 @@
#include "otx2_ethdev.h"
#include "otx2_flow.h"
+static int
+flow_free_all_resources(struct otx2_eth_dev *hw)
+{
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ struct otx2_mbox *mbox = hw->mbox;
+ struct otx2_mcam_ents_info *info;
+ struct rte_bitmap *bmap;
+ struct rte_flow *flow;
+ int rc, idx;
+
+ /* Free all MCAM entries allocated */
+ rc = otx2_flow_mcam_free_all_entries(mbox);
+
+ /* Free any MCAM counters and delete flow list */
+ for (idx = 0; idx < npc->flow_max_priority; idx++) {
+ while ((flow = TAILQ_FIRST(&npc->flow_list[idx])) != NULL) {
+ if (flow->ctr_id != NPC_COUNTER_NONE)
+ rc |= otx2_flow_mcam_free_counter(mbox,
+ flow->ctr_id);
+
+ TAILQ_REMOVE(&npc->flow_list[idx], flow, next);
+ bmap = npc->live_entries[flow->priority];
+ rte_bitmap_clear(bmap, flow->mcam_id);
+ /* Free only after flow fields are no longer needed */
+ rte_free(flow);
+ }
+ info = &npc->flow_entry_info[idx];
+ info->free_ent = 0;
+ info->live_ent = 0;
+ }
+ return rc;
+}
+
static int
flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox,
struct otx2_npc_flow_info *flow_info)
@@ -216,6 +249,27 @@ flow_program_rss_action(struct rte_eth_dev *eth_dev,
return 0;
}
+static int
+flow_free_rss_action(struct rte_eth_dev *eth_dev,
+ struct rte_flow *flow)
+{
+ struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+ struct otx2_npc_flow_info *npc = &dev->npc_flow;
+ uint32_t rss_grp;
+
+ if (flow->npc_action & NIX_RX_ACTIONOP_RSS) {
+ rss_grp = (flow->npc_action >> NIX_RSS_ACT_GRP_OFFSET) &
+ NIX_RSS_ACT_GRP_MASK;
+ if (rss_grp == 0 || rss_grp >= npc->rss_grps)
+ return -EINVAL;
+
+ rte_bitmap_clear(npc->rss_grp_entries, rss_grp);
+ }
+
+ return 0;
+}
+
static int
flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst)
{
@@ -424,7 +478,150 @@ otx2_flow_create(struct rte_eth_dev *dev,
return NULL;
}
+static int
+otx2_flow_destroy(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ struct otx2_mbox *mbox = hw->mbox;
+ struct rte_bitmap *bmap;
+ uint16_t match_id;
+ int rc;
+
+ match_id = (flow->npc_action >> NIX_RX_ACT_MATCH_OFFSET) &
+ NIX_RX_ACT_MATCH_MASK;
+
+ if (match_id && match_id < OTX2_FLOW_ACTION_FLAG_DEFAULT) {
+ if (rte_atomic32_read(&npc->mark_actions) == 0)
+ return -EINVAL;
+
+ /* Clear mark offload flag if there are no more mark actions */
+ if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0)
+ hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F;
+ }
+
+ rc = flow_free_rss_action(dev, flow);
+ if (rc != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to free rss action");
+ }
+
+ rc = otx2_flow_mcam_free_entry(mbox, flow->mcam_id);
+ if (rc != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to destroy filter");
+ }
+
+ TAILQ_REMOVE(&npc->flow_list[flow->priority], flow, next);
+
+ bmap = npc->live_entries[flow->priority];
+ rte_bitmap_clear(bmap, flow->mcam_id);
+
+ rte_free(flow);
+ return 0;
+}
+
+static int
+otx2_flow_flush(struct rte_eth_dev *dev,
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ int rc;
+
+ rc = flow_free_all_resources(hw);
+ if (rc) {
+ otx2_err("Error when deleting NPC MCAM entries "
+ ", counters");
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to flush filter");
+ return -rte_errno;
+ }
+
+ return 0;
+}
+
+static int
+otx2_flow_isolate(struct rte_eth_dev *dev __rte_unused,
+ int enable __rte_unused,
+ struct rte_flow_error *error)
+{
+ /*
+ * If isolation were supported, the default MCAM entry
+ * for this port would need to be uninstalled here.
+ */
+
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Flow isolation not supported");
+
+ return -rte_errno;
+}
+
+static int
+otx2_flow_query(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ const struct rte_flow_action *action,
+ void *data,
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct rte_flow_query_count *query = data;
+ struct otx2_mbox *mbox = hw->mbox;
+ const char *errmsg = NULL;
+ int errcode = ENOTSUP;
+ int rc;
+
+ if (action->type != RTE_FLOW_ACTION_TYPE_COUNT) {
+ errmsg = "Only COUNT is supported in query";
+ goto err_exit;
+ }
+
+ if (flow->ctr_id == NPC_COUNTER_NONE) {
+ errmsg = "Counter is not available";
+ goto err_exit;
+ }
+
+ rc = otx2_flow_mcam_read_counter(mbox, flow->ctr_id, &query->hits);
+ if (rc != 0) {
+ errcode = EIO;
+ errmsg = "Error reading flow counter";
+ goto err_exit;
+ }
+ query->hits_set = 1;
+ query->bytes_set = 0;
+
+ if (query->reset) {
+ rc = otx2_flow_mcam_clear_counter(mbox, flow->ctr_id);
+ if (rc != 0) {
+ errcode = EIO;
+ errmsg = "Error clearing flow counter";
+ goto err_exit;
+ }
+ }
+
+ return 0;
+
+err_exit:
+ rte_flow_error_set(error, errcode,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ errmsg);
+ return -rte_errno;
+}
+
const struct rte_flow_ops otx2_flow_ops = {
.validate = otx2_flow_validate,
.create = otx2_flow_create,
+ .destroy = otx2_flow_destroy,
+ .flush = otx2_flow_flush,
+ .query = otx2_flow_query,
+ .isolate = otx2_flow_isolate,
};
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index b9c9ff3cc..687cf2b40 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -5,6 +5,9 @@
#ifndef __OTX2_RX_H__
#define __OTX2_RX_H__
+/* Default mark value used when none is provided. */
+#define OTX2_FLOW_ACTION_FLAG_DEFAULT 0xffff
+
#define PTYPE_WIDTH 12
#define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH)
#define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH)
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 42/58] net/octeontx2: add flow init and fini
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (40 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 41/58] net/octeontx2: add additional " jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 43/58] net/octeontx2: connect flow API to ethdev ops jerinj
` (17 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add the flow init and fini functionality. These APIs will be called
from device init and uninit and will initialize and de-initialize
the flow related memory.
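The MCAM entry bookkeeping below is built on the rte_bitmap library; a
minimal sketch of the sizing and initialization pattern used here (the
entry count is an assumed value, error checks omitted):

    #include <rte_bitmap.h>
    #include <rte_malloc.h>

    uint32_t n_entries = 1024;  /* assumed MCAM depth */
    uint32_t bmap_sz = rte_bitmap_get_memory_footprint(n_entries);
    uint8_t *mem = rte_zmalloc(NULL, bmap_sz, RTE_CACHE_LINE_SIZE);
    struct rte_bitmap *bmap = rte_bitmap_init(n_entries, mem, bmap_sz);

    rte_bitmap_set(bmap, 0);    /* mark entry 0 as in use */
    rte_bitmap_clear(bmap, 0);  /* and release it again */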
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow.c | 315 ++++++++++++++++++++++++++++++
1 file changed, 315 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 33fdafeb7..1fbe6b86e 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -625,3 +625,318 @@ const struct rte_flow_ops otx2_flow_ops = {
.query = otx2_flow_query,
.isolate = otx2_flow_isolate,
};
+
+/* Each set bit in supp_mask enables one 4-bit nibble of key data,
+ * so the supported key length is 4 * popcount(supp_mask) bits.
+ */
+static int
+flow_supp_key_len(uint32_t supp_mask)
+{
+ int nib_count = 0;
+
+ while (supp_mask) {
+ nib_count++;
+ supp_mask &= (supp_mask - 1);
+ }
+
+ return nib_count * 4;
+}
+
+/* Refer HRM register:
+ * NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG
+ * and
+ * NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG
+ */
+#define BYTESM1_SHIFT 16
+#define HDR_OFF_SHIFT 8
+static void
+flow_update_kex_info(struct npc_xtract_info *xtract_info,
+ uint64_t val)
+{
+ xtract_info->len = ((val >> BYTESM1_SHIFT) & 0xf) + 1;
+ xtract_info->hdr_off = (val >> HDR_OFF_SHIFT) & 0xff;
+ xtract_info->key_off = val & 0x3f;
+ xtract_info->enable = ((val >> 7) & 0x1);
+}
+
+static void
+flow_process_mkex_cfg(struct otx2_npc_flow_info *npc,
+ struct npc_get_kex_cfg_rsp *kex_rsp)
+{
+ volatile uint64_t (*q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT]
+ [NPC_MAX_LD];
+ struct npc_xtract_info *x_info = NULL;
+ int lid, lt, ld, fl, ix;
+ otx2_dxcfg_t *p;
+ uint64_t keyw;
+ uint64_t val;
+
+ npc->keyx_supp_nmask[NPC_MCAM_RX] =
+ kex_rsp->rx_keyx_cfg & 0x7fffffffULL;
+ npc->keyx_supp_nmask[NPC_MCAM_TX] =
+ kex_rsp->tx_keyx_cfg & 0x7fffffffULL;
+ npc->keyx_len[NPC_MCAM_RX] =
+ flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]);
+ npc->keyx_len[NPC_MCAM_TX] =
+ flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]);
+
+ keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL;
+ npc->keyw[NPC_MCAM_RX] = keyw;
+ keyw = (kex_rsp->tx_keyx_cfg >> 32) & 0x7ULL;
+ npc->keyw[NPC_MCAM_TX] = keyw;
+
+ /* Update KEX_LD_FLAG */
+ for (ix = 0; ix < NPC_MAX_INTF; ix++) {
+ for (ld = 0; ld < NPC_MAX_LD; ld++) {
+ for (fl = 0; fl < NPC_MAX_LFL; fl++) {
+ x_info =
+ &npc->prx_fxcfg[ix][ld][fl].xtract[0];
+ val = kex_rsp->intf_ld_flags[ix][ld][fl];
+ flow_update_kex_info(x_info, val);
+ }
+ }
+ }
+
+ /* Update LID, LT and LDATA cfg */
+ p = &npc->prx_dxcfg;
+ q = (volatile uint64_t (*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])
+ (&kex_rsp->intf_lid_lt_ld);
+ for (ix = 0; ix < NPC_MAX_INTF; ix++) {
+ for (lid = 0; lid < NPC_MAX_LID; lid++) {
+ for (lt = 0; lt < NPC_MAX_LT; lt++) {
+ for (ld = 0; ld < NPC_MAX_LD; ld++) {
+ x_info = &(*p)[ix][lid][lt].xtract[ld];
+ val = (*q)[ix][lid][lt][ld];
+ flow_update_kex_info(x_info, val);
+ }
+ }
+ }
+ }
+ /* Update LDATA Flags cfg */
+ npc->prx_lfcfg[0].i = kex_rsp->kex_ld_flags[0];
+ npc->prx_lfcfg[1].i = kex_rsp->kex_ld_flags[1];
+}
+
+static struct otx2_idev_kex_cfg *
+flow_intra_dev_kex_cfg(void)
+{
+ static const char name[] = "octeontx2_intra_device_kex_conf";
+ struct otx2_idev_kex_cfg *idev;
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(name);
+ if (mz)
+ return mz->addr;
+
+ /* Request for the first time */
+ mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_kex_cfg),
+ SOCKET_ID_ANY, 0, OTX2_ALIGN);
+ if (mz) {
+ idev = mz->addr;
+ rte_atomic16_set(&idev->kex_refcnt, 0);
+ return idev;
+ }
+ return NULL;
+}
+
+static int
+flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
+{
+ struct otx2_npc_flow_info *npc = &dev->npc_flow;
+ struct npc_get_kex_cfg_rsp *kex_rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct otx2_idev_kex_cfg *idev;
+ int rc = 0;
+
+ idev = flow_intra_dev_kex_cfg();
+ if (!idev)
+ return -ENOMEM;
+
+ /* Has kex_cfg already been read by another driver? */
+ if (rte_atomic16_add_return(&idev->kex_refcnt, 1) == 1) {
+ /* Call mailbox to get key & data size */
+ (void)otx2_mbox_alloc_msg_npc_get_kex_cfg(mbox);
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&kex_rsp);
+ if (rc) {
+ otx2_err("Failed to fetch NPC keyx config");
+ goto done;
+ }
+ memcpy(&idev->kex_cfg, kex_rsp,
+ sizeof(struct npc_get_kex_cfg_rsp));
+ }
+
+ flow_process_mkex_cfg(npc, &idev->kex_cfg);
+
+done:
+ return rc;
+}
+
+int
+otx2_flow_init(struct otx2_eth_dev *hw)
+{
+ uint8_t *mem = NULL, *nix_mem = NULL, *npc_mem = NULL;
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ uint32_t bmap_sz;
+ int rc = 0, idx;
+
+ rc = flow_fetch_kex_cfg(hw);
+ if (rc) {
+ otx2_err("Failed to fetch NPC keyx config from idev");
+ return rc;
+ }
+
+ rte_atomic32_init(&npc->mark_actions);
+
+ npc->mcam_entries = NPC_MCAM_TOT_ENTRIES >> npc->keyw[NPC_MCAM_RX];
+ /* Free, free_rev, live and live_rev entries */
+ bmap_sz = rte_bitmap_get_memory_footprint(npc->mcam_entries);
+ mem = rte_zmalloc(NULL, 4 * bmap_sz * npc->flow_max_priority,
+ RTE_CACHE_LINE_SIZE);
+ if (mem == NULL) {
+ otx2_err("Bmap alloc failed");
+ rc = -ENOMEM;
+ return rc;
+ }
+
+ npc->flow_entry_info = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct otx2_mcam_ents_info),
+ 0);
+ if (npc->flow_entry_info == NULL) {
+ otx2_err("flow_entry_info alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->free_entries = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct rte_bitmap),
+ 0);
+ if (npc->free_entries == NULL) {
+ otx2_err("free_entries alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->free_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct rte_bitmap),
+ 0);
+ if (npc->free_entries_rev == NULL) {
+ otx2_err("free_entries_rev alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->live_entries = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct rte_bitmap),
+ 0);
+ if (npc->live_entries == NULL) {
+ otx2_err("live_entries alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->live_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct rte_bitmap),
+ 0);
+ if (npc->live_entries_rev == NULL) {
+ otx2_err("live_entries_rev alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->flow_list = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct otx2_flow_list),
+ 0);
+ if (npc->flow_list == NULL) {
+ otx2_err("flow_list alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc_mem = mem;
+ for (idx = 0; idx < npc->flow_max_priority; idx++) {
+ TAILQ_INIT(&npc->flow_list[idx]);
+
+ npc->free_entries[idx] =
+ rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
+ mem += bmap_sz;
+
+ npc->free_entries_rev[idx] =
+ rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
+ mem += bmap_sz;
+
+ npc->live_entries[idx] =
+ rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
+ mem += bmap_sz;
+
+ npc->live_entries_rev[idx] =
+ rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
+ mem += bmap_sz;
+
+ npc->flow_entry_info[idx].free_ent = 0;
+ npc->flow_entry_info[idx].live_ent = 0;
+ npc->flow_entry_info[idx].max_id = 0;
+ npc->flow_entry_info[idx].min_id = ~(0);
+ }
+
+ npc->rss_grps = NIX_RSS_GRPS;
+
+ bmap_sz = rte_bitmap_get_memory_footprint(npc->rss_grps);
+ nix_mem = rte_zmalloc(NULL, bmap_sz, RTE_CACHE_LINE_SIZE);
+ if (nix_mem == NULL) {
+ otx2_err("Bmap alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->rss_grp_entries = rte_bitmap_init(npc->rss_grps, nix_mem, bmap_sz);
+
+ /* Group 0 will be used for RSS; groups 1-7 will be
+ * used for the rte_flow RSS action
+ */
+ rte_bitmap_set(npc->rss_grp_entries, 0);
+
+ return 0;
+
+err:
+ if (npc->flow_list)
+ rte_free(npc->flow_list);
+ if (npc->live_entries_rev)
+ rte_free(npc->live_entries_rev);
+ if (npc->live_entries)
+ rte_free(npc->live_entries);
+ if (npc->free_entries_rev)
+ rte_free(npc->free_entries_rev);
+ if (npc->free_entries)
+ rte_free(npc->free_entries);
+ if (npc->flow_entry_info)
+ rte_free(npc->flow_entry_info);
+ if (npc_mem)
+ rte_free(npc_mem);
+ if (nix_mem)
+ rte_free(nix_mem);
+ return rc;
+}
+
+int
+otx2_flow_fini(struct otx2_eth_dev *hw)
+{
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ int rc;
+
+ rc = flow_free_all_resources(hw);
+ if (rc) {
+ otx2_err("Error when deleting NPC MCAM entries, counters");
+ return rc;
+ }
+
+ if (npc->flow_list)
+ rte_free(npc->flow_list);
+ if (npc->live_entries_rev)
+ rte_free(npc->live_entries_rev);
+ if (npc->live_entries)
+ rte_free(npc->live_entries);
+ if (npc->free_entries_rev)
+ rte_free(npc->free_entries_rev);
+ if (npc->free_entries)
+ rte_free(npc->free_entries);
+ if (npc->flow_entry_info)
+ rte_free(npc->flow_entry_info);
+
+ return 0;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 43/58] net/octeontx2: connect flow API to ethdev ops
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (41 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 42/58] net/octeontx2: add flow init and fini jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 44/58] net/octeontx2: implement VLAN utility functions jerinj
` (16 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Connect rte_flow driver ops to ethdev via .filter_ctrl op.
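For context, a sketch of how an application (or the rte_flow layer
itself) reaches these ops through the generic filter control path
(port_id is an assumption for illustration):

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_flow.h>

    const struct rte_flow_ops *ops = NULL;

    /* rte_flow resolves the driver ops the same way internally */
    if (rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_GENERIC,
                                RTE_ETH_FILTER_GET, &ops) == 0)
        printf("flow ops: %p\n", (const void *)ops);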
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 10 ++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 3 +++
drivers/net/octeontx2/otx2_ethdev_ops.c | 21 +++++++++++++++++++++
6 files changed, 37 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 0f416ee4b..4917057f6 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -22,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Flow control = Y
+Flow API = Y
Packet type parsing = Y
Timesync = Y
Timestamp offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index b909918ce..9049e8e99 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -22,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Flow control = Y
+Flow API = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 812d5d649..735b7447a 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -17,6 +17,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Flow API = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 9cd3ce407..bda5b4aa4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1079,6 +1079,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
/* Free the resources allocated from the previous configure */
if (dev->configured == 1) {
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ otx2_flow_fini(dev);
oxt2_nix_unregister_queue_irqs(eth_dev);
nix_set_nop_rxtx_function(eth_dev);
rc = nix_store_queue_cfg_and_then_release(eth_dev);
@@ -1324,6 +1325,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.rx_descriptor_status = otx2_nix_rx_descriptor_status,
.tx_done_cleanup = otx2_nix_tx_done_cleanup,
.pool_ops_supported = otx2_nix_pool_ops_supported,
+ .filter_ctrl = otx2_nix_dev_filter_ctrl,
.get_module_info = otx2_nix_get_module_info,
.get_module_eeprom = otx2_nix_get_module_eeprom,
.flow_ctrl_get = otx2_nix_flow_ctrl_get,
@@ -1503,6 +1505,11 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL;
}
+ /* Initialize rte-flow */
+ rc = otx2_flow_init(dev);
+ if (rc)
+ goto free_mac_addrs;
+
otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64
" rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64,
eth_dev->data->port_id, dev->pf, dev->vf,
@@ -1539,6 +1546,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
/* Disable nix bpid config */
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ /* Disable other rte_flow entries */
+ otx2_flow_fini(dev);
+
/* Disable PTP if already enabled */
if (otx2_ethdev_is_ptp_en(dev))
otx2_nix_timesync_disable(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 1edc7da29..e9123641c 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -274,6 +274,9 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
/* Ops */
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+int otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
+ enum rte_filter_type filter_type,
+ enum rte_filter_op filter_op, void *arg);
int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_module_info *modinfo);
int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 51c156786..1da9222b7 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -220,6 +220,27 @@ otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
return -ENOTSUP;
}
+int
+otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
+ enum rte_filter_type filter_type,
+ enum rte_filter_op filter_op, void *arg)
+{
+ RTE_SET_USED(eth_dev);
+
+ if (filter_type != RTE_ETH_FILTER_GENERIC) {
+ otx2_err("Unsupported filter type %d", filter_type);
+ return -ENOTSUP;
+ }
+
+ if (filter_op == RTE_ETH_FILTER_GET) {
+ *(const void **)arg = &otx2_flow_ops;
+ return 0;
+ }
+
+ otx2_err("Invalid filter_op %d", filter_op);
+ return -EINVAL;
+}
+
static struct cgx_fw_data *
nix_get_fwdata(struct otx2_eth_dev *dev)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 44/58] net/octeontx2: implement VLAN utility functions
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (42 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 43/58] net/octeontx2: connect flow API to ethdev ops jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 45/58] net/octeontx2: support VLAN offloads jerinj
` (15 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Implement utility functions needed for VLAN functionality and
introduce the VLAN related structures.
The maximum Vtag insertion size is controlled by the SMQ
configuration, so this patch also configures SMQ to support up to
double Vtag insertion.
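As a sketch, the SMQ configuration word written in this patch packs
the xoff bit, the vtag-insertion budget and the frame-size limits as
below (bit positions are taken from the register writes in the diff;
the snippet itself is illustrative only and uses this patch's macros):

    uint64_t smq_cfg;

    /* bit 50: xoff, bits from 36: max vtags to insert,
     * bits 15:8 / 6:0: max / min HW frame size
     */
    smq_cfg = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) |
              (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;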
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 10 ++
drivers/net/octeontx2/otx2_ethdev.h | 48 +++++++
drivers/net/octeontx2/otx2_tm.c | 5 +-
drivers/net/octeontx2/otx2_vlan.c | 190 ++++++++++++++++++++++++++++
6 files changed, 253 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_vlan.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index d651c8c50..b1cc6d83b 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -36,6 +36,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_ptp.c \
otx2_flow.c \
otx2_link.c \
+ otx2_vlan.c \
otx2_stats.c \
otx2_lookup.c \
otx2_ethdev.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index a2c494bb4..d5f272c8b 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -9,6 +9,7 @@ sources = files(
'otx2_ptp.c',
'otx2_flow.c',
'otx2_link.c',
+ 'otx2_vlan.c',
'otx2_stats.c',
'otx2_lookup.c',
'otx2_ethdev.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index bda5b4aa4..cfc22a2da 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1079,6 +1079,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
/* Free the resources allocated from the previous configure */
if (dev->configured == 1) {
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ otx2_nix_vlan_fini(eth_dev);
otx2_flow_fini(dev);
oxt2_nix_unregister_queue_irqs(eth_dev);
nix_set_nop_rxtx_function(eth_dev);
@@ -1126,6 +1127,12 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ rc = otx2_nix_vlan_offload_init(eth_dev);
+ if (rc) {
+ otx2_err("Failed to init vlan offload rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/* Register queue IRQs */
rc = oxt2_nix_register_queue_irqs(eth_dev);
if (rc) {
@@ -1546,6 +1553,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
/* Disable nix bpid config */
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ /* Disable vlan offloads */
+ otx2_nix_vlan_fini(eth_dev);
+
/* Disable other rte_flow entries */
otx2_flow_fini(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e9123641c..b54018ae0 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -40,6 +40,7 @@
/* Used for struct otx2_eth_dev::flags */
#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
+#define NIX_MAX_VTAG_INS 2
#define VLAN_TAG_SIZE 4
#define NIX_HW_L2_OVERHEAD 22
/* ETH_HLEN+2*VLAN_HLEN */
@@ -163,6 +164,47 @@ struct otx2_fc_info {
uint16_t bpid[NIX_MAX_CHAN];
};
+struct vlan_mkex_info {
+ struct npc_xtract_info la_xtract;
+ struct npc_xtract_info lb_xtract;
+ uint64_t lb_lt_offset;
+};
+
+struct vlan_entry {
+ uint32_t mcam_idx;
+ uint16_t vlan_id;
+ TAILQ_ENTRY(vlan_entry) next;
+};
+
+TAILQ_HEAD(otx2_vlan_filter_tbl, vlan_entry);
+
+struct otx2_vlan_info {
+ struct otx2_vlan_filter_tbl fltr_tbl;
+ /* MKEX layer info */
+ struct mcam_entry def_tx_mcam_ent;
+ struct mcam_entry def_rx_mcam_ent;
+ struct vlan_mkex_info mkex;
+ /* Default mcam entry that matches vlan packets */
+ uint32_t def_rx_mcam_idx;
+ uint32_t def_tx_mcam_idx;
+ /* MCAM entry that matches double vlan packets */
+ uint32_t qinq_mcam_idx;
+ /* Indices of tx_vtag def registers */
+ uint32_t outer_vlan_idx;
+ uint32_t inner_vlan_idx;
+ uint16_t outer_vlan_tpid;
+ uint16_t inner_vlan_tpid;
+ uint16_t pvid;
+ /* QinQ entry allocated before default one */
+ uint8_t qinq_before_def;
+ uint8_t pvid_insert_on;
+ /* Rx vtag action type */
+ uint8_t vtag_type_idx;
+ uint8_t filter_on;
+ uint8_t strip_on;
+ uint8_t qinq_on;
+};
+
struct otx2_eth_dev {
OTX2_DEV; /* Base class */
MARKER otx2_eth_dev_data_start;
@@ -222,6 +264,7 @@ struct otx2_eth_dev {
struct rte_timecounter systime_tc;
struct rte_timecounter rx_tstamp_tc;
struct rte_timecounter tx_tstamp_tc;
+ struct otx2_vlan_info vlan_info;
} __rte_cache_aligned;
struct otx2_eth_txq {
@@ -422,4 +465,9 @@ int otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev,
struct timespec *ts);
int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en);
+/* VLAN */
+int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
+int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
+
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 4439389b8..246920695 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -359,7 +359,7 @@ populate_tm_registers(struct otx2_eth_dev *dev,
/* Set xoff which will be cleared later */
*reg++ = NIX_AF_SMQX_CFG(schq);
- *regval++ = BIT_ULL(50) |
+ *regval++ = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) |
(NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
req->num_regs++;
*reg++ = NIX_AF_MDQX_PARENT(schq);
@@ -688,7 +688,8 @@ nix_smq_xoff(struct otx2_eth_dev *dev, uint16_t smq, bool enable)
req->reg[0] = NIX_AF_SMQX_CFG(smq);
/* Unmodified fields */
- req->regval[0] = (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
+ req->regval[0] = ((uint64_t)NIX_MAX_VTAG_INS << 36) |
+ (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
if (enable)
req->regval[0] |= BIT_ULL(50) | BIT_ULL(49);
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
new file mode 100644
index 000000000..b3136d2cf
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -0,0 +1,190 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_malloc.h>
+#include <rte_tailq.h>
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+
+#define VLAN_ID_MATCH 0x1
+#define VTAG_F_MATCH 0x2
+#define MAC_ADDR_MATCH 0x4
+#define QINQ_F_MATCH 0x8
+#define VLAN_DROP 0x10
+
+enum vtag_cfg_dir {
+ VTAG_TX,
+ VTAG_RX
+};
+
+static int
+__rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
+ uint32_t entry, const int enable)
+{
+ struct npc_mcam_ena_dis_entry_req *req;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc = -EINVAL;
+
+ if (enable)
+ req = otx2_mbox_alloc_msg_npc_mcam_ena_entry(mbox);
+ else
+ req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
+
+ req->entry = entry;
+
+ rc = otx2_mbox_process_msg(mbox, NULL);
+ return rc;
+}
+
+static int
+__rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
+{
+ struct npc_mcam_free_entry_req *req;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc = -EINVAL;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ req->entry = entry;
+
+ rc = otx2_mbox_process_msg(mbox, NULL);
+ return rc;
+}
+
+static int
+__rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
+ struct mcam_entry *entry, uint8_t intf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct npc_mcam_write_entry_req *req;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct msghdr *rsp;
+ int rc = -EINVAL;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
+
+ req->entry = ent_idx;
+ req->intf = intf;
+ req->enable_entry = 1;
+ memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ return rc;
+}
+
+static int
+__rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
+ struct mcam_entry *entry,
+ uint8_t intf, bool drop)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct npc_mcam_alloc_and_write_entry_req *req;
+ struct npc_mcam_alloc_and_write_entry_rsp *rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc = -EINVAL;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_alloc_and_write_entry(mbox);
+
+ if (intf == NPC_MCAM_RX) {
+ if (!drop && dev->vlan_info.def_rx_mcam_idx) {
+ req->priority = NPC_MCAM_HIGHER_PRIO;
+ req->ref_entry = dev->vlan_info.def_rx_mcam_idx;
+ } else if (drop && dev->vlan_info.qinq_mcam_idx) {
+ req->priority = NPC_MCAM_LOWER_PRIO;
+ req->ref_entry = dev->vlan_info.qinq_mcam_idx;
+ } else {
+ req->priority = NPC_MCAM_ANY_PRIO;
+ req->ref_entry = 0;
+ }
+ } else {
+ req->priority = NPC_MCAM_ANY_PRIO;
+ req->ref_entry = 0;
+ }
+
+ req->intf = intf;
+ req->enable_entry = 1;
+ memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ return rsp->entry;
+}
+
+static int
+nix_vlan_rx_mkex_offset(uint64_t mask)
+{
+ int nib_count = 0;
+
+ while (mask) {
+ nib_count += mask & 1;
+ mask >>= 1;
+ }
+
+ return nib_count * 4;
+}
+
+static int
+nix_vlan_get_mkex_info(struct otx2_eth_dev *dev)
+{
+ struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
+ struct otx2_npc_flow_info *npc = &dev->npc_flow;
+ struct npc_xtract_info *x_info = NULL;
+ uint64_t rx_keyx;
+ otx2_dxcfg_t *p;
+ int rc = -EINVAL;
+
+ if (npc == NULL) {
+ otx2_err("Missing npc mkex configuration");
+ return rc;
+ }
+
+#define NPC_KEX_CHAN_NIBBLE_ENA 0x7ULL
+#define NPC_KEX_LB_LTYPE_NIBBLE_ENA 0x1000ULL
+#define NPC_KEX_LB_LTYPE_NIBBLE_MASK 0xFFFULL
+
+ rx_keyx = npc->keyx_supp_nmask[NPC_MCAM_RX];
+ if ((rx_keyx & NPC_KEX_CHAN_NIBBLE_ENA) != NPC_KEX_CHAN_NIBBLE_ENA)
+ return rc;
+
+ if ((rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_ENA) !=
+ NPC_KEX_LB_LTYPE_NIBBLE_ENA)
+ return rc;
+
+ mkex->lb_lt_offset =
+ nix_vlan_rx_mkex_offset(rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_MASK);
+
+ p = &npc->prx_dxcfg;
+ x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0];
+ memcpy(&mkex->la_xtract, x_info, sizeof(struct npc_xtract_info));
+ x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LB][NPC_LT_LB_CTAG].xtract[0];
+ memcpy(&mkex->lb_xtract, x_info, sizeof(struct npc_xtract_info));
+
+ return 0;
+}
+
+int
+otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc;
+
+ /* Port initialized for first time or restarted */
+ if (!dev->configured) {
+ rc = nix_vlan_get_mkex_info(dev);
+ if (rc) {
+ otx2_err("Failed to get vlan mkex info rc=%d", rc);
+ return rc;
+ }
+ }
+ return 0;
+}
+
+int
+otx2_nix_vlan_fini(__rte_unused struct rte_eth_dev *eth_dev)
+{
+ return 0;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 45/58] net/octeontx2: support VLAN offloads
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (43 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 44/58] net/octeontx2: implement VLAN utility functions jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 46/58] net/octeontx2: support VLAN filters jerinj
` (14 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Support configuring VLAN offloads for an ethernet device.
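From the application side these land here via the standard ethdev
call; a minimal sketch (port_id is an assumption for illustration):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Enable VLAN stripping and filtering; offload bits left out
     * of the mask are disabled by the ethdev layer.
     */
    int rc = rte_eth_dev_set_vlan_offload(port_id,
                ETH_VLAN_STRIP_OFFLOAD | ETH_VLAN_FILTER_OFFLOAD);
    if (rc != 0)
        printf("vlan offload set failed: %d\n", rc);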
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
drivers/net/octeontx2/otx2_ethdev.c | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 1 +
drivers/net/octeontx2/otx2_rx.h | 1 +
drivers/net/octeontx2/otx2_vlan.c | 424 ++++++++++++++++++++-
7 files changed, 425 insertions(+), 8 deletions(-)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 4917057f6..f811c38e3 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
Inner RSS = Y
Flow control = Y
Flow API = Y
+VLAN offload = Y
+QinQ offload = Y
Packet type parsing = Y
Timesync = Y
Timestamp offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 9049e8e99..77c3a5637 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
Inner RSS = Y
Flow control = Y
Flow API = Y
+VLAN offload = Y
+QinQ offload = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 735b7447a..4571a1e78 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -18,6 +18,8 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Flow API = Y
+VLAN offload = Y
+QinQ offload = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index cfc22a2da..362e46941 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1344,6 +1344,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.timesync_adjust_time = otx2_nix_timesync_adjust_time,
.timesync_read_time = otx2_nix_timesync_read_time,
.timesync_write_time = otx2_nix_timesync_write_time,
+ .vlan_offload_set = otx2_nix_vlan_offload_set,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index b54018ae0..816371c37 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -468,6 +468,7 @@ int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en);
/* VLAN */
int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
+int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask);
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 687cf2b40..763dc402e 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -16,6 +16,7 @@
sizeof(uint16_t))
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+#define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3)
#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index b3136d2cf..d9880d069 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -39,8 +39,50 @@ __rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
return rc;
}
+static void
+nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
+ struct mcam_entry *entry, bool qinq, bool drop)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int pcifunc = otx2_pfvf_func(dev->pf, dev->vf);
+ uint64_t action = 0, vtag_action = 0;
+
+ action = NIX_RX_ACTIONOP_UCAST;
+
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ action = NIX_RX_ACTIONOP_RSS;
+ action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
+ }
+
+ action |= (uint64_t)pcifunc << 4;
+ entry->action = action;
+
+ if (drop) {
+ entry->action &= ~((uint64_t)0xF);
+ entry->action |= NIX_RX_ACTIONOP_DROP;
+ return;
+ }
+
+ if (!qinq) {
+ /* VTAG0 fields denote CTAG in single vlan case */
+ vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
+ vtag_action |= (NPC_LID_LB << 8);
+ vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR;
+ } else {
+ /* VTAG0 & VTAG1 fields denote CTAG & STAG respectively */
+ vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
+ vtag_action |= (NPC_LID_LB << 8);
+ vtag_action |= NIX_RX_VTAGACTION_VTAG1_RELPTR;
+ vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 47);
+ vtag_action |= ((uint64_t)(NPC_LID_LB) << 40);
+ vtag_action |= (NIX_RX_VTAGACTION_VTAG0_RELPTR << 32);
+ }
+
+ entry->vtag_action = vtag_action;
+}
+
static int
-__rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
+nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
{
struct npc_mcam_free_entry_req *req;
struct otx2_mbox *mbox = dev->mbox;
@@ -54,8 +96,8 @@ __rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
}
static int
-__rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
- struct mcam_entry *entry, uint8_t intf)
+nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
+ struct mcam_entry *entry, uint8_t intf)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
struct npc_mcam_write_entry_req *req;
@@ -75,9 +117,9 @@ __rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
}
static int
-__rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
- struct mcam_entry *entry,
- uint8_t intf, bool drop)
+nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
+ struct mcam_entry *entry,
+ uint8_t intf, bool drop)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
struct npc_mcam_alloc_and_write_entry_req *req;
@@ -114,6 +156,347 @@ __rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
return rsp->entry;
}
+/* Configure mcam entry with required MCAM search rules */
+static int
+nix_vlan_mcam_config(struct rte_eth_dev *eth_dev,
+ uint16_t vlan_id, uint16_t flags)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
+ volatile uint8_t *key_data, *key_mask;
+ uint64_t mcam_data, mcam_mask;
+ struct mcam_entry entry;
+ uint8_t *mac_addr;
+ int idx, kwi = 0;
+
+ memset(&entry, 0, sizeof(struct mcam_entry));
+ key_data = (volatile uint8_t *)entry.kw;
+ key_mask = (volatile uint8_t *)entry.kw_mask;
+
+ /* Channel base extracted to KW0[11:0] */
+ entry.kw[kwi] = dev->rx_chan_base;
+ entry.kw_mask[kwi] = BIT_ULL(12) - 1;
+
+ /* Adds vlan_id & LB CTAG flag to MCAM KW */
+ if (flags & VLAN_ID_MATCH) {
+ entry.kw[kwi] |= NPC_LT_LB_CTAG << mkex->lb_lt_offset;
+ entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset;
+
+ mcam_data = (vlan_id << 16);
+ mcam_mask = BIT_ULL(32) - 1;
+ otx2_mbox_memcpy(key_data + mkex->lb_xtract.key_off,
+ &mcam_data, mkex->lb_xtract.len + 1);
+ otx2_mbox_memcpy(key_mask + mkex->lb_xtract.key_off,
+ &mcam_mask, mkex->lb_xtract.len + 1);
+ }
+
+ /* Adds LB STAG flag to MCAM KW */
+ if (flags & QINQ_F_MATCH) {
+ entry.kw[kwi] |= NPC_LT_LB_STAG << mkex->lb_lt_offset;
+ entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset;
+ }
+
+ /* Adds LB CTAG & LB STAG flags to MCAM KW */
+ if (flags & VTAG_F_MATCH) {
+ entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG)
+ << mkex->lb_lt_offset;
+ entry.kw_mask[kwi] |= (NPC_LT_LB_CTAG & NPC_LT_LB_STAG)
+ << mkex->lb_lt_offset;
+ }
+
+ /* Adds port MAC address to MCAM KW */
+ if (flags & MAC_ADDR_MATCH) {
+ mcam_data = 0ULL;
+ mac_addr = dev->mac_addr;
+ for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
+ mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
+
+ mcam_mask = BIT_ULL(48) - 1;
+ otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
+ &mcam_data, mkex->la_xtract.len + 1);
+ otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
+ &mcam_mask, mkex->la_xtract.len + 1);
+ }
+
+ /* VLAN_DROP: for drop action for all vlan packets when filter is on.
+ * For QinQ, enable vtag action for both outer & inner tags
+ */
+ if (flags & VLAN_DROP) {
+ nix_set_rx_vlan_action(eth_dev, &entry, false, true);
+ dev->vlan_info.def_rx_mcam_ent = entry;
+ } else if (flags & QINQ_F_MATCH) {
+ nix_set_rx_vlan_action(eth_dev, &entry, true, false);
+ } else {
+ nix_set_rx_vlan_action(eth_dev, &entry, false, false);
+ }
+
+ return nix_vlan_mcam_alloc_and_write(eth_dev, &entry, NIX_INTF_RX,
+ flags & VLAN_DROP);
+}
+
+/* Installs/Removes/Modifies default rx entry */
+static int
+nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
+ bool filter, bool enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ uint16_t flags = 0;
+ int mcam_idx, rc;
+
+ /* Use default mcam entry to either drop vlan traffic when
+ * vlan filter is on or strip vtag when strip is enabled.
+ * Allocate default entry which matches port mac address
+ * and vtag(ctag/stag) flags with drop action.
+ */
+ if (!vlan->def_rx_mcam_idx) {
+ if (filter && enable)
+ flags = MAC_ADDR_MATCH | VTAG_F_MATCH | VLAN_DROP;
+ else if (strip && enable)
+ flags = MAC_ADDR_MATCH | VTAG_F_MATCH;
+ else
+ return 0;
+
+ mcam_idx = nix_vlan_mcam_config(eth_dev, 0, flags);
+ if (mcam_idx < 0) {
+ otx2_err("Failed to config vlan mcam");
+ return -mcam_idx;
+ }
+
+ vlan->def_rx_mcam_idx = mcam_idx;
+ return 0;
+ }
+
+ /* Filter is already enabled, so packets would be dropped anyway. No
+ * processing is needed to enable strip w.r.t. the mcam entry.
+ */
+
+ /* Filter disable request */
+ if (vlan->filter_on && filter && !enable) {
+ vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
+
+ /* Free default rx entry only when
+ * 1. strip is not on and
+ * 2. qinq entry is allocated before default entry.
+ */
+ if (vlan->strip_on ||
+ (vlan->qinq_on && !vlan->qinq_before_def)) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode ==
+ ETH_MQ_RX_RSS)
+ vlan->def_rx_mcam_ent.action |=
+ NIX_RX_ACTIONOP_RSS;
+ else
+ vlan->def_rx_mcam_ent.action |=
+ NIX_RX_ACTIONOP_UCAST;
+ return nix_vlan_mcam_write(eth_dev,
+ vlan->def_rx_mcam_idx,
+ &vlan->def_rx_mcam_ent,
+ NIX_INTF_RX);
+ } else {
+ rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
+ if (rc)
+ return rc;
+ vlan->def_rx_mcam_idx = 0;
+ }
+ }
+
+ /* Filter enable request */
+ if (!vlan->filter_on && filter && enable) {
+ vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
+ vlan->def_rx_mcam_ent.action |= NIX_RX_ACTIONOP_DROP;
+ return nix_vlan_mcam_write(eth_dev, vlan->def_rx_mcam_idx,
+ &vlan->def_rx_mcam_ent, NIX_INTF_RX);
+ }
+
+ /* Strip disable request */
+ if (vlan->strip_on && strip && !enable) {
+ if (!vlan->filter_on &&
+ !(vlan->qinq_on && !vlan->qinq_before_def)) {
+ rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
+ if (rc)
+ return rc;
+ vlan->def_rx_mcam_idx = 0;
+ }
+ }
+
+ return 0;
+}
+
+/* Configure vlan stripping on or off */
+static int
+nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_vtag_config *vtag_cfg;
+ int rc = -EINVAL;
+
+ rc = nix_vlan_handle_default_rx_entry(eth_dev, true, false, enable);
+ if (rc) {
+ otx2_err("Failed to config default rx entry");
+ return rc;
+ }
+
+ vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
+ /* cfg_type = 1 for rx vlan cfg */
+ vtag_cfg->cfg_type = VTAG_RX;
+
+ if (enable)
+ vtag_cfg->rx.strip_vtag = 1;
+ else
+ vtag_cfg->rx.strip_vtag = 0;
+
+ /* Always capture */
+ vtag_cfg->rx.capture_vtag = 1;
+ vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
+ /* Use rx vtag type index[0] for now */
+ vtag_cfg->rx.vtag_type = 0;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ dev->vlan_info.strip_on = enable;
+ return rc;
+}
+
+/* Configure vlan filtering on or off for all vlans if vlan_id == 0 */
+static int
+nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
+ uint16_t vlan_id)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc = -EINVAL;
+
+ if (!vlan_id && enable) {
+ rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
+ enable);
+ if (rc) {
+ otx2_err("Failed to config vlan mcam");
+ return rc;
+ }
+ dev->vlan_info.filter_on = enable;
+ return 0;
+ }
+
+ if (!vlan_id && !enable) {
+ rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
+ enable);
+ if (rc) {
+ otx2_err("Failed to config vlan mcam");
+ return rc;
+ }
+ dev->vlan_info.filter_on = enable;
+ return 0;
+ }
+
+ return 0;
+}
+
+/* Configure double vlan(qinq) on or off */
+static int
+otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev,
+ const uint8_t enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan_info;
+ int mcam_idx;
+ int rc;
+
+ vlan_info = &dev->vlan_info;
+
+ if (!enable) {
+ if (!vlan_info->qinq_mcam_idx)
+ return 0;
+
+ rc = nix_vlan_mcam_free(dev, vlan_info->qinq_mcam_idx);
+ if (rc)
+ return rc;
+
+ vlan_info->qinq_mcam_idx = 0;
+ dev->vlan_info.qinq_on = 0;
+ vlan_info->qinq_before_def = 0;
+ return 0;
+ }
+
+ mcam_idx =
+ nix_vlan_mcam_config(eth_dev, 0, QINQ_F_MATCH | MAC_ADDR_MATCH);
+ if (mcam_idx < 0)
+ return mcam_idx;
+
+ if (!vlan_info->def_rx_mcam_idx)
+ vlan_info->qinq_before_def = 1;
+
+ vlan_info->qinq_mcam_idx = mcam_idx;
+ dev->vlan_info.qinq_on = 1;
+ return 0;
+}
+
+int
+otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t offloads = dev->rx_offloads;
+ struct rte_eth_rxmode *rxmode;
+ int rc = 0;
+
+ rxmode = &eth_dev->data->dev_conf.rxmode;
+
+ if (mask & ETH_VLAN_EXTEND_MASK) {
+ otx2_err("Extend offload not supported");
+ return -ENOTSUP;
+ }
+
+ if (mask & ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rc = nix_vlan_hw_strip(eth_dev, true);
+ } else {
+ offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rc = nix_vlan_hw_strip(eth_dev, false);
+ }
+ if (rc)
+ goto done;
+ }
+
+ if (mask & ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rc = nix_vlan_hw_filter(eth_dev, true, 0);
+ } else {
+ offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ rc = nix_vlan_hw_filter(eth_dev, false, 0);
+ }
+ if (rc)
+ goto done;
+ }
+
+ if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+ if (!dev->vlan_info.qinq_on) {
+ offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ rc = otx2_nix_config_double_vlan(eth_dev, true);
+ if (rc)
+ goto done;
+ }
+ } else {
+ if (dev->vlan_info.qinq_on) {
+ offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ rc = otx2_nix_config_double_vlan(eth_dev, false);
+ if (rc)
+ goto done;
+ }
+ }
+
+ if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP)) {
+ dev->rx_offloads |= offloads;
+ dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
+ }
+
+done:
+ return rc;
+}
+
static int
nix_vlan_rx_mkex_offset(uint64_t mask)
{
@@ -170,7 +553,7 @@ int
otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc;
+ int rc, mask;
/* Port initialized for first time or restarted */
if (!dev->configured) {
@@ -179,12 +562,37 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
otx2_err("Failed to get vlan mkex info rc=%d", rc);
return rc;
}
+
+ TAILQ_INIT(&dev->vlan_info.fltr_tbl);
}
+
+ mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ rc = otx2_nix_vlan_offload_set(eth_dev, mask);
+ if (rc) {
+ otx2_err("Failed to set vlan offload rc=%d", rc);
+ return rc;
+ }
+
return 0;
}
int
-otx2_nix_vlan_fini(__rte_unused struct rte_eth_dev *eth_dev)
+otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev)
{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ int rc;
+
+ if (!dev->configured) {
+ if (vlan->def_rx_mcam_idx) {
+ rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
+ if (rc)
+ return rc;
+ }
+ }
+
+ otx2_nix_config_double_vlan(eth_dev, false);
+ vlan->def_rx_mcam_idx = 0;
return 0;
}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 46/58] net/octeontx2: support VLAN filters
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (44 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 45/58] net/octeontx2: support VLAN offloads jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 47/58] net/octeontx2: support VLAN TPID and PVID for Tx jerinj
` (13 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Support setting up VLAN filters so as to allow reception of tagged
packets once the VLAN HW filter offload is enabled.
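A minimal application-side sketch (port_id and the VLAN ID are
assumptions for illustration; as the code below requires, filtering
must be enabled first, e.g. via rte_eth_dev_set_vlan_offload()):

    #include <rte_ethdev.h>

    /* Admit VID 100 on the port, then remove the filter again */
    int rc = rte_eth_dev_vlan_filter(port_id, 100, 1);
    if (rc == 0)
        rc = rte_eth_dev_vlan_filter(port_id, 100, 0);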
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 2 +
drivers/net/octeontx2/otx2_ethdev.h | 5 +-
drivers/net/octeontx2/otx2_vlan.c | 147 ++++++++++++++++++++-
6 files changed, 154 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index f811c38e3..3567e3f63 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -21,6 +21,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+VLAN filter = Y
Flow control = Y
Flow API = Y
VLAN offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 77c3a5637..7edc80348 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -21,6 +21,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+VLAN filter = Y
Flow control = Y
Flow API = Y
VLAN offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 4571a1e78..fcc1ddc03 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -17,6 +17,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+VLAN filter = Y
Flow API = Y
VLAN offload = Y
QinQ offload = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 362e46941..175e80e44 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1345,6 +1345,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.timesync_read_time = otx2_nix_timesync_read_time,
.timesync_write_time = otx2_nix_timesync_write_time,
.vlan_offload_set = otx2_nix_vlan_offload_set,
+ .vlan_filter_set = otx2_nix_vlan_filter_set,
+ .vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 816371c37..a3babe51a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -469,6 +469,9 @@ int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en);
int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask);
-
+int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
+ int on);
+void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev,
+ uint16_t queue, int on);
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index d9880d069..3e60da099 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -21,8 +21,8 @@ enum vtag_cfg_dir {
};
static int
-__rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
- uint32_t entry, const int enable)
+nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
+ uint32_t entry, const int enable)
{
struct npc_mcam_ena_dis_entry_req *req;
struct otx2_mbox *mbox = dev->mbox;
@@ -366,6 +366,8 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
uint16_t vlan_id)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct vlan_entry *entry;
int rc = -EINVAL;
if (!vlan_id && enable) {
@@ -379,6 +381,24 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
return 0;
}
+ /* Enable/disable existing vlan filter entries */
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
+ if (vlan_id) {
+ if (entry->vlan_id == vlan_id) {
+ rc = nix_vlan_mcam_enb_dis(dev,
+ entry->mcam_idx,
+ enable);
+ if (rc)
+ return rc;
+ }
+ } else {
+ rc = nix_vlan_mcam_enb_dis(dev, entry->mcam_idx,
+ enable);
+ if (rc)
+ return rc;
+ }
+ }
+
if (!vlan_id && !enable) {
rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
enable);
@@ -393,6 +413,80 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
return 0;
}
+/* Enable/disable vlan filtering for the given vlan_id */
+int
+otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
+ int on)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct vlan_entry *entry;
+ int entry_exists = 0;
+ int rc = -EINVAL;
+ int mcam_idx;
+
+ if (!vlan_id) {
+ otx2_err("Vlan Id can't be zero");
+ return rc;
+ }
+
+ if (!vlan->def_rx_mcam_idx) {
+ otx2_err("Vlan Filtering is disabled, enable it first");
+ return rc;
+ }
+
+ if (on) {
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
+ if (entry->vlan_id == vlan_id) {
+ /* Vlan entry already exists */
+ entry_exists = 1;
+ /* mcam entry already allocated */
+ if (entry->mcam_idx) {
+ rc = nix_vlan_hw_filter(eth_dev, on,
+ vlan_id);
+ return rc;
+ }
+ /* Keep this entry for the mcam config below */
+ break;
+ }
+ }
+
+ if (!entry_exists) {
+ entry = rte_zmalloc("otx2_nix_vlan_entry",
+ sizeof(struct vlan_entry), 0);
+ if (!entry) {
+ otx2_err("Failed to allocate memory");
+ return -ENOMEM;
+ }
+ }
+
+ /* Enables vlan_id & mac address based filtering */
+ mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
+ VLAN_ID_MATCH |
+ MAC_ADDR_MATCH);
+ if (mcam_idx < 0) {
+ otx2_err("Failed to config vlan mcam");
+ /* Only a pre-existing entry is on the list */
+ if (entry_exists)
+ TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
+ rte_free(entry);
+ return mcam_idx;
+ }
+
+ entry->mcam_idx = mcam_idx;
+ if (!entry_exists) {
+ entry->vlan_id = vlan_id;
+ TAILQ_INSERT_HEAD(&vlan->fltr_tbl, entry, next);
+ }
+ } else {
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
+ if (entry->vlan_id == vlan_id) {
+ nix_vlan_mcam_free(dev, entry->mcam_idx);
+ TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
+ rte_free(entry);
+ break;
+ }
+ }
+ }
+ return 0;
+}
+
/* Configure double vlan(qinq) on or off */
static int
otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev,
@@ -497,6 +591,13 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
return rc;
}
+void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue,
+ __rte_unused int on)
+{
+ otx2_err("Not Supported");
+}
+
static int
nix_vlan_rx_mkex_offset(uint64_t mask)
{
@@ -549,6 +650,27 @@ nix_vlan_get_mkex_info(struct otx2_eth_dev *dev)
return 0;
}
+static void nix_vlan_reinstall_vlan_filters(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct vlan_entry *entry;
+ int rc;
+
+ /* VLAN filters can't be set without enabling filtering first */
+ rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, true);
+ if (rc) {
+ otx2_err("Failed to reinstall vlan filters");
+ return;
+ }
+
+ TAILQ_FOREACH(entry, &dev->vlan_info.fltr_tbl, next) {
+ rc = otx2_nix_vlan_filter_set(eth_dev, entry->vlan_id, true);
+ if (rc)
+ otx2_err("Failed to reinstall filter for vlan:%d",
+ entry->vlan_id);
+ }
+}
+
int
otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
{
@@ -564,6 +686,11 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
}
TAILQ_INIT(&dev->vlan_info.fltr_tbl);
+ } else {
+ /* Reinstall all mcam entries now if filter offload is set */
+ if (eth_dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_VLAN_FILTER)
+ nix_vlan_reinstall_vlan_filters(eth_dev);
}
mask =
@@ -582,8 +709,24 @@ otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct vlan_entry *entry;
int rc;
+ while ((entry = TAILQ_FIRST(&vlan->fltr_tbl)) != NULL) {
+ if (!dev->configured) {
+ rc = nix_vlan_mcam_free(dev, entry->mcam_idx);
+ if (rc)
+ return rc;
+ TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
+ rte_free(entry);
+ } else {
+ /* MCAM entries freed by flow_fini & lf_free on
+ * port stop.
+ */
+ entry->mcam_idx = 0;
+ }
+ }
+
if (!dev->configured) {
if (vlan->def_rx_mcam_idx) {
rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 47/58] net/octeontx2: support VLAN TPID and PVID for Tx
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (45 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 46/58] net/octeontx2: support VLAN filters jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 48/58] net/octeontx2: add FW version get operation jerinj
` (12 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Implement support for setting VLAN TPID and PVID for Tx
packets.
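For context, these ops land behind the generic ethdev API; a minimal sketch
of how an application might drive them (the port id, the 0x88a8 TPID and
PVID 100 are illustrative values, not defaults of this driver):

#include <rte_ethdev.h>

static int
setup_tx_vlan_insertion(uint16_t port_id)
{
	int rc;

	/* Program the outer TPID used for tags inserted on Tx */
	rc = rte_eth_dev_set_vlan_ether_type(port_id,
					     ETH_VLAN_TYPE_OUTER, 0x88a8);
	if (rc)
		return rc;

	/* Insert VLAN 100 into every transmitted packet */
	return rte_eth_dev_set_vlan_pvid(port_id, 100, 1);
}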
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 2 +
drivers/net/octeontx2/otx2_ethdev.h | 5 +
drivers/net/octeontx2/otx2_vlan.c | 191 ++++++++++++++++++++++++++++
3 files changed, 198 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 175e80e44..c5dcdc21c 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1347,6 +1347,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.vlan_offload_set = otx2_nix_vlan_offload_set,
.vlan_filter_set = otx2_nix_vlan_filter_set,
.vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set,
+ .vlan_tpid_set = otx2_nix_vlan_tpid_set,
+ .vlan_pvid_set = otx2_nix_vlan_pvid_set,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index a3babe51a..3f11802eb 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -473,5 +473,10 @@ int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
int on);
void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev,
uint16_t queue, int on);
+int
+otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
+ enum rte_vlan_type type, uint16_t tpid);
+int
+otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on);
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index 3e60da099..3c0d40553 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -81,6 +81,37 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
entry->vtag_action = vtag_action;
}
+static void
+nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
+ int vtag_index)
+{
+ union {
+ uint64_t reg;
+ struct nix_tx_vtag_action_s act;
+ } vtag_action;
+
+ uint64_t action;
+
+ action = NIX_TX_ACTIONOP_UCAST_DEFAULT;
+
+ if (type == ETH_VLAN_TYPE_OUTER) {
+ vtag_action.act.vtag0_def = vtag_index;
+ vtag_action.act.vtag0_lid = NPC_LID_LA;
+ vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
+ vtag_action.act.vtag0_relptr = sizeof(struct nix_inst_hdr_s) +
+ 2 * RTE_ETHER_ADDR_LEN + NIX_RX_VTAGACTION_VTAG0_RELPTR;
+ } else {
+ vtag_action.act.vtag1_def = vtag_index;
+ vtag_action.act.vtag1_lid = NPC_LID_LA;
+ vtag_action.act.vtag1_op = NIX_TX_VTAGOP_INSERT;
+ vtag_action.act.vtag1_relptr = sizeof(struct nix_inst_hdr_s) +
+ 2 * RTE_ETHER_ADDR_LEN + NIX_RX_VTAGACTION_VTAG1_RELPTR;
+ }
+
+ entry->action = action;
+ entry->vtag_action = vtag_action.reg;
+}
+
static int
nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
{
@@ -322,6 +353,46 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
return 0;
}
+/* Installs/Removes default tx entry */
+static int
+nix_vlan_handle_default_tx_entry(struct rte_eth_dev *eth_dev,
+ enum rte_vlan_type type, int vtag_index,
+ int enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct mcam_entry entry;
+ uint16_t pf_func;
+ int rc;
+
+ if (!vlan->def_tx_mcam_idx && enable) {
+ memset(&entry, 0, sizeof(struct mcam_entry));
+
+ /* Only pf_func is matched, swap its bytes */
+ pf_func = (dev->pf_func & 0xff) << 8;
+ pf_func |= (dev->pf_func >> 8) & 0xff;
+
+ /* PF Func extracted to KW1[63:48] */
+ entry.kw[1] = (uint64_t)pf_func << 48;
+ entry.kw_mask[1] = (BIT_ULL(16) - 1) << 48;
+
+ nix_set_tx_vlan_action(&entry, type, vtag_index);
+ vlan->def_tx_mcam_ent = entry;
+
+ return nix_vlan_mcam_alloc_and_write(eth_dev, &entry,
+ NIX_INTF_TX, 0);
+ }
+
+ if (vlan->def_tx_mcam_idx && !enable) {
+ rc = nix_vlan_mcam_free(dev, vlan->def_tx_mcam_idx);
+ if (rc)
+ return rc;
+ vlan->def_tx_mcam_idx = 0;
+ }
+
+ return 0;
+}
+
/* Configure vlan stripping on or off */
static int
nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable)
@@ -591,6 +662,126 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
return rc;
}
+int
+otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
+ enum rte_vlan_type type, uint16_t tpid)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct nix_set_vlan_tpid *tpid_cfg;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc;
+
+ tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
+
+ tpid_cfg->tpid = tpid;
+ if (type == ETH_VLAN_TYPE_OUTER)
+ tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
+ else
+ tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ if (type == ETH_VLAN_TYPE_OUTER)
+ dev->vlan_info.outer_vlan_tpid = tpid;
+ else
+ dev->vlan_info.inner_vlan_tpid = tpid;
+ return 0;
+}
+
+int
+otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct otx2_eth_dev *otx2_dev = otx2_eth_pmd_priv(dev);
+ struct otx2_mbox *mbox = otx2_dev->mbox;
+ struct nix_vtag_config *vtag_cfg;
+ struct nix_vtag_config_rsp *rsp;
+ struct otx2_vlan_info *vlan;
+ int rc, rc1, vtag_index = 0;
+
+ if (vlan_id == 0) {
+ otx2_err("vlan id can't be zero");
+ return -EINVAL;
+ }
+
+ vlan = &otx2_dev->vlan_info;
+
+ if (on && vlan->pvid_insert_on && vlan->pvid == vlan_id) {
+ otx2_err("pvid %d is already enabled", vlan_id);
+ return -EINVAL;
+ }
+
+ if (on && vlan->pvid_insert_on && vlan->pvid != vlan_id) {
+ otx2_err("another pvid is enabled, disable that first");
+ return -EINVAL;
+ }
+
+ /* No pvid active */
+ if (!on && !vlan->pvid_insert_on)
+ return 0;
+
+ /* Given pvid already disabled */
+ if (!on && vlan->pvid != vlan_id)
+ return 0;
+
+ vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
+
+ if (on) {
+ vtag_cfg->cfg_type = VTAG_TX;
+ vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
+
+ if (vlan->outer_vlan_tpid)
+ vtag_cfg->tx.vtag0 =
+ (vlan->outer_vlan_tpid << 16) | vlan_id;
+ else
+ vtag_cfg->tx.vtag0 =
+ ((RTE_ETHER_TYPE_VLAN << 16) | vlan_id);
+ vtag_cfg->tx.cfg_vtag0 = 1;
+ } else {
+ vtag_cfg->cfg_type = VTAG_TX;
+ vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
+
+ vtag_cfg->tx.vtag0_idx = vlan->outer_vlan_idx;
+ vtag_cfg->tx.free_vtag0 = 1;
+ }
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (on) {
+ vtag_index = rsp->vtag0_idx;
+ } else {
+ vlan->pvid = 0;
+ vlan->pvid_insert_on = 0;
+ vlan->outer_vlan_idx = 0;
+ }
+
+ rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+ vtag_index, on);
+ if (rc < 0) {
+ otx2_err("Default tx entry failed with rc %d", rc);
+ vtag_cfg->tx.vtag0_idx = vtag_index;
+ vtag_cfg->tx.free_vtag0 = 1;
+ vtag_cfg->tx.cfg_vtag0 = 0;
+
+ rc1 = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc1)
+ otx2_err("Vtag free failed");
+
+ return rc;
+ }
+
+ if (on) {
+ vlan->pvid = vlan_id;
+ vlan->pvid_insert_on = 1;
+ vlan->outer_vlan_idx = vtag_index;
+ }
+
+ return 0;
+}
+
void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev,
__rte_unused uint16_t queue,
__rte_unused int on)
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 48/58] net/octeontx2: add FW version get operation
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (46 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 47/58] net/octeontx2: support VLAN TPID and PVID for Tx jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-06 16:06 ` Ferruh Yigit
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 49/58] net/octeontx2: add Rx burst support jerinj
` (11 subsequent siblings)
59 siblings, 1 reply; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add firmware version get operation.
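As a usage sketch (not part of the patch), an application reads this through
rte_eth_dev_fw_version_get(); for this driver the reported string is the
MKEX profile name. The buffer size and port id below are illustrative:

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_fw_version(uint16_t port_id)
{
	char fw[64];
	int rc;

	rc = rte_eth_dev_fw_version_get(port_id, fw, sizeof(fw));
	if (rc == 0)
		printf("port %u fw: %s\n", port_id, fw);
	else if (rc > 0)
		/* A positive return is the buffer size actually needed */
		printf("port %u: need %d bytes for fw version\n",
		       port_id, rc);
}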
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 3 +++
drivers/net/octeontx2/otx2_ethdev_ops.c | 22 ++++++++++++++++++++++
drivers/net/octeontx2/otx2_flow.c | 7 +++++++
7 files changed, 36 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 3567e3f63..6117e1edf 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -33,5 +33,6 @@ Rx descriptor status = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
+FW version = Y
Module EEPROM dump = Y
Registers dump = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 7edc80348..66c327cfc 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -31,5 +31,6 @@ Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+FW version = Y
Module EEPROM dump = Y
Registers dump = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index fcc1ddc03..3aa0491e1 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -26,5 +26,6 @@ Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+FW version = Y
Module EEPROM dump = Y
Registers dump = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index c5dcdc21c..b449bb032 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1335,6 +1335,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.filter_ctrl = otx2_nix_dev_filter_ctrl,
.get_module_info = otx2_nix_get_module_info,
.get_module_eeprom = otx2_nix_get_module_eeprom,
+ .fw_version_get = otx2_nix_fw_version_get,
.flow_ctrl_get = otx2_nix_flow_ctrl_get,
.flow_ctrl_set = otx2_nix_flow_ctrl_set,
.timesync_enable = otx2_nix_timesync_enable,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 3f11802eb..7bb42be8d 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -216,6 +216,7 @@ struct otx2_eth_dev {
uint8_t lso_tsov4_idx;
uint8_t lso_tsov6_idx;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+ uint8_t mkex_pfl_name[MKEX_NAME_LEN];
uint8_t max_mac_entries;
uint8_t lf_tx_stats;
uint8_t lf_rx_stats;
@@ -320,6 +321,8 @@ void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
int otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
enum rte_filter_type filter_type,
enum rte_filter_op filter_op, void *arg);
+int otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
+ size_t fw_size);
int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_module_info *modinfo);
int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 1da9222b7..d2cb5ba1c 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -209,6 +209,28 @@ otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
return 0;
}
+int
+otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
+ size_t fw_size)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc = (int)fw_size;
+
+ if (fw_size > sizeof(dev->mkex_pfl_name))
+ rc = sizeof(dev->mkex_pfl_name);
+
+ rc = strlcpy(fw_version, (char *)dev->mkex_pfl_name, rc);
+
+ rc += 1; /* Add the size of '\0' */
+ if (fw_size < (uint32_t)rc)
+ goto done;
+ else
+ return 0;
+
+done:
+ return rc;
+}
+
int
otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
{
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 1fbe6b86e..270433cd6 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -740,6 +740,7 @@ flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
struct otx2_npc_flow_info *npc = &dev->npc_flow;
struct npc_get_kex_cfg_rsp *kex_rsp;
struct otx2_mbox *mbox = dev->mbox;
+ char mkex_pfl_name[MKEX_NAME_LEN];
struct otx2_idev_kex_cfg *idev;
int rc = 0;
@@ -761,6 +762,12 @@ flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
sizeof(struct npc_get_kex_cfg_rsp));
}
+ otx2_mbox_memcpy(mkex_pfl_name,
+ idev->kex_cfg.mkex_pfl_name, MKEX_NAME_LEN);
+
+ strlcpy((char *)dev->mkex_pfl_name,
+ mkex_pfl_name, sizeof(dev->mkex_pfl_name));
+
flow_process_mkex_cfg(npc, &idev->kex_cfg);
done:
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 49/58] net/octeontx2: add Rx burst support
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (47 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 48/58] net/octeontx2: add FW version get operation jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 50/58] net/octeontx2: add Rx multi segment version jerinj
` (10 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Pavan Nikhilesh, Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Add Rx burst support.
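The generated otx2_nix_recv_pkts_* variants are plugged into
eth_dev->rx_pkt_burst, which applications reach through the standard burst
API; a minimal sketch (port/queue ids and burst size are illustrative):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SZ 32 /* illustrative burst size */

static void
rx_poll_once(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb, i;

	/* Dispatches to the selected otx2_nix_recv_pkts_* variant */
	nb = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SZ);
	for (i = 0; i < nb; i++)
		rte_pktmbuf_free(pkts[i]); /* a real app would process here */
}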
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 2 +-
drivers/net/octeontx2/otx2_ethdev.c | 6 -
drivers/net/octeontx2/otx2_ethdev.h | 2 +
drivers/net/octeontx2/otx2_rx.c | 128 ++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 249 +++++++++++++++++++++++++++-
6 files changed, 380 insertions(+), 8 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_rx.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index b1cc6d83b..76847b2c2 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -30,6 +30,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_rx.c \
otx2_tm.c \
otx2_rss.c \
otx2_mac.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index d5f272c8b..1361f1707 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -2,7 +2,7 @@
# Copyright(C) 2019 Marvell International Ltd.
#
-sources = files(
+sources = files('otx2_rx.c',
'otx2_tm.c',
'otx2_rss.c',
'otx2_mac.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index b449bb032..9b55e757e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -14,12 +14,6 @@
#include "otx2_ethdev.h"
-static inline void
-otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(eth_dev);
-}
-
static inline void
otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
{
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7bb42be8d..3ba47f6ab 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -259,6 +259,7 @@ struct otx2_eth_dev {
struct otx2_eth_qconf *tx_qconf;
struct otx2_eth_qconf *rx_qconf;
struct rte_eth_dev *eth_dev;
+ eth_rx_burst_t rx_pkt_burst_no_offload;
/* PTP counters */
bool ptp_en;
struct otx2_timesync_info tstamp;
@@ -451,6 +452,7 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
struct otx2_eth_dev *dev);
/* Rx and Tx routines */
+void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev);
void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
/* Timesync - PTP routines */
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
new file mode 100644
index 000000000..b4a3e9d55
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -0,0 +1,128 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_vect.h>
+
+#include "otx2_ethdev.h"
+#include "otx2_rx.h"
+
+#define NIX_DESCS_PER_LOOP 4
+#define CQE_CAST(x) ((struct nix_cqe_hdr_s *)(x))
+#define CQE_SZ(x) ((x) * NIX_CQ_ENTRY_SZ)
+
+static inline uint16_t
+nix_rx_nb_pkts(struct otx2_eth_rxq *rxq, const uint64_t wdata,
+ const uint16_t pkts, const uint32_t qmask)
+{
+ uint32_t available = rxq->available;
+
+ /* Update the available count if cached value is not enough */
+ if (unlikely(available < pkts)) {
+ uint64_t reg, head, tail;
+
+ /* Use LDADDA version to avoid reorder */
+ reg = otx2_atomic64_add_sync(wdata, rxq->cq_status);
+ /* CQ_OP_STATUS operation error */
+ if (reg & BIT_ULL(CQ_OP_STAT_OP_ERR) ||
+ reg & BIT_ULL(CQ_OP_STAT_CQ_ERR))
+ return 0;
+
+ tail = reg & 0xFFFFF;
+ head = (reg >> 20) & 0xFFFFF;
+ if (tail < head)
+ available = tail - head + qmask + 1;
+ else
+ available = tail - head;
+
+ rxq->available = available;
+ }
+
+ return RTE_MIN(pkts, available);
+}
+
+static __rte_always_inline uint16_t
+nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ struct otx2_eth_rxq *rxq = rx_queue;
+ const uint64_t mbuf_init = rxq->mbuf_initializer;
+ const void *lookup_mem = rxq->lookup_mem;
+ const uint64_t data_off = rxq->data_off;
+ const uintptr_t desc = rxq->desc;
+ const uint64_t wdata = rxq->wdata;
+ const uint32_t qmask = rxq->qmask;
+ uint16_t packets = 0, nb_pkts;
+ uint32_t head = rxq->head;
+ struct nix_cqe_hdr_s *cq;
+ struct rte_mbuf *mbuf;
+
+ nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+
+ while (packets < nb_pkts) {
+ /* Prefetch N desc ahead */
+ rte_prefetch_non_temporal((void *)(desc + (CQE_SZ(head + 2))));
+ cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head));
+
+ mbuf = nix_get_mbuf_from_cqe(cq, data_off);
+
+ otx2_nix_cqe_to_mbuf(cq, mbuf, lookup_mem, mbuf_init, flags);
+ otx2_nix_mbuf_to_tstamp(mbuf, rxq->tstamp, flags);
+ rx_pkts[packets++] = mbuf;
+ otx2_prefetch_store_keep(mbuf);
+ head++;
+ head &= qmask;
+ }
+
+ rxq->head = head;
+ rxq->available -= nb_pkts;
+
+ /* Free all the CQs that we've processed */
+ otx2_write64((wdata | nb_pkts), rxq->cq_door);
+
+ return nb_pkts;
+}
+
+
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+static uint16_t __rte_noinline __hot \
+otx2_nix_recv_pkts_ ## name(void *rx_queue, \
+ struct rte_mbuf **rx_pkts, uint16_t pkts) \
+{ \
+ return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \
+} \
+
+NIX_RX_FASTPATH_MODES
+#undef R
+
+static inline void
+pick_rx_func(struct rte_eth_dev *eth_dev,
+ const eth_rx_burst_t rx_burst[2][2][2][2][2][2])
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
+ eth_dev->rx_pkt_burst = rx_burst
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_RSS_F)];
+}
+
+void
+otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
+{
+ const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+ [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name,
+
+NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ pick_rx_func(eth_dev, nix_eth_rx_burst);
+
+ rte_mb();
+}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 763dc402e..fc0e87d14 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -15,10 +15,13 @@
PTYPE_TUNNEL_ARRAY_SZ) *\
sizeof(uint16_t))
+#define NIX_RX_OFFLOAD_NONE (0)
+#define NIX_RX_OFFLOAD_RSS_F BIT(0)
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+#define NIX_RX_OFFLOAD_CHECKSUM_F BIT(2)
#define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3)
-#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
+#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
#define NIX_TIMESYNC_RX_OFFSET 8
@@ -30,4 +33,248 @@ struct otx2_timesync_info {
uint8_t rx_ready;
} __rte_cache_aligned;
+union mbuf_initializer {
+ struct {
+ uint16_t data_off;
+ uint16_t refcnt;
+ uint16_t nb_segs;
+ uint16_t port;
+ } fields;
+ uint64_t value;
+};
+
+static __rte_always_inline void
+otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
+ struct otx2_timesync_info *tstamp, const uint16_t flag)
+{
+ if ((flag & NIX_RX_OFFLOAD_TSTAMP_F) &&
+ mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC &&
+ (mbuf->data_off == RTE_PKTMBUF_HEADROOM +
+ NIX_TIMESYNC_RX_OFFSET)) {
+ uint64_t *tstamp_ptr;
+
+ /* Deal with rx timestamp */
+ tstamp_ptr = rte_pktmbuf_mtod_offset(mbuf, uint64_t *,
+ -NIX_TIMESYNC_RX_OFFSET);
+ mbuf->timestamp = rte_be_to_cpu_64(*tstamp_ptr);
+ tstamp->rx_tstamp = mbuf->timestamp;
+ tstamp->rx_ready = 1;
+ mbuf->ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST
+ | PKT_RX_TIMESTAMP;
+ }
+}
+
+static __rte_always_inline uint64_t
+nix_clear_data_off(uint64_t oldval)
+{
+ union mbuf_initializer mbuf_init = { .value = oldval };
+
+ mbuf_init.fields.data_off = 0;
+ return mbuf_init.value;
+}
+
+static __rte_always_inline struct rte_mbuf *
+nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off)
+{
+ rte_iova_t buff;
+
+ /* Skip CQE, NIX_RX_PARSE_S and SG HDR(9 DWORDs) and peek buff addr */
+ buff = *((rte_iova_t *)((uint64_t *)cq + 9));
+ return (struct rte_mbuf *)(buff - data_off);
+}
+
+
+static __rte_always_inline uint32_t
+nix_ptype_get(const void * const lookup_mem, const uint64_t in)
+{
+ const uint16_t * const ptype = lookup_mem;
+ const uint16_t lg_lf_le = (in & 0xFFF000000000000) >> 48;
+ const uint16_t tu_l2 = ptype[(in & 0x000FFF000000000) >> 36];
+ const uint16_t il4_tu = ptype[PTYPE_NON_TUNNEL_ARRAY_SZ + lg_lf_le];
+
+ return (il4_tu << PTYPE_WIDTH) | tu_l2;
+}
+
+static __rte_always_inline uint32_t
+nix_rx_olflags_get(const void * const lookup_mem, const uint64_t in)
+{
+ const uint32_t * const ol_flags = (const uint32_t * const)
+ ((const uint8_t * const)lookup_mem + PTYPE_ARRAY_SZ);
+
+ return ol_flags[(in & 0xfff00000) >> 20];
+}
+
+static inline uint64_t
+nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
+ struct rte_mbuf *mbuf)
+{
+ /* There is no separate bit to check whether match_id
+ * is valid, and no flag to distinguish an
+ * RTE_FLOW_ACTION_TYPE_FLAG action from an
+ * RTE_FLOW_ACTION_TYPE_MARK action. The former is
+ * addressed by treating 0 as an invalid value and
+ * incrementing/decrementing the match_id pair when MARK
+ * is activated; the latter by defining
+ * OTX2_FLOW_MARK_DEFAULT as the value for
+ * RTE_FLOW_ACTION_TYPE_MARK.
+ * This translates to not using
+ * OTX2_FLOW_ACTION_FLAG_DEFAULT - 1 and
+ * OTX2_FLOW_ACTION_FLAG_DEFAULT for match_id,
+ * i.e. valid mark_id values range from
+ * 0 to OTX2_FLOW_ACTION_FLAG_DEFAULT - 2.
+ */
+ if (likely(match_id)) {
+ ol_flags |= PKT_RX_FDIR;
+ if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) {
+ ol_flags |= PKT_RX_FDIR_ID;
+ mbuf->hash.fdir.hi = match_id - 1;
+ }
+ }
+
+ return ol_flags;
+}
+
+static __rte_always_inline void
+otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *mbuf,
+ const void *lookup_mem, const uint64_t val,
+ const uint16_t flag)
+{
+ const struct nix_rx_parse_s *rx =
+ (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
+ const uint64_t w1 = *(const uint64_t *)rx;
+ const uint16_t len = rx->pkt_lenm1 + 1;
+ uint64_t ol_flags = 0;
+
+ /* Mark mempool obj as "get" as it is alloc'ed by NIX */
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+
+ if (flag & NIX_RX_OFFLOAD_PTYPE_F)
+ mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
+ else
+ mbuf->packet_type = 0;
+
+ if (flag & NIX_RX_OFFLOAD_RSS_F) {
+ mbuf->hash.rss = cq->tag;
+ ol_flags |= PKT_RX_RSS_HASH;
+ }
+
+ if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
+ ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
+
+ if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
+ if (rx->vtag0_gone) {
+ ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+ mbuf->vlan_tci = rx->vtag0_tci;
+ }
+ if (rx->vtag1_gone) {
+ ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+ mbuf->vlan_tci_outer = rx->vtag1_tci;
+ }
+ }
+
+ if (flag & NIX_RX_OFFLOAD_MARK_UPDATE_F)
+ ol_flags = nix_update_match_id(rx->match_id, ol_flags, mbuf);
+
+ mbuf->ol_flags = ol_flags;
+ *(uint64_t *)(&mbuf->rearm_data) = val;
+ mbuf->pkt_len = len;
+
+ mbuf->data_len = len;
+}
+
+#define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F
+#define PTYPE_F NIX_RX_OFFLOAD_PTYPE_F
+#define RSS_F NIX_RX_OFFLOAD_RSS_F
+#define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
+#define MARK_F NIX_RX_OFFLOAD_MARK_UPDATE_F
+#define TS_F NIX_RX_OFFLOAD_TSTAMP_F
+
+/* [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
+#define NIX_RX_FASTPATH_MODES \
+R(no_offload, 0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE) \
+R(rss, 0, 0, 0, 0, 0, 1, RSS_F) \
+R(ptype, 0, 0, 0, 0, 1, 0, PTYPE_F) \
+R(ptype_rss, 0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F) \
+R(cksum, 0, 0, 0, 1, 0, 0, CKSUM_F) \
+R(cksum_rss, 0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F) \
+R(cksum_ptype, 0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F) \
+R(cksum_ptype_rss, 0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)\
+R(vlan, 0, 0, 1, 0, 0, 0, RX_VLAN_F) \
+R(vlan_rss, 0, 0, 1, 0, 0, 1, RX_VLAN_F | RSS_F) \
+R(vlan_ptype, 0, 0, 1, 0, 1, 0, RX_VLAN_F | PTYPE_F) \
+R(vlan_ptype_rss, 0, 0, 1, 0, 1, 1, RX_VLAN_F | PTYPE_F | RSS_F)\
+R(vlan_cksum, 0, 0, 1, 1, 0, 0, RX_VLAN_F | CKSUM_F) \
+R(vlan_cksum_rss, 0, 0, 1, 1, 0, 1, RX_VLAN_F | CKSUM_F | RSS_F)\
+R(vlan_cksum_ptype, 0, 0, 1, 1, 1, 0, \
+ RX_VLAN_F | CKSUM_F | PTYPE_F) \
+R(vlan_cksum_ptype_rss, 0, 0, 1, 1, 1, 1, \
+ RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(mark, 0, 1, 0, 0, 0, 0, MARK_F) \
+R(mark_rss, 0, 1, 0, 0, 0, 1, MARK_F | RSS_F) \
+R(mark_ptype, 0, 1, 0, 0, 1, 0, MARK_F | PTYPE_F) \
+R(mark_ptype_rss, 0, 1, 0, 0, 1, 1, MARK_F | PTYPE_F | RSS_F)\
+R(mark_cksum, 0, 1, 0, 1, 0, 0, MARK_F | CKSUM_F) \
+R(mark_cksum_rss, 0, 1, 0, 1, 0, 1, MARK_F | CKSUM_F | RSS_F)\
+R(mark_cksum_ptype, 0, 1, 0, 1, 1, 0, MARK_F | CKSUM_F | PTYPE_F)\
+R(mark_cksum_ptype_rss, 0, 1, 0, 1, 1, 1, \
+ MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(mark_vlan, 0, 1, 1, 0, 0, 0, MARK_F | RX_VLAN_F) \
+R(mark_vlan_rss, 0, 1, 1, 0, 0, 1, MARK_F | RX_VLAN_F | RSS_F)\
+R(mark_vlan_ptype, 0, 1, 1, 0, 1, 0, \
+ MARK_F | RX_VLAN_F | PTYPE_F) \
+R(mark_vlan_ptype_rss, 0, 1, 1, 0, 1, 1, \
+ MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
+R(mark_vlan_cksum, 0, 1, 1, 1, 0, 0, \
+ MARK_F | RX_VLAN_F | CKSUM_F) \
+R(mark_vlan_cksum_rss, 0, 1, 1, 1, 0, 1, \
+ MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
+R(mark_vlan_cksum_ptype, 0, 1, 1, 1, 1, 0, \
+ MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
+R(mark_vlan_cksum_ptype_rss, 0, 1, 1, 1, 1, 1, \
+ MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(ts, 1, 0, 0, 0, 0, 0, TS_F) \
+R(ts_rss, 1, 0, 0, 0, 0, 1, TS_F | RSS_F) \
+R(ts_ptype, 1, 0, 0, 0, 1, 0, TS_F | PTYPE_F) \
+R(ts_ptype_rss, 1, 0, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F)\
+R(ts_cksum, 1, 0, 0, 1, 0, 0, TS_F | CKSUM_F) \
+R(ts_cksum_rss, 1, 0, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F)\
+R(ts_cksum_ptype, 1, 0, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F)\
+R(ts_cksum_ptype_rss, 1, 0, 0, 1, 1, 1, \
+ TS_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(ts_vlan, 1, 0, 1, 0, 0, 0, TS_F | RX_VLAN_F) \
+R(ts_vlan_rss, 1, 0, 1, 0, 0, 1, TS_F | RX_VLAN_F | RSS_F)\
+R(ts_vlan_ptype, 1, 0, 1, 0, 1, 0, TS_F | RX_VLAN_F | PTYPE_F)\
+R(ts_vlan_ptype_rss, 1, 0, 1, 0, 1, 1, \
+ TS_F | RX_VLAN_F | PTYPE_F | RSS_F) \
+R(ts_vlan_cksum, 1, 0, 1, 1, 0, 0, \
+ TS_F | RX_VLAN_F | CKSUM_F) \
+R(ts_vlan_cksum_rss, 1, 0, 1, 1, 0, 1, \
+ TS_F | RX_VLAN_F | CKSUM_F | RSS_F) \
+R(ts_vlan_cksum_ptype, 1, 0, 1, 1, 1, 0, \
+ TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
+R(ts_vlan_cksum_ptype_rss, 1, 0, 1, 1, 1, 1, \
+ TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(ts_mark, 1, 1, 0, 0, 0, 0, TS_F | MARK_F) \
+R(ts_mark_rss, 1, 1, 0, 0, 0, 1, TS_F | MARK_F | RSS_F)\
+R(ts_mark_ptype, 1, 1, 0, 0, 1, 0, TS_F | MARK_F | PTYPE_F)\
+R(ts_mark_ptype_rss, 1, 1, 0, 0, 1, 1, \
+ TS_F | MARK_F | PTYPE_F | RSS_F) \
+R(ts_mark_cksum, 1, 1, 0, 1, 0, 0, TS_F | MARK_F | CKSUM_F)\
+R(ts_mark_cksum_rss, 1, 1, 0, 1, 0, 1, \
+ TS_F | MARK_F | CKSUM_F | RSS_F)\
+R(ts_mark_cksum_ptype, 1, 1, 0, 1, 1, 0, \
+ TS_F | MARK_F | CKSUM_F | PTYPE_F) \
+R(ts_mark_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, \
+ TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(ts_mark_vlan, 1, 1, 1, 0, 0, 0, TS_F | MARK_F | RX_VLAN_F)\
+R(ts_mark_vlan_rss, 1, 1, 1, 0, 0, 1, \
+ TS_F | MARK_F | RX_VLAN_F | RSS_F)\
+R(ts_mark_vlan_ptype, 1, 1, 1, 0, 1, 0, \
+ TS_F | MARK_F | RX_VLAN_F | PTYPE_F) \
+R(ts_mark_vlan_ptype_rss, 1, 1, 1, 0, 1, 1, \
+ TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
+R(ts_mark_vlan_cksum, 1, 1, 1, 1, 0, 0, \
+ TS_F | MARK_F | RX_VLAN_F | CKSUM_F) \
+R(ts_mark_vlan_cksum_rss, 1, 1, 1, 1, 0, 1, \
+ TS_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
+R(ts_mark_vlan_cksum_ptype, 1, 1, 1, 1, 1, 0, \
+ TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
+R(ts_mark_vlan_cksum_ptype_rss, 1, 1, 1, 1, 1, 1, \
+ TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)
+
#endif /* __OTX2_RX_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 50/58] net/octeontx2: add Rx multi segment version
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (48 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 49/58] net/octeontx2: add Rx burst support jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 51/58] net/octeontx2: add Rx vector version jerinj
` (9 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K, Anatoly Burakov
Cc: ferruh.yigit, Pavan Nikhilesh
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add multi segment version of packet Receive function.
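The multi segment path is chosen when the application enables scattered Rx;
a minimal configuration sketch (the port id and queue counts are
illustrative):

#include <string.h>
#include <rte_ethdev.h>

static int
configure_scattered_rx(uint16_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	/* DEV_RX_OFFLOAD_SCATTER makes the driver install the
	 * _mseg Rx variants in otx2_eth_set_rx_function().
	 */
	conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}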
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
drivers/net/octeontx2/otx2_rx.c | 25 ++++++++++
drivers/net/octeontx2/otx2_rx.h | 55 +++++++++++++++++++++-
5 files changed, 84 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 6117e1edf..18bcf81cf 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -24,6 +24,8 @@ Inner RSS = Y
VLAN filter = Y
Flow control = Y
Flow API = Y
+Jumbo frame = Y
+Scattered Rx = Y
VLAN offload = Y
QinQ offload = Y
Packet type parsing = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 66c327cfc..97a24671e 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -24,6 +24,7 @@ Inner RSS = Y
VLAN filter = Y
Flow control = Y
Flow API = Y
+Jumbo frame = Y
VLAN offload = Y
QinQ offload = Y
Packet type parsing = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 3aa0491e1..916a6d7b0 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -19,6 +19,8 @@ RSS reta update = Y
Inner RSS = Y
VLAN filter = Y
Flow API = Y
+Jumbo frame = Y
+Scattered Rx = Y
VLAN offload = Y
QinQ offload = Y
Packet type parsing = Y
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index b4a3e9d55..0f0919338 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -91,6 +91,14 @@ otx2_nix_recv_pkts_ ## name(void *rx_queue, \
{ \
return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \
} \
+ \
+static uint16_t __rte_noinline __hot \
+otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \
+ struct rte_mbuf **rx_pkts, uint16_t pkts) \
+{ \
+ return nix_recv_pkts(rx_queue, rx_pkts, pkts, \
+ (flags) | NIX_RX_MULTI_SEG_F); \
+} \
NIX_RX_FASTPATH_MODES
#undef R
@@ -114,15 +122,32 @@ pick_rx_func(struct rte_eth_dev *eth_dev,
void
otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = {
#define R(name, f5, f4, f3, f2, f1, f0, flags) \
[f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name,
+NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+ [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_mseg_ ## name,
+
NIX_RX_FASTPATH_MODES
#undef R
};
pick_rx_func(eth_dev, nix_eth_rx_burst);
+ if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
+
+ /* Copy multi seg version with no offload for tear down sequence */
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+ dev->rx_pkt_burst_no_offload =
+ nix_eth_rx_burst_mseg[0][0][0][0][0][0];
rte_mb();
}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index fc0e87d14..1d1150786 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -23,6 +23,11 @@
#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
+/* Flags to control the cqe_to_mbuf conversion function.
+ * Defined from the MSB end to denote that they are not
+ * used as offload flags when picking the Rx function.
+ */
+#define NIX_RX_MULTI_SEG_F BIT(15)
#define NIX_TIMESYNC_RX_OFFSET 8
struct otx2_timesync_info {
@@ -133,6 +138,51 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
return ol_flags;
}
+static __rte_always_inline void
+nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx,
+ struct rte_mbuf *mbuf, uint64_t rearm)
+{
+ const rte_iova_t *iova_list;
+ struct rte_mbuf *head;
+ const rte_iova_t *eol;
+ uint8_t nb_segs;
+ uint64_t sg;
+
+ sg = *(const uint64_t *)(rx + 1);
+ nb_segs = (sg >> 48) & 0x3;
+ mbuf->nb_segs = nb_segs;
+ mbuf->data_len = sg & 0xFFFF;
+ sg = sg >> 16;
+
+ eol = ((const rte_iova_t *)(rx + 1) + ((rx->desc_sizem1 + 1) << 1));
+ /* Skip SG_S and first IOVA*/
+ iova_list = ((const rte_iova_t *)(rx + 1)) + 2;
+ nb_segs--;
+
+ rearm = rearm & ~0xFFFF;
+
+ head = mbuf;
+ while (nb_segs) {
+ mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
+ mbuf = mbuf->next;
+
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+
+ mbuf->data_len = sg & 0xFFFF;
+ sg = sg >> 16;
+ *(uint64_t *)(&mbuf->rearm_data) = rearm;
+ nb_segs--;
+ iova_list++;
+
+ if (!nb_segs && (iova_list + 1 < eol)) {
+ sg = *(const uint64_t *)(iova_list);
+ nb_segs = (sg >> 48) & 0x3;
+ head->nb_segs += nb_segs;
+ iova_list = (const rte_iova_t *)(iova_list + 1);
+ }
+ }
+}
+
static __rte_always_inline void
otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *mbuf,
const void *lookup_mem, const uint64_t val,
@@ -178,7 +228,10 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *mbuf,
*(uint64_t *)(&mbuf->rearm_data) = val;
mbuf->pkt_len = len;
- mbuf->data_len = len;
+ if (flag & NIX_RX_MULTI_SEG_F)
+ nix_cqe_xtract_mseg(rx, mbuf, val);
+ else
+ mbuf->data_len = len;
}
#define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 51/58] net/octeontx2: add Rx vector version
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (49 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 50/58] net/octeontx2: add Rx multi segment version jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 52/58] net/octeontx2: add Tx burst support jerinj
` (8 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: ferruh.yigit
From: Jerin Jacob <jerinj@marvell.com>
Add vector version of packet Receive function.
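The selection logic in this and the preceding patches boils each offload
flag down to a 0/1 index into a multi-dimensional function table. A
standalone sketch of that pattern, reduced to a single dimension (all
names and the knob are illustrative, not the driver's):

#include <stdio.h>

typedef void (*burst_fn_t)(void);

static void burst_scalar(void) { puts("scalar"); }
static void burst_vector(void) { puts("vector"); }

int main(void)
{
	/* One table dimension per flag; index 0/1 picks the variant */
	const burst_fn_t tbl[2] = { burst_vector, burst_scalar };
	unsigned int scalar_ena = 1; /* illustrative devarg-style knob */

	/* !! collapses any non-zero flag value to index 1 */
	tbl[!!scalar_ena]();
	return 0;
}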
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/octeontx2/otx2_rx.c | 259 +++++++++++++++++++++++++++++++-
1 file changed, 258 insertions(+), 1 deletion(-)
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index 0f0919338..4ba881ffb 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -83,6 +83,239 @@ nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_pkts;
}
+#if defined(RTE_ARCH_ARM64)
+
+static __rte_always_inline uint64_t
+nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
+{
+ if (w2 & BIT_ULL(21) /* vtag0_gone */) {
+ ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+ *f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
+ }
+
+ return ol_flags;
+}
+
+static __rte_always_inline uint64_t
+nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
+{
+ if (w2 & BIT_ULL(23) /* vtag1_gone */) {
+ ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+ mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
+ }
+
+ return ol_flags;
+}
+
+static __rte_always_inline uint16_t
+nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ struct otx2_eth_rxq *rxq = rx_queue; uint16_t packets = 0;
+ uint64x2_t cq0_w8, cq1_w8, cq2_w8, cq3_w8, mbuf01, mbuf23;
+ const uint64_t mbuf_initializer = rxq->mbuf_initializer;
+ const uint64x2_t data_off = vdupq_n_u64(rxq->data_off);
+ uint64_t ol_flags0, ol_flags1, ol_flags2, ol_flags3;
+ uint64x2_t rearm0 = vdupq_n_u64(mbuf_initializer);
+ uint64x2_t rearm1 = vdupq_n_u64(mbuf_initializer);
+ uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
+ uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
+ struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+ const uint16_t *lookup_mem = rxq->lookup_mem;
+ const uint32_t qmask = rxq->qmask;
+ const uint64_t wdata = rxq->wdata;
+ const uintptr_t desc = rxq->desc;
+ uint8x16_t f0, f1, f2, f3;
+ uint32_t head = rxq->head;
+
+ pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+ /* Packet count has to be floor-aligned to NIX_DESCS_PER_LOOP */
+ pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
+
+ while (packets < pkts) {
+ /* Get the CQ pointers; since the ring size is a multiple
+ * of 4, we can avoid checking head wrap-around after
+ * each access, unlike the scalar version.
+ */
+ const uintptr_t cq0 = desc + CQE_SZ(head);
+
+ /* Prefetch N desc ahead */
+ rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(8)));
+ rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(9)));
+ rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(10)));
+ rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(11)));
+
+ /* Get NIX_RX_SG_S for size and buffer pointer */
+ cq0_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0) + 64));
+ cq1_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1) + 64));
+ cq2_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2) + 64));
+ cq3_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3) + 64));
+
+ /* Extract mbuf from NIX_RX_SG_S */
+ mbuf01 = vzip2q_u64(cq0_w8, cq1_w8);
+ mbuf23 = vzip2q_u64(cq2_w8, cq3_w8);
+ mbuf01 = vqsubq_u64(mbuf01, data_off);
+ mbuf23 = vqsubq_u64(mbuf23, data_off);
+
+ /* Move mbufs to scalar registers for future use */
+ mbuf0 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 0);
+ mbuf1 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 1);
+ mbuf2 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 0);
+ mbuf3 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 1);
+
+ /* Mask to get packet len from NIX_RX_SG_S */
+ const uint8x16_t shuf_msk = {
+ 0xFF, 0xFF, /* pkt_type set as unknown */
+ 0xFF, 0xFF, /* pkt_type set as unknown */
+ 0, 1, /* octet 1~0, low 16 bits pkt_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
+ 0, 1, /* octet 1~0, 16 bits data_len */
+ 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF
+ };
+
+ /* Form the rx_descriptor_fields1 with pkt_len and data_len */
+ f0 = vqtbl1q_u8(cq0_w8, shuf_msk);
+ f1 = vqtbl1q_u8(cq1_w8, shuf_msk);
+ f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
+ f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
+
+ /* Load CQE word0 and word 1 */
+ uint64x2_t cq0_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0)));
+ uint64x2_t cq1_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1)));
+ uint64x2_t cq2_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2)));
+ uint64x2_t cq3_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3)));
+
+ if (flags & NIX_RX_OFFLOAD_RSS_F) {
+ /* Fill rss in the rx_descriptor_fields1 */
+ f0 = vsetq_lane_u32(vgetq_lane_u32(cq0_w0, 0), f0, 3);
+ f1 = vsetq_lane_u32(vgetq_lane_u32(cq1_w0, 0), f1, 3);
+ f2 = vsetq_lane_u32(vgetq_lane_u32(cq2_w0, 0), f2, 3);
+ f3 = vsetq_lane_u32(vgetq_lane_u32(cq3_w0, 0), f3, 3);
+ ol_flags0 = PKT_RX_RSS_HASH;
+ ol_flags1 = PKT_RX_RSS_HASH;
+ ol_flags2 = PKT_RX_RSS_HASH;
+ ol_flags3 = PKT_RX_RSS_HASH;
+ } else {
+ ol_flags0 = 0; ol_flags1 = 0;
+ ol_flags2 = 0; ol_flags3 = 0;
+ }
+
+ if (flags & NIX_RX_OFFLOAD_PTYPE_F) {
+ /* Fill packet_type in the rx_descriptor_fields1 */
+ f0 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
+ vgetq_lane_u64(cq0_w0, 1)), f0, 0);
+ f1 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
+ vgetq_lane_u64(cq1_w0, 1)), f1, 0);
+ f2 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
+ vgetq_lane_u64(cq2_w0, 1)), f2, 0);
+ f3 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
+ vgetq_lane_u64(cq3_w0, 1)), f3, 0);
+ }
+
+ if (flags & NIX_RX_OFFLOAD_CHECKSUM_F) {
+ ol_flags0 |= nix_rx_olflags_get(lookup_mem,
+ vgetq_lane_u64(cq0_w0, 1));
+ ol_flags1 |= nix_rx_olflags_get(lookup_mem,
+ vgetq_lane_u64(cq1_w0, 1));
+ ol_flags2 |= nix_rx_olflags_get(lookup_mem,
+ vgetq_lane_u64(cq2_w0, 1));
+ ol_flags3 |= nix_rx_olflags_get(lookup_mem,
+ vgetq_lane_u64(cq3_w0, 1));
+ }
+
+ if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
+ uint64_t cq0_w2 = *(uint64_t *)(cq0 + CQE_SZ(0) + 16);
+ uint64_t cq1_w2 = *(uint64_t *)(cq0 + CQE_SZ(1) + 16);
+ uint64_t cq2_w2 = *(uint64_t *)(cq0 + CQE_SZ(2) + 16);
+ uint64_t cq3_w2 = *(uint64_t *)(cq0 + CQE_SZ(3) + 16);
+
+ ol_flags0 = nix_vlan_update(cq0_w2, ol_flags0, &f0);
+ ol_flags1 = nix_vlan_update(cq1_w2, ol_flags1, &f1);
+ ol_flags2 = nix_vlan_update(cq2_w2, ol_flags2, &f2);
+ ol_flags3 = nix_vlan_update(cq3_w2, ol_flags3, &f3);
+
+ ol_flags0 = nix_qinq_update(cq0_w2, ol_flags0, mbuf0);
+ ol_flags1 = nix_qinq_update(cq1_w2, ol_flags1, mbuf1);
+ ol_flags2 = nix_qinq_update(cq2_w2, ol_flags2, mbuf2);
+ ol_flags3 = nix_qinq_update(cq3_w2, ol_flags3, mbuf3);
+ }
+
+ if (flags & NIX_RX_OFFLOAD_MARK_UPDATE_F) {
+ ol_flags0 = nix_update_match_id(*(uint16_t *)
+ (cq0 + CQE_SZ(0) + 38), ol_flags0, mbuf0);
+ ol_flags1 = nix_update_match_id(*(uint16_t *)
+ (cq0 + CQE_SZ(1) + 38), ol_flags1, mbuf1);
+ ol_flags2 = nix_update_match_id(*(uint16_t *)
+ (cq0 + CQE_SZ(2) + 38), ol_flags2, mbuf2);
+ ol_flags3 = nix_update_match_id(*(uint16_t *)
+ (cq0 + CQE_SZ(3) + 38), ol_flags3, mbuf3);
+ }
+
+ /* Form rearm_data with ol_flags */
+ rearm0 = vsetq_lane_u64(ol_flags0, rearm0, 1);
+ rearm1 = vsetq_lane_u64(ol_flags1, rearm1, 1);
+ rearm2 = vsetq_lane_u64(ol_flags2, rearm2, 1);
+ rearm3 = vsetq_lane_u64(ol_flags3, rearm3, 1);
+
+ /* Update rx_descriptor_fields1 */
+ vst1q_u64((uint64_t *)mbuf0->rx_descriptor_fields1, f0);
+ vst1q_u64((uint64_t *)mbuf1->rx_descriptor_fields1, f1);
+ vst1q_u64((uint64_t *)mbuf2->rx_descriptor_fields1, f2);
+ vst1q_u64((uint64_t *)mbuf3->rx_descriptor_fields1, f3);
+
+ /* Update rearm_data */
+ vst1q_u64((uint64_t *)mbuf0->rearm_data, rearm0);
+ vst1q_u64((uint64_t *)mbuf1->rearm_data, rearm1);
+ vst1q_u64((uint64_t *)mbuf2->rearm_data, rearm2);
+ vst1q_u64((uint64_t *)mbuf3->rearm_data, rearm3);
+
+ /* Store the mbufs to rx_pkts */
+ vst1q_u64((uint64_t *)&rx_pkts[packets], mbuf01);
+ vst1q_u64((uint64_t *)&rx_pkts[packets + 2], mbuf23);
+
+ /* Prefetch mbufs */
+ otx2_prefetch_store_keep(mbuf0);
+ otx2_prefetch_store_keep(mbuf1);
+ otx2_prefetch_store_keep(mbuf2);
+ otx2_prefetch_store_keep(mbuf3);
+
+ /* Mark mempool obj as "get" as it is alloc'ed by NIX */
+ __mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1);
+ __mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1);
+ __mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1);
+ __mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1);
+
+ /* Advance head pointer and packets */
+ head += NIX_DESCS_PER_LOOP; head &= qmask;
+ packets += NIX_DESCS_PER_LOOP;
+ }
+
+ rxq->head = head;
+ rxq->available -= packets;
+
+ rte_cio_wmb();
+ /* Free all the CQs that we've processed */
+ otx2_write64((rxq->wdata | packets), rxq->cq_door);
+
+ return packets;
+}
+
+#else
+
+static inline uint16_t
+nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ RTE_SET_USED(rx_queue);
+ RTE_SET_USED(rx_pkts);
+ RTE_SET_USED(pkts);
+ RTE_SET_USED(flags);
+
+ return 0;
+}
+
+#endif
#define R(name, f5, f4, f3, f2, f1, f0, flags) \
static uint16_t __rte_noinline __hot \
@@ -99,6 +332,16 @@ otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \
return nix_recv_pkts(rx_queue, rx_pkts, pkts, \
(flags) | NIX_RX_MULTI_SEG_F); \
} \
+ \
+static uint16_t __rte_noinline __hot \
+otx2_nix_recv_pkts_vec_ ## name(void *rx_queue, \
+ struct rte_mbuf **rx_pkts, uint16_t pkts) \
+{ \
+ /* TSTMP is not supported by vector */ \
+ if ((flags) & NIX_RX_OFFLOAD_TSTAMP_F) \
+ return 0; \
+ return nix_recv_pkts_vector(rx_queue, rx_pkts, pkts, (flags)); \
+} \
NIX_RX_FASTPATH_MODES
#undef R
@@ -140,7 +383,21 @@ NIX_RX_FASTPATH_MODES
#undef R
};
- pick_rx_func(eth_dev, nix_eth_rx_burst);
+ const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+ [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_vec_ ## name,
+
+NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ /* When PTP is enabled, pick the scalar Rx function, as most
+ * PTP applications receive a single packet per burst.
+ */
+ if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ pick_rx_func(eth_dev, nix_eth_rx_burst);
+ else
+ pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 52/58] net/octeontx2: add Tx burst support
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (50 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 51/58] net/octeontx2: add Rx vector version jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 53/58] net/octeontx2: add Tx multi segment version jerinj
` (7 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Pavan Nikhilesh, Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Add Tx burst support.
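As with Rx, the generated otx2_nix_xmit_pkts_* variants sit behind the
standard burst API; a minimal transmit sketch (the retry loop and ids are
illustrative):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
tx_send_all(uint16_t port_id, uint16_t queue_id,
	    struct rte_mbuf **pkts, uint16_t nb)
{
	uint16_t sent = 0;

	/* rte_eth_tx_burst() may accept fewer than nb packets when
	 * the SQ flow-control cache runs low; retry the remainder.
	 */
	while (sent < nb)
		sent += rte_eth_tx_burst(port_id, queue_id,
					 pkts + sent, nb - sent);
}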
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 5 +
doc/guides/nics/features/octeontx2_vec.ini | 5 +
doc/guides/nics/features/octeontx2_vf.ini | 5 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 6 -
drivers/net/octeontx2/otx2_ethdev.h | 1 +
drivers/net/octeontx2/otx2_tx.c | 94 ++++++++
drivers/net/octeontx2/otx2_tx.h | 261 +++++++++++++++++++++
9 files changed, 373 insertions(+), 6 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_tx.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 18bcf81cf..396979451 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
Promiscuous mode = Y
@@ -28,6 +29,10 @@ Jumbo frame = Y
Scattered Rx = Y
VLAN offload = Y
QinQ offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Inner L3 checksum = Y
+Inner L4 checksum = Y
Packet type parsing = Y
Timesync = Y
Timestamp offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 97a24671e..1435fd91e 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,7 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
Promiscuous mode = Y
@@ -27,6 +28,10 @@ Flow API = Y
Jumbo frame = Y
VLAN offload = Y
QinQ offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Inner L3 checksum = Y
+Inner L4 checksum = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 916a6d7b0..0d5137316 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -11,6 +11,7 @@ Lock-free Tx queue = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
RSS hash = Y
@@ -23,6 +24,10 @@ Jumbo frame = Y
Scattered Rx = Y
VLAN offload = Y
QinQ offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Inner L3 checksum = Y
+Inner L4 checksum = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 76847b2c2..102bf49d7 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -31,6 +31,7 @@ LIBABIVER := 1
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_rx.c \
+ otx2_tx.c \
otx2_tm.c \
otx2_rss.c \
otx2_mac.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 1361f1707..f9b796b5c 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files('otx2_rx.c',
+ 'otx2_tx.c',
'otx2_tm.c',
'otx2_rss.c',
'otx2_mac.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 9b55e757e..fdcab89b8 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -14,12 +14,6 @@
#include "otx2_ethdev.h"
-static inline void
-otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(eth_dev);
-}
-
static inline uint64_t
nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
{
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 3ba47f6ab..bcc351b76 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -453,6 +453,7 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
/* Rx and Tx routines */
void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev);
+void otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev);
void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
/* Timesync - PTP routines */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
new file mode 100644
index 000000000..16d69b74f
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_vect.h>
+
+#include "otx2_ethdev.h"
+
+#define NIX_XMIT_FC_OR_RETURN(txq, pkts) do { \
+ /* Cached value is low, Update the fc_cache_pkts */ \
+ if (unlikely((txq)->fc_cache_pkts < (pkts))) { \
+ /* Multiply with sqe_per_sqb to express in pkts */ \
+ (txq)->fc_cache_pkts = \
+ ((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem) << \
+ (txq)->sqes_per_sqb_log2; \
+ /* Check it again for the room */ \
+ if (unlikely((txq)->fc_cache_pkts < (pkts))) \
+ return 0; \
+ } \
+} while (0)
+
+
+static __rte_always_inline uint16_t
+nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t pkts, uint64_t *cmd, const uint16_t flags)
+{
+ struct otx2_eth_txq *txq = tx_queue; uint16_t i;
+ const rte_iova_t io_addr = txq->io_addr;
+ void *lmt_addr = txq->lmt_addr;
+
+ NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+ otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
+
+ /* Let's commit any changes in the packet */
+ rte_cio_wmb();
+
+ for (i = 0; i < pkts; i++) {
+ otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags);
+ /* Passing no of segdw as 4: HDR + EXT + SG + SMEM */
+ otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
+ tx_pkts[i]->ol_flags, 4, flags);
+ otx2_nix_xmit_one(cmd, lmt_addr, io_addr, flags);
+ }
+
+ /* Reduce the cached count */
+ txq->fc_cache_pkts -= pkts;
+
+ return pkts;
+}
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+static uint16_t __rte_noinline __hot \
+otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
+ struct rte_mbuf **tx_pkts, uint16_t pkts) \
+{ \
+ uint64_t cmd[sz]; \
+ \
+ return nix_xmit_pkts(tx_queue, tx_pkts, pkts, cmd, flags); \
+}
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
+static inline void
+pick_tx_func(struct rte_eth_dev *eth_dev,
+ const eth_tx_burst_t tx_burst[2][2][2][2][2])
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* [TSTMP] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
+ eth_dev->tx_pkt_burst = tx_burst
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+}
+
+void
+otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
+{
+ const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name,
+
+NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ pick_tx_func(eth_dev, nix_eth_tx_burst);
+
+ rte_mb();
+}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index 4d0993f87..db4c1f70f 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -25,4 +25,265 @@
#define NIX_TX_NEED_EXT_HDR \
(NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F)
+/* Function to determine the number of Tx subdescriptors
+ * required when the extended subdescriptor is enabled.
+ */
+static __rte_always_inline int
+otx2_nix_tx_ext_subs(const uint16_t flags)
+{
+ return (flags & NIX_TX_OFFLOAD_TSTAMP_F) ? 2 :
+ ((flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) ? 1 : 0);
+}
+
+static __rte_always_inline void
+otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
+ const uint64_t ol_flags, const uint16_t no_segdw,
+ const uint16_t flags)
+{
+ if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
+ struct nix_send_mem_s *send_mem;
+ uint16_t off = (no_segdw - 1) << 1;
+
+ send_mem = (struct nix_send_mem_s *)(cmd + off);
+ if (flags & NIX_TX_MULTI_SEG_F)
+ /* Retrieving the default desc values */
+ cmd[off] = send_mem_desc[6];
+
+ /* For packets without PKT_TX_IEEE1588_TMST set, the Tx timestamp
+ * must not be written to the registered timestamp address; instead,
+ * a dummy address eight bytes ahead is updated
+ */
+ send_mem->addr = (rte_iova_t)((uint64_t *)send_mem_desc[7] +
+ !(ol_flags & PKT_TX_IEEE1588_TMST));
+ }
+}
+
+static inline void
+otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
+{
+ struct nix_send_ext_s *send_hdr_ext;
+ struct nix_send_hdr_s *send_hdr;
+ uint64_t ol_flags = 0, mask;
+ union nix_send_hdr_w1_u w1;
+ union nix_send_sg_s *sg;
+
+ send_hdr = (struct nix_send_hdr_s *)cmd;
+ if (flags & NIX_TX_NEED_EXT_HDR) {
+ send_hdr_ext = (struct nix_send_ext_s *)(cmd + 2);
+ sg = (union nix_send_sg_s *)(cmd + 4);
+ /* Clear previous markings */
+ send_hdr_ext->w0.lso = 0;
+ send_hdr_ext->w1.u = 0;
+ } else {
+ sg = (union nix_send_sg_s *)(cmd + 2);
+ }
+
+ if (flags & NIX_TX_NEED_SEND_HDR_W1) {
+ ol_flags = m->ol_flags;
+ w1.u = 0;
+ }
+
+ if (!(flags & NIX_TX_MULTI_SEG_F)) {
+ send_hdr->w0.total = m->data_len;
+ send_hdr->w0.aura =
+ npa_lf_aura_handle_to_aura(m->pool->pool_id);
+ }
+
+ /*
+ * L3type: 2 => IPV4
+ * 3 => IPV4 with csum
+ * 4 => IPV6
+ * L3type and L3ptr need to be set for either
+ * L3 csum or L4 csum or LSO
+ */
+
+ if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
+ (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
+ const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+ const uint8_t ol3type =
+ ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
+ ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
+ !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+
+ /* Outer L3 */
+ w1.ol3type = ol3type;
+ mask = 0xffffull << ((!!ol3type) << 4);
+ w1.ol3ptr = ~mask & m->outer_l2_len;
+ w1.ol4ptr = ~mask & (w1.ol3ptr + m->outer_l3_len);
+
+ /* Outer L4 */
+ w1.ol4type = csum + (csum << 1);
+
+ /* Inner L3 */
+ w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
+ ((!!(ol_flags & PKT_TX_IPV6)) << 2);
+ w1.il3ptr = w1.ol4ptr + m->l2_len;
+ w1.il4ptr = w1.il3ptr + m->l3_len;
+ /* Increment by 1 for IPv4, as type 3 is IPv4 with csum */
+ w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+
+ /* Inner L4 */
+ w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+
+ /* When there is no tunnel header, shift the
+ * IL3/IL4 fields into the OL3/OL4 positions so
+ * they are used for the header checksum
+ */
+ mask = !ol3type;
+ w1.u = ((w1.u & 0xFFFFFFFF00000000) >> (mask << 3)) |
+ ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
+
+ } else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
+ const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+ const uint8_t outer_l2_len = m->outer_l2_len;
+
+ /* Outer L3 */
+ w1.ol3ptr = outer_l2_len;
+ w1.ol4ptr = outer_l2_len + m->outer_l3_len;
+ /* Increment by 1 for IPv4, as type 3 is IPv4 with csum */
+ w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
+ ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
+ !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+
+ /* Outer L4 */
+ w1.ol4type = csum + (csum << 1);
+
+ } else if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) {
+ const uint8_t l2_len = m->l2_len;
+
+ /* Always use OLXPTR and OLXTYPE when only
+ * one header is present
+ */
+
+ /* Inner L3 */
+ w1.ol3ptr = l2_len;
+ w1.ol4ptr = l2_len + m->l3_len;
+ /* Increment by 1 for IPv4, as type 3 is IPv4 with csum */
+ w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
+ ((!!(ol_flags & PKT_TX_IPV6)) << 2) +
+ !!(ol_flags & PKT_TX_IP_CKSUM);
+
+ /* Inner L4 */
+ w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+ }
+
+ if (flags & NIX_TX_NEED_EXT_HDR &&
+ flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
+ send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+ /* HW will update ptr after vlan0 update */
+ send_hdr_ext->w1.vlan1_ins_ptr = 12;
+ send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
+
+ send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+ /* 2B before end of l2 header */
+ send_hdr_ext->w1.vlan0_ins_ptr = 12;
+ send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
+ }
+
+ if (flags & NIX_TX_NEED_SEND_HDR_W1)
+ send_hdr->w1.u = w1.u;
+
+ if (!(flags & NIX_TX_MULTI_SEG_F)) {
+ sg->seg1_size = m->data_len;
+ *(rte_iova_t *)(++sg) = rte_mbuf_data_iova(m);
+
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+ /* Set don't free bit if reference count > 1 */
+ if (rte_pktmbuf_prefree_seg(m) == NULL)
+ send_hdr->w0.df = 1; /* SET DF */
+ }
+ /* Mark mempool object as "put" since it is freed by NIX */
+ if (!send_hdr->w0.df)
+ __mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+ }
+}
+
+
+static __rte_always_inline void
+otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr,
+ const rte_iova_t io_addr, const uint32_t flags)
+{
+ uint64_t lmt_status;
+
+ do {
+ otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags));
+ lmt_status = otx2_lmt_submit(io_addr);
+ } while (lmt_status == 0);
+}
+
+
+#define L3L4CSUM_F NIX_TX_OFFLOAD_L3_L4_CSUM_F
+#define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F
+#define VLAN_F NIX_TX_OFFLOAD_VLAN_QINQ_F
+#define NOFF_F NIX_TX_OFFLOAD_MBUF_NOFF_F
+#define TSP_F NIX_TX_OFFLOAD_TSTAMP_F
+
+/* [TSTMP] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
+#define NIX_TX_FASTPATH_MODES \
+T(no_offload, 0, 0, 0, 0, 0, 4, \
+ NIX_TX_OFFLOAD_NONE) \
+T(l3l4csum, 0, 0, 0, 0, 1, 4, \
+ L3L4CSUM_F) \
+T(ol3ol4csum, 0, 0, 0, 1, 0, 4, \
+ OL3OL4CSUM_F) \
+T(ol3ol4csum_l3l4csum, 0, 0, 0, 1, 1, 4, \
+ OL3OL4CSUM_F | L3L4CSUM_F) \
+T(vlan, 0, 0, 1, 0, 0, 6, \
+ VLAN_F) \
+T(vlan_l3l4csum, 0, 0, 1, 0, 1, 6, \
+ VLAN_F | L3L4CSUM_F) \
+T(vlan_ol3ol4csum, 0, 0, 1, 1, 0, 6, \
+ VLAN_F | OL3OL4CSUM_F) \
+T(vlan_ol3ol4csum_l3l4csum, 0, 0, 1, 1, 1, 6, \
+ VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(noff, 0, 1, 0, 0, 0, 4, \
+ NOFF_F) \
+T(noff_l3l4csum, 0, 1, 0, 0, 1, 4, \
+ NOFF_F | L3L4CSUM_F) \
+T(noff_ol3ol4csum, 0, 1, 0, 1, 0, 4, \
+ NOFF_F | OL3OL4CSUM_F) \
+T(noff_ol3ol4csum_l3l4csum, 0, 1, 0, 1, 1, 4, \
+ NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(noff_vlan, 0, 1, 1, 0, 0, 6, \
+ NOFF_F | VLAN_F) \
+T(noff_vlan_l3l4csum, 0, 1, 1, 0, 1, 6, \
+ NOFF_F | VLAN_F | L3L4CSUM_F) \
+T(noff_vlan_ol3ol4csum, 0, 1, 1, 1, 0, 6, \
+ NOFF_F | VLAN_F | OL3OL4CSUM_F) \
+T(noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 1, 6, \
+ NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(ts, 1, 0, 0, 0, 0, 8, \
+ TSP_F) \
+T(ts_l3l4csum, 1, 0, 0, 0, 1, 8, \
+ TSP_F | L3L4CSUM_F) \
+T(ts_ol3ol4csum, 1, 0, 0, 1, 0, 8, \
+ TSP_F | OL3OL4CSUM_F) \
+T(ts_ol3ol4csum_l3l4csum, 1, 0, 0, 1, 1, 8, \
+ TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(ts_vlan, 1, 0, 1, 0, 0, 8, \
+ TSP_F | VLAN_F) \
+T(ts_vlan_l3l4csum, 1, 0, 1, 0, 1, 8, \
+ TSP_F | VLAN_F | L3L4CSUM_F) \
+T(ts_vlan_ol3ol4csum, 1, 0, 1, 1, 0, 8, \
+ TSP_F | VLAN_F | OL3OL4CSUM_F) \
+T(ts_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 8, \
+ TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(ts_noff, 1, 1, 0, 0, 0, 8, \
+ TSP_F | NOFF_F) \
+T(ts_noff_l3l4csum, 1, 1, 0, 0, 1, 8, \
+ TSP_F | NOFF_F | L3L4CSUM_F) \
+T(ts_noff_ol3ol4csum, 1, 1, 0, 1, 0, 8, \
+ TSP_F | NOFF_F | OL3OL4CSUM_F) \
+T(ts_noff_ol3ol4csum_l3l4csum, 1, 1, 0, 1, 1, 8, \
+ TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(ts_noff_vlan, 1, 1, 1, 0, 0, 8, \
+ TSP_F | NOFF_F | VLAN_F) \
+T(ts_noff_vlan_l3l4csum, 1, 1, 1, 0, 1, 8, \
+ TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
+T(ts_noff_vlan_ol3ol4csum, 1, 1, 1, 1, 0, 8, \
+ TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
+T(ts_noff_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 1, 8, \
+ TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)
+
#endif /* __OTX2_TX_H__ */
--
2.21.0
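A note on the construction above: NIX_TX_FASTPATH_MODES is an X-macro table.
Each T() row names one combination of the five Tx offload flags, and the
table is expanded twice: once to stamp out a dedicated burst function whose
flags argument is a compile-time constant (so the compiler drops every
untaken branch), and once to fill the [2][2][2][2][2] lookup table that
pick_tx_func() indexes with the offload bits. Below is a minimal,
self-contained sketch of the same pattern; the names and flag values are
invented for illustration and are not the driver's real definitions.

#include <stdint.h>
#include <stdio.h>

#define FLAG_A (1 << 0) /* hypothetical offload flag */
#define FLAG_B (1 << 1) /* hypothetical offload flag */

/* [B] [A]: one T() row per flag combination */
#define FASTPATH_MODES \
T(none, 0, 0, 0)               \
T(a,    0, 1, FLAG_A)          \
T(b,    1, 0, FLAG_B)          \
T(ab,   1, 1, FLAG_A | FLAG_B)

static inline uint16_t
xmit(uint16_t pkts, const uint16_t flags)
{
	/* 'flags' is a literal in every expansion below, so each
	 * specialized function keeps only the branches it needs.
	 */
	return (flags & FLAG_A) ? pkts / 2 : pkts;
}

/* First expansion: one specialized function per mode */
#define T(name, fb, fa, flags) \
static uint16_t xmit_ ## name(uint16_t pkts) \
{ return xmit(pkts, (flags)); }
FASTPATH_MODES
#undef T

int main(void)
{
	/* Second expansion: lookup table indexed by the flag bits */
	uint16_t (*tbl[2][2])(uint16_t) = {
#define T(name, fb, fa, flags) [fb][fa] = xmit_ ## name,
FASTPATH_MODES
#undef T
	};

	printf("%u\n", (unsigned)tbl[0][1](8)); /* xmit_a: prints 4 */
	return 0;
}

The same two-pass expansion is reused below for the mseg and vector
variants of the burst function.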
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 53/58] net/octeontx2: add Tx multi segment version
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (51 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 52/58] net/octeontx2: add Tx burst support jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 54/58] net/octeontx2: add Tx vector version jerinj
` (6 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Pavan Nikhilesh
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add the multi-segment version of the packet transmit function.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.h | 4 ++
drivers/net/octeontx2/otx2_tx.c | 58 +++++++++++++++++++++
drivers/net/octeontx2/otx2_tx.h | 81 +++++++++++++++++++++++++++++
3 files changed, 143 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index bcc351b76..dff4de250 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -71,6 +71,10 @@
#define NIX_TX_NB_SEG_MAX 9
#endif
+#define NIX_TX_MSEG_SG_DWORDS \
+ ((RTE_ALIGN_MUL_CEIL(NIX_TX_NB_SEG_MAX, 3) / 3) \
+ + NIX_TX_NB_SEG_MAX)
+
/* Apply BP when CQ is 75% full */
#define NIX_CQ_BP_LEVEL (25 * 256 / 100)
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index 16d69b74f..0ac5ea652 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -49,6 +49,37 @@ nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
return pkts;
}
+static __rte_always_inline uint16_t
+nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t pkts, uint64_t *cmd, const uint16_t flags)
+{
+ struct otx2_eth_txq *txq = tx_queue; uint64_t i;
+ const rte_iova_t io_addr = txq->io_addr;
+ void *lmt_addr = txq->lmt_addr;
+ uint16_t segdw;
+
+ NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+ otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
+
+ /* Let's commit any changes in the packet */
+ rte_cio_wmb();
+
+ for (i = 0; i < pkts; i++) {
+ otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags);
+ segdw = otx2_nix_prepare_mseg(tx_pkts[i], cmd, flags);
+ otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
+ tx_pkts[i]->ol_flags, segdw,
+ flags);
+ otx2_nix_xmit_mseg_one(cmd, lmt_addr, io_addr, segdw);
+ }
+
+ /* Reduce the cached count */
+ txq->fc_cache_pkts -= pkts;
+
+ return pkts;
+}
+
#define T(name, f4, f3, f2, f1, f0, sz, flags) \
static uint16_t __rte_noinline __hot \
otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
@@ -62,6 +93,20 @@ otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
NIX_TX_FASTPATH_MODES
#undef T
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+static uint16_t __rte_noinline __hot \
+otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \
+ struct rte_mbuf **tx_pkts, uint16_t pkts) \
+{ \
+ uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
+ \
+ return nix_xmit_pkts_mseg(tx_queue, tx_pkts, pkts, cmd, \
+ (flags) | NIX_TX_MULTI_SEG_F); \
+}
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
static inline void
pick_tx_func(struct rte_eth_dev *eth_dev,
const eth_tx_burst_t tx_burst[2][2][2][2][2])
@@ -80,15 +125,28 @@ pick_tx_func(struct rte_eth_dev *eth_dev,
void
otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2] = {
#define T(name, f4, f3, f2, f1, f0, sz, flags) \
[f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name,
+NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_mseg_ ## name,
+
NIX_TX_FASTPATH_MODES
#undef T
};
pick_tx_func(eth_dev, nix_eth_tx_burst);
+ if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
+
rte_mb();
}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index db4c1f70f..b75a220ea 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -212,6 +212,87 @@ otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr,
} while (lmt_status == 0);
}
+static __rte_always_inline uint16_t
+otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
+{
+ struct nix_send_hdr_s *send_hdr;
+ union nix_send_sg_s *sg;
+ struct rte_mbuf *m_next;
+ uint64_t *slist, sg_u;
+ uint64_t nb_segs;
+ uint64_t segdw;
+ uint8_t off, i;
+
+ send_hdr = (struct nix_send_hdr_s *)cmd;
+ send_hdr->w0.total = m->pkt_len;
+ send_hdr->w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id);
+
+ if (flags & NIX_TX_NEED_EXT_HDR)
+ off = 2;
+ else
+ off = 0;
+
+ sg = (union nix_send_sg_s *)&cmd[2 + off];
+ sg_u = sg->u;
+ slist = &cmd[3 + off];
+
+ i = 0;
+ nb_segs = m->nb_segs;
+
+ /* Fill mbuf segments */
+ do {
+ m_next = m->next;
+ sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
+ *slist = rte_mbuf_data_iova(m);
+ /* Set invert df if reference count > 1 */
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
+ sg_u |=
+ ((uint64_t)(rte_pktmbuf_prefree_seg(m) == NULL) <<
+ (i + 55));
+ /* Mark mempool object as "put" since it is freed by NIX */
+ if (!(sg_u & (1ULL << (i + 55)))) {
+ m->next = NULL;
+ __mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+ }
+ slist++;
+ i++;
+ nb_segs--;
+ if (i > 2 && nb_segs) {
+ i = 0;
+ /* Next SG subdesc */
+ *(uint64_t *)slist = sg_u & 0xFC00000000000000;
+ sg->u = sg_u;
+ sg->segs = 3;
+ sg = (union nix_send_sg_s *)slist;
+ sg_u = sg->u;
+ slist++;
+ }
+ m = m_next;
+ } while (nb_segs);
+
+ sg->u = sg_u;
+ sg->segs = i;
+ segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
+ /* Round up extra dwords to a multiple of 2 */
+ segdw = (segdw >> 1) + (segdw & 0x1);
+ /* Default dwords */
+ segdw += (off >> 1) + 1 + !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
+ send_hdr->w0.sizem1 = segdw - 1;
+
+ return segdw;
+}
+
+static __rte_always_inline void
+otx2_nix_xmit_mseg_one(uint64_t *cmd, void *lmt_addr,
+ rte_iova_t io_addr, uint16_t segdw)
+{
+ uint64_t lmt_status;
+
+ do {
+ otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
+ lmt_status = otx2_lmt_submit(io_addr);
+ } while (lmt_status == 0);
+}
#define L3L4CSUM_F NIX_TX_OFFLOAD_L3_L4_CSUM_F
#define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F
--
2.21.0
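From the application side, the mseg path above is only installed when
DEV_TX_OFFLOAD_MULTI_SEGS is requested at configure time. A hedged usage
sketch that exercises it by transmitting a two-segment chain; the port and
mempool setup are assumed, the segment sizes are illustrative, and the
helper name is hypothetical:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
tx_two_seg_pkt(uint16_t port, struct rte_mempool *mp)
{
	struct rte_mbuf *head = rte_pktmbuf_alloc(mp);
	struct rte_mbuf *tail = rte_pktmbuf_alloc(mp);

	if (head == NULL || tail == NULL)
		goto fail;

	rte_pktmbuf_append(head, 64); /* first 64B segment */
	rte_pktmbuf_append(tail, 64); /* second 64B segment */

	if (rte_pktmbuf_chain(head, tail) < 0) /* head->nb_segs becomes 2 */
		goto fail;
	tail = NULL; /* now owned by the chain */

	if (rte_eth_tx_burst(port, 0, &head, 1) != 1)
		goto fail;
	return 0;

fail:
	rte_pktmbuf_free(head);
	rte_pktmbuf_free(tail);
	return -1;
}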
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 54/58] net/octeontx2: add Tx vector version
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (52 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 53/58] net/octeontx2: add Tx multi segment version jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 55/58] net/octeontx2: add device start operation jerinj
` (5 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Pavan Nikhilesh
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add vector version of packet transmit function.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/net/octeontx2/otx2_tx.c | 883 +++++++++++++++++++++++++++++++-
1 file changed, 882 insertions(+), 1 deletion(-)
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index 0ac5ea652..6bce55112 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -80,6 +80,859 @@ nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
return pkts;
}
+#if defined(RTE_ARCH_ARM64)
+
+#define NIX_DESCS_PER_LOOP 4
+static __rte_always_inline uint16_t
+nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ uint64x2_t dataoff_iova0, dataoff_iova1, dataoff_iova2, dataoff_iova3;
+ uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;
+ uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+ uint64x2_t senddesc01_w0, senddesc23_w0;
+ uint64x2_t senddesc01_w1, senddesc23_w1;
+ uint64x2_t sgdesc01_w0, sgdesc23_w0;
+ uint64x2_t sgdesc01_w1, sgdesc23_w1;
+ struct otx2_eth_txq *txq = tx_queue;
+ uint64_t *lmt_addr = txq->lmt_addr;
+ rte_iova_t io_addr = txq->io_addr;
+ uint64x2_t ltypes01, ltypes23;
+ uint64x2_t xtmp128, ytmp128;
+ uint64x2_t xmask01, xmask23;
+ uint64x2_t mbuf01, mbuf23;
+ uint64x2_t cmd00, cmd01;
+ uint64x2_t cmd10, cmd11;
+ uint64x2_t cmd20, cmd21;
+ uint64x2_t cmd30, cmd31;
+ uint64_t lmt_status, i;
+
+ pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
+
+ NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+ /* Reduce the cached count */
+ txq->fc_cache_pkts -= pkts;
+
+ /* Let's commit any changes in the packet */
+ rte_cio_wmb();
+
+ senddesc01_w0 = vld1q_dup_u64(&txq->cmd[0]);
+ senddesc23_w0 = senddesc01_w0;
+ senddesc01_w1 = vdupq_n_u64(0);
+ senddesc23_w1 = senddesc01_w1;
+ sgdesc01_w0 = vld1q_dup_u64(&txq->cmd[2]);
+ sgdesc23_w0 = sgdesc01_w0;
+
+ for (i = 0; i < pkts; i += NIX_DESCS_PER_LOOP) {
+ mbuf01 = vld1q_u64((uint64_t *)tx_pkts);
+ mbuf23 = vld1q_u64((uint64_t *)(tx_pkts + 2));
+
+ /* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */
+ senddesc01_w0 = vbicq_u64(senddesc01_w0,
+ vdupq_n_u64(0xFFFFFFFF));
+ sgdesc01_w0 = vbicq_u64(sgdesc01_w0,
+ vdupq_n_u64(0xFFFFFFFF));
+
+ senddesc23_w0 = senddesc01_w0;
+ sgdesc23_w0 = sgdesc01_w0;
+
+ tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP;
+
+ /* Move mbuf pointers to their buf_iova field */
+ mbuf0 = (uint64_t *)vgetq_lane_u64(mbuf01, 0);
+ mbuf1 = (uint64_t *)vgetq_lane_u64(mbuf01, 1);
+ mbuf2 = (uint64_t *)vgetq_lane_u64(mbuf23, 0);
+ mbuf3 = (uint64_t *)vgetq_lane_u64(mbuf23, 1);
+
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mbuf, buf_iova));
+ /*
+ * Get each mbuf's ol_flags, iova, pkt_len and data_off:
+ * dataoff_iovaX.D[0] = iova,
+ * dataoff_iovaX.D[1](15:0) = mbuf->dataoff
+ * len_olflagsX.D[0] = ol_flags,
+ * len_olflagsX.D[1](63:32) = mbuf->pkt_len
+ */
+ dataoff_iova0 = vld1q_u64(mbuf0);
+ len_olflags0 = vld1q_u64(mbuf0 + 2);
+ dataoff_iova1 = vld1q_u64(mbuf1);
+ len_olflags1 = vld1q_u64(mbuf1 + 2);
+ dataoff_iova2 = vld1q_u64(mbuf2);
+ len_olflags2 = vld1q_u64(mbuf2 + 2);
+ dataoff_iova3 = vld1q_u64(mbuf3);
+ len_olflags3 = vld1q_u64(mbuf3 + 2);
+
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+ struct rte_mbuf *mbuf;
+ /* Set don't free bit if reference count > 1 */
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
+ offsetof(struct rte_mbuf, buf_iova));
+
+ if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+ vsetq_lane_u64(0x80000, xmask01, 0);
+ else
+ __mempool_check_cookies(mbuf->pool,
+ (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
+ offsetof(struct rte_mbuf, buf_iova));
+ if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+ vsetq_lane_u64(0x80000, xmask01, 1);
+ else
+ __mempool_check_cookies(mbuf->pool,
+ (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
+ offsetof(struct rte_mbuf, buf_iova));
+ if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+ vsetq_lane_u64(0x80000, xmask23, 0);
+ else
+ __mempool_check_cookies(mbuf->pool,
+ (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
+ offsetof(struct rte_mbuf, buf_iova));
+ if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+ vsetq_lane_u64(0x80000, xmask23, 1);
+ else
+ __mempool_check_cookies(mbuf->pool,
+ (void **)&mbuf,
+ 1, 0);
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+ } else {
+ struct rte_mbuf *mbuf;
+ /* Mark mempool object as "put" since
+ * it is freed by NIX
+ */
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
+ offsetof(struct rte_mbuf, buf_iova));
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
+ offsetof(struct rte_mbuf, buf_iova));
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
+ offsetof(struct rte_mbuf, buf_iova));
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
+ offsetof(struct rte_mbuf, buf_iova));
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+ 1, 0);
+ RTE_SET_USED(mbuf);
+ }
+
+ /* Move mbuf pointers to their pool field */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mbuf, pool) -
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mbuf, pool) -
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mbuf, pool) -
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mbuf, pool) -
+ offsetof(struct rte_mbuf, buf_iova));
+
+ if (flags &
+ (NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
+ NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
+ /* Get tx_offload for ol2, ol3, l2, l3 lengths */
+ /*
+ * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
+ * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
+ */
+
+ asm volatile ("LD1 {%[a].D}[0],[%[in]]\n\t" :
+ [a]"+w"(senddesc01_w1) :
+ [in]"r"(mbuf0 + 2) : "memory");
+
+ asm volatile ("LD1 {%[a].D}[1],[%[in]]\n\t" :
+ [a]"+w"(senddesc01_w1) :
+ [in]"r"(mbuf1 + 2) : "memory");
+
+ asm volatile ("LD1 {%[b].D}[0],[%[in]]\n\t" :
+ [b]"+w"(senddesc23_w1) :
+ [in]"r"(mbuf2 + 2) : "memory");
+
+ asm volatile ("LD1 {%[b].D}[1],[%[in]]\n\t" :
+ [b]"+w"(senddesc23_w1) :
+ [in]"r"(mbuf3 + 2) : "memory");
+
+ /* Get pool pointer alone */
+ mbuf0 = (uint64_t *)*mbuf0;
+ mbuf1 = (uint64_t *)*mbuf1;
+ mbuf2 = (uint64_t *)*mbuf2;
+ mbuf3 = (uint64_t *)*mbuf3;
+ } else {
+ /* Get pool pointer alone */
+ mbuf0 = (uint64_t *)*mbuf0;
+ mbuf1 = (uint64_t *)*mbuf1;
+ mbuf2 = (uint64_t *)*mbuf2;
+ mbuf3 = (uint64_t *)*mbuf3;
+ }
+
+ const uint8x16_t shuf_mask2 = {
+ 0x4, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xc, 0xd, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ };
+ xtmp128 = vzip2q_u64(len_olflags0, len_olflags1);
+ ytmp128 = vzip2q_u64(len_olflags2, len_olflags3);
+
+ /* Clear dataoff_iovaX.D[1] bits other than dataoff(15:0) */
+ const uint64x2_t and_mask0 = {
+ 0xFFFFFFFFFFFFFFFF,
+ 0x000000000000FFFF,
+ };
+
+ dataoff_iova0 = vandq_u64(dataoff_iova0, and_mask0);
+ dataoff_iova1 = vandq_u64(dataoff_iova1, and_mask0);
+ dataoff_iova2 = vandq_u64(dataoff_iova2, and_mask0);
+ dataoff_iova3 = vandq_u64(dataoff_iova3, and_mask0);
+
+ /*
+ * Pick only 16 bits of pktlen present at bits 63:32
+ * and place them at bits 15:0.
+ */
+ xtmp128 = vqtbl1q_u8(xtmp128, shuf_mask2);
+ ytmp128 = vqtbl1q_u8(ytmp128, shuf_mask2);
+
+ /* Add pairwise to get dataoff + iova in sgdesc_w1 */
+ sgdesc01_w1 = vpaddq_u64(dataoff_iova0, dataoff_iova1);
+ sgdesc23_w1 = vpaddq_u64(dataoff_iova2, dataoff_iova3);
+
+ /* Orr both sgdesc_w0 and senddesc_w0 with 16 bits of
+ * pktlen at 15:0 position.
+ */
+ sgdesc01_w0 = vorrq_u64(sgdesc01_w0, xtmp128);
+ sgdesc23_w0 = vorrq_u64(sgdesc23_w0, ytmp128);
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xtmp128);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, ytmp128);
+
+ if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
+ !(flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
+ /*
+ * Lookup table to translate ol_flags to
+ * il3/il4 types. But we still use ol3/ol4 types in
+ * senddesc_w1 as only one header processing is enabled.
+ */
+ const uint8x16_t tbl = {
+ /* [0-15] = il4type:il3type */
+ 0x04, /* none (IPv6 assumed) */
+ 0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
+ 0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
+ 0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
+ 0x03, /* PKT_TX_IP_CKSUM */
+ 0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
+ 0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
+ 0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
+ 0x02, /* PKT_TX_IPV4 */
+ 0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
+ 0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
+ 0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
+ 0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
+ 0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_TCP_CKSUM
+ */
+ 0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_SCTP_CKSUM
+ */
+ 0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_UDP_CKSUM
+ */
+ };
+
+ /* Extract olflags to translate to iltypes */
+ xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
+ ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
+
+ /*
+ * E(47):L3_LEN(9):L2_LEN(7+z)
+ * E(47):L3_LEN(9):L2_LEN(7+z)
+ */
+ senddesc01_w1 = vshlq_n_u64(senddesc01_w1, 1);
+ senddesc23_w1 = vshlq_n_u64(senddesc23_w1, 1);
+
+ /* Move OLFLAGS bits 55:52 to 51:48
+ * with zeros prepended on the byte; the
+ * rest are don't-care
+ */
+ xtmp128 = vshrq_n_u8(xtmp128, 4);
+ ytmp128 = vshrq_n_u8(ytmp128, 4);
+ /*
+ * E(48):L3_LEN(8):L2_LEN(z+7)
+ * E(48):L3_LEN(8):L2_LEN(z+7)
+ */
+ const int8x16_t tshft3 = {
+ -1, 0, 8, 8, 8, 8, 8, 8,
+ -1, 0, 8, 8, 8, 8, 8, 8,
+ };
+
+ senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
+ senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
+
+ /* Do the lookup */
+ ltypes01 = vqtbl1q_u8(tbl, xtmp128);
+ ltypes23 = vqtbl1q_u8(tbl, ytmp128);
+
+ /* Just use ld1q to retrieve aura
+ * when we don't need tx_offload
+ */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mempool, pool_id));
+
+ /* Pick only relevant fields i.e Bit 48:55 of iltype
+ * and place it in ol3/ol4type of senddesc_w1
+ */
+ const uint8x16_t shuf_mask0 = {
+ 0xFF, 0xFF, 0xFF, 0xFF, 0x6, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xE, 0xFF, 0xFF, 0xFF,
+ };
+
+ ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
+ ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
+
+ /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
+ * a [E(32):E(16):OL3(8):OL2(8)]
+ * a = a + (a << 8)
+ * a [E(32):E(16):(OL3+OL2):OL2]
+ * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
+ */
+ senddesc01_w1 = vaddq_u8(senddesc01_w1,
+ vshlq_n_u16(senddesc01_w1, 8));
+ senddesc23_w1 = vaddq_u8(senddesc23_w1,
+ vshlq_n_u16(senddesc23_w1, 8));
+
+ /* Create first half of 4W cmd for 4 mbufs (sgdesc) */
+ cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
+ cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
+
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+ asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
+
+ asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
+ xmask01 = vshlq_n_u64(xmask01, 20);
+ xmask23 = vshlq_n_u64(xmask23, 20);
+
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+ /* Move ltypes to senddesc*_w1 */
+ senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
+ senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
+
+ /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
+ cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
+ cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
+ cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
+ cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
+
+ } else if (!(flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
+ (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
+ /*
+ * Lookup table to translate ol_flags to
+ * ol3/ol4 types.
+ */
+
+ const uint8x16_t tbl = {
+ /* [0-15] = ol4type:ol3type */
+ 0x00, /* none */
+ 0x03, /* OUTER_IP_CKSUM */
+ 0x02, /* OUTER_IPV4 */
+ 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
+ 0x04, /* OUTER_IPV6 */
+ 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
+ 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
+ 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
+ * OUTER_IP_CKSUM
+ */
+ 0x00, /* OUTER_UDP_CKSUM */
+ 0x33, /* OUTER_UDP_CKSUM | OUTER_IP_CKSUM */
+ 0x32, /* OUTER_UDP_CKSUM | OUTER_IPV4 */
+ 0x33, /* OUTER_UDP_CKSUM | OUTER_IPV4 |
+ * OUTER_IP_CKSUM
+ */
+ 0x34, /* OUTER_UDP_CKSUM | OUTER_IPV6 */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IP_CKSUM
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IPV4
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IPV4 | OUTER_IP_CKSUM
+ */
+ };
+
+ /* Extract olflags to translate to oltypes */
+ xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
+ ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
+
+ /*
+ * E(47):OL3_LEN(9):OL2_LEN(7+z)
+ * E(47):OL3_LEN(9):OL2_LEN(7+z)
+ */
+ const uint8x16_t shuf_mask5 = {
+ 0x6, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xE, 0xD, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ };
+ senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
+ senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
+
+ /* Extract outer ol flags only */
+ const uint64x2_t o_cksum_mask = {
+ 0x1C00020000000000,
+ 0x1C00020000000000,
+ };
+
+ xtmp128 = vandq_u64(xtmp128, o_cksum_mask);
+ ytmp128 = vandq_u64(ytmp128, o_cksum_mask);
+
+ /* Extract OUTER_UDP_CKSUM bit 41 and
+ * move it to bit 61
+ */
+
+ xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
+ ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
+
+ /* Shift oltype by 2 to start nibble from BIT(56)
+ * instead of BIT(58)
+ */
+ xtmp128 = vshrq_n_u8(xtmp128, 2);
+ ytmp128 = vshrq_n_u8(ytmp128, 2);
+ /*
+ * E(48):L3_LEN(8):L2_LEN(z+7)
+ * E(48):L3_LEN(8):L2_LEN(z+7)
+ */
+ const int8x16_t tshft3 = {
+ -1, 0, 8, 8, 8, 8, 8, 8,
+ -1, 0, 8, 8, 8, 8, 8, 8,
+ };
+
+ senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
+ senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
+
+ /* Do the lookup */
+ ltypes01 = vqtbl1q_u8(tbl, xtmp128);
+ ltypes23 = vqtbl1q_u8(tbl, ytmp128);
+
+ /* Just use ld1q to retrieve aura
+ * when we don't need tx_offload
+ */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mempool, pool_id));
+
+ /* Pick only relevant fields i.e Bit 56:63 of oltype
+ * and place it in ol3/ol4type of senddesc_w1
+ */
+ const uint8x16_t shuf_mask0 = {
+ 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xFF, 0xFF, 0xFF,
+ };
+
+ ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
+ ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
+
+ /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
+ * a [E(32):E(16):OL3(8):OL2(8)]
+ * a = a + (a << 8)
+ * a [E(32):E(16):(OL3+OL2):OL2]
+ * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
+ */
+ senddesc01_w1 = vaddq_u8(senddesc01_w1,
+ vshlq_n_u16(senddesc01_w1, 8));
+ senddesc23_w1 = vaddq_u8(senddesc23_w1,
+ vshlq_n_u16(senddesc23_w1, 8));
+
+ /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
+ cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
+ cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
+
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+ asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
+
+ asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
+ xmask01 = vshlq_n_u64(xmask01, 20);
+ xmask23 = vshlq_n_u64(xmask23, 20);
+
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+ /* Move ltypes to senddesc*_w1 */
+ senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
+ senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
+
+ /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
+ cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
+ cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
+ cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
+ cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
+
+ } else if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
+ (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
+ /* Lookup table to translate ol_flags to
+ * ol4type, ol3type, il4type, il3type of senddesc_w1
+ */
+ const uint8x16x2_t tbl = {
+ {
+ {
+ /* [0-15] = il4type:il3type */
+ 0x04, /* none (IPv6) */
+ 0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
+ 0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
+ 0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
+ 0x03, /* PKT_TX_IP_CKSUM */
+ 0x13, /* PKT_TX_IP_CKSUM |
+ * PKT_TX_TCP_CKSUM
+ */
+ 0x23, /* PKT_TX_IP_CKSUM |
+ * PKT_TX_SCTP_CKSUM
+ */
+ 0x33, /* PKT_TX_IP_CKSUM |
+ * PKT_TX_UDP_CKSUM
+ */
+ 0x02, /* PKT_TX_IPV4 */
+ 0x12, /* PKT_TX_IPV4 |
+ * PKT_TX_TCP_CKSUM
+ */
+ 0x22, /* PKT_TX_IPV4 |
+ * PKT_TX_SCTP_CKSUM
+ */
+ 0x32, /* PKT_TX_IPV4 |
+ * PKT_TX_UDP_CKSUM
+ */
+ 0x03, /* PKT_TX_IPV4 |
+ * PKT_TX_IP_CKSUM
+ */
+ 0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_TCP_CKSUM
+ */
+ 0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_SCTP_CKSUM
+ */
+ 0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_UDP_CKSUM
+ */
+ },
+
+ {
+ /* [16-31] = ol4type:ol3type */
+ 0x00, /* none */
+ 0x03, /* OUTER_IP_CKSUM */
+ 0x02, /* OUTER_IPV4 */
+ 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
+ 0x04, /* OUTER_IPV6 */
+ 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
+ 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
+ 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
+ * OUTER_IP_CKSUM
+ */
+ 0x00, /* OUTER_UDP_CKSUM */
+ 0x33, /* OUTER_UDP_CKSUM |
+ * OUTER_IP_CKSUM
+ */
+ 0x32, /* OUTER_UDP_CKSUM |
+ * OUTER_IPV4
+ */
+ 0x33, /* OUTER_UDP_CKSUM |
+ * OUTER_IPV4 | OUTER_IP_CKSUM
+ */
+ 0x34, /* OUTER_UDP_CKSUM |
+ * OUTER_IPV6
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IP_CKSUM
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IPV4
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IPV4 | OUTER_IP_CKSUM
+ */
+ },
+ }
+ };
+
+ /* Extract olflags to translate to oltype & iltype */
+ xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
+ ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
+
+ /*
+ * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
+ * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
+ */
+ const uint32x4_t tshft_4 = {
+ 1, 0,
+ 1, 0,
+ };
+ senddesc01_w1 = vshlq_u32(senddesc01_w1, tshft_4);
+ senddesc23_w1 = vshlq_u32(senddesc23_w1, tshft_4);
+
+ /*
+ * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
+ * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
+ */
+ const uint8x16_t shuf_mask5 = {
+ 0x6, 0x5, 0x0, 0x1, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xE, 0xD, 0x8, 0x9, 0xFF, 0xFF, 0xFF, 0xFF,
+ };
+ senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
+ senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
+
+ /* Extract outer and inner header ol_flags */
+ const uint64x2_t oi_cksum_mask = {
+ 0x1CF0020000000000,
+ 0x1CF0020000000000,
+ };
+
+ xtmp128 = vandq_u64(xtmp128, oi_cksum_mask);
+ ytmp128 = vandq_u64(ytmp128, oi_cksum_mask);
+
+ /* Extract OUTER_UDP_CKSUM bit 41 and
+ * move it to bit 61
+ */
+
+ xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
+ ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
+
+ /* Shift right oltype by 2 and iltype by 4
+ * to start oltype nibble from BIT(58)
+ * instead of BIT(56) and iltype nibble from BIT(48)
+ * instead of BIT(52).
+ */
+ const int8x16_t tshft5 = {
+ 8, 8, 8, 8, 8, 8, -4, -2,
+ 8, 8, 8, 8, 8, 8, -4, -2,
+ };
+
+ xtmp128 = vshlq_u8(xtmp128, tshft5);
+ ytmp128 = vshlq_u8(ytmp128, tshft5);
+ /*
+ * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
+ * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
+ */
+ const int8x16_t tshft3 = {
+ -1, 0, -1, 0, 0, 0, 0, 0,
+ -1, 0, -1, 0, 0, 0, 0, 0,
+ };
+
+ senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
+ senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
+
+ /* Mark Bit(4) of oltype */
+ const uint64x2_t oi_cksum_mask2 = {
+ 0x1000000000000000,
+ 0x1000000000000000,
+ };
+
+ xtmp128 = vorrq_u64(xtmp128, oi_cksum_mask2);
+ ytmp128 = vorrq_u64(ytmp128, oi_cksum_mask2);
+
+ /* Do the lookup */
+ ltypes01 = vqtbl2q_u8(tbl, xtmp128);
+ ltypes23 = vqtbl2q_u8(tbl, ytmp128);
+
+ /* Just use ld1q to retrieve aura
+ * when we don't need tx_offload
+ */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mempool, pool_id));
+
+ /* Pick only relevant fields i.e Bit 48:55 of iltype and
+ * Bit 56:63 of oltype and place it in corresponding
+ * place in senddesc_w1.
+ */
+ const uint8x16_t shuf_mask0 = {
+ 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0x6, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xE, 0xFF, 0xFF,
+ };
+
+ ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
+ ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
+
+ /* Prepare l4ptr, l3ptr, ol4ptr, ol3ptr from
+ * l3len, l2len, ol3len, ol2len.
+ * a [E(32):L3(8):L2(8):OL3(8):OL2(8)]
+ * a = a + (a << 8)
+ * a [E:(L3+L2):(L2+OL3):(OL3+OL2):OL2]
+ * a = a + (a << 16)
+ * a [E:(L3+L2+OL3+OL2):(L2+OL3+OL2):(OL3+OL2):OL2]
+ * => E(32):IL4PTR(8):IL3PTR(8):OL4PTR(8):OL3PTR(8)
+ */
+ senddesc01_w1 = vaddq_u8(senddesc01_w1,
+ vshlq_n_u32(senddesc01_w1, 8));
+ senddesc23_w1 = vaddq_u8(senddesc23_w1,
+ vshlq_n_u32(senddesc23_w1, 8));
+
+ /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
+ cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
+ cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
+
+ /* Continue preparing l4ptr, l3ptr, ol4ptr, ol3ptr */
+ senddesc01_w1 = vaddq_u8(senddesc01_w1,
+ vshlq_n_u32(senddesc01_w1, 16));
+ senddesc23_w1 = vaddq_u8(senddesc23_w1,
+ vshlq_n_u32(senddesc23_w1, 16));
+
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+ asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
+
+ asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
+ xmask01 = vshlq_n_u64(xmask01, 20);
+ xmask23 = vshlq_n_u64(xmask23, 20);
+
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+ /* Move ltypes to senddesc*_w1 */
+ senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
+ senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
+
+ /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
+ cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
+ cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
+ cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
+ cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
+ } else {
+ /* Just use ld1q to retrieve aura
+ * when we don't need tx_offload
+ */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mempool, pool_id));
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+ asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
+
+ asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
+ xmask01 = vshlq_n_u64(xmask01, 20);
+ xmask23 = vshlq_n_u64(xmask23, 20);
+
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+
+ /* Create 4W cmd for 4 mbufs (sendhdr, sgdesc) */
+ cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
+ cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
+ cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
+ cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
+ cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
+ cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
+ }
+
+ do {
+ vst1q_u64(lmt_addr, cmd00);
+ vst1q_u64(lmt_addr + 2, cmd01);
+ vst1q_u64(lmt_addr + 4, cmd10);
+ vst1q_u64(lmt_addr + 6, cmd11);
+ vst1q_u64(lmt_addr + 8, cmd20);
+ vst1q_u64(lmt_addr + 10, cmd21);
+ vst1q_u64(lmt_addr + 12, cmd30);
+ vst1q_u64(lmt_addr + 14, cmd31);
+ lmt_status = otx2_lmt_submit(io_addr);
+
+ } while (lmt_status == 0);
+ }
+
+ return pkts;
+}
+
+#else
+static __rte_always_inline uint16_t
+nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ RTE_SET_USED(tx_queue);
+ RTE_SET_USED(tx_pkts);
+ RTE_SET_USED(pkts);
+ RTE_SET_USED(flags);
+ return 0;
+}
+#endif
+
#define T(name, f4, f3, f2, f1, f0, sz, flags) \
static uint16_t __rte_noinline __hot \
otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
@@ -107,6 +960,21 @@ otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \
NIX_TX_FASTPATH_MODES
#undef T
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+static uint16_t __rte_noinline __hot \
+otx2_nix_xmit_pkts_vec_ ## name(void *tx_queue, \
+ struct rte_mbuf **tx_pkts, uint16_t pkts) \
+{ \
+ /* VLAN and TSTMP are not supported by vec */ \
+ if ((flags) & NIX_TX_OFFLOAD_VLAN_QINQ_F || \
+ (flags) & NIX_TX_OFFLOAD_TSTAMP_F) \
+ return 0; \
+ return nix_xmit_pkts_vector(tx_queue, tx_pkts, pkts, (flags)); \
+}
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
static inline void
pick_tx_func(struct rte_eth_dev *eth_dev,
const eth_tx_burst_t tx_burst[2][2][2][2][2])
@@ -143,7 +1011,20 @@ NIX_TX_FASTPATH_MODES
#undef T
};
- pick_tx_func(eth_dev, nix_eth_tx_burst);
+ const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_vec_ ## name,
+
+NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ if (dev->scalar_ena ||
+ (dev->tx_offload_flags &
+ (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F)))
+ pick_tx_func(eth_dev, nix_eth_tx_burst);
+ else
+ pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
--
2.21.0
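Note the selection logic in the last hunk: the vector path is the default,
and the driver falls back to scalar when VLAN/QinQ insertion or Tx
timestamping is enabled, or when scalar mode is forced through devargs. A
hedged sketch of a Tx configuration that keeps the vector path eligible;
the queue counts are illustrative and the helper name is hypothetical:

#include <rte_ethdev.h>

static int
configure_vec_friendly_tx(uint16_t port)
{
	struct rte_eth_conf conf = { 0 };

	/* Checksum offloads and fast free are handled by the vector
	 * routine; requesting VLAN insertion or timestamping here
	 * would select the scalar path instead.
	 */
	conf.txmode.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
			       DEV_TX_OFFLOAD_TCP_CKSUM |
			       DEV_TX_OFFLOAD_UDP_CKSUM |
			       DEV_TX_OFFLOAD_MBUF_FAST_FREE;

	return rte_eth_dev_configure(port, 1, 1, &conf);
}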
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 55/58] net/octeontx2: add device start operation
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (53 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 54/58] net/octeontx2: add Tx vector version jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 56/58] net/octeontx2: add device stop and close operations jerinj
` (4 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add the device start operation and set the correct
function pointers for the Rx and Tx burst routines.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 180 ++++++++++++++++++++++++
drivers/net/octeontx2/otx2_flow.c | 4 +-
drivers/net/octeontx2/otx2_flow_parse.c | 7 +-
drivers/net/octeontx2/otx2_ptp.c | 8 ++
drivers/net/octeontx2/otx2_vlan.c | 1 +
5 files changed, 197 insertions(+), 3 deletions(-)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index fdcab89b8..bdf291996 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -135,6 +135,55 @@ otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+static int
+npc_rx_enable(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ otx2_mbox_alloc_msg_nix_lf_start_rx(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
+static int
+npc_rx_disable(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ otx2_mbox_alloc_msg_nix_lf_stop_rx(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
+static int
+nix_cgx_start_link_event(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_start_linkevents(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
+static int
+cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ if (en)
+ otx2_mbox_alloc_msg_cgx_intlbk_enable(mbox);
+ else
+ otx2_mbox_alloc_msg_cgx_intlbk_disable(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
static inline void
nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
{
@@ -478,6 +527,74 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
return NIX_MAXSQESZ_W8;
}
+static uint16_t
+nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rte_eth_conf *conf = &data->dev_conf;
+ struct rte_eth_rxmode *rxmode = &conf->rxmode;
+ uint16_t flags = 0;
+
+ if (rxmode->mq_mode == ETH_MQ_RX_RSS)
+ flags |= NIX_RX_OFFLOAD_RSS_F;
+
+ if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM))
+ flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
+
+ if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
+
+ if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ flags |= NIX_RX_MULTI_SEG_F;
+
+ if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP))
+ flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
+
+ if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ flags |= NIX_RX_OFFLOAD_TSTAMP_F;
+
+ return flags;
+}
+
+static uint16_t
+nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t conf = dev->tx_offloads;
+ uint16_t flags = 0;
+
+ /* Fastpath is dependent on these enums */
+ RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
+ RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
+ RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
+
+ if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
+ conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
+
+ if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
+
+ if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
+ conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
+ conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
+
+ if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
+
+ if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ flags |= NIX_TX_MULTI_SEG_F;
+
+ return flags;
+}
+
static int
nix_sq_init(struct otx2_eth_txq *txq)
{
@@ -1089,6 +1206,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
dev->rx_offloads = rxmode->offloads;
dev->tx_offloads = txmode->offloads;
+ dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev);
+ dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev);
dev->rss_info.rss_grps = NIX_RSS_GRPS;
nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
@@ -1128,6 +1247,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /* Configure loop back mode */
+ rc = cgx_intlbk_enable(dev, eth_dev->data->dev_conf.lpbk_mode);
+ if (rc) {
+ otx2_err("Failed to configure cgx loop back mode rc=%d", rc);
+ goto free_nix_lf;
+ }
+
rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true);
if (rc) {
otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc);
@@ -1277,6 +1403,59 @@ otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
return rc;
}
+static int
+otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc, i;
+
+ /* Start rx queues */
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rc = otx2_nix_rx_queue_start(eth_dev, i);
+ if (rc)
+ return rc;
+ }
+
+ /* Start tx queues */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ rc = otx2_nix_tx_queue_start(eth_dev, i);
+ if (rc)
+ return rc;
+ }
+
+ rc = otx2_nix_update_flow_ctrl_mode(eth_dev);
+ if (rc) {
+ otx2_err("Failed to update flow ctrl mode %d", rc);
+ return rc;
+ }
+
+ rc = npc_rx_enable(dev);
+ if (rc) {
+ otx2_err("Failed to enable NPC rx %d", rc);
+ return rc;
+ }
+
+ otx2_nix_toggle_flag_link_cfg(dev, true);
+
+ rc = nix_cgx_start_link_event(dev);
+ if (rc) {
+ otx2_err("Failed to start cgx link event %d", rc);
+ goto rx_disable;
+ }
+
+ otx2_nix_toggle_flag_link_cfg(dev, false);
+ otx2_eth_set_tx_function(eth_dev);
+ otx2_eth_set_rx_function(eth_dev);
+
+ return 0;
+
+rx_disable:
+ npc_rx_disable(dev);
+ otx2_nix_toggle_flag_link_cfg(dev, false);
+ return rc;
+}
+
+
/* Initialize and register driver with DPDK Application */
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
@@ -1286,6 +1465,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.tx_queue_release = otx2_nix_tx_queue_release,
.rx_queue_setup = otx2_nix_rx_queue_setup,
.rx_queue_release = otx2_nix_rx_queue_release,
+ .dev_start = otx2_nix_dev_start,
.tx_queue_start = otx2_nix_tx_queue_start,
.tx_queue_stop = otx2_nix_tx_queue_stop,
.rx_queue_start = otx2_nix_rx_queue_start,
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 270433cd6..68337631d 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -498,8 +498,10 @@ otx2_flow_destroy(struct rte_eth_dev *dev,
return -EINVAL;
/* Clear mark offload flag if there are no more mark actions */
- if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0)
+ if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) {
hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F;
+ otx2_eth_set_rx_function(dev);
+ }
}
rc = flow_free_rss_action(dev, flow);
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index cf13813d8..cebae645e 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -922,8 +922,11 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
if (mark)
flow->npc_action |= (uint64_t)mark << 40;
- if (rte_atomic32_read(&npc->mark_actions) == 1)
- hw->rx_offload_flags |= NIX_RX_OFFLOAD_MARK_UPDATE_F;
+ if (rte_atomic32_read(&npc->mark_actions) == 1) {
+ hw->rx_offload_flags |=
+ NIX_RX_OFFLOAD_MARK_UPDATE_F;
+ otx2_eth_set_rx_function(dev);
+ }
/* Ideally AF must ensure that correct pf_func is set */
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 5291da241..0186c629a 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -118,6 +118,10 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
otx2_nix_form_default_desc(txq);
}
+
+ /* Setting up the function pointers as per new offload flags */
+ otx2_eth_set_rx_function(eth_dev);
+ otx2_eth_set_tx_function(eth_dev);
}
return rc;
}
@@ -147,6 +151,10 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
otx2_nix_form_default_desc(txq);
}
+
+ /* Setting up the function pointers as per new offload flags */
+ otx2_eth_set_rx_function(eth_dev);
+ otx2_eth_set_tx_function(eth_dev);
}
return rc;
}
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index 3c0d40553..4f56cefd9 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -656,6 +656,7 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
DEV_RX_OFFLOAD_QINQ_STRIP)) {
dev->rx_offloads |= offloads;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
+ otx2_eth_set_rx_function(eth_dev);
}
done:
--
2.21.0
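For context, the new dev_start hook is the final step of the usual ethdev
bring-up sequence. A hedged sketch of that sequence; the descriptor counts
and queue numbers are illustrative and the helper name is hypothetical:

#include <rte_ethdev.h>

static int
port_init(uint16_t port, struct rte_mempool *mp)
{
	struct rte_eth_conf conf = { 0 };
	int rc;

	rc = rte_eth_dev_configure(port, 1, 1, &conf);
	if (rc)
		return rc;

	rc = rte_eth_rx_queue_setup(port, 0, 512,
				    rte_eth_dev_socket_id(port), NULL, mp);
	if (rc)
		return rc;

	rc = rte_eth_tx_queue_setup(port, 0, 512,
				    rte_eth_dev_socket_id(port), NULL);
	if (rc)
		return rc;

	/* Dispatches to otx2_nix_dev_start(): starts all queues, enables
	 * NPC Rx and installs the final Rx/Tx burst functions.
	 */
	return rte_eth_dev_start(port);
}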
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 56/58] net/octeontx2: add device stop and close operations
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (54 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 55/58] net/octeontx2: add device start operation jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-06 16:23 ` Ferruh Yigit
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 57/58] net/octeontx2: add MTU set operation jerinj
` (3 subsequent siblings)
59 siblings, 1 reply; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add device stop, close and reset operations.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 70 +++++++++++++++++++++++++++++
1 file changed, 70 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index bdf291996..6c67cecd5 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -184,6 +184,19 @@ cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en)
return otx2_mbox_process(mbox);
}
+static int
+nix_cgx_stop_link_event(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_stop_linkevents(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
static inline void
nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
{
@@ -1403,6 +1416,37 @@ otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
return rc;
}
+static void
+otx2_nix_dev_stop(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_mbuf *rx_pkts[32];
+ struct otx2_eth_rxq *rxq;
+ int count, i, j, rc;
+
+ nix_cgx_stop_link_event(dev);
+ npc_rx_disable(dev);
+
+ /* Stop Rx queues and free up pending packets */
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rc = otx2_nix_rx_queue_stop(eth_dev, i);
+ if (rc)
+ continue;
+
+ rxq = eth_dev->data->rx_queues[i];
+ count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
+ while (count) {
+ for (j = 0; j < count; j++)
+ rte_pktmbuf_free(rx_pkts[j]);
+ count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
+ }
+ }
+
+ /* Stop tx queues */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+ otx2_nix_tx_queue_stop(eth_dev, i);
+}
+
static int
otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
{
@@ -1455,6 +1499,8 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
return rc;
}
+static int otx2_nix_dev_reset(struct rte_eth_dev *eth_dev);
+static void otx2_nix_dev_close(struct rte_eth_dev *eth_dev);
/* Initialize and register driver with DPDK Application */
static const struct eth_dev_ops otx2_eth_dev_ops = {
@@ -1466,6 +1512,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.rx_queue_setup = otx2_nix_rx_queue_setup,
.rx_queue_release = otx2_nix_rx_queue_release,
.dev_start = otx2_nix_dev_start,
+ .dev_stop = otx2_nix_dev_stop,
+ .dev_close = otx2_nix_dev_close,
.tx_queue_start = otx2_nix_tx_queue_start,
.tx_queue_stop = otx2_nix_tx_queue_stop,
.rx_queue_start = otx2_nix_rx_queue_start,
@@ -1473,6 +1521,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_set_link_up = otx2_nix_dev_set_link_up,
.dev_set_link_down = otx2_nix_dev_set_link_down,
.dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
+ .dev_reset = otx2_nix_dev_reset,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
@@ -1727,6 +1776,7 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
/* Disable nix bpid config */
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ npc_rx_disable(dev);
/* Disable vlan offloads */
otx2_nix_vlan_fini(eth_dev);
@@ -1737,6 +1787,8 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
if (otx2_ethdev_is_ptp_en(dev))
otx2_nix_timesync_disable(eth_dev);
+ nix_cgx_stop_link_event(dev);
+
/* Free up SQs */
for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
@@ -1792,6 +1844,24 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
return 0;
}
+static void
+otx2_nix_dev_close(struct rte_eth_dev *eth_dev)
+{
+ otx2_eth_dev_uninit(eth_dev, true);
+}
+
+static int
+otx2_nix_dev_reset(struct rte_eth_dev *eth_dev)
+{
+ int rc;
+
+ rc = otx2_eth_dev_uninit(eth_dev, false);
+ if (rc)
+ return rc;
+
+ return otx2_eth_dev_init(eth_dev);
+}
+
static int
nix_remove(struct rte_pci_device *pci_dev)
{
--
2.21.0
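These hooks map onto the usual application teardown calls. A hedged sketch;
in this DPDK release stop and close return void, dev_reset is meant for
error recovery followed by a fresh configure/start cycle, and the helper
names are hypothetical:

#include <rte_ethdev.h>

static void
port_teardown(uint16_t port)
{
	rte_eth_dev_stop(port);  /* drains pending Rx pkts, stops queues */
	rte_eth_dev_close(port); /* full uninit via otx2_eth_dev_uninit() */
}

static int
port_recover(uint16_t port)
{
	/* dev_reset is uninit plus init; the caller must reconfigure
	 * and restart the port afterwards.
	 */
	return rte_eth_dev_reset(port);
}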
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 57/58] net/octeontx2: add MTU set operation
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (55 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 56/58] net/octeontx2: add device stop and close operations jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 58/58] doc: add Marvell OCTEON TX2 ethdev documentation jerinj
` (2 subsequent siblings)
59 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: ferruh.yigit, Vamsi Attunuru, Sunil Kumar Kori
From: Vamsi Attunuru <vattunuru@marvell.com>
Add MTU set operation and MTU update feature.
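For reference, the new op is reached through the generic ethdev API; a minimal
usage sketch (the port id and MTU value are illustrative, not part of this
patch):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Request a jumbo MTU on port 0; the PMD validates the frame size
     * against the NIX HW limits and the Rx buffer/scatter configuration.
     */
    int rc = rte_eth_dev_set_mtu(0 /* port_id */, 9000 /* mtu */);
    if (rc != 0)
        printf("MTU update failed: %d\n", rc);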
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 7 ++
drivers/net/octeontx2/otx2_ethdev.h | 4 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 80 ++++++++++++++++++++++
5 files changed, 93 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 396979451..e96c588fa 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -15,6 +15,7 @@ Link status event = Y
Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
+MTU update = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 1435fd91e..7ad097df4 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -15,6 +15,7 @@ Link status event = Y
Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
+MTU update = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 6c67cecd5..ddd924ce8 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1453,6 +1453,12 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, i;
+ if (eth_dev->data->nb_rx_queues != 0) {
+ rc = otx2_nix_recalc_mtu(eth_dev);
+ if (rc)
+ return rc;
+ }
+
/* Start rx queues */
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
rc = otx2_nix_rx_queue_start(eth_dev, i);
@@ -1525,6 +1531,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
+ .mtu_set = otx2_nix_mtu_set,
.mac_addr_add = otx2_nix_mac_addr_add,
.mac_addr_remove = otx2_nix_mac_addr_del,
.mac_addr_set = otx2_nix_mac_addr_set,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index dff4de250..862a1ccbb 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -351,6 +351,10 @@ int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
+/* MTU */
+int otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu);
+int otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev);
+
/* Link */
void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index d2cb5ba1c..e8959e179 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -6,6 +6,86 @@
#include "otx2_ethdev.h"
+int
+otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
+{
+ uint32_t buffsz, frame_size = mtu + NIX_HW_L2_OVERHEAD;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_frs_cfg *req;
+ int rc;
+
+ /* Check if MTU is within the allowed range */
+ if (frame_size < NIX_MIN_HW_FRS || frame_size > NIX_MAX_HW_FRS)
+ return -EINVAL;
+
+ buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
+
+ /* Refuse MTU that requires the support of scattered packets
+ * when this feature has not been enabled before.
+ */
+ if (data->dev_started && frame_size > buffsz &&
+ !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+ return -EINVAL;
+
+ /* Check <seg size> * <max_seg> >= max_frame */
+ if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
+ return -EINVAL;
+
+ req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
+ req->update_smq = true;
+ req->maxlen = frame_size;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ if (frame_size > RTE_ETHER_MAX_LEN)
+ dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+ /* Update max_rx_pkt_len */
+ data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+ return rc;
+}
+
+int
+otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rte_pktmbuf_pool_private *mbp_priv;
+ struct otx2_eth_rxq *rxq;
+ uint32_t buffsz;
+ uint16_t mtu;
+ int rc;
+
+ /* Get rx buffer size */
+ rxq = data->rx_queues[0];
+ mbp_priv = rte_mempool_get_priv(rxq->pool);
+ buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
+
+ /* Setup scatter mode if needed by jumbo */
+ if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz)
+ dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+
+ /* Setup MTU based on max_rx_pkt_len or default */
+ mtu = ((dev->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) ||
+ (data->dev_conf.rxmode.max_rx_pkt_len < RTE_ETHER_MAX_LEN)) ?
+ data->dev_conf.rxmode.max_rx_pkt_len - NIX_HW_L2_OVERHEAD :
+ RTE_ETHER_MTU;
+
+ rc = otx2_nix_mtu_set(eth_dev, mtu);
+ if (rc)
+ otx2_err("Failed to set default MTU size %d", rc);
+
+ return rc;
+}
+
static void
nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v1 58/58] doc: add Marvell OCTEON TX2 ethdev documentation
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (56 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 57/58] net/octeontx2: add MTU set operation jerinj
@ 2019-06-02 15:24 ` jerinj
2019-06-06 16:50 ` Ferruh Yigit
2019-06-06 15:23 ` [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver Ferruh Yigit
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
59 siblings, 1 reply; 196+ messages in thread
From: jerinj @ 2019-06-02 15:24 UTC (permalink / raw)
To: dev, Thomas Monjalon, John McNamara, Marko Kovacevic,
Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, Vamsi Attunuru
Cc: ferruh.yigit
From: Jerin Jacob <jerinj@marvell.com>
Add Marvell OCTEON TX2 ethdev documentation.
This patch also updates the MAINTAINERS file and
shared library versions in release_19_08.rst.
Cc: John McNamara <john.mcnamara@intel.com>
Cc: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
MAINTAINERS | 8 +
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/octeontx2.rst | 289 +++++++++++++++++++++
doc/guides/platform/octeontx2.rst | 3 +
doc/guides/rel_notes/release_19_05.rst | 1 +
8 files changed, 305 insertions(+)
create mode 100644 doc/guides/nics/octeontx2.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index 74ac6d41f..fe509c1f9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -668,6 +668,14 @@ F: drivers/net/mvneta/
F: doc/guides/nics/mvneta.rst
F: doc/guides/nics/features/mvneta.ini
+Marvell OCTEON TX2
+M: Jerin Jacob <jerinj@marvell.com>
+M: Nithin Dabilpuram <ndabilpuram@marvell.com>
+M: Kiran Kumar K <kirankumark@marvell.com>
+F: drivers/net/octeontx2/
+F: doc/guides/nics/features/octeontx2*.ini
+F: doc/guides/nics/octeontx2.rst
+
Mellanox mlx4
M: Matan Azrad <matan@mellanox.com>
M: Shahaf Shuler <shahafs@mellanox.com>
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index e96c588fa..ef1a638e9 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -44,3 +44,4 @@ Extended stats = Y
FW version = Y
Module EEPROM dump = Y
Registers dump = Y
+Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 7ad097df4..8f95727f7 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -41,3 +41,4 @@ Stats per queue = Y
FW version = Y
Module EEPROM dump = Y
Registers dump = Y
+Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 0d5137316..e78385bb2 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -36,3 +36,4 @@ Stats per queue = Y
FW version = Y
Module EEPROM dump = Y
Registers dump = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 2221c35f2..6fa075594 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -44,6 +44,7 @@ Network Interface Controller Drivers
nfb
nfp
octeontx
+ octeontx2
qede
sfc_efx
softnic
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
new file mode 100644
index 000000000..2f14a4a1c
--- /dev/null
+++ b/doc/guides/nics/octeontx2.rst
@@ -0,0 +1,289 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(C) 2019 Marvell International Ltd.
+
+OCTEON TX2 Poll Mode driver
+===========================
+
+The OCTEON TX2 ETHDEV PMD (**librte_pmd_octeontx2**) provides poll mode ethdev
+driver support for the inbuilt network device found in the **Marvell OCTEON TX2**
+SoC family, as well as for its virtual functions (VF) in an SR-IOV context.
+
+More information can be found at `Marvell Official Website
+<https://www.marvell.com/embedded-processors/infrastructure-processors>`_.
+
+Features
+--------
+
+Features of the OCTEON TX2 Ethdev PMD are:
+
+- Packet type information
+- Promiscuous mode
+- Port hardware statistics
+- Jumbo frames
+- SR-IOV VF
+- Lock-free Tx queue
+- Multiple queues for TX and RX
+- Receiver Side Scaling (RSS)
+- MAC/VLAN filtering
+- Generic flow API
+- Inner and Outer Checksum offload
+- VLAN/QinQ stripping and insertion
+- Link state information
+- Link flow control
+- MTU update
+- Scatter-Gather IO support
+- Vector Poll mode driver
+- Debug utilities - Context dump and error interrupt support
+- IEEE1588 timestamping
+- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
+
+Prerequisites
+-------------
+
+See :doc:`../platform/octeontx2` for setup information.
+
+Compile time Config Options
+---------------------------
+
+The following options may be modified in the ``config`` file.
+
+- ``CONFIG_RTE_LIBRTE_OCTEONTX2_PMD`` (default ``y``)
+
+ Toggle compilation of the ``librte_pmd_octeontx2`` driver.
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+To compile the OCTEON TX2 PMD for Linux arm64 gcc,
+use ``arm64-octeontx2-linux-gcc`` as the target.
+
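+For example, with the legacy make build system (a minimal sketch of the usual
+DPDK flow, run from the DPDK source tree)::
+
+ make config T=arm64-octeontx2-linux-gcc
+ make
+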
+#. Running testpmd:
+
+ Follow instructions available in the document
+ :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+ to run testpmd.
+
+ Example output:
+
+ .. code-block:: console
+
+ ./build/app/testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+ EAL: Detected 24 lcore(s)
+ EAL: Detected 1 NUMA nodes
+ EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
+ EAL: No available hugepages reported in hugepages-2048kB
+ EAL: Probing VFIO support...
+ EAL: VFIO support initialized
+ EAL: PCI device 0002:02:00.0 on NUMA socket 0
+ EAL: probe driver: 177d:a063 net_octeontx2
+ EAL: using IOMMU type 1 (Type 1)
+ testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
+ testpmd: preferred mempool ops selected: octeontx2_npa
+ Configuring Port 0 (socket 0)
+ PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex
+
+ Port 0: link state change event
+ Port 0: 36:10:66:88:7A:57
+ Checking link statuses...
+ Done
+ No commandline core given, start packet forwarding
+ io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
+ Logical Core 9 (socket 0) forwards packets on 1 streams:
+ RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
+
+ io packet forwarding packets/burst=32
+ nb forwarding cores=1 - nb forwarding ports=1
+ port 0: RX queue number: 1 Tx queue number: 1
+ Rx offloads=0x0 Tx offloads=0x10000
+ RX queue: 0
+ RX desc=512 - RX free threshold=0
+ RX threshold registers: pthresh=0 hthresh=0 wthresh=0
+ RX Offloads=0x0
+ TX queue: 0
+ TX desc=512 - TX free threshold=0
+ TX threshold registers: pthresh=0 hthresh=0 wthresh=0
+ TX offloads=0x10000 - TX RS bit threshold=0
+ Press enter to exit
+
+Runtime Config Options
+----------------------
+
+- ``HW offload ptype parsing disable`` (default ``0``)
+
+ Packet type parsing is HW offloaded by default; this feature may be disabled
+ at runtime using the ``ptype_disable`` ``devargs`` parameter.
+
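+ For example (assuming the parameter takes a ``0``/``1`` value, as the other
+ boolean devargs in this series do)::
+
+ -w 0002:02:00.0,ptype_disable=1
+
+ With the above configuration, HW packet type parsing is disabled.
+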
+- ``Rx scalar mode enable`` (default ``0``)
+
+ Ethdev Rx supports both scalar and vector modes; the scalar mode may be
+ selected at runtime using the ``scalar_enable`` ``devargs`` parameter.
+
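+ For example (again assuming a ``0``/``1`` value)::
+
+ -w 0002:02:00.0,scalar_enable=1
+
+ With the above configuration, the scalar Rx path is selected.
+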
+- ``RSS reta size`` (default ``64``)
+
+ The RSS redirection table size may be configured at runtime using the
+ ``reta_size`` ``devargs`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,reta_size=256
+
+ With the above configuration, a RETA table of size 256 is populated.
+
+- ``Flow priority levels`` (default ``3``)
+
+ RTE flow priority levels can be configured at runtime using the
+ ``flow_max_priority`` ``devargs`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,flow_max_priority=10
+
+ With the above configuration, the number of priority levels is set to 10
+ (levels 0-9). The maximum number of priority levels supported is 32.
+
+- ``Reserve Flow entries`` (default ``8``)
+
+ RTE flow entries can be pre-allocated, and the pre-allocation size can be
+ selected at runtime using the ``flow_prealloc_size`` ``devargs`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,flow_prealloc_size=4
+
+ With the above configuration, the pre-allocation size is set to 4. The
+ maximum pre-allocation size supported is 32.
+
+.. note::
+
+ The above devargs parameters are configurable per device; to apply a setting
+ on all the ethdev ports, the application needs to pass the parameters to
+ every PCIe device.
+
+Limitations
+-----------
+
+``mempool_octeontx2`` external mempool handler dependency
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The OCTEON TX2 SoC family NIC has an inbuilt HW-assisted external mempool
+manager. The ``net_octeontx2`` PMD only works with the ``mempool_octeontx2``
+mempool handler, as it is, performance-wise, the most effective way to allocate
+packets and recycle Tx buffers on the OCTEON TX2 SoC platform.
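+
+For example, an application can request this handler explicitly when creating
+a pool (a minimal sketch; the pool name and sizing below are illustrative)::
+
+ mp = rte_pktmbuf_pool_create_by_ops("mbuf_pool", 8192, 256, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id(), "octeontx2_npa");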
+
+CRC striping
+~~~~~~~~~~~~
+
+The OCTEON TX2 SoC family NICs strip the CRC of every packet received by the
+host interface, irrespective of the offload configuration.
+
+
+Debugging Options
+-----------------
+
+.. _table_octeontx2_ethdev_debug_options:
+
+.. table:: OCTEON TX2 ethdev debug options
+
+ +---+------------+-------------------------------------------------------+
+ | # | Component | EAL log command |
+ +===+============+=======================================================+
+ | 1 | NIX | --log-level='pmd\.net.octeontx2,8' |
+ +---+------------+-------------------------------------------------------+
+ | 2 | NPC | --log-level='pmd\.net.octeontx2\.flow,8' |
+ +---+------------+-------------------------------------------------------+
+
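+For example, to get verbose NIX logs from testpmd (combining the earlier
+testpmd invocation with the log option; the exact command line is
+illustrative)::
+
+ ./build/app/testpmd --log-level='pmd\.net.octeontx2,8' -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1
+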
+RTE Flow Support
+----------------
+
+The OCTEON TX2 SoC family NIC has support for the following patterns and
+actions.
+
+Patterns:
+
+.. _table_octeontx2_supported_flow_item_types:
+
+.. table:: Item types
+
+ +----+--------------------------------+
+ | # | Pattern Type |
+ +====+================================+
+ | 1 | RTE_FLOW_ITEM_TYPE_ETH |
+ +----+--------------------------------+
+ | 2 | RTE_FLOW_ITEM_TYPE_VLAN |
+ +----+--------------------------------+
+ | 3 | RTE_FLOW_ITEM_TYPE_E_TAG |
+ +----+--------------------------------+
+ | 4 | RTE_FLOW_ITEM_TYPE_IPV4 |
+ +----+--------------------------------+
+ | 5 | RTE_FLOW_ITEM_TYPE_IPV6 |
+ +----+--------------------------------+
+ | 6 | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4|
+ +----+--------------------------------+
+ | 7 | RTE_FLOW_ITEM_TYPE_MPLS |
+ +----+--------------------------------+
+ | 8 | RTE_FLOW_ITEM_TYPE_ICMP |
+ +----+--------------------------------+
+ | 9 | RTE_FLOW_ITEM_TYPE_UDP |
+ +----+--------------------------------+
+ | 10 | RTE_FLOW_ITEM_TYPE_TCP |
+ +----+--------------------------------+
+ | 11 | RTE_FLOW_ITEM_TYPE_SCTP |
+ +----+--------------------------------+
+ | 12 | RTE_FLOW_ITEM_TYPE_ESP |
+ +----+--------------------------------+
+ | 13 | RTE_FLOW_ITEM_TYPE_GRE |
+ +----+--------------------------------+
+ | 14 | RTE_FLOW_ITEM_TYPE_NVGRE |
+ +----+--------------------------------+
+ | 15 | RTE_FLOW_ITEM_TYPE_VXLAN |
+ +----+--------------------------------+
+ | 16 | RTE_FLOW_ITEM_TYPE_GTPC |
+ +----+--------------------------------+
+ | 17 | RTE_FLOW_ITEM_TYPE_GTPU |
+ +----+--------------------------------+
+ | 18 | RTE_FLOW_ITEM_TYPE_VOID |
+ +----+--------------------------------+
+ | 19 | RTE_FLOW_ITEM_TYPE_ANY |
+ +----+--------------------------------+
+
+Actions:
+
+.. _table_octeontx2_supported_ingress_action_types:
+
+.. table:: Ingress action types
+
+ +----+--------------------------------+
+ | # | Action Type |
+ +====+================================+
+ | 1 | RTE_FLOW_ACTION_TYPE_VOID |
+ +----+--------------------------------+
+ | 2 | RTE_FLOW_ACTION_TYPE_MARK |
+ +----+--------------------------------+
+ | 3 | RTE_FLOW_ACTION_TYPE_FLAG |
+ +----+--------------------------------+
+ | 4 | RTE_FLOW_ACTION_TYPE_COUNT |
+ +----+--------------------------------+
+ | 5 | RTE_FLOW_ACTION_TYPE_DROP |
+ +----+--------------------------------+
+ | 6 | RTE_FLOW_ACTION_TYPE_QUEUE |
+ +----+--------------------------------+
+ | 7 | RTE_FLOW_ACTION_TYPE_RSS |
+ +----+--------------------------------+
+ | 8 | RTE_FLOW_ACTION_TYPE_SECURITY |
+ +----+--------------------------------+
+
+.. _table_octeontx2_supported_egress_action_types:
+
+.. table:: Egress action types
+
+ +----+--------------------------------+
+ | # | Action Type |
+ +====+================================+
+ | 1 | RTE_FLOW_ACTION_TYPE_COUNT |
+ +----+--------------------------------+
+ | 2 | RTE_FLOW_ACTION_TYPE_DROP |
+ +----+--------------------------------+
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
index c9ea45647..d2592f119 100644
--- a/doc/guides/platform/octeontx2.rst
+++ b/doc/guides/platform/octeontx2.rst
@@ -98,6 +98,9 @@ HW Offload Drivers
This section lists dataplane H/W block(s) available in OCTEON TX2 SoC.
+#. **Ethdev Driver**
+ See :doc:`../nics/octeontx2` for NIX Ethdev driver information.
+
#. **Mempool Driver**
See :doc:`../mempool/octeontx2` for NPA mempool driver information.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index b4c6972e3..e925ccf0e 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -386,6 +386,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_pmd_i40e.so.2
librte_pmd_ixgbe.so.2
librte_pmd_dpaa2_qdma.so.1
+ + librte_pmd_octeontx2.so.1
librte_pmd_ring.so.2
librte_pmd_softnic.so.1
librte_pmd_vhost.so.2
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* Re: [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (57 preceding siblings ...)
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 58/58] doc: add Marvell OCTEON TX2 ethdev documentation jerinj
@ 2019-06-06 15:23 ` Ferruh Yigit
2019-06-10 9:54 ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
59 siblings, 1 reply; 196+ messages in thread
From: Ferruh Yigit @ 2019-06-06 15:23 UTC (permalink / raw)
To: jerinj, dev
On 6/2/2019 4:23 PM, jerinj@marvell.com wrote:
> From: Jerin Jacob <jerinj@marvell.com>
>
> This patchset adds support for OCTEON TX2 ethdev driver.
>
> This patch set is depended on "OCTEON TX2 common and mempool driver" series.
> http://mails.dpdk.org/archives/dev/2019-June/133329.html
Hi Jerin,
I will wait for the dependent patches to be merged to be able to fully review
the patchset; I will go through it for now.
It would be good to get the dependent patchsets in early so that this one can
also make it in on time.
>
> This patches series also available at https://github.com/jerinjacobk/dpdk-octeontx2-nix
> including the dependency patches for quick download and review.
>
> Harman Kalra (2):
> net/octeontx2: add PTP base support
> net/octeontx2: add remaining PTP operations
>
> Jerin Jacob (17):
> net/octeontx2: add build infrastructure
> net/octeontx2: add ethdev probe and remove
> net/octeontx2: add device init and uninit
> net/octeontx2: add devargs parsing functions
> net/octeontx2: handle device error interrupts
> net/octeontx2: add info get operation
> net/octeontx2: add device configure operation
> net/octeontx2: handle queue specific error interrupts
> net/octeontx2: add context debug utils
> net/octeontx2: add Rx queue setup and release
> net/octeontx2: add Tx queue setup and release
> net/octeontx2: add ptype support
> net/octeontx2: add Rx and Tx descriptor operations
> net/octeontx2: add Rx burst support
> net/octeontx2: add Rx vector version
> net/octeontx2: add Tx burst support
> doc: add Marvell OCTEON TX2 ethdev documentation
>
> Kiran Kumar K (13):
> net/octeontx2: add register dump support
> net/octeontx2: add basic stats operation
> net/octeontx2: add extended stats operations
> net/octeontx2: introducing flow driver
> net/octeontx2: flow utility functions
> net/octeontx2: flow mailbox utility
> net/octeontx2: add flow MCAM utility functions
> net/octeontx2: add flow parsing for outer layers
> net/octeontx2: adding flow parsing for inner layers
> net/octeontx2: add flow actions support
> net/octeontx2: add flow operations
> net/octeontx2: add additional flow operations
> net/octeontx2: add flow init and fini
>
> Krzysztof Kanas (2):
> net/octeontx2: alloc and free TM HW resources
> net/octeontx2: enable Tx through traffic manager
>
> Nithin Dabilpuram (9):
> net/octeontx2: add queue start and stop operations
> net/octeontx2: introduce traffic manager
> net/octeontx2: configure TM HW resources
> net/octeontx2: add queue info and pool supported operations
> net/octeontx2: add Rx multi segment version
> net/octeontx2: add Tx multi segment version
> net/octeontx2: add Tx vector version
> net/octeontx2: add device start operation
> net/octeontx2: add device stop and close operations
>
> Sunil Kumar Kori (1):
> net/octeontx2: add unicast MAC filter
>
> Vamsi Attunuru (9):
> net/octeontx2: add link stats operations
> net/octeontx2: add promiscuous and allmulticast mode
> net/octeontx2: add RSS support
> net/octeontx2: handle port reconfigure
> net/octeontx2: add link status set operations
> net/octeontx2: add module EEPROM dump
> net/octeontx2: add flow control support
> net/octeontx2: add FW version get operation
> net/octeontx2: add MTU set operation
>
> Vivek Sharma (5):
> net/octeontx2: connect flow API to ethdev ops
> net/octeontx2: implement VLAN utility functions
> net/octeontx2: support VLAN offloads
> net/octeontx2: support VLAN filters
> net/octeontx2: support VLAN TPID and PVID for Tx
<...>
^ permalink raw reply [flat|nested] 196+ messages in thread
* Re: [dpdk-dev] [PATCH v1 01/58] net/octeontx2: add build infrastructure
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 01/58] net/octeontx2: add build infrastructure jerinj
@ 2019-06-06 15:33 ` Ferruh Yigit
2019-06-06 16:40 ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
0 siblings, 1 reply; 196+ messages in thread
From: Ferruh Yigit @ 2019-06-06 15:33 UTC (permalink / raw)
To: jerinj, dev, Thomas Monjalon, John McNamara, Marko Kovacevic,
Nithin Dabilpuram, Kiran Kumar K
Cc: Pavan Nikhilesh
On 6/2/2019 4:23 PM, jerinj@marvell.com wrote:
> From: Jerin Jacob <jerinj@marvell.com>
>
> Adding bare minimum PMD library and doc build infrastructure.
>
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> config/common_base | 5 +++
> doc/guides/nics/features/octeontx2.ini | 8 ++++
> doc/guides/nics/features/octeontx2_vec.ini | 8 ++++
> doc/guides/nics/features/octeontx2_vf.ini | 8 ++++
> drivers/net/Makefile | 1 +
> drivers/net/meson.build | 2 +-
> drivers/net/octeontx2/Makefile | 38 +++++++++++++++++++
> drivers/net/octeontx2/meson.build | 24 ++++++++++++
> drivers/net/octeontx2/otx2_ethdev.c | 3 ++
> .../octeontx2/rte_pmd_octeontx2_version.map | 4 ++
> mk/rte.app.mk | 2 +
It would be good to include the MAINTAINERS file in this patch, of course with
only the content introduced in this patch.
> 11 files changed, 102 insertions(+), 1 deletion(-)
> create mode 100644 doc/guides/nics/features/octeontx2.ini
> create mode 100644 doc/guides/nics/features/octeontx2_vec.ini
> create mode 100644 doc/guides/nics/features/octeontx2_vf.ini
> create mode 100644 drivers/net/octeontx2/Makefile
> create mode 100644 drivers/net/octeontx2/meson.build
> create mode 100644 drivers/net/octeontx2/otx2_ethdev.c
> create mode 100644 drivers/net/octeontx2/rte_pmd_octeontx2_version.map
>
> diff --git a/config/common_base b/config/common_base
> index 4a3de0360..38edad355 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -405,6 +405,11 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
> #
> CONFIG_RTE_LIBRTE_OCTEONTX_PMD=y
>
> +#
> +# Compile burst-oriented Cavium OCTEONTX2 network PMD driver
> +#
> +CONFIG_RTE_LIBRTE_OCTEONTX2_PMD=y
> +
Since the .ini files only list "ARMv8", should the PMD be disabled in the other
config files?
Or is support for those architectures coming in the next patches?
If this is only for Armv8 & Linux, it is better to keep it disabled in the base
config and enable it only in that specific config file.
> #
> # Compile WRS accelerated virtual port (AVP) guest PMD driver
> #
> diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
> new file mode 100644
> index 000000000..0ec3b6983
> --- /dev/null
> +++ b/doc/guides/nics/features/octeontx2.ini
> @@ -0,0 +1,8 @@
> +;
> +; Supported features of the 'octeontx2' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Linux VFIO = Y
> +ARMv8 = Y
> diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
> new file mode 100644
> index 000000000..774f136c1
> --- /dev/null
> +++ b/doc/guides/nics/features/octeontx2_vec.ini
> @@ -0,0 +1,8 @@
> +;
> +; Supported features of the 'octeontx2_vec' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Linux VFIO = Y
> +ARMv8 = Y
I think it is better to introduce the vector .ini file with the patch that
enables the vector path; the same goes for the vf one below.
> diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
> new file mode 100644
> index 000000000..36642354e
> --- /dev/null
> +++ b/doc/guides/nics/features/octeontx2_vf.ini
> @@ -0,0 +1,8 @@
> +;
> +; Supported features of the 'octeontx2_vf' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Linux VFIO = Y
> +ARMv8 = Y
> diff --git a/drivers/net/Makefile b/drivers/net/Makefile
> index 3a72cf38c..5bb618b21 100644
> --- a/drivers/net/Makefile
> +++ b/drivers/net/Makefile
> @@ -45,6 +45,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += nfp
> DIRS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt
> DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += null
> DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_PMD) += octeontx
> +DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += octeontx2
> DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
> DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
> DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index ed99896c3..086a2f4cd 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -31,7 +31,7 @@ drivers = ['af_packet',
> 'netvsc',
> 'nfb',
> 'nfp',
> - 'null', 'octeontx', 'pcap', 'qede', 'ring',
> + 'null', 'octeontx', 'octeontx2', 'pcap', 'ring',
The multi-line entry causes merge conflicts; can you please break the line
while adding the new one, like:
'null', 'octeontx',
'octeontx2',
'qede', 'ring',
> 'sfc',
> 'softnic',
> 'szedata2',
> diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
> new file mode 100644
> index 000000000..0a606d27b
> --- /dev/null
> +++ b/drivers/net/octeontx2/Makefile
> @@ -0,0 +1,38 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(C) 2019 Marvell International Ltd.
> +#
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +#
> +# library name
> +#
> +LIB = librte_pmd_octeontx2.a
> +
> +CFLAGS += $(WERROR_FLAGS)
> +CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2
> +CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
> +CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
> +CFLAGS += -O3
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
Can you please add this flag only when an experimental API is really called?
And in that case, add a comment here with the name of that experimental
function; this will help us remove unnecessary flags when the APIs become
non-experimental.
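A minimal sketch of the suggested convention (``rte_xyz()`` below is a
placeholder, not a real API):

	# Experimental API used: rte_xyz(); drop this flag once it is stable.
	CFLAGS += -DALLOW_EXPERIMENTAL_API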
> +CFLAGS += -flax-vector-conversions
Same for this one, please add when needed.
> +
> +ifneq ($(CONFIG_RTE_ARCH_64),y)
> +CFLAGS += -Wno-int-to-pointer-cast
> +CFLAGS += -Wno-pointer-to-int-cast
Is there a way to get rid of these? Why do these warnings need to be ignored?
> +endif
> +
> +EXPORT_MAP := rte_pmd_octeontx2_version.map
> +
> +LIBABIVER := 1
> +
> +#
> +# all source are stored in SRCS-y
> +#
> +SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
> + otx2_ethdev.c
> +
> +LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
> +LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_bus_pci -lrte_mempool_octeontx2
Can you please keep only the minimum required dependencies?
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
> new file mode 100644
> index 000000000..0bd32446b
> --- /dev/null
> +++ b/drivers/net/octeontx2/meson.build
> @@ -0,0 +1,24 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(C) 2019 Marvell International Ltd.
> +#
> +
> +sources = files(
> + 'otx2_ethdev.c',
> + )
> +
> +allow_experimental_apis = true
All the comments for the Makefile are valid for meson too; can you please check?
> +deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2']
> +
> +cflags += ['-flax-vector-conversions','-DALLOW_EXPERIMENTAL_API']
> +
> +extra_flags = []
> +# This integrated controller runs only on a arm64 machine, remove 32bit warnings
> +if not dpdk_conf.get('RTE_ARCH_64')
> + extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast']
> +endif
> +
> +foreach flag: extra_flags
> + if cc.has_argument(flag)
> + cflags += flag
> + endif
> +endforeach
> diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
> new file mode 100644
> index 000000000..d26535dee
> --- /dev/null
> +++ b/drivers/net/octeontx2/otx2_ethdev.c
> @@ -0,0 +1,3 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2019 Marvell International Ltd.
> + */
> diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
> new file mode 100644
> index 000000000..fc8c95e91
> --- /dev/null
> +++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
> @@ -0,0 +1,4 @@
> +DPDK_19.05 {
DPDK_19.08 now.
> +
> + local: *;
> +};
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index cd89ccfd5..3dff91190 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -127,6 +127,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_COMMON_DPAAX) += -lrte_common_dpaax
> endif
>
> OCTEONTX2-y := $(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL)
> +OCTEONTX2-y += $(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD)
> ifeq ($(findstring y,$(OCTEONTX2-y)),y)
> _LDLIBS-y += -lrte_common_octeontx2
> endif
> @@ -197,6 +198,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += -lrte_pmd_mvpp2
> _LDLIBS-$(CONFIG_RTE_LIBRTE_MVNETA_PMD) += -lrte_pmd_mvneta
> _LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += -lrte_pmd_nfp
> _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += -lrte_pmd_octeontx2 -lm
> _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += -lrte_pmd_pcap -lpcap
> _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += -lrte_pmd_qede
> _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING) += -lrte_pmd_ring
>
^ permalink raw reply [flat|nested] 196+ messages in thread
* Re: [dpdk-dev] [PATCH v1 09/58] net/octeontx2: add context debug utils
2019-06-02 15:23 ` [dpdk-dev] [PATCH v1 09/58] net/octeontx2: add context debug utils jerinj
@ 2019-06-06 15:41 ` Ferruh Yigit
2019-06-06 16:04 ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
0 siblings, 1 reply; 196+ messages in thread
From: Ferruh Yigit @ 2019-06-06 15:41 UTC (permalink / raw)
To: jerinj, dev, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
On 6/2/2019 4:23 PM, jerinj@marvell.com wrote:
> From: Jerin Jacob <jerinj@marvell.com>
>
> Add RQ,SQ,CQ context and CQE structure dump utils.
>
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
<...>
> @@ -23,6 +23,9 @@ nix_lf_err_irq(void *param)
>
> /* Clear interrupt */
> otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
> +
> + otx2_nix_queues_ctx_dump(eth_dev);
> + rte_panic("nix_lf_error_interrupt\n");
> }
>
> static int
> @@ -75,6 +78,9 @@ nix_lf_ras_irq(void *param)
>
> /* Clear interrupt */
> otx2_write64(intr, dev->base + NIX_LF_RAS);
> +
> + otx2_nix_queues_ctx_dump(eth_dev);
> + rte_panic("nix_lf_ras_interrupt\n");
> }
>
> static int
> @@ -232,6 +238,9 @@ nix_lf_q_irq(void *param)
>
> /* Clear interrupt */
> otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
> +
> + otx2_nix_queues_ctx_dump(eth_dev);
> + rte_panic("nix_lf_q_interrupt\n");
rte_panic() is not allowed in the PMDs, please remove them.
> }
>
> int
>
^ permalink raw reply [flat|nested] 196+ messages in thread
* Re: [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype support
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype support jerinj
@ 2019-06-06 15:50 ` Ferruh Yigit
2019-06-06 15:59 ` Jerin Jacob Kollanukkaran
0 siblings, 1 reply; 196+ messages in thread
From: Ferruh Yigit @ 2019-06-06 15:50 UTC (permalink / raw)
To: jerinj, dev, John McNamara, Marko Kovacevic, Nithin Dabilpuram,
Kiran Kumar K
Cc: Harman Kalra
On 6/2/2019 4:24 PM, jerinj@marvell.com wrote:
> From: Jerin Jacob <jerinj@marvell.com>
>
> The fields from the CQE need to be converted to
> ptype and Rx ol flags in the mbuf. This patch
> creates lookup memory for those items to be
> used in the fast path.
>
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
<...>
> @@ -1,4 +1,7 @@
> DPDK_19.05 {
> + global:
> +
> + otx2_nix_fastpath_lookup_mem_get;
Why is this function in the .map file?
The .map file is for functions that the PMD exposes for applications to call;
this one looks intended for use within the library itself, and if so it does
not need to be in the .map file.
>
> local: *;
> };
>
^ permalink raw reply [flat|nested] 196+ messages in thread
* Re: [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype support
2019-06-06 15:50 ` Ferruh Yigit
@ 2019-06-06 15:59 ` Jerin Jacob Kollanukkaran
2019-06-06 16:20 ` Ferruh Yigit
0 siblings, 1 reply; 196+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-06-06 15:59 UTC (permalink / raw)
To: Ferruh Yigit, dev, John McNamara, Marko Kovacevic,
Nithin Kumar Dabilpuram, Kiran Kumar Kokkilagadda
Cc: Harman Kalra
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, June 6, 2019 9:20 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org; John
> McNamara <john.mcnamara@intel.com>; Marko Kovacevic
> <marko.kovacevic@intel.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>; Kiran Kumar Kokkilagadda
> <kirankumark@marvell.com>
> Cc: Harman Kalra <hkalra@marvell.com>
> Subject: Re: [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype support
>
> On 6/2/2019 4:24 PM, jerinj@marvell.com wrote:
> > From: Jerin Jacob <jerinj@marvell.com>
> >
> > The fields from the CQE need to be converted to ptype and Rx ol flags in
> > the mbuf. This patch creates lookup memory for those items to be used in
> > the fast path.
> >
> > Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> > Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
> > Signed-off-by: Harman Kalra <hkalra@marvell.com>
>
> <...>
>
> > @@ -1,4 +1,7 @@
> > DPDK_19.05 {
> > + global:
> > +
> > + otx2_nix_fastpath_lookup_mem_get;
>
> Why is this function in the .map file?
It is used by the octeontx2 eventdev driver in drivers/event/octeontx2.
> The .map file is for functions that the PMD exposes for applications to call;
> this one looks intended for use within the library itself, and if so it does
> not need to be in the .map file.
>
> >
> > local: *;
> > };
> >
^ permalink raw reply [flat|nested] 196+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 09/58] net/octeontx2: add context debug utils
2019-06-06 15:41 ` Ferruh Yigit
@ 2019-06-06 16:04 ` Jerin Jacob Kollanukkaran
2019-06-06 16:18 ` Ferruh Yigit
0 siblings, 1 reply; 196+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-06-06 16:04 UTC (permalink / raw)
To: Ferruh Yigit, dev, Nithin Kumar Dabilpuram, Kiran Kumar Kokkilagadda
Cc: Vivek Kumar Sharma
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, June 6, 2019 9:12 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org; Nithin
> Kumar Dabilpuram <ndabilpuram@marvell.com>; Kiran Kumar Kokkilagadda
> <kirankumark@marvell.com>
> Cc: Vivek Kumar Sharma <viveksharma@marvell.com>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v1 09/58] net/octeontx2: add context
> debug utils
> On 6/2/2019 4:23 PM, jerinj@marvell.com wrote:
> > From: Jerin Jacob <jerinj@marvell.com>
> >
> > Add RQ,SQ,CQ context and CQE structure dump utils.
> >
> > Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> > Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
>
> <...>
>
> > @@ -23,6 +23,9 @@ nix_lf_err_irq(void *param)
> >
> > /* Clear interrupt */
> > otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
> > +
> > + otx2_nix_queues_ctx_dump(eth_dev);
> > + rte_panic("nix_lf_error_interrupt\n");
> > }
> >
> > static int
> > @@ -75,6 +78,9 @@ nix_lf_ras_irq(void *param)
> >
> > /* Clear interrupt */
> > otx2_write64(intr, dev->base + NIX_LF_RAS);
> > +
> > + otx2_nix_queues_ctx_dump(eth_dev);
> > + rte_panic("nix_lf_ras_interrupt\n");
> > }
> >
> > static int
> > @@ -232,6 +238,9 @@ nix_lf_q_irq(void *param)
> >
> > /* Clear interrupt */
> > otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
> > +
> > + otx2_nix_queues_ctx_dump(eth_dev);
> > + rte_panic("nix_lf_q_interrupt\n");
>
> rte_panic() is not allowed in the PMDs, please remove them.
It is an error interrupt handler, i.e. a fatal error, and the driver cannot
proceed. Should I call abort() or simply return? I think we can treat this as
a special case for rte_panic() when it is in an error interrupt handler.
Thoughts?
>
> > }
> >
> > int
> >
^ permalink raw reply [flat|nested] 196+ messages in thread
* Re: [dpdk-dev] [PATCH v1 48/58] net/octeontx2: add FW version get operation
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 48/58] net/octeontx2: add FW version get operation jerinj
@ 2019-06-06 16:06 ` Ferruh Yigit
2019-06-07 5:51 ` [dpdk-dev] [EXT] " Vamsi Krishna Attunuru
0 siblings, 1 reply; 196+ messages in thread
From: Ferruh Yigit @ 2019-06-06 16:06 UTC (permalink / raw)
To: jerinj, dev, John McNamara, Marko Kovacevic, Nithin Dabilpuram,
Kiran Kumar K
Cc: Vamsi Attunuru
On 6/2/2019 4:24 PM, jerinj@marvell.com wrote:
> From: Vamsi Attunuru <vattunuru@marvell.com>
>
> Add firmware version get operation.
>
> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
<...>
> @@ -209,6 +209,28 @@ otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
> return 0;
> }
>
> +int
> +otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
> + size_t fw_size)
> +{
> + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
> + int rc = (int)fw_size;
> +
> + if (fw_size > sizeof(dev->mkex_pfl_name))
> + rc = sizeof(dev->mkex_pfl_name);
> +
> + rc = strlcpy(fw_version, (char *)dev->mkex_pfl_name, rc);
> +
> + rc += 1; /* Add the size of '\0' */
> + if (fw_size < (uint32_t)rc)
> + goto done;
> + else
> + return 0;
> +
> +done:
> + return rc;
> +}
Up to you but this can be done without a 'goto':
	...
	if (fw_size < (uint32_t)rc)
		return rc;

	return 0;
}
^ permalink raw reply [flat|nested] 196+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 09/58] net/octeontx2: add context debug utils
2019-06-06 16:04 ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
@ 2019-06-06 16:18 ` Ferruh Yigit
2019-06-06 16:27 ` Jerin Jacob Kollanukkaran
0 siblings, 1 reply; 196+ messages in thread
From: Ferruh Yigit @ 2019-06-06 16:18 UTC (permalink / raw)
To: Jerin Jacob Kollanukkaran, dev, Nithin Kumar Dabilpuram,
Kiran Kumar Kokkilagadda
Cc: Vivek Kumar Sharma
On 6/6/2019 5:04 PM, Jerin Jacob Kollanukkaran wrote:
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Thursday, June 6, 2019 9:12 PM
>> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org; Nithin
>> Kumar Dabilpuram <ndabilpuram@marvell.com>; Kiran Kumar Kokkilagadda
>> <kirankumark@marvell.com>
>> Cc: Vivek Kumar Sharma <viveksharma@marvell.com>
>> Subject: [EXT] Re: [dpdk-dev] [PATCH v1 09/58] net/octeontx2: add context
>> debug utils
>> On 6/2/2019 4:23 PM, jerinj@marvell.com wrote:
>>> From: Jerin Jacob <jerinj@marvell.com>
>>>
>>> Add RQ,SQ,CQ context and CQE structure dump utils.
>>>
>>> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
>>> Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
>>
>> <...>
>>
>>> @@ -23,6 +23,9 @@ nix_lf_err_irq(void *param)
>>>
>>> /* Clear interrupt */
>>> otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
>>> +
>>> + otx2_nix_queues_ctx_dump(eth_dev);
>>> + rte_panic("nix_lf_error_interrupt\n");
>>> }
>>>
>>> static int
>>> @@ -75,6 +78,9 @@ nix_lf_ras_irq(void *param)
>>>
>>> /* Clear interrupt */
>>> otx2_write64(intr, dev->base + NIX_LF_RAS);
>>> +
>>> + otx2_nix_queues_ctx_dump(eth_dev);
>>> + rte_panic("nix_lf_ras_interrupt\n");
>>> }
>>>
>>> static int
>>> @@ -232,6 +238,9 @@ nix_lf_q_irq(void *param)
>>>
>>> /* Clear interrupt */
>>> otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
>>> +
>>> + otx2_nix_queues_ctx_dump(eth_dev);
>>> + rte_panic("nix_lf_q_interrupt\n");
>>
>> rte_panic() is not allowed in the PMDs, please remove them.
>
> It is an error interrupt handler, i.e. a fatal error, and the driver cannot
> proceed. Should I call abort() or simply return? I think we can treat this
> as a special case for rte_panic() when it is in an error interrupt handler.
>
> Thoughts?
The driver may not be able to proceed, but perhaps the application can (with a
fail-over method, etc.); a driver shouldn't cause the application to exit, the
application itself should make that decision.
I think the best thing to do is to deactivate the driver and send some kind of
error to the application.
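For reference, a minimal sketch of that idea (the event type chosen here is an
assumption; any mechanism that defers the decision to the application works):

	/* In the error IRQ handler: dump state for debugging, then notify
	 * the application instead of panicking.
	 */
	otx2_nix_queues_ctx_dump(eth_dev);
	_rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_RESET,
				      NULL);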
^ permalink raw reply [flat|nested] 196+ messages in thread
* Re: [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype support
2019-06-06 15:59 ` Jerin Jacob Kollanukkaran
@ 2019-06-06 16:20 ` Ferruh Yigit
2019-06-07 8:54 ` Jerin Jacob Kollanukkaran
0 siblings, 1 reply; 196+ messages in thread
From: Ferruh Yigit @ 2019-06-06 16:20 UTC (permalink / raw)
To: Jerin Jacob Kollanukkaran, dev, John McNamara, Marko Kovacevic,
Nithin Kumar Dabilpuram, Kiran Kumar Kokkilagadda
Cc: Harman Kalra
On 6/6/2019 4:59 PM, Jerin Jacob Kollanukkaran wrote:
>
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Thursday, June 6, 2019 9:20 PM
>> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org; John
>> McNamara <john.mcnamara@intel.com>; Marko Kovacevic
>> <marko.kovacevic@intel.com>; Nithin Kumar Dabilpuram
>> <ndabilpuram@marvell.com>; Kiran Kumar Kokkilagadda
>> <kirankumark@marvell.com>
>> Cc: Harman Kalra <hkalra@marvell.com>
>> Subject: Re: [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype support
>>
>> On 6/2/2019 4:24 PM, jerinj@marvell.com wrote:
>>> From: Jerin Jacob <jerinj@marvell.com>
>>>
>>> The fields from CQE needs to be converted to ptype and rx ol flags in
>>> mbuf. This patch adds create lookup memory for those items to be used
>>> in Fastpath.
>>>
>>> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
>>> Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
>>> Signed-off-by: Harman Kalra <hkalra@marvell.com>
>>
>> <...>
>>
>>> @@ -1,4 +1,7 @@
>>> DPDK_19.05 {
>>> + global:
>>> +
>>> + otx2_nix_fastpath_lookup_mem_get;
>>
>> Why is this function in the .map file?
>
> It is used by the octeontx2 eventdev driver in drivers/event/octeontx2.
OK, is there any way to get rid of it, for example by using the event-eth
adapters?
>
>> The .map file is for functions that the PMD exposes for applications to
>> call; this one looks intended for use within the library itself, and if so
>> it does not need to be in the .map file.
>>
>>>
>>> local: *;
>>> };
>>>
>
^ permalink raw reply [flat|nested] 196+ messages in thread
* Re: [dpdk-dev] [PATCH v1 56/58] net/octeontx2: add device stop and close operations
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 56/58] net/octeontx2: add device stop and close operations jerinj
@ 2019-06-06 16:23 ` Ferruh Yigit
2019-06-07 5:11 ` Nithin Dabilpuram
0 siblings, 1 reply; 196+ messages in thread
From: Ferruh Yigit @ 2019-06-06 16:23 UTC (permalink / raw)
To: jerinj, dev, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vamsi Attunuru
On 6/2/2019 4:24 PM, jerinj@marvell.com wrote:
> From: Nithin Dabilpuram <ndabilpuram@marvell.com>
>
> Add device stop, close and reset operations.
>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
<...>
> @@ -1792,6 +1844,24 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
> return 0;
> }
>
> +static void
> +otx2_nix_dev_close(struct rte_eth_dev *eth_dev)
> +{
> + otx2_eth_dev_uninit(eth_dev, true);
> +}
'close' should free all PMD resources; with the 'RTE_ETH_DEV_CLOSE_REMOVE'
flag, the ethdev API can free the ethdev-level allocated memory itself.
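For reference, a minimal sketch of opting in (typically done once in the
device init path):

	/* Let the ethdev layer free the port's ethdev-level memory when
	 * the application calls rte_eth_dev_close().
	 */
	eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;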
^ permalink raw reply [flat|nested] 196+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 09/58] net/octeontx2: add context debug utils
2019-06-06 16:18 ` Ferruh Yigit
@ 2019-06-06 16:27 ` Jerin Jacob Kollanukkaran
0 siblings, 0 replies; 196+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-06-06 16:27 UTC (permalink / raw)
To: Ferruh Yigit, dev, Nithin Kumar Dabilpuram, Kiran Kumar Kokkilagadda
Cc: Vivek Kumar Sharma
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, June 6, 2019 9:49 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org; Nithin
> Kumar Dabilpuram <ndabilpuram@marvell.com>; Kiran Kumar Kokkilagadda
> <kirankumark@marvell.com>
> Cc: Vivek Kumar Sharma <viveksharma@marvell.com>
> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v1 09/58] net/octeontx2: add
> context debug utils
>
> On 6/6/2019 5:04 PM, Jerin Jacob Kollanukkaran wrote:
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >> Sent: Thursday, June 6, 2019 9:12 PM
> >> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org;
> >> Nithin Kumar Dabilpuram <ndabilpuram@marvell.com>; Kiran Kumar
> >> Kokkilagadda <kirankumark@marvell.com>
> >> Cc: Vivek Kumar Sharma <viveksharma@marvell.com>
> >> Subject: [EXT] Re: [dpdk-dev] [PATCH v1 09/58] net/octeontx2: add
> >> context debug utils On 6/2/2019 4:23 PM, jerinj@marvell.com wrote:
> >>> From: Jerin Jacob <jerinj@marvell.com>
> >>>
> >>> Add RQ,SQ,CQ context and CQE structure dump utils.
> >>>
> >>> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> >>> Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
> >>
> >> <...>
> >>
> >>> @@ -23,6 +23,9 @@ nix_lf_err_irq(void *param)
> >>>
> >>> /* Clear interrupt */
> >>> otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
> >>> +
> >>> + otx2_nix_queues_ctx_dump(eth_dev);
> >>> + rte_panic("nix_lf_error_interrupt\n");
> >>> }
> >>>
> >>> static int
> >>> @@ -75,6 +78,9 @@ nix_lf_ras_irq(void *param)
> >>>
> >>> /* Clear interrupt */
> >>> otx2_write64(intr, dev->base + NIX_LF_RAS);
> >>> +
> >>> + otx2_nix_queues_ctx_dump(eth_dev);
> >>> + rte_panic("nix_lf_ras_interrupt\n");
> >>> }
> >>>
> >>> static int
> >>> @@ -232,6 +238,9 @@ nix_lf_q_irq(void *param)
> >>>
> >>> /* Clear interrupt */
> >>> otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
> >>> +
> >>> + otx2_nix_queues_ctx_dump(eth_dev);
> >>> + rte_panic("nix_lf_q_interrupt\n");
> >>
> >> rte_panic() is not allowed in the PMDs, please remove them.
> >
> > It is an error interrupt handler, i.e. a fatal error, and the driver
> > cannot proceed. Should I call abort() or simply return? I think we can
> > treat this as a special case for rte_panic() when it is in an error
> > interrupt handler.
> >
> > Thoughts?
>
> The driver may not be able to proceed, but perhaps the application can (with
> a fail-over method, etc.); a driver shouldn't cause the application to exit,
> the application itself should make that decision.
OK. I will remove the rte_panic().
>
> I think the best thing to do is to deactivate the driver and send some kind
> of error to the application.
>
^ permalink raw reply [flat|nested] 196+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 01/58] net/octeontx2: add build infrastructure
2019-06-06 15:33 ` Ferruh Yigit
@ 2019-06-06 16:40 ` Jerin Jacob Kollanukkaran
0 siblings, 0 replies; 196+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-06-06 16:40 UTC (permalink / raw)
To: Ferruh Yigit, dev, Thomas Monjalon, John McNamara,
Marko Kovacevic, Nithin Kumar Dabilpuram,
Kiran Kumar Kokkilagadda
Cc: Pavan Nikhilesh Bhagavatula
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, June 6, 2019 9:03 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org; Thomas
> Monjalon <thomas@monjalon.net>; John McNamara
> <john.mcnamara@intel.com>; Marko Kovacevic
> <marko.kovacevic@intel.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>; Kiran Kumar Kokkilagadda
> <kirankumark@marvell.com>
> Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v1 01/58] net/octeontx2: add build
> infrastructure
> On 6/2/2019 4:23 PM, jerinj@marvell.com wrote:
> > From: Jerin Jacob <jerinj@marvell.com>
> >
> > Adding bare minimum PMD library and doc build infrastructure.
> >
> > Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> > Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > ---
> > config/common_base | 5 +++
> > doc/guides/nics/features/octeontx2.ini | 8 ++++
> > doc/guides/nics/features/octeontx2_vec.ini | 8 ++++
> > doc/guides/nics/features/octeontx2_vf.ini | 8 ++++
> > drivers/net/Makefile | 1 +
> > drivers/net/meson.build | 2 +-
> > drivers/net/octeontx2/Makefile | 38 +++++++++++++++++++
> > drivers/net/octeontx2/meson.build | 24 ++++++++++++
> > drivers/net/octeontx2/otx2_ethdev.c | 3 ++
> > .../octeontx2/rte_pmd_octeontx2_version.map | 4 ++
> > mk/rte.app.mk | 2 +
>
> It would be good to include the MAINTAINERS file in this patch, of course
> with only the content introduced in this patch.
OK
>
> > 11 files changed, 102 insertions(+), 1 deletion(-) create mode
> > 100644 doc/guides/nics/features/octeontx2.ini
> > create mode 100644 doc/guides/nics/features/octeontx2_vec.ini
> > create mode 100644 doc/guides/nics/features/octeontx2_vf.ini
> > create mode 100644 drivers/net/octeontx2/Makefile create mode 100644
> > drivers/net/octeontx2/meson.build create mode 100644
> > drivers/net/octeontx2/otx2_ethdev.c
> > create mode 100644
> > drivers/net/octeontx2/rte_pmd_octeontx2_version.map
> >
> > diff --git a/config/common_base b/config/common_base index
> > 4a3de0360..38edad355 100644
> > --- a/config/common_base
> > +++ b/config/common_base
> > @@ -405,6 +405,11 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
> > #
> > CONFIG_RTE_LIBRTE_OCTEONTX_PMD=y
> >
> > +#
> > +# Compile burst-oriented Cavium OCTEONTX2 network PMD driver #
> > +CONFIG_RTE_LIBRTE_OCTEONTX2_PMD=y
> > +
>
> Since the .ini files only list "ARMv8", should the PMD be disabled in the
> other config files?
> Or is support for those architectures coming in the next patches?
> If this is only for Armv8 & Linux, it is better to keep it disabled in the
> base config and enable it only in that specific config file.
It does build for x86. I have added it to the default config so that it builds
for x86 as well; that way ethdev changes will not leave this driver out, as
not everyone has an arm64 platform to compile this driver on.
>
> > #
> > # Compile WRS accelerated virtual port (AVP) guest PMD driver # diff
> > --git a/doc/guides/nics/features/octeontx2.ini
> > b/doc/guides/nics/features/octeontx2.ini
> > new file mode 100644
> > index 000000000..0ec3b6983
> > --- /dev/null
> > +++ b/doc/guides/nics/features/octeontx2.ini
> > @@ -0,0 +1,8 @@
> > +;
> > +; Supported features of the 'octeontx2' network poll mode driver.
> > +;
> > +; Refer to default.ini for the full list of available PMD features.
> > +;
> > +[Features]
> > +Linux VFIO = Y
> > +ARMv8 = Y
> > diff --git a/doc/guides/nics/features/octeontx2_vec.ini
> > b/doc/guides/nics/features/octeontx2_vec.ini
> > new file mode 100644
> > index 000000000..774f136c1
> > --- /dev/null
> > +++ b/doc/guides/nics/features/octeontx2_vec.ini
> > @@ -0,0 +1,8 @@
> > +;
> > +; Supported features of the 'octeontx2_vec' network poll mode driver.
> > +;
> > +; Refer to default.ini for the full list of available PMD features.
> > +;
> > +[Features]
> > +Linux VFIO = Y
> > +ARMv8 = Y
>
> I think it is better to introduce the vector .ini file with the patch that
> enables the vector path; the same goes for the vf one below.
I have added only the slow-path stuff that is common to the vector and scalar
paths.
>
> > diff --git a/doc/guides/nics/features/octeontx2_vf.ini
> > b/doc/guides/nics/features/octeontx2_vf.ini
> > new file mode 100644
> > index 000000000..36642354e
> > --- /dev/null
> > +++ b/doc/guides/nics/features/octeontx2_vf.ini
> > @@ -0,0 +1,8 @@
> > +;
> > +; Supported features of the 'octeontx2_vf' network poll mode driver.
> > +;
> > +; Refer to default.ini for the full list of available PMD features.
> > +;
> > +[Features]
> > +Linux VFIO = Y
> > +ARMv8 = Y
> > diff --git a/drivers/net/Makefile b/drivers/net/Makefile index
> > 3a72cf38c..5bb618b21 100644
> > --- a/drivers/net/Makefile
> > +++ b/drivers/net/Makefile
> > @@ -45,6 +45,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += nfp
> > DIRS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt
> > DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += null
> > DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_PMD) += octeontx
> > +DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += octeontx2
> > DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
> > DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
> > DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring diff --git
> > a/drivers/net/meson.build b/drivers/net/meson.build index
> > ed99896c3..086a2f4cd 100644
> > --- a/drivers/net/meson.build
> > +++ b/drivers/net/meson.build
> > @@ -31,7 +31,7 @@ drivers = ['af_packet',
> > 'netvsc',
> > 'nfb',
> > 'nfp',
> > - 'null', 'octeontx', 'pcap', 'qede', 'ring',
> > + 'null', 'octeontx', 'octeontx2', 'pcap', 'ring',
>
> The multi-line entry causes merge conflicts; can you please break the line
> while adding the new one, like:
> 'null', 'octeontx',
> 'octeontx2',
> 'qede', 'ring',
Makes sense. I will fix it.
>
> > 'sfc',
> > 'softnic',
> > 'szedata2',
> > diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
> > new file mode 100644
> > index 000000000..0a606d27b
> > --- /dev/null
> > +++ b/drivers/net/octeontx2/Makefile
> > @@ -0,0 +1,38 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(C) 2019 Marvell International Ltd.
> > +#
> > +
> > +include $(RTE_SDK)/mk/rte.vars.mk
> > +
> > +#
> > +# library name
> > +#
> > +LIB = librte_pmd_octeontx2.a
> > +
> > +CFLAGS += $(WERROR_FLAGS)
> > +CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2
> > +CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
> > +CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
> > +CFLAGS += -O3
> > +CFLAGS += -DALLOW_EXPERIMENTAL_API
>
> Can you please add this flag only when an experimental API is really called?
> And in that case, add a comment here with the name of that experimental
> function; this will help us remove unnecessary flags when APIs become
> non-experimental.
I will fix it.
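For illustration, the convention being asked for would look like this in the
Makefile (the function name below is hypothetical):

    # Experimental APIs in use; drop this flag once they become stable:
    #  - rte_foo_bar() (hypothetical example)
    CFLAGS += -DALLOW_EXPERIMENTAL_API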
>
> > +CFLAGS += -flax-vector-conversions
>
> Same for this one, please add when needed.
I will fix it.
>
> > +
> > +ifneq ($(CONFIG_RTE_ARCH_64),y)
> > +CFLAGS += -Wno-int-to-pointer-cast
> > +CFLAGS += -Wno-pointer-to-int-cast
>
> Is there a way to get rid of these? Why do we need to ignore these warnings?
Those come from the base code; I would keep them as they are.
>
> > +endif
> > +
> > +EXPORT_MAP := rte_pmd_octeontx2_version.map
> > +
> > +LIBABIVER := 1
> > +
> > +#
> > +# all source are stored in SRCS-y
> > +#
> > +SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
> > + otx2_ethdev.c
> > +
> > +LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
> > +LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_bus_pci -lrte_mempool_octeontx2
>
> Can you please keep just the minimum required dependencies?
Sure.
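For reference, v2 of this patch later in the thread trims the first patch's
link line down to the direct dependencies:

    LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2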
>
> > +
> > +include $(RTE_SDK)/mk/rte.lib.mk
> > diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
> > new file mode 100644
> > index 000000000..0bd32446b
> > --- /dev/null
> > +++ b/drivers/net/octeontx2/meson.build
> > @@ -0,0 +1,24 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(C) 2019 Marvell International Ltd.
> > +#
> > +
> > +sources = files(
> > + 'otx2_ethdev.c',
> > + )
> > +
> > +allow_experimental_apis = true
>
> All comments for makefile valid for meson too, can you please check?
Sure.
>
> > +deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2']
> > +
> > +cflags += ['-flax-vector-conversions','-DALLOW_EXPERIMENTAL_API']
> > +
> > +extra_flags = []
> > +# This integrated controller runs only on a arm64 machine, remove 32bit warnings
> > +if not dpdk_conf.get('RTE_ARCH_64')
> > +	extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast']
> > +endif
> > +
> > +foreach flag: extra_flags
> > + if cc.has_argument(flag)
> > + cflags += flag
> > + endif
> > +endforeach
> > diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
> > new file mode 100644
> > index 000000000..d26535dee
> > --- /dev/null
> > +++ b/drivers/net/octeontx2/otx2_ethdev.c
> > @@ -0,0 +1,3 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(C) 2019 Marvell International Ltd.
> > + */
> > diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
> > new file mode 100644
> > index 000000000..fc8c95e91
> > --- /dev/null
> > +++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
> > @@ -0,0 +1,4 @@
> > +DPDK_19.05 {
>
> DPDK_19.08 now.
Good catch. I will fix it.
* Re: [dpdk-dev] [PATCH v1 58/58] doc: add Marvell OCTEON TX2 ethdev documentation
2019-06-02 15:24 ` [dpdk-dev] [PATCH v1 58/58] doc: add Marvell OCTEON TX2 ethdev documentation jerinj
@ 2019-06-06 16:50 ` Ferruh Yigit
2019-06-07 3:42 ` Jerin Jacob Kollanukkaran
0 siblings, 1 reply; 196+ messages in thread
From: Ferruh Yigit @ 2019-06-06 16:50 UTC (permalink / raw)
To: jerinj, dev, Thomas Monjalon, John McNamara, Marko Kovacevic,
Nithin Dabilpuram, Kiran Kumar K, Vamsi Attunuru
On 6/2/2019 4:24 PM, jerinj@marvell.com wrote:
> From: Jerin Jacob <jerinj@marvell.com>
>
> Add Marvell OCTEON TX2 ethdev documentation.
>
> This patch also updates the MAINTAINERS file and
> shared library versions in release_19_08.rst.
>
> Cc: John McNamara <john.mcnamara@intel.com>
> Cc: Thomas Monjalon <thomas@monjalon.net>
>
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
> Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
<...>
> +Debugging Options
> +-----------------
> +
> +.. _table_octeontx2_ethdev_debug_options:
> +
> +.. table:: OCTEON TX2 ethdev debug options
> +
> + +---+------------+-------------------------------------------------------+
> + | # | Component | EAL log command |
> + +===+============+=======================================================+
> + | 1 | NIX | --log-level='pmd\.net.octeontx2,8' |
> + +---+------------+-------------------------------------------------------+
> + | 2 | NPC | --log-level='pmd\.net.octeontx2\.flow,8' |
> + +---+------------+-------------------------------------------------------+
Are these log types registered?
I can't find them, but I may have missed them since I haven't applied the whole set.
<...>
> diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
> index b4c6972e3..e925ccf0e 100644
> --- a/doc/guides/rel_notes/release_19_05.rst
> +++ b/doc/guides/rel_notes/release_19_05.rst
Can you please use the 19.08 release notes, instead of 19.05.
Also, can you please announce the PMD in the "New Features" section of the release notes?
* Re: [dpdk-dev] [PATCH v1 58/58] doc: add Marvell OCTEON TX2 ethdev documentation
2019-06-06 16:50 ` Ferruh Yigit
@ 2019-06-07 3:42 ` Jerin Jacob Kollanukkaran
0 siblings, 0 replies; 196+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-06-07 3:42 UTC (permalink / raw)
To: Ferruh Yigit, dev, Thomas Monjalon, John McNamara,
Marko Kovacevic, Nithin Kumar Dabilpuram,
Kiran Kumar Kokkilagadda, Vamsi Krishna Attunuru
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, June 6, 2019 10:20 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org; Thomas
> Monjalon <thomas@monjalon.net>; John McNamara
> <john.mcnamara@intel.com>; Marko Kovacevic
> <marko.kovacevic@intel.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>; Kiran Kumar Kokkilagadda
> <kirankumark@marvell.com>; Vamsi Krishna Attunuru
> <vattunuru@marvell.com>
> Subject: Re: [dpdk-dev] [PATCH v1 58/58] doc: add Marvell OCTEON TX2
> ethdev documentation
>
> On 6/2/2019 4:24 PM, jerinj@marvell.com wrote:
> > From: Jerin Jacob <jerinj@marvell.com>
> >
> > Add Marvell OCTEON TX2 ethdev documentation.
> >
> > This patch also updates the MAINTAINERS file and shared library
> > versions in release_19_08.rst.
> >
> > Cc: John McNamara <john.mcnamara@intel.com>
> > Cc: Thomas Monjalon <thomas@monjalon.net>
> >
> > Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> > Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
> > Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
> > Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
>
> <...>
>
> > +Debugging Options
> > +-----------------
> > +
> > +.. _table_octeontx2_ethdev_debug_options:
> > +
> > +.. table:: OCTEON TX2 ethdev debug options
> > +
> > + +---+------------+-------------------------------------------------------+
> > + | # | Component  | EAL log command                                       |
> > + +===+============+=======================================================+
> > + | 1 | NIX        | --log-level='pmd\.net.octeontx2,8'                    |
> > + +---+------------+-------------------------------------------------------+
> > + | 2 | NPC        | --log-level='pmd\.net.octeontx2\.flow,8'              |
> > + +---+------------+-------------------------------------------------------+
>
> Are these log types registered?
> I can't find them, but I may have missed them since I haven't applied the whole set.
Yes.
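For context, registering such a log type follows the standard DPDK pattern; a
minimal sketch (the variable name otx2_logtype_nix is an assumption, not
necessarily what the driver uses):

    #include <rte_common.h>
    #include <rte_log.h>

    int otx2_logtype_nix;

    /* Constructor: register the "pmd.net.octeontx2" log type at startup */
    RTE_INIT(otx2_log_init)
    {
    	otx2_logtype_nix = rte_log_register("pmd.net.octeontx2");
    	if (otx2_logtype_nix >= 0)
    		rte_log_set_level(otx2_logtype_nix, RTE_LOG_NOTICE);
    }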
>
> <...>
>
> > diff --git a/doc/guides/rel_notes/release_19_05.rst
> > b/doc/guides/rel_notes/release_19_05.rst
> > index b4c6972e3..e925ccf0e 100644
> > --- a/doc/guides/rel_notes/release_19_05.rst
> > +++ b/doc/guides/rel_notes/release_19_05.rst
>
> Can you please use the 19.08 release notes, instead of 19.05.
My bad. Will fix it in v2.
>
> Also, can you please announce the PMD in the "New Features" section of the
> release notes?
Yes. I will send a separate patch for that, as there are multiple drivers (ethdev, eventdev, mempool, raw driver for DMA)
being added for octeontx2. I will combine them into a single entry in "New Features".
* Re: [dpdk-dev] [PATCH v1 56/58] net/octeontx2: add device stop and close operations
2019-06-06 16:23 ` Ferruh Yigit
@ 2019-06-07 5:11 ` Nithin Dabilpuram
0 siblings, 0 replies; 196+ messages in thread
From: Nithin Dabilpuram @ 2019-06-07 5:11 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: jerinj, dev, Kiran Kumar K, Vamsi Attunuru
On Thu, Jun 06, 2019 at 05:23:12PM +0100, Ferruh Yigit wrote:
> On 6/2/2019 4:24 PM, jerinj@marvell.com wrote:
> > From: Nithin Dabilpuram <ndabilpuram@marvell.com>
> >
> > Add device stop, close and reset operations.
> >
> > Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
>
> <...>
>
> > @@ -1792,6 +1844,24 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
> > return 0;
> > }
> >
> > +static void
> > +otx2_nix_dev_close(struct rte_eth_dev *eth_dev)
> > +{
> > + otx2_eth_dev_uninit(eth_dev, true);
> > +}
>
> 'close' should free all PMD resources; with the 'RTE_ETH_DEV_CLOSE_REMOVE' flag,
> the ethdev API can free the ethdev-level allocated memory itself.
>
Agreed; we are adhering to the spec and handling close with the RTE_ETH_DEV_CLOSE_REMOVE
flag behavior, where close cannot return an error and even the rte_eth_dev itself will be freed.
Do you see any issue?
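For context, a minimal sketch of that behavior, combining the flag set at init
time with the close op from the quoted patches (not the exact driver code):

    /* At init: ask the ethdev layer to free rte_eth_dev itself on close */
    eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;

    /* Close op: release all PMD resources; the callback returns void in
     * this API version, so no error can be reported here.
     */
    static void
    otx2_nix_dev_close(struct rte_eth_dev *eth_dev)
    {
    	otx2_eth_dev_uninit(eth_dev, true);
    }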
>
>
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 48/58] net/octeontx2: add FW version get operation
2019-06-06 16:06 ` Ferruh Yigit
@ 2019-06-07 5:51 ` Vamsi Krishna Attunuru
0 siblings, 0 replies; 196+ messages in thread
From: Vamsi Krishna Attunuru @ 2019-06-07 5:51 UTC (permalink / raw)
To: Ferruh Yigit, Jerin Jacob Kollanukkaran, dev, John McNamara,
Marko Kovacevic, Nithin Kumar Dabilpuram,
Kiran Kumar Kokkilagadda
From: Ferruh Yigit <ferruh.yigit@intel.com>
Sent: Thursday, June 6, 2019 9:36 PM
To: Jerin Jacob Kollanukkaran; dev@dpdk.org; John McNamara; Marko Kovacevic; Nithin Kumar Dabilpuram; Kiran Kumar Kokkilagadda
Cc: Vamsi Krishna Attunuru
Subject: [EXT] Re: [dpdk-dev] [PATCH v1 48/58] net/octeontx2: add FW version get operation
On 6/2/2019 4:24 PM, jerinj@marvell.com wrote:
> From: Vamsi Attunuru <vattunuru@marvell.com>
>
> Add firmware version get operation.
>
> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
<...>
> @@ -209,6 +209,28 @@ otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
> return 0;
> }
>
> +int
> +otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
> + size_t fw_size)
> +{
> + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
> + int rc = (int)fw_size;
> +
> + if (fw_size > sizeof(dev->mkex_pfl_name))
> + rc = sizeof(dev->mkex_pfl_name);
> +
> + rc = strlcpy(fw_version, (char *)dev->mkex_pfl_name, rc);
> +
> + rc += 1; /* Add the size of '\0' */
> + if (fw_size < (uint32_t)rc)
> + goto done;
> + else
> + return 0;
> +
> +done:
> + return rc;
> +}
Up to you, but this can be done without a 'goto':
...
	if (fw_size < (uint32_t)rc)
		return rc;
	return 0;
}
Agreed, will fix it in v2.
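Putting it together, the reworked handler without the goto would look roughly
like this (a sketch based on the quoted v1 code, not necessarily the exact v2;
strlcpy here comes from rte_string_fns.h):

    int
    otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
    			size_t fw_size)
    {
    	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
    	int rc = (int)fw_size;

    	if (fw_size > sizeof(dev->mkex_pfl_name))
    		rc = sizeof(dev->mkex_pfl_name);

    	rc = strlcpy(fw_version, (char *)dev->mkex_pfl_name, rc);
    	rc += 1; /* Account for the '\0' terminator */

    	/* Buffer too small: report the size the caller needs */
    	if (fw_size < (uint32_t)rc)
    		return rc;

    	return 0;
    }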
* Re: [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype support
2019-06-06 16:20 ` Ferruh Yigit
@ 2019-06-07 8:54 ` Jerin Jacob Kollanukkaran
0 siblings, 0 replies; 196+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-06-07 8:54 UTC (permalink / raw)
To: Ferruh Yigit, dev, John McNamara, Marko Kovacevic,
Nithin Kumar Dabilpuram, Kiran Kumar Kokkilagadda
Cc: Harman Kalra
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, June 6, 2019 9:50 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org; John
> McNamara <john.mcnamara@intel.com>; Marko Kovacevic
> <marko.kovacevic@intel.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>; Kiran Kumar Kokkilagadda
> <kirankumark@marvell.com>
> Cc: Harman Kalra <hkalra@marvell.com>
> Subject: Re: [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype support
>
> On 6/6/2019 4:59 PM, Jerin Jacob Kollanukkaran wrote:
> >
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >> Sent: Thursday, June 6, 2019 9:20 PM
> >> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org;
> >> John McNamara <john.mcnamara@intel.com>; Marko Kovacevic
> >> <marko.kovacevic@intel.com>; Nithin Kumar Dabilpuram
> >> <ndabilpuram@marvell.com>; Kiran Kumar Kokkilagadda
> >> <kirankumark@marvell.com>
> >> Cc: Harman Kalra <hkalra@marvell.com>
> >> Subject: Re: [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype
> >> support
> >>
> >> On 6/2/2019 4:24 PM, jerinj@marvell.com wrote:
> >>> From: Jerin Jacob <jerinj@marvell.com>
> >>>
> >>> The fields from CQE needs to be converted to ptype and rx ol flags
> >>> in mbuf. This patch adds create lookup memory for those items to be
> >>> used in Fastpath.
> >>>
> >>> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> >>> Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
> >>> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> >>
> >> <...>
> >>
> >>> @@ -1,4 +1,7 @@
> >>> DPDK_19.05 {
> >>> + global:
> >>> +
> >>> + otx2_nix_fastpath_lookup_mem_get;
> >>
> >> Why this function is in the .map file?
> >
> > It is used by octeontx2 eventdev driver in driver/event/octeontx2
>
> OK, is there any way to get rid of it, like using event-eth adapters etc.?
OK. I will try to rework the code to avoid exposing this function.
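For context, only the symbols listed under global: in a version map are
exported from the shared object; a sketch with a hypothetical public symbol:

    DPDK_19.08 {
    	global:

    	rte_pmd_octeontx2_foo; /* hypothetical application-facing API */

    	local: *; /* everything else stays internal to the library */
    };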
>
> >
> >> The .map file is for the functions that this PMD exposes for applications
> >> to call; this looks intended for use within the library itself, and if so, it
> >> does not need to be in the .map file.
> >>
> >>>
> >>> local: *;
> >>> };
> >>>
> >
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 00/58] OCTEON TX2 Ethdev driver
2019-06-06 15:23 ` [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver Ferruh Yigit
@ 2019-06-10 9:54 ` Jerin Jacob Kollanukkaran
0 siblings, 0 replies; 196+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-06-10 9:54 UTC (permalink / raw)
To: Ferruh Yigit, dev, thomas
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, June 6, 2019 8:53 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver
>
> On 6/2/2019 4:23 PM, jerinj@marvell.com wrote:
> > From: Jerin Jacob <jerinj@marvell.com>
> >
> > This patchset adds support for OCTEON TX2 ethdev driver.
> >
> > This patch set is depended on "OCTEON TX2 common and mempool driver"
> series.
> > http://mails.dpdk.org/archives/dev/2019-June/133329.html
>
> Hi Jerin,
>
> I will wait for the dependent patches to be merged to be able to fully review
> the patchset; I will go through it for now.
Hi Thomas,
Could you merge the "OCTEON TX2 common and mempool driver" [1] series
if there are no more review comments?
The following patch sets [2] have a dependency on this series.
[1] http://mails.dpdk.org/archives/dev/2019-June/133329.html
[2]
http://patches.dpdk.org/patch/54002/
http://patches.dpdk.org/patch/54057/
http://patches.dpdk.org/patch/54017/
* [dpdk-dev] [PATCH v2 00/57] OCTEON TX2 Ethdev driver
2019-06-02 15:23 [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver jerinj
` (58 preceding siblings ...)
2019-06-06 15:23 ` [dpdk-dev] [PATCH v1 00/58] OCTEON TX2 Ethdev driver Ferruh Yigit
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 01/57] net/octeontx2: add build and doc infrastructure jerinj
` (57 more replies)
59 siblings, 58 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev; +Cc: Jerin Jacob
From: Jerin Jacob <jerinj@marvell.com>
This patchset adds support for OCTEON TX2 ethdev driver.
v2:
# Moved MAINTAINERS file update to the first patch (Ferruh)
# Removed reference to v19.05 (Ferruh)
# Moved Makefile/Meson CFLAGS to the specific patches that need them (Ferruh)
# Moved documentation updates to specific patches (Ferruh)
# Reworked the code to remove the need for exposing the
otx2_nix_fastpath_lookup_mem_get function (Ferruh)
# Updated goto logic in net/octeontx2: add FW version get operation (Ferruh)
# Added "add Rx interrupts support" patch
Harman Kalra (3):
net/octeontx2: add PTP base support
net/octeontx2: add remaining PTP operations
net/octeontx2: add Rx interrupts support
Jerin Jacob (16):
net/octeontx2: add build and doc infrastructure
net/octeontx2: add ethdev probe and remove
net/octeontx2: add device init and uninit
net/octeontx2: add devargs parsing functions
net/octeontx2: handle device error interrupts
net/octeontx2: add info get operation
net/octeontx2: add device configure operation
net/octeontx2: handle queue specific error interrupts
net/octeontx2: add context debug utils
net/octeontx2: add Rx queue setup and release
net/octeontx2: add Tx queue setup and release
net/octeontx2: add ptype support
net/octeontx2: add Rx and Tx descriptor operations
net/octeontx2: add Rx burst support
net/octeontx2: add Rx vector version
net/octeontx2: add Tx burst support
Kiran Kumar K (13):
net/octeontx2: add register dump support
net/octeontx2: add basic stats operation
net/octeontx2: add extended stats operations
net/octeontx2: introducing flow driver
net/octeontx2: add flow utility functions
net/octeontx2: add flow mbox utility functions
net/octeontx2: add flow MCAM utility functions
net/octeontx2: add flow parsing for outer layers
net/octeontx2: add flow actions support
net/octeontx2: add flow parse actions support
net/octeontx2: add flow operations
net/octeontx2: add flow destroy ops support
net/octeontx2: add flow init and fini
Krzysztof Kanas (2):
net/octeontx2: alloc and free TM HW resources
net/octeontx2: enable Tx through traffic manager
Nithin Dabilpuram (9):
net/octeontx2: add queue start and stop operations
net/octeontx2: introduce traffic manager
net/octeontx2: configure TM HW resources
net/octeontx2: add queue info and pool supported operations
net/octeontx2: add Rx multi segment version
net/octeontx2: add Tx multi segment version
net/octeontx2: add Tx vector version
net/octeontx2: add device start operation
net/octeontx2: add device stop and close operations
Sunil Kumar Kori (1):
net/octeontx2: add unicast MAC filter
Vamsi Attunuru (8):
net/octeontx2: add link stats operations
net/octeontx2: add promiscuous and allmulticast mode
net/octeontx2: add RSS support
net/octeontx2: handle port reconfigure
net/octeontx2: add module EEPROM dump
net/octeontx2: add flow control support
net/octeontx2: add FW version get operation
net/octeontx2: add MTU set operation
Vivek Sharma (5):
net/octeontx2: connect flow API to ethdev ops
net/octeontx2: implement VLAN utility functions
net/octeontx2: support VLAN offloads
net/octeontx2: support VLAN filters
net/octeontx2: support VLAN TPID and PVID for Tx
MAINTAINERS | 9 +
config/common_base | 5 +
doc/guides/nics/features/octeontx2.ini | 50 +
doc/guides/nics/features/octeontx2_vec.ini | 46 +
doc/guides/nics/features/octeontx2_vf.ini | 42 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/octeontx2.rst | 302 +++
doc/guides/platform/octeontx2.rst | 3 +
drivers/net/Makefile | 1 +
drivers/net/meson.build | 6 +-
drivers/net/octeontx2/Makefile | 55 +
drivers/net/octeontx2/meson.build | 40 +
drivers/net/octeontx2/otx2_ethdev.c | 1996 +++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 529 +++++
drivers/net/octeontx2/otx2_ethdev_debug.c | 500 +++++
drivers/net/octeontx2/otx2_ethdev_devargs.c | 165 ++
drivers/net/octeontx2/otx2_ethdev_irq.c | 468 ++++
drivers/net/octeontx2/otx2_ethdev_ops.c | 461 ++++
drivers/net/octeontx2/otx2_flow.c | 981 ++++++++
drivers/net/octeontx2/otx2_flow.h | 390 ++++
drivers/net/octeontx2/otx2_flow_ctrl.c | 220 ++
drivers/net/octeontx2/otx2_flow_parse.c | 947 ++++++++
drivers/net/octeontx2/otx2_flow_utils.c | 910 ++++++++
drivers/net/octeontx2/otx2_link.c | 108 +
drivers/net/octeontx2/otx2_lookup.c | 315 +++
drivers/net/octeontx2/otx2_mac.c | 149 ++
drivers/net/octeontx2/otx2_ptp.c | 273 +++
drivers/net/octeontx2/otx2_rss.c | 372 +++
drivers/net/octeontx2/otx2_rx.c | 411 ++++
drivers/net/octeontx2/otx2_rx.h | 333 +++
drivers/net/octeontx2/otx2_stats.c | 387 ++++
drivers/net/octeontx2/otx2_tm.c | 1396 ++++++++++++
drivers/net/octeontx2/otx2_tm.h | 153 ++
drivers/net/octeontx2/otx2_tx.c | 1033 +++++++++
drivers/net/octeontx2/otx2_tx.h | 370 +++
drivers/net/octeontx2/otx2_vlan.c | 1034 +++++++++
.../octeontx2/rte_pmd_octeontx2_version.map | 4 +
mk/rte.app.mk | 2 +
38 files changed, 14466 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/nics/features/octeontx2.ini
create mode 100644 doc/guides/nics/features/octeontx2_vec.ini
create mode 100644 doc/guides/nics/features/octeontx2_vf.ini
create mode 100644 doc/guides/nics/octeontx2.rst
create mode 100644 drivers/net/octeontx2/Makefile
create mode 100644 drivers/net/octeontx2/meson.build
create mode 100644 drivers/net/octeontx2/otx2_ethdev.c
create mode 100644 drivers/net/octeontx2/otx2_ethdev.h
create mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c
create mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
create mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
create mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c
create mode 100644 drivers/net/octeontx2/otx2_flow.c
create mode 100644 drivers/net/octeontx2/otx2_flow.h
create mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
create mode 100644 drivers/net/octeontx2/otx2_flow_parse.c
create mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
create mode 100644 drivers/net/octeontx2/otx2_link.c
create mode 100644 drivers/net/octeontx2/otx2_lookup.c
create mode 100644 drivers/net/octeontx2/otx2_mac.c
create mode 100644 drivers/net/octeontx2/otx2_ptp.c
create mode 100644 drivers/net/octeontx2/otx2_rss.c
create mode 100644 drivers/net/octeontx2/otx2_rx.c
create mode 100644 drivers/net/octeontx2/otx2_rx.h
create mode 100644 drivers/net/octeontx2/otx2_stats.c
create mode 100644 drivers/net/octeontx2/otx2_tm.c
create mode 100644 drivers/net/octeontx2/otx2_tm.h
create mode 100644 drivers/net/octeontx2/otx2_tx.c
create mode 100644 drivers/net/octeontx2/otx2_tx.h
create mode 100644 drivers/net/octeontx2/otx2_vlan.c
create mode 100644 drivers/net/octeontx2/rte_pmd_octeontx2_version.map
--
2.21.0
* [dpdk-dev] [PATCH v2 01/57] net/octeontx2: add build and doc infrastructure
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 02/57] net/octeontx2: add ethdev probe and remove jerinj
` (56 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Thomas Monjalon, John McNamara, Marko Kovacevic,
Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add the bare minimum PMD library and doc build infrastructure
and claim maintainership for the octeontx2 PMD.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
MAINTAINERS | 9 ++++++
config/common_base | 5 +++
doc/guides/nics/features/octeontx2.ini | 9 ++++++
doc/guides/nics/features/octeontx2_vec.ini | 9 ++++++
doc/guides/nics/features/octeontx2_vf.ini | 9 ++++++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/octeontx2.rst | 32 +++++++++++++++++++
doc/guides/platform/octeontx2.rst | 3 ++
drivers/net/Makefile | 1 +
drivers/net/meson.build | 6 +++-
drivers/net/octeontx2/Makefile | 30 +++++++++++++++++
drivers/net/octeontx2/meson.build | 9 ++++++
drivers/net/octeontx2/otx2_ethdev.c | 3 ++
.../octeontx2/rte_pmd_octeontx2_version.map | 4 +++
mk/rte.app.mk | 2 ++
15 files changed, 131 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/nics/features/octeontx2.ini
create mode 100644 doc/guides/nics/features/octeontx2_vec.ini
create mode 100644 doc/guides/nics/features/octeontx2_vf.ini
create mode 100644 doc/guides/nics/octeontx2.rst
create mode 100644 drivers/net/octeontx2/Makefile
create mode 100644 drivers/net/octeontx2/meson.build
create mode 100644 drivers/net/octeontx2/otx2_ethdev.c
create mode 100644 drivers/net/octeontx2/rte_pmd_octeontx2_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 24431832a..37fb91d64 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -684,6 +684,15 @@ F: drivers/net/mvneta/
F: doc/guides/nics/mvneta.rst
F: doc/guides/nics/features/mvneta.ini
+Marvell OCTEON TX2
+M: Jerin Jacob <jerinj@marvell.com>
+M: Nithin Dabilpuram <ndabilpuram@marvell.com>
+M: Kiran Kumar K <kirankumark@marvell.com>
+T: git://dpdk.org/next/dpdk-next-net-mrvl
+F: drivers/net/octeontx2/
+F: doc/guides/nics/features/octeontx2*.ini
+F: doc/guides/nics/octeontx2.rst
+
Mellanox mlx4
M: Matan Azrad <matan@mellanox.com>
M: Shahaf Shuler <shahafs@mellanox.com>
diff --git a/config/common_base b/config/common_base
index e700bf1e7..6cc44b65a 100644
--- a/config/common_base
+++ b/config/common_base
@@ -411,6 +411,11 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
#
CONFIG_RTE_LIBRTE_OCTEONTX_PMD=y
+#
+# Compile burst-oriented Marvell OCTEON TX2 network PMD driver
+#
+CONFIG_RTE_LIBRTE_OCTEONTX2_PMD=y
+
#
# Compile WRS accelerated virtual port (AVP) guest PMD driver
#
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
new file mode 100644
index 000000000..84d5ad779
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'octeontx2' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
new file mode 100644
index 000000000..5fd7e4c5c
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'octeontx2_vec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
new file mode 100644
index 000000000..3128cc120
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'octeontx2_vf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index d664c4592..9fec02f3e 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -46,6 +46,7 @@ Network Interface Controller Drivers
nfb
nfp
octeontx
+ octeontx2
qede
sfc_efx
softnic
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
new file mode 100644
index 000000000..f0bd36be3
--- /dev/null
+++ b/doc/guides/nics/octeontx2.rst
@@ -0,0 +1,32 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(C) 2019 Marvell International Ltd.
+
+OCTEON TX2 Poll Mode driver
+===========================
+
+The OCTEON TX2 ETHDEV PMD (**librte_pmd_octeontx2**) provides poll mode ethdev
+driver support for the inbuilt network device found in **Marvell OCTEON TX2**
+SoC family as well as for their virtual functions (VF) in SR-IOV context.
+
+More information can be found at `Marvell Official Website
+<https://www.marvell.com/embedded-processors/infrastructure-processors>`_.
+
+Features
+--------
+
+Features of the OCTEON TX2 Ethdev PMD are:
+
+
+Prerequisites
+-------------
+
+See :doc:`../platform/octeontx2` for setup information.
+
+Compile time Config Options
+---------------------------
+
+The following options may be modified in the ``config`` file.
+
+- ``CONFIG_RTE_LIBRTE_OCTEONTX2_PMD`` (default ``y``)
+
+ Toggle compilation of the ``librte_pmd_octeontx2`` driver.
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
index c9ea45647..d2592f119 100644
--- a/doc/guides/platform/octeontx2.rst
+++ b/doc/guides/platform/octeontx2.rst
@@ -98,6 +98,9 @@ HW Offload Drivers
This section lists dataplane H/W block(s) available in OCTEON TX2 SoC.
+#. **Ethdev Driver**
+ See :doc:`../nics/octeontx2` for NIX Ethdev driver information.
+
#. **Mempool Driver**
See :doc:`../mempool/octeontx2` for NPA mempool driver information.
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index a1d45d9cb..5767fdf65 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -47,6 +47,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += nfp
DIRS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt
DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += null
DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_PMD) += octeontx
+DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += octeontx2
DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 86e704e13..513f19b33 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -33,7 +33,11 @@ drivers = ['af_packet',
'netvsc',
'nfb',
'nfp',
- 'null', 'octeontx', 'pcap', 'qede', 'ring',
+ 'null',
+ 'octeontx',
+ 'octeontx2',
+ 'pcap',
+ 'ring',
'sfc',
'softnic',
'szedata2',
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
new file mode 100644
index 000000000..9c467352f
--- /dev/null
+++ b/drivers/net/octeontx2/Makefile
@@ -0,0 +1,30 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_octeontx2.a
+
+CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
+CFLAGS += -O3
+
+EXPORT_MAP := rte_pmd_octeontx2_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_ethdev.c
+
+LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
new file mode 100644
index 000000000..0d0ca32da
--- /dev/null
+++ b/drivers/net/octeontx2/meson.build
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+sources = files(
+ 'otx2_ethdev.c',
+ )
+
+deps += ['common_octeontx2', 'mempool_octeontx2']
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
new file mode 100644
index 000000000..d26535dee
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -0,0 +1,3 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
new file mode 100644
index 000000000..9a61188cd
--- /dev/null
+++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
@@ -0,0 +1,4 @@
+DPDK_19.08 {
+
+ local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 2b5696a27..fab72ff6a 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -109,6 +109,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF)$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOO
_LDLIBS-y += -lrte_common_octeontx
endif
OCTEONTX2-y := $(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL)
+OCTEONTX2-y += $(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD)
ifeq ($(findstring y,$(OCTEONTX2-y)),y)
_LDLIBS-y += -lrte_common_octeontx2
endif
@@ -195,6 +196,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += -lrte_pmd_mvpp2
_LDLIBS-$(CONFIG_RTE_LIBRTE_MVNETA_PMD) += -lrte_pmd_mvneta
_LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += -lrte_pmd_nfp
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null
+_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += -lrte_pmd_octeontx2
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += -lrte_pmd_pcap -lpcap
_LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += -lrte_pmd_qede
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING) += -lrte_pmd_ring
--
2.21.0
* [dpdk-dev] [PATCH v2 02/57] net/octeontx2: add ethdev probe and remove
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 01/57] net/octeontx2: add build and doc infrastructure jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 03/57] net/octeontx2: add device init and uninit jerinj
` (55 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, Anatoly Burakov
From: Jerin Jacob <jerinj@marvell.com>
Add basic PCIe ethdev probe and remove.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/octeontx2/Makefile | 8 ++-
drivers/net/octeontx2/meson.build | 14 ++++-
drivers/net/octeontx2/otx2_ethdev.c | 93 +++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 27 +++++++++
4 files changed, 140 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_ethdev.h
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 9c467352f..b3060e2dd 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -15,6 +15,11 @@ CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
CFLAGS += -O3
+ifneq ($(CONFIG_RTE_ARCH_64),y)
+CFLAGS += -Wno-int-to-pointer-cast
+CFLAGS += -Wno-pointer-to-int-cast
+endif
+
EXPORT_MAP := rte_pmd_octeontx2_version.map
LIBABIVER := 1
@@ -25,6 +30,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_ethdev.c
-LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2
+LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
+LDLIBS += -lrte_ethdev -lrte_bus_pci
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 0d0ca32da..db375f33b 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -6,4 +6,16 @@ sources = files(
'otx2_ethdev.c',
)
-deps += ['common_octeontx2', 'mempool_octeontx2']
+deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2']
+
+extra_flags = []
+# This integrated controller runs only on a arm64 machine, remove 32bit warnings
+if not dpdk_conf.get('RTE_ARCH_64')
+ extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast']
+endif
+
+foreach flag: extra_flags
+ if cc.has_argument(flag)
+ cflags += flag
+ endif
+endforeach
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index d26535dee..05fa8988e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1,3 +1,96 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(C) 2019 Marvell International Ltd.
*/
+
+#include <rte_ethdev_pci.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+
+#include "otx2_ethdev.h"
+
+static int
+otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+
+ return -ENODEV;
+}
+
+static int
+otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
+{
+ RTE_SET_USED(eth_dev);
+ RTE_SET_USED(mbox_close);
+
+ return -ENODEV;
+}
+
+static int
+nix_remove(struct rte_pci_device *pci_dev)
+{
+ struct rte_eth_dev *eth_dev;
+ int rc;
+
+ eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+ if (eth_dev) {
+ /* Cleanup eth dev */
+ rc = otx2_eth_dev_uninit(eth_dev, true);
+ if (rc)
+ return rc;
+
+ rte_eth_dev_pci_release(eth_dev);
+ }
+
+ /* Nothing to be done for secondary processes */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ return 0;
+}
+
+static int
+nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ int rc;
+
+ RTE_SET_USED(pci_drv);
+
+ rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct otx2_eth_dev),
+ otx2_eth_dev_init);
+
+ /* On error in the secondary process, recheck if the port exists in
+ * the primary or is in the middle of detaching.
+ */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY && rc)
+ if (!rte_eth_dev_allocated(pci_dev->device.name))
+ return 0;
+ return rc;
+}
+
+static const struct rte_pci_id pci_nix_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_PF)
+ },
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_VF)
+ },
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
+ PCI_DEVID_OCTEONTX2_RVU_AF_VF)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver pci_nix = {
+ .id_table = pci_nix_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA |
+ RTE_PCI_DRV_INTR_LSC,
+ .probe = nix_probe,
+ .remove = nix_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_octeontx2, pci_nix);
+RTE_PMD_REGISTER_PCI_TABLE(net_octeontx2, pci_nix_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_octeontx2, "vfio-pci");
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
new file mode 100644
index 000000000..fd01a3254
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_ETHDEV_H__
+#define __OTX2_ETHDEV_H__
+
+#include <stdint.h>
+
+#include <rte_common.h>
+
+#include "otx2_common.h"
+#include "otx2_dev.h"
+#include "otx2_irq.h"
+#include "otx2_mempool.h"
+
+struct otx2_eth_dev {
+ OTX2_DEV; /* Base class */
+} __rte_cache_aligned;
+
+static inline struct otx2_eth_dev *
+otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
+{
+ return eth_dev->data->dev_private;
+}
+
+#endif /* __OTX2_ETHDEV_H__ */
--
2.21.0
* [dpdk-dev] [PATCH v2 03/57] net/octeontx2: add device init and uninit
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 01/57] net/octeontx2: add build and doc infrastructure jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 02/57] net/octeontx2: add ethdev probe and remove jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 04/57] net/octeontx2: add devargs parsing functions jerinj
` (54 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, Anatoly Burakov
Cc: Sunil Kumar Kori, Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add basic init and uninit functions, which include
attaching the LF device to the probed PCIe device.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 277 +++++++++++++++++++++++++++-
drivers/net/octeontx2/otx2_ethdev.h | 72 ++++++++
drivers/net/octeontx2/otx2_mac.c | 72 ++++++++
5 files changed, 418 insertions(+), 5 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_mac.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index b3060e2dd..4ff3609d2 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -28,6 +28,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_mac.c \
otx2_ethdev.c
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index db375f33b..b153f166d 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files(
+ 'otx2_mac.c',
'otx2_ethdev.c',
)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 05fa8988e..08f03b4c3 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -8,27 +8,277 @@
#include "otx2_ethdev.h"
+static inline void
+otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+}
+
+static inline void
+otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+}
+
+static inline uint64_t
+nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
+{
+ uint64_t capa = NIX_RX_OFFLOAD_CAPA;
+
+ if (otx2_dev_is_vf(dev))
+ capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+
+ return capa;
+}
+
+static inline uint64_t
+nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
+{
+ RTE_SET_USED(dev);
+
+ return NIX_TX_OFFLOAD_CAPA;
+}
+
+static int
+nix_lf_free(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_lf_free_req *req;
+ struct ndc_sync_op *ndc_req;
+ int rc;
+
+ /* Sync NDC-NIX for LF */
+ ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
+ ndc_req->nix_lf_tx_sync = 1;
+ ndc_req->nix_lf_rx_sync = 1;
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ otx2_err("Error on NDC-NIX-[TX, RX] LF sync, rc %d", rc);
+
+ req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
+ /* Let AF driver free all this nix lf's
+ * NPC entries allocated using NPC MBOX.
+ */
+ req->flags = 0;
+
+ return otx2_mbox_process(mbox);
+}
+
+static inline int
+nix_lf_attach(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct rsrc_attach_req *req;
+
+ /* Attach NIX(lf) */
+ req = otx2_mbox_alloc_msg_attach_resources(mbox);
+ req->modify = true;
+ req->nixlf = true;
+
+ return otx2_mbox_process(mbox);
+}
+
+static inline int
+nix_lf_get_msix_offset(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct msix_offset_rsp *msix_rsp;
+ int rc;
+
+ /* Get NPA and NIX MSIX vector offsets */
+ otx2_mbox_alloc_msg_msix_offset(mbox);
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
+
+ dev->nix_msixoff = msix_rsp->nix_msixoff;
+
+ return rc;
+}
+
+static inline int
+otx2_eth_dev_lf_detach(struct otx2_mbox *mbox)
+{
+ struct rsrc_detach_req *req;
+
+ req = otx2_mbox_alloc_msg_detach_resources(mbox);
+
+ /* Detach all except npa lf */
+ req->partial = true;
+ req->nixlf = true;
+ req->sso = true;
+ req->ssow = true;
+ req->timlfs = true;
+ req->cptlfs = true;
+
+ return otx2_mbox_process(mbox);
+}
+
static int
otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
{
- RTE_SET_USED(eth_dev);
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_pci_device *pci_dev;
+ int rc, max_entries;
- return -ENODEV;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ /* Setup callbacks for secondary process */
+ otx2_eth_set_tx_function(eth_dev);
+ otx2_eth_set_rx_function(eth_dev);
+ return 0;
+ }
+
+ pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ rte_eth_copy_pci_info(eth_dev, pci_dev);
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
+
+ /* Zero out everything after OTX2_DEV to allow proper dev_reset() */
+ memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
+ offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
+
+ if (!dev->mbox_active) {
+ /* Initialize the base otx2_dev object
+ * only if it is not already initialized
+ */
+ rc = otx2_dev_init(pci_dev, dev);
+ if (rc) {
+ otx2_err("Failed to initialize otx2_dev rc=%d", rc);
+ goto error;
+ }
+ }
+
+ /* Grab the NPA LF if required */
+ rc = otx2_npa_lf_init(pci_dev, dev);
+ if (rc)
+ goto otx2_dev_uninit;
+
+ dev->configured = 0;
+ dev->drv_inited = true;
+ dev->base = dev->bar2 + (RVU_BLOCK_ADDR_NIX0 << 20);
+ dev->lmt_addr = dev->bar2 + (RVU_BLOCK_ADDR_LMT << 20);
+
+ /* Attach NIX LF */
+ rc = nix_lf_attach(dev);
+ if (rc)
+ goto otx2_npa_uninit;
+
+ /* Get NIX MSIX offset */
+ rc = nix_lf_get_msix_offset(dev);
+ if (rc)
+ goto otx2_npa_uninit;
+
+ /* Get maximum number of supported MAC entries */
+ max_entries = otx2_cgx_mac_max_entries_get(dev);
+ if (max_entries < 0) {
+ otx2_err("Failed to get max entries for mac addr");
+ rc = -ENOTSUP;
+ goto mbox_detach;
+ }
+
+ /* For VFs, returned max_entries will be 0. But to keep default MAC
+ * address, one entry must be allocated. So set it to 1.
+ */
+ if (max_entries == 0)
+ max_entries = 1;
+
+ eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", max_entries *
+ RTE_ETHER_ADDR_LEN, 0);
+ if (eth_dev->data->mac_addrs == NULL) {
+ otx2_err("Failed to allocate memory for mac addr");
+ rc = -ENOMEM;
+ goto mbox_detach;
+ }
+
+ dev->max_mac_entries = max_entries;
+
+ rc = otx2_nix_mac_addr_get(eth_dev, dev->mac_addr);
+ if (rc)
+ goto free_mac_addrs;
+
+ /* Update the mac address */
+ memcpy(eth_dev->data->mac_addrs, dev->mac_addr, RTE_ETHER_ADDR_LEN);
+
+ /* Also sync same MAC address to CGX table */
+ otx2_cgx_mac_addr_set(eth_dev, ð_dev->data->mac_addrs[0]);
+
+ dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
+ dev->rx_offload_capa = nix_get_rx_offload_capa(dev);
+
+ if (otx2_dev_is_A0(dev)) {
+ dev->hwcap |= OTX2_FIXUP_F_MIN_4K_Q;
+ dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL;
+ }
+
+ otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64
+ " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64,
+ eth_dev->data->port_id, dev->pf, dev->vf,
+ OTX2_ETH_DEV_PMD_VERSION, dev->nix_msixoff, dev->hwcap,
+ dev->rx_offload_capa, dev->tx_offload_capa);
+ return 0;
+
+free_mac_addrs:
+ rte_free(eth_dev->data->mac_addrs);
+mbox_detach:
+ otx2_eth_dev_lf_detach(dev->mbox);
+otx2_npa_uninit:
+ otx2_npa_lf_fini();
+otx2_dev_uninit:
+ otx2_dev_fini(pci_dev, dev);
+error:
+ otx2_err("Failed to init nix eth_dev rc=%d", rc);
+ return rc;
}
static int
otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
{
- RTE_SET_USED(eth_dev);
- RTE_SET_USED(mbox_close);
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_pci_device *pci_dev;
+ int rc;
- return -ENODEV;
+ /* Nothing to be done for secondary processes */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = nix_lf_free(dev);
+ if (rc)
+ otx2_err("Failed to free nix lf, rc=%d", rc);
+
+ rc = otx2_npa_lf_fini();
+ if (rc)
+ otx2_err("Failed to cleanup npa lf, rc=%d", rc);
+
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
+ dev->drv_inited = false;
+
+ pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ rc = otx2_eth_dev_lf_detach(dev->mbox);
+ if (rc)
+ otx2_err("Failed to detach resources, rc=%d", rc);
+
+ /* Check if mbox close is needed */
+ if (!mbox_close)
+ return 0;
+
+ if (otx2_npa_lf_active(dev) || otx2_dev_active_vfs(dev)) {
+ /* Will be freed later by PMD */
+ eth_dev->data->dev_private = NULL;
+ return 0;
+ }
+
+ otx2_dev_fini(pci_dev, dev);
+ return 0;
}
static int
nix_remove(struct rte_pci_device *pci_dev)
{
struct rte_eth_dev *eth_dev;
+ struct otx2_idev_cfg *idev;
+ struct otx2_dev *otx2_dev;
int rc;
eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
@@ -45,7 +295,24 @@ nix_remove(struct rte_pci_device *pci_dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Check for common resources */
+ idev = otx2_intra_dev_get_cfg();
+ if (!idev || !idev->npa_lf || idev->npa_lf->pci_dev != pci_dev)
+ return 0;
+
+ otx2_dev = container_of(idev->npa_lf, struct otx2_dev, npalf);
+
+ if (otx2_npa_lf_active(otx2_dev) || otx2_dev_active_vfs(otx2_dev))
+ goto exit;
+
+ /* Safe to cleanup mbox as no more users */
+ otx2_dev_fini(pci_dev, otx2_dev);
+ rte_free(otx2_dev);
return 0;
+
+exit:
+ otx2_info("%s: common resource in use by other devices", pci_dev->name);
+ return -EAGAIN;
}
static int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index fd01a3254..d9f72686a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -8,14 +8,76 @@
#include <stdint.h>
#include <rte_common.h>
+#include <rte_ethdev.h>
#include "otx2_common.h"
#include "otx2_dev.h"
#include "otx2_irq.h"
#include "otx2_mempool.h"
+#define OTX2_ETH_DEV_PMD_VERSION "1.0"
+
+/* Ethdev HWCAP and Fixup flags. Use from MSB bits to avoid conflict with dev */
+
+/* Minimum CQ size should be 4K */
+#define OTX2_FIXUP_F_MIN_4K_Q BIT_ULL(63)
+#define otx2_ethdev_fixup_is_min_4k_q(dev) \
+ ((dev)->hwcap & OTX2_FIXUP_F_MIN_4K_Q)
+/* Limit CQ being full */
+#define OTX2_FIXUP_F_LIMIT_CQ_FULL BIT_ULL(62)
+#define otx2_ethdev_fixup_is_limit_cq_full(dev) \
+ ((dev)->hwcap & OTX2_FIXUP_F_LIMIT_CQ_FULL)
+
+/* Used for struct otx2_eth_dev::flags */
+#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
+
+#define NIX_TX_OFFLOAD_CAPA ( \
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
+ DEV_TX_OFFLOAD_MT_LOCKFREE | \
+ DEV_TX_OFFLOAD_VLAN_INSERT | \
+ DEV_TX_OFFLOAD_QINQ_INSERT | \
+ DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ DEV_TX_OFFLOAD_TCP_CKSUM | \
+ DEV_TX_OFFLOAD_UDP_CKSUM | \
+ DEV_TX_OFFLOAD_SCTP_CKSUM | \
+ DEV_TX_OFFLOAD_MULTI_SEGS | \
+ DEV_TX_OFFLOAD_IPV4_CKSUM)
+
+#define NIX_RX_OFFLOAD_CAPA ( \
+ DEV_RX_OFFLOAD_CHECKSUM | \
+ DEV_RX_OFFLOAD_SCTP_CKSUM | \
+ DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ DEV_RX_OFFLOAD_SCATTER | \
+ DEV_RX_OFFLOAD_JUMBO_FRAME | \
+ DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ DEV_RX_OFFLOAD_VLAN_STRIP | \
+ DEV_RX_OFFLOAD_VLAN_FILTER | \
+ DEV_RX_OFFLOAD_QINQ_STRIP | \
+ DEV_RX_OFFLOAD_TIMESTAMP)
+
struct otx2_eth_dev {
OTX2_DEV; /* Base class */
+ MARKER otx2_eth_dev_data_start;
+ uint16_t sqb_size;
+ uint16_t rx_chan_base;
+ uint16_t tx_chan_base;
+ uint8_t rx_chan_cnt;
+ uint8_t tx_chan_cnt;
+ uint8_t lso_tsov4_idx;
+ uint8_t lso_tsov6_idx;
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+ uint8_t max_mac_entries;
+ uint8_t configured;
+ uint16_t nix_msixoff;
+ uintptr_t base;
+ uintptr_t lmt_addr;
+ uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
+ uint64_t rx_offloads;
+ uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
+ uint64_t tx_offloads;
+ uint64_t rx_offload_capa;
+ uint64_t tx_offload_capa;
} __rte_cache_aligned;
static inline struct otx2_eth_dev *
@@ -24,4 +86,14 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
return eth_dev->data->dev_private;
}
+/* CGX */
+int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
+int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
+int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
+ struct rte_ether_addr *addr);
+
+/* Mac address handling */
+int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
+int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c
new file mode 100644
index 000000000..89b0ca6b0
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_mac.c
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+
+#include "otx2_dev.h"
+#include "otx2_ethdev.h"
+
+int
+otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct cgx_mac_addr_set_or_get *req;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ if (otx2_dev_active_vfs(dev))
+ return -ENOTSUP;
+
+ req = otx2_mbox_alloc_msg_cgx_mac_addr_set(mbox);
+ otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ otx2_err("Failed to set mac address in CGX, rc=%d", rc);
+
+ return 0;
+}
+
+int
+otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev)
+{
+ struct cgx_max_dmac_entries_get_rsp *rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_mac_max_entries_get(mbox);
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ return rsp->max_dmac_filters;
+}
+
+int
+otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_get_mac_addr_rsp *rsp;
+ int rc;
+
+ otx2_mbox_alloc_msg_nix_get_mac_addr(mbox);
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to get mac address, rc=%d", rc);
+ goto done;
+ }
+
+ otx2_mbox_memcpy(addr, rsp->mac_addr, RTE_ETHER_ADDR_LEN);
+
+done:
+ return rc;
+}
--
2.21.0
* [dpdk-dev] [PATCH v2 04/57] net/octeontx2: add devargs parsing functions
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (2 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 03/57] net/octeontx2: add device init and uninit jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 05/57] net/octeontx2: handle device error interrupts jerinj
` (53 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
Cc: Pavan Nikhilesh, Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add the various devargs command line options supported by
this driver.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/octeontx2.rst | 67 ++++++++
drivers/net/octeontx2/Makefile | 5 +-
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 7 +
drivers/net/octeontx2/otx2_ethdev.h | 23 +++
drivers/net/octeontx2/otx2_ethdev_devargs.c | 165 ++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 10 ++
7 files changed, 276 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
create mode 100644 drivers/net/octeontx2/otx2_rx.h
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index f0bd36be3..92a7ebc42 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -30,3 +30,70 @@ The following options may be modified in the ``config`` file.
- ``CONFIG_RTE_LIBRTE_OCTEONTX2_PMD`` (default ``y``)
Toggle compilation of the ``librte_pmd_octeontx2`` driver.
+
+Runtime Config Options
+----------------------
+
+- ``HW offload ptype parsing disable`` (default ``0``)
+
+ Packet type parsing is HW offloaded by default and this feature may be toggled
+ using ``ptype_disable`` ``devargs`` parameter.
+
+- ``Rx&Tx scalar mode enable`` (default ``0``)
+
+ Ethdev supports both scalar and vector mode, it may be selected at runtime
+ using ``scalar_enable`` ``devargs`` parameter.
+
+- ``RSS reta size`` (default ``64``)
+
+ RSS redirection table size may be configured during runtime using ``reta_size``
+ ``devargs`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,reta_size=256
+
+ With the above configuration, reta table of size 256 is populated.
+
+- ``Flow priority levels`` (default ``3``)
+
+ RTE Flow priority levels can be configured during runtime using
+ ``flow_max_priority`` ``devargs`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,flow_max_priority=10
+
+ With the above configuration, priority level was set to 10 (0-9). Max
+ priority level supported is 32.
+
+- ``Reserve Flow entries`` (default ``8``)
+
+ RTE flow entries can be pre allocated and the size of pre allocation can be
+ selected runtime using ``flow_prealloc_size`` ``devargs`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,flow_prealloc_size=4
+
+ With the above configuration, pre alloc size was set to 4. Max pre alloc
+ size supported is 32.
+
+- ``Max SQB buffer count`` (default ``512``)
+
+ Send queue descriptor buffer count may be limited during runtime using
+ ``max_sqb_count`` ``devargs`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,max_sqb_count=64
+
+ With the above configuration, each send queue's descriptor buffer count is
+ limited to a maximum of 64 buffers.
+
+
+.. note::
+
+ The above devarg parameters are configurable per device; the user needs to pass
+ the parameters to all the PCIe devices if the application requires them to be
+ configured on all the ethdev ports.
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 4ff3609d2..d1c8871d8 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -29,9 +29,10 @@ LIBABIVER := 1
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
- otx2_ethdev.c
+ otx2_ethdev.c \
+ otx2_ethdev_devargs.c
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
-LDLIBS += -lrte_ethdev -lrte_bus_pci
+LDLIBS += -lrte_ethdev -lrte_bus_pci -lrte_kvargs
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index b153f166d..b5c6fb978 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
sources = files(
'otx2_mac.c',
'otx2_ethdev.c',
+ 'otx2_ethdev_devargs.c'
)
deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2']
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 08f03b4c3..eeba0c2c6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -137,6 +137,13 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
+ /* Parse devargs string */
+ rc = otx2_ethdev_parse_devargs(eth_dev->device->devargs, dev);
+ if (rc) {
+ otx2_err("Failed to parse devargs rc=%d", rc);
+ goto error;
+ }
+
if (!dev->mbox_active) {
/* Initialize the base otx2_dev object
* only if already present
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index d9f72686a..a83688392 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -9,11 +9,13 @@
#include <rte_common.h>
#include <rte_ethdev.h>
+#include <rte_kvargs.h>
#include "otx2_common.h"
#include "otx2_dev.h"
#include "otx2_irq.h"
#include "otx2_mempool.h"
+#include "otx2_rx.h"
#define OTX2_ETH_DEV_PMD_VERSION "1.0"
@@ -31,6 +33,10 @@
/* Used for struct otx2_eth_dev::flags */
#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
+#define NIX_MAX_SQB 512
+#define NIX_MIN_SQB 32
+#define NIX_RSS_RETA_SIZE 64
+
#define NIX_TX_OFFLOAD_CAPA ( \
DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
DEV_TX_OFFLOAD_MT_LOCKFREE | \
@@ -56,6 +62,15 @@
DEV_RX_OFFLOAD_QINQ_STRIP | \
DEV_RX_OFFLOAD_TIMESTAMP)
+struct otx2_rss_info {
+ uint16_t rss_size;
+};
+
+struct otx2_npc_flow_info {
+ uint16_t flow_prealloc_size;
+ uint16_t flow_max_priority;
+};
+
struct otx2_eth_dev {
OTX2_DEV; /* Base class */
MARKER otx2_eth_dev_data_start;
@@ -72,12 +87,16 @@ struct otx2_eth_dev {
uint16_t nix_msixoff;
uintptr_t base;
uintptr_t lmt_addr;
+ uint16_t scalar_ena;
+ uint16_t max_sqb_count;
uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
uint64_t rx_offloads;
uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
uint64_t tx_offloads;
uint64_t rx_offload_capa;
uint64_t tx_offload_capa;
+ struct otx2_rss_info rss_info;
+ struct otx2_npc_flow_info npc_flow;
} __rte_cache_aligned;
static inline struct otx2_eth_dev *
@@ -96,4 +115,8 @@ int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
+/* Devargs */
+int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
+ struct otx2_eth_dev *dev);
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
new file mode 100644
index 000000000..85e7e312a
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <inttypes.h>
+#include <math.h>
+
+#include "otx2_ethdev.h"
+
+static int
+parse_flow_max_priority(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint16_t val;
+
+ val = atoi(value);
+
+ /* Limit the max priority to 32 */
+ if (val < 1 || val > 32)
+ return -EINVAL;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+static int
+parse_flow_prealloc_size(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint16_t val;
+
+ val = atoi(value);
+
+ /* Limit the prealloc size to 32 */
+ if (val < 1 || val > 32)
+ return -EINVAL;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+static int
+parse_reta_size(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint32_t val;
+
+ val = atoi(value);
+
+ if (val <= ETH_RSS_RETA_SIZE_64)
+ val = ETH_RSS_RETA_SIZE_64;
+ else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+ val = ETH_RSS_RETA_SIZE_128;
+ else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+ val = ETH_RSS_RETA_SIZE_256;
+ else
+ val = NIX_RSS_RETA_SIZE;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+static int
+parse_ptype_flag(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint32_t val;
+
+ val = atoi(value);
+ /* A non-zero value disables NIX_RX_OFFLOAD_PTYPE_F; zero (or an
+  * absent key) keeps HW ptype parsing enabled.
+  */
+ if (val)
+ *(uint16_t *)extra_args = 0;
+ return 0;
+}
+
+static int
+parse_flag(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+
+ *(uint16_t *)extra_args = atoi(value);
+
+ return 0;
+}
+
+static int
+parse_sqb_count(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint32_t val;
+
+ val = atoi(value);
+
+ if (val < NIX_MIN_SQB || val > NIX_MAX_SQB)
+ return -EINVAL;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+#define OTX2_RSS_RETA_SIZE "reta_size"
+#define OTX2_PTYPE_DISABLE "ptype_disable"
+#define OTX2_SCL_ENABLE "scalar_enable"
+#define OTX2_MAX_SQB_COUNT "max_sqb_count"
+#define OTX2_FLOW_PREALLOC_SIZE "flow_prealloc_size"
+#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
+
+int
+otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
+{
+ uint16_t offload_flag = NIX_RX_OFFLOAD_PTYPE_F;
+ uint16_t rss_size = NIX_RSS_RETA_SIZE;
+ uint16_t sqb_count = NIX_MAX_SQB;
+ uint16_t flow_prealloc_size = 8;
+ uint16_t flow_max_priority = 3;
+ uint16_t scalar_enable = 0;
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ goto null_devargs;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ goto exit;
+
+ rte_kvargs_process(kvlist, OTX2_PTYPE_DISABLE,
+ &parse_ptype_flag, &offload_flag);
+ rte_kvargs_process(kvlist, OTX2_RSS_RETA_SIZE,
+ &parse_reta_size, &rss_size);
+ rte_kvargs_process(kvlist, OTX2_SCL_ENABLE,
+ &parse_flag, &scalar_enable);
+ rte_kvargs_process(kvlist, OTX2_MAX_SQB_COUNT,
+ &parse_sqb_count, &sqb_count);
+ rte_kvargs_process(kvlist, OTX2_FLOW_PREALLOC_SIZE,
+ &parse_flow_prealloc_size, &flow_prealloc_size);
+ rte_kvargs_process(kvlist, OTX2_FLOW_MAX_PRIORITY,
+ &parse_flow_max_priority, &flow_max_priority);
+ rte_kvargs_free(kvlist);
+
+null_devargs:
+ dev->rx_offload_flags = offload_flag;
+ dev->scalar_ena = scalar_enable;
+ dev->max_sqb_count = sqb_count;
+ dev->rss_info.rss_size = rss_size;
+ dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
+ dev->npc_flow.flow_max_priority = flow_max_priority;
+ return 0;
+
+exit:
+ return -EINVAL;
+}
+
+RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
+ OTX2_RSS_RETA_SIZE "=<64|128|256>"
+ OTX2_PTYPE_DISABLE "=1"
+ OTX2_SCL_ENABLE "=1"
+ OTX2_MAX_SQB_COUNT "=<32-512>"
+ OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
+ OTX2_FLOW_MAX_PRIORITY "=<1-32>");
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
new file mode 100644
index 000000000..1749c43ff
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_RX_H__
+#define __OTX2_RX_H__
+
+#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+
+#endif /* __OTX2_RX_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 05/57] net/octeontx2: handle device error interrupts
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (3 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 04/57] net/octeontx2: add devargs parsing functions jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 06/57] net/octeontx2: add info get operation jerinj
` (52 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Handle device-specific error and RAS interrupts.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
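For reference, the handlers below follow the usual mask/hook/unmask ordering
for W1C/W1S enable registers. A minimal sketch of that ordering; lf_irq_setup
and its parameters are illustrative, while the otx2_* accessors are the ones
this patch uses from the common code:

/* Sketch: ordering for hooking a NIX LF interrupt vector. Writing 1s to
 * the W1C register clears enable bits; writing 1s to W1S sets them.
 */
static int
lf_irq_setup(struct rte_intr_handle *handle, struct otx2_eth_dev *dev,
             rte_intr_callback_fn cb, void *data, int vec,
             uint32_t ena_w1c_off, uint32_t ena_w1s_off)
{
    int rc;

    /* 1. Mask the interrupt so nothing fires while the handler is hooked */
    otx2_write64(~0ull, dev->base + ena_w1c_off);

    /* 2. Attach the handler to the MSI-X vector */
    rc = otx2_register_irq(handle, cb, data, vec);

    /* 3. Unmask only after the handler is in place */
    if (rc == 0)
        otx2_write64(~0ull, dev->base + ena_w1s_off);

    return rc;
}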
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 12 +-
drivers/net/octeontx2/otx2_ethdev.h | 4 +
drivers/net/octeontx2/otx2_ethdev_irq.c | 140 ++++++++++++++++++++++++
5 files changed, 156 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index d1c8871d8..54f8f268d 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -30,6 +30,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
otx2_ethdev.c \
+ otx2_ethdev_irq.c \
otx2_ethdev_devargs.c
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index b5c6fb978..148f7d339 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
sources = files(
'otx2_mac.c',
'otx2_ethdev.c',
+ 'otx2_ethdev_irq.c',
'otx2_ethdev_devargs.c'
)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index eeba0c2c6..67a7ebb36 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -175,12 +175,17 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
if (rc)
goto otx2_npa_uninit;
+ /* Register LF irq handlers */
+ rc = otx2_nix_register_irqs(eth_dev);
+ if (rc)
+ goto mbox_detach;
+
/* Get maximum number of supported MAC entries */
max_entries = otx2_cgx_mac_max_entries_get(dev);
if (max_entries < 0) {
otx2_err("Failed to get max entries for mac addr");
rc = -ENOTSUP;
- goto mbox_detach;
+ goto unregister_irq;
}
/* For VFs, returned max_entries will be 0. But to keep default MAC
@@ -194,7 +199,7 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
if (eth_dev->data->mac_addrs == NULL) {
otx2_err("Failed to allocate memory for mac addr");
rc = -ENOMEM;
- goto mbox_detach;
+ goto unregister_irq;
}
dev->max_mac_entries = max_entries;
@@ -226,6 +231,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
free_mac_addrs:
rte_free(eth_dev->data->mac_addrs);
+unregister_irq:
+ otx2_nix_unregister_irqs(eth_dev);
mbox_detach:
otx2_eth_dev_lf_detach(dev->mbox);
otx2_npa_uninit:
@@ -261,6 +268,7 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
dev->drv_inited = false;
pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ otx2_nix_unregister_irqs(eth_dev);
rc = otx2_eth_dev_lf_detach(dev->mbox);
if (rc)
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index a83688392..f7d8838df 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -105,6 +105,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
return eth_dev->data->dev_private;
}
+/* IRQ */
+int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
+void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
+
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
new file mode 100644
index 000000000..33fed93c4
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <inttypes.h>
+
+#include <rte_bus_pci.h>
+
+#include "otx2_ethdev.h"
+
+static void
+nix_lf_err_irq(void *param)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t intr;
+
+ intr = otx2_read64(dev->base + NIX_LF_ERR_INT);
+ if (intr == 0)
+ return;
+
+ otx2_err("Err_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+
+ /* Clear interrupt */
+ otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
+}
+
+static int
+nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc, vec;
+
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C);
+ /* Set used interrupt vectors */
+ rc = otx2_register_irq(handle, nix_lf_err_irq, eth_dev, vec);
+ /* Enable all dev interrupts except for RQ_DISABLED */
+ otx2_write64(~BIT_ULL(11), dev->base + NIX_LF_ERR_INT_ENA_W1S);
+
+ return rc;
+}
+
+static void
+nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec;
+
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C);
+ otx2_unregister_irq(handle, nix_lf_err_irq, eth_dev, vec);
+}
+
+static void
+nix_lf_ras_irq(void *param)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t intr;
+
+ intr = otx2_read64(dev->base + NIX_LF_RAS);
+ if (intr == 0)
+ return;
+
+ otx2_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+
+ /* Clear interrupt */
+ otx2_write64(intr, dev->base + NIX_LF_RAS);
+}
+
+static int
+nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc, vec;
+
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C);
+ /* Set used interrupt vectors */
+ rc = otx2_register_irq(handle, nix_lf_ras_irq, eth_dev, vec);
+ /* Enable dev interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1S);
+
+ return rc;
+}
+
+static void
+nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec;
+
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C);
+ otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec);
+}
+
+int
+otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc;
+
+ if (dev->nix_msixoff == MSIX_VECTOR_INVALID) {
+ otx2_err("Invalid NIXLF MSIX vector offset vector: 0x%x",
+ dev->nix_msixoff);
+ return -EINVAL;
+ }
+
+ /* Register lf err interrupt */
+ rc = nix_lf_register_err_irq(eth_dev);
+ /* Register RAS interrupt */
+ rc |= nix_lf_register_ras_irq(eth_dev);
+
+ return rc;
+}
+
+void
+otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev)
+{
+ nix_lf_unregister_err_irq(eth_dev);
+ nix_lf_unregister_ras_irq(eth_dev);
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 06/57] net/octeontx2: add info get operation
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (4 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 05/57] net/octeontx2: handle device error interrupts jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 07/57] net/octeontx2: add device configure operation jerinj
` (51 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru, Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Add device information get operation.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
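For reference, a minimal application-side sketch of consuming this op through
the ethdev API; the port id and the fields printed are illustrative:

#include <stdio.h>

#include <rte_ethdev.h>

static void
print_port_limits(uint16_t port_id)
{
    struct rte_eth_dev_info info;

    /* Dispatches to the PMD's dev_infos_get (otx2_nix_info_get here) */
    rte_eth_dev_info_get(port_id, &info);

    printf("port %u: max_rxq=%u reta_size=%u max_rx_pktlen=%u\n",
           port_id, info.max_rx_queues, info.reta_size,
           info.max_rx_pktlen);
}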
doc/guides/nics/features/octeontx2.ini | 4 ++
doc/guides/nics/features/octeontx2_vec.ini | 4 ++
doc/guides/nics/features/octeontx2_vf.ini | 3 +
doc/guides/nics/octeontx2.rst | 2 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 7 +++
drivers/net/octeontx2/otx2_ethdev.h | 45 +++++++++++++++
drivers/net/octeontx2/otx2_ethdev_ops.c | 64 ++++++++++++++++++++++
9 files changed, 131 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 84d5ad779..356b88de7 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -4,6 +4,10 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
+Lock-free Tx queue = Y
+SR-IOV = Y
+Multiprocess aware = Y
Linux VFIO = Y
ARMv8 = Y
Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 5fd7e4c5c..5f4eaa3f4 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -4,6 +4,10 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
+Lock-free Tx queue = Y
+SR-IOV = Y
+Multiprocess aware = Y
Linux VFIO = Y
ARMv8 = Y
Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 3128cc120..024b032d4 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -4,6 +4,9 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
+Lock-free Tx queue = Y
+Multiprocess aware = Y
Linux VFIO = Y
ARMv8 = Y
Usage doc = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 92a7ebc42..e3f4c2c43 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -16,6 +16,8 @@ Features
Features of the OCTEON TX2 Ethdev PMD are:
+- SR-IOV VF
+- Lock-free Tx queue
Prerequisites
-------------
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 54f8f268d..5083637e4 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -31,6 +31,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
otx2_ethdev.c \
otx2_ethdev_irq.c \
+ otx2_ethdev_ops.c \
otx2_ethdev_devargs.c
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 148f7d339..aa8417e3f 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -6,6 +6,7 @@ sources = files(
'otx2_mac.c',
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
+ 'otx2_ethdev_ops.c',
'otx2_ethdev_devargs.c'
)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 67a7ebb36..6e3c70559 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -64,6 +64,11 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+/* Initialize and register driver with DPDK Application */
+static const struct eth_dev_ops otx2_eth_dev_ops = {
+ .dev_infos_get = otx2_nix_info_get,
+};
+
static inline int
nix_lf_attach(struct otx2_eth_dev *dev)
{
@@ -120,6 +125,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
struct rte_pci_device *pci_dev;
int rc, max_entries;
+ eth_dev->dev_ops = &otx2_eth_dev_ops;
+
/* For secondary processes, the primary has done all the work */
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
/* Setup callbacks for secondary process */
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index f7d8838df..666ceba91 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -33,9 +33,50 @@
/* Used for struct otx2_eth_dev::flags */
#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
+/* VLAN tag inserted by NIX_TX_VTAG_ACTION.
+ * On Tx, space is always reserved for this in the FRS.
+ */
+#define NIX_MAX_VTAG_INS 2
+#define NIX_MAX_VTAG_ACT_SIZE (4 * NIX_MAX_VTAG_INS)
+
+/* ETH_HLEN+ETH_FCS+2*VLAN_HLEN */
+#define NIX_L2_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 8)
+
+/* HW config of frame size doesn't include FCS */
+#define NIX_MAX_HW_FRS 9212
+#define NIX_MIN_HW_FRS 60
+
+/* Since HW FRS includes NPC VTAG insertion space, the FRS available to the user is reduced */
+#define NIX_MAX_FRS \
+ (NIX_MAX_HW_FRS + RTE_ETHER_CRC_LEN - NIX_MAX_VTAG_ACT_SIZE)
+
+#define NIX_MIN_FRS \
+ (NIX_MIN_HW_FRS + RTE_ETHER_CRC_LEN)
+
+#define NIX_MAX_MTU \
+ (NIX_MAX_FRS - NIX_L2_OVERHEAD)
+
#define NIX_MAX_SQB 512
#define NIX_MIN_SQB 32
+#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
#define NIX_RSS_RETA_SIZE 64
+#define NIX_RX_MIN_DESC 16
+#define NIX_RX_MIN_DESC_ALIGN 16
+#define NIX_RX_NB_SEG_MAX 6
+
+/* If PTP is enabled, an additional SEND MEM DESC is required, which
+ * takes 2 words; hence a max of 7 iova addresses is possible
+ */
+#if defined(RTE_LIBRTE_IEEE1588)
+#define NIX_TX_NB_SEG_MAX 7
+#else
+#define NIX_TX_NB_SEG_MAX 9
+#endif
+
+#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
+ ETH_RSS_TCP | ETH_RSS_SCTP | \
+ ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD)
#define NIX_TX_OFFLOAD_CAPA ( \
DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
@@ -105,6 +146,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
return eth_dev->data->dev_private;
}
+/* Ops */
+void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_info *dev_info);
+
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
new file mode 100644
index 000000000..df7e909d2
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+void
+otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ devinfo->min_rx_bufsize = NIX_MIN_FRS;
+ devinfo->max_rx_pktlen = NIX_MAX_FRS;
+ devinfo->max_rx_queues = RTE_MAX_QUEUES_PER_PORT;
+ devinfo->max_tx_queues = RTE_MAX_QUEUES_PER_PORT;
+ devinfo->max_mac_addrs = dev->max_mac_entries;
+ devinfo->max_vfs = pci_dev->max_vfs;
+ devinfo->max_mtu = devinfo->max_rx_pktlen - NIX_L2_OVERHEAD;
+ devinfo->min_mtu = devinfo->min_rx_bufsize - NIX_L2_OVERHEAD;
+
+ devinfo->rx_offload_capa = dev->rx_offload_capa;
+ devinfo->tx_offload_capa = dev->tx_offload_capa;
+ devinfo->rx_queue_offload_capa = 0;
+ devinfo->tx_queue_offload_capa = 0;
+
+ devinfo->reta_size = dev->rss_info.rss_size;
+ devinfo->hash_key_size = NIX_HASH_KEY_SIZE;
+ devinfo->flow_type_rss_offloads = NIX_RSS_OFFLOAD;
+
+ devinfo->default_rxconf = (struct rte_eth_rxconf) {
+ .rx_drop_en = 0,
+ .offloads = 0,
+ };
+
+ devinfo->default_txconf = (struct rte_eth_txconf) {
+ .offloads = 0,
+ };
+
+ devinfo->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = UINT16_MAX,
+ .nb_min = NIX_RX_MIN_DESC,
+ .nb_align = NIX_RX_MIN_DESC_ALIGN,
+ .nb_seg_max = NIX_RX_NB_SEG_MAX,
+ .nb_mtu_seg_max = NIX_RX_NB_SEG_MAX,
+ };
+ devinfo->rx_desc_lim.nb_max =
+ RTE_ALIGN_MUL_FLOOR(devinfo->rx_desc_lim.nb_max,
+ NIX_RX_MIN_DESC_ALIGN);
+
+ devinfo->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = UINT16_MAX,
+ .nb_min = 1,
+ .nb_align = 1,
+ .nb_seg_max = NIX_TX_NB_SEG_MAX,
+ .nb_mtu_seg_max = NIX_TX_NB_SEG_MAX,
+ };
+
+ /* Auto negotiation disabled */
+ devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+ devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
+ ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
+ ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 07/57] net/octeontx2: add device configure operation
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (5 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 06/57] net/octeontx2: add info get operation jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 08/57] net/octeontx2: handle queue specific error interrupts jerinj
` (50 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add the device configure operation. This issues the lf_alloc
mailbox request to allocate a NIX LF; upon return, the AF
provides the attributes of the allocated LF.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
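For reference, a minimal application-side sketch of exercising this op; the
queue counts and the RSS choice are illustrative (ETH_MQ_RX_NONE and
ETH_MQ_RX_RSS are the only Rx mq modes the checks below accept):

#include <string.h>

#include <rte_ethdev.h>

static int
configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
    struct rte_eth_conf conf;

    memset(&conf, 0, sizeof(conf));
    conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
    conf.txmode.mq_mode = ETH_MQ_TX_NONE;

    /* Ends up in otx2_nix_configure(), which allocates the NIX LF */
    return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}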
drivers/net/octeontx2/otx2_ethdev.c | 151 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 11 ++
2 files changed, 162 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 6e3c70559..65d72a47f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -39,6 +39,52 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
return NIX_TX_OFFLOAD_CAPA;
}
+static int
+nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_lf_alloc_req *req;
+ struct nix_lf_alloc_rsp *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_nix_lf_alloc(mbox);
+ req->rq_cnt = nb_rxq;
+ req->sq_cnt = nb_txq;
+ req->cq_cnt = nb_rxq;
+ /* XQE_SZ should be in Sync with NIX_CQ_ENTRY_SZ */
+ RTE_BUILD_BUG_ON(NIX_CQ_ENTRY_SZ != 128);
+ req->xqe_sz = NIX_XQESZ_W16;
+ req->rss_sz = dev->rss_info.rss_size;
+ req->rss_grps = NIX_RSS_GRPS;
+ req->npa_func = otx2_npa_pf_func_get();
+ req->sso_func = otx2_sso_pf_func_get();
+ req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
+ if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
+ req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
+ }
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ dev->sqb_size = rsp->sqb_size;
+ dev->tx_chan_base = rsp->tx_chan_base;
+ dev->rx_chan_base = rsp->rx_chan_base;
+ dev->rx_chan_cnt = rsp->rx_chan_cnt;
+ dev->tx_chan_cnt = rsp->tx_chan_cnt;
+ dev->lso_tsov4_idx = rsp->lso_tsov4_idx;
+ dev->lso_tsov6_idx = rsp->lso_tsov6_idx;
+ dev->lf_tx_stats = rsp->lf_tx_stats;
+ dev->lf_rx_stats = rsp->lf_rx_stats;
+ dev->cints = rsp->cints;
+ dev->qints = rsp->qints;
+ dev->npc_flow.channel = dev->rx_chan_base;
+
+ return 0;
+}
+
static int
nix_lf_free(struct otx2_eth_dev *dev)
{
@@ -64,9 +110,114 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+static int
+otx2_nix_configure(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rte_eth_conf *conf = &data->dev_conf;
+ struct rte_eth_rxmode *rxmode = &conf->rxmode;
+ struct rte_eth_txmode *txmode = &conf->txmode;
+ char ea_fmt[RTE_ETHER_ADDR_FMT_SIZE];
+ struct rte_ether_addr *ea;
+ uint8_t nb_rxq, nb_txq;
+ int rc;
+
+ rc = -EINVAL;
+
+ /* Sanity checks */
+ if (rte_eal_has_hugepages() == 0) {
+ otx2_err("Huge page is not configured");
+ goto fail;
+ }
+
+ if (rte_eal_iova_mode() != RTE_IOVA_VA) {
+ otx2_err("iova mode should be va");
+ goto fail;
+ }
+
+ if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ otx2_err("Setting link speed/duplex not supported");
+ goto fail;
+ }
+
+ if (conf->dcb_capability_en == 1) {
+ otx2_err("dcb enable is not supported");
+ goto fail;
+ }
+
+ if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
+ otx2_err("Flow director is not supported");
+ goto fail;
+ }
+
+ if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
+ goto fail;
+ }
+
+ if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
+ goto fail;
+ }
+
+ /* Free the resources allocated from the previous configure */
+ if (dev->configured == 1)
+ nix_lf_free(dev);
+
+ if (otx2_dev_is_A0(dev) &&
+ (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ otx2_err("Outer IP and SCTP checksum unsupported");
+ rc = -EINVAL;
+ goto fail;
+ }
+
+ dev->rx_offloads = rxmode->offloads;
+ dev->tx_offloads = txmode->offloads;
+ dev->rss_info.rss_grps = NIX_RSS_GRPS;
+
+ nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
+ nb_txq = RTE_MAX(data->nb_tx_queues, 1);
+
+ /* Alloc a nix lf */
+ rc = nix_lf_alloc(dev, nb_rxq, nb_txq);
+ if (rc) {
+ otx2_err("Failed to init nix_lf rc=%d", rc);
+ goto fail;
+ }
+
+ /* Update the mac address */
+ ea = eth_dev->data->mac_addrs;
+ memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
+ if (rte_is_zero_ether_addr(ea))
+ rte_eth_random_addr((uint8_t *)ea);
+
+ rte_ether_format_addr(ea_fmt, RTE_ETHER_ADDR_FMT_SIZE, ea);
+
+ otx2_nix_dbg("Configured port%d mac=%s nb_rxq=%d nb_txq=%d"
+ " rx_offloads=0x%" PRIx64 " tx_offloads=0x%" PRIx64 ""
+ " rx_flags=0x%x tx_flags=0x%x",
+ eth_dev->data->port_id, ea_fmt, nb_rxq,
+ nb_txq, dev->rx_offloads, dev->tx_offloads,
+ dev->rx_offload_flags, dev->tx_offload_flags);
+
+ /* All good */
+ dev->configured = 1;
+ dev->configured_nb_rx_qs = data->nb_rx_queues;
+ dev->configured_nb_tx_qs = data->nb_tx_queues;
+ return 0;
+
+fail:
+ return rc;
+}
+
/* Initialize and register driver with DPDK Application */
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
+ .dev_configure = otx2_nix_configure,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 666ceba91..c1528e2ac 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -59,11 +59,14 @@
#define NIX_MAX_SQB 512
#define NIX_MIN_SQB 32
+/* Group 0 will be used for RSS; groups 1-7 will be used for the rte_flow RSS action */
+#define NIX_RSS_GRPS 8
#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
#define NIX_RSS_RETA_SIZE 64
#define NIX_RX_MIN_DESC 16
#define NIX_RX_MIN_DESC_ALIGN 16
#define NIX_RX_NB_SEG_MAX 6
+#define NIX_CQ_ENTRY_SZ 128
/* If PTP is enabled additional SEND MEM DESC is required which
* takes 2 words, hence max 7 iova address are possible
@@ -105,9 +108,11 @@
struct otx2_rss_info {
uint16_t rss_size;
+ uint8_t rss_grps;
};
struct otx2_npc_flow_info {
+ uint16_t channel; /* Rx channel */
uint16_t flow_prealloc_size;
uint16_t flow_max_priority;
};
@@ -124,7 +129,13 @@ struct otx2_eth_dev {
uint8_t lso_tsov6_idx;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t max_mac_entries;
+ uint8_t lf_tx_stats;
+ uint8_t lf_rx_stats;
+ uint16_t cints;
+ uint16_t qints;
uint8_t configured;
+ uint8_t configured_nb_rx_qs;
+ uint8_t configured_nb_tx_qs;
uint16_t nix_msixoff;
uintptr_t base;
uintptr_t lmt_addr;
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 08/57] net/octeontx2: handle queue specific error interrupts
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (6 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 07/57] net/octeontx2: add device configure operation jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 09/57] net/octeontx2: add context debug utils jerinj
` (49 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
From: Jerin Jacob <jerinj@marvell.com>
Handle queue-specific error interrupts.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
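For reference, queues share QINTs whenever the queue count exceeds what the
AF allotted; the handler maps a queue back to its QINT with a modulo, and
each QINT gets its own MSI-X vector. A small sketch of that mapping (the
helper names are illustrative):

/* Sketch: with dev->qints == 8, Rx queues 8 and 9 fall back onto
 * QINT 0 and QINT 1, so one vector can serve several queues.
 */
static int
queue_to_qint(int q, int nb_qints)
{
    return q % nb_qints;
}

static int
qint_to_msix_vec(struct otx2_eth_dev *dev, int qintx)
{
    /* Queue interrupt vectors start at NIX_LF_INT_VEC_QINT_START */
    return dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + qintx;
}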
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 16 +-
drivers/net/octeontx2/otx2_ethdev.h | 9 ++
drivers/net/octeontx2/otx2_ethdev_irq.c | 191 ++++++++++++++++++++++++
4 files changed, 216 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index e3f4c2c43..50e825968 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- SR-IOV VF
- Lock-free Tx queue
+- Debug utilities - error interrupt support
Prerequisites
-------------
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 65d72a47f..045855c2e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -163,8 +163,10 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
}
/* Free the resources allocated from the previous configure */
- if (dev->configured == 1)
+ if (dev->configured == 1) {
+ oxt2_nix_unregister_queue_irqs(eth_dev);
nix_lf_free(dev);
+ }
if (otx2_dev_is_A0(dev) &&
(txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
@@ -189,6 +191,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Register queue IRQs */
+ rc = oxt2_nix_register_queue_irqs(eth_dev);
+ if (rc) {
+ otx2_err("Failed to register queue interrupts rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/* Update the mac address */
ea = eth_dev->data->mac_addrs;
memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
@@ -210,6 +219,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
dev->configured_nb_tx_qs = data->nb_tx_queues;
return 0;
+free_nix_lf:
+ nix_lf_free(dev); /* keep the original error code in rc */
fail:
return rc;
}
@@ -413,6 +424,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Unregister queue irqs */
+ oxt2_nix_unregister_queue_irqs(eth_dev);
+
rc = nix_lf_free(dev);
if (rc)
otx2_err("Failed to free nix lf, rc=%d", rc);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index c1528e2ac..d9cdd33b5 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -106,6 +106,11 @@
DEV_RX_OFFLOAD_QINQ_STRIP | \
DEV_RX_OFFLOAD_TIMESTAMP)
+struct otx2_qint {
+ struct rte_eth_dev *eth_dev;
+ uint8_t qintx;
+};
+
struct otx2_rss_info {
uint16_t rss_size;
uint8_t rss_grps;
@@ -134,6 +139,7 @@ struct otx2_eth_dev {
uint16_t cints;
uint16_t qints;
uint8_t configured;
+ uint8_t configured_qints;
uint8_t configured_nb_rx_qs;
uint8_t configured_nb_tx_qs;
uint16_t nix_msixoff;
@@ -147,6 +153,7 @@ struct otx2_eth_dev {
uint64_t tx_offloads;
uint64_t rx_offload_capa;
uint64_t tx_offload_capa;
+ struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
struct otx2_rss_info rss_info;
struct otx2_npc_flow_info npc_flow;
} __rte_cache_aligned;
@@ -163,7 +170,9 @@ void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
+int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
+void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index 33fed93c4..476c7ea78 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -112,6 +112,197 @@ nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec);
}
+static inline uint8_t
+nix_lf_q_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t q,
+ uint32_t off, uint64_t mask)
+{
+ uint64_t reg, wdata;
+ uint8_t qint;
+
+ wdata = (uint64_t)q << 44;
+ reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(dev->base + off));
+
+ if (reg & BIT_ULL(42) /* OP_ERR */) {
+ otx2_err("Failed execute irq get off=0x%x", off);
+ return 0;
+ }
+
+ qint = reg & 0xff;
+ wdata &= mask;
+ otx2_write64(wdata, dev->base + off);
+
+ return qint;
+}
+
+static inline uint8_t
+nix_lf_rq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t rq)
+{
+ return nix_lf_q_irq_get_and_clear(dev, rq, NIX_LF_RQ_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+nix_lf_cq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t cq)
+{
+ return nix_lf_q_irq_get_and_clear(dev, cq, NIX_LF_CQ_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+nix_lf_sq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t sq)
+{
+ return nix_lf_q_irq_get_and_clear(dev, sq, NIX_LF_SQ_OP_INT, ~0x1ff00);
+}
+
+static inline void
+nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off)
+{
+ uint64_t reg;
+
+ reg = otx2_read64(dev->base + off);
+ if (reg & BIT_ULL(44))
+ otx2_err("SQ=%d err_code=0x%x",
+ (int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff));
+}
+
+static void
+nix_lf_q_irq(void *param)
+{
+ struct otx2_qint *qint = (struct otx2_qint *)param;
+ struct rte_eth_dev *eth_dev = qint->eth_dev;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint8_t irq, qintx = qint->qintx;
+ int q, cq, rq, sq;
+ uint64_t intr;
+
+ intr = otx2_read64(dev->base + NIX_LF_QINTX_INT(qintx));
+ if (intr == 0)
+ return;
+
+ otx2_err("Queue_intr=0x%" PRIx64 " qintx=%d pf=%d, vf=%d",
+ intr, qintx, dev->pf, dev->vf);
+
+ /* Handle RQ interrupts */
+ for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
+ rq = q % dev->qints;
+ irq = nix_lf_rq_irq_get_and_clear(dev, rq);
+
+ if (irq & BIT_ULL(NIX_RQINT_DROP))
+ otx2_err("RQ=%d NIX_RQINT_DROP", rq);
+
+ if (irq & BIT_ULL(NIX_RQINT_RED))
+ otx2_err("RQ=%d NIX_RQINT_RED", rq);
+ }
+
+ /* Handle CQ interrupts */
+ for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
+ cq = q % dev->qints;
+ irq = nix_lf_cq_irq_get_and_clear(dev, cq);
+
+ if (irq & BIT_ULL(NIX_CQERRINT_DOOR_ERR))
+ otx2_err("CQ=%d NIX_CQERRINT_DOOR_ERR", cq);
+
+ if (irq & BIT_ULL(NIX_CQERRINT_WR_FULL))
+ otx2_err("CQ=%d NIX_CQERRINT_WR_FULL", cq);
+
+ if (irq & BIT_ULL(NIX_CQERRINT_CQE_FAULT))
+ otx2_err("CQ=%d NIX_CQERRINT_CQE_FAULT", cq);
+ }
+
+ /* Handle SQ interrupts */
+ for (q = 0; q < eth_dev->data->nb_tx_queues; q++) {
+ sq = q % dev->qints;
+ irq = nix_lf_sq_irq_get_and_clear(dev, sq);
+
+ if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) {
+ otx2_err("SQ=%d NIX_SQINT_LMT_ERR", sq);
+ nix_lf_sq_debug_reg(dev, NIX_LF_SQ_OP_ERR_DBG);
+ }
+ if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) {
+ otx2_err("SQ=%d NIX_SQINT_MNQ_ERR", sq);
+ nix_lf_sq_debug_reg(dev, NIX_LF_MNQ_ERR_DBG);
+ }
+ if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) {
+ otx2_err("SQ=%d NIX_SQINT_SEND_ERR", sq);
+ nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
+ }
+ if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) {
+ otx2_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq);
+ nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
+ }
+ }
+
+ /* Clear interrupt */
+ otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
+}
+
+int
+oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec, q, sqs, rqs, qs, rc = 0;
+
+ /* Figure out max qintx required */
+ rqs = RTE_MIN(dev->qints, eth_dev->data->nb_rx_queues);
+ sqs = RTE_MIN(dev->qints, eth_dev->data->nb_tx_queues);
+ qs = RTE_MAX(rqs, sqs);
+
+ dev->configured_qints = qs;
+
+ for (q = 0; q < qs; q++) {
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
+
+ /* Clear QINT CNT */
+ otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
+
+ /* Clear interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
+
+ dev->qints_mem[q].eth_dev = eth_dev;
+ dev->qints_mem[q].qintx = q;
+
+ /* Sync qints_mem update */
+ rte_smp_wmb();
+
+ /* Register queue irq vector */
+ rc = otx2_register_irq(handle, nix_lf_q_irq,
+ &dev->qints_mem[q], vec);
+ if (rc)
+ break;
+
+ otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
+ otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
+ /* Enable QINT interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1S(q));
+ }
+
+ return rc;
+}
+
+void
+oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec, q;
+
+ for (q = 0; q < dev->configured_qints; q++) {
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
+
+ /* Clear QINT CNT */
+ otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
+ otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
+
+ /* Clear interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
+
+ /* Unregister queue irq vector */
+ otx2_unregister_irq(handle, nix_lf_q_irq,
+ &dev->qints_mem[q], vec);
+ }
+}
+
int
otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 09/57] net/octeontx2: add context debug utils
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (7 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 08/57] net/octeontx2: handle queue specific error interrupts jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 10/57] net/octeontx2: add register dump support jerinj
` (48 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
Cc: Vivek Sharma
From: Jerin Jacob <jerinj@marvell.com>
Add RQ, SQ and CQ context and CQE structure dump utilities.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
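For reference, every context dumped below is read back from the AF over the
NIX admin-queue mailbox. A minimal sketch of one such read, mirroring the
pattern in otx2_nix_queues_ctx_dump(); the helper name is illustrative:

static int
read_cq_context(struct otx2_eth_dev *dev, uint16_t qidx,
                struct nix_cq_ctx_s **ctx)
{
    struct otx2_mbox *mbox = dev->mbox;
    struct nix_aq_enq_rsp *rsp;
    struct nix_aq_enq_req *aq;
    int rc;

    /* Build an AQ READ instruction for the given CQ */
    aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
    aq->qidx = qidx;
    aq->ctype = NIX_AQ_CTYPE_CQ;
    aq->op = NIX_AQ_INSTOP_READ;

    /* Send to the AF and wait for the response carrying the context */
    rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
    if (rc)
        return rc;

    *ctx = &rsp->cq;
    return 0;
}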
doc/guides/nics/octeontx2.rst | 2 +-
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 4 +
drivers/net/octeontx2/otx2_ethdev_debug.c | 272 ++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev_irq.c | 6 +
6 files changed, 285 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 50e825968..75d5746e8 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -18,7 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- SR-IOV VF
- Lock-free Tx queue
-- Debug utilities - error interrupt support
+- Debug utilities - Context dump and error interrupt support
Prerequisites
-------------
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 5083637e4..c6e24a535 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -32,6 +32,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_ethdev.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
+ otx2_ethdev_debug.c \
otx2_ethdev_devargs.c
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index aa8417e3f..a06e1192c 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -7,6 +7,7 @@ sources = files(
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
+ 'otx2_ethdev_debug.c',
'otx2_ethdev_devargs.c'
)
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index d9cdd33b5..7c0bef28e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -174,6 +174,10 @@ int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
+/* Debug */
+int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
+void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
+
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
new file mode 100644
index 000000000..39cda7637
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_debug.c
@@ -0,0 +1,272 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
+
+static inline void
+nix_lf_sq_dump(struct nix_sq_ctx_s *ctx)
+{
+ nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d",
+ ctx->sqe_way_mask, ctx->cq);
+ nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x",
+ ctx->sdp_mcast, ctx->substream);
+ nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n",
+ ctx->qint_idx, ctx->ena);
+
+ nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d",
+ ctx->sqb_count, ctx->default_chan);
+ nix_dump("W1: smq_rr_quantum \t\t%d\nW1: sso_ena \t\t\t%d",
+ ctx->smq_rr_quantum, ctx->sso_ena);
+ nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n",
+ ctx->xoff, ctx->cq_ena, ctx->smq);
+
+ nix_dump("W2: sqe_stype \t\t\t%d\nW2: sq_int_ena \t\t\t%d",
+ ctx->sqe_stype, ctx->sq_int_ena);
+ nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d",
+ ctx->sq_int, ctx->sqb_aura);
+ nix_dump("W2: smq_rr_count \t\t%d\n", ctx->smq_rr_count);
+
+ nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d",
+ ctx->smq_next_sq_vld, ctx->smq_pend);
+ nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d",
+ ctx->smenq_next_sqb_vld, ctx->head_offset);
+ nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d",
+ ctx->smenq_offset, ctx->tail_offset);
+ nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d",
+ ctx->smq_lso_segnum, ctx->smq_next_sq);
+ nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d",
+ ctx->mnq_dis, ctx->lmt_dis);
+ nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n",
+ ctx->cq_limit, ctx->max_sqe_size);
+
+ nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb);
+ nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb);
+ nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb);
+ nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb);
+ nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb);
+
+ nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d",
+ ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena);
+ nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d",
+ ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps);
+ nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d",
+ ctx->vfi_lso_sb, ctx->vfi_lso_sizem1);
+ nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total);
+
+ nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->scm_lso_rem);
+ nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
+ nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
+ nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->drop_octs);
+ nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->drop_pkts);
+}
+
+static inline void
+nix_lf_rq_dump(struct nix_rq_ctx_s *ctx)
+{
+ nix_dump("W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x",
+ ctx->wqe_aura, ctx->substream);
+ nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd \t\t\t%d",
+ ctx->cq, ctx->ena_wqwd);
+ nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d",
+ ctx->ipsech_ena, ctx->sso_ena);
+ nix_dump("W0: ena \t\t\t%d\n", ctx->ena);
+
+ nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d",
+ ctx->lpb_drop_ena, ctx->spb_drop_ena);
+ nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d",
+ ctx->xqe_drop_ena, ctx->wqe_caching);
+ nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d",
+ ctx->pb_caching, ctx->sso_tt);
+ nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d",
+ ctx->sso_grp, ctx->lpb_aura);
+ nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura);
+
+ nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d",
+ ctx->xqe_hdr_split, ctx->xqe_imm_copy);
+ nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d",
+ ctx->xqe_imm_size, ctx->later_skip);
+ nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d",
+ ctx->first_skip, ctx->lpb_sizem1);
+ nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d",
+ ctx->spb_ena, ctx->wqe_skip);
+ nix_dump("W2: spb_sizem1 \t\t\t%d\n", ctx->spb_sizem1);
+
+ nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d",
+ ctx->spb_pool_pass, ctx->spb_pool_drop);
+ nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d",
+ ctx->spb_aura_pass, ctx->spb_aura_drop);
+ nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d",
+ ctx->wqe_pool_pass, ctx->wqe_pool_drop);
+ nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n",
+ ctx->xqe_pass, ctx->xqe_drop);
+
+ nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d",
+ ctx->qint_idx, ctx->rq_int_ena);
+ nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d",
+ ctx->rq_int, ctx->lpb_pool_pass);
+ nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d",
+ ctx->lpb_pool_drop, ctx->lpb_aura_pass);
+ nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop);
+
+ nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d",
+ ctx->flow_tagw, ctx->bad_utag);
+ nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n",
+ ctx->good_utag, ctx->ltag);
+
+ nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
+ nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
+ nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs);
+ nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_pkts);
+ nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts);
+}
+
+static inline void
+nix_lf_cq_dump(struct nix_cq_ctx_s *ctx)
+{
+ nix_dump("W0: base \t\t\t0x%" PRIx64 "\n", ctx->base);
+
+ nix_dump("W1: wrptr \t\t\t%" PRIx64 "", (uint64_t)ctx->wrptr);
+ nix_dump("W1: avg_con \t\t\t%d\nW1: cint_idx \t\t\t%d",
+ ctx->avg_con, ctx->cint_idx);
+ nix_dump("W1: cq_err \t\t\t%d\nW1: qint_idx \t\t\t%d",
+ ctx->cq_err, ctx->qint_idx);
+ nix_dump("W1: bpid \t\t\t%d\nW1: bp_ena \t\t\t%d\n",
+ ctx->bpid, ctx->bp_ena);
+
+ nix_dump("W2: update_time \t\t%d\nW2: avg_level \t\t\t%d",
+ ctx->update_time, ctx->avg_level);
+ nix_dump("W2: head \t\t\t%d\nW2: tail \t\t\t%d\n",
+ ctx->head, ctx->tail);
+
+ nix_dump("W3: cq_err_int_ena \t\t%d\nW3: cq_err_int \t\t\t%d",
+ ctx->cq_err_int_ena, ctx->cq_err_int);
+ nix_dump("W3: qsize \t\t\t%d\nW3: caching \t\t\t%d",
+ ctx->qsize, ctx->caching);
+ nix_dump("W3: substream \t\t\t0x%03x\nW3: ena \t\t\t%d",
+ ctx->substream, ctx->ena);
+ nix_dump("W3: drop_ena \t\t\t%d\nW3: drop \t\t\t%d",
+ ctx->drop_ena, ctx->drop);
+ nix_dump("W3: bp \t\t\t\t%d\n", ctx->bp);
+}
+
+int
+otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc = 0, q, rq = eth_dev->data->nb_rx_queues;
+ int sq = eth_dev->data->nb_tx_queues;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_rsp *rsp;
+ struct nix_aq_enq_req *aq;
+
+ for (q = 0; q < rq; q++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = q;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to get cq context");
+ goto fail;
+ }
+ nix_dump("============== port=%d cq=%d ===============",
+ eth_dev->data->port_id, q);
+ nix_lf_cq_dump(&rsp->cq);
+ }
+
+ for (q = 0; q < rq; q++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = q;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
+ if (rc) {
+ otx2_err("Failed to get rq context");
+ goto fail;
+ }
+ nix_dump("============== port=%d rq=%d ===============",
+ eth_dev->data->port_id, q);
+ nix_lf_rq_dump(&rsp->rq);
+ }
+ for (q = 0; q < sq; q++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = q;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to get sq context");
+ goto fail;
+ }
+ nix_dump("============== port=%d sq=%d ===============",
+ eth_dev->data->port_id, q);
+ nix_lf_sq_dump(&rsp->sq);
+ }
+
+fail:
+ return rc;
+}
+
+/* Dumps struct nix_cqe_hdr_s and struct nix_rx_parse_s */
+void
+otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
+{
+ const struct nix_rx_parse_s *rx =
+ (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
+
+ nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
+ cq->tag, cq->q, cq->node, cq->cqe_type);
+
+ nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d",
+ rx->chan, rx->desc_sizem1);
+ nix_dump("W0: imm_copy \t%d\t\texpress \t%d",
+ rx->imm_copy, rx->express);
+ nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d",
+ rx->wqwd, rx->errlev, rx->errcode);
+ nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d",
+ rx->latype, rx->lbtype, rx->lctype);
+ nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d",
+ rx->ldtype, rx->letype, rx->lftype);
+ nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d",
+ rx->lgtype, rx->lhtype);
+
+ nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1);
+ nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d",
+ rx->l2m, rx->l2b, rx->l3m, rx->l3b);
+ nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d",
+ rx->vtag0_valid, rx->vtag0_gone);
+ nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d",
+ rx->vtag1_valid, rx->vtag1_gone);
+ nix_dump("W1: pkind \t%d", rx->pkind);
+ nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d",
+ rx->vtag0_tci, rx->vtag1_tci);
+
+ nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d",
+ rx->laflags, rx->lbflags, rx->lcflags);
+ nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d",
+ rx->ldflags, rx->leflags, rx->lfflags);
+ nix_dump("W2: lgflags \t%d\t\tlhflags \t%d",
+ rx->lgflags, rx->lhflags);
+
+ nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d",
+ rx->eoh_ptr, rx->wqe_aura, rx->pb_aura);
+ nix_dump("W3: match_id \t%d", rx->match_id);
+
+ nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d",
+ rx->laptr, rx->lbptr, rx->lcptr);
+ nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d",
+ rx->ldptr, rx->leptr, rx->lfptr);
+ nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr);
+
+ nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
+ rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
+}
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index 476c7ea78..fdebdef38 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -23,6 +23,8 @@ nix_lf_err_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
+
+ otx2_nix_queues_ctx_dump(eth_dev);
}
static int
@@ -75,6 +77,8 @@ nix_lf_ras_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_RAS);
+
+ otx2_nix_queues_ctx_dump(eth_dev);
}
static int
@@ -232,6 +236,8 @@ nix_lf_q_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
+
+ otx2_nix_queues_ctx_dump(eth_dev);
}
int
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 10/57] net/octeontx2: add register dump support
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (8 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 09/57] net/octeontx2: add context debug utils jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 11/57] net/octeontx2: add link stats operations jerinj
` (47 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
From: Kiran Kumar K <kirankumark@marvell.com>
Add register dump support and mark "Registers dump" in the feature files.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
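For reference, a minimal application-side sketch of the two-call
rte_eth_dev_get_reg_info() contract this op serves: the first call with
data == NULL asks the PMD for the register count and width, the second
fetches the values (the port id is illustrative):

#include <errno.h>
#include <stdlib.h>
#include <string.h>

#include <rte_ethdev.h>

static int
dump_port_regs(uint16_t port_id)
{
    struct rte_dev_reg_info info;
    int rc;

    memset(&info, 0, sizeof(info));

    /* First call: data == NULL, the PMD fills info.length/info.width */
    rc = rte_eth_dev_get_reg_info(port_id, &info);
    if (rc)
        return rc;

    info.data = calloc(info.length, info.width);
    if (info.data == NULL)
        return -ENOMEM;

    /* Second call: the PMD copies the register values into data */
    rc = rte_eth_dev_get_reg_info(port_id, &info);

    free(info.data);
    return rc;
}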
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 3 +
drivers/net/octeontx2/otx2_ethdev_debug.c | 228 +++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev_irq.c | 6 +
7 files changed, 241 insertions(+)
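For context, the new get_reg hook is reached through the generic
rte_eth_dev_get_reg_info() ethdev API. A minimal usage sketch follows (not part
of the patch; the helper name dump_port_regs is hypothetical) of the two-phase
query that the handler below implements, where a NULL data pointer makes the
PMD report only the register count and width:

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

static int
dump_port_regs(uint16_t port_id)
{
	struct rte_dev_reg_info info;
	int rc;

	memset(&info, 0, sizeof(info));
	/* First call with data == NULL: PMD fills length and width only */
	rc = rte_eth_dev_get_reg_info(port_id, &info);
	if (rc)
		return rc;

	info.data = calloc(info.length, info.width);
	if (info.data == NULL)
		return -ENOMEM;

	/* Second call: PMD copies the register values into data[] */
	rc = rte_eth_dev_get_reg_info(port_id, &info);
	free(info.data);
	return rc;
}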
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 356b88de7..7d53bf0e7 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -8,6 +8,7 @@ Speed capabilities = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
+Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 5f4eaa3f4..e0cc7b22d 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -8,6 +8,7 @@ Speed capabilities = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
+Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 024b032d4..6dfdf88c6 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -7,6 +7,7 @@
Speed capabilities = Y
Lock-free Tx queue = Y
Multiprocess aware = Y
+Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 045855c2e..48d5a15d6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -229,6 +229,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
+ .get_reg = otx2_nix_dev_get_reg,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7c0bef28e..7313689b0 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -175,6 +175,9 @@ void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
/* Debug */
+int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data);
+int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
+ struct rte_dev_reg_info *regs);
int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
index 39cda7637..9f06e5505 100644
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ b/drivers/net/octeontx2/otx2_ethdev_debug.c
@@ -5,6 +5,234 @@
#include "otx2_ethdev.h"
#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
+#define NIX_REG_INFO(reg) {reg, #reg}
+
+struct nix_lf_reg_info {
+ uint32_t offset;
+ const char *name;
+};
+
+static const struct
+nix_lf_reg_info nix_lf_reg[] = {
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(0)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(1)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(2)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(3)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(4)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(5)),
+ NIX_REG_INFO(NIX_LF_CFG),
+ NIX_REG_INFO(NIX_LF_GINT),
+ NIX_REG_INFO(NIX_LF_GINT_W1S),
+ NIX_REG_INFO(NIX_LF_GINT_ENA_W1C),
+ NIX_REG_INFO(NIX_LF_GINT_ENA_W1S),
+ NIX_REG_INFO(NIX_LF_ERR_INT),
+ NIX_REG_INFO(NIX_LF_ERR_INT_W1S),
+ NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1C),
+ NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1S),
+ NIX_REG_INFO(NIX_LF_RAS),
+ NIX_REG_INFO(NIX_LF_RAS_W1S),
+ NIX_REG_INFO(NIX_LF_RAS_ENA_W1C),
+ NIX_REG_INFO(NIX_LF_RAS_ENA_W1S),
+ NIX_REG_INFO(NIX_LF_SQ_OP_ERR_DBG),
+ NIX_REG_INFO(NIX_LF_MNQ_ERR_DBG),
+ NIX_REG_INFO(NIX_LF_SEND_ERR_DBG),
+};
+
+static int
+nix_lf_get_reg_count(struct otx2_eth_dev *dev)
+{
+ int reg_count = 0;
+
+ reg_count = RTE_DIM(nix_lf_reg);
+ /* NIX_LF_TX_STATX */
+ reg_count += dev->lf_tx_stats;
+ /* NIX_LF_RX_STATX */
+ reg_count += dev->lf_rx_stats;
+ /* NIX_LF_QINTX_CNT*/
+ reg_count += dev->qints;
+ /* NIX_LF_QINTX_INT */
+ reg_count += dev->qints;
+ /* NIX_LF_QINTX_ENA_W1S */
+ reg_count += dev->qints;
+ /* NIX_LF_QINTX_ENA_W1C */
+ reg_count += dev->qints;
+ /* NIX_LF_CINTX_CNT */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_WAIT */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_INT */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_INT_W1S */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_ENA_W1S */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_ENA_W1C */
+ reg_count += dev->cints;
+
+ return reg_count;
+}
+
+int
+otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data)
+{
+ uintptr_t nix_lf_base = dev->base;
+ bool dump_stdout;
+ uint64_t reg;
+ uint32_t i;
+
+ dump_stdout = data ? 0 : 1;
+
+ for (i = 0; i < RTE_DIM(nix_lf_reg); i++) {
+ reg = otx2_read64(nix_lf_base + nix_lf_reg[i].offset);
+ if (dump_stdout && reg)
+ nix_dump("%32s = 0x%" PRIx64,
+ nix_lf_reg[i].name, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_TX_STATX */
+ for (i = 0; i < dev->lf_tx_stats; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_TX_STATX(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_TX_STATX", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_RX_STATX */
+ for (i = 0; i < dev->lf_rx_stats; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_RX_STATX(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_RX_STATX", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_CNT*/
+ for (i = 0; i < dev->qints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_CNT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_QINTX_CNT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_INT */
+ for (i = 0; i < dev->qints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_INT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_QINTX_INT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_ENA_W1S */
+ for (i = 0; i < dev->qints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_QINTX_ENA_W1S", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_ENA_W1C */
+ for (i = 0; i < dev->qints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_QINTX_ENA_W1C", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_CNT */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_CNT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_CNT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_WAIT */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_WAIT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_INT */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_INT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_INT_W1S */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_INT_W1S", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_ENA_W1S */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_ENA_W1S", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_ENA_W1C */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_ENA_W1C", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+ return 0;
+}
+
+int
+otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t *data = regs->data;
+
+ if (data == NULL) {
+ regs->length = nix_lf_get_reg_count(dev);
+ regs->width = 8;
+ return 0;
+ }
+
+ if (!regs->length ||
+ regs->length == (uint32_t)nix_lf_get_reg_count(dev)) {
+ otx2_nix_reg_dump(dev, data);
+ return 0;
+ }
+
+ return -ENOTSUP;
+}
static inline void
nix_lf_sq_dump(struct nix_sq_ctx_s *ctx)
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index fdebdef38..066aca7a5 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -24,6 +24,8 @@ nix_lf_err_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
+ /* Dump registers to std out */
+ otx2_nix_reg_dump(dev, NULL);
otx2_nix_queues_ctx_dump(eth_dev);
}
@@ -78,6 +80,8 @@ nix_lf_ras_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_RAS);
+ /* Dump registers to std out */
+ otx2_nix_reg_dump(dev, NULL);
otx2_nix_queues_ctx_dump(eth_dev);
}
@@ -237,6 +241,8 @@ nix_lf_q_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
+ /* Dump registers to std out */
+ otx2_nix_reg_dump(dev, NULL);
otx2_nix_queues_ctx_dump(eth_dev);
}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 11/57] net/octeontx2: add link stats operations
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (9 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 10/57] net/octeontx2: add register dump support jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 12/57] net/octeontx2: add basic stats operation jerinj
` (46 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add link stats related operations and mark the respective
items in the documentation.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 8 ++
drivers/net/octeontx2/otx2_ethdev.h | 8 ++
drivers/net/octeontx2/otx2_link.c | 108 +++++++++++++++++++++
9 files changed, 133 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_link.c
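As a usage sketch (not part of the patch; lsc_event_cb and register_lsc are
hypothetical names): the link_update hook below backs rte_eth_link_get_nowait(),
and the LSC path driven by otx2_eth_dev_link_status_update() reaches application
callbacks registered roughly like this, assuming dev_conf.intr_conf.lsc was set
at configure time:

#include <stdio.h>
#include <rte_ethdev.h>

static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	     void *cb_arg, void *ret_param)
{
	struct rte_eth_link link;

	(void)cb_arg;
	(void)ret_param;
	if (type == RTE_ETH_EVENT_INTR_LSC) {
		/* Reads the status cached by rte_eth_linkstatus_set() */
		rte_eth_link_get_nowait(port_id, &link);
		printf("Port %u link %s\n", port_id,
		       link.link_status ? "up" : "down");
	}
	return 0;
}

static void
register_lsc(uint16_t port_id)
{
	/* Invoked via _rte_eth_dev_callback_process() on link change */
	rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
				      lsc_event_cb, NULL);
}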
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 7d53bf0e7..828351409 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -8,6 +8,8 @@ Speed capabilities = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
+Link status = Y
+Link status event = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index e0cc7b22d..719692dc6 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -8,6 +8,8 @@ Speed capabilities = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
+Link status = Y
+Link status event = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 6dfdf88c6..4d5667583 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -7,6 +7,8 @@
Speed capabilities = Y
Lock-free Tx queue = Y
Multiprocess aware = Y
+Link status = Y
+Link status event = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 75d5746e8..a163f9128 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- SR-IOV VF
- Lock-free Tx queue
+- Link state information
- Debug utilities - Context dump and error interrupt support
Prerequisites
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index c6e24a535..2dfb5043d 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -29,6 +29,7 @@ LIBABIVER := 1
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
+ otx2_link.c \
otx2_ethdev.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index a06e1192c..d693386b9 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -4,6 +4,7 @@
sources = files(
'otx2_mac.c',
+ 'otx2_link.c',
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 48d5a15d6..cb4f6ebb9 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -39,6 +39,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
return NIX_TX_OFFLOAD_CAPA;
}
+static const struct otx2_dev_ops otx2_dev_ops = {
+ .link_status_update = otx2_eth_dev_link_status_update,
+};
+
static int
nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
{
@@ -229,6 +233,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
+ .link_update = otx2_nix_link_update,
.get_reg = otx2_nix_dev_get_reg,
};
@@ -324,6 +329,9 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
goto error;
}
}
+ /* Device generic callbacks */
+ dev->ops = &otx2_dev_ops;
+ dev->eth_dev = eth_dev;
/* Grab the NPA LF if required */
rc = otx2_npa_lf_init(pci_dev, dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7313689b0..d8490337d 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -136,6 +136,7 @@ struct otx2_eth_dev {
uint8_t max_mac_entries;
uint8_t lf_tx_stats;
uint8_t lf_rx_stats;
+ uint16_t flags;
uint16_t cints;
uint16_t qints;
uint8_t configured;
@@ -156,6 +157,7 @@ struct otx2_eth_dev {
struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
struct otx2_rss_info rss_info;
struct otx2_npc_flow_info npc_flow;
+ struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
static inline struct otx2_eth_dev *
@@ -168,6 +170,12 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+/* Link */
+void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
+int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
+void otx2_eth_dev_link_status_update(struct otx2_dev *dev,
+ struct cgx_link_user_info *link);
+
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
new file mode 100644
index 000000000..228a0cd8e
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_ethdev_pci.h>
+
+#include "otx2_ethdev.h"
+
+void
+otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set)
+{
+ if (set)
+ dev->flags |= OTX2_LINK_CFG_IN_PROGRESS_F;
+ else
+ dev->flags &= ~OTX2_LINK_CFG_IN_PROGRESS_F;
+
+ rte_wmb();
+}
+
+static inline int
+nix_wait_for_link_cfg(struct otx2_eth_dev *dev)
+{
+ uint16_t wait = 1000;
+
+ do {
+ rte_rmb();
+ if (!(dev->flags & OTX2_LINK_CFG_IN_PROGRESS_F))
+ break;
+ wait--;
+ rte_delay_ms(1);
+ } while (wait);
+
+ return wait ? 0 : -1;
+}
+
+static void
+nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
+{
+ if (link && link->link_status)
+ otx2_info("Port %d: Link Up - speed %u Mbps - %s",
+ (int)(eth_dev->data->port_id),
+ (uint32_t)link->link_speed,
+ link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ "full-duplex" : "half-duplex");
+ else
+ otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
+}
+
+void
+otx2_eth_dev_link_status_update(struct otx2_dev *dev,
+ struct cgx_link_user_info *link)
+{
+ struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
+ struct rte_eth_dev *eth_dev = otx2_dev->eth_dev;
+ struct rte_eth_link eth_link;
+
+ if (!link || !dev || !eth_dev->data->dev_conf.intr_conf.lsc)
+ return;
+
+ if (nix_wait_for_link_cfg(otx2_dev)) {
+ otx2_err("Timeout waiting for link_cfg to complete");
+ return;
+ }
+
+ eth_link.link_status = link->link_up;
+ eth_link.link_speed = link->speed;
+ eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_duplex = link->full_duplex;
+
+ /* Print link info */
+ nix_link_status_print(eth_dev, &eth_link);
+
+ /* Update link info */
+ rte_eth_linkstatus_set(eth_dev, &eth_link);
+
+ /* Set the flag and execute application callbacks */
+ _rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+}
+
+int
+otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_link_info_msg *rsp;
+ struct rte_eth_link link;
+ int rc;
+
+ RTE_SET_USED(wait_to_complete);
+
+ if (otx2_dev_is_lbk(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_get_linkinfo(mbox);
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ link.link_status = rsp->link_info.link_up;
+ link.link_speed = rsp->link_info.speed;
+ link.link_autoneg = ETH_LINK_AUTONEG;
+
+ if (rsp->link_info.full_duplex)
+ link.link_duplex = rsp->link_info.full_duplex;
+
+ return rte_eth_linkstatus_set(eth_dev, &link);
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 12/57] net/octeontx2: add basic stats operation
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (10 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 11/57] net/octeontx2: add link stats operations jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 13/57] net/octeontx2: add extended stats operations jerinj
` (45 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Kiran Kumar K <kirankumark@marvell.com>
Add basic stats operations and update the feature list.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 3 +
drivers/net/octeontx2/otx2_ethdev.h | 17 +++
drivers/net/octeontx2/otx2_stats.c | 117 +++++++++++++++++++++
9 files changed, 146 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_stats.c
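A minimal usage sketch (not part of the patch; show_basic_stats is a
hypothetical name) of the application-facing calls that land in these new
handlers, assuming queue 0 exists on the port:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
show_basic_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	/* Map queue 0 to stats index 0; this fills dev->rxmap[]/txmap[]
	 * through otx2_nix_queue_stats_mapping().
	 */
	rte_eth_dev_set_rx_queue_stats_mapping(port_id, 0, 0);
	rte_eth_dev_set_tx_queue_stats_mapping(port_id, 0, 0);

	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("rx=%" PRIu64 " tx=%" PRIu64 " q0_rx=%" PRIu64 "\n",
		       stats.ipackets, stats.opackets, stats.q_ipackets[0]);
}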
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 828351409..557107016 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -10,6 +10,8 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Basic stats = Y
+Stats per queue = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 719692dc6..3a2b78e06 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -10,6 +10,8 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Basic stats = Y
+Stats per queue = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 4d5667583..499f66c5c 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -9,6 +9,8 @@ Lock-free Tx queue = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Basic stats = Y
+Stats per queue = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index a163f9128..2944bbb99 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- SR-IOV VF
- Lock-free Tx queue
+- Port hardware statistics
- Link state information
- Debug utilities - Context dump and error interrupt support
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 2dfb5043d..fbe5e9f44 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -30,6 +30,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
otx2_link.c \
+ otx2_stats.c \
otx2_ethdev.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index d693386b9..1c57b1bb4 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
sources = files(
'otx2_mac.c',
'otx2_link.c',
+ 'otx2_stats.c',
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index cb4f6ebb9..5787029d9 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -234,7 +234,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
.link_update = otx2_nix_link_update,
+ .stats_get = otx2_nix_dev_stats_get,
+ .stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
+ .queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index d8490337d..1cd9893a6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -77,6 +77,12 @@
#define NIX_TX_NB_SEG_MAX 9
#endif
+#define CQ_OP_STAT_OP_ERR 63
+#define CQ_OP_STAT_CQ_ERR 46
+
+#define OP_ERR BIT_ULL(CQ_OP_STAT_OP_ERR)
+#define CQ_ERR BIT_ULL(CQ_OP_STAT_CQ_ERR)
+
#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
ETH_RSS_TCP | ETH_RSS_SCTP | \
ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD)
@@ -156,6 +162,8 @@ struct otx2_eth_dev {
uint64_t tx_offload_capa;
struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
struct otx2_rss_info rss_info;
+ uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+ uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
struct otx2_npc_flow_info npc_flow;
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
@@ -189,6 +197,15 @@ int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
+/* Stats */
+int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_stats *stats);
+void otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev);
+
+int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev,
+ uint16_t queue_id, uint8_t stat_idx,
+ uint8_t is_rx);
+
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c
new file mode 100644
index 000000000..ade0f6ad6
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_stats.c
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <inttypes.h>
+
+#include "otx2_ethdev.h"
+
+int
+otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_stats *stats)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t reg, val;
+ uint32_t qidx, i;
+ int64_t *addr;
+
+ stats->opackets = otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_UCAST));
+ stats->opackets += otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_MCAST));
+ stats->opackets += otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_BCAST));
+ stats->oerrors = otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_DROP));
+ stats->obytes = otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_OCTS));
+
+ stats->ipackets = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_UCAST));
+ stats->ipackets += otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_MCAST));
+ stats->ipackets += otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_BCAST));
+ stats->imissed = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_DROP));
+ stats->ibytes = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_OCTS));
+ stats->ierrors = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_ERR));
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+ if (dev->txmap[i] & (1U << 31)) {
+ qidx = dev->txmap[i] & 0xFFFF;
+ reg = (((uint64_t)qidx) << 32);
+
+ addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_opackets[i] = val;
+
+ addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_obytes[i] = val;
+
+ addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_DROP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_errors[i] = val;
+ }
+ }
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+ if (dev->rxmap[i] & (1U << 31)) {
+ qidx = dev->rxmap[i] & 0xFFFF;
+ reg = (((uint64_t)qidx) << 32);
+
+ addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_ipackets[i] = val;
+
+ addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_OCTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_ibytes[i] = val;
+
+ addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_DROP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_errors[i] += val;
+ }
+ }
+
+ return 0;
+}
+
+void
+otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+
+ otx2_mbox_alloc_msg_nix_stats_rst(mbox);
+ otx2_mbox_process(mbox);
+}
+
+int
+otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ uint8_t stat_idx, uint8_t is_rx)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ if (is_rx)
+ dev->rxmap[stat_idx] = ((1U << 31) | queue_id);
+ else
+ dev->txmap[stat_idx] = ((1U << 31) | queue_id);
+
+ return 0;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 13/57] net/octeontx2: add extended stats operations
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (11 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 12/57] net/octeontx2: add basic stats operation jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 14/57] net/octeontx2: add promiscuous and allmulticast mode jerinj
` (44 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Kiran Kumar K <kirankumark@marvell.com>
Add extended stats operations and update the feature list.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 5 +
drivers/net/octeontx2/otx2_ethdev.h | 13 +
drivers/net/octeontx2/otx2_stats.c | 270 +++++++++++++++++++++
6 files changed, 291 insertions(+)
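A minimal usage sketch (not part of the patch; show_xstats is a hypothetical
name) of the name/value pairing these handlers serve, sizing the arrays with a
first NULL call:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
show_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names;
	struct rte_eth_xstat *vals;
	int n, i;

	/* First call sizes the arrays: returns the required count */
	n = rte_eth_xstats_get(port_id, NULL, 0);
	if (n <= 0)
		return;

	names = calloc(n, sizeof(*names));
	vals = calloc(n, sizeof(*vals));
	if (names != NULL && vals != NULL &&
	    rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, vals, n) == n) {
		for (i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n",
			       names[vals[i].id].name, vals[i].value);
	}
	free(names);
	free(vals);
}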
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 557107016..8d7c3588c 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Basic stats = Y
Stats per queue = Y
+Extended stats = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 3a2b78e06..a6e6876fa 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -11,6 +11,7 @@ Multiprocess aware = Y
Link status = Y
Link status event = Y
Basic stats = Y
+Extended stats = Y
Stats per queue = Y
Registers dump = Y
Linux VFIO = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 499f66c5c..6ec83e823 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -10,6 +10,7 @@ Multiprocess aware = Y
Link status = Y
Link status event = Y
Basic stats = Y
+Extended stats = Y
Stats per queue = Y
Registers dump = Y
Linux VFIO = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 5787029d9..937ba6399 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -238,6 +238,11 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
.queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
+ .xstats_get = otx2_nix_xstats_get,
+ .xstats_get_names = otx2_nix_xstats_get_names,
+ .xstats_reset = otx2_nix_xstats_reset,
+ .xstats_get_by_id = otx2_nix_xstats_get_by_id,
+ .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 1cd9893a6..7d53a6643 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -205,6 +205,19 @@ void otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev);
int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev,
uint16_t queue_id, uint8_t stat_idx,
uint8_t is_rx);
+int otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat *xstats, unsigned int n);
+int otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int limit);
+void otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev);
+
+int otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ uint64_t *values, unsigned int n);
+int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids, unsigned int limit);
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c
index ade0f6ad6..deb83b704 100644
--- a/drivers/net/octeontx2/otx2_stats.c
+++ b/drivers/net/octeontx2/otx2_stats.c
@@ -6,6 +6,45 @@
#include "otx2_ethdev.h"
+struct otx2_nix_xstats_name {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ uint32_t offset;
+};
+
+static const struct otx2_nix_xstats_name nix_tx_xstats[] = {
+ {"tx_ucast", NIX_STAT_LF_TX_TX_UCAST},
+ {"tx_bcast", NIX_STAT_LF_TX_TX_BCAST},
+ {"tx_mcast", NIX_STAT_LF_TX_TX_MCAST},
+ {"tx_drop", NIX_STAT_LF_TX_TX_DROP},
+ {"tx_octs", NIX_STAT_LF_TX_TX_OCTS},
+};
+
+static const struct otx2_nix_xstats_name nix_rx_xstats[] = {
+ {"rx_octs", NIX_STAT_LF_RX_RX_OCTS},
+ {"rx_ucast", NIX_STAT_LF_RX_RX_UCAST},
+ {"rx_bcast", NIX_STAT_LF_RX_RX_BCAST},
+ {"rx_mcast", NIX_STAT_LF_RX_RX_MCAST},
+ {"rx_drop", NIX_STAT_LF_RX_RX_DROP},
+ {"rx_drop_octs", NIX_STAT_LF_RX_RX_DROP_OCTS},
+ {"rx_fcs", NIX_STAT_LF_RX_RX_FCS},
+ {"rx_err", NIX_STAT_LF_RX_RX_ERR},
+ {"rx_drp_bcast", NIX_STAT_LF_RX_RX_DRP_BCAST},
+ {"rx_drp_mcast", NIX_STAT_LF_RX_RX_DRP_MCAST},
+ {"rx_drp_l3bcast", NIX_STAT_LF_RX_RX_DRP_L3BCAST},
+ {"rx_drp_l3mcast", NIX_STAT_LF_RX_RX_DRP_L3MCAST},
+};
+
+static const struct otx2_nix_xstats_name nix_q_xstats[] = {
+ {"rq_op_re_pkts", NIX_LF_RQ_OP_RE_PKTS},
+};
+
+#define OTX2_NIX_NUM_RX_XSTATS RTE_DIM(nix_rx_xstats)
+#define OTX2_NIX_NUM_TX_XSTATS RTE_DIM(nix_tx_xstats)
+#define OTX2_NIX_NUM_QUEUE_XSTATS RTE_DIM(nix_q_xstats)
+
+#define OTX2_NIX_NUM_XSTATS_REG (OTX2_NIX_NUM_RX_XSTATS + \
+ OTX2_NIX_NUM_TX_XSTATS + OTX2_NIX_NUM_QUEUE_XSTATS)
+
int
otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
struct rte_eth_stats *stats)
@@ -115,3 +154,234 @@ otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id,
return 0;
}
+
+int
+otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ unsigned int i, count = 0;
+ uint64_t reg, val;
+
+ if (n < OTX2_NIX_NUM_XSTATS_REG)
+ return OTX2_NIX_NUM_XSTATS_REG;
+
+ if (xstats == NULL)
+ return 0;
+
+ for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
+ xstats[count].value = otx2_read64(dev->base +
+ NIX_LF_TX_STATX(nix_tx_xstats[i].offset));
+ xstats[count].id = count;
+ count++;
+ }
+
+ for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
+ xstats[count].value = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(nix_rx_xstats[i].offset));
+ xstats[count].id = count;
+ count++;
+ }
+
+ xstats[count].value = 0;
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ reg = (((uint64_t)i) << 32);
+ val = otx2_atomic64_add_nosync(reg, (int64_t *)(dev->base +
+ nix_q_xstats[0].offset));
+ if (val & OP_ERR)
+ val = 0;
+ xstats[count].value += val;
+ }
+ xstats[count].id = count;
+ count++;
+
+ return count;
+}
+
+int
+otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int limit)
+{
+ unsigned int i, count = 0;
+
+ RTE_SET_USED(eth_dev);
+
+ if (limit < OTX2_NIX_NUM_XSTATS_REG && xstats_names != NULL)
+ return -ENOMEM;
+
+ if (xstats_names) {
+ for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", nix_tx_xstats[i].name);
+ count++;
+ }
+
+ for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", nix_rx_xstats[i].name);
+ count++;
+ }
+
+ for (i = 0; i < OTX2_NIX_NUM_QUEUE_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", nix_q_xstats[i].name);
+ count++;
+ }
+ }
+
+ return OTX2_NIX_NUM_XSTATS_REG;
+}
+
+int
+otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids, unsigned int limit)
+{
+ struct rte_eth_xstat_name xstats_names_copy[OTX2_NIX_NUM_XSTATS_REG];
+ uint16_t i;
+
+ if (limit < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
+ return OTX2_NIX_NUM_XSTATS_REG;
+
+ if (limit > OTX2_NIX_NUM_XSTATS_REG)
+ return -EINVAL;
+
+ if (xstats_names == NULL)
+ return -ENOMEM;
+
+ otx2_nix_xstats_get_names(eth_dev, xstats_names_copy, limit);
+
+ for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
+ if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
+ otx2_err("Invalid id value");
+ return -EINVAL;
+ }
+ strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name,
+ sizeof(xstats_names[i].name));
+ }
+
+ return limit;
+}
+
+int
+otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids,
+ uint64_t *values, unsigned int n)
+{
+ struct rte_eth_xstat xstats[OTX2_NIX_NUM_XSTATS_REG];
+ uint16_t i;
+
+ if (n < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
+ return OTX2_NIX_NUM_XSTATS_REG;
+
+ if (n > OTX2_NIX_NUM_XSTATS_REG)
+ return -EINVAL;
+
+ if (values == NULL)
+ return -ENOMEM;
+
+ otx2_nix_xstats_get(eth_dev, xstats, n);
+
+ for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
+ if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
+ otx2_err("Invalid id value");
+ return -EINVAL;
+ }
+ values[i] = xstats[ids[i]].value;
+ }
+
+ return n;
+}
+
+static void
+nix_queue_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_rsp *rsp;
+ struct nix_aq_enq_req *aq;
+ uint32_t i;
+ int rc;
+
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = i;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to read rq context");
+ return;
+ }
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = i;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+ otx2_mbox_memcpy(&aq->rq, &rsp->rq, sizeof(rsp->rq));
+ otx2_mbox_memset(&aq->rq_mask, 0, sizeof(aq->rq_mask));
+ aq->rq.octs = 0;
+ aq->rq.pkts = 0;
+ aq->rq.drop_octs = 0;
+ aq->rq.drop_pkts = 0;
+ aq->rq.re_pkts = 0;
+
+ aq->rq_mask.octs = ~(aq->rq_mask.octs);
+ aq->rq_mask.pkts = ~(aq->rq_mask.pkts);
+ aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs);
+ aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts);
+ aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts);
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to write rq context");
+ return;
+ }
+ }
+
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = i;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to read sq context");
+ return;
+ }
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = i;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+ otx2_mbox_memcpy(&aq->sq, &rsp->sq, sizeof(rsp->sq));
+ otx2_mbox_memset(&aq->sq_mask, 0, sizeof(aq->sq_mask));
+ aq->sq.octs = 0;
+ aq->sq.pkts = 0;
+ aq->sq.drop_octs = 0;
+ aq->sq.drop_pkts = 0;
+
+ aq->sq_mask.octs = ~(aq->sq_mask.octs);
+ aq->sq_mask.pkts = ~(aq->sq_mask.pkts);
+ aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs);
+ aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts);
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to write sq context");
+ return;
+ }
+ }
+}
+
+void
+otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+
+ otx2_mbox_alloc_msg_nix_stats_rst(mbox);
+ otx2_mbox_process(mbox);
+
+ /* Reset queue stats */
+ nix_queue_stats_reset(eth_dev);
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 14/57] net/octeontx2: add promiscuous and allmulticast mode
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (12 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 13/57] net/octeontx2: add extended stats operations jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 15/57] net/octeontx2: add unicast MAC filter jerinj
` (43 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru, Sunil Kumar Kori
From: Vamsi Attunuru <vattunuru@marvell.com>
Add promiscuous and allmulticast modes for PF devices and
update the respective feature list.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 4 ++
drivers/net/octeontx2/otx2_ethdev.h | 6 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 82 ++++++++++++++++++++++
6 files changed, 97 insertions(+)
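The usual entry points are the generic ethdev calls; a minimal sketch (not part
of the patch; set_rx_modes is a hypothetical name), noting that on this PMD the
NIX-level handlers are no-ops for VFs:

#include <rte_ethdev.h>

static void
set_rx_modes(uint16_t port_id, int promisc, int allmulti)
{
	/* Each call ends up in a nix_set_rx_mode/cgx mailbox request */
	if (promisc)
		rte_eth_promiscuous_enable(port_id);
	else
		rte_eth_promiscuous_disable(port_id);

	if (allmulti)
		rte_eth_allmulticast_enable(port_id);
	else
		rte_eth_allmulticast_disable(port_id);
}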
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 8d7c3588c..9f682609d 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -10,6 +10,8 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Promiscuous mode = Y
+Allmulticast mode = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index a6e6876fa..764e95ce6 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -10,6 +10,8 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Promiscuous mode = Y
+Allmulticast mode = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 2944bbb99..9ef7be08f 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -16,6 +16,7 @@ Features
Features of the OCTEON TX2 Ethdev PMD are:
+- Promiscuous mode
- SR-IOV VF
- Lock-free Tx queue
- Port hardware statistics
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 937ba6399..826ce7f4e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -237,6 +237,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
+ .promiscuous_enable = otx2_nix_promisc_enable,
+ .promiscuous_disable = otx2_nix_promisc_disable,
+ .allmulticast_enable = otx2_nix_allmulticast_enable,
+ .allmulticast_disable = otx2_nix_allmulticast_disable,
.queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
.xstats_get = otx2_nix_xstats_get,
.xstats_get_names = otx2_nix_xstats_get_names,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7d53a6643..814fd6ec3 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -178,6 +178,12 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
+void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
+void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
+void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
+void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
+
/* Link */
void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index df7e909d2..301a597f8 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -4,6 +4,88 @@
#include "otx2_ethdev.h"
+static void
+nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return;
+
+ if (en)
+ otx2_mbox_alloc_msg_cgx_promisc_enable(mbox);
+ else
+ otx2_mbox_alloc_msg_cgx_promisc_disable(mbox);
+
+ otx2_mbox_process(mbox);
+}
+
+void
+otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_rx_mode *req;
+
+ if (otx2_dev_is_vf(dev))
+ return;
+
+ req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
+
+ if (en)
+ req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
+
+ otx2_mbox_process(mbox);
+ eth_dev->data->promiscuous = en;
+}
+
+void
+otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev)
+{
+ otx2_nix_promisc_config(eth_dev, 1);
+ nix_cgx_promisc_config(eth_dev, 1);
+}
+
+void
+otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev)
+{
+ otx2_nix_promisc_config(eth_dev, 0);
+ nix_cgx_promisc_config(eth_dev, 0);
+}
+
+static void
+nix_allmulticast_config(struct rte_eth_dev *eth_dev, int en)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_rx_mode *req;
+
+ if (otx2_dev_is_vf(dev))
+ return;
+
+ req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
+
+ if (en)
+ req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_ALLMULTI;
+ else if (eth_dev->data->promiscuous)
+ req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
+
+ otx2_mbox_process(mbox);
+}
+
+void
+otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev)
+{
+ nix_allmulticast_config(eth_dev, 1);
+}
+
+void
+otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev)
+{
+ nix_allmulticast_config(eth_dev, 0);
+}
+
void
otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 15/57] net/octeontx2: add unicast MAC filter
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (13 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 14/57] net/octeontx2: add promiscuous and allmulticast mode jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 16/57] net/octeontx2: add RSS support jerinj
` (42 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Sunil Kumar Kori, Vamsi Attunuru
From: Sunil Kumar Kori <skori@marvell.com>
Add unicast MAC filter support for PF devices and
update the respective feature list.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 3 +
drivers/net/octeontx2/otx2_ethdev.h | 6 ++
drivers/net/octeontx2/otx2_mac.c | 77 ++++++++++++++++++++++
6 files changed, 89 insertions(+)
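A minimal usage sketch (not part of the patch; add_unicast_filter is a
hypothetical name, and the locally administered MAC address is chosen purely
for illustration) of installing a secondary unicast address, which this PMD
turns into a CGX DMAC filter entry via mailbox:

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
add_unicast_filter(uint16_t port_id)
{
	/* Locally administered address, illustration only */
	struct rte_ether_addr addr = {
		.addr_bytes = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01}
	};

	/* pool is unused by otx2_nix_mac_addr_add() */
	return rte_eth_dev_mac_addr_add(port_id, &addr, 0);
}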
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 9f682609d..566496113 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Promiscuous mode = Y
Allmulticast mode = Y
+Unicast MAC filter = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 764e95ce6..195a48940 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Promiscuous mode = Y
Allmulticast mode = Y
+Unicast MAC filter = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 9ef7be08f..8385c9c18 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -19,6 +19,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Promiscuous mode
- SR-IOV VF
- Lock-free Tx queue
+- MAC filtering
- Port hardware statistics
- Link state information
- Debug utilities - Context dump and error interrupt support
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 826ce7f4e..a72c901f4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -237,6 +237,9 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
+ .mac_addr_add = otx2_nix_mac_addr_add,
+ .mac_addr_remove = otx2_nix_mac_addr_del,
+ .mac_addr_set = otx2_nix_mac_addr_set,
.promiscuous_enable = otx2_nix_promisc_enable,
.promiscuous_disable = otx2_nix_promisc_disable,
.allmulticast_enable = otx2_nix_allmulticast_enable,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 814fd6ec3..56517845b 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -232,7 +232,13 @@ int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
/* Mac address handling */
+int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
+ struct rte_ether_addr *addr);
int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
+int otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev,
+ struct rte_ether_addr *addr,
+ uint32_t index, uint32_t pool);
+void otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index);
int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
/* Devargs */
diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c
index 89b0ca6b0..b4bcc61f8 100644
--- a/drivers/net/octeontx2/otx2_mac.c
+++ b/drivers/net/octeontx2/otx2_mac.c
@@ -49,6 +49,83 @@ otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev)
return rsp->max_dmac_filters;
}
+int
+otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr,
+ uint32_t index __rte_unused, uint32_t pool __rte_unused)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_mac_addr_add_req *req;
+ struct cgx_mac_addr_add_rsp *rsp;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ if (otx2_dev_active_vfs(dev))
+ return -ENOTSUP;
+
+ req = otx2_mbox_alloc_msg_cgx_mac_addr_add(mbox);
+ otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to add mac address, rc=%d", rc);
+ goto done;
+ }
+
+ /* Enable promiscuous mode at NIX level */
+ otx2_nix_promisc_config(eth_dev, 1);
+
+done:
+ return rc;
+}
+
+void
+otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_mac_addr_del_req *req;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return;
+
+ req = otx2_mbox_alloc_msg_cgx_mac_addr_del(mbox);
+ req->index = index;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ otx2_err("Failed to delete mac address, rc=%d", rc);
+}
+
+int
+otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_set_mac_addr *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_nix_set_mac_addr(mbox);
+ otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to set mac address, rc=%d", rc);
+ goto done;
+ }
+
+ otx2_mbox_memcpy(dev->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ /* Install the same entry into CGX DMAC filter table too. */
+ otx2_cgx_mac_addr_set(eth_dev, addr);
+
+done:
+ return rc;
+}
+
int
otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 16/57] net/octeontx2: add RSS support
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (14 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 15/57] net/octeontx2: add unicast MAC filter jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 17/57] net/octeontx2: add Rx queue setup and release jerinj
` (41 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add RSS support and expose the RSS related functions
needed to implement the RSS action in the rte_flow driver.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 4 +
doc/guides/nics/features/octeontx2_vec.ini | 4 +
doc/guides/nics/features/octeontx2_vf.ini | 4 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 11 +
drivers/net/octeontx2/otx2_ethdev.h | 33 ++
drivers/net/octeontx2/otx2_rss.c | 372 +++++++++++++++++++++
9 files changed, 431 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_rss.c
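A minimal usage sketch (not part of the patch; setup_reta is a hypothetical
name) of driving the new reta_update hook, assuming reta_size is at most 256
entries (NIX_RSS_RETA_SIZE_MAX in the header below) and matches what
rte_eth_dev_info_get() reports for the port:

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
setup_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_rxq)
{
	/* 256 entries max -> 4 groups of RTE_RETA_GROUP_SIZE (64) */
	struct rte_eth_rss_reta_entry64 reta_conf[4];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
		reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
			i % nb_rxq;	/* round-robin queues over the table */
	}
	/* Lands in otx2_nix_dev_reta_update(); size must equal rss_size */
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}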
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 566496113..f2d47d57b 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -13,6 +13,10 @@ Link status event = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Inner RSS = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 195a48940..a67353d2a 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -13,6 +13,10 @@ Link status event = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Inner RSS = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 6ec83e823..97d66ddde 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -9,6 +9,10 @@ Lock-free Tx queue = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Inner RSS = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 8385c9c18..3bee3f3ca 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -19,6 +19,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Promiscuous mode
- SR-IOV VF
- Lock-free Tx queue
+- Receive Side Scaling (RSS)
- MAC filtering
- Port hardware statistics
- Link state information
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index fbe5e9f44..f9f9ae6e6 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -28,6 +28,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_rss.c \
otx2_mac.c \
otx2_link.c \
otx2_stats.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 1c57b1bb4..8681a2642 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files(
+ 'otx2_rss.c',
'otx2_mac.c',
'otx2_link.c',
'otx2_stats.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index a72c901f4..5289c79e8 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -195,6 +195,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Configure RSS */
+ rc = otx2_nix_rss_config(eth_dev);
+ if (rc) {
+ otx2_err("Failed to configure rss rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/* Register queue IRQs */
rc = oxt2_nix_register_queue_irqs(eth_dev);
if (rc) {
@@ -245,6 +252,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.allmulticast_enable = otx2_nix_allmulticast_enable,
.allmulticast_disable = otx2_nix_allmulticast_disable,
.queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
+ .reta_update = otx2_nix_dev_reta_update,
+ .reta_query = otx2_nix_dev_reta_query,
+ .rss_hash_update = otx2_nix_rss_hash_update,
+ .rss_hash_conf_get = otx2_nix_rss_hash_conf_get,
.xstats_get = otx2_nix_xstats_get,
.xstats_get_names = otx2_nix_xstats_get_names,
.xstats_reset = otx2_nix_xstats_reset,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 56517845b..19a4e45b0 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -59,6 +59,7 @@
#define NIX_MAX_SQB 512
#define NIX_MIN_SQB 32
+#define NIX_RSS_RETA_SIZE_MAX 256
/* Group 0 will be used for RSS, 1 -7 will be used for rte_flow RSS action*/
#define NIX_RSS_GRPS 8
#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
@@ -112,14 +113,22 @@
DEV_RX_OFFLOAD_QINQ_STRIP | \
DEV_RX_OFFLOAD_TIMESTAMP)
+#define NIX_DEFAULT_RSS_CTX_GROUP 0
+#define NIX_DEFAULT_RSS_MCAM_IDX -1
+
struct otx2_qint {
struct rte_eth_dev *eth_dev;
uint8_t qintx;
};
struct otx2_rss_info {
+ uint64_t nix_rss;
+ uint32_t flowkey_cfg;
uint16_t rss_size;
uint8_t rss_grps;
+ uint8_t alg_idx; /* Selected algo index */
+ uint16_t ind_tbl[NIX_RSS_RETA_SIZE_MAX];
+ uint8_t key[NIX_HASH_KEY_SIZE];
};
struct otx2_npc_flow_info {
@@ -225,6 +234,30 @@ int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
struct rte_eth_xstat_name *xstats_names,
const uint64_t *ids, unsigned int limit);
+/* RSS */
+void otx2_nix_rss_set_key(struct otx2_eth_dev *dev,
+ uint8_t *key, uint32_t key_len);
+uint32_t otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev,
+ uint64_t ethdev_rss, uint8_t rss_level);
+int otx2_rss_set_hf(struct otx2_eth_dev *dev,
+ uint32_t flowkey_cfg, uint8_t *alg_idx,
+ uint8_t group, int mcam_index);
+int otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev, uint8_t group,
+ uint16_t *ind_tbl);
+int otx2_nix_rss_config(struct rte_eth_dev *eth_dev);
+
+int otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+int otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+int otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf);
+
+int otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf);
+
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
new file mode 100644
index 000000000..5afa21490
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -0,0 +1,372 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+int
+otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
+ uint8_t group, uint16_t *ind_tbl)
+{
+ struct otx2_rss_info *rss = &dev->rss_info;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *req;
+ int rc, idx;
+
+ for (idx = 0; idx < rss->rss_size; idx++) {
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req)
+ return -ENOMEM;
+ }
+ req->rss.rq = ind_tbl[idx];
+ /* Fill AQ info */
+ req->qidx = (group * rss->rss_size) + idx;
+ req->ctype = NIX_AQ_CTYPE_RSS;
+ req->op = NIX_AQ_INSTOP_INIT;
+ }
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ return 0;
+}
+
+int
+otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_rss_info *rss = &dev->rss_info;
+ int rc, i, j;
+ int idx = 0;
+
+ rc = -EINVAL;
+ if (reta_size != dev->rss_info.rss_size) {
+ otx2_err("Size of hash lookup table configured "
+ "(%d) doesn't match the number hardware can supported "
+ "(%d)", reta_size, dev->rss_info.rss_size);
+ goto fail;
+ }
+
+ /* Copy RETA table */
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ if ((reta_conf[i].mask >> j) & 0x01)
+ rss->ind_tbl[idx] = reta_conf[i].reta[j];
+ idx++;
+ }
+ }
+
+ return otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
+
+fail:
+ return rc;
+}
+
+int
+otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_rss_info *rss = &dev->rss_info;
+ int rc, i, j;
+
+ rc = -EINVAL;
+
+ if (reta_size != dev->rss_info.rss_size) {
+ otx2_err("Size of hash lookup table configured "
+ "(%d) doesn't match the number hardware can supported "
+ "(%d)", reta_size, dev->rss_info.rss_size);
+ goto fail;
+ }
+
+ /* Copy RETA table */
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ if ((reta_conf[i].mask >> j) & 0x01)
+ reta_conf[i].reta[j] = rss->ind_tbl[(i * RTE_RETA_GROUP_SIZE) + j];
+ }
+
+ return 0;
+
+fail:
+ return rc;
+}
+
+void
+otx2_nix_rss_set_key(struct otx2_eth_dev *dev, uint8_t *key,
+ uint32_t key_len)
+{
+ const uint8_t default_key[NIX_HASH_KEY_SIZE] = {
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
+ };
+ struct otx2_rss_info *rss = &dev->rss_info;
+ uint64_t *keyptr;
+ uint64_t val;
+ uint32_t idx;
+
+ if (key == NULL || key_len == 0) {
+ keyptr = (uint64_t *)(uintptr_t)default_key;
+ key_len = NIX_HASH_KEY_SIZE;
+ memset(rss->key, 0, key_len);
+ } else {
+ memcpy(rss->key, key, key_len);
+ keyptr = (uint64_t *)rss->key;
+ }
+
+ for (idx = 0; idx < (key_len >> 3); idx++) {
+ val = rte_cpu_to_be_64(*keyptr);
+ otx2_write64(val, dev->base + NIX_LF_RX_SECRETX(idx));
+ keyptr++;
+ }
+}
+
+static void
+rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
+{
+ uint64_t *keyptr = (uint64_t *)key;
+ uint64_t val;
+ int idx;
+
+ for (idx = 0; idx < (NIX_HASH_KEY_SIZE >> 3); idx++) {
+ val = otx2_read64(dev->base + NIX_LF_RX_SECRETX(idx));
+ *keyptr = rte_be_to_cpu_64(val);
+ keyptr++;
+ }
+}
+
+#define RSS_IPV4_ENABLE ( \
+ ETH_RSS_IPV4 | \
+ ETH_RSS_FRAG_IPV4 | \
+ ETH_RSS_NONFRAG_IPV4_UDP | \
+ ETH_RSS_NONFRAG_IPV4_TCP | \
+ ETH_RSS_NONFRAG_IPV4_SCTP)
+
+#define RSS_IPV6_ENABLE ( \
+ ETH_RSS_IPV6 | \
+ ETH_RSS_FRAG_IPV6 | \
+ ETH_RSS_NONFRAG_IPV6_UDP | \
+ ETH_RSS_NONFRAG_IPV6_TCP | \
+ ETH_RSS_NONFRAG_IPV6_SCTP)
+
+#define RSS_IPV6_EX_ENABLE ( \
+ ETH_RSS_IPV6_EX | \
+ ETH_RSS_IPV6_TCP_EX | \
+ ETH_RSS_IPV6_UDP_EX)
+
+#define RSS_MAX_LEVELS 3
+
+#define RSS_IPV4_INDEX 0
+#define RSS_IPV6_INDEX 1
+#define RSS_TCP_INDEX 2
+#define RSS_UDP_INDEX 3
+#define RSS_SCTP_INDEX 4
+#define RSS_DMAC_INDEX 5
+
+uint32_t
+otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
+ uint8_t rss_level)
+{
+ uint32_t flow_key_type[RSS_MAX_LEVELS][6] = {
+ {
+ FLOW_KEY_TYPE_IPV4, FLOW_KEY_TYPE_IPV6,
+ FLOW_KEY_TYPE_TCP, FLOW_KEY_TYPE_UDP,
+ FLOW_KEY_TYPE_SCTP, FLOW_KEY_TYPE_ETH_DMAC
+ },
+ {
+ FLOW_KEY_TYPE_INNR_IPV4, FLOW_KEY_TYPE_INNR_IPV6,
+ FLOW_KEY_TYPE_INNR_TCP, FLOW_KEY_TYPE_INNR_UDP,
+ FLOW_KEY_TYPE_INNR_SCTP, FLOW_KEY_TYPE_INNR_ETH_DMAC
+ },
+ {
+ FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_INNR_IPV4,
+ FLOW_KEY_TYPE_IPV6 | FLOW_KEY_TYPE_INNR_IPV6,
+ FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_INNR_TCP,
+ FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_INNR_UDP,
+ FLOW_KEY_TYPE_SCTP | FLOW_KEY_TYPE_INNR_SCTP,
+ FLOW_KEY_TYPE_ETH_DMAC | FLOW_KEY_TYPE_INNR_ETH_DMAC
+ }
+ };
+ uint32_t flowkey_cfg = 0;
+
+ dev->rss_info.nix_rss = ethdev_rss;
+
+ if (ethdev_rss & RSS_IPV4_ENABLE)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_IPV4_INDEX];
+
+ if (ethdev_rss & RSS_IPV6_ENABLE)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
+
+ if (ethdev_rss & ETH_RSS_TCP)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
+
+ if (ethdev_rss & ETH_RSS_UDP)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
+
+ if (ethdev_rss & ETH_RSS_SCTP)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
+
+ if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
+
+ if (ethdev_rss & RSS_IPV6_EX_ENABLE)
+ flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
+
+ if (ethdev_rss & ETH_RSS_PORT)
+ flowkey_cfg |= FLOW_KEY_TYPE_PORT;
+
+ if (ethdev_rss & ETH_RSS_NVGRE)
+ flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
+
+ if (ethdev_rss & ETH_RSS_VXLAN)
+ flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
+
+ if (ethdev_rss & ETH_RSS_GENEVE)
+ flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
+
+ return flowkey_cfg;
+}
+
+int
+otx2_rss_set_hf(struct otx2_eth_dev *dev, uint32_t flowkey_cfg,
+ uint8_t *alg_idx, uint8_t group, int mcam_index)
+{
+ struct nix_rss_flowkey_cfg_rsp *rss_rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_rss_flowkey_cfg *cfg;
+ int rc;
+
+ rc = -EINVAL;
+
+ dev->rss_info.flowkey_cfg = flowkey_cfg;
+
+ cfg = otx2_mbox_alloc_msg_nix_rss_flowkey_cfg(mbox);
+
+ cfg->flowkey_cfg = flowkey_cfg;
+ cfg->mcam_index = mcam_index; /* -1 indicates default group */
+ cfg->group = group; /* 0 is default group */
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rss_rsp);
+ if (rc)
+ return rc;
+
+ if (alg_idx)
+ *alg_idx = rss_rsp->alg_idx;
+
+ return rc;
+}
+
+int
+otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint32_t flowkey_cfg;
+ uint8_t alg_idx;
+ int rc;
+
+ rc = -EINVAL;
+
+ if (rss_conf->rss_key && rss_conf->rss_key_len != NIX_HASH_KEY_SIZE) {
+ otx2_err("Hash key size mismatch %d vs %d",
+ rss_conf->rss_key_len, NIX_HASH_KEY_SIZE);
+ goto fail;
+ }
+
+ if (rss_conf->rss_key)
+ otx2_nix_rss_set_key(dev, rss_conf->rss_key,
+ (uint32_t)rss_conf->rss_key_len);
+
+ flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_conf->rss_hf, 0);
+
+ rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
+ NIX_DEFAULT_RSS_CTX_GROUP,
+ NIX_DEFAULT_RSS_MCAM_IDX);
+ if (rc) {
+ otx2_err("Failed to set RSS hash function rc=%d", rc);
+ return rc;
+ }
+
+ dev->rss_info.alg_idx = alg_idx;
+
+fail:
+ return rc;
+}
+
+int
+otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ if (rss_conf->rss_key)
+ rss_get_key(dev, rss_conf->rss_key);
+
+ rss_conf->rss_key_len = NIX_HASH_KEY_SIZE;
+ rss_conf->rss_hf = dev->rss_info.nix_rss;
+
+ return 0;
+}
+
+int
+otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint32_t idx, qcnt = eth_dev->data->nb_rx_queues;
+ uint32_t flowkey_cfg;
+ uint64_t rss_hf;
+ uint8_t alg_idx;
+ int rc;
+
+ /* Skip further configuration if selected mode is not RSS */
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+ return 0;
+
+ /* Update default RSS key and cfg */
+ otx2_nix_rss_set_key(dev, NULL, 0);
+
+ /* Update default RSS RETA */
+ for (idx = 0; idx < dev->rss_info.rss_size; idx++)
+ dev->rss_info.ind_tbl[idx] = idx % qcnt;
+
+ /* Init RSS table context */
+ rc = otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
+ if (rc) {
+ otx2_err("Failed to init RSS table rc=%d", rc);
+ return rc;
+ }
+
+ rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
+ flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, 0);
+
+ rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
+ NIX_DEFAULT_RSS_CTX_GROUP,
+ NIX_DEFAULT_RSS_MCAM_IDX);
+ if (rc) {
+ otx2_err("Failed to set RSS hash function rc=%d", rc);
+ return rc;
+ }
+
+ dev->rss_info.alg_idx = alg_idx;
+
+ return 0;
+}
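The RSS ops wired up above are reached through the generic ethdev API.
A minimal caller-side sketch, assuming port 0 is configured with
mq_mode ETH_MQ_RX_RSS, four Rx queues and a 64-entry hardware RETA
(all illustrative; error handling trimmed):

#include <string.h>
#include <rte_ethdev.h>

/* Sketch: program a 48-byte (352-bit) key, pick hash types, then
 * spread the indirection table round-robin over 4 Rx queues. This
 * PMD requires reta_size to exactly match the hardware table size.
 */
static int
rss_reconfig(uint16_t port_id)
{
	static uint8_t key[48]; /* all-zero key, for illustration only */
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = key,
		.rss_key_len = sizeof(key),
		.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
	};
	struct rte_eth_rss_reta_entry64 reta[1];
	int i, rc;

	rc = rte_eth_dev_rss_hash_update(port_id, &rss_conf);
	if (rc)
		return rc;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < RTE_RETA_GROUP_SIZE; i++) {
		reta[0].mask |= 1ULL << i;
		reta[0].reta[i] = i % 4; /* 4 Rx queues assumed */
	}
	return rte_eth_dev_rss_reta_update(port_id, reta,
					   RTE_RETA_GROUP_SIZE);
}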
--
2.21.0
* [dpdk-dev] [PATCH v2 17/57] net/octeontx2: add Rx queue setup and release
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (15 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 16/57] net/octeontx2: add RSS support jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 18/57] net/octeontx2: add Tx " jerinj
` (40 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K, Thomas Monjalon
Cc: Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add Rx queue setup and release.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/Makefile | 2 +-
drivers/net/octeontx2/otx2_ethdev.c | 310 +++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 51 ++++
drivers/net/octeontx2/otx2_ethdev_ops.c | 2 +
mk/rte.app.mk | 2 +-
8 files changed, 368 insertions(+), 2 deletions(-)
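The Rx setup path added below insists on an octeontx2_npa backed
mempool and clamps nb_desc up to a supported CQ size. A minimal
caller-side sketch (pool name, sizes and descriptor count are
illustrative; error handling trimmed):

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Sketch: create an mbuf pool and attach it to Rx queue 'qid'.
 * rte_pktmbuf_pool_create() picks up the octeontx2_npa ops when
 * they are registered as the platform mempool ops.
 */
static int
rxq_setup_example(uint16_t port_id, uint16_t qid)
{
	struct rte_mempool *mp;

	mp = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE,
				     rte_socket_id());
	if (mp == NULL)
		return -ENOMEM;

	/* 1024 descriptors land exactly on the 1K CQ size (16 << 2n);
	 * odd counts are clamped up by the driver.
	 */
	return rte_eth_rx_queue_setup(port_id, qid, 1024,
				      rte_socket_id(), NULL, mp);
}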
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index f2d47d57b..d0a2204d2 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -10,6 +10,7 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Runtime Rx queue setup = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index a67353d2a..64125a73f 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -10,6 +10,7 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Runtime Rx queue setup = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 97d66ddde..acda5e680 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -9,6 +9,7 @@ Lock-free Tx queue = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Runtime Rx queue setup = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index f9f9ae6e6..f40561afb 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -39,6 +39,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_ethdev_devargs.c
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
-LDLIBS += -lrte_ethdev -lrte_bus_pci -lrte_kvargs
+LDLIBS += -lrte_ethdev -lrte_bus_pci -lrte_kvargs -lrte_mbuf -lrte_mempool -lm
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 5289c79e8..dbbc2263d 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2,9 +2,15 @@
* Copyright(C) 2019 Marvell International Ltd.
*/
+#include <inttypes.h>
+#include <math.h>
+
#include <rte_ethdev_pci.h>
#include <rte_io.h>
#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_pool_ops.h>
+#include <rte_mempool.h>
#include "otx2_ethdev.h"
@@ -114,6 +120,308 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+static inline void
+nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
+{
+ rxq->head = 0;
+ rxq->available = 0;
+}
+
+static inline uint32_t
+nix_qsize_to_val(enum nix_q_size_e qsize)
+{
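+ /* Queue sizes scale in powers of four: 16, 64, 256, ..., 1M entries */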
+ return (16UL << (qsize * 2));
+}
+
+static inline enum nix_q_size_e
+nix_qsize_clampup_get(struct otx2_eth_dev *dev, uint32_t val)
+{
+ int i;
+
+ if (otx2_ethdev_fixup_is_min_4k_q(dev))
+ i = nix_q_size_4K;
+ else
+ i = nix_q_size_16;
+
+ for (; i < nix_q_size_max; i++)
+ if (val <= nix_qsize_to_val(i))
+ break;
+
+ if (i >= nix_q_size_max)
+ i = nix_q_size_max - 1;
+
+ return i;
+}
+
+static int
+nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
+ uint16_t qid, struct otx2_eth_rxq *rxq, struct rte_mempool *mp)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ const struct rte_memzone *rz;
+ uint32_t ring_size, cq_size;
+ struct nix_aq_enq_req *aq;
+ uint16_t first_skip;
+ int rc;
+
+ cq_size = rxq->qlen;
+ ring_size = cq_size * NIX_CQ_ENTRY_SZ;
+ rz = rte_eth_dma_zone_reserve(eth_dev, "cq", qid, ring_size,
+ NIX_CQ_ALIGN, dev->node);
+ if (rz == NULL) {
+ otx2_err("Failed to allocate mem for cq hw ring");
+ rc = -ENOMEM;
+ goto fail;
+ }
+ memset(rz->addr, 0, rz->len);
+ rxq->desc = (uintptr_t)rz->addr;
+ rxq->qmask = cq_size - 1;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_INIT;
+
+ aq->cq.ena = 1;
+ aq->cq.caching = 1;
+ aq->cq.qsize = rxq->qsize;
+ aq->cq.base = rz->iova;
+ aq->cq.avg_level = 0xff;
+ aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
+ aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
+
+ /* Many to one reduction */
+ aq->cq.qint_idx = qid % dev->qints;
+
+ if (otx2_ethdev_fixup_is_limit_cq_full(dev)) {
+ uint16_t min_rx_drop;
+ const float rx_cq_skid = 1024 * 256;
+
+ min_rx_drop = ceil(rx_cq_skid / (float)cq_size);
+ aq->cq.drop = min_rx_drop;
+ aq->cq.drop_ena = 1;
+ }
+
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to init cq context");
+ goto fail;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_INIT;
+
+ aq->rq.sso_ena = 0;
+ aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
+ aq->rq.spb_ena = 0;
+ aq->rq.lpb_aura = npa_lf_aura_handle_to_aura(mp->pool_id);
+ first_skip = (sizeof(struct rte_mbuf));
+ first_skip += RTE_PKTMBUF_HEADROOM;
+ first_skip += rte_pktmbuf_priv_size(mp);
+ rxq->data_off = first_skip;
+
+ first_skip /= 8; /* Expressed in number of dwords */
+ aq->rq.first_skip = first_skip;
+ aq->rq.later_skip = (sizeof(struct rte_mbuf) / 8);
+ aq->rq.flow_tagw = 32; /* 32-bits */
+ aq->rq.lpb_sizem1 = rte_pktmbuf_data_room_size(mp);
+ aq->rq.lpb_sizem1 += rte_pktmbuf_priv_size(mp);
+ aq->rq.lpb_sizem1 += sizeof(struct rte_mbuf);
+ aq->rq.lpb_sizem1 /= 8;
+ aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */
+ aq->rq.ena = 1;
+ aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
+ aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
+ aq->rq.rq_int_ena = 0;
+ /* Many to one reduction */
+ aq->rq.qint_idx = qid % dev->qints;
+
+ if (otx2_ethdev_fixup_is_limit_cq_full(dev))
+ aq->rq.xqe_drop_ena = 1;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to init rq context");
+ goto fail;
+ }
+
+ return 0;
+fail:
+ return rc;
+}
+
+static int
+nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *aq;
+ int rc;
+
+ /* RQ is already disabled */
+ /* Disable CQ */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->cq.ena = 0;
+ aq->cq_mask.ena = ~(aq->cq_mask.ena);
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to disable cq context");
+ return rc;
+ }
+
+ return 0;
+}
+
+static inline int
+nix_get_data_off(struct otx2_eth_dev *dev)
+{
+ RTE_SET_USED(dev);
+
+ return 0;
+}
+
+uint64_t
+otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id)
+{
+ struct rte_mbuf mb_def;
+ uint64_t *tmp;
+
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) % 8 != 0);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) -
+ offsetof(struct rte_mbuf, data_off) != 2);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, nb_segs) -
+ offsetof(struct rte_mbuf, data_off) != 4);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) -
+ offsetof(struct rte_mbuf, data_off) != 6);
+ mb_def.nb_segs = 1;
+ mb_def.data_off = RTE_PKTMBUF_HEADROOM + nix_get_data_off(dev);
+ mb_def.port = port_id;
+ rte_mbuf_refcnt_set(&mb_def, 1);
+
+ /* Prevent compiler reordering: rearm_data covers previous fields */
+ rte_compiler_barrier();
+ tmp = (uint64_t *)&mb_def.rearm_data;
+
+ return *tmp;
+}
+
+static void
+otx2_nix_rx_queue_release(void *rx_queue)
+{
+ struct otx2_eth_rxq *rxq = rx_queue;
+
+ if (!rxq)
+ return;
+
+ otx2_nix_dbg("Releasing rxq %u", rxq->rq);
+ nix_cq_rq_uninit(rxq->eth_dev, rxq);
+ rte_free(rx_queue);
+}
+
+static int
+otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
+ uint16_t nb_desc, unsigned int socket,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_mempool_ops *ops;
+ struct otx2_eth_rxq *rxq;
+ const char *platform_ops;
+ enum nix_q_size_e qsize;
+ uint64_t offloads;
+ int rc;
+
+ rc = -EINVAL;
+
+ /* Compile time check to make sure all fast path elements in a CL */
+ RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_rxq, slow_path_start) >= 128);
+
+ /* Sanity checks */
+ if (rx_conf->rx_deferred_start == 1) {
+ otx2_err("Deferred Rx start is not supported");
+ goto fail;
+ }
+
+ platform_ops = rte_mbuf_platform_mempool_ops();
+ /* This driver needs octeontx2_npa mempool ops to work */
+ ops = rte_mempool_get_ops(mp->ops_index);
+ if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
+ otx2_err("mempool ops should be of octeontx2_npa type");
+ goto fail;
+ }
+
+ if (mp->pool_id == 0) {
+ otx2_err("Invalid pool_id");
+ goto fail;
+ }
+
+ /* Free memory prior to re-allocation if needed */
+ if (eth_dev->data->rx_queues[rq] != NULL) {
+ otx2_nix_dbg("Freeing memory prior to re-allocation %d", rq);
+ otx2_nix_rx_queue_release(eth_dev->data->rx_queues[rq]);
+ eth_dev->data->rx_queues[rq] = NULL;
+ }
+
+ offloads = rx_conf->offloads | eth_dev->data->dev_conf.rxmode.offloads;
+ dev->rx_offloads |= offloads;
+
+ /* Find the CQ queue size */
+ qsize = nix_qsize_clampup_get(dev, nb_desc);
+ /* Allocate rxq memory */
+ rxq = rte_zmalloc_socket("otx2 rxq", sizeof(*rxq), OTX2_ALIGN, socket);
+ if (rxq == NULL) {
+ otx2_err("Failed to allocate rq=%d", rq);
+ rc = -ENOMEM;
+ goto fail;
+ }
+
+ rxq->eth_dev = eth_dev;
+ rxq->rq = rq;
+ rxq->cq_door = dev->base + NIX_LF_CQ_OP_DOOR;
+ rxq->cq_status = (int64_t *)(dev->base + NIX_LF_CQ_OP_STATUS);
+ rxq->wdata = (uint64_t)rq << 32;
+ rxq->aura = npa_lf_aura_handle_to_aura(mp->pool_id);
+ rxq->mbuf_initializer = otx2_nix_rxq_mbuf_setup(dev,
+ eth_dev->data->port_id);
+ rxq->offloads = offloads;
+ rxq->pool = mp;
+ rxq->qlen = nix_qsize_to_val(qsize);
+ rxq->qsize = qsize;
+
+ /* Alloc completion queue */
+ rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
+ if (rc) {
+ otx2_err("Failed to allocate rxq=%u", rq);
+ goto free_rxq;
+ }
+
+ rxq->qconf.socket_id = socket;
+ rxq->qconf.nb_desc = nb_desc;
+ rxq->qconf.mempool = mp;
+ memcpy(&rxq->qconf.conf.rx, rx_conf, sizeof(struct rte_eth_rxconf));
+
+ nix_rx_queue_reset(rxq);
+ otx2_nix_dbg("rq=%d pool=%s qsize=%d nb_desc=%d->%d",
+ rq, mp->name, qsize, nb_desc, rxq->qlen);
+
+ eth_dev->data->rx_queues[rq] = rxq;
+ eth_dev->data->rx_queue_state[rq] = RTE_ETH_QUEUE_STATE_STOPPED;
+ return 0;
+
+free_rxq:
+ otx2_nix_rx_queue_release(rxq);
+fail:
+ return rc;
+}
+
static int
otx2_nix_configure(struct rte_eth_dev *eth_dev)
{
@@ -241,6 +549,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
.link_update = otx2_nix_link_update,
+ .rx_queue_setup = otx2_nix_rx_queue_setup,
+ .rx_queue_release = otx2_nix_rx_queue_release,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 19a4e45b0..a09393336 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -10,6 +10,9 @@
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_kvargs.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_string_fns.h>
#include "otx2_common.h"
#include "otx2_dev.h"
@@ -68,6 +71,7 @@
#define NIX_RX_MIN_DESC_ALIGN 16
#define NIX_RX_NB_SEG_MAX 6
#define NIX_CQ_ENTRY_SZ 128
+#define NIX_CQ_ALIGN 512
/* If PTP is enabled additional SEND MEM DESC is required which
* takes 2 words, hence max 7 iova address are possible
@@ -116,6 +120,19 @@
#define NIX_DEFAULT_RSS_CTX_GROUP 0
#define NIX_DEFAULT_RSS_MCAM_IDX -1
+enum nix_q_size_e {
+ nix_q_size_16, /* 16 entries */
+ nix_q_size_64, /* 64 entries */
+ nix_q_size_256,
+ nix_q_size_1K,
+ nix_q_size_4K,
+ nix_q_size_16K,
+ nix_q_size_64K,
+ nix_q_size_256K,
+ nix_q_size_1M, /* Million entries */
+ nix_q_size_max
+};
+
struct otx2_qint {
struct rte_eth_dev *eth_dev;
uint8_t qintx;
@@ -131,6 +148,16 @@ struct otx2_rss_info {
uint8_t key[NIX_HASH_KEY_SIZE];
};
+struct otx2_eth_qconf {
+ union {
+ struct rte_eth_txconf tx;
+ struct rte_eth_rxconf rx;
+ } conf;
+ void *mempool;
+ uint32_t socket_id;
+ uint16_t nb_desc;
+};
+
struct otx2_npc_flow_info {
uint16_t channel; /*rx channel */
uint16_t flow_prealloc_size;
@@ -177,6 +204,29 @@ struct otx2_eth_dev {
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
+struct otx2_eth_rxq {
+ uint64_t mbuf_initializer;
+ uint64_t data_off;
+ uintptr_t desc;
+ void *lookup_mem;
+ uintptr_t cq_door;
+ uint64_t wdata;
+ int64_t *cq_status;
+ uint32_t head;
+ uint32_t qmask;
+ uint32_t available;
+ uint16_t rq;
+ struct otx2_timesync_info *tstamp;
+ MARKER slow_path_start;
+ uint64_t aura;
+ uint64_t offloads;
+ uint32_t qlen;
+ struct rte_mempool *pool;
+ enum nix_q_size_e qsize;
+ struct rte_eth_dev *eth_dev;
+ struct otx2_eth_qconf qconf;
+} __rte_cache_aligned;
+
static inline struct otx2_eth_dev *
otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
{
@@ -192,6 +242,7 @@ void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
+uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
/* Link */
void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 301a597f8..71d36b44a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -143,4 +143,6 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+
+ devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP;
}
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index fab72ff6a..a852e5157 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -196,7 +196,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += -lrte_pmd_mvpp2
_LDLIBS-$(CONFIG_RTE_LIBRTE_MVNETA_PMD) += -lrte_pmd_mvneta
_LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += -lrte_pmd_nfp
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null
-_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += -lrte_pmd_octeontx2
+_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += -lrte_pmd_octeontx2 -lm
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += -lrte_pmd_pcap -lpcap
_LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += -lrte_pmd_qede
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING) += -lrte_pmd_ring
--
2.21.0
* [dpdk-dev] [PATCH v2 18/57] net/octeontx2: add Tx queue setup and release
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (16 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 17/57] net/octeontx2: add Rx queue setup and release jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 19/57] net/octeontx2: handle port reconfigure jerinj
` (39 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
From: Jerin Jacob <jerinj@marvell.com>
Add Tx queue setup and release.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 385 ++++++++++++++++++++-
drivers/net/octeontx2/otx2_ethdev.h | 25 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 3 +-
drivers/net/octeontx2/otx2_tx.h | 28 ++
8 files changed, 443 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_tx.h
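nb_desc is translated into SQB buffers inside the driver: with a
hypothetical 4 KB SQB and 16-word (W16) SQEs, sqes_per_sqb =
(4096 / 8) / 16 = 32, so 1024 descriptors need 32 SQBs before the
devargs clamp. A minimal caller-side sketch:

#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Sketch: set up Tx queue 'qid' with the default config. txconf
 * is left NULL; note this PMD rejects deferred start anyway.
 */
static int
txq_setup_example(uint16_t port_id, uint16_t qid)
{
	return rte_eth_tx_queue_setup(port_id, qid, 1024,
				      rte_socket_id(), NULL);
}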
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index d0a2204d2..c8f07fa1d 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -11,6 +11,7 @@ Multiprocess aware = Y
Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
+Runtime Tx queue setup = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 64125a73f..a98b7d523 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -11,6 +11,7 @@ Multiprocess aware = Y
Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
+Runtime Tx queue setup = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index acda5e680..9746357ce 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -10,6 +10,7 @@ Multiprocess aware = Y
Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
+Runtime Tx queue setup = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 3bee3f3ca..d7e8f3d56 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -19,6 +19,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Promiscuous mode
- SR-IOV VF
- Lock-free Tx queue
+- Multiple queues for Tx and Rx
- Receive Side Scaling (RSS)
- MAC filtering
- Port hardware statistics
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index dbbc2263d..62943cc31 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -422,6 +422,373 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
return rc;
}
+static inline uint8_t
+nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
+{
+ /*
+ * A maximum of three segments can be supported with W8;
+ * choose NIX_MAXSQESZ_W16 for multi-segment offload.
+ */
+ if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ return NIX_MAXSQESZ_W16;
+ else
+ return NIX_MAXSQESZ_W8;
+}
+
+static int
+nix_sq_init(struct otx2_eth_txq *txq)
+{
+ struct otx2_eth_dev *dev = txq->dev;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *sq;
+
+ if (txq->sqb_pool->pool_id == 0)
+ return -EINVAL;
+
+ sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ sq->qidx = txq->sq;
+ sq->ctype = NIX_AQ_CTYPE_SQ;
+ sq->op = NIX_AQ_INSTOP_INIT;
+ sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq);
+
+ sq->sq.default_chan = dev->tx_chan_base;
+ sq->sq.sqe_stype = NIX_STYPE_STF;
+ sq->sq.ena = 1;
+ if (sq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
+ sq->sq.sqe_stype = NIX_STYPE_STP;
+ sq->sq.sqb_aura =
+ npa_lf_aura_handle_to_aura(txq->sqb_pool->pool_id);
+ sq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR);
+ sq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL);
+ sq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR);
+ sq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR);
+
+ /* Many to one reduction */
+ sq->sq.qint_idx = txq->sq % dev->qints;
+
+ return otx2_mbox_process(mbox);
+}
+
+static int
+nix_sq_uninit(struct otx2_eth_txq *txq)
+{
+ struct otx2_eth_dev *dev = txq->dev;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct ndc_sync_op *ndc_req;
+ struct nix_aq_enq_rsp *rsp;
+ struct nix_aq_enq_req *aq;
+ uint16_t sqes_per_sqb;
+ void *sqb_buf;
+ int rc, count;
+
+ otx2_nix_dbg("Cleaning up sq %u", txq->sq);
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Check if sq is already cleaned up */
+ if (!rsp->sq.ena)
+ return 0;
+
+ /* Disable sq */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->sq_mask.ena = ~aq->sq_mask.ena;
+ aq->sq.ena = 0;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Read SQ and free sqb's */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->sq.smq_pend)
+ otx2_err("SQ has pending SQEs");
+
+ count = rsp->sq.sqb_count;
+ sqes_per_sqb = 1 << txq->sqes_per_sqb_log2;
+ /* Free SQB's that are used */
+ sqb_buf = (void *)rsp->sq.head_sqb;
+ while (count) {
+ void *next_sqb;
+
+ next_sqb = *(void **)((uintptr_t)sqb_buf + ((sqes_per_sqb - 1) *
+ nix_sq_max_sqe_sz(txq)));
+ npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
+ (uint64_t)sqb_buf);
+ sqb_buf = next_sqb;
+ count--;
+ }
+
+ /* Free next to use sqb */
+ if (rsp->sq.next_sqb)
+ npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
+ rsp->sq.next_sqb);
+
+ /* Sync NDC-NIX-TX for LF */
+ ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
+ ndc_req->nix_lf_tx_sync = 1;
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ otx2_err("Error on NDC-NIX-TX LF sync, rc %d", rc);
+
+ return rc;
+}
+
+static int
+nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
+{
+ struct otx2_eth_dev *dev = txq->dev;
+ uint16_t sqes_per_sqb, nb_sqb_bufs;
+ char name[RTE_MEMPOOL_NAMESIZE];
+ struct rte_mempool_objsz sz;
+ struct npa_aura_s *aura;
+ uint32_t blk_sz;
+
+ aura = (struct npa_aura_s *)((uintptr_t)txq->fc_mem + OTX2_ALIGN);
+ snprintf(name, sizeof(name), "otx2_sqb_pool_%d_%d", port, txq->sq);
+ blk_sz = dev->sqb_size;
+
+ if (nix_sq_max_sqe_sz(txq) == NIX_MAXSQESZ_W16)
+ sqes_per_sqb = (dev->sqb_size / 8) / 16;
+ else
+ sqes_per_sqb = (dev->sqb_size / 8) / 8;
+
+ nb_sqb_bufs = nb_desc / sqes_per_sqb;
+ /* Clamp up to devarg passed SQB count */
+ nb_sqb_bufs = RTE_MIN(dev->max_sqb_count, RTE_MAX(NIX_MIN_SQB,
+ nb_sqb_bufs + NIX_SQB_LIST_SPACE));
+
+ txq->sqb_pool = rte_mempool_create_empty(name, nb_sqb_bufs, blk_sz,
+ 0, 0, dev->node,
+ MEMPOOL_F_NO_SPREAD);
+ txq->nb_sqb_bufs = nb_sqb_bufs;
+ txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb);
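+ /* Reserve the per-SQB next-pointer slot, then expose only
+ * NIX_SQB_LOWER_THRESH percent of SQBs to Tx flow control
+ * as headroom.
+ */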
+ txq->nb_sqb_bufs_adj = nb_sqb_bufs -
+ RTE_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb;
+ txq->nb_sqb_bufs_adj =
+ (NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100;
+
+ if (txq->sqb_pool == NULL) {
+ otx2_err("Failed to allocate sqe mempool");
+ goto fail;
+ }
+
+ memset(aura, 0, sizeof(*aura));
+ aura->fc_ena = 1;
+ aura->fc_addr = txq->fc_iova;
+ aura->fc_hyst_bits = 0; /* Store count on all updates */
+ if (rte_mempool_set_ops_byname(txq->sqb_pool, "octeontx2_npa", aura)) {
+ otx2_err("Failed to set ops for sqe mempool");
+ goto fail;
+ }
+ if (rte_mempool_populate_default(txq->sqb_pool) < 0) {
+ otx2_err("Failed to populate sqe mempool");
+ goto fail;
+ }
+
+ rte_mempool_calc_obj_size(blk_sz, MEMPOOL_F_NO_SPREAD, &sz);
+ if (dev->sqb_size != sz.elt_size) {
+ otx2_err("sqe pool block size is not expected %d != %d",
+ dev->sqb_size, sz.elt_size);
+ goto fail;
+ }
+
+ return 0;
+fail:
+ return -ENOMEM;
+}
+
+void
+otx2_nix_form_default_desc(struct otx2_eth_txq *txq)
+{
+ struct nix_send_ext_s *send_hdr_ext;
+ struct nix_send_hdr_s *send_hdr;
+ struct nix_send_mem_s *send_mem;
+ union nix_send_sg_s *sg;
+
+ /* Initialize the fields based on basic single segment packet */
+ memset(&txq->cmd, 0, sizeof(txq->cmd));
+
+ if (txq->dev->tx_offload_flags & NIX_TX_NEED_EXT_HDR) {
+ send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
+ /* 2(HDR) + 2(EXT_HDR) + 1(SG) + 1(IOVA) = 6/2 - 1 = 2 */
+ send_hdr->w0.sizem1 = 2;
+
+ send_hdr_ext = (struct nix_send_ext_s *)&txq->cmd[2];
+ send_hdr_ext->w0.subdc = NIX_SUBDC_EXT;
+ if (txq->dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F) {
+ /* Default: one seg packet would have:
+ * 2(HDR) + 2(EXT) + 1(SG) + 1(IOVA) + 2(MEM)
+ * => 8/2 - 1 = 3
+ */
+ send_hdr->w0.sizem1 = 3;
+ send_hdr_ext->w0.tstmp = 1;
+
+ /* To calculate the offset for send_mem,
+ * send_hdr->w0.sizem1 * 2
+ */
+ send_mem = (struct nix_send_mem_s *)(txq->cmd +
+ (send_hdr->w0.sizem1 << 1));
+ send_mem->subdc = NIX_SUBDC_MEM;
+ send_mem->dsz = 0x0;
+ send_mem->wmem = 0x1;
+ send_mem->alg = NIX_SENDMEMALG_SETTSTMP;
+ }
+ sg = (union nix_send_sg_s *)&txq->cmd[4];
+ } else {
+ send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
+ /* 2(HDR) + 1(SG) + 1(IOVA) = 4/2 - 1 = 1 */
+ send_hdr->w0.sizem1 = 1;
+ sg = (union nix_send_sg_s *)&txq->cmd[2];
+ }
+
+ send_hdr->w0.sq = txq->sq;
+ sg->subdc = NIX_SUBDC_SG;
+ sg->segs = 1;
+ sg->ld_type = NIX_SENDLDTYPE_LDD;
+
+ rte_smp_wmb();
+}
+
+static void
+otx2_nix_tx_queue_release(void *_txq)
+{
+ struct otx2_eth_txq *txq = _txq;
+
+ if (!txq)
+ return;
+
+ otx2_nix_dbg("Releasing txq %u", txq->sq);
+
+ /* Free sqb's and disable sq */
+ nix_sq_uninit(txq);
+
+ if (txq->sqb_pool) {
+ rte_mempool_free(txq->sqb_pool);
+ txq->sqb_pool = NULL;
+ }
+ rte_free(txq);
+}
+
+static int
+otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ const struct rte_memzone *fc;
+ struct otx2_eth_txq *txq;
+ uint64_t offloads;
+ int rc;
+
+ rc = -EINVAL;
+
+ /* Compile time check to make sure all fast path elements in a CL */
+ RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_txq, slow_path_start) >= 128);
+
+ if (tx_conf->tx_deferred_start) {
+ otx2_err("Tx deferred start is not supported");
+ goto fail;
+ }
+
+ /* Free memory prior to re-allocation if needed. */
+ if (eth_dev->data->tx_queues[sq] != NULL) {
+ otx2_nix_dbg("Freeing memory prior to re-allocation %d", sq);
+ otx2_nix_tx_queue_release(eth_dev->data->tx_queues[sq]);
+ eth_dev->data->tx_queues[sq] = NULL;
+ }
+
+ /* Find the expected offloads for this queue */
+ offloads = tx_conf->offloads | eth_dev->data->dev_conf.txmode.offloads;
+
+ /* Allocating tx queue data structure */
+ txq = rte_zmalloc_socket("otx2_ethdev TX queue", sizeof(*txq),
+ OTX2_ALIGN, socket_id);
+ if (txq == NULL) {
+ otx2_err("Failed to alloc txq=%d", sq);
+ rc = -ENOMEM;
+ goto fail;
+ }
+ txq->sq = sq;
+ txq->dev = dev;
+ txq->sqb_pool = NULL;
+ txq->offloads = offloads;
+ dev->tx_offloads |= offloads;
+
+ /*
+ * Allocate memory for flow control updates from HW.
+ * Alloc one cache line so that it fits all FC_STYPE modes.
+ */
+ fc = rte_eth_dma_zone_reserve(eth_dev, "fcmem", sq,
+ OTX2_ALIGN + sizeof(struct npa_aura_s),
+ OTX2_ALIGN, dev->node);
+ if (fc == NULL) {
+ otx2_err("Failed to allocate mem for fcmem");
+ rc = -ENOMEM;
+ goto free_txq;
+ }
+ txq->fc_iova = fc->iova;
+ txq->fc_mem = fc->addr;
+
+ /* Initialize the aura sqb pool */
+ rc = nix_alloc_sqb_pool(eth_dev->data->port_id, txq, nb_desc);
+ if (rc) {
+ otx2_err("Failed to alloc sqe pool rc=%d", rc);
+ goto free_txq;
+ }
+
+ /* Initialize the SQ */
+ rc = nix_sq_init(txq);
+ if (rc) {
+ otx2_err("Failed to init sq=%d context", sq);
+ goto free_txq;
+ }
+
+ txq->fc_cache_pkts = 0;
+ txq->io_addr = dev->base + NIX_LF_OP_SENDX(0);
+ /* Evenly distribute LMT slot for each sq */
+ txq->lmt_addr = (void *)(dev->lmt_addr + ((sq & LMT_SLOT_MASK) << 12));
+
+ txq->qconf.socket_id = socket_id;
+ txq->qconf.nb_desc = nb_desc;
+ memcpy(&txq->qconf.conf.tx, tx_conf, sizeof(struct rte_eth_txconf));
+
+ otx2_nix_form_default_desc(txq);
+
+ otx2_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " sqb=0x%" PRIx64 ""
+ " lmt_addr=%p nb_sqb_bufs=%d sqes_per_sqb_log2=%d", sq,
+ fc->addr, offloads, txq->sqb_pool->pool_id, txq->lmt_addr,
+ txq->nb_sqb_bufs, txq->sqes_per_sqb_log2);
+ eth_dev->data->tx_queues[sq] = txq;
+ eth_dev->data->tx_queue_state[sq] = RTE_ETH_QUEUE_STATE_STOPPED;
+ return 0;
+
+free_txq:
+ otx2_nix_tx_queue_release(txq);
+fail:
+ return rc;
+}
+
static int
otx2_nix_configure(struct rte_eth_dev *eth_dev)
{
@@ -549,6 +916,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
.link_update = otx2_nix_link_update,
+ .tx_queue_setup = otx2_nix_tx_queue_setup,
+ .tx_queue_release = otx2_nix_tx_queue_release,
.rx_queue_setup = otx2_nix_rx_queue_setup,
.rx_queue_release = otx2_nix_rx_queue_release,
.stats_get = otx2_nix_dev_stats_get,
@@ -763,12 +1132,26 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
struct rte_pci_device *pci_dev;
- int rc;
+ int rc, i;
/* Nothing to be done for secondary processes */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Free up SQs */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
+ eth_dev->data->tx_queues[i] = NULL;
+ }
+ eth_dev->data->nb_tx_queues = 0;
+
+ /* Free up RQ's and CQ's */
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ otx2_nix_rx_queue_release(eth_dev->data->rx_queues[i]);
+ eth_dev->data->rx_queues[i] = NULL;
+ }
+ eth_dev->data->nb_rx_queues = 0;
+
/* Unregister queue irqs */
oxt2_nix_unregister_queue_irqs(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index a09393336..0ce67f634 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -19,6 +19,7 @@
#include "otx2_irq.h"
#include "otx2_mempool.h"
#include "otx2_rx.h"
+#include "otx2_tx.h"
#define OTX2_ETH_DEV_PMD_VERSION "1.0"
@@ -62,6 +63,7 @@
#define NIX_MAX_SQB 512
#define NIX_MIN_SQB 32
+#define NIX_SQB_LIST_SPACE 2
#define NIX_RSS_RETA_SIZE_MAX 256
/* Group 0 will be used for RSS, 1-7 will be used for rte_flow RSS action */
#define NIX_RSS_GRPS 8
@@ -72,6 +74,8 @@
#define NIX_RX_NB_SEG_MAX 6
#define NIX_CQ_ENTRY_SZ 128
#define NIX_CQ_ALIGN 512
+#define NIX_SQB_LOWER_THRESH 90
+#define LMT_SLOT_MASK 0x7f
/* If PTP is enabled additional SEND MEM DESC is required which
* takes 2 words, hence max 7 iova address are possible
@@ -204,6 +208,24 @@ struct otx2_eth_dev {
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
+struct otx2_eth_txq {
+ uint64_t cmd[8];
+ int64_t fc_cache_pkts;
+ uint64_t *fc_mem;
+ void *lmt_addr;
+ rte_iova_t io_addr;
+ rte_iova_t fc_iova;
+ uint16_t sqes_per_sqb_log2;
+ int16_t nb_sqb_bufs_adj;
+ MARKER slow_path_start;
+ uint16_t nb_sqb_bufs;
+ uint16_t sq;
+ uint64_t offloads;
+ struct otx2_eth_dev *dev;
+ struct rte_mempool *sqb_pool;
+ struct otx2_eth_qconf qconf;
+} __rte_cache_aligned;
+
struct otx2_eth_rxq {
uint64_t mbuf_initializer;
uint64_t data_off;
@@ -329,4 +351,7 @@ int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
struct otx2_eth_dev *dev);
+/* Rx and Tx routines */
+void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 71d36b44a..1c935b627 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -144,5 +144,6 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
- devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP;
+ devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
+ RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
new file mode 100644
index 000000000..4d0993f87
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_TX_H__
+#define __OTX2_TX_H__
+
+#define NIX_TX_OFFLOAD_NONE (0)
+#define NIX_TX_OFFLOAD_L3_L4_CSUM_F BIT(0)
+#define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1)
+#define NIX_TX_OFFLOAD_VLAN_QINQ_F BIT(2)
+#define NIX_TX_OFFLOAD_MBUF_NOFF_F BIT(3)
+#define NIX_TX_OFFLOAD_TSTAMP_F BIT(4)
+
+/* Flags to control xmit_prepare function.
+ * Defining it from backwards to denote its been
+ * not used as offload flags to pick function
+ */
+#define NIX_TX_MULTI_SEG_F BIT(15)
+
+#define NIX_TX_NEED_SEND_HDR_W1 \
+ (NIX_TX_OFFLOAD_L3_L4_CSUM_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F | \
+ NIX_TX_OFFLOAD_VLAN_QINQ_F)
+
+#define NIX_TX_NEED_EXT_HDR \
+ (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F)
+
+#endif /* __OTX2_TX_H__ */
--
2.21.0
* [dpdk-dev] [PATCH v2 19/57] net/octeontx2: handle port reconfigure
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (17 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 18/57] net/octeontx2: add Tx " jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 20/57] net/octeontx2: add queue start and stop operations jerinj
` (38 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Set up Tx & Rx queues with the previous configuration during
port reconfigure. This handles cases where the port is
reconfigured without reconfiguring the Tx & Rx queues.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 180 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 2 +
2 files changed, 182 insertions(+)
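The control flow this patch makes safe, in sketch form: a second
configure on a live port without a fresh queue setup round (queue
counts are illustrative; rte_eth_dev_stop() returns void in this
DPDK version):

#include <rte_ethdev.h>

/* Sketch: reconfigure a started port without re-invoking
 * rte_eth_rx/tx_queue_setup(); the driver stores the queue configs
 * at configure time and replays them internally.
 */
static int
reconfigure_port(uint16_t port_id, const struct rte_eth_conf *conf)
{
	int rc;

	rte_eth_dev_stop(port_id);
	rc = rte_eth_dev_configure(port_id, 4, 4, conf);
	if (rc)
		return rc;
	/* No queue setup calls here, on purpose */
	return rte_eth_dev_start(port_id);
}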
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 62943cc31..bc6e8fb8a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -788,6 +788,172 @@ otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq,
return rc;
}
+static int
+nix_store_queue_cfg_and_then_release(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_eth_qconf *tx_qconf = NULL;
+ struct otx2_eth_qconf *rx_qconf = NULL;
+ struct otx2_eth_txq **txq;
+ struct otx2_eth_rxq **rxq;
+ int i, nb_rxq, nb_txq;
+
+ nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
+ nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
+
+ tx_qconf = malloc(nb_txq * sizeof(*tx_qconf));
+ if (tx_qconf == NULL) {
+ otx2_err("Failed to allocate memory for tx_qconf");
+ goto fail;
+ }
+
+ rx_qconf = malloc(nb_rxq * sizeof(*rx_qconf));
+ if (rx_qconf == NULL) {
+ otx2_err("Failed to allocate memory for rx_qconf");
+ goto fail;
+ }
+
+ txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
+ for (i = 0; i < nb_txq; i++) {
+ if (txq[i] == NULL) {
+ otx2_err("txq[%d] is already released", i);
+ goto fail;
+ }
+ memcpy(&tx_qconf[i], &txq[i]->qconf, sizeof(*tx_qconf));
+ otx2_nix_tx_queue_release(txq[i]);
+ eth_dev->data->tx_queues[i] = NULL;
+ }
+
+ rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues;
+ for (i = 0; i < nb_rxq; i++) {
+ if (rxq[i] == NULL) {
+ otx2_err("rxq[%d] is already released", i);
+ goto fail;
+ }
+ memcpy(&rx_qconf[i], &rxq[i]->qconf, sizeof(*rx_qconf));
+ otx2_nix_rx_queue_release(rxq[i]);
+ eth_dev->data->rx_queues[i] = NULL;
+ }
+
+ dev->tx_qconf = tx_qconf;
+ dev->rx_qconf = rx_qconf;
+ return 0;
+
+fail:
+ if (tx_qconf)
+ free(tx_qconf);
+ if (rx_qconf)
+ free(rx_qconf);
+
+ return -ENOMEM;
+}
+
+static int
+nix_restore_queue_cfg(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_eth_qconf *tx_qconf = dev->tx_qconf;
+ struct otx2_eth_qconf *rx_qconf = dev->rx_qconf;
+ struct otx2_eth_txq **txq;
+ struct otx2_eth_rxq **rxq;
+ int rc, i, nb_rxq, nb_txq;
+
+ nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
+ nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
+
+ rc = -ENOMEM;
+ /* Set up Tx & Rx queues with the previous configuration so
+ * that the queues remain functional in cases where ports are
+ * started without reconfiguring the queues.
+ *
+ * The usual reconfigure sequence is as below:
+ * port_configure() {
+ * if(reconfigure) {
+ * queue_release()
+ * queue_setup()
+ * }
+ * queue_configure() {
+ * queue_release()
+ * queue_setup()
+ * }
+ * }
+ * port_start()
+ *
+ * In some applications' control paths, queue_configure() would
+ * NOT be invoked for TXQs/RXQs in port_configure().
+ * In such cases, the queues remain functional after start since
+ * they were already set up in the earlier port_configure().
+ */
+ for (i = 0; i < nb_txq; i++) {
+ rc = otx2_nix_tx_queue_setup(eth_dev, i, tx_qconf[i].nb_desc,
+ tx_qconf[i].socket_id,
+ &tx_qconf[i].conf.tx);
+ if (rc) {
+ otx2_err("Failed to setup tx queue rc=%d", rc);
+ txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
+ for (i -= 1; i >= 0; i--)
+ otx2_nix_tx_queue_release(txq[i]);
+ goto fail;
+ }
+ }
+
+ free(tx_qconf); tx_qconf = NULL;
+
+ for (i = 0; i < nb_rxq; i++) {
+ rc = otx2_nix_rx_queue_setup(eth_dev, i, rx_qconf[i].nb_desc,
+ rx_qconf[i].socket_id,
+ &rx_qconf[i].conf.rx,
+ rx_qconf[i].mempool);
+ if (rc) {
+ otx2_err("Failed to setup rx queue rc=%d", rc);
+ rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues;
+ for (i -= 1; i >= 0; i--)
+ otx2_nix_rx_queue_release(rxq[i]);
+ goto release_tx_queues;
+ }
+ }
+
+ free(rx_qconf); rx_qconf = NULL;
+
+ return 0;
+
+release_tx_queues:
+ txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+ otx2_nix_tx_queue_release(txq[i]);
+fail:
+ if (tx_qconf)
+ free(tx_qconf);
+ if (rx_qconf)
+ free(rx_qconf);
+
+ return rc;
+}
+
+static uint16_t
+nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
+{
+ RTE_SET_USED(queue);
+ RTE_SET_USED(mbufs);
+ RTE_SET_USED(pkts);
+
+ return 0;
+}
+
+static void
+nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
+{
+ /* These dummy functions are required for supporting
+ * some applications which reconfigure queues without
+ * stopping the Tx and Rx burst threads (e.g. the KNI app).
+ * When the queue context is saved, the txq/rxq structures are
+ * released, which would crash the application since Rx/Tx
+ * burst may still be running on other lcores.
+ */
+ eth_dev->tx_pkt_burst = nix_eth_nop_burst;
+ eth_dev->rx_pkt_burst = nix_eth_nop_burst;
+ rte_mb();
+}
static int
otx2_nix_configure(struct rte_eth_dev *eth_dev)
@@ -844,6 +1010,10 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
/* Free the resources allocated from the previous configure */
if (dev->configured == 1) {
oxt2_nix_unregister_queue_irqs(eth_dev);
+ nix_set_nop_rxtx_function(eth_dev);
+ rc = nix_store_queue_cfg_and_then_release(eth_dev);
+ if (rc)
+ goto fail;
nix_lf_free(dev);
}
@@ -884,6 +1054,16 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /*
+ * Restore the queue config when a reconfigure follows an
+ * earlier configure and the application has not invoked
+ * queue setup in between.
+ */
+ if (dev->configured == 1) {
+ rc = nix_restore_queue_cfg(eth_dev);
+ if (rc)
+ goto free_nix_lf;
+ }
+
/* Update the mac address */
ea = eth_dev->data->mac_addrs;
memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 0ce67f634..ffc350e0d 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -205,6 +205,8 @@ struct otx2_eth_dev {
uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
struct otx2_npc_flow_info npc_flow;
+ struct otx2_eth_qconf *tx_qconf;
+ struct otx2_eth_qconf *rx_qconf;
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
--
2.21.0
* [dpdk-dev] [PATCH v2 20/57] net/octeontx2: add queue start and stop operations
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (18 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 19/57] net/octeontx2: handle port reconfigure jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 21/57] net/octeontx2: introduce traffic manager jerinj
` (37 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add queue start and stop operations. The Tx queue also needs
to update the flow control value, which will be added in a
subsequent patch.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 92 ++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 2 +
5 files changed, 97 insertions(+)
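From the application side the new ops map onto the standard runtime
queue controls. A minimal sketch (queue id illustrative):

#include <rte_ethdev.h>

/* Sketch: pause and resume one Rx queue at runtime. While the
 * hardware RQ is disabled, packets for it are silently dropped.
 */
static int
pause_resume_rxq(uint16_t port_id, uint16_t qid)
{
	int rc;

	rc = rte_eth_dev_rx_queue_stop(port_id, qid);
	if (rc)
		return rc;
	/* ... rework the queue here ... */
	return rte_eth_dev_rx_queue_start(port_id, qid);
}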
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index c8f07fa1d..ca40358da 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index a98b7d523..b720c116f 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 9746357ce..5a287493f 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -11,6 +11,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Queue start/stop = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index bc6e8fb8a..c8271b1ab 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -252,6 +252,26 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
return rc;
}
+static int
+nix_rq_enb_dis(struct rte_eth_dev *eth_dev,
+ struct otx2_eth_rxq *rxq, const bool enb)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *aq;
+
+ /* Pkts will be dropped silently if RQ is disabled */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->rq.ena = enb;
+ aq->rq_mask.ena = ~(aq->rq_mask.ena);
+
+ return otx2_mbox_process(mbox);
+}
+
static int
nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
{
@@ -1091,6 +1111,74 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
return rc;
}
+int
+otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct rte_eth_dev_data *data = eth_dev->data;
+
+ if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+ return 0;
+
+ data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+ return 0;
+}
+
+int
+otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct rte_eth_dev_data *data = eth_dev->data;
+
+ if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+ return 0;
+
+ data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+ return 0;
+}
+
+static int
+otx2_nix_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
+ struct rte_eth_dev_data *data = eth_dev->data;
+ int rc;
+
+ if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+ return 0;
+
+ rc = nix_rq_enb_dis(rxq->eth_dev, rxq, true);
+ if (rc) {
+ otx2_err("Failed to enable rxq=%u, rc=%d", qidx, rc);
+ goto done;
+ }
+
+ data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+
+done:
+ return rc;
+}
+
+static int
+otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
+ struct rte_eth_dev_data *data = eth_dev->data;
+ int rc;
+
+ if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+ return 0;
+
+ rc = nix_rq_enb_dis(rxq->eth_dev, rxq, false);
+ if (rc) {
+ otx2_err("Failed to disable rxq=%u, rc=%d", qidx, rc);
+ goto done;
+ }
+
+ data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+done:
+ return rc;
+}
+
/* Initialize and register driver with DPDK Application */
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
@@ -1100,6 +1188,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.tx_queue_release = otx2_nix_tx_queue_release,
.rx_queue_setup = otx2_nix_rx_queue_setup,
.rx_queue_release = otx2_nix_rx_queue_release,
+ .tx_queue_start = otx2_nix_tx_queue_start,
+ .tx_queue_stop = otx2_nix_tx_queue_stop,
+ .rx_queue_start = otx2_nix_rx_queue_start,
+ .rx_queue_stop = otx2_nix_rx_queue_stop,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index ffc350e0d..4e06b7111 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -266,6 +266,8 @@ void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
+int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
+int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
/* Link */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
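For context, the queue start/stop ops registered in the patch above are reached through the generic ethdev control API. A minimal application-side sketch (port and queue ids are placeholders; assumes the port was configured and started with this PMD):

#include <rte_ethdev.h>

/* Hypothetical runtime toggle of one queue pair. The Rx calls land in
 * otx2_nix_rx_queue_stop()/start(), which rewrite the RQ context enable
 * bit over mbox; at this point in the series the Tx calls only flip the
 * software queue state. */
static int
toggle_queues(uint16_t port, uint16_t qid)
{
	int rc;

	rc = rte_eth_dev_rx_queue_stop(port, qid);
	if (rc)
		return rc;
	rc = rte_eth_dev_tx_queue_stop(port, qid);
	if (rc)
		return rc;

	rc = rte_eth_dev_tx_queue_start(port, qid);
	if (rc)
		return rc;
	return rte_eth_dev_rx_queue_start(port, qid);
}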
* [dpdk-dev] [PATCH v2 21/57] net/octeontx2: introduce traffic manager
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (19 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 20/57] net/octeontx2: add queue start and stop operations jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 22/57] net/octeontx2: alloc and free TM HW resources jerinj
` (36 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Introduce traffic manager infrastructure and default
hierarchy creation.
Upon ethdev configure, a default hierarchy is
created with one-to-one mapped TM nodes. This topology
is overridden when the user explicitly creates and commits
a new hierarchy through the rte_tm interface.
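The driver-side rte_tm ops arrive later in the series; for context, the application flow that replaces the default tree looks roughly like the hedged sketch below (node ids, weights and levels are illustrative only):

#include <string.h>
#include <rte_tm.h>

/* Hypothetical two-level user hierarchy: one root plus one leaf per
 * Tx queue. Leaf node ids equal the Tx queue ids per rte_tm convention. */
static int
commit_user_tree(uint16_t port, uint16_t nb_txq)
{
	struct rte_tm_node_params np;
	struct rte_tm_error err;
	uint32_t root = 1000, q;
	int rc;

	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
	np.nonleaf.n_sp_priorities = 1;

	rc = rte_tm_node_add(port, root, RTE_TM_NODE_ID_NULL, 0, 1,
			     0 /* level */, &np, &err);
	if (rc)
		return rc;

	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
	for (q = 0; q < nb_txq; q++) {
		rc = rte_tm_node_add(port, q, root, 0, 1,
				     1 /* level */, &np, &err);
		if (rc)
			return rc;
	}

	/* Replaces the default tree created at configure time */
	return rte_tm_hierarchy_commit(port, 1 /* clear_on_fail */, &err);
}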
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 16 ++
drivers/net/octeontx2/otx2_ethdev.h | 14 ++
drivers/net/octeontx2/otx2_tm.c | 252 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_tm.h | 67 ++++++++
6 files changed, 351 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_tm.c
create mode 100644 drivers/net/octeontx2/otx2_tm.h
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index f40561afb..164621087 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -28,6 +28,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_tm.c \
otx2_rss.c \
otx2_mac.c \
otx2_link.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 8681a2642..e344d877f 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files(
+ 'otx2_tm.c',
'otx2_rss.c',
'otx2_mac.c',
'otx2_link.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index c8271b1ab..899865749 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1034,6 +1034,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
rc = nix_store_queue_cfg_and_then_release(eth_dev);
if (rc)
goto fail;
+ otx2_nix_tm_fini(eth_dev);
nix_lf_free(dev);
}
@@ -1067,6 +1068,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /* Init the default TM scheduler hierarchy */
+ rc = otx2_nix_tm_init_default(eth_dev);
+ if (rc) {
+ otx2_err("Failed to init traffic manager rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/* Register queue IRQs */
rc = oxt2_nix_register_queue_irqs(eth_dev);
if (rc) {
@@ -1369,6 +1377,9 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
/* Also sync same MAC address to CGX table */
otx2_cgx_mac_addr_set(eth_dev, ð_dev->data->mac_addrs[0]);
+ /* Initialize the tm data structures */
+ otx2_nix_tm_conf_init(eth_dev);
+
dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
dev->rx_offload_capa = nix_get_rx_offload_capa(dev);
@@ -1424,6 +1435,11 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
}
eth_dev->data->nb_rx_queues = 0;
+ /* Free tm resources */
+ rc = otx2_nix_tm_fini(eth_dev);
+ if (rc)
+ otx2_err("Failed to cleanup tm, rc=%d", rc);
+
/* Unregister queue irqs */
oxt2_nix_unregister_queue_irqs(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 4e06b7111..9f73bf89b 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -19,6 +19,7 @@
#include "otx2_irq.h"
#include "otx2_mempool.h"
#include "otx2_rx.h"
+#include "otx2_tm.h"
#include "otx2_tx.h"
#define OTX2_ETH_DEV_PMD_VERSION "1.0"
@@ -201,6 +202,19 @@ struct otx2_eth_dev {
uint64_t rx_offload_capa;
uint64_t tx_offload_capa;
struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
+ uint16_t txschq[NIX_TXSCH_LVL_CNT];
+ uint16_t txschq_contig[NIX_TXSCH_LVL_CNT];
+ uint16_t txschq_index[NIX_TXSCH_LVL_CNT];
+ uint16_t txschq_contig_index[NIX_TXSCH_LVL_CNT];
+ /* Dis-contiguous queues */
+ uint16_t txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+ /* Contiguous queues */
+ uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+ uint16_t otx2_tm_root_lvl;
+ uint16_t tm_flags;
+ uint16_t tm_leaf_cnt;
+ struct otx2_nix_tm_node_list node_list;
+ struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
struct otx2_rss_info rss_info;
uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
new file mode 100644
index 000000000..bc0474242
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -0,0 +1,252 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_malloc.h>
+
+#include "otx2_ethdev.h"
+#include "otx2_tm.h"
+
+/* Use last LVL_CNT nodes as default nodes */
+#define NIX_DEFAULT_NODE_ID_START (RTE_TM_NODE_ID_NULL - NIX_TXSCH_LVL_CNT)
+
+enum otx2_tm_node_level {
+ OTX2_TM_LVL_ROOT = 0,
+ OTX2_TM_LVL_SCH1,
+ OTX2_TM_LVL_SCH2,
+ OTX2_TM_LVL_SCH3,
+ OTX2_TM_LVL_SCH4,
+ OTX2_TM_LVL_QUEUE,
+ OTX2_TM_LVL_MAX,
+};
+
+static bool
+nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
+{
+ bool is_lbk = otx2_dev_is_lbk(dev);
+ return otx2_dev_is_pf(dev) && !otx2_dev_is_A0(dev) &&
+ !is_lbk && !dev->maxvf;
+}
+
+static struct otx2_nix_tm_shaper_profile *
+nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
+{
+ struct otx2_nix_tm_shaper_profile *tm_shaper_profile;
+
+ TAILQ_FOREACH(tm_shaper_profile, &dev->shaper_profile_list, shaper) {
+ if (tm_shaper_profile->shaper_profile_id == shaper_id)
+ return tm_shaper_profile;
+ }
+ return NULL;
+}
+
+static struct otx2_nix_tm_node *
+nix_tm_node_search(struct otx2_eth_dev *dev,
+ uint32_t node_id, bool user)
+{
+ struct otx2_nix_tm_node *tm_node;
+
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (tm_node->id == node_id &&
+ (user == !!(tm_node->flags & NIX_TM_NODE_USER)))
+ return tm_node;
+ }
+ return NULL;
+}
+
+static int
+nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint16_t hw_lvl_id,
+ uint16_t level_id, bool user,
+ struct rte_tm_node_params *params)
+{
+ struct otx2_nix_tm_shaper_profile *shaper_profile;
+ struct otx2_nix_tm_node *tm_node, *parent_node;
+ uint32_t shaper_profile_id;
+
+ shaper_profile_id = params->shaper_profile_id;
+ shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
+
+ parent_node = nix_tm_node_search(dev, parent_node_id, user);
+
+ tm_node = rte_zmalloc("otx2_nix_tm_node",
+ sizeof(struct otx2_nix_tm_node), 0);
+ if (!tm_node)
+ return -ENOMEM;
+
+ tm_node->level_id = level_id;
+ tm_node->hw_lvl_id = hw_lvl_id;
+
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->rr_prio = 0xf;
+ tm_node->max_prio = UINT32_MAX;
+ tm_node->hw_id = UINT32_MAX;
+ tm_node->flags = 0;
+ if (user)
+ tm_node->flags = NIX_TM_NODE_USER;
+ rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
+
+ if (shaper_profile)
+ shaper_profile->reference_count++;
+ tm_node->parent = parent_node;
+ tm_node->parent_hw_id = UINT32_MAX;
+
+ TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node);
+
+ return 0;
+}
+
+static int
+nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
+{
+ struct otx2_nix_tm_shaper_profile *shaper_profile;
+
+ while ((shaper_profile = TAILQ_FIRST(&dev->shaper_profile_list))) {
+ if (shaper_profile->reference_count)
+ otx2_tm_dbg("Shaper profile %u has non zero references",
+ shaper_profile->shaper_profile_id);
+ TAILQ_REMOVE(&dev->shaper_profile_list, shaper_profile, shaper);
+ rte_free(shaper_profile);
+ }
+
+ return 0;
+}
+
+static int
+nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint32_t def = eth_dev->data->nb_tx_queues;
+ struct rte_tm_node_params params;
+ uint32_t leaf_parent, i;
+ int rc = 0;
+
+ /* Default params */
+ memset(¶ms, 0, sizeof(params));
+ params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
+
+ if (nix_tm_have_tl1_access(dev)) {
+ dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
+ rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL1,
+ OTX2_TM_LVL_ROOT, false, ¶ms);
+ if (rc)
+ goto exit;
+ rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL2,
+ OTX2_TM_LVL_SCH1, false, ¶ms);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL3,
+ OTX2_TM_LVL_SCH2, false, ¶ms);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL4,
+ OTX2_TM_LVL_SCH3, false, ¶ms);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 4, def + 3, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_SMQ,
+ OTX2_TM_LVL_SCH4, false, ¶ms);
+ if (rc)
+ goto exit;
+
+ leaf_parent = def + 4;
+ } else {
+ dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
+ rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL2,
+ OTX2_TM_LVL_ROOT, false, ¶ms);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL3,
+ OTX2_TM_LVL_SCH1, false, ¶ms);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL4,
+ OTX2_TM_LVL_SCH2, false, ¶ms);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_SMQ,
+ OTX2_TM_LVL_SCH3, false, ¶ms);
+ if (rc)
+ goto exit;
+
+ leaf_parent = def + 3;
+ }
+
+ /* Add leaf nodes */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_CNT,
+ OTX2_TM_LVL_QUEUE, false, ¶ms);
+ if (rc)
+ break;
+ }
+
+exit:
+ return rc;
+}
+
+void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ TAILQ_INIT(&dev->node_list);
+ TAILQ_INIT(&dev->shaper_profile_list);
+}
+
+int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
+ int rc;
+
+ /* Clear shaper profiles */
+ nix_tm_clear_shaper_profiles(dev);
+ dev->tm_flags = NIX_TM_DEFAULT_TREE;
+
+ rc = nix_tm_prepare_default_tree(eth_dev);
+ if (rc != 0)
+ return rc;
+
+ dev->tm_leaf_cnt = sq_cnt;
+
+ return 0;
+}
+
+int
+otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* Clear shaper profiles */
+ nix_tm_clear_shaper_profiles(dev);
+
+ dev->tm_flags = 0;
+ return 0;
+}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
new file mode 100644
index 000000000..94023fa99
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_TM_H__
+#define __OTX2_TM_H__
+
+#include <stdbool.h>
+
+#include <rte_tm_driver.h>
+
+#define NIX_TM_DEFAULT_TREE BIT_ULL(0)
+
+struct otx2_eth_dev;
+
+void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
+
+struct otx2_nix_tm_node {
+ TAILQ_ENTRY(otx2_nix_tm_node) node;
+ uint32_t id;
+ uint32_t hw_id;
+ uint32_t priority;
+ uint32_t weight;
+ uint16_t level_id;
+ uint16_t hw_lvl_id;
+ uint32_t rr_prio;
+ uint32_t rr_num;
+ uint32_t max_prio;
+ uint32_t parent_hw_id;
+ uint32_t flags;
+#define NIX_TM_NODE_HWRES BIT_ULL(0)
+#define NIX_TM_NODE_ENABLED BIT_ULL(1)
+#define NIX_TM_NODE_USER BIT_ULL(2)
+ struct otx2_nix_tm_node *parent;
+ struct rte_tm_node_params params;
+};
+
+struct otx2_nix_tm_shaper_profile {
+ TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+struct shaper_params {
+ uint64_t burst_exponent;
+ uint64_t burst_mantissa;
+ uint64_t div_exp;
+ uint64_t exponent;
+ uint64_t mantissa;
+ uint64_t burst;
+ uint64_t rate;
+};
+
+TAILQ_HEAD(otx2_nix_tm_node_list, otx2_nix_tm_node);
+TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
+
+#define MAX_SCHED_WEIGHT ((uint8_t)~0)
+#define NIX_TM_RR_QUANTUM_MAX ((1 << 24) - 1)
+
+/* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT */
+/* = NIX_MAX_HW_MTU */
+#define DEFAULT_RR_WEIGHT 71
+
+#endif /* __OTX2_TM_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 22/57] net/octeontx2: alloc and free TM HW resources
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (20 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 21/57] net/octeontx2: introduce traffic manager jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 23/57] net/octeontx2: configure " jerinj
` (35 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas
From: Krzysztof Kanas <kkanas@marvell.com>
Allocate and free shaper/scheduler hardware resources for
the nodes at each level of the software hierarchy.
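For intuition on the sizing of the txsch alloc request, a standalone sketch (not driver code) of the per-parent counting rule used by nix_tm_count_req_schq() below:

#include <limits.h>

/* Children sharing the parent's rr_prio form one round-robin group and
 * are served from the discontiguous queue pool; every other static
 * priority 0..max_prio needs its own contiguous queue. */
struct demo_parent {
	unsigned int rr_num;   /* children in the RR group */
	unsigned int max_prio; /* highest static priority, UINT_MAX if none */
};

static void
count_child_schq(const struct demo_parent *p,
		 unsigned int *schq, unsigned int *schq_contig)
{
	*schq += p->rr_num;
	if (p->max_prio != UINT_MAX)
		*schq_contig += p->max_prio + 1;
}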
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/octeontx2/otx2_tm.c | 350 ++++++++++++++++++++++++++++++++
1 file changed, 350 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index bc0474242..91f31df05 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -54,6 +54,69 @@ nix_tm_node_search(struct otx2_eth_dev *dev,
return NULL;
}
+static uint32_t
+check_rr(struct otx2_eth_dev *dev, uint32_t priority, uint32_t parent_id)
+{
+ struct otx2_nix_tm_node *tm_node;
+ uint32_t rr_num = 0;
+
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (!tm_node->parent)
+ continue;
+
+ if (!(tm_node->parent->id == parent_id))
+ continue;
+
+ if (tm_node->priority == priority)
+ rr_num++;
+ }
+ return rr_num;
+}
+
+static int
+nix_tm_update_parent_info(struct otx2_eth_dev *dev)
+{
+ struct otx2_nix_tm_node *tm_node_child;
+ struct otx2_nix_tm_node *tm_node;
+ struct otx2_nix_tm_node *parent;
+ uint32_t rr_num = 0;
+ uint32_t priority;
+
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (!tm_node->parent)
+ continue;
+ /* Count group of children of same priority i.e are RR */
+ parent = tm_node->parent;
+ priority = tm_node->priority;
+ rr_num = check_rr(dev, priority, parent->id);
+
+ /* Assuming that multiple RR groups are
+ * not configured based on capability.
+ */
+ if (rr_num > 1) {
+ parent->rr_prio = priority;
+ parent->rr_num = rr_num;
+ }
+
+ /* Find out static priority children that are not in RR */
+ TAILQ_FOREACH(tm_node_child, &dev->node_list, node) {
+ if (!tm_node_child->parent)
+ continue;
+ if (parent->id != tm_node_child->parent->id)
+ continue;
+ if (parent->max_prio == UINT32_MAX &&
+ tm_node_child->priority != parent->rr_prio)
+ parent->max_prio = 0;
+
+ if (parent->max_prio < tm_node_child->priority &&
+ parent->rr_prio != tm_node_child->priority)
+ parent->max_prio = tm_node_child->priority;
+ }
+ }
+
+ return 0;
+}
+
static int
nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
uint32_t parent_node_id, uint32_t priority,
@@ -115,6 +178,274 @@ nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
return 0;
}
+static int
+nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
+ uint32_t flags, bool hw_only)
+{
+ struct otx2_nix_tm_shaper_profile *shaper_profile;
+ struct otx2_nix_tm_node *tm_node, *next_node;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_txsch_free_req *req;
+ uint32_t shaper_profile_id;
+ bool skip_node = false;
+ int rc = 0;
+
+ next_node = TAILQ_FIRST(&dev->node_list);
+ while (next_node) {
+ tm_node = next_node;
+ next_node = TAILQ_NEXT(tm_node, node);
+
+ /* Check for only requested nodes */
+ if ((tm_node->flags & flags_mask) != flags)
+ continue;
+
+ if (nix_tm_have_tl1_access(dev) &&
+ tm_node->hw_lvl_id == NIX_TXSCH_LVL_TL1)
+ skip_node = true;
+
+ otx2_tm_dbg("Free hwres for node %u, hwlvl %u, hw_id %u (%p)",
+ tm_node->id, tm_node->hw_lvl_id,
+ tm_node->hw_id, tm_node);
+ /* Free specific HW resource if requested */
+ if (!skip_node && flags_mask &&
+ tm_node->flags & NIX_TM_NODE_HWRES) {
+ req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
+ req->flags = 0;
+ req->schq_lvl = tm_node->hw_lvl_id;
+ req->schq = tm_node->hw_id;
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ break;
+ } else {
+ skip_node = false;
+ }
+ tm_node->flags &= ~NIX_TM_NODE_HWRES;
+
+ /* Leave software elements if needed */
+ if (hw_only)
+ continue;
+
+ shaper_profile_id = tm_node->params.shaper_profile_id;
+ shaper_profile =
+ nix_tm_shaper_profile_search(dev, shaper_profile_id);
+ if (shaper_profile)
+ shaper_profile->reference_count--;
+
+ TAILQ_REMOVE(&dev->node_list, tm_node, node);
+ rte_free(tm_node);
+ }
+
+ if (!flags_mask) {
+ /* Free all hw resources */
+ req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
+ req->flags = TXSCHQ_FREE_ALL;
+
+ return otx2_mbox_process(mbox);
+ }
+
+ return rc;
+}
+
+static uint8_t
+nix_tm_copy_rsp_to_dev(struct otx2_eth_dev *dev,
+ struct nix_txsch_alloc_rsp *rsp)
+{
+ uint16_t schq;
+ uint8_t lvl;
+
+ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+ for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++) {
+ dev->txschq_list[lvl][schq] = rsp->schq_list[lvl][schq];
+ dev->txschq_contig_list[lvl][schq] =
+ rsp->schq_contig_list[lvl][schq];
+ }
+
+ dev->txschq[lvl] = rsp->schq[lvl];
+ dev->txschq_contig[lvl] = rsp->schq_contig[lvl];
+ }
+ return 0;
+}
+
+static int
+nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
+ struct otx2_nix_tm_node *child,
+ struct otx2_nix_tm_node *parent)
+{
+ uint32_t hw_id, schq_con_index, prio_offset;
+ uint32_t l_id, schq_index;
+
+ otx2_tm_dbg("Assign hw id for child node %u, lvl %u, hw_lvl %u (%p)",
+ child->id, child->level_id, child->hw_lvl_id, child);
+
+ child->flags |= NIX_TM_NODE_HWRES;
+
+ /* Process root nodes */
+ if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
+ child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) {
+ int idx = 0;
+ uint32_t tschq_con_index;
+
+ l_id = child->hw_lvl_id;
+ tschq_con_index = dev->txschq_contig_index[l_id];
+ hw_id = dev->txschq_contig_list[l_id][tschq_con_index];
+ child->hw_id = hw_id;
+ dev->txschq_contig_index[l_id]++;
+ /* Update TL1 hw_id for its parent for config purpose */
+ idx = dev->txschq_index[NIX_TXSCH_LVL_TL1]++;
+ hw_id = dev->txschq_list[NIX_TXSCH_LVL_TL1][idx];
+ child->parent_hw_id = hw_id;
+ return 0;
+ }
+ if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL1 &&
+ child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) {
+ uint32_t tschq_con_index;
+
+ l_id = child->hw_lvl_id;
+ tschq_con_index = dev->txschq_index[l_id];
+ hw_id = dev->txschq_list[l_id][tschq_con_index];
+ child->hw_id = hw_id;
+ dev->txschq_index[l_id]++;
+ return 0;
+ }
+
+ /* Process children with parents */
+ l_id = child->hw_lvl_id;
+ schq_index = dev->txschq_index[l_id];
+ schq_con_index = dev->txschq_contig_index[l_id];
+
+ if (child->priority == parent->rr_prio) {
+ hw_id = dev->txschq_list[l_id][schq_index];
+ child->hw_id = hw_id;
+ child->parent_hw_id = parent->hw_id;
+ dev->txschq_index[l_id]++;
+ } else {
+ prio_offset = schq_con_index + child->priority;
+ hw_id = dev->txschq_contig_list[l_id][prio_offset];
+ child->hw_id = hw_id;
+ }
+ return 0;
+}
+
+static int
+nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
+{
+ struct otx2_nix_tm_node *parent, *child;
+ uint32_t child_hw_lvl, con_index_inc, i;
+
+ for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) {
+ TAILQ_FOREACH(parent, &dev->node_list, node) {
+ child_hw_lvl = parent->hw_lvl_id - 1;
+ if (parent->hw_lvl_id != i)
+ continue;
+ TAILQ_FOREACH(child, &dev->node_list, node) {
+ if (!child->parent)
+ continue;
+ if (child->parent->id != parent->id)
+ continue;
+ nix_tm_assign_id_to_node(dev, child, parent);
+ }
+
+ con_index_inc = parent->max_prio + 1;
+ dev->txschq_contig_index[child_hw_lvl] += con_index_inc;
+
+ /*
+ * Explicitly assign id to parent node if it
+ * doesn't have a parent
+ */
+ if (parent->hw_lvl_id == dev->otx2_tm_root_lvl)
+ nix_tm_assign_id_to_node(dev, parent, NULL);
+ }
+ }
+ return 0;
+}
+
+static uint8_t
+nix_tm_count_req_schq(struct otx2_eth_dev *dev,
+ struct nix_txsch_alloc_req *req, uint8_t lvl)
+{
+ struct otx2_nix_tm_node *tm_node;
+ uint8_t contig_count;
+
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (lvl == tm_node->hw_lvl_id) {
+ req->schq[lvl - 1] += tm_node->rr_num;
+ if (tm_node->max_prio != UINT32_MAX) {
+ contig_count = tm_node->max_prio + 1;
+ req->schq_contig[lvl - 1] += contig_count;
+ }
+ }
+ if (lvl == dev->otx2_tm_root_lvl &&
+ dev->otx2_tm_root_lvl && lvl == NIX_TXSCH_LVL_TL2 &&
+ tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) {
+ req->schq_contig[dev->otx2_tm_root_lvl]++;
+ }
+ }
+
+ req->schq[NIX_TXSCH_LVL_TL1] = 1;
+ req->schq_contig[NIX_TXSCH_LVL_TL1] = 0;
+
+ return 0;
+}
+
+static int
+nix_tm_prepare_txschq_req(struct otx2_eth_dev *dev,
+ struct nix_txsch_alloc_req *req)
+{
+ uint8_t i;
+
+ for (i = NIX_TXSCH_LVL_TL1; i > 0; i--)
+ nix_tm_count_req_schq(dev, req, i);
+
+ for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) {
+ dev->txschq_index[i] = 0;
+ dev->txschq_contig_index[i] = 0;
+ }
+ return 0;
+}
+
+static int
+nix_tm_send_txsch_alloc_msg(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_txsch_alloc_req *req;
+ struct nix_txsch_alloc_rsp *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_nix_txsch_alloc(mbox);
+
+ rc = nix_tm_prepare_txschq_req(dev, req);
+ if (rc)
+ return rc;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ nix_tm_copy_rsp_to_dev(dev, rsp);
+
+ nix_tm_assign_hw_id(dev);
+ return 0;
+}
+
+static int
+nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc;
+
+ RTE_SET_USED(xmit_enable);
+
+ nix_tm_update_parent_info(dev);
+
+ rc = nix_tm_send_txsch_alloc_msg(dev);
+ if (rc) {
+ otx2_err("TM failed to alloc tm resources=%d", rc);
+ return rc;
+ }
+
+ return 0;
+}
+
static int
nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
{
@@ -226,6 +557,13 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
int rc;
+ /* Free up all resources already held */
+ rc = nix_tm_free_resources(dev, 0, 0, false);
+ if (rc) {
+ otx2_err("Failed to freeup existing resources,rc=%d", rc);
+ return rc;
+ }
+
/* Clear shaper profiles */
nix_tm_clear_shaper_profiles(dev);
dev->tm_flags = NIX_TM_DEFAULT_TREE;
@@ -234,6 +572,9 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
if (rc != 0)
return rc;
+ rc = nix_tm_alloc_resources(eth_dev, false);
+ if (rc != 0)
+ return rc;
dev->tm_leaf_cnt = sq_cnt;
return 0;
@@ -243,6 +584,15 @@ int
otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc;
+
+ /* Xmit is assumed to be disabled */
+ /* Free up resources already held */
+ rc = nix_tm_free_resources(dev, 0, 0, false);
+ if (rc) {
+ otx2_err("Failed to freeup existing resources,rc=%d", rc);
+ return rc;
+ }
/* Clear shaper profiles */
nix_tm_clear_shaper_profiles(dev);
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 23/57] net/octeontx2: configure TM HW resources
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (21 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 22/57] net/octeontx2: alloc and free TM HW resources jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 24/57] net/octeontx2: enable Tx through traffic manager jerinj
` (34 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
This patch sets up and configures the hierarchy in HW
nodes. Since all the registers are owned by the RVU AF,
register configuration is also done through mbox
communication.
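The shaper encoding is easiest to sanity-check numerically; below is a hedged host-side sketch of the SHAPER_RATE() arithmetic added to otx2_tm.h, using CCLK_HZ = 1 GHz and the LX time-wheel constant from this patch (the result is in the same unit the committed/peak rate is programmed in):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors SHAPER_RATE() from otx2_tm.h */
static uint64_t
shaper_rate(uint64_t cclk_hz, uint64_t cclk_ticks,
	    uint64_t exponent, uint64_t mantissa, uint64_t div_exp)
{
	return (cclk_hz * ((256 + mantissa) << exponent)) /
	       ((cclk_ticks << div_exp) * 256);
}

int
main(void)
{
	/* LX time wheel (non-TL1 levels): 860 CCLK ticks */
	printf("min: %" PRIu64 "\n",
	       shaper_rate(1000000000, 860, 0, 0, 12)); /* max div_exp */
	printf("max: %" PRIu64 "\n",
	       shaper_rate(1000000000, 860, 0xf, 0xff, 0));
	return 0;
}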
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
drivers/net/octeontx2/otx2_tm.c | 504 ++++++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_tm.h | 82 ++++++
2 files changed, 586 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 91f31df05..c6154e4d4 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -20,6 +20,41 @@ enum otx2_tm_node_level {
OTX2_TM_LVL_MAX,
};
+static inline
+uint64_t shaper2regval(struct shaper_params *shaper)
+{
+ return (shaper->burst_exponent << 37) | (shaper->burst_mantissa << 29) |
+ (shaper->div_exp << 13) | (shaper->exponent << 9) |
+ (shaper->mantissa << 1);
+}
+
+static int
+nix_get_link(struct otx2_eth_dev *dev)
+{
+ int link = 13 /* SDP */;
+ uint16_t lmac_chan;
+ uint16_t map;
+
+ lmac_chan = dev->tx_chan_base;
+
+ /* CGX lmac link */
+ if (lmac_chan >= 0x800) {
+ map = lmac_chan & 0x7FF;
+ link = 4 * ((map >> 8) & 0xF) + ((map >> 4) & 0xF);
+ } else if (lmac_chan < 0x700) {
+ /* LBK channel */
+ link = 12;
+ }
+
+ return link;
+}
+
+static uint8_t
+nix_get_relchan(struct otx2_eth_dev *dev)
+{
+ return dev->tx_chan_base & 0xff;
+}
+
static bool
nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
{
@@ -28,6 +63,24 @@ nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
!is_lbk && !dev->maxvf;
}
+static int
+find_prio_anchor(struct otx2_eth_dev *dev, uint32_t node_id)
+{
+ struct otx2_nix_tm_node *child_node;
+
+ TAILQ_FOREACH(child_node, &dev->node_list, node) {
+ if (!child_node->parent)
+ continue;
+ if (!(child_node->parent->id == node_id))
+ continue;
+ if (child_node->priority == child_node->parent->rr_prio)
+ continue;
+ return child_node->hw_id - child_node->priority;
+ }
+ return 0;
+}
+
+
static struct otx2_nix_tm_shaper_profile *
nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
{
@@ -40,6 +93,451 @@ nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
return NULL;
}
+static inline uint64_t
+shaper_rate_to_nix(uint64_t cclk_hz, uint64_t cclk_ticks,
+ uint64_t value, uint64_t *exponent_p,
+ uint64_t *mantissa_p, uint64_t *div_exp_p)
+{
+ uint64_t div_exp, exponent, mantissa;
+
+ /* Boundary checks */
+ if (value < MIN_SHAPER_RATE(cclk_hz, cclk_ticks) ||
+ value > MAX_SHAPER_RATE(cclk_hz, cclk_ticks))
+ return 0;
+
+ if (value <= SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, 0)) {
+ /* Calculate rate div_exp and mantissa using
+ * the following formula:
+ *
+ * value = (cclk_hz * (256 + mantissa))
+ * / ((cclk_ticks << div_exp) * 256)
+ */
+ div_exp = 0;
+ exponent = 0;
+ mantissa = MAX_RATE_MANTISSA;
+
+ while (value < (cclk_hz / (cclk_ticks << div_exp)))
+ div_exp += 1;
+
+ while (value <
+ ((cclk_hz * (256 + mantissa)) /
+ ((cclk_ticks << div_exp) * 256)))
+ mantissa -= 1;
+ } else {
+ /* Calculate rate exponent and mantissa using
+ * the following formula:
+ *
+ * value = (cclk_hz * ((256 + mantissa) << exponent))
+ * / (cclk_ticks * 256)
+ *
+ */
+ div_exp = 0;
+ exponent = MAX_RATE_EXPONENT;
+ mantissa = MAX_RATE_MANTISSA;
+
+ while (value < (cclk_hz * (1 << exponent)) / cclk_ticks)
+ exponent -= 1;
+
+ while (value < (cclk_hz * ((256 + mantissa) << exponent)) /
+ (cclk_ticks * 256))
+ mantissa -= 1;
+ }
+
+ if (div_exp > MAX_RATE_DIV_EXP ||
+ exponent > MAX_RATE_EXPONENT || mantissa > MAX_RATE_MANTISSA)
+ return 0;
+
+ if (div_exp_p)
+ *div_exp_p = div_exp;
+ if (exponent_p)
+ *exponent_p = exponent;
+ if (mantissa_p)
+ *mantissa_p = mantissa;
+
+ /* Calculate real rate value */
+ return SHAPER_RATE(cclk_hz, cclk_ticks, exponent, mantissa, div_exp);
+}
+
+static inline uint64_t
+lx_shaper_rate_to_nix(uint64_t cclk_hz, uint32_t hw_lvl,
+ uint64_t value, uint64_t *exponent,
+ uint64_t *mantissa, uint64_t *div_exp)
+{
+ if (hw_lvl == NIX_TXSCH_LVL_TL1)
+ return shaper_rate_to_nix(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS,
+ value, exponent, mantissa, div_exp);
+ else
+ return shaper_rate_to_nix(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS,
+ value, exponent, mantissa, div_exp);
+}
+
+static inline uint64_t
+shaper_burst_to_nix(uint64_t value, uint64_t *exponent_p,
+ uint64_t *mantissa_p)
+{
+ uint64_t exponent, mantissa;
+
+ if (value < MIN_SHAPER_BURST || value > MAX_SHAPER_BURST)
+ return 0;
+
+ /* Calculate burst exponent and mantissa using
+ * the following formula:
+ *
+ * value = ((256 + mantissa) << (exponent + 1))
+ * / 256
+ *
+ */
+ exponent = MAX_BURST_EXPONENT;
+ mantissa = MAX_BURST_MANTISSA;
+
+ while (value < (1ull << (exponent + 1)))
+ exponent -= 1;
+
+ while (value < ((256 + mantissa) << (exponent + 1)) / 256)
+ mantissa -= 1;
+
+ if (exponent > MAX_BURST_EXPONENT || mantissa > MAX_BURST_MANTISSA)
+ return 0;
+
+ if (exponent_p)
+ *exponent_p = exponent;
+ if (mantissa_p)
+ *mantissa_p = mantissa;
+
+ return SHAPER_BURST(exponent, mantissa);
+}
+
+static int
+configure_shaper_cir_pir_reg(struct otx2_eth_dev *dev,
+ struct otx2_nix_tm_node *tm_node,
+ struct shaper_params *cir,
+ struct shaper_params *pir)
+{
+ uint32_t shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
+ struct otx2_nix_tm_shaper_profile *shaper_profile = NULL;
+ struct rte_tm_shaper_params *param;
+
+ shaper_profile_id = tm_node->params.shaper_profile_id;
+
+ shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
+ if (shaper_profile) {
+ param = &shaper_profile->profile;
+ /* Calculate CIR exponent and mantissa */
+ if (param->committed.rate)
+ cir->rate = lx_shaper_rate_to_nix(CCLK_HZ,
+ tm_node->hw_lvl_id,
+ param->committed.rate,
+ &cir->exponent,
+ &cir->mantissa,
+ &cir->div_exp);
+
+ /* Calculate PIR exponent and mantissa */
+ if (param->peak.rate)
+ pir->rate = lx_shaper_rate_to_nix(CCLK_HZ,
+ tm_node->hw_lvl_id,
+ param->peak.rate,
+ &pir->exponent,
+ &pir->mantissa,
+ &pir->div_exp);
+
+ /* Calculate CIR burst exponent and mantissa */
+ if (param->committed.size)
+ cir->burst = shaper_burst_to_nix(param->committed.size,
+ &cir->burst_exponent,
+ &cir->burst_mantissa);
+
+ /* Calculate PIR burst exponent and mantissa */
+ if (param->peak.size)
+ pir->burst = shaper_burst_to_nix(param->peak.size,
+ &pir->burst_exponent,
+ &pir->burst_mantissa);
+ }
+
+ return 0;
+}
+
+static int
+send_tm_reqval(struct otx2_mbox *mbox, struct nix_txschq_config *req)
+{
+ int rc;
+
+ if (req->num_regs > MAX_REGS_PER_MBOX_MSG)
+ return -ERANGE;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ req->num_regs = 0;
+ return 0;
+}
+
+static int
+populate_tm_registers(struct otx2_eth_dev *dev,
+ struct otx2_nix_tm_node *tm_node)
+{
+ uint64_t strict_schedul_prio, rr_prio;
+ struct otx2_mbox *mbox = dev->mbox;
+ volatile uint64_t *reg, *regval;
+ uint64_t parent = 0, child = 0;
+ struct shaper_params cir, pir;
+ struct nix_txschq_config *req;
+ uint64_t rr_quantum;
+ uint32_t hw_lvl;
+ uint32_t schq;
+ int rc;
+
+ memset(&cir, 0, sizeof(cir));
+ memset(&pir, 0, sizeof(pir));
+
+ /* Skip leaf nodes */
+ if (tm_node->hw_lvl_id == NIX_TXSCH_LVL_CNT)
+ return 0;
+
+ /* Root node will not have a parent node */
+ if (tm_node->hw_lvl_id == dev->otx2_tm_root_lvl)
+ parent = tm_node->parent_hw_id;
+ else
+ parent = tm_node->parent->hw_id;
+
+ /* Do we need this trigger to configure TL1 */
+ if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
+ tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) {
+ schq = parent;
+ /*
+ * Default config for TL1.
+ * For VF this is always ignored.
+ */
+
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = NIX_TXSCH_LVL_TL1;
+
+ /* Set DWRR quantum */
+ req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
+ req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
+ req->num_regs++;
+
+ req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
+ req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
+ req->num_regs++;
+
+ req->reg[2] = NIX_AF_TL1X_CIR(schq);
+ req->regval[2] = 0;
+ req->num_regs++;
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ }
+
+ if (tm_node->hw_lvl_id != NIX_TXSCH_LVL_SMQ)
+ child = find_prio_anchor(dev, tm_node->id);
+
+ rr_prio = tm_node->rr_prio;
+ hw_lvl = tm_node->hw_lvl_id;
+ strict_schedul_prio = tm_node->priority;
+ schq = tm_node->hw_id;
+ rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX) /
+ MAX_SCHED_WEIGHT;
+
+ configure_shaper_cir_pir_reg(dev, tm_node, &cir, &pir);
+
+ otx2_tm_dbg("Configure node %p, lvl %u hw_lvl %u, id %u, hw_id %u,"
+ "parent_hw_id %" PRIx64 ", pir %" PRIx64 ", cir %" PRIx64,
+ tm_node, tm_node->level_id, hw_lvl,
+ tm_node->id, schq, parent, pir.rate, cir.rate);
+
+ rc = -EFAULT;
+
+ switch (hw_lvl) {
+ case NIX_TXSCH_LVL_SMQ:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ reg = req->reg;
+ regval = req->regval;
+ req->num_regs = 0;
+
+ /* Set xoff which will be cleared later */
+ *reg++ = NIX_AF_SMQX_CFG(schq);
+ *regval++ = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) |
+ (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
+ req->num_regs++;
+ *reg++ = NIX_AF_MDQX_PARENT(schq);
+ *regval++ = parent << 16;
+ req->num_regs++;
+ *reg++ = NIX_AF_MDQX_SCHEDULE(schq);
+ *regval++ = (strict_schedul_prio << 24) | rr_quantum;
+ req->num_regs++;
+ if (pir.rate && pir.burst) {
+ *reg++ = NIX_AF_MDQX_PIR(schq);
+ *regval++ = shaper2regval(&pir) | 1;
+ req->num_regs++;
+ }
+
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_MDQX_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ case NIX_TXSCH_LVL_TL4:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = 0;
+ reg = req->reg;
+ regval = req->regval;
+
+ *reg++ = NIX_AF_TL4X_PARENT(schq);
+ *regval++ = parent << 16;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL4X_TOPOLOGY(schq);
+ *regval++ = (child << 32) | (rr_prio << 1);
+ req->num_regs++;
+ *reg++ = NIX_AF_TL4X_SCHEDULE(schq);
+ *regval++ = (strict_schedul_prio << 24) | rr_quantum;
+ req->num_regs++;
+ if (pir.rate && pir.burst) {
+ *reg++ = NIX_AF_TL4X_PIR(schq);
+ *regval++ = shaper2regval(&pir) | 1;
+ req->num_regs++;
+ }
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_TL4X_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ case NIX_TXSCH_LVL_TL3:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = 0;
+ reg = req->reg;
+ regval = req->regval;
+
+ *reg++ = NIX_AF_TL3X_PARENT(schq);
+ *regval++ = parent << 16;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL3X_TOPOLOGY(schq);
+ *regval++ = (child << 32) | (rr_prio << 1);
+ req->num_regs++;
+ *reg++ = NIX_AF_TL3X_SCHEDULE(schq);
+ *regval++ = (strict_schedul_prio << 24) | rr_quantum;
+ req->num_regs++;
+ if (pir.rate && pir.burst) {
+ *reg++ = NIX_AF_TL3X_PIR(schq);
+ *regval++ = shaper2regval(&pir) | 1;
+ req->num_regs++;
+ }
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_TL3X_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ case NIX_TXSCH_LVL_TL2:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = 0;
+ reg = req->reg;
+ regval = req->regval;
+
+ *reg++ = NIX_AF_TL2X_PARENT(schq);
+ *regval++ = parent << 16;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL2X_TOPOLOGY(schq);
+ *regval++ = (child << 32) | (rr_prio << 1);
+ req->num_regs++;
+ *reg++ = NIX_AF_TL2X_SCHEDULE(schq);
+ if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2)
+ *regval++ = (1 << 24) | rr_quantum;
+ else
+ *regval++ = (strict_schedul_prio << 24) | rr_quantum;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq, nix_get_link(dev));
+ *regval++ = BIT_ULL(12) | nix_get_relchan(dev);
+ req->num_regs++;
+ if (pir.rate && pir.burst) {
+ *reg++ = NIX_AF_TL2X_PIR(schq);
+ *regval++ = shaper2regval(&pir) | 1;
+ req->num_regs++;
+ }
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_TL2X_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ case NIX_TXSCH_LVL_TL1:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = 0;
+ reg = req->reg;
+ regval = req->regval;
+
+ *reg++ = NIX_AF_TL1X_SCHEDULE(schq);
+ *regval++ = rr_quantum;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL1X_TOPOLOGY(schq);
+ *regval++ = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
+ req->num_regs++;
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_TL1X_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ }
+
+ return 0;
+error:
+ otx2_err("Txschq cfg request failed for node %p, rc=%d", tm_node, rc);
+ return rc;
+}
+
+
+static int
+nix_tm_txsch_reg_config(struct otx2_eth_dev *dev)
+{
+ struct otx2_nix_tm_node *tm_node;
+ uint32_t lvl;
+ int rc = 0;
+
+ if (nix_get_link(dev) == 13)
+ return -EPERM;
+
+ for (lvl = 0; lvl < (uint32_t)dev->otx2_tm_root_lvl + 1; lvl++) {
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (tm_node->hw_lvl_id == lvl) {
+ rc = populate_tm_registers(dev, tm_node);
+ if (rc)
+ goto exit;
+ }
+ }
+ }
+exit:
+ return rc;
+}
+
static struct otx2_nix_tm_node *
nix_tm_node_search(struct otx2_eth_dev *dev,
uint32_t node_id, bool user)
@@ -443,6 +941,12 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
return rc;
}
+ rc = nix_tm_txsch_reg_config(dev);
+ if (rc) {
+ otx2_err("TM failed to configure sched registers=%d", rc);
+ return rc;
+ }
+
return 0;
}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 94023fa99..af1bb1862 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -64,4 +64,86 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
/* = NIX_MAX_HW_MTU */
#define DEFAULT_RR_WEIGHT 71
+/** NIX rate limits */
+#define MAX_RATE_DIV_EXP 12
+#define MAX_RATE_EXPONENT 0xf
+#define MAX_RATE_MANTISSA 0xff
+
+/** NIX rate limiter time-wheel resolution */
+#define L1_TIME_WHEEL_CCLK_TICKS 240
+#define LX_TIME_WHEEL_CCLK_TICKS 860
+
+#define CCLK_HZ 1000000000
+
+/* NIX rate calculation
+ * CCLK = coprocessor-clock frequency in MHz
+ * CCLK_TICKS = rate limiter time-wheel resolution
+ *
+ * PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA])
+ * << NIX_*_PIR[RATE_EXPONENT]) / 256
+ * PIR = (CCLK / (CCLK_TICKS << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
+ * * PIR_ADD
+ *
+ * CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA])
+ * << NIX_*_CIR[RATE_EXPONENT]) / 256
+ * CIR = (CCLK / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
+ * * CIR_ADD
+ */
+#define SHAPER_RATE(cclk_hz, cclk_ticks, \
+ exponent, mantissa, div_exp) \
+ (((uint64_t)(cclk_hz) * ((256 + (mantissa)) << (exponent))) \
+ / (((cclk_ticks) << (div_exp)) * 256))
+
+#define L1_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \
+ SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS, \
+ exponent, mantissa, div_exp)
+
+#define LX_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \
+ SHAPER_RATE(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS, \
+ exponent, mantissa, div_exp)
+
+/* Shaper rate limits */
+#define MIN_SHAPER_RATE(cclk_hz, cclk_ticks) \
+ SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, MAX_RATE_DIV_EXP)
+
+#define MAX_SHAPER_RATE(cclk_hz, cclk_ticks) \
+ SHAPER_RATE(cclk_hz, cclk_ticks, MAX_RATE_EXPONENT, \
+ MAX_RATE_MANTISSA, 0)
+
+#define MIN_L1_SHAPER_RATE(cclk_hz) \
+ MIN_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS)
+
+#define MAX_L1_SHAPER_RATE(cclk_hz) \
+ MAX_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS)
+
+/** TM Shaper - low level operations */
+
+/** NIX burst limits */
+#define MAX_BURST_EXPONENT 0xf
+#define MAX_BURST_MANTISSA 0xff
+
+/* NIX burst calculation
+ * PIR_BURST = ((256 + NIX_*_PIR[BURST_MANTISSA])
+ * << (NIX_*_PIR[BURST_EXPONENT] + 1))
+ * / 256
+ *
+ * CIR_BURST = ((256 + NIX_*_CIR[BURST_MANTISSA])
+ * << (NIX_*_CIR[BURST_EXPONENT] + 1))
+ * / 256
+ */
+#define SHAPER_BURST(exponent, mantissa) \
+ (((256 + (mantissa)) << ((exponent) + 1)) / 256)
+
+/** Shaper burst limits */
+#define MIN_SHAPER_BURST \
+ SHAPER_BURST(0, 0)
+
+#define MAX_SHAPER_BURST \
+ SHAPER_BURST(MAX_BURST_EXPONENT,\
+ MAX_BURST_MANTISSA)
+
+/* Default TL1 priority and Quantum from AF */
+#define TXSCH_TL1_DFLT_RR_QTM ((1 << 24) - 1)
+#define TXSCH_TL1_DFLT_RR_PRIO 1
+
#endif /* __OTX2_TM_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 24/57] net/octeontx2: enable Tx through traffic manager
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (22 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 23/57] net/octeontx2: configure " jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 25/57] net/octeontx2: add ptype support jerinj
` (33 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: Krzysztof Kanas, Vamsi Attunuru
From: Krzysztof Kanas <kkanas@marvell.com>
This patch enables packet transmission through the traffic
manager hierarchy by clearing software XOFF on the nodes
and linking Tx queues to their corresponding leaf nodes.
It also adds support for starting and stopping a Tx queue
through the traffic manager.
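Condensed, the stop path in otx2_nix_tm_sw_xoff() below boils down to the following sequence (hedged restatement; error handling and the CGX start/stop wrapping are omitted):

static void
sq_flush_sketch(struct otx2_eth_dev *dev, struct otx2_eth_txq *txq,
		uint16_t smq)
{
	otx2_nix_sq_sqb_aura_fc(txq, false); /* stop SQB backpressure accounting */
	nix_smq_xoff(dev, smq, false);       /* clear XOFF so the SMQ drains */
	nix_txq_flush_sq_spin(txq);          /* poll until the SQ is quiescent */
	nix_smq_xoff(dev, smq, true);        /* flush and re-assert XOFF */
}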
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 75 ++++++-
drivers/net/octeontx2/otx2_tm.c | 296 +++++++++++++++++++++++++++-
drivers/net/octeontx2/otx2_tm.h | 4 +
3 files changed, 370 insertions(+), 5 deletions(-)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 899865749..62b1e3d14 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -120,6 +120,32 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+int
+otx2_cgx_rxtx_start(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_start_rxtx(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
+int
+otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_stop_rxtx(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
static inline void
nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
{
@@ -461,16 +487,27 @@ nix_sq_init(struct otx2_eth_txq *txq)
struct otx2_eth_dev *dev = txq->dev;
struct otx2_mbox *mbox = dev->mbox;
struct nix_aq_enq_req *sq;
+ uint32_t rr_quantum;
+ uint16_t smq;
+ int rc;
if (txq->sqb_pool->pool_id == 0)
return -EINVAL;
+ rc = otx2_nix_tm_get_leaf_data(dev, txq->sq, &rr_quantum, &smq);
+ if (rc) {
+ otx2_err("Failed to get sq->smq(leaf node), rc=%d", rc);
+ return rc;
+ }
+
sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
sq->qidx = txq->sq;
sq->ctype = NIX_AQ_CTYPE_SQ;
sq->op = NIX_AQ_INSTOP_INIT;
sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq);
+ sq->sq.smq = smq;
+ sq->sq.smq_rr_quantum = rr_quantum;
sq->sq.default_chan = dev->tx_chan_base;
sq->sq.sqe_stype = NIX_STYPE_STF;
sq->sq.ena = 1;
@@ -692,12 +729,18 @@ static void
otx2_nix_tx_queue_release(void *_txq)
{
struct otx2_eth_txq *txq = _txq;
+ struct rte_eth_dev *eth_dev;
if (!txq)
return;
+ eth_dev = txq->dev->eth_dev;
+
otx2_nix_dbg("Releasing txq %u", txq->sq);
+ /* Flush and disable tm */
+ otx2_nix_tm_sw_xoff(txq, eth_dev->data->dev_started);
+
/* Free sqb's and disable sq */
nix_sq_uninit(txq);
@@ -1123,24 +1166,52 @@ int
otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
{
struct rte_eth_dev_data *data = eth_dev->data;
+ struct otx2_eth_txq *txq;
+ int rc = -EINVAL;
+
+ txq = eth_dev->data->tx_queues[qidx];
if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
return 0;
+ rc = otx2_nix_sq_sqb_aura_fc(txq, true);
+ if (rc) {
+ otx2_err("Failed to enable sqb aura fc, txq=%u, rc=%d",
+ qidx, rc);
+ goto done;
+ }
+
data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
- return 0;
+
+done:
+ return rc;
}
int
otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
{
struct rte_eth_dev_data *data = eth_dev->data;
+ struct otx2_eth_txq *txq;
+ int rc;
+
+ txq = eth_dev->data->tx_queues[qidx];
if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
return 0;
+ txq->fc_cache_pkts = 0;
+
+ rc = otx2_nix_sq_sqb_aura_fc(txq, false);
+ if (rc) {
+ otx2_err("Failed to disable sqb aura fc, txq=%u, rc=%d",
+ qidx, rc);
+ goto done;
+ }
+
data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
- return 0;
+
+done:
+ return rc;
}
static int
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index c6154e4d4..246920695 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -676,6 +676,224 @@ nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
return 0;
}
+static int
+nix_smq_xoff(struct otx2_eth_dev *dev, uint16_t smq, bool enable)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_txschq_config *req;
+
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = NIX_TXSCH_LVL_SMQ;
+ req->num_regs = 1;
+
+ req->reg[0] = NIX_AF_SMQX_CFG(smq);
+ /* Unmodified fields */
+ req->regval[0] = ((uint64_t)NIX_MAX_VTAG_INS << 36) |
+ (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
+
+ if (enable)
+ req->regval[0] |= BIT_ULL(50) | BIT_ULL(49);
+ else
+ req->regval[0] |= 0;
+
+ return otx2_mbox_process(mbox);
+}
+
+int
+otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
+{
+ struct otx2_eth_txq *txq = __txq;
+ struct npa_aq_enq_req *req;
+ struct npa_aq_enq_rsp *rsp;
+ struct otx2_npa_lf *lf;
+ struct otx2_mbox *mbox;
+ uint64_t aura_handle;
+ int rc;
+
+ lf = otx2_npa_lf_obj_get();
+ if (!lf)
+ return -EFAULT;
+ mbox = lf->mbox;
+ /* Set/clear sqb aura fc_ena */
+ aura_handle = txq->sqb_pool->pool_id;
+ req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+
+ req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_WRITE;
+ /* Below is not needed for aura writes but AF driver needs it */
+ /* AF will translate to associated poolctx */
+ req->aura.pool_addr = req->aura_id;
+
+ req->aura.fc_ena = enable;
+ req->aura_mask.fc_ena = 1;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Read back npa aura ctx */
+ req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+
+ req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Init when enabled as there might be no triggers */
+ if (enable)
+ *(volatile uint64_t *)txq->fc_mem = rsp->aura.count;
+ else
+ *(volatile uint64_t *)txq->fc_mem = txq->nb_sqb_bufs;
+ /* Sync write barrier */
+ rte_wmb();
+
+ return 0;
+}
+
+static void
+nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
+{
+ uint16_t sqb_cnt, head_off, tail_off;
+ struct otx2_eth_dev *dev = txq->dev;
+ uint16_t sq = txq->sq;
+ uint64_t reg, val;
+ int64_t *regaddr;
+
+ while (true) {
+ reg = ((uint64_t)sq << 32);
+ regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, regaddr);
+
+ regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
+ val = otx2_atomic64_add_nosync(reg, regaddr);
+ sqb_cnt = val & 0xFFFF;
+ head_off = (val >> 20) & 0x3F;
+ tail_off = (val >> 28) & 0x3F;
+
+ /* SQ reached quiescent state */
+ if (sqb_cnt <= 1 && head_off == tail_off &&
+ (*txq->fc_mem == txq->nb_sqb_bufs)) {
+ break;
+ }
+
+ rte_pause();
+ }
+}
+
+int
+otx2_nix_tm_sw_xoff(void *__txq, bool dev_started)
+{
+ struct otx2_eth_txq *txq = __txq;
+ struct otx2_eth_dev *dev = txq->dev;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *req;
+ struct nix_aq_enq_rsp *rsp;
+ uint16_t smq;
+ int rc;
+
+ /* Get smq from sq */
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ req->qidx = txq->sq;
+ req->ctype = NIX_AQ_CTYPE_SQ;
+ req->op = NIX_AQ_INSTOP_READ;
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to get smq, rc=%d", rc);
+ return -EIO;
+ }
+
+ /* Check if sq is enabled */
+ if (!rsp->sq.ena)
+ return 0;
+
+ smq = rsp->sq.smq;
+
+ /* Enable CGX RXTX to drain pkts */
+ if (!dev_started) {
+ rc = otx2_cgx_rxtx_start(dev);
+ if (rc)
+ return rc;
+ }
+
+ rc = otx2_nix_sq_sqb_aura_fc(txq, false);
+ if (rc < 0) {
+ otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
+ goto cleanup;
+ }
+
+ /* Disable smq xoff for case it was enabled earlier */
+ rc = nix_smq_xoff(dev, smq, false);
+ if (rc) {
+ otx2_err("Failed to enable smq for sq %u, rc=%d", txq->sq, rc);
+ goto cleanup;
+ }
+
+ /* Wait for sq entries to be flushed */
+ nix_txq_flush_sq_spin(txq);
+
+ /* Flush and enable smq xoff */
+ rc = nix_smq_xoff(dev, smq, true);
+ if (rc) {
+ otx2_err("Failed to disable smq for sq %u, rc=%d", txq->sq, rc);
+ return rc;
+ }
+
+cleanup:
+ /* Restore cgx state */
+ if (!dev_started)
+ rc |= otx2_cgx_rxtx_stop(dev);
+
+ return rc;
+}
+
+static int
+nix_tm_sw_xon(struct otx2_eth_txq *txq,
+ uint16_t smq, uint32_t rr_quantum)
+{
+ struct otx2_eth_dev *dev = txq->dev;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *req;
+ int rc;
+
+ otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum %u",
+ txq->sq, smq, rr_quantum);
+ /* Set smq from sq */
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ req->qidx = txq->sq;
+ req->ctype = NIX_AQ_CTYPE_SQ;
+ req->op = NIX_AQ_INSTOP_WRITE;
+ req->sq.smq = smq;
+ req->sq.smq_rr_quantum = rr_quantum;
+ req->sq_mask.smq = ~req->sq_mask.smq;
+ req->sq_mask.smq_rr_quantum = ~req->sq_mask.smq_rr_quantum;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to set smq, rc=%d", rc);
+ return -EIO;
+ }
+
+ /* Enable sqb_aura fc */
+ rc = otx2_nix_sq_sqb_aura_fc(txq, true);
+ if (rc < 0) {
+ otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
+ return rc;
+ }
+
+ /* Disable smq xoff */
+ rc = nix_smq_xoff(dev, smq, false);
+ if (rc) {
+ otx2_err("Failed to enable smq for sq %u", txq->sq);
+ return rc;
+ }
+
+ return 0;
+}
+
static int
nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
uint32_t flags, bool hw_only)
@@ -929,10 +1147,11 @@ static int
nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_nix_tm_node *tm_node;
+ uint16_t sq, smq, rr_quantum;
+ struct otx2_eth_txq *txq;
int rc;
- RTE_SET_USED(xmit_enable);
-
nix_tm_update_parent_info(dev);
rc = nix_tm_send_txsch_alloc_msg(dev);
@@ -947,7 +1166,43 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
return rc;
}
- return 0;
+ /* Enable xmit as all the topology is ready */
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (tm_node->flags & NIX_TM_NODE_ENABLED)
+ continue;
+
+ /* Enable xmit on sq */
+ if (tm_node->level_id != OTX2_TM_LVL_QUEUE) {
+ tm_node->flags |= NIX_TM_NODE_ENABLED;
+ continue;
+ }
+
+ /* Don't enable SMQ or mark the node as enabled */
+ if (!xmit_enable)
+ continue;
+
+ sq = tm_node->id;
+ if (sq > eth_dev->data->nb_tx_queues) {
+ rc = -EFAULT;
+ break;
+ }
+
+ txq = eth_dev->data->tx_queues[sq];
+
+ smq = tm_node->parent->hw_id;
+ rr_quantum = (tm_node->weight *
+ NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT;
+
+ rc = nix_tm_sw_xon(txq, smq, rr_quantum);
+ if (rc)
+ break;
+ tm_node->flags |= NIX_TM_NODE_ENABLED;
+ }
+
+ if (rc)
+ otx2_err("TM failed to enable xmit on sq %u, rc=%d", sq, rc);
+
+ return rc;
}
static int
@@ -1104,3 +1359,38 @@ otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
dev->tm_flags = 0;
return 0;
}
+
+int
+otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
+ uint32_t *rr_quantum, uint16_t *smq)
+{
+ struct otx2_nix_tm_node *tm_node;
+ int rc;
+
+ /* 0..sq_cnt-1 are leaf nodes */
+ if (sq >= dev->tm_leaf_cnt)
+ return -EINVAL;
+
+ /* Search for internal node first */
+ tm_node = nix_tm_node_search(dev, sq, false);
+ if (!tm_node)
+ tm_node = nix_tm_node_search(dev, sq, true);
+
+ /* Check if we found a valid leaf node */
+ if (!tm_node || tm_node->level_id != OTX2_TM_LVL_QUEUE ||
+ !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
+ return -EIO;
+ }
+
+ /* Get SMQ Id of leaf node's parent */
+ *smq = tm_node->parent->hw_id;
+ *rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX)
+ / MAX_SCHED_WEIGHT;
+
+ rc = nix_smq_xoff(dev, *smq, false);
+ if (rc)
+ return rc;
+ tm_node->flags |= NIX_TM_NODE_ENABLED;
+
+ return 0;
+}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index af1bb1862..2a009eece 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -16,6 +16,10 @@ struct otx2_eth_dev;
void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
+ uint32_t *rr_quantum, uint16_t *smq);
+int otx2_nix_tm_sw_xoff(void *_txq, bool dev_started);
+int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
struct otx2_nix_tm_node {
TAILQ_ENTRY(otx2_nix_tm_node) node;
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 25/57] net/octeontx2: add ptype support
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (23 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 24/57] net/octeontx2: enable Tx through traffic manager jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 26/57] net/octeontx2: add queue info and pool supported operations jerinj
` (32 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
The fields from the CQE need to be converted to the
ptype and Rx ol_flags in the mbuf. This patch creates
the lookup memory for those items to be used in the
fast path.
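Conceptually, the fast path then replaces per-packet layer decoding with a single table load; a hedged sketch with made-up shift/mask values (the real index is built from the layer-type fields of NIX_RX_PARSE_S):

#include <rte_mbuf.h>

static inline void
set_ptype_sketch(struct rte_mbuf *m, const uint32_t *ptype_tbl, uint64_t w1)
{
	/* illustrative only: index width and position differ in the PMD */
	m->packet_type = ptype_tbl[(w1 >> 20) & 0xFFF];
}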
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 2 +
drivers/net/octeontx2/otx2_ethdev.h | 6 +
drivers/net/octeontx2/otx2_lookup.c | 315 +++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 7 +
10 files changed, 336 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_lookup.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index ca40358da..0de07776f 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -20,6 +20,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Packet type parsing = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index b720c116f..b4b253aa4 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -20,6 +20,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Packet type parsing = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 5a287493f..21cc4861e 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -16,6 +16,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Packet type parsing = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index d7e8f3d56..07e44b031 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -16,6 +16,7 @@ Features
Features of the OCTEON TX2 Ethdev PMD are:
+- Packet type information
- Promiscuous mode
- SR-IOV VF
- Lock-free Tx queue
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 164621087..d434b0b9d 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -33,6 +33,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
otx2_link.c \
otx2_stats.c \
+ otx2_lookup.c \
otx2_ethdev.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index e344d877f..3dff3e53d 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -8,6 +8,7 @@ sources = files(
'otx2_mac.c',
'otx2_link.c',
'otx2_stats.c',
+ 'otx2_lookup.c',
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 62b1e3d14..a9cdafc33 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -441,6 +441,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
rxq->pool = mp;
rxq->qlen = nix_qsize_to_val(qsize);
rxq->qsize = qsize;
+ rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get();
/* Alloc completion queue */
rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
@@ -1271,6 +1272,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.tx_queue_stop = otx2_nix_tx_queue_stop,
.rx_queue_start = otx2_nix_rx_queue_start,
.rx_queue_stop = otx2_nix_rx_queue_stop,
+ .dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 9f73bf89b..cfc4dfe14 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -355,6 +355,12 @@ int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
+/* Lookup configuration */
+void *otx2_nix_fastpath_lookup_mem_get(void);
+
+/* PTYPES */
+const uint32_t *otx2_nix_supported_ptypes_get(struct rte_eth_dev *dev);
+
/* Mac address handling */
int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c
new file mode 100644
index 000000000..99199d08a
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_lookup.c
@@ -0,0 +1,315 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_memzone.h>
+
+#include "otx2_ethdev.h"
+
+/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
+#define ERRCODE_ERRLEN_WIDTH 12
+#define ERR_ARRAY_SZ ((BIT(ERRCODE_ERRLEN_WIDTH)) *\
+ sizeof(uint32_t))
+
+#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ)
+
+const uint32_t *
+otx2_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER_QINQ, /* LB */
+ RTE_PTYPE_L2_ETHER_VLAN, /* LB */
+ RTE_PTYPE_L2_ETHER_TIMESYNC, /* LB */
+ RTE_PTYPE_L2_ETHER_ARP, /* LC */
+ RTE_PTYPE_L2_ETHER_NSH, /* LC */
+ RTE_PTYPE_L2_ETHER_FCOE, /* LC */
+ RTE_PTYPE_L2_ETHER_MPLS, /* LC */
+ RTE_PTYPE_L3_IPV4, /* LC */
+ RTE_PTYPE_L3_IPV4_EXT, /* LC */
+ RTE_PTYPE_L3_IPV6, /* LC */
+ RTE_PTYPE_L3_IPV6_EXT, /* LC */
+ RTE_PTYPE_L4_TCP, /* LD */
+ RTE_PTYPE_L4_UDP, /* LD */
+ RTE_PTYPE_L4_SCTP, /* LD */
+ RTE_PTYPE_L4_ICMP, /* LD */
+ RTE_PTYPE_L4_IGMP, /* LD */
+ RTE_PTYPE_TUNNEL_GRE, /* LD */
+ RTE_PTYPE_TUNNEL_ESP, /* LD */
+ RTE_PTYPE_TUNNEL_NVGRE, /* LD */
+ RTE_PTYPE_TUNNEL_VXLAN, /* LE */
+ RTE_PTYPE_TUNNEL_GENEVE, /* LE */
+ RTE_PTYPE_TUNNEL_GTPC, /* LE */
+ RTE_PTYPE_TUNNEL_GTPU, /* LE */
+ RTE_PTYPE_TUNNEL_VXLAN_GPE, /* LE */
+ RTE_PTYPE_TUNNEL_MPLS_IN_GRE, /* LE */
+ RTE_PTYPE_TUNNEL_MPLS_IN_UDP, /* LE */
+ RTE_PTYPE_INNER_L2_ETHER,/* LF */
+ RTE_PTYPE_INNER_L3_IPV4, /* LG */
+ RTE_PTYPE_INNER_L3_IPV6, /* LG */
+ RTE_PTYPE_INNER_L4_TCP, /* LH */
+ RTE_PTYPE_INNER_L4_UDP, /* LH */
+ RTE_PTYPE_INNER_L4_SCTP, /* LH */
+ RTE_PTYPE_INNER_L4_ICMP, /* LH */
+ };
+
+ if (dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)
+ return ptypes;
+ else
+ return NULL;
+}
+
+/*
+ * +--------------------+-------------------+
+ * |  | IL4 | IL3 | IL2 | TU | L4 | L3 | L2 |
+ * +--------------------+-------------------+
+ *
+ * +--------------------+-------------------+
+ * |  | LH  | LG  | LF  | LE | LD | LC | LB |
+ * +--------------------+-------------------+
+ *
+ * ptype       [LE - LD - LC - LB] = TU  - L4  - L3  - L2
+ * ptype_tunnel[LH - LG - LF]      = IL4 - IL3 - IL2 - TU
+ *
+ */
+static void
+nix_create_non_tunnel_ptype_array(uint16_t *ptype)
+{
+ uint8_t lb, lc, ld, le;
+ uint16_t idx, val;
+
+ for (idx = 0; idx < PTYPE_NON_TUNNEL_ARRAY_SZ; idx++) {
+ lb = idx & 0xF;
+ lc = (idx & 0xF0) >> 4;
+ ld = (idx & 0xF00) >> 8;
+ le = (idx & 0xF000) >> 12;
+ val = RTE_PTYPE_UNKNOWN;
+
+ switch (lb) {
+ case NPC_LT_LB_QINQ:
+ val |= RTE_PTYPE_L2_ETHER_QINQ;
+ break;
+ case NPC_LT_LB_CTAG:
+ val |= RTE_PTYPE_L2_ETHER_VLAN;
+ break;
+ }
+
+ switch (lc) {
+ case NPC_LT_LC_ARP:
+ val |= RTE_PTYPE_L2_ETHER_ARP;
+ break;
+ case NPC_LT_LC_NSH:
+ val |= RTE_PTYPE_L2_ETHER_NSH;
+ break;
+ case NPC_LT_LC_FCOE:
+ val |= RTE_PTYPE_L2_ETHER_FCOE;
+ break;
+ case NPC_LT_LC_MPLS:
+ val |= RTE_PTYPE_L2_ETHER_MPLS;
+ break;
+ case NPC_LT_LC_IP:
+ val |= RTE_PTYPE_L3_IPV4;
+ break;
+ case NPC_LT_LC_IP_OPT:
+ val |= RTE_PTYPE_L3_IPV4_EXT;
+ break;
+ case NPC_LT_LC_IP6:
+ val |= RTE_PTYPE_L3_IPV6;
+ break;
+ case NPC_LT_LC_IP6_EXT:
+ val |= RTE_PTYPE_L3_IPV6_EXT;
+ break;
+ case NPC_LT_LC_PTP:
+ val |= RTE_PTYPE_L2_ETHER_TIMESYNC;
+ break;
+ }
+
+ switch (ld) {
+ case NPC_LT_LD_TCP:
+ val |= RTE_PTYPE_L4_TCP;
+ break;
+ case NPC_LT_LD_UDP:
+ val |= RTE_PTYPE_L4_UDP;
+ break;
+ case NPC_LT_LD_SCTP:
+ val |= RTE_PTYPE_L4_SCTP;
+ break;
+ case NPC_LT_LD_ICMP:
+ val |= RTE_PTYPE_L4_ICMP;
+ break;
+ case NPC_LT_LD_IGMP:
+ val |= RTE_PTYPE_L4_IGMP;
+ break;
+ case NPC_LT_LD_GRE:
+ val |= RTE_PTYPE_TUNNEL_GRE;
+ break;
+ case NPC_LT_LD_NVGRE:
+ val |= RTE_PTYPE_TUNNEL_NVGRE;
+ break;
+ case NPC_LT_LD_ESP:
+ val |= RTE_PTYPE_TUNNEL_ESP;
+ break;
+ }
+
+ switch (le) {
+ case NPC_LT_LE_VXLAN:
+ val |= RTE_PTYPE_TUNNEL_VXLAN;
+ break;
+ case NPC_LT_LE_VXLANGPE:
+ val |= RTE_PTYPE_TUNNEL_VXLAN_GPE;
+ break;
+ case NPC_LT_LE_GENEVE:
+ val |= RTE_PTYPE_TUNNEL_GENEVE;
+ break;
+ case NPC_LT_LE_GTPC:
+ val |= RTE_PTYPE_TUNNEL_GTPC;
+ break;
+ case NPC_LT_LE_GTPU:
+ val |= RTE_PTYPE_TUNNEL_GTPU;
+ break;
+ case NPC_LT_LE_TU_MPLS_IN_GRE:
+ val |= RTE_PTYPE_TUNNEL_MPLS_IN_GRE;
+ break;
+ case NPC_LT_LE_TU_MPLS_IN_UDP:
+ val |= RTE_PTYPE_TUNNEL_MPLS_IN_UDP;
+ break;
+ }
+ ptype[idx] = val;
+ }
+}
+
+#define TU_SHIFT(x) ((x) >> PTYPE_WIDTH)
+static void
+nix_create_tunnel_ptype_array(uint16_t *ptype)
+{
+	uint8_t lf, lg, lh;
+ uint16_t idx, val;
+
+ /* Skip non tunnel ptype array memory */
+ ptype = ptype + PTYPE_NON_TUNNEL_ARRAY_SZ;
+
+ for (idx = 0; idx < PTYPE_TUNNEL_ARRAY_SZ; idx++) {
+		lf = idx & 0xF;
+		lg = (idx & 0xF0) >> 4;
+		lh = (idx & 0xF00) >> 8;
+ val = RTE_PTYPE_UNKNOWN;
+
+		switch (lf) {
+ case NPC_LT_LF_TU_ETHER:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L2_ETHER);
+ break;
+ }
+		switch (lg) {
+ case NPC_LT_LG_TU_IP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV4);
+ break;
+ case NPC_LT_LG_TU_IP6:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV6);
+ break;
+ }
+		switch (lh) {
+ case NPC_LT_LH_TU_TCP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L4_TCP);
+ break;
+ case NPC_LT_LH_TU_UDP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L4_UDP);
+ break;
+ case NPC_LT_LH_TU_SCTP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L4_SCTP);
+ break;
+ case NPC_LT_LH_TU_ICMP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L4_ICMP);
+ break;
+ }
+
+ ptype[idx] = val;
+ }
+}
+
+static void
+nix_create_rx_ol_flags_array(void *mem)
+{
+ uint16_t idx, errcode, errlev;
+ uint32_t val, *ol_flags;
+
+ /* Skip ptype array memory */
+ ol_flags = (uint32_t *)((uint8_t *)mem + PTYPE_ARRAY_SZ);
+
+ for (idx = 0; idx < BIT(ERRCODE_ERRLEN_WIDTH); idx++) {
+ errlev = idx & 0xf;
+ errcode = (idx & 0xff0) >> 4;
+
+ val = PKT_RX_IP_CKSUM_UNKNOWN;
+ val |= PKT_RX_L4_CKSUM_UNKNOWN;
+ val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+
+ switch (errlev) {
+ case NPC_ERRLEV_RE:
+ /* Mark all errors as BAD checksum errors */
+ if (errcode) {
+ val |= PKT_RX_IP_CKSUM_BAD;
+ val |= PKT_RX_L4_CKSUM_BAD;
+ } else {
+ val |= PKT_RX_IP_CKSUM_GOOD;
+ val |= PKT_RX_L4_CKSUM_GOOD;
+ }
+ break;
+ case NPC_ERRLEV_LC:
+ if (errcode == NPC_EC_OIP4_CSUM ||
+ errcode == NPC_EC_IP_FRAG_OFFSET_1) {
+ val |= PKT_RX_IP_CKSUM_BAD;
+ val |= PKT_RX_EIP_CKSUM_BAD;
+ } else {
+ val |= PKT_RX_IP_CKSUM_GOOD;
+ }
+ break;
+ case NPC_ERRLEV_LG:
+ if (errcode == NPC_EC_IIP4_CSUM)
+ val |= PKT_RX_IP_CKSUM_BAD;
+ else
+ val |= PKT_RX_IP_CKSUM_GOOD;
+ break;
+ case NPC_ERRLEV_NIX:
+ if (errcode == NIX_RX_PERRCODE_OL4_CHK) {
+ val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+ val |= PKT_RX_L4_CKSUM_BAD;
+ } else if (errcode == NIX_RX_PERRCODE_IL4_CHK) {
+ val |= PKT_RX_L4_CKSUM_BAD;
+ } else {
+ val |= PKT_RX_IP_CKSUM_GOOD;
+ val |= PKT_RX_L4_CKSUM_GOOD;
+ }
+ break;
+ }
+
+ ol_flags[idx] = val;
+ }
+}
+
+void *
+otx2_nix_fastpath_lookup_mem_get(void)
+{
+ const char name[] = "otx2_nix_fastpath_lookup_mem";
+ const struct rte_memzone *mz;
+ void *mem;
+
+ mz = rte_memzone_lookup(name);
+ if (mz != NULL)
+ return mz->addr;
+
+ /* Request for the first time */
+ mz = rte_memzone_reserve_aligned(name, LOOKUP_ARRAY_SZ,
+ SOCKET_ID_ANY, 0, OTX2_ALIGN);
+ if (mz != NULL) {
+ mem = mz->addr;
+ /* Form the ptype array lookup memory */
+ nix_create_non_tunnel_ptype_array(mem);
+ nix_create_tunnel_ptype_array(mem);
+ /* Form the rx ol_flags based on errcode */
+ nix_create_rx_ol_flags_array(mem);
+ return mem;
+ }
+ return NULL;
+}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 1749c43ff..1283fdf37 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -5,6 +5,13 @@
#ifndef __OTX2_RX_H__
#define __OTX2_RX_H__
+#define PTYPE_WIDTH 12
+#define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH)
+#define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH)
+#define PTYPE_ARRAY_SZ ((PTYPE_NON_TUNNEL_ARRAY_SZ +\
+ PTYPE_TUNNEL_ARRAY_SZ) *\
+ sizeof(uint16_t))
+
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
#endif /* __OTX2_RX_H__ */
--
2.21.0
* [dpdk-dev] [PATCH v2 26/57] net/octeontx2: add queue info and pool supported operations
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (24 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 25/57] net/octeontx2: add ptype support jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 27/57] net/octeontx2: add Rx and Tx descriptor operations jerinj
` (31 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add the Rx and Tx queue info get operations and the
pool ops supported operation.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
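For reference, a hedged application-side sketch of exercising the new
callbacks through the ethdev API; port_id and queue_id are example values:

#include <stdio.h>
#include <rte_ethdev.h>

static void
dump_rxq_info(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_rxq_info qinfo;

	/* Serviced by otx2_nix_rxq_info_get() on this PMD */
	if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) == 0)
		printf("rxq %u: nb_desc=%u scattered=%d pool=%s\n",
		       queue_id, qinfo.nb_desc, qinfo.scattered_rx,
		       qinfo.mp->name);
}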
drivers/net/octeontx2/otx2_ethdev.c | 3 ++
drivers/net/octeontx2/otx2_ethdev.h | 5 +++
drivers/net/octeontx2/otx2_ethdev_ops.c | 51 +++++++++++++++++++++++++
3 files changed, 59 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index a9cdafc33..a504870f6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1293,6 +1293,9 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.xstats_reset = otx2_nix_xstats_reset,
.xstats_get_by_id = otx2_nix_xstats_get_by_id,
.xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
+ .rxq_info_get = otx2_nix_rxq_info_get,
+ .txq_info_get = otx2_nix_txq_info_get,
+ .pool_ops_supported = otx2_nix_pool_ops_supported,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index cfc4dfe14..199d5f242 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -274,6 +274,11 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
/* Ops */
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
+void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 1c935b627..eda5f8a01 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -2,6 +2,8 @@
* Copyright(C) 2019 Marvell International Ltd.
*/
+#include <rte_mbuf_pool_ops.h>
+
#include "otx2_ethdev.h"
static void
@@ -86,6 +88,55 @@ otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev)
nix_allmulticast_config(eth_dev, 0);
}
+void
+otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct otx2_eth_rxq *rxq;
+
+ rxq = eth_dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->pool;
+ qinfo->scattered_rx = eth_dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->qconf.nb_desc;
+
+ qinfo->conf.rx_free_thresh = 0;
+ qinfo->conf.rx_drop_en = 0;
+ qinfo->conf.rx_deferred_start = 0;
+ qinfo->conf.offloads = rxq->offloads;
+}
+
+void
+otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct otx2_eth_txq *txq;
+
+ txq = eth_dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->qconf.nb_desc;
+
+ qinfo->conf.tx_thresh.pthresh = 0;
+ qinfo->conf.tx_thresh.hthresh = 0;
+ qinfo->conf.tx_thresh.wthresh = 0;
+
+ qinfo->conf.tx_free_thresh = 0;
+ qinfo->conf.tx_rs_thresh = 0;
+ qinfo->conf.offloads = txq->offloads;
+ qinfo->conf.tx_deferred_start = 0;
+}
+
+int
+otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
+{
+ RTE_SET_USED(eth_dev);
+
+ if (!strcmp(pool, rte_mbuf_platform_mempool_ops()))
+ return 0;
+
+ return -ENOTSUP;
+}
+
void
otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
{
--
2.21.0
* [dpdk-dev] [PATCH v2 27/57] net/octeontx2: add Rx and Tx descriptor operations
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (25 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 26/57] net/octeontx2: add queue info and pool supported operations jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 28/57] net/octeontx2: add module EEPROM dump jerinj
` (30 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
From: Jerin Jacob <jerinj@marvell.com>
Add Rx and Tx queue descriptor-related operations.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
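As a worked example of the unsigned modulo arithmetic used by
otx2_nix_rx_queue_count() below (head/tail values are hypothetical):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint32_t qlen = 1024, head = 1000, tail = 8;

	/* tail has wrapped past head; (tail - head) underflows in
	 * uint32_t arithmetic and the modulo recovers the 24 + 8 = 32
	 * descriptors pending: (8 - 1000) % 1024 == 32.
	 */
	printf("pending = %u\n", (tail - head) % qlen);
	return 0;
}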
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
drivers/net/octeontx2/otx2_ethdev.c | 4 ++
drivers/net/octeontx2/otx2_ethdev.h | 4 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 83 ++++++++++++++++++++++
6 files changed, 97 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 0de07776f..f07b64f24 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Free Tx mbuf on demand = Y
Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
@@ -21,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Packet type parsing = Y
+Rx descriptor status = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index b4b253aa4..911c926e4 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Free Tx mbuf on demand = Y
Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
@@ -21,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Packet type parsing = Y
+Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 21cc4861e..e275e6469 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -11,12 +11,14 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Free Tx mbuf on demand = Y
Queue start/stop = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Packet type parsing = Y
+Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index a504870f6..feeba5c96 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1295,6 +1295,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
.rxq_info_get = otx2_nix_rxq_info_get,
.txq_info_get = otx2_nix_txq_info_get,
+ .rx_queue_count = otx2_nix_rx_queue_count,
+ .rx_descriptor_done = otx2_nix_rx_descriptor_done,
+ .rx_descriptor_status = otx2_nix_rx_descriptor_status,
+ .tx_done_cleanup = otx2_nix_tx_done_cleanup,
.pool_ops_supported = otx2_nix_pool_ops_supported,
};
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 199d5f242..8f2691c80 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -279,6 +279,10 @@ void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
+uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
+int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
+int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index eda5f8a01..44cc17200 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -126,6 +126,89 @@ otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
qinfo->conf.tx_deferred_start = 0;
}
+static void
+nix_rx_head_tail_get(struct otx2_eth_dev *dev,
+ uint32_t *head, uint32_t *tail, uint16_t queue_idx)
+{
+ uint64_t reg, val;
+
+ if (head == NULL || tail == NULL)
+ return;
+
+ reg = (((uint64_t)queue_idx) << 32);
+ val = otx2_atomic64_add_nosync(reg, (int64_t *)
+ (dev->base + NIX_LF_CQ_OP_STATUS));
+ if (val & (OP_ERR | CQ_ERR))
+ val = 0;
+
+ *tail = (uint32_t)(val & 0xFFFFF);
+ *head = (uint32_t)((val >> 20) & 0xFFFFF);
+}
+
+uint32_t
+otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+{
+ struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint32_t head, tail;
+
+ nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+ return (tail - head) % rxq->qlen;
+}
+
+static inline int
+nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
+{
+	/* Check whether the given offset (queue index) has a packet filled by HW */
+ if (tail > head && offset <= tail && offset >= head)
+ return 1;
+ /* Wrap around case */
+ if (head > tail && (offset >= head || offset <= tail))
+ return 1;
+
+ return 0;
+}
+
+int
+otx2_nix_rx_descriptor_done(void *rx_queue, uint16_t offset)
+{
+ struct otx2_eth_rxq *rxq = rx_queue;
+ uint32_t head, tail;
+
+ nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
+ &head, &tail, rxq->rq);
+
+ return nix_offset_has_packet(head, tail, offset);
+}
+
+int
+otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+ struct otx2_eth_rxq *rxq = rx_queue;
+ uint32_t head, tail;
+
+	if (rxq->qlen <= offset)
+ return -EINVAL;
+
+ nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
+ &head, &tail, rxq->rq);
+
+ if (nix_offset_has_packet(head, tail, offset))
+ return RTE_ETH_RX_DESC_DONE;
+ else
+ return RTE_ETH_RX_DESC_AVAIL;
+}
+
+/* It is a NOP for octeontx2 as HW frees the buffer on xmit */
+int
+otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+ RTE_SET_USED(txq);
+ RTE_SET_USED(free_cnt);
+
+ return 0;
+}
+
int
otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
{
--
2.21.0
* [dpdk-dev] [PATCH v2 28/57] net/octeontx2: add module EEPROM dump
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (26 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 27/57] net/octeontx2: add Rx and Tx descriptor operations jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 29/57] net/octeontx2: add flow control support jerinj
` (29 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add the module EEPROM dump operation.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
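A hedged application-side sketch of driving the new callbacks; error
handling is trimmed and the buffer is assumed caller-provided:

#include <string.h>
#include <rte_ethdev.h>

static int
read_module_eeprom(uint16_t port_id, uint8_t *buf, uint32_t buf_len)
{
	struct rte_eth_dev_module_info modinfo;
	struct rte_dev_eeprom_info info;

	/* Serviced by otx2_nix_get_module_info() on this PMD */
	if (rte_eth_dev_get_module_info(port_id, &modinfo) != 0)
		return -1;

	memset(&info, 0, sizeof(info));
	info.data = buf;
	info.offset = 0;
	info.length = RTE_MIN(buf_len, modinfo.eeprom_len);

	/* Serviced by otx2_nix_get_module_eeprom() on this PMD */
	return rte_eth_dev_get_module_eeprom(port_id, &info);
}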
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 2 +
drivers/net/octeontx2/otx2_ethdev.h | 4 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 51 ++++++++++++++++++++++
6 files changed, 60 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index f07b64f24..87141244a 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -26,6 +26,7 @@ Rx descriptor status = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
+Module EEPROM dump = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 911c926e4..dafbe003c 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -26,6 +26,7 @@ Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+Module EEPROM dump = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index e275e6469..7fba7e1d9 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -22,6 +22,7 @@ Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+Module EEPROM dump = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index feeba5c96..58c2f97b5 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1300,6 +1300,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.rx_descriptor_status = otx2_nix_rx_descriptor_status,
.tx_done_cleanup = otx2_nix_tx_done_cleanup,
.pool_ops_supported = otx2_nix_pool_ops_supported,
+ .get_module_info = otx2_nix_get_module_info,
+ .get_module_eeprom = otx2_nix_get_module_eeprom,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 8f2691c80..5dd5d8c8b 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -274,6 +274,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
/* Ops */
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_module_info *modinfo);
+int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
+ struct rte_dev_eeprom_info *info);
int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 44cc17200..2a949439a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -220,6 +220,57 @@ otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
return -ENOTSUP;
}
+static struct cgx_fw_data *
+nix_get_fwdata(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_fw_data *rsp = NULL;
+
+ otx2_mbox_alloc_msg_cgx_get_aux_link_info(mbox);
+
+ otx2_mbox_process_msg(mbox, (void *)&rsp);
+
+ return rsp;
+}
+
+int
+otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_module_info *modinfo)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct cgx_fw_data *rsp;
+
+ rsp = nix_get_fwdata(dev);
+ if (rsp == NULL)
+ return -EIO;
+
+ modinfo->type = rsp->fwdata.sfp_eeprom.sff_id;
+ modinfo->eeprom_len = SFP_EEPROM_SIZE;
+
+ return 0;
+}
+
+int
+otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
+ struct rte_dev_eeprom_info *info)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct cgx_fw_data *rsp;
+
+ if (!info->data || !info->length ||
+ (info->offset + info->length > SFP_EEPROM_SIZE))
+ return -EINVAL;
+
+ rsp = nix_get_fwdata(dev);
+ if (rsp == NULL)
+ return -EIO;
+
+ otx2_mbox_memcpy(info->data, rsp->fwdata.sfp_eeprom.buf + info->offset,
+ info->length);
+
+ return 0;
+}
+
void
otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
{
--
2.21.0
* [dpdk-dev] [PATCH v2 29/57] net/octeontx2: add flow control support
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (27 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 28/57] net/octeontx2: add module EEPROM dump jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 30/57] net/octeontx2: add PTP base support jerinj
` (28 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add flow control operations and expose
otx2_nix_update_flow_ctrl_mode() to apply the
configured mode in dev_start().
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
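For illustration, a minimal application-side sketch; note that this PMD
honours only fc_conf.mode and rejects non-zero values in the remaining
parameters, as checked in otx2_nix_flow_ctrl_set() below:

#include <string.h>
#include <rte_ethdev.h>

static int
enable_full_flow_ctrl(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;

	memset(&fc_conf, 0, sizeof(fc_conf));
	fc_conf.mode = RTE_FC_FULL;	/* Rx and Tx pause frames */

	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}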
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 20 ++
drivers/net/octeontx2/otx2_ethdev.h | 23 +++
drivers/net/octeontx2/otx2_flow_ctrl.c | 220 +++++++++++++++++++++
8 files changed, 268 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 87141244a..00feb0cf2 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -21,6 +21,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Flow control = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index dafbe003c..f3f812804 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -21,6 +21,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Flow control = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 07e44b031..20281b030 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -25,6 +25,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- MAC filtering
- Port hardware statistics
- Link state information
+- Link flow control
- Debug utilities - Context dump and error interrupt support
Prerequisites
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index d434b0b9d..4a361846f 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -35,6 +35,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_stats.c \
otx2_lookup.c \
otx2_ethdev.c \
+ otx2_flow_ctrl.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
otx2_ethdev_debug.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 3dff3e53d..4b56f4461 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -10,6 +10,7 @@ sources = files(
'otx2_stats.c',
'otx2_lookup.c',
'otx2_ethdev.c',
+ 'otx2_flow_ctrl.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
'otx2_ethdev_debug.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 58c2f97b5..19b502903 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -216,6 +216,14 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
+ /* TX pause frames enable flowctrl on RX side */
+ if (dev->fc_info.tx_pause) {
+ /* Single bpid is allocated for all rx channels for now */
+ aq->cq.bpid = dev->fc_info.bpid[0];
+ aq->cq.bp = NIX_CQ_BP_LEVEL;
+ aq->cq.bp_ena = 1;
+ }
+
/* Many to one reduction */
aq->cq.qint_idx = qid % dev->qints;
@@ -1073,6 +1081,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
/* Free the resources allocated from the previous configure */
if (dev->configured == 1) {
+ otx2_nix_rxchan_bpid_cfg(eth_dev, false);
oxt2_nix_unregister_queue_irqs(eth_dev);
nix_set_nop_rxtx_function(eth_dev);
rc = nix_store_queue_cfg_and_then_release(eth_dev);
@@ -1126,6 +1135,12 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true);
+ if (rc) {
+ otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/*
* Restore queue config when reconfigure followed by
* reconfigure and no queue configure invoked from application case.
@@ -1302,6 +1317,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.pool_ops_supported = otx2_nix_pool_ops_supported,
.get_module_info = otx2_nix_get_module_info,
.get_module_eeprom = otx2_nix_get_module_eeprom,
+ .flow_ctrl_get = otx2_nix_flow_ctrl_get,
+ .flow_ctrl_set = otx2_nix_flow_ctrl_set,
};
static inline int
@@ -1503,6 +1520,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Disable nix bpid config */
+ otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+
/* Free up SQs */
for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 5dd5d8c8b..03ecd32ec 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -87,6 +87,9 @@
#define NIX_TX_NB_SEG_MAX 9
#endif
+/* Apply BP when CQ is 75% full */
+#define NIX_CQ_BP_LEVEL (25 * 256 / 100)
+
#define CQ_OP_STAT_OP_ERR 63
#define CQ_OP_STAT_CQ_ERR 46
@@ -169,6 +172,14 @@ struct otx2_npc_flow_info {
uint16_t flow_max_priority;
};
+struct otx2_fc_info {
+ enum rte_eth_fc_mode mode; /**< Link flow control mode */
+ uint8_t rx_pause;
+ uint8_t tx_pause;
+ uint8_t chan_cnt;
+ uint16_t bpid[NIX_MAX_CHAN];
+};
+
struct otx2_eth_dev {
OTX2_DEV; /* Base class */
MARKER otx2_eth_dev_data_start;
@@ -216,6 +227,7 @@ struct otx2_eth_dev {
struct otx2_nix_tm_node_list node_list;
struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
struct otx2_rss_info rss_info;
+ struct otx2_fc_info fc_info;
uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
struct otx2_npc_flow_info npc_flow;
@@ -368,6 +380,17 @@ int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
+/* Flow Control */
+int otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fc_conf *fc_conf);
+
+int otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fc_conf *fc_conf);
+
+int otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb);
+
+int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev);
+
/* Lookup configuration */
void *otx2_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
new file mode 100644
index 000000000..0392086d8
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -0,0 +1,220 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+int
+otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_fc_info *fc = &dev->fc_info;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_bp_cfg_req *req;
+ struct nix_bp_cfg_rsp *rsp;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ if (enb) {
+ req = otx2_mbox_alloc_msg_nix_bp_enable(mbox);
+ req->chan_base = 0;
+ req->chan_cnt = 1;
+ req->bpid_per_chan = 0;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc || req->chan_cnt != rsp->chan_cnt) {
+ otx2_err("Insufficient BPIDs, alloc=%u < req=%u rc=%d",
+ rsp->chan_cnt, req->chan_cnt, rc);
+ return rc;
+ }
+
+ fc->bpid[0] = rsp->chan_bpid[0];
+ } else {
+ req = otx2_mbox_alloc_msg_nix_bp_disable(mbox);
+ req->chan_base = 0;
+ req->chan_cnt = 1;
+
+ rc = otx2_mbox_process(mbox);
+
+ memset(fc->bpid, 0, sizeof(uint16_t) * NIX_MAX_CHAN);
+ }
+
+ return rc;
+}
+
+int
+otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fc_conf *fc_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct cgx_pause_frm_cfg *req, *rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
+ req->set = 0;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ goto done;
+
+ if (rsp->rx_pause && rsp->tx_pause)
+ fc_conf->mode = RTE_FC_FULL;
+ else if (rsp->rx_pause)
+ fc_conf->mode = RTE_FC_RX_PAUSE;
+ else if (rsp->tx_pause)
+ fc_conf->mode = RTE_FC_TX_PAUSE;
+ else
+ fc_conf->mode = RTE_FC_NONE;
+
+done:
+ return rc;
+}
+
+static int
+otx2_nix_cq_bp_cfg(struct rte_eth_dev *eth_dev, bool enb)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_fc_info *fc = &dev->fc_info;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *aq;
+ struct otx2_eth_rxq *rxq;
+ int i, rc;
+
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rxq = eth_dev->data->rx_queues[i];
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+			/* The shared memory buffer can be full.
+			 * Flush it and retry.
+			 */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq)
+ return -ENOMEM;
+ }
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ if (enb) {
+ aq->cq.bpid = fc->bpid[0];
+ aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
+ aq->cq.bp = NIX_CQ_BP_LEVEL;
+ aq->cq_mask.bp = ~(aq->cq_mask.bp);
+ }
+
+ aq->cq.bp_ena = !!enb;
+ aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena);
+ }
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+otx2_nix_rx_fc_cfg(struct rte_eth_dev *eth_dev, bool enb)
+{
+ return otx2_nix_cq_bp_cfg(eth_dev, enb);
+}
+
+int
+otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fc_conf *fc_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_fc_info *fc = &dev->fc_info;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_pause_frm_cfg *req;
+ uint8_t tx_pause, rx_pause;
+ int rc = 0;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ if (fc_conf->high_water || fc_conf->low_water || fc_conf->pause_time ||
+ fc_conf->mac_ctrl_frame_fwd || fc_conf->autoneg) {
+ otx2_info("Flowctrl parameter is not supported");
+ return -EINVAL;
+ }
+
+ if (fc_conf->mode == fc->mode)
+ return 0;
+
+ rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
+ (fc_conf->mode == RTE_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
+ (fc_conf->mode == RTE_FC_TX_PAUSE);
+
+ /* Check if TX pause frame is already enabled or not */
+ if (fc->tx_pause ^ tx_pause) {
+ if (otx2_dev_is_A0(dev) && eth_dev->data->dev_started) {
+ /* on A0, CQ should be in disabled state
+ * while setting flow control configuration.
+ */
+ otx2_info("Stop the port=%d for setting flow control\n",
+ eth_dev->data->port_id);
+ return 0;
+ }
+ /* TX pause frames, enable/disable flowctrl on RX side. */
+ rc = otx2_nix_rx_fc_cfg(eth_dev, tx_pause);
+ if (rc)
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
+ req->set = 1;
+ req->rx_pause = rx_pause;
+ req->tx_pause = tx_pause;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ fc->tx_pause = tx_pause;
+ fc->rx_pause = rx_pause;
+ fc->mode = fc_conf->mode;
+
+ return rc;
+}
+
+int
+otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_fc_conf fc_conf;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
+	/* Both Rx & Tx flow ctrl are enabled (RTE_FC_FULL) in HW
+	 * by the AF driver; update that info in the PMD structure.
+	 */
+ otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
+
+ /* To avoid Link credit deadlock on A0, disable Tx FC if it's enabled */
+ if (otx2_dev_is_A0(dev) &&
+ (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+ fc_conf.mode =
+ (fc_conf.mode == RTE_FC_FULL ||
+ fc_conf.mode == RTE_FC_TX_PAUSE) ?
+ RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ }
+
+ return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
+}
--
2.21.0
* [dpdk-dev] [PATCH v2 30/57] net/octeontx2: add PTP base support
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (28 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 29/57] net/octeontx2: add flow control support jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 31/57] net/octeontx2: add remaining PTP operations jerinj
` (27 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
Cc: Harman Kalra, Zyta Szpak
From: Harman Kalra <hkalra@marvell.com>
Add PTP enable and disable operations.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Zyta Szpak <zyta@marvell.com>
---
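A hedged sketch of the application-side trigger: requesting the Rx
timestamp offload at configure time is one of the two conditions under
which otx2_nix_configure() below calls otx2_nix_timesync_enable() (the
other is PTP already being enabled in the PF owning the VF); queue
counts are example values:

#include <string.h>
#include <rte_ethdev.h>

static int
configure_with_timestamping(uint16_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}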
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 22 ++++-
drivers/net/octeontx2/otx2_ethdev.h | 17 ++++
drivers/net/octeontx2/otx2_ptp.c | 135 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 11 +++
7 files changed, 185 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_ptp.c
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 20281b030..41eb3c7b9 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -27,6 +27,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Link state information
- Link flow control
- Debug utilities - Context dump and error interrupt support
+- IEEE1588 timestamping
Prerequisites
-------------
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 4a361846f..a0155e727 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -31,6 +31,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_tm.c \
otx2_rss.c \
otx2_mac.c \
+ otx2_ptp.c \
otx2_link.c \
otx2_stats.c \
otx2_lookup.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 4b56f4461..2cac57d2b 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -6,6 +6,7 @@ sources = files(
'otx2_tm.c',
'otx2_rss.c',
'otx2_mac.c',
+ 'otx2_ptp.c',
'otx2_link.c',
'otx2_stats.c',
'otx2_lookup.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 19b502903..29e8130f4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -336,9 +336,7 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
static inline int
nix_get_data_off(struct otx2_eth_dev *dev)
{
- RTE_SET_USED(dev);
-
- return 0;
+ return otx2_ethdev_is_ptp_en(dev) ? NIX_TIMESYNC_RX_OFFSET : 0;
}
uint64_t
@@ -450,6 +448,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
rxq->qlen = nix_qsize_to_val(qsize);
rxq->qsize = qsize;
rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get();
+ rxq->tstamp = &dev->tstamp;
/* Alloc completion queue */
rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
@@ -717,6 +716,7 @@ otx2_nix_form_default_desc(struct otx2_eth_txq *txq)
send_mem->dsz = 0x0;
send_mem->wmem = 0x1;
send_mem->alg = NIX_SENDMEMALG_SETTSTMP;
+ send_mem->addr = txq->dev->tstamp.tx_tstamp_iova;
}
sg = (union nix_send_sg_s *)&txq->cmd[4];
} else {
@@ -1141,6 +1141,16 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /* Enable PTP if it was requested by the app or if it is already
+ * enabled in PF owning this VF
+ */
+ memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
+ if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ otx2_ethdev_is_ptp_en(dev))
+ otx2_nix_timesync_enable(eth_dev);
+ else
+ otx2_nix_timesync_disable(eth_dev);
+
/*
* Restore queue config when reconfigure followed by
* reconfigure and no queue configure invoked from application case.
@@ -1319,6 +1329,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.get_module_eeprom = otx2_nix_get_module_eeprom,
.flow_ctrl_get = otx2_nix_flow_ctrl_get,
.flow_ctrl_set = otx2_nix_flow_ctrl_set,
+ .timesync_enable = otx2_nix_timesync_enable,
+ .timesync_disable = otx2_nix_timesync_disable,
};
static inline int
@@ -1523,6 +1535,10 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
/* Disable nix bpid config */
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ /* Disable PTP if already enabled */
+ if (otx2_ethdev_is_ptp_en(dev))
+ otx2_nix_timesync_disable(eth_dev);
+
/* Free up SQs */
for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 03ecd32ec..1ca28add4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -13,6 +13,7 @@
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_string_fns.h>
+#include <rte_time.h>
#include "otx2_common.h"
#include "otx2_dev.h"
@@ -128,6 +129,12 @@
#define NIX_DEFAULT_RSS_CTX_GROUP 0
#define NIX_DEFAULT_RSS_MCAM_IDX -1
+#define otx2_ethdev_is_ptp_en(dev) ((dev)->ptp_en)
+
+#define NIX_TIMESYNC_TX_CMD_LEN 8
+/* Additional timesync values. */
+#define OTX2_CYCLECOUNTER_MASK 0xffffffffffffffffULL
+
enum nix_q_size_e {
nix_q_size_16, /* 16 entries */
nix_q_size_64, /* 64 entries */
@@ -234,6 +241,12 @@ struct otx2_eth_dev {
struct otx2_eth_qconf *tx_qconf;
struct otx2_eth_qconf *rx_qconf;
struct rte_eth_dev *eth_dev;
+ /* PTP counters */
+ bool ptp_en;
+ struct otx2_timesync_info tstamp;
+ struct rte_timecounter systime_tc;
+ struct rte_timecounter rx_tstamp_tc;
+ struct rte_timecounter tx_tstamp_tc;
} __rte_cache_aligned;
struct otx2_eth_txq {
@@ -414,4 +427,8 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
/* Rx and Tx routines */
void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
+/* Timesync - PTP routines */
+int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev);
+int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev);
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
new file mode 100644
index 000000000..105067949
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_ethdev_driver.h>
+
+#include "otx2_ethdev.h"
+
+#define PTP_FREQ_ADJUST (1 << 9)
+
+static void
+nix_start_timecounters(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ memset(&dev->systime_tc, 0, sizeof(struct rte_timecounter));
+ memset(&dev->rx_tstamp_tc, 0, sizeof(struct rte_timecounter));
+ memset(&dev->tx_tstamp_tc, 0, sizeof(struct rte_timecounter));
+
+ dev->systime_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
+ dev->rx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
+ dev->tx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
+}
+
+static int
+nix_ptp_config(struct rte_eth_dev *eth_dev, int en)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ uint8_t rc = 0;
+
+ if (otx2_dev_is_vf(dev))
+ return rc;
+
+ if (en) {
+ /* Enable time stamping of sent PTP packets. */
+ otx2_mbox_alloc_msg_nix_lf_ptp_tx_enable(mbox);
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("MBOX ptp tx conf enable failed: err %d", rc);
+ return rc;
+ }
+ /* Enable time stamping of received PTP packets. */
+ otx2_mbox_alloc_msg_cgx_ptp_rx_enable(mbox);
+ } else {
+ /* Disable time stamping of sent PTP packets. */
+ otx2_mbox_alloc_msg_nix_lf_ptp_tx_disable(mbox);
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("MBOX ptp tx conf disable failed: err %d", rc);
+ return rc;
+ }
+ /* Disable time stamping of received PTP packets. */
+ otx2_mbox_alloc_msg_cgx_ptp_rx_disable(mbox);
+ }
+
+ return otx2_mbox_process(mbox);
+}
+
+int
+otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int i, rc = 0;
+
+ if (otx2_ethdev_is_ptp_en(dev)) {
+		otx2_info("PTP mode is already enabled");
+ return -EINVAL;
+ }
+
+ /* If we are VF, no further action can be taken */
+ if (otx2_dev_is_vf(dev))
+ return -EINVAL;
+
+ if (!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)) {
+ otx2_err("Ptype offload is disabled, it should be enabled");
+ return -EINVAL;
+ }
+
+	/* Allocate an IOVA address for the Tx timestamp */
+ const struct rte_memzone *ts;
+ ts = rte_eth_dma_zone_reserve(eth_dev, "otx2_ts",
+ 0, OTX2_ALIGN, OTX2_ALIGN,
+ dev->node);
+	if (ts == NULL) {
+		otx2_err("Failed to allocate mem for tx tstamp addr");
+		return -ENOMEM;
+	}
+
+ dev->tstamp.tx_tstamp_iova = ts->iova;
+ dev->tstamp.tx_tstamp = ts->addr;
+
+ /* System time should be already on by default */
+ nix_start_timecounters(eth_dev);
+
+ dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
+ dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
+
+ rc = nix_ptp_config(eth_dev, 1);
+ if (!rc) {
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
+ otx2_nix_form_default_desc(txq);
+ }
+ }
+ return rc;
+}
+
+int
+otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int i, rc = 0;
+
+ if (!otx2_ethdev_is_ptp_en(dev)) {
+ otx2_nix_dbg("PTP mode is disabled");
+ return -EINVAL;
+ }
+
+ /* If we are VF, nothing else can be done */
+ if (otx2_dev_is_vf(dev))
+ return -EINVAL;
+
+ dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
+ dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
+
+ rc = nix_ptp_config(eth_dev, 0);
+ if (!rc) {
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
+ otx2_nix_form_default_desc(txq);
+ }
+ }
+ return rc;
+}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 1283fdf37..0c3627c12 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -13,5 +13,16 @@
sizeof(uint16_t))
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
+
+#define NIX_TIMESYNC_RX_OFFSET 8
+
+struct otx2_timesync_info {
+ uint64_t rx_tstamp;
+ rte_iova_t tx_tstamp_iova;
+ uint64_t *tx_tstamp;
+ uint8_t tx_ready;
+ uint8_t rx_ready;
+} __rte_cache_aligned;
#endif /* __OTX2_RX_H__ */
--
2.21.0
* [dpdk-dev] [PATCH v2 31/57] net/octeontx2: add remaining PTP operations
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (29 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 30/57] net/octeontx2: add PTP base support jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 32/57] net/octeontx2: introducing flow driver jerinj
` (26 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Harman Kalra, Zyta Szpak
From: Harman Kalra <hkalra@marvell.com>
Add the remaining PTP configuration/slow path operations.
The timesync feature is available only for PF devices.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Zyta Szpak <zyta@marvell.com>
---
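A hedged application-side sketch of consuming the new ops via the
ethdev timesync API (error handling trimmed):

#include <stdio.h>
#include <time.h>
#include <rte_ethdev.h>

static void
poll_ptp_timestamps(uint16_t port_id)
{
	struct timespec ts;

	/* Serviced by otx2_nix_timesync_read_rx_timestamp() */
	if (rte_eth_timesync_read_rx_timestamp(port_id, &ts, 0) == 0)
		printf("Rx tstamp: %ld.%09ld\n", ts.tv_sec, ts.tv_nsec);

	/* Serviced by otx2_nix_timesync_read_tx_timestamp() */
	if (rte_eth_timesync_read_tx_timestamp(port_id, &ts) == 0)
		printf("Tx tstamp: %ld.%09ld\n", ts.tv_sec, ts.tv_nsec);

	/* Deltas within +/-PTP_FREQ_ADJUST additionally issue
	 * PTP_OP_ADJFINE, per otx2_nix_timesync_adjust_time() below
	 */
	rte_eth_timesync_adjust_time(port_id, 100);
}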
doc/guides/nics/features/octeontx2.ini | 2 +
drivers/net/octeontx2/otx2_ethdev.c | 6 ++
drivers/net/octeontx2/otx2_ethdev.h | 11 +++
drivers/net/octeontx2/otx2_ptp.c | 130 +++++++++++++++++++++++++
4 files changed, 149 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 00feb0cf2..46fb00be6 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
Inner RSS = Y
Flow control = Y
Packet type parsing = Y
+Timesync = Y
+Timestamp offload = Y
Rx descriptor status = Y
Basic stats = Y
Stats per queue = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 29e8130f4..7512aacb3 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -47,6 +47,7 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
static const struct otx2_dev_ops otx2_dev_ops = {
.link_status_update = otx2_eth_dev_link_status_update,
+ .ptp_info_update = otx2_eth_dev_ptp_info_update
};
static int
@@ -1331,6 +1332,11 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.flow_ctrl_set = otx2_nix_flow_ctrl_set,
.timesync_enable = otx2_nix_timesync_enable,
.timesync_disable = otx2_nix_timesync_disable,
+ .timesync_read_rx_timestamp = otx2_nix_timesync_read_rx_timestamp,
+ .timesync_read_tx_timestamp = otx2_nix_timesync_read_tx_timestamp,
+ .timesync_adjust_time = otx2_nix_timesync_adjust_time,
+ .timesync_read_time = otx2_nix_timesync_read_time,
+ .timesync_write_time = otx2_nix_timesync_write_time,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 1ca28add4..8f8d93a39 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -430,5 +430,16 @@ void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
/* Timesync - PTP routines */
int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev);
int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev);
+int otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
+ struct timespec *timestamp,
+ uint32_t flags);
+int otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
+ struct timespec *timestamp);
+int otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta);
+int otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
+ const struct timespec *ts);
+int otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev,
+ struct timespec *ts);
+int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en);
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 105067949..5291da241 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -57,6 +57,23 @@ nix_ptp_config(struct rte_eth_dev *eth_dev, int en)
return otx2_mbox_process(mbox);
}
+int
+otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en)
+{
+ struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
+ struct rte_eth_dev *eth_dev = otx2_dev->eth_dev;
+ int i;
+
+ otx2_dev->ptp_en = ptp_en;
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[i];
+ rxq->mbuf_initializer =
+ otx2_nix_rxq_mbuf_setup(otx2_dev,
+ eth_dev->data->port_id);
+ }
+ return 0;
+}
+
int
otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
{
@@ -133,3 +150,116 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
}
return rc;
}
+
+int
+otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
+ struct timespec *timestamp,
+ uint32_t __rte_unused flags)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_timesync_info *tstamp = &dev->tstamp;
+ uint64_t ns;
+
+ if (!tstamp->rx_ready)
+ return -EINVAL;
+
+ ns = rte_timecounter_update(&dev->rx_tstamp_tc, tstamp->rx_tstamp);
+ *timestamp = rte_ns_to_timespec(ns);
+ tstamp->rx_ready = 0;
+
+ otx2_nix_dbg("rx timestamp: %llu sec: %lu nsec %lu",
+ (unsigned long long)tstamp->rx_tstamp, timestamp->tv_sec,
+ timestamp->tv_nsec);
+
+ return 0;
+}
+
+int
+otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
+ struct timespec *timestamp)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_timesync_info *tstamp = &dev->tstamp;
+ uint64_t ns;
+
+ if (*tstamp->tx_tstamp == 0)
+ return -EINVAL;
+
+ ns = rte_timecounter_update(&dev->tx_tstamp_tc, *tstamp->tx_tstamp);
+ *timestamp = rte_ns_to_timespec(ns);
+
+ otx2_nix_dbg("tx timestamp: %llu sec: %lu nsec %lu",
+ *(unsigned long long *)tstamp->tx_tstamp,
+ timestamp->tv_sec, timestamp->tv_nsec);
+
+ *tstamp->tx_tstamp = 0;
+ rte_wmb();
+
+ return 0;
+}
+
+int
+otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct ptp_req *req;
+ struct ptp_rsp *rsp;
+ int rc;
+
+	/* Adjust the frequency to make the tick increment 10^9 ticks per second */
+ if (delta < PTP_FREQ_ADJUST && delta > -PTP_FREQ_ADJUST) {
+ req = otx2_mbox_alloc_msg_ptp_op(mbox);
+ req->op = PTP_OP_ADJFINE;
+ req->scaled_ppm = delta;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+ }
+ dev->systime_tc.nsec += delta;
+ dev->rx_tstamp_tc.nsec += delta;
+ dev->tx_tstamp_tc.nsec += delta;
+
+ return 0;
+}
+
+int
+otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
+ const struct timespec *ts)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t ns;
+
+ ns = rte_timespec_to_ns(ts);
+ /* Set the time counters to a new value. */
+ dev->systime_tc.nsec = ns;
+ dev->rx_tstamp_tc.nsec = ns;
+ dev->tx_tstamp_tc.nsec = ns;
+
+ return 0;
+}
+
+int
+otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev, struct timespec *ts)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct ptp_req *req;
+ struct ptp_rsp *rsp;
+ uint64_t ns;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_ptp_op(mbox);
+ req->op = PTP_OP_GET_CLOCK;
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ ns = rte_timecounter_update(&dev->systime_tc, rsp->clk);
+ *ts = rte_ns_to_timespec(ns);
+
+ otx2_nix_dbg("PTP time read: %ld.%09ld", ts->tv_sec, ts->tv_nsec);
+
+ return 0;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
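Note: the timesync callbacks added above are reached through the
generic rte_ethdev timesync API. A minimal application-side sketch,
assuming an already started port; the port_id, the 1 us adjustment
and the printf formatting are illustrative, not part of the patch:

#include <stdio.h>
#include <rte_ethdev.h>

static void
ptp_usage_sketch(uint16_t port_id)
{
	struct timespec ts;

	/* Enable IEEE1588/PTP timestamping on the port */
	rte_eth_timesync_enable(port_id);

	/* Once an mbuf arrives with PKT_RX_IEEE1588_TMST set in
	 * ol_flags, fetch the latched Rx timestamp (the flags
	 * argument is unused by this PMD).
	 */
	if (rte_eth_timesync_read_rx_timestamp(port_id, &ts, 0) == 0)
		printf("Rx tstamp: %ld.%09ld\n",
		       (long)ts.tv_sec, ts.tv_nsec);

	/* Nudge the device clock by 1 us, then read it back */
	rte_eth_timesync_adjust_time(port_id, 1000);
	rte_eth_timesync_read_time(port_id, &ts);
}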
* [dpdk-dev] [PATCH v2 32/57] net/octeontx2: introducing flow driver
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (30 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 31/57] net/octeontx2: add remaining PTP operations jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 33/57] net/octeontx2: add flow utility functions jerinj
` (25 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Introduce the flow infrastructure for octeontx2.
It will be used to maintain rte_flow rules.
The create, destroy, validate, query, flush and isolate flow
operations will be supported.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
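Note: this header only declares otx2_flow_ops; the ethdev hook-up
lands later in the series. As a reading aid, a minimal sketch of how
a PMD of this DPDK era exposes such a table through the generic
filter-control callback; the function name is illustrative, not the
driver's:

static int
nix_filter_ctrl_sketch(struct rte_eth_dev *eth_dev,
		       enum rte_filter_type filter_type,
		       enum rte_filter_op filter_op, void *arg)
{
	RTE_SET_USED(eth_dev);

	if (filter_type != RTE_ETH_FILTER_GENERIC)
		return -ENOTSUP;

	if (filter_op != RTE_ETH_FILTER_GET)
		return -EINVAL;

	/* Hand back the create/destroy/validate/query/flush/isolate
	 * callback table declared in otx2_flow.h.
	 */
	*(const void **)arg = &otx2_flow_ops;
	return 0;
}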
drivers/net/octeontx2/otx2_flow.h | 388 ++++++++++++++++++++++++++++++
1 file changed, 388 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_flow.h
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
new file mode 100644
index 000000000..95bb6c2bf
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -0,0 +1,388 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_FLOW_H__
+#define __OTX2_FLOW_H__
+
+#include <stdint.h>
+
+#include <rte_flow_driver.h>
+#include <rte_malloc.h>
+#include <rte_tailq.h>
+
+#include "otx2_common.h"
+#include "otx2_ethdev.h"
+#include "otx2_mbox.h"
+
+int otx2_flow_init(struct otx2_eth_dev *hw);
+int otx2_flow_fini(struct otx2_eth_dev *hw);
+extern const struct rte_flow_ops otx2_flow_ops;
+
+enum {
+ OTX2_INTF_RX = 0,
+ OTX2_INTF_TX = 1,
+ OTX2_INTF_MAX = 2,
+};
+
+#define NPC_IH_LENGTH 8
+#define NPC_TPID_LENGTH 2
+#define NPC_COUNTER_NONE (-1)
+/* 32 bytes from LDATA_CFG & 32 bytes from FLAGS_CFG */
+#define NPC_MAX_EXTRACT_DATA_LEN (64)
+#define NPC_LDATA_LFLAG_LEN (16)
+#define NPC_MCAM_TOT_ENTRIES (4096)
+#define NPC_MAX_KEY_NIBBLES (31)
+/* Nibble offsets */
+#define NPC_LAYER_KEYX_SZ (3)
+#define NPC_PARSE_KEX_S_LA_OFFSET (7)
+#define NPC_PARSE_KEX_S_LID_OFFSET(lid) \
+ ((((lid) - NPC_LID_LA) * NPC_LAYER_KEYX_SZ) \
+ + NPC_PARSE_KEX_S_LA_OFFSET)
+
+
+/* supported flow actions flags */
+#define OTX2_FLOW_ACT_MARK (1 << 0)
+#define OTX2_FLOW_ACT_FLAG (1 << 1)
+#define OTX2_FLOW_ACT_DROP (1 << 2)
+#define OTX2_FLOW_ACT_QUEUE (1 << 3)
+#define OTX2_FLOW_ACT_RSS (1 << 4)
+#define OTX2_FLOW_ACT_DUP (1 << 5)
+#define OTX2_FLOW_ACT_SEC (1 << 6)
+#define OTX2_FLOW_ACT_COUNT (1 << 7)
+
+/* terminating actions */
+#define OTX2_FLOW_ACT_TERM (OTX2_FLOW_ACT_DROP | \
+ OTX2_FLOW_ACT_QUEUE | \
+ OTX2_FLOW_ACT_RSS | \
+ OTX2_FLOW_ACT_DUP | \
+ OTX2_FLOW_ACT_SEC)
+
+/* This mark value indicates flag action */
+#define OTX2_FLOW_FLAG_VAL (0xffff)
+
+#define NIX_RX_ACT_MATCH_OFFSET (40)
+#define NIX_RX_ACT_MATCH_MASK (0xFFFF)
+
+#define NIX_RSS_ACT_GRP_OFFSET (20)
+#define NIX_RSS_ACT_ALG_OFFSET (56)
+#define NIX_RSS_ACT_GRP_MASK (0xFFFFF)
+#define NIX_RSS_ACT_ALG_MASK (0x1F)
+
+/* PMD-specific definition of the opaque struct rte_flow */
+#define OTX2_MAX_MCAM_WIDTH_DWORDS 7
+
+enum npc_mcam_intf {
+ NPC_MCAM_RX,
+ NPC_MCAM_TX
+};
+
+struct npc_xtract_info {
+ /* Length in bytes of pkt data extracted. len = 0
+ * indicates that extraction is disabled.
+ */
+ uint8_t len;
+ uint8_t hdr_off; /* Byte offset of proto hdr: extract_src */
+ uint8_t key_off; /* Byte offset in MCAM key where data is placed */
+ uint8_t enable; /* Extraction enabled or disabled */
+};
+
+/* Information for a given {LAYER, LTYPE} */
+struct npc_lid_lt_xtract_info {
+ /* Info derived from parser configuration */
+ uint16_t npc_proto; /* Network protocol identified */
+ uint8_t valid_flags_mask; /* Flags applicable */
+ uint8_t is_terminating:1; /* No more parsing */
+ struct npc_xtract_info xtract[NPC_MAX_LD];
+};
+
+union npc_kex_ldata_flags_cfg {
+ struct {
+ #if defined(__BIG_ENDIAN_BITFIELD)
+ uint64_t rvsd_62_1 : 61;
+ uint64_t lid : 3;
+ #else
+ uint64_t lid : 3;
+ uint64_t rvsd_62_1 : 61;
+ #endif
+ } s;
+
+ uint64_t i;
+};
+
+typedef struct npc_lid_lt_xtract_info
+ otx2_dxcfg_t[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT];
+typedef struct npc_lid_lt_xtract_info
+ otx2_fxcfg_t[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
+typedef union npc_kex_ldata_flags_cfg otx2_ld_flags_t[NPC_MAX_LD];
+
+
+/* MBOX_MSG_NPC_GET_DATAX_CFG Response */
+struct npc_get_datax_cfg {
+ /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
+ union npc_kex_ldata_flags_cfg ld_flags[NPC_MAX_LD];
+ /* Extract information indexed with [LID][LTYPE] */
+ struct npc_lid_lt_xtract_info lid_lt_xtract[NPC_MAX_LID][NPC_MAX_LT];
+ /* Flags based extract indexed with [LDATA][FLAGS_LOWER_NIBBLE]
+ * Fields flags_ena_ld0, flags_ena_ld1 in
+ * struct npc_lid_lt_xtract_info indicate if this is applicable
+ * for a given {LAYER, LTYPE}
+ */
+ struct npc_xtract_info flag_xtract[NPC_MAX_LD][NPC_MAX_LT];
+};
+
+struct otx2_mcam_ents_info {
+ /* Current max & min values of mcam index */
+ uint32_t max_id;
+ uint32_t min_id;
+ uint32_t free_ent;
+ uint32_t live_ent;
+};
+
+struct rte_flow {
+ uint8_t nix_intf;
+ uint32_t mcam_id;
+ int32_t ctr_id;
+ uint32_t priority;
+ /* Contiguous match string */
+ uint64_t mcam_data[OTX2_MAX_MCAM_WIDTH_DWORDS];
+ uint64_t mcam_mask[OTX2_MAX_MCAM_WIDTH_DWORDS];
+ uint64_t npc_action;
+ TAILQ_ENTRY(rte_flow) next;
+};
+
+TAILQ_HEAD(otx2_flow_list, rte_flow);
+
+/* Accessed from ethdev private - otx2_eth_dev */
+struct otx2_npc_flow_info {
+ rte_atomic32_t mark_actions;
+ uint32_t keyx_supp_nmask[NPC_MAX_INTF]; /* nibble mask */
+ uint32_t keyx_len[NPC_MAX_INTF]; /* per intf key len in bits */
+ uint32_t datax_len[NPC_MAX_INTF]; /* per intf data len in bits */
+ uint32_t keyw[NPC_MAX_INTF]; /* max key + data len bits */
+ uint32_t mcam_entries; /* mcam entries supported */
+ otx2_dxcfg_t prx_dxcfg; /* intf, lid, lt, extract */
+ otx2_fxcfg_t prx_fxcfg; /* Flag extract */
+ otx2_ld_flags_t prx_lfcfg; /* KEX LD_Flags CFG */
+ /* mcam entry info per priority level: both free & in-use */
+ struct otx2_mcam_ents_info *flow_entry_info;
+ /* Bitmap of free preallocated entries in ascending index &
+ * descending priority
+ */
+ struct rte_bitmap **free_entries;
+ /* Bitmap of free preallocated entries in descending index &
+ * ascending priority
+ */
+ struct rte_bitmap **free_entries_rev;
+ /* Bitmap of live entries in ascending index & descending priority */
+ struct rte_bitmap **live_entries;
+ /* Bitmap of live entries in descending index & ascending priority */
+ struct rte_bitmap **live_entries_rev;
+ /* Priority bucket wise tail queue of all rte_flow resources */
+ struct otx2_flow_list *flow_list;
+ uint32_t rss_grps; /* rss groups supported */
+ struct rte_bitmap *rss_grp_entries;
+ uint16_t channel; /* Rx channel */
+ uint16_t flow_prealloc_size;
+ uint16_t flow_max_priority;
+};
+
+struct otx2_parse_state {
+ struct otx2_npc_flow_info *npc;
+ const struct rte_flow_item *pattern;
+ const struct rte_flow_item *last_pattern; /* Temp usage */
+ struct rte_flow_error *error;
+ struct rte_flow *flow;
+ uint8_t tunnel;
+ uint8_t terminate;
+ uint8_t layer_mask;
+ uint8_t lt[NPC_MAX_LID];
+ uint8_t flags[NPC_MAX_LID];
+ uint8_t *mcam_data; /* point to flow->mcam_data + key_len */
+ uint8_t *mcam_mask; /* point to flow->mcam_mask + key_len */
+};
+
+struct otx2_flow_item_info {
+ const void *def_mask; /* rte_flow default mask */
+ void *hw_mask; /* hardware supported mask */
+ int len; /* length of item */
+ const void *spec; /* spec to use, NULL implies match any */
+ const void *mask; /* mask to use */
+ uint8_t hw_hdr_len; /* Extra data len at each layer */
+};
+
+struct otx2_idev_kex_cfg {
+ struct npc_get_kex_cfg_rsp kex_cfg;
+ rte_atomic16_t kex_refcnt;
+};
+
+enum npc_kpu_parser_flag {
+ NPC_F_NA = 0,
+ NPC_F_PKI,
+ NPC_F_PKI_VLAN,
+ NPC_F_PKI_ETAG,
+ NPC_F_PKI_ITAG,
+ NPC_F_PKI_MPLS,
+ NPC_F_PKI_NSH,
+ NPC_F_ETYPE_UNK,
+ NPC_F_ETHER_VLAN,
+ NPC_F_ETHER_ETAG,
+ NPC_F_ETHER_ITAG,
+ NPC_F_ETHER_MPLS,
+ NPC_F_ETHER_NSH,
+ NPC_F_STAG_CTAG,
+ NPC_F_STAG_CTAG_UNK,
+ NPC_F_STAG_STAG_CTAG,
+ NPC_F_STAG_STAG_STAG,
+ NPC_F_QINQ_CTAG,
+ NPC_F_QINQ_CTAG_UNK,
+ NPC_F_QINQ_QINQ_CTAG,
+ NPC_F_QINQ_QINQ_QINQ,
+ NPC_F_BTAG_ITAG,
+ NPC_F_BTAG_ITAG_STAG,
+ NPC_F_BTAG_ITAG_CTAG,
+ NPC_F_BTAG_ITAG_UNK,
+ NPC_F_ETAG_CTAG,
+ NPC_F_ETAG_BTAG_ITAG,
+ NPC_F_ETAG_STAG,
+ NPC_F_ETAG_QINQ,
+ NPC_F_ETAG_ITAG,
+ NPC_F_ETAG_ITAG_STAG,
+ NPC_F_ETAG_ITAG_CTAG,
+ NPC_F_ETAG_ITAG_UNK,
+ NPC_F_ITAG_STAG_CTAG,
+ NPC_F_ITAG_STAG,
+ NPC_F_ITAG_CTAG,
+ NPC_F_MPLS_4_LABELS,
+ NPC_F_MPLS_3_LABELS,
+ NPC_F_MPLS_2_LABELS,
+ NPC_F_IP_HAS_OPTIONS,
+ NPC_F_IP_IP_IN_IP,
+ NPC_F_IP_6TO4,
+ NPC_F_IP_MPLS_IN_IP,
+ NPC_F_IP_UNK_PROTO,
+ NPC_F_IP_IP_IN_IP_HAS_OPTIONS,
+ NPC_F_IP_6TO4_HAS_OPTIONS,
+ NPC_F_IP_MPLS_IN_IP_HAS_OPTIONS,
+ NPC_F_IP_UNK_PROTO_HAS_OPTIONS,
+ NPC_F_IP6_HAS_EXT,
+ NPC_F_IP6_TUN_IP6,
+ NPC_F_IP6_MPLS_IN_IP,
+ NPC_F_TCP_HAS_OPTIONS,
+ NPC_F_TCP_HTTP,
+ NPC_F_TCP_HTTPS,
+ NPC_F_TCP_PPTP,
+ NPC_F_TCP_UNK_PORT,
+ NPC_F_TCP_HTTP_HAS_OPTIONS,
+ NPC_F_TCP_HTTPS_HAS_OPTIONS,
+ NPC_F_TCP_PPTP_HAS_OPTIONS,
+ NPC_F_TCP_UNK_PORT_HAS_OPTIONS,
+ NPC_F_UDP_VXLAN,
+ NPC_F_UDP_VXLAN_NOVNI,
+ NPC_F_UDP_VXLAN_NOVNI_NSH,
+ NPC_F_UDP_VXLANGPE,
+ NPC_F_UDP_VXLANGPE_NSH,
+ NPC_F_UDP_VXLANGPE_MPLS,
+ NPC_F_UDP_VXLANGPE_NOVNI,
+ NPC_F_UDP_VXLANGPE_NOVNI_NSH,
+ NPC_F_UDP_VXLANGPE_NOVNI_MPLS,
+ NPC_F_UDP_VXLANGPE_UNK,
+ NPC_F_UDP_VXLANGPE_NONP,
+ NPC_F_UDP_GTP_GTPC,
+ NPC_F_UDP_GTP_GTPU_G_PDU,
+ NPC_F_UDP_GTP_GTPU_UNK,
+ NPC_F_UDP_UNK_PORT,
+ NPC_F_UDP_GENEVE,
+ NPC_F_UDP_GENEVE_OAM,
+ NPC_F_UDP_GENEVE_CRI_OPT,
+ NPC_F_UDP_GENEVE_OAM_CRI_OPT,
+ NPC_F_GRE_NVGRE,
+ NPC_F_GRE_HAS_SRE,
+ NPC_F_GRE_HAS_CSUM,
+ NPC_F_GRE_HAS_KEY,
+ NPC_F_GRE_HAS_SEQ,
+ NPC_F_GRE_HAS_CSUM_KEY,
+ NPC_F_GRE_HAS_CSUM_SEQ,
+ NPC_F_GRE_HAS_KEY_SEQ,
+ NPC_F_GRE_HAS_CSUM_KEY_SEQ,
+ NPC_F_GRE_HAS_ROUTE,
+ NPC_F_GRE_UNK_PROTO,
+ NPC_F_GRE_VER1,
+ NPC_F_GRE_VER1_HAS_SEQ,
+ NPC_F_GRE_VER1_HAS_ACK,
+ NPC_F_GRE_VER1_HAS_SEQ_ACK,
+ NPC_F_GRE_VER1_UNK_PROTO,
+ NPC_F_TU_ETHER_UNK,
+ NPC_F_TU_ETHER_CTAG,
+ NPC_F_TU_ETHER_CTAG_UNK,
+ NPC_F_TU_ETHER_STAG_CTAG,
+ NPC_F_TU_ETHER_STAG_CTAG_UNK,
+ NPC_F_TU_ETHER_STAG,
+ NPC_F_TU_ETHER_STAG_UNK,
+ NPC_F_TU_ETHER_QINQ_CTAG,
+ NPC_F_TU_ETHER_QINQ_CTAG_UNK,
+ NPC_F_TU_ETHER_QINQ,
+ NPC_F_TU_ETHER_QINQ_UNK,
+ NPC_F_LAST /* has to be the last item */
+};
+
+
+int otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id);
+
+int otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
+ uint64_t *count);
+
+int otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id);
+
+int otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry);
+
+int otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox);
+
+int otx2_flow_update_parse_state(struct otx2_parse_state *pst,
+ struct otx2_flow_item_info *info,
+ int lid, int lt, uint8_t flags);
+
+int otx2_flow_parse_item_basic(const struct rte_flow_item *item,
+ struct otx2_flow_item_info *info,
+ struct rte_flow_error *error);
+
+void otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask);
+
+int otx2_flow_mcam_alloc_and_write(struct rte_flow *flow,
+ struct otx2_mbox *mbox,
+ struct otx2_parse_state *pst,
+ struct otx2_npc_flow_info *flow_info);
+
+void otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
+ struct otx2_flow_item_info *info,
+ int lid, int lt);
+
+const struct rte_flow_item *
+otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern);
+
+int otx2_flow_parse_lh(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_lg(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_lf(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_le(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_ld(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_lc(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_lb(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_la(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_actions(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct rte_flow *flow);
+
+int otx2_flow_free_all_resources(struct otx2_eth_dev *hw);
+
+int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid);
+#endif /* __OTX2_FLOW_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
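Note on the paired forward/reversed bitmaps declared in
struct otx2_npc_flow_info above: entry e in the ascending-index map
corresponds to bit (mcam_entries - e - 1) in the reversed map, so a
forward scan of the reversed map yields the highest index first. A
minimal sketch of the convention; the helper name is illustrative:

#include <rte_bitmap.h>

static void
mark_entry_live_sketch(struct otx2_npc_flow_info *fi, int prio,
		       uint32_t ent)
{
	/* Ascending index, i.e. descending priority */
	rte_bitmap_set(fi->live_entries[prio], ent);
	/* Mirrored bit for the descending-index view */
	rte_bitmap_set(fi->live_entries_rev[prio],
		       fi->mcam_entries - ent - 1);
}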
* [dpdk-dev] [PATCH v2 33/57] net/octeontx2: add flow utility functions
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (31 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 32/57] net/octeontx2: introducing flow driver jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 34/57] net/octeontx2: add flow mbox " jerinj
` (24 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add a first pass of rte_flow utility functions for octeontx2.
These will be used to communicate with the AF driver.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
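Note: every helper in this file follows the same AF mailbox idiom:
allocate a typed request on the mbox ring, fill it, send, then block
for the typed response. A condensed sketch of the idiom using the
counter-stats message, with the response checked before use (the
function name is illustrative):

static int
mbox_idiom_sketch(struct otx2_mbox *mbox, uint16_t ctr_id,
		  uint64_t *stat)
{
	struct npc_mcam_oper_counter_req *req;
	struct npc_mcam_oper_counter_rsp *rsp;
	int rc;

	/* Allocate a typed request on the mbox ring and fill it */
	req = otx2_mbox_alloc_msg_npc_mcam_counter_stats(mbox);
	req->cntr = ctr_id;

	/* Kick the AF and block for the typed response */
	otx2_mbox_msg_send(mbox, 0);
	rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
	if (rc)
		return rc;	/* rsp is only trusted on success */

	*stat = rsp->stat;
	return 0;
}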
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 7 +-
drivers/net/octeontx2/otx2_flow.h | 2 +
drivers/net/octeontx2/otx2_flow_utils.c | 387 ++++++++++++++++++++++++
5 files changed, 392 insertions(+), 6 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index a0155e727..8d1aeae3f 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -37,6 +37,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_lookup.c \
otx2_ethdev.c \
otx2_flow_ctrl.c \
+ otx2_flow_utils.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
otx2_ethdev_debug.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 2cac57d2b..75156ddbe 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -12,6 +12,7 @@ sources = files(
'otx2_lookup.c',
'otx2_ethdev.c',
'otx2_flow_ctrl.c',
+ 'otx2_flow_utils.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
'otx2_ethdev_debug.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 8f8d93a39..e8a22b6ec 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -17,6 +17,7 @@
#include "otx2_common.h"
#include "otx2_dev.h"
+#include "otx2_flow.h"
#include "otx2_irq.h"
#include "otx2_mempool.h"
#include "otx2_rx.h"
@@ -173,12 +174,6 @@ struct otx2_eth_qconf {
uint16_t nb_desc;
};
-struct otx2_npc_flow_info {
- uint16_t channel; /*rx channel */
- uint16_t flow_prealloc_size;
- uint16_t flow_max_priority;
-};
-
struct otx2_fc_info {
enum rte_eth_fc_mode mode; /**< Link flow control mode */
uint8_t rx_pause;
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
index 95bb6c2bf..f5cc3b983 100644
--- a/drivers/net/octeontx2/otx2_flow.h
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -15,6 +15,8 @@
#include "otx2_ethdev.h"
#include "otx2_mbox.h"
+struct otx2_eth_dev;
+
int otx2_flow_init(struct otx2_eth_dev *hw);
int otx2_flow_fini(struct otx2_eth_dev *hw);
extern const struct rte_flow_ops otx2_flow_ops;
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
new file mode 100644
index 000000000..6078a827b
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow_utils.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+int
+otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id)
+{
+ struct npc_mcam_oper_counter_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_free_counter(mbox);
+ req->cntr = ctr_id;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, NULL);
+
+ return rc;
+}
+
+int
+otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
+ uint64_t *count)
+{
+ struct npc_mcam_oper_counter_req *req;
+ struct npc_mcam_oper_counter_rsp *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_counter_stats(mbox);
+ req->cntr = ctr_id;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+
+ *count = rsp->stat;
+ return rc;
+}
+
+int
+otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id)
+{
+ struct npc_mcam_oper_counter_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_clear_counter(mbox);
+ req->cntr = ctr_id;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, NULL);
+
+ return rc;
+}
+
+int
+otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry)
+{
+ struct npc_mcam_free_entry_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ req->entry = entry;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, NULL);
+
+ return rc;
+}
+
+int
+otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox)
+{
+ struct npc_mcam_free_entry_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ req->all = 1;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, NULL);
+
+ return rc;
+}
+
+static void
+flow_prep_mcam_ldata(uint8_t *ptr, const uint8_t *data, int len)
+{
+ int idx;
+
+ for (idx = 0; idx < len; idx++)
+ ptr[idx] = data[len - 1 - idx];
+}
+
+static int
+flow_check_copysz(size_t size, size_t len)
+{
+ if (len <= size)
+ return len;
+ return -1;
+}
+
+static inline int
+flow_mem_is_zero(const void *mem, int len)
+{
+ const char *m = mem;
+ int i;
+
+ for (i = 0; i < len; i++) {
+ if (m[i] != 0)
+ return 0;
+ }
+ return 1;
+}
+
+void
+otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
+ struct otx2_flow_item_info *info, int lid, int lt)
+{
+ struct npc_xtract_info *xinfo;
+ char *hw_mask = info->hw_mask;
+ int max_off, offset;
+ int i, j;
+ int intf;
+
+ intf = pst->flow->nix_intf;
+ xinfo = pst->npc->prx_dxcfg[intf][lid][lt].xtract;
+ memset(hw_mask, 0, info->len);
+
+ for (i = 0; i < NPC_MAX_LD; i++) {
+ if (xinfo[i].hdr_off < info->hw_hdr_len)
+ continue;
+
+ max_off = xinfo[i].hdr_off + xinfo[i].len - info->hw_hdr_len;
+
+ if (xinfo[i].enable == 0)
+ continue;
+
+ if (max_off > info->len)
+ max_off = info->len;
+
+ offset = xinfo[i].hdr_off - info->hw_hdr_len;
+ for (j = offset; j < max_off; j++)
+ hw_mask[j] = 0xff;
+ }
+}
+
+int
+otx2_flow_update_parse_state(struct otx2_parse_state *pst,
+ struct otx2_flow_item_info *info, int lid, int lt,
+ uint8_t flags)
+{
+ uint8_t int_info_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ uint8_t int_info[NPC_MAX_EXTRACT_DATA_LEN];
+ struct npc_lid_lt_xtract_info *xinfo;
+ int len = 0;
+ int intf;
+ int i;
+
+ otx2_npc_dbg("Parse state function info mask total %s",
+ (const uint8_t *)info->mask);
+
+ pst->layer_mask |= lid;
+ pst->lt[lid] = lt;
+ pst->flags[lid] = flags;
+
+ intf = pst->flow->nix_intf;
+ xinfo = &pst->npc->prx_dxcfg[intf][lid][lt];
+ otx2_npc_dbg("Is_terminating = %d", xinfo->is_terminating);
+ if (xinfo->is_terminating)
+ pst->terminate = 1;
+
+ /* Flags would need to be validated here, but in the latest
+ * KPU profile flags are used as an enumeration; they cannot
+ * be validated unless the MBOX is extended to return the
+ * set of valid values out of the 2**8 possible ones.
+ */
+ if (info->spec == NULL) { /* Nothing to match */
+ otx2_npc_dbg("Info spec NULL");
+ goto done;
+ }
+
+ /* Copy spec and mask into mcam match string, mask.
+ * Since both RTE FLOW and OTX2 MCAM use network-endianness
+ * for data, we are saved from nasty conversions.
+ */
+ for (i = 0; i < NPC_MAX_LD; i++) {
+ struct npc_xtract_info *x;
+ int k, idx, hdr_off;
+
+ x = &xinfo->xtract[i];
+ len = x->len;
+ hdr_off = x->hdr_off;
+
+ if (hdr_off < info->hw_hdr_len)
+ continue;
+
+ if (x->enable == 0)
+ continue;
+
+ otx2_npc_dbg("x->hdr_off = %d, len = %d, info->len = %d,"
+ "x->key_off = %d", x->hdr_off, len, info->len,
+ x->key_off);
+
+ hdr_off -= info->hw_hdr_len;
+
+ if (hdr_off + len > info->len)
+ len = info->len - hdr_off;
+
+ /* Check for over-write of previous layer */
+ if (!flow_mem_is_zero(pst->mcam_mask + x->key_off,
+ len)) {
+ /* Cannot support this data match */
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ pst->pattern,
+ "Extraction unsupported");
+ return -rte_errno;
+ }
+
+ len = flow_check_copysz((OTX2_MAX_MCAM_WIDTH_DWORDS * 8)
+ - x->key_off,
+ len);
+ if (len < 0) {
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ pst->pattern,
+ "Internal Error");
+ return -rte_errno;
+ }
+
+ /* Need to reverse complete structure so that dest addr is at
+ * MSB so as to program the MCAM using mcam_data & mcam_mask
+ * arrays
+ */
+ flow_prep_mcam_ldata(int_info,
+ (const uint8_t *)info->spec + hdr_off,
+ x->len);
+ flow_prep_mcam_ldata(int_info_mask,
+ (const uint8_t *)info->mask + hdr_off,
+ x->len);
+
+ otx2_npc_dbg("Spec: ");
+ for (k = 0; k < info->len; k++)
+ otx2_npc_dbg("0x%.2x ",
+ ((const uint8_t *)info->spec)[k]);
+
+ otx2_npc_dbg("Int_info: ");
+ for (k = 0; k < info->len; k++)
+ otx2_npc_dbg("0x%.2x ", int_info[k]);
+
+ memcpy(pst->mcam_mask + x->key_off, int_info_mask, len);
+ memcpy(pst->mcam_data + x->key_off, int_info, len);
+
+ otx2_npc_dbg("Parse state mcam data & mask");
+ for (idx = 0; idx < len ; idx++)
+ otx2_npc_dbg("data[%d]: 0x%x, mask[%d]: 0x%x", idx,
+ *(pst->mcam_data + idx + x->key_off), idx,
+ *(pst->mcam_mask + idx + x->key_off));
+ }
+
+done:
+ /* Next pattern to parse by subsequent layers */
+ pst->pattern++;
+ return 0;
+}
+
+static inline int
+flow_range_is_valid(const char *spec, const char *last, const char *mask,
+ int len)
+{
+ /* Where last is non-zero, spec and last must agree under the
+ * mask, as non-contiguous ranges are not supported.
+ */
+ while (len--) {
+ if (last[len] &&
+ (spec[len] & mask[len]) != (last[len] & mask[len]))
+ return 0; /* False */
+ }
+ return 1;
+}
+
+
+static inline int
+flow_mask_is_supported(const char *mask, const char *hw_mask, int len)
+{
+ /*
+ * If no hw_mask, assume nothing is supported.
+ * mask is never NULL
+ */
+ if (hw_mask == NULL)
+ return flow_mem_is_zero(mask, len);
+
+ while (len--) {
+ if ((mask[len] | hw_mask[len]) != hw_mask[len])
+ return 0; /* False */
+ }
+ return 1;
+}
+
+int
+otx2_flow_parse_item_basic(const struct rte_flow_item *item,
+ struct otx2_flow_item_info *info,
+ struct rte_flow_error *error)
+{
+ /* Item must not be NULL */
+ if (item == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Item is NULL");
+ return -rte_errno;
+ }
+ /* If spec is NULL, both mask and last must be NULL; this
+ * makes the item match ANY value (equivalent to mask = 0).
+ * Setting either mask or last without spec is an error.
+ */
+ if (item->spec == NULL) {
+ if (item->last == NULL && item->mask == NULL) {
+ info->spec = NULL;
+ return 0;
+ }
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "mask or last set without spec");
+ return -rte_errno;
+ }
+
+ /* We have valid spec */
+ info->spec = item->spec;
+
+ /* If mask is not set, use default mask, err if default mask is
+ * also NULL.
+ */
+ if (item->mask == NULL) {
+ otx2_npc_dbg("Item mask null, using default mask");
+ if (info->def_mask == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "No mask or default mask given");
+ return -rte_errno;
+ }
+ info->mask = info->def_mask;
+ } else {
+ info->mask = item->mask;
+ }
+
+ /* The specified mask must be a subset of the HW-supported
+ * mask, i.e. mask | hw_mask == hw_mask.
+ */
+ if (!flow_mask_is_supported(info->mask, info->hw_mask, info->len)) {
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Unsupported field in the mask");
+ return -rte_errno;
+ }
+
+ /* Now we have spec and mask. OTX2 does not support non-contiguous
+ * range. We should have either:
+ * - spec & mask == last & mask or,
+ * - last == 0 or,
+ * - last == NULL
+ */
+ if (item->last != NULL && !flow_mem_is_zero(item->last, info->len)) {
+ if (!flow_range_is_valid(item->spec, item->last, info->mask,
+ info->len)) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Unsupported range for match");
+ return -rte_errno;
+ }
+ }
+
+ return 0;
+}
+
+void
+otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask)
+{
+ uint64_t cdata[2] = {0ULL, 0ULL}, nibble;
+ int i, j = 0;
+
+ for (i = 0; i < NPC_MAX_KEY_NIBBLES; i++) {
+ if (nibble_mask & (1 << i)) {
+ nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
+ cdata[j / 16] |= (nibble << ((j & 0xf) * 4));
+ j += 1;
+ }
+ }
+
+ data[0] = cdata[0];
+ data[1] = cdata[1];
+}
+
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
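Note: otx2_flow_parse_item_basic() above enforces the rte_flow
spec/mask/last rules; a range is only accepted when spec and last
agree under the mask (no non-contiguous ranges). An illustrative
item that passes the check, matching IPv4 TTL in the contiguous
range 0x10..0x1f:

#include <rte_flow.h>

static const struct rte_flow_item_ipv4 ttl_spec = {
	.hdr.time_to_live = 0x10,
};
static const struct rte_flow_item_ipv4 ttl_last = {
	.hdr.time_to_live = 0x1f,
};
static const struct rte_flow_item_ipv4 ttl_mask = {
	/* spec & mask == last & mask, so the range is contiguous */
	.hdr.time_to_live = 0xf0,
};

static const struct rte_flow_item ttl_range_item = {
	.type = RTE_FLOW_ITEM_TYPE_IPV4,
	.spec = &ttl_spec,
	.last = &ttl_last,
	.mask = &ttl_mask,
};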
* [dpdk-dev] [PATCH v2 34/57] net/octeontx2: add flow mbox utility functions
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (32 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 33/57] net/octeontx2: add flow utility functions jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 35/57] net/octeontx2: add flow MCAM " jerinj
` (23 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add mailbox utility functions for rte_flow. These will be used
to allocate, reserve and write the entries to the device on request.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
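Note: flow_first_set_bit() added below is an open-coded find-first-set
over a 64-bit bitmap slab; for any non-zero slab it matches the GCC
builtin. A minimal self-check, assuming visibility of the static
helper (the check function itself is illustrative):

#include <assert.h>
#include <stdint.h>

static void
first_set_bit_check(uint64_t slab)
{
	assert(slab != 0);	/* callers scan non-empty slabs only */
	assert(flow_first_set_bit(slab) == __builtin_ctzll(slab));
}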
drivers/net/octeontx2/otx2_flow.h | 6 +
drivers/net/octeontx2/otx2_flow_utils.c | 259 ++++++++++++++++++++++++
2 files changed, 265 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
index f5cc3b983..a37d86512 100644
--- a/drivers/net/octeontx2/otx2_flow.h
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -387,4 +387,10 @@ int otx2_flow_parse_actions(struct rte_eth_dev *dev,
int otx2_flow_free_all_resources(struct otx2_eth_dev *hw);
int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid);
+
+int
+flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info,
+ struct npc_mcam_alloc_entry_rsp *rsp,
+ int req_prio);
#endif /* __OTX2_FLOW_H__ */
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
index 6078a827b..c56a22ed1 100644
--- a/drivers/net/octeontx2/otx2_flow_utils.c
+++ b/drivers/net/octeontx2/otx2_flow_utils.c
@@ -385,3 +385,262 @@ otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask)
data[1] = cdata[1];
}
+static int
+flow_first_set_bit(uint64_t slab)
+{
+ int num = 0;
+
+ if ((slab & 0xffffffff) == 0) {
+ num += 32;
+ slab >>= 32;
+ }
+ if ((slab & 0xffff) == 0) {
+ num += 16;
+ slab >>= 16;
+ }
+ if ((slab & 0xff) == 0) {
+ num += 8;
+ slab >>= 8;
+ }
+ if ((slab & 0xf) == 0) {
+ num += 4;
+ slab >>= 4;
+ }
+ if ((slab & 0x3) == 0) {
+ num += 2;
+ slab >>= 2;
+ }
+ if ((slab & 0x1) == 0)
+ num += 1;
+
+ return num;
+}
+
+static int
+flow_shift_lv_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info,
+ uint32_t old_ent, uint32_t new_ent)
+{
+ struct npc_mcam_shift_entry_req *req;
+ struct npc_mcam_shift_entry_rsp *rsp;
+ struct otx2_flow_list *list;
+ struct rte_flow *flow_iter;
+ int rc = 0;
+
+ otx2_npc_dbg("Old ent:%u new ent:%u priority:%u", old_ent, new_ent,
+ flow->priority);
+
+ list = &flow_info->flow_list[flow->priority];
+
+ /* The old entry is disabled and its contents are moved to
+ * new_entry; the new entry is then enabled.
+ */
+ req = otx2_mbox_alloc_msg_npc_mcam_shift_entry(mbox);
+ req->curr_entry[0] = old_ent;
+ req->new_entry[0] = new_ent;
+ req->shift_count = 1;
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Remove old node from list */
+ TAILQ_FOREACH(flow_iter, list, next) {
+ if (flow_iter->mcam_id == old_ent)
+ TAILQ_REMOVE(list, flow_iter, next);
+ }
+
+ /* Insert node with new mcam id at right place */
+ TAILQ_FOREACH(flow_iter, list, next) {
+ if (flow_iter->mcam_id > new_ent)
+ TAILQ_INSERT_BEFORE(flow_iter, flow, next);
+ }
+ return rc;
+}
+
+/* Exchange all required entries with a given priority level */
+static int
+flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info,
+ struct npc_mcam_alloc_entry_rsp *rsp, int dir, int prio_lvl)
+{
+ struct rte_bitmap *fr_bmp, *fr_bmp_rev, *lv_bmp, *lv_bmp_rev, *bmp;
+ uint32_t e_fr = 0, e_lv = 0, e, e_id = 0, mcam_entries;
+ uint64_t fr_bit_pos = 0, lv_bit_pos = 0, bit_pos = 0;
+ /* Bit position within the slab */
+ uint32_t sl_fr_bit_off = 0, sl_lv_bit_off = 0;
+ /* Overall bit position of the start of slab */
+ /* free & live entry index */
+ int rc_fr = 0, rc_lv = 0, rc = 0, idx = 0;
+ struct otx2_mcam_ents_info *ent_info;
+ /* free & live bitmap slab */
+ uint64_t sl_fr = 0, sl_lv = 0, *sl;
+
+ fr_bmp = flow_info->free_entries[prio_lvl];
+ fr_bmp_rev = flow_info->free_entries_rev[prio_lvl];
+ lv_bmp = flow_info->live_entries[prio_lvl];
+ lv_bmp_rev = flow_info->live_entries_rev[prio_lvl];
+ ent_info = &flow_info->flow_entry_info[prio_lvl];
+ mcam_entries = flow_info->mcam_entries;
+
+
+ /* Newly allocated entries are always contiguous, but older
+ * entries already in the free/live bitmaps can be non-contiguous,
+ * so the shifted entries are returned via the entry list.
+ */
+ while (idx <= rsp->count) {
+ if (!sl_fr && !sl_lv) {
+ /* Lower index elements to be exchanged */
+ if (dir < 0) {
+ rc_fr = rte_bitmap_scan(fr_bmp, &e_fr, &sl_fr);
+ rc_lv = rte_bitmap_scan(lv_bmp, &e_lv, &sl_lv);
+ otx2_npc_dbg("Fwd slab rc fr %u rc lv %u "
+ "e_fr %u e_lv %u", rc_fr, rc_lv,
+ e_fr, e_lv);
+ } else {
+ rc_fr = rte_bitmap_scan(fr_bmp_rev,
+ &sl_fr_bit_off,
+ &sl_fr);
+ rc_lv = rte_bitmap_scan(lv_bmp_rev,
+ &sl_lv_bit_off,
+ &sl_lv);
+
+ otx2_npc_dbg("Rev slab rc fr %u rc lv %u "
+ "e_fr %u e_lv %u", rc_fr, rc_lv,
+ e_fr, e_lv);
+ }
+ }
+
+ if (rc_fr) {
+ fr_bit_pos = flow_first_set_bit(sl_fr);
+ e_fr = sl_fr_bit_off + fr_bit_pos;
+ otx2_npc_dbg("Fr_bit_pos 0x%" PRIx64, fr_bit_pos);
+ } else {
+ e_fr = ~(0);
+ }
+
+ if (rc_lv) {
+ lv_bit_pos = flow_first_set_bit(sl_lv);
+ e_lv = sl_lv_bit_off + lv_bit_pos;
+ otx2_npc_dbg("Lv_bit_pos 0x%" PRIx64, lv_bit_pos);
+ } else {
+ e_lv = ~(0);
+ }
+
+ /* First entry is from free_bmap */
+ if (e_fr < e_lv) {
+ bmp = fr_bmp;
+ e = e_fr;
+ sl = &sl_fr;
+ bit_pos = fr_bit_pos;
+ if (dir > 0)
+ e_id = mcam_entries - e - 1;
+ else
+ e_id = e;
+ otx2_npc_dbg("Fr e %u e_id %u", e, e_id);
+ } else {
+ bmp = lv_bmp;
+ e = e_lv;
+ sl = &sl_lv;
+ bit_pos = lv_bit_pos;
+ if (dir > 0)
+ e_id = mcam_entries - e - 1;
+ else
+ e_id = e;
+
+ otx2_npc_dbg("Lv e %u e_id %u", e, e_id);
+ if (idx < rsp->count)
+ rc =
+ flow_shift_lv_ent(mbox, flow,
+ flow_info, e_id,
+ rsp->entry + idx);
+ }
+
+ rte_bitmap_clear(bmp, e);
+ rte_bitmap_set(bmp, rsp->entry + idx);
+ /* Update entry list, use non-contiguous
+ * list now.
+ */
+ rsp->entry_list[idx] = e_id;
+ *sl &= ~(1ULL << bit_pos);
+
+ /* Update min & max entry identifiers in current
+ * priority level.
+ */
+ if (dir < 0) {
+ ent_info->max_id = rsp->entry + idx;
+ ent_info->min_id = e_id;
+ } else {
+ ent_info->max_id = e_id;
+ ent_info->min_id = rsp->entry;
+ }
+
+ idx++;
+ }
+ return rc;
+}
+
+/* Validate if newly allocated entries lie in the correct priority zone
+ * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy.
+ * If not properly aligned, shift entries to do so
+ */
+int
+flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info,
+ struct npc_mcam_alloc_entry_rsp *rsp,
+ int req_prio)
+{
+ int prio_idx = 0, rc = 0, needs_shift = 0, idx, prio = flow->priority;
+ struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
+ int dir = (req_prio == NPC_MCAM_HIGHER_PRIO) ? 1 : -1;
+ uint32_t tot_ent = 0;
+
+ otx2_npc_dbg("Dir %d, priority = %d", dir, prio);
+
+ if (dir < 0)
+ prio_idx = flow_info->flow_max_priority - 1;
+
+ /* Only live entries need to be shifted; free entries can just
+ * be moved by bit manipulation.
+ */
+
+ /* For dir = -1 (NPC_MCAM_LOWER_PRIO), when shifting,
+ * NPC_MAX_PREALLOC_ENT entries are exchanged with the adjoining
+ * higher priority level entries (lower indexes).
+ *
+ * For dir = +1 (NPC_MCAM_HIGHER_PRIO), when shifting,
+ * NPC_MAX_PREALLOC_ENT entries are exchanged with the adjoining
+ * lower priority level entries (higher indexes).
+ */
+ do {
+ tot_ent = info[prio_idx].free_ent + info[prio_idx].live_ent;
+
+ if (dir < 0 && prio_idx != prio &&
+ rsp->entry > info[prio_idx].max_id && tot_ent) {
+ otx2_npc_dbg("Rsp entry %u prio idx %u "
+ "max id %u", rsp->entry, prio_idx,
+ info[prio_idx].max_id);
+
+ needs_shift = 1;
+ } else if ((dir > 0) && (prio_idx != prio) &&
+ (rsp->entry < info[prio_idx].min_id) && tot_ent) {
+ otx2_npc_dbg("Rsp entry %u prio idx %u "
+ "min id %u", rsp->entry, prio_idx,
+ info[prio_idx].min_id);
+ needs_shift = 1;
+ }
+
+ otx2_npc_dbg("Needs_shift = %d", needs_shift);
+ if (needs_shift) {
+ needs_shift = 0;
+ rc = flow_shift_ent(mbox, flow, flow_info, rsp, dir,
+ prio_idx);
+ } else {
+ for (idx = 0; idx < rsp->count; idx++)
+ rsp->entry_list[idx] = rsp->entry + idx;
+ }
+ } while ((prio_idx != prio) && (prio_idx += dir));
+
+ return rc;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 35/57] net/octeontx2: add flow MCAM utility functions
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (33 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 34/57] net/octeontx2: add flow mbox " jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 36/57] net/octeontx2: add flow parsing for outer layers jerinj
` (22 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add MCAM utility functions to allocate and write the entries.
These will be used to arrange the flow rules based on priority.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
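Note: the expected call sequence for programming one rule, as a
hedged sketch. It assumes a parse stage has already filled
flow->mcam_data, flow->mcam_mask and flow->npc_action (added later
in the series), and that the ethdev private structure embeds the
flow info as npc_flow (an assumption, not shown in this patch):

static int
program_rule_sketch(struct otx2_eth_dev *dev, struct rte_flow *flow,
		    struct otx2_parse_state *pst)
{
	int rc;

	/* Takes an entry from the per-priority preallocated cache
	 * (refilling it via the AF when empty), shifts neighbouring
	 * priority zones if misaligned, then writes key, mask and
	 * action, optionally binding a counter.
	 */
	rc = otx2_flow_mcam_alloc_and_write(flow, dev->mbox, pst,
					    &dev->npc_flow);
	if (rc)
		return rc;

	/* flow->mcam_id (and flow->ctr_id, if used) are now valid */
	return 0;
}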
drivers/net/octeontx2/otx2_flow.h | 6 -
drivers/net/octeontx2/otx2_flow_utils.c | 266 +++++++++++++++++++++++-
2 files changed, 265 insertions(+), 7 deletions(-)
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
index a37d86512..f5cc3b983 100644
--- a/drivers/net/octeontx2/otx2_flow.h
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -387,10 +387,4 @@ int otx2_flow_parse_actions(struct rte_eth_dev *dev,
int otx2_flow_free_all_resources(struct otx2_eth_dev *hw);
int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid);
-
-int
-flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- struct npc_mcam_alloc_entry_rsp *rsp,
- int req_prio);
#endif /* __OTX2_FLOW_H__ */
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
index c56a22ed1..8a0fe7615 100644
--- a/drivers/net/octeontx2/otx2_flow_utils.c
+++ b/drivers/net/octeontx2/otx2_flow_utils.c
@@ -5,6 +5,22 @@
#include "otx2_ethdev.h"
#include "otx2_flow.h"
+static int
+flow_mcam_alloc_counter(struct otx2_mbox *mbox, uint16_t *ctr)
+{
+ struct npc_mcam_alloc_counter_req *req;
+ struct npc_mcam_alloc_counter_rsp *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_alloc_counter(mbox);
+ req->count = 1;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+
+ *ctr = rsp->cntr_list[0];
+ return rc;
+}
+
int
otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id)
{
@@ -585,7 +601,7 @@ flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
* since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy.
* If not properly aligned, shift entries to do so
*/
-int
+static int
flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
struct otx2_npc_flow_info *flow_info,
struct npc_mcam_alloc_entry_rsp *rsp,
@@ -644,3 +660,251 @@ flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
return rc;
}
+
+static int
+flow_find_ref_entry(struct otx2_npc_flow_info *flow_info, int *prio,
+ int prio_lvl)
+{
+ struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
+ int step = 1;
+
+ while (step < flow_info->flow_max_priority) {
+ if (((prio_lvl + step) < flow_info->flow_max_priority) &&
+ info[prio_lvl + step].live_ent) {
+ *prio = NPC_MCAM_HIGHER_PRIO;
+ return info[prio_lvl + step].min_id;
+ }
+
+ if (((prio_lvl - step) >= 0) &&
+ info[prio_lvl - step].live_ent) {
+ otx2_npc_dbg("Prio_lvl %u live %u", prio_lvl - step,
+ info[prio_lvl - step].live_ent);
+ *prio = NPC_MCAM_LOWER_PRIO;
+ return info[prio_lvl - step].max_id;
+ }
+ step++;
+ }
+ *prio = NPC_MCAM_ANY_PRIO;
+ return 0;
+}
+
+static int
+flow_fill_entry_cache(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info, uint32_t *free_ent)
+{
+ struct rte_bitmap *free_bmp, *free_bmp_rev, *live_bmp, *live_bmp_rev;
+ struct npc_mcam_alloc_entry_rsp rsp_local;
+ struct npc_mcam_alloc_entry_rsp *rsp_cmd;
+ struct npc_mcam_alloc_entry_req *req;
+ struct npc_mcam_alloc_entry_rsp *rsp;
+ struct otx2_mcam_ents_info *info;
+ uint16_t ref_ent, idx;
+ int rc, prio;
+
+ info = &flow_info->flow_entry_info[flow->priority];
+ free_bmp = flow_info->free_entries[flow->priority];
+ free_bmp_rev = flow_info->free_entries_rev[flow->priority];
+ live_bmp = flow_info->live_entries[flow->priority];
+ live_bmp_rev = flow_info->live_entries_rev[flow->priority];
+
+ ref_ent = flow_find_ref_entry(flow_info, &prio, flow->priority);
+
+ req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
+ req->contig = 1;
+ req->count = flow_info->flow_prealloc_size;
+ req->priority = prio;
+ req->ref_entry = ref_ent;
+
+ otx2_npc_dbg("Fill cache ref entry %u prio %u", ref_ent, prio);
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp_cmd);
+ if (rc)
+ return rc;
+
+ rsp = &rsp_local;
+ memcpy(rsp, rsp_cmd, sizeof(*rsp));
+
+ otx2_npc_dbg("Alloc entry %u count %u , prio = %d", rsp->entry,
+ rsp->count, prio);
+
+ /* Non-first ent cache fill */
+ if (prio != NPC_MCAM_ANY_PRIO) {
+ flow_validate_and_shift_prio_ent(mbox, flow, flow_info, rsp,
+ prio);
+ } else {
+ /* Copy into response entry list */
+ for (idx = 0; idx < rsp->count; idx++)
+ rsp->entry_list[idx] = rsp->entry + idx;
+ }
+
+ otx2_npc_dbg("Fill entry cache rsp count %u", rsp->count);
+ /* Update free entries, reverse free entries list,
+ * min & max entry ids.
+ */
+ for (idx = 0; idx < rsp->count; idx++) {
+ if (unlikely(rsp->entry_list[idx] < info->min_id))
+ info->min_id = rsp->entry_list[idx];
+
+ if (unlikely(rsp->entry_list[idx] > info->max_id))
+ info->max_id = rsp->entry_list[idx];
+
+ /* Skip entry to be returned, not to be part of free
+ * list.
+ */
+ if (prio == NPC_MCAM_HIGHER_PRIO) {
+ if (unlikely(idx == (rsp->count - 1))) {
+ *free_ent = rsp->entry_list[idx];
+ continue;
+ }
+ } else {
+ if (unlikely(!idx)) {
+ *free_ent = rsp->entry_list[idx];
+ continue;
+ }
+ }
+ info->free_ent++;
+ rte_bitmap_set(free_bmp, rsp->entry_list[idx]);
+ rte_bitmap_set(free_bmp_rev, flow_info->mcam_entries -
+ rsp->entry_list[idx] - 1);
+
+ otx2_npc_dbg("Final rsp entry %u rsp entry rev %u",
+ rsp->entry_list[idx],
+ flow_info->mcam_entries - rsp->entry_list[idx] - 1);
+ }
+
+ otx2_npc_dbg("Cache free entry %u, rev = %u", *free_ent,
+ flow_info->mcam_entries - *free_ent - 1);
+ info->live_ent++;
+ rte_bitmap_set(live_bmp, *free_ent);
+ rte_bitmap_set(live_bmp_rev, flow_info->mcam_entries - *free_ent - 1);
+
+ return 0;
+}
+
+static int
+flow_check_preallocated_entry_cache(struct otx2_mbox *mbox,
+ struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info)
+{
+ struct rte_bitmap *free, *free_rev, *live, *live_rev;
+ uint32_t pos = 0, free_ent = 0, mcam_entries;
+ struct otx2_mcam_ents_info *info;
+ uint64_t slab = 0;
+ int rc;
+
+ otx2_npc_dbg("Flow priority %u", flow->priority);
+
+ info = &flow_info->flow_entry_info[flow->priority];
+
+ free_rev = flow_info->free_entries_rev[flow->priority];
+ free = flow_info->free_entries[flow->priority];
+ live_rev = flow_info->live_entries_rev[flow->priority];
+ live = flow_info->live_entries[flow->priority];
+ mcam_entries = flow_info->mcam_entries;
+
+ if (info->free_ent) {
+ rc = rte_bitmap_scan(free, &pos, &slab);
+ if (rc) {
+ /* Get free_ent from free entry bitmap */
+ free_ent = pos + __builtin_ctzll(slab);
+ otx2_npc_dbg("Allocated from cache entry %u", free_ent);
+ /* Remove from free bitmaps and add to live ones */
+ rte_bitmap_clear(free, free_ent);
+ rte_bitmap_set(live, free_ent);
+ rte_bitmap_clear(free_rev,
+ mcam_entries - free_ent - 1);
+ rte_bitmap_set(live_rev,
+ mcam_entries - free_ent - 1);
+
+ info->free_ent--;
+ info->live_ent++;
+ return free_ent;
+ }
+
+ otx2_npc_dbg("No free entry:its a mess");
+ return -1;
+ }
+
+ rc = flow_fill_entry_cache(mbox, flow, flow_info, &free_ent);
+ if (rc)
+ return rc;
+
+ return free_ent;
+}
+
+int
+otx2_flow_mcam_alloc_and_write(struct rte_flow *flow, struct otx2_mbox *mbox,
+ __rte_unused struct otx2_parse_state *pst,
+ struct otx2_npc_flow_info *flow_info)
+{
+ int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1);
+ struct npc_mcam_write_entry_req *req;
+ struct mbox_msghdr *rsp;
+ uint16_t ctr = ~(0);
+ int rc, idx;
+ int entry;
+
+ if (use_ctr) {
+ rc = flow_mcam_alloc_counter(mbox, &ctr);
+ if (rc)
+ return rc;
+ }
+
+ entry = flow_check_preallocated_entry_cache(mbox, flow, flow_info);
+ if (entry < 0) {
+ otx2_err("Prealloc failed");
+ if (use_ctr)
+ otx2_flow_mcam_free_counter(mbox, ctr);
+ return NPC_MCAM_ALLOC_FAILED;
+ }
+ req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
+ req->set_cntr = use_ctr;
+ req->cntr = ctr;
+ req->entry = entry;
+ otx2_npc_dbg("Alloc & write entry %u", entry);
+
+ req->intf =
+ (flow->nix_intf == OTX2_INTF_RX) ? NPC_MCAM_RX : NPC_MCAM_TX;
+ req->enable_entry = 1;
+ req->entry_data.action = flow->npc_action;
+
+ /*
+ * DPDK sets vtag action on per interface basis, not
+ * per flow basis. It is a matter of how we decide to support
+ * this pmd specific behavior. There are two ways:
+ * 1. Inherit the vtag action from the one configured
+ * for this interface. This can be read from the
+ * vtag_action configured for default mcam entry of
+ * this pf_func.
+ * 2. Do not support vtag action with rte_flow.
+ *
+ * Second approach is used now.
+ */
+ req->entry_data.vtag_action = 0ULL;
+
+ for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
+ req->entry_data.kw[idx] = flow->mcam_data[idx];
+ req->entry_data.kw_mask[idx] = flow->mcam_mask[idx];
+ }
+
+ if (flow->nix_intf == OTX2_INTF_RX) {
+ req->entry_data.kw[0] |= flow_info->channel;
+ req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
+ } else {
+ uint16_t pf_func = (flow->npc_action >> 4) & 0xffff;
+
+ pf_func = htons(pf_func);
+ req->entry_data.kw[0] |= ((uint64_t)pf_func << 32);
+ req->entry_data.kw_mask[0] |= ((uint64_t)0xffff << 32);
+ }
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+ if (rc != 0)
+ return rc;
+
+ flow->mcam_id = entry;
+ if (use_ctr)
+ flow->ctr_id = ctr;
+ return 0;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v2 36/57] net/octeontx2: add flow parsing for outer layers
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (34 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 35/57] net/octeontx2: add flow MCAM " jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 37/57] net/octeontx2: add flow actions support jerinj
` (21 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add functionality to parse the outer layers, from LD to LH.
These will be used to parse the outer L2, L3 and L4 layers and
the tunnel types.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
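Note: each parser below consumes zero or more pattern items and
advances pst->pattern. A condensed sketch of the driving loop that a
later patch in the series adds; the real loop also skips VOID items
and honours pst->terminate, so this is illustrative only:

typedef int (*flow_parse_stage_t)(struct otx2_parse_state *pst);

static int
parse_pattern_sketch(struct otx2_parse_state *pst)
{
	const flow_parse_stage_t stages[] = {
		otx2_flow_parse_la,	/* outer Ether */
		otx2_flow_parse_lb,	/* VLAN/QinQ */
		otx2_flow_parse_lc,	/* outer IPv4/IPv6/MPLS/ARP */
		otx2_flow_parse_ld,	/* outer L4, GRE, ESP, ... */
		otx2_flow_parse_le,	/* UDP-encapsulated tunnels */
		otx2_flow_parse_lf,	/* tunneled Ether */
		otx2_flow_parse_lg,	/* tunneled IP */
		otx2_flow_parse_lh,	/* tunneled L4 */
	};
	unsigned int i;
	int rc;

	for (i = 0; i < RTE_DIM(stages); i++) {
		if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_END)
			break;	/* whole pattern consumed */
		rc = stages[i](pst);
		if (rc)
			return rc;
	}
	return 0;
}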
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_flow_parse.c | 459 ++++++++++++++++++++++++
3 files changed, 461 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_flow_parse.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 8d1aeae3f..3eb4dba53 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -37,6 +37,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_lookup.c \
otx2_ethdev.c \
otx2_flow_ctrl.c \
+ otx2_flow_parse.c \
otx2_flow_utils.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 75156ddbe..f608c4947 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -12,6 +12,7 @@ sources = files(
'otx2_lookup.c',
'otx2_ethdev.c',
'otx2_flow_ctrl.c',
+ 'otx2_flow_parse.c',
'otx2_flow_utils.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
new file mode 100644
index 000000000..d27a24833
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -0,0 +1,459 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+const struct rte_flow_item *
+otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern)
+{
+ while ((pattern->type == RTE_FLOW_ITEM_TYPE_VOID) ||
+ (pattern->type == RTE_FLOW_ITEM_TYPE_ANY))
+ pattern++;
+
+ return pattern;
+}
+
+/*
+ * Tunnel+ESP, Tunnel+ICMP4/6, Tunnel+TCP, Tunnel+UDP,
+ * Tunnel+SCTP
+ */
+int
+otx2_flow_parse_lh(struct otx2_parse_state *pst)
+{
+ struct otx2_flow_item_info info;
+ char hw_mask[64];
+ int lid, lt;
+ int rc;
+
+ if (!pst->tunnel)
+ return 0;
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+ lid = NPC_LID_LH;
+
+ switch (pst->pattern->type) {
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ lt = NPC_LT_LH_TU_UDP;
+ info.def_mask = &rte_flow_item_udp_mask;
+ info.len = sizeof(struct rte_flow_item_udp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ lt = NPC_LT_LH_TU_TCP;
+ info.def_mask = &rte_flow_item_tcp_mask;
+ info.len = sizeof(struct rte_flow_item_tcp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ lt = NPC_LT_LH_TU_SCTP;
+ info.def_mask = &rte_flow_item_sctp_mask;
+ info.len = sizeof(struct rte_flow_item_sctp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_ESP:
+ lt = NPC_LT_LH_TU_ESP;
+ info.def_mask = &rte_flow_item_esp_mask;
+ info.len = sizeof(struct rte_flow_item_esp);
+ break;
+ default:
+ return 0;
+ }
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+/* Tunnel+IPv4, Tunnel+IPv6 */
+int
+otx2_flow_parse_lg(struct otx2_parse_state *pst)
+{
+ struct otx2_flow_item_info info;
+ char hw_mask[64];
+ int lid, lt;
+ int rc;
+
+ if (!pst->tunnel)
+ return 0;
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+ lid = NPC_LID_LG;
+
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+ lt = NPC_LT_LG_TU_IP;
+ info.def_mask = &rte_flow_item_ipv4_mask;
+ info.len = sizeof(struct rte_flow_item_ipv4);
+ } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+ lt = NPC_LT_LG_TU_IP6;
+ info.def_mask = &rte_flow_item_ipv6_mask;
+ info.len = sizeof(struct rte_flow_item_ipv6);
+ } else {
+ /* There is no tunneled IP header */
+ return 0;
+ }
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+/* Tunnel+Ether */
+int
+otx2_flow_parse_lf(struct otx2_parse_state *pst)
+{
+ const struct rte_flow_item *pattern, *last_pattern;
+ struct rte_flow_item_eth hw_mask;
+ struct otx2_flow_item_info info;
+ int lid, lt, lflags;
+ int nr_vlans = 0;
+ int rc;
+
+ /* We hit this layer if there is a tunneling protocol */
+ if (!pst->tunnel)
+ return 0;
+
+ if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
+ return 0;
+
+ lid = NPC_LID_LF;
+ lt = NPC_LT_LF_TU_ETHER;
+ lflags = 0;
+
+ info.def_mask = &rte_flow_item_vlan_mask;
+ /* No match support for vlan tags */
+ info.hw_mask = NULL;
+ info.len = sizeof(struct rte_flow_item_vlan);
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+
+ /* Look ahead and find out any VLAN tags. These can be
+ * detected but no data matching is available.
+ */
+ last_pattern = pst->pattern;
+ pattern = pst->pattern + 1;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+ nr_vlans++;
+ rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+ last_pattern = pattern;
+ pattern++;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ }
+ otx2_npc_dbg("Nr_vlans = %d", nr_vlans);
+ switch (nr_vlans) {
+ case 0:
+ break;
+ case 1:
+ lflags = NPC_F_TU_ETHER_CTAG;
+ break;
+ case 2:
+ lflags = NPC_F_TU_ETHER_STAG_CTAG;
+ break;
+ default:
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ last_pattern,
+ "more than 2 vlans with tunneled Ethernet "
+ "not supported");
+ return -rte_errno;
+ }
+
+ info.def_mask = &rte_flow_item_eth_mask;
+ info.hw_mask = &hw_mask;
+ info.len = sizeof(struct rte_flow_item_eth);
+ info.hw_hdr_len = 0;
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ pst->pattern = last_pattern;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+int
+otx2_flow_parse_le(struct otx2_parse_state *pst)
+{
+ /*
+ * We are positioned at UDP. Scan ahead and look for
+ * UDP encapsulated tunnel protocols. If available,
+ * parse them. In that case handle this:
+ * - RTE spec assumes we point to tunnel header.
+ * - NPC parser provides offset from UDP header.
+ */
+
+ /*
+ * Note: Add support to GENEVE, VXLAN_GPE when we
+ * upgrade DPDK
+ *
+ * Note: Better to split flags into two nibbles:
+ * - Higher nibble can have flags
+ * - Lower nibble to further enumerate protocols
+ * and have flags based extraction
+ */
+ const struct rte_flow_item *pattern = pst->pattern;
+ struct otx2_flow_item_info info;
+ int lid, lt, lflags;
+ char hw_mask[64];
+ int rc;
+
+ if (pst->tunnel)
+ return 0;
+
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
+ return otx2_flow_parse_mpls(pst, NPC_LID_LE);
+
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_mask = NULL;
+ info.def_mask = NULL;
+ info.len = 0;
+ info.hw_hdr_len = 0;
+ lid = NPC_LID_LE;
+ lflags = 0;
+
+ /* Ensure we are not matching anything in UDP */
+ rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
+ if (rc)
+ return rc;
+
+ info.hw_mask = &hw_mask;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ otx2_npc_dbg("Pattern->type = %d", pattern->type);
+ switch (pattern->type) {
+ case RTE_FLOW_ITEM_TYPE_VXLAN:
+ lflags = NPC_F_UDP_VXLAN;
+ info.def_mask = &rte_flow_item_vxlan_mask;
+ info.len = sizeof(struct rte_flow_item_vxlan);
+ lt = NPC_LT_LE_VXLAN;
+ break;
+ case RTE_FLOW_ITEM_TYPE_GTPC:
+ lflags = NPC_F_UDP_GTP_GTPC;
+ info.def_mask = &rte_flow_item_gtp_mask;
+ info.len = sizeof(struct rte_flow_item_gtp);
+ lt = NPC_LT_LE_GTPC;
+ break;
+ case RTE_FLOW_ITEM_TYPE_GTPU:
+ lflags = NPC_F_UDP_GTP_GTPU_G_PDU;
+ info.def_mask = &rte_flow_item_gtp_mask;
+ info.len = sizeof(struct rte_flow_item_gtp);
+ lt = NPC_LT_LE_GTPU;
+ break;
+ default:
+ return 0;
+ }
+
+ pst->tunnel = 1;
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+static int
+flow_parse_mpls_label_stack(struct otx2_parse_state *pst, int *flag)
+{
+ int nr_labels = 0;
+ const struct rte_flow_item *pattern = pst->pattern;
+ struct otx2_flow_item_info info;
+ int rc;
+ uint8_t flag_list[] = {0, NPC_F_MPLS_2_LABELS,
+ NPC_F_MPLS_3_LABELS, NPC_F_MPLS_4_LABELS};
+
+ /*
+ * pst->pattern points to first MPLS label. We only check
+ * that subsequent labels do not have anything to match.
+ */
+ info.def_mask = &rte_flow_item_mpls_mask;
+ info.hw_mask = NULL;
+ info.len = sizeof(struct rte_flow_item_mpls);
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+
+ while (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) {
+ nr_labels++;
+
+ /* Basic validation of 2nd/3rd/4th mpls item */
+ if (nr_labels > 1) {
+ rc = otx2_flow_parse_item_basic(pattern, &info,
+ pst->error);
+ if (rc != 0)
+ return rc;
+ }
+ pst->last_pattern = pattern;
+ pattern++;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ }
+
+ if (nr_labels > 4) {
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ pst->last_pattern,
+ "more than 4 mpls labels not supported");
+ return -rte_errno;
+ }
+
+ *flag = flag_list[nr_labels - 1];
+ return 0;
+}
+
+int
+otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid)
+{
+ /* Find number of MPLS labels */
+ struct rte_flow_item_mpls hw_mask;
+ struct otx2_flow_item_info info;
+ int lt, lflags;
+ int rc;
+
+ lflags = 0;
+
+ if (lid == NPC_LID_LC)
+ lt = NPC_LT_LC_MPLS;
+ else if (lid == NPC_LID_LD)
+ lt = NPC_LT_LD_TU_MPLS_IN_IP;
+ else
+ lt = NPC_LT_LE_TU_MPLS_IN_UDP;
+
+ /* Prepare for parsing the first item */
+ info.def_mask = &rte_flow_item_mpls_mask;
+ info.hw_mask = &hw_mask;
+ info.len = sizeof(struct rte_flow_item_mpls);
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ /*
+ * Parse for more labels.
+ * This sets lflags and pst->last_pattern correctly.
+ */
+ rc = flow_parse_mpls_label_stack(pst, &lflags);
+ if (rc != 0)
+ return rc;
+
+ pst->tunnel = 1;
+ pst->pattern = pst->last_pattern;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+/*
+ * ICMP, ICMP6, UDP, TCP, SCTP, VXLAN, GRE, NVGRE,
+ * GTP, GTPC, GTPU, ESP
+ *
+ * Note: UDP tunnel protocols are identified by flags.
+ * LPTR for these protocol still points to UDP
+ * header. Need flag based extraction to support
+ * this.
+ */
+int
+otx2_flow_parse_ld(struct otx2_parse_state *pst)
+{
+ char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ struct otx2_flow_item_info info;
+ int lid, lt, lflags;
+ int rc;
+
+ if (pst->tunnel) {
+ /* We have already parsed MPLS or IPv4/v6 followed
+ * by MPLS or IPv4/v6. Subsequent TCP/UDP etc
+ * would be parsed as tunneled versions. Skip
+ * this layer, except for tunneled MPLS. If LC is
+ * MPLS, we have anyway skipped all stacked MPLS
+ * labels.
+ */
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
+ return otx2_flow_parse_mpls(pst, NPC_LID_LD);
+ return 0;
+ }
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.def_mask = NULL;
+ info.len = 0;
+ info.hw_hdr_len = 0;
+
+ lid = NPC_LID_LD;
+ lflags = 0;
+
+ otx2_npc_dbg("Pst->pattern->type = %d", pst->pattern->type);
+ switch (pst->pattern->type) {
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ if (pst->lt[NPC_LID_LC] == NPC_LT_LC_IP6)
+ lt = NPC_LT_LD_ICMP6;
+ else
+ lt = NPC_LT_LD_ICMP;
+ info.def_mask = &rte_flow_item_icmp_mask;
+ info.len = sizeof(struct rte_flow_item_icmp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ lt = NPC_LT_LD_UDP;
+ info.def_mask = &rte_flow_item_udp_mask;
+ info.len = sizeof(struct rte_flow_item_udp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ lt = NPC_LT_LD_TCP;
+ info.def_mask = &rte_flow_item_tcp_mask;
+ info.len = sizeof(struct rte_flow_item_tcp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ lt = NPC_LT_LD_SCTP;
+ info.def_mask = &rte_flow_item_sctp_mask;
+ info.len = sizeof(struct rte_flow_item_sctp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_ESP:
+ lt = NPC_LT_LD_ESP;
+ info.def_mask = &rte_flow_item_esp_mask;
+ info.len = sizeof(struct rte_flow_item_esp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_GRE:
+ lt = NPC_LT_LD_GRE;
+ info.def_mask = &rte_flow_item_gre_mask;
+ info.len = sizeof(struct rte_flow_item_gre);
+ break;
+ case RTE_FLOW_ITEM_TYPE_NVGRE:
+ lt = NPC_LT_LD_GRE;
+ lflags = NPC_F_GRE_NVGRE;
+ info.def_mask = &rte_flow_item_nvgre_mask;
+ info.len = sizeof(struct rte_flow_item_nvgre);
+ /* Further IP/Ethernet are parsed as tunneled */
+ pst->tunnel = 1;
+ break;
+ default:
+ return 0;
+ }
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
--
2.21.0
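For context, a minimal application-side sketch (not part of the patch series) of an rte_flow pattern that the MPLS label-stack parser above accepts: up to four stacked MPLS items, with match criteria allowed on the first label only. The label value is an arbitrary assumption for illustration.

#include <rte_flow.h>

/* First label carries the match (label 16, S-bit set). The second item
 * only conveys "one more label is present" and must have no spec/mask,
 * matching the check in flow_parse_mpls_label_stack(). */
static const struct rte_flow_item_mpls mpls0 = {
	.label_tc_s = { 0x00, 0x01, 0x01 },	/* label=16, tc=0, s=1 */
};

static const struct rte_flow_item mpls_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_MPLS, .spec = &mpls0,
	  .mask = &rte_flow_item_mpls_mask },	/* default mask: label only */
	{ .type = RTE_FLOW_ITEM_TYPE_MPLS },	/* 2nd label: no match fields */
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};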
* [dpdk-dev] [PATCH v2 37/57] net/octeontx2: add flow actions support
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (35 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 36/57] net/octeontx2: add flow parsing for outer layers jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 38/57] net/octeontx2: add flow parse " jerinj
` (20 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add support for parsing flow actions like drop, count, mark, RSS and queue.
On the egress side, only drop and count actions are supported.
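The diff below adds the layer parsers otx2_flow_parse_la (Ethernet), otx2_flow_parse_lb (VLAN/E-TAG) and otx2_flow_parse_lc (IP/ARP, with MPLS handed off to otx2_flow_parse_mpls). As a hypothetical illustration of the input they walk, an application-side pattern could look like this (VLAN ID chosen arbitrarily):

#include <rte_byteorder.h>
#include <rte_flow.h>

/* ETH is consumed by LA, the single VLAN (CTAG) by LB, IPv4 by LC. */
static const struct rte_flow_item_vlan vlan_spec = {
	.tci = RTE_BE16(100),			/* match VLAN ID 100 */
};

static const struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_VLAN, .spec = &vlan_spec,
	  .mask = &rte_flow_item_vlan_mask },	/* default mask: VID only */
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};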
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow_parse.c | 210 ++++++++++++++++++++++++
1 file changed, 210 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index d27a24833..79c60a9ea 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -457,3 +457,213 @@ otx2_flow_parse_ld(struct otx2_parse_state *pst)
return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
}
+
+static inline void
+flow_check_lc_ip_tunnel(struct otx2_parse_state *pst)
+{
+ const struct rte_flow_item *pattern = pst->pattern + 1;
+
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ if (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS ||
+ pattern->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+ pattern->type == RTE_FLOW_ITEM_TYPE_IPV6)
+ pst->tunnel = 1;
+}
+
+/* Outer IPv4, Outer IPv6, MPLS, ARP */
+int
+otx2_flow_parse_lc(struct otx2_parse_state *pst)
+{
+ uint8_t hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ struct otx2_flow_item_info info;
+ int lid, lt;
+ int rc;
+
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
+ return otx2_flow_parse_mpls(pst, NPC_LID_LC);
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+ lid = NPC_LID_LC;
+
+ switch (pst->pattern->type) {
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ lt = NPC_LT_LC_IP;
+ info.def_mask = &rte_flow_item_ipv4_mask;
+ info.len = sizeof(struct rte_flow_item_ipv4);
+ break;
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ lid = NPC_LID_LC;
+ lt = NPC_LT_LC_IP6;
+ info.def_mask = &rte_flow_item_ipv6_mask;
+ info.len = sizeof(struct rte_flow_item_ipv6);
+ break;
+ case RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4:
+ lt = NPC_LT_LC_ARP;
+ info.def_mask = &rte_flow_item_arp_eth_ipv4_mask;
+ info.len = sizeof(struct rte_flow_item_arp_eth_ipv4);
+ break;
+ default:
+ /* No match at this layer */
+ return 0;
+ }
+
+ /* Identify whether this IP header tunnels MPLS or IPv4/v6 */
+ flow_check_lc_ip_tunnel(pst);
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+/* VLAN, ETAG */
+int
+otx2_flow_parse_lb(struct otx2_parse_state *pst)
+{
+ const struct rte_flow_item *pattern = pst->pattern;
+ const struct rte_flow_item *last_pattern;
+ char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ struct otx2_flow_item_info info;
+ int lid, lt, lflags;
+ int nr_vlans = 0;
+ int rc;
+
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = NPC_TPID_LENGTH;
+
+ lid = NPC_LID_LB;
+ lflags = 0;
+ last_pattern = pattern;
+
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+ /* An RTE VLAN is either 802.1Q or 802.1AD, which
+ * maps to CTAG/STAG respectively. We decide based
+ * on the number of VLANs present. Matching is
+ * supported on the first tag only.
+ */
+ info.def_mask = &rte_flow_item_vlan_mask;
+ info.hw_mask = NULL;
+ info.len = sizeof(struct rte_flow_item_vlan);
+
+ pattern = pst->pattern;
+ while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+ nr_vlans++;
+
+ /* Basic validation of 2nd/3rd vlan item */
+ if (nr_vlans > 1) {
+ otx2_npc_dbg("Vlans = %d", nr_vlans);
+ rc = otx2_flow_parse_item_basic(pattern, &info,
+ pst->error);
+ if (rc != 0)
+ return rc;
+ }
+ last_pattern = pattern;
+ pattern++;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ }
+
+ switch (nr_vlans) {
+ case 1:
+ lt = NPC_LT_LB_CTAG;
+ break;
+ case 2:
+ lt = NPC_LT_LB_STAG;
+ lflags = NPC_F_STAG_CTAG;
+ break;
+ case 3:
+ lt = NPC_LT_LB_STAG;
+ lflags = NPC_F_STAG_STAG_CTAG;
+ break;
+ default:
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ last_pattern,
+ "more than 3 vlans not supported");
+ return -rte_errno;
+ }
+ } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_E_TAG) {
+ /* We can support E-TAG and allow a subsequent CTAG,
+ * but matching on the CTAG fields is not supported.
+ */
+ lt = NPC_LT_LB_ETAG;
+ lflags = 0;
+
+ last_pattern = pst->pattern;
+ pattern = otx2_flow_skip_void_and_any_items(pst->pattern + 1);
+ if (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+ info.def_mask = &rte_flow_item_vlan_mask;
+ /* set supported mask to NULL for vlan tag */
+ info.hw_mask = NULL;
+ info.len = sizeof(struct rte_flow_item_vlan);
+ rc = otx2_flow_parse_item_basic(pattern, &info,
+ pst->error);
+ if (rc != 0)
+ return rc;
+
+ lflags = NPC_F_ETAG_CTAG;
+ last_pattern = pattern;
+ }
+
+ info.def_mask = &rte_flow_item_e_tag_mask;
+ info.len = sizeof(struct rte_flow_item_e_tag);
+ } else {
+ return 0;
+ }
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ /* Point pattern to last item consumed */
+ pst->pattern = last_pattern;
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+int
+otx2_flow_parse_la(struct otx2_parse_state *pst)
+{
+ struct rte_flow_item_eth hw_mask;
+ struct otx2_flow_item_info info;
+ int lid, lt;
+ int rc;
+
+ /* Identify the pattern type into lid, lt */
+ if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
+ return 0;
+
+ lid = NPC_LID_LA;
+ lt = NPC_LT_LA_ETHER;
+ info.hw_hdr_len = 0;
+
+ if (pst->flow->nix_intf == NIX_INTF_TX) {
+ lt = NPC_LT_LA_IH_NIX_ETHER;
+ info.hw_hdr_len = NPC_IH_LENGTH;
+ }
+
+ /* Prepare for parsing the item */
+ info.def_mask = &rte_flow_item_eth_mask;
+ info.hw_mask = &hw_mask;
+ info.len = sizeof(struct rte_flow_item_eth);
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ /* Basic validation of item parameters */
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc)
+ return rc;
+
+ /* TODO: update pst only when not in validate-only mode? Clash check? */
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
+}
--
2.21.0
* [dpdk-dev] [PATCH v2 38/57] net/octeontx2: add flow parse actions support
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (36 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 37/57] net/octeontx2: add flow actions support jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 39/57] net/octeontx2: add flow operations jerinj
` (19 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add support for parsing flow actions like drop, count, mark, RSS and queue.
On the egress side, only drop and count actions are supported.
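As a hypothetical application-side illustration of an ingress action list this parser accepts (one terminating action, at most one of MARK/FLAG, COUNT freely combinable; all values arbitrary):

#include <rte_flow.h>

/* MARK tags matched packets, QUEUE terminates the rule, COUNT attaches
 * an MCAM counter. */
static const struct rte_flow_action_mark mark = { .id = 0x1234 };
static const struct rte_flow_action_queue queue = { .index = 2 };
static const struct rte_flow_action_count count = { .shared = 0, .id = 0 };

static const struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};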
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow_parse.c | 276 ++++++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 1 +
2 files changed, 277 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 79c60a9ea..4cf5ce17e 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -667,3 +667,279 @@ otx2_flow_parse_la(struct otx2_parse_state *pst)
/* TODO: update pst only when not in validate-only mode? Clash check? */
return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
}
+
+static int
+parse_rss_action(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action *act,
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct otx2_rss_info *rss_info = &hw->rss_info;
+ const struct rte_flow_action_rss *rss;
+ uint32_t i;
+
+ rss = (const struct rte_flow_action_rss *)act->conf;
+
+ /* Not supported */
+ if (attr->egress) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+ attr, "RSS is not supported on egress");
+ }
+
+ if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "multi-queue mode is disabled");
+
+ /* Parse RSS related parameters from configuration */
+ if (!rss || !rss->queue_num)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "no valid queues");
+
+ if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "non-default RSS hash functions"
+ " are not supported");
+
+ if (rss->key_len && rss->key_len > RTE_DIM(rss_info->key))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "RSS hash key too large");
+
+ if (rss->queue_num > rss_info->rss_size)
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "too many queues for RSS context");
+
+ for (i = 0; i < rss->queue_num; i++) {
+ if (rss->queue[i] >= dev->data->nb_rx_queues)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act,
+ "queue id > max number"
+ " of queues");
+ }
+
+ return 0;
+}
+
+int
+otx2_flow_parse_actions(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct rte_flow *flow)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ const struct rte_flow_action_count *act_count;
+ const struct rte_flow_action_mark *act_mark;
+ const struct rte_flow_action_queue *act_q;
+ const char *errmsg = NULL;
+ int sel_act, req_act = 0;
+ uint16_t pf_func;
+ int errcode = 0;
+ int mark = 0;
+ int rq = 0;
+
+ /* Initialize actions */
+ flow->ctr_id = NPC_COUNTER_NONE;
+
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ otx2_npc_dbg("Action type = %d", actions->type);
+
+ switch (actions->type) {
+ case RTE_FLOW_ACTION_TYPE_VOID:
+ break;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ act_mark =
+ (const struct rte_flow_action_mark *)actions->conf;
+
+ /* We have only 16 bits. Use highest val for flag */
+ if (act_mark->id > (OTX2_FLOW_FLAG_VAL - 2)) {
+ errmsg = "mark value must be < 0xfffe";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ mark = act_mark->id + 1;
+ req_act |= OTX2_FLOW_ACT_MARK;
+ rte_atomic32_inc(&npc->mark_actions);
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_FLAG:
+ mark = OTX2_FLOW_FLAG_VAL;
+ req_act |= OTX2_FLOW_ACT_FLAG;
+ rte_atomic32_inc(&npc->mark_actions);
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_COUNT:
+ act_count =
+ (const struct rte_flow_action_count *)
+ actions->conf;
+
+ if (act_count->shared == 1) {
+ errmsg = "Shared Counters not supported";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ /* Indicates that a counter is needed */
+ flow->ctr_id = 1;
+ req_act |= OTX2_FLOW_ACT_COUNT;
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_DROP:
+ req_act |= OTX2_FLOW_ACT_DROP;
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ /* Applicable only to ingress flow */
+ act_q = (const struct rte_flow_action_queue *)
+ actions->conf;
+ rq = act_q->index;
+ if (rq >= dev->data->nb_rx_queues) {
+ errmsg = "invalid queue index";
+ errcode = EINVAL;
+ goto err_exit;
+ }
+ req_act |= OTX2_FLOW_ACT_QUEUE;
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ errcode = parse_rss_action(dev, attr, actions, error);
+ if (errcode)
+ return -rte_errno;
+
+ req_act |= OTX2_FLOW_ACT_RSS;
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_SECURITY:
+ /* Assumes user has already configured security
+ * session for this flow. Associated conf is
+ * opaque. When RTE security is implemented for otx2,
+ * we need to verify that for specified security
+ * session:
+ * action_type ==
+ * RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL &&
+ * session_protocol ==
+ * RTE_SECURITY_PROTOCOL_IPSEC
+ *
+ * RSS is not supported with inline ipsec. Get the
+ * rq from associated conf, or make
+ * RTE_FLOW_ACTION_TYPE_QUEUE compulsory with this
+ * action.
+ * Currently, rq = 0 is assumed.
+ */
+ req_act |= OTX2_FLOW_ACT_SEC;
+ rq = 0;
+ break;
+ default:
+ errmsg = "Unsupported action specified";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ }
+
+ /* Check if actions specified are compatible */
+ if (attr->egress) {
+ /* Only DROP/COUNT is supported */
+ if (!(req_act & OTX2_FLOW_ACT_DROP)) {
+ errmsg = "DROP is required action for egress";
+ errcode = EINVAL;
+ goto err_exit;
+ } else if (req_act & ~(OTX2_FLOW_ACT_DROP |
+ OTX2_FLOW_ACT_COUNT)) {
+ errmsg = "Unsupported action specified";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ flow->npc_action = NIX_TX_ACTIONOP_DROP;
+ goto set_pf_func;
+ }
+
+ /* We have already verified the attr; this is ingress.
+ * - Exactly one terminating action is supported
+ * - At most one of MARK or FLAG is supported
+ * - If the terminating action is DROP, only COUNT is valid.
+ */
+ sel_act = req_act & OTX2_FLOW_ACT_TERM;
+ if ((sel_act & (sel_act - 1)) != 0) {
+ errmsg = "Only one terminating action supported";
+ errcode = EINVAL;
+ goto err_exit;
+ }
+
+ if (req_act & OTX2_FLOW_ACT_DROP) {
+ sel_act = req_act & ~OTX2_FLOW_ACT_COUNT;
+ if ((sel_act & (sel_act - 1)) != 0) {
+ errmsg = "Only COUNT action is supported "
+ "with DROP ingress action";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ }
+
+ if ((req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK))
+ == (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
+ errmsg = "Only one of FLAG or MARK action is supported";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+
+ /* Set NIX_RX_ACTIONOP */
+ if (req_act & OTX2_FLOW_ACT_DROP) {
+ flow->npc_action = NIX_RX_ACTIONOP_DROP;
+ } else if (req_act & OTX2_FLOW_ACT_QUEUE) {
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+ flow->npc_action |= (uint64_t)rq << 20;
+ } else if (req_act & OTX2_FLOW_ACT_RSS) {
+ /* When the user adds a rule for RSS, we first add the
+ * rule to the MCAM and then update the action once the
+ * FLOW_KEY_ALG index is available. Until the action is
+ * updated with the flow_key_alg index, set it to drop.
+ */
+ if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ flow->npc_action = NIX_RX_ACTIONOP_DROP;
+ else
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+ } else if (req_act & OTX2_FLOW_ACT_SEC) {
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST_IPSEC;
+ flow->npc_action |= (uint64_t)rq << 20;
+ } else if (req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+ } else if (req_act & OTX2_FLOW_ACT_COUNT) {
+ /* Keep OTX2_FLOW_ACT_COUNT always at the end.
+ * This is the default action when the user
+ * specifies only the COUNT action.
+ */
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+ } else {
+ /* Should never reach here */
+ errmsg = "Invalid action specified";
+ errcode = EINVAL;
+ goto err_exit;
+ }
+
+ if (mark)
+ flow->npc_action |= (uint64_t)mark << 40;
+
+ if (rte_atomic32_read(&npc->mark_actions) == 1)
+ hw->rx_offload_flags |=
+ NIX_RX_OFFLOAD_MARK_UPDATE_F;
+
+set_pf_func:
+ /* Ideally AF must ensure that correct pf_func is set */
+ pf_func = otx2_pfvf_func(hw->pf, hw->vf);
+ flow->npc_action |= (uint64_t)pf_func << 4;
+
+ return 0;
+
+err_exit:
+ rte_flow_error_set(error, errcode,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
+ errmsg);
+ return -rte_errno;
+}
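A sketch consolidating how the shifts above pack flow->npc_action. The bit positions come straight from the code; the helper name and the assumption that the opcode occupies the low 4 bits are illustrative only:

#include <stdint.h>

/* Illustrative only: mirrors the packing done in otx2_flow_parse_actions(). */
static inline uint64_t
pack_rx_action(uint64_t op, uint16_t pf_func, uint32_t rq, uint16_t mark)
{
	return op |			/* NIX_RX_ACTIONOP_* opcode */
	       (uint64_t)pf_func << 4 |	/* PF/VF function */
	       (uint64_t)rq << 20 |	/* receive queue (or RSS alg/group,
					 * patched in later for RSS rules) */
	       (uint64_t)mark << 40;	/* mark/match id delivered to SW */
}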
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 0c3627c12..db79451b9 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -13,6 +13,7 @@
sizeof(uint16_t))
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
#define NIX_TIMESYNC_RX_OFFSET 8
--
2.21.0
* [dpdk-dev] [PATCH v2 39/57] net/octeontx2: add flow operations
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (37 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 38/57] net/octeontx2: add flow parse " jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 40/57] net/octeontx2: add flow destroy ops support jerinj
` (18 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add the initial flow ops, flow_create and flow_validate. These are
used to allocate and write a flow rule to the device and to validate
a flow rule.
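A minimal application-side sketch of how these ops are reached through the generic API (assuming pattern/actions arrays such as those sketched for the previous patches):

#include <stddef.h>
#include <rte_flow.h>

static struct rte_flow *
install_rule(uint16_t port_id, const struct rte_flow_item pattern[],
	     const struct rte_flow_action actions[])
{
	struct rte_flow_attr attr = { .ingress = 1, .priority = 0 };
	struct rte_flow_error err;

	/* Dry-run the parser first; create then programs the MCAM entry */
	if (rte_flow_validate(port_id, &attr, pattern, actions, &err) != 0)
		return NULL;
	return rte_flow_create(port_id, &attr, pattern, actions, &err);
}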
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_flow.c | 451 ++++++++++++++++++++++++++++++
3 files changed, 453 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_flow.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 3eb4dba53..21559f631 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -32,6 +32,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_rss.c \
otx2_mac.c \
otx2_ptp.c \
+ otx2_flow.c \
otx2_link.c \
otx2_stats.c \
otx2_lookup.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index f608c4947..f0e03bffe 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -7,6 +7,7 @@ sources = files(
'otx2_rss.c',
'otx2_mac.c',
'otx2_ptp.c',
+ 'otx2_flow.c',
'otx2_link.c',
'otx2_stats.c',
'otx2_lookup.c',
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
new file mode 100644
index 000000000..896aef00a
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -0,0 +1,451 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+static int
+flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox,
+ struct otx2_npc_flow_info *flow_info)
+{
+ /* This is the non-LDATA part of the search key */
+ uint64_t key_data[2] = {0ULL, 0ULL};
+ uint64_t key_mask[2] = {0ULL, 0ULL};
+ int intf = pst->flow->nix_intf;
+ int key_len, bit = 0, index;
+ int off, idx, data_off = 0;
+ uint8_t lid, mask, data;
+ uint16_t layer_info;
+ uint64_t lt, flags;
+
+
+ /* Skip till Layer A data start */
+ while (bit < NPC_PARSE_KEX_S_LA_OFFSET) {
+ if (flow_info->keyx_supp_nmask[intf] & (1 << bit))
+ data_off++;
+ bit++;
+ }
+
+ /* Each bit represents 1 nibble */
+ data_off *= 4;
+
+ index = 0;
+ for (lid = 0; lid < NPC_MAX_LID; lid++) {
+ /* Offset in key */
+ off = NPC_PARSE_KEX_S_LID_OFFSET(lid);
+ lt = pst->lt[lid] & 0xf;
+ flags = pst->flags[lid] & 0xff;
+
+ /* NPC_LAYER_KEX_S */
+ layer_info = ((flow_info->keyx_supp_nmask[intf] >> off) & 0x7);
+
+ if (layer_info) {
+ for (idx = 0; idx <= 2 ; idx++) {
+ if (layer_info & (1 << idx)) {
+ if (idx == 2)
+ data = lt;
+ else if (idx == 1)
+ data = ((flags >> 4) & 0xf);
+ else
+ data = (flags & 0xf);
+
+ if (data_off >= 64) {
+ data_off = 0;
+ index++;
+ }
+ key_data[index] |= ((uint64_t)data <<
+ data_off);
+ mask = 0xf;
+ if (lt == 0)
+ mask = 0;
+ key_mask[index] |= ((uint64_t)mask <<
+ data_off);
+ data_off += 4;
+ }
+ }
+ }
+ }
+
+ otx2_npc_dbg("Npc prog key data0: 0x%" PRIx64 ", data1: 0x%" PRIx64,
+ key_data[0], key_data[1]);
+
+ /* Copy this into mcam string */
+ key_len = (pst->npc->keyx_len[intf] + 7) / 8;
+ otx2_npc_dbg("Key_len = %d", key_len);
+ memcpy(pst->flow->mcam_data, key_data, key_len);
+ memcpy(pst->flow->mcam_mask, key_mask, key_len);
+
+ otx2_npc_dbg("Final flow data");
+ for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
+ otx2_npc_dbg("data[%d]: 0x%" PRIx64 ", mask[%d]: 0x%" PRIx64,
+ idx, pst->flow->mcam_data[idx],
+ idx, pst->flow->mcam_mask[idx]);
+ }
+
+ /*
+ * Now we have the MCAM data and mask formatted as
+ * [Key_len/4 nibbles][0 or 1 nibble hole][data];
+ * the hole is present if key_len is an odd number of nibbles.
+ * The MCAM data must be split into 64-bit + 48-bit segments
+ * for each bank's W0, W1.
+ */
+
+ return otx2_flow_mcam_alloc_and_write(pst->flow, mbox, pst, flow_info);
+}
+
+static int
+flow_parse_attr(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error,
+ struct rte_flow *flow)
+{
+ struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+ const char *errmsg = NULL;
+
+ if (attr == NULL)
+ errmsg = "Attribute can't be empty";
+ else if (attr->group)
+ errmsg = "Groups are not supported";
+ else if (attr->priority >= dev->npc_flow.flow_max_priority)
+ errmsg = "Priority should be within the specified range";
+ else if ((!attr->egress && !attr->ingress) ||
+ (attr->egress && attr->ingress))
+ errmsg = "Exactly one of ingress or egress must be set";
+
+ if (errmsg != NULL) {
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR,
+ attr, errmsg);
+ return -ENOTSUP;
+ }
+
+ if (attr->ingress)
+ flow->nix_intf = OTX2_INTF_RX;
+ else
+ flow->nix_intf = OTX2_INTF_TX;
+
+ flow->priority = attr->priority;
+ return 0;
+}
+
+static inline int
+flow_get_free_rss_grp(struct rte_bitmap *bmap,
+ uint32_t size, uint32_t *pos)
+{
+ for (*pos = 0; *pos < size; ++*pos) {
+ if (!rte_bitmap_get(bmap, *pos))
+ break;
+ }
+
+ return *pos < size ? 0 : -1;
+}
+
+static int
+flow_configure_rss_action(struct otx2_eth_dev *dev,
+ const struct rte_flow_action_rss *rss,
+ uint8_t *alg_idx, uint32_t *rss_grp,
+ int mcam_index)
+{
+ struct otx2_npc_flow_info *flow_info = &dev->npc_flow;
+ uint16_t reta[NIX_RSS_RETA_SIZE_MAX];
+ uint32_t flowkey_cfg, grp_aval, i;
+ uint16_t *ind_tbl = NULL;
+ uint8_t flowkey_algx;
+ int rc;
+
+ rc = flow_get_free_rss_grp(flow_info->rss_grp_entries,
+ flow_info->rss_grps, &grp_aval);
+ /* RSS group 0 is not usable for the flow RSS action */
+ if (rc < 0 || grp_aval == 0)
+ return -ENOSPC;
+
+ *rss_grp = grp_aval;
+
+ otx2_nix_rss_set_key(dev, (uint8_t *)(uintptr_t)rss->key,
+ rss->key_len);
+
+ /* If the queue count passed in the RSS action is less than
+ * the HW-configured RETA size, replicate the RSS action RETA
+ * across the HW RETA table.
+ */
+ if (dev->rss_info.rss_size > rss->queue_num) {
+ ind_tbl = reta;
+
+ for (i = 0; i < (dev->rss_info.rss_size / rss->queue_num); i++)
+ memcpy(reta + i * rss->queue_num, rss->queue,
+ sizeof(uint16_t) * rss->queue_num);
+
+ i = dev->rss_info.rss_size % rss->queue_num;
+ if (i)
+ memcpy(&reta[dev->rss_info.rss_size] - i,
+ rss->queue, i * sizeof(uint16_t));
+ } else {
+ ind_tbl = (uint16_t *)(uintptr_t)rss->queue;
+ }
+
+ rc = otx2_nix_rss_tbl_init(dev, *rss_grp, ind_tbl);
+ if (rc) {
+ otx2_err("Failed to init rss table rc = %d", rc);
+ return rc;
+ }
+
+ flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss->types, rss->level);
+
+ rc = otx2_rss_set_hf(dev, flowkey_cfg, &flowkey_algx,
+ *rss_grp, mcam_index);
+ if (rc) {
+ otx2_err("Failed to set rss hash function rc = %d", rc);
+ return rc;
+ }
+
+ *alg_idx = flowkey_algx;
+
+ rte_bitmap_set(flow_info->rss_grp_entries, *rss_grp);
+
+ return 0;
+}
+
+
+static int
+flow_program_rss_action(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_action actions[],
+ struct rte_flow *flow)
+{
+ struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+ const struct rte_flow_action_rss *rss;
+ uint32_t rss_grp;
+ uint8_t alg_idx;
+ int rc;
+
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ if (actions->type == RTE_FLOW_ACTION_TYPE_RSS) {
+ rss = (const struct rte_flow_action_rss *)actions->conf;
+
+ rc = flow_configure_rss_action(dev,
+ rss, &alg_idx, &rss_grp,
+ flow->mcam_id);
+ if (rc)
+ return rc;
+
+ flow->npc_action |=
+ ((uint64_t)(alg_idx & NIX_RSS_ACT_ALG_MASK) <<
+ NIX_RSS_ACT_ALG_OFFSET) |
+ ((uint64_t)(rss_grp & NIX_RSS_ACT_GRP_MASK) <<
+ NIX_RSS_ACT_GRP_OFFSET);
+ }
+ }
+ return 0;
+}
+
+static int
+flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst)
+{
+ otx2_npc_dbg("Meta Item");
+ return 0;
+}
+
+/*
+ * Parse function of each layer:
+ * - Consume one or more patterns that are relevant.
+ * - Update parse_state
+ * - Set parse_state.pattern = last item consumed
+ * - Set appropriate error code/message when returning error.
+ */
+typedef int (*flow_parse_stage_func_t)(struct otx2_parse_state *pst);
+
+static int
+flow_parse_pattern(struct rte_eth_dev *dev,
+ const struct rte_flow_item pattern[],
+ struct rte_flow_error *error,
+ struct rte_flow *flow,
+ struct otx2_parse_state *pst)
+{
+ flow_parse_stage_func_t parse_stage_funcs[] = {
+ flow_parse_meta_items,
+ otx2_flow_parse_la,
+ otx2_flow_parse_lb,
+ otx2_flow_parse_lc,
+ otx2_flow_parse_ld,
+ otx2_flow_parse_le,
+ otx2_flow_parse_lf,
+ otx2_flow_parse_lg,
+ otx2_flow_parse_lh,
+ };
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ uint8_t layer = 0;
+ int key_offset;
+ int rc;
+
+ if (pattern == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_NUM, NULL,
+ "pattern is NULL");
+ return -EINVAL;
+ }
+
+ memset(pst, 0, sizeof(*pst));
+ pst->npc = &hw->npc_flow;
+ pst->error = error;
+ pst->flow = flow;
+
+ /* Use integral byte offset */
+ key_offset = pst->npc->keyx_len[flow->nix_intf];
+ key_offset = (key_offset + 7) / 8;
+
+ /* Location where LDATA would begin */
+ pst->mcam_data = (uint8_t *)flow->mcam_data;
+ pst->mcam_mask = (uint8_t *)flow->mcam_mask;
+
+ while (pattern->type != RTE_FLOW_ITEM_TYPE_END &&
+ layer < RTE_DIM(parse_stage_funcs)) {
+ otx2_npc_dbg("Pattern type = %d", pattern->type);
+
+ /* Skip place-holders */
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+
+ pst->pattern = pattern;
+ otx2_npc_dbg("Is tunnel = %d, layer = %d", pst->tunnel, layer);
+ rc = parse_stage_funcs[layer](pst);
+ if (rc != 0)
+ return -rte_errno;
+
+ layer++;
+
+ /*
+ * Parse stage function sets pst->pattern to
+ * 1 past the last item it consumed.
+ */
+ pattern = pst->pattern;
+
+ if (pst->terminate)
+ break;
+ }
+
+ /* Skip trailing place-holders */
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+
+ /* Are there more items than what we can handle? */
+ if (pattern->type != RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, pattern,
+ "unsupported item in the sequence");
+ return -ENOTSUP;
+ }
+
+ return 0;
+}
+
+static int
+flow_parse_rule(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct rte_flow *flow,
+ struct otx2_parse_state *pst)
+{
+ int err;
+
+ /* Check attributes */
+ err = flow_parse_attr(dev, attr, error, flow);
+ if (err)
+ return err;
+
+ /* Check actions */
+ err = otx2_flow_parse_actions(dev, attr, actions, error, flow);
+ if (err)
+ return err;
+
+ /* Check pattern */
+ err = flow_parse_pattern(dev, pattern, error, flow, pst);
+ if (err)
+ return err;
+
+ /* Check for overlaps? */
+ return 0;
+}
+
+static int
+otx2_flow_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct otx2_parse_state parse_state;
+ struct rte_flow flow;
+
+ memset(&flow, 0, sizeof(flow));
+ return flow_parse_rule(dev, attr, pattern, actions, error, &flow,
+ &parse_state);
+}
+
+static struct rte_flow *
+otx2_flow_create(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct otx2_parse_state parse_state;
+ struct otx2_mbox *mbox = hw->mbox;
+ struct rte_flow *flow, *flow_iter;
+ struct otx2_flow_list *list;
+ int rc;
+
+ flow = rte_zmalloc("otx2_rte_flow", sizeof(*flow), 0);
+ if (flow == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Memory allocation failed");
+ return NULL;
+ }
+ memset(flow, 0, sizeof(*flow));
+
+ rc = flow_parse_rule(dev, attr, pattern, actions, error, flow,
+ &parse_state);
+ if (rc != 0)
+ goto err_exit;
+
+ rc = flow_program_npc(&parse_state, mbox, &hw->npc_flow);
+ if (rc != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to insert filter");
+ goto err_exit;
+ }
+
+ rc = flow_program_rss_action(dev, actions, flow);
+ if (rc != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to program rss action");
+ goto err_exit;
+ }
+
+
+ list = &hw->npc_flow.flow_list[flow->priority];
+ /* List in ascending order of mcam entries */
+ TAILQ_FOREACH(flow_iter, list, next) {
+ if (flow_iter->mcam_id > flow->mcam_id) {
+ TAILQ_INSERT_BEFORE(flow_iter, flow, next);
+ return flow;
+ }
+ }
+
+ TAILQ_INSERT_TAIL(list, flow, next);
+ return flow;
+
+err_exit:
+ rte_free(flow);
+ return NULL;
+}
+
+const struct rte_flow_ops otx2_flow_ops = {
+ .validate = otx2_flow_validate,
+ .create = otx2_flow_create,
+};
--
2.21.0
* [dpdk-dev] [PATCH v2 40/57] net/octeontx2: add flow destroy ops support
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (38 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 39/57] net/octeontx2: add flow operations jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 41/57] net/octeontx2: add flow init and fini jerinj
` (17 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add a few more flow operations: flow_destroy, flow_isolate
and flow_flush.
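A hypothetical application-side sketch exercising the new ops: querying the COUNT action, then removing the rule (flush shown as the alternative):

#include <inttypes.h>
#include <stdio.h>
#include <rte_flow.h>

static void
read_and_remove(uint16_t port_id, struct rte_flow *flow)
{
	const struct rte_flow_action count_act = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
	};
	struct rte_flow_query_count cnt = { .reset = 1 }; /* read-and-clear */
	struct rte_flow_error err;

	if (rte_flow_query(port_id, flow, &count_act, &cnt, &err) == 0 &&
	    cnt.hits_set)
		printf("rule hits: %" PRIu64 "\n", cnt.hits);

	rte_flow_destroy(port_id, flow, &err);
	/* or rte_flow_flush(port_id, &err) to remove every rule at once */
}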
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow.c | 206 ++++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 3 +
2 files changed, 209 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 896aef00a..24bde623d 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -5,6 +5,48 @@
#include "otx2_ethdev.h"
#include "otx2_flow.h"
+int
+otx2_flow_free_all_resources(struct otx2_eth_dev *hw)
+{
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ struct otx2_mbox *mbox = hw->mbox;
+ struct otx2_mcam_ents_info *info;
+ struct rte_bitmap *bmap;
+ struct rte_flow *flow;
+ int entry_count = 0;
+ int rc, idx;
+
+ for (idx = 0; idx < npc->flow_max_priority; idx++) {
+ info = &npc->flow_entry_info[idx];
+ entry_count += info->live_ent;
+ }
+
+ if (entry_count == 0)
+ return 0;
+
+ /* Free all MCAM entries allocated */
+ rc = otx2_flow_mcam_free_all_entries(mbox);
+
+ /* Free any MCAM counters and delete flow list */
+ for (idx = 0; idx < npc->flow_max_priority; idx++) {
+ while ((flow = TAILQ_FIRST(&npc->flow_list[idx])) != NULL) {
+ if (flow->ctr_id != NPC_COUNTER_NONE)
+ rc |= otx2_flow_mcam_free_counter(mbox,
+ flow->ctr_id);
+
+ TAILQ_REMOVE(&npc->flow_list[idx], flow, next);
+ rte_free(flow);
+ bmap = npc->live_entries[flow->priority];
+ rte_bitmap_clear(bmap, flow->mcam_id);
+ }
+ info = &npc->flow_entry_info[idx];
+ info->free_ent = 0;
+ info->live_ent = 0;
+ }
+ return rc;
+}
+
+
static int
flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox,
struct otx2_npc_flow_info *flow_info)
@@ -237,6 +279,27 @@ flow_program_rss_action(struct rte_eth_dev *eth_dev,
return 0;
}
+static int
+flow_free_rss_action(struct rte_eth_dev *eth_dev,
+ struct rte_flow *flow)
+{
+ struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+ struct otx2_npc_flow_info *npc = &dev->npc_flow;
+ uint32_t rss_grp;
+
+ if (flow->npc_action & NIX_RX_ACTIONOP_RSS) {
+ rss_grp = (flow->npc_action >> NIX_RSS_ACT_GRP_OFFSET) &
+ NIX_RSS_ACT_GRP_MASK;
+ if (rss_grp == 0 || rss_grp >= npc->rss_grps)
+ return -EINVAL;
+
+ rte_bitmap_clear(npc->rss_grp_entries, rss_grp);
+ }
+
+ return 0;
+}
+
+
static int
flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst)
{
@@ -445,7 +508,150 @@ otx2_flow_create(struct rte_eth_dev *dev,
return NULL;
}
+static int
+otx2_flow_destroy(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ struct otx2_mbox *mbox = hw->mbox;
+ struct rte_bitmap *bmap;
+ uint16_t match_id;
+ int rc;
+
+ match_id = (flow->npc_action >> NIX_RX_ACT_MATCH_OFFSET) &
+ NIX_RX_ACT_MATCH_MASK;
+
+ if (match_id && match_id < OTX2_FLOW_ACTION_FLAG_DEFAULT) {
+ if (rte_atomic32_read(&npc->mark_actions) == 0)
+ return -EINVAL;
+
+ /* Clear mark offload flag if there are no more mark actions */
+ if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0)
+ hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F;
+ }
+
+ rc = flow_free_rss_action(dev, flow);
+ if (rc != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to free rss action");
+ }
+
+ rc = otx2_flow_mcam_free_entry(mbox, flow->mcam_id);
+ if (rc != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to destroy filter");
+ }
+
+ TAILQ_REMOVE(&npc->flow_list[flow->priority], flow, next);
+
+ bmap = npc->live_entries[flow->priority];
+ rte_bitmap_clear(bmap, flow->mcam_id);
+
+ rte_free(flow);
+ return 0;
+}
+
+static int
+otx2_flow_flush(struct rte_eth_dev *dev,
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ int rc;
+
+ rc = otx2_flow_free_all_resources(hw);
+ if (rc) {
+ otx2_err("Error when deleting NPC MCAM entries, counters");
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to flush filter");
+ return -rte_errno;
+ }
+
+ return 0;
+}
+
+static int
+otx2_flow_isolate(struct rte_eth_dev *dev __rte_unused,
+ int enable __rte_unused,
+ struct rte_flow_error *error)
+{
+ /*
+ * If isolation were supported, we would need to uninstall
+ * the default MCAM entry for this port.
+ */
+
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Flow isolation not supported");
+
+ return -rte_errno;
+}
+
+static int
+otx2_flow_query(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ const struct rte_flow_action *action,
+ void *data,
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct rte_flow_query_count *query = data;
+ struct otx2_mbox *mbox = hw->mbox;
+ const char *errmsg = NULL;
+ int errcode = ENOTSUP;
+ int rc;
+
+ if (action->type != RTE_FLOW_ACTION_TYPE_COUNT) {
+ errmsg = "Only COUNT is supported in query";
+ goto err_exit;
+ }
+
+ if (flow->ctr_id == NPC_COUNTER_NONE) {
+ errmsg = "Counter is not available";
+ goto err_exit;
+ }
+
+ rc = otx2_flow_mcam_read_counter(mbox, flow->ctr_id, &query->hits);
+ if (rc != 0) {
+ errcode = EIO;
+ errmsg = "Error reading flow counter";
+ goto err_exit;
+ }
+ query->hits_set = 1;
+ query->bytes_set = 0;
+
+ if (query->reset)
+ rc = otx2_flow_mcam_clear_counter(mbox, flow->ctr_id);
+ if (rc != 0) {
+ errcode = EIO;
+ errmsg = "Error clearing flow counter";
+ goto err_exit;
+ }
+
+ return 0;
+
+err_exit:
+ rte_flow_error_set(error, errcode,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ errmsg);
+ return -rte_errno;
+}
+
const struct rte_flow_ops otx2_flow_ops = {
.validate = otx2_flow_validate,
.create = otx2_flow_create,
+ .destroy = otx2_flow_destroy,
+ .flush = otx2_flow_flush,
+ .query = otx2_flow_query,
+ .isolate = otx2_flow_isolate,
};
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index db79451b9..e18e04658 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -5,6 +5,9 @@
#ifndef __OTX2_RX_H__
#define __OTX2_RX_H__
+/* Default mark value used when none is provided. */
+#define OTX2_FLOW_ACTION_FLAG_DEFAULT 0xffff
+
#define PTYPE_WIDTH 12
#define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH)
#define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH)
--
2.21.0
* [dpdk-dev] [PATCH v2 41/57] net/octeontx2: add flow init and fini
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (39 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 40/57] net/octeontx2: add flow destroy ops support jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 42/57] net/octeontx2: connect flow API to ethdev ops jerinj
` (16 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add the flow init and fini functionality. These are called from device
init/uninit and initialize and de-initialize the flow-related memory.
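One detail worth noting in the init path below: the number of usable MCAM entries halves each time the key widens (npc->mcam_entries = NPC_MCAM_TOT_ENTRIES >> keyw). A standalone sketch; the total-entry count and the X1/X2/X4 naming are assumptions for illustration:

#include <stdio.h>

#define NPC_MCAM_TOT_ENTRIES 4096u	/* assumed total, for illustration */

int main(void)
{
	/* keyw comes from the KEX config: 0 = X1, 1 = X2, 2 = X4 key
	 * width; wider keys consume more MCAM resources per entry. */
	for (unsigned int keyw = 0; keyw <= 2; keyw++)
		printf("keyw=%u -> mcam_entries=%u\n",
		       keyw, NPC_MCAM_TOT_ENTRIES >> keyw);
	return 0;
}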
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow.c | 315 ++++++++++++++++++++++++++++++
1 file changed, 315 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 24bde623d..94bd85161 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -655,3 +655,318 @@ const struct rte_flow_ops otx2_flow_ops = {
.query = otx2_flow_query,
.isolate = otx2_flow_isolate,
};
+
+static int
+flow_supp_key_len(uint32_t supp_mask)
+{
+ int nib_count = 0;
+ while (supp_mask) {
+ nib_count++;
+ supp_mask &= (supp_mask - 1);
+ }
+ return nib_count * 4;
+}
+
+/* Refer to the HRM registers:
+ * NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG
+ * and
+ * NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG
+ */
+#define BYTESM1_SHIFT 16
+#define HDR_OFF_SHIFT 8
+static void
+flow_update_kex_info(struct npc_xtract_info *xtract_info,
+ uint64_t val)
+{
+ xtract_info->len = ((val >> BYTESM1_SHIFT) & 0xf) + 1;
+ xtract_info->hdr_off = (val >> HDR_OFF_SHIFT) & 0xff;
+ xtract_info->key_off = val & 0x3f;
+ xtract_info->enable = ((val >> 7) & 0x1);
+}
+
+static void
+flow_process_mkex_cfg(struct otx2_npc_flow_info *npc,
+ struct npc_get_kex_cfg_rsp *kex_rsp)
+{
+ volatile uint64_t (*q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT]
+ [NPC_MAX_LD];
+ struct npc_xtract_info *x_info = NULL;
+ int lid, lt, ld, fl, ix;
+ otx2_dxcfg_t *p;
+ uint64_t keyw;
+ uint64_t val;
+
+ npc->keyx_supp_nmask[NPC_MCAM_RX] =
+ kex_rsp->rx_keyx_cfg & 0x7fffffffULL;
+ npc->keyx_supp_nmask[NPC_MCAM_TX] =
+ kex_rsp->tx_keyx_cfg & 0x7fffffffULL;
+ npc->keyx_len[NPC_MCAM_RX] =
+ flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]);
+ npc->keyx_len[NPC_MCAM_TX] =
+ flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]);
+
+ keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL;
+ npc->keyw[NPC_MCAM_RX] = keyw;
+ keyw = (kex_rsp->tx_keyx_cfg >> 32) & 0x7ULL;
+ npc->keyw[NPC_MCAM_TX] = keyw;
+
+ /* Update KEX_LD_FLAG */
+ for (ix = 0; ix < NPC_MAX_INTF; ix++) {
+ for (ld = 0; ld < NPC_MAX_LD; ld++) {
+ for (fl = 0; fl < NPC_MAX_LFL; fl++) {
+ x_info =
+ &npc->prx_fxcfg[ix][ld][fl].xtract[0];
+ val = kex_rsp->intf_ld_flags[ix][ld][fl];
+ flow_update_kex_info(x_info, val);
+ }
+ }
+ }
+
+ /* Update LID, LT and LDATA cfg */
+ p = &npc->prx_dxcfg;
+ q = (volatile uint64_t (*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])
+ (&kex_rsp->intf_lid_lt_ld);
+ for (ix = 0; ix < NPC_MAX_INTF; ix++) {
+ for (lid = 0; lid < NPC_MAX_LID; lid++) {
+ for (lt = 0; lt < NPC_MAX_LT; lt++) {
+ for (ld = 0; ld < NPC_MAX_LD; ld++) {
+ x_info = &(*p)[ix][lid][lt].xtract[ld];
+ val = (*q)[ix][lid][lt][ld];
+ flow_update_kex_info(x_info, val);
+ }
+ }
+ }
+ }
+ /* Update LDATA Flags cfg */
+ npc->prx_lfcfg[0].i = kex_rsp->kex_ld_flags[0];
+ npc->prx_lfcfg[1].i = kex_rsp->kex_ld_flags[1];
+}
+
+static struct otx2_idev_kex_cfg *
+flow_intra_dev_kex_cfg(void)
+{
+ static const char name[] = "octeontx2_intra_device_kex_conf";
+ struct otx2_idev_kex_cfg *idev;
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(name);
+ if (mz)
+ return mz->addr;
+
+ /* Request for the first time */
+ mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_kex_cfg),
+ SOCKET_ID_ANY, 0, OTX2_ALIGN);
+ if (mz) {
+ idev = mz->addr;
+ rte_atomic16_set(&idev->kex_refcnt, 0);
+ return idev;
+ }
+ return NULL;
+}
+
+static int
+flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
+{
+ struct otx2_npc_flow_info *npc = &dev->npc_flow;
+ struct npc_get_kex_cfg_rsp *kex_rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct otx2_idev_kex_cfg *idev;
+ int rc = 0;
+
+ idev = flow_intra_dev_kex_cfg();
+ if (!idev)
+ return -ENOMEM;
+
+ /* Has kex_cfg already been read by another driver? */
+ if (rte_atomic16_add_return(&idev->kex_refcnt, 1) == 1) {
+ /* Call mailbox to get key & data size */
+ (void)otx2_mbox_alloc_msg_npc_get_kex_cfg(mbox);
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&kex_rsp);
+ if (rc) {
+ otx2_err("Failed to fetch NPC keyx config");
+ goto done;
+ }
+ memcpy(&idev->kex_cfg, kex_rsp,
+ sizeof(struct npc_get_kex_cfg_rsp));
+ }
+
+ flow_process_mkex_cfg(npc, &idev->kex_cfg);
+
+done:
+ return rc;
+}
+
+int
+otx2_flow_init(struct otx2_eth_dev *hw)
+{
+ uint8_t *mem = NULL, *nix_mem = NULL, *npc_mem = NULL;
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ uint32_t bmap_sz;
+ int rc = 0, idx;
+
+ rc = flow_fetch_kex_cfg(hw);
+ if (rc) {
+ otx2_err("Failed to fetch NPC keyx config from idev");
+ return rc;
+ }
+
+ rte_atomic32_init(&npc->mark_actions);
+
+ npc->mcam_entries = NPC_MCAM_TOT_ENTRIES >> npc->keyw[NPC_MCAM_RX];
+ /* Free, free_rev, live and live_rev entries */
+ bmap_sz = rte_bitmap_get_memory_footprint(npc->mcam_entries);
+ mem = rte_zmalloc(NULL, 4 * bmap_sz * npc->flow_max_priority,
+ RTE_CACHE_LINE_SIZE);
+ if (mem == NULL) {
+ otx2_err("Bmap alloc failed");
+ rc = -ENOMEM;
+ return rc;
+ }
+
+ npc->flow_entry_info = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct otx2_mcam_ents_info),
+ 0);
+ if (npc->flow_entry_info == NULL) {
+ otx2_err("flow_entry_info alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->free_entries = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct rte_bitmap),
+ 0);
+ if (npc->free_entries == NULL) {
+ otx2_err("free_entries alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->free_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct rte_bitmap),
+ 0);
+ if (npc->free_entries_rev == NULL) {
+ otx2_err("free_entries_rev alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->live_entries = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct rte_bitmap),
+ 0);
+ if (npc->live_entries == NULL) {
+ otx2_err("live_entries alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->live_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct rte_bitmap),
+ 0);
+ if (npc->live_entries_rev == NULL) {
+ otx2_err("live_entries_rev alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->flow_list = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct otx2_flow_list),
+ 0);
+ if (npc->flow_list == NULL) {
+ otx2_err("flow_list alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc_mem = mem;
+ for (idx = 0; idx < npc->flow_max_priority; idx++) {
+ TAILQ_INIT(&npc->flow_list[idx]);
+
+ npc->free_entries[idx] =
+ rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
+ mem += bmap_sz;
+
+ npc->free_entries_rev[idx] =
+ rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
+ mem += bmap_sz;
+
+ npc->live_entries[idx] =
+ rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
+ mem += bmap_sz;
+
+ npc->live_entries_rev[idx] =
+ rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
+ mem += bmap_sz;
+
+ npc->flow_entry_info[idx].free_ent = 0;
+ npc->flow_entry_info[idx].live_ent = 0;
+ npc->flow_entry_info[idx].max_id = 0;
+ npc->flow_entry_info[idx].min_id = ~(0);
+ }
+
+ npc->rss_grps = NIX_RSS_GRPS;
+
+ bmap_sz = rte_bitmap_get_memory_footprint(npc->rss_grps);
+ nix_mem = rte_zmalloc(NULL, bmap_sz, RTE_CACHE_LINE_SIZE);
+ if (nix_mem == NULL) {
+ otx2_err("Bmap alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->rss_grp_entries = rte_bitmap_init(npc->rss_grps, nix_mem, bmap_sz);
+
+ /* Group 0 will be used for RSS;
+ * groups 1-7 will be used for the rte_flow RSS action.
+ */
+ rte_bitmap_set(npc->rss_grp_entries, 0);
+
+ return 0;
+
+err:
+ if (npc->flow_list)
+ rte_free(npc->flow_list);
+ if (npc->live_entries_rev)
+ rte_free(npc->live_entries_rev);
+ if (npc->live_entries)
+ rte_free(npc->live_entries);
+ if (npc->free_entries_rev)
+ rte_free(npc->free_entries_rev);
+ if (npc->free_entries)
+ rte_free(npc->free_entries);
+ if (npc->flow_entry_info)
+ rte_free(npc->flow_entry_info);
+ if (npc_mem)
+ rte_free(npc_mem);
+ if (nix_mem)
+ rte_free(nix_mem);
+ return rc;
+}
+
+int
+otx2_flow_fini(struct otx2_eth_dev *hw)
+{
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ int rc;
+
+ rc = otx2_flow_free_all_resources(hw);
+ if (rc) {
+ otx2_err("Error when deleting NPC MCAM entries, counters");
+ return rc;
+ }
+
+ if (npc->flow_list)
+ rte_free(npc->flow_list);
+ if (npc->live_entries_rev)
+ rte_free(npc->live_entries_rev);
+ if (npc->live_entries)
+ rte_free(npc->live_entries);
+ if (npc->free_entries_rev)
+ rte_free(npc->free_entries_rev);
+ if (npc->free_entries)
+ rte_free(npc->free_entries);
+ if (npc->flow_entry_info)
+ rte_free(npc->flow_entry_info);
+
+ return 0;
+}
--
2.21.0
* [dpdk-dev] [PATCH v2 42/57] net/octeontx2: connect flow API to ethdev ops
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (40 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 41/57] net/octeontx2: add flow init and fini jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 43/57] net/octeontx2: implement VLAN utility functions jerinj
` (15 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Connect rte_flow driver ops to ethdev via .filter_ctrl op.
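A sketch of the path by which the generic rte_flow layer reaches these driver ops, using the legacy filter-control API of this DPDK era:

#include <stddef.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Roughly what librte_ethdev does on each rte_flow_*() call: ask the
 * PMD for its rte_flow_ops via the generic filter-control hook, which
 * otx2_nix_dev_filter_ctrl() below answers with &otx2_flow_ops. */
static const struct rte_flow_ops *
get_flow_ops(uint16_t port_id)
{
	const struct rte_flow_ops *ops = NULL;

	if (rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_GENERIC,
				    RTE_ETH_FILTER_GET, &ops) != 0)
		return NULL;
	return ops;
}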
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
doc/guides/nics/octeontx2.rst | 93 ++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.c | 9 +++
drivers/net/octeontx2/otx2_ethdev.h | 3 +
drivers/net/octeontx2/otx2_ethdev_ops.c | 21 +++++
7 files changed, 129 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 46fb00be6..33d2f2785 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -22,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Flow control = Y
+Flow API = Y
Packet type parsing = Y
Timesync = Y
Timestamp offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index f3f812804..980a4daf9 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -22,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Flow control = Y
+Flow API = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 7fba7e1d9..330534a90 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -17,6 +17,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Flow API = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 41eb3c7b9..0f1756932 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -23,6 +23,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Multiple queues for TX and RX
- Receiver Side Scaling (RSS)
- MAC filtering
+- Generic flow API
- Port hardware statistics
- Link state information
- Link flow control
@@ -109,3 +110,95 @@ Runtime Config Options
Above devarg parameters are configurable per device, user needs to pass the
parameters to all the PCIe devices if application requires to configure on
all the ethdev ports.
+
+RTE Flow Support
+----------------
+
+The OCTEON TX2 SoC family NIC has support for the following patterns and
+actions.
+
+Patterns:
+
+.. _table_octeontx2_supported_flow_item_types:
+
+.. table:: Item types
+
+ +----+--------------------------------+
+ | # | Pattern Type |
+ +====+================================+
+ | 1 | RTE_FLOW_ITEM_TYPE_ETH |
+ +----+--------------------------------+
+ | 2 | RTE_FLOW_ITEM_TYPE_VLAN |
+ +----+--------------------------------+
+ | 3 | RTE_FLOW_ITEM_TYPE_E_TAG |
+ +----+--------------------------------+
+ | 4 | RTE_FLOW_ITEM_TYPE_IPV4 |
+ +----+--------------------------------+
+ | 5 | RTE_FLOW_ITEM_TYPE_IPV6 |
+ +----+--------------------------------+
+ | 6 | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4|
+ +----+--------------------------------+
+ | 7 | RTE_FLOW_ITEM_TYPE_MPLS |
+ +----+--------------------------------+
+ | 8 | RTE_FLOW_ITEM_TYPE_ICMP |
+ +----+--------------------------------+
+ | 9 | RTE_FLOW_ITEM_TYPE_UDP |
+ +----+--------------------------------+
+ | 10 | RTE_FLOW_ITEM_TYPE_TCP |
+ +----+--------------------------------+
+ | 11 | RTE_FLOW_ITEM_TYPE_SCTP |
+ +----+--------------------------------+
+ | 12 | RTE_FLOW_ITEM_TYPE_ESP |
+ +----+--------------------------------+
+ | 13 | RTE_FLOW_ITEM_TYPE_GRE |
+ +----+--------------------------------+
+ | 14 | RTE_FLOW_ITEM_TYPE_NVGRE |
+ +----+--------------------------------+
+ | 15 | RTE_FLOW_ITEM_TYPE_VXLAN |
+ +----+--------------------------------+
+ | 16 | RTE_FLOW_ITEM_TYPE_GTPC |
+ +----+--------------------------------+
+ | 17 | RTE_FLOW_ITEM_TYPE_GTPU |
+ +----+--------------------------------+
+ | 18 | RTE_FLOW_ITEM_TYPE_VOID |
+ +----+--------------------------------+
+ | 19 | RTE_FLOW_ITEM_TYPE_ANY |
+ +----+--------------------------------+
+
+Actions:
+
+.. _table_octeontx2_supported_ingress_action_types:
+
+.. table:: Ingress action types
+
+ +----+--------------------------------+
+ | # | Action Type |
+ +====+================================+
+ | 1 | RTE_FLOW_ACTION_TYPE_VOID |
+ +----+--------------------------------+
+ | 2 | RTE_FLOW_ACTION_TYPE_MARK |
+ +----+--------------------------------+
+ | 3 | RTE_FLOW_ACTION_TYPE_FLAG |
+ +----+--------------------------------+
+ | 4 | RTE_FLOW_ACTION_TYPE_COUNT |
+ +----+--------------------------------+
+ | 5 | RTE_FLOW_ACTION_TYPE_DROP |
+ +----+--------------------------------+
+ | 6 | RTE_FLOW_ACTION_TYPE_QUEUE |
+ +----+--------------------------------+
+ | 7 | RTE_FLOW_ACTION_TYPE_RSS |
+ +----+--------------------------------+
+ | 8 | RTE_FLOW_ACTION_TYPE_SECURITY |
+ +----+--------------------------------+
+
+.. _table_octeontx2_supported_egress_action_types:
+
+.. table:: Egress action types
+
+ +----+--------------------------------+
+ | # | Action Type |
+ +====+================================+
+ | 1 | RTE_FLOW_ACTION_TYPE_COUNT |
+ +----+--------------------------------+
+ | 2 | RTE_FLOW_ACTION_TYPE_DROP |
+ +----+--------------------------------+
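To make the tables above concrete, a hypothetical RSS action spreading matched traffic over four queues (all values illustrative; per the parser added earlier, only the default hash function is accepted):

#include <rte_ethdev.h>
#include <rte_flow.h>

static const uint16_t rss_queues[] = { 0, 1, 2, 3 };

static const struct rte_flow_action_rss rss_conf = {
	.func = RTE_ETH_HASH_FUNCTION_DEFAULT,
	.level = 0,
	.types = ETH_RSS_IP,		/* hash on IP addresses */
	.queue_num = 4,
	.queue = rss_queues,		/* key omitted: default RSS key */
};

static const struct rte_flow_action rss_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss_conf },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};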
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 7512aacb3..09201fd23 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1326,6 +1326,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.rx_descriptor_status = otx2_nix_rx_descriptor_status,
.tx_done_cleanup = otx2_nix_tx_done_cleanup,
.pool_ops_supported = otx2_nix_pool_ops_supported,
+ .filter_ctrl = otx2_nix_dev_filter_ctrl,
.get_module_info = otx2_nix_get_module_info,
.get_module_eeprom = otx2_nix_get_module_eeprom,
.flow_ctrl_get = otx2_nix_flow_ctrl_get,
@@ -1505,6 +1506,11 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL;
}
+ /* Initialize rte-flow */
+ rc = otx2_flow_init(dev);
+ if (rc)
+ goto free_mac_addrs;
+
otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64
" rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64,
eth_dev->data->port_id, dev->pf, dev->vf,
@@ -1541,6 +1547,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
/* Disable nix bpid config */
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ /* Disable other rte_flow entries */
+ otx2_flow_fini(dev);
+
/* Disable PTP if already enabled */
if (otx2_ethdev_is_ptp_en(dev))
otx2_nix_timesync_disable(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e8a22b6ec..ad12f2553 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -294,6 +294,9 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
/* Ops */
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+int otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
+ enum rte_filter_type filter_type,
+ enum rte_filter_op filter_op, void *arg);
int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_module_info *modinfo);
int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 2a949439a..e55acd4e0 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -220,6 +220,27 @@ otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
return -ENOTSUP;
}
+int
+otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
+ enum rte_filter_type filter_type,
+ enum rte_filter_op filter_op, void *arg)
+{
+ RTE_SET_USED(eth_dev);
+
+ if (filter_type != RTE_ETH_FILTER_GENERIC) {
+ otx2_err("Unsupported filter type %d", filter_type);
+ return -ENOTSUP;
+ }
+
+ if (filter_op == RTE_ETH_FILTER_GET) {
+ *(const void **)arg = &otx2_flow_ops;
+ return 0;
+ }
+
+ otx2_err("Invalid filter_op %d", filter_op);
+ return -EINVAL;
+}
+
static struct cgx_fw_data *
nix_get_fwdata(struct otx2_eth_dev *dev)
{
--
2.21.0
* [dpdk-dev] [PATCH v2 43/57] net/octeontx2: implement VLAN utility functions
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (41 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 42/57] net/octeontx2: connect flow API to ethdev ops jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 44/57] net/octeontx2: support VLAN offloads jerinj
` (14 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Implement the utility functions needed for VLAN functionality.
Introduce VLAN-related structures as well.
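For orientation, a sketch of the standard ethdev calls that the VLAN support added in this and the following patches services; the offload flags and VLAN ID are illustrative:

#include <rte_ethdev.h>

static void
enable_vlan_offloads(uint16_t port_id)
{
	/* Turn on HW VLAN strip and filter; these end up backed by the
	 * NPC MCAM entries the utility functions below manage. */
	rte_eth_dev_set_vlan_offload(port_id,
			ETH_VLAN_STRIP_OFFLOAD | ETH_VLAN_FILTER_OFFLOAD);

	/* Admit VLAN ID 100 through the filter */
	rte_eth_dev_vlan_filter(port_id, 100, 1);
}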
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 10 ++
drivers/net/octeontx2/otx2_ethdev.h | 46 +++++++
drivers/net/octeontx2/otx2_vlan.c | 190 ++++++++++++++++++++++++++++
5 files changed, 248 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_vlan.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 21559f631..d22ddae33 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -34,6 +34,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_ptp.c \
otx2_flow.c \
otx2_link.c \
+ otx2_vlan.c \
otx2_stats.c \
otx2_lookup.c \
otx2_ethdev.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index f0e03bffe..6281ee21b 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -9,6 +9,7 @@ sources = files(
'otx2_ptp.c',
'otx2_flow.c',
'otx2_link.c',
+ 'otx2_vlan.c',
'otx2_stats.c',
'otx2_lookup.c',
'otx2_ethdev.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 09201fd23..48c2e8f57 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1083,6 +1083,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
/* Free the resources allocated from the previous configure */
if (dev->configured == 1) {
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ otx2_nix_vlan_fini(eth_dev);
oxt2_nix_unregister_queue_irqs(eth_dev);
nix_set_nop_rxtx_function(eth_dev);
rc = nix_store_queue_cfg_and_then_release(eth_dev);
@@ -1129,6 +1130,12 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ rc = otx2_nix_vlan_offload_init(eth_dev);
+ if (rc) {
+ otx2_err("Failed to init vlan offload rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/* Register queue IRQs */
rc = oxt2_nix_register_queue_irqs(eth_dev);
if (rc) {
@@ -1547,6 +1554,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
/* Disable nix bpid config */
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ /* Disable vlan offloads */
+ otx2_nix_vlan_fini(eth_dev);
+
/* Disable other rte_flow entries */
otx2_flow_fini(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index ad12f2553..8577272b4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -182,6 +182,47 @@ struct otx2_fc_info {
uint16_t bpid[NIX_MAX_CHAN];
};
+struct vlan_mkex_info {
+ struct npc_xtract_info la_xtract;
+ struct npc_xtract_info lb_xtract;
+ uint64_t lb_lt_offset;
+};
+
+struct vlan_entry {
+ uint32_t mcam_idx;
+ uint16_t vlan_id;
+ TAILQ_ENTRY(vlan_entry) next;
+};
+
+TAILQ_HEAD(otx2_vlan_filter_tbl, vlan_entry);
+
+struct otx2_vlan_info {
+ struct otx2_vlan_filter_tbl fltr_tbl;
+ /* MKEX layer info */
+ struct mcam_entry def_tx_mcam_ent;
+ struct mcam_entry def_rx_mcam_ent;
+ struct vlan_mkex_info mkex;
+ /* Default mcam entry that matches vlan packets */
+ uint32_t def_rx_mcam_idx;
+ uint32_t def_tx_mcam_idx;
+ /* MCAM entry that matches double vlan packets */
+ uint32_t qinq_mcam_idx;
+ /* Indices of tx_vtag def registers */
+ uint32_t outer_vlan_idx;
+ uint32_t inner_vlan_idx;
+ uint16_t outer_vlan_tpid;
+ uint16_t inner_vlan_tpid;
+ uint16_t pvid;
+ /* QinQ entry allocated before default one */
+ uint8_t qinq_before_def;
+ uint8_t pvid_insert_on;
+ /* Rx vtag action type */
+ uint8_t vtag_type_idx;
+ uint8_t filter_on;
+ uint8_t strip_on;
+ uint8_t qinq_on;
+};
+
struct otx2_eth_dev {
OTX2_DEV; /* Base class */
MARKER otx2_eth_dev_data_start;
@@ -233,6 +274,7 @@ struct otx2_eth_dev {
uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
struct otx2_npc_flow_info npc_flow;
+ struct otx2_vlan_info vlan_info;
struct otx2_eth_qconf *tx_qconf;
struct otx2_eth_qconf *rx_qconf;
struct rte_eth_dev *eth_dev;
@@ -402,6 +444,10 @@ int otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb);
int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev);
+/* VLAN */
+int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
+int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
+
/* Lookup configuration */
void *otx2_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
new file mode 100644
index 000000000..b3136d2cf
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -0,0 +1,190 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_malloc.h>
+#include <rte_tailq.h>
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+
+#define VLAN_ID_MATCH 0x1
+#define VTAG_F_MATCH 0x2
+#define MAC_ADDR_MATCH 0x4
+#define QINQ_F_MATCH 0x8
+#define VLAN_DROP 0x10
+
+enum vtag_cfg_dir {
+ VTAG_TX,
+ VTAG_RX
+};
+
+static int
+__rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
+ uint32_t entry, const int enable)
+{
+ struct npc_mcam_ena_dis_entry_req *req;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc = -EINVAL;
+
+ if (enable)
+ req = otx2_mbox_alloc_msg_npc_mcam_ena_entry(mbox);
+ else
+ req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
+
+ req->entry = entry;
+
+ rc = otx2_mbox_process_msg(mbox, NULL);
+ return rc;
+}
+
+static int
+__rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
+{
+ struct npc_mcam_free_entry_req *req;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc = -EINVAL;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ req->entry = entry;
+
+ rc = otx2_mbox_process_msg(mbox, NULL);
+ return rc;
+}
+
+static int
+__rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
+ struct mcam_entry *entry, uint8_t intf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct npc_mcam_write_entry_req *req;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct msghdr *rsp;
+ int rc = -EINVAL;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
+
+ req->entry = ent_idx;
+ req->intf = intf;
+ req->enable_entry = 1;
+ memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ return rc;
+}
+
+static int
+__rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
+ struct mcam_entry *entry,
+ uint8_t intf, bool drop)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct npc_mcam_alloc_and_write_entry_req *req;
+ struct npc_mcam_alloc_and_write_entry_rsp *rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc = -EINVAL;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_alloc_and_write_entry(mbox);
+
+ if (intf == NPC_MCAM_RX) {
+ if (!drop && dev->vlan_info.def_rx_mcam_idx) {
+ req->priority = NPC_MCAM_HIGHER_PRIO;
+ req->ref_entry = dev->vlan_info.def_rx_mcam_idx;
+ } else if (drop && dev->vlan_info.qinq_mcam_idx) {
+ req->priority = NPC_MCAM_LOWER_PRIO;
+ req->ref_entry = dev->vlan_info.qinq_mcam_idx;
+ } else {
+ req->priority = NPC_MCAM_ANY_PRIO;
+ req->ref_entry = 0;
+ }
+ } else {
+ req->priority = NPC_MCAM_ANY_PRIO;
+ req->ref_entry = 0;
+ }
+
+ req->intf = intf;
+ req->enable_entry = 1;
+ memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ return rsp->entry;
+}
+
+static int
+nix_vlan_rx_mkex_offset(uint64_t mask)
+{
+ int nib_count = 0;
+
+ while (mask) {
+ nib_count += mask & 1;
+ mask >>= 1;
+ }
+
+ return nib_count * 4;
+}
+
+static int
+nix_vlan_get_mkex_info(struct otx2_eth_dev *dev)
+{
+ struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
+ struct otx2_npc_flow_info *npc = &dev->npc_flow;
+ struct npc_xtract_info *x_info = NULL;
+ uint64_t rx_keyx;
+ otx2_dxcfg_t *p;
+ int rc = -EINVAL;
+
+ if (npc == NULL) {
+ otx2_err("Missing npc mkex configuration");
+ return rc;
+ }
+
+#define NPC_KEX_CHAN_NIBBLE_ENA 0x7ULL
+#define NPC_KEX_LB_LTYPE_NIBBLE_ENA 0x1000ULL
+#define NPC_KEX_LB_LTYPE_NIBBLE_MASK 0xFFFULL
+
+ rx_keyx = npc->keyx_supp_nmask[NPC_MCAM_RX];
+ if ((rx_keyx & NPC_KEX_CHAN_NIBBLE_ENA) != NPC_KEX_CHAN_NIBBLE_ENA)
+ return rc;
+
+ if ((rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_ENA) !=
+ NPC_KEX_LB_LTYPE_NIBBLE_ENA)
+ return rc;
+
+ mkex->lb_lt_offset =
+ nix_vlan_rx_mkex_offset(rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_MASK);
+
+ p = &npc->prx_dxcfg;
+ x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0];
+ memcpy(&mkex->la_xtract, x_info, sizeof(struct npc_xtract_info));
+ x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LB][NPC_LT_LB_CTAG].xtract[0];
+ memcpy(&mkex->lb_xtract, x_info, sizeof(struct npc_xtract_info));
+
+ return 0;
+}
+
+int
+otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc;
+
+ /* Port initialized for first time or restarted */
+ if (!dev->configured) {
+ rc = nix_vlan_get_mkex_info(dev);
+ if (rc) {
+ otx2_err("Failed to get vlan mkex info rc=%d", rc);
+ return rc;
+ }
+ }
+ return 0;
+}
+
+int
+otx2_nix_vlan_fini(__rte_unused struct rte_eth_dev *eth_dev)
+{
+ return 0;
+}
--
2.21.0
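One detail worth a worked example: nix_vlan_rx_mkex_offset() above
derives the bit offset of the LB LTYPE field in the MCAM key from the
KEX nibble-enable mask, since every enabled nibble contributes 4 key
bits. A self-contained sketch of the same logic (hypothetical main(),
not part of the patch):

#include <assert.h>
#include <stdint.h>

static int
mkex_offset(uint64_t mask)
{
	int nib_count = 0;

	/* popcount of the nibble-enable mask, 4 bits per nibble */
	while (mask) {
		nib_count += mask & 1;
		mask >>= 1;
	}

	return nib_count * 4;
}

int
main(void)
{
	/* e.g. only the three channel nibbles (mask 0x7) are enabled
	 * below the LB LTYPE nibble: the LTYPE field starts at bit 12 */
	assert(mkex_offset(0x7) == 12);

	return 0;
}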
* [dpdk-dev] [PATCH v2 44/57] net/octeontx2: support VLAN offloads
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (42 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 43/57] net/octeontx2: implement VLAN utility functions jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 45/57] net/octeontx2: support VLAN filters jerinj
` (13 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Support configuring VLAN offloads for an ethernet device, along with
dynamic promiscuous mode handling for VLAN filters, where the filter
entries are updated to match the device's promiscuous mode.
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 3 +
drivers/net/octeontx2/otx2_ethdev_ops.c | 1 +
drivers/net/octeontx2/otx2_rx.h | 1 +
drivers/net/octeontx2/otx2_vlan.c | 523 ++++++++++++++++++++-
9 files changed, 527 insertions(+), 9 deletions(-)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 33d2f2785..ac4712b0c 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
Inner RSS = Y
Flow control = Y
Flow API = Y
+VLAN offload = Y
+QinQ offload = Y
Packet type parsing = Y
Timesync = Y
Timestamp offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 980a4daf9..e54c1babe 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
Inner RSS = Y
Flow control = Y
Flow API = Y
+VLAN offload = Y
+QinQ offload = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 330534a90..769ab16ee 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -18,6 +18,8 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Flow API = Y
+VLAN offload = Y
+QinQ offload = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 0f1756932..a53b71a6d 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -24,6 +24,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Receiver Side Scaling (RSS)
- MAC filtering
- Generic flow API
+- VLAN/QinQ stripping and insertion
- Port hardware statistics
- Link state information
- Link flow control
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 48c2e8f57..55a5cdc48 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1345,6 +1345,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.timesync_adjust_time = otx2_nix_timesync_adjust_time,
.timesync_read_time = otx2_nix_timesync_read_time,
.timesync_write_time = otx2_nix_timesync_write_time,
+ .vlan_offload_set = otx2_nix_vlan_offload_set,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 8577272b4..50fd18b6e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -221,6 +221,7 @@ struct otx2_vlan_info {
uint8_t filter_on;
uint8_t strip_on;
uint8_t qinq_on;
+ uint8_t promisc_on;
};
struct otx2_eth_dev {
@@ -447,6 +448,8 @@ int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev);
/* VLAN */
int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
+int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask);
+void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable);
/* Lookup configuration */
void *otx2_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index e55acd4e0..690d8ac0c 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -40,6 +40,7 @@ otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en)
otx2_mbox_process(mbox);
eth_dev->data->promiscuous = en;
+ otx2_nix_vlan_update_promisc(eth_dev, en);
}
void
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index e18e04658..7dc34d705 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -16,6 +16,7 @@
sizeof(uint16_t))
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+#define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3)
#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index b3136d2cf..7cf4f3136 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -14,6 +14,7 @@
#define MAC_ADDR_MATCH 0x4
#define QINQ_F_MATCH 0x8
#define VLAN_DROP 0x10
+#define DEF_F_ENTRY 0x20
enum vtag_cfg_dir {
VTAG_TX,
@@ -39,8 +40,50 @@ __rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
return rc;
}
+static void
+nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
+ struct mcam_entry *entry, bool qinq, bool drop)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int pcifunc = otx2_pfvf_func(dev->pf, dev->vf);
+ uint64_t action = 0, vtag_action = 0;
+
+ action = NIX_RX_ACTIONOP_UCAST;
+
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ action = NIX_RX_ACTIONOP_RSS;
+ action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
+ }
+
+ action |= (uint64_t)pcifunc << 4;
+ entry->action = action;
+
+ if (drop) {
+ entry->action &= ~((uint64_t)0xF);
+ entry->action |= NIX_RX_ACTIONOP_DROP;
+ return;
+ }
+
+ if (!qinq) {
+ /* VTAG0 fields denote CTAG in single vlan case */
+ vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
+ vtag_action |= (NPC_LID_LB << 8);
+ vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR;
+ } else {
+ /* VTAG0 & VTAG1 fields denote CTAG & STAG respectively */
+ vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
+ vtag_action |= (NPC_LID_LB << 8);
+ vtag_action |= NIX_RX_VTAGACTION_VTAG1_RELPTR;
+ vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 47);
+ vtag_action |= ((uint64_t)(NPC_LID_LB) << 40);
+ vtag_action |= (NIX_RX_VTAGACTION_VTAG0_RELPTR << 32);
+ }
+
+ entry->vtag_action = vtag_action;
+}
+
static int
-__rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
+nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
{
struct npc_mcam_free_entry_req *req;
struct otx2_mbox *mbox = dev->mbox;
@@ -54,8 +97,8 @@ __rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
}
static int
-__rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
- struct mcam_entry *entry, uint8_t intf)
+nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
+ struct mcam_entry *entry, uint8_t intf, uint8_t ena)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
struct npc_mcam_write_entry_req *req;
@@ -67,7 +110,7 @@ __rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
req->entry = ent_idx;
req->intf = intf;
- req->enable_entry = 1;
+ req->enable_entry = ena;
memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
@@ -75,9 +118,9 @@ __rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
}
static int
-__rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
- struct mcam_entry *entry,
- uint8_t intf, bool drop)
+nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
+ struct mcam_entry *entry,
+ uint8_t intf, bool drop)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
struct npc_mcam_alloc_and_write_entry_req *req;
@@ -114,6 +157,443 @@ __rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
return rsp->entry;
}
+static void
+nix_vlan_update_mac(struct rte_eth_dev *eth_dev, int mcam_index,
+ int enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
+ volatile uint8_t *key_data, *key_mask;
+ struct npc_mcam_read_entry_req *req;
+ struct npc_mcam_read_entry_rsp *rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ uint64_t mcam_data, mcam_mask;
+ struct mcam_entry entry;
+ uint8_t intf, mcam_ena;
+ int idx, rc = -EINVAL;
+ uint8_t *mac_addr;
+
+ memset(&entry, 0, sizeof(struct mcam_entry));
+
+ /* Read entry first */
+ req = otx2_mbox_alloc_msg_npc_mcam_read_entry(mbox);
+
+ req->entry = mcam_index;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to read entry %d", mcam_index);
+ return;
+ }
+
+ entry = rsp->entry_data;
+ intf = rsp->intf;
+ mcam_ena = rsp->enable;
+
+ /* Update mcam address */
+ key_data = (volatile uint8_t *)entry.kw;
+ key_mask = (volatile uint8_t *)entry.kw_mask;
+
+ if (enable) {
+ mcam_mask = 0;
+ otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
+ &mcam_mask, mkex->la_xtract.len + 1);
+
+ } else {
+ mcam_data = 0ULL;
+ mac_addr = dev->mac_addr;
+ for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
+ mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
+
+ mcam_mask = BIT_ULL(48) - 1;
+
+ otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
+ &mcam_data, mkex->la_xtract.len + 1);
+ otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
+ &mcam_mask, mkex->la_xtract.len + 1);
+ }
+
+ /* Write back the mcam entry */
+ rc = nix_vlan_mcam_write(eth_dev, mcam_index,
+ &entry, intf, mcam_ena);
+ if (rc) {
+ otx2_err("Failed to write entry %d", mcam_index);
+ return;
+ }
+}
+
+void
+otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct vlan_entry *entry;
+
+ /* Already in required mode */
+ if (enable == vlan->promisc_on)
+ return;
+
+ /* Update default rx entry */
+ if (vlan->def_rx_mcam_idx)
+ nix_vlan_update_mac(eth_dev, vlan->def_rx_mcam_idx, enable);
+
+ /* Update all other rx filter entries */
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next)
+ nix_vlan_update_mac(eth_dev, entry->mcam_idx, enable);
+
+ vlan->promisc_on = enable;
+}
+
+/* Configure mcam entry with required MCAM search rules */
+static int
+nix_vlan_mcam_config(struct rte_eth_dev *eth_dev,
+ uint16_t vlan_id, uint16_t flags)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
+ volatile uint8_t *key_data, *key_mask;
+ uint64_t mcam_data, mcam_mask;
+ struct mcam_entry entry;
+ uint8_t *mac_addr;
+ int idx, kwi = 0;
+
+ memset(&entry, 0, sizeof(struct mcam_entry));
+ key_data = (volatile uint8_t *)entry.kw;
+ key_mask = (volatile uint8_t *)entry.kw_mask;
+
+ /* Channel base extracted to KW0[11:0] */
+ entry.kw[kwi] = dev->rx_chan_base;
+ entry.kw_mask[kwi] = BIT_ULL(12) - 1;
+
+ /* Adds vlan_id & LB CTAG flag to MCAM KW */
+ if (flags & VLAN_ID_MATCH) {
+ entry.kw[kwi] |= NPC_LT_LB_CTAG << mkex->lb_lt_offset;
+ entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset;
+
+ mcam_data = (vlan_id << 16);
+ mcam_mask = (BIT_ULL(16) - 1) << 16;
+ otx2_mbox_memcpy(key_data + mkex->lb_xtract.key_off,
+ &mcam_data, mkex->lb_xtract.len + 1);
+ otx2_mbox_memcpy(key_mask + mkex->lb_xtract.key_off,
+ &mcam_mask, mkex->lb_xtract.len + 1);
+ }
+
+ /* Adds LB STAG flag to MCAM KW */
+ if (flags & QINQ_F_MATCH) {
+ entry.kw[kwi] |= NPC_LT_LB_STAG << mkex->lb_lt_offset;
+ entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset;
+ }
+
+ /* Adds LB CTAG & LB STAG flags to MCAM KW */
+ if (flags & VTAG_F_MATCH) {
+ entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG)
+ << mkex->lb_lt_offset;
+ entry.kw_mask[kwi] |= (NPC_LT_LB_CTAG & NPC_LT_LB_STAG)
+ << mkex->lb_lt_offset;
+ }
+
+ /* Adds port MAC address to MCAM KW */
+ if (flags & MAC_ADDR_MATCH) {
+ mcam_data = 0ULL;
+ mac_addr = dev->mac_addr;
+ for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
+ mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
+
+ mcam_mask = BIT_ULL(48) - 1;
+ otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
+ &mcam_data, mkex->la_xtract.len + 1);
+ otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
+ &mcam_mask, mkex->la_xtract.len + 1);
+ }
+
+ /* VLAN_DROP: drop action for all vlan packets when filter is on.
+ * For QinQ, enable vtag action for both outer & inner tags
+ */
+ if (flags & VLAN_DROP)
+ nix_set_rx_vlan_action(eth_dev, &entry, false, true);
+ else if (flags & QINQ_F_MATCH)
+ nix_set_rx_vlan_action(eth_dev, &entry, true, false);
+ else
+ nix_set_rx_vlan_action(eth_dev, &entry, false, false);
+
+ if (flags & DEF_F_ENTRY)
+ dev->vlan_info.def_rx_mcam_ent = entry;
+
+ return nix_vlan_mcam_alloc_and_write(eth_dev, &entry, NIX_INTF_RX,
+ flags & VLAN_DROP);
+}
+
+/* Installs/Removes/Modifies default rx entry */
+static int
+nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
+ bool filter, bool enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ uint16_t flags = 0;
+ int mcam_idx, rc;
+
+ /* Use default mcam entry to either drop vlan traffic when
+ * vlan filter is on or strip vtag when strip is enabled.
+ * Allocate default entry which matches port mac address
+ * and vtag(ctag/stag) flags with drop action.
+ */
+ if (!vlan->def_rx_mcam_idx) {
+ if (!eth_dev->data->promiscuous)
+ flags = MAC_ADDR_MATCH;
+
+ if (filter && enable)
+ flags |= VTAG_F_MATCH | VLAN_DROP;
+ else if (strip && enable)
+ flags |= VTAG_F_MATCH;
+ else
+ return 0;
+
+ flags |= DEF_F_ENTRY;
+
+ mcam_idx = nix_vlan_mcam_config(eth_dev, 0, flags);
+ if (mcam_idx < 0) {
+ otx2_err("Failed to config vlan mcam");
+ return -mcam_idx;
+ }
+
+ vlan->def_rx_mcam_idx = mcam_idx;
+ return 0;
+ }
+
+ /* Filter is already enabled, so packets would be dropped anyways. No
+ * processing needed for enabling strip wrt mcam entry.
+ */
+
+ /* Filter disable request */
+ if (vlan->filter_on && filter && !enable) {
+ vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
+
+ /* Free default rx entry only when
+ * 1. strip is not on and
+ * 2. qinq entry is allocated before default entry.
+ */
+ if (vlan->strip_on ||
+ (vlan->qinq_on && !vlan->qinq_before_def)) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode ==
+ ETH_MQ_RX_RSS)
+ vlan->def_rx_mcam_ent.action |=
+ NIX_RX_ACTIONOP_RSS;
+ else
+ vlan->def_rx_mcam_ent.action |=
+ NIX_RX_ACTIONOP_UCAST;
+ return nix_vlan_mcam_write(eth_dev,
+ vlan->def_rx_mcam_idx,
+ &vlan->def_rx_mcam_ent,
+ NIX_INTF_RX, 1);
+ } else {
+ rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
+ if (rc)
+ return rc;
+ vlan->def_rx_mcam_idx = 0;
+ }
+ }
+
+ /* Filter enable request */
+ if (!vlan->filter_on && filter && enable) {
+ vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
+ vlan->def_rx_mcam_ent.action |= NIX_RX_ACTIONOP_DROP;
+ return nix_vlan_mcam_write(eth_dev, vlan->def_rx_mcam_idx,
+ &vlan->def_rx_mcam_ent, NIX_INTF_RX, 1);
+ }
+
+ /* Strip disable request */
+ if (vlan->strip_on && strip && !enable) {
+ if (!vlan->filter_on &&
+ !(vlan->qinq_on && !vlan->qinq_before_def)) {
+ rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
+ if (rc)
+ return rc;
+ vlan->def_rx_mcam_idx = 0;
+ }
+ }
+
+ return 0;
+}
+
+/* Configure vlan stripping on or off */
+static int
+nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_vtag_config *vtag_cfg;
+ int rc = -EINVAL;
+
+ rc = nix_vlan_handle_default_rx_entry(eth_dev, true, false, enable);
+ if (rc) {
+ otx2_err("Failed to config default rx entry");
+ return rc;
+ }
+
+ vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
+ /* cfg_type = 1 for rx vlan cfg */
+ vtag_cfg->cfg_type = VTAG_RX;
+
+ if (enable)
+ vtag_cfg->rx.strip_vtag = 1;
+ else
+ vtag_cfg->rx.strip_vtag = 0;
+
+ /* Always capture */
+ vtag_cfg->rx.capture_vtag = 1;
+ vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
+ /* Use rx vtag type index[0] for now */
+ vtag_cfg->rx.vtag_type = 0;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ dev->vlan_info.strip_on = enable;
+ return rc;
+}
+
+/* Configure vlan filtering on or off for all vlans if vlan_id == 0 */
+static int
+nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
+ uint16_t vlan_id)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc = -EINVAL;
+
+ if (!vlan_id && enable) {
+ rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
+ enable);
+ if (rc) {
+ otx2_err("Failed to config vlan mcam");
+ return rc;
+ }
+ dev->vlan_info.filter_on = enable;
+ return 0;
+ }
+
+ if (!vlan_id && !enable) {
+ rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
+ enable);
+ if (rc) {
+ otx2_err("Failed to config vlan mcam");
+ return rc;
+ }
+ dev->vlan_info.filter_on = enable;
+ return 0;
+ }
+
+ return 0;
+}
+
+/* Configure double vlan(qinq) on or off */
+static int
+otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev,
+ const uint8_t enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan_info;
+ int mcam_idx;
+ int rc;
+
+ vlan_info = &dev->vlan_info;
+
+ if (!enable) {
+ if (!vlan_info->qinq_mcam_idx)
+ return 0;
+
+ rc = nix_vlan_mcam_free(dev, vlan_info->qinq_mcam_idx);
+ if (rc)
+ return rc;
+
+ vlan_info->qinq_mcam_idx = 0;
+ dev->vlan_info.qinq_on = 0;
+ vlan_info->qinq_before_def = 0;
+ return 0;
+ }
+
+ if (eth_dev->data->promiscuous)
+ mcam_idx = nix_vlan_mcam_config(eth_dev, 0, QINQ_F_MATCH);
+ else
+ mcam_idx = nix_vlan_mcam_config(eth_dev, 0,
+ QINQ_F_MATCH | MAC_ADDR_MATCH);
+ if (mcam_idx < 0)
+ return mcam_idx;
+
+ if (!vlan_info->def_rx_mcam_idx)
+ vlan_info->qinq_before_def = 1;
+
+ vlan_info->qinq_mcam_idx = mcam_idx;
+ dev->vlan_info.qinq_on = 1;
+ return 0;
+}
+
+int
+otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t offloads = dev->rx_offloads;
+ struct rte_eth_rxmode *rxmode;
+ int rc = 0;
+
+ rxmode = &eth_dev->data->dev_conf.rxmode;
+
+ if (mask & ETH_VLAN_EXTEND_MASK) {
+ otx2_err("Extend offload not supported");
+ return -ENOTSUP;
+ }
+
+ if (mask & ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rc = nix_vlan_hw_strip(eth_dev, true);
+ } else {
+ offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rc = nix_vlan_hw_strip(eth_dev, false);
+ }
+ if (rc)
+ goto done;
+ }
+
+ if (mask & ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rc = nix_vlan_hw_filter(eth_dev, true, 0);
+ } else {
+ offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ rc = nix_vlan_hw_filter(eth_dev, false, 0);
+ }
+ if (rc)
+ goto done;
+ }
+
+ if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+ if (!dev->vlan_info.qinq_on) {
+ offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ rc = otx2_nix_config_double_vlan(eth_dev, true);
+ if (rc)
+ goto done;
+ }
+ } else {
+ if (dev->vlan_info.qinq_on) {
+ offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ rc = otx2_nix_config_double_vlan(eth_dev, false);
+ if (rc)
+ goto done;
+ }
+ }
+
+ if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP)) {
+ dev->rx_offloads |= offloads;
+ dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
+ }
+
+done:
+ return rc;
+}
+
static int
nix_vlan_rx_mkex_offset(uint64_t mask)
{
@@ -170,7 +650,7 @@ int
otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc;
+ int rc, mask;
/* Port initialized for first time or restarted */
if (!dev->configured) {
@@ -179,12 +659,37 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
otx2_err("Failed to get vlan mkex info rc=%d", rc);
return rc;
}
+
+ TAILQ_INIT(&dev->vlan_info.fltr_tbl);
}
+
+ mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ rc = otx2_nix_vlan_offload_set(eth_dev, mask);
+ if (rc) {
+ otx2_err("Failed to set vlan offload rc=%d", rc);
+ return rc;
+ }
+
return 0;
}
int
-otx2_nix_vlan_fini(__rte_unused struct rte_eth_dev *eth_dev)
+otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev)
{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ int rc;
+
+ if (!dev->configured) {
+ if (vlan->def_rx_mcam_idx) {
+ rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
+ if (rc)
+ return rc;
+ }
+ }
+
+ otx2_nix_config_double_vlan(eth_dev, false);
+ vlan->def_rx_mcam_idx = 0;
return 0;
}
--
2.21.0
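As a usage sketch (assuming a DPDK 19.x application with the port
already configured; port_id and enable_vlan_offloads() are
illustrative), toggling these offloads from the application lands in
otx2_nix_vlan_offload_set() through the vlan_offload_set op:

#include <rte_ethdev.h>

static int
enable_vlan_offloads(uint16_t port_id)
{
	/* Set bits request enable, clear bits request disable; the
	 * ethdev layer translates this into the mask and rxmode
	 * offload bits seen by the PMD op. */
	int mask = ETH_VLAN_STRIP_OFFLOAD | ETH_VLAN_FILTER_OFFLOAD;

	return rte_eth_dev_set_vlan_offload(port_id, mask);
}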
* [dpdk-dev] [PATCH v2 45/57] net/octeontx2: support VLAN filters
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (43 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 44/57] net/octeontx2: support VLAN offloads jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 46/57] net/octeontx2: support VLAN TPID and PVID for Tx jerinj
` (12 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Support setting up VLAN filters so as to allow reception of
tagged packets once the VLAN HW filter offload is enabled.
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
doc/guides/nics/octeontx2.rst | 2 +-
drivers/net/octeontx2/otx2_ethdev.c | 2 +
drivers/net/octeontx2/otx2_ethdev.h | 4 +
drivers/net/octeontx2/otx2_vlan.c | 149 ++++++++++++++++++++-
7 files changed, 157 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index ac4712b0c..37b802999 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -21,6 +21,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+VLAN filter = Y
Flow control = Y
Flow API = Y
VLAN offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index e54c1babe..ccedd1359 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -21,6 +21,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+VLAN filter = Y
Flow control = Y
Flow API = Y
VLAN offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 769ab16ee..24df14717 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -17,6 +17,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+VLAN filter = Y
Flow API = Y
VLAN offload = Y
QinQ offload = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index a53b71a6d..d6082e508 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -22,7 +22,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Lock-free Tx queue
- Multiple queues for TX and RX
- Receiver Side Scaling (RSS)
-- MAC filtering
+- MAC/VLAN filtering
- Generic flow API
- VLAN/QinQ stripping and insertion
- Port hardware statistics
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 55a5cdc48..78cbd8811 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1346,6 +1346,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.timesync_read_time = otx2_nix_timesync_read_time,
.timesync_write_time = otx2_nix_timesync_write_time,
.vlan_offload_set = otx2_nix_vlan_offload_set,
+ .vlan_filter_set = otx2_nix_vlan_filter_set,
+ .vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 50fd18b6e..996ddec47 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -450,6 +450,10 @@ int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask);
void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable);
+int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
+ int on);
+void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev,
+ uint16_t queue, int on);
/* Lookup configuration */
void *otx2_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index 7cf4f3136..6216d6545 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -22,8 +22,8 @@ enum vtag_cfg_dir {
};
static int
-__rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
- uint32_t entry, const int enable)
+nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
+ uint32_t entry, const int enable)
{
struct npc_mcam_ena_dis_entry_req *req;
struct otx2_mbox *mbox = dev->mbox;
@@ -460,6 +460,8 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
uint16_t vlan_id)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct vlan_entry *entry;
int rc = -EINVAL;
if (!vlan_id && enable) {
@@ -473,6 +475,24 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
return 0;
}
+ /* Enable/disable existing vlan filter entries */
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
+ if (vlan_id) {
+ if (entry->vlan_id == vlan_id) {
+ rc = nix_vlan_mcam_enb_dis(dev,
+ entry->mcam_idx,
+ enable);
+ if (rc)
+ return rc;
+ }
+ } else {
+ rc = nix_vlan_mcam_enb_dis(dev, entry->mcam_idx,
+ enable);
+ if (rc)
+ return rc;
+ }
+ }
+
if (!vlan_id && !enable) {
rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
enable);
@@ -487,6 +507,85 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
return 0;
}
+/* Enable/disable vlan filtering for the given vlan_id */
+int
+otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
+ int on)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct vlan_entry *entry;
+ int entry_exists = 0;
+ int rc = -EINVAL;
+ int mcam_idx;
+
+ if (!vlan_id) {
+ otx2_err("Vlan Id can't be zero");
+ return rc;
+ }
+
+ if (!vlan->def_rx_mcam_idx) {
+ otx2_err("Vlan Filtering is disabled, enable it first");
+ return rc;
+ }
+
+ if (on) {
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
+ if (entry->vlan_id == vlan_id) {
+ /* Vlan entry already exists */
+ entry_exists = 1;
+ /* Mcam entry already allocated */
+ if (entry->mcam_idx) {
+ rc = nix_vlan_hw_filter(eth_dev, on,
+ vlan_id);
+ return rc;
+ }
+ break;
+ }
+ }
+
+ if (!entry_exists) {
+ entry = rte_zmalloc("otx2_nix_vlan_entry",
+ sizeof(struct vlan_entry), 0);
+ if (!entry) {
+ otx2_err("Failed to allocate memory");
+ return -ENOMEM;
+ }
+ }
+
+ /* Enables vlan_id & mac address based filtering */
+ if (eth_dev->data->promiscuous)
+ mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
+ VLAN_ID_MATCH);
+ else
+ mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
+ VLAN_ID_MATCH |
+ MAC_ADDR_MATCH);
+ if (mcam_idx < 0) {
+ otx2_err("Failed to config vlan mcam");
+ if (entry_exists)
+ TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
+ rte_free(entry);
+ return mcam_idx;
+ }
+
+ entry->mcam_idx = mcam_idx;
+ if (!entry_exists) {
+ entry->vlan_id = vlan_id;
+ TAILQ_INSERT_HEAD(&vlan->fltr_tbl, entry, next);
+ }
+ } else {
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
+ if (entry->vlan_id == vlan_id) {
+ nix_vlan_mcam_free(dev, entry->mcam_idx);
+ TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
+ rte_free(entry);
+ break;
+ }
+ }
+ }
+ return 0;
+}
+
/* Configure double vlan(qinq) on or off */
static int
otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev,
@@ -594,6 +693,13 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
return rc;
}
+void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue,
+ __rte_unused int on)
+{
+ otx2_err("Not Supported");
+}
+
static int
nix_vlan_rx_mkex_offset(uint64_t mask)
{
@@ -646,6 +752,27 @@ nix_vlan_get_mkex_info(struct otx2_eth_dev *dev)
return 0;
}
+static void nix_vlan_reinstall_vlan_filters(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct vlan_entry *entry;
+ int rc;
+
+ /* VLAN filters can't be set without setting filtering on */
+ rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, true);
+ if (rc) {
+ otx2_err("Failed to reinstall vlan filters");
+ return;
+ }
+
+ TAILQ_FOREACH(entry, &dev->vlan_info.fltr_tbl, next) {
+ rc = otx2_nix_vlan_filter_set(eth_dev, entry->vlan_id, true);
+ if (rc)
+ otx2_err("Failed to reinstall filter for vlan:%d",
+ entry->vlan_id);
+ }
+}
+
int
otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
{
@@ -661,6 +788,11 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
}
TAILQ_INIT(&dev->vlan_info.fltr_tbl);
+ } else {
+ /* Reinstall all mcam entries now if filter offload is set */
+ if (eth_dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_VLAN_FILTER)
+ nix_vlan_reinstall_vlan_filters(eth_dev);
}
mask =
@@ -679,8 +811,21 @@ otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct vlan_entry *entry;
int rc;
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
+ if (!dev->configured) {
+ TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
+ rte_free(entry);
+ } else {
+ /* MCAM entries freed by flow_fini & lf_free on
+ * port stop.
+ */
+ entry->mcam_idx = 0;
+ }
+ }
+
if (!dev->configured) {
if (vlan->def_rx_mcam_idx) {
rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
--
2.21.0
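A minimal usage sketch (assuming a DPDK 19.x application with
DEV_RX_OFFLOAD_VLAN_FILTER already enabled; port_id and allow_vlan()
are illustrative):

#include <rte_ethdev.h>

static int
allow_vlan(uint16_t port_id, uint16_t vlan_id)
{
	/* Ends up in otx2_nix_vlan_filter_set(); the PMD allocates an
	 * MCAM entry matching this VLAN ID (and the port MAC address
	 * when the device is not in promiscuous mode). */
	return rte_eth_dev_vlan_filter(port_id, vlan_id, 1);
}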
* [dpdk-dev] [PATCH v2 46/57] net/octeontx2: support VLAN TPID and PVID for Tx
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (44 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 45/57] net/octeontx2: support VLAN filters jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 47/57] net/octeontx2: add FW version get operation jerinj
` (11 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Implement support for setting VLAN TPID and PVID for Tx packets.
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 2 +
drivers/net/octeontx2/otx2_ethdev.h | 5 +-
drivers/net/octeontx2/otx2_vlan.c | 193 ++++++++++++++++++++++++++++
3 files changed, 199 insertions(+), 1 deletion(-)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 78cbd8811..ad305dcd8 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1348,6 +1348,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.vlan_offload_set = otx2_nix_vlan_offload_set,
.vlan_filter_set = otx2_nix_vlan_filter_set,
.vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set,
+ .vlan_tpid_set = otx2_nix_vlan_tpid_set,
+ .vlan_pvid_set = otx2_nix_vlan_pvid_set,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 996ddec47..12db92257 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -453,7 +453,10 @@ void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable);
int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
int on);
void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev,
- uint16_t queue, int on);
+ uint16_t queue, int on);
+int otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
+ enum rte_vlan_type type, uint16_t tpid);
+int otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on);
/* Lookup configuration */
void *otx2_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index 6216d6545..dc0f4e032 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -82,6 +82,39 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
entry->vtag_action = vtag_action;
}
+static void
+nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
+ int vtag_index)
+{
+ union {
+ uint64_t reg;
+ struct nix_tx_vtag_action_s act;
+ } vtag_action;
+
+ uint64_t action;
+
+ action = NIX_TX_ACTIONOP_UCAST_DEFAULT;
+
+ /*
+ * Take offset from LA since in case of untagged packet,
+ * lbptr is zero.
+ */
+ if (type == ETH_VLAN_TYPE_OUTER) {
+ vtag_action.act.vtag0_def = vtag_index;
+ vtag_action.act.vtag0_lid = NPC_LID_LA;
+ vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
+ vtag_action.act.vtag0_relptr = NIX_TX_VTAGACTION_VTAG0_RELPTR;
+ } else {
+ vtag_action.act.vtag1_def = vtag_index;
+ vtag_action.act.vtag1_lid = NPC_LID_LA;
+ vtag_action.act.vtag1_op = NIX_TX_VTAGOP_INSERT;
+ vtag_action.act.vtag1_relptr = NIX_TX_VTAGACTION_VTAG1_RELPTR;
+ }
+
+ entry->action = action;
+ entry->vtag_action = vtag_action.reg;
+}
+
static int
nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
{
@@ -416,6 +449,46 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
return 0;
}
+/* Installs/Removes default tx entry */
+static int
+nix_vlan_handle_default_tx_entry(struct rte_eth_dev *eth_dev,
+ enum rte_vlan_type type, int vtag_index,
+ int enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct mcam_entry entry;
+ uint16_t pf_func;
+ int rc;
+
+ if (!vlan->def_tx_mcam_idx && enable) {
+ memset(&entry, 0, sizeof(struct mcam_entry));
+
+ /* Only pf_func is matched, swap its bytes */
+ pf_func = (dev->pf_func & 0xff) << 8;
+ pf_func |= (dev->pf_func >> 8) & 0xff;
+
+ /* PF Func extracted to KW1[63:48] */
+ entry.kw[1] = (uint64_t)pf_func << 48;
+ entry.kw_mask[1] = (BIT_ULL(16) - 1) << 48;
+
+ nix_set_tx_vlan_action(&entry, type, vtag_index);
+ vlan->def_tx_mcam_ent = entry;
+
+ return nix_vlan_mcam_alloc_and_write(eth_dev, &entry,
+ NIX_INTF_TX, 0);
+ }
+
+ if (vlan->def_tx_mcam_idx && !enable) {
+ rc = nix_vlan_mcam_free(dev, vlan->def_tx_mcam_idx);
+ if (rc)
+ return rc;
+ vlan->def_tx_mcam_idx = 0;
+ }
+
+ return 0;
+}
+
/* Configure vlan stripping on or off */
static int
nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable)
@@ -693,6 +766,126 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
return rc;
}
+int
+otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
+ enum rte_vlan_type type, uint16_t tpid)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct nix_set_vlan_tpid *tpid_cfg;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc;
+
+ tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
+
+ tpid_cfg->tpid = tpid;
+ if (type == ETH_VLAN_TYPE_OUTER)
+ tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
+ else
+ tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ if (type == ETH_VLAN_TYPE_OUTER)
+ dev->vlan_info.outer_vlan_tpid = tpid;
+ else
+ dev->vlan_info.inner_vlan_tpid = tpid;
+ return 0;
+}
+
+int
+otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct otx2_eth_dev *otx2_dev = otx2_eth_pmd_priv(dev);
+ struct otx2_mbox *mbox = otx2_dev->mbox;
+ struct nix_vtag_config *vtag_cfg;
+ struct nix_vtag_config_rsp *rsp;
+ struct otx2_vlan_info *vlan;
+ int rc, rc1, vtag_index = 0;
+
+ if (vlan_id == 0) {
+ otx2_err("vlan id can't be zero");
+ return -EINVAL;
+ }
+
+ vlan = &otx2_dev->vlan_info;
+
+ if (on && vlan->pvid_insert_on && vlan->pvid == vlan_id) {
+ otx2_err("pvid %d is already enabled", vlan_id);
+ return -EINVAL;
+ }
+
+ if (on && vlan->pvid_insert_on && vlan->pvid != vlan_id) {
+ otx2_err("another pvid is enabled, disable that first");
+ return -EINVAL;
+ }
+
+ /* No pvid active */
+ if (!on && !vlan->pvid_insert_on)
+ return 0;
+
+ /* Given pvid already disabled */
+ if (!on && vlan->pvid != vlan_id)
+ return 0;
+
+ vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
+
+ if (on) {
+ vtag_cfg->cfg_type = VTAG_TX;
+ vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
+
+ if (vlan->outer_vlan_tpid)
+ vtag_cfg->tx.vtag0 =
+ (vlan->outer_vlan_tpid << 16) | vlan_id;
+ else
+ vtag_cfg->tx.vtag0 =
+ ((RTE_ETHER_TYPE_VLAN << 16) | vlan_id);
+ vtag_cfg->tx.cfg_vtag0 = 1;
+ } else {
+ vtag_cfg->cfg_type = VTAG_TX;
+ vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
+
+ vtag_cfg->tx.vtag0_idx = vlan->outer_vlan_idx;
+ vtag_cfg->tx.free_vtag0 = 1;
+ }
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (on) {
+ vtag_index = rsp->vtag0_idx;
+ } else {
+ vlan->pvid = 0;
+ vlan->pvid_insert_on = 0;
+ vlan->outer_vlan_idx = 0;
+ }
+
+ rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+ vtag_index, on);
+ if (rc < 0) {
+ otx2_err("Default tx entry failed with rc %d", rc);
+ vtag_cfg->tx.vtag0_idx = vtag_index;
+ vtag_cfg->tx.free_vtag0 = 1;
+ vtag_cfg->tx.cfg_vtag0 = 0;
+
+ rc1 = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc1)
+ otx2_err("Vtag free failed");
+
+ return rc;
+ }
+
+ if (on) {
+ vlan->pvid = vlan_id;
+ vlan->pvid_insert_on = 1;
+ vlan->outer_vlan_idx = vtag_index;
+ }
+
+ return 0;
+}
+
void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev,
__rte_unused uint16_t queue,
__rte_unused int on)
--
2.21.0
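A usage sketch for these two ops (assuming a DPDK 19.x application;
port_id, the 0x88A8 TPID and VLAN 42 are illustrative):

#include <rte_ethdev.h>

static int
setup_tx_vlan_insertion(uint16_t port_id)
{
	int rc;

	/* Use S-VLAN (0x88A8) as the TPID of inserted outer tags;
	 * reaches otx2_nix_vlan_tpid_set() via vlan_tpid_set. */
	rc = rte_eth_dev_set_vlan_ether_type(port_id, ETH_VLAN_TYPE_OUTER,
					     0x88A8);
	if (rc)
		return rc;

	/* Insert VLAN 42 into all transmitted packets; reaches
	 * otx2_nix_vlan_pvid_set() via vlan_pvid_set. */
	return rte_eth_dev_set_vlan_pvid(port_id, 42, 1);
}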
* [dpdk-dev] [PATCH v2 47/57] net/octeontx2: add FW version get operation
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (45 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 46/57] net/octeontx2: support VLAN TPID and PVID for Tx jerinj
@ 2019-06-30 18:05 ` jerinj
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 48/57] net/octeontx2: add Rx burst support jerinj
` (10 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:05 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add firmware version get operation.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 3 +++
drivers/net/octeontx2/otx2_ethdev_ops.c | 19 +++++++++++++++++++
drivers/net/octeontx2/otx2_flow.c | 7 +++++++
7 files changed, 33 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 37b802999..211ff93e7 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -33,6 +33,7 @@ Rx descriptor status = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
+FW version = Y
Module EEPROM dump = Y
Registers dump = Y
Linux VFIO = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index ccedd1359..967a3757d 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -31,6 +31,7 @@ Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+FW version = Y
Module EEPROM dump = Y
Registers dump = Y
Linux VFIO = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 24df14717..884167c88 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -26,6 +26,7 @@ Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+FW version = Y
Module EEPROM dump = Y
Registers dump = Y
Linux VFIO = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index ad305dcd8..15f46a9bf 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1336,6 +1336,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.filter_ctrl = otx2_nix_dev_filter_ctrl,
.get_module_info = otx2_nix_get_module_info,
.get_module_eeprom = otx2_nix_get_module_eeprom,
+ .fw_version_get = otx2_nix_fw_version_get,
.flow_ctrl_get = otx2_nix_flow_ctrl_get,
.flow_ctrl_set = otx2_nix_flow_ctrl_set,
.timesync_enable = otx2_nix_timesync_enable,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 12db92257..e18483969 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -235,6 +235,7 @@ struct otx2_eth_dev {
uint8_t lso_tsov4_idx;
uint8_t lso_tsov6_idx;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+ uint8_t mkex_pfl_name[MKEX_NAME_LEN];
uint8_t max_mac_entries;
uint8_t lf_tx_stats;
uint8_t lf_rx_stats;
@@ -340,6 +341,8 @@ void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
int otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
enum rte_filter_type filter_type,
enum rte_filter_op filter_op, void *arg);
+int otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
+ size_t fw_size);
int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_module_info *modinfo);
int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 690d8ac0c..6a3048336 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -210,6 +210,25 @@ otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
return 0;
}
+int
+otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
+ size_t fw_size)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc = (int)fw_size;
+
+ if (fw_size > sizeof(dev->mkex_pfl_name))
+ rc = sizeof(dev->mkex_pfl_name);
+
+ rc = strlcpy(fw_version, (char *)dev->mkex_pfl_name, rc);
+
+ rc += 1; /* Add the size of '\0' */
+ if (fw_size < (uint32_t)rc)
+ return rc;
+
+ return 0;
+}
+
int
otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
{
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 94bd85161..3ddecfb23 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -770,6 +770,7 @@ flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
struct otx2_npc_flow_info *npc = &dev->npc_flow;
struct npc_get_kex_cfg_rsp *kex_rsp;
struct otx2_mbox *mbox = dev->mbox;
+ char mkex_pfl_name[MKEX_NAME_LEN];
struct otx2_idev_kex_cfg *idev;
int rc = 0;
@@ -791,6 +792,12 @@ flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
sizeof(struct npc_get_kex_cfg_rsp));
}
+ otx2_mbox_memcpy(mkex_pfl_name,
+ idev->kex_cfg.mkex_pfl_name, MKEX_NAME_LEN);
+
+ strlcpy((char *)dev->mkex_pfl_name,
+ mkex_pfl_name, sizeof(dev->mkex_pfl_name));
+
flow_process_mkex_cfg(npc, &idev->kex_cfg);
done:
--
2.21.0
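A usage sketch (assuming a DPDK 19.x application; port_id, the helper
name and buffer size are illustrative; for this PMD the reported string
is the MKEX profile name):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_fw_version(uint16_t port_id)
{
	char fw[64];

	/* Returns 0 on success, or the required size (including the
	 * terminating '\0') if the buffer is too small. */
	if (rte_eth_dev_fw_version_get(port_id, fw, sizeof(fw)) == 0)
		printf("port %u firmware: %s\n", port_id, fw);
}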
* [dpdk-dev] [PATCH v2 48/57] net/octeontx2: add Rx burst support
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (46 preceding siblings ...)
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 47/57] net/octeontx2: add FW version get operation jerinj
@ 2019-06-30 18:06 ` jerinj
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 49/57] net/octeontx2: add Rx multi segment version jerinj
` (9 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:06 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: Pavan Nikhilesh, Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Add Rx burst support.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
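The core of this patch is pick_rx_func() below: each Rx offload flag
contributes one index bit into a multi-dimensional table of burst
functions specialized at compile time. A conceptual sketch of that
dispatch scheme, reduced to two flags for clarity (hypothetical names
rx_none/rx_rss/etc., not the driver's actual table):

#include <stdint.h>

typedef uint16_t (*rx_burst_t)(void *rxq, void **pkts, uint16_t n);

/* Hypothetical specializations; the driver generates one function per
 * flag combination through the NIX_RX_FASTPATH_MODES macro expansion. */
static uint16_t
rx_none(void *q, void **p, uint16_t n) { (void)q; (void)p; return n; }
static uint16_t
rx_rss(void *q, void **p, uint16_t n) { (void)q; (void)p; return n; }
static uint16_t
rx_csum(void *q, void **p, uint16_t n) { (void)q; (void)p; return n; }
static uint16_t
rx_csum_rss(void *q, void **p, uint16_t n) { (void)q; (void)p; return n; }

#define RSS_F  0x1 /* mirrors NIX_RX_OFFLOAD_RSS_F */
#define CSUM_F 0x4 /* mirrors NIX_RX_OFFLOAD_CHECKSUM_F */

/* Every combination of enabled offloads selects a burst function built
 * for exactly those features, so the fast path has no per-packet
 * feature branches. */
static rx_burst_t
pick(unsigned int flags)
{
	static const rx_burst_t tbl[2][2] = {
		/* [CKSUM][RSS] */
		{ rx_none, rx_rss },
		{ rx_csum, rx_csum_rss },
	};

	return tbl[!!(flags & CSUM_F)][!!(flags & RSS_F)];
}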
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 2 +-
drivers/net/octeontx2/otx2_ethdev.c | 6 -
drivers/net/octeontx2/otx2_ethdev.h | 2 +
drivers/net/octeontx2/otx2_rx.c | 129 +++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 247 ++++++++++++++++++++++++++++
6 files changed, 380 insertions(+), 7 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_rx.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index d22ddae33..3e25d2ad4 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -28,6 +28,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_rx.c \
otx2_tm.c \
otx2_rss.c \
otx2_mac.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 6281ee21b..975b2e715 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -2,7 +2,7 @@
# Copyright(C) 2019 Marvell International Ltd.
#
-sources = files(
+sources = files('otx2_rx.c',
'otx2_tm.c',
'otx2_rss.c',
'otx2_mac.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 15f46a9bf..1f8a22300 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -14,12 +14,6 @@
#include "otx2_ethdev.h"
-static inline void
-otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(eth_dev);
-}
-
static inline void
otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
{
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e18483969..22cf86981 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -280,6 +280,7 @@ struct otx2_eth_dev {
struct otx2_eth_qconf *tx_qconf;
struct otx2_eth_qconf *rx_qconf;
struct rte_eth_dev *eth_dev;
+ eth_rx_burst_t rx_pkt_burst_no_offload;
/* PTP counters */
bool ptp_en;
struct otx2_timesync_info tstamp;
@@ -482,6 +483,7 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
struct otx2_eth_dev *dev);
/* Rx and Tx routines */
+void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev);
void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
/* Timesync - PTP routines */
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
new file mode 100644
index 000000000..4d5223e10
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_vect.h>
+
+#include "otx2_ethdev.h"
+#include "otx2_rx.h"
+
+#define NIX_DESCS_PER_LOOP 4
+#define CQE_CAST(x) ((struct nix_cqe_hdr_s *)(x))
+#define CQE_SZ(x) ((x) * NIX_CQ_ENTRY_SZ)
+
+static inline uint16_t
+nix_rx_nb_pkts(struct otx2_eth_rxq *rxq, const uint64_t wdata,
+ const uint16_t pkts, const uint32_t qmask)
+{
+ uint32_t available = rxq->available;
+
+ /* Update the available count if cached value is not enough */
+ if (unlikely(available < pkts)) {
+ uint64_t reg, head, tail;
+
+ /* Use LDADDA version to avoid reorder */
+ reg = otx2_atomic64_add_sync(wdata, rxq->cq_status);
+ /* CQ_OP_STATUS operation error */
+ if (reg & BIT_ULL(CQ_OP_STAT_OP_ERR) ||
+ reg & BIT_ULL(CQ_OP_STAT_CQ_ERR))
+ return 0;
+
+ tail = reg & 0xFFFFF;
+ head = (reg >> 20) & 0xFFFFF;
+ if (tail < head)
+ available = tail - head + qmask + 1;
+ else
+ available = tail - head;
+
+ rxq->available = available;
+ }
+
+ return RTE_MIN(pkts, available);
+}
+
+static __rte_always_inline uint16_t
+nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ struct otx2_eth_rxq *rxq = rx_queue;
+ const uint64_t mbuf_init = rxq->mbuf_initializer;
+ const void *lookup_mem = rxq->lookup_mem;
+ const uint64_t data_off = rxq->data_off;
+ const uintptr_t desc = rxq->desc;
+ const uint64_t wdata = rxq->wdata;
+ const uint32_t qmask = rxq->qmask;
+ uint16_t packets = 0, nb_pkts;
+ uint32_t head = rxq->head;
+ struct nix_cqe_hdr_s *cq;
+ struct rte_mbuf *mbuf;
+
+ nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+
+ while (packets < nb_pkts) {
+ /* Prefetch N desc ahead */
+ rte_prefetch_non_temporal((void *)(desc + (CQE_SZ(head + 2))));
+ cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head));
+
+ mbuf = nix_get_mbuf_from_cqe(cq, data_off);
+
+ otx2_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
+ flags);
+ otx2_nix_mbuf_to_tstamp(mbuf, rxq->tstamp, flags);
+ rx_pkts[packets++] = mbuf;
+ otx2_prefetch_store_keep(mbuf);
+ head++;
+ head &= qmask;
+ }
+
+ rxq->head = head;
+ rxq->available -= nb_pkts;
+
+ /* Free all the CQEs that we've processed */
+ otx2_write64((wdata | nb_pkts), rxq->cq_door);
+
+ return nb_pkts;
+}
+
+
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+static uint16_t __rte_noinline __hot \
+otx2_nix_recv_pkts_ ## name(void *rx_queue, \
+ struct rte_mbuf **rx_pkts, uint16_t pkts) \
+{ \
+ return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \
+} \
+
+NIX_RX_FASTPATH_MODES
+#undef R
+
+static inline void
+pick_rx_func(struct rte_eth_dev *eth_dev,
+ const eth_rx_burst_t rx_burst[2][2][2][2][2][2])
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
+ eth_dev->rx_pkt_burst = rx_burst
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_RSS_F)];
+}
+
+void
+otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
+{
+ const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+ [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name,
+
+NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ pick_rx_func(eth_dev, nix_eth_rx_burst);
+
+ rte_mb();
+}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 7dc34d705..32343c27b 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -15,7 +15,10 @@
PTYPE_TUNNEL_ARRAY_SZ) *\
sizeof(uint16_t))
+#define NIX_RX_OFFLOAD_NONE (0)
+#define NIX_RX_OFFLOAD_RSS_F BIT(0)
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+#define NIX_RX_OFFLOAD_CHECKSUM_F BIT(2)
#define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3)
#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
@@ -30,4 +33,248 @@ struct otx2_timesync_info {
uint8_t rx_ready;
} __rte_cache_aligned;
+union mbuf_initializer {
+ struct {
+ uint16_t data_off;
+ uint16_t refcnt;
+ uint16_t nb_segs;
+ uint16_t port;
+ } fields;
+ uint64_t value;
+};
+
+static __rte_always_inline void
+otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
+ struct otx2_timesync_info *tstamp, const uint16_t flag)
+{
+ if ((flag & NIX_RX_OFFLOAD_TSTAMP_F) &&
+ mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC &&
+ (mbuf->data_off == RTE_PKTMBUF_HEADROOM +
+ NIX_TIMESYNC_RX_OFFSET)) {
+ uint64_t *tstamp_ptr;
+
+ /* Deal with rx timestamp */
+ tstamp_ptr = rte_pktmbuf_mtod_offset(mbuf, uint64_t *,
+ -NIX_TIMESYNC_RX_OFFSET);
+ mbuf->timestamp = rte_be_to_cpu_64(*tstamp_ptr);
+ tstamp->rx_tstamp = mbuf->timestamp;
+ tstamp->rx_ready = 1;
+ mbuf->ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST
+ | PKT_RX_TIMESTAMP;
+ }
+}
+
+static __rte_always_inline uint64_t
+nix_clear_data_off(uint64_t oldval)
+{
+ union mbuf_initializer mbuf_init = { .value = oldval };
+
+ mbuf_init.fields.data_off = 0;
+ return mbuf_init.value;
+}
+
+static __rte_always_inline struct rte_mbuf *
+nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off)
+{
+ rte_iova_t buff;
+
+ /* Skip CQE, NIX_RX_PARSE_S and SG HDR(9 DWORDs) and peek buff addr */
+ buff = *((rte_iova_t *)((uint64_t *)cq + 9));
+ return (struct rte_mbuf *)(buff - data_off);
+}
+
+
+static __rte_always_inline uint32_t
+nix_ptype_get(const void * const lookup_mem, const uint64_t in)
+{
+ const uint16_t * const ptype = lookup_mem;
+ const uint16_t lg_lf_le = (in & 0xFFF000000000000) >> 48;
+ const uint16_t tu_l2 = ptype[(in & 0x000FFF000000000) >> 36];
+ const uint16_t il4_tu = ptype[PTYPE_NON_TUNNEL_ARRAY_SZ + lg_lf_le];
+
+ return (il4_tu << PTYPE_WIDTH) | tu_l2;
+}
+
+static __rte_always_inline uint32_t
+nix_rx_olflags_get(const void * const lookup_mem, const uint64_t in)
+{
+ const uint32_t * const ol_flags = (const uint32_t *)
+ ((const uint8_t *)lookup_mem + PTYPE_ARRAY_SZ);
+
+ return ol_flags[(in & 0xfff00000) >> 20];
+}
+
+static inline uint64_t
+nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
+ struct rte_mbuf *mbuf)
+{
+ /* There is no separate bit to check whether match_id is
+ * valid, and no flag to distinguish an
+ * RTE_FLOW_ACTION_TYPE_FLAG action from an
+ * RTE_FLOW_ACTION_TYPE_MARK action. The former case is
+ * addressed by treating 0 as an invalid value and by
+ * incrementing/decrementing the match_id pair when MARK
+ * is activated. The latter case is addressed by defining
+ * OTX2_FLOW_MARK_DEFAULT as the value for
+ * RTE_FLOW_ACTION_TYPE_MARK.
+ * This translates to not using
+ * OTX2_FLOW_ACTION_FLAG_DEFAULT - 1 and
+ * OTX2_FLOW_ACTION_FLAG_DEFAULT for match_id,
+ * i.e. valid mark_ids range from
+ * 0 to OTX2_FLOW_ACTION_FLAG_DEFAULT - 2.
+ */
+ if (likely(match_id)) {
+ ol_flags |= PKT_RX_FDIR;
+ if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) {
+ ol_flags |= PKT_RX_FDIR_ID;
+ mbuf->hash.fdir.hi = match_id - 1;
+ }
+ }
+
+ return ol_flags;
+}
+
+static __rte_always_inline void
+otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
+ struct rte_mbuf *mbuf, const void *lookup_mem,
+ const uint64_t val, const uint16_t flag)
+{
+ const struct nix_rx_parse_s *rx =
+ (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
+ const uint64_t w1 = *(const uint64_t *)rx;
+ const uint16_t len = rx->pkt_lenm1 + 1;
+ uint16_t ol_flags = 0;
+
+ /* Mark mempool obj as "get" as it is alloc'ed by NIX */
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+
+ if (flag & NIX_RX_OFFLOAD_PTYPE_F)
+ mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
+ else
+ mbuf->packet_type = 0;
+
+ if (flag & NIX_RX_OFFLOAD_RSS_F) {
+ mbuf->hash.rss = tag;
+ ol_flags |= PKT_RX_RSS_HASH;
+ }
+
+ if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
+ ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
+
+ if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
+ if (rx->vtag0_gone) {
+ ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+ mbuf->vlan_tci = rx->vtag0_tci;
+ }
+ if (rx->vtag1_gone) {
+ ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+ mbuf->vlan_tci_outer = rx->vtag1_tci;
+ }
+ }
+
+ if (flag & NIX_RX_OFFLOAD_MARK_UPDATE_F)
+ ol_flags = nix_update_match_id(rx->match_id, ol_flags, mbuf);
+
+ mbuf->ol_flags = ol_flags;
+ *(uint64_t *)(&mbuf->rearm_data) = val;
+ mbuf->pkt_len = len;
+
+ mbuf->data_len = len;
+}
+
+#define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F
+#define PTYPE_F NIX_RX_OFFLOAD_PTYPE_F
+#define RSS_F NIX_RX_OFFLOAD_RSS_F
+#define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
+#define MARK_F NIX_RX_OFFLOAD_MARK_UPDATE_F
+#define TS_F NIX_RX_OFFLOAD_TSTAMP_F
+
+/* [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
+#define NIX_RX_FASTPATH_MODES \
+R(no_offload, 0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE) \
+R(rss, 0, 0, 0, 0, 0, 1, RSS_F) \
+R(ptype, 0, 0, 0, 0, 1, 0, PTYPE_F) \
+R(ptype_rss, 0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F) \
+R(cksum, 0, 0, 0, 1, 0, 0, CKSUM_F) \
+R(cksum_rss, 0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F) \
+R(cksum_ptype, 0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F) \
+R(cksum_ptype_rss, 0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)\
+R(vlan, 0, 0, 1, 0, 0, 0, RX_VLAN_F) \
+R(vlan_rss, 0, 0, 1, 0, 0, 1, RX_VLAN_F | RSS_F) \
+R(vlan_ptype, 0, 0, 1, 0, 1, 0, RX_VLAN_F | PTYPE_F) \
+R(vlan_ptype_rss, 0, 0, 1, 0, 1, 1, RX_VLAN_F | PTYPE_F | RSS_F)\
+R(vlan_cksum, 0, 0, 1, 1, 0, 0, RX_VLAN_F | CKSUM_F) \
+R(vlan_cksum_rss, 0, 0, 1, 1, 0, 1, RX_VLAN_F | CKSUM_F | RSS_F)\
+R(vlan_cksum_ptype, 0, 0, 1, 1, 1, 0, \
+ RX_VLAN_F | CKSUM_F | PTYPE_F) \
+R(vlan_cksum_ptype_rss, 0, 0, 1, 1, 1, 1, \
+ RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(mark, 0, 1, 0, 0, 0, 0, MARK_F) \
+R(mark_rss, 0, 1, 0, 0, 0, 1, MARK_F | RSS_F) \
+R(mark_ptype, 0, 1, 0, 0, 1, 0, MARK_F | PTYPE_F) \
+R(mark_ptype_rss, 0, 1, 0, 0, 1, 1, MARK_F | PTYPE_F | RSS_F)\
+R(mark_cksum, 0, 1, 0, 1, 0, 0, MARK_F | CKSUM_F) \
+R(mark_cksum_rss, 0, 1, 0, 1, 0, 1, MARK_F | CKSUM_F | RSS_F)\
+R(mark_cksum_ptype, 0, 1, 0, 1, 1, 0, MARK_F | CKSUM_F | PTYPE_F)\
+R(mark_cksum_ptype_rss, 0, 1, 0, 1, 1, 1, \
+ MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(mark_vlan, 0, 1, 1, 0, 0, 0, MARK_F | RX_VLAN_F) \
+R(mark_vlan_rss, 0, 1, 1, 0, 0, 1, MARK_F | RX_VLAN_F | RSS_F)\
+R(mark_vlan_ptype, 0, 1, 1, 0, 1, 0, \
+ MARK_F | RX_VLAN_F | PTYPE_F) \
+R(mark_vlan_ptype_rss, 0, 1, 1, 0, 1, 1, \
+ MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
+R(mark_vlan_cksum, 0, 1, 1, 1, 0, 0, \
+ MARK_F | RX_VLAN_F | CKSUM_F) \
+R(mark_vlan_cksum_rss, 0, 1, 1, 1, 0, 1, \
+ MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
+R(mark_vlan_cksum_ptype, 0, 1, 1, 1, 1, 0, \
+ MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
+R(mark_vlan_cksum_ptype_rss, 0, 1, 1, 1, 1, 1, \
+ MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(ts, 1, 0, 0, 0, 0, 0, TS_F) \
+R(ts_rss, 1, 0, 0, 0, 0, 1, TS_F | RSS_F) \
+R(ts_ptype, 1, 0, 0, 0, 1, 0, TS_F | PTYPE_F) \
+R(ts_ptype_rss, 1, 0, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F)\
+R(ts_cksum, 1, 0, 0, 1, 0, 0, TS_F | CKSUM_F) \
+R(ts_cksum_rss, 1, 0, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F)\
+R(ts_cksum_ptype, 1, 0, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F)\
+R(ts_cksum_ptype_rss, 1, 0, 0, 1, 1, 1, \
+ TS_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(ts_vlan, 1, 0, 1, 0, 0, 0, TS_F | RX_VLAN_F) \
+R(ts_vlan_rss, 1, 0, 1, 0, 0, 1, TS_F | RX_VLAN_F | RSS_F)\
+R(ts_vlan_ptype, 1, 0, 1, 0, 1, 0, TS_F | RX_VLAN_F | PTYPE_F)\
+R(ts_vlan_ptype_rss, 1, 0, 1, 0, 1, 1, \
+ TS_F | RX_VLAN_F | PTYPE_F | RSS_F) \
+R(ts_vlan_cksum, 1, 0, 1, 1, 0, 0, \
+ TS_F | RX_VLAN_F | CKSUM_F) \
+R(ts_vlan_cksum_rss, 1, 0, 1, 1, 0, 1, \
+ TS_F | RX_VLAN_F | CKSUM_F | RSS_F) \
+R(ts_vlan_cksum_ptype, 1, 0, 1, 1, 1, 0, \
+ TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
+R(ts_vlan_cksum_ptype_rss, 1, 0, 1, 1, 1, 1, \
+ TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(ts_mark, 1, 1, 0, 0, 0, 0, TS_F | MARK_F) \
+R(ts_mark_rss, 1, 1, 0, 0, 0, 1, TS_F | MARK_F | RSS_F)\
+R(ts_mark_ptype, 1, 1, 0, 0, 1, 0, TS_F | MARK_F | PTYPE_F)\
+R(ts_mark_ptype_rss, 1, 1, 0, 0, 1, 1, \
+ TS_F | MARK_F | PTYPE_F | RSS_F) \
+R(ts_mark_cksum, 1, 1, 0, 1, 0, 0, TS_F | MARK_F | CKSUM_F)\
+R(ts_mark_cksum_rss, 1, 1, 0, 1, 0, 1, \
+ TS_F | MARK_F | CKSUM_F | RSS_F)\
+R(ts_mark_cksum_ptype, 1, 1, 0, 1, 1, 0, \
+ TS_F | MARK_F | CKSUM_F | PTYPE_F) \
+R(ts_mark_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, \
+ TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(ts_mark_vlan, 1, 1, 1, 0, 0, 0, TS_F | MARK_F | RX_VLAN_F)\
+R(ts_mark_vlan_rss, 1, 1, 1, 0, 0, 1, \
+ TS_F | MARK_F | RX_VLAN_F | RSS_F)\
+R(ts_mark_vlan_ptype, 1, 1, 1, 0, 1, 0, \
+ TS_F | MARK_F | RX_VLAN_F | PTYPE_F) \
+R(ts_mark_vlan_ptype_rss, 1, 1, 1, 0, 1, 1, \
+ TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
+R(ts_mark_vlan_cksum, 1, 1, 1, 1, 0, 0, \
+ TS_F | MARK_F | RX_VLAN_F | CKSUM_F) \
+R(ts_mark_vlan_cksum_rss, 1, 1, 1, 1, 0, 1, \
+ TS_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
+R(ts_mark_vlan_cksum_ptype, 1, 1, 1, 1, 1, 0, \
+ TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
+R(ts_mark_vlan_cksum_ptype_rss, 1, 1, 1, 1, 1, 1, \
+ TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)
+
#endif /* __OTX2_RX_H__ */
--
2.21.0
* [dpdk-dev] [PATCH v2 49/57] net/octeontx2: add Rx multi segment version
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (47 preceding siblings ...)
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 48/57] net/octeontx2: add Rx burst support jerinj
@ 2019-06-30 18:06 ` jerinj
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 50/57] net/octeontx2: add Rx vector version jerinj
` (8 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:06 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K, Anatoly Burakov
Cc: Pavan Nikhilesh
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add the multi-segment version of the packet receive function.
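Scattered packets are delivered to the application as chained mbufs. A
minimal sketch of consuming such a chain, using only standard rte_mbuf
fields (the helper name is illustrative, not part of this patch):

    #include <rte_mbuf.h>

    /* Sum data_len across all segments of a scattered Rx packet;
     * for a chain built by nix_cqe_xtract_mseg() this total matches
     * the pkt_len set by otx2_nix_cqe_to_mbuf().
     */
    static uint32_t
    count_seg_bytes(const struct rte_mbuf *m)
    {
        uint32_t bytes = 0;

        while (m != NULL) {
            bytes += m->data_len;
            m = m->next;
        }
        return bytes;
    }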
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
doc/guides/nics/octeontx2.rst | 2 +
drivers/net/octeontx2/otx2_rx.c | 25 ++++++++++
drivers/net/octeontx2/otx2_rx.h | 55 +++++++++++++++++++++-
6 files changed, 86 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 211ff93e7..3280cba78 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -24,6 +24,8 @@ Inner RSS = Y
VLAN filter = Y
Flow control = Y
Flow API = Y
+Jumbo frame = Y
+Scattered Rx = Y
VLAN offload = Y
QinQ offload = Y
Packet type parsing = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 967a3757d..315722e60 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -24,6 +24,7 @@ Inner RSS = Y
VLAN filter = Y
Flow control = Y
Flow API = Y
+Jumbo frame = Y
VLAN offload = Y
QinQ offload = Y
Packet type parsing = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 884167c88..17b223221 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -19,6 +19,8 @@ RSS reta update = Y
Inner RSS = Y
VLAN filter = Y
Flow API = Y
+Jumbo frame = Y
+Scattered Rx = Y
VLAN offload = Y
QinQ offload = Y
Packet type parsing = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index d6082e508..3d9fc5f1d 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Packet type information
- Promiscuous mode
+- Jumbo frames
- SR-IOV VF
- Lock-free Tx queue
- Multiple queues for TX and RX
@@ -28,6 +29,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Port hardware statistics
- Link state information
- Link flow control
+- Scatter-Gather IO support
- Debug utilities - Context dump and error interrupt support
- IEEE1588 timestamping
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index 4d5223e10..fca182785 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -92,6 +92,14 @@ otx2_nix_recv_pkts_ ## name(void *rx_queue, \
{ \
return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \
} \
+ \
+static uint16_t __rte_noinline __hot \
+otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \
+ struct rte_mbuf **rx_pkts, uint16_t pkts) \
+{ \
+ return nix_recv_pkts(rx_queue, rx_pkts, pkts, \
+ (flags) | NIX_RX_MULTI_SEG_F); \
+} \
NIX_RX_FASTPATH_MODES
#undef R
@@ -115,15 +123,32 @@ pick_rx_func(struct rte_eth_dev *eth_dev,
void
otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = {
#define R(name, f5, f4, f3, f2, f1, f0, flags) \
[f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name,
+NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+ [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_mseg_ ## name,
+
NIX_RX_FASTPATH_MODES
#undef R
};
pick_rx_func(eth_dev, nix_eth_rx_burst);
+ if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
+
+ /* Keep the no-offload multi-seg version for the teardown sequence */
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+ dev->rx_pkt_burst_no_offload =
+ nix_eth_rx_burst_mseg[0][0][0][0][0][0];
rte_mb();
}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 32343c27b..167badd46 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -23,6 +23,11 @@
#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
+/* Flags to control the cqe_to_mbuf conversion function.
+ * Defined from the MSB end to denote that they are not
+ * used as offload flags when picking the burst function.
+ */
+#define NIX_RX_MULTI_SEG_F BIT(15)
#define NIX_TIMESYNC_RX_OFFSET 8
struct otx2_timesync_info {
@@ -133,6 +138,51 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
return ol_flags;
}
+static __rte_always_inline void
+nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx,
+ struct rte_mbuf *mbuf, uint64_t rearm)
+{
+ const rte_iova_t *iova_list;
+ struct rte_mbuf *head;
+ const rte_iova_t *eol;
+ uint8_t nb_segs;
+ uint64_t sg;
+
+ sg = *(const uint64_t *)(rx + 1);
+ nb_segs = (sg >> 48) & 0x3;
+ mbuf->nb_segs = nb_segs;
+ mbuf->data_len = sg & 0xFFFF;
+ sg = sg >> 16;
+
+ eol = ((const rte_iova_t *)(rx + 1) + ((rx->desc_sizem1 + 1) << 1));
+ /* Skip SG_S and first IOVA*/
+ iova_list = ((const rte_iova_t *)(rx + 1)) + 2;
+ nb_segs--;
+
+ rearm = rearm & ~0xFFFF;
+
+ head = mbuf;
+ while (nb_segs) {
+ mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
+ mbuf = mbuf->next;
+
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+
+ mbuf->data_len = sg & 0xFFFF;
+ sg = sg >> 16;
+ *(uint64_t *)(&mbuf->rearm_data) = rearm;
+ nb_segs--;
+ iova_list++;
+
+ if (!nb_segs && (iova_list + 1 < eol)) {
+ sg = *(const uint64_t *)(iova_list);
+ nb_segs = (sg >> 48) & 0x3;
+ head->nb_segs += nb_segs;
+ iova_list = (const rte_iova_t *)(iova_list + 1);
+ }
+ }
+}
+
static __rte_always_inline void
otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
struct rte_mbuf *mbuf, const void *lookup_mem,
@@ -178,7 +228,10 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
*(uint64_t *)(&mbuf->rearm_data) = val;
mbuf->pkt_len = len;
- mbuf->data_len = len;
+ if (flag & NIX_RX_MULTI_SEG_F)
+ nix_cqe_xtract_mseg(rx, mbuf, val);
+ else
+ mbuf->data_len = len;
}
#define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F
--
2.21.0
* [dpdk-dev] [PATCH v2 50/57] net/octeontx2: add Rx vector version
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (48 preceding siblings ...)
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 49/57] net/octeontx2: add Rx multi segment version jerinj
@ 2019-06-30 18:06 ` jerinj
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 51/57] net/octeontx2: add Tx burst support jerinj
` (7 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:06 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
From: Jerin Jacob <jerinj@marvell.com>
Add the vector version of the packet receive function.
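The NEON path processes four completion entries per loop iteration, so
the requested burst is floored to a multiple of NIX_DESCS_PER_LOOP. A
standalone sketch of that alignment step, assuming only that the loop
width is a power of two (plain ISO C, no NEON required):

    #include <stdio.h>

    #define DESCS_PER_LOOP 4 /* mirrors NIX_DESCS_PER_LOOP */

    /* Equivalent of RTE_ALIGN_FLOOR(pkts, 4) for a power-of-two width */
    static unsigned int
    align_floor(unsigned int pkts)
    {
        return pkts & ~(unsigned int)(DESCS_PER_LOOP - 1);
    }

    int main(void)
    {
        /* 7 requested -> 4 handled now; the remaining 3 are served
         * by the next poll instead of a scalar tail loop.
         */
        printf("%u\n", align_floor(7));
        return 0;
    }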
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 2 +
drivers/net/octeontx2/otx2_rx.c | 259 +++++++++++++++++++++++++++++-
4 files changed, 262 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 3d9fc5f1d..9d6596ad8 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -30,6 +30,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Link state information
- Link flow control
- Scatter-Gather IO support
+- Vector Poll mode driver
- Debug utilities - Context dump and error interrupt support
- IEEE1588 timestamping
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 3e25d2ad4..a5f125655 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -14,6 +14,7 @@ CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2
CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
CFLAGS += -O3
+CFLAGS += -flax-vector-conversions
ifneq ($(CONFIG_RTE_ARCH_64),y)
CFLAGS += -Wno-int-to-pointer-cast
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 975b2e715..9d151f88d 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -24,6 +24,8 @@ sources = files('otx2_rx.c',
deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2']
+cflags += ['-flax-vector-conversions']
+
extra_flags = []
# This integrated controller runs only on a arm64 machine, remove 32bit warnings
if not dpdk_conf.get('RTE_ARCH_64')
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index fca182785..deefe9588 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -84,6 +84,239 @@ nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_pkts;
}
+#if defined(RTE_ARCH_ARM64)
+
+static __rte_always_inline uint64_t
+nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
+{
+ if (w2 & BIT_ULL(21) /* vtag0_gone */) {
+ ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+ *f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
+ }
+
+ return ol_flags;
+}
+
+static __rte_always_inline uint64_t
+nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
+{
+ if (w2 & BIT_ULL(23) /* vtag1_gone */) {
+ ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+ mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
+ }
+
+ return ol_flags;
+}
+
+static __rte_always_inline uint16_t
+nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ struct otx2_eth_rxq *rxq = rx_queue; uint16_t packets = 0;
+ uint64x2_t cq0_w8, cq1_w8, cq2_w8, cq3_w8, mbuf01, mbuf23;
+ const uint64_t mbuf_initializer = rxq->mbuf_initializer;
+ const uint64x2_t data_off = vdupq_n_u64(rxq->data_off);
+ uint64_t ol_flags0, ol_flags1, ol_flags2, ol_flags3;
+ uint64x2_t rearm0 = vdupq_n_u64(mbuf_initializer);
+ uint64x2_t rearm1 = vdupq_n_u64(mbuf_initializer);
+ uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
+ uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
+ struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+ const uint16_t *lookup_mem = rxq->lookup_mem;
+ const uint32_t qmask = rxq->qmask;
+ const uint64_t wdata = rxq->wdata;
+ const uintptr_t desc = rxq->desc;
+ uint8x16_t f0, f1, f2, f3;
+ uint32_t head = rxq->head;
+
+ pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+ /* Packets have to be floor-aligned to NIX_DESCS_PER_LOOP */
+ pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
+
+ while (packets < pkts) {
+ /* Get the CQ pointers; since the ring size is a multiple
+ * of 4, we can avoid checking for head wrap-around after
+ * each access, unlike the scalar version.
+ */
+ const uintptr_t cq0 = desc + CQE_SZ(head);
+
+ /* Prefetch N desc ahead */
+ rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(8)));
+ rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(9)));
+ rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(10)));
+ rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(11)));
+
+ /* Get NIX_RX_SG_S for size and buffer pointer */
+ cq0_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0) + 64));
+ cq1_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1) + 64));
+ cq2_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2) + 64));
+ cq3_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3) + 64));
+
+ /* Extract mbuf from NIX_RX_SG_S */
+ mbuf01 = vzip2q_u64(cq0_w8, cq1_w8);
+ mbuf23 = vzip2q_u64(cq2_w8, cq3_w8);
+ mbuf01 = vqsubq_u64(mbuf01, data_off);
+ mbuf23 = vqsubq_u64(mbuf23, data_off);
+
+ /* Move mbufs to scalar registers for future use */
+ mbuf0 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 0);
+ mbuf1 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 1);
+ mbuf2 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 0);
+ mbuf3 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 1);
+
+ /* Mask to get packet len from NIX_RX_SG_S */
+ const uint8x16_t shuf_msk = {
+ 0xFF, 0xFF, /* pkt_type set as unknown */
+ 0xFF, 0xFF, /* pkt_type set as unknown */
+ 0, 1, /* octet 1~0, low 16 bits pkt_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
+ 0, 1, /* octet 1~0, 16 bits data_len */
+ 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF
+ };
+
+ /* Form the rx_descriptor_fields1 with pkt_len and data_len */
+ f0 = vqtbl1q_u8(cq0_w8, shuf_msk);
+ f1 = vqtbl1q_u8(cq1_w8, shuf_msk);
+ f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
+ f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
+
+ /* Load CQE word0 and word 1 */
+ uint64x2_t cq0_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0)));
+ uint64x2_t cq1_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1)));
+ uint64x2_t cq2_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2)));
+ uint64x2_t cq3_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3)));
+
+ if (flags & NIX_RX_OFFLOAD_RSS_F) {
+ /* Fill rss in the rx_descriptor_fields1 */
+ f0 = vsetq_lane_u32(vgetq_lane_u32(cq0_w0, 0), f0, 3);
+ f1 = vsetq_lane_u32(vgetq_lane_u32(cq1_w0, 0), f1, 3);
+ f2 = vsetq_lane_u32(vgetq_lane_u32(cq2_w0, 0), f2, 3);
+ f3 = vsetq_lane_u32(vgetq_lane_u32(cq3_w0, 0), f3, 3);
+ ol_flags0 = PKT_RX_RSS_HASH;
+ ol_flags1 = PKT_RX_RSS_HASH;
+ ol_flags2 = PKT_RX_RSS_HASH;
+ ol_flags3 = PKT_RX_RSS_HASH;
+ } else {
+ ol_flags0 = 0; ol_flags1 = 0;
+ ol_flags2 = 0; ol_flags3 = 0;
+ }
+
+ if (flags & NIX_RX_OFFLOAD_PTYPE_F) {
+ /* Fill packet_type in the rx_descriptor_fields1 */
+ f0 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
+ vgetq_lane_u64(cq0_w0, 1)), f0, 0);
+ f1 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
+ vgetq_lane_u64(cq1_w0, 1)), f1, 0);
+ f2 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
+ vgetq_lane_u64(cq2_w0, 1)), f2, 0);
+ f3 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
+ vgetq_lane_u64(cq3_w0, 1)), f3, 0);
+ }
+
+ if (flags & NIX_RX_OFFLOAD_CHECKSUM_F) {
+ ol_flags0 |= nix_rx_olflags_get(lookup_mem,
+ vgetq_lane_u64(cq0_w0, 1));
+ ol_flags1 |= nix_rx_olflags_get(lookup_mem,
+ vgetq_lane_u64(cq1_w0, 1));
+ ol_flags2 |= nix_rx_olflags_get(lookup_mem,
+ vgetq_lane_u64(cq2_w0, 1));
+ ol_flags3 |= nix_rx_olflags_get(lookup_mem,
+ vgetq_lane_u64(cq3_w0, 1));
+ }
+
+ if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
+ uint64_t cq0_w2 = *(uint64_t *)(cq0 + CQE_SZ(0) + 16);
+ uint64_t cq1_w2 = *(uint64_t *)(cq0 + CQE_SZ(1) + 16);
+ uint64_t cq2_w2 = *(uint64_t *)(cq0 + CQE_SZ(2) + 16);
+ uint64_t cq3_w2 = *(uint64_t *)(cq0 + CQE_SZ(3) + 16);
+
+ ol_flags0 = nix_vlan_update(cq0_w2, ol_flags0, &f0);
+ ol_flags1 = nix_vlan_update(cq1_w2, ol_flags1, &f1);
+ ol_flags2 = nix_vlan_update(cq2_w2, ol_flags2, &f2);
+ ol_flags3 = nix_vlan_update(cq3_w2, ol_flags3, &f3);
+
+ ol_flags0 = nix_qinq_update(cq0_w2, ol_flags0, mbuf0);
+ ol_flags1 = nix_qinq_update(cq1_w2, ol_flags1, mbuf1);
+ ol_flags2 = nix_qinq_update(cq2_w2, ol_flags2, mbuf2);
+ ol_flags3 = nix_qinq_update(cq3_w2, ol_flags3, mbuf3);
+ }
+
+ if (flags & NIX_RX_OFFLOAD_MARK_UPDATE_F) {
+ ol_flags0 = nix_update_match_id(*(uint16_t *)
+ (cq0 + CQE_SZ(0) + 38), ol_flags0, mbuf0);
+ ol_flags1 = nix_update_match_id(*(uint16_t *)
+ (cq0 + CQE_SZ(1) + 38), ol_flags1, mbuf1);
+ ol_flags2 = nix_update_match_id(*(uint16_t *)
+ (cq0 + CQE_SZ(2) + 38), ol_flags2, mbuf2);
+ ol_flags3 = nix_update_match_id(*(uint16_t *)
+ (cq0 + CQE_SZ(3) + 38), ol_flags3, mbuf3);
+ }
+
+ /* Form rearm_data with ol_flags */
+ rearm0 = vsetq_lane_u64(ol_flags0, rearm0, 1);
+ rearm1 = vsetq_lane_u64(ol_flags1, rearm1, 1);
+ rearm2 = vsetq_lane_u64(ol_flags2, rearm2, 1);
+ rearm3 = vsetq_lane_u64(ol_flags3, rearm3, 1);
+
+ /* Update rx_descriptor_fields1 */
+ vst1q_u64((uint64_t *)mbuf0->rx_descriptor_fields1, f0);
+ vst1q_u64((uint64_t *)mbuf1->rx_descriptor_fields1, f1);
+ vst1q_u64((uint64_t *)mbuf2->rx_descriptor_fields1, f2);
+ vst1q_u64((uint64_t *)mbuf3->rx_descriptor_fields1, f3);
+
+ /* Update rearm_data */
+ vst1q_u64((uint64_t *)mbuf0->rearm_data, rearm0);
+ vst1q_u64((uint64_t *)mbuf1->rearm_data, rearm1);
+ vst1q_u64((uint64_t *)mbuf2->rearm_data, rearm2);
+ vst1q_u64((uint64_t *)mbuf3->rearm_data, rearm3);
+
+ /* Store the mbufs to rx_pkts */
+ vst1q_u64((uint64_t *)&rx_pkts[packets], mbuf01);
+ vst1q_u64((uint64_t *)&rx_pkts[packets + 2], mbuf23);
+
+ /* Prefetch mbufs */
+ otx2_prefetch_store_keep(mbuf0);
+ otx2_prefetch_store_keep(mbuf1);
+ otx2_prefetch_store_keep(mbuf2);
+ otx2_prefetch_store_keep(mbuf3);
+
+ /* Mark mempool obj as "get" as it is alloc'ed by NIX */
+ __mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1);
+ __mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1);
+ __mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1);
+ __mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1);
+
+ /* Advance head pointer and packets */
+ head += NIX_DESCS_PER_LOOP; head &= qmask;
+ packets += NIX_DESCS_PER_LOOP;
+ }
+
+ rxq->head = head;
+ rxq->available -= packets;
+
+ rte_cio_wmb();
+ /* Free all the CQEs that we've processed */
+ otx2_write64((rxq->wdata | packets), rxq->cq_door);
+
+ return packets;
+}
+
+#else
+
+static inline uint16_t
+nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ RTE_SET_USED(rx_queue);
+ RTE_SET_USED(rx_pkts);
+ RTE_SET_USED(pkts);
+ RTE_SET_USED(flags);
+
+ return 0;
+}
+
+#endif
#define R(name, f5, f4, f3, f2, f1, f0, flags) \
static uint16_t __rte_noinline __hot \
@@ -100,6 +333,16 @@ otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \
return nix_recv_pkts(rx_queue, rx_pkts, pkts, \
(flags) | NIX_RX_MULTI_SEG_F); \
} \
+ \
+static uint16_t __rte_noinline __hot \
+otx2_nix_recv_pkts_vec_ ## name(void *rx_queue, \
+ struct rte_mbuf **rx_pkts, uint16_t pkts) \
+{ \
+ /* TSTMP is not supported by vector */ \
+ if ((flags) & NIX_RX_OFFLOAD_TSTAMP_F) \
+ return 0; \
+ return nix_recv_pkts_vector(rx_queue, rx_pkts, pkts, (flags)); \
+} \
NIX_RX_FASTPATH_MODES
#undef R
@@ -141,7 +384,21 @@ NIX_RX_FASTPATH_MODES
#undef R
};
- pick_rx_func(eth_dev, nix_eth_rx_burst);
+ const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+ [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_vec_ ## name,
+
+NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ /* When PTP is enabled, the scalar Rx function should be chosen,
+ * as most PTP applications are written to receive one packet
+ * per Rx burst.
+ */
+ if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ pick_rx_func(eth_dev, nix_eth_rx_burst);
+ else
+ pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
--
2.21.0
* [dpdk-dev] [PATCH v2 51/57] net/octeontx2: add Tx burst support
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (49 preceding siblings ...)
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 50/57] net/octeontx2: add Rx vector version jerinj
@ 2019-06-30 18:06 ` jerinj
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 52/57] net/octeontx2: add Tx multi segment version jerinj
` (6 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:06 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Pavan Nikhilesh, Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Add Tx burst support.
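Tx flow control works on a cached credit count expressed in packets,
refreshed from the hardware SQB-consumption counter only when the cache
runs low. A sketch of the credit arithmetic used by
NIX_XMIT_FC_OR_RETURN; the variable names mirror the txq fields, but
the values are made up for illustration:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        uint64_t nb_sqb_bufs_adj = 512; /* SQBs owned by this SQ */
        uint64_t fc_mem = 500;          /* in-use SQBs per HW counter */
        uint64_t sqes_per_sqb_log2 = 3; /* 8 SQEs (packets) per SQB */

        /* Free SQBs converted to packets, as the macro computes it */
        uint64_t fc_cache_pkts =
            (nb_sqb_bufs_adj - fc_mem) << sqes_per_sqb_log2;

        printf("room for %" PRIu64 " packets\n", fc_cache_pkts); /* 96 */
        return 0;
    }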
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 5 +
doc/guides/nics/features/octeontx2_vec.ini | 5 +
doc/guides/nics/features/octeontx2_vf.ini | 5 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 6 -
drivers/net/octeontx2/otx2_ethdev.h | 1 +
drivers/net/octeontx2/otx2_tx.c | 94 ++++++++
drivers/net/octeontx2/otx2_tx.h | 261 +++++++++++++++++++++
10 files changed, 374 insertions(+), 6 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_tx.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 3280cba78..1856d9924 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
Promiscuous mode = Y
@@ -28,6 +29,10 @@ Jumbo frame = Y
Scattered Rx = Y
VLAN offload = Y
QinQ offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Inner L3 checksum = Y
+Inner L4 checksum = Y
Packet type parsing = Y
Timesync = Y
Timestamp offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 315722e60..053fca288 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
Promiscuous mode = Y
@@ -27,6 +28,10 @@ Flow API = Y
Jumbo frame = Y
VLAN offload = Y
QinQ offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Inner L3 checksum = Y
+Inner L4 checksum = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 17b223221..bef451d01 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -11,6 +11,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
RSS hash = Y
@@ -23,6 +24,10 @@ Jumbo frame = Y
Scattered Rx = Y
VLAN offload = Y
QinQ offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Inner L3 checksum = Y
+Inner L4 checksum = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 9d6596ad8..90ca4e2d2 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -25,6 +25,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Receiver Side Scaling (RSS)
- MAC/VLAN filtering
- Generic flow API
+- Inner and Outer Checksum offload
- VLAN/QinQ stripping and insertion
- Port hardware statistics
- Link state information
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index a5f125655..c187d2555 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -30,6 +30,7 @@ LIBABIVER := 1
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_rx.c \
+ otx2_tx.c \
otx2_tm.c \
otx2_rss.c \
otx2_mac.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 9d151f88d..94bf09a78 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files('otx2_rx.c',
+ 'otx2_tx.c',
'otx2_tm.c',
'otx2_rss.c',
'otx2_mac.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 1f8a22300..44753cbf5 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -14,12 +14,6 @@
#include "otx2_ethdev.h"
-static inline void
-otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(eth_dev);
-}
-
static inline uint64_t
nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
{
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 22cf86981..1f9323fe3 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -484,6 +484,7 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
/* Rx and Tx routines */
void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev);
+void otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev);
void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
/* Timesync - PTP routines */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
new file mode 100644
index 000000000..16d69b74f
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_vect.h>
+
+#include "otx2_ethdev.h"
+
+#define NIX_XMIT_FC_OR_RETURN(txq, pkts) do { \
+ /* Cached value is low, update fc_cache_pkts */ \
+ if (unlikely((txq)->fc_cache_pkts < (pkts))) { \
+ /* Multiply by SQEs per SQB to express in packets */ \
+ (txq)->fc_cache_pkts = \
+ ((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem) << \
+ (txq)->sqes_per_sqb_log2; \
+ /* Check again for room */ \
+ if (unlikely((txq)->fc_cache_pkts < (pkts))) \
+ return 0; \
+ } \
+} while (0)
+
+
+static __rte_always_inline uint16_t
+nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t pkts, uint64_t *cmd, const uint16_t flags)
+{
+ struct otx2_eth_txq *txq = tx_queue; uint16_t i;
+ const rte_iova_t io_addr = txq->io_addr;
+ void *lmt_addr = txq->lmt_addr;
+
+ NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+ otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
+
+ /* Let's commit any changes in the packet */
+ rte_cio_wmb();
+
+ for (i = 0; i < pkts; i++) {
+ otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags);
+ /* Passing the number of segdw as 4: HDR + EXT + SG + SMEM */
+ otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
+ tx_pkts[i]->ol_flags, 4, flags);
+ otx2_nix_xmit_one(cmd, lmt_addr, io_addr, flags);
+ }
+
+ /* Reduce the cached count */
+ txq->fc_cache_pkts -= pkts;
+
+ return pkts;
+}
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+static uint16_t __rte_noinline __hot \
+otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
+ struct rte_mbuf **tx_pkts, uint16_t pkts) \
+{ \
+ uint64_t cmd[sz]; \
+ \
+ return nix_xmit_pkts(tx_queue, tx_pkts, pkts, cmd, flags); \
+}
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
+static inline void
+pick_tx_func(struct rte_eth_dev *eth_dev,
+ const eth_tx_burst_t tx_burst[2][2][2][2][2])
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* [TSTMP] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
+ eth_dev->tx_pkt_burst = tx_burst
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+}
+
+void
+otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
+{
+ const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name,
+
+NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ pick_tx_func(eth_dev, nix_eth_tx_burst);
+
+ rte_mb();
+}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index 4d0993f87..db4c1f70f 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -25,4 +25,265 @@
#define NIX_TX_NEED_EXT_HDR \
(NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F)
+/* Determine the number of Tx subdescriptors required when the
+ * extension subdescriptor is enabled.
+ */
+static __rte_always_inline int
+otx2_nix_tx_ext_subs(const uint16_t flags)
+{
+ return (flags & NIX_TX_OFFLOAD_TSTAMP_F) ? 2 :
+ ((flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) ? 1 : 0);
+}
+
+static __rte_always_inline void
+otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
+ const uint64_t ol_flags, const uint16_t no_segdw,
+ const uint16_t flags)
+{
+ if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
+ struct nix_send_mem_s *send_mem;
+ uint16_t off = (no_segdw - 1) << 1;
+
+ send_mem = (struct nix_send_mem_s *)(cmd + off);
+ if (flags & NIX_TX_MULTI_SEG_F)
+ /* Retrieving the default desc values */
+ cmd[off] = send_mem_desc[6];
+
+ /* For packets without PKT_TX_IEEE1588_TMST set, the Tx
+ * timestamp should not be written to the registered tstamp
+ * address; instead, a dummy address eight bytes ahead is
+ * updated.
+ */
+ send_mem->addr = (rte_iova_t)((uint64_t *)send_mem_desc[7] +
+ !(ol_flags & PKT_TX_IEEE1588_TMST));
+ }
+}
+
+static inline void
+otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
+{
+ struct nix_send_ext_s *send_hdr_ext;
+ struct nix_send_hdr_s *send_hdr;
+ uint64_t ol_flags = 0, mask;
+ union nix_send_hdr_w1_u w1;
+ union nix_send_sg_s *sg;
+
+ send_hdr = (struct nix_send_hdr_s *)cmd;
+ if (flags & NIX_TX_NEED_EXT_HDR) {
+ send_hdr_ext = (struct nix_send_ext_s *)(cmd + 2);
+ sg = (union nix_send_sg_s *)(cmd + 4);
+ /* Clear previous markings */
+ send_hdr_ext->w0.lso = 0;
+ send_hdr_ext->w1.u = 0;
+ } else {
+ sg = (union nix_send_sg_s *)(cmd + 2);
+ }
+
+ if (flags & NIX_TX_NEED_SEND_HDR_W1) {
+ ol_flags = m->ol_flags;
+ w1.u = 0;
+ }
+
+ if (!(flags & NIX_TX_MULTI_SEG_F)) {
+ send_hdr->w0.total = m->data_len;
+ send_hdr->w0.aura =
+ npa_lf_aura_handle_to_aura(m->pool->pool_id);
+ }
+
+ /*
+ * L3type: 2 => IPV4
+ * 3 => IPV4 with csum
+ * 4 => IPV6
+ * L3type and L3ptr needs to be set for either
+ * L3 csum or L4 csum or LSO
+ *
+ */
+
+ if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
+ (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
+ const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+ const uint8_t ol3type =
+ ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
+ ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
+ !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+
+ /* Outer L3 */
+ w1.ol3type = ol3type;
+ mask = 0xffffull << ((!!ol3type) << 4);
+ w1.ol3ptr = ~mask & m->outer_l2_len;
+ w1.ol4ptr = ~mask & (w1.ol3ptr + m->outer_l3_len);
+
+ /* Outer L4 */
+ w1.ol4type = csum + (csum << 1);
+
+ /* Inner L3 */
+ w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
+ ((!!(ol_flags & PKT_TX_IPV6)) << 2);
+ w1.il3ptr = w1.ol4ptr + m->l2_len;
+ w1.il4ptr = w1.il3ptr + m->l3_len;
+ /* Increment it by 1 if it is IPV4 as 3 is with csum */
+ w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+
+ /* Inner L4 */
+ w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+
+ /* When there is no tunnel header, shift the IL3/IL4
+ * fields so that OL3/OL4 carry the header checksum
+ * information.
+ */
+ mask = !ol3type;
+ w1.u = ((w1.u & 0xFFFFFFFF00000000) >> (mask << 3)) |
+ ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
+
+ } else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
+ const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+ const uint8_t outer_l2_len = m->outer_l2_len;
+
+ /* Outer L3 */
+ w1.ol3ptr = outer_l2_len;
+ w1.ol4ptr = outer_l2_len + m->outer_l3_len;
+ /* Increment it by 1 if it is IPV4 as 3 is with csum */
+ w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
+ ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
+ !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+
+ /* Outer L4 */
+ w1.ol4type = csum + (csum << 1);
+
+ } else if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) {
+ const uint8_t l2_len = m->l2_len;
+
+ /* Always use OLXPTR and OLXTYPE when only one header
+ * is present
+ */
+
+ /* Inner L3 */
+ w1.ol3ptr = l2_len;
+ w1.ol4ptr = l2_len + m->l3_len;
+ /* Increment it by 1 if it is IPV4 as 3 is with csum */
+ w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
+ ((!!(ol_flags & PKT_TX_IPV6)) << 2) +
+ !!(ol_flags & PKT_TX_IP_CKSUM);
+
+ /* Inner L4 */
+ w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+ }
+
+ if (flags & NIX_TX_NEED_EXT_HDR &&
+ flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
+ send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+ /* HW will update ptr after vlan0 update */
+ send_hdr_ext->w1.vlan1_ins_ptr = 12;
+ send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
+
+ send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+ /* 2B before end of l2 header */
+ send_hdr_ext->w1.vlan0_ins_ptr = 12;
+ send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
+ }
+
+ if (flags & NIX_TX_NEED_SEND_HDR_W1)
+ send_hdr->w1.u = w1.u;
+
+ if (!(flags & NIX_TX_MULTI_SEG_F)) {
+ sg->seg1_size = m->data_len;
+ *(rte_iova_t *)(++sg) = rte_mbuf_data_iova(m);
+
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+ /* Set don't free bit if reference count > 1 */
+ if (rte_pktmbuf_prefree_seg(m) == NULL)
+ send_hdr->w0.df = 1; /* SET DF */
+ }
+ /* Mark mempool object as "put" since it is freed by NIX */
+ if (!send_hdr->w0.df)
+ __mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+ }
+}
+
+
+static __rte_always_inline void
+otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr,
+ const rte_iova_t io_addr, const uint32_t flags)
+{
+ uint64_t lmt_status;
+
+ do {
+ otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags));
+ lmt_status = otx2_lmt_submit(io_addr);
+ } while (lmt_status == 0);
+}
+
+
+#define L3L4CSUM_F NIX_TX_OFFLOAD_L3_L4_CSUM_F
+#define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F
+#define VLAN_F NIX_TX_OFFLOAD_VLAN_QINQ_F
+#define NOFF_F NIX_TX_OFFLOAD_MBUF_NOFF_F
+#define TSP_F NIX_TX_OFFLOAD_TSTAMP_F
+
+/* [TSTMP] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
+#define NIX_TX_FASTPATH_MODES \
+T(no_offload, 0, 0, 0, 0, 0, 4, \
+ NIX_TX_OFFLOAD_NONE) \
+T(l3l4csum, 0, 0, 0, 0, 1, 4, \
+ L3L4CSUM_F) \
+T(ol3ol4csum, 0, 0, 0, 1, 0, 4, \
+ OL3OL4CSUM_F) \
+T(ol3ol4csum_l3l4csum, 0, 0, 0, 1, 1, 4, \
+ OL3OL4CSUM_F | L3L4CSUM_F) \
+T(vlan, 0, 0, 1, 0, 0, 6, \
+ VLAN_F) \
+T(vlan_l3l4csum, 0, 0, 1, 0, 1, 6, \
+ VLAN_F | L3L4CSUM_F) \
+T(vlan_ol3ol4csum, 0, 0, 1, 1, 0, 6, \
+ VLAN_F | OL3OL4CSUM_F) \
+T(vlan_ol3ol4csum_l3l4csum, 0, 0, 1, 1, 1, 6, \
+ VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(noff, 0, 1, 0, 0, 0, 4, \
+ NOFF_F) \
+T(noff_l3l4csum, 0, 1, 0, 0, 1, 4, \
+ NOFF_F | L3L4CSUM_F) \
+T(noff_ol3ol4csum, 0, 1, 0, 1, 0, 4, \
+ NOFF_F | OL3OL4CSUM_F) \
+T(noff_ol3ol4csum_l3l4csum, 0, 1, 0, 1, 1, 4, \
+ NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(noff_vlan, 0, 1, 1, 0, 0, 6, \
+ NOFF_F | VLAN_F) \
+T(noff_vlan_l3l4csum, 0, 1, 1, 0, 1, 6, \
+ NOFF_F | VLAN_F | L3L4CSUM_F) \
+T(noff_vlan_ol3ol4csum, 0, 1, 1, 1, 0, 6, \
+ NOFF_F | VLAN_F | OL3OL4CSUM_F) \
+T(noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 1, 6, \
+ NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(ts, 1, 0, 0, 0, 0, 8, \
+ TSP_F) \
+T(ts_l3l4csum, 1, 0, 0, 0, 1, 8, \
+ TSP_F | L3L4CSUM_F) \
+T(ts_ol3ol4csum, 1, 0, 0, 1, 0, 8, \
+ TSP_F | OL3OL4CSUM_F) \
+T(ts_ol3ol4csum_l3l4csum, 1, 0, 0, 1, 1, 8, \
+ TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(ts_vlan, 1, 0, 1, 0, 0, 8, \
+ TSP_F | VLAN_F) \
+T(ts_vlan_l3l4csum, 1, 0, 1, 0, 1, 8, \
+ TSP_F | VLAN_F | L3L4CSUM_F) \
+T(ts_vlan_ol3ol4csum, 1, 0, 1, 1, 0, 8, \
+ TSP_F | VLAN_F | OL3OL4CSUM_F) \
+T(ts_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 8, \
+ TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(ts_noff, 1, 1, 0, 0, 0, 8, \
+ TSP_F | NOFF_F) \
+T(ts_noff_l3l4csum, 1, 1, 0, 0, 1, 8, \
+ TSP_F | NOFF_F | L3L4CSUM_F) \
+T(ts_noff_ol3ol4csum, 1, 1, 0, 1, 0, 8, \
+ TSP_F | NOFF_F | OL3OL4CSUM_F) \
+T(ts_noff_ol3ol4csum_l3l4csum, 1, 1, 0, 1, 1, 8, \
+ TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(ts_noff_vlan, 1, 1, 1, 0, 0, 8, \
+ TSP_F | NOFF_F | VLAN_F) \
+T(ts_noff_vlan_l3l4csum, 1, 1, 1, 0, 1, 8, \
+ TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
+T(ts_noff_vlan_ol3ol4csum, 1, 1, 1, 1, 0, 8, \
+ TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
+T(ts_noff_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 1, 8, \
+ TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)
+
#endif /* __OTX2_TX_H__ */
--
2.21.0
* [dpdk-dev] [PATCH v2 52/57] net/octeontx2: add Tx multi segment version
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (50 preceding siblings ...)
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 51/57] net/octeontx2: add Tx burst support jerinj
@ 2019-06-30 18:06 ` jerinj
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 53/57] net/octeontx2: add Tx vector version jerinj
` (5 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:06 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Pavan Nikhilesh
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add the multi-segment version of the packet transmit function.
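For a multi-segment send, the descriptor size in dwords (segdw) is the
SG area rounded up to an even dword count plus the fixed header dwords.
A worked sketch of that rounding, assuming a four-segment packet with
the extension header present (values are illustrative):

    #include <stdio.h>

    int main(void)
    {
        unsigned int sg_dwords = 5; /* SG_S + 4 segment IOVAs consumed */
        unsigned int off = 2;       /* EXT header present */
        unsigned int tstamp = 0;    /* no SEND_MEM_S */

        /* Round the SG area up to an even number of dwords, then add
         * the HDR (1), EXT (off/2) and optional MEM dwords, as done
         * at the end of otx2_nix_prepare_mseg().
         */
        unsigned int segdw = (sg_dwords >> 1) + (sg_dwords & 0x1);
        segdw += (off >> 1) + 1 + !!tstamp;

        printf("segdw = %u\n", segdw); /* 3 + 2 + 0 = 5 */
        return 0;
    }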
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.h | 4 ++
drivers/net/octeontx2/otx2_tx.c | 58 +++++++++++++++++++++
drivers/net/octeontx2/otx2_tx.h | 81 +++++++++++++++++++++++++++++
3 files changed, 143 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 1f9323fe3..f39fdfa1f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -89,6 +89,10 @@
#define NIX_TX_NB_SEG_MAX 9
#endif
+#define NIX_TX_MSEG_SG_DWORDS \
+ ((RTE_ALIGN_MUL_CEIL(NIX_TX_NB_SEG_MAX, 3) / 3) \
+ + NIX_TX_NB_SEG_MAX)
+
/* Apply BP when CQ is 75% full */
#define NIX_CQ_BP_LEVEL (25 * 256 / 100)
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index 16d69b74f..0ac5ea652 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -49,6 +49,37 @@ nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
return pkts;
}
+static __rte_always_inline uint16_t
+nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t pkts, uint64_t *cmd, const uint16_t flags)
+{
+ struct otx2_eth_txq *txq = tx_queue; uint64_t i;
+ const rte_iova_t io_addr = txq->io_addr;
+ void *lmt_addr = txq->lmt_addr;
+ uint16_t segdw;
+
+ NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+ otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
+
+ /* Let's commit any changes in the packet */
+ rte_cio_wmb();
+
+ for (i = 0; i < pkts; i++) {
+ otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags);
+ segdw = otx2_nix_prepare_mseg(tx_pkts[i], cmd, flags);
+ otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
+ tx_pkts[i]->ol_flags, segdw,
+ flags);
+ otx2_nix_xmit_mseg_one(cmd, lmt_addr, io_addr, segdw);
+ }
+
+ /* Reduce the cached count */
+ txq->fc_cache_pkts -= pkts;
+
+ return pkts;
+}
+
#define T(name, f4, f3, f2, f1, f0, sz, flags) \
static uint16_t __rte_noinline __hot \
otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
@@ -62,6 +93,20 @@ otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
NIX_TX_FASTPATH_MODES
#undef T
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+static uint16_t __rte_noinline __hot \
+otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \
+ struct rte_mbuf **tx_pkts, uint16_t pkts) \
+{ \
+ uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
+ \
+ return nix_xmit_pkts_mseg(tx_queue, tx_pkts, pkts, cmd, \
+ (flags) | NIX_TX_MULTI_SEG_F); \
+}
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
static inline void
pick_tx_func(struct rte_eth_dev *eth_dev,
const eth_tx_burst_t tx_burst[2][2][2][2][2])
@@ -80,15 +125,28 @@ pick_tx_func(struct rte_eth_dev *eth_dev,
void
otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2] = {
#define T(name, f4, f3, f2, f1, f0, sz, flags) \
[f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name,
+NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_mseg_ ## name,
+
NIX_TX_FASTPATH_MODES
#undef T
};
pick_tx_func(eth_dev, nix_eth_tx_burst);
+ if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
+
rte_mb();
}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index db4c1f70f..b75a220ea 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -212,6 +212,87 @@ otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr,
} while (lmt_status == 0);
}
+static __rte_always_inline uint16_t
+otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
+{
+ struct nix_send_hdr_s *send_hdr;
+ union nix_send_sg_s *sg;
+ struct rte_mbuf *m_next;
+ uint64_t *slist, sg_u;
+ uint64_t nb_segs;
+ uint64_t segdw;
+ uint8_t off, i;
+
+ send_hdr = (struct nix_send_hdr_s *)cmd;
+ send_hdr->w0.total = m->pkt_len;
+ send_hdr->w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id);
+
+ if (flags & NIX_TX_NEED_EXT_HDR)
+ off = 2;
+ else
+ off = 0;
+
+ sg = (union nix_send_sg_s *)&cmd[2 + off];
+ sg_u = sg->u;
+ slist = &cmd[3 + off];
+
+ i = 0;
+ nb_segs = m->nb_segs;
+
+ /* Fill mbuf segments */
+ do {
+ m_next = m->next;
+ sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
+ *slist = rte_mbuf_data_iova(m);
+ /* Set invert df if reference count > 1 */
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
+ sg_u |=
+ ((uint64_t)(rte_pktmbuf_prefree_seg(m) == NULL) <<
+ (i + 55));
+ /* Mark mempool object as "put" since it is freed by NIX */
+ if (!(sg_u & (1ULL << (i + 55)))) {
+ m->next = NULL;
+ __mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+ }
+ slist++;
+ i++;
+ nb_segs--;
+ if (i > 2 && nb_segs) {
+ i = 0;
+ /* Next SG subdesc */
+ *(uint64_t *)slist = sg_u & 0xFC00000000000000;
+ sg->u = sg_u;
+ sg->segs = 3;
+ sg = (union nix_send_sg_s *)slist;
+ sg_u = sg->u;
+ slist++;
+ }
+ m = m_next;
+ } while (nb_segs);
+
+ sg->u = sg_u;
+ sg->segs = i;
+ segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
+ /* Round up extra dwords to a multiple of 2 */
+ segdw = (segdw >> 1) + (segdw & 0x1);
+ /* Default dwords */
+ segdw += (off >> 1) + 1 + !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
+ send_hdr->w0.sizem1 = segdw - 1;
+
+ return segdw;
+}
+
+static __rte_always_inline void
+otx2_nix_xmit_mseg_one(uint64_t *cmd, void *lmt_addr,
+ rte_iova_t io_addr, uint16_t segdw)
+{
+ uint64_t lmt_status;
+
+ do {
+ otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
+ lmt_status = otx2_lmt_submit(io_addr);
+ } while (lmt_status == 0);
+}
#define L3L4CSUM_F NIX_TX_OFFLOAD_L3_L4_CSUM_F
#define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F
--
2.21.0
* [dpdk-dev] [PATCH v2 53/57] net/octeontx2: add Tx vector version
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (51 preceding siblings ...)
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 52/57] net/octeontx2: add Tx multi segment version jerinj
@ 2019-06-30 18:06 ` jerinj
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 54/57] net/octeontx2: add device start operation jerinj
` (4 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:06 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Pavan Nikhilesh
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add the vector version of the packet transmit function.
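The vector Tx loop reaches into rte_mbuf fields by byte offset so that
four packets can be staged per iteration, and later recovers the mbuf
base pointer by subtracting the same offset. A minimal standalone
sketch of this offsetof round trip on a stand-in struct (demo_mbuf is
illustrative, not rte_mbuf):

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <inttypes.h>

    struct demo_mbuf {
        void *buf_addr;
        uint64_t buf_iova;
        uint64_t ol_flags;
    };

    int main(void)
    {
        struct demo_mbuf m = { .buf_iova = 0x1000 };

        /* Step to the buf_iova field by byte offset, as the vector
         * loop does for rte_mbuf ...
         */
        uint64_t *p = (uint64_t *)((uintptr_t)&m +
                       offsetof(struct demo_mbuf, buf_iova));
        /* ... and recover the mbuf base by subtracting it back */
        struct demo_mbuf *back = (struct demo_mbuf *)
            ((uintptr_t)p - offsetof(struct demo_mbuf, buf_iova));

        printf("iova=0x%" PRIx64 " same=%d\n", *p, back == &m);
        return 0;
    }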
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/net/octeontx2/otx2_tx.c | 883 +++++++++++++++++++++++++++++++-
1 file changed, 882 insertions(+), 1 deletion(-)
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index 0ac5ea652..6bce55112 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -80,6 +80,859 @@ nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
return pkts;
}
+#if defined(RTE_ARCH_ARM64)
+
+#define NIX_DESCS_PER_LOOP 4
+static __rte_always_inline uint16_t
+nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ uint64x2_t dataoff_iova0, dataoff_iova1, dataoff_iova2, dataoff_iova3;
+ uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;
+ uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+ uint64x2_t senddesc01_w0, senddesc23_w0;
+ uint64x2_t senddesc01_w1, senddesc23_w1;
+ uint64x2_t sgdesc01_w0, sgdesc23_w0;
+ uint64x2_t sgdesc01_w1, sgdesc23_w1;
+ struct otx2_eth_txq *txq = tx_queue;
+ uint64_t *lmt_addr = txq->lmt_addr;
+ rte_iova_t io_addr = txq->io_addr;
+ uint64x2_t ltypes01, ltypes23;
+ uint64x2_t xtmp128, ytmp128;
+ uint64x2_t xmask01, xmask23;
+ uint64x2_t mbuf01, mbuf23;
+ uint64x2_t cmd00, cmd01;
+ uint64x2_t cmd10, cmd11;
+ uint64x2_t cmd20, cmd21;
+ uint64x2_t cmd30, cmd31;
+ uint64_t lmt_status, i;
+
+ pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
+
+ NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+ /* Reduce the cached count */
+ txq->fc_cache_pkts -= pkts;
+
+ /* Let's commit any changes in the packet */
+ rte_cio_wmb();
+
+ senddesc01_w0 = vld1q_dup_u64(&txq->cmd[0]);
+ senddesc23_w0 = senddesc01_w0;
+ senddesc01_w1 = vdupq_n_u64(0);
+ senddesc23_w1 = senddesc01_w1;
+ sgdesc01_w0 = vld1q_dup_u64(&txq->cmd[2]);
+ sgdesc23_w0 = sgdesc01_w0;
+
+ for (i = 0; i < pkts; i += NIX_DESCS_PER_LOOP) {
+ mbuf01 = vld1q_u64((uint64_t *)tx_pkts);
+ mbuf23 = vld1q_u64((uint64_t *)(tx_pkts + 2));
+
+ /* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */
+ senddesc01_w0 = vbicq_u64(senddesc01_w0,
+ vdupq_n_u64(0xFFFFFFFF));
+ sgdesc01_w0 = vbicq_u64(sgdesc01_w0,
+ vdupq_n_u64(0xFFFFFFFF));
+
+ senddesc23_w0 = senddesc01_w0;
+ sgdesc23_w0 = sgdesc01_w0;
+
+ tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP;
+
+ /* Move mbufs to iova */
+ mbuf0 = (uint64_t *)vgetq_lane_u64(mbuf01, 0);
+ mbuf1 = (uint64_t *)vgetq_lane_u64(mbuf01, 1);
+ mbuf2 = (uint64_t *)vgetq_lane_u64(mbuf23, 0);
+ mbuf3 = (uint64_t *)vgetq_lane_u64(mbuf23, 1);
+
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mbuf, buf_iova));
+ /*
+ * Get each mbuf's ol_flags, iova, pktlen and dataoff:
+ * dataoff_iovaX.D[0] = iova,
+ * dataoff_iovaX.D[1](15:0) = mbuf->dataoff
+ * len_olflagsX.D[0] = ol_flags,
+ * len_olflagsX.D[1](63:32) = mbuf->pkt_len
+ */
+ dataoff_iova0 = vld1q_u64(mbuf0);
+ len_olflags0 = vld1q_u64(mbuf0 + 2);
+ dataoff_iova1 = vld1q_u64(mbuf1);
+ len_olflags1 = vld1q_u64(mbuf1 + 2);
+ dataoff_iova2 = vld1q_u64(mbuf2);
+ len_olflags2 = vld1q_u64(mbuf2 + 2);
+ dataoff_iova3 = vld1q_u64(mbuf3);
+ len_olflags3 = vld1q_u64(mbuf3 + 2);
+
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+ struct rte_mbuf *mbuf;
+ /* Set don't free bit if reference count > 1 */
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
+ offsetof(struct rte_mbuf, buf_iova));
+
+ if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+ vsetq_lane_u64(0x80000, xmask01, 0);
+ else
+ __mempool_check_cookies(mbuf->pool,
+ (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
+ offsetof(struct rte_mbuf, buf_iova));
+ if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+ vsetq_lane_u64(0x80000, xmask01, 1);
+ else
+ __mempool_check_cookies(mbuf->pool,
+ (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
+ offsetof(struct rte_mbuf, buf_iova));
+ if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+ vsetq_lane_u64(0x80000, xmask23, 0);
+ else
+ __mempool_check_cookies(mbuf->pool,
+ (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
+ offsetof(struct rte_mbuf, buf_iova));
+ if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+ vsetq_lane_u64(0x80000, xmask23, 1);
+ else
+ __mempool_check_cookies(mbuf->pool,
+ (void **)&mbuf,
+ 1, 0);
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+ } else {
+ struct rte_mbuf *mbuf;
+ /* Mark mempool object as "put" since
+ * it is freed by NIX
+ */
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
+ offsetof(struct rte_mbuf, buf_iova));
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
+ offsetof(struct rte_mbuf, buf_iova));
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
+ offsetof(struct rte_mbuf, buf_iova));
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
+ offsetof(struct rte_mbuf, buf_iova));
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+ 1, 0);
+ RTE_SET_USED(mbuf);
+ }
+
+ /* Move mbuf pointers to the pool field */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mbuf, pool) -
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mbuf, pool) -
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mbuf, pool) -
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mbuf, pool) -
+ offsetof(struct rte_mbuf, buf_iova));
+
+ if (flags &
+ (NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
+ NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
+ /* Get tx_offload for ol2, ol3, l2, l3 lengths */
+ /*
+ * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
+ * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
+ */
+
+ asm volatile ("LD1 {%[a].D}[0],[%[in]]\n\t" :
+ [a]"+w"(senddesc01_w1) :
+ [in]"r"(mbuf0 + 2) : "memory");
+
+ asm volatile ("LD1 {%[a].D}[1],[%[in]]\n\t" :
+ [a]"+w"(senddesc01_w1) :
+ [in]"r"(mbuf1 + 2) : "memory");
+
+ asm volatile ("LD1 {%[b].D}[0],[%[in]]\n\t" :
+ [b]"+w"(senddesc23_w1) :
+ [in]"r"(mbuf2 + 2) : "memory");
+
+ asm volatile ("LD1 {%[b].D}[1],[%[in]]\n\t" :
+ [b]"+w"(senddesc23_w1) :
+ [in]"r"(mbuf3 + 2) : "memory");
+
+ /* Get pool pointer alone */
+ mbuf0 = (uint64_t *)*mbuf0;
+ mbuf1 = (uint64_t *)*mbuf1;
+ mbuf2 = (uint64_t *)*mbuf2;
+ mbuf3 = (uint64_t *)*mbuf3;
+ } else {
+ /* Get pool pointer alone */
+ mbuf0 = (uint64_t *)*mbuf0;
+ mbuf1 = (uint64_t *)*mbuf1;
+ mbuf2 = (uint64_t *)*mbuf2;
+ mbuf3 = (uint64_t *)*mbuf3;
+ }
+
+ const uint8x16_t shuf_mask2 = {
+ 0x4, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xc, 0xd, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ };
+ xtmp128 = vzip2q_u64(len_olflags0, len_olflags1);
+ ytmp128 = vzip2q_u64(len_olflags2, len_olflags3);
+
+ /* Clear dataoff_iovaX.D[1] bits other than dataoff(15:0) */
+ const uint64x2_t and_mask0 = {
+ 0xFFFFFFFFFFFFFFFF,
+ 0x000000000000FFFF,
+ };
+
+ dataoff_iova0 = vandq_u64(dataoff_iova0, and_mask0);
+ dataoff_iova1 = vandq_u64(dataoff_iova1, and_mask0);
+ dataoff_iova2 = vandq_u64(dataoff_iova2, and_mask0);
+ dataoff_iova3 = vandq_u64(dataoff_iova3, and_mask0);
+
+ /*
+ * Pick only 16 bits of pktlen present at bits 63:32
+ * and place them at bits 15:0.
+ */
+ xtmp128 = vqtbl1q_u8(xtmp128, shuf_mask2);
+ ytmp128 = vqtbl1q_u8(ytmp128, shuf_mask2);
+
+ /* Add pairwise to get dataoff + iova in sgdesc_w1 */
+ sgdesc01_w1 = vpaddq_u64(dataoff_iova0, dataoff_iova1);
+ sgdesc23_w1 = vpaddq_u64(dataoff_iova2, dataoff_iova3);
+
+ /* Orr both sgdesc_w0 and senddesc_w0 with 16 bits of
+ * pktlen at 15:0 position.
+ */
+ sgdesc01_w0 = vorrq_u64(sgdesc01_w0, xtmp128);
+ sgdesc23_w0 = vorrq_u64(sgdesc23_w0, ytmp128);
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xtmp128);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, ytmp128);
+
+ if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
+ !(flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
+ /*
+ * Lookup table to translate ol_flags to
+ * il3/il4 types. But we still use ol3/ol4 types in
+ * senddesc_w1 as only one header processing is enabled.
+ */
+ const uint8x16_t tbl = {
+ /* [0-15] = il4type:il3type */
+ 0x04, /* none (IPv6 assumed) */
+ 0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
+ 0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
+ 0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
+ 0x03, /* PKT_TX_IP_CKSUM */
+ 0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
+ 0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
+ 0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
+ 0x02, /* PKT_TX_IPV4 */
+ 0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
+ 0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
+ 0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
+ 0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
+ 0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_TCP_CKSUM
+ */
+ 0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_SCTP_CKSUM
+ */
+ 0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_UDP_CKSUM
+ */
+ };
+
+ /* Extract olflags to translate to iltypes */
+ xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
+ ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
+
+ /*
+ * E(47):L3_LEN(9):L2_LEN(7+z)
+ * E(47):L3_LEN(9):L2_LEN(7+z)
+ */
+ senddesc01_w1 = vshlq_n_u64(senddesc01_w1, 1);
+ senddesc23_w1 = vshlq_n_u64(senddesc23_w1, 1);
+
+ /* Move OLFLAGS bits 55:52 to 51:48
+ * with zeros prepended on the byte; the rest
+ * are don't care
+ */
+ xtmp128 = vshrq_n_u8(xtmp128, 4);
+ ytmp128 = vshrq_n_u8(ytmp128, 4);
+ /*
+ * E(48):L3_LEN(8):L2_LEN(z+7)
+ * E(48):L3_LEN(8):L2_LEN(z+7)
+ */
+ const int8x16_t tshft3 = {
+ -1, 0, 8, 8, 8, 8, 8, 8,
+ -1, 0, 8, 8, 8, 8, 8, 8,
+ };
+
+ senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
+ senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
+
+ /* Do the lookup */
+ ltypes01 = vqtbl1q_u8(tbl, xtmp128);
+ ltypes23 = vqtbl1q_u8(tbl, ytmp128);
+
+ /* Just use ld1q to retrieve aura
+ * when we don't need tx_offload
+ */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mempool, pool_id));
+
+ /* Pick only the relevant fields, i.e. bits 48:55 of iltype,
+ * and place it in ol3/ol4type of senddesc_w1
+ */
+ const uint8x16_t shuf_mask0 = {
+ 0xFF, 0xFF, 0xFF, 0xFF, 0x6, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xE, 0xFF, 0xFF, 0xFF,
+ };
+
+ ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
+ ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
+
+ /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
+ * a [E(32):E(16):OL3(8):OL2(8)]
+ * a = a + (a << 8)
+ * a [E(32):E(16):(OL3+OL2):OL2]
+ * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
+ */
+ senddesc01_w1 = vaddq_u8(senddesc01_w1,
+ vshlq_n_u16(senddesc01_w1, 8));
+ senddesc23_w1 = vaddq_u8(senddesc23_w1,
+ vshlq_n_u16(senddesc23_w1, 8));
+
+ /* Create first half of 4W cmd for 4 mbufs (sgdesc) */
+ cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
+ cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
+
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+ asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
+
+ asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
+ xmask01 = vshlq_n_u64(xmask01, 20);
+ xmask23 = vshlq_n_u64(xmask23, 20);
+
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+ /* Move ltypes to senddesc*_w1 */
+ senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
+ senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
+
+ /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
+ cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
+ cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
+ cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
+ cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
+
+ } else if (!(flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
+ (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
+ /*
+ * Lookup table to translate ol_flags to
+ * ol3/ol4 types.
+ */
+
+ const uint8x16_t tbl = {
+ /* [0-15] = ol4type:ol3type */
+ 0x00, /* none */
+ 0x03, /* OUTER_IP_CKSUM */
+ 0x02, /* OUTER_IPV4 */
+ 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
+ 0x04, /* OUTER_IPV6 */
+ 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
+ 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
+ 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
+ * OUTER_IP_CKSUM
+ */
+ 0x00, /* OUTER_UDP_CKSUM */
+ 0x33, /* OUTER_UDP_CKSUM | OUTER_IP_CKSUM */
+ 0x32, /* OUTER_UDP_CKSUM | OUTER_IPV4 */
+ 0x33, /* OUTER_UDP_CKSUM | OUTER_IPV4 |
+ * OUTER_IP_CKSUM
+ */
+ 0x34, /* OUTER_UDP_CKSUM | OUTER_IPV6 */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IP_CKSUM
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IPV4
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IPV4 | OUTER_IP_CKSUM
+ */
+ };
+
+ /* Extract olflags to translate to oltypes */
+ xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
+ ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
+
+ /*
+ * E(47):OL3_LEN(9):OL2_LEN(7+z)
+ * E(47):OL3_LEN(9):OL2_LEN(7+z)
+ */
+ const uint8x16_t shuf_mask5 = {
+ 0x6, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xE, 0xD, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ };
+ senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
+ senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
+
+ /* Extract outer ol flags only */
+ const uint64x2_t o_cksum_mask = {
+ 0x1C00020000000000,
+ 0x1C00020000000000,
+ };
+
+ xtmp128 = vandq_u64(xtmp128, o_cksum_mask);
+ ytmp128 = vandq_u64(ytmp128, o_cksum_mask);
+
+ /* Extract OUTER_UDP_CKSUM bit 41 and
+ * move it to bit 61
+ */
+
+ xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
+ ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
+
+ /* Shift oltype by 2 to start nibble from BIT(56)
+ * instead of BIT(58)
+ */
+ xtmp128 = vshrq_n_u8(xtmp128, 2);
+ ytmp128 = vshrq_n_u8(ytmp128, 2);
+ /*
+ * E(48):L3_LEN(8):L2_LEN(z+7)
+ * E(48):L3_LEN(8):L2_LEN(z+7)
+ */
+ const int8x16_t tshft3 = {
+ -1, 0, 8, 8, 8, 8, 8, 8,
+ -1, 0, 8, 8, 8, 8, 8, 8,
+ };
+
+ senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
+ senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
+
+ /* Do the lookup */
+ ltypes01 = vqtbl1q_u8(tbl, xtmp128);
+ ltypes23 = vqtbl1q_u8(tbl, ytmp128);
+
+ /* Just use ld1q to retrieve aura
+ * when we don't need tx_offload
+ */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mempool, pool_id));
+
+ /* Pick only the relevant fields, i.e. bits 56:63 of oltype,
+ * and place it in ol3/ol4type of senddesc_w1
+ */
+ const uint8x16_t shuf_mask0 = {
+ 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xFF, 0xFF, 0xFF,
+ };
+
+ ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
+ ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
+
+ /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
+ * a [E(32):E(16):OL3(8):OL2(8)]
+ * a = a + (a << 8)
+ * a [E(32):E(16):(OL3+OL2):OL2]
+ * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
+ */
+ senddesc01_w1 = vaddq_u8(senddesc01_w1,
+ vshlq_n_u16(senddesc01_w1, 8));
+ senddesc23_w1 = vaddq_u8(senddesc23_w1,
+ vshlq_n_u16(senddesc23_w1, 8));
+
+ /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
+ cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
+ cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
+
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+ asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
+
+ asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
+ xmask01 = vshlq_n_u64(xmask01, 20);
+ xmask23 = vshlq_n_u64(xmask23, 20);
+
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+ /* Move ltypes to senddesc*_w1 */
+ senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
+ senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
+
+ /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
+ cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
+ cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
+ cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
+ cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
+
+ } else if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
+ (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
+ /* Lookup table to translate ol_flags to
+ * ol4type, ol3type, il4type, il3type of senddesc_w1
+ */
+ const uint8x16x2_t tbl = {
+ {
+ {
+ /* [0-15] = il4type:il3type */
+ 0x04, /* none (IPv6) */
+ 0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
+ 0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
+ 0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
+ 0x03, /* PKT_TX_IP_CKSUM */
+ 0x13, /* PKT_TX_IP_CKSUM |
+ * PKT_TX_TCP_CKSUM
+ */
+ 0x23, /* PKT_TX_IP_CKSUM |
+ * PKT_TX_SCTP_CKSUM
+ */
+ 0x33, /* PKT_TX_IP_CKSUM |
+ * PKT_TX_UDP_CKSUM
+ */
+ 0x02, /* PKT_TX_IPV4 */
+ 0x12, /* PKT_TX_IPV4 |
+ * PKT_TX_TCP_CKSUM
+ */
+ 0x22, /* PKT_TX_IPV4 |
+ * PKT_TX_SCTP_CKSUM
+ */
+ 0x32, /* PKT_TX_IPV4 |
+ * PKT_TX_UDP_CKSUM
+ */
+ 0x03, /* PKT_TX_IPV4 |
+ * PKT_TX_IP_CKSUM
+ */
+ 0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_TCP_CKSUM
+ */
+ 0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_SCTP_CKSUM
+ */
+ 0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_UDP_CKSUM
+ */
+ },
+
+ {
+ /* [16-31] = ol4type:ol3type */
+ 0x00, /* none */
+ 0x03, /* OUTER_IP_CKSUM */
+ 0x02, /* OUTER_IPV4 */
+ 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
+ 0x04, /* OUTER_IPV6 */
+ 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
+ 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
+ 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
+ * OUTER_IP_CKSUM
+ */
+ 0x00, /* OUTER_UDP_CKSUM */
+ 0x33, /* OUTER_UDP_CKSUM |
+ * OUTER_IP_CKSUM
+ */
+ 0x32, /* OUTER_UDP_CKSUM |
+ * OUTER_IPV4
+ */
+ 0x33, /* OUTER_UDP_CKSUM |
+ * OUTER_IPV4 | OUTER_IP_CKSUM
+ */
+ 0x34, /* OUTER_UDP_CKSUM |
+ * OUTER_IPV6
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IP_CKSUM
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IPV4
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IPV4 | OUTER_IP_CKSUM
+ */
+ },
+ }
+ };
+
+ /* Extract olflags to translate to oltype & iltype */
+ xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
+ ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
+
+ /*
+ * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
+ * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
+ */
+ const uint32x4_t tshft_4 = {
+ 1, 0,
+ 1, 0,
+ };
+ senddesc01_w1 = vshlq_u32(senddesc01_w1, tshft_4);
+ senddesc23_w1 = vshlq_u32(senddesc23_w1, tshft_4);
+
+ /*
+ * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
+ * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
+ */
+ const uint8x16_t shuf_mask5 = {
+ 0x6, 0x5, 0x0, 0x1, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xE, 0xD, 0x8, 0x9, 0xFF, 0xFF, 0xFF, 0xFF,
+ };
+ senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
+ senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
+
+ /* Extract outer and inner header ol_flags */
+ const uint64x2_t oi_cksum_mask = {
+ 0x1CF0020000000000,
+ 0x1CF0020000000000,
+ };
+
+ xtmp128 = vandq_u64(xtmp128, oi_cksum_mask);
+ ytmp128 = vandq_u64(ytmp128, oi_cksum_mask);
+
+ /* Extract OUTER_UDP_CKSUM bit 41 and
+ * move it to bit 61
+ */
+
+ xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
+ ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
+
+ /* Shift right oltype by 2 and iltype by 4
+ * to start oltype nibble from BIT(58)
+ * instead of BIT(56) and iltype nibble from BIT(48)
+ * instead of BIT(52).
+ */
+ const int8x16_t tshft5 = {
+ 8, 8, 8, 8, 8, 8, -4, -2,
+ 8, 8, 8, 8, 8, 8, -4, -2,
+ };
+
+ xtmp128 = vshlq_u8(xtmp128, tshft5);
+ ytmp128 = vshlq_u8(ytmp128, tshft5);
+ /*
+ * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
+ * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
+ */
+ const int8x16_t tshft3 = {
+ -1, 0, -1, 0, 0, 0, 0, 0,
+ -1, 0, -1, 0, 0, 0, 0, 0,
+ };
+
+ senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
+ senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
+
+ /* Mark Bit(4) of oltype */
+ const uint64x2_t oi_cksum_mask2 = {
+ 0x1000000000000000,
+ 0x1000000000000000,
+ };
+
+ xtmp128 = vorrq_u64(xtmp128, oi_cksum_mask2);
+ ytmp128 = vorrq_u64(ytmp128, oi_cksum_mask2);
+
+ /* Do the lookup */
+ ltypes01 = vqtbl2q_u8(tbl, xtmp128);
+ ltypes23 = vqtbl2q_u8(tbl, ytmp128);
+
+ /* Just use ld1q to retrieve aura
+ * when we don't need tx_offload
+ */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mempool, pool_id));
+
+ /* Pick only the relevant fields, i.e. bits 48:55 of iltype and
+ * Bit 56:63 of oltype and place it in corresponding
+ * place in senddesc_w1.
+ */
+ const uint8x16_t shuf_mask0 = {
+ 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0x6, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xE, 0xFF, 0xFF,
+ };
+
+ ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
+ ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
+
+ /* Prepare l4ptr, l3ptr, ol4ptr, ol3ptr from
+ * l3len, l2len, ol3len, ol2len.
+ * a [E(32):L3(8):L2(8):OL3(8):OL2(8)]
+ * a = a + (a << 8)
+ * a [E:(L3+L2):(L2+OL3):(OL3+OL2):OL2]
+ * a = a + (a << 16)
+ * a [E:(L3+L2+OL3+OL2):(L2+OL3+OL2):(OL3+OL2):OL2]
+ * => E(32):IL4PTR(8):IL3PTR(8):OL4PTR(8):OL3PTR(8)
+ */
+ senddesc01_w1 = vaddq_u8(senddesc01_w1,
+ vshlq_n_u32(senddesc01_w1, 8));
+ senddesc23_w1 = vaddq_u8(senddesc23_w1,
+ vshlq_n_u32(senddesc23_w1, 8));
+
+ /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
+ cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
+ cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
+
+ /* Continue preparing l4ptr, l3ptr, ol4ptr, ol3ptr */
+ senddesc01_w1 = vaddq_u8(senddesc01_w1,
+ vshlq_n_u32(senddesc01_w1, 16));
+ senddesc23_w1 = vaddq_u8(senddesc23_w1,
+ vshlq_n_u32(senddesc23_w1, 16));
+
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+ asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
+
+ asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
+ xmask01 = vshlq_n_u64(xmask01, 20);
+ xmask23 = vshlq_n_u64(xmask23, 20);
+
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+ /* Move ltypes to senddesc*_w1 */
+ senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
+ senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
+
+ /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
+ cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
+ cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
+ cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
+ cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
+ } else {
+ /* Just use ld1q to retrieve aura
+ * when we don't need tx_offload
+ */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mempool, pool_id));
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+ asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
+
+ asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
+ xmask01 = vshlq_n_u64(xmask01, 20);
+ xmask23 = vshlq_n_u64(xmask23, 20);
+
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+
+ /* Create 4W cmd for 4 mbufs (sendhdr, sgdesc) */
+ cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
+ cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
+ cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
+ cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
+ cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
+ cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
+ }
+
+ do {
+ vst1q_u64(lmt_addr, cmd00);
+ vst1q_u64(lmt_addr + 2, cmd01);
+ vst1q_u64(lmt_addr + 4, cmd10);
+ vst1q_u64(lmt_addr + 6, cmd11);
+ vst1q_u64(lmt_addr + 8, cmd20);
+ vst1q_u64(lmt_addr + 10, cmd21);
+ vst1q_u64(lmt_addr + 12, cmd30);
+ vst1q_u64(lmt_addr + 14, cmd31);
+ lmt_status = otx2_lmt_submit(io_addr);
+
+ } while (lmt_status == 0);
+ }
+
+ return pkts;
+}
+
+#else
+static __rte_always_inline uint16_t
+nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ RTE_SET_USED(tx_queue);
+ RTE_SET_USED(tx_pkts);
+ RTE_SET_USED(pkts);
+ RTE_SET_USED(flags);
+ return 0;
+}
+#endif
+
#define T(name, f4, f3, f2, f1, f0, sz, flags) \
static uint16_t __rte_noinline __hot \
otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
@@ -107,6 +960,21 @@ otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \
NIX_TX_FASTPATH_MODES
#undef T
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+static uint16_t __rte_noinline __hot \
+otx2_nix_xmit_pkts_vec_ ## name(void *tx_queue, \
+ struct rte_mbuf **tx_pkts, uint16_t pkts) \
+{ \
+ /* VLAN and TSTAMP are not supported by vec */ \
+ if ((flags) & NIX_TX_OFFLOAD_VLAN_QINQ_F || \
+ (flags) & NIX_TX_OFFLOAD_TSTAMP_F) \
+ return 0; \
+ return nix_xmit_pkts_vector(tx_queue, tx_pkts, pkts, (flags)); \
+}
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
static inline void
pick_tx_func(struct rte_eth_dev *eth_dev,
const eth_tx_burst_t tx_burst[2][2][2][2][2])
@@ -143,7 +1011,20 @@ NIX_TX_FASTPATH_MODES
#undef T
};
- pick_tx_func(eth_dev, nix_eth_tx_burst);
+ const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_vec_ ## name,
+
+NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ if (dev->scalar_ena ||
+ (dev->tx_offload_flags &
+ (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F)))
+ pick_tx_func(eth_dev, nix_eth_tx_burst);
+ else
+ pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
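The il4type:il3type translation in the vector path above is done with vqtbl1q_u8(), using bits 55:52 of ol_flags as the per-packet table index. A scalar sketch of the same translation, assuming the mbuf flag bit positions asserted by the RTE_BUILD_BUG_ON checks later in this series (the helper name is hypothetical):

#include <stdint.h>

/* Scalar equivalent of the NEON table lookup: bits 55:52 of ol_flags
 * (L4 csum type in 53:52, PKT_TX_IP_CKSUM in 54, PKT_TX_IPV4 in 55)
 * index a 16-entry table yielding the NIX il4type:il3type nibble pair. */
static inline uint8_t
nix_iltype_from_olflags(uint64_t ol_flags)
{
	static const uint8_t tbl[16] = {
		0x04, 0x14, 0x24, 0x34, /* no IP csum: IPv6 assumed + L4 */
		0x03, 0x13, 0x23, 0x33, /* PKT_TX_IP_CKSUM + L4 variants */
		0x02, 0x12, 0x22, 0x32, /* PKT_TX_IPV4 + L4 variants */
		0x03, 0x13, 0x23, 0x33, /* IPV4 | IP_CKSUM + L4 variants */
	};

	return tbl[(ol_flags >> 52) & 0xF];
}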
* [dpdk-dev] [PATCH v2 54/57] net/octeontx2: add device start operation
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (52 preceding siblings ...)
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 53/57] net/octeontx2: add Tx vector version jerinj
@ 2019-06-30 18:06 ` jerinj
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 55/57] net/octeontx2: add device stop and close operations jerinj
` (3 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:06 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
Cc: Vamsi Attunuru
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add device start operation and update the correct
function pointers for Rx and Tx burst functions.
This patch also updates the octeontx2 NIC specific
documentation.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
doc/guides/nics/octeontx2.rst | 91 ++++++++++++
drivers/net/octeontx2/otx2_ethdev.c | 180 ++++++++++++++++++++++++
drivers/net/octeontx2/otx2_flow.c | 4 +-
drivers/net/octeontx2/otx2_flow_parse.c | 4 +-
drivers/net/octeontx2/otx2_ptp.c | 8 ++
drivers/net/octeontx2/otx2_vlan.c | 1 +
6 files changed, 286 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 90ca4e2d2..d4a458262 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -34,6 +34,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Vector Poll mode driver
- Debug utilities - Context dump and error interrupt support
- IEEE1588 timestamping
+- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
Prerequisites
-------------
@@ -49,6 +50,63 @@ The following options may be modified in the ``config`` file.
Toggle compilation of the ``librte_pmd_octeontx2`` driver.
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+To compile the OCTEON TX2 PMD for Linux arm64 gcc,
+use ``arm64-octeontx2-linux-gcc`` as the target.
+
+#. Running testpmd:
+
+ Follow instructions available in the document
+ :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+ to run testpmd.
+
+ Example output:
+
+ .. code-block:: console
+
+ ./build/app/testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+ EAL: Detected 24 lcore(s)
+ EAL: Detected 1 NUMA nodes
+ EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
+ EAL: No available hugepages reported in hugepages-2048kB
+ EAL: Probing VFIO support...
+ EAL: VFIO support initialized
+ EAL: PCI device 0002:02:00.0 on NUMA socket 0
+ EAL: probe driver: 177d:a063 net_octeontx2
+ EAL: using IOMMU type 1 (Type 1)
+ testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
+ testpmd: preferred mempool ops selected: octeontx2_npa
+ Configuring Port 0 (socket 0)
+ PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex
+
+ Port 0: link state change event
+ Port 0: 36:10:66:88:7A:57
+ Checking link statuses...
+ Done
+ No commandline core given, start packet forwarding
+ io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
+ Logical Core 9 (socket 0) forwards packets on 1 streams:
+ RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
+
+ io packet forwarding packets/burst=32
+ nb forwarding cores=1 - nb forwarding ports=1
+ port 0: RX queue number: 1 Tx queue number: 1
+ Rx offloads=0x0 Tx offloads=0x10000
+ RX queue: 0
+ RX desc=512 - RX free threshold=0
+ RX threshold registers: pthresh=0 hthresh=0 wthresh=0
+ RX Offloads=0x0
+ TX queue: 0
+ TX desc=512 - TX free threshold=0
+ TX threshold registers: pthresh=0 hthresh=0 wthresh=0
+ TX offloads=0x10000 - TX RS bit threshold=0
+ Press enter to exit
+
Runtime Config Options
----------------------
@@ -116,6 +174,39 @@ Runtime Config Options
parameters to all the PCIe devices if application requires to configure on
all the ethdev ports.
+Limitations
+-----------
+
+``mempool_octeontx2`` external mempool handler dependency
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The OCTEON TX2 SoC family NIC has an inbuilt HW assisted external mempool manager.
+The ``net_octeontx2`` PMD only works with the ``mempool_octeontx2`` mempool handler,
+as it is the most effective way, performance wise, to do packet allocation and Tx
+buffer recycling on the OCTEON TX2 SoC platform.
+
+CRC striping
+~~~~~~~~~~~~
+
+The OCTEON TX2 SoC family NICs strip the CRC for every packet being received by
+the host interface irrespective of the offload configuration.
+
+
+Debugging Options
+-----------------
+
+.. _table_octeontx2_ethdev_debug_options:
+
+.. table:: OCTEON TX2 ethdev debug options
+
+ +---+------------+-------------------------------------------------------+
+ | # | Component | EAL log command |
+ +===+============+=======================================================+
+ | 1 | NIX | --log-level='pmd\.net.octeontx2,8' |
+ +---+------------+-------------------------------------------------------+
+ | 2 | NPC | --log-level='pmd\.net.octeontx2\.flow,8' |
+ +---+------------+-------------------------------------------------------+
+
RTE Flow Support
----------------
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 44753cbf5..7f33f8808 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -135,6 +135,55 @@ otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+static int
+npc_rx_enable(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ otx2_mbox_alloc_msg_nix_lf_start_rx(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
+static int
+npc_rx_disable(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ otx2_mbox_alloc_msg_nix_lf_stop_rx(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
+static int
+nix_cgx_start_link_event(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_start_linkevents(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
+static int
+cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ if (en)
+ otx2_mbox_alloc_msg_cgx_intlbk_enable(mbox);
+ else
+ otx2_mbox_alloc_msg_cgx_intlbk_disable(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
static inline void
nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
{
@@ -478,6 +527,74 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
return NIX_MAXSQESZ_W8;
}
+static uint16_t
+nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rte_eth_conf *conf = &data->dev_conf;
+ struct rte_eth_rxmode *rxmode = &conf->rxmode;
+ uint16_t flags = 0;
+
+ if (rxmode->mq_mode == ETH_MQ_RX_RSS)
+ flags |= NIX_RX_OFFLOAD_RSS_F;
+
+ if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM))
+ flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
+
+ if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
+
+ if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ flags |= NIX_RX_MULTI_SEG_F;
+
+ if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP))
+ flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
+
+ if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ flags |= NIX_RX_OFFLOAD_TSTAMP_F;
+
+ return flags;
+}
+
+static uint16_t
+nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t conf = dev->tx_offloads;
+ uint16_t flags = 0;
+
+ /* Fastpath is dependent on these enums */
+ RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
+ RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
+ RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
+
+ if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
+ conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
+
+ if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
+
+ if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
+ conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
+ conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
+
+ if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
+
+ if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ flags |= NIX_TX_MULTI_SEG_F;
+
+ return flags;
+}
+
static int
nix_sq_init(struct otx2_eth_txq *txq)
{
@@ -1092,6 +1209,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
dev->rx_offloads = rxmode->offloads;
dev->tx_offloads = txmode->offloads;
+ dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev);
+ dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev);
dev->rss_info.rss_grps = NIX_RSS_GRPS;
nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
@@ -1131,6 +1250,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /* Configure loop back mode */
+ rc = cgx_intlbk_enable(dev, eth_dev->data->dev_conf.lpbk_mode);
+ if (rc) {
+ otx2_err("Failed to configure cgx loop back mode rc=%d", rc);
+ goto free_nix_lf;
+ }
+
rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true);
if (rc) {
otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc);
@@ -1280,6 +1406,59 @@ otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
return rc;
}
+static int
+otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc, i;
+
+ /* Start rx queues */
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rc = otx2_nix_rx_queue_start(eth_dev, i);
+ if (rc)
+ return rc;
+ }
+
+ /* Start tx queues */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ rc = otx2_nix_tx_queue_start(eth_dev, i);
+ if (rc)
+ return rc;
+ }
+
+ rc = otx2_nix_update_flow_ctrl_mode(eth_dev);
+ if (rc) {
+ otx2_err("Failed to update flow ctrl mode %d", rc);
+ return rc;
+ }
+
+ rc = npc_rx_enable(dev);
+ if (rc) {
+ otx2_err("Failed to enable NPC rx %d", rc);
+ return rc;
+ }
+
+ otx2_nix_toggle_flag_link_cfg(dev, true);
+
+ rc = nix_cgx_start_link_event(dev);
+ if (rc) {
+ otx2_err("Failed to start cgx link event %d", rc);
+ goto rx_disable;
+ }
+
+ otx2_nix_toggle_flag_link_cfg(dev, false);
+ otx2_eth_set_tx_function(eth_dev);
+ otx2_eth_set_rx_function(eth_dev);
+
+ return 0;
+
+rx_disable:
+ npc_rx_disable(dev);
+ otx2_nix_toggle_flag_link_cfg(dev, false);
+ return rc;
+}
+
+
/* Initialize and register driver with DPDK Application */
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
@@ -1289,6 +1468,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.tx_queue_release = otx2_nix_tx_queue_release,
.rx_queue_setup = otx2_nix_rx_queue_setup,
.rx_queue_release = otx2_nix_rx_queue_release,
+ .dev_start = otx2_nix_dev_start,
.tx_queue_start = otx2_nix_tx_queue_start,
.tx_queue_stop = otx2_nix_tx_queue_stop,
.rx_queue_start = otx2_nix_rx_queue_start,
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 3ddecfb23..982100df4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -528,8 +528,10 @@ otx2_flow_destroy(struct rte_eth_dev *dev,
return -EINVAL;
/* Clear mark offload flag if there are no more mark actions */
- if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0)
+ if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) {
hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F;
+ otx2_eth_set_rx_function(dev);
+ }
}
rc = flow_free_rss_action(dev, flow);
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 4cf5ce17e..375d00620 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -926,9 +926,11 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
if (mark)
flow->npc_action |= (uint64_t)mark << 40;
- if (rte_atomic32_read(&npc->mark_actions) == 1)
+ if (rte_atomic32_read(&npc->mark_actions) == 1) {
hw->rx_offload_flags |=
NIX_RX_OFFLOAD_MARK_UPDATE_F;
+ otx2_eth_set_rx_function(dev);
+ }
set_pf_func:
/* Ideally AF must ensure that correct pf_func is set */
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 5291da241..0186c629a 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -118,6 +118,10 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
otx2_nix_form_default_desc(txq);
}
+
+ /* Setting up the function pointers as per new offload flags */
+ otx2_eth_set_rx_function(eth_dev);
+ otx2_eth_set_tx_function(eth_dev);
}
return rc;
}
@@ -147,6 +151,10 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
otx2_nix_form_default_desc(txq);
}
+
+ /* Setting up the function pointers as per new offload flags */
+ otx2_eth_set_rx_function(eth_dev);
+ otx2_eth_set_tx_function(eth_dev);
}
return rc;
}
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index dc0f4e032..189c45174 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -760,6 +760,7 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
DEV_RX_OFFLOAD_QINQ_STRIP)) {
dev->rx_offloads |= offloads;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
+ otx2_eth_set_rx_function(eth_dev);
}
done:
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
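For context, a minimal application-side sketch using the standard ethdev API (not part of the patch): rte_eth_dev_start() is what ultimately invokes the otx2_nix_dev_start() op added above, after the usual configure and queue setup steps.

#include <rte_ethdev.h>

static int
port_init(uint16_t port, struct rte_mempool *pool)
{
	static const struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_NONE },
	};
	int rc;

	rc = rte_eth_dev_configure(port, 1, 1, &conf);
	if (rc < 0)
		return rc;
	rc = rte_eth_rx_queue_setup(port, 0, 512,
				    rte_eth_dev_socket_id(port), NULL, pool);
	if (rc < 0)
		return rc;
	rc = rte_eth_tx_queue_setup(port, 0, 512,
				    rte_eth_dev_socket_id(port), NULL);
	if (rc < 0)
		return rc;

	/* dev_start: starts queues, enables NPC Rx, picks burst functions */
	return rte_eth_dev_start(port);
}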
* [dpdk-dev] [PATCH v2 55/57] net/octeontx2: add device stop and close operations
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (53 preceding siblings ...)
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 54/57] net/octeontx2: add device start operation jerinj
@ 2019-06-30 18:06 ` jerinj
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 56/57] net/octeontx2: add MTU set operation jerinj
` (2 subsequent siblings)
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:06 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vamsi Attunuru
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add device stop, close and reset operations.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 75 +++++++++++++++++++++++++++++
1 file changed, 75 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 7f33f8808..e23bed603 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -184,6 +184,19 @@ cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en)
return otx2_mbox_process(mbox);
}
+static int
+nix_cgx_stop_link_event(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_stop_linkevents(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
static inline void
nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
{
@@ -1189,6 +1202,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
if (dev->configured == 1) {
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
otx2_nix_vlan_fini(eth_dev);
+ otx2_flow_free_all_resources(dev);
oxt2_nix_unregister_queue_irqs(eth_dev);
nix_set_nop_rxtx_function(eth_dev);
rc = nix_store_queue_cfg_and_then_release(eth_dev);
@@ -1406,6 +1420,37 @@ otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
return rc;
}
+static void
+otx2_nix_dev_stop(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_mbuf *rx_pkts[32];
+ struct otx2_eth_rxq *rxq;
+ int count, i, j, rc;
+
+ nix_cgx_stop_link_event(dev);
+ npc_rx_disable(dev);
+
+ /* Stop rx queues and free up pkts pending */
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rc = otx2_nix_rx_queue_stop(eth_dev, i);
+ if (rc)
+ continue;
+
+ rxq = eth_dev->data->rx_queues[i];
+ count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
+ while (count) {
+ for (j = 0; j < count; j++)
+ rte_pktmbuf_free(rx_pkts[j]);
+ count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
+ }
+ }
+
+ /* Stop tx queues */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+ otx2_nix_tx_queue_stop(eth_dev, i);
+}
+
static int
otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
{
@@ -1458,6 +1503,8 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
return rc;
}
+static int otx2_nix_dev_reset(struct rte_eth_dev *eth_dev);
+static void otx2_nix_dev_close(struct rte_eth_dev *eth_dev);
/* Initialize and register driver with DPDK Application */
static const struct eth_dev_ops otx2_eth_dev_ops = {
@@ -1469,11 +1516,14 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.rx_queue_setup = otx2_nix_rx_queue_setup,
.rx_queue_release = otx2_nix_rx_queue_release,
.dev_start = otx2_nix_dev_start,
+ .dev_stop = otx2_nix_dev_stop,
+ .dev_close = otx2_nix_dev_close,
.tx_queue_start = otx2_nix_tx_queue_start,
.tx_queue_stop = otx2_nix_tx_queue_stop,
.rx_queue_start = otx2_nix_rx_queue_start,
.rx_queue_stop = otx2_nix_rx_queue_stop,
.dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
+ .dev_reset = otx2_nix_dev_reset,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
@@ -1725,9 +1775,14 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Clear the flag since we are closing down */
+ dev->configured = 0;
+
/* Disable nix bpid config */
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ npc_rx_disable(dev);
+
/* Disable vlan offloads */
otx2_nix_vlan_fini(eth_dev);
@@ -1738,6 +1793,8 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
if (otx2_ethdev_is_ptp_en(dev))
otx2_nix_timesync_disable(eth_dev);
+ nix_cgx_stop_link_event(dev);
+
/* Free up SQs */
for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
@@ -1793,6 +1850,24 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
return 0;
}
+static void
+otx2_nix_dev_close(struct rte_eth_dev *eth_dev)
+{
+ otx2_eth_dev_uninit(eth_dev, true);
+}
+
+static int
+otx2_nix_dev_reset(struct rte_eth_dev *eth_dev)
+{
+ int rc;
+
+ rc = otx2_eth_dev_uninit(eth_dev, false);
+ if (rc)
+ return rc;
+
+ return otx2_eth_dev_init(eth_dev);
+}
+
static int
nix_remove(struct rte_pci_device *pci_dev)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
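A brief application-side teardown sketch (standard ethdev calls of this DPDK era, where stop/close return void; not from the patch): these map onto the dev_stop, dev_close and dev_reset ops wired up above.

#include <rte_ethdev.h>

static void
port_teardown(uint16_t port)
{
	rte_eth_dev_stop(port);  /* drains pending Rx mbufs, stops queues */
	rte_eth_dev_close(port); /* full uninit, including mbox close */
}

static int
port_recover(uint16_t port)
{
	/* dev_reset is wired to uninit followed by re-init in the patch */
	return rte_eth_dev_reset(port);
}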
* [dpdk-dev] [PATCH v2 56/57] net/octeontx2: add MTU set operation
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (54 preceding siblings ...)
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 55/57] net/octeontx2: add device stop and close operations jerinj
@ 2019-06-30 18:06 ` jerinj
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 57/57] net/octeontx2: add Rx interrupts support jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:06 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru, Sunil Kumar Kori
From: Vamsi Attunuru <vattunuru@marvell.com>
Add MTU set operation and MTU update feature.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 7 ++
drivers/net/octeontx2/otx2_ethdev.h | 4 +
drivers/net/octeontx2/otx2_ethdev_ops.c | 86 ++++++++++++++++++++++
6 files changed, 100 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 1856d9924..be10dc0c8 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -15,6 +15,7 @@ Runtime Tx queue setup = Y
Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
+MTU update = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 053fca288..df8180f83 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -15,6 +15,7 @@ Runtime Tx queue setup = Y
Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
+MTU update = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index d4a458262..517e9e641 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -30,6 +30,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Port hardware statistics
- Link state information
- Link flow control
+- MTU update
- Scatter-Gather IO support
- Vector Poll mode driver
- Debug utilities - Context dump and error interrupt support
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e23bed603..170593e95 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1457,6 +1457,12 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, i;
+ if (eth_dev->data->nb_rx_queues != 0) {
+ rc = otx2_nix_recalc_mtu(eth_dev);
+ if (rc)
+ return rc;
+ }
+
/* Start rx queues */
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
rc = otx2_nix_rx_queue_start(eth_dev, i);
@@ -1527,6 +1533,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
+ .mtu_set = otx2_nix_mtu_set,
.mac_addr_add = otx2_nix_mac_addr_add,
.mac_addr_remove = otx2_nix_mac_addr_del,
.mac_addr_set = otx2_nix_mac_addr_set,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index f39fdfa1f..3703acc69 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -371,6 +371,10 @@ int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
+/* MTU */
+int otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu);
+int otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev);
+
/* Link */
void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 6a3048336..5a16a3c04 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -6,6 +6,92 @@
#include "otx2_ethdev.h"
+int
+otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
+{
+ uint32_t buffsz, frame_size = mtu + NIX_L2_OVERHEAD;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_frs_cfg *req;
+ int rc;
+
+ /* Check if MTU is within the allowed range */
+ if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS)
+ return -EINVAL;
+
+ buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
+
+ /* Refuse MTU that requires the support of scattered packets
+ * when this feature has not been enabled before.
+ */
+ if (data->dev_started && frame_size > buffsz &&
+ !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+ return -EINVAL;
+
+ /* Check <seg size> * <max_seg> >= max_frame */
+ if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
+ return -EINVAL;
+
+ req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
+ req->update_smq = true;
+ /* FRS HW config should exclude FCS but include NPC VTAG insert size */
+ req->maxlen = frame_size - RTE_ETHER_CRC_LEN + NIX_MAX_VTAG_ACT_SIZE;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Now just update Rx MAXLEN */
+ req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
+ req->maxlen = frame_size - RTE_ETHER_CRC_LEN;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ if (frame_size > RTE_ETHER_MAX_LEN)
+ dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+ /* Update max_rx_pkt_len */
+ data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+ return rc;
+}
+
+int
+otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rte_pktmbuf_pool_private *mbp_priv;
+ struct otx2_eth_rxq *rxq;
+ uint32_t buffsz;
+ uint16_t mtu;
+ int rc;
+
+ /* Get rx buffer size */
+ rxq = data->rx_queues[0];
+ mbp_priv = rte_mempool_get_priv(rxq->pool);
+ buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
+
+ /* Setup scatter mode if needed by jumbo */
+ if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz)
+ dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+
+ /* Setup MTU based on max_rx_pkt_len */
+ mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
+
+ rc = otx2_nix_mtu_set(eth_dev, mtu);
+ if (rc)
+ otx2_err("Failed to set default MTU size %d", rc);
+
+ return rc;
+}
+
static void
nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
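A usage sketch for the above (standard API, not from the patch): rte_eth_dev_set_mtu() lands in otx2_nix_mtu_set(), which derives the frame size as mtu + NIX_L2_OVERHEAD and programs FRS excluding FCS. The 9000 below is an example value, assumed to fit within NIX_MAX_FRS; per the checks above, DEV_RX_OFFLOAD_SCATTER must be enabled if the frame exceeds the Rx buffer size on a started port.

#include <rte_ethdev.h>

static int
set_jumbo_mtu(uint16_t port)
{
	return rte_eth_dev_set_mtu(port, 9000);
}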
* [dpdk-dev] [PATCH v2 57/57] net/octeontx2: add Rx interrupts support
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (55 preceding siblings ...)
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 56/57] net/octeontx2: add MTU set operation jerinj
@ 2019-06-30 18:06 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
57 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-06-30 18:06 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Harman Kalra
From: Harman Kalra <hkalra@marvell.com>
This patch implements the Rx interrupt feature required for power
saving. These interrupts can be enabled/disabled on demand.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 31 ++++++
drivers/net/octeontx2/otx2_ethdev.h | 16 +++
drivers/net/octeontx2/otx2_ethdev_irq.c | 125 ++++++++++++++++++++++
6 files changed, 175 insertions(+)
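Before the diff, a brief application-side sketch using the standard ethdev Rx-interrupt API (not from this patch): with dev_conf.intr_conf.rxq set at configure time, the calls below reach the otx2_nix_rx_queue_intr_enable()/_disable() ops this patch adds. The helper name is hypothetical.

#include <rte_ethdev.h>

/* Arm the CQ interrupt for a queue, sleep on the event fd (epoll glue
 * elided), then return to pure polling. Assumes the port was configured
 * with dev_conf.intr_conf.rxq = 1. */
static void
rx_idle_wait(uint16_t port, uint16_t q)
{
	rte_eth_dev_rx_intr_enable(port, q);   /* enable CINT for queue q */
	/* ... wait on the interrupt fd via rte_epoll_wait() ... */
	rte_eth_dev_rx_intr_disable(port, q);  /* disable before polling */
}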
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index be10dc0c8..66952328b 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Rx interrupt = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index bef451d01..16799309b 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -7,6 +7,7 @@
Speed capabilities = Y
Lock-free Tx queue = Y
Multiprocess aware = Y
+Rx interrupt = Y
Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 517e9e641..dbd376665 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -36,6 +36,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Debug utilities - Context dump and error interrupt support
- IEEE1588 timestamping
- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
+- Rx interrupt support
Prerequisites
-------------
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 170593e95..7f50a4c0e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -277,6 +277,8 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
/* Many to one reduction */
aq->cq.qint_idx = qid % dev->qints;
+ /* Map CQ0 [RQ0] to CINT0 and so on till max 64 irqs */
+ aq->cq.cint_idx = qid;
if (otx2_ethdev_fixup_is_limit_cq_full(dev)) {
uint16_t min_rx_drop;
@@ -1204,6 +1206,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
otx2_nix_vlan_fini(eth_dev);
otx2_flow_free_all_resources(dev);
oxt2_nix_unregister_queue_irqs(eth_dev);
+ if (eth_dev->data->dev_conf.intr_conf.rxq)
+ oxt2_nix_unregister_cq_irqs(eth_dev);
nix_set_nop_rxtx_function(eth_dev);
rc = nix_store_queue_cfg_and_then_release(eth_dev);
if (rc)
@@ -1264,6 +1268,27 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /* Register cq IRQs */
+ if (eth_dev->data->dev_conf.intr_conf.rxq) {
+ if (eth_dev->data->nb_rx_queues > dev->cints) {
+ otx2_err("Rx interrupt cannot be enabled, rxq > %d",
+ dev->cints);
+ goto free_nix_lf;
+ }
+ /* Rx interrupt feature cannot work with vector mode because
+ * vector mode doesn't process packets unless a minimum of 4 pkts
+ * are received, while CQ interrupts are generated even for 1 pkt
+ * in the CQ.
+ */
+ dev->scalar_ena = true;
+
+ rc = oxt2_nix_register_cq_irqs(eth_dev);
+ if (rc) {
+ otx2_err("Failed to register CQ interrupts rc=%d", rc);
+ goto free_nix_lf;
+ }
+ }
+
/* Configure loop back mode */
rc = cgx_intlbk_enable(dev, eth_dev->data->dev_conf.lpbk_mode);
if (rc) {
@@ -1576,6 +1601,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set,
.vlan_tpid_set = otx2_nix_vlan_tpid_set,
.vlan_pvid_set = otx2_nix_vlan_pvid_set,
+ .rx_queue_intr_enable = otx2_nix_rx_queue_intr_enable,
+ .rx_queue_intr_disable = otx2_nix_rx_queue_intr_disable,
};
static inline int
@@ -1824,6 +1851,10 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
/* Unregister queue irqs */
oxt2_nix_unregister_queue_irqs(eth_dev);
+ /* Unregister cq irqs */
+ if (eth_dev->data->dev_conf.intr_conf.rxq)
+ oxt2_nix_unregister_cq_irqs(eth_dev);
+
rc = nix_lf_free(dev);
if (rc)
otx2_err("Failed to free nix lf, rc=%d", rc);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 3703acc69..f6905db83 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -102,6 +102,13 @@
#define OP_ERR BIT_ULL(CQ_OP_STAT_OP_ERR)
#define CQ_ERR BIT_ULL(CQ_OP_STAT_CQ_ERR)
+#define CQ_CQE_THRESH_DEFAULT 0x1ULL /* IRQ triggered when
+ * NIX_LF_CINTX_CNT[QCOUNT]
+ * crosses this value
+ */
+#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */
+#define CQ_TIMER_THRESH_MAX 255
+
#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
ETH_RSS_TCP | ETH_RSS_SCTP | \
ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD)
@@ -248,6 +255,7 @@ struct otx2_eth_dev {
uint16_t qints;
uint8_t configured;
uint8_t configured_qints;
+ uint8_t configured_cints;
uint8_t configured_nb_rx_qs;
uint8_t configured_nb_tx_qs;
uint16_t nix_msixoff;
@@ -262,6 +270,7 @@ struct otx2_eth_dev {
uint64_t rx_offload_capa;
uint64_t tx_offload_capa;
struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
+ struct otx2_qint cints_mem[RTE_MAX_QUEUES_PER_PORT];
uint16_t txschq[NIX_TXSCH_LVL_CNT];
uint16_t txschq_contig[NIX_TXSCH_LVL_CNT];
uint16_t txschq_index[NIX_TXSCH_LVL_CNT];
@@ -384,8 +393,15 @@ void otx2_eth_dev_link_status_update(struct otx2_dev *dev,
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
+int oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
+void oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev);
+
+int otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
+ uint16_t rx_queue_id);
+int otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
+ uint16_t rx_queue_id);
/* Debug */
int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data);
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index 066aca7a5..9006e5c8b 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -5,6 +5,7 @@
#include <inttypes.h>
#include <rte_bus_pci.h>
+#include <rte_malloc.h>
#include "otx2_ethdev.h"
@@ -171,6 +172,18 @@ nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off)
(int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff));
}
+static void
+nix_lf_cq_irq(void *param)
+{
+ struct otx2_qint *cint = (struct otx2_qint *)param;
+ struct rte_eth_dev *eth_dev = cint->eth_dev;
+ struct otx2_eth_dev *dev;
+
+ dev = otx2_eth_pmd_priv(eth_dev);
+ /* Clear interrupt */
+ otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_INT(cint->qintx));
+}
+
static void
nix_lf_q_irq(void *param)
{
@@ -315,6 +328,92 @@ oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
}
}
+int
+oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc = 0, vec, q;
+
+ dev->configured_cints = RTE_MIN(dev->cints,
+ eth_dev->data->nb_rx_queues);
+
+ for (q = 0; q < dev->configured_cints; q++) {
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q;
+
+ /* Clear CINT CNT */
+ otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q));
+
+ /* Clear interrupt */
+ otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q));
+
+ dev->cints_mem[q].eth_dev = eth_dev;
+ dev->cints_mem[q].qintx = q;
+
+ /* Sync cints_mem update */
+ rte_smp_wmb();
+
+ /* Register queue irq vector */
+ rc = otx2_register_irq(handle, nix_lf_cq_irq,
+ &dev->cints_mem[q], vec);
+ if (rc) {
+ otx2_err("Fail to register CQ irq, rc=%d", rc);
+ return rc;
+ }
+
+ if (!handle->intr_vec) {
+ handle->intr_vec = rte_zmalloc("intr_vec",
+ dev->configured_cints *
+ sizeof(int), 0);
+ if (!handle->intr_vec) {
+ otx2_err("Failed to allocate %d rx intr_vec",
+ dev->configured_cints);
+ return -ENOMEM;
+ }
+ }
+ /* VFIO vector zero is reserved for the misc interrupt, so
+ * apply the required offset here. (b13bfab4cd)
+ */
+ handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec;
+
+ /* Configure CQE interrupt coalescing parameters */
+ otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
+ (CQ_CQE_THRESH_DEFAULT << 32) |
+ (CQ_TIMER_THRESH_DEFAULT << 48)),
+ dev->base + NIX_LF_CINTX_WAIT((q)));
+
+ /* Keeping the CQ interrupt disabled as the rx interrupt
+ * feature needs to be enabled/disabled on demand.
+ */
+ }
+
+ return rc;
+}
+
+void
+oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec, q;
+
+ for (q = 0; q < dev->configured_cints; q++) {
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q;
+
+ /* Clear CINT CNT */
+ otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q));
+
+ /* Clear interrupt */
+ otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q));
+
+ /* Unregister queue irq vector */
+ otx2_unregister_irq(handle, nix_lf_cq_irq,
+ &dev->cints_mem[q], vec);
+ }
+}
+
int
otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
{
@@ -341,3 +440,29 @@ otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev)
nix_lf_unregister_err_irq(eth_dev);
nix_lf_unregister_ras_irq(eth_dev);
}
+
+int
+otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
+ uint16_t rx_queue_id)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* Enable CINT interrupt */
+ otx2_write64(BIT_ULL(0), dev->base +
+ NIX_LF_CINTX_ENA_W1S(rx_queue_id));
+
+ return 0;
+}
+
+int
+otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
+ uint16_t rx_queue_id)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* Clear and disable CINT interrupt */
+ otx2_write64(BIT_ULL(0), dev->base +
+ NIX_LF_CINTX_ENA_W1C(rx_queue_id));
+
+ return 0;
+}
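[Editor's note] A minimal application-side sketch (not part of the patch) of
how the rx_queue_intr_enable/disable ops added above are typically consumed.
It assumes a port configured with intr_conf.rxq = 1; "port" and "queue" are
caller-supplied placeholders and the calls are the standard ethdev/EAL
interrupt APIs:

#include <rte_ethdev.h>
#include <rte_interrupts.h>

static void
rx_intr_loop(uint16_t port, uint16_t queue)
{
	struct rte_epoll_event ev;
	struct rte_mbuf *pkts[32];
	uint16_t nb;

	/* Tie the queue's interrupt to this thread's epoll instance */
	rte_eth_dev_rx_intr_ctl_q(port, queue, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	for (;;) {
		nb = rte_eth_rx_burst(port, queue, pkts, 32);
		if (nb > 0) {
			/* ... process and free pkts ... */
			continue;
		}
		/* Queue idle: arm the CINT and sleep until traffic.
		 * Production code re-polls once after arming to close
		 * the race with packets that land before the enable.
		 */
		rte_eth_dev_rx_intr_enable(port, queue);
		rte_epoll_wait(RTE_EPOLL_PER_THREAD, &ev, 1, -1);
		rte_eth_dev_rx_intr_disable(port, queue);
	}
}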
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver
2019-06-30 18:05 ` [dpdk-dev] [PATCH v2 00/57] " jerinj
` (56 preceding siblings ...)
2019-06-30 18:06 ` [dpdk-dev] [PATCH v2 57/57] net/octeontx2: add Rx interrupts support jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 01/58] net/octeontx2: add build and doc infrastructure jerinj
` (58 more replies)
57 siblings, 59 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev; +Cc: Jerin Jacob
From: Jerin Jacob <jerinj@marvell.com>
This patchset adds support for the OCTEON TX2 ethdev driver.
v3:
# Fixed the ICC 32-bit build issue
# Added the missing "net/octeontx2: add link status set operations" patch from
v1
v2:
# Moved the MAINTAINERS file update to the first patch (Ferruh)
# Removed the reference to v19.05 (Ferruh)
# Moved Makefile/Meson CFLAGS to the specific patches (Ferruh)
# Moved documentation updates to the specific patches (Ferruh)
# Reworked the code to remove the need for exposing the
otx2_nix_fastpath_lookup_mem_get function (Ferruh)
# Updated the goto logic in "net/octeontx2: add FW version get operation" (Ferruh)
# Added the "add Rx interrupts support" patch
Harman Kalra (3):
net/octeontx2: add PTP base support
net/octeontx2: add remaining PTP operations
net/octeontx2: add Rx interrupts support
Jerin Jacob (16):
net/octeontx2: add build and doc infrastructure
net/octeontx2: add ethdev probe and remove
net/octeontx2: add device init and uninit
net/octeontx2: add devargs parsing functions
net/octeontx2: handle device error interrupts
net/octeontx2: add info get operation
net/octeontx2: add device configure operation
net/octeontx2: handle queue specific error interrupts
net/octeontx2: add context debug utils
net/octeontx2: add Rx queue setup and release
net/octeontx2: add Tx queue setup and release
net/octeontx2: add ptype support
net/octeontx2: add Rx and Tx descriptor operations
net/octeontx2: add Rx burst support
net/octeontx2: add Rx vector version
net/octeontx2: add Tx burst support
Kiran Kumar K (13):
net/octeontx2: add register dump support
net/octeontx2: add basic stats operation
net/octeontx2: add extended stats operations
net/octeontx2: introducing flow driver
net/octeontx2: add flow utility functions
net/octeontx2: add flow mbox utility functions
net/octeontx2: add flow MCAM utility functions
net/octeontx2: add flow parsing for outer layers
net/octeontx2: add flow actions support
net/octeontx2: add flow parse actions support
net/octeontx2: add flow operations
net/octeontx2: add flow destroy ops support
net/octeontx2: add flow init and fini
Krzysztof Kanas (2):
net/octeontx2: alloc and free TM HW resources
net/octeontx2: enable Tx through traffic manager
Nithin Dabilpuram (9):
net/octeontx2: add queue start and stop operations
net/octeontx2: introduce traffic manager
net/octeontx2: configure TM HW resources
net/octeontx2: add queue info and pool supported operations
net/octeontx2: add Rx multi segment version
net/octeontx2: add Tx multi segment version
net/octeontx2: add Tx vector version
net/octeontx2: add device start operation
net/octeontx2: add device stop and close operations
Sunil Kumar Kori (1):
net/octeontx2: add unicast MAC filter
Vamsi Attunuru (9):
net/octeontx2: add link stats operations
net/octeontx2: add promiscuous and allmulticast mode
net/octeontx2: add RSS support
net/octeontx2: handle port reconfigure
net/octeontx2: add module EEPROM dump
net/octeontx2: add flow control support
net/octeontx2: add FW version get operation
net/octeontx2: add MTU set operation
net/octeontx2: add link status set operations
Vivek Sharma (5):
net/octeontx2: connect flow API to ethdev ops
net/octeontx2: implement VLAN utility functions
net/octeontx2: support VLAN offloads
net/octeontx2: support VLAN filters
net/octeontx2: support VLAN TPID and PVID for Tx
MAINTAINERS | 9 +
config/common_base | 5 +
doc/guides/nics/features/octeontx2.ini | 50 +
doc/guides/nics/features/octeontx2_vec.ini | 46 +
doc/guides/nics/features/octeontx2_vf.ini | 42 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/octeontx2.rst | 306 +++
doc/guides/platform/octeontx2.rst | 3 +
drivers/net/Makefile | 1 +
drivers/net/meson.build | 6 +-
drivers/net/octeontx2/Makefile | 58 +
drivers/net/octeontx2/meson.build | 40 +
drivers/net/octeontx2/otx2_ethdev.c | 2017 +++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 531 +++++
drivers/net/octeontx2/otx2_ethdev_debug.c | 500 ++++
drivers/net/octeontx2/otx2_ethdev_devargs.c | 165 ++
drivers/net/octeontx2/otx2_ethdev_irq.c | 468 ++++
drivers/net/octeontx2/otx2_ethdev_ops.c | 461 ++++
drivers/net/octeontx2/otx2_flow.c | 981 ++++++++
drivers/net/octeontx2/otx2_flow.h | 390 ++++
drivers/net/octeontx2/otx2_flow_ctrl.c | 220 ++
drivers/net/octeontx2/otx2_flow_parse.c | 959 ++++++++
drivers/net/octeontx2/otx2_flow_utils.c | 910 ++++++++
drivers/net/octeontx2/otx2_link.c | 157 ++
drivers/net/octeontx2/otx2_lookup.c | 315 +++
drivers/net/octeontx2/otx2_mac.c | 149 ++
drivers/net/octeontx2/otx2_ptp.c | 273 +++
drivers/net/octeontx2/otx2_rss.c | 372 +++
drivers/net/octeontx2/otx2_rx.c | 411 ++++
drivers/net/octeontx2/otx2_rx.h | 333 +++
drivers/net/octeontx2/otx2_stats.c | 387 ++++
drivers/net/octeontx2/otx2_tm.c | 1396 ++++++++++++
drivers/net/octeontx2/otx2_tm.h | 153 ++
drivers/net/octeontx2/otx2_tx.c | 1033 +++++++++
drivers/net/octeontx2/otx2_tx.h | 370 +++
drivers/net/octeontx2/otx2_vlan.c | 1034 +++++++++
.../octeontx2/rte_pmd_octeontx2_version.map | 4 +
mk/rte.app.mk | 2 +
38 files changed, 14557 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/nics/features/octeontx2.ini
create mode 100644 doc/guides/nics/features/octeontx2_vec.ini
create mode 100644 doc/guides/nics/features/octeontx2_vf.ini
create mode 100644 doc/guides/nics/octeontx2.rst
create mode 100644 drivers/net/octeontx2/Makefile
create mode 100644 drivers/net/octeontx2/meson.build
create mode 100644 drivers/net/octeontx2/otx2_ethdev.c
create mode 100644 drivers/net/octeontx2/otx2_ethdev.h
create mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c
create mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
create mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
create mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c
create mode 100644 drivers/net/octeontx2/otx2_flow.c
create mode 100644 drivers/net/octeontx2/otx2_flow.h
create mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
create mode 100644 drivers/net/octeontx2/otx2_flow_parse.c
create mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
create mode 100644 drivers/net/octeontx2/otx2_link.c
create mode 100644 drivers/net/octeontx2/otx2_lookup.c
create mode 100644 drivers/net/octeontx2/otx2_mac.c
create mode 100644 drivers/net/octeontx2/otx2_ptp.c
create mode 100644 drivers/net/octeontx2/otx2_rss.c
create mode 100644 drivers/net/octeontx2/otx2_rx.c
create mode 100644 drivers/net/octeontx2/otx2_rx.h
create mode 100644 drivers/net/octeontx2/otx2_stats.c
create mode 100644 drivers/net/octeontx2/otx2_tm.c
create mode 100644 drivers/net/octeontx2/otx2_tm.h
create mode 100644 drivers/net/octeontx2/otx2_tx.c
create mode 100644 drivers/net/octeontx2/otx2_tx.h
create mode 100644 drivers/net/octeontx2/otx2_vlan.c
create mode 100644 drivers/net/octeontx2/rte_pmd_octeontx2_version.map
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 01/58] net/octeontx2: add build and doc infrastructure
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 02/58] net/octeontx2: add ethdev probe and remove jerinj
` (57 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, Thomas Monjalon, John McNamara, Marko Kovacevic,
Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add the bare minimum PMD library and doc build infrastructure
and claim maintainership of the octeontx2 PMD.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
MAINTAINERS | 9 ++++++
config/common_base | 5 +++
doc/guides/nics/features/octeontx2.ini | 9 ++++++
doc/guides/nics/features/octeontx2_vec.ini | 9 ++++++
doc/guides/nics/features/octeontx2_vf.ini | 9 ++++++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/octeontx2.rst | 32 +++++++++++++++++++
doc/guides/platform/octeontx2.rst | 3 ++
drivers/net/Makefile | 1 +
drivers/net/meson.build | 6 +++-
drivers/net/octeontx2/Makefile | 30 +++++++++++++++++
drivers/net/octeontx2/meson.build | 9 ++++++
drivers/net/octeontx2/otx2_ethdev.c | 3 ++
.../octeontx2/rte_pmd_octeontx2_version.map | 4 +++
mk/rte.app.mk | 2 ++
15 files changed, 131 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/nics/features/octeontx2.ini
create mode 100644 doc/guides/nics/features/octeontx2_vec.ini
create mode 100644 doc/guides/nics/features/octeontx2_vf.ini
create mode 100644 doc/guides/nics/octeontx2.rst
create mode 100644 drivers/net/octeontx2/Makefile
create mode 100644 drivers/net/octeontx2/meson.build
create mode 100644 drivers/net/octeontx2/otx2_ethdev.c
create mode 100644 drivers/net/octeontx2/rte_pmd_octeontx2_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 97a009e43..073bf76a4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -691,6 +691,15 @@ F: drivers/net/mvneta/
F: doc/guides/nics/mvneta.rst
F: doc/guides/nics/features/mvneta.ini
+Marvell OCTEON TX2
+M: Jerin Jacob <jerinj@marvell.com>
+M: Nithin Dabilpuram <ndabilpuram@marvell.com>
+M: Kiran Kumar K <kirankumark@marvell.com>
+T: git://dpdk.org/next/dpdk-next-net-mrvl
+F: drivers/net/octeontx2/
+F: doc/guides/nics/features/octeontx2*.ini
+F: doc/guides/nics/octeontx2.rst
+
Mellanox mlx4
M: Matan Azrad <matan@mellanox.com>
M: Shahaf Shuler <shahafs@mellanox.com>
diff --git a/config/common_base b/config/common_base
index e700bf1e7..6cc44b65a 100644
--- a/config/common_base
+++ b/config/common_base
@@ -411,6 +411,11 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
#
CONFIG_RTE_LIBRTE_OCTEONTX_PMD=y
+#
+# Compile burst-oriented Marvell OCTEON TX2 network PMD driver
+#
+CONFIG_RTE_LIBRTE_OCTEONTX2_PMD=y
+
#
# Compile WRS accelerated virtual port (AVP) guest PMD driver
#
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
new file mode 100644
index 000000000..84d5ad779
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'octeontx2' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
new file mode 100644
index 000000000..5fd7e4c5c
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'octeontx2_vec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
new file mode 100644
index 000000000..3128cc120
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'octeontx2_vf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index d664c4592..9fec02f3e 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -46,6 +46,7 @@ Network Interface Controller Drivers
nfb
nfp
octeontx
+ octeontx2
qede
sfc_efx
softnic
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
new file mode 100644
index 000000000..f0bd36be3
--- /dev/null
+++ b/doc/guides/nics/octeontx2.rst
@@ -0,0 +1,32 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(C) 2019 Marvell International Ltd.
+
+OCTEON TX2 Poll Mode driver
+===========================
+
+The OCTEON TX2 ETHDEV PMD (**librte_pmd_octeontx2**) provides poll mode ethdev
+driver support for the inbuilt network device found in **Marvell OCTEON TX2**
+SoC family as well as for their virtual functions (VF) in SR-IOV context.
+
+More information can be found at `Marvell Official Website
+<https://www.marvell.com/embedded-processors/infrastructure-processors>`_.
+
+Features
+--------
+
+Features of the OCTEON TX2 Ethdev PMD are:
+
+
+Prerequisites
+-------------
+
+See :doc:`../platform/octeontx2` for setup information.
+
+Compile time Config Options
+---------------------------
+
+The following options may be modified in the ``config`` file.
+
+- ``CONFIG_RTE_LIBRTE_OCTEONTX2_PMD`` (default ``y``)
+
+ Toggle compilation of the ``librte_pmd_octeontx2`` driver.
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
index c9ea45647..d2592f119 100644
--- a/doc/guides/platform/octeontx2.rst
+++ b/doc/guides/platform/octeontx2.rst
@@ -98,6 +98,9 @@ HW Offload Drivers
This section lists dataplane H/W block(s) available in OCTEON TX2 SoC.
+#. **Ethdev Driver**
+ See :doc:`../nics/octeontx2` for NIX Ethdev driver information.
+
#. **Mempool Driver**
See :doc:`../mempool/octeontx2` for NPA mempool driver information.
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index a1d45d9cb..5767fdf65 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -47,6 +47,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += nfp
DIRS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt
DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += null
DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_PMD) += octeontx
+DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += octeontx2
DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 86e704e13..513f19b33 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -33,7 +33,11 @@ drivers = ['af_packet',
'netvsc',
'nfb',
'nfp',
- 'null', 'octeontx', 'pcap', 'qede', 'ring',
+ 'null',
+ 'octeontx',
+ 'octeontx2',
+ 'pcap',
+ 'ring',
'sfc',
'softnic',
'szedata2',
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
new file mode 100644
index 000000000..9c467352f
--- /dev/null
+++ b/drivers/net/octeontx2/Makefile
@@ -0,0 +1,30 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_octeontx2.a
+
+CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
+CFLAGS += -O3
+
+EXPORT_MAP := rte_pmd_octeontx2_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_ethdev.c
+
+LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
new file mode 100644
index 000000000..0d0ca32da
--- /dev/null
+++ b/drivers/net/octeontx2/meson.build
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+sources = files(
+ 'otx2_ethdev.c',
+ )
+
+deps += ['common_octeontx2', 'mempool_octeontx2']
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
new file mode 100644
index 000000000..d26535dee
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -0,0 +1,3 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
new file mode 100644
index 000000000..9a61188cd
--- /dev/null
+++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
@@ -0,0 +1,4 @@
+DPDK_19.08 {
+
+ local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 2b5696a27..fab72ff6a 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -109,6 +109,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF)$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOO
_LDLIBS-y += -lrte_common_octeontx
endif
OCTEONTX2-y := $(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL)
+OCTEONTX2-y += $(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD)
ifeq ($(findstring y,$(OCTEONTX2-y)),y)
_LDLIBS-y += -lrte_common_octeontx2
endif
@@ -195,6 +196,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += -lrte_pmd_mvpp2
_LDLIBS-$(CONFIG_RTE_LIBRTE_MVNETA_PMD) += -lrte_pmd_mvneta
_LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += -lrte_pmd_nfp
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null
+_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += -lrte_pmd_octeontx2
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += -lrte_pmd_pcap -lpcap
_LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += -lrte_pmd_qede
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING) += -lrte_pmd_ring
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 02/58] net/octeontx2: add ethdev probe and remove
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 01/58] net/octeontx2: add build and doc infrastructure jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 03/58] net/octeontx2: add device init and uninit jerinj
` (56 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, Anatoly Burakov
From: Jerin Jacob <jerinj@marvell.com>
Add basic PCIe ethdev probe and remove support.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/octeontx2/Makefile | 11 +++-
drivers/net/octeontx2/meson.build | 14 ++++-
drivers/net/octeontx2/otx2_ethdev.c | 93 +++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 27 +++++++++
4 files changed, 143 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_ethdev.h
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 9c467352f..8999c38d1 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -15,6 +15,14 @@ CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
CFLAGS += -O3
+ifneq ($(CONFIG_RTE_ARCH_64),y)
+CFLAGS += -Wno-int-to-pointer-cast
+CFLAGS += -Wno-pointer-to-int-cast
+ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
+CFLAGS += -diag-disable 2259 -flax-vector-conversions
+endif
+endif
+
EXPORT_MAP := rte_pmd_octeontx2_version.map
LIBABIVER := 1
@@ -25,6 +33,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_ethdev.c
-LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2
+LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
+LDLIBS += -lrte_ethdev -lrte_bus_pci
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 0d0ca32da..db375f33b 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -6,4 +6,16 @@ sources = files(
'otx2_ethdev.c',
)
-deps += ['common_octeontx2', 'mempool_octeontx2']
+deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2']
+
+extra_flags = []
+# This integrated controller runs only on an arm64 machine, so remove 32-bit warnings
+if not dpdk_conf.get('RTE_ARCH_64')
+ extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast']
+endif
+
+foreach flag: extra_flags
+ if cc.has_argument(flag)
+ cflags += flag
+ endif
+endforeach
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index d26535dee..05fa8988e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1,3 +1,96 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(C) 2019 Marvell International Ltd.
*/
+
+#include <rte_ethdev_pci.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+
+#include "otx2_ethdev.h"
+
+static int
+otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+
+ return -ENODEV;
+}
+
+static int
+otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
+{
+ RTE_SET_USED(eth_dev);
+ RTE_SET_USED(mbox_close);
+
+ return -ENODEV;
+}
+
+static int
+nix_remove(struct rte_pci_device *pci_dev)
+{
+ struct rte_eth_dev *eth_dev;
+ int rc;
+
+ eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+ if (eth_dev) {
+ /* Cleanup eth dev */
+ rc = otx2_eth_dev_uninit(eth_dev, true);
+ if (rc)
+ return rc;
+
+ rte_eth_dev_pci_release(eth_dev);
+ }
+
+ /* Nothing to be done for secondary processes */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ return 0;
+}
+
+static int
+nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ int rc;
+
+ RTE_SET_USED(pci_drv);
+
+ rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct otx2_eth_dev),
+ otx2_eth_dev_init);
+
+ /* On error in the secondary process, recheck if the port exists
+ * in the primary process or is in the middle of detaching.
+ */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY && rc)
+ if (!rte_eth_dev_allocated(pci_dev->device.name))
+ return 0;
+ return rc;
+}
+
+static const struct rte_pci_id pci_nix_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_PF)
+ },
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_VF)
+ },
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
+ PCI_DEVID_OCTEONTX2_RVU_AF_VF)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver pci_nix = {
+ .id_table = pci_nix_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA |
+ RTE_PCI_DRV_INTR_LSC,
+ .probe = nix_probe,
+ .remove = nix_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_octeontx2, pci_nix);
+RTE_PMD_REGISTER_PCI_TABLE(net_octeontx2, pci_nix_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_octeontx2, "vfio-pci");
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
new file mode 100644
index 000000000..fd01a3254
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_ETHDEV_H__
+#define __OTX2_ETHDEV_H__
+
+#include <stdint.h>
+
+#include <rte_common.h>
+
+#include "otx2_common.h"
+#include "otx2_dev.h"
+#include "otx2_irq.h"
+#include "otx2_mempool.h"
+
+struct otx2_eth_dev {
+ OTX2_DEV; /* Base class */
+} __rte_cache_aligned;
+
+static inline struct otx2_eth_dev *
+otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
+{
+ return eth_dev->data->dev_private;
+}
+
+#endif /* __OTX2_ETHDEV_H__ */
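[Editor's note] A minimal sketch (not part of the patch) of locating the
port created by nix_probe() from an application, once the device is bound
to vfio-pci as required by the RTE_PMD_REGISTER_KMOD_DEP() above; the BDF
"0002:02:00.0" is a placeholder:

#include <rte_ethdev.h>

static int
find_otx2_port(uint16_t *port_id)
{
	/* Ethdev ports created via the PCI probe path are named
	 * after the device's PCI BDF.
	 */
	return rte_eth_dev_get_port_by_name("0002:02:00.0", port_id);
}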
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 03/58] net/octeontx2: add device init and uninit
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 01/58] net/octeontx2: add build and doc infrastructure jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 02/58] net/octeontx2: add ethdev probe and remove jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 04/58] net/octeontx2: add devargs parsing functions jerinj
` (55 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, Anatoly Burakov
Cc: Sunil Kumar Kori, Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add basic init and uninit functions, which include
attaching the LF device to the probed PCIe device.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 277 +++++++++++++++++++++++++++-
drivers/net/octeontx2/otx2_ethdev.h | 72 ++++++++
drivers/net/octeontx2/otx2_mac.c | 72 ++++++++
5 files changed, 418 insertions(+), 5 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_mac.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 8999c38d1..fff95ab02 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -31,6 +31,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_mac.c \
otx2_ethdev.c
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index db375f33b..b153f166d 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files(
+ 'otx2_mac.c',
'otx2_ethdev.c',
)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 05fa8988e..08f03b4c3 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -8,27 +8,277 @@
#include "otx2_ethdev.h"
+static inline void
+otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+}
+
+static inline void
+otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+}
+
+static inline uint64_t
+nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
+{
+ uint64_t capa = NIX_RX_OFFLOAD_CAPA;
+
+ if (otx2_dev_is_vf(dev))
+ capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+
+ return capa;
+}
+
+static inline uint64_t
+nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
+{
+ RTE_SET_USED(dev);
+
+ return NIX_TX_OFFLOAD_CAPA;
+}
+
+static int
+nix_lf_free(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_lf_free_req *req;
+ struct ndc_sync_op *ndc_req;
+ int rc;
+
+ /* Sync NDC-NIX for LF */
+ ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
+ ndc_req->nix_lf_tx_sync = 1;
+ ndc_req->nix_lf_rx_sync = 1;
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ otx2_err("Error on NDC-NIX-[TX, RX] LF sync, rc %d", rc);
+
+ req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
+ /* Let the AF driver free all of this NIX LF's
+ * NPC entries allocated using the NPC mailbox.
+ */
+ req->flags = 0;
+
+ return otx2_mbox_process(mbox);
+}
+
+static inline int
+nix_lf_attach(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct rsrc_attach_req *req;
+
+ /* Attach NIX(lf) */
+ req = otx2_mbox_alloc_msg_attach_resources(mbox);
+ req->modify = true;
+ req->nixlf = true;
+
+ return otx2_mbox_process(mbox);
+}
+
+static inline int
+nix_lf_get_msix_offset(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct msix_offset_rsp *msix_rsp;
+ int rc;
+
+ /* Get NPA and NIX MSIX vector offsets */
+ otx2_mbox_alloc_msg_msix_offset(mbox);
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
+
+ dev->nix_msixoff = msix_rsp->nix_msixoff;
+
+ return rc;
+}
+
+static inline int
+otx2_eth_dev_lf_detach(struct otx2_mbox *mbox)
+{
+ struct rsrc_detach_req *req;
+
+ req = otx2_mbox_alloc_msg_detach_resources(mbox);
+
+ /* Detach all except npa lf */
+ req->partial = true;
+ req->nixlf = true;
+ req->sso = true;
+ req->ssow = true;
+ req->timlfs = true;
+ req->cptlfs = true;
+
+ return otx2_mbox_process(mbox);
+}
+
static int
otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
{
- RTE_SET_USED(eth_dev);
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_pci_device *pci_dev;
+ int rc, max_entries;
- return -ENODEV;
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ /* Setup callbacks for secondary process */
+ otx2_eth_set_tx_function(eth_dev);
+ otx2_eth_set_rx_function(eth_dev);
+ return 0;
+ }
+
+ pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ rte_eth_copy_pci_info(eth_dev, pci_dev);
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
+
+ /* Zero out everything after OTX2_DEV to allow proper dev_reset() */
+ memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
+ offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
+
+ if (!dev->mbox_active) {
+ /* Initialize the base otx2_dev object
+ * only if it is not already initialized
+ */
+ rc = otx2_dev_init(pci_dev, dev);
+ if (rc) {
+ otx2_err("Failed to initialize otx2_dev rc=%d", rc);
+ goto error;
+ }
+ }
+
+ /* Grab the NPA LF if required */
+ rc = otx2_npa_lf_init(pci_dev, dev);
+ if (rc)
+ goto otx2_dev_uninit;
+
+ dev->configured = 0;
+ dev->drv_inited = true;
+ dev->base = dev->bar2 + (RVU_BLOCK_ADDR_NIX0 << 20);
+ dev->lmt_addr = dev->bar2 + (RVU_BLOCK_ADDR_LMT << 20);
+
+ /* Attach NIX LF */
+ rc = nix_lf_attach(dev);
+ if (rc)
+ goto otx2_npa_uninit;
+
+ /* Get NIX MSIX offset */
+ rc = nix_lf_get_msix_offset(dev);
+ if (rc)
+ goto otx2_npa_uninit;
+
+ /* Get maximum number of supported MAC entries */
+ max_entries = otx2_cgx_mac_max_entries_get(dev);
+ if (max_entries < 0) {
+ otx2_err("Failed to get max entries for mac addr");
+ rc = -ENOTSUP;
+ goto mbox_detach;
+ }
+
+ /* For VFs, the returned max_entries will be 0. But to keep the
+ * default MAC address, one entry must be allocated, so set it to 1.
+ */
+ if (max_entries == 0)
+ max_entries = 1;
+
+ eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", max_entries *
+ RTE_ETHER_ADDR_LEN, 0);
+ if (eth_dev->data->mac_addrs == NULL) {
+ otx2_err("Failed to allocate memory for mac addr");
+ rc = -ENOMEM;
+ goto mbox_detach;
+ }
+
+ dev->max_mac_entries = max_entries;
+
+ rc = otx2_nix_mac_addr_get(eth_dev, dev->mac_addr);
+ if (rc)
+ goto free_mac_addrs;
+
+ /* Update the mac address */
+ memcpy(eth_dev->data->mac_addrs, dev->mac_addr, RTE_ETHER_ADDR_LEN);
+
+ /* Also sync same MAC address to CGX table */
+ otx2_cgx_mac_addr_set(eth_dev, &eth_dev->data->mac_addrs[0]);
+
+ dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
+ dev->rx_offload_capa = nix_get_rx_offload_capa(dev);
+
+ if (otx2_dev_is_A0(dev)) {
+ dev->hwcap |= OTX2_FIXUP_F_MIN_4K_Q;
+ dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL;
+ }
+
+ otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64
+ " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64,
+ eth_dev->data->port_id, dev->pf, dev->vf,
+ OTX2_ETH_DEV_PMD_VERSION, dev->nix_msixoff, dev->hwcap,
+ dev->rx_offload_capa, dev->tx_offload_capa);
+ return 0;
+
+free_mac_addrs:
+ rte_free(eth_dev->data->mac_addrs);
+mbox_detach:
+ otx2_eth_dev_lf_detach(dev->mbox);
+otx2_npa_uninit:
+ otx2_npa_lf_fini();
+otx2_dev_uninit:
+ otx2_dev_fini(pci_dev, dev);
+error:
+ otx2_err("Failed to init nix eth_dev rc=%d", rc);
+ return rc;
}
static int
otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
{
- RTE_SET_USED(eth_dev);
- RTE_SET_USED(mbox_close);
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_pci_device *pci_dev;
+ int rc;
- return -ENODEV;
+ /* Nothing to be done for secondary processes */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rc = nix_lf_free(dev);
+ if (rc)
+ otx2_err("Failed to free nix lf, rc=%d", rc);
+
+ rc = otx2_npa_lf_fini();
+ if (rc)
+ otx2_err("Failed to cleanup npa lf, rc=%d", rc);
+
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
+ dev->drv_inited = false;
+
+ pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ rc = otx2_eth_dev_lf_detach(dev->mbox);
+ if (rc)
+ otx2_err("Failed to detach resources, rc=%d", rc);
+
+ /* Check if mbox close is needed */
+ if (!mbox_close)
+ return 0;
+
+ if (otx2_npa_lf_active(dev) || otx2_dev_active_vfs(dev)) {
+ /* Will be freed later by PMD */
+ eth_dev->data->dev_private = NULL;
+ return 0;
+ }
+
+ otx2_dev_fini(pci_dev, dev);
+ return 0;
}
static int
nix_remove(struct rte_pci_device *pci_dev)
{
struct rte_eth_dev *eth_dev;
+ struct otx2_idev_cfg *idev;
+ struct otx2_dev *otx2_dev;
int rc;
eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
@@ -45,7 +295,24 @@ nix_remove(struct rte_pci_device *pci_dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Check for common resources */
+ idev = otx2_intra_dev_get_cfg();
+ if (!idev || !idev->npa_lf || idev->npa_lf->pci_dev != pci_dev)
+ return 0;
+
+ otx2_dev = container_of(idev->npa_lf, struct otx2_dev, npalf);
+
+ if (otx2_npa_lf_active(otx2_dev) || otx2_dev_active_vfs(otx2_dev))
+ goto exit;
+
+ /* Safe to cleanup mbox as no more users */
+ otx2_dev_fini(pci_dev, otx2_dev);
+ rte_free(otx2_dev);
return 0;
+
+exit:
+ otx2_info("%s: common resource in use by other devices", pci_dev->name);
+ return -EAGAIN;
}
static int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index fd01a3254..d9f72686a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -8,14 +8,76 @@
#include <stdint.h>
#include <rte_common.h>
+#include <rte_ethdev.h>
#include "otx2_common.h"
#include "otx2_dev.h"
#include "otx2_irq.h"
#include "otx2_mempool.h"
+#define OTX2_ETH_DEV_PMD_VERSION "1.0"
+
+/* Ethdev HWCAP and Fixup flags. Use from MSB bits to avoid conflict with dev */
+
+/* Minimum CQ size should be 4K */
+#define OTX2_FIXUP_F_MIN_4K_Q BIT_ULL(63)
+#define otx2_ethdev_fixup_is_min_4k_q(dev) \
+ ((dev)->hwcap & OTX2_FIXUP_F_MIN_4K_Q)
+/* Limit CQ being full */
+#define OTX2_FIXUP_F_LIMIT_CQ_FULL BIT_ULL(62)
+#define otx2_ethdev_fixup_is_limit_cq_full(dev) \
+ ((dev)->hwcap & OTX2_FIXUP_F_LIMIT_CQ_FULL)
+
+/* Used for struct otx2_eth_dev::flags */
+#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
+
+#define NIX_TX_OFFLOAD_CAPA ( \
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
+ DEV_TX_OFFLOAD_MT_LOCKFREE | \
+ DEV_TX_OFFLOAD_VLAN_INSERT | \
+ DEV_TX_OFFLOAD_QINQ_INSERT | \
+ DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+ DEV_TX_OFFLOAD_TCP_CKSUM | \
+ DEV_TX_OFFLOAD_UDP_CKSUM | \
+ DEV_TX_OFFLOAD_SCTP_CKSUM | \
+ DEV_TX_OFFLOAD_MULTI_SEGS | \
+ DEV_TX_OFFLOAD_IPV4_CKSUM)
+
+#define NIX_RX_OFFLOAD_CAPA ( \
+ DEV_RX_OFFLOAD_CHECKSUM | \
+ DEV_RX_OFFLOAD_SCTP_CKSUM | \
+ DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ DEV_RX_OFFLOAD_SCATTER | \
+ DEV_RX_OFFLOAD_JUMBO_FRAME | \
+ DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ DEV_RX_OFFLOAD_VLAN_STRIP | \
+ DEV_RX_OFFLOAD_VLAN_FILTER | \
+ DEV_RX_OFFLOAD_QINQ_STRIP | \
+ DEV_RX_OFFLOAD_TIMESTAMP)
+
struct otx2_eth_dev {
OTX2_DEV; /* Base class */
+ MARKER otx2_eth_dev_data_start;
+ uint16_t sqb_size;
+ uint16_t rx_chan_base;
+ uint16_t tx_chan_base;
+ uint8_t rx_chan_cnt;
+ uint8_t tx_chan_cnt;
+ uint8_t lso_tsov4_idx;
+ uint8_t lso_tsov6_idx;
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+ uint8_t max_mac_entries;
+ uint8_t configured;
+ uint16_t nix_msixoff;
+ uintptr_t base;
+ uintptr_t lmt_addr;
+ uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
+ uint64_t rx_offloads;
+ uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
+ uint64_t tx_offloads;
+ uint64_t rx_offload_capa;
+ uint64_t tx_offload_capa;
} __rte_cache_aligned;
static inline struct otx2_eth_dev *
@@ -24,4 +86,14 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
return eth_dev->data->dev_private;
}
+/* CGX */
+int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
+int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
+int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
+ struct rte_ether_addr *addr);
+
+/* Mac address handling */
+int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
+int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c
new file mode 100644
index 000000000..89b0ca6b0
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_mac.c
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+
+#include "otx2_dev.h"
+#include "otx2_ethdev.h"
+
+int
+otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct cgx_mac_addr_set_or_get *req;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ if (otx2_dev_active_vfs(dev))
+ return -ENOTSUP;
+
+ req = otx2_mbox_alloc_msg_cgx_mac_addr_set(mbox);
+ otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ otx2_err("Failed to set mac address in CGX, rc=%d", rc);
+
+ return 0;
+}
+
+int
+otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev)
+{
+ struct cgx_max_dmac_entries_get_rsp *rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_mac_max_entries_get(mbox);
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ return rsp->max_dmac_filters;
+}
+
+int
+otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_get_mac_addr_rsp *rsp;
+ int rc;
+
+ otx2_mbox_alloc_msg_nix_get_mac_addr(mbox);
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to get mac address, rc=%d", rc);
+ goto done;
+ }
+
+ otx2_mbox_memcpy(addr, rsp->mac_addr, RTE_ETHER_ADDR_LEN);
+
+done:
+ return rc;
+}
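[Editor's note] The three functions above follow the same AF mailbox idiom
used throughout this driver: allocate a request slot in the shared mbox
region, fill the payload, then either fire-and-check or collect a response.
A condensed sketch (not part of the patch; the "foo" request/response names
are hypothetical stand-ins for real messages such as nix_get_mac_addr):

	struct foo_req *req;	/* hypothetical request type */
	struct foo_rsp *rsp;	/* hypothetical response type */
	int rc;

	req = otx2_mbox_alloc_msg_foo(mbox);	/* hypothetical allocator */
	req->field = val;			/* fill the request payload */

	/* Variant 1: only the AF return code matters */
	rc = otx2_mbox_process(mbox);

	/* Variant 2: a response payload is expected */
	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
	if (rc == 0)
		consume(rsp->field);		/* hypothetical consumer */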
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 04/58] net/octeontx2: add devargs parsing functions
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (2 preceding siblings ...)
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 03/58] net/octeontx2: add device init and uninit jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 05/58] net/octeontx2: handle device error interrupts jerinj
` (54 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
Cc: Pavan Nikhilesh, Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add the various devargs command line options supported by
this driver.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/octeontx2.rst | 67 ++++++++
drivers/net/octeontx2/Makefile | 5 +-
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 7 +
drivers/net/octeontx2/otx2_ethdev.h | 23 +++
drivers/net/octeontx2/otx2_ethdev_devargs.c | 165 ++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 10 ++
7 files changed, 276 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
create mode 100644 drivers/net/octeontx2/otx2_rx.h
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index f0bd36be3..92a7ebc42 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -30,3 +30,70 @@ The following options may be modified in the ``config`` file.
- ``CONFIG_RTE_LIBRTE_OCTEONTX2_PMD`` (default ``y``)
Toggle compilation of the ``librte_pmd_octeontx2`` driver.
+
+Runtime Config Options
+----------------------
+
+- ``HW offload ptype parsing disable`` (default ``0``)
+
+ Packet type parsing is HW offloaded by default and this feature may be toggled
+ using ``ptype_disable`` ``devargs`` parameter.
+
+- ``Rx&Tx scalar mode enable`` (default ``0``)
+
+ The ethdev supports both scalar and vector modes; the mode may be selected
+ at runtime using the ``scalar_enable`` ``devargs`` parameter.
+
+- ``RSS reta size`` (default ``64``)
+
+ RSS redirection table size may be configured during runtime using ``reta_size``
+ ``devargs`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,reta_size=256
+
+ With the above configuration, a RETA table of size 256 is populated.
+
+- ``Flow priority levels`` (default ``3``)
+
+ RTE Flow priority levels can be configured during runtime using
+ ``flow_max_priority`` ``devargs`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,flow_max_priority=10
+
+ With the above configuration, the number of priority levels is set to 10
+ (0-9). The maximum number of supported priority levels is 32.
+
+- ``Reserve Flow entries`` (default ``8``)
+
+ RTE flow entries can be pre-allocated and the pre-allocation size can be
+ selected at runtime using the ``flow_prealloc_size`` ``devargs`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,flow_prealloc_size=4
+
+ With the above configuration, the pre-allocation size is set to 4. The
+ maximum supported pre-allocation size is 32.
+
+- ``Max SQB buffer count`` (default ``512``)
+
+ Send queue descriptor buffer count may be limited during runtime using
+ ``max_sqb_count`` ``devargs`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,max_sqb_count=64
+
+ With the above configuration, each send queue's descriptor buffer count is
+ limited to a maximum of 64 buffers.
+
+
+.. note::
+
+ The above devargs parameters are configurable per device. If the application
+ needs the configuration on all the ethdev ports, the parameters must be
+ passed to each PCIe device.
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index fff95ab02..2705ccd9d 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -32,9 +32,10 @@ LIBABIVER := 1
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
- otx2_ethdev.c
+ otx2_ethdev.c \
+ otx2_ethdev_devargs.c
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
-LDLIBS += -lrte_ethdev -lrte_bus_pci
+LDLIBS += -lrte_ethdev -lrte_bus_pci -lrte_kvargs
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index b153f166d..b5c6fb978 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
sources = files(
'otx2_mac.c',
'otx2_ethdev.c',
+ 'otx2_ethdev_devargs.c'
)
deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2']
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 08f03b4c3..eeba0c2c6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -137,6 +137,13 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
+ /* Parse devargs string */
+ rc = otx2_ethdev_parse_devargs(eth_dev->device->devargs, dev);
+ if (rc) {
+ otx2_err("Failed to parse devargs rc=%d", rc);
+ goto error;
+ }
+
if (!dev->mbox_active) {
/* Initialize the base otx2_dev object
* only if already present
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index d9f72686a..a83688392 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -9,11 +9,13 @@
#include <rte_common.h>
#include <rte_ethdev.h>
+#include <rte_kvargs.h>
#include "otx2_common.h"
#include "otx2_dev.h"
#include "otx2_irq.h"
#include "otx2_mempool.h"
+#include "otx2_rx.h"
#define OTX2_ETH_DEV_PMD_VERSION "1.0"
@@ -31,6 +33,10 @@
/* Used for struct otx2_eth_dev::flags */
#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
+#define NIX_MAX_SQB 512
+#define NIX_MIN_SQB 32
+#define NIX_RSS_RETA_SIZE 64
+
#define NIX_TX_OFFLOAD_CAPA ( \
DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
DEV_TX_OFFLOAD_MT_LOCKFREE | \
@@ -56,6 +62,15 @@
DEV_RX_OFFLOAD_QINQ_STRIP | \
DEV_RX_OFFLOAD_TIMESTAMP)
+struct otx2_rss_info {
+ uint16_t rss_size;
+};
+
+struct otx2_npc_flow_info {
+ uint16_t flow_prealloc_size;
+ uint16_t flow_max_priority;
+};
+
struct otx2_eth_dev {
OTX2_DEV; /* Base class */
MARKER otx2_eth_dev_data_start;
@@ -72,12 +87,16 @@ struct otx2_eth_dev {
uint16_t nix_msixoff;
uintptr_t base;
uintptr_t lmt_addr;
+ uint16_t scalar_ena;
+ uint16_t max_sqb_count;
uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
uint64_t rx_offloads;
uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
uint64_t tx_offloads;
uint64_t rx_offload_capa;
uint64_t tx_offload_capa;
+ struct otx2_rss_info rss_info;
+ struct otx2_npc_flow_info npc_flow;
} __rte_cache_aligned;
static inline struct otx2_eth_dev *
@@ -96,4 +115,8 @@ int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
+/* Devargs */
+int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
+ struct otx2_eth_dev *dev);
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
new file mode 100644
index 000000000..85e7e312a
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <inttypes.h>
+#include <math.h>
+
+#include "otx2_ethdev.h"
+
+static int
+parse_flow_max_priority(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint16_t val;
+
+ val = atoi(value);
+
+ /* Limit the max priority to 32 */
+ if (val < 1 || val > 32)
+ return -EINVAL;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+static int
+parse_flow_prealloc_size(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint16_t val;
+
+ val = atoi(value);
+
+ /* Limit the prealloc size to 32 */
+ if (val < 1 || val > 32)
+ return -EINVAL;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+static int
+parse_reta_size(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint32_t val;
+
+ val = atoi(value);
+
+ if (val <= ETH_RSS_RETA_SIZE_64)
+ val = ETH_RSS_RETA_SIZE_64;
+ else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+ val = ETH_RSS_RETA_SIZE_128;
+ else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+ val = ETH_RSS_RETA_SIZE_256;
+ else
+ val = NIX_RSS_RETA_SIZE;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+static int
+parse_ptype_flag(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint32_t val;
+
+ val = atoi(value);
+ if (val)
+ val = 0; /* Disable NIX_RX_OFFLOAD_PTYPE_F */
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+static int
+parse_flag(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+
+ *(uint16_t *)extra_args = atoi(value);
+
+ return 0;
+}
+
+static int
+parse_sqb_count(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint32_t val;
+
+ val = atoi(value);
+
+ if (val < NIX_MIN_SQB || val > NIX_MAX_SQB)
+ return -EINVAL;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
+
+#define OTX2_RSS_RETA_SIZE "reta_size"
+#define OTX2_PTYPE_DISABLE "ptype_disable"
+#define OTX2_SCL_ENABLE "scalar_enable"
+#define OTX2_MAX_SQB_COUNT "max_sqb_count"
+#define OTX2_FLOW_PREALLOC_SIZE "flow_prealloc_size"
+#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
+
+int
+otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
+{
+ uint16_t offload_flag = NIX_RX_OFFLOAD_PTYPE_F;
+ uint16_t rss_size = NIX_RSS_RETA_SIZE;
+ uint16_t sqb_count = NIX_MAX_SQB;
+ uint16_t flow_prealloc_size = 8;
+ uint16_t flow_max_priority = 3;
+ uint16_t scalar_enable = 0;
+ struct rte_kvargs *kvlist;
+
+ if (devargs == NULL)
+ goto null_devargs;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ goto exit;
+
+ rte_kvargs_process(kvlist, OTX2_PTYPE_DISABLE,
+ &parse_ptype_flag, &offload_flag);
+ rte_kvargs_process(kvlist, OTX2_RSS_RETA_SIZE,
+ &parse_reta_size, &rss_size);
+ rte_kvargs_process(kvlist, OTX2_SCL_ENABLE,
+ &parse_flag, &scalar_enable);
+ rte_kvargs_process(kvlist, OTX2_MAX_SQB_COUNT,
+ &parse_sqb_count, &sqb_count);
+ rte_kvargs_process(kvlist, OTX2_FLOW_PREALLOC_SIZE,
+ &parse_flow_prealloc_size, &flow_prealloc_size);
+ rte_kvargs_process(kvlist, OTX2_FLOW_MAX_PRIORITY,
+ &parse_flow_max_priority, &flow_max_priority);
+ rte_kvargs_free(kvlist);
+
+null_devargs:
+ dev->rx_offload_flags = offload_flag;
+ dev->scalar_ena = scalar_enable;
+ dev->max_sqb_count = sqb_count;
+ dev->rss_info.rss_size = rss_size;
+ dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
+ dev->npc_flow.flow_max_priority = flow_max_priority;
+ return 0;
+
+exit:
+ return -EINVAL;
+}
+
+RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
+ OTX2_RSS_RETA_SIZE "=<64|128|256>"
+ OTX2_PTYPE_DISABLE "=1"
+ OTX2_SCL_ENABLE "=1"
+ OTX2_MAX_SQB_COUNT "=<32-512>"
+ OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
+ OTX2_FLOW_MAX_PRIORITY "=<1-32>");
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
new file mode 100644
index 000000000..1749c43ff
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_RX_H__
+#define __OTX2_RX_H__
+
+#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+
+#endif /* __OTX2_RX_H__ */
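[Editor's note] A minimal sketch (not part of the patch) of feeding the
devargs parsed above through EAL from an application; the BDF and values
are placeholders:

#include <rte_common.h>
#include <rte_eal.h>

int
main(void)
{
	char *eal_args[] = {
		"app", "-w", "0002:02:00.0,reta_size=256,scalar_enable=1",
	};

	/* otx2_ethdev_parse_devargs() sees reta_size and scalar_enable
	 * when the whitelisted port probes.
	 */
	if (rte_eal_init(RTE_DIM(eal_args), eal_args) < 0)
		return -1;

	return 0;
}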
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 05/58] net/octeontx2: handle device error interrupts
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (3 preceding siblings ...)
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 04/58] net/octeontx2: add devargs parsing functions jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 06/58] net/octeontx2: add info get operation jerinj
` (53 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Handle device-specific error and RAS interrupts.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 12 +-
drivers/net/octeontx2/otx2_ethdev.h | 4 +
drivers/net/octeontx2/otx2_ethdev_irq.c | 140 ++++++++++++++++++++++++
5 files changed, 156 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 2705ccd9d..77ba9b0da 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -33,6 +33,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
otx2_ethdev.c \
+ otx2_ethdev_irq.c \
otx2_ethdev_devargs.c
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index b5c6fb978..148f7d339 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
sources = files(
'otx2_mac.c',
'otx2_ethdev.c',
+ 'otx2_ethdev_irq.c',
'otx2_ethdev_devargs.c'
)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index eeba0c2c6..67a7ebb36 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -175,12 +175,17 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
if (rc)
goto otx2_npa_uninit;
+ /* Register LF irq handlers */
+ rc = otx2_nix_register_irqs(eth_dev);
+ if (rc)
+ goto mbox_detach;
+
/* Get maximum number of supported MAC entries */
max_entries = otx2_cgx_mac_max_entries_get(dev);
if (max_entries < 0) {
otx2_err("Failed to get max entries for mac addr");
rc = -ENOTSUP;
- goto mbox_detach;
+ goto unregister_irq;
}
/* For VFs, returned max_entries will be 0. But to keep default MAC
@@ -194,7 +199,7 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
if (eth_dev->data->mac_addrs == NULL) {
otx2_err("Failed to allocate memory for mac addr");
rc = -ENOMEM;
- goto mbox_detach;
+ goto unregister_irq;
}
dev->max_mac_entries = max_entries;
@@ -226,6 +231,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
free_mac_addrs:
rte_free(eth_dev->data->mac_addrs);
+unregister_irq:
+ otx2_nix_unregister_irqs(eth_dev);
mbox_detach:
otx2_eth_dev_lf_detach(dev->mbox);
otx2_npa_uninit:
@@ -261,6 +268,7 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
dev->drv_inited = false;
pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ otx2_nix_unregister_irqs(eth_dev);
rc = otx2_eth_dev_lf_detach(dev->mbox);
if (rc)
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index a83688392..f7d8838df 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -105,6 +105,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
return eth_dev->data->dev_private;
}
+/* IRQ */
+int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
+void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
+
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
new file mode 100644
index 000000000..33fed93c4
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <inttypes.h>
+
+#include <rte_bus_pci.h>
+
+#include "otx2_ethdev.h"
+
+static void
+nix_lf_err_irq(void *param)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t intr;
+
+ intr = otx2_read64(dev->base + NIX_LF_ERR_INT);
+ if (intr == 0)
+ return;
+
+ otx2_err("Err_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+
+ /* Clear interrupt */
+ otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
+}
+
+static int
+nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc, vec;
+
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C);
+ /* Set used interrupt vectors */
+ rc = otx2_register_irq(handle, nix_lf_err_irq, eth_dev, vec);
+ /* Enable all dev interrupts except RQ_DISABLED (bit 11) */
+ otx2_write64(~BIT_ULL(11), dev->base + NIX_LF_ERR_INT_ENA_W1S);
+
+ return rc;
+}
+
+static void
+nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec;
+
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C);
+ otx2_unregister_irq(handle, nix_lf_err_irq, eth_dev, vec);
+}
+
+static void
+nix_lf_ras_irq(void *param)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t intr;
+
+ intr = otx2_read64(dev->base + NIX_LF_RAS);
+ if (intr == 0)
+ return;
+
+ otx2_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+
+ /* Clear interrupt */
+ otx2_write64(intr, dev->base + NIX_LF_RAS);
+}
+
+static int
+nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc, vec;
+
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C);
+ /* Set used interrupt vectors */
+ rc = otx2_register_irq(handle, nix_lf_ras_irq, eth_dev, vec);
+ /* Enable dev interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1S);
+
+ return rc;
+}
+
+static void
+nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec;
+
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
+
+ /* Clear err interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C);
+ otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec);
+}
+
+int
+otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc;
+
+ if (dev->nix_msixoff == MSIX_VECTOR_INVALID) {
+ otx2_err("Invalid NIXLF MSIX vector offset vector: 0x%x",
+ dev->nix_msixoff);
+ return -EINVAL;
+ }
+
+ /* Register lf err interrupt */
+ rc = nix_lf_register_err_irq(eth_dev);
+ /* Register RAS interrupt */
+ rc |= nix_lf_register_ras_irq(eth_dev);
+
+ return rc;
+}
+
+void
+otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev)
+{
+ nix_lf_unregister_err_irq(eth_dev);
+ nix_lf_unregister_ras_irq(eth_dev);
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 06/58] net/octeontx2: add info get operation
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (4 preceding siblings ...)
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 05/58] net/octeontx2: handle device error interrupts jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 07/58] net/octeontx2: add device configure operation jerinj
` (52 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru, Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Add device information get operation.
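From the application side, the values filled in here are consumed through the
generic ethdev API; a small sketch, assuming port 0 is an octeontx2 port (not
part of the patch):

#include <stdio.h>

#include <rte_ethdev.h>

static void
print_port_limits(uint16_t port_id)
{
	struct rte_eth_dev_info info;

	rte_eth_dev_info_get(port_id, &info);	/* returns void in this release */
	printf("port %u: reta_size=%u hash_key=%uB max_rx_pktlen=%u\n",
	       port_id, info.reta_size, info.hash_key_size,
	       info.max_rx_pktlen);
}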
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 4 ++
doc/guides/nics/features/octeontx2_vec.ini | 4 ++
doc/guides/nics/features/octeontx2_vf.ini | 3 +
doc/guides/nics/octeontx2.rst | 2 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 7 +++
drivers/net/octeontx2/otx2_ethdev.h | 45 +++++++++++++++
drivers/net/octeontx2/otx2_ethdev_ops.c | 64 ++++++++++++++++++++++
9 files changed, 131 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 84d5ad779..356b88de7 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -4,6 +4,10 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
+Lock-free Tx queue = Y
+SR-IOV = Y
+Multiprocess aware = Y
Linux VFIO = Y
ARMv8 = Y
Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 5fd7e4c5c..5f4eaa3f4 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -4,6 +4,10 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
+Lock-free Tx queue = Y
+SR-IOV = Y
+Multiprocess aware = Y
Linux VFIO = Y
ARMv8 = Y
Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 3128cc120..024b032d4 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -4,6 +4,9 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
+Lock-free Tx queue = Y
+Multiprocess aware = Y
Linux VFIO = Y
ARMv8 = Y
Usage doc = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 92a7ebc42..e3f4c2c43 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -16,6 +16,8 @@ Features
Features of the OCTEON TX2 Ethdev PMD are:
+- SR-IOV VF
+- Lock-free Tx queue
Prerequisites
-------------
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 77ba9b0da..3360fbd10 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -34,6 +34,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
otx2_ethdev.c \
otx2_ethdev_irq.c \
+ otx2_ethdev_ops.c \
otx2_ethdev_devargs.c
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 148f7d339..aa8417e3f 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -6,6 +6,7 @@ sources = files(
'otx2_mac.c',
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
+ 'otx2_ethdev_ops.c',
'otx2_ethdev_devargs.c'
)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 67a7ebb36..6e3c70559 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -64,6 +64,11 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+/* Initialize and register driver with DPDK Application */
+static const struct eth_dev_ops otx2_eth_dev_ops = {
+ .dev_infos_get = otx2_nix_info_get,
+};
+
static inline int
nix_lf_attach(struct otx2_eth_dev *dev)
{
@@ -120,6 +125,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
struct rte_pci_device *pci_dev;
int rc, max_entries;
+ eth_dev->dev_ops = &otx2_eth_dev_ops;
+
/* For secondary processes, the primary has done all the work */
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
/* Setup callbacks for secondary process */
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index f7d8838df..666ceba91 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -33,9 +33,50 @@
/* Used for struct otx2_eth_dev::flags */
#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
+/* VLAN tags inserted by NIX_TX_VTAG_ACTION.
+ * On Tx, space for these is always reserved in the FRS.
+ */
+#define NIX_MAX_VTAG_INS 2
+#define NIX_MAX_VTAG_ACT_SIZE (4 * NIX_MAX_VTAG_INS)
+
+/* ETH_HLEN+ETH_FCS+2*VLAN_HLEN */
+#define NIX_L2_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 8)
+
+/* HW config of frame size doesn't include FCS */
+#define NIX_MAX_HW_FRS 9212
+#define NIX_MIN_HW_FRS 60
+
+/* Since HW FRS includes NPC VTAG insertion space, the FRS visible to the user is reduced */
+#define NIX_MAX_FRS \
+ (NIX_MAX_HW_FRS + RTE_ETHER_CRC_LEN - NIX_MAX_VTAG_ACT_SIZE)
+
+#define NIX_MIN_FRS \
+ (NIX_MIN_HW_FRS + RTE_ETHER_CRC_LEN)
+
+#define NIX_MAX_MTU \
+ (NIX_MAX_FRS - NIX_L2_OVERHEAD)
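+/* With the values above: NIX_MAX_FRS = 9212 + 4 - 8 = 9208 and
+ * NIX_MAX_MTU = 9208 - (14 + 4 + 8) = 9182 bytes
+ */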
+
#define NIX_MAX_SQB 512
#define NIX_MIN_SQB 32
+#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
#define NIX_RSS_RETA_SIZE 64
+#define NIX_RX_MIN_DESC 16
+#define NIX_RX_MIN_DESC_ALIGN 16
+#define NIX_RX_NB_SEG_MAX 6
+
+/* If PTP is enabled, an additional SEND MEM DESC is required, which
+ * takes two words; hence a maximum of 7 IOVA addresses are possible
+ */
+#if defined(RTE_LIBRTE_IEEE1588)
+#define NIX_TX_NB_SEG_MAX 7
+#else
+#define NIX_TX_NB_SEG_MAX 9
+#endif
+
+#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
+ ETH_RSS_TCP | ETH_RSS_SCTP | \
+ ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD)
#define NIX_TX_OFFLOAD_CAPA ( \
DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
@@ -105,6 +146,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
return eth_dev->data->dev_private;
}
+/* Ops */
+void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_info *dev_info);
+
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
new file mode 100644
index 000000000..df7e909d2
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+void
+otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ devinfo->min_rx_bufsize = NIX_MIN_FRS;
+ devinfo->max_rx_pktlen = NIX_MAX_FRS;
+ devinfo->max_rx_queues = RTE_MAX_QUEUES_PER_PORT;
+ devinfo->max_tx_queues = RTE_MAX_QUEUES_PER_PORT;
+ devinfo->max_mac_addrs = dev->max_mac_entries;
+ devinfo->max_vfs = pci_dev->max_vfs;
+ devinfo->max_mtu = devinfo->max_rx_pktlen - NIX_L2_OVERHEAD;
+ devinfo->min_mtu = devinfo->min_rx_bufsize - NIX_L2_OVERHEAD;
+
+ devinfo->rx_offload_capa = dev->rx_offload_capa;
+ devinfo->tx_offload_capa = dev->tx_offload_capa;
+ devinfo->rx_queue_offload_capa = 0;
+ devinfo->tx_queue_offload_capa = 0;
+
+ devinfo->reta_size = dev->rss_info.rss_size;
+ devinfo->hash_key_size = NIX_HASH_KEY_SIZE;
+ devinfo->flow_type_rss_offloads = NIX_RSS_OFFLOAD;
+
+ devinfo->default_rxconf = (struct rte_eth_rxconf) {
+ .rx_drop_en = 0,
+ .offloads = 0,
+ };
+
+ devinfo->default_txconf = (struct rte_eth_txconf) {
+ .offloads = 0,
+ };
+
+ devinfo->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = UINT16_MAX,
+ .nb_min = NIX_RX_MIN_DESC,
+ .nb_align = NIX_RX_MIN_DESC_ALIGN,
+ .nb_seg_max = NIX_RX_NB_SEG_MAX,
+ .nb_mtu_seg_max = NIX_RX_NB_SEG_MAX,
+ };
+ devinfo->rx_desc_lim.nb_max =
+ RTE_ALIGN_MUL_FLOOR(devinfo->rx_desc_lim.nb_max,
+ NIX_RX_MIN_DESC_ALIGN);
+
+ devinfo->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = UINT16_MAX,
+ .nb_min = 1,
+ .nb_align = 1,
+ .nb_seg_max = NIX_TX_NB_SEG_MAX,
+ .nb_mtu_seg_max = NIX_TX_NB_SEG_MAX,
+ };
+
+ /* Auto negotiation disabled */
+ devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+ devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
+ ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
+ ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 07/58] net/octeontx2: add device configure operation
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (5 preceding siblings ...)
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 06/58] net/octeontx2: add info get operation jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 08/58] net/octeontx2: handle queue specific error interrupts jerinj
` (51 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add the device configure operation. This issues the lf_alloc
mailbox request to allocate a NIX LF; upon return, the AF
provides the attributes of the selected LF.
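A minimal configure call that satisfies the checks below (no fixed link speed,
no DCB/FDIR, RSS or none as the Rx mq mode) might look as follows; port 0 and
a single queue pair are assumed, not taken from the patch:

#include <rte_ethdev.h>

static int
configure_port(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_NONE },
		.txmode = { .mq_mode = ETH_MQ_TX_NONE },
	};

	/* One Rx and one Tx queue; nix_lf_alloc() sizes the LF from these */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}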
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 151 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 11 ++
2 files changed, 162 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 6e3c70559..65d72a47f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -39,6 +39,52 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
return NIX_TX_OFFLOAD_CAPA;
}
+static int
+nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_lf_alloc_req *req;
+ struct nix_lf_alloc_rsp *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_nix_lf_alloc(mbox);
+ req->rq_cnt = nb_rxq;
+ req->sq_cnt = nb_txq;
+ req->cq_cnt = nb_rxq;
+ /* XQE_SZ should be in sync with NIX_CQ_ENTRY_SZ */
+ RTE_BUILD_BUG_ON(NIX_CQ_ENTRY_SZ != 128);
+ req->xqe_sz = NIX_XQESZ_W16;
+ req->rss_sz = dev->rss_info.rss_size;
+ req->rss_grps = NIX_RSS_GRPS;
+ req->npa_func = otx2_npa_pf_func_get();
+ req->sso_func = otx2_sso_pf_func_get();
+ req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
+ if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM)) {
+ req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
+ req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
+ }
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ dev->sqb_size = rsp->sqb_size;
+ dev->tx_chan_base = rsp->tx_chan_base;
+ dev->rx_chan_base = rsp->rx_chan_base;
+ dev->rx_chan_cnt = rsp->rx_chan_cnt;
+ dev->tx_chan_cnt = rsp->tx_chan_cnt;
+ dev->lso_tsov4_idx = rsp->lso_tsov4_idx;
+ dev->lso_tsov6_idx = rsp->lso_tsov6_idx;
+ dev->lf_tx_stats = rsp->lf_tx_stats;
+ dev->lf_rx_stats = rsp->lf_rx_stats;
+ dev->cints = rsp->cints;
+ dev->qints = rsp->qints;
+ dev->npc_flow.channel = dev->rx_chan_base;
+
+ return 0;
+}
+
static int
nix_lf_free(struct otx2_eth_dev *dev)
{
@@ -64,9 +110,114 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+static int
+otx2_nix_configure(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rte_eth_conf *conf = &data->dev_conf;
+ struct rte_eth_rxmode *rxmode = &conf->rxmode;
+ struct rte_eth_txmode *txmode = &conf->txmode;
+ char ea_fmt[RTE_ETHER_ADDR_FMT_SIZE];
+ struct rte_ether_addr *ea;
+ uint8_t nb_rxq, nb_txq;
+ int rc;
+
+ rc = -EINVAL;
+
+ /* Sanity checks */
+ if (rte_eal_has_hugepages() == 0) {
+ otx2_err("Huge page is not configured");
+ goto fail;
+ }
+
+ if (rte_eal_iova_mode() != RTE_IOVA_VA) {
+ otx2_err("iova mode should be va");
+ goto fail;
+ }
+
+ if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+ otx2_err("Setting link speed/duplex not supported");
+ goto fail;
+ }
+
+ if (conf->dcb_capability_en == 1) {
+ otx2_err("dcb enable is not supported");
+ goto fail;
+ }
+
+ if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
+ otx2_err("Flow director is not supported");
+ goto fail;
+ }
+
+ if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
+ rxmode->mq_mode != ETH_MQ_RX_RSS) {
+ otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
+ goto fail;
+ }
+
+ if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+ otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
+ goto fail;
+ }
+
+ /* Free the resources allocated from the previous configure */
+ if (dev->configured == 1)
+ nix_lf_free(dev);
+
+ if (otx2_dev_is_A0(dev) &&
+ (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+ ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+ (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+ otx2_err("Outer IP and SCTP checksum unsupported");
+ rc = -EINVAL;
+ goto fail;
+ }
+
+ dev->rx_offloads = rxmode->offloads;
+ dev->tx_offloads = txmode->offloads;
+ dev->rss_info.rss_grps = NIX_RSS_GRPS;
+
+ nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
+ nb_txq = RTE_MAX(data->nb_tx_queues, 1);
+
+ /* Alloc a nix lf */
+ rc = nix_lf_alloc(dev, nb_rxq, nb_txq);
+ if (rc) {
+ otx2_err("Failed to init nix_lf rc=%d", rc);
+ goto fail;
+ }
+
+ /* Update the mac address */
+ ea = eth_dev->data->mac_addrs;
+ memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
+ if (rte_is_zero_ether_addr(ea))
+ rte_eth_random_addr((uint8_t *)ea);
+
+ rte_ether_format_addr(ea_fmt, RTE_ETHER_ADDR_FMT_SIZE, ea);
+
+ otx2_nix_dbg("Configured port%d mac=%s nb_rxq=%d nb_txq=%d"
+ " rx_offloads=0x%" PRIx64 " tx_offloads=0x%" PRIx64 ""
+ " rx_flags=0x%x tx_flags=0x%x",
+ eth_dev->data->port_id, ea_fmt, nb_rxq,
+ nb_txq, dev->rx_offloads, dev->tx_offloads,
+ dev->rx_offload_flags, dev->tx_offload_flags);
+
+ /* All good */
+ dev->configured = 1;
+ dev->configured_nb_rx_qs = data->nb_rx_queues;
+ dev->configured_nb_tx_qs = data->nb_tx_queues;
+ return 0;
+
+fail:
+ return rc;
+}
+
/* Initialize and register driver with DPDK Application */
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
+ .dev_configure = otx2_nix_configure,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 666ceba91..c1528e2ac 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -59,11 +59,14 @@
#define NIX_MAX_SQB 512
#define NIX_MIN_SQB 32
+/* Group 0 will be used for RSS; groups 1-7 will be used for the rte_flow RSS action */
+#define NIX_RSS_GRPS 8
#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
#define NIX_RSS_RETA_SIZE 64
#define NIX_RX_MIN_DESC 16
#define NIX_RX_MIN_DESC_ALIGN 16
#define NIX_RX_NB_SEG_MAX 6
+#define NIX_CQ_ENTRY_SZ 128
/* If PTP is enabled additional SEND MEM DESC is required which
* takes 2 words, hence max 7 iova address are possible
@@ -105,9 +108,11 @@
struct otx2_rss_info {
uint16_t rss_size;
+ uint8_t rss_grps;
};
struct otx2_npc_flow_info {
+ uint16_t channel; /* Rx channel */
uint16_t flow_prealloc_size;
uint16_t flow_max_priority;
};
@@ -124,7 +129,13 @@ struct otx2_eth_dev {
uint8_t lso_tsov6_idx;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t max_mac_entries;
+ uint8_t lf_tx_stats;
+ uint8_t lf_rx_stats;
+ uint16_t cints;
+ uint16_t qints;
uint8_t configured;
+ uint8_t configured_nb_rx_qs;
+ uint8_t configured_nb_tx_qs;
uint16_t nix_msixoff;
uintptr_t base;
uintptr_t lmt_addr;
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 08/58] net/octeontx2: handle queue specific error interrupts
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (6 preceding siblings ...)
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 07/58] net/octeontx2: add device configure operation jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 09/58] net/octeontx2: add context debug utils jerinj
` (50 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
From: Jerin Jacob <jerinj@marvell.com>
Handle queue-specific error interrupts.
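Queues share QINT lines: judging from the q % dev->qints indexing below, queue
q raises QINT q % qints, so one MSI-X vector can aggregate several queues. A
trivial sketch of that mapping (an illustration, not part of the patch):

static inline int
queue_to_qint(int q, int nb_qints)
{
	/* Several queues can fold onto the same QINT line */
	return q % nb_qints;
}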
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 16 +-
drivers/net/octeontx2/otx2_ethdev.h | 9 ++
drivers/net/octeontx2/otx2_ethdev_irq.c | 191 ++++++++++++++++++++++++
4 files changed, 216 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index e3f4c2c43..50e825968 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- SR-IOV VF
- Lock-free Tx queue
+- Debug utilities - error interrupt support
Prerequisites
-------------
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 65d72a47f..045855c2e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -163,8 +163,10 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
}
/* Free the resources allocated from the previous configure */
- if (dev->configured == 1)
+ if (dev->configured == 1) {
+ otx2_nix_unregister_queue_irqs(eth_dev);
nix_lf_free(dev);
+ }
if (otx2_dev_is_A0(dev) &&
(txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
@@ -189,6 +191,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Register queue IRQs */
+ rc = otx2_nix_register_queue_irqs(eth_dev);
+ if (rc) {
+ otx2_err("Failed to register queue interrupts rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/* Update the mac address */
ea = eth_dev->data->mac_addrs;
memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
@@ -210,6 +219,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
dev->configured_nb_tx_qs = data->nb_tx_queues;
return 0;
+free_nix_lf:
+ rc = nix_lf_free(dev);
fail:
return rc;
}
@@ -413,6 +424,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Unregister queue irqs */
+ otx2_nix_unregister_queue_irqs(eth_dev);
+
rc = nix_lf_free(dev);
if (rc)
otx2_err("Failed to free nix lf, rc=%d", rc);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index c1528e2ac..d9cdd33b5 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -106,6 +106,11 @@
DEV_RX_OFFLOAD_QINQ_STRIP | \
DEV_RX_OFFLOAD_TIMESTAMP)
+struct otx2_qint {
+ struct rte_eth_dev *eth_dev;
+ uint8_t qintx;
+};
+
struct otx2_rss_info {
uint16_t rss_size;
uint8_t rss_grps;
@@ -134,6 +139,7 @@ struct otx2_eth_dev {
uint16_t cints;
uint16_t qints;
uint8_t configured;
+ uint8_t configured_qints;
uint8_t configured_nb_rx_qs;
uint8_t configured_nb_tx_qs;
uint16_t nix_msixoff;
@@ -147,6 +153,7 @@ struct otx2_eth_dev {
uint64_t tx_offloads;
uint64_t rx_offload_capa;
uint64_t tx_offload_capa;
+ struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
struct otx2_rss_info rss_info;
struct otx2_npc_flow_info npc_flow;
} __rte_cache_aligned;
@@ -163,7 +170,9 @@ void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
+int otx2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
+void otx2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index 33fed93c4..476c7ea78 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -112,6 +112,197 @@ nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec);
}
+static inline uint8_t
+nix_lf_q_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t q,
+ uint32_t off, uint64_t mask)
+{
+ uint64_t reg, wdata;
+ uint8_t qint;
+
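+ /* Queue index is encoded at bit position 44 of the atomic operand */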
+ wdata = (uint64_t)q << 44;
+ reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(dev->base + off));
+
+ if (reg & BIT_ULL(42) /* OP_ERR */) {
+ otx2_err("Failed execute irq get off=0x%x", off);
+ return 0;
+ }
+
+ qint = reg & 0xff;
+ wdata &= mask;
+ otx2_write64(wdata, dev->base + off);
+
+ return qint;
+}
+
+static inline uint8_t
+nix_lf_rq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t rq)
+{
+ return nix_lf_q_irq_get_and_clear(dev, rq, NIX_LF_RQ_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+nix_lf_cq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t cq)
+{
+ return nix_lf_q_irq_get_and_clear(dev, cq, NIX_LF_CQ_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+nix_lf_sq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t sq)
+{
+ return nix_lf_q_irq_get_and_clear(dev, sq, NIX_LF_SQ_OP_INT, ~0x1ff00);
+}
+
+static inline void
+nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off)
+{
+ uint64_t reg;
+
+ reg = otx2_read64(dev->base + off);
+ if (reg & BIT_ULL(44))
+ otx2_err("SQ=%d err_code=0x%x",
+ (int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff));
+}
+
+static void
+nix_lf_q_irq(void *param)
+{
+ struct otx2_qint *qint = (struct otx2_qint *)param;
+ struct rte_eth_dev *eth_dev = qint->eth_dev;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint8_t irq, qintx = qint->qintx;
+ int q, cq, rq, sq;
+ uint64_t intr;
+
+ intr = otx2_read64(dev->base + NIX_LF_QINTX_INT(qintx));
+ if (intr == 0)
+ return;
+
+ otx2_err("Queue_intr=0x%" PRIx64 " qintx=%d pf=%d, vf=%d",
+ intr, qintx, dev->pf, dev->vf);
+
+ /* Handle RQ interrupts */
+ for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
+ rq = q % dev->qints;
+ irq = nix_lf_rq_irq_get_and_clear(dev, rq);
+
+ if (irq & BIT_ULL(NIX_RQINT_DROP))
+ otx2_err("RQ=%d NIX_RQINT_DROP", rq);
+
+ if (irq & BIT_ULL(NIX_RQINT_RED))
+ otx2_err("RQ=%d NIX_RQINT_RED", rq);
+ }
+
+ /* Handle CQ interrupts */
+ for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
+ cq = q % dev->qints;
+ irq = nix_lf_cq_irq_get_and_clear(dev, cq);
+
+ if (irq & BIT_ULL(NIX_CQERRINT_DOOR_ERR))
+ otx2_err("CQ=%d NIX_CQERRINT_DOOR_ERR", cq);
+
+ if (irq & BIT_ULL(NIX_CQERRINT_WR_FULL))
+ otx2_err("CQ=%d NIX_CQERRINT_WR_FULL", cq);
+
+ if (irq & BIT_ULL(NIX_CQERRINT_CQE_FAULT))
+ otx2_err("CQ=%d NIX_CQERRINT_CQE_FAULT", cq);
+ }
+
+ /* Handle SQ interrupts */
+ for (q = 0; q < eth_dev->data->nb_tx_queues; q++) {
+ sq = q % dev->qints;
+ irq = nix_lf_sq_irq_get_and_clear(dev, sq);
+
+ if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) {
+ otx2_err("SQ=%d NIX_SQINT_LMT_ERR", sq);
+ nix_lf_sq_debug_reg(dev, NIX_LF_SQ_OP_ERR_DBG);
+ }
+ if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) {
+ otx2_err("SQ=%d NIX_SQINT_MNQ_ERR", sq);
+ nix_lf_sq_debug_reg(dev, NIX_LF_MNQ_ERR_DBG);
+ }
+ if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) {
+ otx2_err("SQ=%d NIX_SQINT_SEND_ERR", sq);
+ nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
+ }
+ if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) {
+ otx2_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq);
+ nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
+ }
+ }
+
+ /* Clear interrupt */
+ otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
+}
+
+int
+otx2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec, q, sqs, rqs, qs, rc = 0;
+
+ /* Figure out max qintx required */
+ rqs = RTE_MIN(dev->qints, eth_dev->data->nb_rx_queues);
+ sqs = RTE_MIN(dev->qints, eth_dev->data->nb_tx_queues);
+ qs = RTE_MAX(rqs, sqs);
+
+ dev->configured_qints = qs;
+
+ for (q = 0; q < qs; q++) {
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
+
+ /* Clear QINT CNT */
+ otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
+
+ /* Clear interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
+
+ dev->qints_mem[q].eth_dev = eth_dev;
+ dev->qints_mem[q].qintx = q;
+
+ /* Sync qints_mem update */
+ rte_smp_wmb();
+
+ /* Register queue irq vector */
+ rc = otx2_register_irq(handle, nix_lf_q_irq,
+ &dev->qints_mem[q], vec);
+ if (rc)
+ break;
+
+ otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
+ otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
+ /* Enable QINT interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1S(q));
+ }
+
+ return rc;
+}
+
+void
+otx2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec, q;
+
+ for (q = 0; q < dev->configured_qints; q++) {
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
+
+ /* Clear QINT CNT */
+ otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
+ otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
+
+ /* Clear interrupt */
+ otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
+
+ /* Unregister queue irq vector */
+ otx2_unregister_irq(handle, nix_lf_q_irq,
+ &dev->qints_mem[q], vec);
+ }
+}
+
int
otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 09/58] net/octeontx2: add context debug utils
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (7 preceding siblings ...)
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 08/58] net/octeontx2: handle queue specific error interrupts jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 10/58] net/octeontx2: add register dump support jerinj
` (49 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
Cc: Vivek Sharma
From: Jerin Jacob <jerinj@marvell.com>
Add RQ, SQ, CQ context and CQE structure dump utilities.
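Each dump below is driven by the same admin-queue (AQ) context read; a
condensed sketch, assuming the mailbox helpers from the common octeontx2 code:

#include "otx2_ethdev.h"

static int
read_cq_ctx(struct otx2_eth_dev *dev, uint16_t qidx,
	    struct nix_aq_enq_rsp **rsp)
{
	struct nix_aq_enq_req *aq;

	aq = otx2_mbox_alloc_msg_nix_aq_enq(dev->mbox);
	aq->qidx = qidx;
	aq->ctype = NIX_AQ_CTYPE_CQ;	/* or NIX_AQ_CTYPE_RQ / _SQ */
	aq->op = NIX_AQ_INSTOP_READ;

	/* On success *rsp points at the context read back by the AF */
	return otx2_mbox_process_msg(dev->mbox, (void **)rsp);
}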
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
doc/guides/nics/octeontx2.rst | 2 +-
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 4 +
drivers/net/octeontx2/otx2_ethdev_debug.c | 272 ++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev_irq.c | 6 +
6 files changed, 285 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 50e825968..75d5746e8 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -18,7 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- SR-IOV VF
- Lock-free Tx queue
-- Debug utilities - error interrupt support
+- Debug utilities - Context dump and error interrupt support
Prerequisites
-------------
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 3360fbd10..840339aab 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -35,6 +35,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_ethdev.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
+ otx2_ethdev_debug.c \
otx2_ethdev_devargs.c
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index aa8417e3f..a06e1192c 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -7,6 +7,7 @@ sources = files(
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
+ 'otx2_ethdev_debug.c',
'otx2_ethdev_devargs.c'
)
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index d9cdd33b5..7c0bef28e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -174,6 +174,10 @@ int otx2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
+/* Debug */
+int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
+void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
+
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
new file mode 100644
index 000000000..39cda7637
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_debug.c
@@ -0,0 +1,272 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
+
+static inline void
+nix_lf_sq_dump(struct nix_sq_ctx_s *ctx)
+{
+ nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d",
+ ctx->sqe_way_mask, ctx->cq);
+ nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x",
+ ctx->sdp_mcast, ctx->substream);
+ nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n",
+ ctx->qint_idx, ctx->ena);
+
+ nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d",
+ ctx->sqb_count, ctx->default_chan);
+ nix_dump("W1: smq_rr_quantum \t\t%d\nW1: sso_ena \t\t\t%d",
+ ctx->smq_rr_quantum, ctx->sso_ena);
+ nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n",
+ ctx->xoff, ctx->cq_ena, ctx->smq);
+
+ nix_dump("W2: sqe_stype \t\t\t%d\nW2: sq_int_ena \t\t\t%d",
+ ctx->sqe_stype, ctx->sq_int_ena);
+ nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d",
+ ctx->sq_int, ctx->sqb_aura);
+ nix_dump("W2: smq_rr_count \t\t%d\n", ctx->smq_rr_count);
+
+ nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d",
+ ctx->smq_next_sq_vld, ctx->smq_pend);
+ nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d",
+ ctx->smenq_next_sqb_vld, ctx->head_offset);
+ nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d",
+ ctx->smenq_offset, ctx->tail_offset);
+ nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d",
+ ctx->smq_lso_segnum, ctx->smq_next_sq);
+ nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d",
+ ctx->mnq_dis, ctx->lmt_dis);
+ nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n",
+ ctx->cq_limit, ctx->max_sqe_size);
+
+ nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb);
+ nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb);
+ nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb);
+ nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb);
+ nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb);
+
+ nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d",
+ ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena);
+ nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d",
+ ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps);
+ nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d",
+ ctx->vfi_lso_sb, ctx->vfi_lso_sizem1);
+ nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total);
+
+ nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->scm_lso_rem);
+ nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
+ nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
+ nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->drop_octs);
+ nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->drop_pkts);
+}
+
+static inline void
+nix_lf_rq_dump(struct nix_rq_ctx_s *ctx)
+{
+ nix_dump("W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x",
+ ctx->wqe_aura, ctx->substream);
+ nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd \t\t\t%d",
+ ctx->cq, ctx->ena_wqwd);
+ nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d",
+ ctx->ipsech_ena, ctx->sso_ena);
+ nix_dump("W0: ena \t\t\t%d\n", ctx->ena);
+
+ nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d",
+ ctx->lpb_drop_ena, ctx->spb_drop_ena);
+ nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d",
+ ctx->xqe_drop_ena, ctx->wqe_caching);
+ nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d",
+ ctx->pb_caching, ctx->sso_tt);
+ nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d",
+ ctx->sso_grp, ctx->lpb_aura);
+ nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura);
+
+ nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d",
+ ctx->xqe_hdr_split, ctx->xqe_imm_copy);
+ nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d",
+ ctx->xqe_imm_size, ctx->later_skip);
+ nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d",
+ ctx->first_skip, ctx->lpb_sizem1);
+ nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d",
+ ctx->spb_ena, ctx->wqe_skip);
+ nix_dump("W2: spb_sizem1 \t\t\t%d\n", ctx->spb_sizem1);
+
+ nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d",
+ ctx->spb_pool_pass, ctx->spb_pool_drop);
+ nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d",
+ ctx->spb_aura_pass, ctx->spb_aura_drop);
+ nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d",
+ ctx->wqe_pool_pass, ctx->wqe_pool_drop);
+ nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n",
+ ctx->xqe_pass, ctx->xqe_drop);
+
+ nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d",
+ ctx->qint_idx, ctx->rq_int_ena);
+ nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d",
+ ctx->rq_int, ctx->lpb_pool_pass);
+ nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d",
+ ctx->lpb_pool_drop, ctx->lpb_aura_pass);
+ nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop);
+
+ nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d",
+ ctx->flow_tagw, ctx->bad_utag);
+ nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n",
+ ctx->good_utag, ctx->ltag);
+
+ nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
+ nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
+ nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs);
+ nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_pkts);
+ nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts);
+}
+
+static inline void
+nix_lf_cq_dump(struct nix_cq_ctx_s *ctx)
+{
+ nix_dump("W0: base \t\t\t0x%" PRIx64 "\n", ctx->base);
+
+ nix_dump("W1: wrptr \t\t\t%" PRIx64 "", (uint64_t)ctx->wrptr);
+ nix_dump("W1: avg_con \t\t\t%d\nW1: cint_idx \t\t\t%d",
+ ctx->avg_con, ctx->cint_idx);
+ nix_dump("W1: cq_err \t\t\t%d\nW1: qint_idx \t\t\t%d",
+ ctx->cq_err, ctx->qint_idx);
+ nix_dump("W1: bpid \t\t\t%d\nW1: bp_ena \t\t\t%d\n",
+ ctx->bpid, ctx->bp_ena);
+
+ nix_dump("W2: update_time \t\t%d\nW2: avg_level \t\t\t%d",
+ ctx->update_time, ctx->avg_level);
+ nix_dump("W2: head \t\t\t%d\nW2: tail \t\t\t%d\n",
+ ctx->head, ctx->tail);
+
+ nix_dump("W3: cq_err_int_ena \t\t%d\nW3: cq_err_int \t\t\t%d",
+ ctx->cq_err_int_ena, ctx->cq_err_int);
+ nix_dump("W3: qsize \t\t\t%d\nW3: caching \t\t\t%d",
+ ctx->qsize, ctx->caching);
+ nix_dump("W3: substream \t\t\t0x%03x\nW3: ena \t\t\t%d",
+ ctx->substream, ctx->ena);
+ nix_dump("W3: drop_ena \t\t\t%d\nW3: drop \t\t\t%d",
+ ctx->drop_ena, ctx->drop);
+ nix_dump("W3: bp \t\t\t\t%d\n", ctx->bp);
+}
+
+int
+otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc = 0, q, rq = eth_dev->data->nb_rx_queues;
+ int sq = eth_dev->data->nb_tx_queues;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_rsp *rsp;
+ struct nix_aq_enq_req *aq;
+
+ for (q = 0; q < rq; q++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = q;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to get cq context");
+ goto fail;
+ }
+ nix_dump("============== port=%d cq=%d ===============",
+ eth_dev->data->port_id, q);
+ nix_lf_cq_dump(&rsp->cq);
+ }
+
+ for (q = 0; q < rq; q++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = q;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
+ if (rc) {
+ otx2_err("Failed to get rq context");
+ goto fail;
+ }
+ nix_dump("============== port=%d rq=%d ===============",
+ eth_dev->data->port_id, q);
+ nix_lf_rq_dump(&rsp->rq);
+ }
+ for (q = 0; q < sq; q++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = q;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to get sq context");
+ goto fail;
+ }
+ nix_dump("============== port=%d sq=%d ===============",
+ eth_dev->data->port_id, q);
+ nix_lf_sq_dump(&rsp->sq);
+ }
+
+fail:
+ return rc;
+}
+
+/* Dumps struct nix_cqe_hdr_s and struct nix_rx_parse_s */
+void
+otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
+{
+ const struct nix_rx_parse_s *rx =
+ (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
+
+ nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
+ cq->tag, cq->q, cq->node, cq->cqe_type);
+
+ nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d",
+ rx->chan, rx->desc_sizem1);
+ nix_dump("W0: imm_copy \t%d\t\texpress \t%d",
+ rx->imm_copy, rx->express);
+ nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d",
+ rx->wqwd, rx->errlev, rx->errcode);
+ nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d",
+ rx->latype, rx->lbtype, rx->lctype);
+ nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d",
+ rx->ldtype, rx->letype, rx->lftype);
+ nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d",
+ rx->lgtype, rx->lhtype);
+
+ nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1);
+ nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d",
+ rx->l2m, rx->l2b, rx->l3m, rx->l3b);
+ nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d",
+ rx->vtag0_valid, rx->vtag0_gone);
+ nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d",
+ rx->vtag1_valid, rx->vtag1_gone);
+ nix_dump("W1: pkind \t%d", rx->pkind);
+ nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d",
+ rx->vtag0_tci, rx->vtag1_tci);
+
+ nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d",
+ rx->laflags, rx->lbflags, rx->lcflags);
+ nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d",
+ rx->ldflags, rx->leflags, rx->lfflags);
+ nix_dump("W2: lgflags \t%d\t\tlhflags \t%d",
+ rx->lgflags, rx->lhflags);
+
+ nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d",
+ rx->eoh_ptr, rx->wqe_aura, rx->pb_aura);
+ nix_dump("W3: match_id \t%d", rx->match_id);
+
+ nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d",
+ rx->laptr, rx->lbptr, rx->lcptr);
+ nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d",
+ rx->ldptr, rx->leptr, rx->lfptr);
+ nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr);
+
+ nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
+ rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
+}
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index 476c7ea78..fdebdef38 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -23,6 +23,8 @@ nix_lf_err_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
+
+ otx2_nix_queues_ctx_dump(eth_dev);
}
static int
@@ -75,6 +77,8 @@ nix_lf_ras_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_RAS);
+
+ otx2_nix_queues_ctx_dump(eth_dev);
}
static int
@@ -232,6 +236,8 @@ nix_lf_q_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
+
+ otx2_nix_queues_ctx_dump(eth_dev);
}
int
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 10/58] net/octeontx2: add register dump support
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (8 preceding siblings ...)
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 09/58] net/octeontx2: add context debug utils jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 11/58] net/octeontx2: add link stats operations jerinj
` (48 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
From: Kiran Kumar K <kirankumark@marvell.com>
Add register dump support and mark the 'Registers dump' feature in the documentation.
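Applications reach this through rte_eth_dev_get_reg_info() with the usual
two-call pattern, which the implementation below supports; a sketch, not part
of the patch:

#include <errno.h>
#include <stdlib.h>

#include <rte_ethdev.h>

static int
dump_port_regs(uint16_t port_id)
{
	struct rte_dev_reg_info regs = { 0 };
	int rc;

	/* First call with data == NULL returns the count and width */
	rc = rte_eth_dev_get_reg_info(port_id, &regs);
	if (rc)
		return rc;

	regs.data = calloc(regs.length, regs.width);
	if (regs.data == NULL)
		return -ENOMEM;

	/* Second call fetches the register values */
	rc = rte_eth_dev_get_reg_info(port_id, &regs);
	free(regs.data);
	return rc;
}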
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 3 +
drivers/net/octeontx2/otx2_ethdev_debug.c | 228 +++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev_irq.c | 6 +
7 files changed, 241 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 356b88de7..7d53bf0e7 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -8,6 +8,7 @@ Speed capabilities = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
+Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 5f4eaa3f4..e0cc7b22d 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -8,6 +8,7 @@ Speed capabilities = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
+Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 024b032d4..6dfdf88c6 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -7,6 +7,7 @@
Speed capabilities = Y
Lock-free Tx queue = Y
Multiprocess aware = Y
+Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 045855c2e..48d5a15d6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -229,6 +229,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
+ .get_reg = otx2_nix_dev_get_reg,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7c0bef28e..7313689b0 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -175,6 +175,9 @@ void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
/* Debug */
+int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data);
+int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
+ struct rte_dev_reg_info *regs);
int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
index 39cda7637..9f06e5505 100644
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ b/drivers/net/octeontx2/otx2_ethdev_debug.c
@@ -5,6 +5,234 @@
#include "otx2_ethdev.h"
#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
+#define NIX_REG_INFO(reg) {reg, #reg}
+
+struct nix_lf_reg_info {
+ uint32_t offset;
+ const char *name;
+};
+
+static const struct
+nix_lf_reg_info nix_lf_reg[] = {
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(0)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(1)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(2)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(3)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(4)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(5)),
+ NIX_REG_INFO(NIX_LF_CFG),
+ NIX_REG_INFO(NIX_LF_GINT),
+ NIX_REG_INFO(NIX_LF_GINT_W1S),
+ NIX_REG_INFO(NIX_LF_GINT_ENA_W1C),
+ NIX_REG_INFO(NIX_LF_GINT_ENA_W1S),
+ NIX_REG_INFO(NIX_LF_ERR_INT),
+ NIX_REG_INFO(NIX_LF_ERR_INT_W1S),
+ NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1C),
+ NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1S),
+ NIX_REG_INFO(NIX_LF_RAS),
+ NIX_REG_INFO(NIX_LF_RAS_W1S),
+ NIX_REG_INFO(NIX_LF_RAS_ENA_W1C),
+ NIX_REG_INFO(NIX_LF_RAS_ENA_W1S),
+ NIX_REG_INFO(NIX_LF_SQ_OP_ERR_DBG),
+ NIX_REG_INFO(NIX_LF_MNQ_ERR_DBG),
+ NIX_REG_INFO(NIX_LF_SEND_ERR_DBG),
+};
+
+static int
+nix_lf_get_reg_count(struct otx2_eth_dev *dev)
+{
+ int reg_count = 0;
+
+ reg_count = RTE_DIM(nix_lf_reg);
+ /* NIX_LF_TX_STATX */
+ reg_count += dev->lf_tx_stats;
+ /* NIX_LF_RX_STATX */
+ reg_count += dev->lf_rx_stats;
+ /* NIX_LF_QINTX_CNT*/
+ reg_count += dev->qints;
+ /* NIX_LF_QINTX_INT */
+ reg_count += dev->qints;
+ /* NIX_LF_QINTX_ENA_W1S */
+ reg_count += dev->qints;
+ /* NIX_LF_QINTX_ENA_W1C */
+ reg_count += dev->qints;
+ /* NIX_LF_CINTX_CNT */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_WAIT */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_INT */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_INT_W1S */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_ENA_W1S */
+ reg_count += dev->cints;
+ /* NIX_LF_CINTX_ENA_W1C */
+ reg_count += dev->cints;
+
+ return reg_count;
+}
+
+int
+otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data)
+{
+ uintptr_t nix_lf_base = dev->base;
+ bool dump_stdout;
+ uint64_t reg;
+ uint32_t i;
+
+ dump_stdout = data ? 0 : 1;
+
+ for (i = 0; i < RTE_DIM(nix_lf_reg); i++) {
+ reg = otx2_read64(nix_lf_base + nix_lf_reg[i].offset);
+ if (dump_stdout && reg)
+ nix_dump("%32s = 0x%" PRIx64,
+ nix_lf_reg[i].name, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_TX_STATX */
+ for (i = 0; i < dev->lf_tx_stats; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_TX_STATX(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_TX_STATX", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_RX_STATX */
+ for (i = 0; i < dev->lf_rx_stats; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_RX_STATX(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_RX_STATX", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_CNT*/
+ for (i = 0; i < dev->qints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_CNT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_QINTX_CNT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_INT */
+ for (i = 0; i < dev->qints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_INT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_QINTX_INT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_ENA_W1S */
+ for (i = 0; i < dev->qints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_QINTX_ENA_W1S", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_ENA_W1C */
+ for (i = 0; i < dev->qints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_QINTX_ENA_W1C", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_CNT */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_CNT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_CNT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_WAIT */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_WAIT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_INT */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_INT", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_INT_W1S */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_INT_W1S", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_ENA_W1S */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_ENA_W1S", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_ENA_W1C */
+ for (i = 0; i < dev->cints; i++) {
+ reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64,
+ "NIX_LF_CINTX_ENA_W1C", i, reg);
+ if (data)
+ *data++ = reg;
+ }
+ return 0;
+}
+
+int
+otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t *data = regs->data;
+
+ if (data == NULL) {
+ regs->length = nix_lf_get_reg_count(dev);
+ regs->width = 8;
+ return 0;
+ }
+
+ if (!regs->length ||
+ regs->length == (uint32_t)nix_lf_get_reg_count(dev)) {
+ otx2_nix_reg_dump(dev, data);
+ return 0;
+ }
+
+ return -ENOTSUP;
+}
static inline void
nix_lf_sq_dump(struct nix_sq_ctx_s *ctx)
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index fdebdef38..066aca7a5 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -24,6 +24,8 @@ nix_lf_err_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
+ /* Dump registers to std out */
+ otx2_nix_reg_dump(dev, NULL);
otx2_nix_queues_ctx_dump(eth_dev);
}
@@ -78,6 +80,8 @@ nix_lf_ras_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_RAS);
+ /* Dump registers to std out */
+ otx2_nix_reg_dump(dev, NULL);
otx2_nix_queues_ctx_dump(eth_dev);
}
@@ -237,6 +241,8 @@ nix_lf_q_irq(void *param)
/* Clear interrupt */
otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
+ /* Dump registers to std out */
+ otx2_nix_reg_dump(dev, NULL);
otx2_nix_queues_ctx_dump(eth_dev);
}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 11/58] net/octeontx2: add link stats operations
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (9 preceding siblings ...)
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 10/58] net/octeontx2: add register dump support jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 12/58] net/octeontx2: add basic stats operation jerinj
` (47 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add link stats related operations and mark the respective
items in the documentation.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
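As a usage note, the .link_update callback and the LSC event path added below are what the generic ethdev link queries ride on. A minimal application-side sketch (illustrative only, not part of this patch):

#include <stdio.h>

#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	/* Invokes the PMD's .link_update (otx2_nix_link_update here) */
	rte_eth_link_get_nowait(port_id, &link);

	if (link.link_status)
		printf("Port %u: up, %u Mbps, %s\n", port_id,
		       link.link_speed,
		       link.link_duplex == ETH_LINK_FULL_DUPLEX ?
		       "full-duplex" : "half-duplex");
	else
		printf("Port %u: down\n", port_id);
}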
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 8 ++
drivers/net/octeontx2/otx2_ethdev.h | 8 ++
drivers/net/octeontx2/otx2_link.c | 108 +++++++++++++++++++++
9 files changed, 133 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_link.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 7d53bf0e7..828351409 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -8,6 +8,8 @@ Speed capabilities = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
+Link status = Y
+Link status event = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index e0cc7b22d..719692dc6 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -8,6 +8,8 @@ Speed capabilities = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
+Link status = Y
+Link status event = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 6dfdf88c6..4d5667583 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -7,6 +7,8 @@
Speed capabilities = Y
Lock-free Tx queue = Y
Multiprocess aware = Y
+Link status = Y
+Link status event = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 75d5746e8..a163f9128 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- SR-IOV VF
- Lock-free Tx queue
+- Link state information
- Debug utilities - Context dump and error interrupt support
Prerequisites
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 840339aab..f6db918af 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -32,6 +32,7 @@ LIBABIVER := 1
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
+ otx2_link.c \
otx2_ethdev.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index a06e1192c..d693386b9 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -4,6 +4,7 @@
sources = files(
'otx2_mac.c',
+ 'otx2_link.c',
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 48d5a15d6..cb4f6ebb9 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -39,6 +39,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
return NIX_TX_OFFLOAD_CAPA;
}
+static const struct otx2_dev_ops otx2_dev_ops = {
+ .link_status_update = otx2_eth_dev_link_status_update,
+};
+
static int
nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
{
@@ -229,6 +233,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
+ .link_update = otx2_nix_link_update,
.get_reg = otx2_nix_dev_get_reg,
};
@@ -324,6 +329,9 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
goto error;
}
}
+ /* Device generic callbacks */
+ dev->ops = &otx2_dev_ops;
+ dev->eth_dev = eth_dev;
/* Grab the NPA LF if required */
rc = otx2_npa_lf_init(pci_dev, dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7313689b0..d8490337d 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -136,6 +136,7 @@ struct otx2_eth_dev {
uint8_t max_mac_entries;
uint8_t lf_tx_stats;
uint8_t lf_rx_stats;
+ uint16_t flags;
uint16_t cints;
uint16_t qints;
uint8_t configured;
@@ -156,6 +157,7 @@ struct otx2_eth_dev {
struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
struct otx2_rss_info rss_info;
struct otx2_npc_flow_info npc_flow;
+ struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
static inline struct otx2_eth_dev *
@@ -168,6 +170,12 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+/* Link */
+void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
+int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
+void otx2_eth_dev_link_status_update(struct otx2_dev *dev,
+ struct cgx_link_user_info *link);
+
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
new file mode 100644
index 000000000..228a0cd8e
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_ethdev_pci.h>
+
+#include "otx2_ethdev.h"
+
+void
+otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set)
+{
+ if (set)
+ dev->flags |= OTX2_LINK_CFG_IN_PROGRESS_F;
+ else
+ dev->flags &= ~OTX2_LINK_CFG_IN_PROGRESS_F;
+
+ rte_wmb();
+}
+
+static inline int
+nix_wait_for_link_cfg(struct otx2_eth_dev *dev)
+{
+ uint16_t wait = 1000;
+
+ do {
+ rte_rmb();
+ if (!(dev->flags & OTX2_LINK_CFG_IN_PROGRESS_F))
+ break;
+ wait--;
+ rte_delay_ms(1);
+ } while (wait);
+
+ return wait ? 0 : -1;
+}
+
+static void
+nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
+{
+ if (link && link->link_status)
+ otx2_info("Port %d: Link Up - speed %u Mbps - %s",
+ (int)(eth_dev->data->port_id),
+ (uint32_t)link->link_speed,
+ link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+ "full-duplex" : "half-duplex");
+ else
+ otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
+}
+
+void
+otx2_eth_dev_link_status_update(struct otx2_dev *dev,
+ struct cgx_link_user_info *link)
+{
+ struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
+ struct rte_eth_dev *eth_dev = otx2_dev->eth_dev;
+ struct rte_eth_link eth_link;
+
+ if (!link || !dev || !eth_dev->data->dev_conf.intr_conf.lsc)
+ return;
+
+ if (nix_wait_for_link_cfg(otx2_dev)) {
+ otx2_err("Timeout waiting for link_cfg to complete");
+ return;
+ }
+
+ eth_link.link_status = link->link_up;
+ eth_link.link_speed = link->speed;
+ eth_link.link_autoneg = ETH_LINK_AUTONEG;
+ eth_link.link_duplex = link->full_duplex;
+
+ /* Print link info */
+	nix_link_status_print(eth_dev, &eth_link);
+
+ /* Update link info */
+	rte_eth_linkstatus_set(eth_dev, &eth_link);
+
+ /* Set the flag and execute application callbacks */
+ _rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+}
+
+int
+otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_link_info_msg *rsp;
+ struct rte_eth_link link;
+ int rc;
+
+ RTE_SET_USED(wait_to_complete);
+
+ if (otx2_dev_is_lbk(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_get_linkinfo(mbox);
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ link.link_status = rsp->link_info.link_up;
+ link.link_speed = rsp->link_info.speed;
+ link.link_autoneg = ETH_LINK_AUTONEG;
+
+	/* Assign unconditionally; half duplex would otherwise leave it unset */
+	link.link_duplex = rsp->link_info.full_duplex;
+
+ return rte_eth_linkstatus_set(eth_dev, &link);
+}
--
2.21.0
* [dpdk-dev] [PATCH v3 12/58] net/octeontx2: add basic stats operation
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (10 preceding siblings ...)
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 11/58] net/octeontx2: add link stats operations jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 13/58] net/octeontx2: add extended stats operations jerinj
` (46 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Kiran Kumar K <kirankumark@marvell.com>
Add basic stats operation and update the feature list.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
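As a usage note, otx2_nix_queue_stats_mapping below only records the queue-to-counter mapping in dev->rxmap/txmap (bit 31 marks a valid entry); the mapped per-queue counters are then read in otx2_nix_dev_stats_get. A minimal application-side sketch (illustrative only, not part of this patch):

#include <stdio.h>
#include <inttypes.h>

#include <rte_ethdev.h>

static int
show_basic_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	/* Map Rx queue 0 onto per-queue stats counter 0 */
	if (rte_eth_dev_set_rx_queue_stats_mapping(port_id, 0, 0) < 0)
		return -1;

	/* Invokes the PMD's .stats_get (otx2_nix_dev_stats_get here) */
	if (rte_eth_stats_get(port_id, &stats) < 0)
		return -1;

	printf("ipackets=%" PRIu64 " opackets=%" PRIu64
	       " q0_ipackets=%" PRIu64 "\n",
	       stats.ipackets, stats.opackets, stats.q_ipackets[0]);
	return 0;
}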
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 3 +
drivers/net/octeontx2/otx2_ethdev.h | 17 +++
drivers/net/octeontx2/otx2_stats.c | 117 +++++++++++++++++++++
9 files changed, 146 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_stats.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 828351409..557107016 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -10,6 +10,8 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Basic stats = Y
+Stats per queue = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 719692dc6..3a2b78e06 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -10,6 +10,8 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Basic stats = Y
+Stats per queue = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 4d5667583..499f66c5c 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -9,6 +9,8 @@ Lock-free Tx queue = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Basic stats = Y
+Stats per queue = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index a163f9128..2944bbb99 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- SR-IOV VF
- Lock-free Tx queue
+- Port hardware statistics
- Link state information
- Debug utilities - Context dump and error interrupt support
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index f6db918af..5cb722482 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -33,6 +33,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
otx2_link.c \
+ otx2_stats.c \
otx2_ethdev.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index d693386b9..1c57b1bb4 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
sources = files(
'otx2_mac.c',
'otx2_link.c',
+ 'otx2_stats.c',
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index cb4f6ebb9..5787029d9 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -234,7 +234,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
.link_update = otx2_nix_link_update,
+ .stats_get = otx2_nix_dev_stats_get,
+ .stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
+ .queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index d8490337d..1cd9893a6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -77,6 +77,12 @@
#define NIX_TX_NB_SEG_MAX 9
#endif
+#define CQ_OP_STAT_OP_ERR 63
+#define CQ_OP_STAT_CQ_ERR 46
+
+#define OP_ERR BIT_ULL(CQ_OP_STAT_OP_ERR)
+#define CQ_ERR BIT_ULL(CQ_OP_STAT_CQ_ERR)
+
#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
ETH_RSS_TCP | ETH_RSS_SCTP | \
ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD)
@@ -156,6 +162,8 @@ struct otx2_eth_dev {
uint64_t tx_offload_capa;
struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
struct otx2_rss_info rss_info;
+ uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+ uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
struct otx2_npc_flow_info npc_flow;
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
@@ -189,6 +197,15 @@ int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
+/* Stats */
+int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_stats *stats);
+void otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev);
+
+int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev,
+ uint16_t queue_id, uint8_t stat_idx,
+ uint8_t is_rx);
+
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c
new file mode 100644
index 000000000..cba1228d3
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_stats.c
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <inttypes.h>
+
+#include "otx2_ethdev.h"
+
+int
+otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_stats *stats)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t reg, val;
+ uint32_t qidx, i;
+ int64_t *addr;
+
+ stats->opackets = otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_UCAST));
+ stats->opackets += otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_MCAST));
+ stats->opackets += otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_BCAST));
+ stats->oerrors = otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_DROP));
+ stats->obytes = otx2_read64(dev->base +
+ NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_OCTS));
+
+ stats->ipackets = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_UCAST));
+ stats->ipackets += otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_MCAST));
+ stats->ipackets += otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_BCAST));
+ stats->imissed = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_DROP));
+ stats->ibytes = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_OCTS));
+ stats->ierrors = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_ERR));
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+ if (dev->txmap[i] & (1U << 31)) {
+ qidx = dev->txmap[i] & 0xFFFF;
+ reg = (((uint64_t)qidx) << 32);
+
+ addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_opackets[i] = val;
+
+ addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_obytes[i] = val;
+
+ addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_DROP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_errors[i] = val;
+ }
+ }
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+ if (dev->rxmap[i] & (1U << 31)) {
+ qidx = dev->rxmap[i] & 0xFFFF;
+ reg = (((uint64_t)qidx) << 32);
+
+ addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_ipackets[i] = val;
+
+ addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_OCTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_ibytes[i] = val;
+
+ addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_DROP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, addr);
+ if (val & OP_ERR)
+ val = 0;
+ stats->q_errors[i] += val;
+ }
+ }
+
+ return 0;
+}
+
+void
+otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+
+ otx2_mbox_alloc_msg_nix_stats_rst(mbox);
+ otx2_mbox_process(mbox);
+}
+
+int
+otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ uint8_t stat_idx, uint8_t is_rx)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ if (is_rx)
+ dev->rxmap[stat_idx] = ((1U << 31) | queue_id);
+ else
+ dev->txmap[stat_idx] = ((1U << 31) | queue_id);
+
+ return 0;
+}
--
2.21.0
* [dpdk-dev] [PATCH v3 13/58] net/octeontx2: add extended stats operations
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (11 preceding siblings ...)
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 12/58] net/octeontx2: add basic stats operation jerinj
@ 2019-07-03 8:41 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 14/58] net/octeontx2: add promiscuous and allmulticast mode jerinj
` (45 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:41 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Kiran Kumar K <kirankumark@marvell.com>
Add extended stats operations and update the feature list.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
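The xstats callbacks below follow the standard ethdev sizing contract: a call with a NULL or too-small array returns the required count instead of filling values. A sketch of the usual two-call pattern from the application side (illustrative only, not part of this patch):

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

#include <rte_ethdev.h>

static void
show_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names = NULL;
	struct rte_eth_xstat *vals = NULL;
	int i, n;

	/* First call with a zero-sized array returns the required count */
	n = rte_eth_xstats_get(port_id, NULL, 0);
	if (n <= 0)
		return;

	names = calloc(n, sizeof(*names));
	vals = calloc(n, sizeof(*vals));
	if (names == NULL || vals == NULL)
		goto out;

	/* Second calls fill in the names and the current values */
	if (rte_eth_xstats_get_names(port_id, names, n) != n ||
	    rte_eth_xstats_get(port_id, vals, n) != n)
		goto out;

	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n", names[i].name, vals[i].value);
out:
	free(names);
	free(vals);
}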
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 5 +
drivers/net/octeontx2/otx2_ethdev.h | 13 +
drivers/net/octeontx2/otx2_stats.c | 270 +++++++++++++++++++++
6 files changed, 291 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 557107016..8d7c3588c 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Basic stats = Y
Stats per queue = Y
+Extended stats = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 3a2b78e06..a6e6876fa 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -11,6 +11,7 @@ Multiprocess aware = Y
Link status = Y
Link status event = Y
Basic stats = Y
+Extended stats = Y
Stats per queue = Y
Registers dump = Y
Linux VFIO = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 499f66c5c..6ec83e823 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -10,6 +10,7 @@ Multiprocess aware = Y
Link status = Y
Link status event = Y
Basic stats = Y
+Extended stats = Y
Stats per queue = Y
Registers dump = Y
Linux VFIO = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 5787029d9..937ba6399 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -238,6 +238,11 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
.queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
+ .xstats_get = otx2_nix_xstats_get,
+ .xstats_get_names = otx2_nix_xstats_get_names,
+ .xstats_reset = otx2_nix_xstats_reset,
+ .xstats_get_by_id = otx2_nix_xstats_get_by_id,
+ .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 1cd9893a6..7d53a6643 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -205,6 +205,19 @@ void otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev);
int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev,
uint16_t queue_id, uint8_t stat_idx,
uint8_t is_rx);
+int otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat *xstats, unsigned int n);
+int otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int limit);
+void otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev);
+
+int otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ uint64_t *values, unsigned int n);
+int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids, unsigned int limit);
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c
index cba1228d3..5eca4184f 100644
--- a/drivers/net/octeontx2/otx2_stats.c
+++ b/drivers/net/octeontx2/otx2_stats.c
@@ -6,6 +6,45 @@
#include "otx2_ethdev.h"
+struct otx2_nix_xstats_name {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ uint32_t offset;
+};
+
+static const struct otx2_nix_xstats_name nix_tx_xstats[] = {
+ {"tx_ucast", NIX_STAT_LF_TX_TX_UCAST},
+ {"tx_bcast", NIX_STAT_LF_TX_TX_BCAST},
+ {"tx_mcast", NIX_STAT_LF_TX_TX_MCAST},
+ {"tx_drop", NIX_STAT_LF_TX_TX_DROP},
+ {"tx_octs", NIX_STAT_LF_TX_TX_OCTS},
+};
+
+static const struct otx2_nix_xstats_name nix_rx_xstats[] = {
+ {"rx_octs", NIX_STAT_LF_RX_RX_OCTS},
+ {"rx_ucast", NIX_STAT_LF_RX_RX_UCAST},
+ {"rx_bcast", NIX_STAT_LF_RX_RX_BCAST},
+ {"rx_mcast", NIX_STAT_LF_RX_RX_MCAST},
+ {"rx_drop", NIX_STAT_LF_RX_RX_DROP},
+ {"rx_drop_octs", NIX_STAT_LF_RX_RX_DROP_OCTS},
+ {"rx_fcs", NIX_STAT_LF_RX_RX_FCS},
+ {"rx_err", NIX_STAT_LF_RX_RX_ERR},
+ {"rx_drp_bcast", NIX_STAT_LF_RX_RX_DRP_BCAST},
+ {"rx_drp_mcast", NIX_STAT_LF_RX_RX_DRP_MCAST},
+ {"rx_drp_l3bcast", NIX_STAT_LF_RX_RX_DRP_L3BCAST},
+ {"rx_drp_l3mcast", NIX_STAT_LF_RX_RX_DRP_L3MCAST},
+};
+
+static const struct otx2_nix_xstats_name nix_q_xstats[] = {
+ {"rq_op_re_pkts", NIX_LF_RQ_OP_RE_PKTS},
+};
+
+#define OTX2_NIX_NUM_RX_XSTATS RTE_DIM(nix_rx_xstats)
+#define OTX2_NIX_NUM_TX_XSTATS RTE_DIM(nix_tx_xstats)
+#define OTX2_NIX_NUM_QUEUE_XSTATS RTE_DIM(nix_q_xstats)
+
+#define OTX2_NIX_NUM_XSTATS_REG (OTX2_NIX_NUM_RX_XSTATS + \
+ OTX2_NIX_NUM_TX_XSTATS + OTX2_NIX_NUM_QUEUE_XSTATS)
+
int
otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
struct rte_eth_stats *stats)
@@ -115,3 +154,234 @@ otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id,
return 0;
}
+
+int
+otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ unsigned int i, count = 0;
+ uint64_t reg, val;
+
+ if (n < OTX2_NIX_NUM_XSTATS_REG)
+ return OTX2_NIX_NUM_XSTATS_REG;
+
+ if (xstats == NULL)
+ return 0;
+
+ for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
+ xstats[count].value = otx2_read64(dev->base +
+ NIX_LF_TX_STATX(nix_tx_xstats[i].offset));
+ xstats[count].id = count;
+ count++;
+ }
+
+ for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
+ xstats[count].value = otx2_read64(dev->base +
+ NIX_LF_RX_STATX(nix_rx_xstats[i].offset));
+ xstats[count].id = count;
+ count++;
+ }
+
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ reg = (((uint64_t)i) << 32);
+ val = otx2_atomic64_add_nosync(reg, (int64_t *)(dev->base +
+ nix_q_xstats[0].offset));
+ if (val & OP_ERR)
+ val = 0;
+		xstats[count].value = i ? xstats[count].value + val : val;
+ }
+ xstats[count].id = count;
+ count++;
+
+ return count;
+}
+
+int
+otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int limit)
+{
+ unsigned int i, count = 0;
+
+ RTE_SET_USED(eth_dev);
+
+ if (limit < OTX2_NIX_NUM_XSTATS_REG && xstats_names != NULL)
+ return -ENOMEM;
+
+ if (xstats_names) {
+ for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", nix_tx_xstats[i].name);
+ count++;
+ }
+
+ for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", nix_rx_xstats[i].name);
+ count++;
+ }
+
+ for (i = 0; i < OTX2_NIX_NUM_QUEUE_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", nix_q_xstats[i].name);
+ count++;
+ }
+ }
+
+ return OTX2_NIX_NUM_XSTATS_REG;
+}
+
+int
+otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids, unsigned int limit)
+{
+ struct rte_eth_xstat_name xstats_names_copy[OTX2_NIX_NUM_XSTATS_REG];
+ uint16_t i;
+
+ if (limit < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
+ return OTX2_NIX_NUM_XSTATS_REG;
+
+ if (limit > OTX2_NIX_NUM_XSTATS_REG)
+ return -EINVAL;
+
+ if (xstats_names == NULL)
+ return -ENOMEM;
+
+ otx2_nix_xstats_get_names(eth_dev, xstats_names_copy, limit);
+
+ for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
+ if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
+ otx2_err("Invalid id value");
+ return -EINVAL;
+ }
+ strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name,
+ sizeof(xstats_names[i].name));
+ }
+
+ return limit;
+}
+
+int
+otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids,
+ uint64_t *values, unsigned int n)
+{
+ struct rte_eth_xstat xstats[OTX2_NIX_NUM_XSTATS_REG];
+ uint16_t i;
+
+ if (n < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
+ return OTX2_NIX_NUM_XSTATS_REG;
+
+ if (n > OTX2_NIX_NUM_XSTATS_REG)
+ return -EINVAL;
+
+ if (values == NULL)
+ return -ENOMEM;
+
+ otx2_nix_xstats_get(eth_dev, xstats, n);
+
+ for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
+ if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
+ otx2_err("Invalid id value");
+ return -EINVAL;
+ }
+ values[i] = xstats[ids[i]].value;
+ }
+
+ return n;
+}
+
+static void
+nix_queue_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_rsp *rsp;
+ struct nix_aq_enq_req *aq;
+ uint32_t i;
+ int rc;
+
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = i;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to read rq context");
+ return;
+ }
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = i;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+ otx2_mbox_memcpy(&aq->rq, &rsp->rq, sizeof(rsp->rq));
+ otx2_mbox_memset(&aq->rq_mask, 0, sizeof(aq->rq_mask));
+ aq->rq.octs = 0;
+ aq->rq.pkts = 0;
+ aq->rq.drop_octs = 0;
+ aq->rq.drop_pkts = 0;
+ aq->rq.re_pkts = 0;
+
+ aq->rq_mask.octs = ~(aq->rq_mask.octs);
+ aq->rq_mask.pkts = ~(aq->rq_mask.pkts);
+ aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs);
+ aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts);
+ aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts);
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to write rq context");
+ return;
+ }
+ }
+
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = i;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to read sq context");
+ return;
+ }
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = i;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+ otx2_mbox_memcpy(&aq->sq, &rsp->sq, sizeof(rsp->sq));
+ otx2_mbox_memset(&aq->sq_mask, 0, sizeof(aq->sq_mask));
+ aq->sq.octs = 0;
+ aq->sq.pkts = 0;
+ aq->sq.drop_octs = 0;
+ aq->sq.drop_pkts = 0;
+
+ aq->sq_mask.octs = ~(aq->sq_mask.octs);
+ aq->sq_mask.pkts = ~(aq->sq_mask.pkts);
+ aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs);
+ aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts);
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to write sq context");
+ return;
+ }
+ }
+}
+
+void
+otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+
+ otx2_mbox_alloc_msg_nix_stats_rst(mbox);
+ otx2_mbox_process(mbox);
+
+ /* Reset queue stats */
+ nix_queue_stats_reset(eth_dev);
+}
--
2.21.0
* [dpdk-dev] [PATCH v3 14/58] net/octeontx2: add promiscuous and allmulticast mode
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (12 preceding siblings ...)
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 13/58] net/octeontx2: add extended stats operations jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 15/58] net/octeontx2: add unicast MAC filter jerinj
` (44 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru, Sunil Kumar Kori
From: Vamsi Attunuru <vattunuru@marvell.com>
Add promiscuous and allmulticast modes for PF devices and
update the respective feature list.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
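As a usage note, both modes are simple toggles behind the generic ethdev API; the PMD keeps NIX_RX_MODE_UCAST set so unicast traffic keeps flowing, and disabling allmulticast falls back to promiscuous mode when that is still enabled. A minimal application-side sketch (illustrative only, not part of this patch; in this DPDK release the enable/disable calls return void):

#include <rte_ethdev.h>

static void
set_rx_modes(uint16_t port_id)
{
	/* Ends up in otx2_nix_promisc_enable (NIX rx_mode + CGX mailbox) */
	rte_eth_promiscuous_enable(port_id);

	/* Ends up in otx2_nix_allmulticast_enable */
	rte_eth_allmulticast_enable(port_id);

	/* The enabled state is tracked in eth_dev->data and can be read back */
	if (rte_eth_promiscuous_get(port_id) == 1)
		rte_eth_promiscuous_disable(port_id);
}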
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 4 ++
drivers/net/octeontx2/otx2_ethdev.h | 6 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 82 ++++++++++++++++++++++
6 files changed, 97 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 8d7c3588c..9f682609d 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -10,6 +10,8 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Promiscuous mode = Y
+Allmulticast mode = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index a6e6876fa..764e95ce6 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -10,6 +10,8 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Promiscuous mode = Y
+Allmulticast mode = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 2944bbb99..9ef7be08f 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -16,6 +16,7 @@ Features
Features of the OCTEON TX2 Ethdev PMD are:
+- Promiscuous mode
- SR-IOV VF
- Lock-free Tx queue
- Port hardware statistics
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 937ba6399..826ce7f4e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -237,6 +237,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
+ .promiscuous_enable = otx2_nix_promisc_enable,
+ .promiscuous_disable = otx2_nix_promisc_disable,
+ .allmulticast_enable = otx2_nix_allmulticast_enable,
+ .allmulticast_disable = otx2_nix_allmulticast_disable,
.queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
.xstats_get = otx2_nix_xstats_get,
.xstats_get_names = otx2_nix_xstats_get_names,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7d53a6643..814fd6ec3 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -178,6 +178,12 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
+void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
+void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
+void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
+void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
+
/* Link */
void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index df7e909d2..301a597f8 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -4,6 +4,88 @@
#include "otx2_ethdev.h"
+static void
+nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return;
+
+ if (en)
+ otx2_mbox_alloc_msg_cgx_promisc_enable(mbox);
+ else
+ otx2_mbox_alloc_msg_cgx_promisc_disable(mbox);
+
+ otx2_mbox_process(mbox);
+}
+
+void
+otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_rx_mode *req;
+
+ if (otx2_dev_is_vf(dev))
+ return;
+
+ req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
+
+ if (en)
+ req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
+
+ otx2_mbox_process(mbox);
+ eth_dev->data->promiscuous = en;
+}
+
+void
+otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev)
+{
+ otx2_nix_promisc_config(eth_dev, 1);
+ nix_cgx_promisc_config(eth_dev, 1);
+}
+
+void
+otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev)
+{
+ otx2_nix_promisc_config(eth_dev, 0);
+ nix_cgx_promisc_config(eth_dev, 0);
+}
+
+static void
+nix_allmulticast_config(struct rte_eth_dev *eth_dev, int en)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_rx_mode *req;
+
+ if (otx2_dev_is_vf(dev))
+ return;
+
+ req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
+
+ if (en)
+ req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_ALLMULTI;
+ else if (eth_dev->data->promiscuous)
+ req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
+
+ otx2_mbox_process(mbox);
+}
+
+void
+otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev)
+{
+ nix_allmulticast_config(eth_dev, 1);
+}
+
+void
+otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev)
+{
+ nix_allmulticast_config(eth_dev, 0);
+}
+
void
otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
{
--
2.21.0
* [dpdk-dev] [PATCH v3 15/58] net/octeontx2: add unicast MAC filter
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (13 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 14/58] net/octeontx2: add promiscuous and allmulticast mode jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 16/58] net/octeontx2: add RSS support jerinj
` (43 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Sunil Kumar Kori, Vamsi Attunuru
From: Sunil Kumar Kori <skori@marvell.com>
Add unicast MAC filter support for PF devices and
update the respective feature list.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
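A minimal application-side sketch of the two entry points wired up below (illustrative only, not part of this patch; the MAC addresses are made-up locally administered examples):

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
setup_mac_filters(uint16_t port_id)
{
	struct rte_ether_addr base = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
	};
	struct rte_ether_addr extra = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 }
	};

	/* Calls otx2_nix_mac_addr_set: NIX default MAC + CGX DMAC entry */
	if (rte_eth_dev_default_mac_addr_set(port_id, &base) < 0)
		return -1;

	/* Calls otx2_nix_mac_addr_add; the pool argument is unused here */
	return rte_eth_dev_mac_addr_add(port_id, &extra, 0);
}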
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 3 +
drivers/net/octeontx2/otx2_ethdev.h | 6 ++
drivers/net/octeontx2/otx2_mac.c | 77 ++++++++++++++++++++++
6 files changed, 89 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 9f682609d..566496113 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Promiscuous mode = Y
Allmulticast mode = Y
+Unicast MAC filter = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 764e95ce6..195a48940 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Promiscuous mode = Y
Allmulticast mode = Y
+Unicast MAC filter = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 9ef7be08f..8385c9c18 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -19,6 +19,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Promiscuous mode
- SR-IOV VF
- Lock-free Tx queue
+- MAC filtering
- Port hardware statistics
- Link state information
- Debug utilities - Context dump and error interrupt support
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 826ce7f4e..a72c901f4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -237,6 +237,9 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
+ .mac_addr_add = otx2_nix_mac_addr_add,
+ .mac_addr_remove = otx2_nix_mac_addr_del,
+ .mac_addr_set = otx2_nix_mac_addr_set,
.promiscuous_enable = otx2_nix_promisc_enable,
.promiscuous_disable = otx2_nix_promisc_disable,
.allmulticast_enable = otx2_nix_allmulticast_enable,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 814fd6ec3..56517845b 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -232,7 +232,13 @@ int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
/* Mac address handling */
+int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
+ struct rte_ether_addr *addr);
int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
+int otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev,
+ struct rte_ether_addr *addr,
+ uint32_t index, uint32_t pool);
+void otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index);
int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
/* Devargs */
diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c
index 89b0ca6b0..b4bcc61f8 100644
--- a/drivers/net/octeontx2/otx2_mac.c
+++ b/drivers/net/octeontx2/otx2_mac.c
@@ -49,6 +49,83 @@ otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev)
return rsp->max_dmac_filters;
}
+int
+otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr,
+ uint32_t index __rte_unused, uint32_t pool __rte_unused)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_mac_addr_add_req *req;
+ struct cgx_mac_addr_add_rsp *rsp;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ if (otx2_dev_active_vfs(dev))
+ return -ENOTSUP;
+
+ req = otx2_mbox_alloc_msg_cgx_mac_addr_add(mbox);
+ otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to add mac address, rc=%d", rc);
+ goto done;
+ }
+
+ /* Enable promiscuous mode at NIX level */
+ otx2_nix_promisc_config(eth_dev, 1);
+
+done:
+ return rc;
+}
+
+void
+otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_mac_addr_del_req *req;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return;
+
+ req = otx2_mbox_alloc_msg_cgx_mac_addr_del(mbox);
+ req->index = index;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ otx2_err("Failed to delete mac address, rc=%d", rc);
+}
+
+int
+otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_set_mac_addr *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_nix_set_mac_addr(mbox);
+ otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to set mac address, rc=%d", rc);
+ goto done;
+ }
+
+ otx2_mbox_memcpy(dev->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ /* Install the same entry into CGX DMAC filter table too. */
+ otx2_cgx_mac_addr_set(eth_dev, addr);
+
+done:
+ return rc;
+}
+
int
otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr)
{
--
2.21.0
* [dpdk-dev] [PATCH v3 16/58] net/octeontx2: add RSS support
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (14 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 15/58] net/octeontx2: add unicast MAC filter jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 17/58] net/octeontx2: add Rx queue setup and release jerinj
` (42 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add RSS support and expose RSS related functions
used to implement the RSS action in the rte_flow driver.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
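As a usage note, otx2_rss_ethdev_to_nix() below translates the generic ETH_RSS_* bits into NIX flow-key types, so the standard ethdev RSS calls map directly onto the mailbox paths added here. A minimal application-side sketch (illustrative only, not part of this patch; passing a NULL key keeps the PMD's default 48-byte key):

#include <rte_ethdev.h>

static int
tune_rss(uint16_t port_id)
{
	struct rte_eth_rss_conf conf = {
		.rss_key = NULL,	/* keep the PMD's default key */
		.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
	};

	/* Calls otx2_nix_rss_hash_update -> otx2_rss_set_hf mailbox path */
	return rte_eth_dev_rss_hash_update(port_id, &conf);
}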
doc/guides/nics/features/octeontx2.ini | 4 +
doc/guides/nics/features/octeontx2_vec.ini | 4 +
doc/guides/nics/features/octeontx2_vf.ini | 4 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 11 +
drivers/net/octeontx2/otx2_ethdev.h | 33 ++
drivers/net/octeontx2/otx2_rss.c | 372 +++++++++++++++++++++
9 files changed, 431 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_rss.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 566496113..f2d47d57b 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -13,6 +13,10 @@ Link status event = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Inner RSS = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 195a48940..a67353d2a 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -13,6 +13,10 @@ Link status event = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Inner RSS = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 6ec83e823..97d66ddde 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -9,6 +9,10 @@ Lock-free Tx queue = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Inner RSS = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 8385c9c18..3bee3f3ca 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -19,6 +19,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Promiscuous mode
- SR-IOV VF
- Lock-free Tx queue
+- Receiver Side Scaling (RSS)
- MAC filtering
- Port hardware statistics
- Link state information
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 5cb722482..24931865d 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -31,6 +31,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_rss.c \
otx2_mac.c \
otx2_link.c \
otx2_stats.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 1c57b1bb4..8681a2642 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files(
+ 'otx2_rss.c',
'otx2_mac.c',
'otx2_link.c',
'otx2_stats.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index a72c901f4..5289c79e8 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -195,6 +195,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Configure RSS */
+ rc = otx2_nix_rss_config(eth_dev);
+ if (rc) {
+ otx2_err("Failed to configure rss rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/* Register queue IRQs */
rc = oxt2_nix_register_queue_irqs(eth_dev);
if (rc) {
@@ -245,6 +252,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.allmulticast_enable = otx2_nix_allmulticast_enable,
.allmulticast_disable = otx2_nix_allmulticast_disable,
.queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
+ .reta_update = otx2_nix_dev_reta_update,
+ .reta_query = otx2_nix_dev_reta_query,
+ .rss_hash_update = otx2_nix_rss_hash_update,
+ .rss_hash_conf_get = otx2_nix_rss_hash_conf_get,
.xstats_get = otx2_nix_xstats_get,
.xstats_get_names = otx2_nix_xstats_get_names,
.xstats_reset = otx2_nix_xstats_reset,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 56517845b..19a4e45b0 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -59,6 +59,7 @@
#define NIX_MAX_SQB 512
#define NIX_MIN_SQB 32
+#define NIX_RSS_RETA_SIZE_MAX 256
/* Group 0 will be used for RSS, 1 -7 will be used for rte_flow RSS action*/
#define NIX_RSS_GRPS 8
#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
@@ -112,14 +113,22 @@
DEV_RX_OFFLOAD_QINQ_STRIP | \
DEV_RX_OFFLOAD_TIMESTAMP)
+#define NIX_DEFAULT_RSS_CTX_GROUP 0
+#define NIX_DEFAULT_RSS_MCAM_IDX -1
+
struct otx2_qint {
struct rte_eth_dev *eth_dev;
uint8_t qintx;
};
struct otx2_rss_info {
+ uint64_t nix_rss;
+ uint32_t flowkey_cfg;
uint16_t rss_size;
uint8_t rss_grps;
+ uint8_t alg_idx; /* Selected algo index */
+ uint16_t ind_tbl[NIX_RSS_RETA_SIZE_MAX];
+ uint8_t key[NIX_HASH_KEY_SIZE];
};
struct otx2_npc_flow_info {
@@ -225,6 +234,30 @@ int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
struct rte_eth_xstat_name *xstats_names,
const uint64_t *ids, unsigned int limit);
+/* RSS */
+void otx2_nix_rss_set_key(struct otx2_eth_dev *dev,
+ uint8_t *key, uint32_t key_len);
+uint32_t otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev,
+ uint64_t ethdev_rss, uint8_t rss_level);
+int otx2_rss_set_hf(struct otx2_eth_dev *dev,
+ uint32_t flowkey_cfg, uint8_t *alg_idx,
+ uint8_t group, int mcam_index);
+int otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev, uint8_t group,
+ uint16_t *ind_tbl);
+int otx2_nix_rss_config(struct rte_eth_dev *eth_dev);
+
+int otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+int otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+int otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf);
+
+int otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf);
+
/* CGX */
int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
new file mode 100644
index 000000000..5afa21490
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -0,0 +1,372 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+int
+otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
+ uint8_t group, uint16_t *ind_tbl)
+{
+ struct otx2_rss_info *rss = &dev->rss_info;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *req;
+ int rc, idx;
+
+ for (idx = 0; idx < rss->rss_size; idx++) {
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req)
+ return -ENOMEM;
+ }
+ req->rss.rq = ind_tbl[idx];
+ /* Fill AQ info */
+ req->qidx = (group * rss->rss_size) + idx;
+ req->ctype = NIX_AQ_CTYPE_RSS;
+ req->op = NIX_AQ_INSTOP_INIT;
+ }
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ return 0;
+}
+
+int
+otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_rss_info *rss = &dev->rss_info;
+ int rc, i, j;
+ int idx = 0;
+
+ rc = -EINVAL;
+ if (reta_size != dev->rss_info.rss_size) {
+		otx2_err("Size of the configured hash lookup table "
+			 "(%d) doesn't match the size the hardware supports "
+			 "(%d)", reta_size, dev->rss_info.rss_size);
+ goto fail;
+ }
+
+ /* Copy RETA table */
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+ if ((reta_conf[i].mask >> j) & 0x01)
+ rss->ind_tbl[idx] = reta_conf[i].reta[j];
+ idx++;
+ }
+ }
+
+ return otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
+
+fail:
+ return rc;
+}
+
+int
+otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_rss_info *rss = &dev->rss_info;
+ int rc, i, j;
+
+ rc = -EINVAL;
+
+ if (reta_size != dev->rss_info.rss_size) {
+		otx2_err("Size of the configured hash lookup table "
+			 "(%d) doesn't match the size the hardware supports "
+			 "(%d)", reta_size, dev->rss_info.rss_size);
+ goto fail;
+ }
+
+ /* Copy RETA table */
+ for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
+ for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+ if ((reta_conf[i].mask >> j) & 0x01)
+				reta_conf[i].reta[j] = rss->ind_tbl[i * RTE_RETA_GROUP_SIZE + j];
+ }
+
+ return 0;
+
+fail:
+ return rc;
+}
+
+void
+otx2_nix_rss_set_key(struct otx2_eth_dev *dev, uint8_t *key,
+ uint32_t key_len)
+{
+ const uint8_t default_key[NIX_HASH_KEY_SIZE] = {
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
+ };
+ struct otx2_rss_info *rss = &dev->rss_info;
+ uint64_t *keyptr;
+ uint64_t val;
+ uint32_t idx;
+
+	if (key == NULL || key_len == 0) {
+ keyptr = (uint64_t *)(uintptr_t)default_key;
+ key_len = NIX_HASH_KEY_SIZE;
+ memset(rss->key, 0, key_len);
+ } else {
+ memcpy(rss->key, key, key_len);
+ keyptr = (uint64_t *)rss->key;
+ }
+
+ for (idx = 0; idx < (key_len >> 3); idx++) {
+ val = rte_cpu_to_be_64(*keyptr);
+ otx2_write64(val, dev->base + NIX_LF_RX_SECRETX(idx));
+ keyptr++;
+ }
+}
+
+static void
+rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
+{
+ uint64_t *keyptr = (uint64_t *)key;
+ uint64_t val;
+ int idx;
+
+ for (idx = 0; idx < (NIX_HASH_KEY_SIZE >> 3); idx++) {
+ val = otx2_read64(dev->base + NIX_LF_RX_SECRETX(idx));
+ *keyptr = rte_be_to_cpu_64(val);
+ keyptr++;
+ }
+}
+
+#define RSS_IPV4_ENABLE ( \
+ ETH_RSS_IPV4 | \
+ ETH_RSS_FRAG_IPV4 | \
+ ETH_RSS_NONFRAG_IPV4_UDP | \
+ ETH_RSS_NONFRAG_IPV4_TCP | \
+ ETH_RSS_NONFRAG_IPV4_SCTP)
+
+#define RSS_IPV6_ENABLE ( \
+ ETH_RSS_IPV6 | \
+ ETH_RSS_FRAG_IPV6 | \
+ ETH_RSS_NONFRAG_IPV6_UDP | \
+ ETH_RSS_NONFRAG_IPV6_TCP | \
+ ETH_RSS_NONFRAG_IPV6_SCTP)
+
+#define RSS_IPV6_EX_ENABLE ( \
+ ETH_RSS_IPV6_EX | \
+ ETH_RSS_IPV6_TCP_EX | \
+ ETH_RSS_IPV6_UDP_EX)
+
+#define RSS_MAX_LEVELS 3
+
+#define RSS_IPV4_INDEX 0
+#define RSS_IPV6_INDEX 1
+#define RSS_TCP_INDEX 2
+#define RSS_UDP_INDEX 3
+#define RSS_SCTP_INDEX 4
+#define RSS_DMAC_INDEX 5
+
+uint32_t
+otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
+ uint8_t rss_level)
+{
+ uint32_t flow_key_type[RSS_MAX_LEVELS][6] = {
+ {
+ FLOW_KEY_TYPE_IPV4, FLOW_KEY_TYPE_IPV6,
+ FLOW_KEY_TYPE_TCP, FLOW_KEY_TYPE_UDP,
+ FLOW_KEY_TYPE_SCTP, FLOW_KEY_TYPE_ETH_DMAC
+ },
+ {
+ FLOW_KEY_TYPE_INNR_IPV4, FLOW_KEY_TYPE_INNR_IPV6,
+ FLOW_KEY_TYPE_INNR_TCP, FLOW_KEY_TYPE_INNR_UDP,
+ FLOW_KEY_TYPE_INNR_SCTP, FLOW_KEY_TYPE_INNR_ETH_DMAC
+ },
+ {
+ FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_INNR_IPV4,
+ FLOW_KEY_TYPE_IPV6 | FLOW_KEY_TYPE_INNR_IPV6,
+ FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_INNR_TCP,
+ FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_INNR_UDP,
+ FLOW_KEY_TYPE_SCTP | FLOW_KEY_TYPE_INNR_SCTP,
+ FLOW_KEY_TYPE_ETH_DMAC | FLOW_KEY_TYPE_INNR_ETH_DMAC
+ }
+ };
+ uint32_t flowkey_cfg = 0;
+
+ dev->rss_info.nix_rss = ethdev_rss;
+
+ if (ethdev_rss & RSS_IPV4_ENABLE)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_IPV4_INDEX];
+
+ if (ethdev_rss & RSS_IPV6_ENABLE)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
+
+ if (ethdev_rss & ETH_RSS_TCP)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
+
+ if (ethdev_rss & ETH_RSS_UDP)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
+
+ if (ethdev_rss & ETH_RSS_SCTP)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
+
+ if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+ flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
+
+ if (ethdev_rss & RSS_IPV6_EX_ENABLE)
+ flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
+
+ if (ethdev_rss & ETH_RSS_PORT)
+ flowkey_cfg |= FLOW_KEY_TYPE_PORT;
+
+ if (ethdev_rss & ETH_RSS_NVGRE)
+ flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
+
+ if (ethdev_rss & ETH_RSS_VXLAN)
+ flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
+
+ if (ethdev_rss & ETH_RSS_GENEVE)
+ flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
+
+ return flowkey_cfg;
+}
+
+int
+otx2_rss_set_hf(struct otx2_eth_dev *dev, uint32_t flowkey_cfg,
+ uint8_t *alg_idx, uint8_t group, int mcam_index)
+{
+ struct nix_rss_flowkey_cfg_rsp *rss_rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_rss_flowkey_cfg *cfg;
+ int rc;
+
+ rc = -EINVAL;
+
+ dev->rss_info.flowkey_cfg = flowkey_cfg;
+
+ cfg = otx2_mbox_alloc_msg_nix_rss_flowkey_cfg(mbox);
+
+ cfg->flowkey_cfg = flowkey_cfg;
+ cfg->mcam_index = mcam_index; /* -1 indicates default group */
+ cfg->group = group; /* 0 is default group */
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rss_rsp);
+ if (rc)
+ return rc;
+
+ if (alg_idx)
+ *alg_idx = rss_rsp->alg_idx;
+
+ return rc;
+}
+
+int
+otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint32_t flowkey_cfg;
+ uint8_t alg_idx;
+ int rc;
+
+ rc = -EINVAL;
+
+ if (rss_conf->rss_key && rss_conf->rss_key_len != NIX_HASH_KEY_SIZE) {
+ otx2_err("Hash key size mismatch %d vs %d",
+ rss_conf->rss_key_len, NIX_HASH_KEY_SIZE);
+ goto fail;
+ }
+
+ if (rss_conf->rss_key)
+ otx2_nix_rss_set_key(dev, rss_conf->rss_key,
+ (uint32_t)rss_conf->rss_key_len);
+
+ flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_conf->rss_hf, 0);
+
+ rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
+ NIX_DEFAULT_RSS_CTX_GROUP,
+ NIX_DEFAULT_RSS_MCAM_IDX);
+ if (rc) {
+ otx2_err("Failed to set RSS hash function rc=%d", rc);
+ return rc;
+ }
+
+ dev->rss_info.alg_idx = alg_idx;
+
+fail:
+ return rc;
+}
+
+int
+otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ if (rss_conf->rss_key)
+ rss_get_key(dev, rss_conf->rss_key);
+
+ rss_conf->rss_key_len = NIX_HASH_KEY_SIZE;
+ rss_conf->rss_hf = dev->rss_info.nix_rss;
+
+ return 0;
+}
+
+int
+otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint32_t idx, qcnt = eth_dev->data->nb_rx_queues;
+ uint32_t flowkey_cfg;
+ uint64_t rss_hf;
+ uint8_t alg_idx;
+ int rc;
+
+ /* Skip further configuration if selected mode is not RSS */
+ if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+ return 0;
+
+ /* Update default RSS key and cfg */
+ otx2_nix_rss_set_key(dev, NULL, 0);
+
+ /* Update default RSS RETA */
+ for (idx = 0; idx < dev->rss_info.rss_size; idx++)
+ dev->rss_info.ind_tbl[idx] = idx % qcnt;
+
+ /* Init RSS table context */
+ rc = otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
+ if (rc) {
+ otx2_err("Failed to init RSS table rc=%d", rc);
+ return rc;
+ }
+
+ rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
+ flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, 0);
+
+ rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
+ NIX_DEFAULT_RSS_CTX_GROUP,
+ NIX_DEFAULT_RSS_MCAM_IDX);
+ if (rc) {
+ otx2_err("Failed to set RSS hash function rc=%d", rc);
+ return rc;
+ }
+
+ dev->rss_info.alg_idx = alg_idx;
+
+ return 0;
+}
--
2.21.0
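The flowkey translation above is exercised through the generic ethdev RSS API. A minimal application-side sketch, assuming a port already configured with ETH_MQ_RX_RSS and the pre-19.11 ETH_RSS_* flag names used in this series (app_update_rss and port_id are illustrative):

#include <rte_ethdev.h>

/* A NULL rss_key keeps the PMD default key; this request ends up
 * in otx2_nix_rss_hash_update() and otx2_rss_set_hf() above.
 */
static int
app_update_rss(uint16_t port_id)
{
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = NULL,
		.rss_key_len = 0,
		.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
	};

	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}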
* [dpdk-dev] [PATCH v3 17/58] net/octeontx2: add Rx queue setup and release
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (15 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 16/58] net/octeontx2: add RSS support jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 18/58] net/octeontx2: add Tx " jerinj
` (41 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K, Thomas Monjalon
Cc: Vamsi Attunuru
From: Jerin Jacob <jerinj@marvell.com>
Add Rx queue setup and release.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/Makefile | 2 +-
drivers/net/octeontx2/otx2_ethdev.c | 310 +++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 51 ++++
drivers/net/octeontx2/otx2_ethdev_ops.c | 2 +
mk/rte.app.mk | 2 +-
8 files changed, 368 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index f2d47d57b..d0a2204d2 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -10,6 +10,7 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Runtime Rx queue setup = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index a67353d2a..64125a73f 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -10,6 +10,7 @@ SR-IOV = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Runtime Rx queue setup = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 97d66ddde..acda5e680 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -9,6 +9,7 @@ Lock-free Tx queue = Y
Multiprocess aware = Y
Link status = Y
Link status event = Y
+Runtime Rx queue setup = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 24931865d..f938d9742 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -42,6 +42,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_ethdev_devargs.c
LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
-LDLIBS += -lrte_ethdev -lrte_bus_pci -lrte_kvargs
+LDLIBS += -lrte_ethdev -lrte_bus_pci -lrte_kvargs -lrte_mbuf -lrte_mempool -lm
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 5289c79e8..dbbc2263d 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2,9 +2,15 @@
* Copyright(C) 2019 Marvell International Ltd.
*/
+#include <inttypes.h>
+#include <math.h>
+
#include <rte_ethdev_pci.h>
#include <rte_io.h>
#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_pool_ops.h>
+#include <rte_mempool.h>
#include "otx2_ethdev.h"
@@ -114,6 +120,308 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+static inline void
+nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
+{
+ rxq->head = 0;
+ rxq->available = 0;
+}
+
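+/* nix_q_size_e encodes the ring depth as 16 << (2 * qsize):
+ * 16, 64, 256, 1K, 4K, 16K, 64K, 256K and 1M entries.
+ */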
+static inline uint32_t
+nix_qsize_to_val(enum nix_q_size_e qsize)
+{
+ return (16UL << (qsize * 2));
+}
+
+static inline enum nix_q_size_e
+nix_qsize_clampup_get(struct otx2_eth_dev *dev, uint32_t val)
+{
+ int i;
+
+ if (otx2_ethdev_fixup_is_min_4k_q(dev))
+ i = nix_q_size_4K;
+ else
+ i = nix_q_size_16;
+
+ for (; i < nix_q_size_max; i++)
+ if (val <= nix_qsize_to_val(i))
+ break;
+
+ if (i >= nix_q_size_max)
+ i = nix_q_size_max - 1;
+
+ return i;
+}
+
+static int
+nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
+ uint16_t qid, struct otx2_eth_rxq *rxq, struct rte_mempool *mp)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ const struct rte_memzone *rz;
+ uint32_t ring_size, cq_size;
+ struct nix_aq_enq_req *aq;
+ uint16_t first_skip;
+ int rc;
+
+ cq_size = rxq->qlen;
+ ring_size = cq_size * NIX_CQ_ENTRY_SZ;
+ rz = rte_eth_dma_zone_reserve(eth_dev, "cq", qid, ring_size,
+ NIX_CQ_ALIGN, dev->node);
+ if (rz == NULL) {
+ otx2_err("Failed to allocate mem for cq hw ring");
+ rc = -ENOMEM;
+ goto fail;
+ }
+ memset(rz->addr, 0, rz->len);
+ rxq->desc = (uintptr_t)rz->addr;
+ rxq->qmask = cq_size - 1;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_INIT;
+
+ aq->cq.ena = 1;
+ aq->cq.caching = 1;
+ aq->cq.qsize = rxq->qsize;
+ aq->cq.base = rz->iova;
+ aq->cq.avg_level = 0xff;
+ aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
+ aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
+
+ /* Many to one reduction */
+ aq->cq.qint_idx = qid % dev->qints;
+
+ if (otx2_ethdev_fixup_is_limit_cq_full(dev)) {
+ uint16_t min_rx_drop;
+ const float rx_cq_skid = 1024 * 256;
+
+ min_rx_drop = ceil(rx_cq_skid / (float)cq_size);
+ aq->cq.drop = min_rx_drop;
+ aq->cq.drop_ena = 1;
+ }
+
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to init cq context");
+ goto fail;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_INIT;
+
+ aq->rq.sso_ena = 0;
+ aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
+ aq->rq.spb_ena = 0;
+ aq->rq.lpb_aura = npa_lf_aura_handle_to_aura(mp->pool_id);
+ first_skip = (sizeof(struct rte_mbuf));
+ first_skip += RTE_PKTMBUF_HEADROOM;
+ first_skip += rte_pktmbuf_priv_size(mp);
+ rxq->data_off = first_skip;
+
+ first_skip /= 8; /* Expressed in number of dwords */
+ aq->rq.first_skip = first_skip;
+ aq->rq.later_skip = (sizeof(struct rte_mbuf) / 8);
+ aq->rq.flow_tagw = 32; /* 32-bits */
+ aq->rq.lpb_sizem1 = rte_pktmbuf_data_room_size(mp);
+ aq->rq.lpb_sizem1 += rte_pktmbuf_priv_size(mp);
+ aq->rq.lpb_sizem1 += sizeof(struct rte_mbuf);
+ aq->rq.lpb_sizem1 /= 8;
+ aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */
+ aq->rq.ena = 1;
+ aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
+ aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
+ aq->rq.rq_int_ena = 0;
+ /* Many to one reduction */
+ aq->rq.qint_idx = qid % dev->qints;
+
+ if (otx2_ethdev_fixup_is_limit_cq_full(dev))
+ aq->rq.xqe_drop_ena = 1;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to init rq context");
+ goto fail;
+ }
+
+ return 0;
+fail:
+ return rc;
+}
+
+static int
+nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *aq;
+ int rc;
+
+ /* RQ is already disabled */
+ /* Disable CQ */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->cq.ena = 0;
+ aq->cq_mask.ena = ~(aq->cq_mask.ena);
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to disable cq context");
+ return rc;
+ }
+
+ return 0;
+}
+
+static inline int
+nix_get_data_off(struct otx2_eth_dev *dev)
+{
+ RTE_SET_USED(dev);
+
+ return 0;
+}
+
+uint64_t
+otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id)
+{
+ struct rte_mbuf mb_def;
+ uint64_t *tmp;
+
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) % 8 != 0);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) -
+ offsetof(struct rte_mbuf, data_off) != 2);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, nb_segs) -
+ offsetof(struct rte_mbuf, data_off) != 4);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) -
+ offsetof(struct rte_mbuf, data_off) != 6);
+ mb_def.nb_segs = 1;
+ mb_def.data_off = RTE_PKTMBUF_HEADROOM + nix_get_data_off(dev);
+ mb_def.port = port_id;
+ rte_mbuf_refcnt_set(&mb_def, 1);
+
+ /* Prevent compiler reordering: rearm_data covers previous fields */
+ rte_compiler_barrier();
+ tmp = (uint64_t *)&mb_def.rearm_data;
+
+ return *tmp;
+}
+
+static void
+otx2_nix_rx_queue_release(void *rx_queue)
+{
+ struct otx2_eth_rxq *rxq = rx_queue;
+
+ if (!rxq)
+ return;
+
+ otx2_nix_dbg("Releasing rxq %u", rxq->rq);
+ nix_cq_rq_uninit(rxq->eth_dev, rxq);
+ rte_free(rx_queue);
+}
+
+static int
+otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
+ uint16_t nb_desc, unsigned int socket,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_mempool_ops *ops;
+ struct otx2_eth_rxq *rxq;
+ const char *platform_ops;
+ enum nix_q_size_e qsize;
+ uint64_t offloads;
+ int rc;
+
+ rc = -EINVAL;
+
+ /* Compile time check to make sure all fast path elements in a CL */
+ RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_rxq, slow_path_start) >= 128);
+
+ /* Sanity checks */
+ if (rx_conf->rx_deferred_start == 1) {
+ otx2_err("Deferred Rx start is not supported");
+ goto fail;
+ }
+
+ platform_ops = rte_mbuf_platform_mempool_ops();
+ /* This driver needs octeontx2_npa mempool ops to work */
+ ops = rte_mempool_get_ops(mp->ops_index);
+ if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
+ otx2_err("mempool ops should be of octeontx2_npa type");
+ goto fail;
+ }
+
+ if (mp->pool_id == 0) {
+ otx2_err("Invalid pool_id");
+ goto fail;
+ }
+
+ /* Free memory prior to re-allocation if needed */
+ if (eth_dev->data->rx_queues[rq] != NULL) {
+ otx2_nix_dbg("Freeing memory prior to re-allocation %d", rq);
+ otx2_nix_rx_queue_release(eth_dev->data->rx_queues[rq]);
+ eth_dev->data->rx_queues[rq] = NULL;
+ }
+
+ offloads = rx_conf->offloads | eth_dev->data->dev_conf.rxmode.offloads;
+ dev->rx_offloads |= offloads;
+
+ /* Find the CQ queue size */
+ qsize = nix_qsize_clampup_get(dev, nb_desc);
+ /* Allocate rxq memory */
+ rxq = rte_zmalloc_socket("otx2 rxq", sizeof(*rxq), OTX2_ALIGN, socket);
+ if (rxq == NULL) {
+ otx2_err("Failed to allocate rq=%d", rq);
+ rc = -ENOMEM;
+ goto fail;
+ }
+
+ rxq->eth_dev = eth_dev;
+ rxq->rq = rq;
+ rxq->cq_door = dev->base + NIX_LF_CQ_OP_DOOR;
+ rxq->cq_status = (int64_t *)(dev->base + NIX_LF_CQ_OP_STATUS);
+ rxq->wdata = (uint64_t)rq << 32;
+ rxq->aura = npa_lf_aura_handle_to_aura(mp->pool_id);
+ rxq->mbuf_initializer = otx2_nix_rxq_mbuf_setup(dev,
+ eth_dev->data->port_id);
+ rxq->offloads = offloads;
+ rxq->pool = mp;
+ rxq->qlen = nix_qsize_to_val(qsize);
+ rxq->qsize = qsize;
+
+ /* Alloc completion queue */
+ rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
+ if (rc) {
+ otx2_err("Failed to allocate rxq=%u", rq);
+ goto free_rxq;
+ }
+
+ rxq->qconf.socket_id = socket;
+ rxq->qconf.nb_desc = nb_desc;
+ rxq->qconf.mempool = mp;
+ memcpy(&rxq->qconf.conf.rx, rx_conf, sizeof(struct rte_eth_rxconf));
+
+ nix_rx_queue_reset(rxq);
+ otx2_nix_dbg("rq=%d pool=%s qsize=%d nb_desc=%d->%d",
+ rq, mp->name, qsize, nb_desc, rxq->qlen);
+
+ eth_dev->data->rx_queues[rq] = rxq;
+ eth_dev->data->rx_queue_state[rq] = RTE_ETH_QUEUE_STATE_STOPPED;
+ return 0;
+
+free_rxq:
+ otx2_nix_rx_queue_release(rxq);
+fail:
+ return rc;
+}
+
static int
otx2_nix_configure(struct rte_eth_dev *eth_dev)
{
@@ -241,6 +549,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
.link_update = otx2_nix_link_update,
+ .rx_queue_setup = otx2_nix_rx_queue_setup,
+ .rx_queue_release = otx2_nix_rx_queue_release,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 19a4e45b0..a09393336 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -10,6 +10,9 @@
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_kvargs.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_string_fns.h>
#include "otx2_common.h"
#include "otx2_dev.h"
@@ -68,6 +71,7 @@
#define NIX_RX_MIN_DESC_ALIGN 16
#define NIX_RX_NB_SEG_MAX 6
#define NIX_CQ_ENTRY_SZ 128
+#define NIX_CQ_ALIGN 512
/* If PTP is enabled additional SEND MEM DESC is required which
* takes 2 words, hence max 7 iova address are possible
@@ -116,6 +120,19 @@
#define NIX_DEFAULT_RSS_CTX_GROUP 0
#define NIX_DEFAULT_RSS_MCAM_IDX -1
+enum nix_q_size_e {
+ nix_q_size_16, /* 16 entries */
+ nix_q_size_64, /* 64 entries */
+ nix_q_size_256,
+ nix_q_size_1K,
+ nix_q_size_4K,
+ nix_q_size_16K,
+ nix_q_size_64K,
+ nix_q_size_256K,
+ nix_q_size_1M, /* Million entries */
+ nix_q_size_max
+};
+
struct otx2_qint {
struct rte_eth_dev *eth_dev;
uint8_t qintx;
@@ -131,6 +148,16 @@ struct otx2_rss_info {
uint8_t key[NIX_HASH_KEY_SIZE];
};
+struct otx2_eth_qconf {
+ union {
+ struct rte_eth_txconf tx;
+ struct rte_eth_rxconf rx;
+ } conf;
+ void *mempool;
+ uint32_t socket_id;
+ uint16_t nb_desc;
+};
+
struct otx2_npc_flow_info {
uint16_t channel; /*rx channel */
uint16_t flow_prealloc_size;
@@ -177,6 +204,29 @@ struct otx2_eth_dev {
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
+struct otx2_eth_rxq {
+ uint64_t mbuf_initializer;
+ uint64_t data_off;
+ uintptr_t desc;
+ void *lookup_mem;
+ uintptr_t cq_door;
+ uint64_t wdata;
+ int64_t *cq_status;
+ uint32_t head;
+ uint32_t qmask;
+ uint32_t available;
+ uint16_t rq;
+ struct otx2_timesync_info *tstamp;
+ MARKER slow_path_start;
+ uint64_t aura;
+ uint64_t offloads;
+ uint32_t qlen;
+ struct rte_mempool *pool;
+ enum nix_q_size_e qsize;
+ struct rte_eth_dev *eth_dev;
+ struct otx2_eth_qconf qconf;
+} __rte_cache_aligned;
+
static inline struct otx2_eth_dev *
otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
{
@@ -192,6 +242,7 @@ void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
+uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
/* Link */
void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 301a597f8..71d36b44a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -143,4 +143,6 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+
+ devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP;
}
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index fab72ff6a..a852e5157 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -196,7 +196,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += -lrte_pmd_mvpp2
_LDLIBS-$(CONFIG_RTE_LIBRTE_MVNETA_PMD) += -lrte_pmd_mvneta
_LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += -lrte_pmd_nfp
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null
-_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += -lrte_pmd_octeontx2
+_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += -lrte_pmd_octeontx2 -lm
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += -lrte_pmd_pcap -lpcap
_LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += -lrte_pmd_qede
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING) += -lrte_pmd_ring
--
2.21.0
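Since the PMD now advertises RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP, a queue can be added after rte_eth_dev_start(); it is then enabled with rte_eth_dev_rx_queue_start() once the queue start/stop ops land later in this series. A minimal sketch, assuming a started port and a mempool created with the octeontx2_npa ops this setup path requires (add_rxq_at_runtime is illustrative):

#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
add_rxq_at_runtime(uint16_t port_id, uint16_t qid, uint16_t nb_desc,
		   struct rte_mempool *mp)
{
	/* NULL rxconf selects defaults; a deferred start would be
	 * rejected by otx2_nix_rx_queue_setup() in any case.
	 */
	return rte_eth_rx_queue_setup(port_id, qid, nb_desc,
				      rte_eth_dev_socket_id(port_id),
				      NULL, mp);
}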
* [dpdk-dev] [PATCH v3 18/58] net/octeontx2: add Tx queue setup and release
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (16 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 17/58] net/octeontx2: add Rx queue setup and release jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 19/58] net/octeontx2: handle port reconfigure jerinj
` (40 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
From: Jerin Jacob <jerinj@marvell.com>
Add Tx queue setup and release.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 404 ++++++++++++++++++++-
drivers/net/octeontx2/otx2_ethdev.h | 25 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 3 +-
drivers/net/octeontx2/otx2_tx.h | 28 ++
8 files changed, 462 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_tx.h
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index d0a2204d2..c8f07fa1d 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -11,6 +11,7 @@ Multiprocess aware = Y
Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
+Runtime Tx queue setup = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 64125a73f..a98b7d523 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -11,6 +11,7 @@ Multiprocess aware = Y
Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
+Runtime Tx queue setup = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index acda5e680..9746357ce 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -10,6 +10,7 @@ Multiprocess aware = Y
Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
+Runtime Tx queue setup = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 3bee3f3ca..d7e8f3d56 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -19,6 +19,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Promiscuous mode
- SR-IOV VF
- Lock-free Tx queue
+- Multiple queues for TX and RX
- Receiver Side Scaling (RSS)
- MAC filtering
- Port hardware statistics
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index dbbc2263d..92f008b69 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -422,6 +422,392 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
return rc;
}
+static inline uint8_t
+nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
+{
+ /*
+ * A maximum of three segments can be supported with W8; choose
+ * NIX_MAXSQESZ_W16 for multi-segment offload.
+ */
+ if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ return NIX_MAXSQESZ_W16;
+ else
+ return NIX_MAXSQESZ_W8;
+}
+
+static int
+nix_sq_init(struct otx2_eth_txq *txq)
+{
+ struct otx2_eth_dev *dev = txq->dev;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *sq;
+
+ if (txq->sqb_pool->pool_id == 0)
+ return -EINVAL;
+
+ sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ sq->qidx = txq->sq;
+ sq->ctype = NIX_AQ_CTYPE_SQ;
+ sq->op = NIX_AQ_INSTOP_INIT;
+ sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq);
+
+ sq->sq.default_chan = dev->tx_chan_base;
+ sq->sq.sqe_stype = NIX_STYPE_STF;
+ sq->sq.ena = 1;
+ if (sq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
+ sq->sq.sqe_stype = NIX_STYPE_STP;
+ sq->sq.sqb_aura =
+ npa_lf_aura_handle_to_aura(txq->sqb_pool->pool_id);
+ sq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR);
+ sq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL);
+ sq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR);
+ sq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR);
+
+ /* Many to one reduction */
+ sq->sq.qint_idx = txq->sq % dev->qints;
+
+ return otx2_mbox_process(mbox);
+}
+
+static int
+nix_sq_uninit(struct otx2_eth_txq *txq)
+{
+ struct otx2_eth_dev *dev = txq->dev;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct ndc_sync_op *ndc_req;
+ struct nix_aq_enq_rsp *rsp;
+ struct nix_aq_enq_req *aq;
+ uint16_t sqes_per_sqb;
+ void *sqb_buf;
+ int rc, count;
+
+ otx2_nix_dbg("Cleaning up sq %u", txq->sq);
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Check if sq is already cleaned up */
+ if (!rsp->sq.ena)
+ return 0;
+
+ /* Disable sq */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->sq_mask.ena = ~aq->sq_mask.ena;
+ aq->sq.ena = 0;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Read SQ and free sqb's */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->sq.smq_pend)
+ otx2_err("SQ has pending SQEs");
+
+ count = rsp->sq.sqb_count;
+ sqes_per_sqb = 1 << txq->sqes_per_sqb_log2;
+ /* Free SQBs that are in use */
+ sqb_buf = (void *)rsp->sq.head_sqb;
+ while (count) {
+ void *next_sqb;
+
+ next_sqb = *(void **)((uintptr_t)sqb_buf + ((sqes_per_sqb - 1) *
+ nix_sq_max_sqe_sz(txq)));
+ npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
+ (uint64_t)sqb_buf);
+ sqb_buf = next_sqb;
+ count--;
+ }
+
+ /* Free next to use sqb */
+ if (rsp->sq.next_sqb)
+ npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
+ rsp->sq.next_sqb);
+
+ /* Sync NDC-NIX-TX for LF */
+ ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
+ ndc_req->nix_lf_tx_sync = 1;
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ otx2_err("Error on NDC-NIX-TX LF sync, rc %d", rc);
+
+ return rc;
+}
+
+static int
+nix_sqb_aura_limit_cfg(struct rte_mempool *mp, uint16_t nb_sqb_bufs)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *aura_req;
+
+ aura_req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ aura_req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ aura_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_req->op = NPA_AQ_INSTOP_WRITE;
+
+ aura_req->aura.limit = nb_sqb_bufs;
+ aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit);
+
+ return otx2_mbox_process(npa_lf->mbox);
+}
+
+static int
+nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
+{
+ struct otx2_eth_dev *dev = txq->dev;
+ uint16_t sqes_per_sqb, nb_sqb_bufs;
+ char name[RTE_MEMPOOL_NAMESIZE];
+ struct rte_mempool_objsz sz;
+ struct npa_aura_s *aura;
+ uint32_t blk_sz;
+
+ aura = (struct npa_aura_s *)((uintptr_t)txq->fc_mem + OTX2_ALIGN);
+ snprintf(name, sizeof(name), "otx2_sqb_pool_%d_%d", port, txq->sq);
+ blk_sz = dev->sqb_size;
+
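+ /* An SQB holds sqb_size / 8 dwords; a W16 SQE is 16 dwords
+ * and a W8 SQE is 8 dwords, giving the SQEs-per-SQB below.
+ */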
+ if (nix_sq_max_sqe_sz(txq) == NIX_MAXSQESZ_W16)
+ sqes_per_sqb = (dev->sqb_size / 8) / 16;
+ else
+ sqes_per_sqb = (dev->sqb_size / 8) / 8;
+
+ nb_sqb_bufs = nb_desc / sqes_per_sqb;
+ /* Clamp between NIX_MIN_SQB and the devargs provided SQB count */
+ nb_sqb_bufs = RTE_MIN(dev->max_sqb_count, RTE_MAX(NIX_MIN_SQB,
+ nb_sqb_bufs + NIX_SQB_LIST_SPACE));
+
+ txq->sqb_pool = rte_mempool_create_empty(name, NIX_MAX_SQB, blk_sz,
+ 0, 0, dev->node,
+ MEMPOOL_F_NO_SPREAD);
+ txq->nb_sqb_bufs = nb_sqb_bufs;
+ txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb);
+ txq->nb_sqb_bufs_adj = nb_sqb_bufs -
+ RTE_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb;
+ txq->nb_sqb_bufs_adj =
+ (NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100;
+
+ if (txq->sqb_pool == NULL) {
+ otx2_err("Failed to allocate sqe mempool");
+ goto fail;
+ }
+
+ memset(aura, 0, sizeof(*aura));
+ aura->fc_ena = 1;
+ aura->fc_addr = txq->fc_iova;
+ aura->fc_hyst_bits = 0; /* Store count on all updates */
+ if (rte_mempool_set_ops_byname(txq->sqb_pool, "octeontx2_npa", aura)) {
+ otx2_err("Failed to set ops for sqe mempool");
+ goto fail;
+ }
+ if (rte_mempool_populate_default(txq->sqb_pool) < 0) {
+ otx2_err("Failed to populate sqe mempool");
+ goto fail;
+ }
+
+ rte_mempool_calc_obj_size(blk_sz, MEMPOOL_F_NO_SPREAD, &sz);
+ if (dev->sqb_size != sz.elt_size) {
+ otx2_err("sqe pool block size is not expected %d != %d",
+ dev->sqb_size, sz.elt_size);
+ goto fail;
+ }
+
+ nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
+
+ return 0;
+fail:
+ return -ENOMEM;
+}
+
+void
+otx2_nix_form_default_desc(struct otx2_eth_txq *txq)
+{
+ struct nix_send_ext_s *send_hdr_ext;
+ struct nix_send_hdr_s *send_hdr;
+ struct nix_send_mem_s *send_mem;
+ union nix_send_sg_s *sg;
+
+ /* Initialize the fields based on basic single segment packet */
+ memset(&txq->cmd, 0, sizeof(txq->cmd));
+
+ if (txq->dev->tx_offload_flags & NIX_TX_NEED_EXT_HDR) {
+ send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
+ /* 2(HDR) + 2(EXT_HDR) + 1(SG) + 1(IOVA) = 6/2 - 1 = 2 */
+ send_hdr->w0.sizem1 = 2;
+
+ send_hdr_ext = (struct nix_send_ext_s *)&txq->cmd[2];
+ send_hdr_ext->w0.subdc = NIX_SUBDC_EXT;
+ if (txq->dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F) {
+ /* Default: one seg packet would have:
+ * 2(HDR) + 2(EXT) + 1(SG) + 1(IOVA) + 2(MEM)
+ * => 8/2 - 1 = 3
+ */
+ send_hdr->w0.sizem1 = 3;
+ send_hdr_ext->w0.tstmp = 1;
+
+ /* To calculate the offset for send_mem,
+ * send_hdr->w0.sizem1 * 2
+ */
+ send_mem = (struct nix_send_mem_s *)(txq->cmd +
+ (send_hdr->w0.sizem1 << 1));
+ send_mem->subdc = NIX_SUBDC_MEM;
+ send_mem->dsz = 0x0;
+ send_mem->wmem = 0x1;
+ send_mem->alg = NIX_SENDMEMALG_SETTSTMP;
+ }
+ sg = (union nix_send_sg_s *)&txq->cmd[4];
+ } else {
+ send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
+ /* 2(HDR) + 1(SG) + 1(IOVA) = 4/2 - 1 = 1 */
+ send_hdr->w0.sizem1 = 1;
+ sg = (union nix_send_sg_s *)&txq->cmd[2];
+ }
+
+ send_hdr->w0.sq = txq->sq;
+ sg->subdc = NIX_SUBDC_SG;
+ sg->segs = 1;
+ sg->ld_type = NIX_SENDLDTYPE_LDD;
+
+ rte_smp_wmb();
+}
+
+static void
+otx2_nix_tx_queue_release(void *_txq)
+{
+ struct otx2_eth_txq *txq = _txq;
+
+ if (!txq)
+ return;
+
+ otx2_nix_dbg("Releasing txq %u", txq->sq);
+
+ /* Free sqb's and disable sq */
+ nix_sq_uninit(txq);
+
+ if (txq->sqb_pool) {
+ rte_mempool_free(txq->sqb_pool);
+ txq->sqb_pool = NULL;
+ }
+ rte_free(txq);
+}
+
+
+static int
+otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ const struct rte_memzone *fc;
+ struct otx2_eth_txq *txq;
+ uint64_t offloads;
+ int rc;
+
+ rc = -EINVAL;
+
+ /* Compile time check to make sure all fast path elements in a CL */
+ RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_txq, slow_path_start) >= 128);
+
+ if (tx_conf->tx_deferred_start) {
+ otx2_err("Tx deferred start is not supported");
+ goto fail;
+ }
+
+ /* Free memory prior to re-allocation if needed. */
+ if (eth_dev->data->tx_queues[sq] != NULL) {
+ otx2_nix_dbg("Freeing memory prior to re-allocation %d", sq);
+ otx2_nix_tx_queue_release(eth_dev->data->tx_queues[sq]);
+ eth_dev->data->tx_queues[sq] = NULL;
+ }
+
+ /* Find the expected offloads for this queue */
+ offloads = tx_conf->offloads | eth_dev->data->dev_conf.txmode.offloads;
+
+ /* Allocating tx queue data structure */
+ txq = rte_zmalloc_socket("otx2_ethdev TX queue", sizeof(*txq),
+ OTX2_ALIGN, socket_id);
+ if (txq == NULL) {
+ otx2_err("Failed to alloc txq=%d", sq);
+ rc = -ENOMEM;
+ goto fail;
+ }
+ txq->sq = sq;
+ txq->dev = dev;
+ txq->sqb_pool = NULL;
+ txq->offloads = offloads;
+ dev->tx_offloads |= offloads;
+
+ /*
+ * Allocate memory for flow control updates from HW.
+ * Alloc one cache line, so that it fits all FC_STYPE modes.
+ */
+ fc = rte_eth_dma_zone_reserve(eth_dev, "fcmem", sq,
+ OTX2_ALIGN + sizeof(struct npa_aura_s),
+ OTX2_ALIGN, dev->node);
+ if (fc == NULL) {
+ otx2_err("Failed to allocate mem for fcmem");
+ rc = -ENOMEM;
+ goto free_txq;
+ }
+ txq->fc_iova = fc->iova;
+ txq->fc_mem = fc->addr;
+
+ /* Initialize the aura sqb pool */
+ rc = nix_alloc_sqb_pool(eth_dev->data->port_id, txq, nb_desc);
+ if (rc) {
+ otx2_err("Failed to alloc sqe pool rc=%d", rc);
+ goto free_txq;
+ }
+
+ /* Initialize the SQ */
+ rc = nix_sq_init(txq);
+ if (rc) {
+ otx2_err("Failed to init sq=%d context", sq);
+ goto free_txq;
+ }
+
+ txq->fc_cache_pkts = 0;
+ txq->io_addr = dev->base + NIX_LF_OP_SENDX(0);
+ /* Evenly distribute LMT slot for each sq */
+ txq->lmt_addr = (void *)(dev->lmt_addr + ((sq & LMT_SLOT_MASK) << 12));
+
+ txq->qconf.socket_id = socket_id;
+ txq->qconf.nb_desc = nb_desc;
+ memcpy(&txq->qconf.conf.tx, tx_conf, sizeof(struct rte_eth_txconf));
+
+ otx2_nix_form_default_desc(txq);
+
+ otx2_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " sqb=0x%" PRIx64 ""
+ " lmt_addr=%p nb_sqb_bufs=%d sqes_per_sqb_log2=%d", sq,
+ fc->addr, offloads, txq->sqb_pool->pool_id, txq->lmt_addr,
+ txq->nb_sqb_bufs, txq->sqes_per_sqb_log2);
+ eth_dev->data->tx_queues[sq] = txq;
+ eth_dev->data->tx_queue_state[sq] = RTE_ETH_QUEUE_STATE_STOPPED;
+ return 0;
+
+free_txq:
+ otx2_nix_tx_queue_release(txq);
+fail:
+ return rc;
+}
+
+
static int
otx2_nix_configure(struct rte_eth_dev *eth_dev)
{
@@ -549,6 +935,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
.dev_configure = otx2_nix_configure,
.link_update = otx2_nix_link_update,
+ .tx_queue_setup = otx2_nix_tx_queue_setup,
+ .tx_queue_release = otx2_nix_tx_queue_release,
.rx_queue_setup = otx2_nix_rx_queue_setup,
.rx_queue_release = otx2_nix_rx_queue_release,
.stats_get = otx2_nix_dev_stats_get,
@@ -763,12 +1151,26 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
struct rte_pci_device *pci_dev;
- int rc;
+ int rc, i;
/* Nothing to be done for secondary processes */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Free up SQs */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
+ eth_dev->data->tx_queues[i] = NULL;
+ }
+ eth_dev->data->nb_tx_queues = 0;
+
+ /* Free up RQ's and CQ's */
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ otx2_nix_rx_queue_release(eth_dev->data->rx_queues[i]);
+ eth_dev->data->rx_queues[i] = NULL;
+ }
+ eth_dev->data->nb_rx_queues = 0;
+
/* Unregister queue irqs */
oxt2_nix_unregister_queue_irqs(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index a09393336..0ce67f634 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -19,6 +19,7 @@
#include "otx2_irq.h"
#include "otx2_mempool.h"
#include "otx2_rx.h"
+#include "otx2_tx.h"
#define OTX2_ETH_DEV_PMD_VERSION "1.0"
@@ -62,6 +63,7 @@
#define NIX_MAX_SQB 512
#define NIX_MIN_SQB 32
+#define NIX_SQB_LIST_SPACE 2
#define NIX_RSS_RETA_SIZE_MAX 256
/* Group 0 will be used for RSS, 1 -7 will be used for rte_flow RSS action*/
#define NIX_RSS_GRPS 8
@@ -72,6 +74,8 @@
#define NIX_RX_NB_SEG_MAX 6
#define NIX_CQ_ENTRY_SZ 128
#define NIX_CQ_ALIGN 512
+#define NIX_SQB_LOWER_THRESH 90
+#define LMT_SLOT_MASK 0x7f
/* If PTP is enabled additional SEND MEM DESC is required which
* takes 2 words, hence max 7 iova address are possible
@@ -204,6 +208,24 @@ struct otx2_eth_dev {
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
+struct otx2_eth_txq {
+ uint64_t cmd[8];
+ int64_t fc_cache_pkts;
+ uint64_t *fc_mem;
+ void *lmt_addr;
+ rte_iova_t io_addr;
+ rte_iova_t fc_iova;
+ uint16_t sqes_per_sqb_log2;
+ int16_t nb_sqb_bufs_adj;
+ MARKER slow_path_start;
+ uint16_t nb_sqb_bufs;
+ uint16_t sq;
+ uint64_t offloads;
+ struct otx2_eth_dev *dev;
+ struct rte_mempool *sqb_pool;
+ struct otx2_eth_qconf qconf;
+} __rte_cache_aligned;
+
struct otx2_eth_rxq {
uint64_t mbuf_initializer;
uint64_t data_off;
@@ -329,4 +351,7 @@ int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
struct otx2_eth_dev *dev);
+/* Rx and Tx routines */
+void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 71d36b44a..1c935b627 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -144,5 +144,6 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
- devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP;
+ devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
+ RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
new file mode 100644
index 000000000..4d0993f87
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_TX_H__
+#define __OTX2_TX_H__
+
+#define NIX_TX_OFFLOAD_NONE (0)
+#define NIX_TX_OFFLOAD_L3_L4_CSUM_F BIT(0)
+#define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1)
+#define NIX_TX_OFFLOAD_VLAN_QINQ_F BIT(2)
+#define NIX_TX_OFFLOAD_MBUF_NOFF_F BIT(3)
+#define NIX_TX_OFFLOAD_TSTAMP_F BIT(4)
+
+/* Flags to control xmit_prepare function.
+ * Defined from the top bit downwards to denote that it is
+ * not used as an offload flag to pick the Tx function.
+ */
+#define NIX_TX_MULTI_SEG_F BIT(15)
+
+#define NIX_TX_NEED_SEND_HDR_W1 \
+ (NIX_TX_OFFLOAD_L3_L4_CSUM_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F | \
+ NIX_TX_OFFLOAD_VLAN_QINQ_F)
+
+#define NIX_TX_NEED_EXT_HDR \
+ (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F)
+
+#endif /* __OTX2_TX_H__ */
--
2.21.0
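The SQE width picked in nix_sq_max_sqe_sz() follows the Tx offloads requested at queue setup time. A minimal sketch, assuming a configured port and the pre-19.11 DEV_TX_OFFLOAD_* flag names (setup_multiseg_txq and nb_desc are illustrative):

#include <rte_ethdev.h>

/* Requesting multi-segment Tx makes the PMD choose W16 SQEs for
 * this send queue; otherwise W8 SQEs are used.
 */
static int
setup_multiseg_txq(uint16_t port_id, uint16_t nb_desc)
{
	struct rte_eth_dev_info info;
	struct rte_eth_txconf txconf;

	rte_eth_dev_info_get(port_id, &info);
	txconf = info.default_txconf;
	txconf.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;

	return rte_eth_tx_queue_setup(port_id, 0, nb_desc,
				      rte_eth_dev_socket_id(port_id),
				      &txconf);
}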
* [dpdk-dev] [PATCH v3 19/58] net/octeontx2: handle port reconfigure
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (17 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 18/58] net/octeontx2: add Tx " jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 20/58] net/octeontx2: add queue start and stop operations jerinj
` (39 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Set up Tx and Rx queues with their previous configuration during
a port reconfigure. This handles cases where a port is reconfigured
without its Tx and Rx queues being reconfigured.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 180 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 2 +
2 files changed, 182 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 92f008b69..86ecdc14c 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -807,6 +807,172 @@ otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq,
return rc;
}
+static int
+nix_store_queue_cfg_and_then_release(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_eth_qconf *tx_qconf = NULL;
+ struct otx2_eth_qconf *rx_qconf = NULL;
+ struct otx2_eth_txq **txq;
+ struct otx2_eth_rxq **rxq;
+ int i, nb_rxq, nb_txq;
+
+ nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
+ nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
+
+ tx_qconf = malloc(nb_txq * sizeof(*tx_qconf));
+ if (tx_qconf == NULL) {
+ otx2_err("Failed to allocate memory for tx_qconf");
+ goto fail;
+ }
+
+ rx_qconf = malloc(nb_rxq * sizeof(*rx_qconf));
+ if (rx_qconf == NULL) {
+ otx2_err("Failed to allocate memory for rx_qconf");
+ goto fail;
+ }
+
+ txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
+ for (i = 0; i < nb_txq; i++) {
+ if (txq[i] == NULL) {
+ otx2_err("txq[%d] is already released", i);
+ goto fail;
+ }
+ memcpy(&tx_qconf[i], &txq[i]->qconf, sizeof(*tx_qconf));
+ otx2_nix_tx_queue_release(txq[i]);
+ eth_dev->data->tx_queues[i] = NULL;
+ }
+
+ rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues;
+ for (i = 0; i < nb_rxq; i++) {
+ if (rxq[i] == NULL) {
+ otx2_err("rxq[%d] is already released", i);
+ goto fail;
+ }
+ memcpy(&rx_qconf[i], &rxq[i]->qconf, sizeof(*rx_qconf));
+ otx2_nix_rx_queue_release(rxq[i]);
+ eth_dev->data->rx_queues[i] = NULL;
+ }
+
+ dev->tx_qconf = tx_qconf;
+ dev->rx_qconf = rx_qconf;
+ return 0;
+
+fail:
+ if (tx_qconf)
+ free(tx_qconf);
+ if (rx_qconf)
+ free(rx_qconf);
+
+ return -ENOMEM;
+}
+
+static int
+nix_restore_queue_cfg(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_eth_qconf *tx_qconf = dev->tx_qconf;
+ struct otx2_eth_qconf *rx_qconf = dev->rx_qconf;
+ struct otx2_eth_txq **txq;
+ struct otx2_eth_rxq **rxq;
+ int rc, i, nb_rxq, nb_txq;
+
+ nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
+ nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
+
+ rc = -ENOMEM;
+ /* Set up Tx & Rx queues with their previous configuration so
+ * that the queues remain functional when a port is started
+ * without its queues being reconfigured.
+ *
+ * The usual reconfig sequence is:
+ * port_configure() {
+ * if(reconfigure) {
+ * queue_release()
+ * queue_setup()
+ * }
+ * queue_configure() {
+ * queue_release()
+ * queue_setup()
+ * }
+ * }
+ * port_start()
+ *
+ * In some applications' control paths, queue_configure() is
+ * NOT invoked for TXQs/RXQs in port_configure().
+ * In such cases the queues remain functional after start, as
+ * they are already set up in port_configure().
+ */
+ for (i = 0; i < nb_txq; i++) {
+ rc = otx2_nix_tx_queue_setup(eth_dev, i, tx_qconf[i].nb_desc,
+ tx_qconf[i].socket_id,
+ &tx_qconf[i].conf.tx);
+ if (rc) {
+ otx2_err("Failed to setup tx queue rc=%d", rc);
+ txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
+ for (i -= 1; i >= 0; i--)
+ otx2_nix_tx_queue_release(txq[i]);
+ goto fail;
+ }
+ }
+
+ free(tx_qconf); tx_qconf = NULL;
+
+ for (i = 0; i < nb_rxq; i++) {
+ rc = otx2_nix_rx_queue_setup(eth_dev, i, rx_qconf[i].nb_desc,
+ rx_qconf[i].socket_id,
+ &rx_qconf[i].conf.rx,
+ rx_qconf[i].mempool);
+ if (rc) {
+ otx2_err("Failed to setup rx queue rc=%d", rc);
+ rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues;
+ for (i -= 1; i >= 0; i--)
+ otx2_nix_rx_queue_release(rxq[i]);
+ goto release_tx_queues;
+ }
+ }
+
+ free(rx_qconf); rx_qconf = NULL;
+
+ return 0;
+
+release_tx_queues:
+ txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+ otx2_nix_tx_queue_release(txq[i]);
+fail:
+ if (tx_qconf)
+ free(tx_qconf);
+ if (rx_qconf)
+ free(rx_qconf);
+
+ return rc;
+}
+
+static uint16_t
+nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
+{
+ RTE_SET_USED(queue);
+ RTE_SET_USED(mbufs);
+ RTE_SET_USED(pkts);
+
+ return 0;
+}
+
+static void
+nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
+{
+ /* These dummy functions are required to support
+ * applications which reconfigure queues without
+ * stopping the Tx and Rx burst threads (e.g. the KNI app).
+ * When the queue context is saved, the txq/rxq structures
+ * are released, which would crash the application if the
+ * Rx/Tx burst routines were still running on other lcores.
+ */
+ eth_dev->tx_pkt_burst = nix_eth_nop_burst;
+ eth_dev->rx_pkt_burst = nix_eth_nop_burst;
+ rte_mb();
+}
static int
otx2_nix_configure(struct rte_eth_dev *eth_dev)
@@ -863,6 +1029,10 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
/* Free the resources allocated from the previous configure */
if (dev->configured == 1) {
oxt2_nix_unregister_queue_irqs(eth_dev);
+ nix_set_nop_rxtx_function(eth_dev);
+ rc = nix_store_queue_cfg_and_then_release(eth_dev);
+ if (rc)
+ goto fail;
nix_lf_free(dev);
}
@@ -903,6 +1073,16 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /*
+ * Restore the queue config on a reconfigure, for the case where
+ * the application reconfigures the port without invoking queue
+ * setup again.
+ */
+ if (dev->configured == 1) {
+ rc = nix_restore_queue_cfg(eth_dev);
+ if (rc)
+ goto free_nix_lf;
+ }
+
/* Update the mac address */
ea = eth_dev->data->mac_addrs;
memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 0ce67f634..ffc350e0d 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -205,6 +205,8 @@ struct otx2_eth_dev {
uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
struct otx2_npc_flow_info npc_flow;
+ struct otx2_eth_qconf *tx_qconf;
+ struct otx2_eth_qconf *rx_qconf;
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
--
2.21.0
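From the application side, the save/restore above means a stop, configure, and start cycle works even when queue setup is not repeated. A minimal sketch (reconfigure_port and conf are illustrative; the queue counts are kept unchanged so every restored queue is valid):

#include <rte_ethdev.h>

static int
reconfigure_port(uint16_t port_id, const struct rte_eth_conf *conf,
		 uint16_t nb_rxq, uint16_t nb_txq)
{
	int rc;

	rte_eth_dev_stop(port_id);

	rc = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, conf);
	if (rc)
		return rc;

	/* No rte_eth_rx/tx_queue_setup() calls here: the PMD
	 * restores the queues from the saved per-queue config.
	 */
	return rte_eth_dev_start(port_id);
}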
* [dpdk-dev] [PATCH v3 20/58] net/octeontx2: add queue start and stop operations
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (18 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 19/58] net/octeontx2: handle port reconfigure jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 21/58] net/octeontx2: introduce traffic manager jerinj
` (38 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add queue start and stop operations. The Tx queue also needs
to update the flow control value; that will be added in a
subsequent patch.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 92 ++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.h | 2 +
5 files changed, 97 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index c8f07fa1d..ca40358da 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index a98b7d523..b720c116f 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 9746357ce..5a287493f 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -11,6 +11,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Queue start/stop = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 86ecdc14c..9a011de58 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -252,6 +252,26 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
return rc;
}
+static int
+nix_rq_enb_dis(struct rte_eth_dev *eth_dev,
+ struct otx2_eth_rxq *rxq, const bool enb)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *aq;
+
+ /* Pkts will be dropped silently if RQ is disabled */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->rq.ena = enb;
+ aq->rq_mask.ena = ~(aq->rq_mask.ena);
+
+ return otx2_mbox_process(mbox);
+}
+
static int
nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
{
@@ -1110,6 +1130,74 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
return rc;
}
+int
+otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct rte_eth_dev_data *data = eth_dev->data;
+
+ if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+ return 0;
+
+ data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+ return 0;
+}
+
+int
+otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct rte_eth_dev_data *data = eth_dev->data;
+
+ if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+ return 0;
+
+ data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+ return 0;
+}
+
+static int
+otx2_nix_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
+ struct rte_eth_dev_data *data = eth_dev->data;
+ int rc;
+
+ if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+ return 0;
+
+ rc = nix_rq_enb_dis(rxq->eth_dev, rxq, true);
+ if (rc) {
+ otx2_err("Failed to enable rxq=%u, rc=%d", qidx, rc);
+ goto done;
+ }
+
+ data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+
+done:
+ return rc;
+}
+
+static int
+otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
+ struct rte_eth_dev_data *data = eth_dev->data;
+ int rc;
+
+ if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+ return 0;
+
+ rc = nix_rq_enb_dis(rxq->eth_dev, rxq, false);
+ if (rc) {
+ otx2_err("Failed to disable rxq=%u, rc=%d", qidx, rc);
+ goto done;
+ }
+
+ data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+done:
+ return rc;
+}
+
/* Initialize and register driver with DPDK Application */
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
@@ -1119,6 +1207,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.tx_queue_release = otx2_nix_tx_queue_release,
.rx_queue_setup = otx2_nix_rx_queue_setup,
.rx_queue_release = otx2_nix_rx_queue_release,
+ .tx_queue_start = otx2_nix_tx_queue_start,
+ .tx_queue_stop = otx2_nix_tx_queue_stop,
+ .rx_queue_start = otx2_nix_rx_queue_start,
+ .rx_queue_stop = otx2_nix_rx_queue_stop,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index ffc350e0d..4e06b7111 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -266,6 +266,8 @@ void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
+int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
+int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
/* Link */
--
2.21.0
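These ops back the per-queue start/stop API, usable without stopping the whole port. A minimal sketch (app_pause_resume_rxq and qid are illustrative):

#include <rte_ethdev.h>

/* While the RQ is disabled, packets steered to this queue are
 * silently dropped by hardware.
 */
static int
app_pause_resume_rxq(uint16_t port_id, uint16_t qid)
{
	int rc;

	rc = rte_eth_dev_rx_queue_stop(port_id, qid);
	if (rc)
		return rc;

	/* ... queue is quiesced here ... */

	return rte_eth_dev_rx_queue_start(port_id, qid);
}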
* [dpdk-dev] [PATCH v3 21/58] net/octeontx2: introduce traffic manager
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (19 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 20/58] net/octeontx2: add queue start and stop operations jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 22/58] net/octeontx2: alloc and free TM HW resources jerinj
` (37 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Introduce traffic manager infra and default hierarchy
creation.
Upon ethdev configure, a default hierarchy is
created with one-to-one mapped tm nodes. This topology
will be overridden when the user explicitly creates and commits
a new hierarchy using the rte_tm interface.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 16 ++
drivers/net/octeontx2/otx2_ethdev.h | 14 ++
drivers/net/octeontx2/otx2_tm.c | 252 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_tm.h | 67 ++++++++
6 files changed, 351 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_tm.c
create mode 100644 drivers/net/octeontx2/otx2_tm.h
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index f938d9742..dd64ba6da 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -31,6 +31,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_tm.c \
otx2_rss.c \
otx2_mac.c \
otx2_link.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 8681a2642..e344d877f 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files(
+ 'otx2_tm.c',
'otx2_rss.c',
'otx2_mac.c',
'otx2_link.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 9a011de58..e64159c21 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1053,6 +1053,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
rc = nix_store_queue_cfg_and_then_release(eth_dev);
if (rc)
goto fail;
+ otx2_nix_tm_fini(eth_dev);
nix_lf_free(dev);
}
@@ -1086,6 +1087,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /* Init the default TM scheduler hierarchy */
+ rc = otx2_nix_tm_init_default(eth_dev);
+ if (rc) {
+ otx2_err("Failed to init traffic manager rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/* Register queue IRQs */
rc = oxt2_nix_register_queue_irqs(eth_dev);
if (rc) {
@@ -1388,6 +1396,9 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
/* Also sync same MAC address to CGX table */
otx2_cgx_mac_addr_set(eth_dev, ð_dev->data->mac_addrs[0]);
+ /* Initialize the tm data structures */
+ otx2_nix_tm_conf_init(eth_dev);
+
dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
dev->rx_offload_capa = nix_get_rx_offload_capa(dev);
@@ -1443,6 +1454,11 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
}
eth_dev->data->nb_rx_queues = 0;
+ /* Free tm resources */
+ rc = otx2_nix_tm_fini(eth_dev);
+ if (rc)
+ otx2_err("Failed to cleanup tm, rc=%d", rc);
+
/* Unregister queue irqs */
oxt2_nix_unregister_queue_irqs(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 4e06b7111..9f73bf89b 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -19,6 +19,7 @@
#include "otx2_irq.h"
#include "otx2_mempool.h"
#include "otx2_rx.h"
+#include "otx2_tm.h"
#include "otx2_tx.h"
#define OTX2_ETH_DEV_PMD_VERSION "1.0"
@@ -201,6 +202,19 @@ struct otx2_eth_dev {
uint64_t rx_offload_capa;
uint64_t tx_offload_capa;
struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
+ uint16_t txschq[NIX_TXSCH_LVL_CNT];
+ uint16_t txschq_contig[NIX_TXSCH_LVL_CNT];
+ uint16_t txschq_index[NIX_TXSCH_LVL_CNT];
+ uint16_t txschq_contig_index[NIX_TXSCH_LVL_CNT];
+ /* Dis-contiguous queues */
+ uint16_t txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+ /* Contiguous queues */
+ uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+ uint16_t otx2_tm_root_lvl;
+ uint16_t tm_flags;
+ uint16_t tm_leaf_cnt;
+ struct otx2_nix_tm_node_list node_list;
+ struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
struct otx2_rss_info rss_info;
uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
new file mode 100644
index 000000000..bc0474242
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -0,0 +1,252 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_malloc.h>
+
+#include "otx2_ethdev.h"
+#include "otx2_tm.h"
+
+/* Use last LVL_CNT nodes as default nodes */
+#define NIX_DEFAULT_NODE_ID_START (RTE_TM_NODE_ID_NULL - NIX_TXSCH_LVL_CNT)
+
+enum otx2_tm_node_level {
+ OTX2_TM_LVL_ROOT = 0,
+ OTX2_TM_LVL_SCH1,
+ OTX2_TM_LVL_SCH2,
+ OTX2_TM_LVL_SCH3,
+ OTX2_TM_LVL_SCH4,
+ OTX2_TM_LVL_QUEUE,
+ OTX2_TM_LVL_MAX,
+};
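+
+/* The default tree maps these levels one-to-one onto the NIX
+ * TXSCH levels: ROOT is TL1 (TL2 without TL1 access), SCH1..SCH4
+ * follow down to SMQ, and QUEUE is the SQ leaf.
+ */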
+
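+/* TL1 access is restricted to PFs with no VFs configured, and is
+ * unavailable on A0 silicon and LBK devices.
+ */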
+static bool
+nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
+{
+ bool is_lbk = otx2_dev_is_lbk(dev);
+ return otx2_dev_is_pf(dev) && !otx2_dev_is_A0(dev) &&
+ !is_lbk && !dev->maxvf;
+}
+
+static struct otx2_nix_tm_shaper_profile *
+nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
+{
+ struct otx2_nix_tm_shaper_profile *tm_shaper_profile;
+
+ TAILQ_FOREACH(tm_shaper_profile, &dev->shaper_profile_list, shaper) {
+ if (tm_shaper_profile->shaper_profile_id == shaper_id)
+ return tm_shaper_profile;
+ }
+ return NULL;
+}
+
+static struct otx2_nix_tm_node *
+nix_tm_node_search(struct otx2_eth_dev *dev,
+ uint32_t node_id, bool user)
+{
+ struct otx2_nix_tm_node *tm_node;
+
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (tm_node->id == node_id &&
+ (user == !!(tm_node->flags & NIX_TM_NODE_USER)))
+ return tm_node;
+ }
+ return NULL;
+}
+
+static int
+nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint16_t hw_lvl_id,
+ uint16_t level_id, bool user,
+ struct rte_tm_node_params *params)
+{
+ struct otx2_nix_tm_shaper_profile *shaper_profile;
+ struct otx2_nix_tm_node *tm_node, *parent_node;
+ uint32_t shaper_profile_id;
+
+ shaper_profile_id = params->shaper_profile_id;
+ shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
+
+ parent_node = nix_tm_node_search(dev, parent_node_id, user);
+
+ tm_node = rte_zmalloc("otx2_nix_tm_node",
+ sizeof(struct otx2_nix_tm_node), 0);
+ if (!tm_node)
+ return -ENOMEM;
+
+ tm_node->level_id = level_id;
+ tm_node->hw_lvl_id = hw_lvl_id;
+
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->rr_prio = 0xf;
+ tm_node->max_prio = UINT32_MAX;
+ tm_node->hw_id = UINT32_MAX;
+ tm_node->flags = 0;
+ if (user)
+ tm_node->flags = NIX_TM_NODE_USER;
+ rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
+
+ if (shaper_profile)
+ shaper_profile->reference_count++;
+ tm_node->parent = parent_node;
+ tm_node->parent_hw_id = UINT32_MAX;
+
+ TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node);
+
+ return 0;
+}
+
+static int
+nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
+{
+ struct otx2_nix_tm_shaper_profile *shaper_profile;
+
+ while ((shaper_profile = TAILQ_FIRST(&dev->shaper_profile_list))) {
+ if (shaper_profile->reference_count)
+ otx2_tm_dbg("Shaper profile %u has non zero references",
+ shaper_profile->shaper_profile_id);
+ TAILQ_REMOVE(&dev->shaper_profile_list, shaper_profile, shaper);
+ rte_free(shaper_profile);
+ }
+
+ return 0;
+}
+
+static int
+nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint32_t def = eth_dev->data->nb_tx_queues;
+ struct rte_tm_node_params params;
+ uint32_t leaf_parent, i;
+ int rc = 0;
+
+ /* Default params */
+ memset(¶ms, 0, sizeof(params));
+ params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
+
+ if (nix_tm_have_tl1_access(dev)) {
+ dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
+ rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL1,
+ OTX2_TM_LVL_ROOT, false, &params);
+ if (rc)
+ goto exit;
+ rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL2,
+ OTX2_TM_LVL_SCH1, false, &params);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL3,
+ OTX2_TM_LVL_SCH2, false, &params);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL4,
+ OTX2_TM_LVL_SCH3, false, &params);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 4, def + 3, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_SMQ,
+ OTX2_TM_LVL_SCH4, false, &params);
+ if (rc)
+ goto exit;
+
+ leaf_parent = def + 4;
+ } else {
+ dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
+ rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL2,
+ OTX2_TM_LVL_ROOT, false, &params);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL3,
+ OTX2_TM_LVL_SCH1, false, &params);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_TL4,
+ OTX2_TM_LVL_SCH2, false, &params);
+ if (rc)
+ goto exit;
+
+ rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_SMQ,
+ OTX2_TM_LVL_SCH3, false, &params);
+ if (rc)
+ goto exit;
+
+ leaf_parent = def + 3;
+ }
+
+ /* Add leaf nodes */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0,
+ DEFAULT_RR_WEIGHT,
+ NIX_TXSCH_LVL_CNT,
+ OTX2_TM_LVL_QUEUE, false, &params);
+ if (rc)
+ break;
+ }
+
+exit:
+ return rc;
+}
+
+void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ TAILQ_INIT(&dev->node_list);
+ TAILQ_INIT(&dev->shaper_profile_list);
+}
+
+int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
+ int rc;
+
+ /* Clear shaper profiles */
+ nix_tm_clear_shaper_profiles(dev);
+ dev->tm_flags = NIX_TM_DEFAULT_TREE;
+
+ rc = nix_tm_prepare_default_tree(eth_dev);
+ if (rc != 0)
+ return rc;
+
+ dev->tm_leaf_cnt = sq_cnt;
+
+ return 0;
+}
+
+int
+otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* Clear shaper profiles */
+ nix_tm_clear_shaper_profiles(dev);
+
+ dev->tm_flags = 0;
+ return 0;
+}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
new file mode 100644
index 000000000..94023fa99
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_TM_H__
+#define __OTX2_TM_H__
+
+#include <stdbool.h>
+
+#include <rte_tm_driver.h>
+
+#define NIX_TM_DEFAULT_TREE BIT_ULL(0)
+
+struct otx2_eth_dev;
+
+void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
+
+struct otx2_nix_tm_node {
+ TAILQ_ENTRY(otx2_nix_tm_node) node;
+ uint32_t id;
+ uint32_t hw_id;
+ uint32_t priority;
+ uint32_t weight;
+ uint16_t level_id;
+ uint16_t hw_lvl_id;
+ uint32_t rr_prio;
+ uint32_t rr_num;
+ uint32_t max_prio;
+ uint32_t parent_hw_id;
+ uint32_t flags;
+#define NIX_TM_NODE_HWRES BIT_ULL(0)
+#define NIX_TM_NODE_ENABLED BIT_ULL(1)
+#define NIX_TM_NODE_USER BIT_ULL(2)
+ struct otx2_nix_tm_node *parent;
+ struct rte_tm_node_params params;
+};
+
+struct otx2_nix_tm_shaper_profile {
+ TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+struct shaper_params {
+ uint64_t burst_exponent;
+ uint64_t burst_mantissa;
+ uint64_t div_exp;
+ uint64_t exponent;
+ uint64_t mantissa;
+ uint64_t burst;
+ uint64_t rate;
+};
+
+TAILQ_HEAD(otx2_nix_tm_node_list, otx2_nix_tm_node);
+TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
+
+#define MAX_SCHED_WEIGHT ((uint8_t)~0)
+#define NIX_TM_RR_QUANTUM_MAX ((1 << 24) - 1)
+
+/* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT */
+/* = NIX_MAX_HW_MTU */
+#define DEFAULT_RR_WEIGHT 71
+
+#endif /* __OTX2_TM_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 22/58] net/octeontx2: alloc and free TM HW resources
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (20 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 21/58] net/octeontx2: introduce traffic manager jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 23/58] net/octeontx2: configure " jerinj
` (36 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas
From: Krzysztof Kanas <kkanas@marvell.com>
Allocate and free shaper/scheduler hardware resources for
nodes of the hierarchy levels maintained in software.
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
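For review purposes, the per-parent accounting can be pictured with a
standalone sketch (the child array and its priorities below are made-up
inputs, not the driver's TAILQ-based node list): children of one parent
that share a priority form a single round-robin group, and the
scheduler-queue request is sized from that RR count plus one contiguous
queue per remaining static priority, mirroring check_rr() and
nix_tm_update_parent_info() in this patch.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical priorities of five children under one parent */
	uint32_t prio[] = { 0, 1, 1, 1, 2 };
	uint32_t n = sizeof(prio) / sizeof(prio[0]);
	uint32_t rr_prio = 0xf, rr_num = 0, max_prio = 0;
	uint32_t i, j, cnt;

	for (i = 0; i < n; i++) {
		cnt = 0;
		for (j = 0; j < n; j++)
			if (prio[j] == prio[i])
				cnt++;
		/* Same-priority children are served round-robin */
		if (cnt > 1) {
			rr_prio = prio[i];
			rr_num = cnt;
		}
	}
	/* Highest static (non-RR) priority sizes the contiguous range */
	for (i = 0; i < n; i++)
		if (prio[i] != rr_prio && prio[i] > max_prio)
			max_prio = prio[i];

	printf("rr_prio=%u rr_num=%u schq_contig=%u\n",
	       rr_prio, rr_num, max_prio + 1);
	return 0;
}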
---
drivers/net/octeontx2/otx2_tm.c | 350 ++++++++++++++++++++++++++++++++
1 file changed, 350 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index bc0474242..91f31df05 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -54,6 +54,69 @@ nix_tm_node_search(struct otx2_eth_dev *dev,
return NULL;
}
+static uint32_t
+check_rr(struct otx2_eth_dev *dev, uint32_t priority, uint32_t parent_id)
+{
+ struct otx2_nix_tm_node *tm_node;
+ uint32_t rr_num = 0;
+
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (!tm_node->parent)
+ continue;
+
+ if (!(tm_node->parent->id == parent_id))
+ continue;
+
+ if (tm_node->priority == priority)
+ rr_num++;
+ }
+ return rr_num;
+}
+
+static int
+nix_tm_update_parent_info(struct otx2_eth_dev *dev)
+{
+ struct otx2_nix_tm_node *tm_node_child;
+ struct otx2_nix_tm_node *tm_node;
+ struct otx2_nix_tm_node *parent;
+ uint32_t rr_num = 0;
+ uint32_t priority;
+
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (!tm_node->parent)
+ continue;
+ /* Count the group of children with the same priority, i.e. they are RR */
+ parent = tm_node->parent;
+ priority = tm_node->priority;
+ rr_num = check_rr(dev, priority, parent->id);
+
+ /* Assuming that multiple RR groups are
+ * not configured based on capability.
+ */
+ if (rr_num > 1) {
+ parent->rr_prio = priority;
+ parent->rr_num = rr_num;
+ }
+
+ /* Find out static priority children that are not in RR */
+ TAILQ_FOREACH(tm_node_child, &dev->node_list, node) {
+ if (!tm_node_child->parent)
+ continue;
+ if (parent->id != tm_node_child->parent->id)
+ continue;
+ if (parent->max_prio == UINT32_MAX &&
+ tm_node_child->priority != parent->rr_prio)
+ parent->max_prio = 0;
+
+ if (parent->max_prio < tm_node_child->priority &&
+ parent->rr_prio != tm_node_child->priority)
+ parent->max_prio = tm_node_child->priority;
+ }
+ }
+
+ return 0;
+}
+
static int
nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
uint32_t parent_node_id, uint32_t priority,
@@ -115,6 +178,274 @@ nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
return 0;
}
+static int
+nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
+ uint32_t flags, bool hw_only)
+{
+ struct otx2_nix_tm_shaper_profile *shaper_profile;
+ struct otx2_nix_tm_node *tm_node, *next_node;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_txsch_free_req *req;
+ uint32_t shaper_profile_id;
+ bool skip_node = false;
+ int rc = 0;
+
+ next_node = TAILQ_FIRST(&dev->node_list);
+ while (next_node) {
+ tm_node = next_node;
+ next_node = TAILQ_NEXT(tm_node, node);
+
+ /* Check for only requested nodes */
+ if ((tm_node->flags & flags_mask) != flags)
+ continue;
+
+ if (nix_tm_have_tl1_access(dev) &&
+ tm_node->hw_lvl_id == NIX_TXSCH_LVL_TL1)
+ skip_node = true;
+
+ otx2_tm_dbg("Free hwres for node %u, hwlvl %u, hw_id %u (%p)",
+ tm_node->id, tm_node->hw_lvl_id,
+ tm_node->hw_id, tm_node);
+ /* Free specific HW resource if requested */
+ if (!skip_node && flags_mask &&
+ tm_node->flags & NIX_TM_NODE_HWRES) {
+ req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
+ req->flags = 0;
+ req->schq_lvl = tm_node->hw_lvl_id;
+ req->schq = tm_node->hw_id;
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ break;
+ } else {
+ skip_node = false;
+ }
+ tm_node->flags &= ~NIX_TM_NODE_HWRES;
+
+ /* Leave software elements if needed */
+ if (hw_only)
+ continue;
+
+ shaper_profile_id = tm_node->params.shaper_profile_id;
+ shaper_profile =
+ nix_tm_shaper_profile_search(dev, shaper_profile_id);
+ if (shaper_profile)
+ shaper_profile->reference_count--;
+
+ TAILQ_REMOVE(&dev->node_list, tm_node, node);
+ rte_free(tm_node);
+ }
+
+ if (!flags_mask) {
+ /* Free all hw resources */
+ req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
+ req->flags = TXSCHQ_FREE_ALL;
+
+ return otx2_mbox_process(mbox);
+ }
+
+ return rc;
+}
+
+static uint8_t
+nix_tm_copy_rsp_to_dev(struct otx2_eth_dev *dev,
+ struct nix_txsch_alloc_rsp *rsp)
+{
+ uint16_t schq;
+ uint8_t lvl;
+
+ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+ for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++) {
+ dev->txschq_list[lvl][schq] = rsp->schq_list[lvl][schq];
+ dev->txschq_contig_list[lvl][schq] =
+ rsp->schq_contig_list[lvl][schq];
+ }
+
+ dev->txschq[lvl] = rsp->schq[lvl];
+ dev->txschq_contig[lvl] = rsp->schq_contig[lvl];
+ }
+ return 0;
+}
+
+static int
+nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
+ struct otx2_nix_tm_node *child,
+ struct otx2_nix_tm_node *parent)
+{
+ uint32_t hw_id, schq_con_index, prio_offset;
+ uint32_t l_id, schq_index;
+
+ otx2_tm_dbg("Assign hw id for child node %u, lvl %u, hw_lvl %u (%p)",
+ child->id, child->level_id, child->hw_lvl_id, child);
+
+ child->flags |= NIX_TM_NODE_HWRES;
+
+ /* Process root nodes */
+ if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
+ child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) {
+ int idx = 0;
+ uint32_t tschq_con_index;
+
+ l_id = child->hw_lvl_id;
+ tschq_con_index = dev->txschq_contig_index[l_id];
+ hw_id = dev->txschq_contig_list[l_id][tschq_con_index];
+ child->hw_id = hw_id;
+ dev->txschq_contig_index[l_id]++;
+ /* Update TL1 hw_id for its parent for config purpose */
+ idx = dev->txschq_index[NIX_TXSCH_LVL_TL1]++;
+ hw_id = dev->txschq_list[NIX_TXSCH_LVL_TL1][idx];
+ child->parent_hw_id = hw_id;
+ return 0;
+ }
+ if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL1 &&
+ child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) {
+ uint32_t tschq_con_index;
+
+ l_id = child->hw_lvl_id;
+ tschq_con_index = dev->txschq_index[l_id];
+ hw_id = dev->txschq_list[l_id][tschq_con_index];
+ child->hw_id = hw_id;
+ dev->txschq_index[l_id]++;
+ return 0;
+ }
+
+ /* Process children with parents */
+ l_id = child->hw_lvl_id;
+ schq_index = dev->txschq_index[l_id];
+ schq_con_index = dev->txschq_contig_index[l_id];
+
+ if (child->priority == parent->rr_prio) {
+ hw_id = dev->txschq_list[l_id][schq_index];
+ child->hw_id = hw_id;
+ child->parent_hw_id = parent->hw_id;
+ dev->txschq_index[l_id]++;
+ } else {
+ prio_offset = schq_con_index + child->priority;
+ hw_id = dev->txschq_contig_list[l_id][prio_offset];
+ child->hw_id = hw_id;
+ }
+ return 0;
+}
+
+static int
+nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
+{
+ struct otx2_nix_tm_node *parent, *child;
+ uint32_t child_hw_lvl, con_index_inc, i;
+
+ for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) {
+ TAILQ_FOREACH(parent, &dev->node_list, node) {
+ child_hw_lvl = parent->hw_lvl_id - 1;
+ if (parent->hw_lvl_id != i)
+ continue;
+ TAILQ_FOREACH(child, &dev->node_list, node) {
+ if (!child->parent)
+ continue;
+ if (child->parent->id != parent->id)
+ continue;
+ nix_tm_assign_id_to_node(dev, child, parent);
+ }
+
+ con_index_inc = parent->max_prio + 1;
+ dev->txschq_contig_index[child_hw_lvl] += con_index_inc;
+
+ /*
+ * Explicitly assign id to parent node if it
+ * doesn't have a parent
+ */
+ if (parent->hw_lvl_id == dev->otx2_tm_root_lvl)
+ nix_tm_assign_id_to_node(dev, parent, NULL);
+ }
+ }
+ return 0;
+}
+
+static uint8_t
+nix_tm_count_req_schq(struct otx2_eth_dev *dev,
+ struct nix_txsch_alloc_req *req, uint8_t lvl)
+{
+ struct otx2_nix_tm_node *tm_node;
+ uint8_t contig_count;
+
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (lvl == tm_node->hw_lvl_id) {
+ req->schq[lvl - 1] += tm_node->rr_num;
+ if (tm_node->max_prio != UINT32_MAX) {
+ contig_count = tm_node->max_prio + 1;
+ req->schq_contig[lvl - 1] += contig_count;
+ }
+ }
+ if (lvl == dev->otx2_tm_root_lvl &&
+ dev->otx2_tm_root_lvl && lvl == NIX_TXSCH_LVL_TL2 &&
+ tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) {
+ req->schq_contig[dev->otx2_tm_root_lvl]++;
+ }
+ }
+
+ req->schq[NIX_TXSCH_LVL_TL1] = 1;
+ req->schq_contig[NIX_TXSCH_LVL_TL1] = 0;
+
+ return 0;
+}
+
+static int
+nix_tm_prepare_txschq_req(struct otx2_eth_dev *dev,
+ struct nix_txsch_alloc_req *req)
+{
+ uint8_t i;
+
+ for (i = NIX_TXSCH_LVL_TL1; i > 0; i--)
+ nix_tm_count_req_schq(dev, req, i);
+
+ for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) {
+ dev->txschq_index[i] = 0;
+ dev->txschq_contig_index[i] = 0;
+ }
+ return 0;
+}
+
+static int
+nix_tm_send_txsch_alloc_msg(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_txsch_alloc_req *req;
+ struct nix_txsch_alloc_rsp *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_nix_txsch_alloc(mbox);
+
+ rc = nix_tm_prepare_txschq_req(dev, req);
+ if (rc)
+ return rc;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ nix_tm_copy_rsp_to_dev(dev, rsp);
+
+ nix_tm_assign_hw_id(dev);
+ return 0;
+}
+
+static int
+nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc;
+
+ RTE_SET_USED(xmit_enable);
+
+ nix_tm_update_parent_info(dev);
+
+ rc = nix_tm_send_txsch_alloc_msg(dev);
+ if (rc) {
+ otx2_err("TM failed to alloc tm resources=%d", rc);
+ return rc;
+ }
+
+ return 0;
+}
+
static int
nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
{
@@ -226,6 +557,13 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
int rc;
+ /* Free up all resources already held */
+ rc = nix_tm_free_resources(dev, 0, 0, false);
+ if (rc) {
+ otx2_err("Failed to freeup existing resources,rc=%d", rc);
+ return rc;
+ }
+
/* Clear shaper profiles */
nix_tm_clear_shaper_profiles(dev);
dev->tm_flags = NIX_TM_DEFAULT_TREE;
@@ -234,6 +572,9 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
if (rc != 0)
return rc;
+ rc = nix_tm_alloc_resources(eth_dev, false);
+ if (rc != 0)
+ return rc;
dev->tm_leaf_cnt = sq_cnt;
return 0;
@@ -243,6 +584,15 @@ int
otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc;
+
+ /* Xmit is assumed to be disabled */
+ /* Free up resources already held */
+ rc = nix_tm_free_resources(dev, 0, 0, false);
+ if (rc) {
+ otx2_err("Failed to freeup existing resources,rc=%d", rc);
+ return rc;
+ }
/* Clear shaper profiles */
nix_tm_clear_shaper_profiles(dev);
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 23/58] net/octeontx2: configure TM HW resources
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (21 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 22/58] net/octeontx2: alloc and free TM HW resources jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 24/58] net/octeontx2: enable Tx through traffic manager jerinj
` (35 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
This patch sets up and configures the hierarchy in
hardware nodes. Since all the registers are owned by
the RVU AF, register configuration is also done using
mbox communication.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
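As background for the register encoding below, the exponent/mantissa
search implied by the SHAPER_RATE() formula in otx2_tm.h can be
reproduced with a self-contained sketch; the clock, time-wheel and
target numbers here are illustrative assumptions only.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* rate = (hz * ((256 + mantissa) << exponent)) / (ticks * 256) */
static uint64_t
nix_rate(uint64_t hz, uint64_t ticks, uint64_t exp, uint64_t man)
{
	return (hz * ((256 + man) << exp)) / (ticks * 256);
}

int main(void)
{
	uint64_t hz = 1000000000;    /* assumed CCLK_HZ */
	uint64_t ticks = 860;        /* LX time-wheel resolution */
	uint64_t target = 10000000;  /* example target rate */
	uint64_t exp = 0xf, man = 0xff;

	/* Walk the exponent down first, then refine the mantissa,
	 * matching the loops in shaper_rate_to_nix(). */
	while (exp > 0 && target < nix_rate(hz, ticks, exp, 0))
		exp--;
	while (man > 0 && target < nix_rate(hz, ticks, exp, man))
		man--;

	printf("exp=%" PRIu64 " man=%" PRIu64 " rate=%" PRIu64 "\n",
	       exp, man, nix_rate(hz, ticks, exp, man));
	return 0;
}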
---
drivers/net/octeontx2/otx2_tm.c | 504 ++++++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_tm.h | 82 ++++++
2 files changed, 586 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 91f31df05..c6154e4d4 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -20,6 +20,41 @@ enum otx2_tm_node_level {
OTX2_TM_LVL_MAX,
};
+static inline
+uint64_t shaper2regval(struct shaper_params *shaper)
+{
+ return (shaper->burst_exponent << 37) | (shaper->burst_mantissa << 29) |
+ (shaper->div_exp << 13) | (shaper->exponent << 9) |
+ (shaper->mantissa << 1);
+}
+
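+/*
+ * The Tx channel base encodes the egress link: a channel >= 0x800 maps
+ * to a CGX LMAC link, a channel < 0x700 is the LBK link (12), and
+ * anything else is treated as SDP (link 13).
+ */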
+static int
+nix_get_link(struct otx2_eth_dev *dev)
+{
+ int link = 13 /* SDP */;
+ uint16_t lmac_chan;
+ uint16_t map;
+
+ lmac_chan = dev->tx_chan_base;
+
+ /* CGX lmac link */
+ if (lmac_chan >= 0x800) {
+ map = lmac_chan & 0x7FF;
+ link = 4 * ((map >> 8) & 0xF) + ((map >> 4) & 0xF);
+ } else if (lmac_chan < 0x700) {
+ /* LBK channel */
+ link = 12;
+ }
+
+ return link;
+}
+
+static uint8_t
+nix_get_relchan(struct otx2_eth_dev *dev)
+{
+ return dev->tx_chan_base & 0xff;
+}
+
static bool
nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
{
@@ -28,6 +63,24 @@ nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
!is_lbk && !dev->maxvf;
}
+static int
+find_prio_anchor(struct otx2_eth_dev *dev, uint32_t node_id)
+{
+ struct otx2_nix_tm_node *child_node;
+
+ TAILQ_FOREACH(child_node, &dev->node_list, node) {
+ if (!child_node->parent)
+ continue;
+ if (!(child_node->parent->id == node_id))
+ continue;
+ if (child_node->priority == child_node->parent->rr_prio)
+ continue;
+ return child_node->hw_id - child_node->priority;
+ }
+ return 0;
+}
+
+
static struct otx2_nix_tm_shaper_profile *
nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
{
@@ -40,6 +93,451 @@ nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
return NULL;
}
+static inline uint64_t
+shaper_rate_to_nix(uint64_t cclk_hz, uint64_t cclk_ticks,
+ uint64_t value, uint64_t *exponent_p,
+ uint64_t *mantissa_p, uint64_t *div_exp_p)
+{
+ uint64_t div_exp, exponent, mantissa;
+
+ /* Boundary checks */
+ if (value < MIN_SHAPER_RATE(cclk_hz, cclk_ticks) ||
+ value > MAX_SHAPER_RATE(cclk_hz, cclk_ticks))
+ return 0;
+
+ if (value <= SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, 0)) {
+ /* Calculate rate div_exp and mantissa using
+ * the following formula:
+ *
+ * value = (cclk_hz * (256 + mantissa))
+ * / ((cclk_ticks << div_exp) * 256)
+ */
+ div_exp = 0;
+ exponent = 0;
+ mantissa = MAX_RATE_MANTISSA;
+
+ while (value < (cclk_hz / (cclk_ticks << div_exp)))
+ div_exp += 1;
+
+ while (value <
+ ((cclk_hz * (256 + mantissa)) /
+ ((cclk_ticks << div_exp) * 256)))
+ mantissa -= 1;
+ } else {
+ /* Calculate rate exponent and mantissa using
+ * the following formula:
+ *
+ * value = (cclk_hz * ((256 + mantissa) << exponent))
+ * / (cclk_ticks * 256)
+ *
+ */
+ div_exp = 0;
+ exponent = MAX_RATE_EXPONENT;
+ mantissa = MAX_RATE_MANTISSA;
+
+ while (value < (cclk_hz * (1 << exponent)) / cclk_ticks)
+ exponent -= 1;
+
+ while (value < (cclk_hz * ((256 + mantissa) << exponent)) /
+ (cclk_ticks * 256))
+ mantissa -= 1;
+ }
+
+ if (div_exp > MAX_RATE_DIV_EXP ||
+ exponent > MAX_RATE_EXPONENT || mantissa > MAX_RATE_MANTISSA)
+ return 0;
+
+ if (div_exp_p)
+ *div_exp_p = div_exp;
+ if (exponent_p)
+ *exponent_p = exponent;
+ if (mantissa_p)
+ *mantissa_p = mantissa;
+
+ /* Calculate real rate value */
+ return SHAPER_RATE(cclk_hz, cclk_ticks, exponent, mantissa, div_exp);
+}
+
+static inline uint64_t
+lx_shaper_rate_to_nix(uint64_t cclk_hz, uint32_t hw_lvl,
+ uint64_t value, uint64_t *exponent,
+ uint64_t *mantissa, uint64_t *div_exp)
+{
+ if (hw_lvl == NIX_TXSCH_LVL_TL1)
+ return shaper_rate_to_nix(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS,
+ value, exponent, mantissa, div_exp);
+ else
+ return shaper_rate_to_nix(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS,
+ value, exponent, mantissa, div_exp);
+}
+
+static inline uint64_t
+shaper_burst_to_nix(uint64_t value, uint64_t *exponent_p,
+ uint64_t *mantissa_p)
+{
+ uint64_t exponent, mantissa;
+
+ if (value < MIN_SHAPER_BURST || value > MAX_SHAPER_BURST)
+ return 0;
+
+ /* Calculate burst exponent and mantissa using
+ * the following formula:
+ *
+ * value = ((256 + mantissa) << (exponent + 1))
+ * / 256
+ *
+ */
+ exponent = MAX_BURST_EXPONENT;
+ mantissa = MAX_BURST_MANTISSA;
+
+ while (value < (1ull << (exponent + 1)))
+ exponent -= 1;
+
+ while (value < ((256 + mantissa) << (exponent + 1)) / 256)
+ mantissa -= 1;
+
+ if (exponent > MAX_BURST_EXPONENT || mantissa > MAX_BURST_MANTISSA)
+ return 0;
+
+ if (exponent_p)
+ *exponent_p = exponent;
+ if (mantissa_p)
+ *mantissa_p = mantissa;
+
+ return SHAPER_BURST(exponent, mantissa);
+}
+
+static int
+configure_shaper_cir_pir_reg(struct otx2_eth_dev *dev,
+ struct otx2_nix_tm_node *tm_node,
+ struct shaper_params *cir,
+ struct shaper_params *pir)
+{
+ uint32_t shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
+ struct otx2_nix_tm_shaper_profile *shaper_profile = NULL;
+ struct rte_tm_shaper_params *param;
+
+ shaper_profile_id = tm_node->params.shaper_profile_id;
+
+ shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
+ if (shaper_profile) {
+ param = &shaper_profile->profile;
+ /* Calculate CIR exponent and mantissa */
+ if (param->committed.rate)
+ cir->rate = lx_shaper_rate_to_nix(CCLK_HZ,
+ tm_node->hw_lvl_id,
+ param->committed.rate,
+ &cir->exponent,
+ &cir->mantissa,
+ &cir->div_exp);
+
+ /* Calculate PIR exponent and mantissa */
+ if (param->peak.rate)
+ pir->rate = lx_shaper_rate_to_nix(CCLK_HZ,
+ tm_node->hw_lvl_id,
+ param->peak.rate,
+ &pir->exponent,
+ &pir->mantissa,
+ &pir->div_exp);
+
+ /* Calculate CIR burst exponent and mantissa */
+ if (param->committed.size)
+ cir->burst = shaper_burst_to_nix(param->committed.size,
+ &cir->burst_exponent,
+ &cir->burst_mantissa);
+
+ /* Calculate PIR burst exponent and mantissa */
+ if (param->peak.size)
+ pir->burst = shaper_burst_to_nix(param->peak.size,
+ &pir->burst_exponent,
+ &pir->burst_mantissa);
+ }
+
+ return 0;
+}
+
+static int
+send_tm_reqval(struct otx2_mbox *mbox, struct nix_txschq_config *req)
+{
+ int rc;
+
+ if (req->num_regs > MAX_REGS_PER_MBOX_MSG)
+ return -ERANGE;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ req->num_regs = 0;
+ return 0;
+}
+
+static int
+populate_tm_registers(struct otx2_eth_dev *dev,
+ struct otx2_nix_tm_node *tm_node)
+{
+ uint64_t strict_schedul_prio, rr_prio;
+ struct otx2_mbox *mbox = dev->mbox;
+ volatile uint64_t *reg, *regval;
+ uint64_t parent = 0, child = 0;
+ struct shaper_params cir, pir;
+ struct nix_txschq_config *req;
+ uint64_t rr_quantum;
+ uint32_t hw_lvl;
+ uint32_t schq;
+ int rc;
+
+ memset(&cir, 0, sizeof(cir));
+ memset(&pir, 0, sizeof(pir));
+
+ /* Skip leaf nodes */
+ if (tm_node->hw_lvl_id == NIX_TXSCH_LVL_CNT)
+ return 0;
+
+ /* Root node will not have a parent node */
+ if (tm_node->hw_lvl_id == dev->otx2_tm_root_lvl)
+ parent = tm_node->parent_hw_id;
+ else
+ parent = tm_node->parent->hw_id;
+
+ /* Do we need this trigger to configure TL1 */
+ if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
+ tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) {
+ schq = parent;
+ /*
+ * Default config for TL1.
+ * For VF this is always ignored.
+ */
+
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = NIX_TXSCH_LVL_TL1;
+
+ /* Set DWRR quantum */
+ req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
+ req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
+ req->num_regs++;
+
+ req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
+ req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
+ req->num_regs++;
+
+ req->reg[2] = NIX_AF_TL1X_CIR(schq);
+ req->regval[2] = 0;
+ req->num_regs++;
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ }
+
+ if (tm_node->hw_lvl_id != NIX_TXSCH_LVL_SMQ)
+ child = find_prio_anchor(dev, tm_node->id);
+
+ rr_prio = tm_node->rr_prio;
+ hw_lvl = tm_node->hw_lvl_id;
+ strict_schedul_prio = tm_node->priority;
+ schq = tm_node->hw_id;
+ rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX) /
+ MAX_SCHED_WEIGHT;
+
+ configure_shaper_cir_pir_reg(dev, tm_node, &cir, &pir);
+
+ otx2_tm_dbg("Configure node %p, lvl %u hw_lvl %u, id %u, hw_id %u,"
+ "parent_hw_id %" PRIx64 ", pir %" PRIx64 ", cir %" PRIx64,
+ tm_node, tm_node->level_id, hw_lvl,
+ tm_node->id, schq, parent, pir.rate, cir.rate);
+
+ rc = -EFAULT;
+
+ switch (hw_lvl) {
+ case NIX_TXSCH_LVL_SMQ:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ reg = req->reg;
+ regval = req->regval;
+ req->num_regs = 0;
+
+ /* Set xoff which will be cleared later */
+ *reg++ = NIX_AF_SMQX_CFG(schq);
+ *regval++ = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) |
+ (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
+ req->num_regs++;
+ *reg++ = NIX_AF_MDQX_PARENT(schq);
+ *regval++ = parent << 16;
+ req->num_regs++;
+ *reg++ = NIX_AF_MDQX_SCHEDULE(schq);
+ *regval++ = (strict_schedul_prio << 24) | rr_quantum;
+ req->num_regs++;
+ if (pir.rate && pir.burst) {
+ *reg++ = NIX_AF_MDQX_PIR(schq);
+ *regval++ = shaper2regval(&pir) | 1;
+ req->num_regs++;
+ }
+
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_MDQX_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ case NIX_TXSCH_LVL_TL4:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = 0;
+ reg = req->reg;
+ regval = req->regval;
+
+ *reg++ = NIX_AF_TL4X_PARENT(schq);
+ *regval++ = parent << 16;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL4X_TOPOLOGY(schq);
+ *regval++ = (child << 32) | (rr_prio << 1);
+ req->num_regs++;
+ *reg++ = NIX_AF_TL4X_SCHEDULE(schq);
+ *regval++ = (strict_schedul_prio << 24) | rr_quantum;
+ req->num_regs++;
+ if (pir.rate && pir.burst) {
+ *reg++ = NIX_AF_TL4X_PIR(schq);
+ *regval++ = shaper2regval(&pir) | 1;
+ req->num_regs++;
+ }
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_TL4X_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ case NIX_TXSCH_LVL_TL3:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = 0;
+ reg = req->reg;
+ regval = req->regval;
+
+ *reg++ = NIX_AF_TL3X_PARENT(schq);
+ *regval++ = parent << 16;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL3X_TOPOLOGY(schq);
+ *regval++ = (child << 32) | (rr_prio << 1);
+ req->num_regs++;
+ *reg++ = NIX_AF_TL3X_SCHEDULE(schq);
+ *regval++ = (strict_schedul_prio << 24) | rr_quantum;
+ req->num_regs++;
+ if (pir.rate && pir.burst) {
+ *reg++ = NIX_AF_TL3X_PIR(schq);
+ *regval++ = shaper2regval(&pir) | 1;
+ req->num_regs++;
+ }
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_TL3X_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ case NIX_TXSCH_LVL_TL2:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = 0;
+ reg = req->reg;
+ regval = req->regval;
+
+ *reg++ = NIX_AF_TL2X_PARENT(schq);
+ *regval++ = parent << 16;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL2X_TOPOLOGY(schq);
+ *regval++ = (child << 32) | (rr_prio << 1);
+ req->num_regs++;
+ *reg++ = NIX_AF_TL2X_SCHEDULE(schq);
+ if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2)
+ *regval++ = (1 << 24) | rr_quantum;
+ else
+ *regval++ = (strict_schedul_prio << 24) | rr_quantum;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq, nix_get_link(dev));
+ *regval++ = BIT_ULL(12) | nix_get_relchan(dev);
+ req->num_regs++;
+ if (pir.rate && pir.burst) {
+ *reg++ = NIX_AF_TL2X_PIR(schq);
+ *regval++ = shaper2regval(&pir) | 1;
+ req->num_regs++;
+ }
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_TL2X_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ case NIX_TXSCH_LVL_TL1:
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = 0;
+ reg = req->reg;
+ regval = req->regval;
+
+ *reg++ = NIX_AF_TL1X_SCHEDULE(schq);
+ *regval++ = rr_quantum;
+ req->num_regs++;
+ *reg++ = NIX_AF_TL1X_TOPOLOGY(schq);
+ *regval++ = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
+ req->num_regs++;
+ if (cir.rate && cir.burst) {
+ *reg++ = NIX_AF_TL1X_CIR(schq);
+ *regval++ = shaper2regval(&cir) | 1;
+ req->num_regs++;
+ }
+
+ rc = send_tm_reqval(mbox, req);
+ if (rc)
+ goto error;
+ break;
+ }
+
+ return 0;
+error:
+ otx2_err("Txschq cfg request failed for node %p, rc=%d", tm_node, rc);
+ return rc;
+}
+
+
+static int
+nix_tm_txsch_reg_config(struct otx2_eth_dev *dev)
+{
+ struct otx2_nix_tm_node *tm_node;
+ uint32_t lvl;
+ int rc = 0;
+
+ if (nix_get_link(dev) == 13)
+ return -EPERM;
+
+ for (lvl = 0; lvl < (uint32_t)dev->otx2_tm_root_lvl + 1; lvl++) {
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (tm_node->hw_lvl_id == lvl) {
+ rc = populate_tm_registers(dev, tm_node);
+ if (rc)
+ goto exit;
+ }
+ }
+ }
+exit:
+ return rc;
+}
+
static struct otx2_nix_tm_node *
nix_tm_node_search(struct otx2_eth_dev *dev,
uint32_t node_id, bool user)
@@ -443,6 +941,12 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
return rc;
}
+ rc = nix_tm_txsch_reg_config(dev);
+ if (rc) {
+ otx2_err("TM failed to configure sched registers=%d", rc);
+ return rc;
+ }
+
return 0;
}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 94023fa99..af1bb1862 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -64,4 +64,86 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
/* = NIX_MAX_HW_MTU */
#define DEFAULT_RR_WEIGHT 71
+/** NIX rate limits */
+#define MAX_RATE_DIV_EXP 12
+#define MAX_RATE_EXPONENT 0xf
+#define MAX_RATE_MANTISSA 0xff
+
+/** NIX rate limiter time-wheel resolution */
+#define L1_TIME_WHEEL_CCLK_TICKS 240
+#define LX_TIME_WHEEL_CCLK_TICKS 860
+
+#define CCLK_HZ 1000000000
+
+/* NIX rate calculation
+ * CCLK = coprocessor-clock frequency in MHz
+ * CCLK_TICKS = rate limiter time-wheel resolution
+ *
+ * PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA])
+ * << NIX_*_PIR[RATE_EXPONENT]) / 256
+ * PIR = (CCLK / (CCLK_TICKS << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
+ * * PIR_ADD
+ *
+ * CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA])
+ * << NIX_*_CIR[RATE_EXPONENT]) / 256
+ * CIR = (CCLK / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
+ * * CIR_ADD
+ */
+#define SHAPER_RATE(cclk_hz, cclk_ticks, \
+ exponent, mantissa, div_exp) \
+ (((uint64_t)(cclk_hz) * ((256 + (mantissa)) << (exponent))) \
+ / (((cclk_ticks) << (div_exp)) * 256))
+
+#define L1_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \
+ SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS, \
+ exponent, mantissa, div_exp)
+
+#define LX_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \
+ SHAPER_RATE(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS, \
+ exponent, mantissa, div_exp)
+
+/* Shaper rate limits */
+#define MIN_SHAPER_RATE(cclk_hz, cclk_ticks) \
+ SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, MAX_RATE_DIV_EXP)
+
+#define MAX_SHAPER_RATE(cclk_hz, cclk_ticks) \
+ SHAPER_RATE(cclk_hz, cclk_ticks, MAX_RATE_EXPONENT, \
+ MAX_RATE_MANTISSA, 0)
+
+#define MIN_L1_SHAPER_RATE(cclk_hz) \
+ MIN_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS)
+
+#define MAX_L1_SHAPER_RATE(cclk_hz) \
+ MAX_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS)
+
+/** TM Shaper - low level operations */
+
+/** NIX burst limits */
+#define MAX_BURST_EXPONENT 0xf
+#define MAX_BURST_MANTISSA 0xff
+
+/* NIX burst calculation
+ * PIR_BURST = ((256 + NIX_*_PIR[BURST_MANTISSA])
+ * << (NIX_*_PIR[BURST_EXPONENT] + 1))
+ * / 256
+ *
+ * CIR_BURST = ((256 + NIX_*_CIR[BURST_MANTISSA])
+ * << (NIX_*_CIR[BURST_EXPONENT] + 1))
+ * / 256
+ */
+#define SHAPER_BURST(exponent, mantissa) \
+ (((256 + (mantissa)) << ((exponent) + 1)) / 256)
+
+/** Shaper burst limits */
+#define MIN_SHAPER_BURST \
+ SHAPER_BURST(0, 0)
+
+#define MAX_SHAPER_BURST \
+ SHAPER_BURST(MAX_BURST_EXPONENT,\
+ MAX_BURST_MANTISSA)
+
+/* Default TL1 priority and Quantum from AF */
+#define TXSCH_TL1_DFLT_RR_QTM ((1 << 24) - 1)
+#define TXSCH_TL1_DFLT_RR_PRIO 1
+
#endif /* __OTX2_TM_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 24/58] net/octeontx2: enable Tx through traffic manager
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (22 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 23/58] net/octeontx2: configure " jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 25/58] net/octeontx2: add ptype support jerinj
` (34 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: Krzysztof Kanas, Vamsi Attunuru
From: Krzysztof Kanas <kkanas@marvell.com>
This patch enables packet transmission through the traffic
manager hierarchy by clearing software XOFF on the nodes and
linking Tx queues to the corresponding leaf nodes.
It also adds support to start and stop Tx queues using the
traffic manager.
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
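A minimal sketch of the quiescence test used while draining an SQ; the
bit layout mirrors the NIX_LF_SQ_OP_STATUS parsing in
nix_txq_flush_sq_spin() below, while the example values are made up.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
sq_quiescent(uint64_t status, uint64_t fc_mem, uint64_t nb_sqb_bufs)
{
	uint16_t sqb_cnt = status & 0xFFFF;
	uint16_t head_off = (status >> 20) & 0x3F;
	uint16_t tail_off = (status >> 28) & 0x3F;

	/* Drained: at most one SQB in use, head meets tail, and all
	 * SQB buffers are back in the aura. */
	return sqb_cnt <= 1 && head_off == tail_off &&
	       fc_mem == nb_sqb_bufs;
}

int main(void)
{
	printf("idle: %d\n", sq_quiescent(0x1, 512, 512));
	printf("busy: %d\n", sq_quiescent((3ULL << 28) | 5, 500, 512));
	return 0;
}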
---
drivers/net/octeontx2/otx2_ethdev.c | 75 ++++++-
drivers/net/octeontx2/otx2_tm.c | 296 +++++++++++++++++++++++++++-
drivers/net/octeontx2/otx2_tm.h | 4 +
3 files changed, 370 insertions(+), 5 deletions(-)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e64159c21..c1b8b37db 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -120,6 +120,32 @@ nix_lf_free(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+int
+otx2_cgx_rxtx_start(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_start_rxtx(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
+int
+otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_stop_rxtx(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
static inline void
nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
{
@@ -461,16 +487,27 @@ nix_sq_init(struct otx2_eth_txq *txq)
struct otx2_eth_dev *dev = txq->dev;
struct otx2_mbox *mbox = dev->mbox;
struct nix_aq_enq_req *sq;
+ uint32_t rr_quantum;
+ uint16_t smq;
+ int rc;
if (txq->sqb_pool->pool_id == 0)
return -EINVAL;
+ rc = otx2_nix_tm_get_leaf_data(dev, txq->sq, &rr_quantum, &smq);
+ if (rc) {
+ otx2_err("Failed to get sq->smq(leaf node), rc=%d", rc);
+ return rc;
+ }
+
sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
sq->qidx = txq->sq;
sq->ctype = NIX_AQ_CTYPE_SQ;
sq->op = NIX_AQ_INSTOP_INIT;
sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq);
+ sq->sq.smq = smq;
+ sq->sq.smq_rr_quantum = rr_quantum;
sq->sq.default_chan = dev->tx_chan_base;
sq->sq.sqe_stype = NIX_STYPE_STF;
sq->sq.ena = 1;
@@ -711,12 +748,18 @@ static void
otx2_nix_tx_queue_release(void *_txq)
{
struct otx2_eth_txq *txq = _txq;
+ struct rte_eth_dev *eth_dev;
if (!txq)
return;
+ eth_dev = txq->dev->eth_dev;
+
otx2_nix_dbg("Releasing txq %u", txq->sq);
+ /* Flush and disable tm */
+ otx2_nix_tm_sw_xoff(txq, eth_dev->data->dev_started);
+
/* Free sqb's and disable sq */
nix_sq_uninit(txq);
@@ -1142,24 +1185,52 @@ int
otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
{
struct rte_eth_dev_data *data = eth_dev->data;
+ struct otx2_eth_txq *txq;
+ int rc = -EINVAL;
+
+ txq = eth_dev->data->tx_queues[qidx];
if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
return 0;
+ rc = otx2_nix_sq_sqb_aura_fc(txq, true);
+ if (rc) {
+ otx2_err("Failed to enable sqb aura fc, txq=%u, rc=%d",
+ qidx, rc);
+ goto done;
+ }
+
data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
- return 0;
+
+done:
+ return rc;
}
int
otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
{
struct rte_eth_dev_data *data = eth_dev->data;
+ struct otx2_eth_txq *txq;
+ int rc;
+
+ txq = eth_dev->data->tx_queues[qidx];
if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
return 0;
+ txq->fc_cache_pkts = 0;
+
+ rc = otx2_nix_sq_sqb_aura_fc(txq, false);
+ if (rc) {
+ otx2_err("Failed to disable sqb aura fc, txq=%u, rc=%d",
+ qidx, rc);
+ goto done;
+ }
+
data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
- return 0;
+
+done:
+ return rc;
}
static int
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index c6154e4d4..246920695 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -676,6 +676,224 @@ nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
return 0;
}
+static int
+nix_smq_xoff(struct otx2_eth_dev *dev, uint16_t smq, bool enable)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_txschq_config *req;
+
+ req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = NIX_TXSCH_LVL_SMQ;
+ req->num_regs = 1;
+
+ req->reg[0] = NIX_AF_SMQX_CFG(smq);
+ /* Unmodified fields */
+ req->regval[0] = ((uint64_t)NIX_MAX_VTAG_INS << 36) |
+ (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
+
+ if (enable)
+ req->regval[0] |= BIT_ULL(50) | BIT_ULL(49);
+ else
+ req->regval[0] |= 0;
+
+ return otx2_mbox_process(mbox);
+}
+
+int
+otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
+{
+ struct otx2_eth_txq *txq = __txq;
+ struct npa_aq_enq_req *req;
+ struct npa_aq_enq_rsp *rsp;
+ struct otx2_npa_lf *lf;
+ struct otx2_mbox *mbox;
+ uint64_t aura_handle;
+ int rc;
+
+ lf = otx2_npa_lf_obj_get();
+ if (!lf)
+ return -EFAULT;
+ mbox = lf->mbox;
+ /* Set/clear sqb aura fc_ena */
+ aura_handle = txq->sqb_pool->pool_id;
+ req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+
+ req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_WRITE;
+ /* Below is not needed for aura writes but AF driver needs it */
+ /* AF will translate to associated poolctx */
+ req->aura.pool_addr = req->aura_id;
+
+ req->aura.fc_ena = enable;
+ req->aura_mask.fc_ena = 1;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Read back npa aura ctx */
+ req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+
+ req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_READ;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Init when enabled as there might be no triggers */
+ if (enable)
+ *(volatile uint64_t *)txq->fc_mem = rsp->aura.count;
+ else
+ *(volatile uint64_t *)txq->fc_mem = txq->nb_sqb_bufs;
+ /* Sync write barrier */
+ rte_wmb();
+
+ return 0;
+}
+
+static void
+nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
+{
+ uint16_t sqb_cnt, head_off, tail_off;
+ struct otx2_eth_dev *dev = txq->dev;
+ uint16_t sq = txq->sq;
+ uint64_t reg, val;
+ int64_t *regaddr;
+
+ while (true) {
+ reg = ((uint64_t)sq << 32);
+ regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
+ val = otx2_atomic64_add_nosync(reg, regaddr);
+
+ regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
+ val = otx2_atomic64_add_nosync(reg, regaddr);
+ sqb_cnt = val & 0xFFFF;
+ head_off = (val >> 20) & 0x3F;
+ tail_off = (val >> 28) & 0x3F;
+
+ /* SQ reached quiescent state */
+ if (sqb_cnt <= 1 && head_off == tail_off &&
+ (*txq->fc_mem == txq->nb_sqb_bufs)) {
+ break;
+ }
+
+ rte_pause();
+ }
+}
+
+int
+otx2_nix_tm_sw_xoff(void *__txq, bool dev_started)
+{
+ struct otx2_eth_txq *txq = __txq;
+ struct otx2_eth_dev *dev = txq->dev;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *req;
+ struct nix_aq_enq_rsp *rsp;
+ uint16_t smq;
+ int rc;
+
+ /* Get smq from sq */
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ req->qidx = txq->sq;
+ req->ctype = NIX_AQ_CTYPE_SQ;
+ req->op = NIX_AQ_INSTOP_READ;
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to get smq, rc=%d", rc);
+ return -EIO;
+ }
+
+ /* Check if sq is enabled */
+ if (!rsp->sq.ena)
+ return 0;
+
+ smq = rsp->sq.smq;
+
+ /* Enable CGX RXTX to drain pkts */
+ if (!dev_started) {
+ rc = otx2_cgx_rxtx_start(dev);
+ if (rc)
+ return rc;
+ }
+
+ rc = otx2_nix_sq_sqb_aura_fc(txq, false);
+ if (rc < 0) {
+ otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
+ goto cleanup;
+ }
+
+ /* Disable smq xoff for case it was enabled earlier */
+ rc = nix_smq_xoff(dev, smq, false);
+ if (rc) {
+ otx2_err("Failed to enable smq for sq %u, rc=%d", txq->sq, rc);
+ goto cleanup;
+ }
+
+ /* Wait for sq entries to be flushed */
+ nix_txq_flush_sq_spin(txq);
+
+ /* Flush and enable smq xoff */
+ rc = nix_smq_xoff(dev, smq, true);
+ if (rc) {
+ otx2_err("Failed to disable smq for sq %u, rc=%d", txq->sq, rc);
+ return rc;
+ }
+
+cleanup:
+ /* Restore cgx state */
+ if (!dev_started)
+ rc |= otx2_cgx_rxtx_stop(dev);
+
+ return rc;
+}
+
+static int
+nix_tm_sw_xon(struct otx2_eth_txq *txq,
+ uint16_t smq, uint32_t rr_quantum)
+{
+ struct otx2_eth_dev *dev = txq->dev;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *req;
+ int rc;
+
+ otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum %u",
+ txq->sq, smq, rr_quantum);
+ /* Set smq from sq */
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ req->qidx = txq->sq;
+ req->ctype = NIX_AQ_CTYPE_SQ;
+ req->op = NIX_AQ_INSTOP_WRITE;
+ req->sq.smq = smq;
+ req->sq.smq_rr_quantum = rr_quantum;
+ req->sq_mask.smq = ~req->sq_mask.smq;
+ req->sq_mask.smq_rr_quantum = ~req->sq_mask.smq_rr_quantum;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("Failed to set smq, rc=%d", rc);
+ return -EIO;
+ }
+
+ /* Enable sqb_aura fc */
+ rc = otx2_nix_sq_sqb_aura_fc(txq, true);
+ if (rc < 0) {
+ otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
+ return rc;
+ }
+
+ /* Disable smq xoff */
+ rc = nix_smq_xoff(dev, smq, false);
+ if (rc) {
+ otx2_err("Failed to enable smq for sq %u", txq->sq);
+ return rc;
+ }
+
+ return 0;
+}
+
static int
nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
uint32_t flags, bool hw_only)
@@ -929,10 +1147,11 @@ static int
nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_nix_tm_node *tm_node;
+ uint16_t sq, smq, rr_quantum;
+ struct otx2_eth_txq *txq;
int rc;
- RTE_SET_USED(xmit_enable);
-
nix_tm_update_parent_info(dev);
rc = nix_tm_send_txsch_alloc_msg(dev);
@@ -947,7 +1166,43 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
return rc;
}
- return 0;
+ /* Enable xmit as all the topology is ready */
+ TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+ if (tm_node->flags & NIX_TM_NODE_ENABLED)
+ continue;
+
+ /* Enable xmit on sq */
+ if (tm_node->level_id != OTX2_TM_LVL_QUEUE) {
+ tm_node->flags |= NIX_TM_NODE_ENABLED;
+ continue;
+ }
+
+ /* Neither enable SMQ nor mark the node as enabled */
+ if (!xmit_enable)
+ continue;
+
+ sq = tm_node->id;
+ if (sq >= eth_dev->data->nb_tx_queues) {
+ rc = -EFAULT;
+ break;
+ }
+
+ txq = eth_dev->data->tx_queues[sq];
+
+ smq = tm_node->parent->hw_id;
+ rr_quantum = (tm_node->weight *
+ NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT;
+
+ rc = nix_tm_sw_xon(txq, smq, rr_quantum);
+ if (rc)
+ break;
+ tm_node->flags |= NIX_TM_NODE_ENABLED;
+ }
+
+ if (rc)
+ otx2_err("TM failed to enable xmit on sq %u, rc=%d", sq, rc);
+
+ return rc;
}
static int
@@ -1104,3 +1359,38 @@ otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
dev->tm_flags = 0;
return 0;
}
+
+int
+otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
+ uint32_t *rr_quantum, uint16_t *smq)
+{
+ struct otx2_nix_tm_node *tm_node;
+ int rc;
+
+ /* 0..sq_cnt-1 are leaf nodes */
+ if (sq >= dev->tm_leaf_cnt)
+ return -EINVAL;
+
+ /* Search for internal node first */
+ tm_node = nix_tm_node_search(dev, sq, false);
+ if (!tm_node)
+ tm_node = nix_tm_node_search(dev, sq, true);
+
+ /* Check if we found a valid leaf node */
+ if (!tm_node || tm_node->level_id != OTX2_TM_LVL_QUEUE ||
+ !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
+ return -EIO;
+ }
+
+ /* Get SMQ Id of leaf node's parent */
+ *smq = tm_node->parent->hw_id;
+ *rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX)
+ / MAX_SCHED_WEIGHT;
+
+ rc = nix_smq_xoff(dev, *smq, false);
+ if (rc)
+ return rc;
+ tm_node->flags |= NIX_TM_NODE_ENABLED;
+
+ return 0;
+}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index af1bb1862..2a009eece 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -16,6 +16,10 @@ struct otx2_eth_dev;
void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
+ uint32_t *rr_quantum, uint16_t *smq);
+int otx2_nix_tm_sw_xoff(void *_txq, bool dev_started);
+int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
struct otx2_nix_tm_node {
TAILQ_ENTRY(otx2_nix_tm_node) node;
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 25/58] net/octeontx2: add ptype support
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (23 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 24/58] net/octeontx2: enable Tx through traffic manager jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 26/58] net/octeontx2: add queue info and pool supported operations jerinj
` (33 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
The fields from the CQE need to be converted to
ptype and Rx ol_flags in the mbuf. This patch
creates lookup memory for those items to be
used in the fast path.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
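A rough sketch of how the fast path is expected to consume the table
built in this patch: the per-layer type codes from NIX_RX_PARSE_S are
packed into an index into the precomputed array (the layer codes and
the stored value below are placeholders, not real NPC constants).

#include <stdint.h>
#include <stdio.h>

static uint16_t ptype_tbl[1 << 16];	/* stands in for the memzone */

int main(void)
{
	uint8_t lb = 2, lc = 8, ld = 4, le = 0;	/* hypothetical codes */
	uint16_t idx;

	/* Same nibble packing as nix_create_non_tunnel_ptype_array() */
	idx = lb | (lc << 4) | (ld << 8) | (le << 12);
	ptype_tbl[idx] = 0x0123;	/* an RTE_PTYPE_* combination */

	printf("layers -> idx 0x%x -> ptype 0x%x\n", idx, ptype_tbl[idx]);
	return 0;
}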
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 2 +
drivers/net/octeontx2/otx2_ethdev.h | 6 +
drivers/net/octeontx2/otx2_lookup.c | 315 +++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 7 +
10 files changed, 336 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_lookup.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index ca40358da..0de07776f 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -20,6 +20,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Packet type parsing = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index b720c116f..b4b253aa4 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -20,6 +20,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Packet type parsing = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 5a287493f..21cc4861e 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -16,6 +16,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Packet type parsing = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index d7e8f3d56..07e44b031 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -16,6 +16,7 @@ Features
Features of the OCTEON TX2 Ethdev PMD are:
+- Packet type information
- Promiscuous mode
- SR-IOV VF
- Lock-free Tx queue
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index dd64ba6da..dd0f2b9ca 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -36,6 +36,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_mac.c \
otx2_link.c \
otx2_stats.c \
+ otx2_lookup.c \
otx2_ethdev.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index e344d877f..3dff3e53d 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -8,6 +8,7 @@ sources = files(
'otx2_mac.c',
'otx2_link.c',
'otx2_stats.c',
+ 'otx2_lookup.c',
'otx2_ethdev.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index c1b8b37db..62514c6f6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -441,6 +441,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
rxq->pool = mp;
rxq->qlen = nix_qsize_to_val(qsize);
rxq->qsize = qsize;
+ rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get();
/* Alloc completion queue */
rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
@@ -1290,6 +1291,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.tx_queue_stop = otx2_nix_tx_queue_stop,
.rx_queue_start = otx2_nix_rx_queue_start,
.rx_queue_stop = otx2_nix_rx_queue_stop,
+ .dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 9f73bf89b..cfc4dfe14 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -355,6 +355,12 @@ int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
+/* Lookup configuration */
+void *otx2_nix_fastpath_lookup_mem_get(void);
+
+/* PTYPES */
+const uint32_t *otx2_nix_supported_ptypes_get(struct rte_eth_dev *dev);
+
/* Mac address handling */
int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c
new file mode 100644
index 000000000..99199d08a
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_lookup.c
@@ -0,0 +1,315 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_memzone.h>
+
+#include "otx2_ethdev.h"
+
+/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
+#define ERRCODE_ERRLEN_WIDTH 12
+#define ERR_ARRAY_SZ ((BIT(ERRCODE_ERRLEN_WIDTH)) *\
+ sizeof(uint32_t))
+
+#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ)
+
+const uint32_t *
+otx2_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER_QINQ, /* LB */
+ RTE_PTYPE_L2_ETHER_VLAN, /* LB */
+ RTE_PTYPE_L2_ETHER_TIMESYNC, /* LB */
+ RTE_PTYPE_L2_ETHER_ARP, /* LC */
+ RTE_PTYPE_L2_ETHER_NSH, /* LC */
+ RTE_PTYPE_L2_ETHER_FCOE, /* LC */
+ RTE_PTYPE_L2_ETHER_MPLS, /* LC */
+ RTE_PTYPE_L3_IPV4, /* LC */
+ RTE_PTYPE_L3_IPV4_EXT, /* LC */
+ RTE_PTYPE_L3_IPV6, /* LC */
+ RTE_PTYPE_L3_IPV6_EXT, /* LC */
+ RTE_PTYPE_L4_TCP, /* LD */
+ RTE_PTYPE_L4_UDP, /* LD */
+ RTE_PTYPE_L4_SCTP, /* LD */
+ RTE_PTYPE_L4_ICMP, /* LD */
+ RTE_PTYPE_L4_IGMP, /* LD */
+ RTE_PTYPE_TUNNEL_GRE, /* LD */
+ RTE_PTYPE_TUNNEL_ESP, /* LD */
+ RTE_PTYPE_TUNNEL_NVGRE, /* LD */
+ RTE_PTYPE_TUNNEL_VXLAN, /* LE */
+ RTE_PTYPE_TUNNEL_GENEVE, /* LE */
+ RTE_PTYPE_TUNNEL_GTPC, /* LE */
+ RTE_PTYPE_TUNNEL_GTPU, /* LE */
+ RTE_PTYPE_TUNNEL_VXLAN_GPE, /* LE */
+ RTE_PTYPE_TUNNEL_MPLS_IN_GRE, /* LE */
+ RTE_PTYPE_TUNNEL_MPLS_IN_UDP, /* LE */
+ RTE_PTYPE_INNER_L2_ETHER,/* LF */
+ RTE_PTYPE_INNER_L3_IPV4, /* LG */
+ RTE_PTYPE_INNER_L3_IPV6, /* LG */
+ RTE_PTYPE_INNER_L4_TCP, /* LH */
+ RTE_PTYPE_INNER_L4_UDP, /* LH */
+ RTE_PTYPE_INNER_L4_SCTP, /* LH */
+ RTE_PTYPE_INNER_L4_ICMP, /* LH */
+ };
+
+ if (dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)
+ return ptypes;
+ else
+ return NULL;
+}
+
+/*
+ * +----+-----+-----+-----+----+----+----+----+
+ * |    | IL4 | IL3 | IL2 | TU | L4 | L3 | L2 |
+ * +----+-----+-----+-----+----+----+----+----+
+ *
+ * +----+-----+-----+-----+----+----+----+----+
+ * |    | LH  | LG  | LF  | LE | LD | LC | LB |
+ * +----+-----+-----+-----+----+----+----+----+
+ *
+ * ptype       [LE - LD - LC - LB] = TU  - L4  - L3  - L2
+ * ptype_tunnel[LH - LG - LF]      = IL4 - IL3 - IL2 - TU
+ *
+ */
+static void
+nix_create_non_tunnel_ptype_array(uint16_t *ptype)
+{
+ uint8_t lb, lc, ld, le;
+ uint16_t idx, val;
+
+ for (idx = 0; idx < PTYPE_NON_TUNNEL_ARRAY_SZ; idx++) {
+ lb = idx & 0xF;
+ lc = (idx & 0xF0) >> 4;
+ ld = (idx & 0xF00) >> 8;
+ le = (idx & 0xF000) >> 12;
+ val = RTE_PTYPE_UNKNOWN;
+
+ switch (lb) {
+ case NPC_LT_LB_QINQ:
+ val |= RTE_PTYPE_L2_ETHER_QINQ;
+ break;
+ case NPC_LT_LB_CTAG:
+ val |= RTE_PTYPE_L2_ETHER_VLAN;
+ break;
+ }
+
+ switch (lc) {
+ case NPC_LT_LC_ARP:
+ val |= RTE_PTYPE_L2_ETHER_ARP;
+ break;
+ case NPC_LT_LC_NSH:
+ val |= RTE_PTYPE_L2_ETHER_NSH;
+ break;
+ case NPC_LT_LC_FCOE:
+ val |= RTE_PTYPE_L2_ETHER_FCOE;
+ break;
+ case NPC_LT_LC_MPLS:
+ val |= RTE_PTYPE_L2_ETHER_MPLS;
+ break;
+ case NPC_LT_LC_IP:
+ val |= RTE_PTYPE_L3_IPV4;
+ break;
+ case NPC_LT_LC_IP_OPT:
+ val |= RTE_PTYPE_L3_IPV4_EXT;
+ break;
+ case NPC_LT_LC_IP6:
+ val |= RTE_PTYPE_L3_IPV6;
+ break;
+ case NPC_LT_LC_IP6_EXT:
+ val |= RTE_PTYPE_L3_IPV6_EXT;
+ break;
+ case NPC_LT_LC_PTP:
+ val |= RTE_PTYPE_L2_ETHER_TIMESYNC;
+ break;
+ }
+
+ switch (ld) {
+ case NPC_LT_LD_TCP:
+ val |= RTE_PTYPE_L4_TCP;
+ break;
+ case NPC_LT_LD_UDP:
+ val |= RTE_PTYPE_L4_UDP;
+ break;
+ case NPC_LT_LD_SCTP:
+ val |= RTE_PTYPE_L4_SCTP;
+ break;
+ case NPC_LT_LD_ICMP:
+ val |= RTE_PTYPE_L4_ICMP;
+ break;
+ case NPC_LT_LD_IGMP:
+ val |= RTE_PTYPE_L4_IGMP;
+ break;
+ case NPC_LT_LD_GRE:
+ val |= RTE_PTYPE_TUNNEL_GRE;
+ break;
+ case NPC_LT_LD_NVGRE:
+ val |= RTE_PTYPE_TUNNEL_NVGRE;
+ break;
+ case NPC_LT_LD_ESP:
+ val |= RTE_PTYPE_TUNNEL_ESP;
+ break;
+ }
+
+ switch (le) {
+ case NPC_LT_LE_VXLAN:
+ val |= RTE_PTYPE_TUNNEL_VXLAN;
+ break;
+ case NPC_LT_LE_VXLANGPE:
+ val |= RTE_PTYPE_TUNNEL_VXLAN_GPE;
+ break;
+ case NPC_LT_LE_GENEVE:
+ val |= RTE_PTYPE_TUNNEL_GENEVE;
+ break;
+ case NPC_LT_LE_GTPC:
+ val |= RTE_PTYPE_TUNNEL_GTPC;
+ break;
+ case NPC_LT_LE_GTPU:
+ val |= RTE_PTYPE_TUNNEL_GTPU;
+ break;
+ case NPC_LT_LE_TU_MPLS_IN_GRE:
+ val |= RTE_PTYPE_TUNNEL_MPLS_IN_GRE;
+ break;
+ case NPC_LT_LE_TU_MPLS_IN_UDP:
+ val |= RTE_PTYPE_TUNNEL_MPLS_IN_UDP;
+ break;
+ }
+ ptype[idx] = val;
+ }
+}
+
+#define TU_SHIFT(x) ((x) >> PTYPE_WIDTH)
+static void
+nix_create_tunnel_ptype_array(uint16_t *ptype)
+{
+ uint8_t le, lf, lg;
+ uint16_t idx, val;
+
+ /* Skip non tunnel ptype array memory */
+ ptype = ptype + PTYPE_NON_TUNNEL_ARRAY_SZ;
+
+ for (idx = 0; idx < PTYPE_TUNNEL_ARRAY_SZ; idx++) {
+ le = idx & 0xF;
+ lf = (idx & 0xF0) >> 4;
+ lg = (idx & 0xF00) >> 8;
+ val = RTE_PTYPE_UNKNOWN;
+
+ switch (le) {
+ case NPC_LT_LF_TU_ETHER:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L2_ETHER);
+ break;
+ }
+ switch (lf) {
+ case NPC_LT_LG_TU_IP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV4);
+ break;
+ case NPC_LT_LG_TU_IP6:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV6);
+ break;
+ }
+ switch (lg) {
+ case NPC_LT_LH_TU_TCP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L4_TCP);
+ break;
+ case NPC_LT_LH_TU_UDP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L4_UDP);
+ break;
+ case NPC_LT_LH_TU_SCTP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L4_SCTP);
+ break;
+ case NPC_LT_LH_TU_ICMP:
+ val |= TU_SHIFT(RTE_PTYPE_INNER_L4_ICMP);
+ break;
+ }
+
+ ptype[idx] = val;
+ }
+}
+
+static void
+nix_create_rx_ol_flags_array(void *mem)
+{
+ uint16_t idx, errcode, errlev;
+ uint32_t val, *ol_flags;
+
+ /* Skip ptype array memory */
+ ol_flags = (uint32_t *)((uint8_t *)mem + PTYPE_ARRAY_SZ);
+
+ for (idx = 0; idx < BIT(ERRCODE_ERRLEN_WIDTH); idx++) {
+ errlev = idx & 0xf;
+ errcode = (idx & 0xff0) >> 4;
+
+ val = PKT_RX_IP_CKSUM_UNKNOWN;
+ val |= PKT_RX_L4_CKSUM_UNKNOWN;
+ val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+
+ switch (errlev) {
+ case NPC_ERRLEV_RE:
+ /* Mark all errors as BAD checksum errors */
+ if (errcode) {
+ val |= PKT_RX_IP_CKSUM_BAD;
+ val |= PKT_RX_L4_CKSUM_BAD;
+ } else {
+ val |= PKT_RX_IP_CKSUM_GOOD;
+ val |= PKT_RX_L4_CKSUM_GOOD;
+ }
+ break;
+ case NPC_ERRLEV_LC:
+ if (errcode == NPC_EC_OIP4_CSUM ||
+ errcode == NPC_EC_IP_FRAG_OFFSET_1) {
+ val |= PKT_RX_IP_CKSUM_BAD;
+ val |= PKT_RX_EIP_CKSUM_BAD;
+ } else {
+ val |= PKT_RX_IP_CKSUM_GOOD;
+ }
+ break;
+ case NPC_ERRLEV_LG:
+ if (errcode == NPC_EC_IIP4_CSUM)
+ val |= PKT_RX_IP_CKSUM_BAD;
+ else
+ val |= PKT_RX_IP_CKSUM_GOOD;
+ break;
+ case NPC_ERRLEV_NIX:
+ if (errcode == NIX_RX_PERRCODE_OL4_CHK) {
+ val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+ val |= PKT_RX_L4_CKSUM_BAD;
+ } else if (errcode == NIX_RX_PERRCODE_IL4_CHK) {
+ val |= PKT_RX_L4_CKSUM_BAD;
+ } else {
+ val |= PKT_RX_IP_CKSUM_GOOD;
+ val |= PKT_RX_L4_CKSUM_GOOD;
+ }
+ break;
+ }
+
+ ol_flags[idx] = val;
+ }
+}
+
+void *
+otx2_nix_fastpath_lookup_mem_get(void)
+{
+ const char name[] = "otx2_nix_fastpath_lookup_mem";
+ const struct rte_memzone *mz;
+ void *mem;
+
+ mz = rte_memzone_lookup(name);
+ if (mz != NULL)
+ return mz->addr;
+
+ /* Request for the first time */
+ mz = rte_memzone_reserve_aligned(name, LOOKUP_ARRAY_SZ,
+ SOCKET_ID_ANY, 0, OTX2_ALIGN);
+ if (mz != NULL) {
+ mem = mz->addr;
+ /* Form the ptype array lookup memory */
+ nix_create_non_tunnel_ptype_array(mem);
+ nix_create_tunnel_ptype_array(mem);
+ /* Form the rx ol_flags based on errcode */
+ nix_create_rx_ol_flags_array(mem);
+ return mem;
+ }
+ return NULL;
+}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 1749c43ff..1283fdf37 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -5,6 +5,13 @@
#ifndef __OTX2_RX_H__
#define __OTX2_RX_H__
+#define PTYPE_WIDTH 12
+#define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH)
+#define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH)
+#define PTYPE_ARRAY_SZ ((PTYPE_NON_TUNNEL_ARRAY_SZ +\
+ PTYPE_TUNNEL_ARRAY_SZ) *\
+ sizeof(uint16_t))
+
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
#endif /* __OTX2_RX_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
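The lookup memory built above packs three tables back to back: the non-tunnel ptype array, the tunnel ptype array (whose entries are pre-shifted right by PTYPE_WIDTH via TU_SHIFT so that the RTE_PTYPE_INNER_* bits, which live above bit 15, fit into 16-bit slots and can presumably be shifted back up in the Rx fastpath), and the errcode/errlev to ol_flags array. A minimal sketch, using a hypothetical helper name, of how the ol_flags table could be indexed; only the index math is taken from the creation loop above:
/* Hypothetical helper; mirrors nix_create_rx_ol_flags_array():
 * errlev occupies bits [3:0] and errcode bits [11:4] of the index.
 */
static inline uint32_t
nix_lookup_rx_ol_flags(const void *lookup_mem, uint8_t errlev, uint8_t errcode)
{
	const uint32_t *ol_flags_tbl;

	/* The ol_flags table starts right after the two ptype tables */
	ol_flags_tbl = (const uint32_t *)((const uint8_t *)lookup_mem +
					  PTYPE_ARRAY_SZ);
	return ol_flags_tbl[(errlev & 0xf) | ((uint32_t)errcode << 4)];
}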
* [dpdk-dev] [PATCH v3 26/58] net/octeontx2: add queue info and pool supported operations
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (24 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 25/58] net/octeontx2: add ptype support jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 27/58] net/octeontx2: add Rx and Tx descriptor operations jerinj
` (32 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add the Rx and Tx queue info get operations and the pool ops
supported operation.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 3 ++
drivers/net/octeontx2/otx2_ethdev.h | 5 +++
drivers/net/octeontx2/otx2_ethdev_ops.c | 51 +++++++++++++++++++++++++
3 files changed, 59 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 62514c6f6..7ef2cb87c 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1312,6 +1312,9 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.xstats_reset = otx2_nix_xstats_reset,
.xstats_get_by_id = otx2_nix_xstats_get_by_id,
.xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
+ .rxq_info_get = otx2_nix_rxq_info_get,
+ .txq_info_get = otx2_nix_txq_info_get,
+ .pool_ops_supported = otx2_nix_pool_ops_supported,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index cfc4dfe14..199d5f242 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -274,6 +274,11 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
/* Ops */
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
+void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 1c935b627..eda5f8a01 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -2,6 +2,8 @@
* Copyright(C) 2019 Marvell International Ltd.
*/
+#include <rte_mbuf_pool_ops.h>
+
#include "otx2_ethdev.h"
static void
@@ -86,6 +88,55 @@ otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev)
nix_allmulticast_config(eth_dev, 0);
}
+void
+otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct otx2_eth_rxq *rxq;
+
+ rxq = eth_dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->pool;
+ qinfo->scattered_rx = eth_dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->qconf.nb_desc;
+
+ qinfo->conf.rx_free_thresh = 0;
+ qinfo->conf.rx_drop_en = 0;
+ qinfo->conf.rx_deferred_start = 0;
+ qinfo->conf.offloads = rxq->offloads;
+}
+
+void
+otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct otx2_eth_txq *txq;
+
+ txq = eth_dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->qconf.nb_desc;
+
+ qinfo->conf.tx_thresh.pthresh = 0;
+ qinfo->conf.tx_thresh.hthresh = 0;
+ qinfo->conf.tx_thresh.wthresh = 0;
+
+ qinfo->conf.tx_free_thresh = 0;
+ qinfo->conf.tx_rs_thresh = 0;
+ qinfo->conf.offloads = txq->offloads;
+ qinfo->conf.tx_deferred_start = 0;
+}
+
+int
+otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
+{
+ RTE_SET_USED(eth_dev);
+
+ if (!strcmp(pool, rte_mbuf_platform_mempool_ops()))
+ return 0;
+
+ return -ENOTSUP;
+}
+
void
otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
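From the application side, the new callbacks are reached through the standard ethdev API; a minimal usage sketch (the port and queue ids are placeholders):
#include <stdio.h>
#include <rte_ethdev.h>

static int
show_rxq_info(uint16_t port_id)
{
	struct rte_eth_rxq_info qinfo;
	int rc;

	/* Dispatches to otx2_nix_rxq_info_get() on this PMD */
	rc = rte_eth_rx_queue_info_get(port_id, 0, &qinfo);
	if (rc == 0)
		printf("rxq0: nb_desc=%u scattered_rx=%d\n",
		       qinfo.nb_desc, qinfo.scattered_rx);
	return rc;
}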
* [dpdk-dev] [PATCH v3 27/58] net/octeontx2: add Rx and Tx descriptor operations
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (25 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 26/58] net/octeontx2: add queue info and pool supported operations jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 28/58] net/octeontx2: add module EEPROM dump jerinj
` (31 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
From: Jerin Jacob <jerinj@marvell.com>
Add Rx and Tx queue descriptor related operations.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
drivers/net/octeontx2/otx2_ethdev.c | 4 ++
drivers/net/octeontx2/otx2_ethdev.h | 4 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 83 ++++++++++++++++++++++
6 files changed, 97 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 0de07776f..f07b64f24 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Free Tx mbuf on demand = Y
Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
@@ -21,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Packet type parsing = Y
+Rx descriptor status = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index b4b253aa4..911c926e4 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Free Tx mbuf on demand = Y
Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
@@ -21,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Packet type parsing = Y
+Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 21cc4861e..e275e6469 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -11,12 +11,14 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Free Tx mbuf on demand = Y
Queue start/stop = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Packet type parsing = Y
+Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 7ef2cb87c..909aad65c 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1314,6 +1314,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
.rxq_info_get = otx2_nix_rxq_info_get,
.txq_info_get = otx2_nix_txq_info_get,
+ .rx_queue_count = otx2_nix_rx_queue_count,
+ .rx_descriptor_done = otx2_nix_rx_descriptor_done,
+ .rx_descriptor_status = otx2_nix_rx_descriptor_status,
+ .tx_done_cleanup = otx2_nix_tx_done_cleanup,
.pool_ops_supported = otx2_nix_pool_ops_supported,
};
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 199d5f242..8f2691c80 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -279,6 +279,10 @@ void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
+uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
+int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
+int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index eda5f8a01..44cc17200 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -126,6 +126,89 @@ otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
qinfo->conf.tx_deferred_start = 0;
}
+static void
+nix_rx_head_tail_get(struct otx2_eth_dev *dev,
+ uint32_t *head, uint32_t *tail, uint16_t queue_idx)
+{
+ uint64_t reg, val;
+
+ if (head == NULL || tail == NULL)
+ return;
+
+ reg = (((uint64_t)queue_idx) << 32);
+ val = otx2_atomic64_add_nosync(reg, (int64_t *)
+ (dev->base + NIX_LF_CQ_OP_STATUS));
+ if (val & (OP_ERR | CQ_ERR))
+ val = 0;
+
+ *tail = (uint32_t)(val & 0xFFFFF);
+ *head = (uint32_t)((val >> 20) & 0xFFFFF);
+}
+
+uint32_t
+otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+{
+ struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint32_t head, tail;
+
+ nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+ return (tail - head) % rxq->qlen;
+}
+
+static inline int
+nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
+{
+ /* Check whether the given offset (queue index) has a packet filled by HW */
+ if (tail > head && offset <= tail && offset >= head)
+ return 1;
+ /* Wrap around case */
+ if (head > tail && (offset >= head || offset <= tail))
+ return 1;
+
+ return 0;
+}
+
+int
+otx2_nix_rx_descriptor_done(void *rx_queue, uint16_t offset)
+{
+ struct otx2_eth_rxq *rxq = rx_queue;
+ uint32_t head, tail;
+
+ nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
+ &head, &tail, rxq->rq);
+
+ return nix_offset_has_packet(head, tail, offset);
+}
+
+int
+otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+ struct otx2_eth_rxq *rxq = rx_queue;
+ uint32_t head, tail;
+
+ if (offset >= rxq->qlen)
+ return -EINVAL;
+
+ nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
+ &head, &tail, rxq->rq);
+
+ if (nix_offset_has_packet(head, tail, offset))
+ return RTE_ETH_RX_DESC_DONE;
+ else
+ return RTE_ETH_RX_DESC_AVAIL;
+}
+
+/* It is a NOP for octeontx2 as HW frees the buffer on xmit */
+int
+otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+ RTE_SET_USED(txq);
+ RTE_SET_USED(free_cnt);
+
+ return 0;
+}
+
int
otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
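The nix_offset_has_packet() check above must cope with ring wrap-around, since head and tail are indices into a circular CQ. A standalone restatement of the same predicate with a worked example:
/* Same predicate as nix_offset_has_packet(), for illustration only */
static int
offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
{
	/* Linear case: the filled region is [head, tail] */
	if (tail > head && offset <= tail && offset >= head)
		return 1;
	/* Wrapped case: the filled region is [head, qlen) U [0, tail] */
	if (head > tail && (offset >= head || offset <= tail))
		return 1;
	return 0;
}

/* E.g. on a 256-deep ring with head=250 and tail=5, offsets 251 and 3
 * report a packet while offset 100 does not.
 */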
* [dpdk-dev] [PATCH v3 28/58] net/octeontx2: add module EEPROM dump
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (26 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 27/58] net/octeontx2: add Rx and Tx descriptor operations jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 29/58] net/octeontx2: add flow control support jerinj
` (30 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add the module EEPROM dump operation.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 2 +
drivers/net/octeontx2/otx2_ethdev.h | 4 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 51 ++++++++++++++++++++++
6 files changed, 60 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index f07b64f24..87141244a 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -26,6 +26,7 @@ Rx descriptor status = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
+Module EEPROM dump = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 911c926e4..dafbe003c 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -26,6 +26,7 @@ Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+Module EEPROM dump = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index e275e6469..7fba7e1d9 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -22,6 +22,7 @@ Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+Module EEPROM dump = Y
Registers dump = Y
Linux VFIO = Y
ARMv8 = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 909aad65c..fcc2504bf 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1319,6 +1319,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.rx_descriptor_status = otx2_nix_rx_descriptor_status,
.tx_done_cleanup = otx2_nix_tx_done_cleanup,
.pool_ops_supported = otx2_nix_pool_ops_supported,
+ .get_module_info = otx2_nix_get_module_info,
+ .get_module_eeprom = otx2_nix_get_module_eeprom,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 8f2691c80..5dd5d8c8b 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -274,6 +274,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
/* Ops */
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_module_info *modinfo);
+int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
+ struct rte_dev_eeprom_info *info);
int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 44cc17200..2a949439a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -220,6 +220,57 @@ otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
return -ENOTSUP;
}
+static struct cgx_fw_data *
+nix_get_fwdata(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_fw_data *rsp = NULL;
+
+ otx2_mbox_alloc_msg_cgx_get_aux_link_info(mbox);
+
+ otx2_mbox_process_msg(mbox, (void *)&rsp);
+
+ return rsp;
+}
+
+int
+otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_module_info *modinfo)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct cgx_fw_data *rsp;
+
+ rsp = nix_get_fwdata(dev);
+ if (rsp == NULL)
+ return -EIO;
+
+ modinfo->type = rsp->fwdata.sfp_eeprom.sff_id;
+ modinfo->eeprom_len = SFP_EEPROM_SIZE;
+
+ return 0;
+}
+
+int
+otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
+ struct rte_dev_eeprom_info *info)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct cgx_fw_data *rsp;
+
+ if (!info->data || !info->length ||
+ (info->offset + info->length > SFP_EEPROM_SIZE))
+ return -EINVAL;
+
+ rsp = nix_get_fwdata(dev);
+ if (rsp == NULL)
+ return -EIO;
+
+ otx2_mbox_memcpy(info->data, rsp->fwdata.sfp_eeprom.buf + info->offset,
+ info->length);
+
+ return 0;
+}
+
void
otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
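The module EEPROM callbacks are exercised through the generic ethdev API; an application-side sketch (the buffer handling is illustrative only):
#include <rte_ethdev.h>

static int
read_module_eeprom(uint16_t port_id, uint8_t *buf, uint32_t len)
{
	struct rte_eth_dev_module_info modinfo;
	struct rte_dev_eeprom_info info = { 0 };
	int rc;

	/* Dispatches to otx2_nix_get_module_info() */
	rc = rte_eth_dev_get_module_info(port_id, &modinfo);
	if (rc)
		return rc;

	info.data = buf;
	info.offset = 0;
	info.length = RTE_MIN(len, modinfo.eeprom_len);
	/* Dispatches to otx2_nix_get_module_eeprom() */
	return rte_eth_dev_get_module_eeprom(port_id, &info);
}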
* [dpdk-dev] [PATCH v3 29/58] net/octeontx2: add flow control support
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (27 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 28/58] net/octeontx2: add module EEPROM dump jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 30/58] net/octeontx2: add PTP base support jerinj
` (29 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add flow control operations and expose
otx2_nix_update_flow_ctrl_mode() so that dev_start() can enable the
configured mode.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 20 ++
drivers/net/octeontx2/otx2_ethdev.h | 23 +++
drivers/net/octeontx2/otx2_flow_ctrl.c | 220 +++++++++++++++++++++
8 files changed, 268 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 87141244a..00feb0cf2 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -21,6 +21,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Flow control = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index dafbe003c..f3f812804 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -21,6 +21,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Flow control = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 07e44b031..20281b030 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -25,6 +25,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- MAC filtering
- Port hardware statistics
- Link state information
+- Link flow control
- Debug utilities - Context dump and error interrupt support
Prerequisites
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index dd0f2b9ca..582857459 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -38,6 +38,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_stats.c \
otx2_lookup.c \
otx2_ethdev.c \
+ otx2_flow_ctrl.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
otx2_ethdev_debug.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 3dff3e53d..4b56f4461 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -10,6 +10,7 @@ sources = files(
'otx2_stats.c',
'otx2_lookup.c',
'otx2_ethdev.c',
+ 'otx2_flow_ctrl.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
'otx2_ethdev_debug.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index fcc2504bf..25469c5f9 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -216,6 +216,14 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
+ /* TX pause frames enable flowctrl on RX side */
+ if (dev->fc_info.tx_pause) {
+ /* Single bpid is allocated for all rx channels for now */
+ aq->cq.bpid = dev->fc_info.bpid[0];
+ aq->cq.bp = NIX_CQ_BP_LEVEL;
+ aq->cq.bp_ena = 1;
+ }
+
/* Many to one reduction */
aq->cq.qint_idx = qid % dev->qints;
@@ -1092,6 +1100,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
/* Free the resources allocated from the previous configure */
if (dev->configured == 1) {
+ otx2_nix_rxchan_bpid_cfg(eth_dev, false);
oxt2_nix_unregister_queue_irqs(eth_dev);
nix_set_nop_rxtx_function(eth_dev);
rc = nix_store_queue_cfg_and_then_release(eth_dev);
@@ -1145,6 +1154,12 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true);
+ if (rc) {
+ otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/*
* Restore queue config when reconfigure followed by
* reconfigure and no queue configure invoked from application case.
@@ -1321,6 +1336,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.pool_ops_supported = otx2_nix_pool_ops_supported,
.get_module_info = otx2_nix_get_module_info,
.get_module_eeprom = otx2_nix_get_module_eeprom,
+ .flow_ctrl_get = otx2_nix_flow_ctrl_get,
+ .flow_ctrl_set = otx2_nix_flow_ctrl_set,
};
static inline int
@@ -1522,6 +1539,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Disable nix bpid config */
+ otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+
/* Free up SQs */
for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 5dd5d8c8b..03ecd32ec 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -87,6 +87,9 @@
#define NIX_TX_NB_SEG_MAX 9
#endif
+/* Apply BP when CQ is 75% full */
+#define NIX_CQ_BP_LEVEL (25 * 256 / 100)
+
#define CQ_OP_STAT_OP_ERR 63
#define CQ_OP_STAT_CQ_ERR 46
@@ -169,6 +172,14 @@ struct otx2_npc_flow_info {
uint16_t flow_max_priority;
};
+struct otx2_fc_info {
+ enum rte_eth_fc_mode mode; /**< Link flow control mode */
+ uint8_t rx_pause;
+ uint8_t tx_pause;
+ uint8_t chan_cnt;
+ uint16_t bpid[NIX_MAX_CHAN];
+};
+
struct otx2_eth_dev {
OTX2_DEV; /* Base class */
MARKER otx2_eth_dev_data_start;
@@ -216,6 +227,7 @@ struct otx2_eth_dev {
struct otx2_nix_tm_node_list node_list;
struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
struct otx2_rss_info rss_info;
+ struct otx2_fc_info fc_info;
uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
struct otx2_npc_flow_info npc_flow;
@@ -368,6 +380,17 @@ int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *addr);
+/* Flow Control */
+int otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fc_conf *fc_conf);
+
+int otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fc_conf *fc_conf);
+
+int otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb);
+
+int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev);
+
/* Lookup configuration */
void *otx2_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
new file mode 100644
index 000000000..0392086d8
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -0,0 +1,220 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+int
+otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_fc_info *fc = &dev->fc_info;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_bp_cfg_req *req;
+ struct nix_bp_cfg_rsp *rsp;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ if (enb) {
+ req = otx2_mbox_alloc_msg_nix_bp_enable(mbox);
+ req->chan_base = 0;
+ req->chan_cnt = 1;
+ req->bpid_per_chan = 0;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc || req->chan_cnt != rsp->chan_cnt) {
+ otx2_err("Insufficient BPIDs, alloc=%u < req=%u rc=%d",
+ rsp->chan_cnt, req->chan_cnt, rc);
+ return rc;
+ }
+
+ fc->bpid[0] = rsp->chan_bpid[0];
+ } else {
+ req = otx2_mbox_alloc_msg_nix_bp_disable(mbox);
+ req->chan_base = 0;
+ req->chan_cnt = 1;
+
+ rc = otx2_mbox_process(mbox);
+
+ memset(fc->bpid, 0, sizeof(uint16_t) * NIX_MAX_CHAN);
+ }
+
+ return rc;
+}
+
+int
+otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fc_conf *fc_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct cgx_pause_frm_cfg *req, *rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
+ req->set = 0;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ goto done;
+
+ if (rsp->rx_pause && rsp->tx_pause)
+ fc_conf->mode = RTE_FC_FULL;
+ else if (rsp->rx_pause)
+ fc_conf->mode = RTE_FC_RX_PAUSE;
+ else if (rsp->tx_pause)
+ fc_conf->mode = RTE_FC_TX_PAUSE;
+ else
+ fc_conf->mode = RTE_FC_NONE;
+
+done:
+ return rc;
+}
+
+static int
+otx2_nix_cq_bp_cfg(struct rte_eth_dev *eth_dev, bool enb)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_fc_info *fc = &dev->fc_info;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_aq_enq_req *aq;
+ struct otx2_eth_rxq *rxq;
+ int i, rc;
+
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rxq = eth_dev->data->rx_queues[i];
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry.
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq)
+ return -ENOMEM;
+ }
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ if (enb) {
+ aq->cq.bpid = fc->bpid[0];
+ aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
+ aq->cq.bp = NIX_CQ_BP_LEVEL;
+ aq->cq_mask.bp = ~(aq->cq_mask.bp);
+ }
+
+ aq->cq.bp_ena = !!enb;
+ aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena);
+ }
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+otx2_nix_rx_fc_cfg(struct rte_eth_dev *eth_dev, bool enb)
+{
+ return otx2_nix_cq_bp_cfg(eth_dev, enb);
+}
+
+int
+otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fc_conf *fc_conf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_fc_info *fc = &dev->fc_info;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct cgx_pause_frm_cfg *req;
+ uint8_t tx_pause, rx_pause;
+ int rc = 0;
+
+ if (otx2_dev_is_vf(dev))
+ return -ENOTSUP;
+
+ if (fc_conf->high_water || fc_conf->low_water || fc_conf->pause_time ||
+ fc_conf->mac_ctrl_frame_fwd || fc_conf->autoneg) {
+ otx2_info("Flowctrl parameter is not supported");
+ return -EINVAL;
+ }
+
+ if (fc_conf->mode == fc->mode)
+ return 0;
+
+ rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
+ (fc_conf->mode == RTE_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
+ (fc_conf->mode == RTE_FC_TX_PAUSE);
+
+ /* Check if TX pause frame is already enabled or not */
+ if (fc->tx_pause ^ tx_pause) {
+ if (otx2_dev_is_A0(dev) && eth_dev->data->dev_started) {
+ /* On A0, the CQ should be in a disabled state
+ * while setting flow control configuration.
+ */
+ otx2_info("Stop the port=%d for setting flow control\n",
+ eth_dev->data->port_id);
+ return 0;
+ }
+ /* TX pause frames, enable/disable flowctrl on RX side. */
+ rc = otx2_nix_rx_fc_cfg(eth_dev, tx_pause);
+ if (rc)
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
+ req->set = 1;
+ req->rx_pause = rx_pause;
+ req->tx_pause = tx_pause;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ fc->tx_pause = tx_pause;
+ fc->rx_pause = rx_pause;
+ fc->mode = fc_conf->mode;
+
+ return rc;
+}
+
+int
+otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_fc_conf fc_conf;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
+ /* Both Rx & Tx flow ctrl are enabled (RTE_FC_FULL) in HW
+ * by the AF driver; update that info in the PMD structure.
+ */
+ otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
+
+ /* To avoid Link credit deadlock on A0, disable Tx FC if it's enabled */
+ if (otx2_dev_is_A0(dev) &&
+ (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+ fc_conf.mode =
+ (fc_conf.mode == RTE_FC_FULL ||
+ fc_conf.mode == RTE_FC_TX_PAUSE) ?
+ RTE_FC_TX_PAUSE : RTE_FC_NONE;
+ }
+
+ return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
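A note on the backpressure threshold above: NIX_CQ_BP_LEVEL works out to 25 * 256 / 100 = 64, i.e. 25% of 256; given the "75% full" comment, the level is evidently programmed in 1/256ths of the queue depth, asserting backpressure once free space falls to about 25%. On the API side, the callbacks are driven through the usual ethdev entry points; a minimal application-side sketch:
#include <rte_ethdev.h>

static int
enable_link_flow_ctrl(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf = { 0 };

	/* This PMD rejects high_water/low_water/pause_time/autoneg,
	 * so only the mode is set here.
	 */
	fc_conf.mode = RTE_FC_FULL;
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}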
* [dpdk-dev] [PATCH v3 30/58] net/octeontx2: add PTP base support
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (28 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 29/58] net/octeontx2: add flow control support jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 31/58] net/octeontx2: add remaining PTP operations jerinj
` (28 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
Cc: Harman Kalra, Zyta Szpak
From: Harman Kalra <hkalra@marvell.com>
Add PTP enable and disable operations.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Zyta Szpak <zyta@marvell.com>
---
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 22 ++++-
drivers/net/octeontx2/otx2_ethdev.h | 17 ++++
drivers/net/octeontx2/otx2_ptp.c | 135 ++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 11 +++
7 files changed, 185 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_ptp.c
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 20281b030..41eb3c7b9 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -27,6 +27,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Link state information
- Link flow control
- Debug utilities - Context dump and error interrupt support
+- IEEE1588 timestamping
Prerequisites
-------------
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 582857459..f950fca14 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -34,6 +34,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_tm.c \
otx2_rss.c \
otx2_mac.c \
+ otx2_ptp.c \
otx2_link.c \
otx2_stats.c \
otx2_lookup.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 4b56f4461..2cac57d2b 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -6,6 +6,7 @@ sources = files(
'otx2_tm.c',
'otx2_rss.c',
'otx2_mac.c',
+ 'otx2_ptp.c',
'otx2_link.c',
'otx2_stats.c',
'otx2_lookup.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 25469c5f9..6ab8ed79d 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -336,9 +336,7 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
static inline int
nix_get_data_off(struct otx2_eth_dev *dev)
{
- RTE_SET_USED(dev);
-
- return 0;
+ return otx2_ethdev_is_ptp_en(dev) ? NIX_TIMESYNC_RX_OFFSET : 0;
}
uint64_t
@@ -450,6 +448,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
rxq->qlen = nix_qsize_to_val(qsize);
rxq->qsize = qsize;
rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get();
+ rxq->tstamp = &dev->tstamp;
/* Alloc completion queue */
rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
@@ -736,6 +735,7 @@ otx2_nix_form_default_desc(struct otx2_eth_txq *txq)
send_mem->dsz = 0x0;
send_mem->wmem = 0x1;
send_mem->alg = NIX_SENDMEMALG_SETTSTMP;
+ send_mem->addr = txq->dev->tstamp.tx_tstamp_iova;
}
sg = (union nix_send_sg_s *)&txq->cmd[4];
} else {
@@ -1160,6 +1160,16 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /* Enable PTP if it was requested by the app or if it is already
+ * enabled in the PF owning this VF
+ */
+ memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
+ if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+ otx2_ethdev_is_ptp_en(dev))
+ otx2_nix_timesync_enable(eth_dev);
+ else
+ otx2_nix_timesync_disable(eth_dev);
+
/*
* Restore queue config when reconfigure followed by
* reconfigure and no queue configure invoked from application case.
@@ -1338,6 +1348,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.get_module_eeprom = otx2_nix_get_module_eeprom,
.flow_ctrl_get = otx2_nix_flow_ctrl_get,
.flow_ctrl_set = otx2_nix_flow_ctrl_set,
+ .timesync_enable = otx2_nix_timesync_enable,
+ .timesync_disable = otx2_nix_timesync_disable,
};
static inline int
@@ -1542,6 +1554,10 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
/* Disable nix bpid config */
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ /* Disable PTP if already enabled */
+ if (otx2_ethdev_is_ptp_en(dev))
+ otx2_nix_timesync_disable(eth_dev);
+
/* Free up SQs */
for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 03ecd32ec..1ca28add4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -13,6 +13,7 @@
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_string_fns.h>
+#include <rte_time.h>
#include "otx2_common.h"
#include "otx2_dev.h"
@@ -128,6 +129,12 @@
#define NIX_DEFAULT_RSS_CTX_GROUP 0
#define NIX_DEFAULT_RSS_MCAM_IDX -1
+#define otx2_ethdev_is_ptp_en(dev) ((dev)->ptp_en)
+
+#define NIX_TIMESYNC_TX_CMD_LEN 8
+/* Additional timesync values. */
+#define OTX2_CYCLECOUNTER_MASK 0xffffffffffffffffULL
+
enum nix_q_size_e {
nix_q_size_16, /* 16 entries */
nix_q_size_64, /* 64 entries */
@@ -234,6 +241,12 @@ struct otx2_eth_dev {
struct otx2_eth_qconf *tx_qconf;
struct otx2_eth_qconf *rx_qconf;
struct rte_eth_dev *eth_dev;
+ /* PTP counters */
+ bool ptp_en;
+ struct otx2_timesync_info tstamp;
+ struct rte_timecounter systime_tc;
+ struct rte_timecounter rx_tstamp_tc;
+ struct rte_timecounter tx_tstamp_tc;
} __rte_cache_aligned;
struct otx2_eth_txq {
@@ -414,4 +427,8 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
/* Rx and Tx routines */
void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
+/* Timesync - PTP routines */
+int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev);
+int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev);
+
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
new file mode 100644
index 000000000..105067949
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_ethdev_driver.h>
+
+#include "otx2_ethdev.h"
+
+#define PTP_FREQ_ADJUST (1 << 9)
+
+static void
+nix_start_timecounters(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ memset(&dev->systime_tc, 0, sizeof(struct rte_timecounter));
+ memset(&dev->rx_tstamp_tc, 0, sizeof(struct rte_timecounter));
+ memset(&dev->tx_tstamp_tc, 0, sizeof(struct rte_timecounter));
+
+ dev->systime_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
+ dev->rx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
+ dev->tx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
+}
+
+static int
+nix_ptp_config(struct rte_eth_dev *eth_dev, int en)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc = 0;
+
+ if (otx2_dev_is_vf(dev))
+ return rc;
+
+ if (en) {
+ /* Enable time stamping of sent PTP packets. */
+ otx2_mbox_alloc_msg_nix_lf_ptp_tx_enable(mbox);
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("MBOX ptp tx conf enable failed: err %d", rc);
+ return rc;
+ }
+ /* Enable time stamping of received PTP packets. */
+ otx2_mbox_alloc_msg_cgx_ptp_rx_enable(mbox);
+ } else {
+ /* Disable time stamping of sent PTP packets. */
+ otx2_mbox_alloc_msg_nix_lf_ptp_tx_disable(mbox);
+ rc = otx2_mbox_process(mbox);
+ if (rc) {
+ otx2_err("MBOX ptp tx conf disable failed: err %d", rc);
+ return rc;
+ }
+ /* Disable time stamping of received PTP packets. */
+ otx2_mbox_alloc_msg_cgx_ptp_rx_disable(mbox);
+ }
+
+ return otx2_mbox_process(mbox);
+}
+
+int
+otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int i, rc = 0;
+
+ if (otx2_ethdev_is_ptp_en(dev)) {
+ otx2_info("PTP mode is already enabled ");
+ return -EINVAL;
+ }
+
+ /* If we are VF, no further action can be taken */
+ if (otx2_dev_is_vf(dev))
+ return -EINVAL;
+
+ if (!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)) {
+ otx2_err("Ptype offload is disabled, it should be enabled");
+ return -EINVAL;
+ }
+
+ /* Allocating an iova address for tx tstamp */
+ const struct rte_memzone *ts;
+ ts = rte_eth_dma_zone_reserve(eth_dev, "otx2_ts",
+ 0, OTX2_ALIGN, OTX2_ALIGN,
+ dev->node);
+ if (ts == NULL) {
+ otx2_err("Failed to allocate mem for tx tstamp addr");
+ return -ENOMEM;
+ }
+
+ dev->tstamp.tx_tstamp_iova = ts->iova;
+ dev->tstamp.tx_tstamp = ts->addr;
+
+ /* System time should be already on by default */
+ nix_start_timecounters(eth_dev);
+
+ dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
+ dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
+
+ rc = nix_ptp_config(eth_dev, 1);
+ if (!rc) {
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
+ otx2_nix_form_default_desc(txq);
+ }
+ }
+ return rc;
+}
+
+int
+otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int i, rc = 0;
+
+ if (!otx2_ethdev_is_ptp_en(dev)) {
+ otx2_nix_dbg("PTP mode is disabled");
+ return -EINVAL;
+ }
+
+ /* If we are VF, nothing else can be done */
+ if (otx2_dev_is_vf(dev))
+ return -EINVAL;
+
+ dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+ dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
+ dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
+
+ rc = nix_ptp_config(eth_dev, 0);
+ if (!rc) {
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
+ otx2_nix_form_default_desc(txq);
+ }
+ }
+ return rc;
+}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 1283fdf37..0c3627c12 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -13,5 +13,16 @@
sizeof(uint16_t))
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
+
+#define NIX_TIMESYNC_RX_OFFSET 8
+
+struct otx2_timesync_info {
+ uint64_t rx_tstamp;
+ rte_iova_t tx_tstamp_iova;
+ uint64_t *tx_tstamp;
+ uint8_t tx_ready;
+ uint8_t rx_ready;
+} __rte_cache_aligned;
#endif /* __OTX2_RX_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
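The timecounters initialized in nix_start_timecounters() are the standard rte_time helpers; a minimal sketch of turning a raw 64-bit HW timestamp into a timespec (raw_tstamp stands in for the value the HW delivers):
#include <time.h>
#include <rte_time.h>

static struct timespec
hw_tstamp_to_timespec(struct rte_timecounter *tc, uint64_t raw_tstamp)
{
	uint64_t ns;

	/* rte_timecounter_update() tracks the counter across wraps
	 * using tc->cc_mask (OTX2_CYCLECOUNTER_MASK above).
	 */
	ns = rte_timecounter_update(tc, raw_tstamp);
	return rte_ns_to_timespec(ns);
}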
* [dpdk-dev] [PATCH v3 31/58] net/octeontx2: add remaining PTP operations
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (29 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 30/58] net/octeontx2: add PTP base support jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 32/58] net/octeontx2: introducing flow driver jerinj
` (27 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Harman Kalra, Zyta Szpak
From: Harman Kalra <hkalra@marvell.com>
Add the remaining PTP configuration/slowpath operations.
The timesync feature is available only on PF devices.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Zyta Szpak <zyta@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
drivers/net/octeontx2/otx2_ethdev.c | 6 ++
drivers/net/octeontx2/otx2_ethdev.h | 11 +++
drivers/net/octeontx2/otx2_ptp.c | 130 +++++++++++++++++++++++++
4 files changed, 149 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 00feb0cf2..46fb00be6 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
Inner RSS = Y
Flow control = Y
Packet type parsing = Y
+Timesync = Y
+Timestamp offload = Y
Rx descriptor status = Y
Basic stats = Y
Stats per queue = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 6ab8ed79d..834b052c6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -47,6 +47,7 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
static const struct otx2_dev_ops otx2_dev_ops = {
.link_status_update = otx2_eth_dev_link_status_update,
+ .ptp_info_update = otx2_eth_dev_ptp_info_update
};
static int
@@ -1350,6 +1351,11 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.flow_ctrl_set = otx2_nix_flow_ctrl_set,
.timesync_enable = otx2_nix_timesync_enable,
.timesync_disable = otx2_nix_timesync_disable,
+ .timesync_read_rx_timestamp = otx2_nix_timesync_read_rx_timestamp,
+ .timesync_read_tx_timestamp = otx2_nix_timesync_read_tx_timestamp,
+ .timesync_adjust_time = otx2_nix_timesync_adjust_time,
+ .timesync_read_time = otx2_nix_timesync_read_time,
+ .timesync_write_time = otx2_nix_timesync_write_time,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 1ca28add4..8f8d93a39 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -430,5 +430,16 @@ void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
/* Timesync - PTP routines */
int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev);
int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev);
+int otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
+ struct timespec *timestamp,
+ uint32_t flags);
+int otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
+ struct timespec *timestamp);
+int otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta);
+int otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
+ const struct timespec *ts);
+int otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev,
+ struct timespec *ts);
+int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en);
#endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 105067949..5291da241 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -57,6 +57,23 @@ nix_ptp_config(struct rte_eth_dev *eth_dev, int en)
return otx2_mbox_process(mbox);
}
+int
+otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en)
+{
+ struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
+ struct rte_eth_dev *eth_dev = otx2_dev->eth_dev;
+ int i;
+
+ otx2_dev->ptp_en = ptp_en;
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[i];
+ rxq->mbuf_initializer =
+ otx2_nix_rxq_mbuf_setup(otx2_dev,
+ eth_dev->data->port_id);
+ }
+ return 0;
+}
+
int
otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
{
@@ -133,3 +150,116 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
}
return rc;
}
+
+int
+otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
+ struct timespec *timestamp,
+ uint32_t __rte_unused flags)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_timesync_info *tstamp = &dev->tstamp;
+ uint64_t ns;
+
+ if (!tstamp->rx_ready)
+ return -EINVAL;
+
+ ns = rte_timecounter_update(&dev->rx_tstamp_tc, tstamp->rx_tstamp);
+ *timestamp = rte_ns_to_timespec(ns);
+ tstamp->rx_ready = 0;
+
+ otx2_nix_dbg("rx timestamp: %llu sec: %lu nsec %lu",
+ (unsigned long long)tstamp->rx_tstamp, timestamp->tv_sec,
+ timestamp->tv_nsec);
+
+ return 0;
+}
+
+int
+otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
+ struct timespec *timestamp)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_timesync_info *tstamp = &dev->tstamp;
+ uint64_t ns;
+
+ if (*tstamp->tx_tstamp == 0)
+ return -EINVAL;
+
+ ns = rte_timecounter_update(&dev->tx_tstamp_tc, *tstamp->tx_tstamp);
+ *timestamp = rte_ns_to_timespec(ns);
+
+ otx2_nix_dbg("tx timestamp: %llu sec: %lu nsec %lu",
+ *(unsigned long long *)tstamp->tx_tstamp,
+ timestamp->tv_sec, timestamp->tv_nsec);
+
+ *tstamp->tx_tstamp = 0;
+ rte_wmb();
+
+ return 0;
+}
+
+int
+otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct ptp_req *req;
+ struct ptp_rsp *rsp;
+ int rc;
+
+ /* Adjust the frequency to keep the tick increments at 10^9 ticks per sec */
+ if (delta < PTP_FREQ_ADJUST && delta > -PTP_FREQ_ADJUST) {
+ req = otx2_mbox_alloc_msg_ptp_op(mbox);
+ req->op = PTP_OP_ADJFINE;
+ req->scaled_ppm = delta;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+ }
+ dev->systime_tc.nsec += delta;
+ dev->rx_tstamp_tc.nsec += delta;
+ dev->tx_tstamp_tc.nsec += delta;
+
+ return 0;
+}
+
+int
+otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
+ const struct timespec *ts)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t ns;
+
+ ns = rte_timespec_to_ns(ts);
+ /* Set the time counters to a new value. */
+ dev->systime_tc.nsec = ns;
+ dev->rx_tstamp_tc.nsec = ns;
+ dev->tx_tstamp_tc.nsec = ns;
+
+ return 0;
+}
+
+int
+otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev, struct timespec *ts)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct ptp_req *req;
+ struct ptp_rsp *rsp;
+ uint64_t ns;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_ptp_op(mbox);
+ req->op = PTP_OP_GET_CLOCK;
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ ns = rte_timecounter_update(&dev->systime_tc, rsp->clk);
+ *ts = rte_ns_to_timespec(ns);
+
+ otx2_nix_dbg("PTP time read: %ld.%09ld", ts->tv_sec, ts->tv_nsec);
+
+ return 0;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
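The slowpath callbacks above map one-to-one onto the ethdev timesync API; an application-side sketch:
#include <stdio.h>
#include <rte_ethdev.h>

static int
ptp_clock_demo(uint16_t port_id)
{
	struct timespec ts;
	int rc;

	rc = rte_eth_timesync_enable(port_id);
	if (rc)
		return rc;

	rc = rte_eth_timesync_read_time(port_id, &ts);
	if (rc == 0)
		printf("PTP time %ld.%09ld\n",
		       (long)ts.tv_sec, ts.tv_nsec);

	/* Nudge the clock forward by 1 microsecond */
	return rte_eth_timesync_adjust_time(port_id, 1000);
}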
* [dpdk-dev] [PATCH v3 32/58] net/octeontx2: introducing flow driver
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (30 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 31/58] net/octeontx2: add remaining PTP operations jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 33/58] net/octeontx2: add flow utility functions jerinj
` (26 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Introduce the flow infrastructure for octeontx2.
This will be used to maintain rte_flow rules.
The create, destroy, validate, query, flush and isolate flow
operations will be supported.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow.h | 388 ++++++++++++++++++++++++++++++
1 file changed, 388 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_flow.h
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
new file mode 100644
index 000000000..95bb6c2bf
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -0,0 +1,388 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_FLOW_H__
+#define __OTX2_FLOW_H__
+
+#include <stdint.h>
+
+#include <rte_flow_driver.h>
+#include <rte_malloc.h>
+#include <rte_tailq.h>
+
+#include "otx2_common.h"
+#include "otx2_ethdev.h"
+#include "otx2_mbox.h"
+
+int otx2_flow_init(struct otx2_eth_dev *hw);
+int otx2_flow_fini(struct otx2_eth_dev *hw);
+extern const struct rte_flow_ops otx2_flow_ops;
+
+enum {
+ OTX2_INTF_RX = 0,
+ OTX2_INTF_TX = 1,
+ OTX2_INTF_MAX = 2,
+};
+
+#define NPC_IH_LENGTH 8
+#define NPC_TPID_LENGTH 2
+#define NPC_COUNTER_NONE (-1)
+/* 32 bytes from LDATA_CFG & 32 bytes from FLAGS_CFG */
+#define NPC_MAX_EXTRACT_DATA_LEN (64)
+#define NPC_LDATA_LFLAG_LEN (16)
+#define NPC_MCAM_TOT_ENTRIES (4096)
+#define NPC_MAX_KEY_NIBBLES (31)
+/* Nibble offsets */
+#define NPC_LAYER_KEYX_SZ (3)
+#define NPC_PARSE_KEX_S_LA_OFFSET (7)
+#define NPC_PARSE_KEX_S_LID_OFFSET(lid) \
+ ((((lid) - NPC_LID_LA) * NPC_LAYER_KEYX_SZ) \
+ + NPC_PARSE_KEX_S_LA_OFFSET)
+
+
+/* supported flow actions flags */
+#define OTX2_FLOW_ACT_MARK (1 << 0)
+#define OTX2_FLOW_ACT_FLAG (1 << 1)
+#define OTX2_FLOW_ACT_DROP (1 << 2)
+#define OTX2_FLOW_ACT_QUEUE (1 << 3)
+#define OTX2_FLOW_ACT_RSS (1 << 4)
+#define OTX2_FLOW_ACT_DUP (1 << 5)
+#define OTX2_FLOW_ACT_SEC (1 << 6)
+#define OTX2_FLOW_ACT_COUNT (1 << 7)
+
+/* terminating actions */
+#define OTX2_FLOW_ACT_TERM (OTX2_FLOW_ACT_DROP | \
+ OTX2_FLOW_ACT_QUEUE | \
+ OTX2_FLOW_ACT_RSS | \
+ OTX2_FLOW_ACT_DUP | \
+ OTX2_FLOW_ACT_SEC)
+
+/* This mark value indicates flag action */
+#define OTX2_FLOW_FLAG_VAL (0xffff)
+
+#define NIX_RX_ACT_MATCH_OFFSET (40)
+#define NIX_RX_ACT_MATCH_MASK (0xFFFF)
+
+#define NIX_RSS_ACT_GRP_OFFSET (20)
+#define NIX_RSS_ACT_ALG_OFFSET (56)
+#define NIX_RSS_ACT_GRP_MASK (0xFFFFF)
+#define NIX_RSS_ACT_ALG_MASK (0x1F)
+
+/* PMD-specific definition of the opaque struct rte_flow */
+#define OTX2_MAX_MCAM_WIDTH_DWORDS 7
+
+enum npc_mcam_intf {
+ NPC_MCAM_RX,
+ NPC_MCAM_TX
+};
+
+struct npc_xtract_info {
+ /* Length in bytes of pkt data extracted. len = 0
+ * indicates that extraction is disabled.
+ */
+ uint8_t len;
+ uint8_t hdr_off; /* Byte offset of proto hdr: extract_src */
+ uint8_t key_off; /* Byte offset in MCAM key where data is placed */
+ uint8_t enable; /* Extraction enabled or disabled */
+};
+
+/* Information for a given {LAYER, LTYPE} */
+struct npc_lid_lt_xtract_info {
+ /* Info derived from parser configuration */
+ uint16_t npc_proto; /* Network protocol identified */
+ uint8_t valid_flags_mask; /* Flags applicable */
+ uint8_t is_terminating:1; /* No more parsing */
+ struct npc_xtract_info xtract[NPC_MAX_LD];
+};
+
+union npc_kex_ldata_flags_cfg {
+ struct {
+ #if defined(__BIG_ENDIAN_BITFIELD)
+ uint64_t rvsd_62_1 : 61;
+ uint64_t lid : 3;
+ #else
+ uint64_t lid : 3;
+ uint64_t rvsd_62_1 : 61;
+ #endif
+ } s;
+
+ uint64_t i;
+};
+
+typedef struct npc_lid_lt_xtract_info
+ otx2_dxcfg_t[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT];
+typedef struct npc_lid_lt_xtract_info
+ otx2_fxcfg_t[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
+typedef union npc_kex_ldata_flags_cfg otx2_ld_flags_t[NPC_MAX_LD];
+
+
+/* MBOX_MSG_NPC_GET_DATAX_CFG Response */
+struct npc_get_datax_cfg {
+ /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
+ union npc_kex_ldata_flags_cfg ld_flags[NPC_MAX_LD];
+ /* Extract information indexed with [LID][LTYPE] */
+ struct npc_lid_lt_xtract_info lid_lt_xtract[NPC_MAX_LID][NPC_MAX_LT];
+ /* Flags based extract indexed with [LDATA][FLAGS_LOWER_NIBBLE]
+ * Fields flags_ena_ld0, flags_ena_ld1 in
+ * struct npc_lid_lt_xtract_info indicate if this is applicable
+ * for a given {LAYER, LTYPE}
+ */
+ struct npc_xtract_info flag_xtract[NPC_MAX_LD][NPC_MAX_LT];
+};
+
+struct otx2_mcam_ents_info {
+ /* Current max & min values of mcam index */
+ uint32_t max_id;
+ uint32_t min_id;
+ uint32_t free_ent;
+ uint32_t live_ent;
+};
+
+struct rte_flow {
+ uint8_t nix_intf;
+ uint32_t mcam_id;
+ int32_t ctr_id;
+ uint32_t priority;
+ /* Contiguous match string */
+ uint64_t mcam_data[OTX2_MAX_MCAM_WIDTH_DWORDS];
+ uint64_t mcam_mask[OTX2_MAX_MCAM_WIDTH_DWORDS];
+ uint64_t npc_action;
+ TAILQ_ENTRY(rte_flow) next;
+};
+
+TAILQ_HEAD(otx2_flow_list, rte_flow);
+
+/* Accessed from ethdev private - otx2_eth_dev */
+struct otx2_npc_flow_info {
+ rte_atomic32_t mark_actions;
+ uint32_t keyx_supp_nmask[NPC_MAX_INTF];/* nibble mask */
+ uint32_t keyx_len[NPC_MAX_INTF]; /* per intf key len in bits */
+ uint32_t datax_len[NPC_MAX_INTF]; /* per intf data len in bits */
+ uint32_t keyw[NPC_MAX_INTF]; /* max key + data len bits */
+ uint32_t mcam_entries; /* mcam entries supported */
+ otx2_dxcfg_t prx_dxcfg; /* intf, lid, lt, extract */
+ otx2_fxcfg_t prx_fxcfg; /* Flag extract */
+ otx2_ld_flags_t prx_lfcfg; /* KEX LD_Flags CFG */
+ /* mcam entry info per priority level: both free & in-use */
+ struct otx2_mcam_ents_info *flow_entry_info;
+ /* Bitmap of free preallocated entries in ascending index &
+ * descending priority
+ */
+ struct rte_bitmap **free_entries;
+ /* Bitmap of free preallocated entries in descending index &
+ * ascending priority
+ */
+ struct rte_bitmap **free_entries_rev;
+ /* Bitmap of live entries in ascending index & descending priority */
+ struct rte_bitmap **live_entries;
+ /* Bitmap of live entries in descending index & ascending priority */
+ struct rte_bitmap **live_entries_rev;
+ /* Priority bucket wise tail queue of all rte_flow resources */
+ struct otx2_flow_list *flow_list;
+ uint32_t rss_grps; /* rss groups supported */
+ struct rte_bitmap *rss_grp_entries;
+ uint16_t channel; /* rx channel */
+ uint16_t flow_prealloc_size;
+ uint16_t flow_max_priority;
+};
+
+struct otx2_parse_state {
+ struct otx2_npc_flow_info *npc;
+ const struct rte_flow_item *pattern;
+ const struct rte_flow_item *last_pattern; /* Temp usage */
+ struct rte_flow_error *error;
+ struct rte_flow *flow;
+ uint8_t tunnel;
+ uint8_t terminate;
+ uint8_t layer_mask;
+ uint8_t lt[NPC_MAX_LID];
+ uint8_t flags[NPC_MAX_LID];
+ uint8_t *mcam_data; /* point to flow->mcam_data + key_len */
+ uint8_t *mcam_mask; /* point to flow->mcam_mask + key_len */
+};
+
+struct otx2_flow_item_info {
+ const void *def_mask; /* rte_flow default mask */
+ void *hw_mask; /* hardware supported mask */
+ int len; /* length of item */
+ const void *spec; /* spec to use, NULL implies match any */
+ const void *mask; /* mask to use */
+ uint8_t hw_hdr_len; /* Extra data len at each layer*/
+};
+
+struct otx2_idev_kex_cfg {
+ struct npc_get_kex_cfg_rsp kex_cfg;
+ rte_atomic16_t kex_refcnt;
+};
+
+enum npc_kpu_parser_flag {
+ NPC_F_NA = 0,
+ NPC_F_PKI,
+ NPC_F_PKI_VLAN,
+ NPC_F_PKI_ETAG,
+ NPC_F_PKI_ITAG,
+ NPC_F_PKI_MPLS,
+ NPC_F_PKI_NSH,
+ NPC_F_ETYPE_UNK,
+ NPC_F_ETHER_VLAN,
+ NPC_F_ETHER_ETAG,
+ NPC_F_ETHER_ITAG,
+ NPC_F_ETHER_MPLS,
+ NPC_F_ETHER_NSH,
+ NPC_F_STAG_CTAG,
+ NPC_F_STAG_CTAG_UNK,
+ NPC_F_STAG_STAG_CTAG,
+ NPC_F_STAG_STAG_STAG,
+ NPC_F_QINQ_CTAG,
+ NPC_F_QINQ_CTAG_UNK,
+ NPC_F_QINQ_QINQ_CTAG,
+ NPC_F_QINQ_QINQ_QINQ,
+ NPC_F_BTAG_ITAG,
+ NPC_F_BTAG_ITAG_STAG,
+ NPC_F_BTAG_ITAG_CTAG,
+ NPC_F_BTAG_ITAG_UNK,
+ NPC_F_ETAG_CTAG,
+ NPC_F_ETAG_BTAG_ITAG,
+ NPC_F_ETAG_STAG,
+ NPC_F_ETAG_QINQ,
+ NPC_F_ETAG_ITAG,
+ NPC_F_ETAG_ITAG_STAG,
+ NPC_F_ETAG_ITAG_CTAG,
+ NPC_F_ETAG_ITAG_UNK,
+ NPC_F_ITAG_STAG_CTAG,
+ NPC_F_ITAG_STAG,
+ NPC_F_ITAG_CTAG,
+ NPC_F_MPLS_4_LABELS,
+ NPC_F_MPLS_3_LABELS,
+ NPC_F_MPLS_2_LABELS,
+ NPC_F_IP_HAS_OPTIONS,
+ NPC_F_IP_IP_IN_IP,
+ NPC_F_IP_6TO4,
+ NPC_F_IP_MPLS_IN_IP,
+ NPC_F_IP_UNK_PROTO,
+ NPC_F_IP_IP_IN_IP_HAS_OPTIONS,
+ NPC_F_IP_6TO4_HAS_OPTIONS,
+ NPC_F_IP_MPLS_IN_IP_HAS_OPTIONS,
+ NPC_F_IP_UNK_PROTO_HAS_OPTIONS,
+ NPC_F_IP6_HAS_EXT,
+ NPC_F_IP6_TUN_IP6,
+ NPC_F_IP6_MPLS_IN_IP,
+ NPC_F_TCP_HAS_OPTIONS,
+ NPC_F_TCP_HTTP,
+ NPC_F_TCP_HTTPS,
+ NPC_F_TCP_PPTP,
+ NPC_F_TCP_UNK_PORT,
+ NPC_F_TCP_HTTP_HAS_OPTIONS,
+ NPC_F_TCP_HTTPS_HAS_OPTIONS,
+ NPC_F_TCP_PPTP_HAS_OPTIONS,
+ NPC_F_TCP_UNK_PORT_HAS_OPTIONS,
+ NPC_F_UDP_VXLAN,
+ NPC_F_UDP_VXLAN_NOVNI,
+ NPC_F_UDP_VXLAN_NOVNI_NSH,
+ NPC_F_UDP_VXLANGPE,
+ NPC_F_UDP_VXLANGPE_NSH,
+ NPC_F_UDP_VXLANGPE_MPLS,
+ NPC_F_UDP_VXLANGPE_NOVNI,
+ NPC_F_UDP_VXLANGPE_NOVNI_NSH,
+ NPC_F_UDP_VXLANGPE_NOVNI_MPLS,
+ NPC_F_UDP_VXLANGPE_UNK,
+ NPC_F_UDP_VXLANGPE_NONP,
+ NPC_F_UDP_GTP_GTPC,
+ NPC_F_UDP_GTP_GTPU_G_PDU,
+ NPC_F_UDP_GTP_GTPU_UNK,
+ NPC_F_UDP_UNK_PORT,
+ NPC_F_UDP_GENEVE,
+ NPC_F_UDP_GENEVE_OAM,
+ NPC_F_UDP_GENEVE_CRI_OPT,
+ NPC_F_UDP_GENEVE_OAM_CRI_OPT,
+ NPC_F_GRE_NVGRE,
+ NPC_F_GRE_HAS_SRE,
+ NPC_F_GRE_HAS_CSUM,
+ NPC_F_GRE_HAS_KEY,
+ NPC_F_GRE_HAS_SEQ,
+ NPC_F_GRE_HAS_CSUM_KEY,
+ NPC_F_GRE_HAS_CSUM_SEQ,
+ NPC_F_GRE_HAS_KEY_SEQ,
+ NPC_F_GRE_HAS_CSUM_KEY_SEQ,
+ NPC_F_GRE_HAS_ROUTE,
+ NPC_F_GRE_UNK_PROTO,
+ NPC_F_GRE_VER1,
+ NPC_F_GRE_VER1_HAS_SEQ,
+ NPC_F_GRE_VER1_HAS_ACK,
+ NPC_F_GRE_VER1_HAS_SEQ_ACK,
+ NPC_F_GRE_VER1_UNK_PROTO,
+ NPC_F_TU_ETHER_UNK,
+ NPC_F_TU_ETHER_CTAG,
+ NPC_F_TU_ETHER_CTAG_UNK,
+ NPC_F_TU_ETHER_STAG_CTAG,
+ NPC_F_TU_ETHER_STAG_CTAG_UNK,
+ NPC_F_TU_ETHER_STAG,
+ NPC_F_TU_ETHER_STAG_UNK,
+ NPC_F_TU_ETHER_QINQ_CTAG,
+ NPC_F_TU_ETHER_QINQ_CTAG_UNK,
+ NPC_F_TU_ETHER_QINQ,
+ NPC_F_TU_ETHER_QINQ_UNK,
+ NPC_F_LAST /* has to be the last item */
+};
+
+int otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id);
+
+int otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
+ uint64_t *count);
+
+int otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id);
+
+int otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry);
+
+int otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox);
+
+int otx2_flow_update_parse_state(struct otx2_parse_state *pst,
+ struct otx2_flow_item_info *info,
+ int lid, int lt, uint8_t flags);
+
+int otx2_flow_parse_item_basic(const struct rte_flow_item *item,
+ struct otx2_flow_item_info *info,
+ struct rte_flow_error *error);
+
+void otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask);
+
+int otx2_flow_mcam_alloc_and_write(struct rte_flow *flow,
+ struct otx2_mbox *mbox,
+ struct otx2_parse_state *pst,
+ struct otx2_npc_flow_info *flow_info);
+
+void otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
+ struct otx2_flow_item_info *info,
+ int lid, int lt);
+
+const struct rte_flow_item *
+otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern);
+
+int otx2_flow_parse_lh(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_lg(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_lf(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_le(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_ld(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_lc(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_lb(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_la(struct otx2_parse_state *pst);
+
+int otx2_flow_parse_actions(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct rte_flow *flow);
+
+int otx2_flow_free_all_resources(struct otx2_eth_dev *hw);
+
+int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid);
+#endif /* __OTX2_FLOW_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 33/58] net/octeontx2: add flow utility functions
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (31 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 32/58] net/octeontx2: introducing flow driver jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 34/58] net/octeontx2: add flow mbox " jerinj
` (25 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add a first pass of rte_flow utility functions for octeontx2.
These will be used to communicate with the AF driver.
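For illustration, a standalone sketch (not part of this patch; the demo
names and main() are hypothetical) of the nibble compression performed
by otx2_flow_keyx_compress() below, which packs the key nibbles selected
by the KEX nibble mask into a contiguous 128-bit match string:

#include <stdint.h>
#include <stdio.h>

/* Demo only: pack the nibbles selected by nibble_mask into a
 * contiguous 128-bit key, mirroring otx2_flow_keyx_compress().
 */
static void
keyx_compress_demo(uint64_t data[2], uint32_t nibble_mask)
{
	uint64_t cdata[2] = {0ULL, 0ULL}, nibble;
	int i, j = 0;

	for (i = 0; i < 32; i++) {	/* 32 nibbles in a 128-bit key */
		if (nibble_mask & (1U << i)) {
			nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
			cdata[j / 16] |= (nibble << ((j & 0xf) * 4));
			j += 1;
		}
	}
	data[0] = cdata[0];
	data[1] = cdata[1];
}

int main(void)
{
	uint64_t key[2] = {0x00ab00cd00ef0012ULL, 0ULL};

	/* Select nibbles 0-3 and 8-11; they pack down to 0x00cd0012 */
	keyx_compress_demo(key, 0x0f0fU);
	printf("compressed key[0] = 0x%016llx\n",
	       (unsigned long long)key[0]);
	return 0;
}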
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 7 +-
drivers/net/octeontx2/otx2_flow.h | 2 +
drivers/net/octeontx2/otx2_flow_utils.c | 387 ++++++++++++++++++++++++
5 files changed, 392 insertions(+), 6 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index f950fca14..b7bbe7881 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -40,6 +40,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_lookup.c \
otx2_ethdev.c \
otx2_flow_ctrl.c \
+ otx2_flow_utils.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
otx2_ethdev_debug.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 2cac57d2b..75156ddbe 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -12,6 +12,7 @@ sources = files(
'otx2_lookup.c',
'otx2_ethdev.c',
'otx2_flow_ctrl.c',
+ 'otx2_flow_utils.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
'otx2_ethdev_debug.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 8f8d93a39..e8a22b6ec 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -17,6 +17,7 @@
#include "otx2_common.h"
#include "otx2_dev.h"
+#include "otx2_flow.h"
#include "otx2_irq.h"
#include "otx2_mempool.h"
#include "otx2_rx.h"
@@ -173,12 +174,6 @@ struct otx2_eth_qconf {
uint16_t nb_desc;
};
-struct otx2_npc_flow_info {
- uint16_t channel; /*rx channel */
- uint16_t flow_prealloc_size;
- uint16_t flow_max_priority;
-};
-
struct otx2_fc_info {
enum rte_eth_fc_mode mode; /**< Link flow control mode */
uint8_t rx_pause;
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
index 95bb6c2bf..f5cc3b983 100644
--- a/drivers/net/octeontx2/otx2_flow.h
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -15,6 +15,8 @@
#include "otx2_ethdev.h"
#include "otx2_mbox.h"
+struct otx2_eth_dev;
+
int otx2_flow_init(struct otx2_eth_dev *hw);
int otx2_flow_fini(struct otx2_eth_dev *hw);
extern const struct rte_flow_ops otx2_flow_ops;
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
new file mode 100644
index 000000000..6078a827b
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow_utils.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+int
+otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id)
+{
+ struct npc_mcam_oper_counter_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_free_counter(mbox);
+ req->cntr = ctr_id;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, NULL);
+
+ return rc;
+}
+
+int
+otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
+ uint64_t *count)
+{
+ struct npc_mcam_oper_counter_req *req;
+ struct npc_mcam_oper_counter_rsp *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_counter_stats(mbox);
+ req->cntr = ctr_id;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+
+ *count = rsp->stat;
+ return rc;
+}
+
+int
+otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id)
+{
+ struct npc_mcam_oper_counter_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_clear_counter(mbox);
+ req->cntr = ctr_id;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, NULL);
+
+ return rc;
+}
+
+int
+otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry)
+{
+ struct npc_mcam_free_entry_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ req->entry = entry;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, NULL);
+
+ return rc;
+}
+
+int
+otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox)
+{
+ struct npc_mcam_free_entry_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ req->all = 1;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, NULL);
+
+ return rc;
+}
+
+static void
+flow_prep_mcam_ldata(uint8_t *ptr, const uint8_t *data, int len)
+{
+ int idx;
+
+ for (idx = 0; idx < len; idx++)
+ ptr[idx] = data[len - 1 - idx];
+}
+
+static int
+flow_check_copysz(size_t size, size_t len)
+{
+ if (len <= size)
+ return len;
+ return -1;
+}
+
+static inline int
+flow_mem_is_zero(const void *mem, int len)
+{
+ const char *m = mem;
+ int i;
+
+ for (i = 0; i < len; i++) {
+ if (m[i] != 0)
+ return 0;
+ }
+ return 1;
+}
+
+void
+otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
+ struct otx2_flow_item_info *info, int lid, int lt)
+{
+ struct npc_xtract_info *xinfo;
+ char *hw_mask = info->hw_mask;
+ int max_off, offset;
+ int i, j;
+ int intf;
+
+ intf = pst->flow->nix_intf;
+ xinfo = pst->npc->prx_dxcfg[intf][lid][lt].xtract;
+ memset(hw_mask, 0, info->len);
+
+ for (i = 0; i < NPC_MAX_LD; i++) {
+ if (xinfo[i].hdr_off < info->hw_hdr_len)
+ continue;
+
+ max_off = xinfo[i].hdr_off + xinfo[i].len - info->hw_hdr_len;
+
+ if (xinfo[i].enable == 0)
+ continue;
+
+ if (max_off > info->len)
+ max_off = info->len;
+
+ offset = xinfo[i].hdr_off - info->hw_hdr_len;
+ for (j = offset; j < max_off; j++)
+ hw_mask[j] = 0xff;
+ }
+}
+
+int
+otx2_flow_update_parse_state(struct otx2_parse_state *pst,
+ struct otx2_flow_item_info *info, int lid, int lt,
+ uint8_t flags)
+{
+ uint8_t int_info_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ uint8_t int_info[NPC_MAX_EXTRACT_DATA_LEN];
+ struct npc_lid_lt_xtract_info *xinfo;
+ int len = 0;
+ int intf;
+ int i;
+
+ otx2_npc_dbg("Parse state function info mask total %s",
+ (const uint8_t *)info->mask);
+
+ pst->layer_mask |= lid;
+ pst->lt[lid] = lt;
+ pst->flags[lid] = flags;
+
+ intf = pst->flow->nix_intf;
+ xinfo = &pst->npc->prx_dxcfg[intf][lid][lt];
+ otx2_npc_dbg("Is_terminating = %d", xinfo->is_terminating);
+ if (xinfo->is_terminating)
+ pst->terminate = 1;
+
+ /* Flags should ideally be validated here, but in the latest
+ * KPU profile flags are used as an enumeration; they cannot
+ * be validated unless the mbox is extended to return the set
+ * of valid values out of the 2^8 possibilities.
+ */
+ if (info->spec == NULL) { /* Nothing to match */
+ otx2_npc_dbg("Info spec NULL");
+ goto done;
+ }
+
+ /* Copy spec and mask into mcam match string, mask.
+ * Since both RTE FLOW and OTX2 MCAM use network-endianness
+ * for data, we are saved from nasty conversions.
+ */
+ for (i = 0; i < NPC_MAX_LD; i++) {
+ struct npc_xtract_info *x;
+ int k, idx, hdr_off;
+
+ x = &xinfo->xtract[i];
+ len = x->len;
+ hdr_off = x->hdr_off;
+
+ if (hdr_off < info->hw_hdr_len)
+ continue;
+
+ if (x->enable == 0)
+ continue;
+
+ otx2_npc_dbg("x->hdr_off = %d, len = %d, info->len = %d,"
+ "x->key_off = %d", x->hdr_off, len, info->len,
+ x->key_off);
+
+ hdr_off -= info->hw_hdr_len;
+
+ if (hdr_off + len > info->len)
+ len = info->len - hdr_off;
+
+ /* Check for over-write of previous layer */
+ if (!flow_mem_is_zero(pst->mcam_mask + x->key_off,
+ len)) {
+ /* Cannot support this data match */
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ pst->pattern,
+ "Extraction unsupported");
+ return -rte_errno;
+ }
+
+ len = flow_check_copysz((OTX2_MAX_MCAM_WIDTH_DWORDS * 8)
+ - x->key_off,
+ len);
+ if (len < 0) {
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ pst->pattern,
+ "Internal Error");
+ return -rte_errno;
+ }
+
+ /* Need to reverse complete structure so that dest addr is at
+ * MSB so as to program the MCAM using mcam_data & mcam_mask
+ * arrays
+ */
+ flow_prep_mcam_ldata(int_info,
+ (const uint8_t *)info->spec + hdr_off,
+ x->len);
+ flow_prep_mcam_ldata(int_info_mask,
+ (const uint8_t *)info->mask + hdr_off,
+ x->len);
+
+ otx2_npc_dbg("Spec: ");
+ for (k = 0; k < info->len; k++)
+ otx2_npc_dbg("0x%.2x ",
+ ((const uint8_t *)info->spec)[k]);
+
+ otx2_npc_dbg("Int_info: ");
+ for (k = 0; k < info->len; k++)
+ otx2_npc_dbg("0x%.2x ", int_info[k]);
+
+ memcpy(pst->mcam_mask + x->key_off, int_info_mask, len);
+ memcpy(pst->mcam_data + x->key_off, int_info, len);
+
+ otx2_npc_dbg("Parse state mcam data & mask");
+ for (idx = 0; idx < len ; idx++)
+ otx2_npc_dbg("data[%d]: 0x%x, mask[%d]: 0x%x", idx,
+ *(pst->mcam_data + idx + x->key_off), idx,
+ *(pst->mcam_mask + idx + x->key_off));
+ }
+
+done:
+ /* Next pattern to parse by subsequent layers */
+ pst->pattern++;
+ return 0;
+}
+
+static inline int
+flow_range_is_valid(const char *spec, const char *last, const char *mask,
+ int len)
+{
+ /* Mask must be zero or equal to spec as we do not support
+ * non-contiguous ranges.
+ */
+ while (len--) {
+ if (last[len] &&
+ (spec[len] & mask[len]) != (last[len] & mask[len]))
+ return 0; /* False */
+ }
+ return 1;
+}
+
+static inline int
+flow_mask_is_supported(const char *mask, const char *hw_mask, int len)
+{
+ /*
+ * If no hw_mask, assume nothing is supported.
+ * mask is never NULL
+ */
+ if (hw_mask == NULL)
+ return flow_mem_is_zero(mask, len);
+
+ while (len--) {
+ if ((mask[len] | hw_mask[len]) != hw_mask[len])
+ return 0; /* False */
+ }
+ return 1;
+}
+
+int
+otx2_flow_parse_item_basic(const struct rte_flow_item *item,
+ struct otx2_flow_item_info *info,
+ struct rte_flow_error *error)
+{
+ /* Item must not be NULL */
+ if (item == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Item is NULL");
+ return -rte_errno;
+ }
+ /* If spec is NULL, both mask and last must be NULL, this
+ * makes it to match ANY value (eq to mask = 0).
+ * Setting either mask or last without spec is an error
+ */
+ if (item->spec == NULL) {
+ if (item->last == NULL && item->mask == NULL) {
+ info->spec = NULL;
+ return 0;
+ }
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "mask or last set without spec");
+ return -rte_errno;
+ }
+
+ /* We have valid spec */
+ info->spec = item->spec;
+
+ /* If mask is not set, use default mask, err if default mask is
+ * also NULL.
+ */
+ if (item->mask == NULL) {
+ otx2_npc_dbg("Item mask null, using default mask");
+ if (info->def_mask == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "No mask or default mask given");
+ return -rte_errno;
+ }
+ info->mask = info->def_mask;
+ } else {
+ info->mask = item->mask;
+ }
+
+ /* mask specified must be subset of hw supported mask
+ * mask | hw_mask == hw_mask
+ */
+ if (!flow_mask_is_supported(info->mask, info->hw_mask, info->len)) {
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Unsupported field in the mask");
+ return -rte_errno;
+ }
+
+ /* Now we have spec and mask. OTX2 does not support non-contiguous
+ * range. We should have either:
+ * - spec & mask == last & mask or,
+ * - last == 0 or,
+ * - last == NULL
+ */
+ if (item->last != NULL && !flow_mem_is_zero(item->last, info->len)) {
+ if (!flow_range_is_valid(item->spec, item->last, info->mask,
+ info->len)) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Unsupported range for match");
+ return -rte_errno;
+ }
+ }
+
+ return 0;
+}
+
+void
+otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask)
+{
+ uint64_t cdata[2] = {0ULL, 0ULL}, nibble;
+ int i, j = 0;
+
+ for (i = 0; i < NPC_MAX_KEY_NIBBLES; i++) {
+ if (nibble_mask & (1 << i)) {
+ nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
+ cdata[j / 16] |= (nibble << ((j & 0xf) * 4));
+ j += 1;
+ }
+ }
+
+ data[0] = cdata[0];
+ data[1] = cdata[1];
+}
+
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 34/58] net/octeontx2: add flow mbox utility functions
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (32 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 33/58] net/octeontx2: add flow utility functions jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 35/58] net/octeontx2: add flow MCAM " jerinj
` (24 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Adding mailbox utility functions for rte_flow. These will be used
to allocate, reserve and write MCAM entries to the device on request.
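As a reviewer aid (not part of the patch): the branch-based
flow_first_set_bit() added here is a trailing-zero count over a
non-zero bitmap slab. The sketch below, with demo names of our own and
assuming the GCC/Clang builtin, checks it against __builtin_ctzll for
every single-bit value:

#include <assert.h>
#include <stdint.h>

/* Same algorithm as flow_first_set_bit(); valid only for slab != 0 */
static int
first_set_bit_demo(uint64_t slab)
{
	int num = 0;

	if ((slab & 0xffffffff) == 0) {
		num += 32;
		slab >>= 32;
	}
	if ((slab & 0xffff) == 0) {
		num += 16;
		slab >>= 16;
	}
	if ((slab & 0xff) == 0) {
		num += 8;
		slab >>= 8;
	}
	if ((slab & 0xf) == 0) {
		num += 4;
		slab >>= 4;
	}
	if ((slab & 0x3) == 0) {
		num += 2;
		slab >>= 2;
	}
	if ((slab & 0x1) == 0)
		num += 1;

	return num;
}

int main(void)
{
	uint64_t v;

	for (v = 1; v != 0; v <<= 1)
		assert(first_set_bit_demo(v) == __builtin_ctzll(v));
	return 0;
}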
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow.h | 6 +
drivers/net/octeontx2/otx2_flow_utils.c | 259 ++++++++++++++++++++++++
2 files changed, 265 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
index f5cc3b983..a37d86512 100644
--- a/drivers/net/octeontx2/otx2_flow.h
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -387,4 +387,10 @@ int otx2_flow_parse_actions(struct rte_eth_dev *dev,
int otx2_flow_free_all_resources(struct otx2_eth_dev *hw);
int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid);
+
+int
+flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info,
+ struct npc_mcam_alloc_entry_rsp *rsp,
+ int req_prio);
#endif /* __OTX2_FLOW_H__ */
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
index 6078a827b..c56a22ed1 100644
--- a/drivers/net/octeontx2/otx2_flow_utils.c
+++ b/drivers/net/octeontx2/otx2_flow_utils.c
@@ -385,3 +385,262 @@ otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask)
data[1] = cdata[1];
}
+static int
+flow_first_set_bit(uint64_t slab)
+{
+ int num = 0;
+
+ if ((slab & 0xffffffff) == 0) {
+ num += 32;
+ slab >>= 32;
+ }
+ if ((slab & 0xffff) == 0) {
+ num += 16;
+ slab >>= 16;
+ }
+ if ((slab & 0xff) == 0) {
+ num += 8;
+ slab >>= 8;
+ }
+ if ((slab & 0xf) == 0) {
+ num += 4;
+ slab >>= 4;
+ }
+ if ((slab & 0x3) == 0) {
+ num += 2;
+ slab >>= 2;
+ }
+ if ((slab & 0x1) == 0)
+ num += 1;
+
+ return num;
+}
+
+static int
+flow_shift_lv_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info,
+ uint32_t old_ent, uint32_t new_ent)
+{
+ struct npc_mcam_shift_entry_req *req;
+ struct npc_mcam_shift_entry_rsp *rsp;
+ struct otx2_flow_list *list;
+ struct rte_flow *flow_iter;
+ int rc = 0;
+
+ otx2_npc_dbg("Old ent:%u new ent:%u priority:%u", old_ent, new_ent,
+ flow->priority);
+
+ list = &flow_info->flow_list[flow->priority];
+
+ /* The old entry is disabled and its contents are moved to
+ * new_entry, which is then enabled.
+ */
+ req = otx2_mbox_alloc_msg_npc_mcam_shift_entry(mbox);
+ req->curr_entry[0] = old_ent;
+ req->new_entry[0] = new_ent;
+ req->shift_count = 1;
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Remove old node from list */
+ TAILQ_FOREACH(flow_iter, list, next) {
+ if (flow_iter->mcam_id == old_ent)
+ TAILQ_REMOVE(list, flow_iter, next);
+ }
+
+ /* Insert node with new mcam id at right place */
+ TAILQ_FOREACH(flow_iter, list, next) {
+ if (flow_iter->mcam_id > new_ent)
+ TAILQ_INSERT_BEFORE(flow_iter, flow, next);
+ }
+ return rc;
+}
+
+/* Exchange all required entries with a given priority level */
+static int
+flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info,
+ struct npc_mcam_alloc_entry_rsp *rsp, int dir, int prio_lvl)
+{
+ struct rte_bitmap *fr_bmp, *fr_bmp_rev, *lv_bmp, *lv_bmp_rev, *bmp;
+ uint32_t e_fr = 0, e_lv = 0, e, e_id = 0, mcam_entries;
+ uint64_t fr_bit_pos = 0, lv_bit_pos = 0, bit_pos = 0;
+ /* Bit position within the slab */
+ uint32_t sl_fr_bit_off = 0, sl_lv_bit_off = 0;
+ /* Overall bit position of the start of slab */
+ /* free & live entry index */
+ int rc_fr = 0, rc_lv = 0, rc = 0, idx = 0;
+ struct otx2_mcam_ents_info *ent_info;
+ /* free & live bitmap slab */
+ uint64_t sl_fr = 0, sl_lv = 0, *sl;
+
+ fr_bmp = flow_info->free_entries[prio_lvl];
+ fr_bmp_rev = flow_info->free_entries_rev[prio_lvl];
+ lv_bmp = flow_info->live_entries[prio_lvl];
+ lv_bmp_rev = flow_info->live_entries_rev[prio_lvl];
+ ent_info = &flow_info->flow_entry_info[prio_lvl];
+ mcam_entries = flow_info->mcam_entries;
+
+ /* New entries allocated are always contiguous, but older entries
+ * already in the free/live bitmaps can be non-contiguous, so the
+ * shifted entries are returned in the non-contiguous entry_list
+ * format.
+ */
+ while (idx <= rsp->count) {
+ if (!sl_fr && !sl_lv) {
+ /* Lower index elements to be exchanged */
+ if (dir < 0) {
+ rc_fr = rte_bitmap_scan(fr_bmp, &e_fr, &sl_fr);
+ rc_lv = rte_bitmap_scan(lv_bmp, &e_lv, &sl_lv);
+ otx2_npc_dbg("Fwd slab rc fr %u rc lv %u "
+ "e_fr %u e_lv %u", rc_fr, rc_lv,
+ e_fr, e_lv);
+ } else {
+ rc_fr = rte_bitmap_scan(fr_bmp_rev,
+ &sl_fr_bit_off,
+ &sl_fr);
+ rc_lv = rte_bitmap_scan(lv_bmp_rev,
+ &sl_lv_bit_off,
+ &sl_lv);
+
+ otx2_npc_dbg("Rev slab rc fr %u rc lv %u "
+ "e_fr %u e_lv %u", rc_fr, rc_lv,
+ e_fr, e_lv);
+ }
+ }
+
+ if (rc_fr) {
+ fr_bit_pos = flow_first_set_bit(sl_fr);
+ e_fr = sl_fr_bit_off + fr_bit_pos;
+ otx2_npc_dbg("Fr_bit_pos 0x%" PRIx64, fr_bit_pos);
+ } else {
+ e_fr = ~(0);
+ }
+
+ if (rc_lv) {
+ lv_bit_pos = flow_first_set_bit(sl_lv);
+ e_lv = sl_lv_bit_off + lv_bit_pos;
+ otx2_npc_dbg("Lv_bit_pos 0x%" PRIx64, lv_bit_pos);
+ } else {
+ e_lv = ~(0);
+ }
+
+ /* First entry is from free_bmap */
+ if (e_fr < e_lv) {
+ bmp = fr_bmp;
+ e = e_fr;
+ sl = &sl_fr;
+ bit_pos = fr_bit_pos;
+ if (dir > 0)
+ e_id = mcam_entries - e - 1;
+ else
+ e_id = e;
+ otx2_npc_dbg("Fr e %u e_id %u", e, e_id);
+ } else {
+ bmp = lv_bmp;
+ e = e_lv;
+ sl = &sl_lv;
+ bit_pos = lv_bit_pos;
+ if (dir > 0)
+ e_id = mcam_entries - e - 1;
+ else
+ e_id = e;
+
+ otx2_npc_dbg("Lv e %u e_id %u", e, e_id);
+ if (idx < rsp->count)
+ rc = flow_shift_lv_ent(mbox, flow,
+ flow_info, e_id,
+ rsp->entry + idx);
+ }
+
+ rte_bitmap_clear(bmp, e);
+ rte_bitmap_set(bmp, rsp->entry + idx);
+ /* Update entry list, use non-contiguous
+ * list now.
+ */
+ rsp->entry_list[idx] = e_id;
+ *sl &= ~(1ULL << bit_pos); /* 64-bit shift: bit_pos can exceed 31 */
+
+ /* Update min & max entry identifiers in current
+ * priority level.
+ */
+ if (dir < 0) {
+ ent_info->max_id = rsp->entry + idx;
+ ent_info->min_id = e_id;
+ } else {
+ ent_info->max_id = e_id;
+ ent_info->min_id = rsp->entry;
+ }
+
+ idx++;
+ }
+ return rc;
+}
+
+/* Validate if newly allocated entries lie in the correct priority zone
+ * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy.
+ * If not properly aligned, shift entries to do so
+ */
+int
+flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info,
+ struct npc_mcam_alloc_entry_rsp *rsp,
+ int req_prio)
+{
+ int prio_idx = 0, rc = 0, needs_shift = 0, idx, prio = flow->priority;
+ struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
+ int dir = (req_prio == NPC_MCAM_HIGHER_PRIO) ? 1 : -1;
+ uint32_t tot_ent = 0;
+
+ otx2_npc_dbg("Dir %d, priority = %d", dir, prio);
+
+ if (dir < 0)
+ prio_idx = flow_info->flow_max_priority - 1;
+
+ /* Only live entries needs to be shifted, free entries can just be
+ * moved by bits manipulation.
+ */
+
+ /* For dir = -1(NPC_MCAM_LOWER_PRIO), when shifting,
+ * NPC_MAX_PREALLOC_ENT are exchanged with adjoining higher priority
+ * level entries(lower indexes).
+ *
+ * For dir = +1(NPC_MCAM_HIGHER_PRIO), during shift,
+ * NPC_MAX_PREALLOC_ENT are exchanged with adjoining lower priority
+ * level entries(higher indexes) with highest indexes.
+ */
+ do {
+ tot_ent = info[prio_idx].free_ent + info[prio_idx].live_ent;
+
+ if (dir < 0 && prio_idx != prio &&
+ rsp->entry > info[prio_idx].max_id && tot_ent) {
+ otx2_npc_dbg("Rsp entry %u prio idx %u "
+ "max id %u", rsp->entry, prio_idx,
+ info[prio_idx].max_id);
+
+ needs_shift = 1;
+ } else if ((dir > 0) && (prio_idx != prio) &&
+ (rsp->entry < info[prio_idx].min_id) && tot_ent) {
+ otx2_npc_dbg("Rsp entry %u prio idx %u "
+ "min id %u", rsp->entry, prio_idx,
+ info[prio_idx].min_id);
+ needs_shift = 1;
+ }
+
+ otx2_npc_dbg("Needs_shift = %d", needs_shift);
+ if (needs_shift) {
+ needs_shift = 0;
+ rc = flow_shift_ent(mbox, flow, flow_info, rsp, dir,
+ prio_idx);
+ } else {
+ for (idx = 0; idx < rsp->count; idx++)
+ rsp->entry_list[idx] = rsp->entry + idx;
+ }
+ } while ((prio_idx != prio) && (prio_idx += dir));
+
+ return rc;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 35/58] net/octeontx2: add flow MCAM utility functions
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (33 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 34/58] net/octeontx2: add flow mbox " jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 36/58] net/octeontx2: add flow parsing for outer layers jerinj
` (23 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Adding MCAM utility functions to allocate and write entries.
These will be used to arrange flow rules based on priority.
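For context, a sketch of how a later flow-create path is expected to
use the new helper; the caller below is hypothetical and assumes it is
compiled in-tree against otx2_flow.h:

#include "otx2_flow.h"

/* Hypothetical caller: program one parsed flow into the MCAM */
static int
flow_program_demo(struct rte_flow *flow, struct otx2_mbox *mbox,
		  struct otx2_parse_state *pst,
		  struct otx2_npc_flow_info *flow_info)
{
	int rc;

	/* Reserves an MCAM entry from the per-priority cache (and a
	 * counter when flow->ctr_id requests one), then writes the
	 * match data, mask and NPC action.
	 */
	rc = otx2_flow_mcam_alloc_and_write(flow, mbox, pst, flow_info);
	if (rc)
		return rc;	/* e.g. NPC_MCAM_ALLOC_FAILED */

	/* flow->mcam_id (and flow->ctr_id, if used) are now valid */
	return 0;
}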
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow.h | 6 -
drivers/net/octeontx2/otx2_flow_utils.c | 266 +++++++++++++++++++++++-
2 files changed, 265 insertions(+), 7 deletions(-)
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
index a37d86512..f5cc3b983 100644
--- a/drivers/net/octeontx2/otx2_flow.h
+++ b/drivers/net/octeontx2/otx2_flow.h
@@ -387,10 +387,4 @@ int otx2_flow_parse_actions(struct rte_eth_dev *dev,
int otx2_flow_free_all_resources(struct otx2_eth_dev *hw);
int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid);
-
-int
-flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- struct npc_mcam_alloc_entry_rsp *rsp,
- int req_prio);
#endif /* __OTX2_FLOW_H__ */
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
index c56a22ed1..8a0fe7615 100644
--- a/drivers/net/octeontx2/otx2_flow_utils.c
+++ b/drivers/net/octeontx2/otx2_flow_utils.c
@@ -5,6 +5,22 @@
#include "otx2_ethdev.h"
#include "otx2_flow.h"
+static int
+flow_mcam_alloc_counter(struct otx2_mbox *mbox, uint16_t *ctr)
+{
+ struct npc_mcam_alloc_counter_req *req;
+ struct npc_mcam_alloc_counter_rsp *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_alloc_counter(mbox);
+ req->count = 1;
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+
+ *ctr = rsp->cntr_list[0];
+ return rc;
+}
+
int
otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id)
{
@@ -585,7 +601,7 @@ flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
* since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy.
* If not properly aligned, shift entries to do so
*/
-int
+static int
flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
struct otx2_npc_flow_info *flow_info,
struct npc_mcam_alloc_entry_rsp *rsp,
@@ -644,3 +660,251 @@ flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
return rc;
}
+
+static int
+flow_find_ref_entry(struct otx2_npc_flow_info *flow_info, int *prio,
+ int prio_lvl)
+{
+ struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
+ int step = 1;
+
+ while (step < flow_info->flow_max_priority) {
+ if (((prio_lvl + step) < flow_info->flow_max_priority) &&
+ info[prio_lvl + step].live_ent) {
+ *prio = NPC_MCAM_HIGHER_PRIO;
+ return info[prio_lvl + step].min_id;
+ }
+
+ if (((prio_lvl - step) >= 0) &&
+ info[prio_lvl - step].live_ent) {
+ otx2_npc_dbg("Prio_lvl %u live %u", prio_lvl - step,
+ info[prio_lvl - step].live_ent);
+ *prio = NPC_MCAM_LOWER_PRIO;
+ return info[prio_lvl - step].max_id;
+ }
+ step++;
+ }
+ *prio = NPC_MCAM_ANY_PRIO;
+ return 0;
+}
+
+static int
+flow_fill_entry_cache(struct otx2_mbox *mbox, struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info, uint32_t *free_ent)
+{
+ struct rte_bitmap *free_bmp, *free_bmp_rev, *live_bmp, *live_bmp_rev;
+ struct npc_mcam_alloc_entry_rsp rsp_local;
+ struct npc_mcam_alloc_entry_rsp *rsp_cmd;
+ struct npc_mcam_alloc_entry_req *req;
+ struct npc_mcam_alloc_entry_rsp *rsp;
+ struct otx2_mcam_ents_info *info;
+ uint16_t ref_ent, idx;
+ int rc, prio;
+
+ info = &flow_info->flow_entry_info[flow->priority];
+ free_bmp = flow_info->free_entries[flow->priority];
+ free_bmp_rev = flow_info->free_entries_rev[flow->priority];
+ live_bmp = flow_info->live_entries[flow->priority];
+ live_bmp_rev = flow_info->live_entries_rev[flow->priority];
+
+ ref_ent = flow_find_ref_entry(flow_info, &prio, flow->priority);
+
+ req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
+ req->contig = 1;
+ req->count = flow_info->flow_prealloc_size;
+ req->priority = prio;
+ req->ref_entry = ref_ent;
+
+ otx2_npc_dbg("Fill cache ref entry %u prio %u", ref_ent, prio);
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp_cmd);
+ if (rc)
+ return rc;
+
+ rsp = &rsp_local;
+ memcpy(rsp, rsp_cmd, sizeof(*rsp));
+
+ otx2_npc_dbg("Alloc entry %u count %u , prio = %d", rsp->entry,
+ rsp->count, prio);
+
+ /* Non-first ent cache fill */
+ if (prio != NPC_MCAM_ANY_PRIO) {
+ flow_validate_and_shift_prio_ent(mbox, flow, flow_info, rsp,
+ prio);
+ } else {
+ /* Copy into response entry list */
+ for (idx = 0; idx < rsp->count; idx++)
+ rsp->entry_list[idx] = rsp->entry + idx;
+ }
+
+ otx2_npc_dbg("Fill entry cache rsp count %u", rsp->count);
+ /* Update free entries, reverse free entries list,
+ * min & max entry ids.
+ */
+ for (idx = 0; idx < rsp->count; idx++) {
+ if (unlikely(rsp->entry_list[idx] < info->min_id))
+ info->min_id = rsp->entry_list[idx];
+
+ if (unlikely(rsp->entry_list[idx] > info->max_id))
+ info->max_id = rsp->entry_list[idx];
+
+ /* Skip entry to be returned, not to be part of free
+ * list.
+ */
+ if (prio == NPC_MCAM_HIGHER_PRIO) {
+ if (unlikely(idx == (rsp->count - 1))) {
+ *free_ent = rsp->entry_list[idx];
+ continue;
+ }
+ } else {
+ if (unlikely(!idx)) {
+ *free_ent = rsp->entry_list[idx];
+ continue;
+ }
+ }
+ info->free_ent++;
+ rte_bitmap_set(free_bmp, rsp->entry_list[idx]);
+ rte_bitmap_set(free_bmp_rev, flow_info->mcam_entries -
+ rsp->entry_list[idx] - 1);
+
+ otx2_npc_dbg("Final rsp entry %u rsp entry rev %u",
+ rsp->entry_list[idx],
+ flow_info->mcam_entries - rsp->entry_list[idx] - 1);
+ }
+
+ otx2_npc_dbg("Cache free entry %u, rev = %u", *free_ent,
+ flow_info->mcam_entries - *free_ent - 1);
+ info->live_ent++;
+ rte_bitmap_set(live_bmp, *free_ent);
+ rte_bitmap_set(live_bmp_rev, flow_info->mcam_entries - *free_ent - 1);
+
+ return 0;
+}
+
+static int
+flow_check_preallocated_entry_cache(struct otx2_mbox *mbox,
+ struct rte_flow *flow,
+ struct otx2_npc_flow_info *flow_info)
+{
+ struct rte_bitmap *free, *free_rev, *live, *live_rev;
+ uint32_t pos = 0, free_ent = 0, mcam_entries;
+ struct otx2_mcam_ents_info *info;
+ uint64_t slab = 0;
+ int rc;
+
+ otx2_npc_dbg("Flow priority %u", flow->priority);
+
+ info = &flow_info->flow_entry_info[flow->priority];
+
+ free_rev = flow_info->free_entries_rev[flow->priority];
+ free = flow_info->free_entries[flow->priority];
+ live_rev = flow_info->live_entries_rev[flow->priority];
+ live = flow_info->live_entries[flow->priority];
+ mcam_entries = flow_info->mcam_entries;
+
+ if (info->free_ent) {
+ rc = rte_bitmap_scan(free, &pos, &slab);
+ if (rc) {
+ /* Get free_ent from free entry bitmap */
+ free_ent = pos + __builtin_ctzll(slab);
+ otx2_npc_dbg("Allocated from cache entry %u", free_ent);
+ /* Remove from free bitmaps and add to live ones */
+ rte_bitmap_clear(free, free_ent);
+ rte_bitmap_set(live, free_ent);
+ rte_bitmap_clear(free_rev,
+ mcam_entries - free_ent - 1);
+ rte_bitmap_set(live_rev,
+ mcam_entries - free_ent - 1);
+
+ info->free_ent--;
+ info->live_ent++;
+ return free_ent;
+ }
+
+ otx2_npc_dbg("No free entry:its a mess");
+ return -1;
+ }
+
+ rc = flow_fill_entry_cache(mbox, flow, flow_info, &free_ent);
+ if (rc)
+ return rc;
+
+ return free_ent;
+}
+
+int
+otx2_flow_mcam_alloc_and_write(struct rte_flow *flow, struct otx2_mbox *mbox,
+ __rte_unused struct otx2_parse_state *pst,
+ struct otx2_npc_flow_info *flow_info)
+{
+ int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1);
+ struct npc_mcam_write_entry_req *req;
+ struct mbox_msghdr *rsp;
+ uint16_t ctr = ~(0);
+ int rc, idx;
+ int entry;
+
+ if (use_ctr) {
+ rc = flow_mcam_alloc_counter(mbox, &ctr);
+ if (rc)
+ return rc;
+ }
+
+ entry = flow_check_preallocated_entry_cache(mbox, flow, flow_info);
+ if (entry < 0) {
+ otx2_err("Prealloc failed");
+ if (use_ctr)
+ otx2_flow_mcam_free_counter(mbox, ctr);
+ return NPC_MCAM_ALLOC_FAILED;
+ }
+ req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
+ req->set_cntr = use_ctr;
+ req->cntr = ctr;
+ req->entry = entry;
+ otx2_npc_dbg("Alloc & write entry %u", entry);
+
+ req->intf =
+ (flow->nix_intf == OTX2_INTF_RX) ? NPC_MCAM_RX : NPC_MCAM_TX;
+ req->enable_entry = 1;
+ req->entry_data.action = flow->npc_action;
+
+ /*
+ * DPDK sets vtag action on per interface basis, not
+ * per flow basis. It is a matter of how we decide to support
+ * this pmd specific behavior. There are two ways:
+ * 1. Inherit the vtag action from the one configured
+ * for this interface. This can be read from the
+ * vtag_action configured for default mcam entry of
+ * this pf_func.
+ * 2. Do not support vtag action with rte_flow.
+ *
+ * Second approach is used now.
+ */
+ req->entry_data.vtag_action = 0ULL;
+
+ for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
+ req->entry_data.kw[idx] = flow->mcam_data[idx];
+ req->entry_data.kw_mask[idx] = flow->mcam_mask[idx];
+ }
+
+ if (flow->nix_intf == OTX2_INTF_RX) {
+ req->entry_data.kw[0] |= flow_info->channel;
+ req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
+ } else {
+ uint16_t pf_func = (flow->npc_action >> 4) & 0xffff;
+
+ pf_func = htons(pf_func);
+ req->entry_data.kw[0] |= ((uint64_t)pf_func << 32);
+ req->entry_data.kw_mask[0] |= ((uint64_t)0xffff << 32);
+ }
+
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+ if (rc != 0)
+ return rc;
+
+ flow->mcam_id = entry;
+ if (use_ctr)
+ flow->ctr_id = ctr;
+ return 0;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 36/58] net/octeontx2: add flow parsing for outer layers
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (34 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 35/58] net/octeontx2: add flow MCAM " jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 37/58] net/octeontx2: add flow actions support jerinj
` (22 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Adding functionality to parse outer layers from ld to lh.
These will be used to parse outer L2, L3, L4 and tunnel types.
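To show what the ld-to-lh chain consumes, a hedged sketch of one
pattern an application could pass (standard rte_flow item types;
spec/mask elided, so each item matches any value): otx2_flow_parse_ld()
matches the UDP item, otx2_flow_parse_le() recognises VXLAN and sets
pst->tunnel, and the remaining items fall to the tunneled LF/LG/LH
parsers.

#include <rte_flow.h>

/* Illustrative pattern only; spec/mask left NULL to match any value */
static const struct rte_flow_item demo_vxlan_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* LA */
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },	/* LC */
	{ .type = RTE_FLOW_ITEM_TYPE_UDP },	/* LD */
	{ .type = RTE_FLOW_ITEM_TYPE_VXLAN },	/* LE, sets tunnel */
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* LF (tunneled) */
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};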
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_flow_parse.c | 471 ++++++++++++++++++++++++
3 files changed, 473 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_flow_parse.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index b7bbe7881..0b492c4f3 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -40,6 +40,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_lookup.c \
otx2_ethdev.c \
otx2_flow_ctrl.c \
+ otx2_flow_parse.c \
otx2_flow_utils.c \
otx2_ethdev_irq.c \
otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 75156ddbe..f608c4947 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -12,6 +12,7 @@ sources = files(
'otx2_lookup.c',
'otx2_ethdev.c',
'otx2_flow_ctrl.c',
+ 'otx2_flow_parse.c',
'otx2_flow_utils.c',
'otx2_ethdev_irq.c',
'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
new file mode 100644
index 000000000..ed6c80f07
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -0,0 +1,471 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+const struct rte_flow_item *
+otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern)
+{
+ while ((pattern->type == RTE_FLOW_ITEM_TYPE_VOID) ||
+ (pattern->type == RTE_FLOW_ITEM_TYPE_ANY))
+ pattern++;
+
+ return pattern;
+}
+
+/*
+ * Tunnel+ESP, Tunnel+ICMP4/6, Tunnel+TCP, Tunnel+UDP,
+ * Tunnel+SCTP
+ */
+int
+otx2_flow_parse_lh(struct otx2_parse_state *pst)
+{
+ struct otx2_flow_item_info info;
+ char hw_mask[64];
+ int lid, lt;
+ int rc;
+
+ if (!pst->tunnel)
+ return 0;
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+ lid = NPC_LID_LH;
+
+ switch (pst->pattern->type) {
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ lt = NPC_LT_LH_TU_UDP;
+ info.def_mask = &rte_flow_item_udp_mask;
+ info.len = sizeof(struct rte_flow_item_udp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ lt = NPC_LT_LH_TU_TCP;
+ info.def_mask = &rte_flow_item_tcp_mask;
+ info.len = sizeof(struct rte_flow_item_tcp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ lt = NPC_LT_LH_TU_SCTP;
+ info.def_mask = &rte_flow_item_sctp_mask;
+ info.len = sizeof(struct rte_flow_item_sctp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_ESP:
+ lt = NPC_LT_LH_TU_ESP;
+ info.def_mask = &rte_flow_item_esp_mask;
+ info.len = sizeof(struct rte_flow_item_esp);
+ break;
+ default:
+ return 0;
+ }
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+/* Tunnel+IPv4, Tunnel+IPv6 */
+int
+otx2_flow_parse_lg(struct otx2_parse_state *pst)
+{
+ struct otx2_flow_item_info info;
+ char hw_mask[64];
+ int lid, lt;
+ int rc;
+
+ if (!pst->tunnel)
+ return 0;
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+ lid = NPC_LID_LG;
+
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+ lt = NPC_LT_LG_TU_IP;
+ info.def_mask = &rte_flow_item_ipv4_mask;
+ info.len = sizeof(struct rte_flow_item_ipv4);
+ } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+ lt = NPC_LT_LG_TU_IP6;
+ info.def_mask = &rte_flow_item_ipv6_mask;
+ info.len = sizeof(struct rte_flow_item_ipv6);
+ } else {
+ /* There is no tunneled IP header */
+ return 0;
+ }
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+/* Tunnel+Ether */
+int
+otx2_flow_parse_lf(struct otx2_parse_state *pst)
+{
+ const struct rte_flow_item *pattern, *last_pattern;
+ struct rte_flow_item_eth hw_mask;
+ struct otx2_flow_item_info info;
+ int lid, lt, lflags;
+ int nr_vlans = 0;
+ int rc;
+
+ /* We hit this layer if there is a tunneling protocol */
+ if (!pst->tunnel)
+ return 0;
+
+ if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
+ return 0;
+
+ lid = NPC_LID_LF;
+ lt = NPC_LT_LF_TU_ETHER;
+ lflags = 0;
+
+ info.def_mask = &rte_flow_item_vlan_mask;
+ /* No match support for vlan tags */
+ info.hw_mask = NULL;
+ info.len = sizeof(struct rte_flow_item_vlan);
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+
+ /* Look ahead and find out any VLAN tags. These can be
+ * detected but no data matching is available.
+ */
+ last_pattern = pst->pattern;
+ pattern = pst->pattern + 1;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+ nr_vlans++;
+ rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+ last_pattern = pattern;
+ pattern++;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ }
+ otx2_npc_dbg("Nr_vlans = %d", nr_vlans);
+ switch (nr_vlans) {
+ case 0:
+ break;
+ case 1:
+ lflags = NPC_F_TU_ETHER_CTAG;
+ break;
+ case 2:
+ lflags = NPC_F_TU_ETHER_STAG_CTAG;
+ break;
+ default:
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ last_pattern,
+ "more than 2 vlans with tunneled Ethernet "
+ "not supported");
+ return -rte_errno;
+ }
+
+ info.def_mask = &rte_flow_item_eth_mask;
+ info.hw_mask = &hw_mask;
+ info.len = sizeof(struct rte_flow_item_eth);
+ info.hw_hdr_len = 0;
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ pst->pattern = last_pattern;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+int
+otx2_flow_parse_le(struct otx2_parse_state *pst)
+{
+ /*
+ * We are positioned at UDP. Scan ahead and look for
+ * UDP encapsulated tunnel protocols. If available,
+ * parse them. In that case handle this:
+ * - RTE spec assumes we point to tunnel header.
+ * - NPC parser provides offset from UDP header.
+ */
+
+ /*
+ * Note: Add support to GENEVE, VXLAN_GPE when we
+ * upgrade DPDK
+ *
+ * Note: Better to split flags into two nibbles:
+ * - Higher nibble can have flags
+ * - Lower nibble to further enumerate protocols
+ * and have flags based extraction
+ */
+ const struct rte_flow_item *pattern = pst->pattern;
+ struct otx2_flow_item_info info;
+ int lid, lt, lflags;
+ char hw_mask[64];
+ int rc;
+
+ if (pst->tunnel)
+ return 0;
+
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
+ return otx2_flow_parse_mpls(pst, NPC_LID_LE);
+
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_mask = NULL;
+ info.def_mask = NULL;
+ info.len = 0;
+ info.hw_hdr_len = 0;
+ lid = NPC_LID_LE;
+ lflags = 0;
+
+ /* Ensure we are not matching anything in UDP */
+ rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
+ if (rc)
+ return rc;
+
+ info.hw_mask = &hw_mask;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ otx2_npc_dbg("Pattern->type = %d", pattern->type);
+ switch (pattern->type) {
+ case RTE_FLOW_ITEM_TYPE_VXLAN:
+ lflags = NPC_F_UDP_VXLAN;
+ info.def_mask = &rte_flow_item_vxlan_mask;
+ info.len = sizeof(struct rte_flow_item_vxlan);
+ lt = NPC_LT_LE_VXLAN;
+ break;
+ case RTE_FLOW_ITEM_TYPE_GTPC:
+ lflags = NPC_F_UDP_GTP_GTPC;
+ info.def_mask = &rte_flow_item_gtp_mask;
+ info.len = sizeof(struct rte_flow_item_gtp);
+ lt = NPC_LT_LE_GTPC;
+ break;
+ case RTE_FLOW_ITEM_TYPE_GTPU:
+ lflags = NPC_F_UDP_GTP_GTPU_G_PDU;
+ info.def_mask = &rte_flow_item_gtp_mask;
+ info.len = sizeof(struct rte_flow_item_gtp);
+ lt = NPC_LT_LE_GTPU;
+ break;
+ case RTE_FLOW_ITEM_TYPE_GENEVE:
+ lflags = NPC_F_UDP_GENEVE;
+ info.def_mask = &rte_flow_item_geneve_mask;
+ info.len = sizeof(struct rte_flow_item_geneve);
+ lt = NPC_LT_LE_GENEVE;
+ break;
+ case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
+ lflags = NPC_F_UDP_VXLANGPE;
+ info.def_mask = &rte_flow_item_vxlan_gpe_mask;
+ info.len = sizeof(struct rte_flow_item_vxlan_gpe);
+ lt = NPC_LT_LE_VXLANGPE;
+ break;
+ default:
+ return 0;
+ }
+
+ pst->tunnel = 1;
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+static int
+flow_parse_mpls_label_stack(struct otx2_parse_state *pst, int *flag)
+{
+ int nr_labels = 0;
+ const struct rte_flow_item *pattern = pst->pattern;
+ struct otx2_flow_item_info info;
+ int rc;
+ uint8_t flag_list[] = {0, NPC_F_MPLS_2_LABELS,
+ NPC_F_MPLS_3_LABELS, NPC_F_MPLS_4_LABELS};
+
+ /*
+ * pst->pattern points to first MPLS label. We only check
+ * that subsequent labels do not have anything to match.
+ */
+ info.def_mask = &rte_flow_item_mpls_mask;
+ info.hw_mask = NULL;
+ info.len = sizeof(struct rte_flow_item_mpls);
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+
+ while (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) {
+ nr_labels++;
+
+ /* Basic validation of 2nd/3rd/4th mpls item */
+ if (nr_labels > 1) {
+ rc = otx2_flow_parse_item_basic(pattern, &info,
+ pst->error);
+ if (rc != 0)
+ return rc;
+ }
+ pst->last_pattern = pattern;
+ pattern++;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ }
+
+ if (nr_labels > 4) {
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ pst->last_pattern,
+ "more than 4 mpls labels not supported");
+ return -rte_errno;
+ }
+
+ *flag = flag_list[nr_labels - 1];
+ return 0;
+}
+
+int
+otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid)
+{
+ /* Find number of MPLS labels */
+ struct rte_flow_item_mpls hw_mask;
+ struct otx2_flow_item_info info;
+ int lt, lflags;
+ int rc;
+
+ lflags = 0;
+
+ if (lid == NPC_LID_LC)
+ lt = NPC_LT_LC_MPLS;
+ else if (lid == NPC_LID_LD)
+ lt = NPC_LT_LD_TU_MPLS_IN_IP;
+ else
+ lt = NPC_LT_LE_TU_MPLS_IN_UDP;
+
+ /* Prepare for parsing the first item */
+ info.def_mask = &rte_flow_item_mpls_mask;
+ info.hw_mask = &hw_mask;
+ info.len = sizeof(struct rte_flow_item_mpls);
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ /*
+ * Parse for more labels.
+ * This sets lflags and pst->last_pattern correctly.
+ */
+ rc = flow_parse_mpls_label_stack(pst, &lflags);
+ if (rc != 0)
+ return rc;
+
+ pst->tunnel = 1;
+ pst->pattern = pst->last_pattern;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+/*
+ * ICMP, ICMP6, UDP, TCP, SCTP, VXLAN, GRE, NVGRE,
+ * GTP, GTPC, GTPU, ESP
+ *
+ * Note: UDP tunnel protocols are identified by flags.
+ * LPTR for these protocol still points to UDP
+ * header. Need flag based extraction to support
+ * this.
+ */
+int
+otx2_flow_parse_ld(struct otx2_parse_state *pst)
+{
+ char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ struct otx2_flow_item_info info;
+ int lid, lt, lflags;
+ int rc;
+
+ if (pst->tunnel) {
+ /* We have already parsed MPLS or IPv4/v6 followed
+ * by MPLS or IPv4/v6. Subsequent TCP/UDP etc
+ * would be parsed as tunneled versions. Skip
+ * this layer, except for tunneled MPLS. If LC is
+ * MPLS, we have anyway skipped all stacked MPLS
+ * labels.
+ */
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
+ return otx2_flow_parse_mpls(pst, NPC_LID_LD);
+ return 0;
+ }
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.def_mask = NULL;
+ info.len = 0;
+ info.hw_hdr_len = 0;
+
+ lid = NPC_LID_LD;
+ lflags = 0;
+
+ otx2_npc_dbg("Pst->pattern->type = %d", pst->pattern->type);
+ switch (pst->pattern->type) {
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ if (pst->lt[NPC_LID_LC] == NPC_LT_LC_IP6)
+ lt = NPC_LT_LD_ICMP6;
+ else
+ lt = NPC_LT_LD_ICMP;
+ info.def_mask = &rte_flow_item_icmp_mask;
+ info.len = sizeof(struct rte_flow_item_icmp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ lt = NPC_LT_LD_UDP;
+ info.def_mask = &rte_flow_item_udp_mask;
+ info.len = sizeof(struct rte_flow_item_udp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ lt = NPC_LT_LD_TCP;
+ info.def_mask = &rte_flow_item_tcp_mask;
+ info.len = sizeof(struct rte_flow_item_tcp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ lt = NPC_LT_LD_SCTP;
+ info.def_mask = &rte_flow_item_sctp_mask;
+ info.len = sizeof(struct rte_flow_item_sctp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_ESP:
+ lt = NPC_LT_LD_ESP;
+ info.def_mask = &rte_flow_item_esp_mask;
+ info.len = sizeof(struct rte_flow_item_esp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_GRE:
+ lt = NPC_LT_LD_GRE;
+ info.def_mask = &rte_flow_item_gre_mask;
+ info.len = sizeof(struct rte_flow_item_gre);
+ break;
+ case RTE_FLOW_ITEM_TYPE_NVGRE:
+ lt = NPC_LT_LD_GRE;
+ lflags = NPC_F_GRE_NVGRE;
+ info.def_mask = &rte_flow_item_nvgre_mask;
+ info.len = sizeof(struct rte_flow_item_nvgre);
+ /* Further IP/Ethernet are parsed as tunneled */
+ pst->tunnel = 1;
+ break;
+ default:
+ return 0;
+ }
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 37/58] net/octeontx2: add flow actions support
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (35 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 36/58] net/octeontx2: add flow parsing for outer layers jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 38/58] net/octeontx2: add flow parse " jerinj
` (21 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Adding functionality to parse layers la to lc. These will be used
to parse Ethernet, VLAN/E-TAG and outer IPv4/IPv6/ARP headers.
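As an illustration of the CTAG/STAG selection in otx2_flow_parse_lb()
below, two hedged example patterns (standard rte_flow item types, specs
elided; matching is supported on the first tag only):

#include <rte_flow.h>

/* One VLAN item -> NPC_LT_LB_CTAG */
static const struct rte_flow_item demo_ctag[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_VLAN },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};

/* Two VLAN items -> NPC_LT_LB_STAG with the NPC_F_STAG_CTAG flag;
 * the inner tag is detected but carries no match data.
 */
static const struct rte_flow_item demo_stag_ctag[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_VLAN },
	{ .type = RTE_FLOW_ITEM_TYPE_VLAN },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};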
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow_parse.c | 210 ++++++++++++++++++++++++
1 file changed, 210 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index ed6c80f07..b46fdd258 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -469,3 +469,213 @@ otx2_flow_parse_ld(struct otx2_parse_state *pst)
return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
}
+
+static inline void
+flow_check_lc_ip_tunnel(struct otx2_parse_state *pst)
+{
+ const struct rte_flow_item *pattern = pst->pattern + 1;
+
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ if (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS ||
+ pattern->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+ pattern->type == RTE_FLOW_ITEM_TYPE_IPV6)
+ pst->tunnel = 1;
+}
+
+/* Outer IPv4, Outer IPv6, MPLS, ARP */
+int
+otx2_flow_parse_lc(struct otx2_parse_state *pst)
+{
+ uint8_t hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ struct otx2_flow_item_info info;
+ int lid, lt;
+ int rc;
+
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
+ return otx2_flow_parse_mpls(pst, NPC_LID_LC);
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+ lid = NPC_LID_LC;
+
+ switch (pst->pattern->type) {
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ lt = NPC_LT_LC_IP;
+ info.def_mask = &rte_flow_item_ipv4_mask;
+ info.len = sizeof(struct rte_flow_item_ipv4);
+ break;
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ lid = NPC_LID_LC;
+ lt = NPC_LT_LC_IP6;
+ info.def_mask = &rte_flow_item_ipv6_mask;
+ info.len = sizeof(struct rte_flow_item_ipv6);
+ break;
+ case RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4:
+ lt = NPC_LT_LC_ARP;
+ info.def_mask = &rte_flow_item_arp_eth_ipv4_mask;
+ info.len = sizeof(struct rte_flow_item_arp_eth_ipv4);
+ break;
+ default:
+ /* No match at this layer */
+ return 0;
+ }
+
+ /* Identify if IP tunnels MPLS or IPv4/v6 */
+ flow_check_lc_ip_tunnel(pst);
+
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+/* VLAN, ETAG */
+int
+otx2_flow_parse_lb(struct otx2_parse_state *pst)
+{
+ const struct rte_flow_item *pattern = pst->pattern;
+ const struct rte_flow_item *last_pattern;
+ char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ struct otx2_flow_item_info info;
+ int lid, lt, lflags;
+ int nr_vlans = 0;
+ int rc;
+
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = NPC_TPID_LENGTH;
+
+ lid = NPC_LID_LB;
+ lflags = 0;
+ last_pattern = pattern;
+
+ if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+ /* RTE vlan is either 802.1q or 802.1ad,
+ * this maps to either CTAG/STAG. We need to decide
+ * based on number of VLANS present. Matching is
+ * supported on first tag only.
+ */
+ info.def_mask = &rte_flow_item_vlan_mask;
+ info.hw_mask = NULL;
+ info.len = sizeof(struct rte_flow_item_vlan);
+
+ pattern = pst->pattern;
+ while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+ nr_vlans++;
+
+ /* Basic validation of 2nd/3rd vlan item */
+ if (nr_vlans > 1) {
+ otx2_npc_dbg("Vlans = %d", nr_vlans);
+ rc = otx2_flow_parse_item_basic(pattern, &info,
+ pst->error);
+ if (rc != 0)
+ return rc;
+ }
+ last_pattern = pattern;
+ pattern++;
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+ }
+
+ switch (nr_vlans) {
+ case 1:
+ lt = NPC_LT_LB_CTAG;
+ break;
+ case 2:
+ lt = NPC_LT_LB_STAG;
+ lflags = NPC_F_STAG_CTAG;
+ break;
+ case 3:
+ lt = NPC_LT_LB_STAG;
+ lflags = NPC_F_STAG_STAG_CTAG;
+ break;
+ default:
+ rte_flow_error_set(pst->error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ last_pattern,
+ "more than 3 vlans not supported");
+ return -rte_errno;
+ }
+ } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_E_TAG) {
+ /* we can support ETAG and match a subsequent CTAG
+ * without any matching support.
+ */
+ lt = NPC_LT_LB_ETAG;
+ lflags = 0;
+
+ last_pattern = pst->pattern;
+ pattern = otx2_flow_skip_void_and_any_items(pst->pattern + 1);
+ if (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+ info.def_mask = &rte_flow_item_vlan_mask;
+ /* set supported mask to NULL for vlan tag */
+ info.hw_mask = NULL;
+ info.len = sizeof(struct rte_flow_item_vlan);
+ rc = otx2_flow_parse_item_basic(pattern, &info,
+ pst->error);
+ if (rc != 0)
+ return rc;
+
+ lflags = NPC_F_ETAG_CTAG;
+ last_pattern = pattern;
+ }
+
+ info.def_mask = &rte_flow_item_e_tag_mask;
+ info.len = sizeof(struct rte_flow_item_e_tag);
+ } else {
+ return 0;
+ }
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc != 0)
+ return rc;
+
+ /* Point pattern to last item consumed */
+ pst->pattern = last_pattern;
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+int
+otx2_flow_parse_la(struct otx2_parse_state *pst)
+{
+ struct rte_flow_item_eth hw_mask;
+ struct otx2_flow_item_info info;
+ int lid, lt;
+ int rc;
+
+ /* Identify the pattern type into lid, lt */
+ if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
+ return 0;
+
+ lid = NPC_LID_LA;
+ lt = NPC_LT_LA_ETHER;
+ info.hw_hdr_len = 0;
+
+ if (pst->flow->nix_intf == NIX_INTF_TX) {
+ lt = NPC_LT_LA_IH_NIX_ETHER;
+ info.hw_hdr_len = NPC_IH_LENGTH;
+ }
+
+ /* Prepare for parsing the item */
+ info.def_mask = &rte_flow_item_eth_mask;
+ info.hw_mask = &hw_mask;
+ info.len = sizeof(struct rte_flow_item_eth);
+ otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ /* Basic validation of item parameters */
+ rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
+ if (rc)
+ return rc;
+
+ /* Update pst if not validate only? clash check? */
+ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 38/58] net/octeontx2: add flow parse actions support
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (36 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 37/58] net/octeontx2: add flow actions support jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 39/58] net/octeontx2: add flow operations jerinj
` (20 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add support for parsing the flow actions drop, count, mark, rss and queue.
On the egress side, only the drop and count actions are supported.
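For reference, a minimal ingress action list (illustrative only; the mark
id and queue index are placeholders) exercising the MARK, COUNT and QUEUE
parsing added by this patch:
	struct rte_flow_action_mark mark = { .id = 0x100 };
	struct rte_flow_action_count count = { 0 };
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
QUEUE is the single terminating action here; MARK and FLAG are mutually
exclusive in this parser.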
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow_parse.c | 276 ++++++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 1 +
2 files changed, 277 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index b46fdd258..7f997ab74 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -679,3 +679,279 @@ otx2_flow_parse_la(struct otx2_parse_state *pst)
/* Update pst if not validate only? clash check? */
return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
}
+
+static int
+parse_rss_action(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action *act,
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct otx2_rss_info *rss_info = &hw->rss_info;
+ const struct rte_flow_action_rss *rss;
+ uint32_t i;
+
+ rss = (const struct rte_flow_action_rss *)act->conf;
+
+ /* Not supported */
+ if (attr->egress) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+ attr, "No support of RSS in egress");
+ }
+
+ if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "multi-queue mode is disabled");
+
+ /* Parse RSS related parameters from configuration */
+ if (!rss || !rss->queue_num)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "no valid queues");
+
+ if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "non-default RSS hash functions"
+ " are not supported");
+
+ if (rss->key_len && rss->key_len > RTE_DIM(rss_info->key))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "RSS hash key too large");
+
+ if (rss->queue_num > rss_info->rss_size)
+ return rte_flow_error_set
+ (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "too many queues for RSS context");
+
+ for (i = 0; i < rss->queue_num; i++) {
+ if (rss->queue[i] >= dev->data->nb_rx_queues)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act,
+ "queue id > max number"
+ " of queues");
+ }
+
+ return 0;
+}
+
+int
+otx2_flow_parse_actions(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct rte_flow *flow)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ const struct rte_flow_action_count *act_count;
+ const struct rte_flow_action_mark *act_mark;
+ const struct rte_flow_action_queue *act_q;
+ const char *errmsg = NULL;
+ int sel_act, req_act = 0;
+ uint16_t pf_func;
+ int errcode = 0;
+ int mark = 0;
+ int rq = 0;
+
+ /* Initialize actions */
+ flow->ctr_id = NPC_COUNTER_NONE;
+
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ otx2_npc_dbg("Action type = %d", actions->type);
+
+ switch (actions->type) {
+ case RTE_FLOW_ACTION_TYPE_VOID:
+ break;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ act_mark =
+ (const struct rte_flow_action_mark *)actions->conf;
+
+ /* We have only 16 bits. Use highest val for flag */
+ if (act_mark->id > (OTX2_FLOW_FLAG_VAL - 2)) {
+ errmsg = "mark value must be < 0xfffe";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ mark = act_mark->id + 1;
+ req_act |= OTX2_FLOW_ACT_MARK;
+ rte_atomic32_inc(&npc->mark_actions);
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_FLAG:
+ mark = OTX2_FLOW_FLAG_VAL;
+ req_act |= OTX2_FLOW_ACT_FLAG;
+ rte_atomic32_inc(&npc->mark_actions);
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_COUNT:
+ act_count =
+ (const struct rte_flow_action_count *)
+ actions->conf;
+
+ if (act_count->shared == 1) {
+ errmsg = "Shared Counters not supported";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ /* Indicates that a counter is needed */
+ flow->ctr_id = 1;
+ req_act |= OTX2_FLOW_ACT_COUNT;
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_DROP:
+ req_act |= OTX2_FLOW_ACT_DROP;
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ /* Applicable only to ingress flow */
+ act_q = (const struct rte_flow_action_queue *)
+ actions->conf;
+ rq = act_q->index;
+ if (rq >= dev->data->nb_rx_queues) {
+ errmsg = "invalid queue index";
+ errcode = EINVAL;
+ goto err_exit;
+ }
+ req_act |= OTX2_FLOW_ACT_QUEUE;
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ errcode = parse_rss_action(dev, attr, actions, error);
+ if (errcode)
+ return -rte_errno;
+
+ req_act |= OTX2_FLOW_ACT_RSS;
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_SECURITY:
+ /* Assumes user has already configured security
+ * session for this flow. Associated conf is
+ * opaque. When RTE security is implemented for otx2,
+ * we need to verify that for specified security
+ * session:
+ * action_type ==
+ * RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL &&
+ * session_protocol ==
+ * RTE_SECURITY_PROTOCOL_IPSEC
+ *
+ * RSS is not supported with inline ipsec. Get the
+ * rq from associated conf, or make
+ * RTE_FLOW_ACTION_TYPE_QUEUE compulsory with this
+ * action.
+ * Currently, rq = 0 is assumed.
+ */
+ req_act |= OTX2_FLOW_ACT_SEC;
+ rq = 0;
+ break;
+ default:
+ errmsg = "Unsupported action specified";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ }
+
+ /* Check if actions specified are compatible */
+ if (attr->egress) {
+ /* Only DROP/COUNT is supported */
+ if (!(req_act & OTX2_FLOW_ACT_DROP)) {
+ errmsg = "DROP is required action for egress";
+ errcode = EINVAL;
+ goto err_exit;
+ } else if (req_act & ~(OTX2_FLOW_ACT_DROP |
+ OTX2_FLOW_ACT_COUNT)) {
+ errmsg = "Unsupported action specified";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ flow->npc_action = NIX_TX_ACTIONOP_DROP;
+ goto set_pf_func;
+ }
+
+ /* We have already verified the attr, this is ingress.
+ * - Exactly one terminating action is supported
+ * - Exactly one of MARK or FLAG is supported
+ * - If terminating action is DROP, only count is valid.
+ */
+ sel_act = req_act & OTX2_FLOW_ACT_TERM;
+ if ((sel_act & (sel_act - 1)) != 0) {
+ errmsg = "Only one terminating action supported";
+ errcode = EINVAL;
+ goto err_exit;
+ }
+
+ if (req_act & OTX2_FLOW_ACT_DROP) {
+ sel_act = req_act & ~OTX2_FLOW_ACT_COUNT;
+ if ((sel_act & (sel_act - 1)) != 0) {
+ errmsg = "Only COUNT action is supported "
+ "with DROP ingress action";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+ }
+
+ if ((req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK))
+ == (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
+ errmsg = "Only one of FLAG or MARK action is supported";
+ errcode = ENOTSUP;
+ goto err_exit;
+ }
+
+ /* Set NIX_RX_ACTIONOP */
+ if (req_act & OTX2_FLOW_ACT_DROP) {
+ flow->npc_action = NIX_RX_ACTIONOP_DROP;
+ } else if (req_act & OTX2_FLOW_ACT_QUEUE) {
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+ flow->npc_action |= (uint64_t)rq << 20;
+ } else if (req_act & OTX2_FLOW_ACT_RSS) {
+ /* When the user adds an RSS rule, the rule is first
+ * written to the MCAM and the action is updated later,
+ * once the FLOW_KEY_ALG index is available. Until then,
+ * set the action to drop.
+ */
+ if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+ flow->npc_action = NIX_RX_ACTIONOP_DROP;
+ else
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+ } else if (req_act & OTX2_FLOW_ACT_SEC) {
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST_IPSEC;
+ flow->npc_action |= (uint64_t)rq << 20;
+ } else if (req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+ } else if (req_act & OTX2_FLOW_ACT_COUNT) {
+ /* Keep OTX2_FLOW_ACT_COUNT always at the end.
+ * This is the default action when the user
+ * specifies only the COUNT action.
+ */
+ flow->npc_action = NIX_RX_ACTIONOP_UCAST;
+ } else {
+ /* Should never reach here */
+ errmsg = "Invalid action specified";
+ errcode = EINVAL;
+ goto err_exit;
+ }
+
+ if (mark)
+ flow->npc_action |= (uint64_t)mark << 40;
+
+ if (rte_atomic32_read(&npc->mark_actions) == 1)
+ hw->rx_offload_flags |=
+ NIX_RX_OFFLOAD_MARK_UPDATE_F;
+
+set_pf_func:
+ /* Ideally the AF must ensure that the correct pf_func is set */
+ pf_func = otx2_pfvf_func(hw->pf, hw->vf);
+ flow->npc_action |= (uint64_t)pf_func << 4;
+
+ return 0;
+
+err_exit:
+ rte_flow_error_set(error, errcode,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
+ errmsg);
+ return -rte_errno;
+}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 0c3627c12..db79451b9 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -13,6 +13,7 @@
sizeof(uint16_t))
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
#define NIX_TIMESYNC_RX_OFFSET 8
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 39/58] net/octeontx2: add flow operations
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (37 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 38/58] net/octeontx2: add flow parse " jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 40/58] net/octeontx2: add flow destroy ops support jerinj
` (19 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add the initial flow ops flow_create and flow_validate. These are used
to allocate a flow rule, write the rule to the device and validate
the flow rule.
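As a usage sketch (illustrative; assumes attr, pattern and actions have
already been built as in the previous patches, and port_id is valid):
	struct rte_flow_error err;
	struct rte_flow *flow = NULL;

	if (rte_flow_validate(port_id, &attr, pattern, actions, &err) == 0)
		flow = rte_flow_create(port_id, &attr, pattern, actions, &err);
Both entry points funnel into flow_parse_rule(); create additionally
programs the rule into the MCAM via flow_program_npc() and links it into
the per-priority flow list.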
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_flow.c | 451 ++++++++++++++++++++++++++++++
3 files changed, 453 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_flow.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 0b492c4f3..26fe064b3 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -35,6 +35,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_rss.c \
otx2_mac.c \
otx2_ptp.c \
+ otx2_flow.c \
otx2_link.c \
otx2_stats.c \
otx2_lookup.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index f608c4947..f0e03bffe 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -7,6 +7,7 @@ sources = files(
'otx2_rss.c',
'otx2_mac.c',
'otx2_ptp.c',
+ 'otx2_flow.c',
'otx2_link.c',
'otx2_stats.c',
'otx2_lookup.c',
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
new file mode 100644
index 000000000..896aef00a
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -0,0 +1,451 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+static int
+flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox,
+ struct otx2_npc_flow_info *flow_info)
+{
+ /* This is non-LDATA part in search key */
+ uint64_t key_data[2] = {0ULL, 0ULL};
+ uint64_t key_mask[2] = {0ULL, 0ULL};
+ int intf = pst->flow->nix_intf;
+ int key_len, bit = 0, index;
+ int off, idx, data_off = 0;
+ uint8_t lid, mask, data;
+ uint16_t layer_info;
+ uint64_t lt, flags;
+
+
+ /* Skip till Layer A data start */
+ while (bit < NPC_PARSE_KEX_S_LA_OFFSET) {
+ if (flow_info->keyx_supp_nmask[intf] & (1 << bit))
+ data_off++;
+ bit++;
+ }
+
+ /* Each bit represents 1 nibble */
+ data_off *= 4;
+
+ index = 0;
+ for (lid = 0; lid < NPC_MAX_LID; lid++) {
+ /* Offset in key */
+ off = NPC_PARSE_KEX_S_LID_OFFSET(lid);
+ lt = pst->lt[lid] & 0xf;
+ flags = pst->flags[lid] & 0xff;
+
+ /* NPC_LAYER_KEX_S */
+ layer_info = ((flow_info->keyx_supp_nmask[intf] >> off) & 0x7);
+
+ if (layer_info) {
+ for (idx = 0; idx <= 2 ; idx++) {
+ if (layer_info & (1 << idx)) {
+ if (idx == 2)
+ data = lt;
+ else if (idx == 1)
+ data = ((flags >> 4) & 0xf);
+ else
+ data = (flags & 0xf);
+
+ if (data_off >= 64) {
+ data_off = 0;
+ index++;
+ }
+ key_data[index] |= ((uint64_t)data <<
+ data_off);
+ mask = 0xf;
+ if (lt == 0)
+ mask = 0;
+ key_mask[index] |= ((uint64_t)mask <<
+ data_off);
+ data_off += 4;
+ }
+ }
+ }
+ }
+
+ otx2_npc_dbg("Npc prog key data0: 0x%" PRIx64 ", data1: 0x%" PRIx64,
+ key_data[0], key_data[1]);
+
+ /* Copy this into mcam string */
+ key_len = (pst->npc->keyx_len[intf] + 7) / 8;
+ otx2_npc_dbg("Key_len = %d", key_len);
+ memcpy(pst->flow->mcam_data, key_data, key_len);
+ memcpy(pst->flow->mcam_mask, key_mask, key_len);
+
+ otx2_npc_dbg("Final flow data");
+ for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
+ otx2_npc_dbg("data[%d]: 0x%" PRIx64 ", mask[%d]: 0x%" PRIx64,
+ idx, pst->flow->mcam_data[idx],
+ idx, pst->flow->mcam_mask[idx]);
+ }
+
+ /*
+ * Now we have the MCAM data and mask formatted as
+ * [Key_len/4 nibbles][0 or 1 nibble hole][data]
+ * A hole is present if key_len is an odd number of nibbles.
+ * The MCAM data must be split into 64-bit + 48-bit segments
+ * for each bank's W0, W1.
+ */
+
+ return otx2_flow_mcam_alloc_and_write(pst->flow, mbox, pst, flow_info);
+}
+
+static int
+flow_parse_attr(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error,
+ struct rte_flow *flow)
+{
+ struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+ const char *errmsg = NULL;
+
+ if (attr == NULL)
+ errmsg = "Attribute can't be empty";
+ else if (attr->group)
+ errmsg = "Groups are not supported";
+ else if (attr->priority >= dev->npc_flow.flow_max_priority)
+ errmsg = "Priority should be with in specified range";
+ else if ((!attr->egress && !attr->ingress) ||
+ (attr->egress && attr->ingress))
+ errmsg = "Exactly one of ingress or egress must be set";
+
+ if (errmsg != NULL) {
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR,
+ attr, errmsg);
+ return -ENOTSUP;
+ }
+
+ if (attr->ingress)
+ flow->nix_intf = OTX2_INTF_RX;
+ else
+ flow->nix_intf = OTX2_INTF_TX;
+
+ flow->priority = attr->priority;
+ return 0;
+}
+
+static inline int
+flow_get_free_rss_grp(struct rte_bitmap *bmap,
+ uint32_t size, uint32_t *pos)
+{
+ for (*pos = 0; *pos < size; ++*pos) {
+ if (!rte_bitmap_get(bmap, *pos))
+ break;
+ }
+
+ return *pos < size ? 0 : -1;
+}
+
+static int
+flow_configure_rss_action(struct otx2_eth_dev *dev,
+ const struct rte_flow_action_rss *rss,
+ uint8_t *alg_idx, uint32_t *rss_grp,
+ int mcam_index)
+{
+ struct otx2_npc_flow_info *flow_info = &dev->npc_flow;
+ uint16_t reta[NIX_RSS_RETA_SIZE_MAX];
+ uint32_t flowkey_cfg, grp_aval, i;
+ uint16_t *ind_tbl = NULL;
+ uint8_t flowkey_algx;
+ int rc;
+
+ rc = flow_get_free_rss_grp(flow_info->rss_grp_entries,
+ flow_info->rss_grps, &grp_aval);
+ /* RSS group 0 is not usable for the flow RSS action */
+ if (rc < 0 || grp_aval == 0)
+ return -ENOSPC;
+
+ *rss_grp = grp_aval;
+
+ otx2_nix_rss_set_key(dev, (uint8_t *)(uintptr_t)rss->key,
+ rss->key_len);
+
+ /* If the queue count passed in the RSS action is less than
+ * the HW-configured RETA size, replicate the RSS action
+ * queues across the whole HW RETA table.
+ */
+ if (dev->rss_info.rss_size > rss->queue_num) {
+ ind_tbl = reta;
+
+ for (i = 0; i < (dev->rss_info.rss_size / rss->queue_num); i++)
+ memcpy(reta + i * rss->queue_num, rss->queue,
+ sizeof(uint16_t) * rss->queue_num);
+
+ i = dev->rss_info.rss_size % rss->queue_num;
+ if (i)
+ memcpy(&reta[dev->rss_info.rss_size] - i,
+ rss->queue, i * sizeof(uint16_t));
+ } else {
+ ind_tbl = (uint16_t *)(uintptr_t)rss->queue;
+ }
+
+ rc = otx2_nix_rss_tbl_init(dev, *rss_grp, ind_tbl);
+ if (rc) {
+ otx2_err("Failed to init rss table rc = %d", rc);
+ return rc;
+ }
+
+ flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss->types, rss->level);
+
+ rc = otx2_rss_set_hf(dev, flowkey_cfg, &flowkey_algx,
+ *rss_grp, mcam_index);
+ if (rc) {
+ otx2_err("Failed to set rss hash function rc = %d", rc);
+ return rc;
+ }
+
+ *alg_idx = flowkey_algx;
+
+ rte_bitmap_set(flow_info->rss_grp_entries, *rss_grp);
+
+ return 0;
+}
+
+
+static int
+flow_program_rss_action(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_action actions[],
+ struct rte_flow *flow)
+{
+ struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+ const struct rte_flow_action_rss *rss;
+ uint32_t rss_grp;
+ uint8_t alg_idx;
+ int rc;
+
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ if (actions->type == RTE_FLOW_ACTION_TYPE_RSS) {
+ rss = (const struct rte_flow_action_rss *)actions->conf;
+
+ rc = flow_configure_rss_action(dev,
+ rss, &alg_idx, &rss_grp,
+ flow->mcam_id);
+ if (rc)
+ return rc;
+
+ flow->npc_action |=
+ ((uint64_t)(alg_idx & NIX_RSS_ACT_ALG_MASK) <<
+ NIX_RSS_ACT_ALG_OFFSET) |
+ ((uint64_t)(rss_grp & NIX_RSS_ACT_GRP_MASK) <<
+ NIX_RSS_ACT_GRP_OFFSET);
+ }
+ }
+ return 0;
+}
+
+static int
+flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst)
+{
+ otx2_npc_dbg("Meta Item");
+ return 0;
+}
+
+/*
+ * Parse function of each layer:
+ * - Consume one or more patterns that are relevant.
+ * - Update parse_state
+ * - Set parse_state.pattern = last item consumed
+ * - Set appropriate error code/message when returning error.
+ */
+typedef int (*flow_parse_stage_func_t)(struct otx2_parse_state *pst);
+
+static int
+flow_parse_pattern(struct rte_eth_dev *dev,
+ const struct rte_flow_item pattern[],
+ struct rte_flow_error *error,
+ struct rte_flow *flow,
+ struct otx2_parse_state *pst)
+{
+ flow_parse_stage_func_t parse_stage_funcs[] = {
+ flow_parse_meta_items,
+ otx2_flow_parse_la,
+ otx2_flow_parse_lb,
+ otx2_flow_parse_lc,
+ otx2_flow_parse_ld,
+ otx2_flow_parse_le,
+ otx2_flow_parse_lf,
+ otx2_flow_parse_lg,
+ otx2_flow_parse_lh,
+ };
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ uint8_t layer = 0;
+ int key_offset;
+ int rc;
+
+ if (pattern == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_NUM, NULL,
+ "pattern is NULL");
+ return -EINVAL;
+ }
+
+ memset(pst, 0, sizeof(*pst));
+ pst->npc = &hw->npc_flow;
+ pst->error = error;
+ pst->flow = flow;
+
+ /* Use integral byte offset */
+ key_offset = pst->npc->keyx_len[flow->nix_intf];
+ key_offset = (key_offset + 7) / 8;
+
+ /* Location where LDATA would begin */
+ pst->mcam_data = (uint8_t *)flow->mcam_data;
+ pst->mcam_mask = (uint8_t *)flow->mcam_mask;
+
+ while (pattern->type != RTE_FLOW_ITEM_TYPE_END &&
+ layer < RTE_DIM(parse_stage_funcs)) {
+ otx2_npc_dbg("Pattern type = %d", pattern->type);
+
+ /* Skip place-holders */
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+
+ pst->pattern = pattern;
+ otx2_npc_dbg("Is tunnel = %d, layer = %d", pst->tunnel, layer);
+ rc = parse_stage_funcs[layer](pst);
+ if (rc != 0)
+ return -rte_errno;
+
+ layer++;
+
+ /*
+ * Parse stage function sets pst->pattern to
+ * 1 past the last item it consumed.
+ */
+ pattern = pst->pattern;
+
+ if (pst->terminate)
+ break;
+ }
+
+ /* Skip trailing place-holders */
+ pattern = otx2_flow_skip_void_and_any_items(pattern);
+
+ /* Are there more items than what we can handle? */
+ if (pattern->type != RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, pattern,
+ "unsupported item in the sequence");
+ return -ENOTSUP;
+ }
+
+ return 0;
+}
+
+static int
+flow_parse_rule(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct rte_flow *flow,
+ struct otx2_parse_state *pst)
+{
+ int err;
+
+ /* Check attributes */
+ err = flow_parse_attr(dev, attr, error, flow);
+ if (err)
+ return err;
+
+ /* Check actions */
+ err = otx2_flow_parse_actions(dev, attr, actions, error, flow);
+ if (err)
+ return err;
+
+ /* Check pattern */
+ err = flow_parse_pattern(dev, pattern, error, flow, pst);
+ if (err)
+ return err;
+
+ /* Check for overlaps? */
+ return 0;
+}
+
+static int
+otx2_flow_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct otx2_parse_state parse_state;
+ struct rte_flow flow;
+
+ memset(&flow, 0, sizeof(flow));
+ return flow_parse_rule(dev, attr, pattern, actions, error, &flow,
+ &parse_state);
+}
+
+static struct rte_flow *
+otx2_flow_create(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct otx2_parse_state parse_state;
+ struct otx2_mbox *mbox = hw->mbox;
+ struct rte_flow *flow, *flow_iter;
+ struct otx2_flow_list *list;
+ int rc;
+
+ flow = rte_zmalloc("otx2_rte_flow", sizeof(*flow), 0);
+ if (flow == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Memory allocation failed");
+ return NULL;
+ }
+ memset(flow, 0, sizeof(*flow));
+
+ rc = flow_parse_rule(dev, attr, pattern, actions, error, flow,
+ &parse_state);
+ if (rc != 0)
+ goto err_exit;
+
+ rc = flow_program_npc(&parse_state, mbox, &hw->npc_flow);
+ if (rc != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to insert filter");
+ goto err_exit;
+ }
+
+ rc = flow_program_rss_action(dev, actions, flow);
+ if (rc != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to program rss action");
+ goto err_exit;
+ }
+
+
+ list = &hw->npc_flow.flow_list[flow->priority];
+ /* List in ascending order of mcam entries */
+ TAILQ_FOREACH(flow_iter, list, next) {
+ if (flow_iter->mcam_id > flow->mcam_id) {
+ TAILQ_INSERT_BEFORE(flow_iter, flow, next);
+ return flow;
+ }
+ }
+
+ TAILQ_INSERT_TAIL(list, flow, next);
+ return flow;
+
+err_exit:
+ rte_free(flow);
+ return NULL;
+}
+
+const struct rte_flow_ops otx2_flow_ops = {
+ .validate = otx2_flow_validate,
+ .create = otx2_flow_create,
+};
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 40/58] net/octeontx2: add flow destroy ops support
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (38 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 39/58] net/octeontx2: add flow operations jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 41/58] net/octeontx2: add flow init and fini jerinj
` (18 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add a few more flow operations: flow_destroy, flow_isolate,
flow_flush and flow_query.
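For example, reading and resetting the hit counter of a rule created with
a COUNT action could look like this (illustrative only; flow and port_id
are assumed to exist):
	struct rte_flow_query_count cnt = { .reset = 1 };
	const struct rte_flow_action count_act = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
	};
	struct rte_flow_error err;
	uint64_t hits = 0;

	if (rte_flow_query(port_id, flow, &count_act, &cnt, &err) == 0 &&
	    cnt.hits_set)
		hits = cnt.hits;
The query op below supports only COUNT and reports hits; bytes_set is
always left as 0.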
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow.c | 206 ++++++++++++++++++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 3 +
2 files changed, 209 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 896aef00a..24bde623d 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -5,6 +5,48 @@
#include "otx2_ethdev.h"
#include "otx2_flow.h"
+int
+otx2_flow_free_all_resources(struct otx2_eth_dev *hw)
+{
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ struct otx2_mbox *mbox = hw->mbox;
+ struct otx2_mcam_ents_info *info;
+ struct rte_bitmap *bmap;
+ struct rte_flow *flow;
+ int entry_count = 0;
+ int rc, idx;
+
+ for (idx = 0; idx < npc->flow_max_priority; idx++) {
+ info = &npc->flow_entry_info[idx];
+ entry_count += info->live_ent;
+ }
+
+ if (entry_count == 0)
+ return 0;
+
+ /* Free all MCAM entries allocated */
+ rc = otx2_flow_mcam_free_all_entries(mbox);
+
+ /* Free any MCAM counters and delete flow list */
+ for (idx = 0; idx < npc->flow_max_priority; idx++) {
+ while ((flow = TAILQ_FIRST(&npc->flow_list[idx])) != NULL) {
+ if (flow->ctr_id != NPC_COUNTER_NONE)
+ rc |= otx2_flow_mcam_free_counter(mbox,
+ flow->ctr_id);
+
+ TAILQ_REMOVE(&npc->flow_list[idx], flow, next);
+ rte_free(flow);
+ bmap = npc->live_entries[flow->priority];
+ rte_bitmap_clear(bmap, flow->mcam_id);
+ }
+ info = &npc->flow_entry_info[idx];
+ info->free_ent = 0;
+ info->live_ent = 0;
+ }
+ return rc;
+}
+
+
static int
flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox,
struct otx2_npc_flow_info *flow_info)
@@ -237,6 +279,27 @@ flow_program_rss_action(struct rte_eth_dev *eth_dev,
return 0;
}
+static int
+flow_free_rss_action(struct rte_eth_dev *eth_dev,
+ struct rte_flow *flow)
+{
+ struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+ struct otx2_npc_flow_info *npc = &dev->npc_flow;
+ uint32_t rss_grp;
+
+ if (flow->npc_action & NIX_RX_ACTIONOP_RSS) {
+ rss_grp = (flow->npc_action >> NIX_RSS_ACT_GRP_OFFSET) &
+ NIX_RSS_ACT_GRP_MASK;
+ if (rss_grp == 0 || rss_grp >= npc->rss_grps)
+ return -EINVAL;
+
+ rte_bitmap_clear(npc->rss_grp_entries, rss_grp);
+ }
+
+ return 0;
+}
+
+
static int
flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst)
{
@@ -445,7 +508,150 @@ otx2_flow_create(struct rte_eth_dev *dev,
return NULL;
}
+static int
+otx2_flow_destroy(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ struct otx2_mbox *mbox = hw->mbox;
+ struct rte_bitmap *bmap;
+ uint16_t match_id;
+ int rc;
+
+ match_id = (flow->npc_action >> NIX_RX_ACT_MATCH_OFFSET) &
+ NIX_RX_ACT_MATCH_MASK;
+
+ if (match_id && match_id < OTX2_FLOW_ACTION_FLAG_DEFAULT) {
+ if (rte_atomic32_read(&npc->mark_actions) == 0)
+ return -EINVAL;
+
+ /* Clear mark offload flag if there are no more mark actions */
+ if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0)
+ hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F;
+ }
+
+ rc = flow_free_rss_action(dev, flow);
+ if (rc != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to free rss action");
+ }
+
+ rc = otx2_flow_mcam_free_entry(mbox, flow->mcam_id);
+ if (rc != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to destroy filter");
+ }
+
+ TAILQ_REMOVE(&npc->flow_list[flow->priority], flow, next);
+
+ bmap = npc->live_entries[flow->priority];
+ rte_bitmap_clear(bmap, flow->mcam_id);
+
+ rte_free(flow);
+ return 0;
+}
+
+static int
+otx2_flow_flush(struct rte_eth_dev *dev,
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ int rc;
+
+ rc = otx2_flow_free_all_resources(hw);
+ if (rc) {
+ otx2_err("Error when deleting NPC MCAM entries "
+ ", counters");
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to flush filter");
+ return -rte_errno;
+ }
+
+ return 0;
+}
+
+static int
+otx2_flow_isolate(struct rte_eth_dev *dev __rte_unused,
+ int enable __rte_unused,
+ struct rte_flow_error *error)
+{
+ /*
+ * If isolation were supported, the default MCAM entry
+ * for this port would need to be uninstalled here.
+ */
+
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Flow isolation not supported");
+
+ return -rte_errno;
+}
+
+static int
+otx2_flow_query(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ const struct rte_flow_action *action,
+ void *data,
+ struct rte_flow_error *error)
+{
+ struct otx2_eth_dev *hw = dev->data->dev_private;
+ struct rte_flow_query_count *query = data;
+ struct otx2_mbox *mbox = hw->mbox;
+ const char *errmsg = NULL;
+ int errcode = ENOTSUP;
+ int rc;
+
+ if (action->type != RTE_FLOW_ACTION_TYPE_COUNT) {
+ errmsg = "Only COUNT is supported in query";
+ goto err_exit;
+ }
+
+ if (flow->ctr_id == NPC_COUNTER_NONE) {
+ errmsg = "Counter is not available";
+ goto err_exit;
+ }
+
+ rc = otx2_flow_mcam_read_counter(mbox, flow->ctr_id, &query->hits);
+ if (rc != 0) {
+ errcode = EIO;
+ errmsg = "Error reading flow counter";
+ goto err_exit;
+ }
+ query->hits_set = 1;
+ query->bytes_set = 0;
+
+ if (query->reset) {
+ rc = otx2_flow_mcam_clear_counter(mbox, flow->ctr_id);
+ if (rc != 0) {
+ errcode = EIO;
+ errmsg = "Error clearing flow counter";
+ goto err_exit;
+ }
+ }
+
+ return 0;
+
+err_exit:
+ rte_flow_error_set(error, errcode,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ errmsg);
+ return -rte_errno;
+}
+
const struct rte_flow_ops otx2_flow_ops = {
.validate = otx2_flow_validate,
.create = otx2_flow_create,
+ .destroy = otx2_flow_destroy,
+ .flush = otx2_flow_flush,
+ .query = otx2_flow_query,
+ .isolate = otx2_flow_isolate,
};
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index db79451b9..e18e04658 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -5,6 +5,9 @@
#ifndef __OTX2_RX_H__
#define __OTX2_RX_H__
+/* Default mark value used when none is provided. */
+#define OTX2_FLOW_ACTION_FLAG_DEFAULT 0xffff
+
#define PTYPE_WIDTH 12
#define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH)
#define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH)
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 41/58] net/octeontx2: add flow init and fini
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (39 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 40/58] net/octeontx2: add flow destroy ops support jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 42/58] net/octeontx2: connect flow API to ethdev ops jerinj
` (17 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Kiran Kumar K <kirankumark@marvell.com>
Add the flow init and fini functionality. These are called from device
init and uninit respectively, and initialize and de-initialize the
flow-related memory.
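The init path below leans on the stock rte_bitmap pattern: size the
bitmap, carve memory for it, initialize it and pre-reserve entries. A
minimal sketch of that pattern (n_entries is a placeholder):
	uint32_t n_entries = 1024;	/* placeholder entry count */
	uint32_t bmap_sz = rte_bitmap_get_memory_footprint(n_entries);
	uint8_t *mem = rte_zmalloc(NULL, bmap_sz, RTE_CACHE_LINE_SIZE);
	struct rte_bitmap *bmap = NULL;

	if (mem != NULL)
		bmap = rte_bitmap_init(n_entries, mem, bmap_sz);
	if (bmap != NULL)
		rte_bitmap_set(bmap, 0);	/* reserve entry 0 */
otx2_flow_init() repeats this four times per priority (free, free_rev,
live and live_rev entries) out of one allocation, and once more for the
RSS groups, where group 0 is reserved for the default RSS context.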
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_flow.c | 315 ++++++++++++++++++++++++++++++
1 file changed, 315 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 24bde623d..94bd85161 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -655,3 +655,318 @@ const struct rte_flow_ops otx2_flow_ops = {
.query = otx2_flow_query,
.isolate = otx2_flow_isolate,
};
+
+static int
+flow_supp_key_len(uint32_t supp_mask)
+{
+ int nib_count = 0;
+ while (supp_mask) {
+ nib_count++;
+ supp_mask &= (supp_mask - 1);
+ }
+ return nib_count * 4;
+}
+
+/* Refer to the HRM registers:
+ * NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG
+ * and
+ * NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG
+ */
+#define BYTESM1_SHIFT 16
+#define HDR_OFF_SHIFT 8
+static void
+flow_update_kex_info(struct npc_xtract_info *xtract_info,
+ uint64_t val)
+{
+ xtract_info->len = ((val >> BYTESM1_SHIFT) & 0xf) + 1;
+ xtract_info->hdr_off = (val >> HDR_OFF_SHIFT) & 0xff;
+ xtract_info->key_off = val & 0x3f;
+ xtract_info->enable = ((val >> 7) & 0x1);
+}
+
+static void
+flow_process_mkex_cfg(struct otx2_npc_flow_info *npc,
+ struct npc_get_kex_cfg_rsp *kex_rsp)
+{
+ volatile uint64_t (*q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT]
+ [NPC_MAX_LD];
+ struct npc_xtract_info *x_info = NULL;
+ int lid, lt, ld, fl, ix;
+ otx2_dxcfg_t *p;
+ uint64_t keyw;
+ uint64_t val;
+
+ npc->keyx_supp_nmask[NPC_MCAM_RX] =
+ kex_rsp->rx_keyx_cfg & 0x7fffffffULL;
+ npc->keyx_supp_nmask[NPC_MCAM_TX] =
+ kex_rsp->tx_keyx_cfg & 0x7fffffffULL;
+ npc->keyx_len[NPC_MCAM_RX] =
+ flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]);
+ npc->keyx_len[NPC_MCAM_TX] =
+ flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]);
+
+ keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL;
+ npc->keyw[NPC_MCAM_RX] = keyw;
+ keyw = (kex_rsp->tx_keyx_cfg >> 32) & 0x7ULL;
+ npc->keyw[NPC_MCAM_TX] = keyw;
+
+ /* Update KEX_LD_FLAG */
+ for (ix = 0; ix < NPC_MAX_INTF; ix++) {
+ for (ld = 0; ld < NPC_MAX_LD; ld++) {
+ for (fl = 0; fl < NPC_MAX_LFL; fl++) {
+ x_info =
+ &npc->prx_fxcfg[ix][ld][fl].xtract[0];
+ val = kex_rsp->intf_ld_flags[ix][ld][fl];
+ flow_update_kex_info(x_info, val);
+ }
+ }
+ }
+
+ /* Update LID, LT and LDATA cfg */
+ p = &npc->prx_dxcfg;
+ q = (volatile uint64_t (*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])
+ (&kex_rsp->intf_lid_lt_ld);
+ for (ix = 0; ix < NPC_MAX_INTF; ix++) {
+ for (lid = 0; lid < NPC_MAX_LID; lid++) {
+ for (lt = 0; lt < NPC_MAX_LT; lt++) {
+ for (ld = 0; ld < NPC_MAX_LD; ld++) {
+ x_info = &(*p)[ix][lid][lt].xtract[ld];
+ val = (*q)[ix][lid][lt][ld];
+ flow_update_kex_info(x_info, val);
+ }
+ }
+ }
+ }
+ /* Update LDATA Flags cfg */
+ npc->prx_lfcfg[0].i = kex_rsp->kex_ld_flags[0];
+ npc->prx_lfcfg[1].i = kex_rsp->kex_ld_flags[1];
+}
+
+static struct otx2_idev_kex_cfg *
+flow_intra_dev_kex_cfg(void)
+{
+ static const char name[] = "octeontx2_intra_device_kex_conf";
+ struct otx2_idev_kex_cfg *idev;
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(name);
+ if (mz)
+ return mz->addr;
+
+ /* Request for the first time */
+ mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_kex_cfg),
+ SOCKET_ID_ANY, 0, OTX2_ALIGN);
+ if (mz) {
+ idev = mz->addr;
+ rte_atomic16_set(&idev->kex_refcnt, 0);
+ return idev;
+ }
+ return NULL;
+}
+
+static int
+flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
+{
+ struct otx2_npc_flow_info *npc = &dev->npc_flow;
+ struct npc_get_kex_cfg_rsp *kex_rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct otx2_idev_kex_cfg *idev;
+ int rc = 0;
+
+ idev = flow_intra_dev_kex_cfg();
+ if (!idev)
+ return -ENOMEM;
+
+ /* Has kex_cfg already been read by another driver? */
+ if (rte_atomic16_add_return(&idev->kex_refcnt, 1) == 1) {
+ /* Call mailbox to get key & data size */
+ (void)otx2_mbox_alloc_msg_npc_get_kex_cfg(mbox);
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_get_rsp(mbox, 0, (void *)&kex_rsp);
+ if (rc) {
+ otx2_err("Failed to fetch NPC keyx config");
+ goto done;
+ }
+ memcpy(&idev->kex_cfg, kex_rsp,
+ sizeof(struct npc_get_kex_cfg_rsp));
+ }
+
+ flow_process_mkex_cfg(npc, &idev->kex_cfg);
+
+done:
+ return rc;
+}
+
+int
+otx2_flow_init(struct otx2_eth_dev *hw)
+{
+ uint8_t *mem = NULL, *nix_mem = NULL, *npc_mem = NULL;
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ uint32_t bmap_sz;
+ int rc = 0, idx;
+
+ rc = flow_fetch_kex_cfg(hw);
+ if (rc) {
+ otx2_err("Failed to fetch NPC keyx config from idev");
+ return rc;
+ }
+
+ rte_atomic32_init(&npc->mark_actions);
+
+ npc->mcam_entries = NPC_MCAM_TOT_ENTRIES >> npc->keyw[NPC_MCAM_RX];
+ /* Free, free_rev, live and live_rev entries */
+ bmap_sz = rte_bitmap_get_memory_footprint(npc->mcam_entries);
+ mem = rte_zmalloc(NULL, 4 * bmap_sz * npc->flow_max_priority,
+ RTE_CACHE_LINE_SIZE);
+ if (mem == NULL) {
+ otx2_err("Bmap alloc failed");
+ rc = -ENOMEM;
+ return rc;
+ }
+
+ npc->flow_entry_info = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct otx2_mcam_ents_info),
+ 0);
+ if (npc->flow_entry_info == NULL) {
+ otx2_err("flow_entry_info alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->free_entries = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct rte_bitmap),
+ 0);
+ if (npc->free_entries == NULL) {
+ otx2_err("free_entries alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->free_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct rte_bitmap),
+ 0);
+ if (npc->free_entries_rev == NULL) {
+ otx2_err("free_entries_rev alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->live_entries = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct rte_bitmap),
+ 0);
+ if (npc->live_entries == NULL) {
+ otx2_err("live_entries alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->live_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct rte_bitmap),
+ 0);
+ if (npc->live_entries_rev == NULL) {
+ otx2_err("live_entries_rev alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->flow_list = rte_zmalloc(NULL, npc->flow_max_priority
+ * sizeof(struct otx2_flow_list),
+ 0);
+ if (npc->flow_list == NULL) {
+ otx2_err("flow_list alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc_mem = mem;
+ for (idx = 0; idx < npc->flow_max_priority; idx++) {
+ TAILQ_INIT(&npc->flow_list[idx]);
+
+ npc->free_entries[idx] =
+ rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
+ mem += bmap_sz;
+
+ npc->free_entries_rev[idx] =
+ rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
+ mem += bmap_sz;
+
+ npc->live_entries[idx] =
+ rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
+ mem += bmap_sz;
+
+ npc->live_entries_rev[idx] =
+ rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
+ mem += bmap_sz;
+
+ npc->flow_entry_info[idx].free_ent = 0;
+ npc->flow_entry_info[idx].live_ent = 0;
+ npc->flow_entry_info[idx].max_id = 0;
+ npc->flow_entry_info[idx].min_id = ~(0);
+ }
+
+ npc->rss_grps = NIX_RSS_GRPS;
+
+ bmap_sz = rte_bitmap_get_memory_footprint(npc->rss_grps);
+ nix_mem = rte_zmalloc(NULL, bmap_sz, RTE_CACHE_LINE_SIZE);
+ if (nix_mem == NULL) {
+ otx2_err("Bmap alloc failed");
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ npc->rss_grp_entries = rte_bitmap_init(npc->rss_grps, nix_mem, bmap_sz);
+
+ /* Group 0 will be used for RSS;
+ * groups 1-7 will be used for the rte_flow RSS action
+ */
+ rte_bitmap_set(npc->rss_grp_entries, 0);
+
+ return 0;
+
+err:
+ if (npc->flow_list)
+ rte_free(npc->flow_list);
+ if (npc->live_entries_rev)
+ rte_free(npc->live_entries_rev);
+ if (npc->live_entries)
+ rte_free(npc->live_entries);
+ if (npc->free_entries_rev)
+ rte_free(npc->free_entries_rev);
+ if (npc->free_entries)
+ rte_free(npc->free_entries);
+ if (npc->flow_entry_info)
+ rte_free(npc->flow_entry_info);
+ if (npc_mem)
+ rte_free(npc_mem);
+ if (nix_mem)
+ rte_free(nix_mem);
+ return rc;
+}
+
+int
+otx2_flow_fini(struct otx2_eth_dev *hw)
+{
+ struct otx2_npc_flow_info *npc = &hw->npc_flow;
+ int rc;
+
+ rc = otx2_flow_free_all_resources(hw);
+ if (rc) {
+ otx2_err("Error when deleting NPC MCAM entries, counters");
+ return rc;
+ }
+
+ if (npc->flow_list)
+ rte_free(npc->flow_list);
+ if (npc->live_entries_rev)
+ rte_free(npc->live_entries_rev);
+ if (npc->live_entries)
+ rte_free(npc->live_entries);
+ if (npc->free_entries_rev)
+ rte_free(npc->free_entries_rev);
+ if (npc->free_entries)
+ rte_free(npc->free_entries);
+ if (npc->flow_entry_info)
+ rte_free(npc->flow_entry_info);
+
+ return 0;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 42/58] net/octeontx2: connect flow API to ethdev ops
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (40 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 41/58] net/octeontx2: add flow init and fini jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 43/58] net/octeontx2: implement VLAN utility functions jerinj
` (16 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Connect rte_flow driver ops to ethdev via .filter_ctrl op.
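For context, this is the lookup that the rte_flow API performs under the
hood (sketch; error handling elided):
	const struct rte_flow_ops *ops = NULL;
	int ret;

	ret = rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_GENERIC,
				      RTE_ETH_FILTER_GET, &ops);
	/* on success, ops points at otx2_flow_ops for this port */
rte_flow_validate(), rte_flow_create() and friends resolve the per-port
ops this way and then dispatch into the callbacks registered below.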
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
doc/guides/nics/octeontx2.rst | 97 ++++++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev.c | 9 ++
drivers/net/octeontx2/otx2_ethdev.h | 3 +
drivers/net/octeontx2/otx2_ethdev_ops.c | 21 +++++
7 files changed, 133 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 46fb00be6..33d2f2785 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -22,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Flow control = Y
+Flow API = Y
Packet type parsing = Y
Timesync = Y
Timestamp offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index f3f812804..980a4daf9 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -22,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Flow control = Y
+Flow API = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 7fba7e1d9..330534a90 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -17,6 +17,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Flow API = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 41eb3c7b9..ce7016e2b 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -23,6 +23,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Multiple queues for TX and RX
- Receiver Side Scaling (RSS)
- MAC filtering
+- Generic flow API
- Port hardware statistics
- Link state information
- Link flow control
@@ -109,3 +110,99 @@ Runtime Config Options
The above devarg parameters are configurable per device; the user needs to
pass the parameters to all the PCIe devices if the application requires
them to be configured on all the ethdev ports.
+
+RTE Flow Support
+----------------
+
+The OCTEON TX2 SoC family NIC has support for the following patterns and
+actions.
+
+Patterns:
+
+.. _table_octeontx2_supported_flow_item_types:
+
+.. table:: Item types
+
+ +----+--------------------------------+
+ | # | Pattern Type |
+ +====+================================+
+ | 1 | RTE_FLOW_ITEM_TYPE_ETH |
+ +----+--------------------------------+
+ | 2 | RTE_FLOW_ITEM_TYPE_VLAN |
+ +----+--------------------------------+
+ | 3 | RTE_FLOW_ITEM_TYPE_E_TAG |
+ +----+--------------------------------+
+ | 4 | RTE_FLOW_ITEM_TYPE_IPV4 |
+ +----+--------------------------------+
+ | 5 | RTE_FLOW_ITEM_TYPE_IPV6 |
+ +----+--------------------------------+
+ | 6 | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4|
+ +----+--------------------------------+
+ | 7 | RTE_FLOW_ITEM_TYPE_MPLS |
+ +----+--------------------------------+
+ | 8 | RTE_FLOW_ITEM_TYPE_ICMP |
+ +----+--------------------------------+
+ | 9 | RTE_FLOW_ITEM_TYPE_UDP |
+ +----+--------------------------------+
+ | 10 | RTE_FLOW_ITEM_TYPE_TCP |
+ +----+--------------------------------+
+ | 11 | RTE_FLOW_ITEM_TYPE_SCTP |
+ +----+--------------------------------+
+ | 12 | RTE_FLOW_ITEM_TYPE_ESP |
+ +----+--------------------------------+
+ | 13 | RTE_FLOW_ITEM_TYPE_GRE |
+ +----+--------------------------------+
+ | 14 | RTE_FLOW_ITEM_TYPE_NVGRE |
+ +----+--------------------------------+
+ | 15 | RTE_FLOW_ITEM_TYPE_VXLAN |
+ +----+--------------------------------+
+ | 16 | RTE_FLOW_ITEM_TYPE_GTPC |
+ +----+--------------------------------+
+ | 17 | RTE_FLOW_ITEM_TYPE_GTPU |
+ +----+--------------------------------+
+ | 18 | RTE_FLOW_ITEM_TYPE_GENEVE |
+ +----+--------------------------------+
+ | 19 | RTE_FLOW_ITEM_TYPE_VXLAN_GPE |
+ +----+--------------------------------+
+ | 20 | RTE_FLOW_ITEM_TYPE_VOID |
+ +----+--------------------------------+
+ | 21 | RTE_FLOW_ITEM_TYPE_ANY |
+ +----+--------------------------------+
+
+Actions:
+
+.. _table_octeontx2_supported_ingress_action_types:
+
+.. table:: Ingress action types
+
+ +----+--------------------------------+
+ | # | Action Type |
+ +====+================================+
+ | 1 | RTE_FLOW_ACTION_TYPE_VOID |
+ +----+--------------------------------+
+ | 2 | RTE_FLOW_ACTION_TYPE_MARK |
+ +----+--------------------------------+
+ | 3 | RTE_FLOW_ACTION_TYPE_FLAG |
+ +----+--------------------------------+
+ | 4 | RTE_FLOW_ACTION_TYPE_COUNT |
+ +----+--------------------------------+
+ | 5 | RTE_FLOW_ACTION_TYPE_DROP |
+ +----+--------------------------------+
+ | 6 | RTE_FLOW_ACTION_TYPE_QUEUE |
+ +----+--------------------------------+
+ | 7 | RTE_FLOW_ACTION_TYPE_RSS |
+ +----+--------------------------------+
+ | 8 | RTE_FLOW_ACTION_TYPE_SECURITY |
+ +----+--------------------------------+
+
+.. _table_octeontx2_supported_egress_action_types:
+
+.. table:: Egress action types
+
+ +----+--------------------------------+
+ | # | Action Type |
+ +====+================================+
+ | 1 | RTE_FLOW_ACTION_TYPE_COUNT |
+ +----+--------------------------------+
+ | 2 | RTE_FLOW_ACTION_TYPE_DROP |
+ +----+--------------------------------+
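+
+For example, with testpmd, a rule directing all TCP traffic to Rx queue 4
+could be created as follows (port and queue numbers are placeholders)::
+
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end actions queue index 4 / end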
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 834b052c6..62d5ee630 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1345,6 +1345,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.rx_descriptor_status = otx2_nix_rx_descriptor_status,
.tx_done_cleanup = otx2_nix_tx_done_cleanup,
.pool_ops_supported = otx2_nix_pool_ops_supported,
+ .filter_ctrl = otx2_nix_dev_filter_ctrl,
.get_module_info = otx2_nix_get_module_info,
.get_module_eeprom = otx2_nix_get_module_eeprom,
.flow_ctrl_get = otx2_nix_flow_ctrl_get,
@@ -1524,6 +1525,11 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL;
}
+ /* Initialize rte-flow */
+ rc = otx2_flow_init(dev);
+ if (rc)
+ goto free_mac_addrs;
+
otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64
" rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64,
eth_dev->data->port_id, dev->pf, dev->vf,
@@ -1560,6 +1566,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
/* Disable nix bpid config */
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ /* Disable other rte_flow entries */
+ otx2_flow_fini(dev);
+
/* Disable PTP if already enabled */
if (otx2_ethdev_is_ptp_en(dev))
otx2_nix_timesync_disable(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e8a22b6ec..ad12f2553 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -294,6 +294,9 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
/* Ops */
void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_info *dev_info);
+int otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
+ enum rte_filter_type filter_type,
+ enum rte_filter_op filter_op, void *arg);
int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_module_info *modinfo);
int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 2a949439a..e55acd4e0 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -220,6 +220,27 @@ otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
return -ENOTSUP;
}
+int
+otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
+ enum rte_filter_type filter_type,
+ enum rte_filter_op filter_op, void *arg)
+{
+ RTE_SET_USED(eth_dev);
+
+ if (filter_type != RTE_ETH_FILTER_GENERIC) {
+ otx2_err("Unsupported filter type %d", filter_type);
+ return -ENOTSUP;
+ }
+
+ if (filter_op == RTE_ETH_FILTER_GET) {
+ *(const void **)arg = &otx2_flow_ops;
+ return 0;
+ }
+
+ otx2_err("Invalid filter_op %d", filter_op);
+ return -EINVAL;
+}
+
static struct cgx_fw_data *
nix_get_fwdata(struct otx2_eth_dev *dev)
{
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 43/58] net/octeontx2: implement VLAN utility functions
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (41 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 42/58] net/octeontx2: connect flow API to ethdev ops jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 44/58] net/octeontx2: support VLAN offloads jerinj
` (15 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Implement the utility functions needed for VLAN functionality and
introduce the VLAN-related structures as well.
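The key detail here is how MKEX field offsets are derived: each set bit
in the supported-field mask contributes one nibble (4 bits) to the MCAM
search key, so a field's offset is the popcount of the mask bits below
it, times four. A standalone sketch of the computation performed by
nix_vlan_rx_mkex_offset() below:
	static int mkex_offset_bits(uint64_t lower_fields_mask)
	{
		int nibbles = 0;

		while (lower_fields_mask) {
			nibbles += lower_fields_mask & 1;
			lower_fields_mask >>= 1;
		}
		return nibbles * 4;	/* offset in bits */
	}
For instance, with only the three channel nibbles enabled below the LB
LTYPE field, the LTYPE would start 12 bits into the search key.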
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 10 ++
drivers/net/octeontx2/otx2_ethdev.h | 46 +++++++
drivers/net/octeontx2/otx2_vlan.c | 190 ++++++++++++++++++++++++++++
5 files changed, 248 insertions(+)
create mode 100644 drivers/net/octeontx2/otx2_vlan.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 26fe064b3..dfe747188 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -37,6 +37,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_ptp.c \
otx2_flow.c \
otx2_link.c \
+ otx2_vlan.c \
otx2_stats.c \
otx2_lookup.c \
otx2_ethdev.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index f0e03bffe..6281ee21b 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -9,6 +9,7 @@ sources = files(
'otx2_ptp.c',
'otx2_flow.c',
'otx2_link.c',
+ 'otx2_vlan.c',
'otx2_stats.c',
'otx2_lookup.c',
'otx2_ethdev.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 62d5ee630..2deaf1a90 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1102,6 +1102,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
/* Free the resources allocated from the previous configure */
if (dev->configured == 1) {
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ otx2_nix_vlan_fini(eth_dev);
oxt2_nix_unregister_queue_irqs(eth_dev);
nix_set_nop_rxtx_function(eth_dev);
rc = nix_store_queue_cfg_and_then_release(eth_dev);
@@ -1148,6 +1149,12 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ rc = otx2_nix_vlan_offload_init(eth_dev);
+ if (rc) {
+ otx2_err("Failed to init vlan offload rc=%d", rc);
+ goto free_nix_lf;
+ }
+
/* Register queue IRQs */
rc = oxt2_nix_register_queue_irqs(eth_dev);
if (rc) {
@@ -1566,6 +1573,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
/* Disable nix bpid config */
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ /* Disable vlan offloads */
+ otx2_nix_vlan_fini(eth_dev);
+
/* Disable other rte_flow entries */
otx2_flow_fini(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index ad12f2553..8577272b4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -182,6 +182,47 @@ struct otx2_fc_info {
uint16_t bpid[NIX_MAX_CHAN];
};
+struct vlan_mkex_info {
+ struct npc_xtract_info la_xtract;
+ struct npc_xtract_info lb_xtract;
+ uint64_t lb_lt_offset;
+};
+
+struct vlan_entry {
+ uint32_t mcam_idx;
+ uint16_t vlan_id;
+ TAILQ_ENTRY(vlan_entry) next;
+};
+
+TAILQ_HEAD(otx2_vlan_filter_tbl, vlan_entry);
+
+struct otx2_vlan_info {
+ struct otx2_vlan_filter_tbl fltr_tbl;
+ /* MKEX layer info */
+ struct mcam_entry def_tx_mcam_ent;
+ struct mcam_entry def_rx_mcam_ent;
+ struct vlan_mkex_info mkex;
+ /* Default mcam entry that matches vlan packets */
+ uint32_t def_rx_mcam_idx;
+ uint32_t def_tx_mcam_idx;
+ /* MCAM entry that matches double vlan packets */
+ uint32_t qinq_mcam_idx;
+ /* Indices of tx_vtag def registers */
+ uint32_t outer_vlan_idx;
+ uint32_t inner_vlan_idx;
+ uint16_t outer_vlan_tpid;
+ uint16_t inner_vlan_tpid;
+ uint16_t pvid;
+ /* QinQ entry allocated before default one */
+ uint8_t qinq_before_def;
+ uint8_t pvid_insert_on;
+ /* Rx vtag action type */
+ uint8_t vtag_type_idx;
+ uint8_t filter_on;
+ uint8_t strip_on;
+ uint8_t qinq_on;
+};
+
struct otx2_eth_dev {
OTX2_DEV; /* Base class */
MARKER otx2_eth_dev_data_start;
@@ -233,6 +274,7 @@ struct otx2_eth_dev {
uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
struct otx2_npc_flow_info npc_flow;
+ struct otx2_vlan_info vlan_info;
struct otx2_eth_qconf *tx_qconf;
struct otx2_eth_qconf *rx_qconf;
struct rte_eth_dev *eth_dev;
@@ -402,6 +444,10 @@ int otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb);
int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev);
+/* VLAN */
+int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
+int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
+
/* Lookup configuration */
void *otx2_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
new file mode 100644
index 000000000..b3136d2cf
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -0,0 +1,190 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_malloc.h>
+#include <rte_tailq.h>
+
+#include "otx2_ethdev.h"
+#include "otx2_flow.h"
+
+
+#define VLAN_ID_MATCH 0x1
+#define VTAG_F_MATCH 0x2
+#define MAC_ADDR_MATCH 0x4
+#define QINQ_F_MATCH 0x8
+#define VLAN_DROP 0x10
+
+enum vtag_cfg_dir {
+ VTAG_TX,
+ VTAG_RX
+};
+
+static int
+__rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
+ uint32_t entry, const int enable)
+{
+ struct npc_mcam_ena_dis_entry_req *req;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc = -EINVAL;
+
+ if (enable)
+ req = otx2_mbox_alloc_msg_npc_mcam_ena_entry(mbox);
+ else
+ req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
+
+ req->entry = entry;
+
+ rc = otx2_mbox_process_msg(mbox, NULL);
+ return rc;
+}
+
+static int
+__rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
+{
+ struct npc_mcam_free_entry_req *req;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc = -EINVAL;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ req->entry = entry;
+
+ rc = otx2_mbox_process_msg(mbox, NULL);
+ return rc;
+}
+
+static int
+__rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
+ struct mcam_entry *entry, uint8_t intf)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct npc_mcam_write_entry_req *req;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct msghdr *rsp;
+ int rc = -EINVAL;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
+
+ req->entry = ent_idx;
+ req->intf = intf;
+ req->enable_entry = 1;
+ memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ return rc;
+}
+
+static int
+__rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
+ struct mcam_entry *entry,
+ uint8_t intf, bool drop)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct npc_mcam_alloc_and_write_entry_req *req;
+ struct npc_mcam_alloc_and_write_entry_rsp *rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc = -EINVAL;
+
+ req = otx2_mbox_alloc_msg_npc_mcam_alloc_and_write_entry(mbox);
+
+ if (intf == NPC_MCAM_RX) {
+ if (!drop && dev->vlan_info.def_rx_mcam_idx) {
+ req->priority = NPC_MCAM_HIGHER_PRIO;
+ req->ref_entry = dev->vlan_info.def_rx_mcam_idx;
+ } else if (drop && dev->vlan_info.qinq_mcam_idx) {
+ req->priority = NPC_MCAM_LOWER_PRIO;
+ req->ref_entry = dev->vlan_info.qinq_mcam_idx;
+ } else {
+ req->priority = NPC_MCAM_ANY_PRIO;
+ req->ref_entry = 0;
+ }
+ } else {
+ req->priority = NPC_MCAM_ANY_PRIO;
+ req->ref_entry = 0;
+ }
+
+ req->intf = intf;
+ req->enable_entry = 1;
+ memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ return rsp->entry;
+}
+
+static int
+nix_vlan_rx_mkex_offset(uint64_t mask)
+{
+ int nib_count = 0;
+
+ while (mask) {
+ nib_count += mask & 1;
+ mask >>= 1;
+ }
+
+ return nib_count * 4;
+}
+
+static int
+nix_vlan_get_mkex_info(struct otx2_eth_dev *dev)
+{
+ struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
+ struct otx2_npc_flow_info *npc = &dev->npc_flow;
+ struct npc_xtract_info *x_info = NULL;
+ uint64_t rx_keyx;
+ otx2_dxcfg_t *p;
+ int rc = -EINVAL;
+
+ if (npc == NULL) {
+ otx2_err("Missing npc mkex configuration");
+ return rc;
+ }
+
+#define NPC_KEX_CHAN_NIBBLE_ENA 0x7ULL
+#define NPC_KEX_LB_LTYPE_NIBBLE_ENA 0x1000ULL
+#define NPC_KEX_LB_LTYPE_NIBBLE_MASK 0xFFFULL
+
+ rx_keyx = npc->keyx_supp_nmask[NPC_MCAM_RX];
+ if ((rx_keyx & NPC_KEX_CHAN_NIBBLE_ENA) != NPC_KEX_CHAN_NIBBLE_ENA)
+ return rc;
+
+ if ((rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_ENA) !=
+ NPC_KEX_LB_LTYPE_NIBBLE_ENA)
+ return rc;
+
+ mkex->lb_lt_offset =
+ nix_vlan_rx_mkex_offset(rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_MASK);
+
+ p = &npc->prx_dxcfg;
+ x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0];
+ memcpy(&mkex->la_xtract, x_info, sizeof(struct npc_xtract_info));
+ x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LB][NPC_LT_LB_CTAG].xtract[0];
+ memcpy(&mkex->lb_xtract, x_info, sizeof(struct npc_xtract_info));
+
+ return 0;
+}
+
+int
+otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc;
+
+ /* Port initialized for first time or restarted */
+ if (!dev->configured) {
+ rc = nix_vlan_get_mkex_info(dev);
+ if (rc) {
+ otx2_err("Failed to get vlan mkex info rc=%d", rc);
+ return rc;
+ }
+ }
+ return 0;
+}
+
+int
+otx2_nix_vlan_fini(__rte_unused struct rte_eth_dev *eth_dev)
+{
+ return 0;
+}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 44/58] net/octeontx2: support VLAN offloads
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (42 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 43/58] net/octeontx2: implement VLAN utility functions jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 45/58] net/octeontx2: support VLAN filters jerinj
` (14 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Support configuring VLAN offloads for an Ethernet device, and
handle dynamic promiscuous mode changes by updating the VLAN
filter entries according to the device's promiscuous mode.
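For context, a minimal usage sketch (not part of the patch) of how an
application would toggle these offloads through the generic ethdev API;
port_id is a placeholder for a configured port, and the ethdev layer
dispatches the call to the vlan_offload_set callback added below:

	int vlan_offload = rte_eth_dev_get_vlan_offload(port_id);

	/* Request HW VLAN strip and filter; clear a bit to disable it */
	vlan_offload |= ETH_VLAN_STRIP_OFFLOAD | ETH_VLAN_FILTER_OFFLOAD;
	if (rte_eth_dev_set_vlan_offload(port_id, vlan_offload) != 0)
		printf("Failed to set VLAN offloads\n");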
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 2 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 3 +
drivers/net/octeontx2/otx2_ethdev_ops.c | 1 +
drivers/net/octeontx2/otx2_rx.h | 1 +
drivers/net/octeontx2/otx2_vlan.c | 523 ++++++++++++++++++++-
9 files changed, 527 insertions(+), 9 deletions(-)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 33d2f2785..ac4712b0c 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
Inner RSS = Y
Flow control = Y
Flow API = Y
+VLAN offload = Y
+QinQ offload = Y
Packet type parsing = Y
Timesync = Y
Timestamp offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 980a4daf9..e54c1babe 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
Inner RSS = Y
Flow control = Y
Flow API = Y
+VLAN offload = Y
+QinQ offload = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 330534a90..769ab16ee 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -18,6 +18,8 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Flow API = Y
+VLAN offload = Y
+QinQ offload = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index ce7016e2b..9184a76b9 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -24,6 +24,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Receiver Side Scaling (RSS)
- MAC filtering
- Generic flow API
+- VLAN/QinQ stripping and insertion
- Port hardware statistics
- Link state information
- Link flow control
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 2deaf1a90..2924d43a8 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1364,6 +1364,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.timesync_adjust_time = otx2_nix_timesync_adjust_time,
.timesync_read_time = otx2_nix_timesync_read_time,
.timesync_write_time = otx2_nix_timesync_write_time,
+ .vlan_offload_set = otx2_nix_vlan_offload_set,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 8577272b4..50fd18b6e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -221,6 +221,7 @@ struct otx2_vlan_info {
uint8_t filter_on;
uint8_t strip_on;
uint8_t qinq_on;
+ uint8_t promisc_on;
};
struct otx2_eth_dev {
@@ -447,6 +448,8 @@ int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev);
/* VLAN */
int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
+int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask);
+void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable);
/* Lookup configuration */
void *otx2_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index e55acd4e0..690d8ac0c 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -40,6 +40,7 @@ otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en)
otx2_mbox_process(mbox);
eth_dev->data->promiscuous = en;
+ otx2_nix_vlan_update_promisc(eth_dev, en);
}
void
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index e18e04658..7dc34d705 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -16,6 +16,7 @@
sizeof(uint16_t))
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+#define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3)
#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index b3136d2cf..7cf4f3136 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -14,6 +14,7 @@
#define MAC_ADDR_MATCH 0x4
#define QINQ_F_MATCH 0x8
#define VLAN_DROP 0x10
+#define DEF_F_ENTRY 0x20
enum vtag_cfg_dir {
VTAG_TX,
@@ -39,8 +40,50 @@ __rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
return rc;
}
+static void
+nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
+ struct mcam_entry *entry, bool qinq, bool drop)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int pcifunc = otx2_pfvf_func(dev->pf, dev->vf);
+ uint64_t action = 0, vtag_action = 0;
+
+ action = NIX_RX_ACTIONOP_UCAST;
+
+ if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+ action = NIX_RX_ACTIONOP_RSS;
+ action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
+ }
+
+ action |= (uint64_t)pcifunc << 4;
+ entry->action = action;
+
+ if (drop) {
+ entry->action &= ~((uint64_t)0xF);
+ entry->action |= NIX_RX_ACTIONOP_DROP;
+ return;
+ }
+
+ if (!qinq) {
+ /* VTAG0 fields denote CTAG in single vlan case */
+ vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
+ vtag_action |= (NPC_LID_LB << 8);
+ vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR;
+ } else {
+ /* VTAG0 & VTAG1 fields denote CTAG & STAG respectively */
+ vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
+ vtag_action |= (NPC_LID_LB << 8);
+ vtag_action |= NIX_RX_VTAGACTION_VTAG1_RELPTR;
+ vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 47);
+ vtag_action |= ((uint64_t)(NPC_LID_LB) << 40);
+ vtag_action |= (NIX_RX_VTAGACTION_VTAG0_RELPTR << 32);
+ }
+
+ entry->vtag_action = vtag_action;
+}
+
static int
-__rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
+nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
{
struct npc_mcam_free_entry_req *req;
struct otx2_mbox *mbox = dev->mbox;
@@ -54,8 +97,8 @@ __rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
}
static int
-__rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
- struct mcam_entry *entry, uint8_t intf)
+nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
+ struct mcam_entry *entry, uint8_t intf, uint8_t ena)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
struct npc_mcam_write_entry_req *req;
@@ -67,7 +110,7 @@ __rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
req->entry = ent_idx;
req->intf = intf;
- req->enable_entry = 1;
+ req->enable_entry = ena;
memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
@@ -75,9 +118,9 @@ __rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
}
static int
-__rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
- struct mcam_entry *entry,
- uint8_t intf, bool drop)
+nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
+ struct mcam_entry *entry,
+ uint8_t intf, bool drop)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
struct npc_mcam_alloc_and_write_entry_req *req;
@@ -114,6 +157,443 @@ __rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
return rsp->entry;
}
+static void
+nix_vlan_update_mac(struct rte_eth_dev *eth_dev, int mcam_index,
+ int enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
+ volatile uint8_t *key_data, *key_mask;
+ struct npc_mcam_read_entry_req *req;
+ struct npc_mcam_read_entry_rsp *rsp;
+ struct otx2_mbox *mbox = dev->mbox;
+ uint64_t mcam_data, mcam_mask;
+ struct mcam_entry entry;
+ uint8_t intf, mcam_ena;
+ int idx, rc = -EINVAL;
+ uint8_t *mac_addr;
+
+ memset(&entry, 0, sizeof(struct mcam_entry));
+
+ /* Read entry first */
+ req = otx2_mbox_alloc_msg_npc_mcam_read_entry(mbox);
+
+ req->entry = mcam_index;
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ otx2_err("Failed to read entry %d", mcam_index);
+ return;
+ }
+
+ entry = rsp->entry_data;
+ intf = rsp->intf;
+ mcam_ena = rsp->enable;
+
+ /* Update mcam address */
+ key_data = (volatile uint8_t *)entry.kw;
+ key_mask = (volatile uint8_t *)entry.kw_mask;
+
+ if (enable) {
+ mcam_mask = 0;
+ otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
+ &mcam_mask, mkex->la_xtract.len + 1);
+
+ } else {
+ mcam_data = 0ULL;
+ mac_addr = dev->mac_addr;
+ for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
+ mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
+
+ mcam_mask = BIT_ULL(48) - 1;
+
+ otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
+ &mcam_data, mkex->la_xtract.len + 1);
+ otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
+ &mcam_mask, mkex->la_xtract.len + 1);
+ }
+
+ /* Write back the mcam entry */
+ rc = nix_vlan_mcam_write(eth_dev, mcam_index,
+ &entry, intf, mcam_ena);
+ if (rc) {
+ otx2_err("Failed to write entry %d", mcam_index);
+ return;
+ }
+}
+
+void
+otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct vlan_entry *entry;
+
+ /* Already in required mode */
+ if (enable == vlan->promisc_on)
+ return;
+
+ /* Update default rx entry */
+ if (vlan->def_rx_mcam_idx)
+ nix_vlan_update_mac(eth_dev, vlan->def_rx_mcam_idx, enable);
+
+ /* Update all other rx filter entries */
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next)
+ nix_vlan_update_mac(eth_dev, entry->mcam_idx, enable);
+
+ vlan->promisc_on = enable;
+}
+
+/* Configure mcam entry with required MCAM search rules */
+static int
+nix_vlan_mcam_config(struct rte_eth_dev *eth_dev,
+ uint16_t vlan_id, uint16_t flags)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
+ volatile uint8_t *key_data, *key_mask;
+ uint64_t mcam_data, mcam_mask;
+ struct mcam_entry entry;
+ uint8_t *mac_addr;
+ int idx, kwi = 0;
+
+ memset(&entry, 0, sizeof(struct mcam_entry));
+ key_data = (volatile uint8_t *)entry.kw;
+ key_mask = (volatile uint8_t *)entry.kw_mask;
+
+ /* Channel base extracted to KW0[11:0] */
+ entry.kw[kwi] = dev->rx_chan_base;
+ entry.kw_mask[kwi] = BIT_ULL(12) - 1;
+
+ /* Adds vlan_id & LB CTAG flag to MCAM KW */
+ if (flags & VLAN_ID_MATCH) {
+ entry.kw[kwi] |= NPC_LT_LB_CTAG << mkex->lb_lt_offset;
+ entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset;
+
+ mcam_data = (vlan_id << 16);
+ mcam_mask = (BIT_ULL(16) - 1) << 16;
+ otx2_mbox_memcpy(key_data + mkex->lb_xtract.key_off,
+ &mcam_data, mkex->lb_xtract.len + 1);
+ otx2_mbox_memcpy(key_mask + mkex->lb_xtract.key_off,
+ &mcam_mask, mkex->lb_xtract.len + 1);
+ }
+
+ /* Adds LB STAG flag to MCAM KW */
+ if (flags & QINQ_F_MATCH) {
+ entry.kw[kwi] |= NPC_LT_LB_STAG << mkex->lb_lt_offset;
+ entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset;
+ }
+
+ /* Adds LB CTAG & LB STAG flags to MCAM KW */
+ if (flags & VTAG_F_MATCH) {
+ entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG)
+ << mkex->lb_lt_offset;
+ entry.kw_mask[kwi] |= (NPC_LT_LB_CTAG & NPC_LT_LB_STAG)
+ << mkex->lb_lt_offset;
+ }
+
+ /* Adds port MAC address to MCAM KW */
+ if (flags & MAC_ADDR_MATCH) {
+ mcam_data = 0ULL;
+ mac_addr = dev->mac_addr;
+ for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
+ mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
+
+ mcam_mask = BIT_ULL(48) - 1;
+ otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
+ &mcam_data, mkex->la_xtract.len + 1);
+ otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
+ &mcam_mask, mkex->la_xtract.len + 1);
+ }
+
+ /* VLAN_DROP: drop action for all vlan packets when filter is on.
+ * For QinQ, enable vtag action for both outer & inner tags.
+ */
+ if (flags & VLAN_DROP)
+ nix_set_rx_vlan_action(eth_dev, &entry, false, true);
+ else if (flags & QINQ_F_MATCH)
+ nix_set_rx_vlan_action(eth_dev, &entry, true, false);
+ else
+ nix_set_rx_vlan_action(eth_dev, &entry, false, false);
+
+ if (flags & DEF_F_ENTRY)
+ dev->vlan_info.def_rx_mcam_ent = entry;
+
+ return nix_vlan_mcam_alloc_and_write(eth_dev, &entry, NIX_INTF_RX,
+ flags & VLAN_DROP);
+}
+
+/* Installs/Removes/Modifies default rx entry */
+static int
+nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
+ bool filter, bool enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ uint16_t flags = 0;
+ int mcam_idx, rc;
+
+ /* Use default mcam entry to either drop vlan traffic when
+ * vlan filter is on or strip vtag when strip is enabled.
+ * Allocate default entry which matches port mac address
+ * and vtag(ctag/stag) flags with drop action.
+ */
+ if (!vlan->def_rx_mcam_idx) {
+ if (!eth_dev->data->promiscuous)
+ flags = MAC_ADDR_MATCH;
+
+ if (filter && enable)
+ flags |= VTAG_F_MATCH | VLAN_DROP;
+ else if (strip && enable)
+ flags |= VTAG_F_MATCH;
+ else
+ return 0;
+
+ flags |= DEF_F_ENTRY;
+
+ mcam_idx = nix_vlan_mcam_config(eth_dev, 0, flags);
+ if (mcam_idx < 0) {
+ otx2_err("Failed to config vlan mcam");
+ return -mcam_idx;
+ }
+
+ vlan->def_rx_mcam_idx = mcam_idx;
+ return 0;
+ }
+
+ /* Filter is already enabled, so packets would be dropped anyway. No
+ * mcam entry processing is needed for enabling strip.
+ */
+
+ /* Filter disable request */
+ if (vlan->filter_on && filter && !enable) {
+ vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
+
+ /* Free default rx entry only when
+ * 1. strip is not on and
+ * 2. qinq entry is allocated before default entry.
+ */
+ if (vlan->strip_on ||
+ (vlan->qinq_on && !vlan->qinq_before_def)) {
+ if (eth_dev->data->dev_conf.rxmode.mq_mode ==
+ ETH_MQ_RX_RSS)
+ vlan->def_rx_mcam_ent.action |=
+ NIX_RX_ACTIONOP_RSS;
+ else
+ vlan->def_rx_mcam_ent.action |=
+ NIX_RX_ACTIONOP_UCAST;
+ return nix_vlan_mcam_write(eth_dev,
+ vlan->def_rx_mcam_idx,
+ &vlan->def_rx_mcam_ent,
+ NIX_INTF_RX, 1);
+ } else {
+ rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
+ if (rc)
+ return rc;
+ vlan->def_rx_mcam_idx = 0;
+ }
+ }
+
+ /* Filter enable request */
+ if (!vlan->filter_on && filter && enable) {
+ vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
+ vlan->def_rx_mcam_ent.action |= NIX_RX_ACTIONOP_DROP;
+ return nix_vlan_mcam_write(eth_dev, vlan->def_rx_mcam_idx,
+ &vlan->def_rx_mcam_ent, NIX_INTF_RX, 1);
+ }
+
+ /* Strip disable request */
+ if (vlan->strip_on && strip && !enable) {
+ if (!vlan->filter_on &&
+ !(vlan->qinq_on && !vlan->qinq_before_def)) {
+ rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
+ if (rc)
+ return rc;
+ vlan->def_rx_mcam_idx = 0;
+ }
+ }
+
+ return 0;
+}
+
+/* Configure vlan stripping on or off */
+static int
+nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_vtag_config *vtag_cfg;
+ int rc = -EINVAL;
+
+ rc = nix_vlan_handle_default_rx_entry(eth_dev, true, false, enable);
+ if (rc) {
+ otx2_err("Failed to config default rx entry");
+ return rc;
+ }
+
+ vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
+ /* cfg_type = 1 for rx vlan cfg */
+ vtag_cfg->cfg_type = VTAG_RX;
+
+ if (enable)
+ vtag_cfg->rx.strip_vtag = 1;
+ else
+ vtag_cfg->rx.strip_vtag = 0;
+
+ /* Always capture */
+ vtag_cfg->rx.capture_vtag = 1;
+ vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
+ /* Use rx vtag type index[0] for now */
+ vtag_cfg->rx.vtag_type = 0;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ dev->vlan_info.strip_on = enable;
+ return rc;
+}
+
+/* Configure vlan filtering on or off for all vlans if vlan_id == 0 */
+static int
+nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
+ uint16_t vlan_id)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc = -EINVAL;
+
+ if (!vlan_id && enable) {
+ rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
+ enable);
+ if (rc) {
+ otx2_err("Failed to config vlan mcam");
+ return rc;
+ }
+ dev->vlan_info.filter_on = enable;
+ return 0;
+ }
+
+ if (!vlan_id && !enable) {
+ rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
+ enable);
+ if (rc) {
+ otx2_err("Failed to config vlan mcam");
+ return rc;
+ }
+ dev->vlan_info.filter_on = enable;
+ return 0;
+ }
+
+ return 0;
+}
+
+/* Configure double vlan(qinq) on or off */
+static int
+otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev,
+ const uint8_t enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan_info;
+ int mcam_idx;
+ int rc;
+
+ vlan_info = &dev->vlan_info;
+
+ if (!enable) {
+ if (!vlan_info->qinq_mcam_idx)
+ return 0;
+
+ rc = nix_vlan_mcam_free(dev, vlan_info->qinq_mcam_idx);
+ if (rc)
+ return rc;
+
+ vlan_info->qinq_mcam_idx = 0;
+ dev->vlan_info.qinq_on = 0;
+ vlan_info->qinq_before_def = 0;
+ return 0;
+ }
+
+ if (eth_dev->data->promiscuous)
+ mcam_idx = nix_vlan_mcam_config(eth_dev, 0, QINQ_F_MATCH);
+ else
+ mcam_idx = nix_vlan_mcam_config(eth_dev, 0,
+ QINQ_F_MATCH | MAC_ADDR_MATCH);
+ if (mcam_idx < 0)
+ return mcam_idx;
+
+ if (!vlan_info->def_rx_mcam_idx)
+ vlan_info->qinq_before_def = 1;
+
+ vlan_info->qinq_mcam_idx = mcam_idx;
+ dev->vlan_info.qinq_on = 1;
+ return 0;
+}
+
+int
+otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t offloads = dev->rx_offloads;
+ struct rte_eth_rxmode *rxmode;
+ int rc = 0;
+
+ rxmode = &eth_dev->data->dev_conf.rxmode;
+
+ if (mask & ETH_VLAN_EXTEND_MASK) {
+ otx2_err("Extend offload not supported");
+ return -ENOTSUP;
+ }
+
+ if (mask & ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ rc = nix_vlan_hw_strip(eth_dev, true);
+ } else {
+ offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ rc = nix_vlan_hw_strip(eth_dev, false);
+ }
+ if (rc)
+ goto done;
+ }
+
+ if (mask & ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+ offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ rc = nix_vlan_hw_filter(eth_dev, true, 0);
+ } else {
+ offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+ rc = nix_vlan_hw_filter(eth_dev, false, 0);
+ }
+ if (rc)
+ goto done;
+ }
+
+ if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+ if (!dev->vlan_info.qinq_on) {
+ offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+ rc = otx2_nix_config_double_vlan(eth_dev, true);
+ if (rc)
+ goto done;
+ }
+ } else {
+ if (dev->vlan_info.qinq_on) {
+ offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+ rc = otx2_nix_config_double_vlan(eth_dev, false);
+ if (rc)
+ goto done;
+ }
+ }
+
+ if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP)) {
+ dev->rx_offloads |= offloads;
+ dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
+ }
+
+done:
+ return rc;
+}
+
static int
nix_vlan_rx_mkex_offset(uint64_t mask)
{
@@ -170,7 +650,7 @@ int
otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc;
+ int rc, mask;
/* Port initialized for first time or restarted */
if (!dev->configured) {
@@ -179,12 +659,37 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
otx2_err("Failed to get vlan mkex info rc=%d", rc);
return rc;
}
+
+ TAILQ_INIT(&dev->vlan_info.fltr_tbl);
}
+
+ mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+ rc = otx2_nix_vlan_offload_set(eth_dev, mask);
+ if (rc) {
+ otx2_err("Failed to set vlan offload rc=%d", rc);
+ return rc;
+ }
+
return 0;
}
int
-otx2_nix_vlan_fini(__rte_unused struct rte_eth_dev *eth_dev)
+otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev)
{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ int rc;
+
+ if (!dev->configured) {
+ if (vlan->def_rx_mcam_idx) {
+ rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
+ if (rc)
+ return rc;
+ }
+ }
+
+ otx2_nix_config_double_vlan(eth_dev, false);
+ vlan->def_rx_mcam_idx = 0;
return 0;
}
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 45/58] net/octeontx2: support VLAN filters
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (43 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 44/58] net/octeontx2: support VLAN offloads jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 46/58] net/octeontx2: support VLAN TPID and PVID for Tx jerinj
` (13 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Support setting up VLAN filters so as to allow reception of
tagged packets after the VLAN HW filter offload is enabled.
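For context, a minimal usage sketch (not part of the patch): once the
port is configured with DEV_RX_OFFLOAD_VLAN_FILTER, individual VLAN IDs
are admitted through the generic API below, which lands in the new
otx2_nix_vlan_filter_set callback; port_id and the VLAN ID 100 are
placeholders:

	/* Admit VLAN 100; pass 0 as the last argument to remove it */
	int rc = rte_eth_dev_vlan_filter(port_id, 100, 1);
	if (rc)
		printf("VLAN filter set failed: %d\n", rc);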
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
doc/guides/nics/octeontx2.rst | 2 +-
drivers/net/octeontx2/otx2_ethdev.c | 2 +
drivers/net/octeontx2/otx2_ethdev.h | 4 +
drivers/net/octeontx2/otx2_vlan.c | 149 ++++++++++++++++++++-
7 files changed, 157 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index ac4712b0c..37b802999 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -21,6 +21,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+VLAN filter = Y
Flow control = Y
Flow API = Y
VLAN offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index e54c1babe..ccedd1359 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -21,6 +21,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+VLAN filter = Y
Flow control = Y
Flow API = Y
VLAN offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 769ab16ee..24df14717 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -17,6 +17,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+VLAN filter = Y
Flow API = Y
VLAN offload = Y
QinQ offload = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 9184a76b9..457980acf 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -22,7 +22,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Lock-free Tx queue
- Multiple queues for TX and RX
- Receiver Side Scaling (RSS)
-- MAC filtering
+- MAC/VLAN filtering
- Generic flow API
- VLAN/QinQ stripping and insertion
- Port hardware statistics
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 2924d43a8..34fab469d 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1365,6 +1365,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.timesync_read_time = otx2_nix_timesync_read_time,
.timesync_write_time = otx2_nix_timesync_write_time,
.vlan_offload_set = otx2_nix_vlan_offload_set,
+ .vlan_filter_set = otx2_nix_vlan_filter_set,
+ .vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 50fd18b6e..996ddec47 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -450,6 +450,10 @@ int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask);
void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable);
+int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
+ int on);
+void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev,
+ uint16_t queue, int on);
/* Lookup configuration */
void *otx2_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index 7cf4f3136..6216d6545 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -22,8 +22,8 @@ enum vtag_cfg_dir {
};
static int
-__rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
- uint32_t entry, const int enable)
+nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
+ uint32_t entry, const int enable)
{
struct npc_mcam_ena_dis_entry_req *req;
struct otx2_mbox *mbox = dev->mbox;
@@ -460,6 +460,8 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
uint16_t vlan_id)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct vlan_entry *entry;
int rc = -EINVAL;
if (!vlan_id && enable) {
@@ -473,6 +475,24 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
return 0;
}
+ /* Enable/disable existing vlan filter entries */
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
+ if (vlan_id) {
+ if (entry->vlan_id == vlan_id) {
+ rc = nix_vlan_mcam_enb_dis(dev,
+ entry->mcam_idx,
+ enable);
+ if (rc)
+ return rc;
+ }
+ } else {
+ rc = nix_vlan_mcam_enb_dis(dev, entry->mcam_idx,
+ enable);
+ if (rc)
+ return rc;
+ }
+ }
+
if (!vlan_id && !enable) {
rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
enable);
@@ -487,6 +507,85 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
return 0;
}
+/* Enable/disable vlan filtering for the given vlan_id */
+int
+otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
+ int on)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct vlan_entry *entry;
+ int entry_exists = 0;
+ int rc = -EINVAL;
+ int mcam_idx;
+
+ if (!vlan_id) {
+ otx2_err("Vlan Id can't be zero");
+ return rc;
+ }
+
+ if (!vlan->def_rx_mcam_idx) {
+ otx2_err("Vlan Filtering is disabled, enable it first");
+ return rc;
+ }
+
+ if (on) {
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
+ if (entry->vlan_id == vlan_id) {
+ /* Vlan entry already exists */
+ entry_exists = 1;
+ /* Mcam entry already allocated */
+ if (entry->mcam_idx) {
+ rc = nix_vlan_hw_filter(eth_dev, on,
+ vlan_id);
+ return rc;
+ }
+ break;
+ }
+ }
+
+ if (!entry_exists) {
+ entry = rte_zmalloc("otx2_nix_vlan_entry",
+ sizeof(struct vlan_entry), 0);
+ if (!entry) {
+ otx2_err("Failed to allocate memory");
+ return -ENOMEM;
+ }
+ }
+
+ /* Enables vlan_id & mac address based filtering */
+ if (eth_dev->data->promiscuous)
+ mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
+ VLAN_ID_MATCH);
+ else
+ mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
+ VLAN_ID_MATCH |
+ MAC_ADDR_MATCH);
+ if (mcam_idx < 0) {
+ otx2_err("Failed to config vlan mcam");
+ if (entry_exists)
+ TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
+ rte_free(entry);
+ return mcam_idx;
+ }
+
+ entry->mcam_idx = mcam_idx;
+ if (!entry_exists) {
+ entry->vlan_id = vlan_id;
+ TAILQ_INSERT_HEAD(&vlan->fltr_tbl, entry, next);
+ }
+ } else {
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
+ if (entry->vlan_id == vlan_id) {
+ nix_vlan_mcam_free(dev, entry->mcam_idx);
+ TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
+ rte_free(entry);
+ break;
+ }
+ }
+ }
+ return 0;
+}
+
/* Configure double vlan(qinq) on or off */
static int
otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev,
@@ -594,6 +693,13 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
return rc;
}
+void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue,
+ __rte_unused int on)
+{
+ otx2_err("Not Supported");
+}
+
static int
nix_vlan_rx_mkex_offset(uint64_t mask)
{
@@ -646,6 +752,27 @@ nix_vlan_get_mkex_info(struct otx2_eth_dev *dev)
return 0;
}
+static void nix_vlan_reinstall_vlan_filters(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct vlan_entry *entry;
+ int rc;
+
+ /* VLAN filters can't be set without setting filtering on */
+ rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, true);
+ if (rc) {
+ otx2_err("Failed to reinstall vlan filters");
+ return;
+ }
+
+ TAILQ_FOREACH(entry, &dev->vlan_info.fltr_tbl, next) {
+ rc = otx2_nix_vlan_filter_set(eth_dev, entry->vlan_id, true);
+ if (rc)
+ otx2_err("Failed to reinstall filter for vlan:%d",
+ entry->vlan_id);
+ }
+}
+
int
otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
{
@@ -661,6 +788,11 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
}
TAILQ_INIT(&dev->vlan_info.fltr_tbl);
+ } else {
+ /* Reinstall all mcam entries now if filter offload is set */
+ if (eth_dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_VLAN_FILTER)
+ nix_vlan_reinstall_vlan_filters(eth_dev);
}
mask =
@@ -679,8 +811,21 @@ otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev)
{
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct vlan_entry *entry;
int rc;
+ TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
+ if (!dev->configured) {
+ TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
+ rte_free(entry);
+ } else {
+ /* MCAM entries freed by flow_fini & lf_free on
+ * port stop.
+ */
+ entry->mcam_idx = 0;
+ }
+ }
+
if (!dev->configured) {
if (vlan->def_rx_mcam_idx) {
rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 46/58] net/octeontx2: support VLAN TPID and PVID for Tx
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (44 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 45/58] net/octeontx2: support VLAN filters jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 47/58] net/octeontx2: add FW version get operation jerinj
` (12 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vivek Sharma
From: Vivek Sharma <viveksharma@marvell.com>
Implement support for setting VLAN TPID and PVID for Tx packets.
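For context, a minimal usage sketch (not part of the patch) of the two
generic ethdev calls that reach the new callbacks; port_id, the TPID
0x88A8 and the PVID 10 are placeholder values:

	/* Use the S-tag TPID 0x88A8 for the outer VLAN header ... */
	rc = rte_eth_dev_set_vlan_ether_type(port_id, ETH_VLAN_TYPE_OUTER,
					     0x88A8);
	/* ... then insert VLAN ID 10 into every transmitted packet */
	if (!rc)
		rc = rte_eth_dev_set_vlan_pvid(port_id, 10, 1);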
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.c | 2 +
drivers/net/octeontx2/otx2_ethdev.h | 5 +-
drivers/net/octeontx2/otx2_vlan.c | 193 ++++++++++++++++++++++++++++
3 files changed, 199 insertions(+), 1 deletion(-)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 34fab469d..0b0e34555 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1367,6 +1367,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.vlan_offload_set = otx2_nix_vlan_offload_set,
.vlan_filter_set = otx2_nix_vlan_filter_set,
.vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set,
+ .vlan_tpid_set = otx2_nix_vlan_tpid_set,
+ .vlan_pvid_set = otx2_nix_vlan_pvid_set,
};
static inline int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 996ddec47..12db92257 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -453,7 +453,10 @@ void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable);
int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
int on);
void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev,
- uint16_t queue, int on);
+ uint16_t queue, int on);
+int otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
+ enum rte_vlan_type type, uint16_t tpid);
+int otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on);
/* Lookup configuration */
void *otx2_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index 6216d6545..dc0f4e032 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -82,6 +82,39 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
entry->vtag_action = vtag_action;
}
+static void
+nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
+ int vtag_index)
+{
+ union {
+ uint64_t reg;
+ struct nix_tx_vtag_action_s act;
+ } vtag_action;
+
+ uint64_t action;
+
+ action = NIX_TX_ACTIONOP_UCAST_DEFAULT;
+
+ /*
+ * Take offset from LA since in case of untagged packet,
+ * lbptr is zero.
+ */
+ if (type == ETH_VLAN_TYPE_OUTER) {
+ vtag_action.act.vtag0_def = vtag_index;
+ vtag_action.act.vtag0_lid = NPC_LID_LA;
+ vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
+ vtag_action.act.vtag0_relptr = NIX_TX_VTAGACTION_VTAG0_RELPTR;
+ } else {
+ vtag_action.act.vtag1_def = vtag_index;
+ vtag_action.act.vtag1_lid = NPC_LID_LA;
+ vtag_action.act.vtag1_op = NIX_TX_VTAGOP_INSERT;
+ vtag_action.act.vtag1_relptr = NIX_TX_VTAGACTION_VTAG1_RELPTR;
+ }
+
+ entry->action = action;
+ entry->vtag_action = vtag_action.reg;
+}
+
static int
nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
{
@@ -416,6 +449,46 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
return 0;
}
+/* Installs/Removes default tx entry */
+static int
+nix_vlan_handle_default_tx_entry(struct rte_eth_dev *eth_dev,
+ enum rte_vlan_type type, int vtag_index,
+ int enable)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct otx2_vlan_info *vlan = &dev->vlan_info;
+ struct mcam_entry entry;
+ uint16_t pf_func;
+ int rc;
+
+ if (!vlan->def_tx_mcam_idx && enable) {
+ memset(&entry, 0, sizeof(struct mcam_entry));
+
+ /* Only pf_func is matched, swap its bytes */
+ pf_func = (dev->pf_func & 0xff) << 8;
+ pf_func |= (dev->pf_func >> 8) & 0xff;
+
+ /* PF Func extracted to KW1[63:48] */
+ entry.kw[1] = (uint64_t)pf_func << 48;
+ entry.kw_mask[1] = (BIT_ULL(16) - 1) << 48;
+
+ nix_set_tx_vlan_action(&entry, type, vtag_index);
+ vlan->def_tx_mcam_ent = entry;
+
+ return nix_vlan_mcam_alloc_and_write(eth_dev, &entry,
+ NIX_INTF_TX, 0);
+ }
+
+ if (vlan->def_tx_mcam_idx && !enable) {
+ rc = nix_vlan_mcam_free(dev, vlan->def_tx_mcam_idx);
+ if (rc)
+ return rc;
+ vlan->def_tx_mcam_idx = 0;
+ }
+
+ return 0;
+}
+
/* Configure vlan stripping on or off */
static int
nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable)
@@ -693,6 +766,126 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
return rc;
}
+int
+otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
+ enum rte_vlan_type type, uint16_t tpid)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct nix_set_vlan_tpid *tpid_cfg;
+ struct otx2_mbox *mbox = dev->mbox;
+ int rc;
+
+ tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
+
+ tpid_cfg->tpid = tpid;
+ if (type == ETH_VLAN_TYPE_OUTER)
+ tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
+ else
+ tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ if (type == ETH_VLAN_TYPE_OUTER)
+ dev->vlan_info.outer_vlan_tpid = tpid;
+ else
+ dev->vlan_info.inner_vlan_tpid = tpid;
+ return 0;
+}
+
+int
+otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct otx2_eth_dev *otx2_dev = otx2_eth_pmd_priv(dev);
+ struct otx2_mbox *mbox = otx2_dev->mbox;
+ struct nix_vtag_config *vtag_cfg;
+ struct nix_vtag_config_rsp *rsp;
+ struct otx2_vlan_info *vlan;
+ int rc, rc1, vtag_index = 0;
+
+ if (vlan_id == 0) {
+ otx2_err("vlan id can't be zero");
+ return -EINVAL;
+ }
+
+ vlan = &otx2_dev->vlan_info;
+
+ if (on && vlan->pvid_insert_on && vlan->pvid == vlan_id) {
+ otx2_err("pvid %d is already enabled", vlan_id);
+ return -EINVAL;
+ }
+
+ if (on && vlan->pvid_insert_on && vlan->pvid != vlan_id) {
+ otx2_err("another pvid is enabled, disable that first");
+ return -EINVAL;
+ }
+
+ /* No pvid active */
+ if (!on && !vlan->pvid_insert_on)
+ return 0;
+
+ /* Given pvid already disabled */
+ if (!on && vlan->pvid != vlan_id)
+ return 0;
+
+ vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
+
+ if (on) {
+ vtag_cfg->cfg_type = VTAG_TX;
+ vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
+
+ if (vlan->outer_vlan_tpid)
+ vtag_cfg->tx.vtag0 =
+ (vlan->outer_vlan_tpid << 16) | vlan_id;
+ else
+ vtag_cfg->tx.vtag0 =
+ ((RTE_ETHER_TYPE_VLAN << 16) | vlan_id);
+ vtag_cfg->tx.cfg_vtag0 = 1;
+ } else {
+ vtag_cfg->cfg_type = VTAG_TX;
+ vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
+
+ vtag_cfg->tx.vtag0_idx = vlan->outer_vlan_idx;
+ vtag_cfg->tx.free_vtag0 = 1;
+ }
+
+ rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (on) {
+ vtag_index = rsp->vtag0_idx;
+ } else {
+ vlan->pvid = 0;
+ vlan->pvid_insert_on = 0;
+ vlan->outer_vlan_idx = 0;
+ }
+
+ rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+ vtag_index, on);
+ if (rc < 0) {
+ otx2_err("Default tx entry failed with rc %d", rc);
+ vtag_cfg->tx.vtag0_idx = vtag_index;
+ vtag_cfg->tx.free_vtag0 = 1;
+ vtag_cfg->tx.cfg_vtag0 = 0;
+
+ rc1 = otx2_mbox_process_msg(mbox, (void *)&rsp);
+ if (rc1)
+ otx2_err("Vtag free failed");
+
+ return rc;
+ }
+
+ if (on) {
+ vlan->pvid = vlan_id;
+ vlan->pvid_insert_on = 1;
+ vlan->outer_vlan_idx = vtag_index;
+ }
+
+ return 0;
+}
+
void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev,
__rte_unused uint16_t queue,
__rte_unused int on)
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 47/58] net/octeontx2: add FW version get operation
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (45 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 46/58] net/octeontx2: support VLAN TPID and PVID for Tx jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 48/58] net/octeontx2: add Rx burst support jerinj
` (11 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add firmware version get operation.
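For context, a minimal usage sketch (not part of the patch); note that
this PMD reports the MKEX profile name as the firmware version string,
as implemented below; port_id is a placeholder:

	char fw_version[MKEX_NAME_LEN];
	int rc = rte_eth_dev_fw_version_get(port_id, fw_version,
					    sizeof(fw_version));
	if (rc == 0)
		printf("firmware: %s\n", fw_version);
	else if (rc > 0)
		printf("buffer too small, need %d bytes\n", rc);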
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 1 +
drivers/net/octeontx2/otx2_ethdev.h | 3 +++
drivers/net/octeontx2/otx2_ethdev_ops.c | 19 +++++++++++++++++++
drivers/net/octeontx2/otx2_flow.c | 7 +++++++
7 files changed, 33 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 37b802999..211ff93e7 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -33,6 +33,7 @@ Rx descriptor status = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
+FW version = Y
Module EEPROM dump = Y
Registers dump = Y
Linux VFIO = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index ccedd1359..967a3757d 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -31,6 +31,7 @@ Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+FW version = Y
Module EEPROM dump = Y
Registers dump = Y
Linux VFIO = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 24df14717..884167c88 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -26,6 +26,7 @@ Rx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+FW version = Y
Module EEPROM dump = Y
Registers dump = Y
Linux VFIO = Y
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 0b0e34555..a2a3d14c8 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1355,6 +1355,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.filter_ctrl = otx2_nix_dev_filter_ctrl,
.get_module_info = otx2_nix_get_module_info,
.get_module_eeprom = otx2_nix_get_module_eeprom,
+ .fw_version_get = otx2_nix_fw_version_get,
.flow_ctrl_get = otx2_nix_flow_ctrl_get,
.flow_ctrl_set = otx2_nix_flow_ctrl_set,
.timesync_enable = otx2_nix_timesync_enable,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 12db92257..e18483969 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -235,6 +235,7 @@ struct otx2_eth_dev {
uint8_t lso_tsov4_idx;
uint8_t lso_tsov6_idx;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+ uint8_t mkex_pfl_name[MKEX_NAME_LEN];
uint8_t max_mac_entries;
uint8_t lf_tx_stats;
uint8_t lf_rx_stats;
@@ -340,6 +341,8 @@ void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
int otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
enum rte_filter_type filter_type,
enum rte_filter_op filter_op, void *arg);
+int otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
+ size_t fw_size);
int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
struct rte_eth_dev_module_info *modinfo);
int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 690d8ac0c..6a3048336 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -210,6 +210,25 @@ otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
return 0;
}
+int
+otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
+ size_t fw_size)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc = (int)fw_size;
+
+ if (fw_size > sizeof(dev->mkex_pfl_name))
+ rc = sizeof(dev->mkex_pfl_name);
+
+ rc = strlcpy(fw_version, (char *)dev->mkex_pfl_name, rc);
+
+ rc += 1; /* Add the size of '\0' */
+ if (fw_size < (uint32_t)rc)
+ return rc;
+
+ return 0;
+}
+
int
otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
{
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 94bd85161..3ddecfb23 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -770,6 +770,7 @@ flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
struct otx2_npc_flow_info *npc = &dev->npc_flow;
struct npc_get_kex_cfg_rsp *kex_rsp;
struct otx2_mbox *mbox = dev->mbox;
+ char mkex_pfl_name[MKEX_NAME_LEN];
struct otx2_idev_kex_cfg *idev;
int rc = 0;
@@ -791,6 +792,12 @@ flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
sizeof(struct npc_get_kex_cfg_rsp));
}
+ otx2_mbox_memcpy(mkex_pfl_name,
+ idev->kex_cfg.mkex_pfl_name, MKEX_NAME_LEN);
+
+ strlcpy((char *)dev->mkex_pfl_name,
+ mkex_pfl_name, sizeof(dev->mkex_pfl_name));
+
flow_process_mkex_cfg(npc, &idev->kex_cfg);
done:
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
* [dpdk-dev] [PATCH v3 48/58] net/octeontx2: add Rx burst support
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (46 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 47/58] net/octeontx2: add FW version get operation jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 49/58] net/octeontx2: add Rx multi segment version jerinj
` (10 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: Pavan Nikhilesh, Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Add Rx burst support.
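For context, a minimal polling-loop sketch (not part of the patch); the
mbuf fields filled on receive depend on which offload flags were
compiled into the otx2_nix_recv_pkts_* variant selected at runtime;
port_id and queue 0 are placeholders:

	struct rte_mbuf *pkts[32];
	uint16_t i, nb;

	nb = rte_eth_rx_burst(port_id, 0, pkts, RTE_DIM(pkts));
	for (i = 0; i < nb; i++) {
		/* e.g. pkts[i]->hash.rss is valid only with RSS_F,
		 * pkts[i]->packet_type only with PTYPE_F, etc.
		 */
		rte_pktmbuf_free(pkts[i]);
	}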
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 2 +-
drivers/net/octeontx2/otx2_ethdev.c | 6 -
drivers/net/octeontx2/otx2_ethdev.h | 2 +
drivers/net/octeontx2/otx2_rx.c | 129 +++++++++++++++
drivers/net/octeontx2/otx2_rx.h | 247 ++++++++++++++++++++++++++++
6 files changed, 380 insertions(+), 7 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_rx.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index dfe747188..f92c8c594 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -31,6 +31,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+ otx2_rx.c \
otx2_tm.c \
otx2_rss.c \
otx2_mac.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 6281ee21b..975b2e715 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -2,7 +2,7 @@
# Copyright(C) 2019 Marvell International Ltd.
#
-sources = files(
+sources = files('otx2_rx.c',
'otx2_tm.c',
'otx2_rss.c',
'otx2_mac.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index a2a3d14c8..321716945 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -14,12 +14,6 @@
#include "otx2_ethdev.h"
-static inline void
-otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(eth_dev);
-}
-
static inline void
otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
{
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e18483969..22cf86981 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -280,6 +280,7 @@ struct otx2_eth_dev {
struct otx2_eth_qconf *tx_qconf;
struct otx2_eth_qconf *rx_qconf;
struct rte_eth_dev *eth_dev;
+ eth_rx_burst_t rx_pkt_burst_no_offload;
/* PTP counters */
bool ptp_en;
struct otx2_timesync_info tstamp;
@@ -482,6 +483,7 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
struct otx2_eth_dev *dev);
/* Rx and Tx routines */
+void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev);
void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
/* Timesync - PTP routines */
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
new file mode 100644
index 000000000..4d5223e10
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_vect.h>
+
+#include "otx2_ethdev.h"
+#include "otx2_rx.h"
+
+#define NIX_DESCS_PER_LOOP 4
+#define CQE_CAST(x) ((struct nix_cqe_hdr_s *)(x))
+#define CQE_SZ(x) ((x) * NIX_CQ_ENTRY_SZ)
+
+static inline uint16_t
+nix_rx_nb_pkts(struct otx2_eth_rxq *rxq, const uint64_t wdata,
+ const uint16_t pkts, const uint32_t qmask)
+{
+ uint32_t available = rxq->available;
+
+ /* Update the available count if cached value is not enough */
+ if (unlikely(available < pkts)) {
+ uint64_t reg, head, tail;
+
+ /* Use LDADDA version to avoid reorder */
+ reg = otx2_atomic64_add_sync(wdata, rxq->cq_status);
+ /* CQ_OP_STATUS operation error */
+ if (reg & BIT_ULL(CQ_OP_STAT_OP_ERR) ||
+ reg & BIT_ULL(CQ_OP_STAT_CQ_ERR))
+ return 0;
+
+ tail = reg & 0xFFFFF;
+ head = (reg >> 20) & 0xFFFFF;
+ if (tail < head)
+ available = tail - head + qmask + 1;
+ else
+ available = tail - head;
+
+ rxq->available = available;
+ }
+
+ return RTE_MIN(pkts, available);
+}
+
+static __rte_always_inline uint16_t
+nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ struct otx2_eth_rxq *rxq = rx_queue;
+ const uint64_t mbuf_init = rxq->mbuf_initializer;
+ const void *lookup_mem = rxq->lookup_mem;
+ const uint64_t data_off = rxq->data_off;
+ const uintptr_t desc = rxq->desc;
+ const uint64_t wdata = rxq->wdata;
+ const uint32_t qmask = rxq->qmask;
+ uint16_t packets = 0, nb_pkts;
+ uint32_t head = rxq->head;
+ struct nix_cqe_hdr_s *cq;
+ struct rte_mbuf *mbuf;
+
+ nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+
+ while (packets < nb_pkts) {
+ /* Prefetch N desc ahead */
+ rte_prefetch_non_temporal((void *)(desc + (CQE_SZ(head + 2))));
+ cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head));
+
+ mbuf = nix_get_mbuf_from_cqe(cq, data_off);
+
+ otx2_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
+ flags);
+ otx2_nix_mbuf_to_tstamp(mbuf, rxq->tstamp, flags);
+ rx_pkts[packets++] = mbuf;
+ otx2_prefetch_store_keep(mbuf);
+ head++;
+ head &= qmask;
+ }
+
+ rxq->head = head;
+ rxq->available -= nb_pkts;
+
+ /* Free all the CQs that we've processed */
+ otx2_write64((wdata | nb_pkts), rxq->cq_door);
+
+ return nb_pkts;
+}
+
+
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+static uint16_t __rte_noinline __hot \
+otx2_nix_recv_pkts_ ## name(void *rx_queue, \
+ struct rte_mbuf **rx_pkts, uint16_t pkts) \
+{ \
+ return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \
+} \
+
+NIX_RX_FASTPATH_MODES
+#undef R
+
+static inline void
+pick_rx_func(struct rte_eth_dev *eth_dev,
+ const eth_rx_burst_t rx_burst[2][2][2][2][2][2])
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
+ eth_dev->rx_pkt_burst = rx_burst
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_CHECKSUM_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)]
+ [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_RSS_F)];
+}
+
+void
+otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
+{
+ const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+ [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name,
+
+NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ pick_rx_func(eth_dev, nix_eth_rx_burst);
+
+ rte_mb();
+}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 7dc34d705..629768aab 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -15,7 +15,10 @@
PTYPE_TUNNEL_ARRAY_SZ) *\
sizeof(uint16_t))
+#define NIX_RX_OFFLOAD_NONE (0)
+#define NIX_RX_OFFLOAD_RSS_F BIT(0)
#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
+#define NIX_RX_OFFLOAD_CHECKSUM_F BIT(2)
#define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3)
#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
@@ -30,4 +33,248 @@ struct otx2_timesync_info {
uint8_t rx_ready;
} __rte_cache_aligned;
+union mbuf_initializer {
+ struct {
+ uint16_t data_off;
+ uint16_t refcnt;
+ uint16_t nb_segs;
+ uint16_t port;
+ } fields;
+ uint64_t value;
+};
+
+static __rte_always_inline void
+otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
+ struct otx2_timesync_info *tstamp, const uint16_t flag)
+{
+ if ((flag & NIX_RX_OFFLOAD_TSTAMP_F) &&
+ mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC &&
+ (mbuf->data_off == RTE_PKTMBUF_HEADROOM +
+ NIX_TIMESYNC_RX_OFFSET)) {
+ uint64_t *tstamp_ptr;
+
+ /* Deal with rx timestamp */
+ tstamp_ptr = rte_pktmbuf_mtod_offset(mbuf, uint64_t *,
+ -NIX_TIMESYNC_RX_OFFSET);
+ mbuf->timestamp = rte_be_to_cpu_64(*tstamp_ptr);
+ tstamp->rx_tstamp = mbuf->timestamp;
+ tstamp->rx_ready = 1;
+ mbuf->ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST
+ | PKT_RX_TIMESTAMP;
+ }
+}
+
+static __rte_always_inline uint64_t
+nix_clear_data_off(uint64_t oldval)
+{
+ union mbuf_initializer mbuf_init = { .value = oldval };
+
+ mbuf_init.fields.data_off = 0;
+ return mbuf_init.value;
+}
+
+static __rte_always_inline struct rte_mbuf *
+nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off)
+{
+ rte_iova_t buff;
+
+ /* Skip CQE, NIX_RX_PARSE_S and SG HDR(9 DWORDs) and peek buff addr */
+ buff = *((rte_iova_t *)((uint64_t *)cq + 9));
+ return (struct rte_mbuf *)(buff - data_off);
+}
+
+
+static __rte_always_inline uint32_t
+nix_ptype_get(const void * const lookup_mem, const uint64_t in)
+{
+ const uint16_t * const ptype = lookup_mem;
+ const uint16_t lg_lf_le = (in & 0xFFF000000000000) >> 48;
+ const uint16_t tu_l2 = ptype[(in & 0x000FFF000000000) >> 36];
+ const uint16_t il4_tu = ptype[PTYPE_NON_TUNNEL_ARRAY_SZ + lg_lf_le];
+
+ return (il4_tu << PTYPE_WIDTH) | tu_l2;
+}
+
+static __rte_always_inline uint32_t
+nix_rx_olflags_get(const void * const lookup_mem, const uint64_t in)
+{
+ const uint32_t * const ol_flags = (const uint32_t *)
+ ((const uint8_t *)lookup_mem + PTYPE_ARRAY_SZ);
+
+ return ol_flags[(in & 0xfff00000) >> 20];
+}
+
+static inline uint64_t
+nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
+ struct rte_mbuf *mbuf)
+{
+ /* There is no separate bit to check whether match_id is valid,
+ * and no flag to tell an RTE_FLOW_ACTION_TYPE_FLAG action apart
+ * from an RTE_FLOW_ACTION_TYPE_MARK action. The former case is
+ * addressed by treating 0 as an invalid value and by inc/dec of
+ * the match_id pair when MARK is activated. The latter case is
+ * addressed by defining OTX2_FLOW_MARK_DEFAULT as the value for
+ * RTE_FLOW_ACTION_TYPE_MARK.
+ * This translates to not using
+ * OTX2_FLOW_ACTION_FLAG_DEFAULT - 1 and
+ * OTX2_FLOW_ACTION_FLAG_DEFAULT for match_id,
+ * i.e. valid mark_ids range from
+ * 0 to OTX2_FLOW_ACTION_FLAG_DEFAULT - 2.
+ */
+ if (likely(match_id)) {
+ ol_flags |= PKT_RX_FDIR;
+ if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) {
+ ol_flags |= PKT_RX_FDIR_ID;
+ mbuf->hash.fdir.hi = match_id - 1;
+ }
+ }
+
+ return ol_flags;
+}
+
+static __rte_always_inline void
+otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
+ struct rte_mbuf *mbuf, const void *lookup_mem,
+ const uint64_t val, const uint16_t flag)
+{
+ const struct nix_rx_parse_s *rx =
+ (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
+ const uint64_t w1 = *(const uint64_t *)rx;
+ const uint16_t len = rx->pkt_lenm1 + 1;
+ uint64_t ol_flags = 0;
+
+ /* Mark mempool obj as "get" as it is alloc'ed by NIX */
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+
+ if (flag & NIX_RX_OFFLOAD_PTYPE_F)
+ mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
+ else
+ mbuf->packet_type = 0;
+
+ if (flag & NIX_RX_OFFLOAD_RSS_F) {
+ mbuf->hash.rss = tag;
+ ol_flags |= PKT_RX_RSS_HASH;
+ }
+
+ if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
+ ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
+
+ if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
+ if (rx->vtag0_gone) {
+ ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+ mbuf->vlan_tci = rx->vtag0_tci;
+ }
+ if (rx->vtag1_gone) {
+ ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+ mbuf->vlan_tci_outer = rx->vtag1_tci;
+ }
+ }
+
+ if (flag & NIX_RX_OFFLOAD_MARK_UPDATE_F)
+ ol_flags = nix_update_match_id(rx->match_id, ol_flags, mbuf);
+
+ mbuf->ol_flags = ol_flags;
+ *(uint64_t *)(&mbuf->rearm_data) = val;
+ mbuf->pkt_len = len;
+
+ mbuf->data_len = len;
+}
+
+#define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F
+#define PTYPE_F NIX_RX_OFFLOAD_PTYPE_F
+#define RSS_F NIX_RX_OFFLOAD_RSS_F
+#define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
+#define MARK_F NIX_RX_OFFLOAD_MARK_UPDATE_F
+#define TS_F NIX_RX_OFFLOAD_TSTAMP_F
+
+/* [TSMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
+#define NIX_RX_FASTPATH_MODES \
+R(no_offload, 0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE) \
+R(rss, 0, 0, 0, 0, 0, 1, RSS_F) \
+R(ptype, 0, 0, 0, 0, 1, 0, PTYPE_F) \
+R(ptype_rss, 0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F) \
+R(cksum, 0, 0, 0, 1, 0, 0, CKSUM_F) \
+R(cksum_rss, 0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F) \
+R(cksum_ptype, 0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F) \
+R(cksum_ptype_rss, 0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)\
+R(vlan, 0, 0, 1, 0, 0, 0, RX_VLAN_F) \
+R(vlan_rss, 0, 0, 1, 0, 0, 1, RX_VLAN_F | RSS_F) \
+R(vlan_ptype, 0, 0, 1, 0, 1, 0, RX_VLAN_F | PTYPE_F) \
+R(vlan_ptype_rss, 0, 0, 1, 0, 1, 1, RX_VLAN_F | PTYPE_F | RSS_F)\
+R(vlan_cksum, 0, 0, 1, 1, 0, 0, RX_VLAN_F | CKSUM_F) \
+R(vlan_cksum_rss, 0, 0, 1, 1, 0, 1, RX_VLAN_F | CKSUM_F | RSS_F)\
+R(vlan_cksum_ptype, 0, 0, 1, 1, 1, 0, \
+ RX_VLAN_F | CKSUM_F | PTYPE_F) \
+R(vlan_cksum_ptype_rss, 0, 0, 1, 1, 1, 1, \
+ RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(mark, 0, 1, 0, 0, 0, 0, MARK_F) \
+R(mark_rss, 0, 1, 0, 0, 0, 1, MARK_F | RSS_F) \
+R(mark_ptype, 0, 1, 0, 0, 1, 0, MARK_F | PTYPE_F) \
+R(mark_ptype_rss, 0, 1, 0, 0, 1, 1, MARK_F | PTYPE_F | RSS_F)\
+R(mark_cksum, 0, 1, 0, 1, 0, 0, MARK_F | CKSUM_F) \
+R(mark_cksum_rss, 0, 1, 0, 1, 0, 1, MARK_F | CKSUM_F | RSS_F)\
+R(mark_cksum_ptype, 0, 1, 0, 1, 1, 0, MARK_F | CKSUM_F | PTYPE_F)\
+R(mark_cksum_ptype_rss, 0, 1, 0, 1, 1, 1, \
+ MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(mark_vlan, 0, 1, 1, 0, 0, 0, MARK_F | RX_VLAN_F) \
+R(mark_vlan_rss, 0, 1, 1, 0, 0, 1, MARK_F | RX_VLAN_F | RSS_F)\
+R(mark_vlan_ptype, 0, 1, 1, 0, 1, 0, \
+ MARK_F | RX_VLAN_F | PTYPE_F) \
+R(mark_vlan_ptype_rss, 0, 1, 1, 0, 1, 1, \
+ MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
+R(mark_vlan_cksum, 0, 1, 1, 1, 0, 0, \
+ MARK_F | RX_VLAN_F | CKSUM_F) \
+R(mark_vlan_cksum_rss, 0, 1, 1, 1, 0, 1, \
+ MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
+R(mark_vlan_cksum_ptype, 0, 1, 1, 1, 1, 0, \
+ MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
+R(mark_vlan_cksum_ptype_rss, 0, 1, 1, 1, 1, 1, \
+ MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(ts, 1, 0, 0, 0, 0, 0, TS_F) \
+R(ts_rss, 1, 0, 0, 0, 0, 1, TS_F | RSS_F) \
+R(ts_ptype, 1, 0, 0, 0, 1, 0, TS_F | PTYPE_F) \
+R(ts_ptype_rss, 1, 0, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F)\
+R(ts_cksum, 1, 0, 0, 1, 0, 0, TS_F | CKSUM_F) \
+R(ts_cksum_rss, 1, 0, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F)\
+R(ts_cksum_ptype, 1, 0, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F)\
+R(ts_cksum_ptype_rss, 1, 0, 0, 1, 1, 1, \
+ TS_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(ts_vlan, 1, 0, 1, 0, 0, 0, TS_F | RX_VLAN_F) \
+R(ts_vlan_rss, 1, 0, 1, 0, 0, 1, TS_F | RX_VLAN_F | RSS_F)\
+R(ts_vlan_ptype, 1, 0, 1, 0, 1, 0, TS_F | RX_VLAN_F | PTYPE_F)\
+R(ts_vlan_ptype_rss, 1, 0, 1, 0, 1, 1, \
+ TS_F | RX_VLAN_F | PTYPE_F | RSS_F) \
+R(ts_vlan_cksum, 1, 0, 1, 1, 0, 0, \
+ TS_F | RX_VLAN_F | CKSUM_F) \
+R(ts_vlan_cksum_rss, 1, 0, 1, 1, 0, 1, \
+ TS_F | RX_VLAN_F | CKSUM_F | RSS_F) \
+R(ts_vlan_cksum_ptype, 1, 0, 1, 1, 1, 0, \
+ TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
+R(ts_vlan_cksum_ptype_rss, 1, 0, 1, 1, 1, 1, \
+ TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(ts_mark, 1, 1, 0, 0, 0, 0, TS_F | MARK_F) \
+R(ts_mark_rss, 1, 1, 0, 0, 0, 1, TS_F | MARK_F | RSS_F)\
+R(ts_mark_ptype, 1, 1, 0, 0, 1, 0, TS_F | MARK_F | PTYPE_F)\
+R(ts_mark_ptype_rss, 1, 1, 0, 0, 1, 1, \
+ TS_F | MARK_F | PTYPE_F | RSS_F) \
+R(ts_mark_cksum, 1, 1, 0, 1, 0, 0, TS_F | MARK_F | CKSUM_F)\
+R(ts_mark_cksum_rss, 1, 1, 0, 1, 0, 1, \
+ TS_F | MARK_F | CKSUM_F | RSS_F)\
+R(ts_mark_cksum_ptype, 1, 1, 0, 1, 1, 0, \
+ TS_F | MARK_F | CKSUM_F | PTYPE_F) \
+R(ts_mark_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, \
+ TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
+R(ts_mark_vlan, 1, 1, 1, 0, 0, 0, TS_F | MARK_F | RX_VLAN_F)\
+R(ts_mark_vlan_rss, 1, 1, 1, 0, 0, 1, \
+ TS_F | MARK_F | RX_VLAN_F | RSS_F)\
+R(ts_mark_vlan_ptype, 1, 1, 1, 0, 1, 0, \
+ TS_F | MARK_F | RX_VLAN_F | PTYPE_F) \
+R(ts_mark_vlan_ptype_rss, 1, 1, 1, 0, 1, 1, \
+ TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
+R(ts_mark_vlan_cksum, 1, 1, 1, 1, 0, 0, \
+ TS_F | MARK_F | RX_VLAN_F | CKSUM_F) \
+R(ts_mark_vlan_cksum_rss, 1, 1, 1, 1, 0, 1, \
+ TS_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
+R(ts_mark_vlan_cksum_ptype, 1, 1, 1, 1, 1, 0, \
+ TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
+R(ts_mark_vlan_cksum_ptype_rss, 1, 1, 1, 1, 1, 1, \
+ TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)
+
#endif /* __OTX2_RX_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
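A note on the R() table above: NIX_RX_FASTPATH_MODES is an X-macro list, expanded once to generate one specialized Rx burst function per offload combination and again to build the dispatch array indexed by the offload bits. A minimal, self-contained sketch of the pattern (hypothetical EX_* names, not the driver's actual code):

#include <stdint.h>
#include <stdio.h>

#define EX_RSS_F   (1 << 0)
#define EX_PTYPE_F (1 << 1)

/* One row per offload combination: name, index bits, flags */
#define EX_MODES                                    \
X(none,      0, 0, 0)                               \
X(rss,       0, 1, EX_RSS_F)                        \
X(ptype,     1, 0, EX_PTYPE_F)                      \
X(ptype_rss, 1, 1, EX_PTYPE_F | EX_RSS_F)

static inline uint16_t recv_common(uint16_t pkts, const uint16_t flags)
{
	/* 'flags' is a compile-time constant in every expansion, so
	 * the compiler drops the untaken offload branches entirely.
	 */
	uint16_t work = 0;

	if (flags & EX_RSS_F)
		work++;		/* stand-in for filling mbuf->hash.rss */
	if (flags & EX_PTYPE_F)
		work++;		/* stand-in for the ptype lookup */
	(void)work;
	return pkts;
}

/* First expansion: generate one specialized function per row */
#define X(name, f1, f0, flags)                      \
static uint16_t recv_ ## name(uint16_t pkts)        \
{ return recv_common(pkts, (flags)); }
EX_MODES
#undef X

int main(void)
{
	/* Second expansion: build the [f1][f0] dispatch table */
	uint16_t (*const burst[2][2])(uint16_t) = {
#define X(name, f1, f0, flags) [f1][f0] = recv_ ## name,
EX_MODES
#undef X
	};

	printf("%u\n", burst[1][1](4)); /* resolves to recv_ptype_rss */
	return 0;
}

Because the flags are constants in each expansion, every generated function is compiled with only the branches it needs, which is the point of generating 2^N variants instead of testing offloads per packet.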
* [dpdk-dev] [PATCH v3 49/58] net/octeontx2: add Rx multi segment version
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (47 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 48/58] net/octeontx2: add Rx burst support jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 50/58] net/octeontx2: add Rx vector version jerinj
` (9 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K, Anatoly Burakov
Cc: Pavan Nikhilesh
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add multi segment version of packet Receive function.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 2 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 2 +
doc/guides/nics/octeontx2.rst | 2 +
drivers/net/octeontx2/otx2_rx.c | 25 ++++++++++
drivers/net/octeontx2/otx2_rx.h | 55 +++++++++++++++++++++-
6 files changed, 86 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 211ff93e7..3280cba78 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -24,6 +24,8 @@ Inner RSS = Y
VLAN filter = Y
Flow control = Y
Flow API = Y
+Jumbo frame = Y
+Scattered Rx = Y
VLAN offload = Y
QinQ offload = Y
Packet type parsing = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 967a3757d..315722e60 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -24,6 +24,7 @@ Inner RSS = Y
VLAN filter = Y
Flow control = Y
Flow API = Y
+Jumbo frame = Y
VLAN offload = Y
QinQ offload = Y
Packet type parsing = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 884167c88..17b223221 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -19,6 +19,8 @@ RSS reta update = Y
Inner RSS = Y
VLAN filter = Y
Flow API = Y
+Jumbo frame = Y
+Scattered Rx = Y
VLAN offload = Y
QinQ offload = Y
Packet type parsing = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 457980acf..4556187ce 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Packet type information
- Promiscuous mode
+- Jumbo frames
- SR-IOV VF
- Lock-free Tx queue
- Multiple queues for TX and RX
@@ -28,6 +29,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Port hardware statistics
- Link state information
- Link flow control
+- Scatter-Gather IO support
- Debug utilities - Context dump and error interrupt support
- IEEE1588 timestamping
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index 4d5223e10..fca182785 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -92,6 +92,14 @@ otx2_nix_recv_pkts_ ## name(void *rx_queue, \
{ \
return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \
} \
+ \
+static uint16_t __rte_noinline __hot \
+otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \
+ struct rte_mbuf **rx_pkts, uint16_t pkts) \
+{ \
+ return nix_recv_pkts(rx_queue, rx_pkts, pkts, \
+ (flags) | NIX_RX_MULTI_SEG_F); \
+} \
NIX_RX_FASTPATH_MODES
#undef R
@@ -115,15 +123,32 @@ pick_rx_func(struct rte_eth_dev *eth_dev,
void
otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = {
#define R(name, f5, f4, f3, f2, f1, f0, flags) \
[f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name,
+NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+ [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_mseg_ ## name,
+
NIX_RX_FASTPATH_MODES
#undef R
};
pick_rx_func(eth_dev, nix_eth_rx_burst);
+ if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
+
+ /* Copy the no-offload multi-segment version for use during the teardown sequence */
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+ dev->rx_pkt_burst_no_offload =
+ nix_eth_rx_burst_mseg[0][0][0][0][0][0];
rte_mb();
}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index 629768aab..e150f38d7 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -23,6 +23,11 @@
#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
+/* Flag to control the cqe_to_mbuf conversion function.
+ * It is defined from the MSB end to denote that it is not
+ * an offload flag used for Rx burst function selection.
+ */
+#define NIX_RX_MULTI_SEG_F BIT(15)
#define NIX_TIMESYNC_RX_OFFSET 8
struct otx2_timesync_info {
@@ -133,6 +138,51 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
return ol_flags;
}
+static __rte_always_inline void
+nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx,
+ struct rte_mbuf *mbuf, uint64_t rearm)
+{
+ const rte_iova_t *iova_list;
+ struct rte_mbuf *head;
+ const rte_iova_t *eol;
+ uint8_t nb_segs;
+ uint64_t sg;
+
+ sg = *(const uint64_t *)(rx + 1);
+ nb_segs = (sg >> 48) & 0x3;
+ mbuf->nb_segs = nb_segs;
+ mbuf->data_len = sg & 0xFFFF;
+ sg = sg >> 16;
+
+ eol = ((const rte_iova_t *)(rx + 1) + ((rx->desc_sizem1 + 1) << 1));
+ /* Skip SG_S and the first IOVA */
+ iova_list = ((const rte_iova_t *)(rx + 1)) + 2;
+ nb_segs--;
+
+ rearm = rearm & ~0xFFFF;
+
+ head = mbuf;
+ while (nb_segs) {
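+ /* The SG entry's IOVA points at the segment's buffer, which
+ * in this layout starts right after its mbuf header, so
+ * stepping back one struct rte_mbuf recovers the mbuf.
+ */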
+ mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
+ mbuf = mbuf->next;
+
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+
+ mbuf->data_len = sg & 0xFFFF;
+ sg = sg >> 16;
+ *(uint64_t *)(&mbuf->rearm_data) = rearm;
+ nb_segs--;
+ iova_list++;
+
+ if (!nb_segs && (iova_list + 1 < eol)) {
+ sg = *(const uint64_t *)(iova_list);
+ nb_segs = (sg >> 48) & 0x3;
+ head->nb_segs += nb_segs;
+ iova_list = (const rte_iova_t *)(iova_list + 1);
+ }
+ }
+}
+
static __rte_always_inline void
otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
struct rte_mbuf *mbuf, const void *lookup_mem,
@@ -178,7 +228,10 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
*(uint64_t *)(&mbuf->rearm_data) = val;
mbuf->pkt_len = len;
- mbuf->data_len = len;
+ if (flag & NIX_RX_MULTI_SEG_F)
+ nix_cqe_xtract_mseg(rx, mbuf, val);
+ else
+ mbuf->data_len = len;
}
#define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
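A note on nix_cqe_xtract_mseg above: each 64-bit NIX_RX_SG_S word carries the segment count in bits 49:48 and up to three 16-bit segment lengths packed from bit 0 upward, while the IOVA list that follows supplies the buffer addresses. A simplified sketch of that unpacking, assuming a hypothetical seg_mbuf type and a single SG word (the driver additionally follows chained SG words up to the descriptor end):

#include <stdint.h>
#include <stddef.h>

struct seg_mbuf {
	struct seg_mbuf *next;
	uint16_t data_len;
	uint16_t nb_segs;
};

/* Unpack one 64-bit SG word: bits 49:48 hold the segment count and
 * up to three 16-bit segment lengths are packed from bit 0 upward.
 * bufs[0..nb_segs-1] are the per-segment buffers already recovered
 * from the IOVA list that follows the SG word.
 */
static void chain_from_sg(uint64_t sg, struct seg_mbuf **bufs)
{
	uint8_t nb_segs = (sg >> 48) & 0x3;
	uint8_t i;

	bufs[0]->nb_segs = nb_segs;
	for (i = 0; i < nb_segs; i++) {
		bufs[i]->data_len = sg & 0xFFFF; /* consume one length */
		sg >>= 16;
		bufs[i]->next = (i + 1 < nb_segs) ? bufs[i + 1] : NULL;
	}
}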
* [dpdk-dev] [PATCH v3 50/58] net/octeontx2: add Rx vector version
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (48 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 49/58] net/octeontx2: add Rx multi segment version jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 51/58] net/octeontx2: add Tx burst support jerinj
` (8 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
From: Jerin Jacob <jerinj@marvell.com>
Add vector version of packet Receive function.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 2 +
drivers/net/octeontx2/otx2_rx.c | 259 +++++++++++++++++++++++++++++-
4 files changed, 262 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 4556187ce..97054d11d 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -30,6 +30,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Link state information
- Link flow control
- Scatter-Gather IO support
+- Vector Poll mode driver
- Debug utilities - Context dump and error interrupt support
- IEEE1588 timestamping
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index f92c8c594..ee5bbb24b 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -14,6 +14,7 @@ CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2
CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
CFLAGS += -O3
+CFLAGS += -flax-vector-conversions
ifneq ($(CONFIG_RTE_ARCH_64),y)
CFLAGS += -Wno-int-to-pointer-cast
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 975b2e715..9d151f88d 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -24,6 +24,8 @@ sources = files('otx2_rx.c',
deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2']
+cflags += ['-flax-vector-conversions']
+
extra_flags = []
# This integrated controller runs only on a arm64 machine, remove 32bit warnings
if not dpdk_conf.get('RTE_ARCH_64')
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index fca182785..deefe9588 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -84,6 +84,239 @@ nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_pkts;
}
+#if defined(RTE_ARCH_ARM64)
+
+static __rte_always_inline uint64_t
+nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
+{
+ if (w2 & BIT_ULL(21) /* vtag0_gone */) {
+ ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+ *f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
+ }
+
+ return ol_flags;
+}
+
+static __rte_always_inline uint64_t
+nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
+{
+ if (w2 & BIT_ULL(23) /* vtag1_gone */) {
+ ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+ mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
+ }
+
+ return ol_flags;
+}
+
+static __rte_always_inline uint16_t
+nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ struct otx2_eth_rxq *rxq = rx_queue; uint16_t packets = 0;
+ uint64x2_t cq0_w8, cq1_w8, cq2_w8, cq3_w8, mbuf01, mbuf23;
+ const uint64_t mbuf_initializer = rxq->mbuf_initializer;
+ const uint64x2_t data_off = vdupq_n_u64(rxq->data_off);
+ uint64_t ol_flags0, ol_flags1, ol_flags2, ol_flags3;
+ uint64x2_t rearm0 = vdupq_n_u64(mbuf_initializer);
+ uint64x2_t rearm1 = vdupq_n_u64(mbuf_initializer);
+ uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
+ uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
+ struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+ const uint16_t *lookup_mem = rxq->lookup_mem;
+ const uint32_t qmask = rxq->qmask;
+ const uint64_t wdata = rxq->wdata;
+ const uintptr_t desc = rxq->desc;
+ uint8x16_t f0, f1, f2, f3;
+ uint32_t head = rxq->head;
+
+ pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+ /* The packet count has to be floor-aligned to NIX_DESCS_PER_LOOP */
+ pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
+
+ while (packets < pkts) {
+ /* Get the CQ pointers. Since the ring size is a multiple
+ * of 4, unlike the scalar version we can avoid checking
+ * for head wrap-around after each access.
+ */
+ const uintptr_t cq0 = desc + CQE_SZ(head);
+
+ /* Prefetch N desc ahead */
+ rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(8)));
+ rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(9)));
+ rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(10)));
+ rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(11)));
+
+ /* Get NIX_RX_SG_S for size and buffer pointer */
+ cq0_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0) + 64));
+ cq1_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1) + 64));
+ cq2_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2) + 64));
+ cq3_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3) + 64));
+
+ /* Extract mbuf from NIX_RX_SG_S */
+ mbuf01 = vzip2q_u64(cq0_w8, cq1_w8);
+ mbuf23 = vzip2q_u64(cq2_w8, cq3_w8);
+ mbuf01 = vqsubq_u64(mbuf01, data_off);
+ mbuf23 = vqsubq_u64(mbuf23, data_off);
+
+ /* Move mbufs to scalar registers for future use */
+ mbuf0 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 0);
+ mbuf1 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 1);
+ mbuf2 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 0);
+ mbuf3 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 1);
+
+ /* Mask to get packet len from NIX_RX_SG_S */
+ const uint8x16_t shuf_msk = {
+ 0xFF, 0xFF, /* pkt_type set as unknown */
+ 0xFF, 0xFF, /* pkt_type set as unknown */
+ 0, 1, /* octet 1~0, low 16 bits pkt_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
+ 0, 1, /* octet 1~0, 16 bits data_len */
+ 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF
+ };
+
+ /* Form the rx_descriptor_fields1 with pkt_len and data_len */
+ f0 = vqtbl1q_u8(cq0_w8, shuf_msk);
+ f1 = vqtbl1q_u8(cq1_w8, shuf_msk);
+ f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
+ f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
+
+ /* Load CQE word0 and word 1 */
+ uint64x2_t cq0_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0)));
+ uint64x2_t cq1_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1)));
+ uint64x2_t cq2_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2)));
+ uint64x2_t cq3_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3)));
+
+ if (flags & NIX_RX_OFFLOAD_RSS_F) {
+ /* Fill rss in the rx_descriptor_fields1 */
+ f0 = vsetq_lane_u32(vgetq_lane_u32(cq0_w0, 0), f0, 3);
+ f1 = vsetq_lane_u32(vgetq_lane_u32(cq1_w0, 0), f1, 3);
+ f2 = vsetq_lane_u32(vgetq_lane_u32(cq2_w0, 0), f2, 3);
+ f3 = vsetq_lane_u32(vgetq_lane_u32(cq3_w0, 0), f3, 3);
+ ol_flags0 = PKT_RX_RSS_HASH;
+ ol_flags1 = PKT_RX_RSS_HASH;
+ ol_flags2 = PKT_RX_RSS_HASH;
+ ol_flags3 = PKT_RX_RSS_HASH;
+ } else {
+ ol_flags0 = 0; ol_flags1 = 0;
+ ol_flags2 = 0; ol_flags3 = 0;
+ }
+
+ if (flags & NIX_RX_OFFLOAD_PTYPE_F) {
+ /* Fill packet_type in the rx_descriptor_fields1 */
+ f0 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
+ vgetq_lane_u64(cq0_w0, 1)), f0, 0);
+ f1 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
+ vgetq_lane_u64(cq1_w0, 1)), f1, 0);
+ f2 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
+ vgetq_lane_u64(cq2_w0, 1)), f2, 0);
+ f3 = vsetq_lane_u32(nix_ptype_get(lookup_mem,
+ vgetq_lane_u64(cq3_w0, 1)), f3, 0);
+ }
+
+ if (flags & NIX_RX_OFFLOAD_CHECKSUM_F) {
+ ol_flags0 |= nix_rx_olflags_get(lookup_mem,
+ vgetq_lane_u64(cq0_w0, 1));
+ ol_flags1 |= nix_rx_olflags_get(lookup_mem,
+ vgetq_lane_u64(cq1_w0, 1));
+ ol_flags2 |= nix_rx_olflags_get(lookup_mem,
+ vgetq_lane_u64(cq2_w0, 1));
+ ol_flags3 |= nix_rx_olflags_get(lookup_mem,
+ vgetq_lane_u64(cq3_w0, 1));
+ }
+
+ if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
+ uint64_t cq0_w2 = *(uint64_t *)(cq0 + CQE_SZ(0) + 16);
+ uint64_t cq1_w2 = *(uint64_t *)(cq0 + CQE_SZ(1) + 16);
+ uint64_t cq2_w2 = *(uint64_t *)(cq0 + CQE_SZ(2) + 16);
+ uint64_t cq3_w2 = *(uint64_t *)(cq0 + CQE_SZ(3) + 16);
+
+ ol_flags0 = nix_vlan_update(cq0_w2, ol_flags0, &f0);
+ ol_flags1 = nix_vlan_update(cq1_w2, ol_flags1, &f1);
+ ol_flags2 = nix_vlan_update(cq2_w2, ol_flags2, &f2);
+ ol_flags3 = nix_vlan_update(cq3_w2, ol_flags3, &f3);
+
+ ol_flags0 = nix_qinq_update(cq0_w2, ol_flags0, mbuf0);
+ ol_flags1 = nix_qinq_update(cq1_w2, ol_flags1, mbuf1);
+ ol_flags2 = nix_qinq_update(cq2_w2, ol_flags2, mbuf2);
+ ol_flags3 = nix_qinq_update(cq3_w2, ol_flags3, mbuf3);
+ }
+
+ if (flags & NIX_RX_OFFLOAD_MARK_UPDATE_F) {
+ ol_flags0 = nix_update_match_id(*(uint16_t *)
+ (cq0 + CQE_SZ(0) + 38), ol_flags0, mbuf0);
+ ol_flags1 = nix_update_match_id(*(uint16_t *)
+ (cq0 + CQE_SZ(1) + 38), ol_flags1, mbuf1);
+ ol_flags2 = nix_update_match_id(*(uint16_t *)
+ (cq0 + CQE_SZ(2) + 38), ol_flags2, mbuf2);
+ ol_flags3 = nix_update_match_id(*(uint16_t *)
+ (cq0 + CQE_SZ(3) + 38), ol_flags3, mbuf3);
+ }
+
+ /* Form rearm_data with ol_flags */
+ rearm0 = vsetq_lane_u64(ol_flags0, rearm0, 1);
+ rearm1 = vsetq_lane_u64(ol_flags1, rearm1, 1);
+ rearm2 = vsetq_lane_u64(ol_flags2, rearm2, 1);
+ rearm3 = vsetq_lane_u64(ol_flags3, rearm3, 1);
+
+ /* Update rx_descriptor_fields1 */
+ vst1q_u64((uint64_t *)mbuf0->rx_descriptor_fields1, f0);
+ vst1q_u64((uint64_t *)mbuf1->rx_descriptor_fields1, f1);
+ vst1q_u64((uint64_t *)mbuf2->rx_descriptor_fields1, f2);
+ vst1q_u64((uint64_t *)mbuf3->rx_descriptor_fields1, f3);
+
+ /* Update rearm_data */
+ vst1q_u64((uint64_t *)mbuf0->rearm_data, rearm0);
+ vst1q_u64((uint64_t *)mbuf1->rearm_data, rearm1);
+ vst1q_u64((uint64_t *)mbuf2->rearm_data, rearm2);
+ vst1q_u64((uint64_t *)mbuf3->rearm_data, rearm3);
+
+ /* Store the mbufs to rx_pkts */
+ vst1q_u64((uint64_t *)&rx_pkts[packets], mbuf01);
+ vst1q_u64((uint64_t *)&rx_pkts[packets + 2], mbuf23);
+
+ /* Prefetch mbufs */
+ otx2_prefetch_store_keep(mbuf0);
+ otx2_prefetch_store_keep(mbuf1);
+ otx2_prefetch_store_keep(mbuf2);
+ otx2_prefetch_store_keep(mbuf3);
+
+ /* Mark mempool obj as "get" as it is alloc'ed by NIX */
+ __mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1);
+ __mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1);
+ __mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1);
+ __mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1);
+
+ /* Advance head pointer and packets */
+ head += NIX_DESCS_PER_LOOP; head &= qmask;
+ packets += NIX_DESCS_PER_LOOP;
+ }
+
+ rxq->head = head;
+ rxq->available -= packets;
+
+ rte_cio_wmb();
+ /* Free all the CQs that we've processed */
+ otx2_write64((rxq->wdata | packets), rxq->cq_door);
+
+ return packets;
+}
+
+#else
+
+static inline uint16_t
+nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ RTE_SET_USED(rx_queue);
+ RTE_SET_USED(rx_pkts);
+ RTE_SET_USED(pkts);
+ RTE_SET_USED(flags);
+
+ return 0;
+}
+
+#endif
#define R(name, f5, f4, f3, f2, f1, f0, flags) \
static uint16_t __rte_noinline __hot \
@@ -100,6 +333,16 @@ otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \
return nix_recv_pkts(rx_queue, rx_pkts, pkts, \
(flags) | NIX_RX_MULTI_SEG_F); \
} \
+ \
+static uint16_t __rte_noinline __hot \
+otx2_nix_recv_pkts_vec_ ## name(void *rx_queue, \
+ struct rte_mbuf **rx_pkts, uint16_t pkts) \
+{ \
+ /* TSTMP is not supported by vector */ \
+ if ((flags) & NIX_RX_OFFLOAD_TSTAMP_F) \
+ return 0; \
+ return nix_recv_pkts_vector(rx_queue, rx_pkts, pkts, (flags)); \
+} \
NIX_RX_FASTPATH_MODES
#undef R
@@ -141,7 +384,21 @@ NIX_RX_FASTPATH_MODES
#undef R
};
- pick_rx_func(eth_dev, nix_eth_rx_burst);
+ const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+ [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_vec_ ## name,
+
+NIX_RX_FASTPATH_MODES
+#undef R
+ };
+
+ /* When PTP is enabled, choose the scalar Rx function, as most
+ * PTP applications receive bursts of a single packet.
+ */
+ if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+ pick_rx_func(eth_dev, nix_eth_rx_burst);
+ else
+ pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
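A note on the vector path above: it relies on the CQ ring size being a multiple of 4, so the head index only needs to be wrapped once per loop iteration instead of per descriptor, and on the burst count being floor-aligned to NIX_DESCS_PER_LOOP. A minimal sketch of just that loop structure under those assumptions (hypothetical names, not the driver's code):

#include <stdint.h>

#if defined(__aarch64__)
#include <arm_neon.h>

/* Sum a ring of 64-bit words four at a time. Assumes the ring size
 * (qmask + 1) is a multiple of 4, head starts 4-aligned, and n is
 * already floor-aligned to 4 -- so head only needs wrapping once
 * per iteration, not per element.
 */
static uint32_t sum_ring_vec(const uint64_t *ring, uint32_t qmask,
			     uint32_t head, uint32_t n)
{
	uint64x2_t acc = vdupq_n_u64(0);
	uint32_t i;

	for (i = 0; i < n; i += 4) {
		acc = vaddq_u64(acc, vld1q_u64(&ring[head]));
		acc = vaddq_u64(acc, vld1q_u64(&ring[head + 2]));
		head = (head + 4) & qmask;
	}
	return (uint32_t)(vgetq_lane_u64(acc, 0) + vgetq_lane_u64(acc, 1));
}
#endif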
* [dpdk-dev] [PATCH v3 51/58] net/octeontx2: add Tx burst support
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (49 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 50/58] net/octeontx2: add Rx vector version jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 52/58] net/octeontx2: add Tx multi segment version jerinj
` (7 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Pavan Nikhilesh, Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Add Tx burst support.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
doc/guides/nics/features/octeontx2.ini | 5 +
doc/guides/nics/features/octeontx2_vec.ini | 5 +
doc/guides/nics/features/octeontx2_vf.ini | 5 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/Makefile | 1 +
drivers/net/octeontx2/meson.build | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 6 -
drivers/net/octeontx2/otx2_ethdev.h | 1 +
drivers/net/octeontx2/otx2_tx.c | 94 ++++++++
drivers/net/octeontx2/otx2_tx.h | 261 +++++++++++++++++++++
10 files changed, 374 insertions(+), 6 deletions(-)
create mode 100644 drivers/net/octeontx2/otx2_tx.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 3280cba78..1856d9924 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
Promiscuous mode = Y
@@ -28,6 +29,10 @@ Jumbo frame = Y
Scattered Rx = Y
VLAN offload = Y
QinQ offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Inner L3 checksum = Y
+Inner L4 checksum = Y
Packet type parsing = Y
Timesync = Y
Timestamp offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 315722e60..053fca288 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
Promiscuous mode = Y
@@ -27,6 +28,10 @@ Flow API = Y
Jumbo frame = Y
VLAN offload = Y
QinQ offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Inner L3 checksum = Y
+Inner L4 checksum = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 17b223221..bef451d01 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -11,6 +11,7 @@ Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
Runtime Tx queue setup = Y
+Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
RSS hash = Y
@@ -23,6 +24,10 @@ Jumbo frame = Y
Scattered Rx = Y
VLAN offload = Y
QinQ offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Inner L3 checksum = Y
+Inner L4 checksum = Y
Packet type parsing = Y
Rx descriptor status = Y
Basic stats = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 97054d11d..e92631057 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -25,6 +25,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Receiver Side Scaling (RSS)
- MAC/VLAN filtering
- Generic flow API
+- Inner and Outer Checksum offload
- VLAN/QinQ stripping and insertion
- Port hardware statistics
- Link state information
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index ee5bbb24b..244b7445d 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -33,6 +33,7 @@ LIBABIVER := 1
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
otx2_rx.c \
+ otx2_tx.c \
otx2_tm.c \
otx2_rss.c \
otx2_mac.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 9d151f88d..94bf09a78 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
#
sources = files('otx2_rx.c',
+ 'otx2_tx.c',
'otx2_tm.c',
'otx2_rss.c',
'otx2_mac.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 321716945..1081d070a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -14,12 +14,6 @@
#include "otx2_ethdev.h"
-static inline void
-otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(eth_dev);
-}
-
static inline uint64_t
nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
{
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 22cf86981..1f9323fe3 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -484,6 +484,7 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
/* Rx and Tx routines */
void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev);
+void otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev);
void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
/* Timesync - PTP routines */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
new file mode 100644
index 000000000..16d69b74f
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_vect.h>
+
+#include "otx2_ethdev.h"
+
+#define NIX_XMIT_FC_OR_RETURN(txq, pkts) do { \
+ /* Cached value is low; update fc_cache_pkts */ \
+ if (unlikely((txq)->fc_cache_pkts < (pkts))) { \
+ /* Multiply with sqe_per_sqb to express in pkts */ \
+ (txq)->fc_cache_pkts = \
+ ((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem) << \
+ (txq)->sqes_per_sqb_log2; \
+ /* Check again whether there is room */ \
+ if (unlikely((txq)->fc_cache_pkts < (pkts))) \
+ return 0; \
+ } \
+} while (0)
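+/* Note: (txq)->nb_sqb_bufs_adj - *(txq)->fc_mem is the count of free
+ * SQB buffers, and the shift by sqes_per_sqb_log2 converts that SQB
+ * count into the number of packets that can still be queued.
+ */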
+
+
+static __rte_always_inline uint16_t
+nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t pkts, uint64_t *cmd, const uint16_t flags)
+{
+ struct otx2_eth_txq *txq = tx_queue; uint16_t i;
+ const rte_iova_t io_addr = txq->io_addr;
+ void *lmt_addr = txq->lmt_addr;
+
+ NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+ otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
+
+ /* Let's commit any changes in the packet */
+ rte_cio_wmb();
+
+ for (i = 0; i < pkts; i++) {
+ otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags);
+ /* Pass the number of segdw as 4: HDR + EXT + SG + SMEM */
+ otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
+ tx_pkts[i]->ol_flags, 4, flags);
+ otx2_nix_xmit_one(cmd, lmt_addr, io_addr, flags);
+ }
+
+ /* Reduce the cached count */
+ txq->fc_cache_pkts -= pkts;
+
+ return pkts;
+}
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+static uint16_t __rte_noinline __hot \
+otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
+ struct rte_mbuf **tx_pkts, uint16_t pkts) \
+{ \
+ uint64_t cmd[sz]; \
+ \
+ return nix_xmit_pkts(tx_queue, tx_pkts, pkts, cmd, flags); \
+}
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
+static inline void
+pick_tx_func(struct rte_eth_dev *eth_dev,
+ const eth_tx_burst_t tx_burst[2][2][2][2][2])
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* [TSTMP] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
+ eth_dev->tx_pkt_burst = tx_burst
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+ [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+}
+
+void
+otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
+{
+ const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name,
+
+NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ pick_tx_func(eth_dev, nix_eth_tx_burst);
+
+ rte_mb();
+}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index 4d0993f87..db4c1f70f 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -25,4 +25,265 @@
#define NIX_TX_NEED_EXT_HDR \
(NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F)
+/* Determine the number of Tx subdescriptors required when the
+ * extension subdescriptor is enabled.
+ */
+static __rte_always_inline int
+otx2_nix_tx_ext_subs(const uint16_t flags)
+{
+ return (flags & NIX_TX_OFFLOAD_TSTAMP_F) ? 2 :
+ ((flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) ? 1 : 0);
+}
+
+static __rte_always_inline void
+otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
+ const uint64_t ol_flags, const uint16_t no_segdw,
+ const uint16_t flags)
+{
+ if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
+ struct nix_send_mem_s *send_mem;
+ uint16_t off = (no_segdw - 1) << 1;
+
+ send_mem = (struct nix_send_mem_s *)(cmd + off);
+ if (flags & NIX_TX_MULTI_SEG_F)
+ /* Retrieving the default desc values */
+ cmd[off] = send_mem_desc[6];
+
+ /* For packets without PKT_TX_IEEE1588_TMST set, the Tx
+ * timestamp must not be written to the registered timestamp
+ * address; instead, a dummy address eight bytes ahead is
+ * updated.
+ */
+ send_mem->addr = (rte_iova_t)((uint64_t *)send_mem_desc[7] +
+ !(ol_flags & PKT_TX_IEEE1588_TMST));
+ }
+}
+
+static inline void
+otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
+{
+ struct nix_send_ext_s *send_hdr_ext;
+ struct nix_send_hdr_s *send_hdr;
+ uint64_t ol_flags = 0, mask;
+ union nix_send_hdr_w1_u w1;
+ union nix_send_sg_s *sg;
+
+ send_hdr = (struct nix_send_hdr_s *)cmd;
+ if (flags & NIX_TX_NEED_EXT_HDR) {
+ send_hdr_ext = (struct nix_send_ext_s *)(cmd + 2);
+ sg = (union nix_send_sg_s *)(cmd + 4);
+ /* Clear previous markings */
+ send_hdr_ext->w0.lso = 0;
+ send_hdr_ext->w1.u = 0;
+ } else {
+ sg = (union nix_send_sg_s *)(cmd + 2);
+ }
+
+ if (flags & NIX_TX_NEED_SEND_HDR_W1) {
+ ol_flags = m->ol_flags;
+ w1.u = 0;
+ }
+
+ if (!(flags & NIX_TX_MULTI_SEG_F)) {
+ send_hdr->w0.total = m->data_len;
+ send_hdr->w0.aura =
+ npa_lf_aura_handle_to_aura(m->pool->pool_id);
+ }
+
+ /*
+ * L3type: 2 => IPV4
+ *         3 => IPV4 with csum
+ *         4 => IPV6
+ * L3type and L3ptr need to be set for either
+ * L3 csum or L4 csum or LSO.
+ */
+
+ if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
+ (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
+ const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+ const uint8_t ol3type =
+ ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
+ ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
+ !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+
+ /* Outer L3 */
+ w1.ol3type = ol3type;
+ mask = 0xffffull << ((!!ol3type) << 4);
+ w1.ol3ptr = ~mask & m->outer_l2_len;
+ w1.ol4ptr = ~mask & (w1.ol3ptr + m->outer_l3_len);
+
+ /* Outer L4 */
+ w1.ol4type = csum + (csum << 1);
+
+ /* Inner L3 */
+ w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
+ ((!!(ol_flags & PKT_TX_IPV6)) << 2);
+ w1.il3ptr = w1.ol4ptr + m->l2_len;
+ w1.il4ptr = w1.il3ptr + m->l3_len;
+ /* Increment it by 1 if it is IPV4 as 3 is with csum */
+ w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+
+ /* Inner L4 */
+ w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+
+ /* If there is no tunnel header, shift the IL3/IL4 fields
+ * down so that OL3/OL4 are used for the header checksum.
+ */
+ mask = !ol3type;
+ w1.u = ((w1.u & 0xFFFFFFFF00000000) >> (mask << 3)) |
+ ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
+
+ } else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
+ const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+ const uint8_t outer_l2_len = m->outer_l2_len;
+
+ /* Outer L3 */
+ w1.ol3ptr = outer_l2_len;
+ w1.ol4ptr = outer_l2_len + m->outer_l3_len;
+ /* Increment it by 1 if it is IPV4 as 3 is with csum */
+ w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
+ ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
+ !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+
+ /* Outer L4 */
+ w1.ol4type = csum + (csum << 1);
+
+ } else if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) {
+ const uint8_t l2_len = m->l2_len;
+
+ /* Always use OLXPTR and OLXTYPE when only one header
+ * is present.
+ */
+
+ /* Inner L3 */
+ w1.ol3ptr = l2_len;
+ w1.ol4ptr = l2_len + m->l3_len;
+ /* Increment it by 1 if it is IPV4 as 3 is with csum */
+ w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
+ ((!!(ol_flags & PKT_TX_IPV6)) << 2) +
+ !!(ol_flags & PKT_TX_IP_CKSUM);
+
+ /* Inner L4 */
+ w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+ }
+
+ if (flags & NIX_TX_NEED_EXT_HDR &&
+ flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
+ send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+ /* HW will update ptr after vlan0 update */
+ send_hdr_ext->w1.vlan1_ins_ptr = 12;
+ send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
+
+ send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+ /* 2B before end of l2 header */
+ send_hdr_ext->w1.vlan0_ins_ptr = 12;
+ send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
+ }
+
+ if (flags & NIX_TX_NEED_SEND_HDR_W1)
+ send_hdr->w1.u = w1.u;
+
+ if (!(flags & NIX_TX_MULTI_SEG_F)) {
+ sg->seg1_size = m->data_len;
+ *(rte_iova_t *)(++sg) = rte_mbuf_data_iova(m);
+
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+ /* Set don't free bit if reference count > 1 */
+ if (rte_pktmbuf_prefree_seg(m) == NULL)
+ send_hdr->w0.df = 1; /* SET DF */
+ }
+ /* Mark mempool object as "put" since it is freed by NIX */
+ if (!send_hdr->w0.df)
+ __mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+ }
+}
+
+
+static __rte_always_inline void
+otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr,
+ const rte_iova_t io_addr, const uint32_t flags)
+{
+ uint64_t lmt_status;
+
+ do {
+ otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags));
+ lmt_status = otx2_lmt_submit(io_addr);
+ } while (lmt_status == 0);
+}
+
+
+#define L3L4CSUM_F NIX_TX_OFFLOAD_L3_L4_CSUM_F
+#define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F
+#define VLAN_F NIX_TX_OFFLOAD_VLAN_QINQ_F
+#define NOFF_F NIX_TX_OFFLOAD_MBUF_NOFF_F
+#define TSP_F NIX_TX_OFFLOAD_TSTAMP_F
+
+/* [TSTMP] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
+#define NIX_TX_FASTPATH_MODES \
+T(no_offload, 0, 0, 0, 0, 0, 4, \
+ NIX_TX_OFFLOAD_NONE) \
+T(l3l4csum, 0, 0, 0, 0, 1, 4, \
+ L3L4CSUM_F) \
+T(ol3ol4csum, 0, 0, 0, 1, 0, 4, \
+ OL3OL4CSUM_F) \
+T(ol3ol4csum_l3l4csum, 0, 0, 0, 1, 1, 4, \
+ OL3OL4CSUM_F | L3L4CSUM_F) \
+T(vlan, 0, 0, 1, 0, 0, 6, \
+ VLAN_F) \
+T(vlan_l3l4csum, 0, 0, 1, 0, 1, 6, \
+ VLAN_F | L3L4CSUM_F) \
+T(vlan_ol3ol4csum, 0, 0, 1, 1, 0, 6, \
+ VLAN_F | OL3OL4CSUM_F) \
+T(vlan_ol3ol4csum_l3l4csum, 0, 0, 1, 1, 1, 6, \
+ VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(noff, 0, 1, 0, 0, 0, 4, \
+ NOFF_F) \
+T(noff_l3l4csum, 0, 1, 0, 0, 1, 4, \
+ NOFF_F | L3L4CSUM_F) \
+T(noff_ol3ol4csum, 0, 1, 0, 1, 0, 4, \
+ NOFF_F | OL3OL4CSUM_F) \
+T(noff_ol3ol4csum_l3l4csum, 0, 1, 0, 1, 1, 4, \
+ NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(noff_vlan, 0, 1, 1, 0, 0, 6, \
+ NOFF_F | VLAN_F) \
+T(noff_vlan_l3l4csum, 0, 1, 1, 0, 1, 6, \
+ NOFF_F | VLAN_F | L3L4CSUM_F) \
+T(noff_vlan_ol3ol4csum, 0, 1, 1, 1, 0, 6, \
+ NOFF_F | VLAN_F | OL3OL4CSUM_F) \
+T(noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 1, 6, \
+ NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(ts, 1, 0, 0, 0, 0, 8, \
+ TSP_F) \
+T(ts_l3l4csum, 1, 0, 0, 0, 1, 8, \
+ TSP_F | L3L4CSUM_F) \
+T(ts_ol3ol4csum, 1, 0, 0, 1, 0, 8, \
+ TSP_F | OL3OL4CSUM_F) \
+T(ts_ol3ol4csum_l3l4csum, 1, 0, 0, 1, 1, 8, \
+ TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(ts_vlan, 1, 0, 1, 0, 0, 8, \
+ TSP_F | VLAN_F) \
+T(ts_vlan_l3l4csum, 1, 0, 1, 0, 1, 8, \
+ TSP_F | VLAN_F | L3L4CSUM_F) \
+T(ts_vlan_ol3ol4csum, 1, 0, 1, 1, 0, 8, \
+ TSP_F | VLAN_F | OL3OL4CSUM_F) \
+T(ts_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 8, \
+ TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(ts_noff, 1, 1, 0, 0, 0, 8, \
+ TSP_F | NOFF_F) \
+T(ts_noff_l3l4csum, 1, 1, 0, 0, 1, 8, \
+ TSP_F | NOFF_F | L3L4CSUM_F) \
+T(ts_noff_ol3ol4csum, 1, 1, 0, 1, 0, 8, \
+ TSP_F | NOFF_F | OL3OL4CSUM_F) \
+T(ts_noff_ol3ol4csum_l3l4csum, 1, 1, 0, 1, 1, 8, \
+ TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(ts_noff_vlan, 1, 1, 1, 0, 0, 8, \
+ TSP_F | NOFF_F | VLAN_F) \
+T(ts_noff_vlan_l3l4csum, 1, 1, 1, 0, 1, 8, \
+ TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
+T(ts_noff_vlan_ol3ol4csum, 1, 1, 1, 1, 0, 8, \
+ TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
+T(ts_noff_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 1, 8, \
+ TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)
+
#endif /* __OTX2_TX_H__ */
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
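A note on the checksum handling above: the send header's L3TYPE field uses the encoding 2 = IPv4, 3 = IPv4 with checksum, 4 = IPv6, built from the mbuf ol_flags bits. A self-contained sketch of just that encoding (the EX_* constants are hypothetical stand-ins for the PKT_TX_* flag bits):

#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the PKT_TX_* mbuf offload flag bits */
#define EX_TX_IP_CKSUM (1ULL << 0)
#define EX_TX_IPV4     (1ULL << 1)
#define EX_TX_IPV6     (1ULL << 2)

/* L3TYPE encoding per the comment in otx2_nix_xmit_prepare:
 * 2 => IPv4, 3 => IPv4 with csum, 4 => IPv6.
 */
static uint8_t ex_l3type(uint64_t ol_flags)
{
	return ((!!(ol_flags & EX_TX_IPV4)) << 1) +
	       ((!!(ol_flags & EX_TX_IPV6)) << 2) +
	       !!(ol_flags & EX_TX_IP_CKSUM);
}

int main(void)
{
	assert(ex_l3type(EX_TX_IPV4) == 2);
	assert(ex_l3type(EX_TX_IPV4 | EX_TX_IP_CKSUM) == 3);
	assert(ex_l3type(EX_TX_IPV6) == 4);
	return 0;
}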
* [dpdk-dev] [PATCH v3 52/58] net/octeontx2: add Tx multi segment version
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (50 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 51/58] net/octeontx2: add Tx burst support jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 53/58] net/octeontx2: add Tx vector version jerinj
` (6 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Pavan Nikhilesh
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add multi segment version of packet Transmit function.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/net/octeontx2/otx2_ethdev.h | 4 ++
drivers/net/octeontx2/otx2_tx.c | 58 +++++++++++++++++++++
drivers/net/octeontx2/otx2_tx.h | 81 +++++++++++++++++++++++++++++
3 files changed, 143 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 1f9323fe3..f39fdfa1f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -89,6 +89,10 @@
#define NIX_TX_NB_SEG_MAX 9
#endif
+#define NIX_TX_MSEG_SG_DWORDS \
+ ((RTE_ALIGN_MUL_CEIL(NIX_TX_NB_SEG_MAX, 3) / 3) \
+ + NIX_TX_NB_SEG_MAX)
+
/* Apply BP when CQ is 75% full */
#define NIX_CQ_BP_LEVEL (25 * 256 / 100)
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index 16d69b74f..0ac5ea652 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -49,6 +49,37 @@ nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
return pkts;
}
+static __rte_always_inline uint16_t
+nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t pkts, uint64_t *cmd, const uint16_t flags)
+{
+ struct otx2_eth_txq *txq = tx_queue; uint64_t i;
+ const rte_iova_t io_addr = txq->io_addr;
+ void *lmt_addr = txq->lmt_addr;
+ uint16_t segdw;
+
+ NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+ otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
+
+ /* Let's commit any changes in the packet */
+ rte_cio_wmb();
+
+ for (i = 0; i < pkts; i++) {
+ otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags);
+ segdw = otx2_nix_prepare_mseg(tx_pkts[i], cmd, flags);
+ otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
+ tx_pkts[i]->ol_flags, segdw,
+ flags);
+ otx2_nix_xmit_mseg_one(cmd, lmt_addr, io_addr, segdw);
+ }
+
+ /* Reduce the cached count */
+ txq->fc_cache_pkts -= pkts;
+
+ return pkts;
+}
+
#define T(name, f4, f3, f2, f1, f0, sz, flags) \
static uint16_t __rte_noinline __hot \
otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
@@ -62,6 +93,20 @@ otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
NIX_TX_FASTPATH_MODES
#undef T
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+static uint16_t __rte_noinline __hot \
+otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \
+ struct rte_mbuf **tx_pkts, uint16_t pkts) \
+{ \
+ uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
+ \
+ return nix_xmit_pkts_mseg(tx_queue, tx_pkts, pkts, cmd, \
+ (flags) | NIX_TX_MULTI_SEG_F); \
+}
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
static inline void
pick_tx_func(struct rte_eth_dev *eth_dev,
const eth_tx_burst_t tx_burst[2][2][2][2][2])
@@ -80,15 +125,28 @@ pick_tx_func(struct rte_eth_dev *eth_dev,
void
otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2] = {
#define T(name, f4, f3, f2, f1, f0, sz, flags) \
[f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name,
+NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_mseg_ ## name,
+
NIX_TX_FASTPATH_MODES
#undef T
};
pick_tx_func(eth_dev, nix_eth_tx_burst);
+ if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+ pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
+
rte_mb();
}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index db4c1f70f..b75a220ea 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -212,6 +212,87 @@ otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr,
} while (lmt_status == 0);
}
+static __rte_always_inline uint16_t
+otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
+{
+ struct nix_send_hdr_s *send_hdr;
+ union nix_send_sg_s *sg;
+ struct rte_mbuf *m_next;
+ uint64_t *slist, sg_u;
+ uint64_t nb_segs;
+ uint64_t segdw;
+ uint8_t off, i;
+
+ send_hdr = (struct nix_send_hdr_s *)cmd;
+ send_hdr->w0.total = m->pkt_len;
+ send_hdr->w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id);
+
+ if (flags & NIX_TX_NEED_EXT_HDR)
+ off = 2;
+ else
+ off = 0;
+
+ sg = (union nix_send_sg_s *)&cmd[2 + off];
+ sg_u = sg->u;
+ slist = &cmd[3 + off];
+
+ i = 0;
+ nb_segs = m->nb_segs;
+
+ /* Fill mbuf segments */
+ do {
+ m_next = m->next;
+ sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
+ *slist = rte_mbuf_data_iova(m);
+ /* Set invert df if reference count > 1 */
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
+ sg_u |=
+ ((uint64_t)(rte_pktmbuf_prefree_seg(m) == NULL) <<
+ (i + 55));
+ /* Mark mempool object as "put" since it is freed by NIX */
+ if (!(sg_u & (1ULL << (i + 55)))) {
+ m->next = NULL;
+ __mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+ }
+ slist++;
+ i++;
+ nb_segs--;
+ if (i > 2 && nb_segs) {
+ i = 0;
+ /* Next SG subdesc */
+ *(uint64_t *)slist = sg_u & 0xFC00000000000000;
+ sg->u = sg_u;
+ sg->segs = 3;
+ sg = (union nix_send_sg_s *)slist;
+ sg_u = sg->u;
+ slist++;
+ }
+ m = m_next;
+ } while (nb_segs);
+
+ sg->u = sg_u;
+ sg->segs = i;
+ segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
+ /* Round up the extra dwords to a multiple of 2 */
+ segdw = (segdw >> 1) + (segdw & 0x1);
+ /* Default dwords */
+ segdw += (off >> 1) + 1 + !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
+ send_hdr->w0.sizem1 = segdw - 1;
+
+ return segdw;
+}
+
+static __rte_always_inline void
+otx2_nix_xmit_mseg_one(uint64_t *cmd, void *lmt_addr,
+ rte_iova_t io_addr, uint16_t segdw)
+{
+ uint64_t lmt_status;
+
+ do {
+ otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
+ lmt_status = otx2_lmt_submit(io_addr);
+ } while (lmt_status == 0);
+}
#define L3L4CSUM_F NIX_TX_OFFLOAD_L3_L4_CSUM_F
#define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F
--
2.21.0
^ permalink raw reply [flat|nested] 196+ messages in thread
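A note on otx2_nix_prepare_mseg above: each NIX_SEND_SG_S subdescriptor describes up to three segments (one SG dword plus one pointer dword per segment), and the descriptor size is expressed in 16-byte units, i.e. pairs of dwords rounded up. A sketch of that accounting under those assumptions (hypothetical helper, not the driver's code):

#include <stdint.h>

/* Descriptor size accounting mirroring otx2_nix_prepare_mseg: one SG
 * dword per group of up to three segments plus one pointer dword per
 * segment, rounded up to 16-byte units (dword pairs), plus the send
 * header, optional extension, and optional timestamp units.
 */
static uint16_t ex_mseg_size_units(uint16_t nb_segs, int has_ext_hdr,
				   int has_tstamp)
{
	uint16_t sg_dwords = (nb_segs + 2) / 3 + nb_segs;
	uint16_t units = (sg_dwords >> 1) + (sg_dwords & 0x1);

	units += (has_ext_hdr ? 1 : 0) + 1;	/* SEND_EXT_S + SEND_HDR_S */
	units += has_tstamp ? 1 : 0;		/* SEND_MEM_S for timestamp */
	return units;				/* hardware stores units - 1 */
}

With the driver's NIX_TX_NB_SEG_MAX of 9 this peaks at 12 SG-area dwords, consistent with the NIX_TX_MSEG_SG_DWORDS definition in the patch.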
* [dpdk-dev] [PATCH v3 53/58] net/octeontx2: add Tx vector version
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (51 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 52/58] net/octeontx2: add Tx multi segment version jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 54/58] net/octeontx2: add device start operation jerinj
` (5 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Pavan Nikhilesh
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add vector version of packet transmit function.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/net/octeontx2/otx2_tx.c | 883 +++++++++++++++++++++++++++++++-
1 file changed, 882 insertions(+), 1 deletion(-)
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index 0ac5ea652..6bce55112 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -80,6 +80,859 @@ nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
return pkts;
}
+#if defined(RTE_ARCH_ARM64)
+
+#define NIX_DESCS_PER_LOOP 4
+static __rte_always_inline uint16_t
+nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ uint64x2_t dataoff_iova0, dataoff_iova1, dataoff_iova2, dataoff_iova3;
+ uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;
+ uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+ uint64x2_t senddesc01_w0, senddesc23_w0;
+ uint64x2_t senddesc01_w1, senddesc23_w1;
+ uint64x2_t sgdesc01_w0, sgdesc23_w0;
+ uint64x2_t sgdesc01_w1, sgdesc23_w1;
+ struct otx2_eth_txq *txq = tx_queue;
+ uint64_t *lmt_addr = txq->lmt_addr;
+ rte_iova_t io_addr = txq->io_addr;
+ uint64x2_t ltypes01, ltypes23;
+ uint64x2_t xtmp128, ytmp128;
+ uint64x2_t xmask01, xmask23;
+ uint64x2_t mbuf01, mbuf23;
+ uint64x2_t cmd00, cmd01;
+ uint64x2_t cmd10, cmd11;
+ uint64x2_t cmd20, cmd21;
+ uint64x2_t cmd30, cmd31;
+ uint64_t lmt_status, i;
+
+ pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
+
+ NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+ /* Reduce the cached count */
+ txq->fc_cache_pkts -= pkts;
+
+ /* Let's commit any changes in the packet */
+ rte_cio_wmb();
+
+ senddesc01_w0 = vld1q_dup_u64(&txq->cmd[0]);
+ senddesc23_w0 = senddesc01_w0;
+ senddesc01_w1 = vdupq_n_u64(0);
+ senddesc23_w1 = senddesc01_w1;
+ sgdesc01_w0 = vld1q_dup_u64(&txq->cmd[2]);
+ sgdesc23_w0 = sgdesc01_w0;
+
+ for (i = 0; i < pkts; i += NIX_DESCS_PER_LOOP) {
+ mbuf01 = vld1q_u64((uint64_t *)tx_pkts);
+ mbuf23 = vld1q_u64((uint64_t *)(tx_pkts + 2));
+
+ /* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */
+ senddesc01_w0 = vbicq_u64(senddesc01_w0,
+ vdupq_n_u64(0xFFFFFFFF));
+ sgdesc01_w0 = vbicq_u64(sgdesc01_w0,
+ vdupq_n_u64(0xFFFFFFFF));
+
+ senddesc23_w0 = senddesc01_w0;
+ sgdesc23_w0 = sgdesc01_w0;
+
+ tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP;
+
+ /* Move mbufs to iova */
+ mbuf0 = (uint64_t *)vgetq_lane_u64(mbuf01, 0);
+ mbuf1 = (uint64_t *)vgetq_lane_u64(mbuf01, 1);
+ mbuf2 = (uint64_t *)vgetq_lane_u64(mbuf23, 0);
+ mbuf3 = (uint64_t *)vgetq_lane_u64(mbuf23, 1);
+
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mbuf, buf_iova));
+ /*
+ * Get each mbuf's ol_flags, iova, pkt_len and data_off:
+ * dataoff_iovaX.D[0] = iova,
+ * dataoff_iovaX.D[1](15:0) = mbuf->dataoff
+ * len_olflagsX.D[0] = ol_flags,
+ * len_olflagsX.D[1](63:32) = mbuf->pkt_len
+ */
+ dataoff_iova0 = vld1q_u64(mbuf0);
+ len_olflags0 = vld1q_u64(mbuf0 + 2);
+ dataoff_iova1 = vld1q_u64(mbuf1);
+ len_olflags1 = vld1q_u64(mbuf1 + 2);
+ dataoff_iova2 = vld1q_u64(mbuf2);
+ len_olflags2 = vld1q_u64(mbuf2 + 2);
+ dataoff_iova3 = vld1q_u64(mbuf3);
+ len_olflags3 = vld1q_u64(mbuf3 + 2);
+
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+ struct rte_mbuf *mbuf;
+ /* Set don't free bit if reference count > 1 */
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
+ offsetof(struct rte_mbuf, buf_iova));
+
+ if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+ vsetq_lane_u64(0x80000, xmask01, 0);
+ else
+ __mempool_check_cookies(mbuf->pool,
+ (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
+ offsetof(struct rte_mbuf, buf_iova));
+ if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+ vsetq_lane_u64(0x80000, xmask01, 1);
+ else
+ __mempool_check_cookies(mbuf->pool,
+ (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
+ offsetof(struct rte_mbuf, buf_iova));
+ if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+ vsetq_lane_u64(0x80000, xmask23, 0);
+ else
+ __mempool_check_cookies(mbuf->pool,
+ (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
+ offsetof(struct rte_mbuf, buf_iova));
+ if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+ vsetq_lane_u64(0x80000, xmask23, 1);
+ else
+ __mempool_check_cookies(mbuf->pool,
+ (void **)&mbuf,
+ 1, 0);
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+ } else {
+ struct rte_mbuf *mbuf;
+ /* Mark mempool object as "put" since
+ * it is freed by NIX
+ */
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
+ offsetof(struct rte_mbuf, buf_iova));
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
+ offsetof(struct rte_mbuf, buf_iova));
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
+ offsetof(struct rte_mbuf, buf_iova));
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+ 1, 0);
+
+ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
+ offsetof(struct rte_mbuf, buf_iova));
+ __mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+ 1, 0);
+ RTE_SET_USED(mbuf);
+ }
+
+ /* Move mbufs to point pool */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mbuf, pool) -
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mbuf, pool) -
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mbuf, pool) -
+ offsetof(struct rte_mbuf, buf_iova));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mbuf, pool) -
+ offsetof(struct rte_mbuf, buf_iova));
+
+ if (flags &
+ (NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
+ NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
+ /* Get tx_offload for ol2, ol3, l2, l3 lengths */
+ /*
+ * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
+ * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
+ */
+
+ asm volatile ("LD1 {%[a].D}[0],[%[in]]\n\t" :
+ [a]"+w"(senddesc01_w1) :
+ [in]"r"(mbuf0 + 2) : "memory");
+
+ asm volatile ("LD1 {%[a].D}[1],[%[in]]\n\t" :
+ [a]"+w"(senddesc01_w1) :
+ [in]"r"(mbuf1 + 2) : "memory");
+
+ asm volatile ("LD1 {%[b].D}[0],[%[in]]\n\t" :
+ [b]"+w"(senddesc23_w1) :
+ [in]"r"(mbuf2 + 2) : "memory");
+
+ asm volatile ("LD1 {%[b].D}[1],[%[in]]\n\t" :
+ [b]"+w"(senddesc23_w1) :
+ [in]"r"(mbuf3 + 2) : "memory");
+
+ /* Get pool pointer alone */
+ mbuf0 = (uint64_t *)*mbuf0;
+ mbuf1 = (uint64_t *)*mbuf1;
+ mbuf2 = (uint64_t *)*mbuf2;
+ mbuf3 = (uint64_t *)*mbuf3;
+ } else {
+ /* Get pool pointer alone */
+ mbuf0 = (uint64_t *)*mbuf0;
+ mbuf1 = (uint64_t *)*mbuf1;
+ mbuf2 = (uint64_t *)*mbuf2;
+ mbuf3 = (uint64_t *)*mbuf3;
+ }
+
+ const uint8x16_t shuf_mask2 = {
+ 0x4, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xc, 0xd, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ };
+ xtmp128 = vzip2q_u64(len_olflags0, len_olflags1);
+ ytmp128 = vzip2q_u64(len_olflags2, len_olflags3);
+
+ /* Clear dataoff_iovaX.D[1] bits other than dataoff(15:0) */
+ const uint64x2_t and_mask0 = {
+ 0xFFFFFFFFFFFFFFFF,
+ 0x000000000000FFFF,
+ };
+
+ dataoff_iova0 = vandq_u64(dataoff_iova0, and_mask0);
+ dataoff_iova1 = vandq_u64(dataoff_iova1, and_mask0);
+ dataoff_iova2 = vandq_u64(dataoff_iova2, and_mask0);
+ dataoff_iova3 = vandq_u64(dataoff_iova3, and_mask0);
+
+ /*
+ * Pick only 16 bits of pktlen present at bits 63:32
+ * and place them at bits 15:0.
+ */
+ xtmp128 = vqtbl1q_u8(xtmp128, shuf_mask2);
+ ytmp128 = vqtbl1q_u8(ytmp128, shuf_mask2);
+
+ /* Add pairwise to get dataoff + iova in sgdesc_w1 */
+ sgdesc01_w1 = vpaddq_u64(dataoff_iova0, dataoff_iova1);
+ sgdesc23_w1 = vpaddq_u64(dataoff_iova2, dataoff_iova3);
+
+ /* Orr both sgdesc_w0 and senddesc_w0 with 16 bits of
+ * pktlen at 15:0 position.
+ */
+ sgdesc01_w0 = vorrq_u64(sgdesc01_w0, xtmp128);
+ sgdesc23_w0 = vorrq_u64(sgdesc23_w0, ytmp128);
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xtmp128);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, ytmp128);
+
+ if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
+ !(flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
+ /*
+ * Lookup table to translate ol_flags to
+ * il3/il4 types. But we still use ol3/ol4 types in
+ * senddesc_w1 as only one header processing is enabled.
+ */
+ const uint8x16_t tbl = {
+ /* [0-15] = il4type:il3type */
+ 0x04, /* none (IPv6 assumed) */
+ 0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
+ 0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
+ 0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
+ 0x03, /* PKT_TX_IP_CKSUM */
+ 0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
+ 0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
+ 0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
+ 0x02, /* PKT_TX_IPV4 */
+ 0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
+ 0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
+ 0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
+ 0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
+ 0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_TCP_CKSUM
+ */
+ 0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_SCTP_CKSUM
+ */
+ 0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_UDP_CKSUM
+ */
+ };
+
+ /* Extract olflags to translate to iltypes */
+ xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
+ ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
+
+ /*
+ * E(47):L3_LEN(9):L2_LEN(7+z)
+ * E(47):L3_LEN(9):L2_LEN(7+z)
+ */
+ senddesc01_w1 = vshlq_n_u64(senddesc01_w1, 1);
+ senddesc23_w1 = vshlq_n_u64(senddesc23_w1, 1);
+
+ /* Move OLFLAGS bits 55:52 down to 51:48, zero-filling
+ * the upper nibble of each byte; the remaining bits are
+ * don't-care.
+ */
+ xtmp128 = vshrq_n_u8(xtmp128, 4);
+ ytmp128 = vshrq_n_u8(ytmp128, 4);
+ /*
+ * E(48):L3_LEN(8):L2_LEN(z+7)
+ * E(48):L3_LEN(8):L2_LEN(z+7)
+ */
+ const int8x16_t tshft3 = {
+ -1, 0, 8, 8, 8, 8, 8, 8,
+ -1, 0, 8, 8, 8, 8, 8, 8,
+ };
+
+ senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
+ senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
+
+ /* Do the lookup */
+ ltypes01 = vqtbl1q_u8(tbl, xtmp128);
+ ltypes23 = vqtbl1q_u8(tbl, ytmp128);
+
+ /* Point mbuf0..mbuf3 at the mempool's pool_id field
+ * to retrieve the aura
+ */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mempool, pool_id));
+
+ /* Pick only relevant fields i.e. bits 48:55 of iltype
+ * and place it in ol3/ol4type of senddesc_w1
+ */
+ const uint8x16_t shuf_mask0 = {
+ 0xFF, 0xFF, 0xFF, 0xFF, 0x6, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xE, 0xFF, 0xFF, 0xFF,
+ };
+
+ ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
+ ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
+
+ /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
+ * a [E(32):E(16):OL3(8):OL2(8)]
+ * a = a + (a << 8)
+ * a [E(32):E(16):(OL3+OL2):OL2]
+ * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
+ */
+ senddesc01_w1 = vaddq_u8(senddesc01_w1,
+ vshlq_n_u16(senddesc01_w1, 8));
+ senddesc23_w1 = vaddq_u8(senddesc23_w1,
+ vshlq_n_u16(senddesc23_w1, 8));
+
+ /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
+ cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
+ cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
+
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+ asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
+
+ asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
+ xmask01 = vshlq_n_u64(xmask01, 20);
+ xmask23 = vshlq_n_u64(xmask23, 20);
+
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+ /* Move ltypes to senddesc*_w1 */
+ senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
+ senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
+
+ /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
+ cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
+ cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
+ cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
+ cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
+
+ } else if (!(flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
+ (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
+ /*
+ * Lookup table to translate ol_flags to
+ * ol3/ol4 types.
+ */
+
+ const uint8x16_t tbl = {
+ /* [0-15] = ol4type:ol3type */
+ 0x00, /* none */
+ 0x03, /* OUTER_IP_CKSUM */
+ 0x02, /* OUTER_IPV4 */
+ 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
+ 0x04, /* OUTER_IPV6 */
+ 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
+ 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
+ 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
+ * OUTER_IP_CKSUM
+ */
+ 0x00, /* OUTER_UDP_CKSUM */
+ 0x33, /* OUTER_UDP_CKSUM | OUTER_IP_CKSUM */
+ 0x32, /* OUTER_UDP_CKSUM | OUTER_IPV4 */
+ 0x33, /* OUTER_UDP_CKSUM | OUTER_IPV4 |
+ * OUTER_IP_CKSUM
+ */
+ 0x34, /* OUTER_UDP_CKSUM | OUTER_IPV6 */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IP_CKSUM
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IPV4
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IPV4 | OUTER_IP_CKSUM
+ */
+ };
+
+ /* Extract olflags to translate to iltypes */
+ xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
+ ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
+
+ /*
+ * E(47):OL3_LEN(9):OL2_LEN(7+z)
+ * E(47):OL3_LEN(9):OL2_LEN(7+z)
+ */
+ const uint8x16_t shuf_mask5 = {
+ 0x6, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xE, 0xD, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+ };
+ senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
+ senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
+
+ /* Extract outer ol flags only */
+ const uint64x2_t o_cksum_mask = {
+ 0x1C00020000000000,
+ 0x1C00020000000000,
+ };
+
+ xtmp128 = vandq_u64(xtmp128, o_cksum_mask);
+ ytmp128 = vandq_u64(ytmp128, o_cksum_mask);
+
+ /* Extract OUTER_UDP_CKSUM bit 41 and
+ * move it to bit 61
+ */
+
+ xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
+ ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
+
+ /* Shift oltype by 2 to start nibble from BIT(56)
+ * instead of BIT(58)
+ */
+ xtmp128 = vshrq_n_u8(xtmp128, 2);
+ ytmp128 = vshrq_n_u8(ytmp128, 2);
+ /*
+ * E(48):L3_LEN(8):L2_LEN(z+7)
+ * E(48):L3_LEN(8):L2_LEN(z+7)
+ */
+ const int8x16_t tshft3 = {
+ -1, 0, 8, 8, 8, 8, 8, 8,
+ -1, 0, 8, 8, 8, 8, 8, 8,
+ };
+
+ senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
+ senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
+
+ /* Do the lookup */
+ ltypes01 = vqtbl1q_u8(tbl, xtmp128);
+ ltypes23 = vqtbl1q_u8(tbl, ytmp128);
+
+ /* Point mbuf0..mbuf3 at the mempool's pool_id field
+ * to retrieve the aura
+ */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mempool, pool_id));
+
+ /* Pick only relevant fields i.e. bits 56:63 of oltype
+ * and place it in ol3/ol4type of senddesc_w1
+ */
+ const uint8x16_t shuf_mask0 = {
+ 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0xFF, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xFF, 0xFF, 0xFF,
+ };
+
+ ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
+ ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
+
+ /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
+ * a [E(32):E(16):OL3(8):OL2(8)]
+ * a = a + (a << 8)
+ * a [E(32):E(16):(OL3+OL2):OL2]
+ * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
+ */
+ senddesc01_w1 = vaddq_u8(senddesc01_w1,
+ vshlq_n_u16(senddesc01_w1, 8));
+ senddesc23_w1 = vaddq_u8(senddesc23_w1,
+ vshlq_n_u16(senddesc23_w1, 8));
+
+ /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
+ cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
+ cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
+
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+ asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
+
+ asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
+ xmask01 = vshlq_n_u64(xmask01, 20);
+ xmask23 = vshlq_n_u64(xmask23, 20);
+
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+ /* Move ltypes to senddesc*_w1 */
+ senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
+ senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
+
+ /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
+ cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
+ cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
+ cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
+ cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
+
+ } else if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
+ (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
+ /* Lookup table to translate ol_flags to
+ * ol4type, ol3type, il4type, il3type of senddesc_w1
+ */
+ const uint8x16x2_t tbl = {
+ {
+ {
+ /* [0-15] = il4type:il3type */
+ 0x04, /* none (IPv6) */
+ 0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
+ 0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
+ 0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
+ 0x03, /* PKT_TX_IP_CKSUM */
+ 0x13, /* PKT_TX_IP_CKSUM |
+ * PKT_TX_TCP_CKSUM
+ */
+ 0x23, /* PKT_TX_IP_CKSUM |
+ * PKT_TX_SCTP_CKSUM
+ */
+ 0x33, /* PKT_TX_IP_CKSUM |
+ * PKT_TX_UDP_CKSUM
+ */
+ 0x02, /* PKT_TX_IPV4 */
+ 0x12, /* PKT_TX_IPV4 |
+ * PKT_TX_TCP_CKSUM
+ */
+ 0x22, /* PKT_TX_IPV4 |
+ * PKT_TX_SCTP_CKSUM
+ */
+ 0x32, /* PKT_TX_IPV4 |
+ * PKT_TX_UDP_CKSUM
+ */
+ 0x03, /* PKT_TX_IPV4 |
+ * PKT_TX_IP_CKSUM
+ */
+ 0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_TCP_CKSUM
+ */
+ 0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_SCTP_CKSUM
+ */
+ 0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+ * PKT_TX_UDP_CKSUM
+ */
+ },
+
+ {
+ /* [16-31] = ol4type:ol3type */
+ 0x00, /* none */
+ 0x03, /* OUTER_IP_CKSUM */
+ 0x02, /* OUTER_IPV4 */
+ 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
+ 0x04, /* OUTER_IPV6 */
+ 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
+ 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
+ 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
+ * OUTER_IP_CKSUM
+ */
+ 0x00, /* OUTER_UDP_CKSUM */
+ 0x33, /* OUTER_UDP_CKSUM |
+ * OUTER_IP_CKSUM
+ */
+ 0x32, /* OUTER_UDP_CKSUM |
+ * OUTER_IPV4
+ */
+ 0x33, /* OUTER_UDP_CKSUM |
+ * OUTER_IPV4 | OUTER_IP_CKSUM
+ */
+ 0x34, /* OUTER_UDP_CKSUM |
+ * OUTER_IPV6
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IP_CKSUM
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IPV4
+ */
+ 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
+ * OUTER_IPV4 | OUTER_IP_CKSUM
+ */
+ },
+ }
+ };
+
+ /* Extract olflags to translate to oltype & iltype */
+ xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
+ ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
+
+ /*
+ * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
+ * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
+ */
+ const uint32x4_t tshft_4 = {
+ 1, 0,
+ 1, 0,
+ };
+ senddesc01_w1 = vshlq_u32(senddesc01_w1, tshft_4);
+ senddesc23_w1 = vshlq_u32(senddesc23_w1, tshft_4);
+
+ /*
+ * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
+ * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
+ */
+ const uint8x16_t shuf_mask5 = {
+ 0x6, 0x5, 0x0, 0x1, 0xFF, 0xFF, 0xFF, 0xFF,
+ 0xE, 0xD, 0x8, 0x9, 0xFF, 0xFF, 0xFF, 0xFF,
+ };
+ senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
+ senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
+
+ /* Extract outer and inner header ol_flags */
+ const uint64x2_t oi_cksum_mask = {
+ 0x1CF0020000000000,
+ 0x1CF0020000000000,
+ };
+
+ xtmp128 = vandq_u64(xtmp128, oi_cksum_mask);
+ ytmp128 = vandq_u64(ytmp128, oi_cksum_mask);
+
+ /* Extract OUTER_UDP_CKSUM bit 41 and
+ * move it to bit 61
+ */
+
+ xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
+ ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
+
+ /* Shift right oltype by 2 and iltype by 4
+ * to start oltype nibble from BIT(56)
+ * instead of BIT(58) and iltype nibble from BIT(48)
+ * instead of BIT(52).
+ */
+ const int8x16_t tshft5 = {
+ 8, 8, 8, 8, 8, 8, -4, -2,
+ 8, 8, 8, 8, 8, 8, -4, -2,
+ };
+
+ xtmp128 = vshlq_u8(xtmp128, tshft5);
+ ytmp128 = vshlq_u8(ytmp128, tshft5);
+ /*
+ * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
+ * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
+ */
+ const int8x16_t tshft3 = {
+ -1, 0, -1, 0, 0, 0, 0, 0,
+ -1, 0, -1, 0, 0, 0, 0, 0,
+ };
+
+ senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
+ senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
+
+ /* Mark Bit(4) of oltype */
+ const uint64x2_t oi_cksum_mask2 = {
+ 0x1000000000000000,
+ 0x1000000000000000,
+ };
+
+ xtmp128 = vorrq_u64(xtmp128, oi_cksum_mask2);
+ ytmp128 = vorrq_u64(ytmp128, oi_cksum_mask2);
+
+ /* Do the lookup */
+ ltypes01 = vqtbl2q_u8(tbl, xtmp128);
+ ltypes23 = vqtbl2q_u8(tbl, ytmp128);
+
+ /* Point mbuf0..mbuf3 at the mempool's pool_id field
+ * to retrieve the aura
+ */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mempool, pool_id));
+
+ /* Pick only relevant fields i.e. bits 48:55 of iltype and
+ * Bit 56:63 of oltype and place it in corresponding
+ * place in senddesc_w1.
+ */
+ const uint8x16_t shuf_mask0 = {
+ 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0x6, 0xFF, 0xFF,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xE, 0xFF, 0xFF,
+ };
+
+ ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
+ ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
+
+ /* Prepare l4ptr, l3ptr, ol4ptr, ol3ptr from
+ * l3len, l2len, ol3len, ol2len.
+ * a [E(32):L3(8):L2(8):OL3(8):OL2(8)]
+ * a = a + (a << 8)
+ * a [E:(L3+L2):(L2+OL3):(OL3+OL2):OL2]
+ * a = a + (a << 16)
+ * a [E:(L3+L2+OL3+OL2):(L2+OL3+OL2):(OL3+OL2):OL2]
+ * => E(32):IL4PTR(8):IL3PTR(8):OL4PTR(8):OL3PTR(8)
+ */
+ senddesc01_w1 = vaddq_u8(senddesc01_w1,
+ vshlq_n_u32(senddesc01_w1, 8));
+ senddesc23_w1 = vaddq_u8(senddesc23_w1,
+ vshlq_n_u32(senddesc23_w1, 8));
+
+ /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
+ cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
+ cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
+
+ /* Continue preparing l4ptr, l3ptr, ol4ptr, ol3ptr */
+ senddesc01_w1 = vaddq_u8(senddesc01_w1,
+ vshlq_n_u32(senddesc01_w1, 16));
+ senddesc23_w1 = vaddq_u8(senddesc23_w1,
+ vshlq_n_u32(senddesc23_w1, 16));
+
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+ asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
+
+ asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
+ xmask01 = vshlq_n_u64(xmask01, 20);
+ xmask23 = vshlq_n_u64(xmask23, 20);
+
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+ /* Move ltypes to senddesc*_w1 */
+ senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
+ senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
+
+ /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
+ cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
+ cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
+ cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
+ cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
+ } else {
+ /* Just use ld1q to retrieve aura
+ * when we don't need tx_offload
+ */
+ mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+ offsetof(struct rte_mempool, pool_id));
+ mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+ offsetof(struct rte_mempool, pool_id));
+ xmask01 = vdupq_n_u64(0);
+ xmask23 = xmask01;
+ asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
+
+ asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
+ [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
+
+ asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
+ [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
+ xmask01 = vshlq_n_u64(xmask01, 20);
+ xmask23 = vshlq_n_u64(xmask23, 20);
+
+ senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+ senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+
+ /* Create 4W cmd for 4 mbufs (sendhdr, sgdesc) */
+ cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
+ cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
+ cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
+ cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
+ cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
+ cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
+ cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
+ }
+
+ do {
+ vst1q_u64(lmt_addr, cmd00);
+ vst1q_u64(lmt_addr + 2, cmd01);
+ vst1q_u64(lmt_addr + 4, cmd10);
+ vst1q_u64(lmt_addr + 6, cmd11);
+ vst1q_u64(lmt_addr + 8, cmd20);
+ vst1q_u64(lmt_addr + 10, cmd21);
+ vst1q_u64(lmt_addr + 12, cmd30);
+ vst1q_u64(lmt_addr + 14, cmd31);
+ lmt_status = otx2_lmt_submit(io_addr);
+
+ } while (lmt_status == 0);
+ }
+
+ return pkts;
+}
+
+#else
+static __rte_always_inline uint16_t
+nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t pkts, const uint16_t flags)
+{
+ RTE_SET_USED(tx_queue);
+ RTE_SET_USED(tx_pkts);
+ RTE_SET_USED(pkts);
+ RTE_SET_USED(flags);
+ return 0;
+}
+#endif
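For reference, the per-packet translation that the vqtbl1q_u8() table lookups above vectorize four mbufs at a time can be sketched in scalar C as follows (a minimal sketch; the helper name is hypothetical and the table is transcribed from the L3/L4-only `tbl` above):

	/* Scalar sketch: the nibble at ol_flags bits 55:52 (IPV4,
	 * IP_CKSUM and the 2-bit L4 checksum type) indexes a 16-entry
	 * table whose entries pack il4type in the high nibble and
	 * il3type in the low nibble.
	 */
	static inline uint8_t
	olflags_to_iltypes(uint64_t ol_flags)
	{
		static const uint8_t tbl[16] = {
			0x04, 0x14, 0x24, 0x34, /* IPv6 assumed + L4 csum */
			0x03, 0x13, 0x23, 0x33, /* IP_CKSUM + L4 csum */
			0x02, 0x12, 0x22, 0x32, /* IPV4 + L4 csum */
			0x03, 0x13, 0x23, 0x33, /* IPV4|IP_CKSUM + L4 csum */
		};

		return tbl[(ol_flags >> 52) & 0xF];
	}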
+
#define T(name, f4, f3, f2, f1, f0, sz, flags) \
static uint16_t __rte_noinline __hot \
otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
@@ -107,6 +960,21 @@ otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \
NIX_TX_FASTPATH_MODES
#undef T
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+static uint16_t __rte_noinline __hot \
+otx2_nix_xmit_pkts_vec_ ## name(void *tx_queue, \
+ struct rte_mbuf **tx_pkts, uint16_t pkts) \
+{ \
+ /* VLAN and TSTAMP are not supported by vec */ \
+ if ((flags) & NIX_TX_OFFLOAD_VLAN_QINQ_F || \
+ (flags) & NIX_TX_OFFLOAD_TSTAMP_F) \
+ return 0; \
+ return nix_xmit_pkts_vector(tx_queue, tx_pkts, pkts, (flags)); \
+}
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
static inline void
pick_tx_func(struct rte_eth_dev *eth_dev,
const eth_tx_burst_t tx_burst[2][2][2][2][2])
@@ -143,7 +1011,20 @@ NIX_TX_FASTPATH_MODES
#undef T
};
- pick_tx_func(eth_dev, nix_eth_tx_burst);
+ const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags) \
+ [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_vec_ ## name,
+
+NIX_TX_FASTPATH_MODES
+#undef T
+ };
+
+ if (dev->scalar_ena ||
+ (dev->tx_offload_flags &
+ (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F)))
+ pick_tx_func(eth_dev, nix_eth_tx_burst);
+ else
+ pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
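The `[2][2][2][2][2]` tables above turn burst-function selection into a constant-time array index over five offload-flag bits instead of a branch chain. A minimal sketch of the idea (the bit-to-dimension mapping here is an assumption mirroring the T macro's f4..f0 parameters):

	/* Hypothetical sketch: decompose the precomputed offload flag
	 * word into five booleans and index the function-pointer table.
	 */
	static eth_tx_burst_t
	pick_tx_func_sketch(uint16_t flags,
			    const eth_tx_burst_t burst[2][2][2][2][2])
	{
		return burst[!!(flags & (1 << 4))][!!(flags & (1 << 3))]
			    [!!(flags & (1 << 2))][!!(flags & (1 << 1))]
			    [!!(flags & (1 << 0))];
	}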
--
2.21.0
* [dpdk-dev] [PATCH v3 54/58] net/octeontx2: add device start operation
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (52 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 53/58] net/octeontx2: add Tx vector version jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 55/58] net/octeontx2: add device stop and close operations jerinj
` (4 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
Cc: Vamsi Attunuru
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add device start operation and update the correct
function pointers for Rx and Tx burst functions.
This patch also updates the octeontx2 NIC specific
documentation.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
doc/guides/nics/octeontx2.rst | 91 ++++++++++++
drivers/net/octeontx2/otx2_ethdev.c | 180 ++++++++++++++++++++++++
drivers/net/octeontx2/otx2_flow.c | 4 +-
drivers/net/octeontx2/otx2_flow_parse.c | 4 +-
drivers/net/octeontx2/otx2_ptp.c | 8 ++
drivers/net/octeontx2/otx2_vlan.c | 1 +
6 files changed, 286 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index e92631057..31cc1beec 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -34,6 +34,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Vector Poll mode driver
- Debug utilities - Context dump and error interrupt support
- IEEE1588 timestamping
+- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
Prerequisites
-------------
@@ -49,6 +50,63 @@ The following options may be modified in the ``config`` file.
Toggle compilation of the ``librte_pmd_octeontx2`` driver.
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+To compile the OCTEON TX2 PMD for Linux arm64 gcc,
+use ``arm64-octeontx2-linux-gcc`` as the target.
+
+#. Running testpmd:
+
+ Follow instructions available in the document
+ :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+ to run testpmd.
+
+ Example output:
+
+ .. code-block:: console
+
+ ./build/app/testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
+ EAL: Detected 24 lcore(s)
+ EAL: Detected 1 NUMA nodes
+ EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
+ EAL: No available hugepages reported in hugepages-2048kB
+ EAL: Probing VFIO support...
+ EAL: VFIO support initialized
+ EAL: PCI device 0002:02:00.0 on NUMA socket 0
+ EAL: probe driver: 177d:a063 net_octeontx2
+ EAL: using IOMMU type 1 (Type 1)
+ testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
+ testpmd: preferred mempool ops selected: octeontx2_npa
+ Configuring Port 0 (socket 0)
+ PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex
+
+ Port 0: link state change event
+ Port 0: 36:10:66:88:7A:57
+ Checking link statuses...
+ Done
+ No commandline core given, start packet forwarding
+ io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
+ Logical Core 9 (socket 0) forwards packets on 1 streams:
+ RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
+
+ io packet forwarding packets/burst=32
+ nb forwarding cores=1 - nb forwarding ports=1
+ port 0: RX queue number: 1 Tx queue number: 1
+ Rx offloads=0x0 Tx offloads=0x10000
+ RX queue: 0
+ RX desc=512 - RX free threshold=0
+ RX threshold registers: pthresh=0 hthresh=0 wthresh=0
+ RX Offloads=0x0
+ TX queue: 0
+ TX desc=512 - TX free threshold=0
+ TX threshold registers: pthresh=0 hthresh=0 wthresh=0
+ TX offloads=0x10000 - TX RS bit threshold=0
+ Press enter to exit
+
Runtime Config Options
----------------------
@@ -116,6 +174,39 @@ Runtime Config Options
parameters to all the PCIe devices if application requires to configure on
all the ethdev ports.
+Limitations
+-----------
+
+``mempool_octeontx2`` external mempool handler dependency
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The OCTEON TX2 SoC family NIC has an inbuilt HW assisted external mempool manager.
+The ``net_octeontx2`` PMD only works with the ``mempool_octeontx2`` mempool handler
+as it is, performance wise, the most effective way for packet allocation and Tx
+buffer recycling on the OCTEON TX2 SoC platform.
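For illustration (an application-side sketch, not part of the patch), a pool can also be bound to the handler explicitly:

	/* Create a pktmbuf pool on the octeontx2_npa ops;
	 * rte_pktmbuf_pool_create() picks the same handler by default
	 * on this platform via the preferred mempool ops.
	 */
	struct rte_mempool *mp;

	mp = rte_pktmbuf_pool_create_by_ops("mbuf_pool", 8192, 256, 0,
					    RTE_MBUF_DEFAULT_BUF_SIZE,
					    rte_socket_id(),
					    "octeontx2_npa");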
+
+CRC striping
+~~~~~~~~~~~~
+
+The OCTEON TX2 SoC family NICs strip the CRC for every packet being received by
+the host interface irrespective of the offload configuration.
+
+
+Debugging Options
+-----------------
+
+.. _table_octeontx2_ethdev_debug_options:
+
+.. table:: OCTEON TX2 ethdev debug options
+
+ +---+------------+-------------------------------------------------------+
+ | # | Component | EAL log command |
+ +===+============+=======================================================+
+ | 1 | NIX | --log-level='pmd\.net.octeontx2,8' |
+ +---+------------+-------------------------------------------------------+
+ | 2 | NPC | --log-level='pmd\.net.octeontx2\.flow,8' |
+ +---+------------+-------------------------------------------------------+
+
RTE Flow Support
----------------
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 1081d070a..113d382c6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -135,6 +135,55 @@ otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev)
return otx2_mbox_process(mbox);
}
+static int
+npc_rx_enable(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ otx2_mbox_alloc_msg_nix_lf_start_rx(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
+static int
+npc_rx_disable(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ otx2_mbox_alloc_msg_nix_lf_stop_rx(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
+static int
+nix_cgx_start_link_event(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_start_linkevents(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
+static int
+cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ if (en)
+ otx2_mbox_alloc_msg_cgx_intlbk_enable(mbox);
+ else
+ otx2_mbox_alloc_msg_cgx_intlbk_disable(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
static inline void
nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
{
@@ -478,6 +527,74 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
return NIX_MAXSQESZ_W8;
}
+static uint16_t
+nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rte_eth_conf *conf = &data->dev_conf;
+ struct rte_eth_rxmode *rxmode = &conf->rxmode;
+ uint16_t flags = 0;
+
+ if (rxmode->mq_mode == ETH_MQ_RX_RSS)
+ flags |= NIX_RX_OFFLOAD_RSS_F;
+
+ if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM))
+ flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
+
+ if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+ flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
+
+ if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ flags |= NIX_RX_MULTI_SEG_F;
+
+ if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP))
+ flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
+
+ if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+ flags |= NIX_RX_OFFLOAD_TSTAMP_F;
+
+ return flags;
+}
+
+static uint16_t
+nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ uint64_t conf = dev->tx_offloads;
+ uint16_t flags = 0;
+
+ /* Fastpath is dependent on these enums */
+ RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
+ RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
+ RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
+
+ if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
+ conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+ flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
+
+ if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+ conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+ flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
+
+ if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
+ conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
+ conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
+ conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+ flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
+
+ if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+ flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
+
+ if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+ flags |= NIX_TX_MULTI_SEG_F;
+
+ return flags;
+}
+
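For context, these helpers run at configure time; a sketch of the application-side settings they translate (example values assumed, not part of the patch):

	/* rte_eth_conf fields that nix_rx_offload_flags() and
	 * nix_tx_offload_flags() map to NIX_*_OFFLOAD_* fastpath bits.
	 */
	static int
	configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
	{
		struct rte_eth_conf conf;

		memset(&conf, 0, sizeof(conf));
		conf.rxmode.mq_mode = ETH_MQ_RX_RSS;            /* RSS_F */
		conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM; /* CHECKSUM_F */
		conf.txmode.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
				       DEV_TX_OFFLOAD_UDP_CKSUM; /* L3_L4_CSUM_F */

		return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
	}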
static int
nix_sq_init(struct otx2_eth_txq *txq)
{
@@ -1111,6 +1228,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
dev->rx_offloads = rxmode->offloads;
dev->tx_offloads = txmode->offloads;
+ dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev);
+ dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev);
dev->rss_info.rss_grps = NIX_RSS_GRPS;
nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
@@ -1150,6 +1269,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /* Configure loop back mode */
+ rc = cgx_intlbk_enable(dev, eth_dev->data->dev_conf.lpbk_mode);
+ if (rc) {
+ otx2_err("Failed to configure cgx loop back mode rc=%d", rc);
+ goto free_nix_lf;
+ }
+
rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true);
if (rc) {
otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc);
@@ -1299,6 +1425,59 @@ otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
return rc;
}
+static int
+otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc, i;
+
+ /* Start rx queues */
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rc = otx2_nix_rx_queue_start(eth_dev, i);
+ if (rc)
+ return rc;
+ }
+
+ /* Start tx queues */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ rc = otx2_nix_tx_queue_start(eth_dev, i);
+ if (rc)
+ return rc;
+ }
+
+ rc = otx2_nix_update_flow_ctrl_mode(eth_dev);
+ if (rc) {
+ otx2_err("Failed to update flow ctrl mode %d", rc);
+ return rc;
+ }
+
+ rc = npc_rx_enable(dev);
+ if (rc) {
+ otx2_err("Failed to enable NPC rx %d", rc);
+ return rc;
+ }
+
+ otx2_nix_toggle_flag_link_cfg(dev, true);
+
+ rc = nix_cgx_start_link_event(dev);
+ if (rc) {
+ otx2_err("Failed to start cgx link event %d", rc);
+ goto rx_disable;
+ }
+
+ otx2_nix_toggle_flag_link_cfg(dev, false);
+ otx2_eth_set_tx_function(eth_dev);
+ otx2_eth_set_rx_function(eth_dev);
+
+ return 0;
+
+rx_disable:
+ npc_rx_disable(dev);
+ otx2_nix_toggle_flag_link_cfg(dev, false);
+ return rc;
+}
+
+
/* Initialize and register driver with DPDK Application */
static const struct eth_dev_ops otx2_eth_dev_ops = {
.dev_infos_get = otx2_nix_info_get,
@@ -1308,6 +1487,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.tx_queue_release = otx2_nix_tx_queue_release,
.rx_queue_setup = otx2_nix_rx_queue_setup,
.rx_queue_release = otx2_nix_rx_queue_release,
+ .dev_start = otx2_nix_dev_start,
.tx_queue_start = otx2_nix_tx_queue_start,
.tx_queue_stop = otx2_nix_tx_queue_stop,
.rx_queue_start = otx2_nix_rx_queue_start,
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 3ddecfb23..982100df4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -528,8 +528,10 @@ otx2_flow_destroy(struct rte_eth_dev *dev,
return -EINVAL;
/* Clear mark offload flag if there are no more mark actions */
- if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0)
+ if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) {
hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F;
+ otx2_eth_set_rx_function(dev);
+ }
}
rc = flow_free_rss_action(dev, flow);
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 7f997ab74..1940cc636 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -938,9 +938,11 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
if (mark)
flow->npc_action |= (uint64_t)mark << 40;
- if (rte_atomic32_read(&npc->mark_actions) == 1)
+ if (rte_atomic32_read(&npc->mark_actions) == 1) {
hw->rx_offload_flags |=
NIX_RX_OFFLOAD_MARK_UPDATE_F;
+ otx2_eth_set_rx_function(dev);
+ }
set_pf_func:
/* Ideally AF must ensure that correct pf_func is set */
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 5291da241..0186c629a 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -118,6 +118,10 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
otx2_nix_form_default_desc(txq);
}
+
+ /* Setting up the function pointers as per new offload flags */
+ otx2_eth_set_rx_function(eth_dev);
+ otx2_eth_set_tx_function(eth_dev);
}
return rc;
}
@@ -147,6 +151,10 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
otx2_nix_form_default_desc(txq);
}
+
+ /* Setting up the function pointers as per new offload flags */
+ otx2_eth_set_rx_function(eth_dev);
+ otx2_eth_set_tx_function(eth_dev);
}
return rc;
}
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index dc0f4e032..189c45174 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -760,6 +760,7 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
DEV_RX_OFFLOAD_QINQ_STRIP)) {
dev->rx_offloads |= offloads;
dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
+ otx2_eth_set_rx_function(eth_dev);
}
done:
--
2.21.0
* [dpdk-dev] [PATCH v3 55/58] net/octeontx2: add device stop and close operations
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (53 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 54/58] net/octeontx2: add device start operation jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 56/58] net/octeontx2: add MTU set operation jerinj
` (3 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vamsi Attunuru
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Add device stop, close and reset operations.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
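As a usage sketch (application-side code, not part of the patch), the new callbacks are reached through the generic ethdev API:

	/* stop drains the Rx queues (see otx2_nix_dev_stop() below),
	 * close releases all HW resources via otx2_eth_dev_uninit(),
	 * and reset performs an uninit followed by a fresh init.
	 */
	rte_eth_dev_stop(port_id);
	rte_eth_dev_close(port_id);

	/* Or, to recover a port in place: */
	if (rte_eth_dev_reset(port_id) == 0)
		rc = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);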
---
drivers/net/octeontx2/otx2_ethdev.c | 75 +++++++++++++++++++++++++++++
1 file changed, 75 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 113d382c6..ddbb11167 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -184,6 +184,19 @@ cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en)
return otx2_mbox_process(mbox);
}
+static int
+nix_cgx_stop_link_event(struct otx2_eth_dev *dev)
+{
+ struct otx2_mbox *mbox = dev->mbox;
+
+ if (otx2_dev_is_vf(dev))
+ return 0;
+
+ otx2_mbox_alloc_msg_cgx_stop_linkevents(mbox);
+
+ return otx2_mbox_process(mbox);
+}
+
static inline void
nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
{
@@ -1208,6 +1221,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
if (dev->configured == 1) {
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
otx2_nix_vlan_fini(eth_dev);
+ otx2_flow_free_all_resources(dev);
oxt2_nix_unregister_queue_irqs(eth_dev);
nix_set_nop_rxtx_function(eth_dev);
rc = nix_store_queue_cfg_and_then_release(eth_dev);
@@ -1425,6 +1439,37 @@ otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
return rc;
}
+static void
+otx2_nix_dev_stop(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_mbuf *rx_pkts[32];
+ struct otx2_eth_rxq *rxq;
+ int count, i, j, rc;
+
+ nix_cgx_stop_link_event(dev);
+ npc_rx_disable(dev);
+
+ /* Stop rx queues and free up pkts pending */
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rc = otx2_nix_rx_queue_stop(eth_dev, i);
+ if (rc)
+ continue;
+
+ rxq = eth_dev->data->rx_queues[i];
+ count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
+ while (count) {
+ for (j = 0; j < count; j++)
+ rte_pktmbuf_free(rx_pkts[j]);
+ count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
+ }
+ }
+
+ /* Stop tx queues */
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+ otx2_nix_tx_queue_stop(eth_dev, i);
+}
+
static int
otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
{
@@ -1477,6 +1522,8 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
return rc;
}
+static int otx2_nix_dev_reset(struct rte_eth_dev *eth_dev);
+static void otx2_nix_dev_close(struct rte_eth_dev *eth_dev);
/* Initialize and register driver with DPDK Application */
static const struct eth_dev_ops otx2_eth_dev_ops = {
@@ -1488,11 +1535,14 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.rx_queue_setup = otx2_nix_rx_queue_setup,
.rx_queue_release = otx2_nix_rx_queue_release,
.dev_start = otx2_nix_dev_start,
+ .dev_stop = otx2_nix_dev_stop,
+ .dev_close = otx2_nix_dev_close,
.tx_queue_start = otx2_nix_tx_queue_start,
.tx_queue_stop = otx2_nix_tx_queue_stop,
.rx_queue_start = otx2_nix_rx_queue_start,
.rx_queue_stop = otx2_nix_rx_queue_stop,
.dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
+ .dev_reset = otx2_nix_dev_reset,
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
@@ -1744,9 +1794,14 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ /* Clear the flag since we are closing down */
+ dev->configured = 0;
+
/* Disable nix bpid config */
otx2_nix_rxchan_bpid_cfg(eth_dev, false);
+ npc_rx_disable(dev);
+
/* Disable vlan offloads */
otx2_nix_vlan_fini(eth_dev);
@@ -1757,6 +1812,8 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
if (otx2_ethdev_is_ptp_en(dev))
otx2_nix_timesync_disable(eth_dev);
+ nix_cgx_stop_link_event(dev);
+
/* Free up SQs */
for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
@@ -1812,6 +1869,24 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
return 0;
}
+static void
+otx2_nix_dev_close(struct rte_eth_dev *eth_dev)
+{
+ otx2_eth_dev_uninit(eth_dev, true);
+}
+
+static int
+otx2_nix_dev_reset(struct rte_eth_dev *eth_dev)
+{
+ int rc;
+
+ rc = otx2_eth_dev_uninit(eth_dev, false);
+ if (rc)
+ return rc;
+
+ return otx2_eth_dev_init(eth_dev);
+}
+
static int
nix_remove(struct rte_pci_device *pci_dev)
{
--
2.21.0
* [dpdk-dev] [PATCH v3 56/58] net/octeontx2: add MTU set operation
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (54 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 55/58] net/octeontx2: add device stop and close operations jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 57/58] net/octeontx2: add Rx interrupts support jerinj
` (2 subsequent siblings)
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru, Sunil Kumar Kori
From: Vamsi Attunuru <vattunuru@marvell.com>
Add MTU set operation and MTU update feature.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
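Usage sketch (application side; 9000 is an assumed example value, not part of the patch):

	/* otx2_nix_mtu_set() validates frame_size = mtu + NIX_L2_OVERHEAD
	 * against NIX_MIN_FRS/NIX_MAX_FRS and against the Rx buffer size,
	 * so a jumbo MTU needs large mbufs or DEV_RX_OFFLOAD_SCATTER.
	 */
	int rc = rte_eth_dev_set_mtu(port_id, 9000);

	if (rc == -EINVAL)
		printf("MTU out of range or Rx buffers too small without scatter\n");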
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vec.ini | 1 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 7 ++
drivers/net/octeontx2/otx2_ethdev.h | 4 +
drivers/net/octeontx2/otx2_ethdev_ops.c | 86 ++++++++++++++++++++++
6 files changed, 100 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 1856d9924..be10dc0c8 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -15,6 +15,7 @@ Runtime Tx queue setup = Y
Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
+MTU update = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 053fca288..df8180f83 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -15,6 +15,7 @@ Runtime Tx queue setup = Y
Fast mbuf free = Y
Free Tx mbuf on demand = Y
Queue start/stop = Y
+MTU update = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 31cc1beec..a7ad31182 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -30,6 +30,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Port hardware statistics
- Link state information
- Link flow control
+- MTU update
- Scatter-Gather IO support
- Vector Poll mode driver
- Debug utilities - Context dump and error interrupt support
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index ddbb11167..7d1fce55b 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1476,6 +1476,12 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, i;
+ if (eth_dev->data->nb_rx_queues != 0) {
+ rc = otx2_nix_recalc_mtu(eth_dev);
+ if (rc)
+ return rc;
+ }
+
/* Start rx queues */
for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
rc = otx2_nix_rx_queue_start(eth_dev, i);
@@ -1546,6 +1552,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.stats_get = otx2_nix_dev_stats_get,
.stats_reset = otx2_nix_dev_stats_reset,
.get_reg = otx2_nix_dev_get_reg,
+ .mtu_set = otx2_nix_mtu_set,
.mac_addr_add = otx2_nix_mac_addr_add,
.mac_addr_remove = otx2_nix_mac_addr_del,
.mac_addr_set = otx2_nix_mac_addr_set,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index f39fdfa1f..3703acc69 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -371,6 +371,10 @@ int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
+/* MTU */
+int otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu);
+int otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev);
+
/* Link */
void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 6a3048336..5a16a3c04 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -6,6 +6,92 @@
#include "otx2_ethdev.h"
+int
+otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
+{
+ uint32_t buffsz, frame_size = mtu + NIX_L2_OVERHEAD;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct otx2_mbox *mbox = dev->mbox;
+ struct nix_frs_cfg *req;
+ int rc;
+
+ /* Check if MTU is within the allowed range */
+ if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS)
+ return -EINVAL;
+
+ buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
+
+ /* Refuse MTU that requires the support of scattered packets
+ * when this feature has not been enabled before.
+ */
+ if (data->dev_started && frame_size > buffsz &&
+ !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+ return -EINVAL;
+
+ /* Check <seg size> * <max_seg> >= max_frame */
+ if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) &&
+ (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
+ return -EINVAL;
+
+ req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
+ req->update_smq = true;
+ /* FRS HW config should exclude FCS but include NPC VTAG insert size */
+ req->maxlen = frame_size - RTE_ETHER_CRC_LEN + NIX_MAX_VTAG_ACT_SIZE;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Now just update Rx MAXLEN */
+ req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
+ req->maxlen = frame_size - RTE_ETHER_CRC_LEN;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ if (frame_size > RTE_ETHER_MAX_LEN)
+ dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+ /* Update max_rx_pkt_len */
+ data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+ return rc;
+}
+
+int
+otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rte_pktmbuf_pool_private *mbp_priv;
+ struct otx2_eth_rxq *rxq;
+ uint32_t buffsz;
+ uint16_t mtu;
+ int rc;
+
+ /* Get rx buffer size */
+ rxq = data->rx_queues[0];
+ mbp_priv = rte_mempool_get_priv(rxq->pool);
+ buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
+
+ /* Setup scatter mode if needed by jumbo */
+ if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz)
+ dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+
+ /* Setup MTU based on max_rx_pkt_len */
+ mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
+
+ rc = otx2_nix_mtu_set(eth_dev, mtu);
+ if (rc)
+ otx2_err("Failed to set default MTU size %d", rc);
+
+ return rc;
+}
+
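A worked example of the recalculation above, using the 2176-byte mbufs from the testpmd run in the device start patch (RTE_MBUF_DEFAULT_BUF_SIZE = 2176, RTE_PKTMBUF_HEADROOM = 128): buffsz = 2176 - 128 = 2048, so any max_rx_pkt_len above 2048 bytes turns on DEV_RX_OFFLOAD_SCATTER, and the MTU is then re-derived as max_rx_pkt_len - NIX_L2_OVERHEAD.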
static void
nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en)
{
--
2.21.0
* [dpdk-dev] [PATCH v3 57/58] net/octeontx2: add Rx interrupts support
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (55 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 56/58] net/octeontx2: add MTU set operation jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 58/58] net/octeontx2: add link status set operations jerinj
2019-07-03 20:22 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver Jerin Jacob Kollanukkaran
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, John McNamara, Marko Kovacevic, Jerin Jacob,
Nithin Dabilpuram, Kiran Kumar K
Cc: Harman Kalra
From: Harman Kalra <hkalra@marvell.com>
This patch implements the Rx interrupts feature required for power
saving. These interrupts can be enabled/disabled on demand.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
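A usage sketch (application side, modelled on the l3fwd-power idiom; the helper name and the 10 ms timeout are assumptions, not part of the patch):

	static void
	rx_poll_with_sleep(uint16_t port_id, uint16_t queue_id)
	{
		struct rte_epoll_event event;
		struct rte_mbuf *pkts[32];
		uint16_t n;

		/* Register the queue's CINT vector with this thread's
		 * epoll fd (the VFIO vector offset is handled by the PMD).
		 */
		rte_eth_dev_rx_intr_ctl_q(port_id, queue_id,
					  RTE_EPOLL_PER_THREAD,
					  RTE_INTR_EVENT_ADD, NULL);

		for (;;) {
			n = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
			if (n) {
				while (n)	/* process, then free */
					rte_pktmbuf_free(pkts[--n]);
				continue;
			}
			/* Idle: arm the interrupt, sleep until traffic */
			rte_eth_dev_rx_intr_enable(port_id, queue_id);
			rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, 10);
			rte_eth_dev_rx_intr_disable(port_id, queue_id);
		}
	}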
---
doc/guides/nics/features/octeontx2.ini | 1 +
doc/guides/nics/features/octeontx2_vf.ini | 1 +
doc/guides/nics/octeontx2.rst | 1 +
drivers/net/octeontx2/otx2_ethdev.c | 31 ++++++
drivers/net/octeontx2/otx2_ethdev.h | 16 +++
drivers/net/octeontx2/otx2_ethdev_irq.c | 125 ++++++++++++++++++++++
6 files changed, 175 insertions(+)
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index be10dc0c8..66952328b 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Rx interrupt = Y
Lock-free Tx queue = Y
SR-IOV = Y
Multiprocess aware = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index bef451d01..16799309b 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -7,6 +7,7 @@
Speed capabilities = Y
Lock-free Tx queue = Y
Multiprocess aware = Y
+Rx interrupt = Y
Link status = Y
Link status event = Y
Runtime Rx queue setup = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index a7ad31182..a8ed3838f 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -36,6 +36,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
- Debug utilities - Context dump and error interrupt support
- IEEE1588 timestamping
- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
+- Rx interrupt support
Prerequisites
-------------
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 7d1fce55b..b5b5e63f7 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -277,6 +277,8 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
/* Many to one reduction */
aq->cq.qint_idx = qid % dev->qints;
+ /* Map CQ0 [RQ0] to CINT0 and so on till max 64 irqs */
+ aq->cq.cint_idx = qid;
if (otx2_ethdev_fixup_is_limit_cq_full(dev)) {
uint16_t min_rx_drop;
@@ -1223,6 +1225,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
otx2_nix_vlan_fini(eth_dev);
otx2_flow_free_all_resources(dev);
oxt2_nix_unregister_queue_irqs(eth_dev);
+ if (eth_dev->data->dev_conf.intr_conf.rxq)
+ oxt2_nix_unregister_cq_irqs(eth_dev);
nix_set_nop_rxtx_function(eth_dev);
rc = nix_store_queue_cfg_and_then_release(eth_dev);
if (rc)
@@ -1283,6 +1287,27 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
goto free_nix_lf;
}
+ /* Register cq IRQs */
+ if (eth_dev->data->dev_conf.intr_conf.rxq) {
+ if (eth_dev->data->nb_rx_queues > dev->cints) {
+ otx2_err("Rx interrupt cannot be enabled, rxq > %d",
+ dev->cints);
+ goto free_nix_lf;
+ }
+ /* Rx interrupt feature cannot work with vector mode because
+ * vector mode doesn't process packets unless a minimum of 4
+ * pkts are received, while CQ interrupts are generated even
+ * for 1 pkt in the CQ.
+ */
+ dev->scalar_ena = true;
+
+ rc = oxt2_nix_register_cq_irqs(eth_dev);
+ if (rc) {
+ otx2_err("Failed to register CQ interrupts rc=%d", rc);
+ goto free_nix_lf;
+ }
+ }
+
/* Configure loop back mode */
rc = cgx_intlbk_enable(dev, eth_dev->data->dev_conf.lpbk_mode);
if (rc) {
@@ -1595,6 +1620,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set,
.vlan_tpid_set = otx2_nix_vlan_tpid_set,
.vlan_pvid_set = otx2_nix_vlan_pvid_set,
+ .rx_queue_intr_enable = otx2_nix_rx_queue_intr_enable,
+ .rx_queue_intr_disable = otx2_nix_rx_queue_intr_disable,
};
static inline int
@@ -1843,6 +1870,10 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
/* Unregister queue irqs */
oxt2_nix_unregister_queue_irqs(eth_dev);
+ /* Unregister cq irqs */
+ if (eth_dev->data->dev_conf.intr_conf.rxq)
+ oxt2_nix_unregister_cq_irqs(eth_dev);
+
rc = nix_lf_free(dev);
if (rc)
otx2_err("Failed to free nix lf, rc=%d", rc);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 3703acc69..f6905db83 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -102,6 +102,13 @@
#define OP_ERR BIT_ULL(CQ_OP_STAT_OP_ERR)
#define CQ_ERR BIT_ULL(CQ_OP_STAT_CQ_ERR)
+#define CQ_CQE_THRESH_DEFAULT 0x1ULL /* IRQ triggered when
+ * NIX_LF_CINTX_CNT[QCOUNT]
+ * crosses this value
+ */
+#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e. (0xA * 100nsec) */
+#define CQ_TIMER_THRESH_MAX 255
+
#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
ETH_RSS_TCP | ETH_RSS_SCTP | \
ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD)
@@ -248,6 +255,7 @@ struct otx2_eth_dev {
uint16_t qints;
uint8_t configured;
uint8_t configured_qints;
+ uint8_t configured_cints;
uint8_t configured_nb_rx_qs;
uint8_t configured_nb_tx_qs;
uint16_t nix_msixoff;
@@ -262,6 +270,7 @@ struct otx2_eth_dev {
uint64_t rx_offload_capa;
uint64_t tx_offload_capa;
struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
+ struct otx2_qint cints_mem[RTE_MAX_QUEUES_PER_PORT];
uint16_t txschq[NIX_TXSCH_LVL_CNT];
uint16_t txschq_contig[NIX_TXSCH_LVL_CNT];
uint16_t txschq_index[NIX_TXSCH_LVL_CNT];
@@ -384,8 +393,15 @@ void otx2_eth_dev_link_status_update(struct otx2_dev *dev,
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
+int oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev);
void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
+void oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev);
+
+int otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
+ uint16_t rx_queue_id);
+int otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
+ uint16_t rx_queue_id);
/* Debug */
int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data);
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index 066aca7a5..9006e5c8b 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -5,6 +5,7 @@
#include <inttypes.h>
#include <rte_bus_pci.h>
+#include <rte_malloc.h>
#include "otx2_ethdev.h"
@@ -171,6 +172,18 @@ nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off)
(int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff));
}
+static void
+nix_lf_cq_irq(void *param)
+{
+ struct otx2_qint *cint = (struct otx2_qint *)param;
+ struct rte_eth_dev *eth_dev = cint->eth_dev;
+ struct otx2_eth_dev *dev;
+
+ dev = otx2_eth_pmd_priv(eth_dev);
+ /* Clear interrupt */
+ otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_INT(cint->qintx));
+}
+
static void
nix_lf_q_irq(void *param)
{
@@ -315,6 +328,92 @@ oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
}
}
+int
+oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int rc = 0;
+ uint8_t vec, q;
+
+ dev->configured_cints = RTE_MIN(dev->cints,
+ eth_dev->data->nb_rx_queues);
+
+ for (q = 0; q < dev->configured_cints; q++) {
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q;
+
+ /* Clear CINT CNT */
+ otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q));
+
+ /* Clear interrupt */
+ otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q));
+
+ dev->cints_mem[q].eth_dev = eth_dev;
+ dev->cints_mem[q].qintx = q;
+
+ /* Sync cints_mem update */
+ rte_smp_wmb();
+
+ /* Register queue irq vector */
+ rc = otx2_register_irq(handle, nix_lf_cq_irq,
+ &dev->cints_mem[q], vec);
+ if (rc) {
+ otx2_err("Fail to register CQ irq, rc=%d", rc);
+ return rc;
+ }
+
+ if (!handle->intr_vec) {
+ handle->intr_vec = rte_zmalloc("intr_vec",
+ dev->configured_cints *
+ sizeof(int), 0);
+ if (!handle->intr_vec) {
+ otx2_err("Failed to allocate %d rx intr_vec",
+ dev->configured_cints);
+ return -ENOMEM;
+ }
+ }
+ /* VFIO vector zero is reserved for misc interrupt so
+ * doing required adjustment. (b13bfab4cd)
+ */
+ handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec;
+
+ /* Configure CQE interrupt coalescing parameters */
+ otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
+ (CQ_CQE_THRESH_DEFAULT << 32) |
+ (CQ_TIMER_THRESH_DEFAULT << 48)),
+ dev->base + NIX_LF_CINTX_WAIT((q)));
+
+ /* Keeping the CQ interrupt disabled as the rx interrupt
+ * feature needs to be enabled/disabled on demand.
+ */
+ }
+
+ return rc;
+}
+
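For clarity, with the defaults from otx2_ethdev.h the coalescing value written to NIX_LF_CINTX_WAIT above works out to

	0x1 | (0x1 << 32) | (0xAULL << 48) = 0x000a000100000001

i.e. a CQE-count threshold of 1 in both threshold fields and a ~1 usec timer (0xA * 100 ns); the exact bitfield names are per the HW spec and not restated here.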
+void
+oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+ int vec, q;
+
+ for (q = 0; q < dev->configured_cints; q++) {
+ vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q;
+
+ /* Clear CINT CNT */
+ otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q));
+
+ /* Clear interrupt */
+ otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q));
+
+ /* Unregister queue irq vector */
+ otx2_unregister_irq(handle, nix_lf_cq_irq,
+ &dev->cints_mem[q], vec);
+ }
+}
+
int
otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
{
@@ -341,3 +440,29 @@ otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev)
nix_lf_unregister_err_irq(eth_dev);
nix_lf_unregister_ras_irq(eth_dev);
}
+
+int
+otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
+ uint16_t rx_queue_id)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* Enable CINT interrupt */
+ otx2_write64(BIT_ULL(0), dev->base +
+ NIX_LF_CINTX_ENA_W1S(rx_queue_id));
+
+ return 0;
+}
+
+int
+otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
+ uint16_t rx_queue_id)
+{
+ struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+ /* Clear and disable CINT interrupt */
+ otx2_write64(BIT_ULL(0), dev->base +
+ NIX_LF_CINTX_ENA_W1C(rx_queue_id));
+
+ return 0;
+}
--
2.21.0
* [dpdk-dev] [PATCH v3 58/58] net/octeontx2: add link status set operations
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (56 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 57/58] net/octeontx2: add Rx interrupts support jerinj
@ 2019-07-03 8:42 ` jerinj
2019-07-03 20:22 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver Jerin Jacob Kollanukkaran
58 siblings, 0 replies; 196+ messages in thread
From: jerinj @ 2019-07-03 8:42 UTC (permalink / raw)
To: dev, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Vamsi Attunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Add support for setting the link up and down.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
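Usage sketch (application side, not part of the patch): the ops map to the generic link-control API, and VFs get -ENOTSUP:

	/* Toggling the link also stops/starts the Tx queues in this PMD */
	if (rte_eth_dev_set_link_down(port_id) == 0) {
		/* ... maintenance window ... */
		rc = rte_eth_dev_set_link_up(port_id);
	}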
---
drivers/net/octeontx2/otx2_ethdev.c | 2 ++
drivers/net/octeontx2/otx2_ethdev.h | 2 ++
drivers/net/octeontx2/otx2_link.c | 49 +++++++++++++++++++++++++++++
3 files changed, 53 insertions(+)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index b5b5e63f7..156e7d34f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1572,6 +1572,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
.tx_queue_stop = otx2_nix_tx_queue_stop,
.rx_queue_start = otx2_nix_rx_queue_start,
.rx_queue_stop = otx2_nix_rx_queue_stop,
+ .dev_set_link_up = otx2_nix_dev_set_link_up,
+ .dev_set_link_down = otx2_nix_dev_set_link_down,
.dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
.dev_reset = otx2_nix_dev_reset,
.stats_get = otx2_nix_dev_stats_get,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index f6905db83..863d4877f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -389,6 +389,8 @@ void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
void otx2_eth_dev_link_status_update(struct otx2_dev *dev,
struct cgx_link_user_info *link);
+int otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev);
+int otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev);
/* IRQ */
int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 228a0cd8e..8fcbdc9b7 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -106,3 +106,52 @@ otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
	return rte_eth_linkstatus_set(eth_dev, &link);
}
+
+static int
+nix_dev_set_link_state(struct rte_eth_dev *eth_dev, uint8_t enable)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct cgx_set_link_state_msg *req;
+
+	req = otx2_mbox_alloc_msg_cgx_set_link_state(mbox);
+	req->enable = enable;
+	return otx2_mbox_process(mbox);
+}
+
+int
+otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	int rc, i;
+
+	if (otx2_dev_is_vf(dev))
+		return -ENOTSUP;
+
+	rc = nix_dev_set_link_state(eth_dev, 1);
+	if (rc)
+		goto done;
+
+	/* Start tx queues */
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+		otx2_nix_tx_queue_start(eth_dev, i);
+
+done:
+	return rc;
+}
+
+int
+otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	int i;
+
+	if (otx2_dev_is_vf(dev))
+		return -ENOTSUP;
+
+	/* Stop tx queues */
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+		otx2_nix_tx_queue_stop(eth_dev, i);
+
+	return nix_dev_set_link_state(eth_dev, 0);
+}
--
2.21.0
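For reference, applications reach these hooks through rte_eth_dev_set_link_up()
and rte_eth_dev_set_link_down(). A minimal sketch follows; the port id and the
helper name are made-up placeholders, not part of this patch.

#include <rte_ethdev.h>

/* Illustrative only: force a port's link down and back up.
 * On a VF the PMD above returns -ENOTSUP, so check the return codes. */
static int
bounce_link(uint16_t port_id)
{
	int rc;

	rc = rte_eth_dev_set_link_down(port_id);
	if (rc)
		return rc;
	return rte_eth_dev_set_link_up(port_id);
}

Note the ordering in the patch: the PMD stops the Tx queues before asking the
CGX firmware to drop the link, and restarts them only after the link-up
request succeeds, so no descriptors are queued against a link that is
administratively down.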
* Re: [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver
2019-07-03 8:41 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver jerinj
` (57 preceding siblings ...)
2019-07-03 8:42 ` [dpdk-dev] [PATCH v3 58/58] net/octeontx2: add link status set operations jerinj
@ 2019-07-03 20:22 ` Jerin Jacob Kollanukkaran
2019-07-04 18:11 ` Ferruh Yigit
58 siblings, 1 reply; 196+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-07-03 20:22 UTC (permalink / raw)
To: Jerin Jacob Kollanukkaran, dev; +Cc: Ferruh Yigit
> -----Original Message-----
> From: jerinj@marvell.com <jerinj@marvell.com>
> Sent: Wednesday, July 3, 2019 2:12 PM
> To: dev@dpdk.org
> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
> Subject: [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver
>
> From: Jerin Jacob <jerinj@marvell.com>
>
> This patchset adds support for OCTEON TX2 ethdev driver.
Series applied to dpdk-next-net-mrvl/master. Thanks.
* Re: [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver
2019-07-03 20:22 ` [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver Jerin Jacob Kollanukkaran
@ 2019-07-04 18:11 ` Ferruh Yigit
0 siblings, 0 replies; 196+ messages in thread
From: Ferruh Yigit @ 2019-07-04 18:11 UTC (permalink / raw)
To: Jerin Jacob Kollanukkaran, dev
On 7/3/2019 9:22 PM, Jerin Jacob Kollanukkaran wrote:
>> -----Original Message-----
>> From: jerinj@marvell.com <jerinj@marvell.com>
>> Sent: Wednesday, July 3, 2019 2:12 PM
>> To: dev@dpdk.org
>> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
>> Subject: [dpdk-dev] [PATCH v3 00/58] OCTEON TX2 Ethdev driver
>>
>> From: Jerin Jacob <jerinj@marvell.com>
>>
>> This patchset adds support for OCTEON TX2 ethdev driver.
>
> Series applied to dpdk-next-net-mrvl/master. Thanks.
>
Jerin's ack added for the patches that were not authored by the maintainers.
Added the following patch in next-net because of icc warnings:
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 244b7445d..d08d3d854 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -14,13 +14,15 @@ CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2
CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
CFLAGS += -O3
+ifneq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
CFLAGS += -flax-vector-conversions
+endif
ifneq ($(CONFIG_RTE_ARCH_64),y)
CFLAGS += -Wno-int-to-pointer-cast
CFLAGS += -Wno-pointer-to-int-cast
ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
-CFLAGS += -diag-disable 2259 -flax-vector-conversions
+CFLAGS += -diag-disable 2259
endif