DPDK patches and discussions
* [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure
@ 2021-01-26 21:30 Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 02/11] net/octeontx_ep: add ethdev probe and remove Nalla Pradeep
                   ` (10 more replies)
  0 siblings, 11 replies; 12+ messages in thread
From: Nalla Pradeep @ 2021-01-26 21:30 UTC (permalink / raw)
  To: Thomas Monjalon, Ray Kinsella, Neil Horman
  Cc: jerinj, sburla, dev, Nalla Pradeep

Add a bare-minimum PMD library and documentation build infrastructure,
and claim maintainership of the OCTEON TX endpoint PMD.

Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
---
 MAINTAINERS                              |  9 +++++++++
 doc/guides/nics/features/octeontx_ep.ini |  8 ++++++++
 doc/guides/nics/index.rst                |  1 +
 doc/guides/nics/octeontx_ep.rst          | 23 +++++++++++++++++++++++
 drivers/net/meson.build                  |  1 +
 drivers/net/octeontx_ep/meson.build      |  8 ++++++++
 drivers/net/octeontx_ep/otx_ep_ethdev.c  |  3 +++
 drivers/net/octeontx_ep/version.map      |  3 +++
 8 files changed, 56 insertions(+)
 create mode 100644 doc/guides/nics/features/octeontx_ep.ini
 create mode 100644 doc/guides/nics/octeontx_ep.rst
 create mode 100644 drivers/net/octeontx_ep/meson.build
 create mode 100644 drivers/net/octeontx_ep/otx_ep_ethdev.c
 create mode 100644 drivers/net/octeontx_ep/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index aa973a396..6876fc490 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -761,6 +761,15 @@ T: git://dpdk.org/next/dpdk-next-crypto
 F: drivers/common/octeontx2/otx2_sec*
 F: drivers/net/octeontx2/otx2_ethdev_sec*
 
+Marvell OCTEON TX EP - endpoint
+M: Nalla Pradeep <pnalla@marvell.com>
+M: Radha Mohan Chintakuntla <radhac@marvell.com>
+M: Veerasenareddy Burru <vburru@marvell.com>
+T: git://dpdk.org/next/dpdk-next-net-mrvl
+F: drivers/net/octeontx_ep/
+F: doc/guides/nics/features/octeontx_ep.ini
+F: doc/guides/nics/octeontx_ep.rst
+
 Mellanox mlx4
 M: Matan Azrad <matan@nvidia.com>
 M: Shahaf Shuler <shahafs@nvidia.com>
diff --git a/doc/guides/nics/features/octeontx_ep.ini b/doc/guides/nics/features/octeontx_ep.ini
new file mode 100644
index 000000000..95d658522
--- /dev/null
+++ b/doc/guides/nics/features/octeontx_ep.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'octeontx_ep' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO           = Y
+Usage doc            = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 344361775..799697caf 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -50,6 +50,7 @@ Network Interface Controller Drivers
     null
     octeontx
     octeontx2
+    octeontx_ep
     pfe
     qede
     sfc_efx
diff --git a/doc/guides/nics/octeontx_ep.rst b/doc/guides/nics/octeontx_ep.rst
new file mode 100644
index 000000000..bb539a440
--- /dev/null
+++ b/doc/guides/nics/octeontx_ep.rst
@@ -0,0 +1,23 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(C) 2021 Marvell.
+
+OCTEON TX EP Poll Mode driver
+==============================
+
+The OCTEON TX EP ETHDEV PMD (**librte_pmd_octeontx_ep**) provides poll mode
+ethdev driver support for the virtual functions (VF) of **Marvell OCTEON TX2**
+and **Cavium OCTEON TX** families of adapters in SR-IOV context.
+
+More information can be found at `Marvell Official Website
+<https://www.marvell.com/embedded-processors/infrastructure-processors>`_.
+
+Features
+--------
+
+Features of the OCTEON TX EP Ethdev PMD are:
+
+
+Prerequisites
+-------------
+
+See :doc:`../platform/octeontx2` and :doc:`../platform/octeontx` for setup information.
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 4cbca9641..fb9ff05a1 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -39,6 +39,7 @@ drivers = ['af_packet',
 	'null',
 	'octeontx',
 	'octeontx2',
+	'octeontx_ep',
 	'pcap',
 	'pfe',
 	'qede',
diff --git a/drivers/net/octeontx_ep/meson.build b/drivers/net/octeontx_ep/meson.build
new file mode 100644
index 000000000..2ef2222d2
--- /dev/null
+++ b/drivers/net/octeontx_ep/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2021 Marvell.
+#
+
+sources = files(
+               'otx_ep_ethdev.c',
+               )
+
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
new file mode 100644
index 000000000..603023b0d
--- /dev/null
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -0,0 +1,3 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
diff --git a/drivers/net/octeontx_ep/version.map b/drivers/net/octeontx_ep/version.map
new file mode 100644
index 000000000..6e4fb220a
--- /dev/null
+++ b/drivers/net/octeontx_ep/version.map
@@ -0,0 +1,3 @@
+DPDK_21 {
+        local: *;
+};
-- 
2.17.1



* [dpdk-dev] [PATCH v3 02/11] net/octeontx_ep: add ethdev probe and remove
  2021-01-26 21:30 [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Nalla Pradeep
@ 2021-01-26 21:30 ` Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 03/11] net/octeontx_ep: add device init and uninit Nalla Pradeep
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Nalla Pradeep @ 2021-01-26 21:30 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Radha Mohan Chintakuntla,
	Veerasenareddy Burru
  Cc: sburla, dev, Nalla Pradeep

Add basic PCIe ethdev probe and remove functions.
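
A minimal application-side sketch (standard EAL/ethdev API, not part of
this patch): once the PMD is built in and a VF is bound to vfio-pci or
igb_uio, rte_eal_init() alone reaches the probe path registered below.
The port iteration is purely illustrative.

#include <stdio.h>

#include <rte_eal.h>
#include <rte_ethdev.h>

int
main(int argc, char **argv)
{
	uint16_t port_id;

	/* rte_eal_init() scans the PCI bus and calls the probe callback
	 * registered by RTE_PMD_REGISTER_PCI(net_otx_ep, ...).
	 */
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	RTE_ETH_FOREACH_DEV(port_id) {
		char name[RTE_ETH_NAME_MAX_LEN];

		if (rte_eth_dev_get_name_by_port(port_id, name) == 0)
			printf("port %u: %s\n", (unsigned int)port_id, name);
	}

	return rte_eal_cleanup();
}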

Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
---
 drivers/common/octeontx2/otx2_common.h    |  5 +-
 drivers/net/octeontx_ep/meson.build       |  2 +
 drivers/net/octeontx_ep/otx_ep_common.h   | 14 +++++
 drivers/net/octeontx_ep/otx_ep_ethdev.c   | 62 +++++++++++++++++++++++
 drivers/net/octeontx_ep/otx_ep_vf.h       |  9 ++++
 drivers/raw/octeontx2_ep/otx2_ep_rawdev.c |  6 +--
 6 files changed, 94 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/octeontx_ep/otx_ep_common.h
 create mode 100644 drivers/net/octeontx_ep/otx_ep_vf.h

diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h
index b6779f710..cd52e098e 100644
--- a/drivers/common/octeontx2/otx2_common.h
+++ b/drivers/common/octeontx2/otx2_common.h
@@ -136,7 +136,10 @@ extern int otx2_logtype_ree;
 #define PCI_DEVID_OCTEONTX2_RVU_CPT_VF		0xA0FE
 #define PCI_DEVID_OCTEONTX2_RVU_AF_VF		0xA0f8
 #define PCI_DEVID_OCTEONTX2_DPI_VF		0xA081
-#define PCI_DEVID_OCTEONTX2_EP_VF		0xB203 /* OCTEON TX2 EP mode */
+#define PCI_DEVID_OCTEONTX2_EP_NET_VF		0xB203 /* OCTEON TX2 EP mode */
+/* OCTEON TX2 98xx EP mode */
+#define PCI_DEVID_CN98XX_EP_NET_VF		0xB103
+#define PCI_DEVID_OCTEONTX2_EP_RAW_VF		0xB204 /* OCTEON TX2 EP mode */
 #define PCI_DEVID_OCTEONTX2_RVU_SDP_PF		0xA0f6
 #define PCI_DEVID_OCTEONTX2_RVU_SDP_VF		0xA0f7
 #define PCI_DEVID_OCTEONTX2_RVU_REE_PF		0xA0f4
diff --git a/drivers/net/octeontx_ep/meson.build b/drivers/net/octeontx_ep/meson.build
index 2ef2222d2..73e04b0be 100644
--- a/drivers/net/octeontx_ep/meson.build
+++ b/drivers/net/octeontx_ep/meson.build
@@ -2,7 +2,9 @@
 # Copyright(C) 2021 Marvell.
 #
 
+deps += ['common_octeontx2']
 sources = files(
                'otx_ep_ethdev.c',
                )
 
+includes += include_directories('../../common/octeontx2')
diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
new file mode 100644
index 000000000..35ea99a79
--- /dev/null
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _OTX_EP_COMMON_H_
+#define _OTX_EP_COMMON_H_
+
+/* OTX_EP EP VF device data structure */
+struct otx_ep_device {
+	/* PCI device pointer */
+	struct rte_pci_device *pdev;
+
+	struct rte_eth_dev *eth_dev;
+};
+#endif  /* _OTX_EP_COMMON_H_ */
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 603023b0d..461474be1 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -1,3 +1,65 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(C) 2021 Marvell.
  */
+
+#include <rte_ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_io.h>
+
+#include "otx2_common.h"
+#include "otx_ep_common.h"
+#include "otx_ep_vf.h"
+
+static int
+otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	RTE_SET_USED(eth_dev);
+
+	return -ENODEV;
+}
+
+static int
+otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+	RTE_SET_USED(eth_dev);
+
+	return -ENODEV;
+}
+
+static int
+otx_ep_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+		      struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+					     sizeof(struct otx_ep_device),
+					     otx_ep_eth_dev_init);
+}
+
+static int
+otx_ep_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev,
+					      otx_ep_eth_dev_uninit);
+}
+
+
+/* Set of PCI devices this driver supports */
+static const struct rte_pci_id pci_id_otx_ep_map[] = {
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX_EP_VF) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_EP_NET_VF) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN98XX_EP_NET_VF) },
+	{ .vendor_id = 0, /* sentinel */ }
+};
+
+
+
+static struct rte_pci_driver rte_otx_ep_pmd = {
+	.id_table	= pci_id_otx_ep_map,
+	.drv_flags      = RTE_PCI_DRV_NEED_MAPPING,
+	.probe		= otx_ep_eth_dev_pci_probe,
+	.remove		= otx_ep_eth_dev_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_otx_ep, rte_otx_ep_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_otx_ep, pci_id_otx_ep_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_otx_ep, "* igb_uio | vfio-pci");
diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h
new file mode 100644
index 000000000..e88b40971
--- /dev/null
+++ b/drivers/net/octeontx_ep/otx_ep_vf.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _OTX_EP_VF_H_
+#define _OTX_EP_VF_H_
+
+#define PCI_DEVID_OCTEONTX_EP_VF 0xa303
+
+#endif /*_OTX_EP_VF_H_ */
diff --git a/drivers/raw/octeontx2_ep/otx2_ep_rawdev.c b/drivers/raw/octeontx2_ep/otx2_ep_rawdev.c
index 2b78a7941..b2ccdda83 100644
--- a/drivers/raw/octeontx2_ep/otx2_ep_rawdev.c
+++ b/drivers/raw/octeontx2_ep/otx2_ep_rawdev.c
@@ -22,7 +22,7 @@
 static const struct rte_pci_id pci_sdp_vf_map[] = {
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
-			       PCI_DEVID_OCTEONTX2_EP_VF)
+			       PCI_DEVID_OCTEONTX2_EP_RAW_VF)
 	},
 	{
 		.vendor_id = 0,
@@ -109,8 +109,8 @@ sdp_chip_specific_setup(struct sdp_device *sdpvf)
 	int ret;
 
 	switch (dev_id) {
-	case PCI_DEVID_OCTEONTX2_EP_VF:
-		sdpvf->chip_id = PCI_DEVID_OCTEONTX2_EP_VF;
+	case PCI_DEVID_OCTEONTX2_EP_RAW_VF:
+		sdpvf->chip_id = PCI_DEVID_OCTEONTX2_EP_RAW_VF;
 		ret = sdp_vf_setup_device(sdpvf);
 
 		break;
-- 
2.17.1



* [dpdk-dev] [PATCH v3 03/11] net/octeontx_ep: add device init and uninit
  2021-01-26 21:30 [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 02/11] net/octeontx_ep: add ethdev probe and remove Nalla Pradeep
@ 2021-01-26 21:30 ` Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 04/11] net/octeontx_ep: Added basic device setup Nalla Pradeep
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Nalla Pradeep @ 2021-01-26 21:30 UTC (permalink / raw)
  To: Anatoly Burakov; +Cc: jerinj, sburla, dev, Nalla Pradeep

Add basic init and uninit functions, which include initializing
the fields of the ethdev private structure.
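
An illustrative note (standard ethdev layout, not part of this patch):
rte_eth_dev_pci_generic_probe() from the previous patch allocates
sizeof(struct otx_ep_device) bytes of per-port private data, so the
init/uninit paths reach the structure through dev_private; the helper
name below is hypothetical.

static inline struct otx_ep_device *
example_otx_ep_priv(struct rte_eth_dev *eth_dev)
{
	/* same access the OTX_EP_DEV() macro in this patch provides */
	return (struct otx_ep_device *)eth_dev->data->dev_private;
}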

Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
---
 drivers/net/octeontx_ep/otx_ep_common.h | 22 ++++++-
 drivers/net/octeontx_ep/otx_ep_ethdev.c | 88 +++++++++++++++++++++++--
 2 files changed, 104 insertions(+), 6 deletions(-)

diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index 35ea99a79..f84ab88db 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -4,11 +4,31 @@
 #ifndef _OTX_EP_COMMON_H_
 #define _OTX_EP_COMMON_H_
 
+#define otx_ep_printf(level, fmt, args...)			\
+	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD,		\
+		 fmt, ##args)
+
+#define otx_ep_info(fmt, args...)				\
+	RTE_LOG(INFO, PMD, fmt "\n", ## args)
+
+#define otx_ep_err(fmt, args...)				\
+	RTE_LOG(ERR, PMD, "%s():%u " fmt "\n",			\
+		__func__, __LINE__, ## args)
+
+#define otx_ep_dbg(fmt, args...)				\
+	rte_log(RTE_LOG_DEBUG, otx_net_ep_logtype,		\
+		"%s():%u " fmt "\n",				\
+		__func__, __LINE__, ##args)
+
 /* OTX_EP EP VF device data structure */
 struct otx_ep_device {
 	/* PCI device pointer */
 	struct rte_pci_device *pdev;
-
+	uint16_t chip_id;
 	struct rte_eth_dev *eth_dev;
+	int port_id;
+	/* Memory mapped h/w address */
+	uint8_t *hw_addr;
+	int port_configured;
 };
 #endif  /* _OTX_EP_COMMON_H_ */
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 461474be1..adb3ec2ee 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -10,20 +10,99 @@
 #include "otx_ep_common.h"
 #include "otx_ep_vf.h"
 
+#define OTX_EP_DEV(_eth_dev)            ((_eth_dev)->data->dev_private)
+static int
+otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
+{
+	struct rte_pci_device *pdev = otx_epvf->pdev;
+	uint32_t dev_id = pdev->id.device_id;
+	int ret = 0;
+
+	switch (dev_id) {
+	case PCI_DEVID_OCTEONTX_EP_VF:
+		otx_epvf->chip_id = dev_id;
+		break;
+	case PCI_DEVID_OCTEONTX2_EP_NET_VF:
+	case PCI_DEVID_CN98XX_EP_NET_VF:
+		otx_epvf->chip_id = dev_id;
+		break;
+	default:
+		otx_ep_err("Unsupported device\n");
+		ret = -EINVAL;
+	}
+
+	if (!ret)
+		otx_ep_info("OTX_EP dev_id[%d]\n", dev_id);
+
+	return ret;
+}
+
+/* OTX_EP VF device initialization */
+static int
+otx_epdev_init(struct otx_ep_device *otx_epvf)
+{
+	if (otx_ep_chip_specific_setup(otx_epvf)) {
+		otx_ep_err("Chip specific setup failed\n");
+		goto setup_fail;
+	}
+
+	return 0;
+
+setup_fail:
+	return -ENOMEM;
+}
+
 static int
 otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev)
 {
-	RTE_SET_USED(eth_dev);
+	struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+	otx_epvf->port_configured = 0;
+
+	if (eth_dev->data->mac_addrs != NULL)
+		rte_free(eth_dev->data->mac_addrs);
 
-	return -ENODEV;
+	return 0;
 }
 
 static int
 otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
 {
-	RTE_SET_USED(eth_dev);
+	struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev);
+	unsigned char vf_mac_addr[RTE_ETHER_ADDR_LEN];
 
-	return -ENODEV;
+	/* Single process support */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	rte_eth_copy_pci_info(eth_dev, pdev);
+
+	if (pdev->mem_resource[0].addr) {
+		otx_ep_info("OTX_EP BAR0 is mapped:\n");
+	} else {
+		otx_ep_err("OTX_EP: Failed to map device BARs\n");
+		otx_ep_err("BAR0 %p\n", pdev->mem_resource[0].addr);
+		return -ENODEV;
+	}
+	otx_epvf->eth_dev = eth_dev;
+	otx_epvf->port_id = eth_dev->data->port_id;
+	eth_dev->data->mac_addrs = rte_zmalloc("otx_ep", RTE_ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		otx_ep_err("MAC addresses memory allocation failed\n");
+		return -ENOMEM;
+	}
+	rte_eth_random_addr(vf_mac_addr);
+	memcpy(eth_dev->data->mac_addrs, vf_mac_addr, RTE_ETHER_ADDR_LEN);
+	otx_epvf->hw_addr = pdev->mem_resource[0].addr;
+	otx_epvf->pdev = pdev;
+
+	otx_epdev_init(otx_epvf);
+	otx_epvf->port_configured = 0;
+
+	return 0;
 }
 
 static int
@@ -42,7 +121,6 @@ otx_ep_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
 					      otx_ep_eth_dev_uninit);
 }
 
-
 /* Set of PCI devices this driver supports */
 static const struct rte_pci_id pci_id_otx_ep_map[] = {
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX_EP_VF) },
-- 
2.17.1



* [dpdk-dev] [PATCH v3 04/11] net/octeontx_ep: Added basic device setup.
  2021-01-26 21:30 [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 02/11] net/octeontx_ep: add ethdev probe and remove Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 03/11] net/octeontx_ep: add device init and uninit Nalla Pradeep
@ 2021-01-26 21:30 ` Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 05/11] net/octeontx_ep: Add dev info get and configure Nalla Pradeep
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Nalla Pradeep @ 2021-01-26 21:30 UTC (permalink / raw)
  Cc: jerinj, sburla, dev, Nalla Pradeep

Functions to set up the device and its basic IQ and OQ registers are added.
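
A short sketch of the addressing scheme these functions rely on (constants
taken from otx_ep_vf.h in this patch; the EXAMPLE_ names are illustrative):
each ring's CSR block sits a fixed stride apart, so a per-queue register
address is simply base + ring * stride.

#define EXAMPLE_RING_OFFSET	(0x1ull << 17)	/* OTX_EP_RING_OFFSET */
#define EXAMPLE_R_IN_CONTROL(ring) \
	(0x10000 + (ring) * EXAMPLE_RING_OFFSET)	/* OTX_EP_R_IN_CONTROL */

/* ring 0 -> 0x10000, ring 1 -> 0x30000, ring 2 -> 0x50000, ... */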

Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
---
 drivers/net/octeontx_ep/meson.build     |   2 +
 drivers/net/octeontx_ep/otx2_ep_vf.c    | 133 +++++++++++++++++++++
 drivers/net/octeontx_ep/otx2_ep_vf.h    |  11 ++
 drivers/net/octeontx_ep/otx_ep_common.h |  97 ++++++++++++++-
 drivers/net/octeontx_ep/otx_ep_ethdev.c |  11 ++
 drivers/net/octeontx_ep/otx_ep_vf.c     | 150 ++++++++++++++++++++++++
 drivers/net/octeontx_ep/otx_ep_vf.h     |  33 ++++++
 7 files changed, 434 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/octeontx_ep/otx2_ep_vf.c
 create mode 100644 drivers/net/octeontx_ep/otx2_ep_vf.h
 create mode 100644 drivers/net/octeontx_ep/otx_ep_vf.c

diff --git a/drivers/net/octeontx_ep/meson.build b/drivers/net/octeontx_ep/meson.build
index 73e04b0be..8cd2e76d1 100644
--- a/drivers/net/octeontx_ep/meson.build
+++ b/drivers/net/octeontx_ep/meson.build
@@ -5,6 +5,8 @@
 deps += ['common_octeontx2']
 sources = files(
                'otx_ep_ethdev.c',
+               'otx_ep_vf.c',
+               'otx2_ep_vf.c',
                )
 
 includes += include_directories('../../common/octeontx2')
diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.c b/drivers/net/octeontx_ep/otx2_ep_vf.c
new file mode 100644
index 000000000..e793c04fb
--- /dev/null
+++ b/drivers/net/octeontx_ep/otx2_ep_vf.c
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "otx2_common.h"
+#include "otx_ep_common.h"
+#include "otx2_ep_vf.h"
+
+static void
+otx2_vf_setup_global_iq_reg(struct otx_ep_device *otx_ep, int q_no)
+{
+	volatile uint64_t reg_val = 0ull;
+
+	/* Select ES, RO, NS, RDSIZE,DPTR Format#0 for IQs
+	 * IS_64B is by default enabled.
+	 */
+	reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_CONTROL(q_no));
+
+	reg_val |= SDP_VF_R_IN_CTL_RDSIZE;
+	reg_val |= SDP_VF_R_IN_CTL_IS_64B;
+	reg_val |= SDP_VF_R_IN_CTL_ESR;
+
+	otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_IN_CONTROL(q_no));
+}
+
+static void
+otx2_vf_setup_global_oq_reg(struct otx_ep_device *otx_ep, int q_no)
+{
+	volatile uint64_t reg_val = 0ull;
+
+	reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_CONTROL(q_no));
+
+	reg_val &= ~(SDP_VF_R_OUT_CTL_IMODE);
+	reg_val &= ~(SDP_VF_R_OUT_CTL_ROR_P);
+	reg_val &= ~(SDP_VF_R_OUT_CTL_NSR_P);
+	reg_val &= ~(SDP_VF_R_OUT_CTL_ROR_I);
+	reg_val &= ~(SDP_VF_R_OUT_CTL_NSR_I);
+	reg_val &= ~(SDP_VF_R_OUT_CTL_ES_I);
+	reg_val &= ~(SDP_VF_R_OUT_CTL_ROR_D);
+	reg_val &= ~(SDP_VF_R_OUT_CTL_NSR_D);
+	reg_val &= ~(SDP_VF_R_OUT_CTL_ES_D);
+
+	/* INFO/DATA ptr swap is required  */
+	reg_val |= (SDP_VF_R_OUT_CTL_ES_P);
+
+	otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_OUT_CONTROL(q_no));
+}
+
+static void
+otx2_vf_setup_global_input_regs(struct otx_ep_device *otx_ep)
+{
+	uint64_t q_no = 0ull;
+
+	for (q_no = 0; q_no < (otx_ep->sriov_info.rings_per_vf); q_no++)
+		otx2_vf_setup_global_iq_reg(otx_ep, q_no);
+}
+
+static void
+otx2_vf_setup_global_output_regs(struct otx_ep_device *otx_ep)
+{
+	uint32_t q_no;
+
+	for (q_no = 0; q_no < (otx_ep->sriov_info.rings_per_vf); q_no++)
+		otx2_vf_setup_global_oq_reg(otx_ep, q_no);
+}
+
+static int
+otx2_vf_setup_device_regs(struct otx_ep_device *otx_ep)
+{
+	otx2_vf_setup_global_input_regs(otx_ep);
+	otx2_vf_setup_global_output_regs(otx_ep);
+
+	return 0;
+}
+
+static const struct otx_ep_config default_otx2_ep_conf = {
+	/* IQ attributes */
+	.iq                        = {
+		.max_iqs           = OTX_EP_CFG_IO_QUEUES,
+		.instr_type        = OTX_EP_64BYTE_INSTR,
+		.pending_list_size = (OTX_EP_MAX_IQ_DESCRIPTORS *
+				      OTX_EP_CFG_IO_QUEUES),
+	},
+
+	/* OQ attributes */
+	.oq                        = {
+		.max_oqs           = OTX_EP_CFG_IO_QUEUES,
+		.info_ptr          = OTX_EP_OQ_INFOPTR_MODE,
+		.refill_threshold  = OTX_EP_OQ_REFIL_THRESHOLD,
+	},
+
+	.num_iqdef_descs           = OTX_EP_MAX_IQ_DESCRIPTORS,
+	.num_oqdef_descs           = OTX_EP_MAX_OQ_DESCRIPTORS,
+	.oqdef_buf_size            = OTX_EP_OQ_BUF_SIZE,
+};
+
+static const struct otx_ep_config*
+otx2_ep_get_defconf(struct otx_ep_device *otx_ep_dev __rte_unused)
+{
+	const struct otx_ep_config *default_conf = NULL;
+
+	default_conf = &default_otx2_ep_conf;
+
+	return default_conf;
+}
+
+int
+otx2_ep_vf_setup_device(struct otx_ep_device *otx_ep)
+{
+	uint64_t reg_val = 0ull;
+
+	/* If application doesn't provide its conf, use driver default conf */
+	if (otx_ep->conf == NULL) {
+		otx_ep->conf = otx2_ep_get_defconf(otx_ep);
+		if (otx_ep->conf == NULL) {
+			otx2_err("SDP VF default config not found");
+			return -ENOMEM;
+		}
+		otx2_info("Default config is used");
+	}
+
+	/* Get IOQs (RPVF) count */
+	reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_CONTROL(0));
+
+	otx_ep->sriov_info.rings_per_vf = ((reg_val >> SDP_VF_R_IN_CTL_RPVF_POS)
+					  & SDP_VF_R_IN_CTL_RPVF_MASK);
+
+	otx2_info("SDP RPVF: %d", otx_ep->sriov_info.rings_per_vf);
+
+	otx_ep->fn_list.setup_device_regs   = otx2_vf_setup_device_regs;
+
+	return 0;
+}
diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.h b/drivers/net/octeontx_ep/otx2_ep_vf.h
new file mode 100644
index 000000000..191fee426
--- /dev/null
+++ b/drivers/net/octeontx_ep/otx2_ep_vf.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _OTX2_EP_VF_H_
+#define _OTX2_EP_VF_H_
+
+int
+otx2_ep_vf_setup_device(struct otx_ep_device *sdpvf);
+
+#endif /*_OTX2_EP_VF_H_ */
+
diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index f84ab88db..74f9e10b1 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -4,9 +4,15 @@
 #ifndef _OTX_EP_COMMON_H_
 #define _OTX_EP_COMMON_H_
 
-#define otx_ep_printf(level, fmt, args...)			\
-	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD,		\
-		 fmt, ##args)
+#define OTX_EP_MAX_RINGS_PER_VF        (8)
+#define OTX_EP_CFG_IO_QUEUES        OTX_EP_MAX_RINGS_PER_VF
+#define OTX_EP_64BYTE_INSTR         (64)
+#define OTX_EP_MAX_IQ_DESCRIPTORS   (8192)
+#define OTX_EP_MAX_OQ_DESCRIPTORS   (8192)
+#define OTX_EP_OQ_BUF_SIZE          (2048)
+
+#define OTX_EP_OQ_INFOPTR_MODE      (0)
+#define OTX_EP_OQ_REFIL_THRESHOLD   (16)
 
 #define otx_ep_info(fmt, args...)				\
 	RTE_LOG(INFO, PMD, fmt "\n", ## args)
@@ -20,15 +26,100 @@
 		"%s():%u " fmt "\n",				\
 		__func__, __LINE__, ##args)
 
+#define otx_ep_write64(value, base_addr, reg_off) \
+	{\
+	typeof(value) val = (value); \
+	typeof(reg_off) off = (reg_off); \
+	otx_ep_dbg("octeon_write_csr64: reg: 0x%08lx val: 0x%016llx\n", \
+		   (unsigned long)off, (unsigned long long)val); \
+	rte_write64(val, ((base_addr) + off)); \
+	}
+
+struct otx_ep_device;
+
+/* Structure to define the configuration attributes for each Input queue. */
+struct otx_ep_iq_config {
+	/* Max number of IQs available */
+	uint16_t max_iqs;
+
+	/* Command size - 32 or 64 bytes */
+	uint16_t instr_type;
+
+	/* Pending list size, usually set to the sum of the size of all IQs */
+	uint32_t pending_list_size;
+};
+
+/* Structure to define the configuration attributes for each Output queue. */
+struct otx_ep_oq_config {
+	/* Max number of OQs available */
+	uint16_t max_oqs;
+
+	/* If set, the Output queue uses info-pointer mode. (Default: 1 ) */
+	uint16_t info_ptr;
+
+	/** The number of buffers that were consumed during packet processing by
+	 *  the driver on this Output queue before the driver attempts to
+	 *  replenish the descriptor ring with new buffers.
+	 */
+	uint32_t refill_threshold;
+};
+
+/* Structure to define the configuration. */
+struct otx_ep_config {
+	/* Input Queue attributes. */
+	struct otx_ep_iq_config iq;
+
+	/* Output Queue attributes. */
+	struct otx_ep_oq_config oq;
+
+	/* Num of desc for IQ rings */
+	uint32_t num_iqdef_descs;
+
+	/* Num of desc for OQ rings */
+	uint32_t num_oqdef_descs;
+
+	/* OQ buffer size */
+	uint32_t oqdef_buf_size;
+};
+
+/* SRIOV information */
+struct otx_ep_sriov_info {
+	/* Number of rings assigned to VF */
+	uint32_t rings_per_vf;
+
+	/* Number of VF devices enabled */
+	uint32_t num_vfs;
+};
+
+/* Required functions for each VF device */
+struct otx_ep_fn_list {
+	int (*setup_device_regs)(struct otx_ep_device *otx_ep);
+};
+
 /* OTX_EP EP VF device data structure */
 struct otx_ep_device {
 	/* PCI device pointer */
 	struct rte_pci_device *pdev;
+
 	uint16_t chip_id;
+
 	struct rte_eth_dev *eth_dev;
+
 	int port_id;
+
 	/* Memory mapped h/w address */
 	uint8_t *hw_addr;
+
+	struct otx_ep_fn_list fn_list;
+
+	/* SR-IOV info */
+	struct otx_ep_sriov_info sriov_info;
+
+	/* Device configuration */
+	const struct otx_ep_config *conf;
+
 	int port_configured;
 };
+
+extern int otx_net_ep_logtype;
 #endif  /* _OTX_EP_COMMON_H_ */
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index adb3ec2ee..c90ef13c0 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -9,6 +9,7 @@
 #include "otx2_common.h"
 #include "otx_ep_common.h"
 #include "otx_ep_vf.h"
+#include "otx2_ep_vf.h"
 
 #define OTX_EP_DEV(_eth_dev)            ((_eth_dev)->data->dev_private)
 static int
@@ -21,10 +22,12 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
 	switch (dev_id) {
 	case PCI_DEVID_OCTEONTX_EP_VF:
 		otx_epvf->chip_id = dev_id;
+		ret = otx_ep_vf_setup_device(otx_epvf);
 		break;
 	case PCI_DEVID_OCTEONTX2_EP_NET_VF:
 	case PCI_DEVID_CN98XX_EP_NET_VF:
 		otx_epvf->chip_id = dev_id;
+		ret = otx2_ep_vf_setup_device(otx_epvf);
 		break;
 	default:
 		otx_ep_err("Unsupported device\n");
@@ -46,6 +49,13 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
 		goto setup_fail;
 	}
 
+	if (otx_epvf->fn_list.setup_device_regs(otx_epvf)) {
+		otx_ep_err("Failed to configure device registers\n");
+		goto setup_fail;
+	}
+
+	otx_ep_info("OTX_EP Device is Ready\n");
+
 	return 0;
 
 setup_fail:
@@ -141,3 +151,4 @@ static struct rte_pci_driver rte_otx_ep_pmd = {
 RTE_PMD_REGISTER_PCI(net_otx_ep, rte_otx_ep_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_otx_ep, pci_id_otx_ep_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_otx_ep, "* igb_uio | vfio-pci");
+RTE_LOG_REGISTER(otx_net_ep_logtype, pmd.net.octeontx_ep, NOTICE);
diff --git a/drivers/net/octeontx_ep/otx_ep_vf.c b/drivers/net/octeontx_ep/otx_ep_vf.c
new file mode 100644
index 000000000..0bf8e5bed
--- /dev/null
+++ b/drivers/net/octeontx_ep/otx_ep_vf.c
@@ -0,0 +1,150 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_ethdev_driver.h>
+#include <rte_ethdev_pci.h>
+#include <rte_io.h>
+
+#include "otx_ep_common.h"
+#include "otx_ep_vf.h"
+
+
+static void
+otx_ep_setup_global_iq_reg(struct otx_ep_device *otx_ep, int q_no)
+{
+	volatile uint64_t reg_val = 0ull;
+
+	/* Select ES, RO, NS, RDSIZE,DPTR Format#0 for IQs
+	 * IS_64B is by default enabled.
+	 */
+	reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_CONTROL(q_no));
+
+	reg_val |= OTX_EP_R_IN_CTL_RDSIZE;
+	reg_val |= OTX_EP_R_IN_CTL_IS_64B;
+	reg_val |= OTX_EP_R_IN_CTL_ESR;
+
+	otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_IN_CONTROL(q_no));
+	reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_CONTROL(q_no));
+
+	if (!(reg_val & OTX_EP_R_IN_CTL_IDLE)) {
+		do {
+			reg_val = rte_read64(otx_ep->hw_addr +
+					      OTX_EP_R_IN_CONTROL(q_no));
+		} while (!(reg_val & OTX_EP_R_IN_CTL_IDLE));
+	}
+}
+
+static void
+otx_ep_setup_global_oq_reg(struct otx_ep_device *otx_ep, int q_no)
+{
+	volatile uint64_t reg_val = 0ull;
+
+	reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_CONTROL(q_no));
+
+	reg_val &= ~(OTX_EP_R_OUT_CTL_IMODE);
+	reg_val &= ~(OTX_EP_R_OUT_CTL_ROR_P);
+	reg_val &= ~(OTX_EP_R_OUT_CTL_NSR_P);
+	reg_val &= ~(OTX_EP_R_OUT_CTL_ROR_I);
+	reg_val &= ~(OTX_EP_R_OUT_CTL_NSR_I);
+	reg_val &= ~(OTX_EP_R_OUT_CTL_ES_I);
+	reg_val &= ~(OTX_EP_R_OUT_CTL_ROR_D);
+	reg_val &= ~(OTX_EP_R_OUT_CTL_NSR_D);
+	reg_val &= ~(OTX_EP_R_OUT_CTL_ES_D);
+
+	/* INFO/DATA ptr swap is required  */
+	reg_val |= (OTX_EP_R_OUT_CTL_ES_P);
+
+	otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_OUT_CONTROL(q_no));
+}
+
+static void
+otx_ep_setup_global_input_regs(struct otx_ep_device *otx_ep)
+{
+	uint64_t q_no = 0ull;
+
+	for (q_no = 0; q_no < (otx_ep->sriov_info.rings_per_vf); q_no++)
+		otx_ep_setup_global_iq_reg(otx_ep, q_no);
+}
+
+static void
+otx_ep_setup_global_output_regs(struct otx_ep_device *otx_ep)
+{
+	uint32_t q_no;
+
+	for (q_no = 0; q_no < (otx_ep->sriov_info.rings_per_vf); q_no++)
+		otx_ep_setup_global_oq_reg(otx_ep, q_no);
+}
+
+static int
+otx_ep_setup_device_regs(struct otx_ep_device *otx_ep)
+{
+	otx_ep_setup_global_input_regs(otx_ep);
+	otx_ep_setup_global_output_regs(otx_ep);
+
+	return 0;
+}
+
+/* OTX_EP default configuration */
+static const struct otx_ep_config default_otx_ep_conf = {
+	/* IQ attributes */
+	.iq                        = {
+		.max_iqs           = OTX_EP_CFG_IO_QUEUES,
+		.instr_type        = OTX_EP_64BYTE_INSTR,
+		.pending_list_size = (OTX_EP_MAX_IQ_DESCRIPTORS *
+				      OTX_EP_CFG_IO_QUEUES),
+	},
+
+	/* OQ attributes */
+	.oq                        = {
+		.max_oqs           = OTX_EP_CFG_IO_QUEUES,
+		.info_ptr          = OTX_EP_OQ_INFOPTR_MODE,
+		.refill_threshold  = OTX_EP_OQ_REFIL_THRESHOLD,
+	},
+
+	.num_iqdef_descs           = OTX_EP_MAX_IQ_DESCRIPTORS,
+	.num_oqdef_descs           = OTX_EP_MAX_OQ_DESCRIPTORS,
+	.oqdef_buf_size            = OTX_EP_OQ_BUF_SIZE,
+
+};
+
+
+static const struct otx_ep_config*
+otx_ep_get_defconf(struct otx_ep_device *otx_ep_dev __rte_unused)
+{
+	const struct otx_ep_config *default_conf = NULL;
+
+	default_conf = &default_otx_ep_conf;
+
+	return default_conf;
+}
+
+int
+otx_ep_vf_setup_device(struct otx_ep_device *otx_ep)
+{
+	uint64_t reg_val = 0ull;
+
+	/* If application doesn't provide its conf, use driver default conf */
+	if (otx_ep->conf == NULL) {
+		otx_ep->conf = otx_ep_get_defconf(otx_ep);
+		if (otx_ep->conf == NULL) {
+			otx_ep_err("OTX_EP VF default config not found\n");
+			return -ENOMEM;
+		}
+		otx_ep_info("Default config is used\n");
+	}
+
+	/* Get IOQs (RPVF) count */
+	reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_CONTROL(0));
+
+	otx_ep->sriov_info.rings_per_vf = ((reg_val >> OTX_EP_R_IN_CTL_RPVF_POS)
+					  & OTX_EP_R_IN_CTL_RPVF_MASK);
+
+	otx_ep_info("OTX_EP RPVF: %d\n", otx_ep->sriov_info.rings_per_vf);
+
+	otx_ep->fn_list.setup_device_regs   = otx_ep_setup_device_regs;
+
+	return 0;
+}
diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h
index e88b40971..c5741a3f1 100644
--- a/drivers/net/octeontx_ep/otx_ep_vf.h
+++ b/drivers/net/octeontx_ep/otx_ep_vf.h
@@ -4,6 +4,39 @@
 #ifndef _OTX_EP_VF_H_
 #define _OTX_EP_VF_H_
 
+#define OTX_EP_RING_OFFSET                (0x1ull << 17)
+
+/* OTX_EP VF IQ Registers */
+#define OTX_EP_R_IN_CONTROL_START         (0x10000)
+#define OTX_EP_R_IN_CONTROL(ring)  \
+	(OTX_EP_R_IN_CONTROL_START + ((ring) * OTX_EP_RING_OFFSET))
+
+/* OTX_EP VF IQ Masks */
+#define OTX_EP_R_IN_CTL_RPVF_MASK       (0xF)
+#define	OTX_EP_R_IN_CTL_RPVF_POS        (48)
+
+#define OTX_EP_R_IN_CTL_IDLE            (0x1ull << 28)
+#define OTX_EP_R_IN_CTL_RDSIZE          (0x3ull << 25) /* Setting to max(4) */
+#define OTX_EP_R_IN_CTL_IS_64B          (0x1ull << 24)
+#define OTX_EP_R_IN_CTL_ESR             (0x1ull << 1)
+/* OTX_EP VF OQ Registers */
+#define OTX_EP_R_OUT_CONTROL_START           (0x10150)
+#define OTX_EP_R_OUT_CONTROL(ring)    \
+	(OTX_EP_R_OUT_CONTROL_START + ((ring) * OTX_EP_RING_OFFSET))
+/* OTX_EP VF OQ Masks */
+#define OTX_EP_R_OUT_CTL_ES_I         (1ull << 34)
+#define OTX_EP_R_OUT_CTL_NSR_I        (1ull << 33)
+#define OTX_EP_R_OUT_CTL_ROR_I        (1ull << 32)
+#define OTX_EP_R_OUT_CTL_ES_D         (1ull << 30)
+#define OTX_EP_R_OUT_CTL_NSR_D        (1ull << 29)
+#define OTX_EP_R_OUT_CTL_ROR_D        (1ull << 28)
+#define OTX_EP_R_OUT_CTL_ES_P         (1ull << 26)
+#define OTX_EP_R_OUT_CTL_NSR_P        (1ull << 25)
+#define OTX_EP_R_OUT_CTL_ROR_P        (1ull << 24)
+#define OTX_EP_R_OUT_CTL_IMODE        (1ull << 23)
+
 #define PCI_DEVID_OCTEONTX_EP_VF 0xa303
 
+int
+otx_ep_vf_setup_device(struct otx_ep_device *otx_ep);
 #endif /*_OTX_EP_VF_H_ */
-- 
2.17.1



* [dpdk-dev] [PATCH v3 05/11] net/octeontx_ep: Add dev info get and configure
  2021-01-26 21:30 [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Nalla Pradeep
                   ` (2 preceding siblings ...)
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 04/11] net/octeontx_ep: Added basic device setup Nalla Pradeep
@ 2021-01-26 21:30 ` Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 06/11] net/octeontx_ep: Added rxq setup and release Nalla Pradeep
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Nalla Pradeep @ 2021-01-26 21:30 UTC (permalink / raw)
  Cc: jerinj, sburla, dev, Nalla Pradeep

Add device information get and device configure operations.
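
An application-side sketch (standard ethdev API, not part of this patch)
of the call sequence that lands in otx_ep_dev_info_get() and
otx_ep_dev_configure(); the queue counts are illustrative and must not
exceed the rings_per_vf value reported through max_rx/tx_queues.

#include <string.h>

#include <rte_ethdev.h>

static int
example_configure(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	memset(&conf, 0, sizeof(conf));
	/* 1 Rx and 1 Tx queue, well within dev_info.max_*_queues */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}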

Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
---
 drivers/net/octeontx_ep/otx_ep_common.h | 15 +++++
 drivers/net/octeontx_ep/otx_ep_ethdev.c | 89 ++++++++++++++++++++++++-
 drivers/net/octeontx_ep/otx_ep_rxtx.h   | 10 +++
 3 files changed, 111 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/octeontx_ep/otx_ep_rxtx.h

diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index 74f9e10b1..7f3c913f3 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -7,9 +7,12 @@
 #define OTX_EP_MAX_RINGS_PER_VF        (8)
 #define OTX_EP_CFG_IO_QUEUES        OTX_EP_MAX_RINGS_PER_VF
 #define OTX_EP_64BYTE_INSTR         (64)
+#define OTX_EP_MIN_IQ_DESCRIPTORS   (128)
+#define OTX_EP_MIN_OQ_DESCRIPTORS   (128)
 #define OTX_EP_MAX_IQ_DESCRIPTORS   (8192)
 #define OTX_EP_MAX_OQ_DESCRIPTORS   (8192)
 #define OTX_EP_OQ_BUF_SIZE          (2048)
+#define OTX_EP_MIN_RX_BUF_SIZE      (64)
 
 #define OTX_EP_OQ_INFOPTR_MODE      (0)
 #define OTX_EP_OQ_REFIL_THRESHOLD   (16)
@@ -112,6 +115,10 @@ struct otx_ep_device {
 
 	struct otx_ep_fn_list fn_list;
 
+	uint32_t max_tx_queues;
+
+	uint32_t max_rx_queues;
+
 	/* SR-IOV info */
 	struct otx_ep_sriov_info sriov_info;
 
@@ -119,7 +126,15 @@ struct otx_ep_device {
 	const struct otx_ep_config *conf;
 
 	int port_configured;
+
+	uint64_t rx_offloads;
+
+	uint64_t tx_offloads;
 };
 
+#define OTX_EP_MAX_PKT_SZ 64000U
+
+#define OTX_EP_MAX_MAC_ADDRS 1
+
 extern int otx_net_ep_logtype;
 #endif  /* _OTX_EP_COMMON_H_ */
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index c90ef13c0..4b6800fae 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -10,8 +10,57 @@
 #include "otx_ep_common.h"
 #include "otx_ep_vf.h"
 #include "otx2_ep_vf.h"
+#include "otx_ep_rxtx.h"
+
+#define OTX_EP_DEV(_eth_dev) \
+	((struct otx_ep_device *)(_eth_dev)->data->dev_private)
+
+static const struct rte_eth_desc_lim otx_ep_rx_desc_lim = {
+	.nb_max		= OTX_EP_MAX_OQ_DESCRIPTORS,
+	.nb_min		= OTX_EP_MIN_OQ_DESCRIPTORS,
+	.nb_align	= OTX_EP_RXD_ALIGN,
+};
+
+static const struct rte_eth_desc_lim otx_ep_tx_desc_lim = {
+	.nb_max		= OTX_EP_MAX_IQ_DESCRIPTORS,
+	.nb_min		= OTX_EP_MIN_IQ_DESCRIPTORS,
+	.nb_align	= OTX_EP_TXD_ALIGN,
+};
+
+static int
+otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
+		    struct rte_eth_dev_info *devinfo)
+{
+	struct otx_ep_device *otx_epvf;
+	struct rte_pci_device *pdev;
+	uint32_t dev_id;
+
+	otx_epvf = OTX_EP_DEV(eth_dev);
+	pdev = otx_epvf->pdev;
+	dev_id = pdev->id.device_id;
+
+	devinfo->speed_capa = ETH_LINK_SPEED_10G;
+	devinfo->max_rx_queues = otx_epvf->max_rx_queues;
+	devinfo->max_tx_queues = otx_epvf->max_tx_queues;
+
+	devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
+	if (dev_id == PCI_DEVID_OCTEONTX_EP_VF ||
+	    dev_id == PCI_DEVID_OCTEONTX2_EP_NET_VF ||
+	    dev_id == PCI_DEVID_CN98XX_EP_NET_VF) {
+		devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
+		devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
+		devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+		devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	}
+
+	devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
+
+	devinfo->rx_desc_lim = otx_ep_rx_desc_lim;
+	devinfo->tx_desc_lim = otx_ep_tx_desc_lim;
+
+	return 0;
+}
 
-#define OTX_EP_DEV(_eth_dev)            ((_eth_dev)->data->dev_private)
 static int
 otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
 {
@@ -62,6 +111,41 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
 	return -ENOMEM;
 }
 
+static int
+otx_ep_dev_configure(struct rte_eth_dev *eth_dev)
+{
+	struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev);
+	struct rte_eth_dev_data *data = eth_dev->data;
+	struct rte_eth_rxmode *rxmode;
+	struct rte_eth_txmode *txmode;
+	struct rte_eth_conf *conf;
+	uint32_t ethdev_queues;
+
+	conf = &data->dev_conf;
+	rxmode = &conf->rxmode;
+	txmode = &conf->txmode;
+	ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf);
+	if (eth_dev->data->nb_rx_queues > ethdev_queues ||
+	    eth_dev->data->nb_tx_queues > ethdev_queues) {
+		otx_ep_err("invalid num queues\n");
+		return -ENOMEM;
+	}
+	otx_ep_info("OTX_EP Device is configured with num_txq %d num_rxq %d\n",
+		    eth_dev->data->nb_tx_queues, eth_dev->data->nb_rx_queues);
+
+	otx_epvf->port_configured = 1;
+	otx_epvf->rx_offloads = rxmode->offloads;
+	otx_epvf->tx_offloads = txmode->offloads;
+
+	return 0;
+}
+
+/* Define our ethernet definitions */
+static const struct eth_dev_ops otx_ep_eth_dev_ops = {
+	.dev_configure		= otx_ep_dev_configure,
+	.dev_infos_get		= otx_ep_dev_info_get,
+};
+
 static int
 otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev)
 {
@@ -99,6 +183,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
 	}
 	otx_epvf->eth_dev = eth_dev;
 	otx_epvf->port_id = eth_dev->data->port_id;
+	eth_dev->dev_ops = &otx_ep_eth_dev_ops;
 	eth_dev->data->mac_addrs = rte_zmalloc("otx_ep", RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
 		otx_ep_err("MAC addresses memory allocation failed\n");
@@ -139,8 +224,6 @@ static const struct rte_pci_id pci_id_otx_ep_map[] = {
 	{ .vendor_id = 0, /* sentinel */ }
 };
 
-
-
 static struct rte_pci_driver rte_otx_ep_pmd = {
 	.id_table	= pci_id_otx_ep_map,
 	.drv_flags      = RTE_PCI_DRV_NEED_MAPPING,
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.h b/drivers/net/octeontx_ep/otx_ep_rxtx.h
new file mode 100644
index 000000000..9779e96b6
--- /dev/null
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef _OTX_EP_RXTX_H_
+#define _OTX_EP_RXTX_H_
+
+#define OTX_EP_RXD_ALIGN 1
+#define OTX_EP_TXD_ALIGN 1
+#endif
-- 
2.17.1



* [dpdk-dev] [PATCH v3 06/11] net/octeontx_ep: Added rxq setup and release
  2021-01-26 21:30 [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Nalla Pradeep
                   ` (3 preceding siblings ...)
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 05/11] net/octeontx_ep: Add dev info get and configure Nalla Pradeep
@ 2021-01-26 21:30 ` Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 07/11] net/octeontx_ep: Added tx queue " Nalla Pradeep
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Nalla Pradeep @ 2021-01-26 21:30 UTC (permalink / raw)
  Cc: jerinj, sburla, dev, Nalla Pradeep

Receive queue setup involves allocating memory for the queue,
initializing the data structure that represents the queue, and filling
the queue with receive buffers up to the Rx descriptor count. Receive
queues are referred to as DROQs; hardware fills the receive buffers in
the queue with incoming packets.

In receive queue release, the receive buffers are freed along with the
receive queue.
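
An application-side sketch (standard ethdev/mbuf API, not part of this
patch) of how a DROQ is reached through the rx_queue_setup callback added
here: the descriptor count must be a power of two and at least
8 * SDP_GBL_WMARK (2048); the pool sizing is illustrative.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
example_rxq_setup(uint16_t port_id, uint16_t q_no, unsigned int socket_id)
{
	struct rte_mempool *mp;

	mp = rte_pktmbuf_pool_create("otx_ep_rx_pool", 8192, 256, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
	if (mp == NULL)
		return -1;

	/* 4096 descriptors: a power of two and >= 8 * 0x100 */
	return rte_eth_rx_queue_setup(port_id, q_no, 4096, socket_id,
				      NULL, mp);
}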

Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
---
 drivers/net/octeontx_ep/meson.build     |   1 +
 drivers/net/octeontx_ep/otx_ep_common.h | 160 ++++++++++++++++-
 drivers/net/octeontx_ep/otx_ep_ethdev.c | 132 ++++++++++++++
 drivers/net/octeontx_ep/otx_ep_rxtx.c   | 222 ++++++++++++++++++++++++
 drivers/net/octeontx_ep/otx_ep_vf.h     |   6 +
 5 files changed, 516 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/octeontx_ep/otx_ep_rxtx.c

diff --git a/drivers/net/octeontx_ep/meson.build b/drivers/net/octeontx_ep/meson.build
index 8cd2e76d1..a8436f35f 100644
--- a/drivers/net/octeontx_ep/meson.build
+++ b/drivers/net/octeontx_ep/meson.build
@@ -7,6 +7,7 @@ sources = files(
                'otx_ep_ethdev.c',
                'otx_ep_vf.c',
                'otx2_ep_vf.c',
+               'otx_ep_rxtx.c',
                )
 
 includes += include_directories('../../common/octeontx2')
diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index 7f3c913f3..6be5c5a76 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -16,6 +16,10 @@
 
 #define OTX_EP_OQ_INFOPTR_MODE      (0)
 #define OTX_EP_OQ_REFIL_THRESHOLD   (16)
+#define OTX_EP_PCI_RING_ALIGN   65536
+#define SDP_PKIND 40
+#define SDP_OTX2_PKIND 57
+#define OTX_EP_MAX_IOQS_PER_VF 8
 
 #define otx_ep_info(fmt, args...)				\
 	RTE_LOG(INFO, PMD, fmt "\n", ## args)
@@ -52,6 +56,65 @@ struct otx_ep_iq_config {
 	uint32_t pending_list_size;
 };
 
+/** Descriptor format.
+ *  The descriptor ring is made of descriptors which have 2 64-bit values:
+ *  -# Physical (bus) address of the data buffer.
+ *  -# Physical (bus) address of a otx_ep_droq_info structure.
+ *  The device DMA's incoming packets and its information at the address
+ *  given by these descriptor fields.
+ */
+struct otx_ep_droq_desc {
+	/* The buffer pointer */
+	uint64_t buffer_ptr;
+
+	/* The Info pointer */
+	uint64_t info_ptr;
+};
+#define OTX_EP_DROQ_DESC_SIZE	(sizeof(struct otx_ep_droq_desc))
+
+/* Receive Header */
+union otx_ep_rh {
+	uint64_t rh64;
+};
+#define OTX_EP_RH_SIZE (sizeof(union otx_ep_rh))
+
+/** Information about packet DMA'ed by OCTEON TX2.
+ *  The format of the information available at Info Pointer after OCTEON TX2
+ *  has posted a packet. Not all descriptors have valid information. Only
+ *  the Info field of the first descriptor for a packet has information
+ *  about the packet.
+ */
+struct otx_ep_droq_info {
+	/* The Length of the packet. */
+	uint64_t length;
+
+	/* The Output Receive Header. */
+	union otx_ep_rh rh;
+};
+#define OTX_EP_DROQ_INFO_SIZE	(sizeof(struct otx_ep_droq_info))
+
+
+/* DROQ statistics. Each output queue has four stats fields. */
+struct otx_ep_droq_stats {
+	/* Number of packets received in this queue. */
+	uint64_t pkts_received;
+
+	/* Bytes received by this queue. */
+	uint64_t bytes_received;
+
+	/* Num of failures of rte_pktmbuf_alloc() */
+	uint64_t rx_alloc_failure;
+
+	/* Rx error */
+	uint64_t rx_err;
+
+	/* packets with data got ready after interrupt arrived */
+	uint64_t pkts_delayed_data;
+
+	/* packets dropped due to zero length */
+	uint64_t dropped_zlp;
+};
+
 /* Structure to define the configuration attributes for each Output queue. */
 struct otx_ep_oq_config {
 	/* Max number of OQs available */
@@ -67,6 +130,74 @@ struct otx_ep_oq_config {
 	uint32_t refill_threshold;
 };
 
+/* The Descriptor Ring Output Queue(DROQ) structure. */
+struct otx_ep_droq {
+	struct otx_ep_device *otx_ep_dev;
+	/* The 8B aligned descriptor ring starts at this address. */
+	struct otx_ep_droq_desc *desc_ring;
+
+	uint32_t q_no;
+	uint64_t last_pkt_count;
+
+	struct rte_mempool *mpool;
+
+	/* Driver should read the next packet at this index */
+	uint32_t read_idx;
+
+	/* OCTEON TX2 will write the next packet at this index */
+	uint32_t write_idx;
+
+	/* At this index, the driver will refill the descriptor's buffer */
+	uint32_t refill_idx;
+
+	/* Packets pending to be processed */
+	uint64_t pkts_pending;
+
+	/* Number of descriptors in this ring. */
+	uint32_t nb_desc;
+
+	/* The number of descriptors pending to refill. */
+	uint32_t refill_count;
+
+	uint32_t refill_threshold;
+
+	/* The 8B aligned info ptrs begin from this address. */
+	struct otx_ep_droq_info *info_list;
+
+	/* receive buffer list contains mbuf ptr list */
+	struct rte_mbuf **recv_buf_list;
+
+	/* The size of each buffer pointed by the buffer pointer. */
+	uint32_t buffer_size;
+
+	/* Statistics for this DROQ. */
+	struct otx_ep_droq_stats stats;
+
+	/* DMA mapped address of the DROQ descriptor ring. */
+	size_t desc_ring_dma;
+
+	/* Info_ptr list is allocated at this virtual address. */
+	size_t info_base_addr;
+
+	/* DMA mapped address of the info list */
+	size_t info_list_dma;
+
+	/* Allocated size of info list. */
+	uint32_t info_alloc_size;
+
+	/* Memory zone **/
+	const struct rte_memzone *desc_ring_mz;
+	const struct rte_memzone *info_mz;
+};
+#define OTX_EP_DROQ_SIZE		(sizeof(struct otx_ep_droq))
+
+/* IQ/OQ mask */
+struct otx_ep_io_enable {
+	uint64_t iq;
+	uint64_t oq;
+	uint64_t iq64B;
+};
+
 /* Structure to define the configuration. */
 struct otx_ep_config {
 	/* Input Queue attributes. */
@@ -85,6 +216,15 @@ struct otx_ep_config {
 	uint32_t oqdef_buf_size;
 };
 
+/* Required functions for each VF device */
+struct otx_ep_fn_list {
+	void (*setup_oq_regs)(struct otx_ep_device *otx_ep, uint32_t q_no);
+
+	int (*setup_device_regs)(struct otx_ep_device *otx_ep);
+
+	void (*disable_io_queues)(struct otx_ep_device *otx_ep);
+};
+
 /* SRIOV information */
 struct otx_ep_sriov_info {
 	/* Number of rings assigned to VF */
@@ -94,11 +234,6 @@ struct otx_ep_sriov_info {
 	uint32_t num_vfs;
 };
 
-/* Required functions for each VF device */
-struct otx_ep_fn_list {
-	int (*setup_device_regs)(struct otx_ep_device *otx_ep);
-};
-
 /* OTX_EP EP VF device data structure */
 struct otx_ep_device {
 	/* PCI device pointer */
@@ -106,6 +241,8 @@ struct otx_ep_device {
 
 	uint16_t chip_id;
 
+	uint32_t pkind;
+
 	struct rte_eth_dev *eth_dev;
 
 	int port_id;
@@ -119,6 +256,15 @@ struct otx_ep_device {
 
 	uint32_t max_rx_queues;
 
+	/* Num OQs */
+	uint32_t nb_rx_queues;
+
+	/* The DROQ output queues  */
+	struct otx_ep_droq *droq[OTX_EP_MAX_IOQS_PER_VF];
+
+	/* IOQ mask */
+	struct otx_ep_io_enable io_qmask;
+
 	/* SR-IOV info */
 	struct otx_ep_sriov_info sriov_info;
 
@@ -132,6 +278,10 @@ struct otx_ep_device {
 	uint64_t tx_offloads;
 };
 
+int otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
+		     int desc_size, struct rte_mempool *mpool,
+		     unsigned int socket_id);
+int otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no);
 #define OTX_EP_MAX_PKT_SZ 64000U
 
 #define OTX_EP_MAX_MAC_ADDRS 1
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 4b6800fae..3e2df4035 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -72,11 +72,13 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
 	case PCI_DEVID_OCTEONTX_EP_VF:
 		otx_epvf->chip_id = dev_id;
 		ret = otx_ep_vf_setup_device(otx_epvf);
+		otx_epvf->fn_list.disable_io_queues(otx_epvf);
 		break;
 	case PCI_DEVID_OCTEONTX2_EP_NET_VF:
 	case PCI_DEVID_CN98XX_EP_NET_VF:
 		otx_epvf->chip_id = dev_id;
 		ret = otx2_ep_vf_setup_device(otx_epvf);
+		otx_epvf->fn_list.disable_io_queues(otx_epvf);
 		break;
 	default:
 		otx_ep_err("Unsupported device\n");
@@ -93,6 +95,8 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
 static int
 otx_epdev_init(struct otx_ep_device *otx_epvf)
 {
+	uint32_t ethdev_queues;
+
 	if (otx_ep_chip_specific_setup(otx_epvf)) {
 		otx_ep_err("Chip specific setup failed\n");
 		goto setup_fail;
@@ -103,6 +107,10 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
 		goto setup_fail;
 	}
 
+	ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf);
+	otx_epvf->max_rx_queues = ethdev_queues;
+	otx_epvf->max_tx_queues = ethdev_queues;
+
 	otx_ep_info("OTX_EP Device is Ready\n");
 
 	return 0;
@@ -140,12 +148,125 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+/**
+ * Setup our receive queue/ringbuffer. This is the
+ * queue the Octeon uses to send us packets and
+ * responses. We are given a memory pool for our
+ * packet buffers that are used to populate the receive
+ * queue.
+ *
+ * @param eth_dev
+ *    Pointer to the structure rte_eth_dev
+ * @param q_no
+ *    Queue number
+ * @param num_rx_descs
+ *    Number of entries in the queue
+ * @param socket_id
+ *    Where to allocate memory
+ * @param rx_conf
+ *    Pointer to the structure rte_eth_rxconf
+ * @param mp
+ *    Pointer to the packet pool
+ *
+ * @return
+ *    - On success, return 0
+ *    - On failure, return -1
+ */
+static int
+otx_ep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
+		       uint16_t num_rx_descs, unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf __rte_unused,
+		       struct rte_mempool *mp)
+{
+	struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev);
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	uint16_t buf_size;
+
+	if (q_no >= otx_epvf->max_rx_queues) {
+		otx_ep_err("Invalid rx queue number %u\n", q_no);
+		return -EINVAL;
+	}
+
+	if (num_rx_descs & (num_rx_descs - 1)) {
+		otx_ep_err("Invalid rx desc number should be pow 2  %u\n",
+			   num_rx_descs);
+		return -EINVAL;
+	}
+	if (num_rx_descs < (SDP_GBL_WMARK * 8)) {
+		otx_ep_err("Invalid rx desc number should at least be greater than 8xwmark  %u\n",
+			   num_rx_descs);
+		return -EINVAL;
+	}
+
+	otx_ep_dbg("setting up rx queue %u\n", q_no);
+
+	mbp_priv = rte_mempool_get_priv(mp);
+	buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
+
+	if (otx_ep_setup_oqs(otx_epvf, q_no, num_rx_descs, buf_size, mp,
+			     socket_id)) {
+		otx_ep_err("droq allocation failed\n");
+		return -1;
+	}
+
+	eth_dev->data->rx_queues[q_no] = otx_epvf->droq[q_no];
+
+	return 0;
+}
+
+/**
+ * Release the receive queue/ringbuffer. Called by
+ * the upper layers.
+ *
+ * @param rxq
+ *    Opaque pointer to the receive queue to release
+ *
+ * @return
+ *    - nothing
+ */
+static void
+otx_ep_rx_queue_release(void *rxq)
+{
+	struct otx_ep_droq *rq = (struct otx_ep_droq *)rxq;
+	struct otx_ep_device *otx_epvf = rq->otx_ep_dev;
+	int q_id = rq->q_no;
+
+	if (otx_ep_delete_oqs(otx_epvf, q_id))
+		otx_ep_err("Failed to delete OQ:%d\n", q_id);
+}
+
 /* Define our ethernet definitions */
 static const struct eth_dev_ops otx_ep_eth_dev_ops = {
 	.dev_configure		= otx_ep_dev_configure,
+	.rx_queue_setup	        = otx_ep_rx_queue_setup,
+	.rx_queue_release	= otx_ep_rx_queue_release,
 	.dev_infos_get		= otx_ep_dev_info_get,
 };
 
+
+
+static int
+otx_epdev_exit(struct rte_eth_dev *eth_dev)
+{
+	struct otx_ep_device *otx_epvf;
+	uint32_t num_queues, q;
+
+	otx_ep_info("%s:\n", __func__);
+
+	otx_epvf = OTX_EP_DEV(eth_dev);
+
+	num_queues = otx_epvf->nb_rx_queues;
+	for (q = 0; q < num_queues; q++) {
+		if (otx_ep_delete_oqs(otx_epvf, q)) {
+			otx_ep_err("Failed to delete OQ:%d\n", q);
+			return -ENOMEM;
+		}
+	}
+	otx_ep_info("Num OQs:%d freed\n", otx_epvf->nb_rx_queues);
+
+	return 0;
+}
+
 static int
 otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev)
 {
@@ -153,11 +274,15 @@ otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev)
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
+	otx_epdev_exit(eth_dev);
+
 	otx_epvf->port_configured = 0;
 
 	if (eth_dev->data->mac_addrs != NULL)
 		rte_free(eth_dev->data->mac_addrs);
 
+	eth_dev->dev_ops = NULL;
+
 	return 0;
 }
 
@@ -187,6 +312,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
 	eth_dev->data->mac_addrs = rte_zmalloc("otx_ep", RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
 		otx_ep_err("MAC addresses memory allocation failed\n");
+		eth_dev->dev_ops = NULL;
 		return -ENOMEM;
 	}
 	rte_eth_random_addr(vf_mac_addr);
@@ -195,6 +321,12 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
 	otx_epvf->pdev = pdev;
 
 	otx_epdev_init(otx_epvf);
+	if (pdev->id.device_id == PCI_DEVID_OCTEONTX2_EP_NET_VF)
+		otx_epvf->pkind = SDP_OTX2_PKIND;
+	else
+		otx_epvf->pkind = SDP_PKIND;
+	otx_ep_info("using pkind %d\n", otx_epvf->pkind);
+
 	otx_epvf->port_configured = 0;
 
 	return 0;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
new file mode 100644
index 000000000..e5b228f26
--- /dev/null
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -0,0 +1,222 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <unistd.h>
+
+#include <rte_eal.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_ethdev_pci.h>
+
+#include "otx_ep_common.h"
+#include "otx_ep_vf.h"
+#include "otx2_ep_vf.h"
+#include "otx_ep_rxtx.h"
+
+static void
+otx_ep_dmazone_free(const struct rte_memzone *mz)
+{
+	const struct rte_memzone *mz_tmp;
+	int ret = 0;
+
+	if (mz == NULL) {
+		otx_ep_err("Memzone %s : NULL\n", mz->name);
+		return;
+	}
+
+	mz_tmp = rte_memzone_lookup(mz->name);
+	if (mz_tmp == NULL) {
+		otx_ep_err("Memzone %s Not Found\n", mz->name);
+		return;
+	}
+
+	ret = rte_memzone_free(mz);
+	if (ret)
+		otx_ep_err("Memzone free failed : ret = %d\n", ret);
+}
+
+static void
+otx_ep_droq_reset_indices(struct otx_ep_droq *droq)
+{
+	droq->read_idx  = 0;
+	droq->write_idx = 0;
+	droq->refill_idx = 0;
+	droq->refill_count = 0;
+	droq->last_pkt_count = 0;
+	droq->pkts_pending = 0;
+}
+
+static void
+otx_ep_droq_destroy_ring_buffers(struct otx_ep_droq *droq)
+{
+	uint32_t idx;
+
+	for (idx = 0; idx < droq->nb_desc; idx++) {
+		if (droq->recv_buf_list[idx]) {
+			rte_pktmbuf_free(droq->recv_buf_list[idx]);
+			droq->recv_buf_list[idx] = NULL;
+		}
+	}
+
+	otx_ep_droq_reset_indices(droq);
+}
+
+/* Free OQs resources */
+int
+otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
+{
+	struct otx_ep_droq *droq;
+
+	droq = otx_ep->droq[oq_no];
+	if (droq == NULL) {
+		otx_ep_err("Invalid droq[%d]\n", oq_no);
+		return -ENOMEM;
+	}
+
+	otx_ep_droq_destroy_ring_buffers(droq);
+	rte_free(droq->recv_buf_list);
+	droq->recv_buf_list = NULL;
+
+	if (droq->desc_ring_mz) {
+		otx_ep_dmazone_free(droq->desc_ring_mz);
+		droq->desc_ring_mz = NULL;
+	}
+
+	memset(droq, 0, OTX_EP_DROQ_SIZE);
+
+	rte_free(otx_ep->droq[oq_no]);
+	otx_ep->droq[oq_no] = NULL;
+
+	otx_ep->nb_rx_queues--;
+
+	otx_ep_info("OQ[%d] is deleted\n", oq_no);
+	return 0;
+}
+
+static int
+otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq)
+{
+	struct otx_ep_droq_desc *desc_ring = droq->desc_ring;
+	struct otx_ep_droq_info *info;
+	struct rte_mbuf *buf;
+	uint32_t idx;
+
+	for (idx = 0; idx < droq->nb_desc; idx++) {
+		buf = rte_pktmbuf_alloc(droq->mpool);
+		if (buf == NULL) {
+			otx_ep_err("OQ buffer alloc failed\n");
+			droq->stats.rx_alloc_failure++;
+			/* otx_ep_droq_destroy_ring_buffers(droq);*/
+			return -ENOMEM;
+		}
+
+		droq->recv_buf_list[idx] = buf;
+		info = rte_pktmbuf_mtod(buf, struct otx_ep_droq_info *);
+		memset(info, 0, sizeof(*info));
+		desc_ring[idx].buffer_ptr = rte_mbuf_data_iova_default(buf);
+	}
+
+	otx_ep_droq_reset_indices(droq);
+
+	return 0;
+}
+
+/* OQ initialization */
+static int
+otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
+	      uint32_t num_descs, uint32_t desc_size,
+	      struct rte_mempool *mpool, unsigned int socket_id)
+{
+	const struct otx_ep_config *conf = otx_ep->conf;
+	uint32_t c_refill_threshold;
+	struct otx_ep_droq *droq;
+	uint32_t desc_ring_size;
+
+	otx_ep_info("OQ[%d] Init start\n", q_no);
+
+	droq = otx_ep->droq[q_no];
+	droq->otx_ep_dev = otx_ep;
+	droq->q_no = q_no;
+	droq->mpool = mpool;
+
+	droq->nb_desc      = num_descs;
+	droq->buffer_size  = desc_size;
+	c_refill_threshold = RTE_MAX(conf->oq.refill_threshold,
+				     droq->nb_desc / 2);
+
+	/* OQ desc_ring set up */
+	desc_ring_size = droq->nb_desc * OTX_EP_DROQ_DESC_SIZE;
+	droq->desc_ring_mz = rte_eth_dma_zone_reserve(otx_ep->eth_dev, "droq",
+						      q_no, desc_ring_size,
+						      OTX_EP_PCI_RING_ALIGN,
+						      socket_id);
+
+	if (droq->desc_ring_mz == NULL) {
+		otx_ep_err("OQ:%d desc_ring allocation failed\n", q_no);
+		goto init_droq_fail;
+	}
+
+	droq->desc_ring_dma = droq->desc_ring_mz->iova;
+	droq->desc_ring = (struct otx_ep_droq_desc *)droq->desc_ring_mz->addr;
+
+	otx_ep_dbg("OQ[%d]: desc_ring: virt: 0x%p, dma: %lx\n",
+		    q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma);
+	otx_ep_dbg("OQ[%d]: num_desc: %d\n", q_no, droq->nb_desc);
+
+	/* OQ buf_list set up */
+	droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list",
+				(droq->nb_desc * sizeof(struct rte_mbuf *)),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (droq->recv_buf_list == NULL) {
+		otx_ep_err("OQ recv_buf_list alloc failed\n");
+		goto init_droq_fail;
+	}
+
+	if (otx_ep_droq_setup_ring_buffers(droq))
+		goto init_droq_fail;
+
+	droq->refill_threshold = c_refill_threshold;
+
+	/* Set up OQ registers */
+	otx_ep->fn_list.setup_oq_regs(otx_ep, q_no);
+
+	otx_ep->io_qmask.oq |= (1ull << q_no);
+
+	return 0;
+
+init_droq_fail:
+	return -ENOMEM;
+}
+
+/* OQ configuration and setup */
+int
+otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
+	       int desc_size, struct rte_mempool *mpool, unsigned int socket_id)
+{
+	struct otx_ep_droq *droq;
+
+	/* Allocate new droq. */
+	droq = (struct otx_ep_droq *)rte_zmalloc("otx_ep_OQ",
+				sizeof(*droq), RTE_CACHE_LINE_SIZE);
+	if (droq == NULL) {
+		otx_ep_err("Droq[%d] Creation Failed\n", oq_no);
+		return -ENOMEM;
+	}
+	otx_ep->droq[oq_no] = droq;
+
+	if (otx_ep_init_droq(otx_ep, oq_no, num_descs, desc_size, mpool,
+			     socket_id)) {
+		otx_ep_err("Droq[%d] Initialization failed\n", oq_no);
+		goto delete_OQ;
+	}
+	otx_ep_info("OQ[%d] is created.\n", oq_no);
+
+	otx_ep->nb_rx_queues++;
+
+	return 0;
+
+delete_OQ:
+	otx_ep_delete_oqs(otx_ep, oq_no);
+	return -ENOMEM;
+}
diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h
index c5741a3f1..d17c87909 100644
--- a/drivers/net/octeontx_ep/otx_ep_vf.h
+++ b/drivers/net/octeontx_ep/otx_ep_vf.h
@@ -37,6 +37,12 @@
 
 #define PCI_DEVID_OCTEONTX_EP_VF 0xa303
 
+/* This is a static value set by the SLI PF driver in OCTEON;
+ * no handshake is available. Update this if the value in the
+ * SLI PF driver changes.
+ */
+#define SDP_GBL_WMARK 0x100
+
 int
 otx_ep_vf_setup_device(struct otx_ep_device *otx_ep);
 #endif /*_OTX_EP_VF_H_ */
-- 
2.17.1



* [dpdk-dev] [PATCH v3 07/11] net/octeontx_ep: Added tx queue setup and release
  2021-01-26 21:30 [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Nalla Pradeep
                   ` (4 preceding siblings ...)
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 06/11] net/octeontx_ep: Added rxq setup and release Nalla Pradeep
@ 2021-01-26 21:30 ` Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 08/11] net/octeontx_ep: Setting up iq and oq registers Nalla Pradeep
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Nalla Pradeep @ 2021-01-26 21:30 UTC (permalink / raw)
  Cc: jerinj, sburla, dev, Nalla Pradeep

Transmit queue setup involves allocating memory for the command queue,
sized by the tx descriptor count, and initializing the data structure
that represents the queue. The transmit queue release function frees
the command queue.
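
For context, the sketch below shows how an application would typically reach
this code through the generic ethdev API. It is a minimal sketch only; the
helper name, port id, descriptor counts and mempool are illustrative
assumptions and not part of this patch.

#include <string.h>
#include <rte_mempool.h>
#include <rte_ethdev.h>

/* Hypothetical helper: configure one RX/TX queue pair on an already
 * probed octeontx_ep port. The TX descriptor count must be a power of
 * two, as enforced by otx_ep_tx_queue_setup() in this patch.
 */
static int
ep_setup_one_txq(uint16_t port_id, unsigned int socket_id,
		 struct rte_mempool *mp)
{
	struct rte_eth_conf conf;
	const uint16_t nb_rxd = 1024;
	const uint16_t nb_txd = 1024;	/* power of two */
	int ret;

	memset(&conf, 0, sizeof(conf));
	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id, NULL, mp);
	if (ret < 0)
		return ret;

	/* Dispatched to otx_ep_tx_queue_setup() via eth_dev_ops */
	return rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, NULL);
}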

Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
---
 drivers/net/octeontx_ep/otx_ep_common.h |  89 +++++++++++++++-
 drivers/net/octeontx_ep/otx_ep_ethdev.c |  81 ++++++++++++++
 drivers/net/octeontx_ep/otx_ep_rxtx.c   | 135 ++++++++++++++++++++++++
 3 files changed, 303 insertions(+), 2 deletions(-)

diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index 6be5c5a76..e1b4ff270 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -42,7 +42,21 @@
 	rte_write64(val, ((base_addr) + off)); \
 	}
 
-struct otx_ep_device;
+/* OTX_EP IQ request list */
+struct otx_ep_instr_list {
+	void *buf;
+	uint32_t reqtype;
+};
+#define OTX_EP_IQREQ_LIST_SIZE	(sizeof(struct otx_ep_instr_list))
+
+/* Input Queue statistics. */
+struct otx_ep_iq_stats {
+	uint64_t instr_posted; /* Instructions posted to this queue. */
+	uint64_t instr_processed; /* Instructions processed in this queue. */
+	uint64_t instr_dropped; /* Instructions that could not be processed */
+	uint64_t tx_pkts;
+	uint64_t tx_bytes;
+};
 
 /* Structure to define the configuration attributes for each Input queue. */
 struct otx_ep_iq_config {
@@ -56,6 +70,66 @@ struct otx_ep_iq_config {
 	uint32_t pending_list_size;
 };
 
+/** The instruction (input) queue.
+ *  The input queue is used to post raw (instruction) mode data or packet data
+ *  to the OCTEON TX2 device from the host. Each IQ of an OTX_EP VF device
+ *  has one such structure to represent it.
+ */
+struct otx_ep_instr_queue {
+	struct otx_ep_device *otx_ep_dev;
+
+	uint32_t q_no;
+	uint32_t pkt_in_done;
+
+	/* Flag for 64 byte commands. */
+	uint32_t iqcmd_64B:1;
+	uint32_t rsvd:17;
+	uint32_t status:8;
+
+	/* Number of descriptors in this ring. */
+	uint32_t nb_desc;
+
+	/* Input ring index, where the driver should write the next packet */
+	uint32_t host_write_index;
+
+	/* Input ring index, where the OCTEON TX2 should read the next packet */
+	uint32_t otx_read_index;
+
+	uint32_t reset_instr_cnt;
+
+	/** This index aids in finding the window in the queue where OCTEON TX2
+	 *  has read the commands.
+	 */
+	uint32_t flush_index;
+
+	/* This keeps track of the instructions pending in this queue. */
+	uint64_t instr_pending;
+
+	/* Pointer to the Virtual Base addr of the input ring. */
+	uint8_t *base_addr;
+
+	/* This IQ request list */
+	struct otx_ep_instr_list *req_list;
+
+	/* OTX_EP doorbell register for the ring. */
+	void *doorbell_reg;
+
+	/* OTX_EP instruction count register for this ring. */
+	void *inst_cnt_reg;
+
+	/* Number of instructions pending to be posted to OCTEON TX2. */
+	uint32_t fill_cnt;
+
+	/* Statistics for this input queue. */
+	struct otx_ep_iq_stats stats;
+
+	/* DMA mapped base address of the input descriptor ring. */
+	uint64_t base_addr_dma;
+
+	/* Memory zone */
+	const struct rte_memzone *iq_mz;
+};
+
 /** Descriptor format.
  *  The descriptor ring is made of descriptors which have 2 64-bit values:
  *  -# Physical (bus) address of the data buffer.
@@ -218,6 +292,7 @@ struct otx_ep_config {
 
 /* Required functions for each VF device */
 struct otx_ep_fn_list {
+	void (*setup_iq_regs)(struct otx_ep_device *otx_ep, uint32_t q_no);
 	void (*setup_oq_regs)(struct otx_ep_device *otx_ep, uint32_t q_no);
 
 	int (*setup_device_regs)(struct otx_ep_device *otx_ep);
@@ -256,6 +331,12 @@ struct otx_ep_device {
 
 	uint32_t max_rx_queues;
 
+	/* Num IQs */
+	uint32_t nb_tx_queues;
+
+	/* The input instruction queues */
+	struct otx_ep_instr_queue *instr_queue[OTX_EP_MAX_IOQS_PER_VF];
+
 	/* Num OQs */
 	uint32_t nb_rx_queues;
 
@@ -278,12 +359,16 @@ struct otx_ep_device {
 	uint64_t tx_offloads;
 };
 
+int otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no,
+		     int num_descs, unsigned int socket_id);
+int otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no);
+
 int otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
 		     int desc_size, struct rte_mempool *mpool,
 		     unsigned int socket_id);
 int otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no);
-#define OTX_EP_MAX_PKT_SZ 64000U
 
+#define OTX_EP_MAX_PKT_SZ 64000U
 #define OTX_EP_MAX_MAC_ADDRS 1
 
 extern int otx_net_ep_logtype;
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 3e2df4035..33ddc1aed 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -235,11 +235,83 @@ otx_ep_rx_queue_release(void *rxq)
 		otx_ep_err("Failed to delete OQ:%d\n", q_id);
 }
 
+/**
+ * Allocate and initialize SW ring. Initialize associated HW registers.
+ *
+ * @param eth_dev
+ *   Pointer to structure rte_eth_dev
+ *
+ * @param q_no
+ *   Queue number
+ *
+ * @param num_tx_descs
+ *   Number of ringbuffer descriptors
+ *
+ * @param socket_id
+ *   NUMA socket id, used for memory allocations
+ *
+ * @param tx_conf
+ *   Pointer to the structure rte_eth_txconf
+ *
+ * @return
+ *   - On success, return 0
+ *   - On failure, return -errno value
+ */
+static int
+otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
+		       uint16_t num_tx_descs, unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev);
+	int retval;
+
+	if (q_no >= otx_epvf->max_tx_queues) {
+		otx_ep_err("Invalid tx queue number %u\n", q_no);
+		return -EINVAL;
+	}
+	if (num_tx_descs & (num_tx_descs - 1)) {
+		otx_ep_err("Invalid tx desc number (%u); must be a power of 2\n",
+			   num_tx_descs);
+		return -EINVAL;
+	}
+
+	retval = otx_ep_setup_iqs(otx_epvf, q_no, num_tx_descs, socket_id);
+
+	if (retval) {
+		otx_ep_err("IQ(TxQ) creation failed.\n");
+		return retval;
+	}
+
+	eth_dev->data->tx_queues[q_no] = otx_epvf->instr_queue[q_no];
+	otx_ep_dbg("tx queue[%d] setup\n", q_no);
+	return 0;
+}
+
+/**
+ * Release the transmit queue/ringbuffer. Called by
+ * the upper layers.
+ *
+ * @param txq
+ *    Opaque pointer to the transmit queue to release
+ *
+ * @return
+ *    - nothing
+ */
+static void
+otx_ep_tx_queue_release(void *txq)
+{
+	struct otx_ep_instr_queue *tq = (struct otx_ep_instr_queue *)txq;
+
+	otx_ep_delete_iqs(tq->otx_ep_dev, tq->q_no);
+}
+
 /* Define our ethernet definitions */
 static const struct eth_dev_ops otx_ep_eth_dev_ops = {
 	.dev_configure		= otx_ep_dev_configure,
 	.rx_queue_setup	        = otx_ep_rx_queue_setup,
 	.rx_queue_release	= otx_ep_rx_queue_release,
+	.tx_queue_setup	        = otx_ep_tx_queue_setup,
+	.tx_queue_release	= otx_ep_tx_queue_release,
 	.dev_infos_get		= otx_ep_dev_info_get,
 };
 
@@ -264,6 +336,15 @@ otx_epdev_exit(struct rte_eth_dev *eth_dev)
 	}
 	otx_ep_info("Num OQs:%d freed\n", otx_epvf->nb_rx_queues);
 
+	num_queues = otx_epvf->nb_tx_queues;
+	for (q = 0; q < num_queues; q++) {
+		if (otx_ep_delete_iqs(otx_epvf, q)) {
+			otx_ep_err("Failed to delete IQ:%d\n", q);
+			return -ENOMEM;
+		}
+	}
+	otx_ep_dbg("Num IQs:%d freed\n", otx_epvf->nb_tx_queues);
+
 	return 0;
 }
 
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index e5b228f26..666411e7c 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -36,6 +36,141 @@ otx_ep_dmazone_free(const struct rte_memzone *mz)
 		otx_ep_err("Memzone free failed : ret = %d\n", ret);
 }
 
+/* Free IQ resources */
+int
+otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
+{
+	struct otx_ep_instr_queue *iq;
+
+	iq = otx_ep->instr_queue[iq_no];
+	if (iq == NULL) {
+		otx_ep_err("Invalid IQ[%d]\n", iq_no);
+		return -ENOMEM;
+	}
+
+	rte_free(iq->req_list);
+	iq->req_list = NULL;
+
+	if (iq->iq_mz) {
+		otx_ep_dmazone_free(iq->iq_mz);
+		iq->iq_mz = NULL;
+	}
+
+	rte_free(otx_ep->instr_queue[iq_no]);
+	otx_ep->instr_queue[iq_no] = NULL;
+
+	otx_ep->nb_tx_queues--;
+
+	otx_ep_info("IQ[%d] is deleted\n", iq_no);
+
+	return 0;
+}
+
+/* IQ initialization */
+static int
+otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+		     unsigned int socket_id)
+{
+	const struct otx_ep_config *conf;
+	struct otx_ep_instr_queue *iq;
+	uint32_t q_size;
+
+	conf = otx_ep->conf;
+	iq = otx_ep->instr_queue[iq_no];
+	q_size = conf->iq.instr_type * num_descs;
+
+	/* IQ memory creation for Instruction submission to OCTEON TX2 */
+	iq->iq_mz = rte_eth_dma_zone_reserve(otx_ep->eth_dev,
+					     "instr_queue", iq_no, q_size,
+					     OTX_EP_PCI_RING_ALIGN,
+					     socket_id);
+	if (iq->iq_mz == NULL) {
+		otx_ep_err("IQ[%d] memzone alloc failed\n", iq_no);
+		goto iq_init_fail;
+	}
+
+	iq->base_addr_dma = iq->iq_mz->iova;
+	iq->base_addr = (uint8_t *)iq->iq_mz->addr;
+
+	if (num_descs & (num_descs - 1)) {
+		otx_ep_err("IQ[%d] descs not in power of 2\n", iq_no);
+		goto iq_init_fail;
+	}
+
+	iq->nb_desc = num_descs;
+
+	/* Create a IQ request list to hold requests that have been
+	 * posted to OCTEON TX2. This list will be used for freeing the IQ
+	 * data buffer(s) later, once the OCTEON TX2 has fetched the requests.
+	 */
+	iq->req_list = rte_zmalloc_socket("request_list",
+			(iq->nb_desc * OTX_EP_IQREQ_LIST_SIZE),
+			RTE_CACHE_LINE_SIZE,
+			rte_socket_id());
+	if (iq->req_list == NULL) {
+		otx_ep_err("IQ[%d] req_list alloc failed\n", iq_no);
+		goto iq_init_fail;
+	}
+
+	otx_ep_info("IQ[%d]: base: %p basedma: %lx count: %d\n",
+		     iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma,
+		     iq->nb_desc);
+
+	iq->otx_ep_dev = otx_ep;
+	iq->q_no = iq_no;
+	iq->fill_cnt = 0;
+	iq->host_write_index = 0;
+	iq->otx_read_index = 0;
+	iq->flush_index = 0;
+	iq->instr_pending = 0;
+
+
+
+	otx_ep->io_qmask.iq |= (1ull << iq_no);
+
+	/* Set 32B/64B mode for each input queue */
+	if (conf->iq.instr_type == 64)
+		otx_ep->io_qmask.iq64B |= (1ull << iq_no);
+
+	iq->iqcmd_64B = (conf->iq.instr_type == 64);
+
+	/* Set up IQ registers */
+	otx_ep->fn_list.setup_iq_regs(otx_ep, iq_no);
+
+	return 0;
+
+iq_init_fail:
+	return -ENOMEM;
+}
+
+int
+otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no, int num_descs,
+		 unsigned int socket_id)
+{
+	struct otx_ep_instr_queue *iq;
+
+	iq = (struct otx_ep_instr_queue *)rte_zmalloc("otx_ep_IQ", sizeof(*iq),
+						RTE_CACHE_LINE_SIZE);
+	if (iq == NULL)
+		return -ENOMEM;
+
+	otx_ep->instr_queue[iq_no] = iq;
+
+	if (otx_ep_init_instr_queue(otx_ep, iq_no, num_descs, socket_id)) {
+		otx_ep_err("IQ init failed\n");
+		goto delete_IQ;
+	}
+	otx_ep->nb_tx_queues++;
+
+	otx_ep_info("IQ[%d] is created.\n", iq_no);
+
+	return 0;
+
+delete_IQ:
+	otx_ep_delete_iqs(otx_ep, iq_no);
+	return -ENOMEM;
+}
+
 static void
 otx_ep_droq_reset_indices(struct otx_ep_droq *droq)
 {
-- 
2.17.1



* [dpdk-dev] [PATCH v3 08/11] net/octeontx_ep: Setting up iq and oq registers
  2021-01-26 21:30 [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Nalla Pradeep
                   ` (5 preceding siblings ...)
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 07/11] net/octeontx_ep: Added tx queue " Nalla Pradeep
@ 2021-01-26 21:30 ` Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 09/11] net/octeontx_ep: Added dev start and stop Nalla Pradeep
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Nalla Pradeep @ 2021-01-26 21:30 UTC (permalink / raw)
  Cc: jerinj, sburla, dev, Nalla Pradeep

Configure hardware registers with the command queue (IQ) and droq (OQ)
parameters.
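
For reference, a minimal sketch of how the common code reaches these
chip-specific register writers through the fn_list dispatch table populated
at probe time. The wrapper name is an illustrative assumption; only the
fn_list callbacks themselves come from this patch.

#include "otx_ep_common.h"

/* Hypothetical wrapper: program the registers of one IQ/OQ pair using
 * whichever callbacks otx_ep_vf_setup_device() or
 * otx2_ep_vf_setup_device() installed for this chip.
 */
static void
ep_setup_queue_pair_regs(struct otx_ep_device *otx_ep, uint32_t q_no)
{
	/* IQ: ring base/size, doorbell, instruction count, INT levels */
	otx_ep->fn_list.setup_iq_regs(otx_ep, q_no);

	/* OQ: ring base/size, buffer size, credit/sent doorbells */
	otx_ep->fn_list.setup_oq_regs(otx_ep, q_no);
}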

Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
---
 drivers/net/octeontx_ep/otx2_ep_vf.c    | 120 +++++++++++++++++++++++
 drivers/net/octeontx_ep/otx_ep_common.h |  65 +++++++++++++
 drivers/net/octeontx_ep/otx_ep_vf.c     | 121 ++++++++++++++++++++++++
 drivers/net/octeontx_ep/otx_ep_vf.h     |  53 +++++++++++
 4 files changed, 359 insertions(+)

diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.c b/drivers/net/octeontx_ep/otx2_ep_vf.c
index e793c04fb..9349e66c0 100644
--- a/drivers/net/octeontx_ep/otx2_ep_vf.c
+++ b/drivers/net/octeontx_ep/otx2_ep_vf.c
@@ -73,6 +73,123 @@ otx2_vf_setup_device_regs(struct otx_ep_device *otx_ep)
 	return 0;
 }
 
+static void
+otx2_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
+{
+	struct otx_ep_instr_queue *iq = otx_ep->instr_queue[iq_no];
+	volatile uint64_t reg_val = 0ull;
+
+	reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_CONTROL(iq_no));
+
+	/* Wait until IDLE is set to 1; BADDR must not be configured
+	 * as long as IDLE is 0
+	 */
+	if (!(reg_val & SDP_VF_R_IN_CTL_IDLE)) {
+		do {
+			reg_val = otx2_read64(otx_ep->hw_addr +
+					      SDP_VF_R_IN_CONTROL(iq_no));
+		} while (!(reg_val & SDP_VF_R_IN_CTL_IDLE));
+	}
+
+	/* Write the start of the input queue's ring and its size  */
+	otx2_write64(iq->base_addr_dma, otx_ep->hw_addr +
+		     SDP_VF_R_IN_INSTR_BADDR(iq_no));
+	otx2_write64(iq->nb_desc, otx_ep->hw_addr +
+		     SDP_VF_R_IN_INSTR_RSIZE(iq_no));
+
+	/* Remember the doorbell & instruction count register addr
+	 * for this queue
+	 */
+	iq->doorbell_reg = (uint8_t *)otx_ep->hw_addr +
+			   SDP_VF_R_IN_INSTR_DBELL(iq_no);
+	iq->inst_cnt_reg = (uint8_t *)otx_ep->hw_addr +
+			   SDP_VF_R_IN_CNTS(iq_no);
+
+	otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p",
+		   iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
+
+	do {
+		reg_val = rte_read32(iq->inst_cnt_reg);
+		rte_write32(reg_val, iq->inst_cnt_reg);
+	} while (reg_val != 0);
+
+	/* IN INTR_THRESHOLD is set to the maximum value, so the IN interrupt
+	 * is not raised
+	 */
+	otx2_write64(0x3FFFFFFFFFFFFFUL,
+		     otx_ep->hw_addr + SDP_VF_R_IN_INT_LEVELS(iq_no));
+}
+
+static void
+otx2_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
+{
+	volatile uint64_t reg_val = 0ull;
+	uint64_t oq_ctl = 0ull;
+	struct otx_ep_droq *droq = otx_ep->droq[oq_no];
+
+	/* Wait until IDLE is set to 1; BADDR must not be configured
+	 * as long as IDLE is 0
+	 */
+	reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_CONTROL(oq_no));
+
+	while (!(reg_val & SDP_VF_R_OUT_CTL_IDLE)) {
+		reg_val = otx2_read64(otx_ep->hw_addr +
+				      SDP_VF_R_OUT_CONTROL(oq_no));
+	}
+
+	otx2_write64(droq->desc_ring_dma, otx_ep->hw_addr +
+		     SDP_VF_R_OUT_SLIST_BADDR(oq_no));
+	otx2_write64(droq->nb_desc, otx_ep->hw_addr +
+		     SDP_VF_R_OUT_SLIST_RSIZE(oq_no));
+
+	oq_ctl = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_CONTROL(oq_no));
+
+	/* Clear the ISIZE and BSIZE (22-0) */
+	oq_ctl &= ~(0x7fffffull);
+
+	/* Populate the BSIZE (15-0) */
+	oq_ctl |= (droq->buffer_size & 0xffff);
+
+	otx2_write64(oq_ctl, otx_ep->hw_addr + SDP_VF_R_OUT_CONTROL(oq_no));
+
+	/* Mapped address of the pkt_sent and pkts_credit regs */
+	droq->pkts_sent_reg = (uint8_t *)otx_ep->hw_addr +
+			      SDP_VF_R_OUT_CNTS(oq_no);
+	droq->pkts_credit_reg = (uint8_t *)otx_ep->hw_addr +
+				SDP_VF_R_OUT_SLIST_DBELL(oq_no);
+
+	rte_write64(0x3FFFFFFFFFFFFFUL,
+		    otx_ep->hw_addr + SDP_VF_R_OUT_INT_LEVELS(oq_no));
+
+	/* Clear PKT_CNT register */
+	rte_write64(0xFFFFFFFFF, (uint8_t *)otx_ep->hw_addr +
+		    SDP_VF_R_OUT_PKT_CNT(oq_no));
+
+	/* Clear the OQ doorbell  */
+	rte_write32(0xFFFFFFFF, droq->pkts_credit_reg);
+	while ((rte_read32(droq->pkts_credit_reg) != 0ull)) {
+		rte_write32(0xFFFFFFFF, droq->pkts_credit_reg);
+		rte_delay_ms(1);
+	}
+	otx_ep_dbg("SDP_R[%d]_credit:%x", oq_no,
+		   rte_read32(droq->pkts_credit_reg));
+
+	/* Clear the OQ_OUT_CNTS doorbell  */
+	reg_val = rte_read32(droq->pkts_sent_reg);
+	rte_write32((uint32_t)reg_val, droq->pkts_sent_reg);
+
+	otx_ep_dbg("SDP_R[%d]_sent: %x", oq_no,
+		   rte_read32(droq->pkts_sent_reg));
+
+	while (((rte_read32(droq->pkts_sent_reg)) != 0ull)) {
+		reg_val = rte_read32(droq->pkts_sent_reg);
+		rte_write32((uint32_t)reg_val, droq->pkts_sent_reg);
+		rte_delay_ms(1);
+	}
+	otx_ep_dbg("SDP_R[%d]_sent: %x", oq_no,
+		   rte_read32(droq->pkts_sent_reg));
+}
+
 static const struct otx_ep_config default_otx2_ep_conf = {
 	/* IQ attributes */
 	.iq                        = {
@@ -127,6 +244,9 @@ otx2_ep_vf_setup_device(struct otx_ep_device *otx_ep)
 
 	otx2_info("SDP RPVF: %d", otx_ep->sriov_info.rings_per_vf);
 
+	otx_ep->fn_list.setup_iq_regs       = otx2_vf_setup_iq_regs;
+	otx_ep->fn_list.setup_oq_regs       = otx2_vf_setup_oq_regs;
+
 	otx_ep->fn_list.setup_device_regs   = otx2_vf_setup_device_regs;
 
 	return 0;
diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index e1b4ff270..85fb946b3 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -33,6 +33,33 @@
 		"%s():%u " fmt "\n",				\
 		__func__, __LINE__, ##args)
 
+/* Input Request Header format */
+union otx_ep_instr_irh {
+	uint64_t u64;
+	struct {
+		/* Request ID  */
+		uint64_t rid:16;
+
+		/* PCIe port to use for response */
+		uint64_t pcie_port:3;
+
+		/* Scatter indicator  1=scatter */
+		uint64_t scatter:1;
+
+		/* Size of Expected result OR no. of entries in scatter list */
+		uint64_t rlenssz:14;
+
+		/* Desired destination port for result */
+		uint64_t dport:6;
+
+		/* Opcode Specific parameters */
+		uint64_t param:8;
+
+		/* Opcode for the return packet  */
+		uint64_t opcode:16;
+	} s;
+};
+
 #define otx_ep_write64(value, base_addr, reg_off) \
 	{\
 	typeof(value) val = (value); \
@@ -42,6 +69,33 @@
 	rte_write64(val, ((base_addr) + off)); \
 	}
 
+/* Instruction Header - for OCTEON-TX models */
+typedef union otx_ep_instr_ih {
+	uint64_t u64;
+	struct {
+	  /** Data Len */
+		uint64_t tlen:16;
+
+	  /** Reserved */
+		uint64_t rsvd:20;
+
+	  /** PKIND for OTX_EP */
+		uint64_t pkind:6;
+
+	  /** Front Data size */
+		uint64_t fsz:6;
+
+	  /** No. of entries in gather list */
+		uint64_t gsz:14;
+
+	  /** Gather indicator 1=gather*/
+		uint64_t gather:1;
+
+	  /** Reserved3 */
+		uint64_t reserved3:1;
+	} s;
+} otx_ep_instr_ih_t;
+
 /* OTX_EP IQ request list */
 struct otx_ep_instr_list {
 	void *buf;
@@ -244,6 +298,16 @@ struct otx_ep_droq {
 	/* The size of each buffer pointed by the buffer pointer. */
 	uint32_t buffer_size;
 
+	/** Pointer to the mapped packet credit register.
+	 *  Host writes number of info/buffer ptrs available to this register
+	 */
+	void *pkts_credit_reg;
+
+	/** Pointer to the mapped packet sent register. OCTEON TX2 writes the
+	 *  number of packets DMA'ed to host memory in this register.
+	 */
+	void *pkts_sent_reg;
+
 	/* Statistics for this DROQ. */
 	struct otx_ep_droq_stats stats;
 
@@ -259,6 +323,7 @@ struct otx_ep_droq {
 	/* Allocated size of info list. */
 	uint32_t info_alloc_size;
 
+
 	/* Memory zone **/
 	const struct rte_memzone *desc_ring_mz;
 	const struct rte_memzone *info_mz;
diff --git a/drivers/net/octeontx_ep/otx_ep_vf.c b/drivers/net/octeontx_ep/otx_ep_vf.c
index 0bf8e5bed..e5a747577 100644
--- a/drivers/net/octeontx_ep/otx_ep_vf.c
+++ b/drivers/net/octeontx_ep/otx_ep_vf.c
@@ -87,6 +87,124 @@ otx_ep_setup_device_regs(struct otx_ep_device *otx_ep)
 	return 0;
 }
 
+static void
+otx_ep_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
+{
+	struct otx_ep_instr_queue *iq = otx_ep->instr_queue[iq_no];
+	volatile uint64_t reg_val = 0ull;
+
+	reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_CONTROL(iq_no));
+
+	/* Wait until IDLE is set to 1; BADDR must not be configured
+	 * as long as IDLE is 0
+	 */
+	if (!(reg_val & OTX_EP_R_IN_CTL_IDLE)) {
+		do {
+			reg_val = rte_read64(otx_ep->hw_addr +
+					      OTX_EP_R_IN_CONTROL(iq_no));
+		} while (!(reg_val & OTX_EP_R_IN_CTL_IDLE));
+	}
+
+	/* Write the start of the input queue's ring and its size  */
+	otx_ep_write64(iq->base_addr_dma, otx_ep->hw_addr,
+		       OTX_EP_R_IN_INSTR_BADDR(iq_no));
+	otx_ep_write64(iq->nb_desc, otx_ep->hw_addr,
+		       OTX_EP_R_IN_INSTR_RSIZE(iq_no));
+
+	/* Remember the doorbell & instruction count register addr
+	 * for this queue
+	 */
+	iq->doorbell_reg = (uint8_t *)otx_ep->hw_addr +
+			   OTX_EP_R_IN_INSTR_DBELL(iq_no);
+	iq->inst_cnt_reg = (uint8_t *)otx_ep->hw_addr +
+			   OTX_EP_R_IN_CNTS(iq_no);
+
+	otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p\n",
+		     iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
+
+	do {
+		reg_val = rte_read32(iq->inst_cnt_reg);
+		rte_write32(reg_val, iq->inst_cnt_reg);
+	} while (reg_val !=  0);
+
+	/* IN INTR_THRESHOLD is set to the maximum value, so the IN interrupt
+	 * is not raised
+	 */
+	/* reg_val = rte_read64(otx_ep->hw_addr +
+	 * OTX_EP_R_IN_INT_LEVELS(iq_no));
+	 */
+	reg_val = 0xffffffff;
+
+	otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_IN_INT_LEVELS(iq_no));
+}
+
+static void
+otx_ep_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
+{
+	volatile uint64_t reg_val = 0ull;
+	uint64_t oq_ctl = 0ull;
+
+	struct otx_ep_droq *droq = otx_ep->droq[oq_no];
+
+	/* Wait until IDLE is set to 1; BADDR must not be configured
+	 * as long as IDLE is 0
+	 */
+	otx_ep_write64(0ULL, otx_ep->hw_addr, OTX_EP_R_OUT_ENABLE(oq_no));
+
+	reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_CONTROL(oq_no));
+
+	while (!(reg_val & OTX_EP_R_OUT_CTL_IDLE)) {
+		reg_val = rte_read64(otx_ep->hw_addr +
+				      OTX_EP_R_OUT_CONTROL(oq_no));
+	}
+
+	otx_ep_write64(droq->desc_ring_dma, otx_ep->hw_addr,
+		       OTX_EP_R_OUT_SLIST_BADDR(oq_no));
+	otx_ep_write64(droq->nb_desc, otx_ep->hw_addr,
+		       OTX_EP_R_OUT_SLIST_RSIZE(oq_no));
+
+	oq_ctl = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_CONTROL(oq_no));
+
+	/* Clear the ISIZE and BSIZE (22-0) */
+	oq_ctl &= ~(0x7fffffull);
+
+	/* Populate the BSIZE (15-0) */
+	oq_ctl |= (droq->buffer_size & 0xffff);
+
+	otx_ep_write64(oq_ctl, otx_ep->hw_addr, OTX_EP_R_OUT_CONTROL(oq_no));
+
+	/* Mapped address of the pkt_sent and pkts_credit regs */
+	droq->pkts_sent_reg = (uint8_t *)otx_ep->hw_addr +
+			      OTX_EP_R_OUT_CNTS(oq_no);
+	droq->pkts_credit_reg = (uint8_t *)otx_ep->hw_addr +
+				OTX_EP_R_OUT_SLIST_DBELL(oq_no);
+
+	otx_ep_write64(0x3fffffffffffffULL, otx_ep->hw_addr,
+		       OTX_EP_R_OUT_INT_LEVELS(oq_no));
+
+	/* Clear the OQ doorbell  */
+	rte_write32(0xFFFFFFFF, droq->pkts_credit_reg);
+	while ((rte_read32(droq->pkts_credit_reg) != 0ull)) {
+		rte_write32(0xFFFFFFFF, droq->pkts_credit_reg);
+		rte_delay_ms(1);
+	}
+	otx_ep_dbg("OTX_EP_R[%d]_credit:%x\n", oq_no,
+		     rte_read32(droq->pkts_credit_reg));
+
+	/* Clear the OQ_OUT_CNTS doorbell  */
+	reg_val = rte_read32(droq->pkts_sent_reg);
+	rte_write32((uint32_t)reg_val, droq->pkts_sent_reg);
+
+	otx_ep_dbg("OTX_EP_R[%d]_sent: %x\n", oq_no,
+		     rte_read32(droq->pkts_sent_reg));
+
+	while (((rte_read32(droq->pkts_sent_reg)) != 0ull)) {
+		reg_val = rte_read32(droq->pkts_sent_reg);
+		rte_write32((uint32_t)reg_val, droq->pkts_sent_reg);
+		rte_delay_ms(1);
+	}
+}
+
 /* OTX_EP default configuration */
 static const struct otx_ep_config default_otx_ep_conf = {
 	/* IQ attributes */
@@ -144,6 +262,9 @@ otx_ep_vf_setup_device(struct otx_ep_device *otx_ep)
 
 	otx_ep_info("OTX_EP RPVF: %d\n", otx_ep->sriov_info.rings_per_vf);
 
+	otx_ep->fn_list.setup_iq_regs       = otx_ep_setup_iq_regs;
+	otx_ep->fn_list.setup_oq_regs       = otx_ep_setup_oq_regs;
+
 	otx_ep->fn_list.setup_device_regs   = otx_ep_setup_device_regs;
 
 	return 0;
diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h
index d17c87909..acc16753b 100644
--- a/drivers/net/octeontx_ep/otx_ep_vf.h
+++ b/drivers/net/octeontx_ep/otx_ep_vf.h
@@ -4,13 +4,38 @@
 #ifndef _OTX_EP_VF_H_
 #define _OTX_EP_VF_H_
 
+
+
+
+
 #define OTX_EP_RING_OFFSET                (0x1ull << 17)
 
 /* OTX_EP VF IQ Registers */
 #define OTX_EP_R_IN_CONTROL_START         (0x10000)
+#define OTX_EP_R_IN_INSTR_BADDR_START     (0x10020)
+#define OTX_EP_R_IN_INSTR_RSIZE_START     (0x10030)
+#define OTX_EP_R_IN_INSTR_DBELL_START     (0x10040)
+#define OTX_EP_R_IN_CNTS_START            (0x10050)
+#define OTX_EP_R_IN_INT_LEVELS_START      (0x10060)
+
 #define OTX_EP_R_IN_CONTROL(ring)  \
 	(OTX_EP_R_IN_CONTROL_START + ((ring) * OTX_EP_RING_OFFSET))
 
+#define OTX_EP_R_IN_INSTR_BADDR(ring)   \
+	(OTX_EP_R_IN_INSTR_BADDR_START + ((ring) * OTX_EP_RING_OFFSET))
+
+#define OTX_EP_R_IN_INSTR_RSIZE(ring)   \
+	(OTX_EP_R_IN_INSTR_RSIZE_START + ((ring) * OTX_EP_RING_OFFSET))
+
+#define OTX_EP_R_IN_INSTR_DBELL(ring)   \
+	(OTX_EP_R_IN_INSTR_DBELL_START + ((ring) * OTX_EP_RING_OFFSET))
+
+#define OTX_EP_R_IN_CNTS(ring)          \
+	(OTX_EP_R_IN_CNTS_START + ((ring) * OTX_EP_RING_OFFSET))
+
+#define OTX_EP_R_IN_INT_LEVELS(ring)    \
+	(OTX_EP_R_IN_INT_LEVELS_START + ((ring) * OTX_EP_RING_OFFSET))
+
 /* OTX_EP VF IQ Masks */
 #define OTX_EP_R_IN_CTL_RPVF_MASK       (0xF)
 #define	OTX_EP_R_IN_CTL_RPVF_POS        (48)
@@ -20,10 +45,38 @@
 #define OTX_EP_R_IN_CTL_IS_64B          (0x1ull << 24)
 #define OTX_EP_R_IN_CTL_ESR             (0x1ull << 1)
 /* OTX_EP VF OQ Registers */
+#define OTX_EP_R_OUT_CNTS_START              (0x10100)
+#define OTX_EP_R_OUT_INT_LEVELS_START        (0x10110)
+#define OTX_EP_R_OUT_SLIST_BADDR_START       (0x10120)
+#define OTX_EP_R_OUT_SLIST_RSIZE_START       (0x10130)
+#define OTX_EP_R_OUT_SLIST_DBELL_START       (0x10140)
 #define OTX_EP_R_OUT_CONTROL_START           (0x10150)
+#define OTX_EP_R_OUT_ENABLE_START            (0x10160)
+
 #define OTX_EP_R_OUT_CONTROL(ring)    \
 	(OTX_EP_R_OUT_CONTROL_START + ((ring) * OTX_EP_RING_OFFSET))
+
+#define OTX_EP_R_OUT_ENABLE(ring)     \
+	(OTX_EP_R_OUT_ENABLE_START + ((ring) * OTX_EP_RING_OFFSET))
+
+#define OTX_EP_R_OUT_SLIST_BADDR(ring)  \
+	(OTX_EP_R_OUT_SLIST_BADDR_START + ((ring) * OTX_EP_RING_OFFSET))
+
+#define OTX_EP_R_OUT_SLIST_RSIZE(ring)  \
+	(OTX_EP_R_OUT_SLIST_RSIZE_START + ((ring) * OTX_EP_RING_OFFSET))
+
+#define OTX_EP_R_OUT_SLIST_DBELL(ring)  \
+	(OTX_EP_R_OUT_SLIST_DBELL_START + ((ring) * OTX_EP_RING_OFFSET))
+
+#define OTX_EP_R_OUT_CNTS(ring)   \
+	(OTX_EP_R_OUT_CNTS_START + ((ring) * OTX_EP_RING_OFFSET))
+
+#define OTX_EP_R_OUT_INT_LEVELS(ring)   \
+	(OTX_EP_R_OUT_INT_LEVELS_START + ((ring) * OTX_EP_RING_OFFSET))
+
 /* OTX_EP VF OQ Masks */
+
+#define OTX_EP_R_OUT_CTL_IDLE         (1ull << 36)
 #define OTX_EP_R_OUT_CTL_ES_I         (1ull << 34)
 #define OTX_EP_R_OUT_CTL_NSR_I        (1ull << 33)
 #define OTX_EP_R_OUT_CTL_ROR_I        (1ull << 32)
-- 
2.17.1



* [dpdk-dev] [PATCH v3 09/11] net/octeontx_ep: Added dev start and stop
  2021-01-26 21:30 [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Nalla Pradeep
                   ` (6 preceding siblings ...)
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 08/11] net/octeontx_ep: Setting up iq and oq registers Nalla Pradeep
@ 2021-01-26 21:30 ` Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 10/11] net/octeontx_ep: Receive data path function added Nalla Pradeep
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Nalla Pradeep @ 2021-01-26 21:30 UTC (permalink / raw)
  Cc: jerinj, sburla, dev, Nalla Pradeep

Dev start and stop operations are added. To support them, internal
functions to enable and disable the IO queues are added as well.
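
A minimal bring-up/tear-down sketch through the generic ethdev API, assuming
the queues were already set up as in the previous patches; the helper name
is an illustrative assumption.

#include <rte_ethdev.h>

/* Hypothetical flow: start the port (enables IQ/OQ and credits the
 * DROQ doorbells via otx_ep_dev_start()), run traffic, then stop it
 * (otx_ep_dev_stop() disables the IO queues).
 */
static int
ep_port_run(uint16_t port_id)
{
	int ret;

	ret = rte_eth_dev_start(port_id);
	if (ret < 0)
		return ret;

	/* ... rx/tx bursts would run here ... */

	return rte_eth_dev_stop(port_id);	/* int return on recent DPDK releases */
}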

Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
---
 drivers/net/octeontx_ep/otx2_ep_vf.c    | 107 ++++++++++++++++++++
 drivers/net/octeontx_ep/otx_ep_common.h |  10 ++
 drivers/net/octeontx_ep/otx_ep_ethdev.c |  48 +++++++++
 drivers/net/octeontx_ep/otx_ep_vf.c     | 128 ++++++++++++++++++++++++
 drivers/net/octeontx_ep/otx_ep_vf.h     |   4 +
 5 files changed, 297 insertions(+)

diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.c b/drivers/net/octeontx_ep/otx2_ep_vf.c
index 9349e66c0..9703ad023 100644
--- a/drivers/net/octeontx_ep/otx2_ep_vf.c
+++ b/drivers/net/octeontx_ep/otx2_ep_vf.c
@@ -190,6 +190,104 @@ otx2_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
 		   rte_read32(droq->pkts_sent_reg));
 }
 
+static int
+otx2_vf_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
+{
+	int64_t loop = SDP_VF_BUSY_LOOP_COUNT;
+	uint64_t reg_val = 0ull;
+
+	/* Reset the doorbell during IQ enable as well, to handle an abrupt
+	 * guest reboot; IQ reset does not clear the doorbell.
+	 */
+	otx2_write64(0xFFFFFFFF, otx_ep->hw_addr +
+		     SDP_VF_R_IN_INSTR_DBELL(q_no));
+
+	while (((otx2_read64(otx_ep->hw_addr +
+		 SDP_VF_R_IN_INSTR_DBELL(q_no))) != 0ull) && loop--) {
+		rte_delay_ms(1);
+	}
+
+	if (loop < 0) {
+		otx_ep_err("INSTR DBELL not coming back to 0\n");
+		return -EIO;
+	}
+
+	reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_ENABLE(q_no));
+	reg_val |= 0x1ull;
+
+	otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_IN_ENABLE(q_no));
+
+	otx2_info("IQ[%d] enable done", q_no);
+
+	return 0;
+}
+
+static int
+otx2_vf_enable_oq(struct otx_ep_device *otx_ep, uint32_t q_no)
+{
+	uint64_t reg_val = 0ull;
+
+	reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_ENABLE(q_no));
+	reg_val |= 0x1ull;
+	otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_OUT_ENABLE(q_no));
+
+	otx2_info("OQ[%d] enable done", q_no);
+
+	return 0;
+}
+
+static int
+otx2_vf_enable_io_queues(struct otx_ep_device *otx_ep)
+{
+	uint32_t q_no = 0;
+	int ret;
+
+	for (q_no = 0; q_no < otx_ep->nb_tx_queues; q_no++) {
+		ret = otx2_vf_enable_iq(otx_ep, q_no);
+		if (ret)
+			return ret;
+	}
+
+	for (q_no = 0; q_no < otx_ep->nb_rx_queues; q_no++)
+		otx2_vf_enable_oq(otx_ep, q_no);
+
+	return 0;
+}
+
+static void
+otx2_vf_disable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
+{
+	uint64_t reg_val = 0ull;
+
+	/* Disable this Input Queue by clearing its enable bit. */
+	reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_ENABLE(q_no));
+	reg_val &= ~0x1ull;
+
+	otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_IN_ENABLE(q_no));
+}
+
+static void
+otx2_vf_disable_oq(struct otx_ep_device *otx_ep, uint32_t q_no)
+{
+	volatile uint64_t reg_val = 0ull;
+
+	reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_ENABLE(q_no));
+	reg_val &= ~0x1ull;
+
+	otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_OUT_ENABLE(q_no));
+}
+
+static void
+otx2_vf_disable_io_queues(struct otx_ep_device *otx_ep)
+{
+	uint32_t q_no = 0;
+
+	for (q_no = 0; q_no < otx_ep->sriov_info.rings_per_vf; q_no++) {
+		otx2_vf_disable_iq(otx_ep, q_no);
+		otx2_vf_disable_oq(otx_ep, q_no);
+	}
+}
+
 static const struct otx_ep_config default_otx2_ep_conf = {
 	/* IQ attributes */
 	.iq                        = {
@@ -249,5 +347,14 @@ otx2_ep_vf_setup_device(struct otx_ep_device *otx_ep)
 
 	otx_ep->fn_list.setup_device_regs   = otx2_vf_setup_device_regs;
 
+	otx_ep->fn_list.enable_io_queues    = otx2_vf_enable_io_queues;
+	otx_ep->fn_list.disable_io_queues   = otx2_vf_disable_io_queues;
+
+	otx_ep->fn_list.enable_iq           = otx2_vf_enable_iq;
+	otx_ep->fn_list.disable_iq          = otx2_vf_disable_iq;
+
+	otx_ep->fn_list.enable_oq           = otx2_vf_enable_oq;
+	otx_ep->fn_list.disable_oq          = otx2_vf_disable_oq;
+
 	return 0;
 }
diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index 85fb946b3..51a6750c6 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -19,6 +19,7 @@
 #define OTX_EP_PCI_RING_ALIGN   65536
 #define SDP_PKIND 40
 #define SDP_OTX2_PKIND 57
+#define OTX_EP_BUSY_LOOP_COUNT      (10000)
 #define OTX_EP_MAX_IOQS_PER_VF 8
 
 #define otx_ep_info(fmt, args...)				\
@@ -362,7 +363,14 @@ struct otx_ep_fn_list {
 
 	int (*setup_device_regs)(struct otx_ep_device *otx_ep);
 
+	int (*enable_io_queues)(struct otx_ep_device *otx_ep);
 	void (*disable_io_queues)(struct otx_ep_device *otx_ep);
+
+	int (*enable_iq)(struct otx_ep_device *otx_ep, uint32_t q_no);
+	void (*disable_iq)(struct otx_ep_device *otx_ep, uint32_t q_no);
+
+	int (*enable_oq)(struct otx_ep_device *otx_ep, uint32_t q_no);
+	void (*disable_oq)(struct otx_ep_device *otx_ep, uint32_t q_no);
 };
 
 /* SRIOV information */
@@ -417,6 +425,8 @@ struct otx_ep_device {
 	/* Device configuration */
 	const struct otx_ep_config *conf;
 
+	int started;
+
 	int port_configured;
 
 	uint64_t rx_offloads;
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 33ddc1aed..5fa315e71 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -61,6 +61,50 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
 	return 0;
 }
 
+static int
+otx_ep_dev_start(struct rte_eth_dev *eth_dev)
+{
+	struct otx_ep_device *otx_epvf;
+	unsigned int q;
+	int ret;
+
+	otx_epvf = (struct otx_ep_device *)OTX_EP_DEV(eth_dev);
+	/* Enable IQ/OQ for this device */
+	ret = otx_epvf->fn_list.enable_io_queues(otx_epvf);
+	if (ret) {
+		otx_ep_err("IOQ enable failed\n");
+		return ret;
+	}
+
+	for (q = 0; q < otx_epvf->nb_rx_queues; q++) {
+		rte_write32(otx_epvf->droq[q]->nb_desc,
+			    otx_epvf->droq[q]->pkts_credit_reg);
+
+		rte_wmb();
+		otx_ep_info("OQ[%d] dbells [%d]\n", q,
+			    rte_read32(otx_epvf->droq[q]->pkts_credit_reg));
+	}
+
+	otx_epvf->started = 1;
+
+	rte_wmb();
+	otx_ep_info("dev started\n");
+
+	return 0;
+}
+
+/* Stop device and disable input/output functions */
+static int
+otx_ep_dev_stop(struct rte_eth_dev *eth_dev)
+{
+	struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev);
+
+	otx_epvf->fn_list.disable_io_queues(otx_epvf);
+	otx_epvf->started = 0;
+
+	return 0;
+}
+
 static int
 otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
 {
@@ -308,6 +352,8 @@ otx_ep_tx_queue_release(void *txq)
 /* Define our ethernet definitions */
 static const struct eth_dev_ops otx_ep_eth_dev_ops = {
 	.dev_configure		= otx_ep_dev_configure,
+	.dev_start		= otx_ep_dev_start,
+	.dev_stop		= otx_ep_dev_stop,
 	.rx_queue_setup	        = otx_ep_rx_queue_setup,
 	.rx_queue_release	= otx_ep_rx_queue_release,
 	.tx_queue_setup	        = otx_ep_tx_queue_setup,
@@ -327,6 +373,8 @@ otx_epdev_exit(struct rte_eth_dev *eth_dev)
 
 	otx_epvf = OTX_EP_DEV(eth_dev);
 
+	otx_epvf->fn_list.disable_io_queues(otx_epvf);
+
 	num_queues = otx_epvf->nb_rx_queues;
 	for (q = 0; q < num_queues; q++) {
 		if (otx_ep_delete_oqs(otx_epvf, q)) {
diff --git a/drivers/net/octeontx_ep/otx_ep_vf.c b/drivers/net/octeontx_ep/otx_ep_vf.c
index e5a747577..326187fa9 100644
--- a/drivers/net/octeontx_ep/otx_ep_vf.c
+++ b/drivers/net/octeontx_ep/otx_ep_vf.c
@@ -205,6 +205,124 @@ otx_ep_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
 	}
 }
 
+static int
+otx_ep_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
+{
+	int64_t loop = OTX_EP_BUSY_LOOP_COUNT;
+	uint64_t reg_val = 0ull;
+
+	/* Reset the doorbell during IQ enable as well, to handle an abrupt
+	 * guest reboot; IQ reset does not clear the doorbell.
+	 */
+	otx_ep_write64(0xFFFFFFFF, otx_ep->hw_addr,
+		       OTX_EP_R_IN_INSTR_DBELL(q_no));
+
+	while (((rte_read64(otx_ep->hw_addr +
+		 OTX_EP_R_IN_INSTR_DBELL(q_no))) != 0ull) && loop--) {
+		rte_delay_ms(1);
+	}
+
+	if (loop < 0) {
+		otx_ep_err("dbell reset failed\n");
+		return -EIO;
+	}
+
+
+	reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_ENABLE(q_no));
+	reg_val |= 0x1ull;
+
+	otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_IN_ENABLE(q_no));
+
+	otx_ep_info("IQ[%d] enable done\n", q_no);
+
+	return 0;
+}
+
+static int
+otx_ep_enable_oq(struct otx_ep_device *otx_ep, uint32_t q_no)
+{
+	uint64_t reg_val = 0ull;
+	int64_t loop = OTX_EP_BUSY_LOOP_COUNT;
+
+	/* Reset the doorbell during OQ enable as well, to handle an abrupt
+	 * guest reboot; OQ reset does not clear the doorbell.
+	 */
+	otx_ep_write64(0xFFFFFFFF, otx_ep->hw_addr,
+		       OTX_EP_R_OUT_SLIST_DBELL(q_no));
+	while (((rte_read64(otx_ep->hw_addr +
+		 OTX_EP_R_OUT_SLIST_DBELL(q_no))) != 0ull) && loop--) {
+		rte_delay_ms(1);
+	}
+	if (loop < 0) {
+		otx_ep_err("dbell reset failed\n");
+		return -EIO;
+	}
+
+
+	reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_ENABLE(q_no));
+	reg_val |= 0x1ull;
+	otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_OUT_ENABLE(q_no));
+
+	otx_ep_info("OQ[%d] enable done\n", q_no);
+
+	return 0;
+}
+
+static int
+otx_ep_enable_io_queues(struct otx_ep_device *otx_ep)
+{
+	uint32_t q_no = 0;
+	int ret;
+
+	for (q_no = 0; q_no < otx_ep->nb_tx_queues; q_no++) {
+		ret = otx_ep_enable_iq(otx_ep, q_no);
+		if (ret)
+			return ret;
+	}
+
+	for (q_no = 0; q_no < otx_ep->nb_rx_queues; q_no++) {
+		ret = otx_ep_enable_oq(otx_ep, q_no);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static void
+otx_ep_disable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
+{
+	uint64_t reg_val = 0ull;
+
+	/* Disable this Input Queue by clearing its enable bit. */
+	reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_ENABLE(q_no));
+	reg_val &= ~0x1ull;
+
+	otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_IN_ENABLE(q_no));
+}
+
+static void
+otx_ep_disable_oq(struct otx_ep_device *otx_ep, uint32_t q_no)
+{
+	uint64_t reg_val = 0ull;
+
+	reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_ENABLE(q_no));
+	reg_val &= ~0x1ull;
+
+	otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_OUT_ENABLE(q_no));
+}
+
+static void
+otx_ep_disable_io_queues(struct otx_ep_device *otx_ep)
+{
+	uint32_t q_no = 0;
+
+	for (q_no = 0; q_no < otx_ep->sriov_info.rings_per_vf; q_no++) {
+		otx_ep_disable_iq(otx_ep, q_no);
+		otx_ep_disable_oq(otx_ep, q_no);
+	}
+}
+
 /* OTX_EP default configuration */
 static const struct otx_ep_config default_otx_ep_conf = {
 	/* IQ attributes */
@@ -267,5 +385,15 @@ otx_ep_vf_setup_device(struct otx_ep_device *otx_ep)
 
 	otx_ep->fn_list.setup_device_regs   = otx_ep_setup_device_regs;
 
+	otx_ep->fn_list.enable_io_queues    = otx_ep_enable_io_queues;
+	otx_ep->fn_list.disable_io_queues   = otx_ep_disable_io_queues;
+
+	otx_ep->fn_list.enable_iq           = otx_ep_enable_iq;
+	otx_ep->fn_list.disable_iq          = otx_ep_disable_iq;
+
+	otx_ep->fn_list.enable_oq           = otx_ep_enable_oq;
+	otx_ep->fn_list.disable_oq          = otx_ep_disable_oq;
+
+
 	return 0;
 }
diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h
index acc16753b..64e4df451 100644
--- a/drivers/net/octeontx_ep/otx_ep_vf.h
+++ b/drivers/net/octeontx_ep/otx_ep_vf.h
@@ -12,6 +12,7 @@
 
 /* OTX_EP VF IQ Registers */
 #define OTX_EP_R_IN_CONTROL_START         (0x10000)
+#define OTX_EP_R_IN_ENABLE_START          (0x10010)
 #define OTX_EP_R_IN_INSTR_BADDR_START     (0x10020)
 #define OTX_EP_R_IN_INSTR_RSIZE_START     (0x10030)
 #define OTX_EP_R_IN_INSTR_DBELL_START     (0x10040)
@@ -21,6 +22,9 @@
 #define OTX_EP_R_IN_CONTROL(ring)  \
 	(OTX_EP_R_IN_CONTROL_START + ((ring) * OTX_EP_RING_OFFSET))
 
+#define OTX_EP_R_IN_ENABLE(ring)   \
+	(OTX_EP_R_IN_ENABLE_START + ((ring) * OTX_EP_RING_OFFSET))
+
 #define OTX_EP_R_IN_INSTR_BADDR(ring)   \
 	(OTX_EP_R_IN_INSTR_BADDR_START + ((ring) * OTX_EP_RING_OFFSET))
 
-- 
2.17.1



* [dpdk-dev] [PATCH v3 10/11] net/octeontx_ep: Receive data path function added
  2021-01-26 21:30 [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Nalla Pradeep
                   ` (7 preceding siblings ...)
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 09/11] net/octeontx_ep: Added dev start and stop Nalla Pradeep
@ 2021-01-26 21:30 ` Nalla Pradeep
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 11/11] net/octeontx_ep: Transmit " Nalla Pradeep
  2021-01-27  1:09 ` [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Ferruh Yigit
  10 siblings, 0 replies; 12+ messages in thread
From: Nalla Pradeep @ 2021-01-26 21:30 UTC (permalink / raw)
  Cc: jerinj, sburla, dev, Nalla Pradeep

A function to deliver packets from the DROQ to the application is added.
It also refills the DROQ with receive buffers in time, so that the device
can fill them with incoming packets.
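
The receive handler is reached through the standard burst API once the port
is started. Below is a minimal polling sketch; the burst size and helper
name are illustrative assumptions, not part of this patch.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define EP_BURST_SZ 32	/* illustrative burst size */

/* Hypothetical poll loop body: rte_eth_rx_burst() ends up calling
 * otx_ep_recv_pkts(), which also refills the DROQ when the refill
 * threshold is crossed.
 */
static uint16_t
ep_rx_poll_once(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[EP_BURST_SZ];
	uint16_t nb_rx, i;

	nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, EP_BURST_SZ);
	for (i = 0; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);	/* a real app would process the mbuf */

	return nb_rx;
}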

Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
---
 drivers/net/octeontx_ep/otx_ep_common.h |   2 +
 drivers/net/octeontx_ep/otx_ep_ethdev.c |   3 +
 drivers/net/octeontx_ep/otx_ep_rxtx.c   | 297 +++++++++++++++++++++++-
 drivers/net/octeontx_ep/otx_ep_rxtx.h   |  12 +-
 4 files changed, 312 insertions(+), 2 deletions(-)

diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index 51a6750c6..d87333539 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -22,6 +22,8 @@
 #define OTX_EP_BUSY_LOOP_COUNT      (10000)
 #define OTX_EP_MAX_IOQS_PER_VF 8
 
+#define OTX_CUST_DATA_LEN 0
+
 #define otx_ep_info(fmt, args...)				\
 	RTE_LOG(INFO, PMD, fmt "\n", ## args)
 
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 5fa315e71..79572425a 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -151,6 +151,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
 		goto setup_fail;
 	}
 
+	otx_epvf->eth_dev->rx_pkt_burst = &otx_ep_recv_pkts;
 	ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf);
 	otx_epvf->max_rx_queues = ethdev_queues;
 	otx_epvf->max_tx_queues = ethdev_queues;
@@ -411,6 +412,8 @@ otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev)
 		rte_free(eth_dev->data->mac_addrs);
 
 	eth_dev->dev_ops = NULL;
+	eth_dev->rx_pkt_burst = NULL;
+	eth_dev->tx_pkt_burst = NULL;
 
 	return 0;
 }
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index 666411e7c..7b65e3ffe 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -7,6 +7,8 @@
 #include <rte_eal.h>
 #include <rte_mempool.h>
 #include <rte_mbuf.h>
+#include <rte_io.h>
+#include <rte_net.h>
 #include <rte_ethdev_pci.h>
 
 #include "otx_ep_common.h"
@@ -14,6 +16,10 @@
 #include "otx2_ep_vf.h"
 #include "otx_ep_rxtx.h"
 
+/* SDP_LENGTH_S specifies the packet length and is 8 bytes in size */
+#define INFO_SIZE 8
+#define DROQ_REFILL_THRESHOLD 16
+
 static void
 otx_ep_dmazone_free(const struct rte_memzone *mz)
 {
@@ -327,7 +333,8 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
 /* OQ configuration and setup */
 int
 otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
-	       int desc_size, struct rte_mempool *mpool, unsigned int socket_id)
+		 int desc_size, struct rte_mempool *mpool,
+		 unsigned int socket_id)
 {
 	struct otx_ep_droq *droq;
 
@@ -355,3 +362,291 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
 	otx_ep_delete_oqs(otx_ep, oq_no);
 	return -ENOMEM;
 }
+
+static uint32_t
+otx_ep_droq_refill(struct otx_ep_droq *droq)
+{
+	struct otx_ep_droq_desc *desc_ring;
+	struct otx_ep_droq_info *info;
+	struct rte_mbuf *buf = NULL;
+	uint32_t desc_refilled = 0;
+
+	desc_ring = droq->desc_ring;
+
+	while (droq->refill_count && (desc_refilled < droq->nb_desc)) {
+		/* If a valid buffer still exists at this index (it was not
+		 * dispatched to the application), stop refilling here.
+		 */
+		if (droq->recv_buf_list[droq->refill_idx] != NULL)
+			break;
+
+		buf = rte_pktmbuf_alloc(droq->mpool);
+		/* If a buffer could not be allocated, no point in
+		 * continuing
+		 */
+		if (buf == NULL) {
+			droq->stats.rx_alloc_failure++;
+			break;
+		}
+		info = rte_pktmbuf_mtod(buf, struct otx_ep_droq_info *);
+		memset(info, 0, sizeof(*info));
+
+		droq->recv_buf_list[droq->refill_idx] = buf;
+		desc_ring[droq->refill_idx].buffer_ptr =
+					rte_mbuf_data_iova_default(buf);
+
+
+		droq->refill_idx = otx_ep_incr_index(droq->refill_idx, 1,
+				droq->nb_desc);
+
+		desc_refilled++;
+		droq->refill_count--;
+	}
+
+	return desc_refilled;
+}
+
+static struct rte_mbuf *
+otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
+			struct otx_ep_droq *droq, int next_fetch)
+{
+	volatile struct otx_ep_droq_info *info;
+	struct rte_mbuf *droq_pkt2 = NULL;
+	struct rte_mbuf *droq_pkt = NULL;
+	struct rte_net_hdr_lens hdr_lens;
+	struct otx_ep_droq_info *info2;
+	uint64_t total_pkt_len;
+	uint32_t pkt_len = 0;
+	int next_idx;
+
+	droq_pkt  = droq->recv_buf_list[droq->read_idx];
+	droq_pkt2  = droq->recv_buf_list[droq->read_idx];
+	info = rte_pktmbuf_mtod(droq_pkt, struct otx_ep_droq_info *);
+	/* make sure info is available */
+	rte_rmb();
+	if (unlikely(!info->length)) {
+		int retry = OTX_EP_MAX_DELAYED_PKT_RETRIES;
+		/* otx_ep_dbg("OCTEON DROQ[%d]: read_idx: %d; Data not ready "
+		 * "yet, Retry; pending=%lu\n", droq->q_no, droq->read_idx,
+		 * droq->pkts_pending);
+		 */
+		droq->stats.pkts_delayed_data++;
+		while (retry && !info->length)
+			retry--;
+		if (!retry && !info->length) {
+			otx_ep_err("OCTEON DROQ[%d]: read_idx: %d; Retry failed !!\n",
+				   droq->q_no, droq->read_idx);
+			/* Maybe a zero-length packet; drop it */
+			rte_pktmbuf_free(droq_pkt);
+			droq->recv_buf_list[droq->read_idx] = NULL;
+			droq->read_idx = otx_ep_incr_index(droq->read_idx, 1,
+							   droq->nb_desc);
+			droq->stats.dropped_zlp++;
+			droq->refill_count++;
+			goto oq_read_fail;
+		}
+	}
+	if (next_fetch) {
+		next_idx = otx_ep_incr_index(droq->read_idx, 1, droq->nb_desc);
+		droq_pkt2  = droq->recv_buf_list[next_idx];
+		info2 = rte_pktmbuf_mtod(droq_pkt2, struct otx_ep_droq_info *);
+		rte_prefetch_non_temporal((const void *)info2);
+	}
+
+	info->length = rte_bswap64(info->length);
+	/* Deduce the actual data size */
+	total_pkt_len = info->length + INFO_SIZE;
+	if (total_pkt_len <= droq->buffer_size) {
+		info->length -=  OTX_EP_RH_SIZE;
+		droq_pkt  = droq->recv_buf_list[droq->read_idx];
+		if (likely(droq_pkt != NULL)) {
+			droq_pkt->data_off += OTX_EP_DROQ_INFO_SIZE;
+			/* otx_ep_dbg("OQ: pkt_len[%ld], buffer_size %d\n",
+			 * (long)info->length, droq->buffer_size);
+			 */
+			pkt_len = (uint32_t)info->length;
+			droq_pkt->pkt_len  = pkt_len;
+			droq_pkt->data_len  = pkt_len;
+			droq_pkt->port = otx_ep->port_id;
+			droq->recv_buf_list[droq->read_idx] = NULL;
+			droq->read_idx = otx_ep_incr_index(droq->read_idx, 1,
+							   droq->nb_desc);
+			droq->refill_count++;
+		}
+	} else {
+		struct rte_mbuf *first_buf = NULL;
+		struct rte_mbuf *last_buf = NULL;
+
+		while (pkt_len < total_pkt_len) {
+			int cpy_len = 0;
+
+			cpy_len = ((pkt_len + droq->buffer_size) >
+					total_pkt_len)
+					? ((uint32_t)total_pkt_len -
+						pkt_len)
+					: droq->buffer_size;
+
+			droq_pkt = droq->recv_buf_list[droq->read_idx];
+			droq->recv_buf_list[droq->read_idx] = NULL;
+
+			if (likely(droq_pkt != NULL)) {
+				/* Note the first seg */
+				if (!pkt_len)
+					first_buf = droq_pkt;
+
+				droq_pkt->port = otx_ep->port_id;
+				if (!pkt_len) {
+					droq_pkt->data_off +=
+						OTX_EP_DROQ_INFO_SIZE;
+					droq_pkt->pkt_len =
+						cpy_len - OTX_EP_DROQ_INFO_SIZE;
+					droq_pkt->data_len =
+						cpy_len - OTX_EP_DROQ_INFO_SIZE;
+				} else {
+					droq_pkt->pkt_len = cpy_len;
+					droq_pkt->data_len = cpy_len;
+				}
+
+				if (pkt_len) {
+					first_buf->nb_segs++;
+					first_buf->pkt_len += droq_pkt->pkt_len;
+				}
+
+				if (last_buf)
+					last_buf->next = droq_pkt;
+
+				last_buf = droq_pkt;
+			} else {
+				otx_ep_err("no buf\n");
+			}
+
+			pkt_len += cpy_len;
+			droq->read_idx = otx_ep_incr_index(droq->read_idx, 1,
+							   droq->nb_desc);
+			droq->refill_count++;
+		}
+		droq_pkt = first_buf;
+	}
+	droq_pkt->packet_type = rte_net_get_ptype(droq_pkt, &hdr_lens,
+					RTE_PTYPE_ALL_MASK);
+	droq_pkt->l2_len = hdr_lens.l2_len;
+	droq_pkt->l3_len = hdr_lens.l3_len;
+	droq_pkt->l4_len = hdr_lens.l4_len;
+
+	if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
+	    !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
+		rte_pktmbuf_free(droq_pkt);
+		goto oq_read_fail;
+	}
+
+	if (droq_pkt->nb_segs > 1 &&
+	    !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+		rte_pktmbuf_free(droq_pkt);
+		goto oq_read_fail;
+	}
+
+	return droq_pkt;
+
+oq_read_fail:
+	return NULL;
+}
+
+static inline uint32_t
+otx_ep_check_droq_pkts(struct otx_ep_droq *droq)
+{
+	volatile uint64_t pkt_count;
+	uint32_t new_pkts;
+
+	/* Latest available OQ packets */
+	pkt_count = rte_read32(droq->pkts_sent_reg);
+	rte_write32(pkt_count, droq->pkts_sent_reg);
+	new_pkts = pkt_count;
+	/* otx_ep_dbg("Recvd [%d] new OQ pkts\n", new_pkts); */
+	droq->pkts_pending += new_pkts;
+	return new_pkts;
+}
+
+
+/* Check for packets received from OCTEON TX2.
+ * Returns the number of packets delivered to the application.
+ */
+uint16_t
+otx_ep_recv_pkts(void *rx_queue,
+		  struct rte_mbuf **rx_pkts,
+		  uint16_t budget)
+{
+	struct otx_ep_droq *droq = rx_queue;
+	struct otx_ep_device *otx_ep;
+	struct rte_mbuf *oq_pkt;
+
+	uint32_t pkts = 0;
+	uint32_t new_pkts = 0;
+	int next_fetch;
+
+	otx_ep = droq->otx_ep_dev;
+
+	if (droq->pkts_pending > budget) {
+		new_pkts = budget;
+	} else {
+		new_pkts = droq->pkts_pending;
+		new_pkts += otx_ep_check_droq_pkts(droq);
+		if (new_pkts > budget)
+			new_pkts = budget;
+	}
+	if (!new_pkts) {
+		/* otx_ep_dbg("Zero new_pkts:%d\n", new_pkts); */
+		goto update_credit; /* No pkts at this moment */
+	}
+
+	/* otx_ep_dbg("Received new_pkts = %d\n", new_pkts); */
+
+	for (pkts = 0; pkts < new_pkts; pkts++) {
+		/* Push the received pkt to application */
+		next_fetch = (pkts == new_pkts - 1) ? 0 : 1;
+		oq_pkt = otx_ep_droq_read_packet(otx_ep, droq, next_fetch);
+		if (!oq_pkt) {
+			otx_ep_err("DROQ read pkt failed pending %lu last_pkt_count %lu new_pkts %d.\n",
+				   droq->pkts_pending, droq->last_pkt_count,
+				   new_pkts);
+			droq->pkts_pending -= pkts;
+			droq->stats.rx_err++;
+			goto finish;
+		}
+		/* rte_pktmbuf_dump(stdout, oq_pkt,
+		 * rte_pktmbuf_pkt_len(oq_pkt));
+		 */
+		rx_pkts[pkts] = oq_pkt;
+		/* Stats */
+		droq->stats.pkts_received++;
+		droq->stats.bytes_received += oq_pkt->pkt_len;
+	}
+	droq->pkts_pending -= pkts;
+	/* otx_ep_dbg("DROQ pkts[%d] pushed to application\n", pkts); */
+
+	/* Refill DROQ buffers */
+update_credit:
+	if (droq->refill_count >= DROQ_REFILL_THRESHOLD) {
+		int desc_refilled = otx_ep_droq_refill(droq);
+
+		/* Flush the droq descriptor data to memory to be sure
+		 * that when we update the credits the data in memory is
+		 * accurate.
+		 */
+		rte_wmb();
+		rte_write32(desc_refilled, droq->pkts_credit_reg);
+		/* otx_ep_dbg("Refilled count = %d\n", desc_refilled); */
+	} else {
+		/*
+		 * SDP output goes into DROP state when output doorbell count
+		 * SDP output goes into DROP state when the output doorbell
+		 * count goes below the drop count. When the doorbell count is
+		 * written with a value greater than the drop count, SDP output
+		 * should come out of DROP state; due to a race condition this
+		 * does not always happen. Writing the doorbell register with 0
+		 * again may bring SDP output out of this state.
+
+		rte_write32(0, droq->pkts_credit_reg);
+	}
+finish:
+	return pkts;
+}
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.h b/drivers/net/octeontx_ep/otx_ep_rxtx.h
index 9779e96b6..d8b411459 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.h
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.h
@@ -7,4 +7,14 @@
 
 #define OTX_EP_RXD_ALIGN 1
 #define OTX_EP_TXD_ALIGN 1
-#endif
+#define OTX_EP_MAX_DELAYED_PKT_RETRIES 10000
+static inline uint32_t
+otx_ep_incr_index(uint32_t index, uint32_t count, uint32_t max)
+{
+	return ((index + count) & (max - 1));
+}
+uint16_t
+otx_ep_recv_pkts(void *rx_queue,
+		  struct rte_mbuf **rx_pkts,
+		  uint16_t budget);
+#endif /* _OTX_EP_RXTX_H_ */
-- 
2.17.1



* [dpdk-dev] [PATCH v3 11/11] net/octeontx_ep: Transmit data path function added
  2021-01-26 21:30 [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Nalla Pradeep
                   ` (8 preceding siblings ...)
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 10/11] net/octeontx_ep: Receive data path function added Nalla Pradeep
@ 2021-01-26 21:30 ` Nalla Pradeep
  2021-01-27  1:09 ` [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Ferruh Yigit
  10 siblings, 0 replies; 12+ messages in thread
From: Nalla Pradeep @ 2021-01-26 21:30 UTC (permalink / raw)
  Cc: jerinj, sburla, dev, Nalla Pradeep

1. Packet transmit functions for both otx and otx2 are added; a usage
   sketch follows this list.
2. The transmit (command) queue is flushed when pending commands exceed
   the maximum allowed value (currently 16).
3. Scatter-gather is used when a packet spans multiple buffers.
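
As referenced above, a minimal transmit sketch over the standard burst API;
the helper name is an illustrative assumption, and unsent mbufs are simply
dropped here for brevity.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical helper: rte_eth_tx_burst() dispatches to
 * otx_ep_xmit_pkts() or otx2_ep_xmit_pkts() depending on the chip.
 * Mbufs the driver did not accept remain owned by the caller.
 */
static uint16_t
ep_tx_send(uint16_t port_id, uint16_t queue_id,
	   struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);
	uint16_t i;

	for (i = sent; i < nb_pkts; i++)
		rte_pktmbuf_free(pkts[i]);	/* drop what could not be queued */

	return sent;
}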

Signed-off-by: Nalla Pradeep <pnalla@marvell.com>
---
 drivers/net/octeontx_ep/otx2_ep_vf.h    |  19 +
 drivers/net/octeontx_ep/otx_ep_common.h |  51 +++
 drivers/net/octeontx_ep/otx_ep_ethdev.c |   5 +
 drivers/net/octeontx_ep/otx_ep_rxtx.c   | 448 +++++++++++++++++++++++-
 drivers/net/octeontx_ep/otx_ep_rxtx.h   |  26 ++
 drivers/net/octeontx_ep/otx_ep_vf.h     |  68 ++++
 6 files changed, 615 insertions(+), 2 deletions(-)

diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.h b/drivers/net/octeontx_ep/otx2_ep_vf.h
index 191fee426..5e5aefbc1 100644
--- a/drivers/net/octeontx_ep/otx2_ep_vf.h
+++ b/drivers/net/octeontx_ep/otx2_ep_vf.h
@@ -7,5 +7,24 @@
 int
 otx2_ep_vf_setup_device(struct otx_ep_device *sdpvf);
 
+struct otx2_ep_instr_64B {
+	/* Pointer where the input data is available. */
+	uint64_t dptr;
+
+	/* OTX_EP Instruction Header. */
+	union otx_ep_instr_ih ih;
+
+	/** Pointer where the response for a RAW mode packet
+	 * will be written by OCTEON TX2.
+	 */
+	uint64_t rptr;
+
+	/* Input Request Header. */
+	union otx_ep_instr_irh irh;
+
+	/* Additional headers available in a 64-byte instruction. */
+	uint64_t exhdr[4];
+};
+
 #endif /*_OTX2_EP_VF_H_ */
 
diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index d87333539..c323e8d5b 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -4,6 +4,10 @@
 #ifndef _OTX_EP_COMMON_H_
 #define _OTX_EP_COMMON_H_
 
+
+#define OTX_EP_NW_PKT_OP               0x1220
+#define OTX_EP_NW_CMD_OP               0x1221
+
 #define OTX_EP_MAX_RINGS_PER_VF        (8)
 #define OTX_EP_CFG_IO_QUEUES        OTX_EP_MAX_RINGS_PER_VF
 #define OTX_EP_64BYTE_INSTR         (64)
@@ -16,9 +20,24 @@
 
 #define OTX_EP_OQ_INFOPTR_MODE      (0)
 #define OTX_EP_OQ_REFIL_THRESHOLD   (16)
+
+/* IQ instruction req types */
+#define OTX_EP_REQTYPE_NONE             (0)
+#define OTX_EP_REQTYPE_NORESP_INSTR     (1)
+#define OTX_EP_REQTYPE_NORESP_NET_DIRECT       (2)
+#define OTX_EP_REQTYPE_NORESP_NET       OTX_EP_REQTYPE_NORESP_NET_DIRECT
+#define OTX_EP_REQTYPE_NORESP_GATHER    (3)
+#define OTX_EP_NORESP_OHSM_SEND     (4)
+#define OTX_EP_NORESP_LAST          (4)
 #define OTX_EP_PCI_RING_ALIGN   65536
 #define SDP_PKIND 40
 #define SDP_OTX2_PKIND 57
+
+#define ORDERED_TAG    0
+#define ATOMIC_TAG     1
+#define NULL_TAG       2
+#define NULL_NULL_TAG  3
+
 #define OTX_EP_BUSY_LOOP_COUNT      (10000)
 #define OTX_EP_MAX_IOQS_PER_VF 8
 
@@ -445,8 +464,40 @@ int otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
 		     unsigned int socket_id);
 int otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no);
 
+struct otx_ep_sg_entry {
+	/** The first 64 bits give the size of the data in each dptr. */
+	union {
+		uint16_t size[4];
+		uint64_t size64;
+	} u;
+
+	/** The 4 dptr pointers for this entry. */
+	uint64_t ptr[4];
+};
+
+#define OTX_EP_SG_ENTRY_SIZE	(sizeof(struct otx_ep_sg_entry))
+
+/** Node in the list of gather components maintained by the driver for
+ *  each network device.
+ */
+struct otx_ep_gather {
+	/** Number of gather entries. */
+	int num_sg;
+
+	/** Gather component that can accommodate a maximum-sized fragment
+	 *  list received from the application.
+	 */
+	struct otx_ep_sg_entry *sg;
+};
+
+struct otx_ep_buf_free_info {
+	struct rte_mbuf *mbuf;
+	struct otx_ep_gather g;
+};
+
 #define OTX_EP_MAX_PKT_SZ 64000U
 #define OTX_EP_MAX_MAC_ADDRS 1
+#define OTX_EP_SG_ALIGN 8
 
 extern int otx_net_ep_logtype;
 #endif  /* _OTX_EP_COMMON_H_ */
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 79572425a..913d7e581 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -152,6 +152,11 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
 	}
 
 	otx_epvf->eth_dev->rx_pkt_burst = &otx_ep_recv_pkts;
+	if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX_EP_VF)
+		otx_epvf->eth_dev->tx_pkt_burst = &otx_ep_xmit_pkts;
+	else if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX2_EP_NET_VF ||
+		 otx_epvf->chip_id == PCI_DEVID_CN98XX_EP_NET_VF)
+		otx_epvf->eth_dev->tx_pkt_burst = &otx2_ep_xmit_pkts;
 	ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf);
 	otx_epvf->max_rx_queues = ethdev_queues;
 	otx_epvf->max_tx_queues = ethdev_queues;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index 7b65e3ffe..a6625c79b 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -130,8 +130,6 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
 	iq->flush_index = 0;
 	iq->instr_pending = 0;
 
-
-
 	otx_ep->io_qmask.iq |= (1ull << iq_no);
 
 	/* Set 32B/64B mode for each input queue */
@@ -363,6 +361,452 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
 	return -ENOMEM;
 }
 
+static inline void
+otx_ep_iqreq_delete(struct otx_ep_instr_queue *iq, uint32_t idx)
+{
+	uint32_t reqtype;
+	void *buf;
+	struct otx_ep_buf_free_info *finfo;
+
+	buf     = iq->req_list[idx].buf;
+	reqtype = iq->req_list[idx].reqtype;
+
+	switch (reqtype) {
+	case OTX_EP_REQTYPE_NORESP_NET:
+		rte_pktmbuf_free((struct rte_mbuf *)buf);
+		otx_ep_dbg("IQ buffer freed at idx[%d]\n", idx);
+		break;
+
+	case OTX_EP_REQTYPE_NORESP_GATHER:
+		finfo = (struct otx_ep_buf_free_info *)buf;
+		/* This will take care of multiple segments also */
+		rte_pktmbuf_free(finfo->mbuf);
+		rte_free(finfo->g.sg);
+		rte_free(finfo);
+		break;
+
+	case OTX_EP_REQTYPE_NONE:
+	default:
+		otx_ep_info("Unsupported iqreq reqtype: %d\n", reqtype);
+	}
+
+	/* Reset the request list at this index */
+	iq->req_list[idx].buf = NULL;
+	iq->req_list[idx].reqtype = 0;
+}
+
+static inline void
+otx_ep_iqreq_add(struct otx_ep_instr_queue *iq, void *buf,
+		uint32_t reqtype, int index)
+{
+	iq->req_list[index].buf = buf;
+	iq->req_list[index].reqtype = reqtype;
+
+	/*otx_ep_dbg("IQ buffer added at idx[%d]\n", iq->host_write_index);*/
+}
+
+static uint32_t
+otx_vf_update_read_index(struct otx_ep_instr_queue *iq)
+{
+	uint32_t new_idx = rte_read32(iq->inst_cnt_reg);
+	if (unlikely(new_idx == 0xFFFFFFFFU)) {
+		/*otx2_sdp_dbg("%s Going to reset IQ index\n", __func__);*/
+		rte_write32(new_idx, iq->inst_cnt_reg);
+	}
+	/* The count modulo the IQ size (a power of two) gives the new
+	 * read index.
+	 */
+	new_idx &= (iq->nb_desc - 1);
+
+	return new_idx;
+}
+
+static void
+otx_ep_flush_iq(struct otx_ep_instr_queue *iq)
+{
+	uint32_t instr_processed = 0;
+
+	iq->otx_read_index = otx_vf_update_read_index(iq);
+	while (iq->flush_index != iq->otx_read_index) {
+		/* Free the IQ data buffer to the pool */
+		otx_ep_iqreq_delete(iq, iq->flush_index);
+		iq->flush_index =
+			otx_ep_incr_index(iq->flush_index, 1, iq->nb_desc);
+
+		instr_processed++;
+	}
+
+	iq->stats.instr_processed = instr_processed;
+	iq->instr_pending -= instr_processed;
+}
+
+static inline void
+otx_ep_ring_doorbell(struct otx_ep_device *otx_ep __rte_unused,
+		struct otx_ep_instr_queue *iq)
+{
+	rte_wmb();
+	rte_write64(iq->fill_cnt, iq->doorbell_reg);
+	iq->fill_cnt = 0;
+}
+
+static inline int
+post_iqcmd(struct otx_ep_instr_queue *iq, uint8_t *iqcmd)
+{
+	uint8_t *iqptr, cmdsize;
+
+	/* This ensures that the read index does not wrap around to the
+	 * same position if the queue becomes full before OCTEON TX2 has
+	 * fetched any instructions.
+	 */
+	if (iq->instr_pending > (iq->nb_desc - 1))
+		return OTX_EP_IQ_SEND_FAILED;
+
+	/* Copy cmd into iq */
+	cmdsize = 64;
+	iqptr   = iq->base_addr + (iq->host_write_index << 6);
+
+	rte_memcpy(iqptr, iqcmd, cmdsize);
+
+	/* Increment the host write index */
+	iq->host_write_index =
+		otx_ep_incr_index(iq->host_write_index, 1, iq->nb_desc);
+
+	iq->fill_cnt++;
+
+	/* Flush the command into memory. We need to be sure the data
+	 * is in memory before indicating that the instruction is
+	 * pending.
+	 */
+	iq->instr_pending++;
+
+	return OTX_EP_IQ_SEND_SUCCESS;
+}
+
+
+static int
+otx_ep_send_data(struct otx_ep_device *otx_ep, struct otx_ep_instr_queue *iq,
+		 void *cmd, int dbell)
+{
+	int ret;
+
+	/* Submit IQ command */
+	ret = post_iqcmd(iq, cmd);
+
+	if (ret == OTX_EP_IQ_SEND_SUCCESS) {
+		if (dbell)
+			otx_ep_ring_doorbell(otx_ep, iq);
+		iq->stats.instr_posted++;
+
+	} else {
+		iq->stats.instr_dropped++;
+		if (iq->fill_cnt)
+			otx_ep_ring_doorbell(otx_ep, iq);
+	}
+	return ret;
+}
+
+static inline void
+set_sg_size(struct otx_ep_sg_entry *sg_entry, uint16_t size, uint32_t pos)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	sg_entry->u.size[pos] = size;
+#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+	sg_entry->u.size[3 - pos] = size;
+#endif
+}
+
+/* Enqueue requests/packets to the OTX_EP IQ queue.
+ * Returns the number of requests enqueued successfully.
+ */
+uint16_t
+otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct otx_ep_instr_64B iqcmd;
+	struct otx_ep_instr_queue *iq;
+	struct otx_ep_device *otx_ep;
+	struct rte_mbuf *m;
+
+	uint32_t iqreq_type, sgbuf_sz;
+	int dbell, index, count = 0;
+	unsigned int pkt_len, i;
+	int gather, gsz;
+	void *iqreq_buf;
+	uint64_t dptr;
+
+	iq = (struct otx_ep_instr_queue *)tx_queue;
+	otx_ep = iq->otx_ep_dev;
+
+	/* if (!otx_ep->started || !otx_ep->linkup) {
+	 *	goto xmit_fail;
+	 * }
+	 */
+
+	iqcmd.ih.u64 = 0;
+	iqcmd.pki_ih3.u64 = 0;
+	iqcmd.irh.u64 = 0;
+
+	/* ih invars */
+	iqcmd.ih.s.fsz = OTX_EP_FSZ;
+	iqcmd.ih.s.pkind = otx_ep->pkind; /* PKIND value decided by the SDK */
+
+	/* pki ih3 invars */
+	iqcmd.pki_ih3.s.w = 1;
+	iqcmd.pki_ih3.s.utt = 1;
+	iqcmd.pki_ih3.s.tagtype = ORDERED_TAG;
+	/* sl will be sizeof(pki_ih3) */
+	iqcmd.pki_ih3.s.sl = OTX_EP_FSZ + OTX_CUST_DATA_LEN;
+
+	/* irh invars */
+	iqcmd.irh.s.opcode = OTX_EP_NW_PKT_OP;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = pkts[i];
+		if (m->nb_segs == 1) {
+			/* dptr */
+			dptr = rte_mbuf_data_iova(m);
+			pkt_len = rte_pktmbuf_data_len(m);
+			iqreq_buf = m;
+			iqreq_type = OTX_EP_REQTYPE_NORESP_NET;
+			gather = 0;
+			gsz = 0;
+		} else {
+			struct otx_ep_buf_free_info *finfo;
+			int j, frags, num_sg;
+
+			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+				goto xmit_fail;
+
+			finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
+							sizeof(*finfo), 0);
+			if (finfo == NULL) {
+				otx_ep_err("free buffer alloc failed\n");
+				goto xmit_fail;
+			}
+			num_sg = (m->nb_segs + 3) / 4;
+			sgbuf_sz = sizeof(struct otx_ep_sg_entry) * num_sg;
+			finfo->g.sg =
+				rte_zmalloc(NULL, sgbuf_sz, OTX_EP_SG_ALIGN);
+			if (finfo->g.sg == NULL) {
+				rte_free(finfo);
+				otx_ep_err("sg entry alloc failed\n");
+				goto xmit_fail;
+			}
+			gather = 1;
+			gsz = m->nb_segs;
+			finfo->g.num_sg = num_sg;
+			finfo->g.sg[0].ptr[0] = rte_mbuf_data_iova(m);
+			set_sg_size(&finfo->g.sg[0], m->data_len, 0);
+			pkt_len = m->data_len;
+			finfo->mbuf = m;
+
+			frags = m->nb_segs - 1;
+			j = 1;
+			m = m->next;
+			while (frags--) {
+				finfo->g.sg[(j >> 2)].ptr[(j & 3)] =
+						rte_mbuf_data_iova(m);
+				set_sg_size(&finfo->g.sg[(j >> 2)],
+						m->data_len, (j & 3));
+				pkt_len += m->data_len;
+				j++;
+				m = m->next;
+			}
+			dptr = rte_mem_virt2iova(finfo->g.sg);
+			iqreq_buf = finfo;
+			iqreq_type = OTX_EP_REQTYPE_NORESP_GATHER;
+			if (pkt_len > OTX_EP_MAX_PKT_SZ) {
+				rte_free(finfo->g.sg);
+				rte_free(finfo);
+				otx_ep_err("pkt too long: %u\n", pkt_len);
+				goto xmit_fail;
+			}
+		}
+		/* ih vars */
+		iqcmd.ih.s.tlen = pkt_len + iqcmd.ih.s.fsz;
+		iqcmd.ih.s.gather = gather;
+		iqcmd.ih.s.gsz = gsz;
+		/* PKI_IH3 vars */
+		/* irh vars */
+		/* irh.rlenssz = ; */
+
+		iqcmd.dptr = dptr;
+		/* Swap FSZ (front data) here to avoid swapping on the
+		 * OCTEON TX side; rptr is not used, so it is not swapped.
+		 */
+		/* otx_ep_swap_8B_data(&iqcmd.rptr, 1); */
+		otx_ep_swap_8B_data(&iqcmd.irh.u64, 1);
+
+#ifdef OTX_EP_IO_DEBUG
+		otx_ep_dbg("After swapping\n");
+		otx_ep_dbg("Word0 [dptr]: 0x%016lx\n",
+			   (unsigned long)iqcmd.dptr);
+		otx_ep_dbg("Word1 [ihtx]: 0x%016lx\n",
+			   (unsigned long)iqcmd.ih.u64);
+		otx_ep_dbg("Word2 [pki_ih3]: 0x%016lx\n",
+			   (unsigned long)iqcmd.pki_ih3.u64);
+		otx_ep_dbg("Word3 [rptr]: 0x%016lx\n",
+			   (unsigned long)iqcmd.rptr);
+		otx_ep_dbg("Word4 [irh]: 0x%016lx\n",
+			   (unsigned long)iqcmd.irh.u64);
+		otx_ep_dbg("Word5 [exhdr[0]]: 0x%016lx\n",
+			   (unsigned long)iqcmd.exhdr[0]);
+		rte_pktmbuf_dump(stdout, pkts[i], rte_pktmbuf_pkt_len(pkts[i]));
+#endif
+		dbell = (i == (unsigned int)(nb_pkts - 1)) ? 1 : 0;
+		index = iq->host_write_index;
+		if (otx_ep_send_data(otx_ep, iq, &iqcmd, dbell))
+			goto xmit_fail;
+		otx_ep_iqreq_add(iq, iqreq_buf, iqreq_type, index);
+		iq->stats.tx_pkts++;
+		iq->stats.tx_bytes += pkt_len;
+		count++;
+	}
+
+xmit_fail:
+	if (iq->instr_pending >= OTX_EP_MAX_INSTR)
+		otx_ep_flush_iq(iq);
+
+	/* Return the number of instructions posted successfully. */
+	return count;
+}
+
+/* Enqueue requests/packets to the OTX_EP IQ queue.
+ * Returns the number of requests enqueued successfully.
+ */
+uint16_t
+otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct otx2_ep_instr_64B iqcmd2;
+	struct otx_ep_instr_queue *iq;
+	struct otx_ep_device *otx_ep;
+	uint64_t dptr;
+	int count = 0;
+	unsigned int i;
+	struct rte_mbuf *m;
+	unsigned int pkt_len;
+	void *iqreq_buf;
+	uint32_t iqreq_type, sgbuf_sz;
+	int gather, gsz;
+	int dbell;
+	int index;
+
+	iq = (struct otx_ep_instr_queue *)tx_queue;
+	otx_ep = iq->otx_ep_dev;
+
+	iqcmd2.ih.u64 = 0;
+	iqcmd2.irh.u64 = 0;
+
+	/* ih invars */
+	iqcmd2.ih.s.fsz = OTX2_EP_FSZ;
+	iqcmd2.ih.s.pkind = otx_ep->pkind; /* PKIND value decided by the SDK */
+	/* irh invars */
+	iqcmd2.irh.s.opcode = OTX_EP_NW_PKT_OP;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = pkts[i];
+		if (m->nb_segs == 1) {
+			/* dptr */
+			dptr = rte_mbuf_data_iova(m);
+			pkt_len = rte_pktmbuf_data_len(m);
+			iqreq_buf = m;
+			iqreq_type = OTX_EP_REQTYPE_NORESP_NET;
+			gather = 0;
+			gsz = 0;
+		} else {
+			struct otx_ep_buf_free_info *finfo;
+			int j, frags, num_sg;
+
+			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+				goto xmit_fail;
+
+			finfo = (struct otx_ep_buf_free_info *)
+					rte_malloc(NULL, sizeof(*finfo), 0);
+			if (finfo == NULL) {
+				otx_ep_err("free buffer alloc failed\n");
+				goto xmit_fail;
+			}
+			num_sg = (m->nb_segs + 3) / 4;
+			sgbuf_sz = sizeof(struct otx_ep_sg_entry) * num_sg;
+			finfo->g.sg =
+				rte_zmalloc(NULL, sgbuf_sz, OTX_EP_SG_ALIGN);
+			if (finfo->g.sg == NULL) {
+				rte_free(finfo);
+				otx_ep_err("sg entry alloc failed\n");
+				goto xmit_fail;
+			}
+			gather = 1;
+			gsz = m->nb_segs;
+			finfo->g.num_sg = num_sg;
+			finfo->g.sg[0].ptr[0] = rte_mbuf_data_iova(m);
+			set_sg_size(&finfo->g.sg[0], m->data_len, 0);
+			pkt_len = m->data_len;
+			finfo->mbuf = m;
+
+			frags = m->nb_segs - 1;
+			j = 1;
+			m = m->next;
+			while (frags--) {
+				finfo->g.sg[(j >> 2)].ptr[(j & 3)] =
+						rte_mbuf_data_iova(m);
+				set_sg_size(&finfo->g.sg[(j >> 2)],
+						m->data_len, (j & 3));
+				pkt_len += m->data_len;
+				j++;
+				m = m->next;
+			}
+			dptr = rte_mem_virt2iova(finfo->g.sg);
+			iqreq_buf = finfo;
+			iqreq_type = OTX_EP_REQTYPE_NORESP_GATHER;
+			if (pkt_len > OTX_EP_MAX_PKT_SZ) {
+				rte_free(finfo->g.sg);
+				rte_free(finfo);
+				otx_ep_err("pkt too long: %u\n", pkt_len);
+				goto xmit_fail;
+			}
+		}
+		/* ih vars */
+		iqcmd2.ih.s.tlen = pkt_len + iqcmd2.ih.s.fsz;
+		iqcmd2.ih.s.gather = gather;
+		iqcmd2.ih.s.gsz = gsz;
+		/* irh vars */
+		/* irh.rlenssz = ; */
+		iqcmd2.dptr = dptr;
+		/* Swap FSZ (front data) here to avoid swapping on the
+		 * OCTEON TX side; rptr is not used, so it is not swapped.
+		 */
+		/* otx_ep_swap_8B_data(&iqcmd2.rptr, 1); */
+		otx_ep_swap_8B_data(&iqcmd2.irh.u64, 1);
+
+#ifdef OTX_EP_IO_DEBUG
+		otx_ep_dbg("After swapping\n");
+		otx_ep_dbg("Word0 [dptr]: 0x%016lx\n",
+			   (unsigned long)iqcmd2.dptr);
+		otx_ep_dbg("Word1 [ih]: 0x%016lx\n",
+			   (unsigned long)iqcmd2.ih.u64);
+		otx_ep_dbg("Word2 [rptr]: 0x%016lx\n",
+			   (unsigned long)iqcmd2.rptr);
+		otx_ep_dbg("Word3 [irh]: 0x%016lx\n",
+			   (unsigned long)iqcmd2.irh.u64);
+		otx_ep_dbg("Word4 [exhdr[0]]: 0x%016lx\n",
+			   (unsigned long)iqcmd2.exhdr[0]);
+#endif
+		/* rte_pktmbuf_dump(stdout, m, rte_pktmbuf_pkt_len(m)); */
+		index = iq->host_write_index;
+		dbell = (i == (unsigned int)(nb_pkts - 1)) ? 1 : 0;
+		if (otx_ep_send_data(otx_ep, iq, &iqcmd2, dbell))
+			goto xmit_fail;
+		otx_ep_iqreq_add(iq, iqreq_buf, iqreq_type, index);
+		iq->stats.tx_pkts++;
+		iq->stats.tx_bytes += pkt_len;
+		count++;
+	}
+
+xmit_fail:
+	if (iq->instr_pending >= OTX_EP_MAX_INSTR)
+		otx_ep_flush_iq(iq);
+
+	/* Return the number of instructions posted successfully. */
+	return count;
+}
+
 static uint32_t
 otx_ep_droq_refill(struct otx_ep_droq *droq)
 {
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.h b/drivers/net/octeontx_ep/otx_ep_rxtx.h
index d8b411459..1527d350b 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.h
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.h
@@ -5,15 +5,41 @@
 #ifndef _OTX_EP_RXTX_H_
 #define _OTX_EP_RXTX_H_
 
+#include <rte_byteorder.h>
+
 #define OTX_EP_RXD_ALIGN 1
 #define OTX_EP_TXD_ALIGN 1
+
+#define OTX_EP_IQ_SEND_FAILED      (-1)
+#define OTX_EP_IQ_SEND_SUCCESS     (0)
+
 #define OTX_EP_MAX_DELAYED_PKT_RETRIES 10000
+
+#define OTX_EP_FSZ 28
+#define OTX2_EP_FSZ 24
+#define OTX_EP_MAX_INSTR 16
+
+static inline void
+otx_ep_swap_8B_data(uint64_t *data, uint32_t blocks)
+{
+	/* Swap 8B blocks */
+	while (blocks) {
+		*data = rte_bswap64(*data);
+		blocks--;
+		data++;
+	}
+}
+
 static inline uint32_t
 otx_ep_incr_index(uint32_t index, uint32_t count, uint32_t max)
 {
 	return ((index + count) & (max - 1));
 }
 uint16_t
+otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts);
+uint16_t
+otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts);
+uint16_t
 otx_ep_recv_pkts(void *rx_queue,
 		  struct rte_mbuf **rx_pkts,
 		  uint16_t budget);
diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h
index 64e4df451..b87afa40f 100644
--- a/drivers/net/octeontx_ep/otx_ep_vf.h
+++ b/drivers/net/octeontx_ep/otx_ep_vf.h
@@ -100,6 +100,74 @@
  */
 #define SDP_GBL_WMARK 0x100
 
+
+/* Optional PKI Instruction Header(PKI IH) */
+typedef union {
+	uint64_t u64;
+	struct {
+		/** Tag Value */
+		uint64_t tag:32;
+
+		/** QPG Value */
+		uint64_t qpg:11;
+
+		/** Reserved1 */
+		uint64_t reserved1:2;
+
+		/** Tag type */
+		uint64_t tagtype:2;
+
+		/** Use Tag Type */
+		uint64_t utt:1;
+
+		/** Skip Length */
+		uint64_t sl:8;
+
+		/** Parse Mode */
+		uint64_t pm:3;
+
+		/** Reserved2 */
+		uint64_t reserved2:1;
+
+		/** Use QPG */
+		uint64_t uqpg:1;
+
+		/** Use Tag */
+		uint64_t utag:1;
+
+		/** Raw mode indicator 1 = RAW */
+		uint64_t raw:1;
+
+		/** Wider bit */
+		uint64_t w:1;
+	} s;
+} otx_ep_instr_pki_ih3_t;
+
+
+/* OTX_EP 64B instruction format */
+struct otx_ep_instr_64B {
+	/* Pointer where the input data is available. */
+	uint64_t dptr;
+
+	/* OTX_EP Instruction Header. */
+	union otx_ep_instr_ih ih;
+
+	/* PKI Optional Instruction Header. */
+	otx_ep_instr_pki_ih3_t pki_ih3;
+
+	/** Pointer where the response for a RAW mode packet
+	 * will be written by OCTEON TX.
+	 */
+	uint64_t rptr;
+
+	/* Input Request Header. */
+	union otx_ep_instr_irh irh;
+
+	/* Additional headers available in a 64-byte instruction. */
+	uint64_t exhdr[3];
+};
+#define OTX_EP_64B_INSTR_SIZE	(sizeof(struct otx_ep_instr_64B))
+
 int
 otx_ep_vf_setup_device(struct otx_ep_device *otx_ep);
 #endif /*_OTX_EP_VF_H_ */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure
  2021-01-26 21:30 [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Nalla Pradeep
                   ` (9 preceding siblings ...)
  2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 11/11] net/octeontx_ep: Transmit " Nalla Pradeep
@ 2021-01-27  1:09 ` Ferruh Yigit
  10 siblings, 0 replies; 12+ messages in thread
From: Ferruh Yigit @ 2021-01-27  1:09 UTC (permalink / raw)
  To: Nalla Pradeep, Thomas Monjalon, Ray Kinsella, Neil Horman
  Cc: jerinj, sburla, dev

On 1/26/2021 9:30 PM, Nalla Pradeep wrote:
> Adding bare minimum PMD library and doc build infrastructure
> and claim the maintainership for octeontx end point PMD.
> 
> Signed-off-by: Nalla Pradeep <pnalla@marvell.com>

Hi Nalla,

From a quick check, many review comments have not been addressed. Can you
please add a change log to the commits to help trace what changed in the new
version?

Also, for the comments that were not acted on, can you please reply on the
previous version, so it is clear whether each comment was missed, ignored, or
intentionally left unchanged, and why?

Sending new versions as a reply to the previous version, using the
'git send-email' '--in-reply-to' option, keeps them in the same email thread
and makes it easier to follow previous comments and the history.
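
For example, something like the following, where the Message-ID, list
address and file names are placeholders only:

  git send-email --in-reply-to='<v2-cover-letter-message-id>' \
      --to=dev@dpdk.org v4/*.patch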

Would you mind sending a v4 as a reply to v2, addressing the points above?


Thanks,
ferruh

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2021-01-27  1:10 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-26 21:30 [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Nalla Pradeep
2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 02/11] net/octeontx_ep: add ethdev probe and remove Nalla Pradeep
2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 03/11] net/octeontx_ep: add device init and uninit Nalla Pradeep
2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 04/11] net/octeontx_ep: Added basic device setup Nalla Pradeep
2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 05/11] net/octeontx_ep: Add dev info get and configure Nalla Pradeep
2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 06/11] net/octeontx_ep: Added rxq setup and release Nalla Pradeep
2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 07/11] net/octeontx_ep: Added tx queue " Nalla Pradeep
2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 08/11] net/octeontx_ep: Setting up iq and oq registers Nalla Pradeep
2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 09/11] net/octeontx_ep: Added dev start and stop Nalla Pradeep
2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 10/11] net/octeontx_ep: Receive data path function added Nalla Pradeep
2021-01-26 21:30 ` [dpdk-dev] [PATCH v3 11/11] net/octeontx_ep: Transmit " Nalla Pradeep
2021-01-27  1:09 ` [dpdk-dev] [PATCH v3 01/11] net/octeontx_ep: add build and doc infrastructure Ferruh Yigit
