DPDK patches and discussions
* [dpdk-dev] [PATCH 0/6] qede: SR-IOV PF driver support
@ 2020-06-09 19:42 Manish Chopra
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 1/6] net/qede: define PCI config space specific osals Manish Chopra
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Manish Chopra @ 2020-06-09 19:42 UTC (permalink / raw)
  To: jerinjacobk, jerinj, ferruh.yigit
  Cc: dev, irusskikh, rmody, GR-Everest-DPDK-Dev

Hi,

This series adds SR-IOV PF PMD driver support so that VF PMD
driver instances can run on top of a PF PMD driver instance,
allowing the adapter to operate entirely within a DPDK
environment for use cases such as ovs-dpdk.
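
As a roadmap for reviewers, the PF-side flow introduced by the series
condenses to the sketch below. It is illustrative only (error handling
and locking omitted); the function names are taken from the individual
patches:

  /* Patch 2: on PF probe, enable the VFs created through the
   * igb_uio max_vfs sysfs hook.
   */
  qed_ops->sriov_configure(edev, pci_dev->max_vfs);

  /* Patch 3: VF slowpath (mailbox) requests are serviced on the PF
   * from an EAL alarm callback.
   */
  qed_schedule_iov(p_hwfn, QED_IOV_WQ_MSG_FLAG); /* via OSAL_PF_VF_MSG() */
  qed_iov_pf_task(p_hwfn);                       /* alarm context */

  /* Patch 4: link changes are published to the VFs through the
   * bulletin board.
   */
  qed_inform_vf_link_state(hwfn);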

This is an initial bring-up; the following testing has been covered:

* Enable/Disable SR-IOV VFs through igb_uio sysfs hook.
* Load VFs, run fastpath, teardown VFs in hypervisor and guest VM.
* VF FLR flow (in the case of VF PCI passthrough to the guest VM).
* Bulletin mechanism tested to communicate link changes to the VFs.

Note that this series is intended only for the upcoming DPDK release (20.08).
Please consider applying this series to dpdk-next-net-mrvl.git.

Thanks,
Manish

Manish Chopra (6):
  net/qede: define PCI config space specific osals
  net/qede: configure VFs on hardware
  net/qede: add infrastructure support for VF load
  net/qede: initialize VF MAC and link
  net/qede: add VF FLR support
  doc/guides: update qede features list

 doc/guides/nics/features/qede.ini     |   1 +
 doc/guides/nics/qede.rst              |   2 +-
 drivers/net/qede/Makefile             |   1 +
 drivers/net/qede/base/bcm_osal.c      |  71 +++++++++
 drivers/net/qede/base/bcm_osal.h      |  30 +++-
 drivers/net/qede/base/ecore.h         |  27 ++++
 drivers/net/qede/base/ecore_iov_api.h |   3 +
 drivers/net/qede/meson.build          |   1 +
 drivers/net/qede/qede_ethdev.c        |  37 ++++-
 drivers/net/qede/qede_ethdev.h        |   1 +
 drivers/net/qede/qede_if.h            |   1 +
 drivers/net/qede/qede_main.c          |  13 +-
 drivers/net/qede/qede_sriov.c         | 219 ++++++++++++++++++++++++++
 drivers/net/qede/qede_sriov.h         |  22 +++
 14 files changed, 417 insertions(+), 12 deletions(-)
 create mode 100644 drivers/net/qede/qede_sriov.c
 create mode 100644 drivers/net/qede/qede_sriov.h

-- 
2.17.1


* [dpdk-dev] [PATCH 1/6] net/qede: define PCI config space specific osals
  2020-06-09 19:42 [dpdk-dev] [PATCH 0/6] qede: SR-IOV PF driver support Manish Chopra
@ 2020-06-09 19:42 ` Manish Chopra
  2020-06-26  4:53   ` Jerin Jacob
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 2/6] net/qede: configure VFs on hardware Manish Chopra
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Manish Chopra @ 2020-06-09 19:42 UTC (permalink / raw)
  To: jerinjacobk, jerinj, ferruh.yigit
  Cc: dev, irusskikh, rmody, GR-Everest-DPDK-Dev

This patch defines various PCI config space access APIs used to
read and locate IOV-specific PCI capabilities.

With these definitions implemented, the base driver can perform the
SR-IOV-specific initialization and the HW configuration required by
the PF-PMD driver instance.
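
A minimal usage sketch of these OSALs (illustrative only; it mirrors
how the ecore base driver is expected to use them, where p_dev is a
struct ecore_dev pointer carrying the rte_pci_device added below):

  int pos;
  u16 total_vfs = 0;

  /* Locate the SR-IOV extended capability in PCI config space. */
  pos = OSAL_PCI_FIND_EXT_CAPABILITY(p_dev, PCI_EXT_CAP_ID_SRIOV);
  if (pos) {
          /* Read the TotalVFs field of the SR-IOV capability. */
          OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + PCI_SRIOV_TOTAL_VF,
                                    &total_vfs);
  }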

Signed-off-by: Manish Chopra <manishc@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/bcm_osal.c | 38 ++++++++++++++++++++++++++++++++
 drivers/net/qede/base/bcm_osal.h | 15 +++++++++----
 drivers/net/qede/base/ecore.h    | 23 +++++++++++++++++++
 drivers/net/qede/qede_main.c     |  1 +
 4 files changed, 73 insertions(+), 4 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 48d016e24..3cf33a9a7 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -14,6 +14,44 @@
 #include "ecore_iov_api.h"
 #include "ecore_mcp_api.h"
 #include "ecore_l2_api.h"
+#include <rte_bus_pci.h>
+#include <rte_io.h>
+
+int osal_pci_find_next_ext_capability(struct rte_pci_device *dev,
+				      int cap)
+{
+	int pos = PCI_CFG_SPACE_SIZE;
+	uint32_t header;
+	int ttl;
+
+	/* minimum 8 bytes per capability */
+	ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
+
+	if (rte_pci_read_config(dev, &header, 4, pos) < 0)
+		return -1;
+
+	/*
+	 * If we have no capabilities, this is indicated by cap ID,
+	 * cap version and next pointer all being 0.
+	 */
+	if (header == 0)
+		return 0;
+
+	while (ttl-- > 0) {
+		if (PCI_EXT_CAP_ID(header) == cap)
+			return pos;
+
+		pos = PCI_EXT_CAP_NEXT(header);
+
+		if (pos < PCI_CFG_SPACE_SIZE)
+			break;
+
+		if (rte_pci_read_config(dev, &header, 4, pos) < 0)
+			return -1;
+	}
+
+	return 0;
+}
 
 /* Array of memzone pointers */
 static const struct rte_memzone *ecore_mz_mapping[RTE_MAX_MEMZONE];
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 8b2faec5b..7cb887409 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -18,6 +18,7 @@
 #include <rte_debug.h>
 #include <rte_ether.h>
 #include <rte_io.h>
+#include <rte_bus_pci.h>
 
 /* Forward declaration */
 struct ecore_dev;
@@ -284,10 +285,16 @@ typedef struct osal_list_t {
 
 /* PCI config space */
 
-#define OSAL_PCI_READ_CONFIG_BYTE(dev, address, dst) nothing
-#define OSAL_PCI_READ_CONFIG_WORD(dev, address, dst) nothing
-#define OSAL_PCI_READ_CONFIG_DWORD(dev, address, dst) nothing
-#define OSAL_PCI_FIND_EXT_CAPABILITY(dev, pcie_id) 0
+int osal_pci_find_next_ext_capability(struct rte_pci_device *dev,
+				       int cap);
+#define OSAL_PCI_READ_CONFIG_BYTE(dev, address, dst) \
+	rte_pci_read_config((dev)->pci_dev, dst, 1, address)
+#define OSAL_PCI_READ_CONFIG_WORD(dev, address, dst) \
+	rte_pci_read_config((dev)->pci_dev, dst, 2, address)
+#define OSAL_PCI_READ_CONFIG_DWORD(dev, address, dst) \
+	rte_pci_read_config((dev)->pci_dev, dst, 4, address)
+#define OSAL_PCI_FIND_EXT_CAPABILITY(dev, cap) \
+	osal_pci_find_next_ext_capability((dev)->pci_dev, cap)
 #define OSAL_PCI_FIND_CAPABILITY(dev, pcie_id) 0
 #define OSAL_PCI_WRITE_CONFIG_WORD(dev, address, val) nothing
 #define OSAL_BAR_SIZE(dev, bar_id) 0
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b2077bc46..386348e68 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -27,6 +27,26 @@
 #include "ecore_proto_if.h"
 #include "mcp_public.h"
 
+#define PCICFG_VENDOR_ID_OFFSET 0x00
+#define PCICFG_DEVICE_ID_OFFSET 0x02
+#define PCI_CFG_SPACE_SIZE 256
+#define PCI_EXP_DEVCTL 0x0008
+#define PCI_EXT_CAP_ID(header) (int)((header) & 0x0000ffff)
+#define PCI_EXT_CAP_NEXT(header) (((header) >> 20) & 0xffc)
+#define PCI_CFG_SPACE_EXP_SIZE 4096
+
+#define PCI_SRIOV_CTRL 0x08 /* SR-IOV Control */
+#define PCI_SRIOV_TOTAL_VF 0x0e /* Total VFs */
+#define PCI_SRIOV_INITIAL_VF 0x0c /* Initial VFs */
+#define PCI_SRIOV_NUM_VF 0x10 /* Number of VFs */
+#define PCI_SRIOV_VF_OFFSET 0x14 /* First VF Offset */
+#define PCI_SRIOV_VF_STRIDE 0x16 /* Following VF Stride */
+#define PCI_SRIOV_VF_DID 0x1a
+#define PCI_SRIOV_SUP_PGSIZE 0x1c
+#define PCI_SRIOV_CAP 0x04
+#define PCI_SRIOV_FUNC_LINK 0x12
+#define PCI_EXT_CAP_ID_SRIOV 0x10
+
 #define ECORE_MAJOR_VERSION		8
 #define ECORE_MINOR_VERSION		40
 #define ECORE_REVISION_VERSION		26
@@ -916,6 +936,9 @@ struct ecore_dev {
 	/* @DPDK */
 	struct ecore_dbg_feature	dbg_features[DBG_FEATURE_NUM];
 	u8				engine_for_debug;
+
+	/* DPDK specific ecore field */
+	struct rte_pci_device		*pci_dev;
 };
 
 enum ecore_hsi_def_type {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 70357ebb6..62039af6f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -36,6 +36,7 @@ static void qed_init_pci(struct ecore_dev *edev, struct rte_pci_device *pci_dev)
 	edev->regview = pci_dev->mem_resource[0].addr;
 	edev->doorbells = pci_dev->mem_resource[2].addr;
 	edev->db_size = pci_dev->mem_resource[2].len;
+	edev->pci_dev = pci_dev;
 }
 
 static int
-- 
2.17.1


* [dpdk-dev] [PATCH 2/6] net/qede: configure VFs on hardware
  2020-06-09 19:42 [dpdk-dev] [PATCH 0/6] qede: SR-IOV PF driver support Manish Chopra
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 1/6] net/qede: define PCI config space specific osals Manish Chopra
@ 2020-06-09 19:42 ` Manish Chopra
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 3/6] net/qede: add infrastructure support for VF load Manish Chopra
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Manish Chopra @ 2020-06-09 19:42 UTC (permalink / raw)
  To: jerinjacobk, jerinj, ferruh.yigit
  Cc: dev, irusskikh, rmody, GR-Everest-DPDK-Dev

Based on the number of VFs enabled at the PCI level, the PF-PMD
driver instance enables/configures those VFs from the hardware
perspective, so that in later patches they can get the HW access
required to communicate with the PF for slowpath configuration and
run the fastpath themselves.

This patch also adds two new qede IOV files [qede_sriov(.c|.h)]
under the qede directory to hold the non-base-driver IOV APIs.
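
To illustrate the per-VF resource split performed by
qed_sriov_enable_qid_config() below, here is a small self-contained
worked example (the numbers are assumed, not taken from real
hardware):

  #include <stdio.h>

  int main(void)
  {
          /* Assume the PF owns 8 L2 queues and each VF is given 4. */
          const int num_pf_l2_queues = 8, num_queues = 4, num_vfs = 2;
          int vfid;

          for (vfid = 0; vfid < num_vfs; vfid++) {
                  /* Same formula as qed_sriov_enable_qid_config(). */
                  int base = num_pf_l2_queues + vfid * num_queues;

                  printf("VF %d: queues %d..%d, vport %d, RSS engine %d\n",
                         vfid, base, base + num_queues - 1,
                         vfid + 1, vfid + 1);
          }
          return 0;
  }

It prints "VF 0: queues 8..11, vport 1, RSS engine 1" and
"VF 1: queues 12..15, vport 2, RSS engine 2": the VF queue ranges
start right after the PF's own queues, while vport/RSS engine 0 are
kept by the PF.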

Signed-off-by: Manish Chopra <manishc@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/Makefile      |  1 +
 drivers/net/qede/meson.build   |  1 +
 drivers/net/qede/qede_ethdev.c |  1 +
 drivers/net/qede/qede_ethdev.h |  1 +
 drivers/net/qede/qede_if.h     |  1 +
 drivers/net/qede/qede_main.c   |  1 +
 drivers/net/qede/qede_sriov.c  | 85 ++++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_sriov.h  |  9 ++++
 8 files changed, 100 insertions(+)
 create mode 100644 drivers/net/qede/qede_sriov.c
 create mode 100644 drivers/net/qede/qede_sriov.h

diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
index 5810b4d49..6ed776c5a 100644
--- a/drivers/net/qede/Makefile
+++ b/drivers/net/qede/Makefile
@@ -104,5 +104,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_main.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_filter.c
+SRCS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede_sriov.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/qede/meson.build b/drivers/net/qede/meson.build
index 12388a680..7f62cb78d 100644
--- a/drivers/net/qede/meson.build
+++ b/drivers/net/qede/meson.build
@@ -9,4 +9,5 @@ sources = files(
 	'qede_filter.c',
 	'qede_main.c',
 	'qede_rxtx.c',
+	'qede_sriov.c',
 )
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index c4f8f1258..250cd2d0e 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2714,6 +2714,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 		adapter->vxlan.enable = false;
 		adapter->geneve.enable = false;
 		adapter->ipgre.enable = false;
+		qed_ops->sriov_configure(edev, pci_dev->max_vfs);
 	}
 
 	DP_INFO(edev, "MAC address : %02x:%02x:%02x:%02x:%02x:%02x\n",
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index b988a73f2..fcc17e22e 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -34,6 +34,7 @@
 #include "base/ecore_l2.h"
 #include "base/ecore_vf.h"
 
+#include "qede_sriov.h"
 #include "qede_logs.h"
 #include "qede_if.h"
 #include "qede_rxtx.h"
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 858cd51d5..e30161616 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -82,6 +82,7 @@ struct qed_eth_ops {
 	const struct qed_common_ops *common;
 	int (*fill_dev_info)(struct ecore_dev *edev,
 			     struct qed_dev_eth_info *info);
+	void (*sriov_configure)(struct ecore_dev *edev, int num_vfs);
 };
 
 struct qed_link_params {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 62039af6f..a02ef5685 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -786,6 +786,7 @@ const struct qed_common_ops qed_common_ops_pass = {
 const struct qed_eth_ops qed_eth_ops_pass = {
 	INIT_STRUCT_FIELD(common, &qed_common_ops_pass),
 	INIT_STRUCT_FIELD(fill_dev_info, &qed_fill_eth_dev_info),
+	INIT_STRUCT_FIELD(sriov_configure, &qed_sriov_configure),
 };
 
 const struct qed_eth_ops *qed_get_eth_ops(void)
diff --git a/drivers/net/qede/qede_sriov.c b/drivers/net/qede/qede_sriov.c
new file mode 100644
index 000000000..ba4384e90
--- /dev/null
+++ b/drivers/net/qede/qede_sriov.c
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2020 Marvell.
+ * All rights reserved.
+ * www.marvell.com
+ */
+
+#include "qede_sriov.h"
+
+static void qed_sriov_enable_qid_config(struct ecore_hwfn *hwfn,
+					u16 vfid,
+					struct ecore_iov_vf_init_params *params)
+{
+	u16 num_pf_l2_queues, base, i;
+
+	/* Since we have an equal resource distribution per-VF, and we assume
+	 * PF has acquired its first queues, we start setting sequentially from
+	 * there.
+	 */
+	num_pf_l2_queues = (u16)FEAT_NUM(hwfn, ECORE_PF_L2_QUE);
+
+	base = num_pf_l2_queues + vfid * params->num_queues;
+	params->rel_vf_id = vfid;
+
+	for (i = 0; i < params->num_queues; i++) {
+		params->req_rx_queue[i] = base + i;
+		params->req_tx_queue[i] = base + i;
+	}
+
+	/* PF uses indices 0 for itself; Set vport/RSS afterwards */
+	params->vport_id = vfid + 1;
+	params->rss_eng_id = vfid + 1;
+}
+
+static void qed_sriov_enable(struct ecore_dev *edev, int num)
+{
+	struct ecore_iov_vf_init_params params;
+	struct ecore_hwfn *p_hwfn;
+	struct ecore_ptt *p_ptt;
+	int i, j, rc;
+
+	if ((u32)num >= RESC_NUM(&edev->hwfns[0], ECORE_VPORT)) {
+		DP_NOTICE(edev, false, "Can start at most %d VFs\n",
+			  RESC_NUM(&edev->hwfns[0], ECORE_VPORT) - 1);
+		return;
+	}
+
+	OSAL_MEMSET(&params, 0, sizeof(struct ecore_iov_vf_init_params));
+
+	for_each_hwfn(edev, j) {
+		int feat_num;
+
+		p_hwfn = &edev->hwfns[j];
+		p_ptt = ecore_ptt_acquire(p_hwfn);
+		feat_num = FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE) / num;
+
+		params.num_queues = OSAL_MIN_T(int, feat_num, 16);
+
+		for (i = 0; i < num; i++) {
+			if (!ecore_iov_is_valid_vfid(p_hwfn, i, false, true))
+				continue;
+
+			qed_sriov_enable_qid_config(p_hwfn, i, &params);
+
+			rc = ecore_iov_init_hw_for_vf(p_hwfn, p_ptt, &params);
+			if (rc) {
+				DP_ERR(edev, "Failed to enable VF[%d]\n", i);
+				ecore_ptt_release(p_hwfn, p_ptt);
+				return;
+			}
+		}
+
+		ecore_ptt_release(p_hwfn, p_ptt);
+	}
+}
+
+void qed_sriov_configure(struct ecore_dev *edev, int num_vfs_param)
+{
+	if (!IS_ECORE_SRIOV(edev)) {
+		DP_VERBOSE(edev, ECORE_MSG_IOV, "SR-IOV is not supported\n");
+		return;
+	}
+
+	if (num_vfs_param)
+		qed_sriov_enable(edev, num_vfs_param);
+}
diff --git a/drivers/net/qede/qede_sriov.h b/drivers/net/qede/qede_sriov.h
new file mode 100644
index 000000000..6c85b1dd5
--- /dev/null
+++ b/drivers/net/qede/qede_sriov.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2020 Marvell.
+ * All rights reserved.
+ * www.marvell.com
+ */
+
+#include "qede_ethdev.h"
+
+void qed_sriov_configure(struct ecore_dev *edev, int num_vfs_param);
-- 
2.17.1


* [dpdk-dev] [PATCH 3/6] net/qede: add infrastructure support for VF load
  2020-06-09 19:42 [dpdk-dev] [PATCH 0/6] qede: SR-IOV PF driver support Manish Chopra
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 1/6] net/qede: define PCI config space specific osals Manish Chopra
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 2/6] net/qede: configure VFs on hardware Manish Chopra
@ 2020-06-09 19:42 ` Manish Chopra
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 4/6] net/qede: initialize VF MAC and link Manish Chopra
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Manish Chopra @ 2020-06-09 19:42 UTC (permalink / raw)
  To: jerinjacobk, jerinj, ferruh.yigit
  Cc: dev, irusskikh, rmody, GR-Everest-DPDK-Dev

This patch adds the infrastructure support required to handle
messages from a VF and to send ramrods on behalf of the VF's
configuration requests from the alarm handler context, so that a
VF-PMD driver instance can be started/loaded on top of the PF-PMD
driver instance.
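
The deferral mechanism boils down to the following condensed sketch
(names match the code below; error handling omitted):

  /* EQ/interrupt context: OSAL_PF_VF_MSG() only marks the pending
   * work and arms a short EAL alarm.
   */
  qede_set_bit(QED_IOV_WQ_MSG_FLAG, &p_hwfn->iov_task_flags);
  rte_eal_alarm_set(1, qed_iov_pf_task, p_hwfn);

  /* Alarm (PF slowpath) context: drain the flag and service the VF
   * mailbox, sending ramrods on the VF's behalf.
   */
  if (qede_test_bit(QED_IOV_WQ_MSG_FLAG, &p_hwfn->iov_task_flags)) {
          qede_clr_bit(QED_IOV_WQ_MSG_FLAG, &p_hwfn->iov_task_flags);
          qed_handle_vf_msg(p_hwfn);
  }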

Signed-off-by: Manish Chopra <manishc@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/bcm_osal.c      | 28 ++++++++++++
 drivers/net/qede/base/bcm_osal.h      | 11 +++--
 drivers/net/qede/base/ecore.h         |  4 ++
 drivers/net/qede/base/ecore_iov_api.h |  3 ++
 drivers/net/qede/qede_ethdev.c        |  2 +
 drivers/net/qede/qede_main.c          |  4 +-
 drivers/net/qede/qede_sriov.c         | 61 +++++++++++++++++++++++++++
 drivers/net/qede/qede_sriov.h         | 16 ++++++-
 8 files changed, 123 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 3cf33a9a7..1f6466a32 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -14,9 +14,37 @@
 #include "ecore_iov_api.h"
 #include "ecore_mcp_api.h"
 #include "ecore_l2_api.h"
+#include "../qede_sriov.h"
+
 #include <rte_bus_pci.h>
 #include <rte_io.h>
 
+int osal_pf_vf_msg(struct ecore_hwfn *p_hwfn)
+{
+	int rc;
+
+	rc = qed_schedule_iov(p_hwfn, QED_IOV_WQ_MSG_FLAG);
+
+	if (rc) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Failed to schedule alarm handler rc=%d\n", rc);
+	}
+
+	return rc;
+}
+
+void osal_poll_mode_dpc(osal_int_ptr_t hwfn_cookie)
+{
+	struct ecore_hwfn *p_hwfn = (struct ecore_hwfn *)hwfn_cookie;
+
+	if (!p_hwfn)
+		return;
+
+	OSAL_SPIN_LOCK(&p_hwfn->spq_lock);
+	ecore_int_sp_dpc((osal_int_ptr_t)(p_hwfn));
+	OSAL_SPIN_UNLOCK(&p_hwfn->spq_lock);
+}
+
 int osal_pci_find_next_ext_capability(struct rte_pci_device *dev,
 				      int cap)
 {
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 7cb887409..b55802952 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -175,9 +175,12 @@ typedef pthread_mutex_t osal_mutex_t;
 
 /* DPC */
 
+void osal_poll_mode_dpc(osal_int_ptr_t hwfn_cookie);
 #define OSAL_DPC_ALLOC(hwfn) OSAL_ALLOC(hwfn, GFP, sizeof(osal_dpc_t))
-#define OSAL_DPC_INIT(dpc, hwfn) nothing
-#define OSAL_POLL_MODE_DPC(hwfn) nothing
+#define OSAL_DPC_INIT(dpc, hwfn) \
+	OSAL_SPIN_LOCK_INIT(&(hwfn)->spq_lock)
+#define OSAL_POLL_MODE_DPC(hwfn) \
+	osal_poll_mode_dpc((osal_int_ptr_t)(p_hwfn))
 #define OSAL_DPC_SYNC(hwfn) nothing
 
 /* Lists */
@@ -348,10 +351,12 @@ u32 qede_find_first_zero_bit(unsigned long *, u32);
 
 /* SR-IOV channel */
 
+int osal_pf_vf_msg(struct ecore_hwfn *p_hwfn);
 #define OSAL_VF_FLR_UPDATE(hwfn) nothing
 #define OSAL_VF_SEND_MSG2PF(dev, done, msg, reply_addr, msg_size, reply_size) 0
 #define OSAL_VF_CQE_COMPLETION(_dev_p, _cqe, _protocol)	(0)
-#define OSAL_PF_VF_MSG(hwfn, vfid) 0
+#define OSAL_PF_VF_MSG(hwfn, vfid) \
+	osal_pf_vf_msg(hwfn)
 #define OSAL_PF_VF_MALICIOUS(hwfn, vfid) nothing
 #define OSAL_IOV_CHK_UCAST(hwfn, vfid, params) 0
 #define OSAL_IOV_POST_START_VPORT(hwfn, vf, vport_id, opaque_fid) nothing
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 386348e68..9eff484d5 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -723,6 +723,10 @@ struct ecore_hwfn {
 
 	/* @DPDK */
 	struct ecore_ptt		*p_arfs_ptt;
+
+	/* DPDK specific, not the part of vanilla ecore */
+	osal_spinlock_t spq_lock;
+	unsigned long iov_task_flags;
 };
 
 enum ecore_mf_mode {
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 545001812..bd7c5703f 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -14,6 +14,9 @@
 #define ECORE_ETH_VF_NUM_VLAN_FILTERS 2
 #define ECORE_VF_ARRAY_LENGTH (3)
 
+#define ECORE_VF_ARRAY_GET_VFID(arr, vfid)	\
+	(((arr)[(vfid) / 64]) & (1ULL << ((vfid) % 64)))
+
 #define IS_VF(p_dev)		((p_dev)->b_is_vf)
 #define IS_PF(p_dev)		(!((p_dev)->b_is_vf))
 #ifdef CONFIG_ECORE_SRIOV
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 250cd2d0e..ca63d9102 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -286,7 +286,9 @@ qede_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size)
 
 static void qede_interrupt_action(struct ecore_hwfn *p_hwfn)
 {
+	OSAL_SPIN_LOCK(&p_hwfn->spq_lock);
 	ecore_int_sp_dpc((osal_int_ptr_t)(p_hwfn));
+	OSAL_SPIN_UNLOCK(&p_hwfn->spq_lock);
 }
 
 static void
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index a02ef5685..93dfa8962 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -220,7 +220,9 @@ static void qed_stop_iov_task(struct ecore_dev *edev)
 
 	for_each_hwfn(edev, i) {
 		p_hwfn = &edev->hwfns[i];
-		if (!IS_PF(edev))
+		if (IS_PF(edev))
+			rte_eal_alarm_cancel(qed_iov_pf_task, p_hwfn);
+		else
 			rte_eal_alarm_cancel(qede_vf_task, p_hwfn);
 	}
 }
diff --git a/drivers/net/qede/qede_sriov.c b/drivers/net/qede/qede_sriov.c
index ba4384e90..f7d7807fb 100644
--- a/drivers/net/qede/qede_sriov.c
+++ b/drivers/net/qede/qede_sriov.c
@@ -4,6 +4,14 @@
  * www.marvell.com
  */
 
+#include <rte_alarm.h>
+
+#include "base/bcm_osal.h"
+#include "base/ecore.h"
+#include "base/ecore_sriov.h"
+#include "base/ecore_mcp.h"
+#include "base/ecore_vf.h"
+
 #include "qede_sriov.h"
 
 static void qed_sriov_enable_qid_config(struct ecore_hwfn *hwfn,
@@ -83,3 +91,56 @@ void qed_sriov_configure(struct ecore_dev *edev, int num_vfs_param)
 	if (num_vfs_param)
 		qed_sriov_enable(edev, num_vfs_param);
 }
+
+static void qed_handle_vf_msg(struct ecore_hwfn *hwfn)
+{
+	u64 events[ECORE_VF_ARRAY_LENGTH];
+	struct ecore_ptt *ptt;
+	int i;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt) {
+		DP_NOTICE(hwfn, true, "PTT acquire failed\n");
+		qed_schedule_iov(hwfn, QED_IOV_WQ_MSG_FLAG);
+		return;
+	}
+
+	ecore_iov_pf_get_pending_events(hwfn, events);
+
+	ecore_for_each_vf(hwfn, i) {
+		/* Skip VFs with no pending messages */
+		if (!ECORE_VF_ARRAY_GET_VFID(events, i))
+			continue;
+
+		DP_VERBOSE(hwfn, ECORE_MSG_IOV,
+			   "Handling VF message from VF 0x%02x [Abs 0x%02x]\n",
+			   i, hwfn->p_dev->p_iov_info->first_vf_in_pf + i);
+
+		/* Copy VF's message to PF's request buffer for that VF */
+		if (ecore_iov_copy_vf_msg(hwfn, ptt, i))
+			continue;
+
+		ecore_iov_process_mbx_req(hwfn, ptt, i);
+	}
+
+	ecore_ptt_release(hwfn, ptt);
+}
+
+void qed_iov_pf_task(void *arg)
+{
+	struct ecore_hwfn *p_hwfn = arg;
+
+	if (qede_test_bit(QED_IOV_WQ_MSG_FLAG, &p_hwfn->iov_task_flags)) {
+		qede_clr_bit(QED_IOV_WQ_MSG_FLAG, &p_hwfn->iov_task_flags);
+		qed_handle_vf_msg(p_hwfn);
+	}
+}
+
+int qed_schedule_iov(struct ecore_hwfn *p_hwfn, enum qed_iov_wq_flag flag)
+{
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Scheduling iov task [Flag: %d]\n",
+		   flag);
+
+	qede_set_bit(flag, &p_hwfn->iov_task_flags);
+	return rte_eal_alarm_set(1, qed_iov_pf_task, p_hwfn);
+}
diff --git a/drivers/net/qede/qede_sriov.h b/drivers/net/qede/qede_sriov.h
index 6c85b1dd5..8b7fa7daa 100644
--- a/drivers/net/qede/qede_sriov.h
+++ b/drivers/net/qede/qede_sriov.h
@@ -4,6 +4,18 @@
  * www.marvell.com
  */
 
-#include "qede_ethdev.h"
-
 void qed_sriov_configure(struct ecore_dev *edev, int num_vfs_param);
+
+enum qed_iov_wq_flag {
+	QED_IOV_WQ_MSG_FLAG,
+	QED_IOV_WQ_SET_UNICAST_FILTER_FLAG,
+	QED_IOV_WQ_BULLETIN_UPDATE_FLAG,
+	QED_IOV_WQ_STOP_WQ_FLAG,
+	QED_IOV_WQ_FLR_FLAG,
+	QED_IOV_WQ_TRUST_FLAG,
+	QED_IOV_WQ_VF_FORCE_LINK_QUERY_FLAG,
+	QED_IOV_WQ_DB_REC_HANDLER,
+};
+
+int qed_schedule_iov(struct ecore_hwfn *p_hwfn, enum qed_iov_wq_flag flag);
+void qed_iov_pf_task(void *arg);
-- 
2.17.1


* [dpdk-dev] [PATCH 4/6] net/qede: initialize VF MAC and link
  2020-06-09 19:42 [dpdk-dev] [PATCH 0/6] qede: SR-IOV PF driver support Manish Chopra
                   ` (2 preceding siblings ...)
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 3/6] net/qede: add infrastructure support for VF load Manish Chopra
@ 2020-06-09 19:42 ` Manish Chopra
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 5/6] net/qede: add VF FLR support Manish Chopra
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 6/6] doc/guides: update qede features list Manish Chopra
  5 siblings, 0 replies; 13+ messages in thread
From: Manish Chopra @ 2020-06-09 19:42 UTC (permalink / raw)
  To: jerinjacobk, jerinj, ferruh.yigit
  Cc: dev, irusskikh, rmody, GR-Everest-DPDK-Dev

This patch configures VFs with a random MAC address if no MAC is
provided by the PF/bulletin. It also adds the bulletin APIs required
by the PF-PMD driver to communicate link properties/changes to the
VFs through the bulletin update mechanism.

With these changes, a VF-PMD instance is able to run the fastpath
over a PF-PMD driver instance.
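
The link propagation path added below can be condensed to the
following sketch (illustrative only; the names are from the code in
this patch):

  /* PF link change (qed_link_update): snapshot the MCP link state
   * and stage it into every VF's bulletin, then defer the post.
   */
  qed_inform_vf_link_state(hwfn);
          /* -> ecore_iov_set_link() for each VF
           * -> qed_schedule_iov(hwfn, QED_IOV_WQ_BULLETIN_UPDATE_FLAG)
           */

  /* Later, from qed_iov_pf_task() in alarm context: */
  qed_handle_bulletin_post(p_hwfn);
          /* -> ecore_iov_post_vf_bulletin() for each VF */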

Signed-off-by: Manish Chopra <manishc@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/qede_ethdev.c | 34 ++++++++++++++++++++-
 drivers/net/qede/qede_main.c   |  7 ++++-
 drivers/net/qede/qede_sriov.c  | 55 ++++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_sriov.h  |  1 +
 4 files changed, 95 insertions(+), 2 deletions(-)

diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index ca63d9102..6625775ac 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2493,6 +2493,24 @@ static void qede_update_pf_params(struct ecore_dev *edev)
 	qed_ops->common->update_pf_params(edev, &pf_params);
 }
 
+static void qede_generate_random_mac_addr(struct rte_ether_addr *mac_addr)
+{
+	uint64_t random;
+
+	/* Set Organizationally Unique Identifier (OUI) prefix. */
+	mac_addr->addr_bytes[0] = 0x00;
+	mac_addr->addr_bytes[1] = 0x09;
+	mac_addr->addr_bytes[2] = 0xC0;
+
+	/* Force indication of locally assigned MAC address. */
+	mac_addr->addr_bytes[0] |= RTE_ETHER_LOCAL_ADMIN_ADDR;
+
+	/* Generate the last 3 bytes of the MAC address with a random number. */
+	random = rte_rand();
+
+	memcpy(&mac_addr->addr_bytes[3], &random, 3);
+}
+
 static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 {
 	struct rte_pci_device *pci_dev;
@@ -2505,7 +2523,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 	uint8_t bulletin_change;
 	uint8_t vf_mac[RTE_ETHER_ADDR_LEN];
 	uint8_t is_mac_forced;
-	bool is_mac_exist;
+	bool is_mac_exist = false;
 	/* Fix up ecore debug level */
 	uint32_t dp_module = ~0 & ~ECORE_MSG_HW;
 	uint8_t dp_level = ECORE_LEVEL_VERBOSE;
@@ -2683,6 +2701,20 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 				DP_ERR(edev, "No VF macaddr assigned\n");
 			}
 		}
+
+		/* If MAC doesn't exist from PF, generate random one */
+		if (!is_mac_exist) {
+			struct rte_ether_addr *mac_addr;
+
+			mac_addr = (struct rte_ether_addr *)&vf_mac;
+			qede_generate_random_mac_addr(mac_addr);
+
+			rte_ether_addr_copy(mac_addr,
+					    &eth_dev->data->mac_addrs[0]);
+
+			rte_ether_addr_copy(&eth_dev->data->mac_addrs[0],
+					    &adapter->primary_mac);
+		}
 	}
 
 	eth_dev->dev_ops = (is_vf) ? &qede_eth_vf_dev_ops : &qede_eth_dev_ops;
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 93dfa8962..eae3f55fb 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -645,10 +645,15 @@ void qed_link_update(struct ecore_hwfn *hwfn)
 	struct ecore_dev *edev = hwfn->p_dev;
 	struct qede_dev *qdev = (struct qede_dev *)edev;
 	struct rte_eth_dev *dev = (struct rte_eth_dev *)qdev->ethdev;
+	int rc;
+
+	rc = qede_link_update(dev, 0);
+	qed_inform_vf_link_state(hwfn);
 
-	if (!qede_link_update(dev, 0))
+	if (!rc) {
 		_rte_eth_dev_callback_process(dev,
 					      RTE_ETH_EVENT_INTR_LSC, NULL);
+	}
 }
 
 static int qed_drain(struct ecore_dev *edev)
diff --git a/drivers/net/qede/qede_sriov.c b/drivers/net/qede/qede_sriov.c
index f7d7807fb..125e5058b 100644
--- a/drivers/net/qede/qede_sriov.c
+++ b/drivers/net/qede/qede_sriov.c
@@ -126,6 +126,28 @@ static void qed_handle_vf_msg(struct ecore_hwfn *hwfn)
 	ecore_ptt_release(hwfn, ptt);
 }
 
+static void qed_handle_bulletin_post(struct ecore_hwfn *hwfn)
+{
+	struct ecore_ptt *ptt;
+	int i;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt) {
+		DP_NOTICE(hwfn, true, "PTT acquire failed\n");
+		qed_schedule_iov(hwfn, QED_IOV_WQ_BULLETIN_UPDATE_FLAG);
+		return;
+	}
+
+	/* TODO - at the moment update bulletin board of all VFs.
+	 * if this proves to costly, we can mark VFs that need their
+	 * bulletins updated.
+	 */
+	ecore_for_each_vf(hwfn, i)
+		ecore_iov_post_vf_bulletin(hwfn, i, ptt);
+
+	ecore_ptt_release(hwfn, ptt);
+}
+
 void qed_iov_pf_task(void *arg)
 {
 	struct ecore_hwfn *p_hwfn = arg;
@@ -134,6 +156,13 @@ void qed_iov_pf_task(void *arg)
 		qede_clr_bit(QED_IOV_WQ_MSG_FLAG, &p_hwfn->iov_task_flags);
 		qed_handle_vf_msg(p_hwfn);
 	}
+
+	if (qede_test_bit(QED_IOV_WQ_BULLETIN_UPDATE_FLAG,
+			  &p_hwfn->iov_task_flags)) {
+		qede_clr_bit(QED_IOV_WQ_BULLETIN_UPDATE_FLAG,
+			     &p_hwfn->iov_task_flags);
+		qed_handle_bulletin_post(p_hwfn);
+	}
 }
 
 int qed_schedule_iov(struct ecore_hwfn *p_hwfn, enum qed_iov_wq_flag flag)
@@ -144,3 +173,29 @@ int qed_schedule_iov(struct ecore_hwfn *p_hwfn, enum qed_iov_wq_flag flag)
 	qede_set_bit(flag, &p_hwfn->iov_task_flags);
 	return rte_eal_alarm_set(1, qed_iov_pf_task, p_hwfn);
 }
+
+void qed_inform_vf_link_state(struct ecore_hwfn *hwfn)
+{
+	struct ecore_hwfn *lead_hwfn = ECORE_LEADING_HWFN(hwfn->p_dev);
+	struct ecore_mcp_link_capabilities caps;
+	struct ecore_mcp_link_params params;
+	struct ecore_mcp_link_state link;
+	int i;
+
+	if (!hwfn->pf_iov_info)
+		return;
+
+	rte_memcpy(&params, ecore_mcp_get_link_params(lead_hwfn),
+		   sizeof(params));
+	rte_memcpy(&link, ecore_mcp_get_link_state(lead_hwfn), sizeof(link));
+	rte_memcpy(&caps, ecore_mcp_get_link_capabilities(lead_hwfn),
+		   sizeof(caps));
+
+	/* Update bulletin of all future possible VFs with link configuration */
+	for (i = 0; i < hwfn->p_dev->p_iov_info->total_vfs; i++) {
+		ecore_iov_set_link(hwfn, i,
+				   &params, &link, &caps);
+	}
+
+	qed_schedule_iov(hwfn, QED_IOV_WQ_BULLETIN_UPDATE_FLAG);
+}
diff --git a/drivers/net/qede/qede_sriov.h b/drivers/net/qede/qede_sriov.h
index 8b7fa7daa..e58ecc2a5 100644
--- a/drivers/net/qede/qede_sriov.h
+++ b/drivers/net/qede/qede_sriov.h
@@ -17,5 +17,6 @@ enum qed_iov_wq_flag {
 	QED_IOV_WQ_DB_REC_HANDLER,
 };
 
+void qed_inform_vf_link_state(struct ecore_hwfn *hwfn);
 int qed_schedule_iov(struct ecore_hwfn *p_hwfn, enum qed_iov_wq_flag flag);
 void qed_iov_pf_task(void *arg);
-- 
2.17.1


* [dpdk-dev] [PATCH 5/6] net/qede: add VF FLR support
  2020-06-09 19:42 [dpdk-dev] [PATCH 0/6] qede: SR-IOV PF driver support Manish Chopra
                   ` (3 preceding siblings ...)
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 4/6] net/qede: initialize VF MAC and link Manish Chopra
@ 2020-06-09 19:42 ` Manish Chopra
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 6/6] doc/guides: update qede features list Manish Chopra
  5 siblings, 0 replies; 13+ messages in thread
From: Manish Chopra @ 2020-06-09 19:42 UTC (permalink / raw)
  To: jerinjacobk, jerinj, ferruh.yigit
  Cc: dev, irusskikh, rmody, GR-Everest-DPDK-Dev

This patch adds the bits required to handle the VF FLR
indication from the Management FW (MFW) of the device.

With that, VFs were able to load in a VM (VF attached as PCI
passthrough to the guest VM) followed by a successful FLR.
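
The FLR handling follows the same deferred pattern as the earlier
patches (condensed sketch, names from the code below):

  /* MFW indicates a VF FLR: ecore invokes OSAL_VF_FLR_UPDATE(),
   * which only schedules the IOV alarm task.
   */
  qed_schedule_iov(p_hwfn, QED_IOV_WQ_FLR_FLAG);

  /* qed_iov_pf_task() (alarm context): run the cleanup and
   * re-schedule if the PTT could not be acquired or cleanup failed.
   */
  rc = ecore_iov_vf_flr_cleanup(p_hwfn, p_ptt);
  if (rc)
          qed_schedule_iov(p_hwfn, QED_IOV_WQ_FLR_FLAG);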

Signed-off-by: Manish Chopra <manishc@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/bcm_osal.c |  5 +++++
 drivers/net/qede/base/bcm_osal.h |  4 +++-
 drivers/net/qede/qede_sriov.c    | 18 ++++++++++++++++++
 3 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 1f6466a32..78524431c 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -33,6 +33,11 @@ int osal_pf_vf_msg(struct ecore_hwfn *p_hwfn)
 	return rc;
 }
 
+void osal_vf_flr_update(struct ecore_hwfn *p_hwfn)
+{
+	qed_schedule_iov(p_hwfn, QED_IOV_WQ_FLR_FLAG);
+}
+
 void osal_poll_mode_dpc(osal_int_ptr_t hwfn_cookie)
 {
 	struct ecore_hwfn *p_hwfn = (struct ecore_hwfn *)hwfn_cookie;
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index b55802952..cb3711210 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -352,7 +352,9 @@ u32 qede_find_first_zero_bit(unsigned long *, u32);
 /* SR-IOV channel */
 
 int osal_pf_vf_msg(struct ecore_hwfn *p_hwfn);
-#define OSAL_VF_FLR_UPDATE(hwfn) nothing
+void osal_vf_flr_update(struct ecore_hwfn *p_hwfn);
+#define OSAL_VF_FLR_UPDATE(hwfn) \
+	osal_vf_flr_update(hwfn)
 #define OSAL_VF_SEND_MSG2PF(dev, done, msg, reply_addr, msg_size, reply_size) 0
 #define OSAL_VF_CQE_COMPLETION(_dev_p, _cqe, _protocol)	(0)
 #define OSAL_PF_VF_MSG(hwfn, vfid) \
diff --git a/drivers/net/qede/qede_sriov.c b/drivers/net/qede/qede_sriov.c
index 125e5058b..a486b0496 100644
--- a/drivers/net/qede/qede_sriov.c
+++ b/drivers/net/qede/qede_sriov.c
@@ -151,6 +151,7 @@ static void qed_handle_bulletin_post(struct ecore_hwfn *hwfn)
 void qed_iov_pf_task(void *arg)
 {
 	struct ecore_hwfn *p_hwfn = arg;
+	int rc;
 
 	if (qede_test_bit(QED_IOV_WQ_MSG_FLAG, &p_hwfn->iov_task_flags)) {
 		qede_clr_bit(QED_IOV_WQ_MSG_FLAG, &p_hwfn->iov_task_flags);
@@ -163,6 +164,23 @@ void qed_iov_pf_task(void *arg)
 			     &p_hwfn->iov_task_flags);
 		qed_handle_bulletin_post(p_hwfn);
 	}
+
+	if (qede_test_bit(QED_IOV_WQ_FLR_FLAG, &p_hwfn->iov_task_flags)) {
+		struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+
+		qede_clr_bit(QED_IOV_WQ_FLR_FLAG, &p_hwfn->iov_task_flags);
+
+		if (!p_ptt) {
+			qed_schedule_iov(p_hwfn, QED_IOV_WQ_FLR_FLAG);
+			return;
+		}
+
+		rc = ecore_iov_vf_flr_cleanup(p_hwfn, p_ptt);
+		if (rc)
+			qed_schedule_iov(p_hwfn, QED_IOV_WQ_FLR_FLAG);
+
+		ecore_ptt_release(p_hwfn, p_ptt);
+	}
 }
 
 int qed_schedule_iov(struct ecore_hwfn *p_hwfn, enum qed_iov_wq_flag flag)
-- 
2.17.1


* [dpdk-dev] [PATCH 6/6] doc/guides: update qede features list
  2020-06-09 19:42 [dpdk-dev] [PATCH 0/6] qede: SR-IOV PF driver support Manish Chopra
                   ` (4 preceding siblings ...)
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 5/6] net/qede: add VF FLR support Manish Chopra
@ 2020-06-09 19:42 ` Manish Chopra
  2020-06-26  4:55   ` Jerin Jacob
  5 siblings, 1 reply; 13+ messages in thread
From: Manish Chopra @ 2020-06-09 19:42 UTC (permalink / raw)
  To: jerinjacobk, jerinj, ferruh.yigit
  Cc: dev, irusskikh, rmody, GR-Everest-DPDK-Dev

Add an SR-IOV PF entry to the supported features list.

Signed-off-by: Manish Chopra <manishc@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 doc/guides/nics/features/qede.ini | 1 +
 doc/guides/nics/qede.rst          | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index 20c90e626..030f5606b 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -18,6 +18,7 @@ Multicast MAC filter = Y
 RSS hash             = Y
 RSS key update       = Y
 RSS reta update      = Y
+SR-IOV               = Y
 VLAN filter          = Y
 Flow control         = Y
 Flow API             = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 5b2f86895..3306fabaa 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -34,7 +34,7 @@ Supported Features
 - VLAN offload - Filtering and stripping
 - N-tuple filter and flow director (limited support)
 - NPAR (NIC Partitioning)
-- SR-IOV VF
+- SR-IOV PF and VF
 - GRE Tunneling offload
 - GENEVE Tunneling offload
 - VXLAN Tunneling offload
-- 
2.17.1


* Re: [dpdk-dev] [PATCH 1/6] net/qede: define PCI config space specific osals
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 1/6] net/qede: define PCI config space specific osals Manish Chopra
@ 2020-06-26  4:53   ` Jerin Jacob
  2020-07-09 15:05     ` [dpdk-dev] [EXT] " Manish Chopra
  0 siblings, 1 reply; 13+ messages in thread
From: Jerin Jacob @ 2020-06-26  4:53 UTC (permalink / raw)
  To: Manish Chopra, Gaetan Rivet
  Cc: Jerin Jacob, Ferruh Yigit, dpdk-dev, Igor Russkikh, Rasesh Mody,
	GR-Everest-DPDK-Dev

On Wed, Jun 10, 2020 at 1:13 AM Manish Chopra <manishc@marvell.com> wrote:
>
> This patch defines various PCI config space access APIs
> in order to read and find IOV specific PCI capabilities.
>
> With these definitions implemented, it enables the base
> driver to do SR-IOV specific initialization and HW specific
> configuration required from PF-PMD driver instance.
>
> Signed-off-by: Manish Chopra <manishc@marvell.com>
> Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
> Signed-off-by: Rasesh Mody <rmody@marvell.com>
> ---
> +
> +int osal_pci_find_next_ext_capability(struct rte_pci_device *dev,
> +                                     int cap)


+ Gaetan (PCI maintainer)

Manish,
It must be a candidate for a generic PCI API as it has nothing to do with qede.
Please move it to the common PCI code if such an API is not already present.
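
For example, a generic helper of roughly this shape could live next
to rte_pci_read_config(); the name, location and attributes below are
hypothetical and only illustrate the suggestion:

  /* Walk the extended capability list of @dev and return the config
   * space offset of capability @cap, 0 if it is absent, or a negative
   * value on a config space read error.
   */
  __rte_experimental
  off_t rte_pci_find_ext_capability(struct rte_pci_device *dev,
                                    uint32_t cap);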


> +{
> +       int pos = PCI_CFG_SPACE_SIZE;
> +       uint32_t header;
> +       int ttl;
> +
> +       /* minimum 8 bytes per capability */
> +       ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
> +
> +       if (rte_pci_read_config(dev, &header, 4, pos) < 0)
> +               return -1;
> +
> +       /*
> +        * If we have no capabilities, this is indicated by cap ID,
> +        * cap version and next pointer all being 0.
> +        */
> +       if (header == 0)
> +               return 0;
> +
> +       while (ttl-- > 0) {
> +               if (PCI_EXT_CAP_ID(header) == cap)
> +                       return pos;
> +
> +               pos = PCI_EXT_CAP_NEXT(header);
> +
> +               if (pos < PCI_CFG_SPACE_SIZE)
> +                       break;
> +
> +               if (rte_pci_read_config(dev, &header, 4, pos) < 0)
> +                       return -1;
> +       }
> +
> +       return 0;
> +}
>

>
> +#define PCICFG_VENDOR_ID_OFFSET 0x00
> +#define PCICFG_DEVICE_ID_OFFSET 0x02
> +#define PCI_CFG_SPACE_SIZE 256
> +#define PCI_EXP_DEVCTL 0x0008
> +#define PCI_EXT_CAP_ID(header) (int)((header) & 0x0000ffff)
> +#define PCI_EXT_CAP_NEXT(header) (((header) >> 20) & 0xffc)
> +#define PCI_CFG_SPACE_EXP_SIZE 4096
> +
> +#define PCI_SRIOV_CTRL 0x08 /* SR-IOV Control */
> +#define PCI_SRIOV_TOTAL_VF 0x0e /* Total VFs */
> +#define PCI_SRIOV_INITIAL_VF 0x0c /* Initial VFs */
> +#define PCI_SRIOV_NUM_VF 0x10 /* Number of VFs */
> +#define PCI_SRIOV_VF_OFFSET 0x14 /* First VF Offset */
> +#define PCI_SRIOV_VF_STRIDE 0x16 /* Following VF Stride */
> +#define PCI_SRIOV_VF_DID 0x1a
> +#define PCI_SRIOV_SUP_PGSIZE 0x1c
> +#define PCI_SRIOV_CAP 0x04
> +#define PCI_SRIOV_FUNC_LINK 0x12
> +#define PCI_EXT_CAP_ID_SRIOV 0x10

Don't DEFINE PCI_ symbols in drivers; they may conflict with other PCI
definitions in the future.
Please move GENERIC PCI_ symbols to the generic PCI layer.



> +

* Re: [dpdk-dev] [PATCH 6/6] doc/guides: update qede features list
  2020-06-09 19:42 ` [dpdk-dev] [PATCH 6/6] doc/guides: update qede features list Manish Chopra
@ 2020-06-26  4:55   ` Jerin Jacob
  0 siblings, 0 replies; 13+ messages in thread
From: Jerin Jacob @ 2020-06-26  4:55 UTC (permalink / raw)
  To: Manish Chopra
  Cc: Jerin Jacob, Ferruh Yigit, dpdk-dev, Igor Russkikh, Rasesh Mody,
	GR-Everest-DPDK-Dev

On Wed, Jun 10, 2020 at 1:15 AM Manish Chopra <manishc@marvell.com> wrote:
>
> Add SR-IOV PF entry in supported features
>
> Signed-off-by: Manish Chopra <manishc@marvell.com>
> Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
> Signed-off-by: Rasesh Mody <rmody@marvell.com>
> ---
>  doc/guides/nics/features/qede.ini | 1 +
>  doc/guides/nics/qede.rst          | 2 +-

Please squash this into the appropriate patch where this support gets added.

No other comments on this series from my side.


>  2 files changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
> index 20c90e626..030f5606b 100644
> --- a/doc/guides/nics/features/qede.ini
> +++ b/doc/guides/nics/features/qede.ini
> @@ -18,6 +18,7 @@ Multicast MAC filter = Y
>  RSS hash             = Y
>  RSS key update       = Y
>  RSS reta update      = Y
> +SR-IOV               = Y
>  VLAN filter          = Y
>  Flow control         = Y
>  Flow API             = Y
> diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
> index 5b2f86895..3306fabaa 100644
> --- a/doc/guides/nics/qede.rst
> +++ b/doc/guides/nics/qede.rst
> @@ -34,7 +34,7 @@ Supported Features
>  - VLAN offload - Filtering and stripping
>  - N-tuple filter and flow director (limited support)
>  - NPAR (NIC Partitioning)
> -- SR-IOV VF
> +- SR-IOV PF and VF
>  - GRE Tunneling offload
>  - GENEVE Tunneling offload
>  - VXLAN Tunneling offload
> --
> 2.17.1
>

* Re: [dpdk-dev] [EXT] Re: [PATCH 1/6] net/qede: define PCI config space specific osals
  2020-06-26  4:53   ` Jerin Jacob
@ 2020-07-09 15:05     ` Manish Chopra
  2020-07-09 16:11       ` Jerin Jacob
  0 siblings, 1 reply; 13+ messages in thread
From: Manish Chopra @ 2020-07-09 15:05 UTC (permalink / raw)
  To: Jerin Jacob, Gaetan Rivet
  Cc: Jerin Jacob Kollanukkaran, Ferruh Yigit, dpdk-dev, Igor Russkikh,
	Rasesh Mody, GR-Everest-DPDK-Dev

> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Friday, June 26, 2020 10:24 AM
> To: Manish Chopra <manishc@marvell.com>; Gaetan Rivet
> <grive@u256.net>
> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Ferruh Yigit
> <ferruh.yigit@intel.com>; dpdk-dev <dev@dpdk.org>; Igor Russkikh
> <irusskikh@marvell.com>; Rasesh Mody <rmody@marvell.com>; GR-Everest-
> DPDK-Dev <GR-Everest-DPDK-Dev@marvell.com>
> Subject: [EXT] Re: [PATCH 1/6] net/qede: define PCI config space specific
> osals
> 
> External Email
> 
> ----------------------------------------------------------------------
> On Wed, Jun 10, 2020 at 1:13 AM Manish Chopra <manishc@marvell.com>
> wrote:
> >
> > This patch defines various PCI config space access APIs in order to
> > read and find IOV specific PCI capabilities.
> >
> > With these definitions implemented, it enables the base driver to do
> > SR-IOV specific initialization and HW specific configuration required
> > from PF-PMD driver instance.
> >
> > Signed-off-by: Manish Chopra <manishc@marvell.com>
> > Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
> > Signed-off-by: Rasesh Mody <rmody@marvell.com>
> > ---
> > +
> > +int osal_pci_find_next_ext_capability(struct rte_pci_device *dev,
> > +                                     int cap)
> 
> 
> + Gaetan (PCI maintainer)
> 
> Manish,
> It must be a candidate for a generic PCI API as it is nothing to do with qede.
> Please move to common PCI code if such API is not already present.
> 
> 
> > +{
> > +       int pos = PCI_CFG_SPACE_SIZE;
> > +       uint32_t header;
> > +       int ttl;
> > +
> > +       /* minimum 8 bytes per capability */
> > +       ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
> > +
> > +       if (rte_pci_read_config(dev, &header, 4, pos) < 0)
> > +               return -1;
> > +
> > +       /*
> > +        * If we have no capabilities, this is indicated by cap ID,
> > +        * cap version and next pointer all being 0.
> > +        */
> > +       if (header == 0)
> > +               return 0;
> > +
> > +       while (ttl-- > 0) {
> > +               if (PCI_EXT_CAP_ID(header) == cap)
> > +                       return pos;
> > +
> > +               pos = PCI_EXT_CAP_NEXT(header);
> > +
> > +               if (pos < PCI_CFG_SPACE_SIZE)
> > +                       break;
> > +
> > +               if (rte_pci_read_config(dev, &header, 4, pos) < 0)
> > +                       return -1;
> > +       }
> > +
> > +       return 0;
> > +}
> >
> 
> >
> > +#define PCICFG_VENDOR_ID_OFFSET 0x00
> > +#define PCICFG_DEVICE_ID_OFFSET 0x02
> > +#define PCI_CFG_SPACE_SIZE 256
> > +#define PCI_EXP_DEVCTL 0x0008
> > +#define PCI_EXT_CAP_ID(header) (int)((header) & 0x0000ffff) #define
> > +PCI_EXT_CAP_NEXT(header) (((header) >> 20) & 0xffc) #define
> > +PCI_CFG_SPACE_EXP_SIZE 4096
> > +
> > +#define PCI_SRIOV_CTRL 0x08 /* SR-IOV Control */ #define
> > +PCI_SRIOV_TOTAL_VF 0x0e /* Total VFs */ #define PCI_SRIOV_INITIAL_VF
> > +0x0c /* Initial VFs */ #define PCI_SRIOV_NUM_VF 0x10 /* Number of VFs
> > +*/ #define PCI_SRIOV_VF_OFFSET 0x14 /* First VF Offset */ #define
> > +PCI_SRIOV_VF_STRIDE 0x16 /* Following VF Stride */ #define
> > +PCI_SRIOV_VF_DID 0x1a #define PCI_SRIOV_SUP_PGSIZE 0x1c #define
> > +PCI_SRIOV_CAP 0x04 #define PCI_SRIOV_FUNC_LINK 0x12 #define
> > +PCI_EXT_CAP_ID_SRIOV 0x10
> 
> Dont DEFINE PCI_ symbols in drivers, It may conflict with other PCI
> definitions in the future.
> Please move GENERIC PCI_ symbols to the generic PCI layer.
> 
> 
> 

Hi Jerin/Gaetan,

Which generic PCI code/files should these defines/APIs be added to? (lib/librte_pci/rte_pci.[c|h])?
Just FYI, note that it can't be done without cleaning up other vendors, as I can see that various other
vendors have also defined this function to find the PCI extended capabilities, along with some of these PCI_* macros, in their respective drivers.

Thanks,
Manish

* Re: [dpdk-dev] [EXT] Re: [PATCH 1/6] net/qede: define PCI config space specific osals
  2020-07-09 15:05     ` [dpdk-dev] [EXT] " Manish Chopra
@ 2020-07-09 16:11       ` Jerin Jacob
  2020-07-09 22:28         ` Manish Chopra
  0 siblings, 1 reply; 13+ messages in thread
From: Jerin Jacob @ 2020-07-09 16:11 UTC (permalink / raw)
  To: Manish Chopra
  Cc: Gaetan Rivet, Jerin Jacob Kollanukkaran, Ferruh Yigit, dpdk-dev,
	Igor Russkikh, Rasesh Mody, GR-Everest-DPDK-Dev

On Thu, Jul 9, 2020 at 8:35 PM Manish Chopra <manishc@marvell.com> wrote:
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Friday, June 26, 2020 10:24 AM
> > To: Manish Chopra <manishc@marvell.com>; Gaetan Rivet
> > <grive@u256.net>
> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Ferruh Yigit
> > <ferruh.yigit@intel.com>; dpdk-dev <dev@dpdk.org>; Igor Russkikh
> > <irusskikh@marvell.com>; Rasesh Mody <rmody@marvell.com>; GR-Everest-
> > DPDK-Dev <GR-Everest-DPDK-Dev@marvell.com>
> > Subject: [EXT] Re: [PATCH 1/6] net/qede: define PCI config space specific
> > osals
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > On Wed, Jun 10, 2020 at 1:13 AM Manish Chopra <manishc@marvell.com>
> > wrote:
> > >
> > > This patch defines various PCI config space access APIs in order to
> > > read and find IOV specific PCI capabilities.
> > >
> > > With these definitions implemented, it enables the base driver to do
> > > SR-IOV specific initialization and HW specific configuration required
> > > from PF-PMD driver instance.
> > >
> > > Signed-off-by: Manish Chopra <manishc@marvell.com>
> > > Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
> > > Signed-off-by: Rasesh Mody <rmody@marvell.com>
> > > ---
> > > +
> > > +int osal_pci_find_next_ext_capability(struct rte_pci_device *dev,
> > > +                                     int cap)
> >
> >
> > + Gaetan (PCI maintainer)
> >
> > Manish,
> > It must be a candidate for a generic PCI API as it is nothing to do with qede.
> > Please move to common PCI code if such API is not already present.
> >
> >
> > > +{
> > > +       int pos = PCI_CFG_SPACE_SIZE;
> > > +       uint32_t header;
> > > +       int ttl;
> > > +
> > > +       /* minimum 8 bytes per capability */
> > > +       ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
> > > +
> > > +       if (rte_pci_read_config(dev, &header, 4, pos) < 0)
> > > +               return -1;
> > > +
> > > +       /*
> > > +        * If we have no capabilities, this is indicated by cap ID,
> > > +        * cap version and next pointer all being 0.
> > > +        */
> > > +       if (header == 0)
> > > +               return 0;
> > > +
> > > +       while (ttl-- > 0) {
> > > +               if (PCI_EXT_CAP_ID(header) == cap)
> > > +                       return pos;
> > > +
> > > +               pos = PCI_EXT_CAP_NEXT(header);
> > > +
> > > +               if (pos < PCI_CFG_SPACE_SIZE)
> > > +                       break;
> > > +
> > > +               if (rte_pci_read_config(dev, &header, 4, pos) < 0)
> > > +                       return -1;
> > > +       }
> > > +
> > > +       return 0;
> > > +}
> > >
> >
> > >
> > > +#define PCICFG_VENDOR_ID_OFFSET 0x00
> > > +#define PCICFG_DEVICE_ID_OFFSET 0x02
> > > +#define PCI_CFG_SPACE_SIZE 256
> > > +#define PCI_EXP_DEVCTL 0x0008
> > > +#define PCI_EXT_CAP_ID(header) (int)((header) & 0x0000ffff) #define
> > > +PCI_EXT_CAP_NEXT(header) (((header) >> 20) & 0xffc) #define
> > > +PCI_CFG_SPACE_EXP_SIZE 4096
> > > +
> > > +#define PCI_SRIOV_CTRL 0x08 /* SR-IOV Control */ #define
> > > +PCI_SRIOV_TOTAL_VF 0x0e /* Total VFs */ #define PCI_SRIOV_INITIAL_VF
> > > +0x0c /* Initial VFs */ #define PCI_SRIOV_NUM_VF 0x10 /* Number of VFs
> > > +*/ #define PCI_SRIOV_VF_OFFSET 0x14 /* First VF Offset */ #define
> > > +PCI_SRIOV_VF_STRIDE 0x16 /* Following VF Stride */ #define
> > > +PCI_SRIOV_VF_DID 0x1a #define PCI_SRIOV_SUP_PGSIZE 0x1c #define
> > > +PCI_SRIOV_CAP 0x04 #define PCI_SRIOV_FUNC_LINK 0x12 #define
> > > +PCI_EXT_CAP_ID_SRIOV 0x10
> >
> > Dont DEFINE PCI_ symbols in drivers, It may conflict with other PCI
> > definitions in the future.
> > Please move GENERIC PCI_ symbols to the generic PCI layer.
> >
> >
> >
>
> Hi Jerin/Gaetan,
>
> Which generic PCI code/files these defines/API should be added to ? (lib/librte_pci/rte_pci.[c|h]) ?

Since it is generic, to me lib/librte_pci/rte_pci.[c|h] is the correct place.

> Just FYI, note that it can't be done without cleaning up other vendors, as I can see that various other vendors have also
> defined this function to find pci extended cap and some of these PCI_* macro defines as well in their respective drivers.
>
> Thanks,
> Manish

* Re: [dpdk-dev] [EXT] Re: [PATCH 1/6] net/qede: define PCI config space specific osals
  2020-07-09 16:11       ` Jerin Jacob
@ 2020-07-09 22:28         ` Manish Chopra
  2020-07-10  5:03           ` Jerin Jacob
  0 siblings, 1 reply; 13+ messages in thread
From: Manish Chopra @ 2020-07-09 22:28 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Gaetan Rivet, Jerin Jacob Kollanukkaran, Ferruh Yigit, dpdk-dev,
	Igor Russkikh, Rasesh Mody, GR-Everest-DPDK-Dev

> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, July 9, 2020 9:41 PM
> To: Manish Chopra <manishc@marvell.com>
> Cc: Gaetan Rivet <grive@u256.net>; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Ferruh Yigit <ferruh.yigit@intel.com>; dpdk-dev
> <dev@dpdk.org>; Igor Russkikh <irusskikh@marvell.com>; Rasesh Mody
> <rmody@marvell.com>; GR-Everest-DPDK-Dev <GR-Everest-DPDK-
> Dev@marvell.com>
> Subject: Re: [EXT] Re: [PATCH 1/6] net/qede: define PCI config space specific
> osals
> 
> On Thu, Jul 9, 2020 at 8:35 PM Manish Chopra <manishc@marvell.com>
> wrote:
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Friday, June 26, 2020 10:24 AM
> > > To: Manish Chopra <manishc@marvell.com>; Gaetan Rivet
> > > <grive@u256.net>
> > > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Ferruh Yigit
> > > <ferruh.yigit@intel.com>; dpdk-dev <dev@dpdk.org>; Igor Russkikh
> > > <irusskikh@marvell.com>; Rasesh Mody <rmody@marvell.com>;
> > > GR-Everest- DPDK-Dev <GR-Everest-DPDK-Dev@marvell.com>
> > > Subject: [EXT] Re: [PATCH 1/6] net/qede: define PCI config space
> > > specific osals
> > >
> > > External Email
> > >
> > > --------------------------------------------------------------------
> > > -- On Wed, Jun 10, 2020 at 1:13 AM Manish Chopra
> > > <manishc@marvell.com>
> > > wrote:
> > > >
> > > > This patch defines various PCI config space access APIs in order
> > > > to read and find IOV specific PCI capabilities.
> > > >
> > > > With these definitions implemented, it enables the base driver to
> > > > do SR-IOV specific initialization and HW specific configuration
> > > > required from PF-PMD driver instance.
> > > >
> > > > Signed-off-by: Manish Chopra <manishc@marvell.com>
> > > > Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
> > > > Signed-off-by: Rasesh Mody <rmody@marvell.com>
> > > > ---
> > > > +
> > > > +int osal_pci_find_next_ext_capability(struct rte_pci_device *dev,
> > > > +                                     int cap)
> > >
> > >
> > > + Gaetan (PCI maintainer)
> > >
> > > Manish,
> > > It must be a candidate for a generic PCI API as it is nothing to do with
> qede.
> > > Please move to common PCI code if such API is not already present.
> > >
> > >
> > > > +{
> > > > +       int pos = PCI_CFG_SPACE_SIZE;
> > > > +       uint32_t header;
> > > > +       int ttl;
> > > > +
> > > > +       /* minimum 8 bytes per capability */
> > > > +       ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
> > > > +
> > > > +       if (rte_pci_read_config(dev, &header, 4, pos) < 0)
> > > > +               return -1;
> > > > +
> > > > +       /*
> > > > +        * If we have no capabilities, this is indicated by cap ID,
> > > > +        * cap version and next pointer all being 0.
> > > > +        */
> > > > +       if (header == 0)
> > > > +               return 0;
> > > > +
> > > > +       while (ttl-- > 0) {
> > > > +               if (PCI_EXT_CAP_ID(header) == cap)
> > > > +                       return pos;
> > > > +
> > > > +               pos = PCI_EXT_CAP_NEXT(header);
> > > > +
> > > > +               if (pos < PCI_CFG_SPACE_SIZE)
> > > > +                       break;
> > > > +
> > > > +               if (rte_pci_read_config(dev, &header, 4, pos) < 0)
> > > > +                       return -1;
> > > > +       }
> > > > +
> > > > +       return 0;
> > > > +}
> > > >
> > >
> > > >
> > > > +#define PCICFG_VENDOR_ID_OFFSET 0x00 #define
> > > > +PCICFG_DEVICE_ID_OFFSET 0x02 #define PCI_CFG_SPACE_SIZE 256
> > > > +#define PCI_EXP_DEVCTL 0x0008 #define PCI_EXT_CAP_ID(header)
> > > > +(int)((header) & 0x0000ffff) #define
> > > > +PCI_EXT_CAP_NEXT(header) (((header) >> 20) & 0xffc) #define
> > > > +PCI_CFG_SPACE_EXP_SIZE 4096
> > > > +
> > > > +#define PCI_SRIOV_CTRL 0x08 /* SR-IOV Control */ #define
> > > > +PCI_SRIOV_TOTAL_VF 0x0e /* Total VFs */ #define
> > > > +PCI_SRIOV_INITIAL_VF 0x0c /* Initial VFs */ #define
> > > > +PCI_SRIOV_NUM_VF 0x10 /* Number of VFs */ #define
> > > > +PCI_SRIOV_VF_OFFSET 0x14 /* First VF Offset */ #define
> > > > +PCI_SRIOV_VF_STRIDE 0x16 /* Following VF Stride */ #define
> > > > +PCI_SRIOV_VF_DID 0x1a #define PCI_SRIOV_SUP_PGSIZE 0x1c #define
> > > > +PCI_SRIOV_CAP 0x04 #define PCI_SRIOV_FUNC_LINK 0x12 #define
> > > > +PCI_EXT_CAP_ID_SRIOV 0x10
> > >
> > > Dont DEFINE PCI_ symbols in drivers, It may conflict with other PCI
> > > definitions in the future.
> > > Please move GENERIC PCI_ symbols to the generic PCI layer.
> > >
> > >
> > >
> >
> > Hi Jerin/Gaetan,
> >
> > Which generic PCI code/files these defines/API should be added to ?
> (lib/librte_pci/rte_pci.[c|h]) ?
> 
> Since it generic, To me, lib/librte_pci/rte_pci.[c|h]) is the correct place.
> 
> > Just FYI, note that it can't be done without cleaning up other
> > vendors, as I can see that various other vendors have also defined this
> function to find pci extended cap and some of these PCI_* macro defines as
> well in their respective drivers.
> >
> > Thanks,
> > Manish

Hi Jerin,

It seems that adding these to lib/librte_pci/rte_pci.[c|h] is not straightforward without forward declarations of functions/structs
such as struct rte_pci_device and rte_pci_read_config(), which are referenced in pci_find_next_ext_capability(),
since rte_bus_pci.h already includes rte_pci.h.

So, how about adding them to the drivers/bus/pci/pci_common.c and drivers/bus/pci/rte_bus_pci.h files directly?

Also, most of the PCI_* defines above are already available from /usr/include/pci_regs.h, so I think we don't need to re-define them again in the DPDK tree's headers
(assuming that all supported kernels would have the latest /usr/include/pci_regs.h).

Thanks,
Manish

* Re: [dpdk-dev] [EXT] Re: [PATCH 1/6] net/qede: define PCI config space specific osals
  2020-07-09 22:28         ` Manish Chopra
@ 2020-07-10  5:03           ` Jerin Jacob
  0 siblings, 0 replies; 13+ messages in thread
From: Jerin Jacob @ 2020-07-10  5:03 UTC (permalink / raw)
  To: Manish Chopra
  Cc: Gaetan Rivet, Jerin Jacob Kollanukkaran, Ferruh Yigit, dpdk-dev,
	Igor Russkikh, Rasesh Mody, GR-Everest-DPDK-Dev

On Fri, Jul 10, 2020 at 3:58 AM Manish Chopra <manishc@marvell.com> wrote:
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Thursday, July 9, 2020 9:41 PM
> > To: Manish Chopra <manishc@marvell.com>
> > Cc: Gaetan Rivet <grive@u256.net>; Jerin Jacob Kollanukkaran
> > <jerinj@marvell.com>; Ferruh Yigit <ferruh.yigit@intel.com>; dpdk-dev
> > <dev@dpdk.org>; Igor Russkikh <irusskikh@marvell.com>; Rasesh Mody
> > <rmody@marvell.com>; GR-Everest-DPDK-Dev <GR-Everest-DPDK-
> > Dev@marvell.com>
> > Subject: Re: [EXT] Re: [PATCH 1/6] net/qede: define PCI config space specific
> > osals
> >
> > On Thu, Jul 9, 2020 at 8:35 PM Manish Chopra <manishc@marvell.com>
> > wrote:
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > Sent: Friday, June 26, 2020 10:24 AM
> > > > To: Manish Chopra <manishc@marvell.com>; Gaetan Rivet
> > > > <grive@u256.net>
> > > > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Ferruh Yigit
> > > > <ferruh.yigit@intel.com>; dpdk-dev <dev@dpdk.org>; Igor Russkikh
> > > > <irusskikh@marvell.com>; Rasesh Mody <rmody@marvell.com>;
> > > > GR-Everest- DPDK-Dev <GR-Everest-DPDK-Dev@marvell.com>
> > > > Subject: [EXT] Re: [PATCH 1/6] net/qede: define PCI config space
> > > > specific osals
> > > >
> > > > External Email
> > > >
> > > > --------------------------------------------------------------------
> > > > -- On Wed, Jun 10, 2020 at 1:13 AM Manish Chopra
> > > > <manishc@marvell.com>
> > > > wrote:
> > > > >
> > > > > This patch defines various PCI config space access APIs in order
> > > > > to read and find IOV specific PCI capabilities.
> > > > >
> > > > > With these definitions implemented, it enables the base driver to
> > > > > do SR-IOV specific initialization and HW specific configuration
> > > > > required from PF-PMD driver instance.
> > > > >
> > > > > Signed-off-by: Manish Chopra <manishc@marvell.com>
> > > > > Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
> > > > > Signed-off-by: Rasesh Mody <rmody@marvell.com>
> > > > > ---
> > > > > +
> > > > > +int osal_pci_find_next_ext_capability(struct rte_pci_device *dev,
> > > > > +                                     int cap)
> > > >
> > > >
> > > > + Gaetan (PCI maintainer)
> > > >
> > > > Manish,
> > > > It must be a candidate for a generic PCI API as it is nothing to do with
> > qede.
> > > > Please move to common PCI code if such API is not already present.
> > > >
> > > >
> > > > > +{
> > > > > +       int pos = PCI_CFG_SPACE_SIZE;
> > > > > +       uint32_t header;
> > > > > +       int ttl;
> > > > > +
> > > > > +       /* minimum 8 bytes per capability */
> > > > > +       ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
> > > > > +
> > > > > +       if (rte_pci_read_config(dev, &header, 4, pos) < 0)
> > > > > +               return -1;
> > > > > +
> > > > > +       /*
> > > > > +        * If we have no capabilities, this is indicated by cap ID,
> > > > > +        * cap version and next pointer all being 0.
> > > > > +        */
> > > > > +       if (header == 0)
> > > > > +               return 0;
> > > > > +
> > > > > +       while (ttl-- > 0) {
> > > > > +               if (PCI_EXT_CAP_ID(header) == cap)
> > > > > +                       return pos;
> > > > > +
> > > > > +               pos = PCI_EXT_CAP_NEXT(header);
> > > > > +
> > > > > +               if (pos < PCI_CFG_SPACE_SIZE)
> > > > > +                       break;
> > > > > +
> > > > > +               if (rte_pci_read_config(dev, &header, 4, pos) < 0)
> > > > > +                       return -1;
> > > > > +       }
> > > > > +
> > > > > +       return 0;
> > > > > +}
> > > > >
> > > >
> > > > >
> > > > > +#define PCICFG_VENDOR_ID_OFFSET 0x00
> > > > > +#define PCICFG_DEVICE_ID_OFFSET 0x02
> > > > > +#define PCI_CFG_SPACE_SIZE 256
> > > > > +#define PCI_EXP_DEVCTL 0x0008
> > > > > +#define PCI_EXT_CAP_ID(header) (int)((header) & 0x0000ffff)
> > > > > +#define PCI_EXT_CAP_NEXT(header) (((header) >> 20) & 0xffc)
> > > > > +#define PCI_CFG_SPACE_EXP_SIZE 4096
> > > > > +
> > > > > +#define PCI_SRIOV_CTRL 0x08 /* SR-IOV Control */
> > > > > +#define PCI_SRIOV_TOTAL_VF 0x0e /* Total VFs */
> > > > > +#define PCI_SRIOV_INITIAL_VF 0x0c /* Initial VFs */
> > > > > +#define PCI_SRIOV_NUM_VF 0x10 /* Number of VFs */
> > > > > +#define PCI_SRIOV_VF_OFFSET 0x14 /* First VF Offset */
> > > > > +#define PCI_SRIOV_VF_STRIDE 0x16 /* Following VF Stride */
> > > > > +#define PCI_SRIOV_VF_DID 0x1a
> > > > > +#define PCI_SRIOV_SUP_PGSIZE 0x1c
> > > > > +#define PCI_SRIOV_CAP 0x04
> > > > > +#define PCI_SRIOV_FUNC_LINK 0x12
> > > > > +#define PCI_EXT_CAP_ID_SRIOV 0x10
> > > >
> > > > Don't DEFINE PCI_ symbols in drivers; it may conflict with other PCI
> > > > definitions in the future.
> > > > Please move GENERIC PCI_ symbols to the generic PCI layer.
> > > >
> > > >
> > > >
> > >
> > > Hi Jerin/Gaetan,
> > >
> > > Which generic PCI code/files should these defines/APIs be added to
> > > (lib/librte_pci/rte_pci.[c|h])?
> >
> > Since it is generic, to me lib/librte_pci/rte_pci.[c|h] is the correct place.
> >
> > > Just FYI, note that it can't be done without cleaning up the other
> > > vendors' drivers as well, since various other vendors have also defined
> > > this function to find PCI extended capabilities, and some of these PCI_*
> > > macros, in their respective drivers.
> > >
> > > Thanks,
> > > Manish
>
> Hi Jerin,
>
> It seems that adding these to lib/librte_pci/rte_pci.[c|h] is not
> straightforward without forward-declaring the functions/structs
> (struct rte_pci_device, rte_pci_read_config()) that are referenced by
> pci_find_next_ext_capability(), since rte_bus_pci.h already includes
> rte_pci.h.
>
> So, how about adding them directly to drivers/bus/pci/pci_common.c and
> drivers/bus/pci/rte_bus_pci.h instead?

Good to me.
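
(For context, a PF driver could then consume such a helper roughly as in the
sketch below. The rte_pci_find_next_ext_capability() and RTE_PCI_* names are
assumptions carried over from the proposal above, not existing DPDK API; only
rte_pci_read_config() is taken from the current PCI bus driver.)

#include <errno.h>
#include <stdint.h>
#include <sys/types.h>

#include <rte_bus_pci.h>

/* Locate the SR-IOV extended capability and read the Total VFs field. */
static int
qede_sriov_total_vfs(struct rte_pci_device *pci_dev, uint16_t *total_vfs)
{
	off_t pos;

	pos = rte_pci_find_next_ext_capability(pci_dev,
					       RTE_PCI_EXT_CAP_ID_SRIOV);
	if (pos <= 0)
		return -ENOTSUP;	/* capability not present */

	/* Total VFs is a 16-bit register at offset 0x0e in the capability. */
	if (rte_pci_read_config(pci_dev, total_vfs, sizeof(*total_vfs),
				pos + RTE_PCI_SRIOV_TOTAL_VF) < 0)
		return -EIO;

	return 0;
}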

>
> Also, most of the PCI_* defines above are already available from
> /usr/include/pci_regs.h, so I think we don't need to re-define them in the
> DPDK tree's headers.
> (This assumes that all supported kernels ship a recent enough
> /usr/include/pci_regs.h.)

I think we can avoid that path, so that we don't pick up a dependency on the
Linux kernel headers or on a specific package.
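
(If so, the handful of SR-IOV related constants could simply be carried in a
DPDK-owned header rather than pulled from pci_regs.h; a possible shape, with
offsets taken from the PCIe SR-IOV capability layout and an RTE_ prefix that
is only a suggestion:)

/* SR-IOV extended capability ID and register offsets within the capability. */
#define RTE_PCI_EXT_CAP_ID_SRIOV	0x10
#define RTE_PCI_SRIOV_CAP		0x04	/* SR-IOV Capabilities */
#define RTE_PCI_SRIOV_CTRL		0x08	/* SR-IOV Control */
#define RTE_PCI_SRIOV_TOTAL_VF		0x0e	/* Total VFs */
#define RTE_PCI_SRIOV_NUM_VF		0x10	/* Number of VFs */
#define RTE_PCI_SRIOV_VF_OFFSET		0x14	/* First VF Offset */
#define RTE_PCI_SRIOV_VF_STRIDE		0x16	/* Following VF Stride */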

>
> Thanks,
> Manish

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2020-07-10  5:03 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-09 19:42 [dpdk-dev] [PATCH 0/6] qede: SR-IOV PF driver support Manish Chopra
2020-06-09 19:42 ` [dpdk-dev] [PATCH 1/6] net/qede: define PCI config space specific osals Manish Chopra
2020-06-26  4:53   ` Jerin Jacob
2020-07-09 15:05     ` [dpdk-dev] [EXT] " Manish Chopra
2020-07-09 16:11       ` Jerin Jacob
2020-07-09 22:28         ` Manish Chopra
2020-07-10  5:03           ` Jerin Jacob
2020-06-09 19:42 ` [dpdk-dev] [PATCH 2/6] net/qede: configure VFs on hardware Manish Chopra
2020-06-09 19:42 ` [dpdk-dev] [PATCH 3/6] net/qede: add infrastructure support for VF load Manish Chopra
2020-06-09 19:42 ` [dpdk-dev] [PATCH 4/6] net/qede: initialize VF MAC and link Manish Chopra
2020-06-09 19:42 ` [dpdk-dev] [PATCH 5/6] net/qede: add VF FLR support Manish Chopra
2020-06-09 19:42 ` [dpdk-dev] [PATCH 6/6] doc/guides: update qede features list Manish Chopra
2020-06-26  4:55   ` Jerin Jacob
