DPDK patches and discussions
* [PATCH v4 00/15] net/idpf: introduce idpf common module
       [not found] <https://patches.dpdk.org/project/dpdk/cover/20230117072626.93796-1-beilei.xing@intel.com/>
@ 2023-01-17  8:06 ` beilei.xing
  2023-01-17  8:06   ` [PATCH v4 01/15] common/idpf: add adapter structure beilei.xing
                     ` (15 more replies)
  0 siblings, 16 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Refactor the idpf PMD by introducing an idpf common module, which will also
be consumed by a new PMD: the CPFL (Control Plane Function Library) PMD.

v2 changes:
 - Refine irq map/unmap functions.
 - Fix cross compile issue.
v3 changes:
 - Embed vport_info field into the vport structure.
 - Refine APIs' name and order in version.map.
 - Refine commit log.
v4 changes:
 - Refine commit log.

Beilei Xing (15):
  common/idpf: add adapter structure
  common/idpf: add vport structure
  common/idpf: add virtual channel functions
  common/idpf: introduce adapter init and deinit
  common/idpf: add vport init/deinit
  common/idpf: add config RSS
  common/idpf: add irq map/unmap
  common/idpf: support get packet type
  common/idpf: add vport info initialization
  common/idpf: add vector flags in vport
  common/idpf: add rxq and txq struct
  common/idpf: add helper functions for queue setup and release
  common/idpf: add Rx and Tx data path
  common/idpf: add vec queue setup
  common/idpf: add avx512 for single queue model

 drivers/common/idpf/base/meson.build          |    2 +-
 drivers/common/idpf/idpf_common_device.c      |  651 ++++++
 drivers/common/idpf/idpf_common_device.h      |  195 ++
 drivers/common/idpf/idpf_common_logs.h        |   47 +
 drivers/common/idpf/idpf_common_rxtx.c        | 1458 ++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h        |  278 +++
 .../idpf/idpf_common_rxtx_avx512.c}           |   14 +-
 .../idpf/idpf_common_virtchnl.c}              |  883 ++-----
 drivers/common/idpf/idpf_common_virtchnl.h    |   52 +
 drivers/common/idpf/meson.build               |   38 +
 drivers/common/idpf/version.map               |   55 +-
 drivers/net/idpf/idpf_ethdev.c                |  544 +----
 drivers/net/idpf/idpf_ethdev.h                |  194 +-
 drivers/net/idpf/idpf_logs.h                  |   24 -
 drivers/net/idpf/idpf_rxtx.c                  | 2055 +++--------------
 drivers/net/idpf/idpf_rxtx.h                  |  253 +-
 drivers/net/idpf/meson.build                  |   18 -
 17 files changed, 3370 insertions(+), 3391 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_device.c
 create mode 100644 drivers/common/idpf/idpf_common_device.h
 create mode 100644 drivers/common/idpf/idpf_common_logs.h
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.c
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.h
 rename drivers/{net/idpf/idpf_rxtx_vec_avx512.c => common/idpf/idpf_common_rxtx_avx512.c} (98%)
 rename drivers/{net/idpf/idpf_vchnl.c => common/idpf/idpf_common_virtchnl.c} (56%)
 create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h

-- 
2.26.2


* [PATCH v4 01/15] common/idpf: add adapter structure
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common module beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-17  8:06   ` [PATCH v4 02/15] common/idpf: add vport structure beilei.xing
                     ` (14 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Add the idpf_adapter structure to the common module; it holds the basic
fields shared across drivers. Introduce the idpf_adapter_ext structure in
the PMD, which embeds idpf_adapter and carries the PMD-specific fields.
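
For reference, a minimal stand-alone sketch of the embedding pattern used
here (the struct bodies are trimmed to one field each for the sketch, but
idpf_adapter, idpf_adapter_ext and IDPF_ADAPTER_TO_EXT match the patch;
pmd_txq_model is an invented demo helper):

  #include <stddef.h>

  #ifndef container_of
  #define container_of(ptr, type, member) \
      ((type *)((char *)(ptr) - offsetof(type, member)))
  #endif

  struct idpf_adapter {              /* common, device-level fields */
      unsigned int cmd_retval;
  };

  struct idpf_adapter_ext {          /* PMD-only wrapper */
      struct idpf_adapter base;      /* must be embedded by value */
      unsigned int txq_model;
  };

  #define IDPF_ADAPTER_TO_EXT(p) \
      container_of((p), struct idpf_adapter_ext, base)

  /* Common code only ever sees struct idpf_adapter *; the PMD widens
   * the pointer back whenever it needs its private fields. */
  static unsigned int
  pmd_txq_model(struct idpf_adapter *base)
  {
      return IDPF_ADAPTER_TO_EXT(base)->txq_model;
  }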

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h | 20 ++++++
 drivers/net/idpf/idpf_ethdev.c           | 91 ++++++++++--------------
 drivers/net/idpf/idpf_ethdev.h           | 25 +++----
 drivers/net/idpf/idpf_rxtx.c             | 16 ++---
 drivers/net/idpf/idpf_rxtx.h             |  4 +-
 drivers/net/idpf/idpf_rxtx_vec_avx512.c  |  3 +-
 drivers/net/idpf/idpf_vchnl.c            | 30 ++++----
 7 files changed, 99 insertions(+), 90 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_device.h

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
new file mode 100644
index 0000000000..4f548a7185
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_DEVICE_H_
+#define _IDPF_COMMON_DEVICE_H_
+
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+struct idpf_adapter {
+	struct idpf_hw hw;
+	struct virtchnl2_version_info virtchnl_version;
+	struct virtchnl2_get_capabilities caps;
+	volatile uint32_t pend_cmd; /* pending command not finished */
+	uint32_t cmd_retval; /* return value of the cmd response from cp */
+	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
+};
+
+#endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 3f1b77144c..1b13d081a7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -53,8 +53,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
 
-	dev_info->max_rx_queues = adapter->caps->max_rx_q;
-	dev_info->max_tx_queues = adapter->caps->max_tx_q;
+	dev_info->max_rx_queues = adapter->caps.max_rx_q;
+	dev_info->max_tx_queues = adapter->caps.max_tx_q;
 	dev_info->min_rx_bufsize = IDPF_MIN_BUF_SIZE;
 	dev_info->max_rx_pktlen = vport->max_mtu + IDPF_ETH_OVERHEAD;
 
@@ -147,7 +147,7 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev,
 			 struct virtchnl2_create_vport *vport_info)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(vport->adapter);
 
 	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
 	if (adapter->txq_model == 0) {
@@ -379,7 +379,7 @@ idpf_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
-	if (adapter->caps->rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
 		ret = idpf_init_rss(vport);
 		if (ret != 0) {
 			PMD_INIT_LOG(ERR, "Failed to init rss");
@@ -420,7 +420,7 @@ idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 
 	/* Rx interrupt disabled, Map interrupt only for writeback */
 
-	/* The capability flags adapter->caps->other_caps should be
+	/* The capability flags adapter->caps.other_caps should be
 	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
 	 * condition should be updated when the FW can return the
 	 * correct flag bits.
@@ -518,9 +518,9 @@ static int
 idpf_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	uint16_t num_allocated_vectors =
-		adapter->caps->num_allocated_vectors;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
 	uint16_t req_vecs_num;
 	int ret;
 
@@ -596,7 +596,7 @@ static int
 idpf_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(vport->adapter);
 
 	idpf_dev_stop(dev);
 
@@ -728,7 +728,7 @@ parse_bool(const char *key, const char *value, void *args)
 }
 
 static int
-idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter,
+idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter,
 		   struct idpf_devargs *idpf_args)
 {
 	struct rte_devargs *devargs = pci_dev->device.devargs;
@@ -875,14 +875,14 @@ idpf_init_mbx(struct idpf_hw *hw)
 }
 
 static int
-idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
+idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
-	struct idpf_hw *hw = &adapter->hw;
+	struct idpf_hw *hw = &adapter->base.hw;
 	int ret = 0;
 
 	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
 	hw->hw_addr_len = pci_dev->mem_resource[0].len;
-	hw->back = adapter;
+	hw->back = &adapter->base;
 	hw->vendor_id = pci_dev->id.vendor_id;
 	hw->device_id = pci_dev->id.device_id;
 	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
@@ -902,15 +902,15 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 		goto err;
 	}
 
-	adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
-					IDPF_DFLT_MBX_BUF_SIZE, 0);
-	if (adapter->mbx_resp == NULL) {
+	adapter->base.mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
+					     IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (adapter->base.mbx_resp == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
 		ret = -ENOMEM;
 		goto err_mbx;
 	}
 
-	ret = idpf_vc_check_api_version(adapter);
+	ret = idpf_vc_check_api_version(&adapter->base);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to check api version");
 		goto err_api;
@@ -922,21 +922,13 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 		goto err_api;
 	}
 
-	adapter->caps = rte_zmalloc("idpf_caps",
-				sizeof(struct virtchnl2_get_capabilities), 0);
-	if (adapter->caps == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate idpf_caps memory");
-		ret = -ENOMEM;
-		goto err_api;
-	}
-
-	ret = idpf_vc_get_caps(adapter);
+	ret = idpf_vc_get_caps(&adapter->base);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to get capabilities");
-		goto err_caps;
+		goto err_api;
 	}
 
-	adapter->max_vport_nb = adapter->caps->max_vports;
+	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
 				      adapter->max_vport_nb *
@@ -945,7 +937,7 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 	if (adapter->vports == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
 		ret = -ENOMEM;
-		goto err_vports;
+		goto err_api;
 	}
 
 	adapter->max_rxq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
@@ -962,13 +954,9 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 
 	return ret;
 
-err_vports:
-err_caps:
-	rte_free(adapter->caps);
-	adapter->caps = NULL;
 err_api:
-	rte_free(adapter->mbx_resp);
-	adapter->mbx_resp = NULL;
+	rte_free(adapter->base.mbx_resp);
+	adapter->base.mbx_resp = NULL;
 err_mbx:
 	idpf_ctlq_deinit(hw);
 err:
@@ -995,7 +983,7 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 };
 
 static uint16_t
-idpf_vport_idx_alloc(struct idpf_adapter *ad)
+idpf_vport_idx_alloc(struct idpf_adapter_ext *ad)
 {
 	uint16_t vport_idx;
 	uint16_t i;
@@ -1018,13 +1006,13 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_vport_param *param = init_params;
-	struct idpf_adapter *adapter = param->adapter;
+	struct idpf_adapter_ext *adapter = param->adapter;
 	/* for sending create vport virtchnl msg prepare */
 	struct virtchnl2_create_vport vport_req_info;
 	int ret = 0;
 
 	dev->dev_ops = &idpf_eth_dev_ops;
-	vport->adapter = adapter;
+	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
 
@@ -1085,10 +1073,10 @@ static const struct rte_pci_id pci_id_idpf_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-struct idpf_adapter *
-idpf_find_adapter(struct rte_pci_device *pci_dev)
+struct idpf_adapter_ext *
+idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 {
-	struct idpf_adapter *adapter;
+	struct idpf_adapter_ext *adapter;
 	int found = 0;
 
 	if (pci_dev == NULL)
@@ -1110,17 +1098,14 @@ idpf_find_adapter(struct rte_pci_device *pci_dev)
 }
 
 static void
-idpf_adapter_rel(struct idpf_adapter *adapter)
+idpf_adapter_rel(struct idpf_adapter_ext *adapter)
 {
-	struct idpf_hw *hw = &adapter->hw;
+	struct idpf_hw *hw = &adapter->base.hw;
 
 	idpf_ctlq_deinit(hw);
 
-	rte_free(adapter->caps);
-	adapter->caps = NULL;
-
-	rte_free(adapter->mbx_resp);
-	adapter->mbx_resp = NULL;
+	rte_free(adapter->base.mbx_resp);
+	adapter->base.mbx_resp = NULL;
 
 	rte_free(adapter->vports);
 	adapter->vports = NULL;
@@ -1131,7 +1116,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	       struct rte_pci_device *pci_dev)
 {
 	struct idpf_vport_param vport_param;
-	struct idpf_adapter *adapter;
+	struct idpf_adapter_ext *adapter;
 	struct idpf_devargs devargs;
 	char name[RTE_ETH_NAME_MAX_LEN];
 	int i, retval;
@@ -1143,11 +1128,11 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		idpf_adapter_list_init = true;
 	}
 
-	adapter = idpf_find_adapter(pci_dev);
+	adapter = idpf_find_adapter_ext(pci_dev);
 	if (adapter == NULL) {
 		first_probe = true;
-		adapter = rte_zmalloc("idpf_adapter",
-						sizeof(struct idpf_adapter), 0);
+		adapter = rte_zmalloc("idpf_adapter_ext",
+				      sizeof(struct idpf_adapter_ext), 0);
 		if (adapter == NULL) {
 			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
 			return -ENOMEM;
@@ -1225,7 +1210,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 static int
 idpf_pci_remove(struct rte_pci_device *pci_dev)
 {
-	struct idpf_adapter *adapter = idpf_find_adapter(pci_dev);
+	struct idpf_adapter_ext *adapter = idpf_find_adapter_ext(pci_dev);
 	uint16_t port_id;
 
 	/* Ethdev created can be found RTE_ETH_FOREACH_DEV_OF through rte_device */
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index b0746e5041..e956fa989c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -15,6 +15,7 @@
 
 #include "idpf_logs.h"
 
+#include <idpf_common_device.h>
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
 
@@ -91,7 +92,7 @@ struct idpf_chunks_info {
 };
 
 struct idpf_vport_param {
-	struct idpf_adapter *adapter;
+	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
 	uint16_t idx;       /* index in adapter->vports[]*/
 };
@@ -144,17 +145,11 @@ struct idpf_devargs {
 	uint16_t req_vport_nb;
 };
 
-struct idpf_adapter {
-	TAILQ_ENTRY(idpf_adapter) next;
-	struct idpf_hw hw;
-	char name[IDPF_ADAPTER_NAME_LEN];
-
-	struct virtchnl2_version_info virtchnl_version;
-	struct virtchnl2_get_capabilities *caps;
+struct idpf_adapter_ext {
+	TAILQ_ENTRY(idpf_adapter_ext) next;
+	struct idpf_adapter base;
 
-	volatile uint32_t pend_cmd; /* pending command not finished */
-	uint32_t cmd_retval; /* return value of the cmd response from ipf */
-	uint8_t *mbx_resp; /* buffer to store the mailbox response from ipf */
+	char name[IDPF_ADAPTER_NAME_LEN];
 
 	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
 	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
@@ -182,10 +177,12 @@ struct idpf_adapter {
 	uint64_t time_hw;
 };
 
-TAILQ_HEAD(idpf_adapter_list, idpf_adapter);
+TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
 
 #define IDPF_DEV_TO_PCI(eth_dev)		\
 	RTE_DEV_TO_PCI((eth_dev)->device)
+#define IDPF_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct idpf_adapter_ext, base)
 
 /* structure used for sending and checking response of virtchnl ops */
 struct idpf_cmd_info {
@@ -234,10 +231,10 @@ atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
 	return !ret;
 }
 
-struct idpf_adapter *idpf_find_adapter(struct rte_pci_device *pci_dev);
+struct idpf_adapter_ext *idpf_find_adapter_ext(struct rte_pci_device *pci_dev);
 void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
 int idpf_vc_check_api_version(struct idpf_adapter *adapter);
-int idpf_get_pkt_type(struct idpf_adapter *adapter);
+int idpf_get_pkt_type(struct idpf_adapter_ext *adapter);
 int idpf_vc_get_caps(struct idpf_adapter *adapter);
 int idpf_vc_create_vport(struct idpf_vport *vport,
 			 struct virtchnl2_create_vport *vport_info);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 5aef8ba2b6..4845f2ea0a 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1384,7 +1384,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct idpf_rx_queue *rxq;
 	const uint32_t *ptype_tbl;
 	uint8_t status_err0_qw1;
-	struct idpf_adapter *ad;
+	struct idpf_adapter_ext *ad;
 	struct rte_mbuf *rxm;
 	uint16_t rx_id_bufq1;
 	uint16_t rx_id_bufq2;
@@ -1398,7 +1398,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	nb_rx = 0;
 	rxq = rx_queue;
-	ad = rxq->adapter;
+	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
 
 	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
 		return nb_rx;
@@ -1407,7 +1407,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
 	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
 	rx_desc_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
+	ptype_tbl = ad->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
@@ -1791,7 +1791,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	const uint32_t *ptype_tbl;
 	uint16_t rx_id, nb_hold;
 	struct rte_eth_dev *dev;
-	struct idpf_adapter *ad;
+	struct idpf_adapter_ext *ad;
 	uint16_t rx_packet_len;
 	struct rte_mbuf *rxm;
 	struct rte_mbuf *nmb;
@@ -1805,14 +1805,14 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	nb_hold = 0;
 	rxq = rx_queue;
 
-	ad = rxq->adapter;
+	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
 
 	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
 		return nb_rx;
 
 	rx_id = rxq->rx_tail;
 	rx_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
+	ptype_tbl = ad->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
@@ -2221,7 +2221,7 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter *ad = vport->adapter;
+	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 	struct idpf_rx_queue *rxq;
 	int i;
 
@@ -2275,7 +2275,7 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter *ad = vport->adapter;
+	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 #ifdef CC_AVX512_SUPPORT
 	struct idpf_tx_queue *txq;
 	int i;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 730dc64ebc..047fc03614 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -247,11 +247,11 @@ void idpf_set_tx_function(struct rte_eth_dev *dev);
 /* Helper function to convert a 32b nanoseconds timestamp to 64b. */
 static inline uint64_t
 
-idpf_tstamp_convert_32b_64b(struct idpf_adapter *ad, uint32_t flag,
+idpf_tstamp_convert_32b_64b(struct idpf_adapter_ext *ad, uint32_t flag,
 			    uint32_t in_timestamp)
 {
 #ifdef RTE_ARCH_X86_64
-	struct idpf_hw *hw = &ad->hw;
+	struct idpf_hw *hw = &ad->base.hw;
 	const uint64_t mask = 0xFFFFFFFF;
 	uint32_t hi, lo, lo2, delta;
 	uint64_t ns;
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index fb2b6bb53c..efa7cd2187 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -245,7 +245,8 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
 				   struct rte_mbuf **rx_pkts,
 				   uint16_t nb_pkts)
 {
-	const uint32_t *type_table = rxq->adapter->ptype_tbl;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(rxq->adapter);
+	const uint32_t *type_table = adapter->ptype_tbl;
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
 						    rxq->mbuf_initializer);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 14b34619af..ca481bb915 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -311,13 +311,17 @@ idpf_vc_check_api_version(struct idpf_adapter *adapter)
 }
 
 int __rte_cold
-idpf_get_pkt_type(struct idpf_adapter *adapter)
+idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 {
 	struct virtchnl2_get_ptype_info *ptype_info;
-	uint16_t ptype_recvd = 0, ptype_offset, i, j;
+	struct idpf_adapter *base;
+	uint16_t ptype_offset, i, j;
+	uint16_t ptype_recvd = 0;
 	int ret;
 
-	ret = idpf_vc_query_ptype_info(adapter);
+	base = &adapter->base;
+
+	ret = idpf_vc_query_ptype_info(base);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Fail to query packet type information");
 		return ret;
@@ -328,7 +332,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 			return -ENOMEM;
 
 	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_read_one_msg(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
+		ret = idpf_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
 					IDPF_DFLT_MBX_BUF_SIZE, (u8 *)ptype_info);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR, "Fail to get packet type information");
@@ -515,7 +519,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 
 free_ptype_info:
 	rte_free(ptype_info);
-	clear_cmd(adapter);
+	clear_cmd(base);
 	return ret;
 }
 
@@ -577,7 +581,7 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 		return err;
 	}
 
-	rte_memcpy(adapter->caps, args.out_buffer, sizeof(caps_msg));
+	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
 
 	return 0;
 }
@@ -740,7 +744,8 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 int
 idpf_vc_config_rxqs(struct idpf_vport *vport)
 {
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
 	struct idpf_rx_queue **rxq =
 		(struct idpf_rx_queue **)vport->dev_data->rx_queues;
 	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
@@ -832,10 +837,10 @@ idpf_vc_config_rxqs(struct idpf_vport *vport)
 		args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
 		args.in_args = (uint8_t *)vc_rxqs;
 		args.in_args_size = size;
-		args.out_buffer = adapter->mbx_resp;
+		args.out_buffer = base->mbx_resp;
 		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-		err = idpf_execute_vc_cmd(adapter, &args);
+		err = idpf_execute_vc_cmd(base, &args);
 		rte_free(vc_rxqs);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
@@ -940,7 +945,8 @@ idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
 int
 idpf_vc_config_txqs(struct idpf_vport *vport)
 {
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
 	struct idpf_tx_queue **txq =
 		(struct idpf_tx_queue **)vport->dev_data->tx_queues;
 	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
@@ -1010,10 +1016,10 @@ idpf_vc_config_txqs(struct idpf_vport *vport)
 		args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
 		args.in_args = (uint8_t *)vc_txqs;
 		args.in_args_size = size;
-		args.out_buffer = adapter->mbx_resp;
+		args.out_buffer = base->mbx_resp;
 		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-		err = idpf_execute_vc_cmd(adapter, &args);
+		err = idpf_execute_vc_cmd(base, &args);
 		rte_free(vc_txqs);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
-- 
2.26.2


* [PATCH v4 02/15] common/idpf: add vport structure
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common module beilei.xing
  2023-01-17  8:06   ` [PATCH v4 01/15] common/idpf: add adapter structure beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-17  8:06   ` [PATCH v4 03/15] common/idpf: add virtual channel functions beilei.xing
                     ` (13 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Move the idpf_vport structure to the common module and remove its ethdev
dependency. Also remove unused functions.
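
The ethdev decoupling hinges on dev_data: it goes from struct
rte_eth_dev_data * to a plain void * in the common struct, so
idpf_common_device.h needs no ethdev header while the PMD keeps storing
the same pointer. A small stand-alone sketch of that pattern (the
vport_sketch/ethdev_data types and the pmd_* helpers are invented for
the illustration, not part of the patch):

  /* Common module side: opaque handle only. */
  struct vport_sketch {
      void *dev_data;                /* owned and interpreted by the PMD */
  };

  /* PMD side: knows the concrete type behind the pointer. */
  struct ethdev_data {               /* stand-in for rte_eth_dev_data */
      unsigned short nb_rx_queues;
  };

  static void
  pmd_attach(struct vport_sketch *vport, struct ethdev_data *data)
  {
      vport->dev_data = data;        /* implicit conversion to void * */
  }

  static unsigned short
  pmd_nb_rxq(const struct vport_sketch *vport)
  {
      const struct ethdev_data *data = vport->dev_data;

      return data->nb_rx_queues;     /* PMD restores the real type */
  }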

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h |  59 ++++++
 drivers/net/idpf/idpf_ethdev.c           |  10 +-
 drivers/net/idpf/idpf_ethdev.h           |  66 +-----
 drivers/net/idpf/idpf_rxtx.c             |   4 +-
 drivers/net/idpf/idpf_rxtx.h             |   3 +
 drivers/net/idpf/idpf_vchnl.c            | 252 +++--------------------
 6 files changed, 96 insertions(+), 298 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 4f548a7185..b7fff84b25 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -17,4 +17,63 @@ struct idpf_adapter {
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
 };
 
+struct idpf_chunks_info {
+	uint32_t tx_start_qid;
+	uint32_t rx_start_qid;
+	/* Valid only if split queue model */
+	uint32_t tx_compl_start_qid;
+	uint32_t rx_buf_start_qid;
+
+	uint64_t tx_qtail_start;
+	uint32_t tx_qtail_spacing;
+	uint64_t rx_qtail_start;
+	uint32_t rx_qtail_spacing;
+	uint64_t tx_compl_qtail_start;
+	uint32_t tx_compl_qtail_spacing;
+	uint64_t rx_buf_qtail_start;
+	uint32_t rx_buf_qtail_spacing;
+};
+
+struct idpf_vport {
+	struct idpf_adapter *adapter; /* Backreference to associated adapter */
+	struct virtchnl2_create_vport *vport_info; /* virtchnl response info handling */
+	uint16_t sw_idx; /* SW index in adapter->vports[]*/
+	uint16_t vport_id;
+	uint32_t txq_model;
+	uint32_t rxq_model;
+	uint16_t num_tx_q;
+	/* valid only if txq_model is split Q */
+	uint16_t num_tx_complq;
+	uint16_t num_rx_q;
+	/* valid only if rxq_model is split Q */
+	uint16_t num_rx_bufq;
+
+	uint16_t max_mtu;
+	uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+
+	enum virtchnl_rss_algorithm rss_algorithm;
+	uint16_t rss_key_size;
+	uint16_t rss_lut_size;
+
+	void *dev_data; /* Pointer to the device data */
+	uint16_t max_pkt_len; /* Maximum packet length */
+
+	/* RSS info */
+	uint32_t *rss_lut;
+	uint8_t *rss_key;
+	uint64_t rss_hf;
+
+	/* MSIX info*/
+	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
+	uint16_t max_vectors;
+	struct virtchnl2_alloc_vectors *recv_vectors;
+
+	/* Chunk info */
+	struct idpf_chunks_info chunks_info;
+
+	uint16_t devarg_id;
+
+	bool stopped;
+};
+
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 1b13d081a7..72a5c9f39b 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -275,11 +275,13 @@ static int
 idpf_init_rss(struct idpf_vport *vport)
 {
 	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
 	uint16_t i, nb_q, lut_size;
 	int ret = 0;
 
-	rss_conf = &vport->dev_data->dev_conf.rx_adv_conf.rss_conf;
-	nb_q = vport->dev_data->nb_rx_queues;
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
 
 	vport->rss_key = rte_zmalloc("rss_key",
 				     vport->rss_key_size, 0);
@@ -466,7 +468,7 @@ idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 	}
 	vport->qv_map = qv_map;
 
-	if (idpf_vc_config_irq_map_unmap(vport, true) != 0) {
+	if (idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, true) != 0) {
 		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
 		goto config_irq_map_err;
 	}
@@ -582,7 +584,7 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_stop_queues(dev);
 
-	idpf_vc_config_irq_map_unmap(vport, false);
+	idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, false);
 
 	if (vport->recv_vectors != NULL)
 		idpf_vc_dealloc_vectors(vport);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index e956fa989c..8c29019667 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -74,71 +74,12 @@ enum idpf_vc_result {
 	IDPF_MSG_CMD,      /* Read async command result */
 };
 
-struct idpf_chunks_info {
-	uint32_t tx_start_qid;
-	uint32_t rx_start_qid;
-	/* Valid only if split queue model */
-	uint32_t tx_compl_start_qid;
-	uint32_t rx_buf_start_qid;
-
-	uint64_t tx_qtail_start;
-	uint32_t tx_qtail_spacing;
-	uint64_t rx_qtail_start;
-	uint32_t rx_qtail_spacing;
-	uint64_t tx_compl_qtail_start;
-	uint32_t tx_compl_qtail_spacing;
-	uint64_t rx_buf_qtail_start;
-	uint32_t rx_buf_qtail_spacing;
-};
-
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
 	uint16_t idx;       /* index in adapter->vports[]*/
 };
 
-struct idpf_vport {
-	struct idpf_adapter *adapter; /* Backreference to associated adapter */
-	struct virtchnl2_create_vport *vport_info; /* virtchnl response info handling */
-	uint16_t sw_idx; /* SW index in adapter->vports[]*/
-	uint16_t vport_id;
-	uint32_t txq_model;
-	uint32_t rxq_model;
-	uint16_t num_tx_q;
-	/* valid only if txq_model is split Q */
-	uint16_t num_tx_complq;
-	uint16_t num_rx_q;
-	/* valid only if rxq_model is split Q */
-	uint16_t num_rx_bufq;
-
-	uint16_t max_mtu;
-	uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
-
-	enum virtchnl_rss_algorithm rss_algorithm;
-	uint16_t rss_key_size;
-	uint16_t rss_lut_size;
-
-	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
-	uint16_t max_pkt_len; /* Maximum packet length */
-
-	/* RSS info */
-	uint32_t *rss_lut;
-	uint8_t *rss_key;
-	uint64_t rss_hf;
-
-	/* MSIX info*/
-	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
-	uint16_t max_vectors;
-	struct virtchnl2_alloc_vectors *recv_vectors;
-
-	/* Chunk info */
-	struct idpf_chunks_info chunks_info;
-
-	uint16_t devarg_id;
-
-	bool stopped;
-};
-
 /* Struct used when parse driver specific devargs */
 struct idpf_devargs {
 	uint16_t req_vports[IDPF_MAX_VPORT_NUM];
@@ -242,15 +183,12 @@ int idpf_vc_destroy_vport(struct idpf_vport *vport);
 int idpf_vc_set_rss_key(struct idpf_vport *vport);
 int idpf_vc_set_rss_lut(struct idpf_vport *vport);
 int idpf_vc_set_rss_hash(struct idpf_vport *vport);
-int idpf_vc_config_rxqs(struct idpf_vport *vport);
-int idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id);
-int idpf_vc_config_txqs(struct idpf_vport *vport);
-int idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id);
 int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
 		      bool rx, bool on);
 int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
 int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, bool map);
+int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+				 uint16_t nb_rxq, bool map);
 int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
 int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
 int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 4845f2ea0a..918d156e03 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1066,7 +1066,7 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		dev->data->rx_queues[rx_queue_id];
 	int err = 0;
 
-	err = idpf_vc_config_rxq(vport, rx_queue_id);
+	err = idpf_vc_config_rxq(vport, rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
 		return err;
@@ -1117,7 +1117,7 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 		dev->data->tx_queues[tx_queue_id];
 	int err = 0;
 
-	err = idpf_vc_config_txq(vport, tx_queue_id);
+	err = idpf_vc_config_txq(vport, txq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
 		return err;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 047fc03614..9417651b3f 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -243,6 +243,9 @@ void idpf_stop_queues(struct rte_eth_dev *dev);
 void idpf_set_rx_function(struct rte_eth_dev *dev);
 void idpf_set_tx_function(struct rte_eth_dev *dev);
 
+int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
+int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+
 #define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
 /* Helper function to convert a 32b nanoseconds timestamp to 64b. */
 static inline uint64_t
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index ca481bb915..633d3295d3 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -742,121 +742,9 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 
 #define IDPF_RX_BUF_STRIDE		64
 int
-idpf_vc_config_rxqs(struct idpf_vport *vport)
-{
-	struct idpf_adapter *base = vport->adapter;
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
-	struct idpf_rx_queue **rxq =
-		(struct idpf_rx_queue **)vport->dev_data->rx_queues;
-	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
-	struct virtchnl2_rxq_info *rxq_info;
-	struct idpf_cmd_info args;
-	uint16_t total_qs, num_qs;
-	int size, i, j;
-	int err = 0;
-	int k = 0;
-
-	total_qs = vport->num_rx_q + vport->num_rx_bufq;
-	while (total_qs) {
-		if (total_qs > adapter->max_rxq_per_msg) {
-			num_qs = adapter->max_rxq_per_msg;
-			total_qs -= adapter->max_rxq_per_msg;
-		} else {
-			num_qs = total_qs;
-			total_qs = 0;
-		}
-
-		size = sizeof(*vc_rxqs) + (num_qs - 1) *
-			sizeof(struct virtchnl2_rxq_info);
-		vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
-		if (vc_rxqs == NULL) {
-			PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
-			err = -ENOMEM;
-			break;
-		}
-		vc_rxqs->vport_id = vport->vport_id;
-		vc_rxqs->num_qinfo = num_qs;
-		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-			for (i = 0; i < num_qs; i++, k++) {
-				rxq_info = &vc_rxqs->qinfo[i];
-				rxq_info->dma_ring_addr = rxq[k]->rx_ring_phys_addr;
-				rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-				rxq_info->queue_id = rxq[k]->queue_id;
-				rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-				rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
-				rxq_info->max_pkt_size = vport->max_pkt_len;
-
-				rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
-				rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-				rxq_info->ring_len = rxq[k]->nb_rx_desc;
-			}
-		} else {
-			for (i = 0; i < num_qs / 3; i++, k++) {
-				/* Rx queue */
-				rxq_info = &vc_rxqs->qinfo[i * 3];
-				rxq_info->dma_ring_addr =
-					rxq[k]->rx_ring_phys_addr;
-				rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-				rxq_info->queue_id = rxq[k]->queue_id;
-				rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-				rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
-				rxq_info->max_pkt_size = vport->max_pkt_len;
-
-				rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-				rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-				rxq_info->ring_len = rxq[k]->nb_rx_desc;
-				rxq_info->rx_bufq1_id = rxq[k]->bufq1->queue_id;
-				rxq_info->rx_bufq2_id = rxq[k]->bufq2->queue_id;
-				rxq_info->rx_buffer_low_watermark = 64;
-
-				/* Buffer queue */
-				for (j = 1; j <= IDPF_RX_BUFQ_PER_GRP; j++) {
-					struct idpf_rx_queue *bufq = j == 1 ?
-						rxq[k]->bufq1 : rxq[k]->bufq2;
-					rxq_info = &vc_rxqs->qinfo[i * 3 + j];
-					rxq_info->dma_ring_addr =
-						bufq->rx_ring_phys_addr;
-					rxq_info->type =
-						VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-					rxq_info->queue_id = bufq->queue_id;
-					rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-					rxq_info->data_buffer_size = bufq->rx_buf_len;
-					rxq_info->desc_ids =
-						VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-					rxq_info->ring_len = bufq->nb_rx_desc;
-
-					rxq_info->buffer_notif_stride =
-						IDPF_RX_BUF_STRIDE;
-					rxq_info->rx_buffer_low_watermark = 64;
-				}
-			}
-		}
-		memset(&args, 0, sizeof(args));
-		args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
-		args.in_args = (uint8_t *)vc_rxqs;
-		args.in_args_size = size;
-		args.out_buffer = base->mbx_resp;
-		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-		err = idpf_execute_vc_cmd(base, &args);
-		rte_free(vc_rxqs);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
-			break;
-		}
-	}
-
-	return err;
-}
-
-int
-idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
+idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 {
 	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_rx_queue **rxq =
-		(struct idpf_rx_queue **)vport->dev_data->rx_queues;
 	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
 	struct virtchnl2_rxq_info *rxq_info;
 	struct idpf_cmd_info args;
@@ -880,39 +768,38 @@ idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
 	vc_rxqs->num_qinfo = num_qs;
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq[rxq_id]->rx_ring_phys_addr;
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
 		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq[rxq_id]->queue_id;
+		rxq_info->queue_id = rxq->queue_id;
 		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-		rxq_info->data_buffer_size = rxq[rxq_id]->rx_buf_len;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
 		rxq_info->max_pkt_size = vport->max_pkt_len;
 
 		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
 		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
 
-		rxq_info->ring_len = rxq[rxq_id]->nb_rx_desc;
+		rxq_info->ring_len = rxq->nb_rx_desc;
 	}  else {
 		/* Rx queue */
 		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq[rxq_id]->rx_ring_phys_addr;
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
 		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq[rxq_id]->queue_id;
+		rxq_info->queue_id = rxq->queue_id;
 		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		rxq_info->data_buffer_size = rxq[rxq_id]->rx_buf_len;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
 		rxq_info->max_pkt_size = vport->max_pkt_len;
 
 		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
 		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
 
-		rxq_info->ring_len = rxq[rxq_id]->nb_rx_desc;
-		rxq_info->rx_bufq1_id = rxq[rxq_id]->bufq1->queue_id;
-		rxq_info->rx_bufq2_id = rxq[rxq_id]->bufq2->queue_id;
+		rxq_info->ring_len = rxq->nb_rx_desc;
+		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
+		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
 		rxq_info->rx_buffer_low_watermark = 64;
 
 		/* Buffer queue */
 		for (i = 1; i <= IDPF_RX_BUFQ_PER_GRP; i++) {
-			struct idpf_rx_queue *bufq =
-				i == 1 ? rxq[rxq_id]->bufq1 : rxq[rxq_id]->bufq2;
+			struct idpf_rx_queue *bufq = i == 1 ? rxq->bufq1 : rxq->bufq2;
 			rxq_info = &vc_rxqs->qinfo[i];
 			rxq_info->dma_ring_addr = bufq->rx_ring_phys_addr;
 			rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
@@ -943,99 +830,9 @@ idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
 }
 
 int
-idpf_vc_config_txqs(struct idpf_vport *vport)
-{
-	struct idpf_adapter *base = vport->adapter;
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
-	struct idpf_tx_queue **txq =
-		(struct idpf_tx_queue **)vport->dev_data->tx_queues;
-	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
-	struct virtchnl2_txq_info *txq_info;
-	struct idpf_cmd_info args;
-	uint16_t total_qs, num_qs;
-	int size, i;
-	int err = 0;
-	int k = 0;
-
-	total_qs = vport->num_tx_q + vport->num_tx_complq;
-	while (total_qs) {
-		if (total_qs > adapter->max_txq_per_msg) {
-			num_qs = adapter->max_txq_per_msg;
-			total_qs -= adapter->max_txq_per_msg;
-		} else {
-			num_qs = total_qs;
-			total_qs = 0;
-		}
-		size = sizeof(*vc_txqs) + (num_qs - 1) *
-			sizeof(struct virtchnl2_txq_info);
-		vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
-		if (vc_txqs == NULL) {
-			PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
-			err = -ENOMEM;
-			break;
-		}
-		vc_txqs->vport_id = vport->vport_id;
-		vc_txqs->num_qinfo = num_qs;
-		if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-			for (i = 0; i < num_qs; i++, k++) {
-				txq_info = &vc_txqs->qinfo[i];
-				txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
-				txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-				txq_info->queue_id = txq[k]->queue_id;
-				txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-				txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
-				txq_info->ring_len = txq[k]->nb_tx_desc;
-			}
-		} else {
-			for (i = 0; i < num_qs / 2; i++, k++) {
-				/* txq info */
-				txq_info = &vc_txqs->qinfo[2 * i];
-				txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
-				txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-				txq_info->queue_id = txq[k]->queue_id;
-				txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-				txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-				txq_info->ring_len = txq[k]->nb_tx_desc;
-				txq_info->tx_compl_queue_id =
-					txq[k]->complq->queue_id;
-				txq_info->relative_queue_id = txq_info->queue_id;
-
-				/* tx completion queue info */
-				txq_info = &vc_txqs->qinfo[2 * i + 1];
-				txq_info->dma_ring_addr =
-					txq[k]->complq->tx_ring_phys_addr;
-				txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-				txq_info->queue_id = txq[k]->complq->queue_id;
-				txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-				txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-				txq_info->ring_len = txq[k]->complq->nb_tx_desc;
-			}
-		}
-
-		memset(&args, 0, sizeof(args));
-		args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
-		args.in_args = (uint8_t *)vc_txqs;
-		args.in_args_size = size;
-		args.out_buffer = base->mbx_resp;
-		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-		err = idpf_execute_vc_cmd(base, &args);
-		rte_free(vc_txqs);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
-			break;
-		}
-	}
-
-	return err;
-}
-
-int
-idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id)
+idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 {
 	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_tx_queue **txq =
-		(struct idpf_tx_queue **)vport->dev_data->tx_queues;
 	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
 	struct virtchnl2_txq_info *txq_info;
 	struct idpf_cmd_info args;
@@ -1060,32 +857,32 @@ idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id)
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq[txq_id]->tx_ring_phys_addr;
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
 		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq[txq_id]->queue_id;
+		txq_info->queue_id = txq->queue_id;
 		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
 		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
-		txq_info->ring_len = txq[txq_id]->nb_tx_desc;
+		txq_info->ring_len = txq->nb_tx_desc;
 	} else {
 		/* txq info */
 		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq[txq_id]->tx_ring_phys_addr;
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
 		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq[txq_id]->queue_id;
+		txq_info->queue_id = txq->queue_id;
 		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
 		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq[txq_id]->nb_tx_desc;
-		txq_info->tx_compl_queue_id = txq[txq_id]->complq->queue_id;
+		txq_info->ring_len = txq->nb_tx_desc;
+		txq_info->tx_compl_queue_id = txq->complq->queue_id;
 		txq_info->relative_queue_id = txq_info->queue_id;
 
 		/* tx completion queue info */
 		txq_info = &vc_txqs->qinfo[1];
-		txq_info->dma_ring_addr = txq[txq_id]->complq->tx_ring_phys_addr;
+		txq_info->dma_ring_addr = txq->complq->tx_ring_phys_addr;
 		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		txq_info->queue_id = txq[txq_id]->complq->queue_id;
+		txq_info->queue_id = txq->complq->queue_id;
 		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
 		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq[txq_id]->complq->nb_tx_desc;
+		txq_info->ring_len = txq->complq->nb_tx_desc;
 	}
 
 	memset(&args, 0, sizeof(args));
@@ -1104,12 +901,11 @@ idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id)
 }
 
 int
-idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, bool map)
+idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_queue_vector_maps *map_info;
 	struct virtchnl2_queue_vector *vecmap;
-	uint16_t nb_rxq = vport->dev_data->nb_rx_queues;
 	struct idpf_cmd_info args;
 	int len, i, err = 0;
 
-- 
2.26.2


* [PATCH v4 03/15] common/idpf: add virtual channel functions
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common module beilei.xing
  2023-01-17  8:06   ` [PATCH v4 01/15] common/idpf: add adapter structure beilei.xing
  2023-01-17  8:06   ` [PATCH v4 02/15] common/idpf: add vport structure beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-18  4:00     ` Zhang, Qi Z
  2023-01-17  8:06   ` [PATCH v4 04/15] common/idpf: introduce adapter init and deinit beilei.xing
                     ` (12 subsequent siblings)
  15 siblings, 1 reply; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Move most of the virtual channel functions to the idpf common module.
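
Every helper moved here funnels through one send-and-poll sequence built
on struct idpf_cmd_info. Condensed from idpf_vc_check_api_version() in
this patch (version, adapter and err come from that function's scope;
the memset of args is an extra defensive step, not in the original):

  struct idpf_cmd_info args;

  memset(&args, 0, sizeof(args));
  args.ops = VIRTCHNL_OP_VERSION;          /* opcode to send */
  args.in_args = (uint8_t *)&version;      /* request payload */
  args.in_args_size = sizeof(version);
  args.out_buffer = adapter->mbx_resp;     /* reply is copied here */
  args.out_size = IDPF_DFLT_MBX_BUF_SIZE;

  /* idpf_execute_vc_cmd() claims the pend_cmd slot atomically, posts
   * the message to the mailbox, and for init-time opcodes polls the
   * control queue until the reply arrives or the retries run out. */
  err = idpf_execute_vc_cmd(adapter, &args);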

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/base/meson.build       |   2 +-
 drivers/common/idpf/idpf_common_device.c   |   8 +
 drivers/common/idpf/idpf_common_device.h   |  61 ++
 drivers/common/idpf/idpf_common_logs.h     |  23 +
 drivers/common/idpf/idpf_common_virtchnl.c | 815 +++++++++++++++++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  48 ++
 drivers/common/idpf/meson.build            |   5 +
 drivers/common/idpf/version.map            |  20 +-
 drivers/net/idpf/idpf_ethdev.c             |   9 +-
 drivers/net/idpf/idpf_ethdev.h             |  85 +--
 drivers/net/idpf/idpf_vchnl.c              | 815 +--------------------
 11 files changed, 983 insertions(+), 908 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_device.c
 create mode 100644 drivers/common/idpf/idpf_common_logs.h
 create mode 100644 drivers/common/idpf/idpf_common_virtchnl.c
 create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h

diff --git a/drivers/common/idpf/base/meson.build b/drivers/common/idpf/base/meson.build
index 183587b51a..dc4b93c198 100644
--- a/drivers/common/idpf/base/meson.build
+++ b/drivers/common/idpf/base/meson.build
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2022 Intel Corporation
 
-sources = files(
+sources += files(
         'idpf_common.c',
         'idpf_controlq.c',
         'idpf_controlq_setup.c',
diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
new file mode 100644
index 0000000000..5062780362
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_log.h>
+#include <idpf_common_device.h>
+
+RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index b7fff84b25..a7537281d1 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -7,6 +7,12 @@
 
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
+#include <idpf_common_logs.h>
+
+#define IDPF_CTLQ_LEN		64
+#define IDPF_DFLT_MBX_BUF_SIZE	4096
+
+#define IDPF_MAX_PKT_TYPE	1024
 
 struct idpf_adapter {
 	struct idpf_hw hw;
@@ -76,4 +82,59 @@ struct idpf_vport {
 	bool stopped;
 };
 
+/* Message type read in virtual channel from PF */
+enum idpf_vc_result {
+	IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
+	IDPF_MSG_NON,      /* Read nothing from admin queue */
+	IDPF_MSG_SYS,      /* Read system msg from admin queue */
+	IDPF_MSG_CMD,      /* Read async command result */
+};
+
+/* structure used for sending and checking response of virtchnl ops */
+struct idpf_cmd_info {
+	uint32_t ops;
+	uint8_t *in_args;       /* buffer for sending */
+	uint32_t in_args_size;  /* buffer size for sending */
+	uint8_t *out_buffer;    /* buffer for response */
+	uint32_t out_size;      /* buffer size for response */
+};
+
+/* Notify that the current command is done. Only call this after
+ * atomic_set_cmd() has succeeded.
+ */
+static inline void
+notify_cmd(struct idpf_adapter *adapter, int msg_ret)
+{
+	adapter->cmd_retval = msg_ret;
+	/* Return value may be checked in another thread, need to ensure coherence. */
+	rte_wmb();
+	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
+}
+
+/* Clear the current command. Only call this after
+ * atomic_set_cmd() has succeeded.
+ */
+static inline void
+clear_cmd(struct idpf_adapter *adapter)
+{
+	/* Return value may be checked in another thread, need to ensure coherence. */
+	rte_wmb();
+	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
+	adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check whether there is a pending cmd in execution; if none, set the new command. */
+static inline bool
+atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
+{
+	uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
+	bool ret = __atomic_compare_exchange(&adapter->pend_cmd, &op_unk, &ops,
+					    0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
+
+	if (!ret)
+		DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
+
+	return !ret;
+}
+
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_logs.h b/drivers/common/idpf/idpf_common_logs.h
new file mode 100644
index 0000000000..fe36562769
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_logs.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_LOGS_H_
+#define _IDPF_COMMON_LOGS_H_
+
+#include <rte_log.h>
+
+extern int idpf_common_logtype;
+
+#define DRV_LOG_RAW(level, ...)					\
+	rte_log(RTE_LOG_ ## level,				\
+		idpf_common_logtype,				\
+		RTE_FMT("%s(): "				\
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n",	\
+			__func__,				\
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define DRV_LOG(level, fmt, args...)		\
+	DRV_LOG_RAW(level, fmt, ## args)
+
+#endif /* _IDPF_COMMON_LOGS_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
new file mode 100644
index 0000000000..2e94a95876
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -0,0 +1,815 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <idpf_common_virtchnl.h>
+#include <idpf_common_logs.h>
+
+static int
+idpf_vc_clean(struct idpf_adapter *adapter)
+{
+	struct idpf_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
+	uint16_t num_q_msg = IDPF_CTLQ_LEN;
+	struct idpf_dma_mem *dma_mem;
+	int err;
+	uint32_t i;
+
+	for (i = 0; i < 10; i++) {
+		err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
+		msleep(20);
+		if (num_q_msg > 0)
+			break;
+	}
+	if (err != 0)
+		return err;
+
+	/* Empty queue is not an error */
+	for (i = 0; i < num_q_msg; i++) {
+		dma_mem = q_msg[i]->ctx.indirect.payload;
+		if (dma_mem != NULL) {
+			idpf_free_dma_mem(&adapter->hw, dma_mem);
+			rte_free(dma_mem);
+		}
+		rte_free(q_msg[i]);
+	}
+
+	return 0;
+}
+
+static int
+idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
+		 uint16_t msg_size, uint8_t *msg)
+{
+	struct idpf_ctlq_msg *ctlq_msg;
+	struct idpf_dma_mem *dma_mem;
+	int err;
+
+	err = idpf_vc_clean(adapter);
+	if (err != 0)
+		goto err;
+
+	ctlq_msg = rte_zmalloc(NULL, sizeof(struct idpf_ctlq_msg), 0);
+	if (ctlq_msg == NULL) {
+		err = -ENOMEM;
+		goto err;
+	}
+
+	dma_mem = rte_zmalloc(NULL, sizeof(struct idpf_dma_mem), 0);
+	if (dma_mem == NULL) {
+		err = -ENOMEM;
+		goto dma_mem_error;
+	}
+
+	dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
+	idpf_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
+	if (dma_mem->va == NULL) {
+		err = -ENOMEM;
+		goto dma_alloc_error;
+	}
+
+	memcpy(dma_mem->va, msg, msg_size);
+
+	ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_pf;
+	ctlq_msg->func_id = 0;
+	ctlq_msg->data_len = msg_size;
+	ctlq_msg->cookie.mbx.chnl_opcode = op;
+	ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
+	ctlq_msg->ctx.indirect.payload = dma_mem;
+
+	err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
+	if (err != 0)
+		goto send_error;
+
+	return 0;
+
+send_error:
+	idpf_free_dma_mem(&adapter->hw, dma_mem);
+dma_alloc_error:
+	rte_free(dma_mem);
+dma_mem_error:
+	rte_free(ctlq_msg);
+err:
+	return err;
+}
+
+static enum idpf_vc_result
+idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
+		      uint8_t *buf)
+{
+	struct idpf_hw *hw = &adapter->hw;
+	struct idpf_ctlq_msg ctlq_msg;
+	struct idpf_dma_mem *dma_mem = NULL;
+	enum idpf_vc_result result = IDPF_MSG_NON;
+	uint32_t opcode;
+	uint16_t pending = 1;
+	int ret;
+
+	ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+	if (ret != 0) {
+		DRV_LOG(DEBUG, "Can't read msg from AQ");
+		if (ret != -ENOMSG)
+			result = IDPF_MSG_ERR;
+		return result;
+	}
+
+	rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
+
+	opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+	adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+	DRV_LOG(DEBUG, "CQ from CP carries opcode %u, retval %d",
+		opcode, adapter->cmd_retval);
+
+	if (opcode == VIRTCHNL2_OP_EVENT) {
+		struct virtchnl2_event *ve = ctlq_msg.ctx.indirect.payload->va;
+
+		result = IDPF_MSG_SYS;
+		switch (ve->event) {
+		case VIRTCHNL2_EVENT_LINK_CHANGE:
+			/* TBD */
+			break;
+		default:
+			DRV_LOG(ERR, "%s: Unknown event %d from CP",
+				__func__, ve->event);
+			break;
+		}
+	} else {
+		/* async reply to a command previously issued by the PF */
+		result = IDPF_MSG_CMD;
+		if (opcode != adapter->pend_cmd) {
+			DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+				adapter->pend_cmd, opcode);
+			result = IDPF_MSG_ERR;
+		}
+	}
+
+	if (ctlq_msg.data_len != 0)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret != 0 && dma_mem != NULL)
+		idpf_free_dma_mem(hw, dma_mem);
+
+	return result;
+}
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS  10
+
+int
+idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
+		  uint8_t *buf)
+{
+	int err = 0;
+	int i = 0;
+	int ret;
+
+	do {
+		ret = idpf_read_msg_from_cp(adapter, buf_len, buf);
+		if (ret == IDPF_MSG_CMD)
+			break;
+		rte_delay_ms(ASQ_DELAY_MS);
+	} while (i++ < MAX_TRY_TIMES);
+	if (i >= MAX_TRY_TIMES ||
+	    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+		err = -EBUSY;
+		DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+			adapter->cmd_retval, ops);
+	}
+
+	return err;
+}
+
+int
+idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
+{
+	int err = 0;
+	int i = 0;
+	int ret;
+
+	if (atomic_set_cmd(adapter, args->ops))
+		return -EINVAL;
+
+	ret = idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args);
+	if (ret != 0) {
+		DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+		clear_cmd(adapter);
+		return ret;
+	}
+
+	switch (args->ops) {
+	case VIRTCHNL_OP_VERSION:
+	case VIRTCHNL2_OP_GET_CAPS:
+	case VIRTCHNL2_OP_CREATE_VPORT:
+	case VIRTCHNL2_OP_DESTROY_VPORT:
+	case VIRTCHNL2_OP_SET_RSS_KEY:
+	case VIRTCHNL2_OP_SET_RSS_LUT:
+	case VIRTCHNL2_OP_SET_RSS_HASH:
+	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+	case VIRTCHNL2_OP_ENABLE_QUEUES:
+	case VIRTCHNL2_OP_DISABLE_QUEUES:
+	case VIRTCHNL2_OP_ENABLE_VPORT:
+	case VIRTCHNL2_OP_DISABLE_VPORT:
+	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
+	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
+	case VIRTCHNL2_OP_ALLOC_VECTORS:
+	case VIRTCHNL2_OP_DEALLOC_VECTORS:
+		/* for init virtchnl ops, need to poll the response */
+		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
+		clear_cmd(adapter);
+		break;
+	case VIRTCHNL2_OP_GET_PTYPE_INFO:
+		/* for multiple response messages,
+		 * do not handle the response here.
+		 */
+		break;
+	default:
+		/* For other virtchnl ops at runtime,
+		 * wait for the command-done flag.
+		 */
+		do {
+			if (adapter->pend_cmd == VIRTCHNL2_OP_UNKNOWN)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+			/* If no msg is read, or a sys event is read, keep polling */
+		} while (i++ < MAX_TRY_TIMES);
+		/* If no response is received, clear the command */
+		if (i >= MAX_TRY_TIMES ||
+		    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -EBUSY;
+			DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+				adapter->cmd_retval, args->ops);
+			clear_cmd(adapter);
+		}
+		break;
+	}
+
+	return err;
+}
+
+int
+idpf_vc_check_api_version(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_version_info version, *pver;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&version, 0, sizeof(struct virtchnl2_version_info));
+	version.major = VIRTCHNL2_VERSION_MAJOR_2;
+	version.minor = VIRTCHNL2_VERSION_MINOR_0;
+
+	args.ops = VIRTCHNL_OP_VERSION;
+	args.in_args = (uint8_t *)&version;
+	args.in_args_size = sizeof(version);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR,
+			"Failed to execute command of VIRTCHNL_OP_VERSION");
+		return err;
+	}
+
+	pver = (struct virtchnl2_version_info *)args.out_buffer;
+	adapter->virtchnl_version = *pver;
+
+	if (adapter->virtchnl_version.major != VIRTCHNL2_VERSION_MAJOR_2 ||
+	    adapter->virtchnl_version.minor != VIRTCHNL2_VERSION_MINOR_0) {
+		DRV_LOG(ERR, "VIRTCHNL API version mismatch:(%u.%u)-(%u.%u)",
+			adapter->virtchnl_version.major,
+			adapter->virtchnl_version.minor,
+			VIRTCHNL2_VERSION_MAJOR_2,
+			VIRTCHNL2_VERSION_MINOR_0);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int
+idpf_vc_get_caps(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_get_capabilities caps_msg;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
+
+	caps_msg.csum_caps =
+		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4          |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP     |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP     |
+		VIRTCHNL2_CAP_TX_CSUM_GENERIC          |
+		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4          |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP     |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP     |
+		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
+
+	caps_msg.rss_caps =
+		VIRTCHNL2_CAP_RSS_IPV4_TCP             |
+		VIRTCHNL2_CAP_RSS_IPV4_UDP             |
+		VIRTCHNL2_CAP_RSS_IPV4_SCTP            |
+		VIRTCHNL2_CAP_RSS_IPV4_OTHER           |
+		VIRTCHNL2_CAP_RSS_IPV6_TCP             |
+		VIRTCHNL2_CAP_RSS_IPV6_UDP             |
+		VIRTCHNL2_CAP_RSS_IPV6_SCTP            |
+		VIRTCHNL2_CAP_RSS_IPV6_OTHER           |
+		VIRTCHNL2_CAP_RSS_IPV4_AH              |
+		VIRTCHNL2_CAP_RSS_IPV4_ESP             |
+		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP          |
+		VIRTCHNL2_CAP_RSS_IPV6_AH              |
+		VIRTCHNL2_CAP_RSS_IPV6_ESP             |
+		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
+
+	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR;
+
+	args.ops = VIRTCHNL2_OP_GET_CAPS;
+	args.in_args = (uint8_t *)&caps_msg;
+	args.in_args_size = sizeof(caps_msg);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR,
+			"Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
+		return err;
+	}
+
+	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
+
+	return 0;
+}
+
+int
+idpf_vc_create_vport(struct idpf_vport *vport,
+		     struct virtchnl2_create_vport *vport_req_info)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_create_vport vport_msg;
+	struct idpf_cmd_info args;
+	int err = -1;
+
+	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
+	vport_msg.vport_type = vport_req_info->vport_type;
+	vport_msg.txq_model = vport_req_info->txq_model;
+	vport_msg.rxq_model = vport_req_info->rxq_model;
+	vport_msg.num_tx_q = vport_req_info->num_tx_q;
+	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
+	vport_msg.num_rx_q = vport_req_info->num_rx_q;
+	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
+	args.in_args = (uint8_t *)&vport_msg;
+	args.in_args_size = sizeof(vport_msg);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR,
+			"Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
+		return err;
+	}
+
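+	/* The response carries the vport config plus variable-length queue
+	 * chunk info, so keep a copy of the whole mailbox buffer.
+	 */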
+	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+	return 0;
+}
+
+int
+idpf_vc_destroy_vport(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport vc_vport;
+	struct idpf_cmd_info args;
+	int err;
+
+	vc_vport.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
+	args.in_args = (uint8_t *)&vc_vport;
+	args.in_args_size = sizeof(vc_vport);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
+
+	return err;
+}
+
+int
+idpf_vc_set_rss_key(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_key *rss_key;
+	struct idpf_cmd_info args;
+	int len, err;
+
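+	/* virtchnl2_rss_key ends in a one-element flexible key array,
+	 * hence the (rss_key_size - 1) below.
+	 */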
+	len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
+		(vport->rss_key_size - 1);
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (rss_key == NULL)
+		return -ENOMEM;
+
+	rss_key->vport_id = vport->vport_id;
+	rss_key->key_len = vport->rss_key_size;
+	rte_memcpy(rss_key->key, vport->rss_key,
+		   sizeof(rss_key->key[0]) * vport->rss_key_size);
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
+	args.in_args = (uint8_t *)rss_key;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+int
+idpf_vc_set_rss_lut(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_lut *rss_lut;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
+		(vport->rss_lut_size - 1);
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (rss_lut == NULL)
+		return -ENOMEM;
+
+	rss_lut->vport_id = vport->vport_id;
+	rss_lut->lut_entries = vport->rss_lut_size;
+	rte_memcpy(rss_lut->lut, vport->rss_lut,
+		   sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
+	args.in_args = (uint8_t *)rss_lut;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+idpf_vc_set_rss_hash(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_hash rss_hash;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_hash, 0, sizeof(rss_hash));
+	rss_hash.ptype_groups = vport->rss_hf;
+	rss_hash.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
+	args.in_args = (uint8_t *)&rss_hash;
+	args.in_args_size = sizeof(rss_hash);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
+
+	return err;
+}
+
+int
+idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_queue_vector_maps *map_info;
+	struct virtchnl2_queue_vector *vecmap;
+	struct idpf_cmd_info args;
+	int len, i, err = 0;
+
+	len = sizeof(struct virtchnl2_queue_vector_maps) +
+		(nb_rxq - 1) * sizeof(struct virtchnl2_queue_vector);
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (map_info == NULL)
+		return -ENOMEM;
+
+	map_info->vport_id = vport->vport_id;
+	map_info->num_qv_maps = nb_rxq;
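+	/* Only Rx queues are mapped to interrupt vectors: one map entry per Rx queue. */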
+	for (i = 0; i < nb_rxq; i++) {
+		vecmap = &map_info->qv_maps[i];
+		vecmap->queue_id = vport->qv_map[i].queue_id;
+		vecmap->vector_id = vport->qv_map[i].vector_id;
+		vecmap->itr_idx = VIRTCHNL2_ITR_IDX_0;
+		vecmap->queue_type = VIRTCHNL2_QUEUE_TYPE_RX;
+	}
+
+	args.ops = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR :
+		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR;
+	args.in_args = (uint8_t *)map_info;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUE_VECTOR",
+			map ? "MAP" : "UNMAP");
+
+	rte_free(map_info);
+	return err;
+}
+
+int
+idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_alloc_vectors *alloc_vec;
+	struct idpf_cmd_info args;
+	int err, len;
+
+	len = sizeof(struct virtchnl2_alloc_vectors) +
+		(num_vectors - 1) * sizeof(struct virtchnl2_vector_chunk);
+	alloc_vec = rte_zmalloc("alloc_vec", len, 0);
+	if (alloc_vec == NULL)
+		return -ENOMEM;
+
+	alloc_vec->num_vectors = num_vectors;
+
+	args.ops = VIRTCHNL2_OP_ALLOC_VECTORS;
+	args.in_args = (uint8_t *)alloc_vec;
+	args.in_args_size = sizeof(struct virtchnl2_alloc_vectors);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
+
+	if (vport->recv_vectors == NULL) {
+		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
+		if (vport->recv_vectors == NULL) {
+			rte_free(alloc_vec);
+			return -ENOMEM;
+		}
+	}
+
+	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
+	rte_free(alloc_vec);
+	return err;
+}
+
+int
+idpf_vc_dealloc_vectors(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_alloc_vectors *alloc_vec;
+	struct virtchnl2_vector_chunks *vcs;
+	struct idpf_cmd_info args;
+	int err, len;
+
+	alloc_vec = vport->recv_vectors;
+	vcs = &alloc_vec->vchunks;
+
+	len = sizeof(struct virtchnl2_vector_chunks) +
+		(vcs->num_vchunks - 1) * sizeof(struct virtchnl2_vector_chunk);
+
+	args.ops = VIRTCHNL2_OP_DEALLOC_VECTORS;
+	args.in_args = (uint8_t *)vcs;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_DEALLOC_VECTORS");
+
+	return err;
+}
+
+static int
+idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
+			  uint32_t type, bool on)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_del_ena_dis_queues *queue_select;
+	struct virtchnl2_queue_chunk *queue_chunk;
+	struct idpf_cmd_info args;
+	int err, len;
+
+	len = sizeof(struct virtchnl2_del_ena_dis_queues);
+	queue_select = rte_zmalloc("queue_select", len, 0);
+	if (queue_select == NULL)
+		return -ENOMEM;
+
+	queue_chunk = queue_select->chunks.chunks;
+	queue_select->chunks.num_chunks = 1;
+	queue_select->vport_id = vport->vport_id;
+
+	queue_chunk->type = type;
+	queue_chunk->start_queue_id = qid;
+	queue_chunk->num_queues = 1;
+
+	args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
+		VIRTCHNL2_OP_DISABLE_QUEUES;
+	args.in_args = (uint8_t *)queue_select;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+			on ? "ENABLE" : "DISABLE");
+
+	rte_free(queue_select);
+	return err;
+}
+
+int
+idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
+		  bool rx, bool on)
+{
+	uint32_t type;
+	int err, queue_id;
+
+	/* switch txq/rxq */
+	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
+
+	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+		queue_id = vport->chunks_info.rx_start_qid + qid;
+	else
+		queue_id = vport->chunks_info.tx_start_qid + qid;
+	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+	if (err != 0)
+		return err;
+
+	/* switch tx completion queue */
+	if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+		queue_id = vport->chunks_info.tx_compl_start_qid + qid;
+		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+		if (err != 0)
+			return err;
+	}
+
+	/* switch the two rx buffer queues of this queue group (split queue model) */
+	if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+		queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
+		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+		if (err != 0)
+			return err;
+		queue_id++;
+		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+		if (err != 0)
+			return err;
+	}
+
+	return err;
+}
+
+#define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
+int
+idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_del_ena_dis_queues *queue_select;
+	struct virtchnl2_queue_chunk *queue_chunk;
+	uint32_t type;
+	struct idpf_cmd_info args;
+	uint16_t num_chunks;
+	int err, len;
+
+	num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+		num_chunks++;
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+		num_chunks++;
+
+	len = sizeof(struct virtchnl2_del_ena_dis_queues) +
+		sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
+	queue_select = rte_zmalloc("queue_select", len, 0);
+	if (queue_select == NULL)
+		return -ENOMEM;
+
+	queue_chunk = queue_select->chunks.chunks;
+	queue_select->chunks.num_chunks = num_chunks;
+	queue_select->vport_id = vport->vport_id;
+
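+	/* The chunk array is indexed by queue type; this relies on the
+	 * VIRTCHNL2 queue type values being small consecutive integers
+	 * starting at 0 (TX, RX, TX_COMPLETION, RX_BUFFER).
+	 */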
+	type = VIRTCHNL2_QUEUE_TYPE_RX;
+	queue_chunk[type].type = type;
+	queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
+	queue_chunk[type].num_queues = vport->num_rx_q;
+
+	type = VIRTCHNL2_QUEUE_TYPE_TX;
+	queue_chunk[type].type = type;
+	queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
+	queue_chunk[type].num_queues = vport->num_tx_q;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+		queue_chunk[type].type = type;
+		queue_chunk[type].start_queue_id =
+			vport->chunks_info.rx_buf_start_qid;
+		queue_chunk[type].num_queues = vport->num_rx_bufq;
+	}
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+		queue_chunk[type].type = type;
+		queue_chunk[type].start_queue_id =
+			vport->chunks_info.tx_compl_start_qid;
+		queue_chunk[type].num_queues = vport->num_tx_complq;
+	}
+
+	args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
+		VIRTCHNL2_OP_DISABLE_QUEUES;
+	args.in_args = (uint8_t *)queue_select;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+			enable ? "ENABLE" : "DISABLE");
+
+	rte_free(queue_select);
+	return err;
+}
+
+int
+idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport vc_vport;
+	struct idpf_cmd_info args;
+	int err;
+
+	vc_vport.vport_id = vport->vport_id;
+	args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
+		VIRTCHNL2_OP_DISABLE_VPORT;
+	args.in_args = (uint8_t *)&vc_vport;
+	args.in_args_size = sizeof(vc_vport);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
+			enable ? "ENABLE" : "DISABLE");
+	}
+
+	return err;
+}
+
+int
+idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_get_ptype_info *ptype_info;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(struct virtchnl2_get_ptype_info);
+	ptype_info = rte_zmalloc("ptype_info", len, 0);
+	if (ptype_info == NULL)
+		return -ENOMEM;
+
+	ptype_info->start_ptype_id = 0;
+	ptype_info->num_ptypes = IDPF_MAX_PKT_TYPE;
+	args.ops = VIRTCHNL2_OP_GET_PTYPE_INFO;
+	args.in_args = (uint8_t *)ptype_info;
+	args.in_args_size = len;
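+	/* The multi-message response is read by the caller via idpf_read_one_msg(). */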
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_PTYPE_INFO");
+
+	rte_free(ptype_info);
+	return err;
+}
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
new file mode 100644
index 0000000000..bbc66d63c4
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_VIRTCHNL_H_
+#define _IDPF_COMMON_VIRTCHNL_H_
+
+#include <idpf_common_device.h>
+
+__rte_internal
+int idpf_vc_check_api_version(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vc_get_caps(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vc_create_vport(struct idpf_vport *vport,
+			 struct virtchnl2_create_vport *vport_info);
+__rte_internal
+int idpf_vc_destroy_vport(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_key(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_lut(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_hash(struct idpf_vport *vport);
+__rte_internal
+int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
+		      bool rx, bool on);
+__rte_internal
+int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
+__rte_internal
+int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
+__rte_internal
+int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+				 uint16_t nb_rxq, bool map);
+__rte_internal
+int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
+__rte_internal
+int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
+		      uint16_t buf_len, uint8_t *buf);
+__rte_internal
+int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
+			struct idpf_cmd_info *args);
+
+#endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index 77d997b4a7..d1578641ba 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -1,4 +1,9 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2022 Intel Corporation
 
+sources = files(
+    'idpf_common_device.c',
+    'idpf_common_virtchnl.c',
+)
+
 subdir('base')
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index bfb246c752..a2b8780780 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -1,12 +1,28 @@
 INTERNAL {
 	global:
 
+	idpf_ctlq_clean_sq;
 	idpf_ctlq_deinit;
 	idpf_ctlq_init;
-	idpf_ctlq_clean_sq;
+	idpf_ctlq_post_rx_buffs;
 	idpf_ctlq_recv;
 	idpf_ctlq_send;
-	idpf_ctlq_post_rx_buffs;
+	idpf_execute_vc_cmd;
+	idpf_read_one_msg;
+	idpf_switch_queue;
+	idpf_vc_alloc_vectors;
+	idpf_vc_check_api_version;
+	idpf_vc_config_irq_map_unmap;
+	idpf_vc_create_vport;
+	idpf_vc_dealloc_vectors;
+	idpf_vc_destroy_vport;
+	idpf_vc_ena_dis_queues;
+	idpf_vc_ena_dis_vport;
+	idpf_vc_get_caps;
+	idpf_vc_query_ptype_info;
+	idpf_vc_set_rss_hash;
+	idpf_vc_set_rss_key;
+	idpf_vc_set_rss_lut;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 72a5c9f39b..759fc981d7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -942,13 +942,6 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapt
 		goto err_api;
 	}
 
-	adapter->max_rxq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
-				sizeof(struct virtchnl2_config_rx_queues)) /
-				sizeof(struct virtchnl2_rxq_info);
-	adapter->max_txq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
-				sizeof(struct virtchnl2_config_tx_queues)) /
-				sizeof(struct virtchnl2_txq_info);
-
 	adapter->cur_vports = 0;
 	adapter->cur_vport_nb = 0;
 
@@ -1075,7 +1068,7 @@ static const struct rte_pci_id pci_id_idpf_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-struct idpf_adapter_ext *
+static struct idpf_adapter_ext *
 idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 {
 	struct idpf_adapter_ext *adapter;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 8c29019667..efc540fa32 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -16,6 +16,7 @@
 #include "idpf_logs.h"
 
 #include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
 
@@ -31,8 +32,6 @@
 #define IDPF_RX_BUFQ_PER_GRP	2
 
 #define IDPF_CTLQ_ID		-1
-#define IDPF_CTLQ_LEN		64
-#define IDPF_DFLT_MBX_BUF_SIZE	4096
 
 #define IDPF_DFLT_Q_VEC_NUM	1
 #define IDPF_DFLT_INTERVAL	16
@@ -44,8 +43,6 @@
 
 #define IDPF_NUM_MACADDR_MAX	64
 
-#define IDPF_MAX_PKT_TYPE	1024
-
 #define IDPF_VLAN_TAG_SIZE	4
 #define IDPF_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
@@ -66,14 +63,6 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
-/* Message type read in virtual channel from PF */
-enum idpf_vc_result {
-	IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
-	IDPF_MSG_NON,      /* Read nothing from admin queue */
-	IDPF_MSG_SYS,      /* Read system msg from admin queue */
-	IDPF_MSG_CMD,      /* Read async command result */
-};
-
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
@@ -103,10 +92,6 @@ struct idpf_adapter_ext {
 
 	uint16_t used_vecs_num;
 
-	/* Max config queue number per VC message */
-	uint32_t max_rxq_per_msg;
-	uint32_t max_txq_per_msg;
-
 	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
 
 	bool rx_vec_allowed;
@@ -125,74 +110,6 @@ TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
 #define IDPF_ADAPTER_TO_EXT(p)					\
 	container_of((p), struct idpf_adapter_ext, base)
 
-/* structure used for sending and checking response of virtchnl ops */
-struct idpf_cmd_info {
-	uint32_t ops;
-	uint8_t *in_args;       /* buffer for sending */
-	uint32_t in_args_size;  /* buffer size for sending */
-	uint8_t *out_buffer;    /* buffer for response */
-	uint32_t out_size;      /* buffer size for response */
-};
-
-/* notify current command done. Only call in case execute
- * _atomic_set_cmd successfully.
- */
-static inline void
-notify_cmd(struct idpf_adapter *adapter, int msg_ret)
-{
-	adapter->cmd_retval = msg_ret;
-	/* Return value may be checked in anither thread, need to ensure the coherence. */
-	rte_wmb();
-	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
-}
-
-/* clear current command. Only call in case execute
- * _atomic_set_cmd successfully.
- */
-static inline void
-clear_cmd(struct idpf_adapter *adapter)
-{
-	/* Return value may be checked in anither thread, need to ensure the coherence. */
-	rte_wmb();
-	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
-	adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
-}
-
-/* Check there is pending cmd in execution. If none, set new command. */
-static inline bool
-atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
-{
-	uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
-	bool ret = __atomic_compare_exchange(&adapter->pend_cmd, &op_unk, &ops,
-					    0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
-
-	if (!ret)
-		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
-
-	return !ret;
-}
-
-struct idpf_adapter_ext *idpf_find_adapter_ext(struct rte_pci_device *pci_dev);
-void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
-int idpf_vc_check_api_version(struct idpf_adapter *adapter);
 int idpf_get_pkt_type(struct idpf_adapter_ext *adapter);
-int idpf_vc_get_caps(struct idpf_adapter *adapter);
-int idpf_vc_create_vport(struct idpf_vport *vport,
-			 struct virtchnl2_create_vport *vport_info);
-int idpf_vc_destroy_vport(struct idpf_vport *vport);
-int idpf_vc_set_rss_key(struct idpf_vport *vport);
-int idpf_vc_set_rss_lut(struct idpf_vport *vport);
-int idpf_vc_set_rss_hash(struct idpf_vport *vport);
-int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
-		      bool rx, bool on);
-int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
-int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
-				 uint16_t nb_rxq, bool map);
-int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
-int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
-int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
-int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
-		      uint16_t buf_len, uint8_t *buf);
 
 #endif /* _IDPF_ETHDEV_H_ */
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 633d3295d3..576b797973 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -23,293 +23,6 @@
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
 
-static int
-idpf_vc_clean(struct idpf_adapter *adapter)
-{
-	struct idpf_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
-	uint16_t num_q_msg = IDPF_CTLQ_LEN;
-	struct idpf_dma_mem *dma_mem;
-	int err;
-	uint32_t i;
-
-	for (i = 0; i < 10; i++) {
-		err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
-		msleep(20);
-		if (num_q_msg > 0)
-			break;
-	}
-	if (err != 0)
-		return err;
-
-	/* Empty queue is not an error */
-	for (i = 0; i < num_q_msg; i++) {
-		dma_mem = q_msg[i]->ctx.indirect.payload;
-		if (dma_mem != NULL) {
-			idpf_free_dma_mem(&adapter->hw, dma_mem);
-			rte_free(dma_mem);
-		}
-		rte_free(q_msg[i]);
-	}
-
-	return 0;
-}
-
-static int
-idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
-		 uint16_t msg_size, uint8_t *msg)
-{
-	struct idpf_ctlq_msg *ctlq_msg;
-	struct idpf_dma_mem *dma_mem;
-	int err;
-
-	err = idpf_vc_clean(adapter);
-	if (err != 0)
-		goto err;
-
-	ctlq_msg = rte_zmalloc(NULL, sizeof(struct idpf_ctlq_msg), 0);
-	if (ctlq_msg == NULL) {
-		err = -ENOMEM;
-		goto err;
-	}
-
-	dma_mem = rte_zmalloc(NULL, sizeof(struct idpf_dma_mem), 0);
-	if (dma_mem == NULL) {
-		err = -ENOMEM;
-		goto dma_mem_error;
-	}
-
-	dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
-	idpf_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
-	if (dma_mem->va == NULL) {
-		err = -ENOMEM;
-		goto dma_alloc_error;
-	}
-
-	memcpy(dma_mem->va, msg, msg_size);
-
-	ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_pf;
-	ctlq_msg->func_id = 0;
-	ctlq_msg->data_len = msg_size;
-	ctlq_msg->cookie.mbx.chnl_opcode = op;
-	ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
-	ctlq_msg->ctx.indirect.payload = dma_mem;
-
-	err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
-	if (err != 0)
-		goto send_error;
-
-	return 0;
-
-send_error:
-	idpf_free_dma_mem(&adapter->hw, dma_mem);
-dma_alloc_error:
-	rte_free(dma_mem);
-dma_mem_error:
-	rte_free(ctlq_msg);
-err:
-	return err;
-}
-
-static enum idpf_vc_result
-idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
-		      uint8_t *buf)
-{
-	struct idpf_hw *hw = &adapter->hw;
-	struct idpf_ctlq_msg ctlq_msg;
-	struct idpf_dma_mem *dma_mem = NULL;
-	enum idpf_vc_result result = IDPF_MSG_NON;
-	uint32_t opcode;
-	uint16_t pending = 1;
-	int ret;
-
-	ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
-	if (ret != 0) {
-		PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
-		if (ret != -ENOMSG)
-			result = IDPF_MSG_ERR;
-		return result;
-	}
-
-	rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
-
-	opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
-	adapter->cmd_retval =
-		(enum virtchnl_status_code)rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
-
-	PMD_DRV_LOG(DEBUG, "CQ from CP carries opcode %u, retval %d",
-		    opcode, adapter->cmd_retval);
-
-	if (opcode == VIRTCHNL2_OP_EVENT) {
-		struct virtchnl2_event *ve =
-			(struct virtchnl2_event *)ctlq_msg.ctx.indirect.payload->va;
-
-		result = IDPF_MSG_SYS;
-		switch (ve->event) {
-		case VIRTCHNL2_EVENT_LINK_CHANGE:
-			/* TBD */
-			break;
-		default:
-			PMD_DRV_LOG(ERR, "%s: Unknown event %d from CP",
-				    __func__, ve->event);
-			break;
-		}
-	} else {
-		/* async reply msg on command issued by pf previously */
-		result = IDPF_MSG_CMD;
-		if (opcode != adapter->pend_cmd) {
-			PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
-				    adapter->pend_cmd, opcode);
-			result = IDPF_MSG_ERR;
-		}
-	}
-
-	if (ctlq_msg.data_len != 0)
-		dma_mem = ctlq_msg.ctx.indirect.payload;
-	else
-		pending = 0;
-
-	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
-	if (ret != 0 && dma_mem != NULL)
-		idpf_free_dma_mem(hw, dma_mem);
-
-	return result;
-}
-
-#define MAX_TRY_TIMES 200
-#define ASQ_DELAY_MS  10
-
-int
-idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
-		  uint8_t *buf)
-{
-	int err = 0;
-	int i = 0;
-	int ret;
-
-	do {
-		ret = idpf_read_msg_from_cp(adapter, buf_len, buf);
-		if (ret == IDPF_MSG_CMD)
-			break;
-		rte_delay_ms(ASQ_DELAY_MS);
-	} while (i++ < MAX_TRY_TIMES);
-	if (i >= MAX_TRY_TIMES ||
-	    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
-		err = -EBUSY;
-		PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
-			    adapter->cmd_retval, ops);
-	}
-
-	return err;
-}
-
-static int
-idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
-{
-	int err = 0;
-	int i = 0;
-	int ret;
-
-	if (atomic_set_cmd(adapter, args->ops))
-		return -EINVAL;
-
-	ret = idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args);
-	if (ret != 0) {
-		PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
-		clear_cmd(adapter);
-		return ret;
-	}
-
-	switch (args->ops) {
-	case VIRTCHNL_OP_VERSION:
-	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-		/* for init virtchnl ops, need to poll the response */
-		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
-		clear_cmd(adapter);
-		break;
-	case VIRTCHNL2_OP_GET_PTYPE_INFO:
-		/* for multuple response message,
-		 * do not handle the response here.
-		 */
-		break;
-	default:
-		/* For other virtchnl ops in running time,
-		 * wait for the cmd done flag.
-		 */
-		do {
-			if (adapter->pend_cmd == VIRTCHNL_OP_UNKNOWN)
-				break;
-			rte_delay_ms(ASQ_DELAY_MS);
-			/* If don't read msg or read sys event, continue */
-		} while (i++ < MAX_TRY_TIMES);
-		/* If there's no response is received, clear command */
-		if (i >= MAX_TRY_TIMES  ||
-		    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
-			err = -EBUSY;
-			PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
-				    adapter->cmd_retval, args->ops);
-			clear_cmd(adapter);
-		}
-		break;
-	}
-
-	return err;
-}
-
-int
-idpf_vc_check_api_version(struct idpf_adapter *adapter)
-{
-	struct virtchnl2_version_info version, *pver;
-	struct idpf_cmd_info args;
-	int err;
-
-	memset(&version, 0, sizeof(struct virtchnl_version_info));
-	version.major = VIRTCHNL2_VERSION_MAJOR_2;
-	version.minor = VIRTCHNL2_VERSION_MINOR_0;
-
-	args.ops = VIRTCHNL_OP_VERSION;
-	args.in_args = (uint8_t *)&version;
-	args.in_args_size = sizeof(version);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of VIRTCHNL_OP_VERSION");
-		return err;
-	}
-
-	pver = (struct virtchnl2_version_info *)args.out_buffer;
-	adapter->virtchnl_version = *pver;
-
-	if (adapter->virtchnl_version.major != VIRTCHNL2_VERSION_MAJOR_2 ||
-	    adapter->virtchnl_version.minor != VIRTCHNL2_VERSION_MINOR_0) {
-		PMD_INIT_LOG(ERR, "VIRTCHNL API version mismatch:(%u.%u)-(%u.%u)",
-			     adapter->virtchnl_version.major,
-			     adapter->virtchnl_version.minor,
-			     VIRTCHNL2_VERSION_MAJOR_2,
-			     VIRTCHNL2_VERSION_MINOR_0);
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
 int __rte_cold
 idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 {
@@ -333,7 +46,7 @@ idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 
 	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
 		ret = idpf_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
-					IDPF_DFLT_MBX_BUF_SIZE, (u8 *)ptype_info);
+					IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR, "Fail to get packet type information");
 			goto free_ptype_info;
@@ -349,7 +62,7 @@ idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 			uint32_t proto_hdr = 0;
 
 			ptype = (struct virtchnl2_ptype *)
-					((u8 *)ptype_info + ptype_offset);
+					((uint8_t *)ptype_info + ptype_offset);
 			ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
 			if (ptype_offset > IDPF_DFLT_MBX_BUF_SIZE) {
 				ret = -EINVAL;
@@ -523,223 +236,6 @@ idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 	return ret;
 }
 
-int
-idpf_vc_get_caps(struct idpf_adapter *adapter)
-{
-	struct virtchnl2_get_capabilities caps_msg;
-	struct idpf_cmd_info args;
-	int err;
-
-	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
-
-	caps_msg.csum_caps =
-		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4          |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP     |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP     |
-		VIRTCHNL2_CAP_TX_CSUM_GENERIC          |
-		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4          |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP     |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP     |
-		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
-
-	caps_msg.rss_caps =
-		VIRTCHNL2_CAP_RSS_IPV4_TCP             |
-		VIRTCHNL2_CAP_RSS_IPV4_UDP             |
-		VIRTCHNL2_CAP_RSS_IPV4_SCTP            |
-		VIRTCHNL2_CAP_RSS_IPV4_OTHER           |
-		VIRTCHNL2_CAP_RSS_IPV6_TCP             |
-		VIRTCHNL2_CAP_RSS_IPV6_UDP             |
-		VIRTCHNL2_CAP_RSS_IPV6_SCTP            |
-		VIRTCHNL2_CAP_RSS_IPV6_OTHER           |
-		VIRTCHNL2_CAP_RSS_IPV4_AH              |
-		VIRTCHNL2_CAP_RSS_IPV4_ESP             |
-		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP          |
-		VIRTCHNL2_CAP_RSS_IPV6_AH              |
-		VIRTCHNL2_CAP_RSS_IPV6_ESP             |
-		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
-
-	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR;
-
-	args.ops = VIRTCHNL2_OP_GET_CAPS;
-	args.in_args = (uint8_t *)&caps_msg;
-	args.in_args_size = sizeof(caps_msg);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
-		return err;
-	}
-
-	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
-
-	return 0;
-}
-
-int
-idpf_vc_create_vport(struct idpf_vport *vport,
-		     struct virtchnl2_create_vport *vport_req_info)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_create_vport vport_msg;
-	struct idpf_cmd_info args;
-	int err = -1;
-
-	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
-	vport_msg.vport_type = vport_req_info->vport_type;
-	vport_msg.txq_model = vport_req_info->txq_model;
-	vport_msg.rxq_model = vport_req_info->rxq_model;
-	vport_msg.num_tx_q = vport_req_info->num_tx_q;
-	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
-	vport_msg.num_rx_q = vport_req_info->num_rx_q;
-	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
-	args.in_args = (uint8_t *)&vport_msg;
-	args.in_args_size = sizeof(vport_msg);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
-		return err;
-	}
-
-	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
-	return 0;
-}
-
-int
-idpf_vc_destroy_vport(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_vport vc_vport;
-	struct idpf_cmd_info args;
-	int err;
-
-	vc_vport.vport_id = vport->vport_id;
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
-	args.in_args = (uint8_t *)&vc_vport;
-	args.in_args_size = sizeof(vc_vport);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
-
-	return err;
-}
-
-int
-idpf_vc_set_rss_key(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_rss_key *rss_key;
-	struct idpf_cmd_info args;
-	int len, err;
-
-	len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
-		(vport->rss_key_size - 1);
-	rss_key = rte_zmalloc("rss_key", len, 0);
-	if (rss_key == NULL)
-		return -ENOMEM;
-
-	rss_key->vport_id = vport->vport_id;
-	rss_key->key_len = vport->rss_key_size;
-	rte_memcpy(rss_key->key, vport->rss_key,
-		   sizeof(rss_key->key[0]) * vport->rss_key_size);
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
-	args.in_args = (uint8_t *)rss_key;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
-
-	rte_free(rss_key);
-	return err;
-}
-
-int
-idpf_vc_set_rss_lut(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_rss_lut *rss_lut;
-	struct idpf_cmd_info args;
-	int len, err;
-
-	len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
-		(vport->rss_lut_size - 1);
-	rss_lut = rte_zmalloc("rss_lut", len, 0);
-	if (rss_lut == NULL)
-		return -ENOMEM;
-
-	rss_lut->vport_id = vport->vport_id;
-	rss_lut->lut_entries = vport->rss_lut_size;
-	rte_memcpy(rss_lut->lut, vport->rss_lut,
-		   sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
-	args.in_args = (uint8_t *)rss_lut;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
-
-	rte_free(rss_lut);
-	return err;
-}
-
-int
-idpf_vc_set_rss_hash(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_rss_hash rss_hash;
-	struct idpf_cmd_info args;
-	int err;
-
-	memset(&rss_hash, 0, sizeof(rss_hash));
-	rss_hash.ptype_groups = vport->rss_hf;
-	rss_hash.vport_id = vport->vport_id;
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
-	args.in_args = (uint8_t *)&rss_hash;
-	args.in_args_size = sizeof(rss_hash);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
-
-	return err;
-}
-
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
@@ -899,310 +395,3 @@ idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 
 	return err;
 }
-
-int
-idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_queue_vector_maps *map_info;
-	struct virtchnl2_queue_vector *vecmap;
-	struct idpf_cmd_info args;
-	int len, i, err = 0;
-
-	len = sizeof(struct virtchnl2_queue_vector_maps) +
-		(nb_rxq - 1) * sizeof(struct virtchnl2_queue_vector);
-
-	map_info = rte_zmalloc("map_info", len, 0);
-	if (map_info == NULL)
-		return -ENOMEM;
-
-	map_info->vport_id = vport->vport_id;
-	map_info->num_qv_maps = nb_rxq;
-	for (i = 0; i < nb_rxq; i++) {
-		vecmap = &map_info->qv_maps[i];
-		vecmap->queue_id = vport->qv_map[i].queue_id;
-		vecmap->vector_id = vport->qv_map[i].vector_id;
-		vecmap->itr_idx = VIRTCHNL2_ITR_IDX_0;
-		vecmap->queue_type = VIRTCHNL2_QUEUE_TYPE_RX;
-	}
-
-	args.ops = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR :
-		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR;
-	args.in_args = (u8 *)map_info;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUE_VECTOR",
-			    map ? "MAP" : "UNMAP");
-
-	rte_free(map_info);
-	return err;
-}
-
-int
-idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_alloc_vectors *alloc_vec;
-	struct idpf_cmd_info args;
-	int err, len;
-
-	len = sizeof(struct virtchnl2_alloc_vectors) +
-		(num_vectors - 1) * sizeof(struct virtchnl2_vector_chunk);
-	alloc_vec = rte_zmalloc("alloc_vec", len, 0);
-	if (alloc_vec == NULL)
-		return -ENOMEM;
-
-	alloc_vec->num_vectors = num_vectors;
-
-	args.ops = VIRTCHNL2_OP_ALLOC_VECTORS;
-	args.in_args = (u8 *)alloc_vec;
-	args.in_args_size = sizeof(struct virtchnl2_alloc_vectors);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
-
-	if (vport->recv_vectors == NULL) {
-		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
-		if (vport->recv_vectors == NULL) {
-			rte_free(alloc_vec);
-			return -ENOMEM;
-		}
-	}
-
-	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
-	rte_free(alloc_vec);
-	return err;
-}
-
-int
-idpf_vc_dealloc_vectors(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_alloc_vectors *alloc_vec;
-	struct virtchnl2_vector_chunks *vcs;
-	struct idpf_cmd_info args;
-	int err, len;
-
-	alloc_vec = vport->recv_vectors;
-	vcs = &alloc_vec->vchunks;
-
-	len = sizeof(struct virtchnl2_vector_chunks) +
-		(vcs->num_vchunks - 1) * sizeof(struct virtchnl2_vector_chunk);
-
-	args.ops = VIRTCHNL2_OP_DEALLOC_VECTORS;
-	args.in_args = (u8 *)vcs;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_DEALLOC_VECTORS");
-
-	return err;
-}
-
-static int
-idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
-			  uint32_t type, bool on)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_del_ena_dis_queues *queue_select;
-	struct virtchnl2_queue_chunk *queue_chunk;
-	struct idpf_cmd_info args;
-	int err, len;
-
-	len = sizeof(struct virtchnl2_del_ena_dis_queues);
-	queue_select = rte_zmalloc("queue_select", len, 0);
-	if (queue_select == NULL)
-		return -ENOMEM;
-
-	queue_chunk = queue_select->chunks.chunks;
-	queue_select->chunks.num_chunks = 1;
-	queue_select->vport_id = vport->vport_id;
-
-	queue_chunk->type = type;
-	queue_chunk->start_queue_id = qid;
-	queue_chunk->num_queues = 1;
-
-	args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
-		VIRTCHNL2_OP_DISABLE_QUEUES;
-	args.in_args = (u8 *)queue_select;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
-			    on ? "ENABLE" : "DISABLE");
-
-	rte_free(queue_select);
-	return err;
-}
-
-int
-idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
-		     bool rx, bool on)
-{
-	uint32_t type;
-	int err, queue_id;
-
-	/* switch txq/rxq */
-	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
-
-	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
-		queue_id = vport->chunks_info.rx_start_qid + qid;
-	else
-		queue_id = vport->chunks_info.tx_start_qid + qid;
-	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-	if (err != 0)
-		return err;
-
-	/* switch tx completion queue */
-	if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		queue_id = vport->chunks_info.tx_compl_start_qid + qid;
-		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-		if (err != 0)
-			return err;
-	}
-
-	/* switch rx buffer queue */
-	if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-		queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
-		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-		if (err != 0)
-			return err;
-		queue_id++;
-		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-		if (err != 0)
-			return err;
-	}
-
-	return err;
-}
-
-#define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
-int
-idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_del_ena_dis_queues *queue_select;
-	struct virtchnl2_queue_chunk *queue_chunk;
-	uint32_t type;
-	struct idpf_cmd_info args;
-	uint16_t num_chunks;
-	int err, len;
-
-	num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
-		num_chunks++;
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
-		num_chunks++;
-
-	len = sizeof(struct virtchnl2_del_ena_dis_queues) +
-		sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
-	queue_select = rte_zmalloc("queue_select", len, 0);
-	if (queue_select == NULL)
-		return -ENOMEM;
-
-	queue_chunk = queue_select->chunks.chunks;
-	queue_select->chunks.num_chunks = num_chunks;
-	queue_select->vport_id = vport->vport_id;
-
-	type = VIRTCHNL_QUEUE_TYPE_RX;
-	queue_chunk[type].type = type;
-	queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
-	queue_chunk[type].num_queues = vport->num_rx_q;
-
-	type = VIRTCHNL2_QUEUE_TYPE_TX;
-	queue_chunk[type].type = type;
-	queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
-	queue_chunk[type].num_queues = vport->num_tx_q;
-
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-		queue_chunk[type].type = type;
-		queue_chunk[type].start_queue_id =
-			vport->chunks_info.rx_buf_start_qid;
-		queue_chunk[type].num_queues = vport->num_rx_bufq;
-	}
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		queue_chunk[type].type = type;
-		queue_chunk[type].start_queue_id =
-			vport->chunks_info.tx_compl_start_qid;
-		queue_chunk[type].num_queues = vport->num_tx_complq;
-	}
-
-	args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
-		VIRTCHNL2_OP_DISABLE_QUEUES;
-	args.in_args = (u8 *)queue_select;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
-			    enable ? "ENABLE" : "DISABLE");
-
-	rte_free(queue_select);
-	return err;
-}
-
-int
-idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_vport vc_vport;
-	struct idpf_cmd_info args;
-	int err;
-
-	vc_vport.vport_id = vport->vport_id;
-	args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
-			    VIRTCHNL2_OP_DISABLE_VPORT;
-	args.in_args = (uint8_t *)&vc_vport;
-	args.in_args_size = sizeof(vc_vport);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
-			    enable ? "ENABLE" : "DISABLE");
-	}
-
-	return err;
-}
-
-int
-idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
-{
-	struct virtchnl2_get_ptype_info *ptype_info;
-	struct idpf_cmd_info args;
-	int len, err;
-
-	len = sizeof(struct virtchnl2_get_ptype_info);
-	ptype_info = rte_zmalloc("ptype_info", len, 0);
-	if (ptype_info == NULL)
-		return -ENOMEM;
-
-	ptype_info->start_ptype_id = 0;
-	ptype_info->num_ptypes = IDPF_MAX_PKT_TYPE;
-	args.ops = VIRTCHNL2_OP_GET_PTYPE_INFO;
-	args.in_args = (u8 *)ptype_info;
-	args.in_args_size = len;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_PTYPE_INFO");
-
-	rte_free(ptype_info);
-	return err;
-}
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v4 04/15] common/idpf: introduce adapter init and deinit
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common modle beilei.xing
                     ` (2 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 03/15] common/idpf: add virtual channel functions beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-17  8:06   ` [PATCH v4 05/15] common/idpf: add vport init/deinit beilei.xing
                     ` (11 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Introduce idpf_adapter_init and idpf_adapter_deinit functions
in the common module, along with the corresponding
idpf_adapter_ext_init and idpf_adapter_ext_deinit functions
in the idpf PMD.
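
For reference, a consumer PMD is expected to pair these calls around
its own setup and teardown. A minimal sketch (the example_* function
names are illustrative, not part of this patch):

  #include <idpf_common_device.h>

  static int
  example_adapter_setup(struct idpf_adapter *base)
  {
      int ret;

      /* Resets the PF, initializes the mailbox, checks the
       * virtchnl API version and retrieves capabilities.
       */
      ret = idpf_adapter_init(base);
      if (ret != 0)
          return ret;

      /* PMD-specific (ext-level) initialization goes here. */

      return 0;
  }

  static void
  example_adapter_teardown(struct idpf_adapter *base)
  {
      /* Deinitializes the control queues and frees mbx_resp. */
      idpf_adapter_deinit(base);
  }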

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c | 153 ++++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h |   6 +
 drivers/common/idpf/version.map          |   2 +
 drivers/net/idpf/idpf_ethdev.c           | 158 +++--------------------
 drivers/net/idpf/idpf_ethdev.h           |   2 -
 5 files changed, 178 insertions(+), 143 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 5062780362..b2b42443e4 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -4,5 +4,158 @@
 
 #include <rte_log.h>
 #include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+
+static void
+idpf_reset_pf(struct idpf_hw *hw)
+{
+	uint32_t reg;
+
+	reg = IDPF_READ_REG(hw, PFGEN_CTRL);
+	IDPF_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR));
+}
+
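+/* Poll once per second, up to IDPF_RESET_WAIT_CNT times, for PF reset to complete. */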
+#define IDPF_RESET_WAIT_CNT 100
+static int
+idpf_check_pf_reset_done(struct idpf_hw *hw)
+{
+	uint32_t reg;
+	int i;
+
+	for (i = 0; i < IDPF_RESET_WAIT_CNT; i++) {
+		reg = IDPF_READ_REG(hw, PFGEN_RSTAT);
+		if (reg != 0xFFFFFFFF && (reg & PFGEN_RSTAT_PFR_STATE_M))
+			return 0;
+		rte_delay_ms(1000);
+	}
+
+	DRV_LOG(ERR, "IDPF reset timeout");
+	return -EBUSY;
+}
+
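+/* Two control queues: the mailbox Tx (send) queue and the mailbox Rx (receive) queue. */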
+#define CTLQ_NUM 2
+static int
+idpf_init_mbx(struct idpf_hw *hw)
+{
+	struct idpf_ctlq_create_info ctlq_info[CTLQ_NUM] = {
+		{
+			.type = IDPF_CTLQ_TYPE_MAILBOX_TX,
+			.id = IDPF_CTLQ_ID,
+			.len = IDPF_CTLQ_LEN,
+			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+			.reg = {
+				.head = PF_FW_ATQH,
+				.tail = PF_FW_ATQT,
+				.len = PF_FW_ATQLEN,
+				.bah = PF_FW_ATQBAH,
+				.bal = PF_FW_ATQBAL,
+				.len_mask = PF_FW_ATQLEN_ATQLEN_M,
+				.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
+				.head_mask = PF_FW_ATQH_ATQH_M,
+			}
+		},
+		{
+			.type = IDPF_CTLQ_TYPE_MAILBOX_RX,
+			.id = IDPF_CTLQ_ID,
+			.len = IDPF_CTLQ_LEN,
+			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+			.reg = {
+				.head = PF_FW_ARQH,
+				.tail = PF_FW_ARQT,
+				.len = PF_FW_ARQLEN,
+				.bah = PF_FW_ARQBAH,
+				.bal = PF_FW_ARQBAL,
+				.len_mask = PF_FW_ARQLEN_ARQLEN_M,
+				.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
+				.head_mask = PF_FW_ARQH_ARQH_M,
+			}
+		}
+	};
+	struct idpf_ctlq_info *ctlq;
+	int ret;
+
+	ret = idpf_ctlq_init(hw, CTLQ_NUM, ctlq_info);
+	if (ret != 0)
+		return ret;
+
+	LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
+				 struct idpf_ctlq_info, cq_list) {
+		if (ctlq->q_id == IDPF_CTLQ_ID &&
+		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
+			hw->asq = ctlq;
+		if (ctlq->q_id == IDPF_CTLQ_ID &&
+		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
+			hw->arq = ctlq;
+	}
+
+	if (hw->asq == NULL || hw->arq == NULL) {
+		idpf_ctlq_deinit(hw);
+		ret = -ENOENT;
+	}
+
+	return ret;
+}
+
+int
+idpf_adapter_init(struct idpf_adapter *adapter)
+{
+	struct idpf_hw *hw = &adapter->hw;
+	int ret;
+
+	idpf_reset_pf(hw);
+	ret = idpf_check_pf_reset_done(hw);
+	if (ret != 0) {
+		DRV_LOG(ERR, "IDPF is still resetting");
+		goto err_check_reset;
+	}
+
+	ret = idpf_init_mbx(hw);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to init mailbox");
+		goto err_check_reset;
+	}
+
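+	/* Buffer used to stage synchronous mailbox (virtchnl) responses. */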
+	adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
+					IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (adapter->mbx_resp == NULL) {
+		DRV_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
+		ret = -ENOMEM;
+		goto err_mbx_resp;
+	}
+
+	ret = idpf_vc_check_api_version(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to check api version");
+		goto err_check_api;
+	}
+
+	ret = idpf_vc_get_caps(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to get capabilities");
+		goto err_check_api;
+	}
+
+	return 0;
+
+err_check_api:
+	rte_free(adapter->mbx_resp);
+	adapter->mbx_resp = NULL;
+err_mbx_resp:
+	idpf_ctlq_deinit(hw);
+err_check_reset:
+	return ret;
+}
+
+int
+idpf_adapter_deinit(struct idpf_adapter *adapter)
+{
+	struct idpf_hw *hw = &adapter->hw;
+
+	idpf_ctlq_deinit(hw);
+	rte_free(adapter->mbx_resp);
+	adapter->mbx_resp = NULL;
+
+	return 0;
+}
 
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index a7537281d1..e4344ea392 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -9,6 +9,7 @@
 #include <base/virtchnl2.h>
 #include <idpf_common_logs.h>
 
+#define IDPF_CTLQ_ID		-1
 #define IDPF_CTLQ_LEN		64
 #define IDPF_DFLT_MBX_BUF_SIZE	4096
 
@@ -137,4 +138,9 @@ atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
 	return !ret;
 }
 
+__rte_internal
+int idpf_adapter_init(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_adapter_deinit(struct idpf_adapter *adapter);
+
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index a2b8780780..7259dcf8a4 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -1,6 +1,8 @@
 INTERNAL {
 	global:
 
+	idpf_adapter_deinit;
+	idpf_adapter_init;
 	idpf_ctlq_clean_sq;
 	idpf_ctlq_deinit;
 	idpf_ctlq_init;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 759fc981d7..c17c7bb472 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -786,148 +786,32 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 	return ret;
 }
 
-static void
-idpf_reset_pf(struct idpf_hw *hw)
-{
-	uint32_t reg;
-
-	reg = IDPF_READ_REG(hw, PFGEN_CTRL);
-	IDPF_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR));
-}
-
-#define IDPF_RESET_WAIT_CNT 100
 static int
-idpf_check_pf_reset_done(struct idpf_hw *hw)
+idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
-	uint32_t reg;
-	int i;
-
-	for (i = 0; i < IDPF_RESET_WAIT_CNT; i++) {
-		reg = IDPF_READ_REG(hw, PFGEN_RSTAT);
-		if (reg != 0xFFFFFFFF && (reg & PFGEN_RSTAT_PFR_STATE_M))
-			return 0;
-		rte_delay_ms(1000);
-	}
-
-	PMD_INIT_LOG(ERR, "IDPF reset timeout");
-	return -EBUSY;
-}
-
-#define CTLQ_NUM 2
-static int
-idpf_init_mbx(struct idpf_hw *hw)
-{
-	struct idpf_ctlq_create_info ctlq_info[CTLQ_NUM] = {
-		{
-			.type = IDPF_CTLQ_TYPE_MAILBOX_TX,
-			.id = IDPF_CTLQ_ID,
-			.len = IDPF_CTLQ_LEN,
-			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
-			.reg = {
-				.head = PF_FW_ATQH,
-				.tail = PF_FW_ATQT,
-				.len = PF_FW_ATQLEN,
-				.bah = PF_FW_ATQBAH,
-				.bal = PF_FW_ATQBAL,
-				.len_mask = PF_FW_ATQLEN_ATQLEN_M,
-				.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
-				.head_mask = PF_FW_ATQH_ATQH_M,
-			}
-		},
-		{
-			.type = IDPF_CTLQ_TYPE_MAILBOX_RX,
-			.id = IDPF_CTLQ_ID,
-			.len = IDPF_CTLQ_LEN,
-			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
-			.reg = {
-				.head = PF_FW_ARQH,
-				.tail = PF_FW_ARQT,
-				.len = PF_FW_ARQLEN,
-				.bah = PF_FW_ARQBAH,
-				.bal = PF_FW_ARQBAL,
-				.len_mask = PF_FW_ARQLEN_ARQLEN_M,
-				.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
-				.head_mask = PF_FW_ARQH_ARQH_M,
-			}
-		}
-	};
-	struct idpf_ctlq_info *ctlq;
-	int ret;
-
-	ret = idpf_ctlq_init(hw, CTLQ_NUM, ctlq_info);
-	if (ret != 0)
-		return ret;
-
-	LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
-				 struct idpf_ctlq_info, cq_list) {
-		if (ctlq->q_id == IDPF_CTLQ_ID &&
-		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
-			hw->asq = ctlq;
-		if (ctlq->q_id == IDPF_CTLQ_ID &&
-		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
-			hw->arq = ctlq;
-	}
-
-	if (hw->asq == NULL || hw->arq == NULL) {
-		idpf_ctlq_deinit(hw);
-		ret = -ENOENT;
-	}
-
-	return ret;
-}
-
-static int
-idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
-{
-	struct idpf_hw *hw = &adapter->base.hw;
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
 	int ret = 0;
 
 	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
 	hw->hw_addr_len = pci_dev->mem_resource[0].len;
-	hw->back = &adapter->base;
+	hw->back = base;
 	hw->vendor_id = pci_dev->id.vendor_id;
 	hw->device_id = pci_dev->id.device_id;
 	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
 
 	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
 
-	idpf_reset_pf(hw);
-	ret = idpf_check_pf_reset_done(hw);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "IDPF is still resetting");
-		goto err;
-	}
-
-	ret = idpf_init_mbx(hw);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to init mailbox");
-		goto err;
-	}
-
-	adapter->base.mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
-					     IDPF_DFLT_MBX_BUF_SIZE, 0);
-	if (adapter->base.mbx_resp == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
-		ret = -ENOMEM;
-		goto err_mbx;
-	}
-
-	ret = idpf_vc_check_api_version(&adapter->base);
+	ret = idpf_adapter_init(base);
 	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to check api version");
-		goto err_api;
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
 	}
 
 	ret = idpf_get_pkt_type(adapter);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to set ptype table");
-		goto err_api;
-	}
-
-	ret = idpf_vc_get_caps(&adapter->base);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to get capabilities");
-		goto err_api;
+		goto err_get_ptype;
 	}
 
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
@@ -939,7 +823,7 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapt
 	if (adapter->vports == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
 		ret = -ENOMEM;
-		goto err_api;
+		goto err_get_ptype;
 	}
 
 	adapter->cur_vports = 0;
@@ -949,12 +833,9 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapt
 
 	return ret;
 
-err_api:
-	rte_free(adapter->base.mbx_resp);
-	adapter->base.mbx_resp = NULL;
-err_mbx:
-	idpf_ctlq_deinit(hw);
-err:
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
 	return ret;
 }
 
@@ -1093,14 +974,9 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 }
 
 static void
-idpf_adapter_rel(struct idpf_adapter_ext *adapter)
+idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
-	struct idpf_hw *hw = &adapter->base.hw;
-
-	idpf_ctlq_deinit(hw);
-
-	rte_free(adapter->base.mbx_resp);
-	adapter->base.mbx_resp = NULL;
+	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
 	adapter->vports = NULL;
@@ -1133,7 +1009,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 			return -ENOMEM;
 		}
 
-		retval = idpf_adapter_init(pci_dev, adapter);
+		retval = idpf_adapter_ext_init(pci_dev, adapter);
 		if (retval != 0) {
 			PMD_INIT_LOG(ERR, "Failed to init adapter.");
 			return retval;
@@ -1196,7 +1072,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		rte_spinlock_lock(&idpf_adapter_lock);
 		TAILQ_REMOVE(&idpf_adapter_list, adapter, next);
 		rte_spinlock_unlock(&idpf_adapter_lock);
-		idpf_adapter_rel(adapter);
+		idpf_adapter_ext_deinit(adapter);
 		rte_free(adapter);
 	}
 	return retval;
@@ -1216,7 +1092,7 @@ idpf_pci_remove(struct rte_pci_device *pci_dev)
 	rte_spinlock_lock(&idpf_adapter_lock);
 	TAILQ_REMOVE(&idpf_adapter_list, adapter, next);
 	rte_spinlock_unlock(&idpf_adapter_lock);
-	idpf_adapter_rel(adapter);
+	idpf_adapter_ext_deinit(adapter);
 	rte_free(adapter);
 
 	return 0;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index efc540fa32..07ffe8e408 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -31,8 +31,6 @@
 #define IDPF_RXQ_PER_GRP	1
 #define IDPF_RX_BUFQ_PER_GRP	2
 
-#define IDPF_CTLQ_ID		-1
-
 #define IDPF_DFLT_Q_VEC_NUM	1
 #define IDPF_DFLT_INTERVAL	16
 
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v4 05/15] common/idpf: add vport init/deinit
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common module beilei.xing
                     ` (3 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 04/15] common/idpf: introduce adapter init and deinit beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-17  8:06   ` [PATCH v4 06/15] common/idpf: add config RSS beilei.xing
                     ` (10 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Introduce the idpf_vport_init and idpf_vport_deinit functions
in the common module.
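
As a quick illustration, below is a minimal consumer-side sketch of
the new lifecycle (the example_* wrapper name is hypothetical, error
handling is trimmed, and string.h plus the idpf common headers are
assumed):

/* Hypothetical sketch, not part of this patch. */
static int
example_vport_lifecycle(struct idpf_vport *vport, void *eth_dev_data)
{
	struct virtchnl2_create_vport vport_req_info;
	int ret;

	memset(&vport_req_info, 0, sizeof(vport_req_info));
	/* ... fill vport_type, txq/rxq model and queue counts ... */

	/* Sends VIRTCHNL2_OP_CREATE_VPORT, parses the response
	 * (vport id, queue chunks, MAC address, RSS sizes) into
	 * the vport, and allocates vport->rss_key/rss_lut.
	 */
	ret = idpf_vport_init(vport, &vport_req_info, eth_dev_data);
	if (ret != 0)
		return ret;

	/* ... queue setup, traffic, device stop ... */

	/* Frees the RSS buffers and destroys the vport. */
	return idpf_vport_deinit(vport);
}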

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 115 +++++++++++++++++
 drivers/common/idpf/idpf_common_device.h   |  13 +-
 drivers/common/idpf/idpf_common_virtchnl.c |  18 +--
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 138 ++-------------------
 5 files changed, 148 insertions(+), 138 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index b2b42443e4..5628fb5c57 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -158,4 +158,119 @@ idpf_adapter_deinit(struct idpf_adapter *adapter)
 	return 0;
 }
 
+int
+idpf_vport_init(struct idpf_vport *vport,
+		struct virtchnl2_create_vport *create_vport_info,
+		void *dev_data)
+{
+	struct virtchnl2_create_vport *vport_info;
+	int i, type, ret;
+
+	ret = idpf_vc_create_vport(vport, create_vport_info);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to create vport.");
+		goto err_create_vport;
+	}
+
+	vport_info = &(vport->vport_info.info);
+	vport->vport_id = vport_info->vport_id;
+	vport->txq_model = vport_info->txq_model;
+	vport->rxq_model = vport_info->rxq_model;
+	vport->num_tx_q = vport_info->num_tx_q;
+	vport->num_tx_complq = vport_info->num_tx_complq;
+	vport->num_rx_q = vport_info->num_rx_q;
+	vport->num_rx_bufq = vport_info->num_rx_bufq;
+	vport->max_mtu = vport_info->max_mtu;
+	rte_memcpy(vport->default_mac_addr,
+		   vport_info->default_mac_addr, ETH_ALEN);
+	vport->rss_algorithm = vport_info->rss_algorithm;
+	vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+				      vport_info->rss_key_size);
+	vport->rss_lut_size = vport_info->rss_lut_size;
+
+	for (i = 0; i < vport_info->chunks.num_chunks; i++) {
+		type = vport_info->chunks.chunks[i].type;
+		switch (type) {
+		case VIRTCHNL2_QUEUE_TYPE_TX:
+			vport->chunks_info.tx_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.tx_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.tx_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		case VIRTCHNL2_QUEUE_TYPE_RX:
+			vport->chunks_info.rx_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.rx_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.rx_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+			vport->chunks_info.tx_compl_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.tx_compl_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.tx_compl_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+			vport->chunks_info.rx_buf_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.rx_buf_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.rx_buf_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		default:
+			DRV_LOG(ERR, "Unsupported queue type");
+			break;
+		}
+	}
+
+	vport->dev_data = dev_data;
+
+	vport->rss_key = rte_zmalloc("rss_key",
+				     vport->rss_key_size, 0);
+	if (vport->rss_key == NULL) {
+		DRV_LOG(ERR, "Failed to allocate RSS key");
+		ret = -ENOMEM;
+		goto err_rss_key;
+	}
+
+	vport->rss_lut = rte_zmalloc("rss_lut",
+				     sizeof(uint32_t) * vport->rss_lut_size, 0);
+	if (vport->rss_lut == NULL) {
+		DRV_LOG(ERR, "Failed to allocate RSS lut");
+		ret = -ENOMEM;
+		goto err_rss_lut;
+	}
+
+	return 0;
+
+err_rss_lut:
+	vport->dev_data = NULL;
+	rte_free(vport->rss_key);
+	vport->rss_key = NULL;
+err_rss_key:
+	idpf_vc_destroy_vport(vport);
+err_create_vport:
+	return ret;
+}
+int
+idpf_vport_deinit(struct idpf_vport *vport)
+{
+	rte_free(vport->rss_lut);
+	vport->rss_lut = NULL;
+
+	rte_free(vport->rss_key);
+	vport->rss_key = NULL;
+
+	vport->dev_data = NULL;
+
+	idpf_vc_destroy_vport(vport);
+
+	return 0;
+}
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index e4344ea392..14d04268e5 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -9,6 +9,8 @@
 #include <base/virtchnl2.h>
 #include <idpf_common_logs.h>
 
+#define IDPF_RSS_KEY_LEN	52
+
 #define IDPF_CTLQ_ID		-1
 #define IDPF_CTLQ_LEN		64
 #define IDPF_DFLT_MBX_BUF_SIZE	4096
@@ -43,7 +45,10 @@ struct idpf_chunks_info {
 
 struct idpf_vport {
 	struct idpf_adapter *adapter; /* Backreference to associated adapter */
-	struct virtchnl2_create_vport *vport_info; /* virtchnl response info handling */
+	union {
+		struct virtchnl2_create_vport info; /* virtchnl response info handling */
+		uint8_t data[IDPF_DFLT_MBX_BUF_SIZE];
+	} vport_info;
 	uint16_t sw_idx; /* SW index in adapter->vports[]*/
 	uint16_t vport_id;
 	uint32_t txq_model;
@@ -142,5 +147,11 @@ __rte_internal
 int idpf_adapter_init(struct idpf_adapter *adapter);
 __rte_internal
 int idpf_adapter_deinit(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vport_init(struct idpf_vport *vport,
+		    struct virtchnl2_create_vport *vport_req_info,
+		    void *dev_data);
+__rte_internal
+int idpf_vport_deinit(struct idpf_vport *vport);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 2e94a95876..1531adccef 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -355,7 +355,7 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 
 int
 idpf_vc_create_vport(struct idpf_vport *vport,
-		     struct virtchnl2_create_vport *vport_req_info)
+		     struct virtchnl2_create_vport *create_vport_info)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_create_vport vport_msg;
@@ -363,13 +363,13 @@ idpf_vc_create_vport(struct idpf_vport *vport,
 	int err = -1;
 
 	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
-	vport_msg.vport_type = vport_req_info->vport_type;
-	vport_msg.txq_model = vport_req_info->txq_model;
-	vport_msg.rxq_model = vport_req_info->rxq_model;
-	vport_msg.num_tx_q = vport_req_info->num_tx_q;
-	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
-	vport_msg.num_rx_q = vport_req_info->num_rx_q;
-	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
+	vport_msg.vport_type = create_vport_info->vport_type;
+	vport_msg.txq_model = create_vport_info->txq_model;
+	vport_msg.rxq_model = create_vport_info->rxq_model;
+	vport_msg.num_tx_q = create_vport_info->num_tx_q;
+	vport_msg.num_tx_complq = create_vport_info->num_tx_complq;
+	vport_msg.num_rx_q = create_vport_info->num_rx_q;
+	vport_msg.num_rx_bufq = create_vport_info->num_rx_bufq;
 
 	memset(&args, 0, sizeof(args));
 	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
@@ -385,7 +385,7 @@ idpf_vc_create_vport(struct idpf_vport *vport,
 		return err;
 	}
 
-	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+	rte_memcpy(&(vport->vport_info.info), args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
 	return 0;
 }
 
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 7259dcf8a4..680a69822c 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -25,6 +25,8 @@ INTERNAL {
 	idpf_vc_set_rss_hash;
 	idpf_vc_set_rss_key;
 	idpf_vc_set_rss_lut;
+	idpf_vport_deinit;
+	idpf_vport_init;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index c17c7bb472..7a8fb6fd4a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -178,73 +178,6 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev,
 	return 0;
 }
 
-#define IDPF_RSS_KEY_LEN 52
-
-static int
-idpf_init_vport(struct idpf_vport *vport)
-{
-	struct virtchnl2_create_vport *vport_info = vport->vport_info;
-	int i, type;
-
-	vport->vport_id = vport_info->vport_id;
-	vport->txq_model = vport_info->txq_model;
-	vport->rxq_model = vport_info->rxq_model;
-	vport->num_tx_q = vport_info->num_tx_q;
-	vport->num_tx_complq = vport_info->num_tx_complq;
-	vport->num_rx_q = vport_info->num_rx_q;
-	vport->num_rx_bufq = vport_info->num_rx_bufq;
-	vport->max_mtu = vport_info->max_mtu;
-	rte_memcpy(vport->default_mac_addr,
-		   vport_info->default_mac_addr, ETH_ALEN);
-	vport->rss_algorithm = vport_info->rss_algorithm;
-	vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
-				     vport_info->rss_key_size);
-	vport->rss_lut_size = vport_info->rss_lut_size;
-
-	for (i = 0; i < vport_info->chunks.num_chunks; i++) {
-		type = vport_info->chunks.chunks[i].type;
-		switch (type) {
-		case VIRTCHNL2_QUEUE_TYPE_TX:
-			vport->chunks_info.tx_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.tx_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.tx_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		case VIRTCHNL2_QUEUE_TYPE_RX:
-			vport->chunks_info.rx_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.rx_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.rx_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
-			vport->chunks_info.tx_compl_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.tx_compl_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.tx_compl_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
-			vport->chunks_info.rx_buf_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.rx_buf_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.rx_buf_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		default:
-			PMD_INIT_LOG(ERR, "Unsupported queue type");
-			break;
-		}
-	}
-
-	return 0;
-}
-
 static int
 idpf_config_rss(struct idpf_vport *vport)
 {
@@ -276,63 +209,34 @@ idpf_init_rss(struct idpf_vport *vport)
 {
 	struct rte_eth_rss_conf *rss_conf;
 	struct rte_eth_dev_data *dev_data;
-	uint16_t i, nb_q, lut_size;
+	uint16_t i, nb_q;
 	int ret = 0;
 
 	dev_data = vport->dev_data;
 	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
 	nb_q = dev_data->nb_rx_queues;
 
-	vport->rss_key = rte_zmalloc("rss_key",
-				     vport->rss_key_size, 0);
-	if (vport->rss_key == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
-		ret = -ENOMEM;
-		goto err_alloc_key;
-	}
-
-	lut_size = vport->rss_lut_size;
-	vport->rss_lut = rte_zmalloc("rss_lut",
-				     sizeof(uint32_t) * lut_size, 0);
-	if (vport->rss_lut == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate RSS lut");
-		ret = -ENOMEM;
-		goto err_alloc_lut;
-	}
-
 	if (rss_conf->rss_key == NULL) {
 		for (i = 0; i < vport->rss_key_size; i++)
 			vport->rss_key[i] = (uint8_t)rte_rand();
 	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
 		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
 			     vport->rss_key_size);
-		ret = -EINVAL;
-		goto err_cfg_key;
+		return -EINVAL;
 	} else {
 		rte_memcpy(vport->rss_key, rss_conf->rss_key,
 			   vport->rss_key_size);
 	}
 
-	for (i = 0; i < lut_size; i++)
+	for (i = 0; i < vport->rss_lut_size; i++)
 		vport->rss_lut[i] = i % nb_q;
 
 	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
 
 	ret = idpf_config_rss(vport);
-	if (ret != 0) {
+	if (ret != 0)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS");
-		goto err_cfg_key;
-	}
-
-	return ret;
 
-err_cfg_key:
-	rte_free(vport->rss_lut);
-	vport->rss_lut = NULL;
-err_alloc_lut:
-	rte_free(vport->rss_key);
-	vport->rss_key = NULL;
-err_alloc_key:
 	return ret;
 }
 
@@ -602,13 +506,7 @@ idpf_dev_close(struct rte_eth_dev *dev)
 
 	idpf_dev_stop(dev);
 
-	idpf_vc_destroy_vport(vport);
-
-	rte_free(vport->rss_lut);
-	vport->rss_lut = NULL;
-
-	rte_free(vport->rss_key);
-	vport->rss_key = NULL;
+	idpf_vport_deinit(vport);
 
 	rte_free(vport->recv_vectors);
 	vport->recv_vectors = NULL;
@@ -892,13 +790,6 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
 
-	vport->vport_info = rte_zmalloc(NULL, IDPF_DFLT_MBX_BUF_SIZE, 0);
-	if (vport->vport_info == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate vport_info");
-		ret = -ENOMEM;
-		goto err;
-	}
-
 	memset(&vport_req_info, 0, sizeof(vport_req_info));
 	ret = idpf_init_vport_req_info(dev, &vport_req_info);
 	if (ret != 0) {
@@ -906,19 +797,12 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 		goto err;
 	}
 
-	ret = idpf_vc_create_vport(vport, &vport_req_info);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to create vport.");
-		goto err_create_vport;
-	}
-
-	ret = idpf_init_vport(vport);
+	ret = idpf_vport_init(vport, &vport_req_info, dev->data);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vports.");
-		goto err_init_vport;
+		goto err;
 	}
 
-	vport->dev_data = dev->data;
 	adapter->vports[param->idx] = vport;
 	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
 	adapter->cur_vport_nb++;
@@ -927,7 +811,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	if (dev->data->mac_addrs == NULL) {
 		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
 		ret = -ENOMEM;
-		goto err_init_vport;
+		goto err_mac_addrs;
 	}
 
 	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
@@ -935,11 +819,9 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 
 	return 0;
 
-err_init_vport:
+err_mac_addrs:
 	adapter->vports[param->idx] = NULL;  /* reset */
-	idpf_vc_destroy_vport(vport);
-err_create_vport:
-	rte_free(vport->vport_info);
+	idpf_vport_deinit(vport);
 err:
 	return ret;
 }
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v4 06/15] common/idpf: add config RSS
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common module beilei.xing
                     ` (4 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 05/15] common/idpf: add vport init/deinit beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-17  8:06   ` [PATCH v4 07/15] common/idpf: add irq map/unmap beilei.xing
                     ` (9 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the RSS configuration function to the common module.
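
As a sketch of the intended usage (the example_* name is
hypothetical; in this series the PMD's idpf_init_rss() is the real
caller), the helper is invoked once rss_key, rss_lut and rss_hf have
been filled in:

/* Hypothetical sketch; assumes rte_random.h and a vport that went
 * through idpf_vport_init(), which allocated rss_key/rss_lut and
 * recorded their sizes from the CREATE_VPORT response.
 */
static int
example_rss_setup(struct idpf_vport *vport, uint16_t nb_rx_queues,
		  uint64_t rss_hf)
{
	uint16_t i;

	for (i = 0; i < vport->rss_key_size; i++)
		vport->rss_key[i] = (uint8_t)rte_rand();

	/* Spread Rx queues round-robin across the LUT. */
	for (i = 0; i < vport->rss_lut_size; i++)
		vport->rss_lut[i] = i % nb_rx_queues;

	vport->rss_hf = rss_hf;

	/* Issues the three virtchnl messages: key, LUT, hash. */
	return idpf_config_rss(vport);
}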

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c | 25 +++++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h |  2 ++
 drivers/common/idpf/version.map          |  1 +
 drivers/net/idpf/idpf_ethdev.c           | 26 ------------------------
 4 files changed, 28 insertions(+), 26 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 5628fb5c57..eee96b5083 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -273,4 +273,29 @@ idpf_vport_deinit(struct idpf_vport *vport)
 
 	return 0;
 }
+int
+idpf_config_rss(struct idpf_vport *vport)
+{
+	int ret;
+
+	ret = idpf_vc_set_rss_key(vport);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+	ret = idpf_vc_set_rss_lut(vport);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to configure RSS lut");
+		return ret;
+	}
+
+	ret = idpf_vc_set_rss_hash(vport);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return ret;
+}
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 14d04268e5..1d3bb06fef 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -153,5 +153,7 @@ int idpf_vport_init(struct idpf_vport *vport,
 		    void *dev_data);
 __rte_internal
 int idpf_vport_deinit(struct idpf_vport *vport);
+__rte_internal
+int idpf_config_rss(struct idpf_vport *vport);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 680a69822c..d8d5275b1c 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,6 +3,7 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+	idpf_config_rss;
 	idpf_ctlq_clean_sq;
 	idpf_ctlq_deinit;
 	idpf_ctlq_init;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 7a8fb6fd4a..f728318dad 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -178,32 +178,6 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-idpf_config_rss(struct idpf_vport *vport)
-{
-	int ret;
-
-	ret = idpf_vc_set_rss_key(vport);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
-		return ret;
-	}
-
-	ret = idpf_vc_set_rss_lut(vport);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
-		return ret;
-	}
-
-	ret = idpf_vc_set_rss_hash(vport);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
-		return ret;
-	}
-
-	return ret;
-}
-
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v4 07/15] common/idpf: add irq map/unmap
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common module beilei.xing
                     ` (5 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 06/15] common/idpf: add config RSS beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-31  8:11     ` Wu, Jingjing
  2023-01-17  8:06   ` [PATCH v4 08/15] common/idpf: support get packet type beilei.xing
                     ` (8 subsequent siblings)
  15 siblings, 1 reply; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Introduce the idpf_config_irq_map/idpf_config_irq_unmap functions
in the common module, and refine the Rx queue IRQ configuration
function. Refine the device start function with IRQ error handling;
in addition, vport->stopped is now initialized at the end of the
function, once start has fully succeeded.
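
A condensed sketch of the resulting start/stop ordering is below
(the example_* wrappers are hypothetical; they mirror what
idpf_dev_start()/idpf_dev_stop() do in the diff that follows):

/* Hypothetical sketch of the IRQ setup/teardown ordering. */
static int
example_start(struct idpf_vport *vport, uint16_t nb_rxq,
	      uint16_t req_vecs_num)
{
	int ret;

	ret = idpf_vc_alloc_vectors(vport, req_vecs_num);
	if (ret != 0)
		return ret;		/* nothing to unwind yet */

	/* Allocates qv_map, programs DYN_CTL for write-back on ITR
	 * expiration and maps all Rx queues to the first allocated
	 * vector.
	 */
	ret = idpf_config_irq_map(vport, nb_rxq);
	if (ret != 0)
		goto err_irq;

	/* ... start queues, enable the vport ... */

	vport->stopped = 0;	/* only once start fully succeeded */
	return 0;

err_irq:
	idpf_vc_dealloc_vectors(vport);
	return ret;
}

static void
example_stop(struct idpf_vport *vport, uint16_t nb_rxq)
{
	idpf_config_irq_unmap(vport, nb_rxq);	/* also frees qv_map */
	idpf_vc_dealloc_vectors(vport);
	vport->stopped = 1;
}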

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   |  99 ++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h   |   6 ++
 drivers/common/idpf/idpf_common_virtchnl.c |   8 --
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +-
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 102 +++------------------
 drivers/net/idpf/idpf_ethdev.h             |   1 -
 7 files changed, 122 insertions(+), 102 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index eee96b5083..422b0b0304 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -247,8 +247,21 @@ idpf_vport_init(struct idpf_vport *vport,
 		goto err_rss_lut;
 	}
 
+	/* recv_vectors holds the VIRTCHNL2_OP_ALLOC_VECTORS response;
+	 * reserve the maximum size for it now, may optimize in future.
+	 */
+	vport->recv_vectors = rte_zmalloc("recv_vectors", IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (vport->recv_vectors == NULL) {
+		DRV_LOG(ERR, "Failed to allocate ecv_vectors");
+		ret = -ENOMEM;
+		goto err_recv_vec;
+	}
+
 	return 0;
 
+err_recv_vec:
+	rte_free(vport->rss_lut);
+	vport->rss_lut = NULL;
 err_rss_lut:
 	vport->dev_data = NULL;
 	rte_free(vport->rss_key);
@@ -261,6 +274,8 @@ idpf_vport_init(struct idpf_vport *vport,
 int
 idpf_vport_deinit(struct idpf_vport *vport)
 {
+	rte_free(vport->recv_vectors);
+	vport->recv_vectors = NULL;
 	rte_free(vport->rss_lut);
 	vport->rss_lut = NULL;
 
@@ -298,4 +313,88 @@ idpf_config_rss(struct idpf_vport *vport)
 
 	return ret;
 }
+
+int
+idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_queue_vector *qv_map;
+	struct idpf_hw *hw = &adapter->hw;
+	uint32_t dynctl_val, itrn_val;
+	uint32_t dynctl_reg_start;
+	uint32_t itrn_reg_start;
+	uint16_t i;
+
+	qv_map = rte_zmalloc("qv_map",
+			     nb_rx_queues *
+			     sizeof(struct virtchnl2_queue_vector), 0);
+	if (qv_map == NULL) {
+		DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
+			nb_rx_queues);
+		goto qv_map_alloc_err;
+	}
+
+	/* Rx interrupt disabled, Map interrupt only for writeback */
+
+	/* The capability flags adapter->caps.other_caps should be
+	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
+	 * condition should be updated when the FW can return the
+	 * correct flag bits.
+	 */
+	dynctl_reg_start =
+		vport->recv_vectors->vchunks.vchunks->dynctl_reg_start;
+	itrn_reg_start =
+		vport->recv_vectors->vchunks.vchunks->itrn_reg_start;
+	dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start);
+	DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x", dynctl_val);
+	itrn_val = IDPF_READ_REG(hw, itrn_reg_start);
+	DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val);
+	/* Force write-backs by setting WB_ON_ITR bit in DYN_CTL
+	 * register. WB_ON_ITR and INTENA are mutually exclusive
+	 * bits. Setting WB_ON_ITR bits means TX and RX Descs
+	 * are written back based on ITR expiration irrespective
+	 * of INTENA setting.
+	 */
+	/* TBD: need to tune INTERVAL value for better performance. */
+	itrn_val = (itrn_val == 0) ? IDPF_DFLT_INTERVAL : itrn_val;
+	dynctl_val = VIRTCHNL2_ITR_IDX_0  <<
+		     PF_GLINT_DYN_CTL_ITR_INDX_S |
+		     PF_GLINT_DYN_CTL_WB_ON_ITR_M |
+		     itrn_val << PF_GLINT_DYN_CTL_INTERVAL_S;
+	IDPF_WRITE_REG(hw, dynctl_reg_start, dynctl_val);
+
+	for (i = 0; i < nb_rx_queues; i++) {
+		/* map all queues to the same vector */
+		qv_map[i].queue_id = vport->chunks_info.rx_start_qid + i;
+		qv_map[i].vector_id =
+			vport->recv_vectors->vchunks.vchunks->start_vector_id;
+	}
+	vport->qv_map = qv_map;
+
+	if (idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, true) != 0) {
+		DRV_LOG(ERR, "config interrupt mapping failed");
+		goto config_irq_map_err;
+	}
+
+	return 0;
+
+config_irq_map_err:
+	rte_free(vport->qv_map);
+	vport->qv_map = NULL;
+
+qv_map_alloc_err:
+	return -1;
+}
+
+int
+idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues)
+{
+	idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, false);
+
+	rte_free(vport->qv_map);
+	vport->qv_map = NULL;
+
+	return 0;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 1d3bb06fef..d45c2b8777 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -17,6 +17,8 @@
 
 #define IDPF_MAX_PKT_TYPE	1024
 
+#define IDPF_DFLT_INTERVAL	16
+
 struct idpf_adapter {
 	struct idpf_hw hw;
 	struct virtchnl2_version_info virtchnl_version;
@@ -155,5 +157,9 @@ __rte_internal
 int idpf_vport_deinit(struct idpf_vport *vport);
 __rte_internal
 int idpf_config_rss(struct idpf_vport *vport);
+__rte_internal
+int idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues);
+__rte_internal
+int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 1531adccef..f670d2cc17 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -573,14 +573,6 @@ idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
 
-	if (vport->recv_vectors == NULL) {
-		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
-		if (vport->recv_vectors == NULL) {
-			rte_free(alloc_vec);
-			return -ENOMEM;
-		}
-	}
-
 	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
 	rte_free(alloc_vec);
 	return err;
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index bbc66d63c4..3c9f51e4cf 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -23,6 +23,9 @@ int idpf_vc_set_rss_lut(struct idpf_vport *vport);
 __rte_internal
 int idpf_vc_set_rss_hash(struct idpf_vport *vport);
 __rte_internal
+int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+				 uint16_t nb_rxq, bool map);
+__rte_internal
 int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
 		      bool rx, bool on);
 __rte_internal
@@ -30,9 +33,6 @@ int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
 __rte_internal
 int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
 __rte_internal
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
-				 uint16_t nb_rxq, bool map);
-__rte_internal
 int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
 __rte_internal
 int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index d8d5275b1c..da3b0feefc 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,6 +3,8 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+	idpf_config_irq_map;
+	idpf_config_irq_unmap;
 	idpf_config_rss;
 	idpf_ctlq_clean_sq;
 	idpf_ctlq_deinit;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index f728318dad..d0799087a5 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -281,84 +281,9 @@ static int
 idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_queue_vector *qv_map;
-	struct idpf_hw *hw = &adapter->hw;
-	uint32_t dynctl_reg_start;
-	uint32_t itrn_reg_start;
-	uint32_t dynctl_val, itrn_val;
-	uint16_t i;
-
-	qv_map = rte_zmalloc("qv_map",
-			dev->data->nb_rx_queues *
-			sizeof(struct virtchnl2_queue_vector), 0);
-	if (qv_map == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
-			    dev->data->nb_rx_queues);
-		goto qv_map_alloc_err;
-	}
-
-	/* Rx interrupt disabled, Map interrupt only for writeback */
-
-	/* The capability flags adapter->caps.other_caps should be
-	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
-	 * condition should be updated when the FW can return the
-	 * correct flag bits.
-	 */
-	dynctl_reg_start =
-		vport->recv_vectors->vchunks.vchunks->dynctl_reg_start;
-	itrn_reg_start =
-		vport->recv_vectors->vchunks.vchunks->itrn_reg_start;
-	dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start);
-	PMD_DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x",
-		    dynctl_val);
-	itrn_val = IDPF_READ_REG(hw, itrn_reg_start);
-	PMD_DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val);
-	/* Force write-backs by setting WB_ON_ITR bit in DYN_CTL
-	 * register. WB_ON_ITR and INTENA are mutually exclusive
-	 * bits. Setting WB_ON_ITR bits means TX and RX Descs
-	 * are written back based on ITR expiration irrespective
-	 * of INTENA setting.
-	 */
-	/* TBD: need to tune INTERVAL value for better performance. */
-	if (itrn_val != 0)
-		IDPF_WRITE_REG(hw,
-			       dynctl_reg_start,
-			       VIRTCHNL2_ITR_IDX_0  <<
-			       PF_GLINT_DYN_CTL_ITR_INDX_S |
-			       PF_GLINT_DYN_CTL_WB_ON_ITR_M |
-			       itrn_val <<
-			       PF_GLINT_DYN_CTL_INTERVAL_S);
-	else
-		IDPF_WRITE_REG(hw,
-			       dynctl_reg_start,
-			       VIRTCHNL2_ITR_IDX_0  <<
-			       PF_GLINT_DYN_CTL_ITR_INDX_S |
-			       PF_GLINT_DYN_CTL_WB_ON_ITR_M |
-			       IDPF_DFLT_INTERVAL <<
-			       PF_GLINT_DYN_CTL_INTERVAL_S);
-
-	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		/* map all queues to the same vector */
-		qv_map[i].queue_id = vport->chunks_info.rx_start_qid + i;
-		qv_map[i].vector_id =
-			vport->recv_vectors->vchunks.vchunks->start_vector_id;
-	}
-	vport->qv_map = qv_map;
-
-	if (idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, true) != 0) {
-		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
-		goto config_irq_map_err;
-	}
-
-	return 0;
-
-config_irq_map_err:
-	rte_free(vport->qv_map);
-	vport->qv_map = NULL;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
 
-qv_map_alloc_err:
-	return -1;
+	return idpf_config_irq_map(vport, nb_rx_queues);
 }
 
 static int
@@ -404,8 +329,6 @@ idpf_dev_start(struct rte_eth_dev *dev)
 	uint16_t req_vecs_num;
 	int ret;
 
-	vport->stopped = 0;
-
 	req_vecs_num = IDPF_DFLT_Q_VEC_NUM;
 	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
 		PMD_DRV_LOG(ERR, "The accumulated request vectors' number should be less than %d",
@@ -424,13 +347,13 @@ idpf_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_config_rx_queues_irqs(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to configure irqs");
-		goto err_vec;
+		goto err_irq;
 	}
 
 	ret = idpf_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
-		goto err_vec;
+		goto err_startq;
 	}
 
 	idpf_set_rx_function(dev);
@@ -442,10 +365,16 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	vport->stopped = 0;
+
 	return 0;
 
 err_vport:
 	idpf_stop_queues(dev);
+err_startq:
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+err_irq:
+	idpf_vc_dealloc_vectors(vport);
 err_vec:
 	return ret;
 }
@@ -462,10 +391,9 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_stop_queues(dev);
 
-	idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, false);
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
 
-	if (vport->recv_vectors != NULL)
-		idpf_vc_dealloc_vectors(vport);
+	idpf_vc_dealloc_vectors(vport);
 
 	vport->stopped = 1;
 
@@ -482,12 +410,6 @@ idpf_dev_close(struct rte_eth_dev *dev)
 
 	idpf_vport_deinit(vport);
 
-	rte_free(vport->recv_vectors);
-	vport->recv_vectors = NULL;
-
-	rte_free(vport->qv_map);
-	vport->qv_map = NULL;
-
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
 	adapter->cur_vport_nb--;
 	dev->data->dev_private = NULL;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 07ffe8e408..55be98a8ed 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -32,7 +32,6 @@
 #define IDPF_RX_BUFQ_PER_GRP	2
 
 #define IDPF_DFLT_Q_VEC_NUM	1
-#define IDPF_DFLT_INTERVAL	16
 
 #define IDPF_MIN_BUF_SIZE	1024
 #define IDPF_MAX_FRAME_SIZE	9728
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v4 08/15] common/idpf: support get packet type
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common module beilei.xing
                     ` (6 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 07/15] common/idpf: add irq map/unmap beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-17  8:06   ` [PATCH v4 09/15] common/idpf: add vport info initialization beilei.xing
                     ` (7 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the ptype_tbl field to the idpf_adapter structure.
Move idpf_get_pkt_type to the common module and make it static.
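
For context, a small sketch of how the hot path consumes the table
after the move (the example_* inline is hypothetical; the Rx
routines in this diff index the table the same way):

/* Hypothetical sketch: translate the 10-bit hardware ptype id from
 * the Rx descriptor into an rte_mbuf packet type. ptype_tbl has
 * IDPF_MAX_PKT_TYPE (1024) entries and is built once per adapter by
 * idpf_get_pkt_type() during idpf_adapter_init().
 */
static inline void
example_resolve_ptype(struct rte_mbuf *mb, uint16_t ptype_id_10,
		      const struct idpf_adapter *adapter)
{
	mb->packet_type = adapter->ptype_tbl[ptype_id_10];
}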

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 216 +++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h   |   7 +
 drivers/common/idpf/idpf_common_virtchnl.h |  10 +-
 drivers/common/idpf/meson.build            |   2 +
 drivers/net/idpf/idpf_ethdev.c             |   6 -
 drivers/net/idpf/idpf_ethdev.h             |   4 -
 drivers/net/idpf/idpf_rxtx.c               |   4 +-
 drivers/net/idpf/idpf_rxtx.h               |   4 -
 drivers/net/idpf/idpf_rxtx_vec_avx512.c    |   3 +-
 drivers/net/idpf/idpf_vchnl.c              | 213 --------------------
 10 files changed, 233 insertions(+), 236 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 422b0b0304..9647d4a62a 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -96,6 +96,216 @@ idpf_init_mbx(struct idpf_hw *hw)
 	return ret;
 }
 
+static int
+idpf_get_pkt_type(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_get_ptype_info *ptype_info;
+	uint16_t ptype_offset, i, j;
+	uint16_t ptype_recvd = 0;
+	int ret;
+
+	ret = idpf_vc_query_ptype_info(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Fail to query packet type information");
+		return ret;
+	}
+
+	ptype_info = rte_zmalloc("ptype_info", IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (ptype_info == NULL)
+		return -ENOMEM;
+
+	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
+		ret = idpf_read_one_msg(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
+					IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
+		if (ret != 0) {
+			DRV_LOG(ERR, "Fail to get packet type information");
+			goto free_ptype_info;
+		}
+
+		ptype_recvd += ptype_info->num_ptypes;
+		ptype_offset = sizeof(struct virtchnl2_get_ptype_info) -
+						sizeof(struct virtchnl2_ptype);
+
+		for (i = 0; i < rte_cpu_to_le_16(ptype_info->num_ptypes); i++) {
+			bool is_inner = false, is_ip = false;
+			struct virtchnl2_ptype *ptype;
+			uint32_t proto_hdr = 0;
+
+			ptype = (struct virtchnl2_ptype *)
+					((uint8_t *)ptype_info + ptype_offset);
+			ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
+			if (ptype_offset > IDPF_DFLT_MBX_BUF_SIZE) {
+				ret = -EINVAL;
+				goto free_ptype_info;
+			}
+
+			if (rte_cpu_to_le_16(ptype->ptype_id_10) == 0xFFFF)
+				goto free_ptype_info;
+
+			for (j = 0; j < ptype->proto_id_count; j++) {
+				switch (rte_cpu_to_le_16(ptype->proto_id[j])) {
+				case VIRTCHNL2_PROTO_HDR_GRE:
+				case VIRTCHNL2_PROTO_HDR_VXLAN:
+					proto_hdr &= ~RTE_PTYPE_L4_MASK;
+					proto_hdr |= RTE_PTYPE_TUNNEL_GRENAT;
+					is_inner = true;
+					break;
+				case VIRTCHNL2_PROTO_HDR_MAC:
+					if (is_inner) {
+						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
+						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER;
+					} else {
+						proto_hdr &= ~RTE_PTYPE_L2_MASK;
+						proto_hdr |= RTE_PTYPE_L2_ETHER;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_VLAN:
+					if (is_inner) {
+						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
+						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER_VLAN;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_PTP:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_TIMESYNC;
+					break;
+				case VIRTCHNL2_PROTO_HDR_LLDP:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_LLDP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_ARP:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_ARP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_PPPOE:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_PPPOE;
+					break;
+				case VIRTCHNL2_PROTO_HDR_IPV4:
+					if (!is_ip) {
+						proto_hdr |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+						is_ip = true;
+					} else {
+						proto_hdr |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+							     RTE_PTYPE_TUNNEL_IP;
+						is_inner = true;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_IPV6:
+					if (!is_ip) {
+						proto_hdr |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
+						is_ip = true;
+					} else {
+						proto_hdr |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+							     RTE_PTYPE_TUNNEL_IP;
+						is_inner = true;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_IPV4_FRAG:
+				case VIRTCHNL2_PROTO_HDR_IPV6_FRAG:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_FRAG;
+					else
+						proto_hdr |= RTE_PTYPE_L4_FRAG;
+					break;
+				case VIRTCHNL2_PROTO_HDR_UDP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_UDP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_UDP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_TCP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_TCP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_TCP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_SCTP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_SCTP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_SCTP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_ICMP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_ICMP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_ICMPV6:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_ICMP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_L2TPV2:
+				case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL:
+				case VIRTCHNL2_PROTO_HDR_L2TPV3:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_L2TP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_NVGRE:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_NVGRE;
+					break;
+				case VIRTCHNL2_PROTO_HDR_GTPC_TEID:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_GTPC;
+					break;
+				case VIRTCHNL2_PROTO_HDR_GTPU:
+				case VIRTCHNL2_PROTO_HDR_GTPU_UL:
+				case VIRTCHNL2_PROTO_HDR_GTPU_DL:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_GTPU;
+					break;
+				case VIRTCHNL2_PROTO_HDR_PAY:
+				case VIRTCHNL2_PROTO_HDR_IPV6_EH:
+				case VIRTCHNL2_PROTO_HDR_PRE_MAC:
+				case VIRTCHNL2_PROTO_HDR_POST_MAC:
+				case VIRTCHNL2_PROTO_HDR_ETHERTYPE:
+				case VIRTCHNL2_PROTO_HDR_SVLAN:
+				case VIRTCHNL2_PROTO_HDR_CVLAN:
+				case VIRTCHNL2_PROTO_HDR_MPLS:
+				case VIRTCHNL2_PROTO_HDR_MMPLS:
+				case VIRTCHNL2_PROTO_HDR_CTRL:
+				case VIRTCHNL2_PROTO_HDR_ECP:
+				case VIRTCHNL2_PROTO_HDR_EAPOL:
+				case VIRTCHNL2_PROTO_HDR_PPPOD:
+				case VIRTCHNL2_PROTO_HDR_IGMP:
+				case VIRTCHNL2_PROTO_HDR_AH:
+				case VIRTCHNL2_PROTO_HDR_ESP:
+				case VIRTCHNL2_PROTO_HDR_IKE:
+				case VIRTCHNL2_PROTO_HDR_NATT_KEEP:
+				case VIRTCHNL2_PROTO_HDR_GTP:
+				case VIRTCHNL2_PROTO_HDR_GTP_EH:
+				case VIRTCHNL2_PROTO_HDR_GTPCV2:
+				case VIRTCHNL2_PROTO_HDR_ECPRI:
+				case VIRTCHNL2_PROTO_HDR_VRRP:
+				case VIRTCHNL2_PROTO_HDR_OSPF:
+				case VIRTCHNL2_PROTO_HDR_TUN:
+				case VIRTCHNL2_PROTO_HDR_VXLAN_GPE:
+				case VIRTCHNL2_PROTO_HDR_GENEVE:
+				case VIRTCHNL2_PROTO_HDR_NSH:
+				case VIRTCHNL2_PROTO_HDR_QUIC:
+				case VIRTCHNL2_PROTO_HDR_PFCP:
+				case VIRTCHNL2_PROTO_HDR_PFCP_NODE:
+				case VIRTCHNL2_PROTO_HDR_PFCP_SESSION:
+				case VIRTCHNL2_PROTO_HDR_RTP:
+				case VIRTCHNL2_PROTO_HDR_NO_PROTO:
+				default:
+					continue;
+				}
+				adapter->ptype_tbl[ptype->ptype_id_10] = proto_hdr;
+			}
+		}
+	}
+
+free_ptype_info:
+	rte_free(ptype_info);
+	clear_cmd(adapter);
+	return ret;
+}
+
 int
 idpf_adapter_init(struct idpf_adapter *adapter)
 {
@@ -135,6 +345,12 @@ idpf_adapter_init(struct idpf_adapter *adapter)
 		goto err_check_api;
 	}
 
+	ret = idpf_get_pkt_type(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to set ptype table");
+		goto err_check_api;
+	}
+
 	return 0;
 
 err_check_api:
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index d45c2b8777..997f01f3aa 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -5,6 +5,7 @@
 #ifndef _IDPF_COMMON_DEVICE_H_
 #define _IDPF_COMMON_DEVICE_H_
 
+#include <rte_mbuf_ptype.h>
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
 #include <idpf_common_logs.h>
@@ -19,6 +20,10 @@
 
 #define IDPF_DFLT_INTERVAL	16
 
+#define IDPF_GET_PTYPE_SIZE(p)						\
+	(sizeof(struct virtchnl2_ptype) +				\
+	 (((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
+
 struct idpf_adapter {
 	struct idpf_hw hw;
 	struct virtchnl2_version_info virtchnl_version;
@@ -26,6 +31,8 @@ struct idpf_adapter {
 	volatile uint32_t pend_cmd; /* pending command not finished */
 	uint32_t cmd_retval; /* return value of the cmd response from cp */
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
+
+	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
 };
 
 struct idpf_chunks_info {
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 3c9f51e4cf..11dbc089cb 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -26,6 +26,11 @@ __rte_internal
 int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
 				 uint16_t nb_rxq, bool map);
 __rte_internal
+int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
+		      uint16_t buf_len, uint8_t *buf);
+__rte_internal
 int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
 		      bool rx, bool on);
 __rte_internal
@@ -37,11 +42,6 @@ int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
 __rte_internal
 int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
 __rte_internal
-int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
-__rte_internal
-int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
-		      uint16_t buf_len, uint8_t *buf);
-__rte_internal
 int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
 			struct idpf_cmd_info *args);
 
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index d1578641ba..c6cc7a196b 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2022 Intel Corporation
 
+deps += ['mbuf']
+
 sources = files(
     'idpf_common_device.c',
     'idpf_common_virtchnl.c',
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index d0799087a5..84046f955a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -602,12 +602,6 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
 		goto err_adapter_init;
 	}
 
-	ret = idpf_get_pkt_type(adapter);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to set ptype table");
-		goto err_get_ptype;
-	}
-
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 55be98a8ed..d30807ca41 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -89,8 +89,6 @@ struct idpf_adapter_ext {
 
 	uint16_t used_vecs_num;
 
-	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
-
 	bool rx_vec_allowed;
 	bool tx_vec_allowed;
 	bool rx_use_avx512;
@@ -107,6 +105,4 @@ TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
 #define IDPF_ADAPTER_TO_EXT(p)					\
 	container_of((p), struct idpf_adapter_ext, base)
 
-int idpf_get_pkt_type(struct idpf_adapter_ext *adapter);
-
 #endif /* _IDPF_ETHDEV_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 918d156e03..0c9c7fee29 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1407,7 +1407,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
 	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
 	rx_desc_ring = rxq->rx_ring;
-	ptype_tbl = ad->ptype_tbl;
+	ptype_tbl = rxq->adapter->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
@@ -1812,7 +1812,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	rx_id = rxq->rx_tail;
 	rx_ring = rxq->rx_ring;
-	ptype_tbl = ad->ptype_tbl;
+	ptype_tbl = rxq->adapter->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 9417651b3f..cac6040943 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -82,10 +82,6 @@
 #define IDPF_TX_OFFLOAD_NOTSUP_MASK \
 		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
 
-#define IDPF_GET_PTYPE_SIZE(p) \
-	(sizeof(struct virtchnl2_ptype) + \
-	(((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
-
 extern uint64_t idpf_timestamp_dynflag;
 
 struct idpf_rx_queue {
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index efa7cd2187..fb2b6bb53c 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -245,8 +245,7 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
 				   struct rte_mbuf **rx_pkts,
 				   uint16_t nb_pkts)
 {
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(rxq->adapter);
-	const uint32_t *type_table = adapter->ptype_tbl;
+	const uint32_t *type_table = rxq->adapter->ptype_tbl;
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
 						    rxq->mbuf_initializer);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 576b797973..45d05ed108 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -23,219 +23,6 @@
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
 
-int __rte_cold
-idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
-{
-	struct virtchnl2_get_ptype_info *ptype_info;
-	struct idpf_adapter *base;
-	uint16_t ptype_offset, i, j;
-	uint16_t ptype_recvd = 0;
-	int ret;
-
-	base = &adapter->base;
-
-	ret = idpf_vc_query_ptype_info(base);
-	if (ret != 0) {
-		PMD_DRV_LOG(ERR, "Fail to query packet type information");
-		return ret;
-	}
-
-	ptype_info = rte_zmalloc("ptype_info", IDPF_DFLT_MBX_BUF_SIZE, 0);
-		if (ptype_info == NULL)
-			return -ENOMEM;
-
-	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
-					IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
-		if (ret != 0) {
-			PMD_DRV_LOG(ERR, "Fail to get packet type information");
-			goto free_ptype_info;
-		}
-
-		ptype_recvd += ptype_info->num_ptypes;
-		ptype_offset = sizeof(struct virtchnl2_get_ptype_info) -
-						sizeof(struct virtchnl2_ptype);
-
-		for (i = 0; i < rte_cpu_to_le_16(ptype_info->num_ptypes); i++) {
-			bool is_inner = false, is_ip = false;
-			struct virtchnl2_ptype *ptype;
-			uint32_t proto_hdr = 0;
-
-			ptype = (struct virtchnl2_ptype *)
-					((uint8_t *)ptype_info + ptype_offset);
-			ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
-			if (ptype_offset > IDPF_DFLT_MBX_BUF_SIZE) {
-				ret = -EINVAL;
-				goto free_ptype_info;
-			}
-
-			if (rte_cpu_to_le_16(ptype->ptype_id_10) == 0xFFFF)
-				goto free_ptype_info;
-
-			for (j = 0; j < ptype->proto_id_count; j++) {
-				switch (rte_cpu_to_le_16(ptype->proto_id[j])) {
-				case VIRTCHNL2_PROTO_HDR_GRE:
-				case VIRTCHNL2_PROTO_HDR_VXLAN:
-					proto_hdr &= ~RTE_PTYPE_L4_MASK;
-					proto_hdr |= RTE_PTYPE_TUNNEL_GRENAT;
-					is_inner = true;
-					break;
-				case VIRTCHNL2_PROTO_HDR_MAC:
-					if (is_inner) {
-						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
-						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER;
-					} else {
-						proto_hdr &= ~RTE_PTYPE_L2_MASK;
-						proto_hdr |= RTE_PTYPE_L2_ETHER;
-					}
-					break;
-				case VIRTCHNL2_PROTO_HDR_VLAN:
-					if (is_inner) {
-						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
-						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER_VLAN;
-					}
-					break;
-				case VIRTCHNL2_PROTO_HDR_PTP:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_TIMESYNC;
-					break;
-				case VIRTCHNL2_PROTO_HDR_LLDP:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_LLDP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_ARP:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_ARP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_PPPOE:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_PPPOE;
-					break;
-				case VIRTCHNL2_PROTO_HDR_IPV4:
-					if (!is_ip) {
-						proto_hdr |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-						is_ip = true;
-					} else {
-						proto_hdr |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
-							     RTE_PTYPE_TUNNEL_IP;
-						is_inner = true;
-					}
-						break;
-				case VIRTCHNL2_PROTO_HDR_IPV6:
-					if (!is_ip) {
-						proto_hdr |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
-						is_ip = true;
-					} else {
-						proto_hdr |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
-							     RTE_PTYPE_TUNNEL_IP;
-						is_inner = true;
-					}
-					break;
-				case VIRTCHNL2_PROTO_HDR_IPV4_FRAG:
-				case VIRTCHNL2_PROTO_HDR_IPV6_FRAG:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_FRAG;
-					else
-						proto_hdr |= RTE_PTYPE_L4_FRAG;
-					break;
-				case VIRTCHNL2_PROTO_HDR_UDP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_UDP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_UDP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_TCP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_TCP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_TCP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_SCTP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_SCTP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_SCTP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_ICMP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_ICMP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_ICMPV6:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_ICMP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_L2TPV2:
-				case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL:
-				case VIRTCHNL2_PROTO_HDR_L2TPV3:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_L2TP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_NVGRE:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_NVGRE;
-					break;
-				case VIRTCHNL2_PROTO_HDR_GTPC_TEID:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_GTPC;
-					break;
-				case VIRTCHNL2_PROTO_HDR_GTPU:
-				case VIRTCHNL2_PROTO_HDR_GTPU_UL:
-				case VIRTCHNL2_PROTO_HDR_GTPU_DL:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_GTPU;
-					break;
-				case VIRTCHNL2_PROTO_HDR_PAY:
-				case VIRTCHNL2_PROTO_HDR_IPV6_EH:
-				case VIRTCHNL2_PROTO_HDR_PRE_MAC:
-				case VIRTCHNL2_PROTO_HDR_POST_MAC:
-				case VIRTCHNL2_PROTO_HDR_ETHERTYPE:
-				case VIRTCHNL2_PROTO_HDR_SVLAN:
-				case VIRTCHNL2_PROTO_HDR_CVLAN:
-				case VIRTCHNL2_PROTO_HDR_MPLS:
-				case VIRTCHNL2_PROTO_HDR_MMPLS:
-				case VIRTCHNL2_PROTO_HDR_CTRL:
-				case VIRTCHNL2_PROTO_HDR_ECP:
-				case VIRTCHNL2_PROTO_HDR_EAPOL:
-				case VIRTCHNL2_PROTO_HDR_PPPOD:
-				case VIRTCHNL2_PROTO_HDR_IGMP:
-				case VIRTCHNL2_PROTO_HDR_AH:
-				case VIRTCHNL2_PROTO_HDR_ESP:
-				case VIRTCHNL2_PROTO_HDR_IKE:
-				case VIRTCHNL2_PROTO_HDR_NATT_KEEP:
-				case VIRTCHNL2_PROTO_HDR_GTP:
-				case VIRTCHNL2_PROTO_HDR_GTP_EH:
-				case VIRTCHNL2_PROTO_HDR_GTPCV2:
-				case VIRTCHNL2_PROTO_HDR_ECPRI:
-				case VIRTCHNL2_PROTO_HDR_VRRP:
-				case VIRTCHNL2_PROTO_HDR_OSPF:
-				case VIRTCHNL2_PROTO_HDR_TUN:
-				case VIRTCHNL2_PROTO_HDR_VXLAN_GPE:
-				case VIRTCHNL2_PROTO_HDR_GENEVE:
-				case VIRTCHNL2_PROTO_HDR_NSH:
-				case VIRTCHNL2_PROTO_HDR_QUIC:
-				case VIRTCHNL2_PROTO_HDR_PFCP:
-				case VIRTCHNL2_PROTO_HDR_PFCP_NODE:
-				case VIRTCHNL2_PROTO_HDR_PFCP_SESSION:
-				case VIRTCHNL2_PROTO_HDR_RTP:
-				case VIRTCHNL2_PROTO_HDR_NO_PROTO:
-				default:
-					continue;
-				}
-				adapter->ptype_tbl[ptype->ptype_id_10] = proto_hdr;
-			}
-		}
-	}
-
-free_ptype_info:
-	rte_free(ptype_info);
-	clear_cmd(base);
-	return ret;
-}
-
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v4 09/15] common/idpf: add vport info initialization
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common module beilei.xing
                     ` (7 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 08/15] common/idpf: support get packet type beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-31  8:24     ` Wu, Jingjing
  2023-01-17  8:06   ` [PATCH v4 10/15] common/idpf: add vector flags in vport beilei.xing
                     ` (6 subsequent siblings)
  15 siblings, 1 reply; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the queue model fields from the idpf_adapter_ext structure to
the idpf_adapter structure.
Refine some parameter and function names, and move the
idpf_create_vport_info_init function to the common module.
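
For illustration, a minimal sketch of how a consumer PMD now builds
the create-vport request through the shared helper; vport, dev and
ret are assumed from the caller's context, and error handling is
trimmed:

	struct virtchnl2_create_vport info;

	memset(&info, 0, sizeof(info));
	/* Fills the queue model/count fields from adapter->txq_model
	 * and adapter->rxq_model.
	 */
	ret = idpf_create_vport_info_init(vport, &info);
	if (ret != 0)
		return ret;
	/* The filled request is then passed on to vport initialization. */
	ret = idpf_vport_init(vport, &info, dev->data);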

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c | 35 +++++++++++++++++
 drivers/common/idpf/idpf_common_device.h | 11 ++++++
 drivers/common/idpf/version.map          |  1 +
 drivers/net/idpf/idpf_ethdev.c           | 48 +++---------------------
 drivers/net/idpf/idpf_ethdev.h           |  8 ----
 5 files changed, 53 insertions(+), 50 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 9647d4a62a..411873c902 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -613,4 +613,39 @@ idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues)
 	return 0;
 }
 
+int
+idpf_create_vport_info_init(struct idpf_vport *vport,
+			    struct virtchnl2_create_vport *vport_info)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+
+	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+	if (adapter->txq_model == 0) {
+		vport_info->txq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+		vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
+		vport_info->num_tx_complq =
+			IDPF_DEFAULT_TXQ_NUM * IDPF_TX_COMPLQ_PER_GRP;
+	} else {
+		vport_info->txq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+		vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
+		vport_info->num_tx_complq = 0;
+	}
+	if (adapter->rxq_model == 0) {
+		vport_info->rxq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+		vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
+		vport_info->num_rx_bufq =
+			IDPF_DEFAULT_RXQ_NUM * IDPF_RX_BUFQ_PER_GRP;
+	} else {
+		vport_info->rxq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+		vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
+		vport_info->num_rx_bufq = 0;
+	}
+
+	return 0;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 997f01f3aa..0c73d40e53 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -16,6 +16,11 @@
 #define IDPF_CTLQ_LEN		64
 #define IDPF_DFLT_MBX_BUF_SIZE	4096
 
+#define IDPF_DEFAULT_RXQ_NUM	16
+#define IDPF_RX_BUFQ_PER_GRP	2
+#define IDPF_DEFAULT_TXQ_NUM	16
+#define IDPF_TX_COMPLQ_PER_GRP	1
+
 #define IDPF_MAX_PKT_TYPE	1024
 
 #define IDPF_DFLT_INTERVAL	16
@@ -33,6 +38,9 @@ struct idpf_adapter {
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
 
 	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
+
+	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
+	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
 };
 
 struct idpf_chunks_info {
@@ -168,5 +176,8 @@ __rte_internal
 int idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues);
 __rte_internal
 int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
+__rte_internal
+int idpf_create_vport_info_init(struct idpf_vport *vport,
+				struct virtchnl2_create_vport *vport_info);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index da3b0feefc..b153647ee1 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -6,6 +6,7 @@ INTERNAL {
 	idpf_config_irq_map;
 	idpf_config_irq_unmap;
 	idpf_config_rss;
+	idpf_create_vport_info_init;
 	idpf_ctlq_clean_sq;
 	idpf_ctlq_deinit;
 	idpf_ctlq_init;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 84046f955a..734e97ffc2 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -142,42 +142,6 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
-static int
-idpf_init_vport_req_info(struct rte_eth_dev *dev,
-			 struct virtchnl2_create_vport *vport_info)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(vport->adapter);
-
-	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
-	if (adapter->txq_model == 0) {
-		vport_info->txq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
-		vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
-		vport_info->num_tx_complq =
-			IDPF_DEFAULT_TXQ_NUM * IDPF_TX_COMPLQ_PER_GRP;
-	} else {
-		vport_info->txq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
-		vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
-		vport_info->num_tx_complq = 0;
-	}
-	if (adapter->rxq_model == 0) {
-		vport_info->rxq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
-		vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
-		vport_info->num_rx_bufq =
-			IDPF_DEFAULT_RXQ_NUM * IDPF_RX_BUFQ_PER_GRP;
-	} else {
-		vport_info->rxq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
-		vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
-		vport_info->num_rx_bufq = 0;
-	}
-
-	return 0;
-}
-
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -566,12 +530,12 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 		goto bail;
 
 	ret = rte_kvargs_process(kvlist, IDPF_TX_SINGLE_Q, &parse_bool,
-				 &adapter->txq_model);
+				 &adapter->base.txq_model);
 	if (ret != 0)
 		goto bail;
 
 	ret = rte_kvargs_process(kvlist, IDPF_RX_SINGLE_Q, &parse_bool,
-				 &adapter->rxq_model);
+				 &adapter->base.rxq_model);
 	if (ret != 0)
 		goto bail;
 
@@ -672,7 +636,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	struct idpf_vport_param *param = init_params;
 	struct idpf_adapter_ext *adapter = param->adapter;
 	/* for sending create vport virtchnl msg prepare */
-	struct virtchnl2_create_vport vport_req_info;
+	struct virtchnl2_create_vport create_vport_info;
 	int ret = 0;
 
 	dev->dev_ops = &idpf_eth_dev_ops;
@@ -680,14 +644,14 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
 
-	memset(&vport_req_info, 0, sizeof(vport_req_info));
-	ret = idpf_init_vport_req_info(dev, &vport_req_info);
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_create_vport_info_init(vport, &create_vport_info);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
 		goto err;
 	}
 
-	ret = idpf_vport_init(vport, &vport_req_info, dev->data);
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vports.");
 		goto err;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index d30807ca41..c2a7abb05c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -22,14 +22,9 @@
 
 #define IDPF_MAX_VPORT_NUM	8
 
-#define IDPF_DEFAULT_RXQ_NUM	16
-#define IDPF_DEFAULT_TXQ_NUM	16
-
 #define IDPF_INVALID_VPORT_IDX	0xffff
 #define IDPF_TXQ_PER_GRP	1
-#define IDPF_TX_COMPLQ_PER_GRP	1
 #define IDPF_RXQ_PER_GRP	1
-#define IDPF_RX_BUFQ_PER_GRP	2
 
 #define IDPF_DFLT_Q_VEC_NUM	1
 
@@ -78,9 +73,6 @@ struct idpf_adapter_ext {
 
 	char name[IDPF_ADAPTER_NAME_LEN];
 
-	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
-	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
-
 	struct idpf_vport **vports;
 	uint16_t max_vport_nb;
 
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v4 10/15] common/idpf: add vector flags in vport
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common module beilei.xing
                     ` (8 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 09/15] common/idpf: add vport info initialization beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-17  8:06   ` [PATCH v4 11/15] common/idpf: add rxq and txq struct beilei.xing
                     ` (5 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move vector flags from idpf_adapter_ext structure to
idpf_vport structure.
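
With the flags kept per vport, data path selection becomes a per-port
decision; a condensed sketch of the resulting Rx path choice (function
names taken from this driver, surrounding variables assumed from the
hunks below):

	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
	} else if (vport->rx_vec_allowed && vport->rx_use_avx512) {
		/* Vector path, only when the build and CPU support AVX512. */
		dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
	} else {
		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
	}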

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h |  5 +++++
 drivers/net/idpf/idpf_ethdev.h           |  5 -----
 drivers/net/idpf/idpf_rxtx.c             | 22 ++++++++++------------
 3 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 0c73d40e53..61c47ba5f4 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -103,6 +103,11 @@ struct idpf_vport {
 	uint16_t devarg_id;
 
 	bool stopped;
+
+	bool rx_vec_allowed;
+	bool tx_vec_allowed;
+	bool rx_use_avx512;
+	bool tx_use_avx512;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index c2a7abb05c..bef6199622 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -81,11 +81,6 @@ struct idpf_adapter_ext {
 
 	uint16_t used_vecs_num;
 
-	bool rx_vec_allowed;
-	bool tx_vec_allowed;
-	bool rx_use_avx512;
-	bool tx_use_avx512;
-
 	/* For PTP */
 	uint64_t time_hw;
 };
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 0c9c7fee29..f0eff493f8 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -2221,25 +2221,24 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 	struct idpf_rx_queue *rxq;
 	int i;
 
 	if (idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
 	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
-		ad->rx_vec_allowed = true;
+		vport->rx_vec_allowed = true;
 
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
-				ad->rx_use_avx512 = true;
+				vport->rx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
 #endif /* CC_AVX512_SUPPORT */
 	} else {
-		ad->rx_vec_allowed = false;
+		vport->rx_vec_allowed = false;
 	}
 #endif /* RTE_ARCH_X86 */
 
@@ -2247,13 +2246,13 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	} else {
-		if (ad->rx_vec_allowed) {
+		if (vport->rx_vec_allowed) {
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
 				(void)idpf_singleq_rx_vec_setup(rxq);
 			}
 #ifdef CC_AVX512_SUPPORT
-			if (ad->rx_use_avx512) {
+			if (vport->rx_use_avx512) {
 				dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
 				return;
 			}
@@ -2275,7 +2274,6 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 #ifdef CC_AVX512_SUPPORT
 	struct idpf_tx_queue *txq;
 	int i;
@@ -2283,18 +2281,18 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 
 	if (idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
 	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
-		ad->tx_vec_allowed = true;
+		vport->tx_vec_allowed = true;
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
-				ad->tx_use_avx512 = true;
+				vport->tx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
 #endif /* CC_AVX512_SUPPORT */
 	} else {
-		ad->tx_vec_allowed = false;
+		vport->tx_vec_allowed = false;
 	}
 #endif /* RTE_ARCH_X86 */
 
@@ -2303,9 +2301,9 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
 #ifdef RTE_ARCH_X86
-		if (ad->tx_vec_allowed) {
+		if (vport->tx_vec_allowed) {
 #ifdef CC_AVX512_SUPPORT
-			if (ad->tx_use_avx512) {
+			if (vport->tx_use_avx512) {
 				for (i = 0; i < dev->data->nb_tx_queues; i++) {
 					txq = dev->data->tx_queues[i];
 					if (txq == NULL)
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v4 11/15] common/idpf: add rxq and txq struct
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common module beilei.xing
                     ` (9 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 10/15] common/idpf: add vector flags in vport beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-17  8:06   ` [PATCH v4 12/15] common/idpf: add helper functions for queue setup and release beilei.xing
                     ` (4 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Add the idpf_rx_queue and idpf_tx_queue structures to the common
module.
Move the idpf_vc_config_rxq and idpf_vc_config_txq functions
to the common module.
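
For illustration, a condensed sketch of how the PMD consumes the
relocated virtchnl helpers when configuring a queue pair (rxq, txq
and err are assumed from the caller's context; error paths trimmed):

	/* Sends VIRTCHNL2_OP_CONFIG_RX_QUEUES; in split queue model
	 * this also covers the buffer queues attached to the rxq.
	 */
	err = idpf_vc_config_rxq(vport, rxq);
	if (err != 0)
		return err;
	/* Sends VIRTCHNL2_OP_CONFIG_TX_QUEUES; in split queue model
	 * this also covers the completion queue attached to the txq.
	 */
	err = idpf_vc_config_txq(vport, txq);
	if (err != 0)
		return err;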

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   2 +
 drivers/common/idpf/idpf_common_rxtx.h     | 112 +++++++++++++
 drivers/common/idpf/idpf_common_virtchnl.c | 160 ++++++++++++++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  10 +-
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.h             |   2 -
 drivers/net/idpf/idpf_rxtx.h               |  97 +----------
 drivers/net/idpf/idpf_vchnl.c              | 184 ---------------------
 drivers/net/idpf/meson.build               |   1 -
 9 files changed, 284 insertions(+), 286 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.h
 delete mode 100644 drivers/net/idpf/idpf_vchnl.c

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 61c47ba5f4..4895f5f360 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -18,8 +18,10 @@
 
 #define IDPF_DEFAULT_RXQ_NUM	16
 #define IDPF_RX_BUFQ_PER_GRP	2
+#define IDPF_RXQ_PER_GRP	1
 #define IDPF_DEFAULT_TXQ_NUM	16
 #define IDPF_TX_COMPLQ_PER_GRP	1
+#define IDPF_TXQ_PER_GRP	1
 
 #define IDPF_MAX_PKT_TYPE	1024
 
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
new file mode 100644
index 0000000000..a9ed31c08a
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_RXTX_H_
+#define _IDPF_COMMON_RXTX_H_
+
+#include <rte_mbuf_ptype.h>
+#include <rte_mbuf_core.h>
+
+#include "idpf_common_device.h"
+
+struct idpf_rx_stats {
+	uint64_t mbuf_alloc_failed;
+};
+
+struct idpf_rx_queue {
+	struct idpf_adapter *adapter;   /* the adapter this queue belongs to */
+	struct rte_mempool *mp;         /* mbuf pool to populate Rx ring */
+	const struct rte_memzone *mz;   /* memzone for Rx ring */
+	volatile void *rx_ring;
+	struct rte_mbuf **sw_ring;      /* address of SW ring */
+	uint64_t rx_ring_phys_addr;     /* Rx ring DMA address */
+
+	uint16_t nb_rx_desc;            /* ring length */
+	uint16_t rx_tail;               /* current value of tail */
+	volatile uint8_t *qrx_tail;     /* register address of tail */
+	uint16_t rx_free_thresh;        /* max free RX desc to hold */
+	uint16_t nb_rx_hold;            /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
+	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
+
+	/* used for VPMD */
+	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
+	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
+	uint64_t mbuf_initializer; /* value to init mbufs */
+
+	uint16_t rx_nb_avail;
+	uint16_t rx_next_avail;
+
+	uint16_t port_id;       /* device port ID */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+	uint8_t rxdid;
+
+	bool q_set;             /* if rx queue has been configured */
+	bool q_started;         /* if rx queue has been started */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+	const struct idpf_rxq_ops *ops;
+
+	struct idpf_rx_stats rx_stats;
+
+	/* only valid for split queue mode */
+	uint8_t expected_gen_id;
+	struct idpf_rx_queue *bufq1;
+	struct idpf_rx_queue *bufq2;
+
+	uint64_t offloads;
+	uint32_t hw_register_set;
+};
+
+struct idpf_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct idpf_tx_queue {
+	const struct rte_memzone *mz;		/* memzone for Tx ring */
+	volatile struct idpf_flex_tx_desc *tx_ring;	/* Tx ring virtual address */
+	volatile union {
+		struct idpf_flex_tx_sched_desc *desc_ring;
+		struct idpf_splitq_tx_compl_desc *compl_ring;
+	};
+	uint64_t tx_ring_phys_addr;		/* Tx ring DMA address */
+	struct idpf_tx_entry *sw_ring;		/* address array of SW ring */
+
+	uint16_t nb_tx_desc;		/* ring length */
+	uint16_t tx_tail;		/* current value of tail */
+	volatile uint8_t *qtx_tail;	/* register address of tail */
+	/* number of used desc since RS bit set */
+	uint16_t nb_used;
+	uint16_t nb_free;
+	uint16_t last_desc_cleaned;	/* last desc that has been cleaned */
+	uint16_t free_thresh;
+	uint16_t rs_thresh;
+
+	uint16_t port_id;
+	uint16_t queue_id;
+	uint64_t offloads;
+	uint16_t next_dd;	/* next desc to check the DD bit, for VPMD */
+	uint16_t next_rs;	/* next desc on which to set the RS bit, for VPMD */
+
+	bool q_set;		/* if tx queue has been configured */
+	bool q_started;		/* if tx queue has been started */
+	bool tx_deferred_start; /* don't start this queue in dev start */
+	const struct idpf_txq_ops *ops;
+
+	/* only valid for split queue mode */
+	uint16_t sw_nb_desc;
+	uint16_t sw_tail;
+	void **txqs;
+	uint32_t tx_start_qid;
+	uint8_t expected_gen_id;
+	struct idpf_tx_queue *complq;
+};
+
+#endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index f670d2cc17..188d0131a4 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -805,3 +805,163 @@ idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
 	rte_free(ptype_info);
 	return err;
 }
+
+#define IDPF_RX_BUF_STRIDE		64
+int
+idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
+	struct virtchnl2_rxq_info *rxq_info;
+	struct idpf_cmd_info args;
+	uint16_t num_qs;
+	int size, err, i;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+		num_qs = IDPF_RXQ_PER_GRP;
+	else
+		num_qs = IDPF_RXQ_PER_GRP + IDPF_RX_BUFQ_PER_GRP;
+
+	size = sizeof(*vc_rxqs) + (num_qs - 1) *
+		sizeof(struct virtchnl2_rxq_info);
+	vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
+	if (vc_rxqs == NULL) {
+		DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
+		err = -ENOMEM;
+		return err;
+	}
+	vc_rxqs->vport_id = vport->vport_id;
+	vc_rxqs->num_qinfo = num_qs;
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq_info = &vc_rxqs->qinfo[0];
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
+		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+		rxq_info->queue_id = rxq->queue_id;
+		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
+		rxq_info->max_pkt_size = vport->max_pkt_len;
+
+		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
+		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+		rxq_info->ring_len = rxq->nb_rx_desc;
+	} else {
+		/* Rx queue */
+		rxq_info = &vc_rxqs->qinfo[0];
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
+		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+		rxq_info->queue_id = rxq->queue_id;
+		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
+		rxq_info->max_pkt_size = vport->max_pkt_len;
+
+		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+		rxq_info->ring_len = rxq->nb_rx_desc;
+		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
+		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
+		rxq_info->rx_buffer_low_watermark = 64;
+
+		/* Buffer queue */
+		for (i = 1; i <= IDPF_RX_BUFQ_PER_GRP; i++) {
+			struct idpf_rx_queue *bufq = i == 1 ? rxq->bufq1 : rxq->bufq2;
+			rxq_info = &vc_rxqs->qinfo[i];
+			rxq_info->dma_ring_addr = bufq->rx_ring_phys_addr;
+			rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+			rxq_info->queue_id = bufq->queue_id;
+			rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+			rxq_info->data_buffer_size = bufq->rx_buf_len;
+			rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+			rxq_info->ring_len = bufq->nb_rx_desc;
+
+			rxq_info->buffer_notif_stride = IDPF_RX_BUF_STRIDE;
+			rxq_info->rx_buffer_low_watermark = 64;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
+	args.in_args = (uint8_t *)vc_rxqs;
+	args.in_args_size = size;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	rte_free(vc_rxqs);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
+
+	return err;
+}
+
+int
+idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
+	struct virtchnl2_txq_info *txq_info;
+	struct idpf_cmd_info args;
+	uint16_t num_qs;
+	int size, err;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+		num_qs = IDPF_TXQ_PER_GRP;
+	else
+		num_qs = IDPF_TXQ_PER_GRP + IDPF_TX_COMPLQ_PER_GRP;
+
+	size = sizeof(*vc_txqs) + (num_qs - 1) *
+		sizeof(struct virtchnl2_txq_info);
+	vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
+	if (vc_txqs == NULL) {
+		DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
+		err = -ENOMEM;
+		return err;
+	}
+	vc_txqs->vport_id = vport->vport_id;
+	vc_txqs->num_qinfo = num_qs;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		txq_info = &vc_txqs->qinfo[0];
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
+		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+		txq_info->queue_id = txq->queue_id;
+		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
+		txq_info->ring_len = txq->nb_tx_desc;
+	} else {
+		/* txq info */
+		txq_info = &vc_txqs->qinfo[0];
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
+		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+		txq_info->queue_id = txq->queue_id;
+		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+		txq_info->ring_len = txq->nb_tx_desc;
+		txq_info->tx_compl_queue_id = txq->complq->queue_id;
+		txq_info->relative_queue_id = txq_info->queue_id;
+
+		/* tx completion queue info */
+		txq_info = &vc_txqs->qinfo[1];
+		txq_info->dma_ring_addr = txq->complq->tx_ring_phys_addr;
+		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+		txq_info->queue_id = txq->complq->queue_id;
+		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+		txq_info->ring_len = txq->complq->nb_tx_desc;
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
+	args.in_args = (uint8_t *)vc_txqs;
+	args.in_args_size = size;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	rte_free(vc_txqs);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
+
+	return err;
+}
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 11dbc089cb..b8045ba63b 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -6,6 +6,7 @@
 #define _IDPF_COMMON_VIRTCHNL_H_
 
 #include <idpf_common_device.h>
+#include <idpf_common_rxtx.h>
 
 __rte_internal
 int idpf_vc_check_api_version(struct idpf_adapter *adapter);
@@ -31,6 +32,9 @@ __rte_internal
 int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
 		      uint16_t buf_len, uint8_t *buf);
 __rte_internal
+int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
+			struct idpf_cmd_info *args);
+__rte_internal
 int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
 		      bool rx, bool on);
 __rte_internal
@@ -42,7 +46,7 @@ int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
 __rte_internal
 int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
 __rte_internal
-int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
-			struct idpf_cmd_info *args);
-
+int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
+__rte_internal
+int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index b153647ee1..19de5c8122 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -19,6 +19,8 @@ INTERNAL {
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
+	idpf_vc_config_rxq;
+	idpf_vc_config_txq;
 	idpf_vc_create_vport;
 	idpf_vc_dealloc_vectors;
 	idpf_vc_destroy_vport;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index bef6199622..9b40aa4e56 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -23,8 +23,6 @@
 #define IDPF_MAX_VPORT_NUM	8
 
 #define IDPF_INVALID_VPORT_IDX	0xffff
-#define IDPF_TXQ_PER_GRP	1
-#define IDPF_RXQ_PER_GRP	1
 
 #define IDPF_DFLT_Q_VEC_NUM	1
 
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index cac6040943..b8325f9b96 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -5,6 +5,7 @@
 #ifndef _IDPF_RXTX_H_
 #define _IDPF_RXTX_H_
 
+#include <idpf_common_rxtx.h>
 #include "idpf_ethdev.h"
 
 /* MTS */
@@ -84,103 +85,10 @@
 
 extern uint64_t idpf_timestamp_dynflag;
 
-struct idpf_rx_queue {
-	struct idpf_adapter *adapter;   /* the adapter this queue belongs to */
-	struct rte_mempool *mp;         /* mbuf pool to populate Rx ring */
-	const struct rte_memzone *mz;   /* memzone for Rx ring */
-	volatile void *rx_ring;
-	struct rte_mbuf **sw_ring;      /* address of SW ring */
-	uint64_t rx_ring_phys_addr;     /* Rx ring DMA address */
-
-	uint16_t nb_rx_desc;            /* ring length */
-	uint16_t rx_tail;               /* current value of tail */
-	volatile uint8_t *qrx_tail;     /* register address of tail */
-	uint16_t rx_free_thresh;        /* max free RX desc to hold */
-	uint16_t nb_rx_hold;            /* number of held free RX desc */
-	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
-	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
-	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
-
-	/* used for VPMD */
-	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
-	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
-	uint64_t mbuf_initializer; /* value to init mbufs */
-
-	uint16_t rx_nb_avail;
-	uint16_t rx_next_avail;
-
-	uint16_t port_id;       /* device port ID */
-	uint16_t queue_id;      /* Rx queue index */
-	uint16_t rx_buf_len;    /* The packet buffer size */
-	uint16_t rx_hdr_len;    /* The header buffer size */
-	uint16_t max_pkt_len;   /* Maximum packet length */
-	uint8_t rxdid;
-
-	bool q_set;             /* if rx queue has been configured */
-	bool q_started;         /* if rx queue has been started */
-	bool rx_deferred_start; /* don't start this queue in dev start */
-	const struct idpf_rxq_ops *ops;
-
-	/* only valid for split queue mode */
-	uint8_t expected_gen_id;
-	struct idpf_rx_queue *bufq1;
-	struct idpf_rx_queue *bufq2;
-
-	uint64_t offloads;
-	uint32_t hw_register_set;
-};
-
-struct idpf_tx_entry {
-	struct rte_mbuf *mbuf;
-	uint16_t next_id;
-	uint16_t last_id;
-};
-
 struct idpf_tx_vec_entry {
 	struct rte_mbuf *mbuf;
 };
 
-/* Structure associated with each TX queue. */
-struct idpf_tx_queue {
-	const struct rte_memzone *mz;		/* memzone for Tx ring */
-	volatile struct idpf_flex_tx_desc *tx_ring;	/* Tx ring virtual address */
-	volatile union {
-		struct idpf_flex_tx_sched_desc *desc_ring;
-		struct idpf_splitq_tx_compl_desc *compl_ring;
-	};
-	uint64_t tx_ring_phys_addr;		/* Tx ring DMA address */
-	struct idpf_tx_entry *sw_ring;		/* address array of SW ring */
-
-	uint16_t nb_tx_desc;		/* ring length */
-	uint16_t tx_tail;		/* current value of tail */
-	volatile uint8_t *qtx_tail;	/* register address of tail */
-	/* number of used desc since RS bit set */
-	uint16_t nb_used;
-	uint16_t nb_free;
-	uint16_t last_desc_cleaned;	/* last desc have been cleaned*/
-	uint16_t free_thresh;
-	uint16_t rs_thresh;
-
-	uint16_t port_id;
-	uint16_t queue_id;
-	uint64_t offloads;
-	uint16_t next_dd;	/* next to set RS, for VPMD */
-	uint16_t next_rs;	/* next to check DD,  for VPMD */
-
-	bool q_set;		/* if tx queue has been configured */
-	bool q_started;		/* if tx queue has been started */
-	bool tx_deferred_start; /* don't start this queue in dev start */
-	const struct idpf_txq_ops *ops;
-
-	/* only valid for split queue mode */
-	uint16_t sw_nb_desc;
-	uint16_t sw_tail;
-	void **txqs;
-	uint32_t tx_start_qid;
-	uint8_t expected_gen_id;
-	struct idpf_tx_queue *complq;
-};
-
 /* Offload features */
 union idpf_tx_offload {
 	uint64_t data;
@@ -239,9 +147,6 @@ void idpf_stop_queues(struct rte_eth_dev *dev);
 void idpf_set_rx_function(struct rte_eth_dev *dev);
 void idpf_set_tx_function(struct rte_eth_dev *dev);
 
-int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
-int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
-
 #define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
 /* Helper function to convert a 32b nanoseconds timestamp to 64b. */
 static inline uint64_t
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
deleted file mode 100644
index 45d05ed108..0000000000
--- a/drivers/net/idpf/idpf_vchnl.c
+++ /dev/null
@@ -1,184 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2022 Intel Corporation
- */
-
-#include <stdio.h>
-#include <errno.h>
-#include <stdint.h>
-#include <string.h>
-#include <unistd.h>
-#include <stdarg.h>
-#include <inttypes.h>
-#include <rte_byteorder.h>
-#include <rte_common.h>
-
-#include <rte_debug.h>
-#include <rte_atomic.h>
-#include <rte_eal.h>
-#include <rte_ether.h>
-#include <ethdev_driver.h>
-#include <ethdev_pci.h>
-#include <rte_dev.h>
-
-#include "idpf_ethdev.h"
-#include "idpf_rxtx.h"
-
-#define IDPF_RX_BUF_STRIDE		64
-int
-idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
-	struct virtchnl2_rxq_info *rxq_info;
-	struct idpf_cmd_info args;
-	uint16_t num_qs;
-	int size, err, i;
-
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		num_qs = IDPF_RXQ_PER_GRP;
-	else
-		num_qs = IDPF_RXQ_PER_GRP + IDPF_RX_BUFQ_PER_GRP;
-
-	size = sizeof(*vc_rxqs) + (num_qs - 1) *
-		sizeof(struct virtchnl2_rxq_info);
-	vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
-	if (vc_rxqs == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
-		err = -ENOMEM;
-		return err;
-	}
-	vc_rxqs->vport_id = vport->vport_id;
-	vc_rxqs->num_qinfo = num_qs;
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
-		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq->queue_id;
-		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-		rxq_info->data_buffer_size = rxq->rx_buf_len;
-		rxq_info->max_pkt_size = vport->max_pkt_len;
-
-		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
-		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-		rxq_info->ring_len = rxq->nb_rx_desc;
-	}  else {
-		/* Rx queue */
-		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
-		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq->queue_id;
-		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		rxq_info->data_buffer_size = rxq->rx_buf_len;
-		rxq_info->max_pkt_size = vport->max_pkt_len;
-
-		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-		rxq_info->ring_len = rxq->nb_rx_desc;
-		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
-		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
-		rxq_info->rx_buffer_low_watermark = 64;
-
-		/* Buffer queue */
-		for (i = 1; i <= IDPF_RX_BUFQ_PER_GRP; i++) {
-			struct idpf_rx_queue *bufq = i == 1 ? rxq->bufq1 : rxq->bufq2;
-			rxq_info = &vc_rxqs->qinfo[i];
-			rxq_info->dma_ring_addr = bufq->rx_ring_phys_addr;
-			rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-			rxq_info->queue_id = bufq->queue_id;
-			rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-			rxq_info->data_buffer_size = bufq->rx_buf_len;
-			rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-			rxq_info->ring_len = bufq->nb_rx_desc;
-
-			rxq_info->buffer_notif_stride = IDPF_RX_BUF_STRIDE;
-			rxq_info->rx_buffer_low_watermark = 64;
-		}
-	}
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
-	args.in_args = (uint8_t *)vc_rxqs;
-	args.in_args_size = size;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	rte_free(vc_rxqs);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
-
-	return err;
-}
-
-int
-idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
-	struct virtchnl2_txq_info *txq_info;
-	struct idpf_cmd_info args;
-	uint16_t num_qs;
-	int size, err;
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		num_qs = IDPF_TXQ_PER_GRP;
-	else
-		num_qs = IDPF_TXQ_PER_GRP + IDPF_TX_COMPLQ_PER_GRP;
-
-	size = sizeof(*vc_txqs) + (num_qs - 1) *
-		sizeof(struct virtchnl2_txq_info);
-	vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
-	if (vc_txqs == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
-		err = -ENOMEM;
-		return err;
-	}
-	vc_txqs->vport_id = vport->vport_id;
-	vc_txqs->num_qinfo = num_qs;
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
-		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq->queue_id;
-		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
-		txq_info->ring_len = txq->nb_tx_desc;
-	} else {
-		/* txq info */
-		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
-		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq->queue_id;
-		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq->nb_tx_desc;
-		txq_info->tx_compl_queue_id = txq->complq->queue_id;
-		txq_info->relative_queue_id = txq_info->queue_id;
-
-		/* tx completion queue info */
-		txq_info = &vc_txqs->qinfo[1];
-		txq_info->dma_ring_addr = txq->complq->tx_ring_phys_addr;
-		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		txq_info->queue_id = txq->complq->queue_id;
-		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq->complq->nb_tx_desc;
-	}
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
-	args.in_args = (uint8_t *)vc_txqs;
-	args.in_args_size = size;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	rte_free(vc_txqs);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
-
-	return err;
-}
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
index 650dade0b9..378925166f 100644
--- a/drivers/net/idpf/meson.build
+++ b/drivers/net/idpf/meson.build
@@ -18,7 +18,6 @@ deps += ['common_idpf']
 sources = files(
         'idpf_ethdev.c',
         'idpf_rxtx.c',
-        'idpf_vchnl.c',
 )
 
 if arch_subdir == 'x86'
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v4 12/15] common/idpf: add helper functions for queue setup and release
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common module beilei.xing
                     ` (10 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 11/15] common/idpf: add rxq and txq struct beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-17  8:06   ` [PATCH v4 13/15] common/idpf: add Rx and Tx data path beilei.xing
                     ` (3 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Refine Rx queue setup and Tx queue setup.
Move some helper functions for queue setup and queue release
to the common module.
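
For illustration, the relocated threshold checks are intended to run
in the PMD's queue setup before any ring allocation; a minimal sketch
with typical values (the numbers here are assumed for the example,
not mandated by this patch):

	uint16_t nb_desc = 1024;
	uint16_t tx_rs_thresh = 32, tx_free_thresh = 32;

	/* tx_rs_thresh must divide nb_desc and must not exceed
	 * tx_free_thresh; both must leave headroom at the ring end.
	 */
	if (idpf_check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
		return -EINVAL;
	/* rx_free_thresh only needs to be below the ring size. */
	if (idpf_check_rx_thresh(nb_desc, 32) != 0)
		return -EINVAL;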

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c  |  414 +++++++++
 drivers/common/idpf/idpf_common_rxtx.h  |   57 ++
 drivers/common/idpf/meson.build         |    1 +
 drivers/common/idpf/version.map         |   15 +
 drivers/net/idpf/idpf_rxtx.c            | 1051 ++++++-----------------
 drivers/net/idpf/idpf_rxtx.h            |    9 -
 drivers/net/idpf/idpf_rxtx_vec_avx512.c |    2 +-
 7 files changed, 773 insertions(+), 776 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.c

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
new file mode 100644
index 0000000000..eeeeedca88
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -0,0 +1,414 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_mbuf_dyn.h>
+#include "idpf_common_rxtx.h"
+
+int
+idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+	/* The following constraints must be satisfied:
+	 * thresh < rxq->nb_rx_desc
+	 */
+	if (thresh >= nb_desc) {
+		DRV_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+			thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int
+idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+		     uint16_t tx_free_thresh)
+{
+	/* TX descriptors will have their RS bit set after tx_rs_thresh
+	 * descriptors have been used. The TX descriptor ring will be cleaned
+	 * after tx_free_thresh descriptors are used or if the number of
+	 * descriptors required to transmit a packet is greater than the
+	 * number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+			"number of TX descriptors (%u) minus 2",
+			tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		DRV_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+			"number of TX descriptors (%u) minus 3.",
+			tx_free_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+			"equal to tx_free_thresh (%u).",
+			tx_rs_thresh, tx_free_thresh);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		DRV_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+			"number of TX descriptors (%u).",
+			tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+void
+idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	uint16_t i;
+
+	if (rxq->sw_ring == NULL)
+		return;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i] != NULL) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+void
+idpf_release_txq_mbufs(struct idpf_tx_queue *txq)
+{
+	uint16_t nb_desc, i;
+
+	if (txq == NULL || txq->sw_ring == NULL) {
+		DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+		return;
+	}
+
+	if (txq->sw_nb_desc != 0) {
+		/* For split queue model, descriptor ring */
+		nb_desc = txq->sw_nb_desc;
+	} else {
+		/* For single queue model */
+		nb_desc = txq->nb_tx_desc;
+	}
+	for (i = 0; i < nb_desc; i++) {
+		if (txq->sw_ring[i].mbuf != NULL) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+void
+idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (rxq == NULL)
+		return;
+
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
+	     i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	rxq->rx_tail = 0;
+	rxq->expected_gen_id = 1;
+}
+
+void
+idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (rxq == NULL)
+		return;
+
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
+	     i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	/* The next descriptor id which can be received. */
+	rxq->rx_next_avail = 0;
+
+	/* The next descriptor id which can be refilled. */
+	rxq->rx_tail = 0;
+	/* The number of descriptors which can be refilled. */
+	rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
+
+	rxq->bufq1 = NULL;
+	rxq->bufq2 = NULL;
+}
+
+void
+idpf_reset_split_rx_queue(struct idpf_rx_queue *rxq)
+{
+	idpf_reset_split_rx_descq(rxq);
+	idpf_reset_split_rx_bufq(rxq->bufq1);
+	idpf_reset_split_rx_bufq(rxq->bufq2);
+}
+
+void
+idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (rxq == NULL)
+		return;
+
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
+	     i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+
+	rte_pktmbuf_free(rxq->pkt_first_seg);
+
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+	rxq->rxrearm_start = 0;
+	rxq->rxrearm_nb = 0;
+}
+
+void
+idpf_reset_split_tx_descq(struct idpf_tx_queue *txq)
+{
+	struct idpf_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (txq == NULL) {
+		DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->desc_ring)[i] = 0;
+
+	txe = txq->sw_ring;
+	prev = (uint16_t)(txq->sw_nb_desc - 1);
+	for (i = 0; i < txq->sw_nb_desc; i++) {
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	/* Use this as next to clean for split desc queue */
+	txq->last_desc_cleaned = 0;
+	txq->sw_tail = 0;
+	txq->nb_free = txq->nb_tx_desc - 1;
+}
+
+void
+idpf_reset_split_tx_complq(struct idpf_tx_queue *cq)
+{
+	uint32_t i, size;
+
+	if (cq == NULL) {
+		DRV_LOG(DEBUG, "Pointer to complq is NULL");
+		return;
+	}
+
+	size = sizeof(struct idpf_splitq_tx_compl_desc) * cq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)cq->compl_ring)[i] = 0;
+
+	cq->tx_tail = 0;
+	cq->expected_gen_id = 1;
+}
+
+void
+idpf_reset_single_tx_queue(struct idpf_tx_queue *txq)
+{
+	struct idpf_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (txq == NULL) {
+		DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct idpf_flex_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].qw1.cmd_dtype =
+			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_free = txq->nb_tx_desc - 1;
+
+	txq->next_dd = txq->rs_thresh - 1;
+	txq->next_rs = txq->rs_thresh - 1;
+}
+
+void
+idpf_rx_queue_release(void *rxq)
+{
+	struct idpf_rx_queue *q = rxq;
+
+	if (q == NULL)
+		return;
+
+	/* Split queue */
+	if (q->bufq1 != NULL && q->bufq2 != NULL) {
+		q->bufq1->ops->release_mbufs(q->bufq1);
+		rte_free(q->bufq1->sw_ring);
+		rte_memzone_free(q->bufq1->mz);
+		rte_free(q->bufq1);
+		q->bufq2->ops->release_mbufs(q->bufq2);
+		rte_free(q->bufq2->sw_ring);
+		rte_memzone_free(q->bufq2->mz);
+		rte_free(q->bufq2);
+		rte_memzone_free(q->mz);
+		rte_free(q);
+		return;
+	}
+
+	/* Single queue */
+	q->ops->release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+idpf_tx_queue_release(void *txq)
+{
+	struct idpf_tx_queue *q = txq;
+
+	if (q == NULL)
+		return;
+
+	if (q->complq) {
+		rte_memzone_free(q->complq->mz);
+		rte_free(q->complq);
+	}
+
+	q->ops->release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+int
+idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(mbuf == NULL)) {
+			DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &((volatile struct virtchnl2_singleq_rx_buf_desc *)(rxq->rx_ring))[i];
+		rxd->pkt_addr = dma_addr;
+		rxd->hdr_addr = 0;
+		rxd->rsvd1 = 0;
+		rxd->rsvd2 = 0;
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	return 0;
+}
+
+int
+idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(mbuf == NULL)) {
+			DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &((volatile struct virtchnl2_splitq_rx_buf_desc *)(rxq->rx_ring))[i];
+		rxd->qword0.buf_id = i;
+		rxd->qword0.rsvd0 = 0;
+		rxd->qword0.rsvd1 = 0;
+		rxd->pkt_addr = dma_addr;
+		rxd->hdr_addr = 0;
+		rxd->rsvd2 = 0;
+
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	rxq->nb_rx_hold = 0;
+	rxq->rx_tail = rxq->nb_rx_desc - 1;
+
+	return 0;
+}
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index a9ed31c08a..c5bb7d48af 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -5,11 +5,28 @@
 #ifndef _IDPF_COMMON_RXTX_H_
 #define _IDPF_COMMON_RXTX_H_
 
+#include <rte_mbuf.h>
 #include <rte_mbuf_ptype.h>
 #include <rte_mbuf_core.h>
 
 #include "idpf_common_device.h"
 
+#define IDPF_RX_MAX_BURST		32
+
+#define IDPF_RX_OFFLOAD_IPV4_CKSUM		RTE_BIT64(1)
+#define IDPF_RX_OFFLOAD_UDP_CKSUM		RTE_BIT64(2)
+#define IDPF_RX_OFFLOAD_TCP_CKSUM		RTE_BIT64(3)
+#define IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM	RTE_BIT64(6)
+#define IDPF_RX_OFFLOAD_TIMESTAMP		RTE_BIT64(14)
+
+#define IDPF_TX_OFFLOAD_IPV4_CKSUM       RTE_BIT64(1)
+#define IDPF_TX_OFFLOAD_UDP_CKSUM        RTE_BIT64(2)
+#define IDPF_TX_OFFLOAD_TCP_CKSUM        RTE_BIT64(3)
+#define IDPF_TX_OFFLOAD_SCTP_CKSUM       RTE_BIT64(4)
+#define IDPF_TX_OFFLOAD_TCP_TSO          RTE_BIT64(5)
+#define IDPF_TX_OFFLOAD_MULTI_SEGS       RTE_BIT64(15)
+#define IDPF_TX_OFFLOAD_MBUF_FAST_FREE   RTE_BIT64(16)
+
 struct idpf_rx_stats {
 	uint64_t mbuf_alloc_failed;
 };
@@ -109,4 +126,44 @@ struct idpf_tx_queue {
 	struct idpf_tx_queue *complq;
 };
 
+struct idpf_rxq_ops {
+	void (*release_mbufs)(struct idpf_rx_queue *rxq);
+};
+
+struct idpf_txq_ops {
+	void (*release_mbufs)(struct idpf_tx_queue *txq);
+};
+
+__rte_internal
+int idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh);
+__rte_internal
+int idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+			 uint16_t tx_free_thresh);
+__rte_internal
+void idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_release_txq_mbufs(struct idpf_tx_queue *txq);
+__rte_internal
+void idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_split_rx_queue(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_split_tx_descq(struct idpf_tx_queue *txq);
+__rte_internal
+void idpf_reset_split_tx_complq(struct idpf_tx_queue *cq);
+__rte_internal
+void idpf_reset_single_tx_queue(struct idpf_tx_queue *txq);
+__rte_internal
+void idpf_rx_queue_release(void *rxq);
+__rte_internal
+void idpf_tx_queue_release(void *txq);
+__rte_internal
+int idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq);
+__rte_internal
+int idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq);
+
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index c6cc7a196b..5ee071fdb2 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -5,6 +5,7 @@ deps += ['mbuf']
 
 sources = files(
     'idpf_common_device.c',
+    'idpf_common_rxtx.c',
     'idpf_common_virtchnl.c',
 )
 
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 19de5c8122..8d98635e46 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,6 +3,10 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+	idpf_alloc_single_rxq_mbufs;
+	idpf_alloc_split_rxq_mbufs;
+	idpf_check_rx_thresh;
+	idpf_check_tx_thresh;
 	idpf_config_irq_map;
 	idpf_config_irq_unmap;
 	idpf_config_rss;
@@ -15,7 +19,18 @@ INTERNAL {
 	idpf_ctlq_send;
 	idpf_execute_vc_cmd;
 	idpf_read_one_msg;
+	idpf_release_rxq_mbufs;
+	idpf_release_txq_mbufs;
+	idpf_reset_single_rx_queue;
+	idpf_reset_single_tx_queue;
+	idpf_reset_split_rx_bufq;
+	idpf_reset_split_rx_descq;
+	idpf_reset_split_rx_queue;
+	idpf_reset_split_tx_complq;
+	idpf_reset_split_tx_descq;
+	idpf_rx_queue_release;
 	idpf_switch_queue;
+	idpf_tx_queue_release;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index f0eff493f8..852076c235 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -12,358 +12,141 @@
 
 static int idpf_timestamp_dynfield_offset = -1;
 
-static int
-check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
-{
-	/* The following constraints must be satisfied:
-	 *   thresh < rxq->nb_rx_desc
-	 */
-	if (thresh >= nb_desc) {
-		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
-			     thresh, nb_desc);
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int
-check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
-		uint16_t tx_free_thresh)
+static uint64_t
+idpf_rx_offload_convert(uint64_t offload)
 {
-	/* TX descriptors will have their RS bit set after tx_rs_thresh
-	 * descriptors have been used. The TX descriptor ring will be cleaned
-	 * after tx_free_thresh descriptors are used or if the number of
-	 * descriptors required to transmit a packet is greater than the
-	 * number of free TX descriptors.
-	 *
-	 * The following constraints must be satisfied:
-	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
-	 *  - tx_free_thresh must be less than the size of the ring minus 3.
-	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
-	 *  - tx_rs_thresh must be a divisor of the ring size.
-	 *
-	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
-	 * race condition, hence the maximum threshold constraints. When set
-	 * to zero use default values.
-	 */
-	if (tx_rs_thresh >= (nb_desc - 2)) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
-			     "number of TX descriptors (%u) minus 2",
-			     tx_rs_thresh, nb_desc);
-		return -EINVAL;
-	}
-	if (tx_free_thresh >= (nb_desc - 3)) {
-		PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
-			     "number of TX descriptors (%u) minus 3.",
-			     tx_free_thresh, nb_desc);
-		return -EINVAL;
-	}
-	if (tx_rs_thresh > tx_free_thresh) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
-			     "equal to tx_free_thresh (%u).",
-			     tx_rs_thresh, tx_free_thresh);
-		return -EINVAL;
-	}
-	if ((nb_desc % tx_rs_thresh) != 0) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
-			     "number of TX descriptors (%u).",
-			     tx_rs_thresh, nb_desc);
-		return -EINVAL;
-	}
-
-	return 0;
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
 }
 
-static void
-release_rxq_mbufs(struct idpf_rx_queue *rxq)
+static uint64_t
+idpf_tx_offload_convert(uint64_t offload)
 {
-	uint16_t i;
-
-	if (rxq->sw_ring == NULL)
-		return;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		if (rxq->sw_ring[i] != NULL) {
-			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-			rxq->sw_ring[i] = NULL;
-		}
-	}
-}
-
-static void
-release_txq_mbufs(struct idpf_tx_queue *txq)
-{
-	uint16_t nb_desc, i;
-
-	if (txq == NULL || txq->sw_ring == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
-		return;
-	}
-
-	if (txq->sw_nb_desc != 0) {
-		/* For split queue model, descriptor ring */
-		nb_desc = txq->sw_nb_desc;
-	} else {
-		/* For single queue model */
-		nb_desc = txq->nb_tx_desc;
-	}
-	for (i = 0; i < nb_desc; i++) {
-		if (txq->sw_ring[i].mbuf != NULL) {
-			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
-			txq->sw_ring[i].mbuf = NULL;
-		}
-	}
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
 }
 
 static const struct idpf_rxq_ops def_rxq_ops = {
-	.release_mbufs = release_rxq_mbufs,
+	.release_mbufs = idpf_release_rxq_mbufs,
 };
 
 static const struct idpf_txq_ops def_txq_ops = {
-	.release_mbufs = release_txq_mbufs,
+	.release_mbufs = idpf_release_txq_mbufs,
 };
 
-static void
-reset_split_rx_descq(struct idpf_rx_queue *rxq)
-{
-	uint16_t len;
-	uint32_t i;
-
-	if (rxq == NULL)
-		return;
-
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-
-	for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
-	     i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-	rxq->rx_tail = 0;
-	rxq->expected_gen_id = 1;
-}
-
-static void
-reset_split_rx_bufq(struct idpf_rx_queue *rxq)
-{
-	uint16_t len;
-	uint32_t i;
-
-	if (rxq == NULL)
-		return;
-
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-
-	for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
-	     i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
-
-	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
-		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
-
-	/* The next descriptor id which can be received. */
-	rxq->rx_next_avail = 0;
-
-	/* The next descriptor id which can be refilled. */
-	rxq->rx_tail = 0;
-	/* The number of descriptors which can be refilled. */
-	rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
-
-	rxq->bufq1 = NULL;
-	rxq->bufq2 = NULL;
-}
-
-static void
-idpf_rx_queue_release(void *rxq)
-{
-	struct idpf_rx_queue *q = rxq;
-
-	if (q == NULL)
-		return;
-
-	/* Split queue */
-	if (q->bufq1 != NULL && q->bufq2 != NULL) {
-		q->bufq1->ops->release_mbufs(q->bufq1);
-		rte_free(q->bufq1->sw_ring);
-		rte_memzone_free(q->bufq1->mz);
-		rte_free(q->bufq1);
-		q->bufq2->ops->release_mbufs(q->bufq2);
-		rte_free(q->bufq2->sw_ring);
-		rte_memzone_free(q->bufq2->mz);
-		rte_free(q->bufq2);
-		rte_memzone_free(q->mz);
-		rte_free(q);
-		return;
-	}
-
-	/* Single queue */
-	q->ops->release_mbufs(q);
-	rte_free(q->sw_ring);
-	rte_memzone_free(q->mz);
-	rte_free(q);
-}
-
-static void
-idpf_tx_queue_release(void *txq)
-{
-	struct idpf_tx_queue *q = txq;
-
-	if (q == NULL)
-		return;
-
-	if (q->complq) {
-		rte_memzone_free(q->complq->mz);
-		rte_free(q->complq);
-	}
-
-	q->ops->release_mbufs(q);
-	rte_free(q->sw_ring);
-	rte_memzone_free(q->mz);
-	rte_free(q);
-}
-
-static inline void
-reset_split_rx_queue(struct idpf_rx_queue *rxq)
+static const struct rte_memzone *
+idpf_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
 {
-	reset_split_rx_descq(rxq);
-	reset_split_rx_bufq(rxq->bufq1);
-	reset_split_rx_bufq(rxq->bufq2);
-}
-
-static void
-reset_single_rx_queue(struct idpf_rx_queue *rxq)
-{
-	uint16_t len;
-	uint32_t i;
-
-	if (rxq == NULL)
-		return;
-
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-
-	for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
-	     i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
-
-	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
-		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
-
-	rxq->rx_tail = 0;
-	rxq->nb_rx_hold = 0;
-
-	rte_pktmbuf_free(rxq->pkt_first_seg);
-
-	rxq->pkt_first_seg = NULL;
-	rxq->pkt_last_seg = NULL;
-	rxq->rxrearm_start = 0;
-	rxq->rxrearm_nb = 0;
-}
-
-static void
-reset_split_tx_descq(struct idpf_tx_queue *txq)
-{
-	struct idpf_tx_entry *txe;
-	uint32_t i, size;
-	uint16_t prev;
-
-	if (txq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
-		return;
-	}
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
 
-	size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)txq->desc_ring)[i] = 0;
-
-	txe = txq->sw_ring;
-	prev = (uint16_t)(txq->sw_nb_desc - 1);
-	for (i = 0; i < txq->sw_nb_desc; i++) {
-		txe[i].mbuf = NULL;
-		txe[i].last_id = i;
-		txe[prev].next_id = i;
-		prev = i;
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      IDPF_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Tx ring", sizeof("idpf Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      IDPF_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Rx ring", sizeof("idpf Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Tx compl ring", sizeof("idpf Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Rx buf ring", sizeof("idpf Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
 	}
 
-	txq->tx_tail = 0;
-	txq->nb_used = 0;
-
-	/* Use this as next to clean for split desc queue */
-	txq->last_desc_cleaned = 0;
-	txq->sw_tail = 0;
-	txq->nb_free = txq->nb_tx_desc - 1;
-}
-
-static void
-reset_split_tx_complq(struct idpf_tx_queue *cq)
-{
-	uint32_t i, size;
-
-	if (cq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to complq is NULL");
-		return;
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, IDPF_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
 	}
 
-	size = sizeof(struct idpf_splitq_tx_compl_desc) * cq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)cq->compl_ring)[i] = 0;
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
 
-	cq->tx_tail = 0;
-	cq->expected_gen_id = 1;
+	return mz;
 }
 
 static void
-reset_single_tx_queue(struct idpf_tx_queue *txq)
+idpf_dma_zone_release(const struct rte_memzone *mz)
 {
-	struct idpf_tx_entry *txe;
-	uint32_t i, size;
-	uint16_t prev;
-
-	if (txq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
-		return;
-	}
-
-	txe = txq->sw_ring;
-	size = sizeof(struct idpf_flex_tx_desc) * txq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)txq->tx_ring)[i] = 0;
-
-	prev = (uint16_t)(txq->nb_tx_desc - 1);
-	for (i = 0; i < txq->nb_tx_desc; i++) {
-		txq->tx_ring[i].qw1.cmd_dtype =
-			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE);
-		txe[i].mbuf =  NULL;
-		txe[i].last_id = i;
-		txe[prev].next_id = i;
-		prev = i;
-	}
-
-	txq->tx_tail = 0;
-	txq->nb_used = 0;
-
-	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
-	txq->nb_free = txq->nb_tx_desc - 1;
-
-	txq->next_dd = txq->rs_thresh - 1;
-	txq->next_rs = txq->rs_thresh - 1;
+	rte_memzone_free(mz);
 }
 
 static int
-idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
+idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 			 uint16_t queue_idx, uint16_t rx_free_thresh,
 			 uint16_t nb_desc, unsigned int socket_id,
-			 struct rte_mempool *mp)
+			 struct rte_mempool *mp, uint8_t bufq_id)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
 	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
-	uint32_t ring_size;
+	struct idpf_rx_queue *bufq;
 	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("idpf bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
 
 	bufq->mp = mp;
 	bufq->nb_rx_desc = nb_desc;
@@ -376,8 +159,21 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
 	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
 	bufq->rx_buf_len = len;
 
-	/* Allocate the software ring. */
+	/* Allocate a little more to support bulk allocate. */
 	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = idpf_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
 	bufq->sw_ring =
 		rte_zmalloc_socket("idpf rx bufq sw ring",
 				   sizeof(struct rte_mbuf *) * len,
@@ -385,55 +181,60 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
 				   socket_id);
 	if (bufq->sw_ring == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
-		return -ENOMEM;
-	}
-
-	/* Allocate a liitle more to support bulk allocate. */
-	len = nb_desc + IDPF_RX_MAX_BURST;
-	ring_size = RTE_ALIGN(len *
-			      sizeof(struct virtchnl2_splitq_rx_buf_desc),
-			      IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "rx_buf_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
-		rte_free(bufq->sw_ring);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
 	}
 
-	/* Zero all the descriptors in the ring. */
-	memset(mz->addr, 0, ring_size);
-	bufq->rx_ring_phys_addr = mz->iova;
-	bufq->rx_ring = mz->addr;
-
-	bufq->mz = mz;
-	reset_split_rx_bufq(bufq);
-	bufq->q_set = true;
+	idpf_reset_split_rx_bufq(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
 	bufq->ops = &def_rxq_ops;
+	bufq->q_set = true;
 
-	/* TODO: allow bulk or vec */
+	if (bufq_id == 1) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == 2) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
 
 	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	idpf_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
 }
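+/*
+ * Note (illustrative, not part of the original patch): bufq_id selects
+ * which of the two buffer queues owned by a split rxq is being set up;
+ * the caller passes hardware queue indices 2 * queue_idx and
+ * 2 * queue_idx + 1 for bufq 1 and bufq 2 respectively.
+ */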
 
-static int
-idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			  uint16_t nb_desc, unsigned int socket_id,
-			  const struct rte_eth_rxconf *rx_conf,
-			  struct rte_mempool *mp)
+static void
+idpf_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	idpf_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_rx_queue *bufq1, *bufq2;
+	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
 	struct idpf_rx_queue *rxq;
 	uint16_t rx_free_thresh;
-	uint32_t ring_size;
 	uint64_t offloads;
-	uint16_t qid;
+	bool is_splitq;
 	uint16_t len;
 	int ret;
 
@@ -443,7 +244,7 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
 		IDPF_DEFAULT_RX_FREE_THRESH :
 		rx_conf->rx_free_thresh;
-	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+	if (idpf_check_rx_thresh(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Free memory if needed */
@@ -452,16 +253,19 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		dev->data->rx_queues[queue_idx] = NULL;
 	}
 
-	/* Setup Rx description queue */
+	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("idpf rxq",
 				 sizeof(struct idpf_rx_queue),
 				 RTE_CACHE_LINE_SIZE,
 				 socket_id);
 	if (rxq == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
 	}
 
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
 	rxq->mp = mp;
 	rxq->nb_rx_desc = nb_desc;
 	rxq->rx_free_thresh = rx_free_thresh;
@@ -470,343 +274,129 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
 	rxq->rx_hdr_len = 0;
 	rxq->adapter = adapter;
-	rxq->offloads = offloads;
+	rxq->offloads = idpf_rx_offload_convert(offloads);
 
 	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
 	rxq->rx_buf_len = len;
 
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-	ring_size = RTE_ALIGN(len *
-			      sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
-			      IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "rx_cpmpl_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = idpf_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
 	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
 		ret = -ENOMEM;
-		goto free_rxq;
+		goto err_mz_reserve;
 	}
-
-	/* Zero all the descriptors in the ring. */
-	memset(mz->addr, 0, ring_size);
 	rxq->rx_ring_phys_addr = mz->iova;
 	rxq->rx_ring = mz->addr;
-
 	rxq->mz = mz;
-	reset_split_rx_descq(rxq);
 
-	/* TODO: allow bulk or vec */
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("idpf rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
 
-	/* setup Rx buffer queue */
-	bufq1 = rte_zmalloc_socket("idpf bufq1",
-				   sizeof(struct idpf_rx_queue),
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (bufq1 == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 1.");
-		ret = -ENOMEM;
-		goto free_mz;
-	}
-	qid = 2 * queue_idx;
-	ret = idpf_rx_split_bufq_setup(dev, bufq1, qid, rx_free_thresh,
-				       nb_desc, socket_id, mp);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
-		ret = -EINVAL;
-		goto free_bufq1;
-	}
-	rxq->bufq1 = bufq1;
+		idpf_reset_single_rx_queue(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
+	} else {
+		idpf_reset_split_rx_descq(rxq);
 
-	bufq2 = rte_zmalloc_socket("idpf bufq2",
-				   sizeof(struct idpf_rx_queue),
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (bufq2 == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 2.");
-		rte_free(bufq1->sw_ring);
-		rte_memzone_free(bufq1->mz);
-		ret = -ENOMEM;
-		goto free_bufq1;
-	}
-	qid = 2 * queue_idx + 1;
-	ret = idpf_rx_split_bufq_setup(dev, bufq2, qid, rx_free_thresh,
-				       nb_desc, socket_id, mp);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
-		rte_free(bufq1->sw_ring);
-		rte_memzone_free(bufq1->mz);
-		ret = -EINVAL;
-		goto free_bufq2;
+		/* Setup Rx buffer queues */
+		ret = idpf_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = idpf_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
 	}
-	rxq->bufq2 = bufq2;
 
 	rxq->q_set = true;
 	dev->data->rx_queues[queue_idx] = rxq;
 
 	return 0;
 
-free_bufq2:
-	rte_free(bufq2);
-free_bufq1:
-	rte_free(bufq1);
-free_mz:
-	rte_memzone_free(mz);
-free_rxq:
+err_bufq2_setup:
+	idpf_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	idpf_dma_zone_release(mz);
+err_mz_reserve:
 	rte_free(rxq);
-
+err_rxq_alloc:
 	return ret;
 }
 
 static int
-idpf_rx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			   uint16_t nb_desc, unsigned int socket_id,
-			   const struct rte_eth_rxconf *rx_conf,
-			   struct rte_mempool *mp)
+idpf_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
-	struct idpf_rx_queue *rxq;
-	uint16_t rx_free_thresh;
-	uint32_t ring_size;
-	uint64_t offloads;
-	uint16_t len;
-
-	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
-
-	/* Check free threshold */
-	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
-		IDPF_DEFAULT_RX_FREE_THRESH :
-		rx_conf->rx_free_thresh;
-	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
-		return -EINVAL;
-
-	/* Free memory if needed */
-	if (dev->data->rx_queues[queue_idx] != NULL) {
-		idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
-		dev->data->rx_queues[queue_idx] = NULL;
-	}
-
-	/* Setup Rx description queue */
-	rxq = rte_zmalloc_socket("idpf rxq",
-				 sizeof(struct idpf_rx_queue),
-				 RTE_CACHE_LINE_SIZE,
-				 socket_id);
-	if (rxq == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
-		return -ENOMEM;
-	}
-
-	rxq->mp = mp;
-	rxq->nb_rx_desc = nb_desc;
-	rxq->rx_free_thresh = rx_free_thresh;
-	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
-	rxq->port_id = dev->data->port_id;
-	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
-	rxq->rx_hdr_len = 0;
-	rxq->adapter = adapter;
-	rxq->offloads = offloads;
-
-	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
-	rxq->rx_buf_len = len;
-
-	len = nb_desc + IDPF_RX_MAX_BURST;
-	rxq->sw_ring =
-		rte_zmalloc_socket("idpf rxq sw ring",
-				   sizeof(struct rte_mbuf *) * len,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (rxq->sw_ring == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
-		rte_free(rxq);
-		return -ENOMEM;
-	}
-
-	/* Allocate a liitle more to support bulk allocate. */
-	len = nb_desc + IDPF_RX_MAX_BURST;
-	ring_size = RTE_ALIGN(len *
-			      sizeof(struct virtchnl2_singleq_rx_buf_desc),
-			      IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "rx ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
-		rte_free(rxq->sw_ring);
-		rte_free(rxq);
-		return -ENOMEM;
-	}
-
-	/* Zero all the descriptors in the ring. */
-	memset(mz->addr, 0, ring_size);
-	rxq->rx_ring_phys_addr = mz->iova;
-	rxq->rx_ring = mz->addr;
-
-	rxq->mz = mz;
-	reset_single_rx_queue(rxq);
-	rxq->q_set = true;
-	dev->data->rx_queues[queue_idx] = rxq;
-	rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
-			queue_idx * vport->chunks_info.rx_qtail_spacing);
-	rxq->ops = &def_rxq_ops;
-
-	return 0;
-}
-
-int
-idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-		    uint16_t nb_desc, unsigned int socket_id,
-		    const struct rte_eth_rxconf *rx_conf,
-		    struct rte_mempool *mp)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		return idpf_rx_single_queue_setup(dev, queue_idx, nb_desc,
-						  socket_id, rx_conf, mp);
-	else
-		return idpf_rx_split_queue_setup(dev, queue_idx, nb_desc,
-						 socket_id, rx_conf, mp);
-}
-
-static int
-idpf_tx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			  uint16_t nb_desc, unsigned int socket_id,
-			  const struct rte_eth_txconf *tx_conf)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	uint16_t tx_rs_thresh, tx_free_thresh;
-	struct idpf_hw *hw = &adapter->hw;
-	struct idpf_tx_queue *txq, *cq;
-	const struct rte_memzone *mz;
-	uint32_t ring_size;
-	uint64_t offloads;
+	struct idpf_tx_queue *cq;
 	int ret;
 
-	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
-
-	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh != 0) ?
-		tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
-	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh != 0) ?
-		tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
-		return -EINVAL;
-
-	/* Free memory if needed. */
-	if (dev->data->tx_queues[queue_idx] != NULL) {
-		idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
-		dev->data->tx_queues[queue_idx] = NULL;
-	}
-
-	/* Allocate the TX queue data structure. */
-	txq = rte_zmalloc_socket("idpf split txq",
-				 sizeof(struct idpf_tx_queue),
-				 RTE_CACHE_LINE_SIZE,
-				 socket_id);
-	if (txq == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
-		return -ENOMEM;
-	}
-
-	txq->nb_tx_desc = nb_desc;
-	txq->rs_thresh = tx_rs_thresh;
-	txq->free_thresh = tx_free_thresh;
-	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
-	txq->port_id = dev->data->port_id;
-	txq->offloads = offloads;
-	txq->tx_deferred_start = tx_conf->tx_deferred_start;
-
-	/* Allocate software ring */
-	txq->sw_nb_desc = 2 * nb_desc;
-	txq->sw_ring =
-		rte_zmalloc_socket("idpf split tx sw ring",
-				   sizeof(struct idpf_tx_entry) *
-				   txq->sw_nb_desc,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (txq->sw_ring == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
-		ret = -ENOMEM;
-		goto err_txq_sw_ring;
-	}
-
-	/* Allocate TX hardware ring descriptors. */
-	ring_size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
-	ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "split_tx_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
-		ret = -ENOMEM;
-		goto err_txq_mz;
-	}
-	txq->tx_ring_phys_addr = mz->iova;
-	txq->desc_ring = mz->addr;
-
-	txq->mz = mz;
-	reset_split_tx_descq(txq);
-	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
-			queue_idx * vport->chunks_info.tx_qtail_spacing);
-	txq->ops = &def_txq_ops;
-
-	/* Allocate the TX completion queue data structure. */
-	txq->complq = rte_zmalloc_socket("idpf splitq cq",
-					 sizeof(struct idpf_tx_queue),
-					 RTE_CACHE_LINE_SIZE,
-					 socket_id);
-	cq = txq->complq;
+	cq = rte_zmalloc_socket("idpf splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
 	if (cq == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
 		ret = -ENOMEM;
-		goto err_cq;
+		goto err_cq_alloc;
 	}
-	cq->nb_tx_desc = 2 * nb_desc;
+
+	cq->nb_tx_desc = nb_desc;
 	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
 	cq->port_id = dev->data->port_id;
 	cq->txqs = dev->data->tx_queues;
 	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
 
-	ring_size = sizeof(struct idpf_splitq_tx_compl_desc) * cq->nb_tx_desc;
-	ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "tx_split_compl_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
+	mz = idpf_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
 	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX completion queue");
 		ret = -ENOMEM;
-		goto err_cq_mz;
+		goto err_mz_reserve;
 	}
 	cq->tx_ring_phys_addr = mz->iova;
 	cq->compl_ring = mz->addr;
 	cq->mz = mz;
-	reset_split_tx_complq(cq);
+	idpf_reset_split_tx_complq(cq);
 
-	txq->q_set = true;
-	dev->data->tx_queues[queue_idx] = txq;
+	txq->complq = cq;
 
 	return 0;
 
-err_cq_mz:
+err_mz_reserve:
 	rte_free(cq);
-err_cq:
-	rte_memzone_free(txq->mz);
-err_txq_mz:
-	rte_free(txq->sw_ring);
-err_txq_sw_ring:
-	rte_free(txq);
-
+err_cq_alloc:
 	return ret;
 }
 
-static int
-idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			   uint16_t nb_desc, unsigned int socket_id,
-			   const struct rte_eth_txconf *tx_conf)
+int
+idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
@@ -814,8 +404,10 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
 	struct idpf_tx_queue *txq;
-	uint32_t ring_size;
 	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
 
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 
@@ -823,7 +415,7 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
 	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
 		tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+	if (idpf_check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Free memory if needed. */
@@ -839,71 +431,74 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 				 socket_id);
 	if (txq == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_txq_alloc;
 	}
 
-	/* TODO: vlan offload */
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
 
 	txq->nb_tx_desc = nb_desc;
 	txq->rs_thresh = tx_rs_thresh;
 	txq->free_thresh = tx_free_thresh;
 	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
 	txq->port_id = dev->data->port_id;
-	txq->offloads = offloads;
+	txq->offloads = idpf_tx_offload_convert(offloads);
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 
-	/* Allocate software ring */
-	txq->sw_ring =
-		rte_zmalloc_socket("idpf tx sw ring",
-				   sizeof(struct idpf_tx_entry) * nb_desc,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (txq->sw_ring == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
-		rte_free(txq);
-		return -ENOMEM;
-	}
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
 
 	/* Allocate TX hardware ring descriptors. */
-	ring_size = sizeof(struct idpf_flex_tx_desc) * nb_desc;
-	ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
+	mz = idpf_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
 	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
-		rte_free(txq->sw_ring);
-		rte_free(txq);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_mz_reserve;
 	}
-
 	txq->tx_ring_phys_addr = mz->iova;
-	txq->tx_ring = mz->addr;
-
 	txq->mz = mz;
-	reset_single_tx_queue(txq);
-	txq->q_set = true;
-	dev->data->tx_queues[queue_idx] = txq;
+
+	txq->sw_ring = rte_zmalloc_socket("idpf tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		idpf_reset_single_tx_queue(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		idpf_reset_split_tx_descq(txq);
+
+		/* Setup tx completion queue if split model */
+		ret = idpf_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
 	txq->ops = &def_txq_ops;
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
 
 	return 0;
-}
 
-int
-idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-		    uint16_t nb_desc, unsigned int socket_id,
-		    const struct rte_eth_txconf *tx_conf)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		return idpf_tx_single_queue_setup(dev, queue_idx, nb_desc,
-						  socket_id, tx_conf);
-	else
-		return idpf_tx_split_queue_setup(dev, queue_idx, nb_desc,
-						 socket_id, tx_conf);
+err_complq_setup:
+err_sw_ring_alloc:
+	idpf_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
 }
 
 static int
@@ -916,89 +511,13 @@ idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
 							 &idpf_timestamp_dynflag);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR,
-				"Cannot register mbuf field/flag for timestamp");
+				    "Cannot register mbuf field/flag for timestamp");
 			return -EINVAL;
 		}
 	}
 	return 0;
 }
 
-static int
-idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
-{
-	volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
-	struct rte_mbuf *mbuf = NULL;
-	uint64_t dma_addr;
-	uint16_t i;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		mbuf = rte_mbuf_raw_alloc(rxq->mp);
-		if (unlikely(mbuf == NULL)) {
-			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
-			return -ENOMEM;
-		}
-
-		rte_mbuf_refcnt_set(mbuf, 1);
-		mbuf->next = NULL;
-		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
-		mbuf->nb_segs = 1;
-		mbuf->port = rxq->port_id;
-
-		dma_addr =
-			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
-
-		rxd = &((volatile struct virtchnl2_singleq_rx_buf_desc *)(rxq->rx_ring))[i];
-		rxd->pkt_addr = dma_addr;
-		rxd->hdr_addr = 0;
-		rxd->rsvd1 = 0;
-		rxd->rsvd2 = 0;
-		rxq->sw_ring[i] = mbuf;
-	}
-
-	return 0;
-}
-
-static int
-idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
-{
-	volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
-	struct rte_mbuf *mbuf = NULL;
-	uint64_t dma_addr;
-	uint16_t i;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		mbuf = rte_mbuf_raw_alloc(rxq->mp);
-		if (unlikely(mbuf == NULL)) {
-			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
-			return -ENOMEM;
-		}
-
-		rte_mbuf_refcnt_set(mbuf, 1);
-		mbuf->next = NULL;
-		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
-		mbuf->nb_segs = 1;
-		mbuf->port = rxq->port_id;
-
-		dma_addr =
-			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
-
-		rxd = &((volatile struct virtchnl2_splitq_rx_buf_desc *)(rxq->rx_ring))[i];
-		rxd->qword0.buf_id = i;
-		rxd->qword0.rsvd0 = 0;
-		rxd->qword0.rsvd1 = 0;
-		rxd->pkt_addr = dma_addr;
-		rxd->hdr_addr = 0;
-		rxd->rsvd2 = 0;
-
-		rxq->sw_ring[i] = mbuf;
-	}
-
-	rxq->nb_rx_hold = 0;
-	rxq->rx_tail = rxq->nb_rx_desc - 1;
-
-	return 0;
-}
-
 int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -1164,11 +683,11 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	rxq = dev->data->rx_queues[rx_queue_id];
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		rxq->ops->release_mbufs(rxq);
-		reset_single_rx_queue(rxq);
+		idpf_reset_single_rx_queue(rxq);
 	} else {
 		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
 		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
-		reset_split_rx_queue(rxq);
+		idpf_reset_split_rx_queue(rxq);
 	}
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -1195,10 +714,10 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	txq = dev->data->tx_queues[tx_queue_id];
 	txq->ops->release_mbufs(txq);
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		reset_single_tx_queue(txq);
+		idpf_reset_single_tx_queue(txq);
 	} else {
-		reset_split_tx_descq(txq);
-		reset_split_tx_complq(txq->complq);
+		idpf_reset_split_tx_descq(txq);
+		idpf_reset_split_tx_complq(txq->complq);
 	}
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index b8325f9b96..4efbf10295 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -51,7 +51,6 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define IDPF_RING_BASE_ALIGN	128
 
-#define IDPF_RX_MAX_BURST		32
 #define IDPF_DEFAULT_RX_FREE_THRESH	32
 
 /* used for Vector PMD */
@@ -101,14 +100,6 @@ union idpf_tx_offload {
 	};
 };
 
-struct idpf_rxq_ops {
-	void (*release_mbufs)(struct idpf_rx_queue *rxq);
-};
-
-struct idpf_txq_ops {
-	void (*release_mbufs)(struct idpf_tx_queue *txq);
-};
-
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index fb2b6bb53c..71a6c59823 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -562,7 +562,7 @@ idpf_tx_free_bufs_avx512(struct idpf_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->next_dd - (n - 1);
 
-	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & IDPF_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
 								rte_lcore_id());
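
As an aside (not part of this patch): with the release helpers now
exported through version.map, a consumer PMD's ethdev queue-release
callbacks reduce to thin wrappers around them. A minimal sketch,
assuming the standard ethdev callback signature; the example_* names
are hypothetical:

	/* assumes <rte_ethdev.h> and the idpf common headers are included */
	static void
	example_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
	{
		idpf_rx_queue_release(dev->data->rx_queues[qid]);
	}

	static void
	example_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
	{
		idpf_tx_queue_release(dev->data->tx_queues[qid]);
	}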
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread


* [PATCH v4 13/15] common/idpf: add Rx and Tx data path
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common modle beilei.xing
                     ` (11 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 12/15] common/idpf: add help functions for queue setup and release beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-17  8:06   ` [PATCH v4 14/15] common/idpf: add vec queue setup beilei.xing
                     ` (2 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Mingxia Liu

From: Beilei Xing <beilei.xing@intel.com>

Add a timestamp field to the idpf_adapter structure.
Move scalar Rx/Tx data path for both single queue and split queue
to common module.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h |   5 +
 drivers/common/idpf/idpf_common_logs.h   |  24 +
 drivers/common/idpf/idpf_common_rxtx.c   | 987 +++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h   |  89 +-
 drivers/common/idpf/version.map          |   6 +
 drivers/net/idpf/idpf_ethdev.c           |   2 -
 drivers/net/idpf/idpf_ethdev.h           |   4 -
 drivers/net/idpf/idpf_logs.h             |  24 -
 drivers/net/idpf/idpf_rxtx.c             | 935 ---------------------
 drivers/net/idpf/idpf_rxtx.h             | 132 ---
 drivers/net/idpf/idpf_rxtx_vec_avx512.c  |   8 +-
 11 files changed, 1114 insertions(+), 1102 deletions(-)
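
For context, a minimal sketch (illustrative only, not part of this
patch) of how a consumer PMD can hook up the common scalar data paths
added below, keyed on the negotiated queue model; example_set_rx_function
is a hypothetical name:

	static void
	example_set_rx_function(struct rte_eth_dev *dev)
	{
		struct idpf_vport *vport = dev->data->dev_private;

		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
			dev->rx_pkt_burst = idpf_splitq_recv_pkts;
		else
			dev->rx_pkt_burst = idpf_singleq_recv_pkts;
	}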

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 4895f5f360..573852ff75 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -23,6 +23,8 @@
 #define IDPF_TX_COMPLQ_PER_GRP	1
 #define IDPF_TXQ_PER_GRP	1
 
+#define IDPF_MIN_FRAME_SIZE	14
+
 #define IDPF_MAX_PKT_TYPE	1024
 
 #define IDPF_DFLT_INTERVAL	16
@@ -43,6 +45,9 @@ struct idpf_adapter {
 
 	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
 	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
+
+	/* For timestamp */
+	uint64_t time_hw;
 };
 
 struct idpf_chunks_info {
diff --git a/drivers/common/idpf/idpf_common_logs.h b/drivers/common/idpf/idpf_common_logs.h
index fe36562769..63ad2195be 100644
--- a/drivers/common/idpf/idpf_common_logs.h
+++ b/drivers/common/idpf/idpf_common_logs.h
@@ -20,4 +20,28 @@ extern int idpf_common_logtype;
 #define DRV_LOG(level, fmt, args...)		\
 	DRV_LOG_RAW(level, fmt "\n", ## args)
 
+#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
+#define RX_LOG(level, ...) \
+	RTE_LOG(level, \
+		PMD, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+#else
+#define RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_IDPF_DEBUG_TX
+#define TX_LOG(level, ...) \
+	RTE_LOG(level, \
+		PMD, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+#else
+#define TX_LOG(level, fmt, args...) do { } while (0)
+#endif
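+
+/*
+ * Usage note (illustrative, not part of the original patch): RX_LOG and
+ * TX_LOG compile to no-ops unless the matching debug macro is defined,
+ * so e.g.
+ *	RX_LOG(DEBUG, "port %u queue %u", rxq->port_id, rxq->queue_id);
+ * only emits output when built with RTE_LIBRTE_IDPF_DEBUG_RX.
+ */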
+
 #endif /* _IDPF_COMMON_LOGS_H_ */
diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index eeeeedca88..459057f20e 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -3,8 +3,13 @@
  */
 
 #include <rte_mbuf_dyn.h>
+#include <rte_errno.h>
+
 #include "idpf_common_rxtx.h"
 
+int idpf_timestamp_dynfield_offset = -1;
+uint64_t idpf_timestamp_dynflag;
+
 int
 idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
@@ -337,6 +342,23 @@ idpf_tx_queue_release(void *txq)
 	rte_free(q);
 }
 
+int
+idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
+{
+	int err;
+
+	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+		/* Register mbuf field and flag for Rx timestamp */
+		err = rte_mbuf_dyn_rx_timestamp_register(&idpf_timestamp_dynfield_offset,
+							 &idpf_timestamp_dynflag);
+		if (err != 0) {
+			DRV_LOG(ERR,
+				"Cannot register mbuf field/flag for timestamp");
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+
 int
 idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
 {
@@ -412,3 +434,968 @@ idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
 
 	return 0;
 }
+
+#define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
+/* Helper function to convert a 32b nanoseconds timestamp to 64b. */
+static inline uint64_t
+idpf_tstamp_convert_32b_64b(struct idpf_adapter *ad, uint32_t flag,
+			    uint32_t in_timestamp)
+{
+#ifdef RTE_ARCH_X86_64
+	struct idpf_hw *hw = &ad->hw;
+	const uint64_t mask = 0xFFFFFFFF;
+	uint32_t hi, lo, lo2, delta;
+	uint64_t ns;
+
+	if (flag != 0) {
+		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
+		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_EXEC_CMD_M |
+			       PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
+		lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
+		hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
+		/*
+		 * On a typical system, the delta between lo and lo2 is ~1000 ns,
+		 * so 10000 is a large-enough but not overly-big guard band.
+		 */
+		if (lo > (UINT32_MAX - IDPF_TIMESYNC_REG_WRAP_GUARD_BAND))
+			lo2 = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
+		else
+			lo2 = lo;
+
+		if (lo2 < lo) {
+			lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
+			hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
+		}
+
+		ad->time_hw = ((uint64_t)hi << 32) | lo;
+	}
+
+	delta = (in_timestamp - (uint32_t)(ad->time_hw & mask));
+	if (delta > (mask / 2)) {
+		delta = ((uint32_t)(ad->time_hw & mask) - in_timestamp);
+		ns = ad->time_hw - delta;
+	} else {
+		ns = ad->time_hw + delta;
+	}
+
+	return ns;
+#else /* !RTE_ARCH_X86_64 */
+	RTE_SET_USED(ad);
+	RTE_SET_USED(flag);
+	RTE_SET_USED(in_timestamp);
+	return 0;
+#endif /* RTE_ARCH_X86_64 */
+}
+
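+/*
+ * Worked example for the wrap handling above (illustrative, not part of
+ * the original patch): with cached time_hw low bits 0x00000100 and
+ * in_timestamp 0x00000050, delta = 0x50 - 0x100 underflows to
+ * 0xFFFFFF50 > mask / 2, so the helper recomputes delta as
+ * 0x100 - 0x50 = 0xB0 and returns time_hw - 0xB0: the sample lands just
+ * before the cached 64b time instead of ~4.3 s after it.
+ */
+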
+#define IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S				\
+	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) |     \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) |     \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) |    \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
+
+static inline uint64_t
+idpf_splitq_rx_csum_offload(uint8_t err)
+{
+	uint64_t flags = 0;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S)) == 0))
+		return flags;
+
+	if (likely((err & IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S) == 0)) {
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)) != 0))
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+	return flags;
+}
+
+#define IDPF_RX_FLEX_DESC_ADV_HASH1_S  0
+#define IDPF_RX_FLEX_DESC_ADV_HASH2_S  16
+#define IDPF_RX_FLEX_DESC_ADV_HASH3_S  24
+
+static inline uint64_t
+idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
+			   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+{
+	uint8_t status_err0_qw0;
+	uint64_t flags = 0;
+
+	status_err0_qw0 = rx_desc->status_err0_qw0;
+
+	if ((status_err0_qw0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) != 0) {
+		flags |= RTE_MBUF_F_RX_RSS_HASH;
+		mb->hash.rss = (rte_le_to_cpu_16(rx_desc->hash1) <<
+				IDPF_RX_FLEX_DESC_ADV_HASH1_S) |
+			((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
+			 IDPF_RX_FLEX_DESC_ADV_HASH2_S) |
+			((uint32_t)(rx_desc->hash3) <<
+			 IDPF_RX_FLEX_DESC_ADV_HASH3_S);
+	}
+
+	return flags;
+}
+
+static void
+idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
+{
+	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
+	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
+	uint16_t nb_refill = rx_bufq->rx_free_thresh;
+	uint16_t nb_desc = rx_bufq->nb_rx_desc;
+	uint16_t next_avail = rx_bufq->rx_tail;
+	struct rte_mbuf *nmb[rx_bufq->rx_free_thresh];
+	uint64_t dma_addr;
+	uint16_t delta;
+	int i;
+
+	if (rx_bufq->nb_rx_hold < rx_bufq->rx_free_thresh)
+		return;
+
+	rx_buf_ring = rx_bufq->rx_ring;
+	delta = nb_desc - next_avail;
+	if (unlikely(delta < nb_refill)) {
+		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta) == 0)) {
+			for (i = 0; i < delta; i++) {
+				rx_buf_desc = &rx_buf_ring[next_avail + i];
+				rx_bufq->sw_ring[next_avail + i] = nmb[i];
+				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+				rx_buf_desc->hdr_addr = 0;
+				rx_buf_desc->pkt_addr = dma_addr;
+			}
+			nb_refill -= delta;
+			next_avail = 0;
+			rx_bufq->nb_rx_hold -= delta;
+		} else {
+			rte_atomic64_add(&rx_bufq->rx_stats.mbuf_alloc_failed,
+					 nb_desc - next_avail);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+			       rx_bufq->port_id, rx_bufq->queue_id);
+			return;
+		}
+	}
+
+	if (nb_desc - next_avail >= nb_refill) {
+		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill) == 0)) {
+			for (i = 0; i < nb_refill; i++) {
+				rx_buf_desc = &rx_buf_ring[next_avail + i];
+				rx_bufq->sw_ring[next_avail + i] = nmb[i];
+				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+				rx_buf_desc->hdr_addr = 0;
+				rx_buf_desc->pkt_addr = dma_addr;
+			}
+			next_avail += nb_refill;
+			rx_bufq->nb_rx_hold -= nb_refill;
+		} else {
+			rte_atomic64_add(&rx_bufq->rx_stats.mbuf_alloc_failed,
+					 nb_desc - next_avail);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+			       rx_bufq->port_id, rx_bufq->queue_id);
+		}
+	}
+
+	IDPF_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
+
+	rx_bufq->rx_tail = next_avail;
+}
+
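+/*
+ * Note (illustrative): the refill above runs in two stages so that a
+ * burst crossing the end of the ring first fills the remaining tail
+ * entries, wraps next_avail to 0, and then fills the rest from the
+ * start of the ring.
+ */
+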
+uint16_t
+idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		      uint16_t nb_pkts)
+{
+	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
+	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
+	uint16_t pktlen_gen_bufq_id;
+	struct idpf_rx_queue *rxq;
+	const uint32_t *ptype_tbl;
+	uint8_t status_err0_qw1;
+	struct idpf_adapter *ad;
+	struct rte_mbuf *rxm;
+	uint16_t rx_id_bufq1;
+	uint16_t rx_id_bufq2;
+	uint64_t pkt_flags;
+	uint16_t pkt_len;
+	uint16_t bufq_id;
+	uint16_t gen_id;
+	uint16_t rx_id;
+	uint16_t nb_rx;
+	uint64_t ts_ns;
+
+	nb_rx = 0;
+	rxq = rx_queue;
+	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	ad = rxq->adapter; /* dereference only after the NULL check */
+
+	rx_id = rxq->rx_tail;
+	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
+	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
+	rx_desc_ring = rxq->rx_ring;
+	ptype_tbl = rxq->adapter->ptype_tbl;
+
+	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0)
+		rxq->hw_register_set = 1;
+
+	while (nb_rx < nb_pkts) {
+		rx_desc = &rx_desc_ring[rx_id];
+
+		pktlen_gen_bufq_id =
+			rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
+		gen_id = (pktlen_gen_bufq_id &
+			  VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
+			VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
+		if (gen_id != rxq->expected_gen_id)
+			break;
+
+		pkt_len = (pktlen_gen_bufq_id &
+			   VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
+			VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
+		if (pkt_len == 0)
+			RX_LOG(ERR, "Packet length is 0");
+
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc)) {
+			rx_id = 0;
+			rxq->expected_gen_id ^= 1;
+		}
+
+		bufq_id = (pktlen_gen_bufq_id &
+			   VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M) >>
+			VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S;
+		if (bufq_id == 0) {
+			rxm = rxq->bufq1->sw_ring[rx_id_bufq1];
+			rx_id_bufq1++;
+			if (unlikely(rx_id_bufq1 == rxq->bufq1->nb_rx_desc))
+				rx_id_bufq1 = 0;
+			rxq->bufq1->nb_rx_hold++;
+		} else {
+			rxm = rxq->bufq2->sw_ring[rx_id_bufq2];
+			rx_id_bufq2++;
+			if (unlikely(rx_id_bufq2 == rxq->bufq2->nb_rx_desc))
+				rx_id_bufq2 = 0;
+			rxq->bufq2->nb_rx_hold++;
+		}
+
+		rxm->pkt_len = pkt_len;
+		rxm->data_len = pkt_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rxm->next = NULL;
+		rxm->nb_segs = 1;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		rxm->packet_type =
+			ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
+				   VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
+				  VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
+
+		status_err0_qw1 = rx_desc->status_err0_qw1;
+		pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
+		pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP)) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+							    rxq->hw_register_set,
+							    rte_le_to_cpu_32(rx_desc->ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			rxm->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		rxm->ol_flags |= pkt_flags;
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+
+	if (nb_rx > 0) {
+		rxq->rx_tail = rx_id;
+		if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
+			rxq->bufq1->rx_next_avail = rx_id_bufq1;
+		if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
+			rxq->bufq2->rx_next_avail = rx_id_bufq2;
+
+		idpf_split_rx_bufq_refill(rxq->bufq1);
+		idpf_split_rx_bufq_refill(rxq->bufq2);
+	}
+
+	return nb_rx;
+}
+
+static inline void
+idpf_split_tx_free(struct idpf_tx_queue *cq)
+{
+	volatile struct idpf_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
+	volatile struct idpf_splitq_tx_compl_desc *txd;
+	uint16_t next = cq->tx_tail;
+	struct idpf_tx_entry *txe;
+	struct idpf_tx_queue *txq;
+	uint16_t gen, qid, q_head;
+	uint16_t nb_desc_clean;
+	uint8_t ctype;
+
+	txd = &compl_ring[next];
+	gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+	       IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S;
+	if (gen != cq->expected_gen_id)
+		return;
+
+	ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+		 IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> IDPF_TXD_COMPLQ_COMPL_TYPE_S;
+	qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+	       IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S;
+	q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
+	txq = cq->txqs[qid - cq->tx_start_qid];
+
+	switch (ctype) {
+	case IDPF_TXD_COMPLT_RE:
+		/* Clean up to q_head, which is the last fetched txq desc id + 1.
+		 * TODO: refine this and remove the if condition below.
+		 */
+		if (unlikely(q_head % 32)) {
+			TX_LOG(ERR, "unexpected desc (head = %u) completion.",
+			       q_head);
+			return;
+		}
+		if (txq->last_desc_cleaned > q_head)
+			nb_desc_clean = (txq->nb_tx_desc - txq->last_desc_cleaned) +
+				q_head;
+		else
+			nb_desc_clean = q_head - txq->last_desc_cleaned;
+		txq->nb_free += nb_desc_clean;
+		txq->last_desc_cleaned = q_head;
+		break;
+	case IDPF_TXD_COMPLT_RS:
+		/* q_head indicates sw_id when ctype is 2 */
+		txe = &txq->sw_ring[q_head];
+		if (txe->mbuf != NULL) {
+			rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = NULL;
+		}
+		break;
+	default:
+		TX_LOG(ERR, "unknown completion type.");
+		return;
+	}
+
+	if (++next == cq->nb_tx_desc) {
+		next = 0;
+		cq->expected_gen_id ^= 1;
+	}
+
+	cq->tx_tail = next;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+idpf_calc_context_desc(uint64_t flags)
+{
+	if ((flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+		return 1;
+
+	return 0;
+}
+
+/* Set the TSO context descriptor. */
+static inline void
+idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
+			union idpf_tx_offload tx_offload,
+			volatile union idpf_flex_tx_ctx_desc *ctx_desc)
+{
+	uint16_t cmd_dtype;
+	uint32_t tso_len;
+	uint8_t hdr_len;
+
+	if (tx_offload.l4_len == 0) {
+		TX_LOG(DEBUG, "L4 length set to 0");
+		return;
+	}
+
+	hdr_len = tx_offload.l2_len +
+		tx_offload.l3_len +
+		tx_offload.l4_len;
+	cmd_dtype = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX |
+		IDPF_TX_FLEX_CTX_DESC_CMD_TSO;
+	tso_len = mbuf->pkt_len - hdr_len;
+
+	ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
+	ctx_desc->tso.qw0.hdr_len = hdr_len;
+	ctx_desc->tso.qw0.mss_rt =
+		rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
+				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
+	ctx_desc->tso.qw0.flex_tlen =
+		rte_cpu_to_le_32(tso_len &
+				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
+}
+
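+/*
+ * Worked example (illustrative, not part of the original patch): for a
+ * TSO mbuf with l2_len 14, l3_len 20, l4_len 20, pkt_len 8954 and
+ * tso_segsz 1460, hdr_len is 54 and tso_len is 8900, so hardware will
+ * cut ceil(8900 / 1460) = 7 MSS-sized segments from this context.
+ */
+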
+uint16_t
+idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		      uint16_t nb_pkts)
+{
+	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
+	volatile struct idpf_flex_tx_sched_desc *txr;
+	volatile struct idpf_flex_tx_sched_desc *txd;
+	struct idpf_tx_entry *sw_ring;
+	union idpf_tx_offload tx_offload = {0};
+	struct idpf_tx_entry *txe, *txn;
+	uint16_t nb_used, tx_id, sw_id;
+	struct rte_mbuf *tx_pkt;
+	uint16_t nb_to_clean;
+	uint16_t nb_tx = 0;
+	uint64_t ol_flags;
+	uint16_t nb_ctx;
+
+	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
+		return nb_tx;
+
+	txr = txq->desc_ring;
+	sw_ring = txq->sw_ring;
+	tx_id = txq->tx_tail;
+	sw_id = txq->sw_tail;
+	txe = &sw_ring[sw_id];
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		tx_pkt = tx_pkts[nb_tx];
+
+		if (txq->nb_free <= txq->free_thresh) {
+			/* TODO: needs refinement
+			 * 1. free and clean: better to decide on a clean destination
+			 * instead of a loop count; don't free the mbuf as soon as an
+			 * RS completion arrives, free on transmit or per the clean
+			 * destination. For now, ignore the RE write-back and free the
+			 * mbuf on RS.
+			 * 2. out-of-order write-back is not yet supported; the SW and
+			 * HW heads need to be tracked separately.
+			 */
+			nb_to_clean = 2 * txq->rs_thresh;
+			while (nb_to_clean--)
+				idpf_split_tx_free(txq->complq);
+		}
+
+		if (txq->nb_free < tx_pkt->nb_segs)
+			break;
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = idpf_calc_context_desc(ol_flags);
+		nb_used = tx_pkt->nb_segs + nb_ctx;
+
+		/* context descriptor */
+		if (nb_ctx != 0) {
+			volatile union idpf_flex_tx_ctx_desc *ctx_desc =
+				(volatile union idpf_flex_tx_ctx_desc *)&txr[tx_id];
+
+			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+							ctx_desc);
+
+			tx_id++;
+			if (tx_id == txq->nb_tx_desc)
+				tx_id = 0;
+		}
+
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+			txe->mbuf = tx_pkt;
+
+			/* Setup TX descriptor */
+			txd->buf_addr =
+				rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
+			txd->qw1.cmd_dtype =
+				rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE);
+			txd->qw1.rxr_bufsize = tx_pkt->data_len;
+			txd->qw1.compl_tag = sw_id;
+			tx_id++;
+			if (tx_id == txq->nb_tx_desc)
+				tx_id = 0;
+			sw_id = txe->next_id;
+			txe = txn;
+			tx_pkt = tx_pkt->next;
+		} while (tx_pkt);
+
+		/* fill the last descriptor with End of Packet (EOP) bit */
+		txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_EOP;
+
+		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+
+		if (txq->nb_used >= 32) {
+			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
+			/* Update txq RE bit counters */
+			txq->nb_used = 0;
+		}
+	}
+
+	/* update the tail pointer if any packets were processed */
+	if (likely(nb_tx > 0)) {
+		IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+		txq->tx_tail = tx_id;
+		txq->sw_tail = sw_id;
+	}
+
+	return nb_tx;
+}
+
+#define IDPF_RX_FLEX_DESC_STATUS0_XSUM_S				\
+	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) |		\
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) |		\
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) |	\
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S))
+
+/* Translate the rx descriptor status and error fields to pkt flags */
+static inline uint64_t
+idpf_rxd_to_pkt_flags(uint16_t status_error)
+{
+	uint64_t flags = 0;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S)) == 0))
+		return flags;
+
+	if (likely((status_error & IDPF_RX_FLEX_DESC_STATUS0_XSUM_S) == 0)) {
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)) != 0))
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+	return flags;
+}
+
+static inline void
+idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
+		    uint16_t rx_id)
+{
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+
+	if (nb_hold > rxq->rx_free_thresh) {
+		RX_LOG(DEBUG,
+		       "port_id=%u queue_id=%u rx_tail=%u nb_hold=%u",
+		       rxq->port_id, rxq->queue_id, rx_id, nb_hold);
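+		/* Write the tail one descriptor behind the software head
+		 * so HW never completely catches up with SW.
+		 */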
+		rx_id = (uint16_t)((rx_id == 0) ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+}
+
+uint16_t
+idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	volatile union virtchnl2_rx_desc *rx_ring;
+	volatile union virtchnl2_rx_desc *rxdp;
+	union virtchnl2_rx_desc rxd;
+	struct idpf_rx_queue *rxq;
+	const uint32_t *ptype_tbl;
+	uint16_t rx_id, nb_hold;
+	struct idpf_adapter *ad;
+	uint16_t rx_packet_len;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	uint16_t rx_status0;
+	uint64_t pkt_flags;
+	uint64_t dma_addr;
+	uint64_t ts_ns;
+	uint16_t nb_rx;
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+
+	/* Validate the queue before dereferencing it */
+	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	ad = rxq->adapter;
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+	ptype_tbl = rxq->adapter->ptype_tbl;
+
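+	/* Latch the PTP time registers once per burst; the flag is
+	 * cleared after the first timestamped packet is converted.
+	 */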
+	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0)
+		rxq->hw_register_set = 1;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+		/* Check the DD bit first */
+		if ((rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)) == 0)
+			break;
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(nmb == NULL)) {
+			rte_atomic64_inc(&rxq->rx_stats.mbuf_alloc_failed);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+			       "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+		rxd = *rxdp; /* copy the ring descriptor to a temp variable */
+
+		nb_hold++;
+		rxm = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
+				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		rxm->packet_type =
+			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
+					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+		rxm->ol_flags |= pkt_flags;
+
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+					    rxq->hw_register_set,
+					    rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			rxm->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+	return nb_rx;
+}
+
+static inline int
+idpf_xmit_cleanup(struct idpf_tx_queue *txq)
+{
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	struct idpf_tx_entry *sw_ring = txq->sw_ring;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+	uint16_t i;
+
+	volatile struct idpf_flex_tx_desc *txd = txq->tx_ring;
+
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	/* In the writeback Tx descriptor, the only significant field is the 4-bit DTYPE */
+	if ((txd[desc_to_clean_to].qw1.cmd_dtype &
+	     rte_cpu_to_le_16(IDPF_TXD_QW1_DTYPE_M)) !=
+	    rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE)) {
+		TX_LOG(DEBUG, "TX descriptor %4u is not done "
+		       "(port=%d queue=%d)", desc_to_clean_to,
+		       txq->port_id, txq->queue_id);
+		return -1;
+	}
+
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+					    desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					    last_desc_cleaned);
+
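+	/* Zero the DTYPE so this descriptor is not read as DONE again
+	 * on a later cleanup pass.
+	 */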
+	txd[desc_to_clean_to].qw1.cmd_dtype = 0;
+	txd[desc_to_clean_to].qw1.buf_size = 0;
+	for (i = 0; i < RTE_DIM(txd[desc_to_clean_to].qw1.flex.raw); i++)
+		txd[desc_to_clean_to].qw1.flex.raw[i] = 0;
+
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* TX function */
+uint16_t
+idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts)
+{
+	volatile struct idpf_flex_tx_desc *txd;
+	volatile struct idpf_flex_tx_desc *txr;
+	union idpf_tx_offload tx_offload = {0};
+	struct idpf_tx_entry *txe, *txn;
+	struct idpf_tx_entry *sw_ring;
+	struct idpf_tx_queue *txq;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint64_t buf_dma_addr;
+	uint64_t ol_flags;
+	uint16_t tx_last;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint16_t td_cmd;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint16_t slen;
+
+	nb_tx = 0;
+	txq = tx_queue;
+
+	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
+		return nb_tx;
+
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_free < txq->free_thresh)
+		(void)idpf_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		td_cmd = 0;
+
+		tx_pkt = *tx_pkts++;
+		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = idpf_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals the number of segments of that packet,
+		 * plus 1 context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+		       " tx_first=%u tx_last=%u",
+		       txq->port_id, txq->queue_id, tx_id, tx_last);
+
+		if (nb_used > txq->nb_free) {
+			if (idpf_xmit_cleanup(txq) != 0) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->rs_thresh)) {
+				while (nb_used > txq->nb_free) {
+					if (idpf_xmit_cleanup(txq) != 0) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		if (nb_ctx != 0) {
+			/* Setup TX context descriptor if required */
+			volatile union idpf_flex_tx_ctx_desc *ctx_txd =
+				(volatile union idpf_flex_tx_ctx_desc *)
+				&txr[tx_id];
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf != NULL) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			/* TSO enabled */
+			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+							ctx_txd);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf != NULL)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->qw1.buf_size = slen;
+			txd->qw1.cmd_dtype = rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_DATA <<
+							      IDPF_FLEX_TXD_QW1_DTYPE_S);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* The last packet data descriptor needs End Of Packet (EOP) */
+		td_cmd |= IDPF_TX_FLEX_DESC_CMD_EOP;
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+		if (txq->nb_used >= txq->rs_thresh) {
+			TX_LOG(DEBUG, "Setting RS bit on TXD id="
+			       "%4u (port=%d queue=%d)",
+			       tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= IDPF_TX_FLEX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_used = 0;
+		}
+
+		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+			td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
+
+		txd->qw1.cmd_dtype |= rte_cpu_to_le_16(td_cmd << IDPF_FLEX_TXD_QW1_CMD_S);
+	}
+
+end_of_tx:
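+	/* Make all descriptor writes globally visible before the tail
+	 * register update below.
+	 */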
+	rte_wmb();
+
+	TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+	       txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+/* TX prep functions */
+uint16_t
+idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	       uint16_t nb_pkts)
+{
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+	int ret;
+#endif
+	int i;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		/* For non-TSO packets, nb_segs must not exceed IDPF_TX_MAX_MTU_SEG. */
+		if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0) {
+			if (m->nb_segs > IDPF_TX_MAX_MTU_SEG) {
+				rte_errno = EINVAL;
+				return i;
+			}
+		} else if ((m->tso_segsz < IDPF_MIN_TSO_MSS) ||
+			   (m->tso_segsz > IDPF_MAX_TSO_MSS) ||
+			   (m->pkt_len > IDPF_MAX_TSO_FRAME_SIZE)) {
+			/* An MSS outside this range is considered malicious */
+			rte_errno = EINVAL;
+			return i;
+		}
+
+		if ((ol_flags & IDPF_TX_OFFLOAD_NOTSUP_MASK) != 0) {
+			rte_errno = ENOTSUP;
+			return i;
+		}
+
+		if (m->pkt_len < IDPF_MIN_FRAME_SIZE) {
+			rte_errno = EINVAL;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+#endif
+	}
+
+	return i;
+}
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index c5bb7d48af..827f791505 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -27,8 +27,63 @@
 #define IDPF_TX_OFFLOAD_MULTI_SEGS       RTE_BIT64(15)
 #define IDPF_TX_OFFLOAD_MBUF_FAST_FREE   RTE_BIT64(16)
 
+#define IDPF_TX_MAX_MTU_SEG	10
+
+#define IDPF_MIN_TSO_MSS	88
+#define IDPF_MAX_TSO_MSS	9728
+#define IDPF_MAX_TSO_FRAME_SIZE	262143
+
+#define IDPF_TX_CKSUM_OFFLOAD_MASK (		\
+		RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_L4_MASK |		\
+		RTE_MBUF_F_TX_TCP_SEG)
+
+#define IDPF_TX_OFFLOAD_MASK (			\
+		IDPF_TX_CKSUM_OFFLOAD_MASK |	\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_IPV6)
+
+#define IDPF_TX_OFFLOAD_NOTSUP_MASK \
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
+
+/* MTS */
+#define GLTSYN_CMD_SYNC_0_0	(PF_TIMESYNC_BASE + 0x0)
+#define PF_GLTSYN_SHTIME_0_0	(PF_TIMESYNC_BASE + 0x4)
+#define PF_GLTSYN_SHTIME_L_0	(PF_TIMESYNC_BASE + 0x8)
+#define PF_GLTSYN_SHTIME_H_0	(PF_TIMESYNC_BASE + 0xC)
+#define GLTSYN_ART_L_0		(PF_TIMESYNC_BASE + 0x10)
+#define GLTSYN_ART_H_0		(PF_TIMESYNC_BASE + 0x14)
+#define PF_GLTSYN_SHTIME_0_1	(PF_TIMESYNC_BASE + 0x24)
+#define PF_GLTSYN_SHTIME_L_1	(PF_TIMESYNC_BASE + 0x28)
+#define PF_GLTSYN_SHTIME_H_1	(PF_TIMESYNC_BASE + 0x2C)
+#define PF_GLTSYN_SHTIME_0_2	(PF_TIMESYNC_BASE + 0x44)
+#define PF_GLTSYN_SHTIME_L_2	(PF_TIMESYNC_BASE + 0x48)
+#define PF_GLTSYN_SHTIME_H_2	(PF_TIMESYNC_BASE + 0x4C)
+#define PF_GLTSYN_SHTIME_0_3	(PF_TIMESYNC_BASE + 0x64)
+#define PF_GLTSYN_SHTIME_L_3	(PF_TIMESYNC_BASE + 0x68)
+#define PF_GLTSYN_SHTIME_H_3	(PF_TIMESYNC_BASE + 0x6C)
+
+#define PF_TIMESYNC_BAR4_BASE	0x0E400000
+#define GLTSYN_ENA		(PF_TIMESYNC_BAR4_BASE + 0x90)
+#define GLTSYN_CMD		(PF_TIMESYNC_BAR4_BASE + 0x94)
+#define GLTSYC_TIME_L		(PF_TIMESYNC_BAR4_BASE + 0x104)
+#define GLTSYC_TIME_H		(PF_TIMESYNC_BAR4_BASE + 0x108)
+
+#define GLTSYN_CMD_SYNC_0_4	(PF_TIMESYNC_BAR4_BASE + 0x110)
+#define PF_GLTSYN_SHTIME_L_4	(PF_TIMESYNC_BAR4_BASE + 0x118)
+#define PF_GLTSYN_SHTIME_H_4	(PF_TIMESYNC_BAR4_BASE + 0x11C)
+#define GLTSYN_INCVAL_L		(PF_TIMESYNC_BAR4_BASE + 0x150)
+#define GLTSYN_INCVAL_H		(PF_TIMESYNC_BAR4_BASE + 0x154)
+#define GLTSYN_SHADJ_L		(PF_TIMESYNC_BAR4_BASE + 0x158)
+#define GLTSYN_SHADJ_H		(PF_TIMESYNC_BAR4_BASE + 0x15C)
+
+#define GLTSYN_CMD_SYNC_0_5	(PF_TIMESYNC_BAR4_BASE + 0x130)
+#define PF_GLTSYN_SHTIME_L_5	(PF_TIMESYNC_BAR4_BASE + 0x138)
+#define PF_GLTSYN_SHTIME_H_5	(PF_TIMESYNC_BAR4_BASE + 0x13C)
+
 struct idpf_rx_stats {
-	uint64_t mbuf_alloc_failed;
+	rte_atomic64_t mbuf_alloc_failed;
 };
 
 struct idpf_rx_queue {
@@ -126,6 +181,18 @@ struct idpf_tx_queue {
 	struct idpf_tx_queue *complq;
 };
 
+/* Offload features */
+union idpf_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		/* uint64_t unused : 24; */
+	};
+};
+
 struct idpf_rxq_ops {
 	void (*release_mbufs)(struct idpf_rx_queue *rxq);
 };
@@ -134,6 +201,9 @@ struct idpf_txq_ops {
 	void (*release_mbufs)(struct idpf_tx_queue *txq);
 };
 
+extern int idpf_timestamp_dynfield_offset;
+extern uint64_t idpf_timestamp_dynflag;
+
 __rte_internal
 int idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh);
 __rte_internal
@@ -162,8 +232,25 @@ void idpf_rx_queue_release(void *rxq);
 __rte_internal
 void idpf_tx_queue_release(void *txq);
 __rte_internal
+int idpf_register_ts_mbuf(struct idpf_rx_queue *rxq);
+__rte_internal
 int idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq);
+__rte_internal
+uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			       uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+				uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+				uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 8d98635e46..244c74c209 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -18,7 +18,9 @@ INTERNAL {
 	idpf_ctlq_recv;
 	idpf_ctlq_send;
 	idpf_execute_vc_cmd;
+	idpf_prep_pkts;
 	idpf_read_one_msg;
+	idpf_register_ts_mbuf;
 	idpf_release_rxq_mbufs;
 	idpf_release_txq_mbufs;
 	idpf_reset_single_rx_queue;
@@ -29,6 +31,10 @@ INTERNAL {
 	idpf_reset_split_tx_complq;
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
+	idpf_singleq_recv_pkts;
+	idpf_singleq_xmit_pkts;
+	idpf_splitq_recv_pkts;
+	idpf_splitq_xmit_pkts;
 	idpf_switch_queue;
 	idpf_tx_queue_release;
 	idpf_vc_alloc_vectors;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 734e97ffc2..ee2dec7c7c 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -22,8 +22,6 @@ rte_spinlock_t idpf_adapter_lock;
 struct idpf_adapter_list idpf_adapter_list;
 bool idpf_adapter_list_init;
 
-uint64_t idpf_timestamp_dynflag;
-
 static const char * const idpf_valid_args[] = {
 	IDPF_TX_SINGLE_Q,
 	IDPF_RX_SINGLE_Q,
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 9b40aa4e56..d791d402fb 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -28,7 +28,6 @@
 
 #define IDPF_MIN_BUF_SIZE	1024
 #define IDPF_MAX_FRAME_SIZE	9728
-#define IDPF_MIN_FRAME_SIZE	14
 #define IDPF_DEFAULT_MTU	RTE_ETHER_MTU
 
 #define IDPF_NUM_MACADDR_MAX	64
@@ -78,9 +77,6 @@ struct idpf_adapter_ext {
 	uint16_t cur_vport_nb;
 
 	uint16_t used_vecs_num;
-
-	/* For PTP */
-	uint64_t time_hw;
 };
 
 TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
diff --git a/drivers/net/idpf/idpf_logs.h b/drivers/net/idpf/idpf_logs.h
index d5f778fefe..bf0774b8e4 100644
--- a/drivers/net/idpf/idpf_logs.h
+++ b/drivers/net/idpf/idpf_logs.h
@@ -29,28 +29,4 @@ extern int idpf_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
 
-#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
-#define PMD_RX_LOG(level, ...) \
-	RTE_LOG(level, \
-		PMD, \
-		RTE_FMT("%s(): " \
-			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
-			__func__, \
-			RTE_FMT_TAIL(__VA_ARGS__,)))
-#else
-#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_IDPF_DEBUG_TX
-#define PMD_TX_LOG(level, ...) \
-	RTE_LOG(level, \
-		PMD, \
-		RTE_FMT("%s(): " \
-			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
-			__func__, \
-			RTE_FMT_TAIL(__VA_ARGS__,)))
-#else
-#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
-#endif
-
 #endif /* _IDPF_LOGS_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 852076c235..74bf207c05 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -10,8 +10,6 @@
 #include "idpf_rxtx.h"
 #include "idpf_rxtx_vec_common.h"
 
-static int idpf_timestamp_dynfield_offset = -1;
-
 static uint64_t
 idpf_rx_offload_convert(uint64_t offload)
 {
@@ -501,23 +499,6 @@ idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return ret;
 }
 
-static int
-idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
-{
-	int err;
-	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0) {
-		/* Register mbuf field and flag for Rx timestamp */
-		err = rte_mbuf_dyn_rx_timestamp_register(&idpf_timestamp_dynfield_offset,
-							 &idpf_timestamp_dynflag);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR,
-				    "Cannot register mbuf field/flag for timestamp");
-			return -EINVAL;
-		}
-	}
-	return 0;
-}
-
 int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -762,922 +743,6 @@ idpf_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
-#define IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S				\
-	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) |     \
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) |     \
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) |    \
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
-
-static inline uint64_t
-idpf_splitq_rx_csum_offload(uint8_t err)
-{
-	uint64_t flags = 0;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S)) == 0))
-		return flags;
-
-	if (likely((err & IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S) == 0)) {
-		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
-			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
-		return flags;
-	}
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)) != 0))
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
-
-	return flags;
-}
-
-#define IDPF_RX_FLEX_DESC_ADV_HASH1_S  0
-#define IDPF_RX_FLEX_DESC_ADV_HASH2_S  16
-#define IDPF_RX_FLEX_DESC_ADV_HASH3_S  24
-
-static inline uint64_t
-idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
-			   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
-{
-	uint8_t status_err0_qw0;
-	uint64_t flags = 0;
-
-	status_err0_qw0 = rx_desc->status_err0_qw0;
-
-	if ((status_err0_qw0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) != 0) {
-		flags |= RTE_MBUF_F_RX_RSS_HASH;
-		mb->hash.rss = (rte_le_to_cpu_16(rx_desc->hash1) <<
-				IDPF_RX_FLEX_DESC_ADV_HASH1_S) |
-			((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
-			 IDPF_RX_FLEX_DESC_ADV_HASH2_S) |
-			((uint32_t)(rx_desc->hash3) <<
-			 IDPF_RX_FLEX_DESC_ADV_HASH3_S);
-	}
-
-	return flags;
-}
-
-static void
-idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
-{
-	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
-	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
-	uint16_t nb_refill = rx_bufq->rx_free_thresh;
-	uint16_t nb_desc = rx_bufq->nb_rx_desc;
-	uint16_t next_avail = rx_bufq->rx_tail;
-	struct rte_mbuf *nmb[rx_bufq->rx_free_thresh];
-	struct rte_eth_dev *dev;
-	uint64_t dma_addr;
-	uint16_t delta;
-	int i;
-
-	if (rx_bufq->nb_rx_hold < rx_bufq->rx_free_thresh)
-		return;
-
-	rx_buf_ring = rx_bufq->rx_ring;
-	delta = nb_desc - next_avail;
-	if (unlikely(delta < nb_refill)) {
-		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta) == 0)) {
-			for (i = 0; i < delta; i++) {
-				rx_buf_desc = &rx_buf_ring[next_avail + i];
-				rx_bufq->sw_ring[next_avail + i] = nmb[i];
-				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
-				rx_buf_desc->hdr_addr = 0;
-				rx_buf_desc->pkt_addr = dma_addr;
-			}
-			nb_refill -= delta;
-			next_avail = 0;
-			rx_bufq->nb_rx_hold -= delta;
-		} else {
-			dev = &rte_eth_devices[rx_bufq->port_id];
-			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-				   rx_bufq->port_id, rx_bufq->queue_id);
-			return;
-		}
-	}
-
-	if (nb_desc - next_avail >= nb_refill) {
-		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill) == 0)) {
-			for (i = 0; i < nb_refill; i++) {
-				rx_buf_desc = &rx_buf_ring[next_avail + i];
-				rx_bufq->sw_ring[next_avail + i] = nmb[i];
-				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
-				rx_buf_desc->hdr_addr = 0;
-				rx_buf_desc->pkt_addr = dma_addr;
-			}
-			next_avail += nb_refill;
-			rx_bufq->nb_rx_hold -= nb_refill;
-		} else {
-			dev = &rte_eth_devices[rx_bufq->port_id];
-			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-				   rx_bufq->port_id, rx_bufq->queue_id);
-		}
-	}
-
-	IDPF_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
-
-	rx_bufq->rx_tail = next_avail;
-}
-
-uint16_t
-idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		      uint16_t nb_pkts)
-{
-	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
-	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
-	uint16_t pktlen_gen_bufq_id;
-	struct idpf_rx_queue *rxq;
-	const uint32_t *ptype_tbl;
-	uint8_t status_err0_qw1;
-	struct idpf_adapter_ext *ad;
-	struct rte_mbuf *rxm;
-	uint16_t rx_id_bufq1;
-	uint16_t rx_id_bufq2;
-	uint64_t pkt_flags;
-	uint16_t pkt_len;
-	uint16_t bufq_id;
-	uint16_t gen_id;
-	uint16_t rx_id;
-	uint16_t nb_rx;
-	uint64_t ts_ns;
-
-	nb_rx = 0;
-	rxq = rx_queue;
-	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
-
-	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
-		return nb_rx;
-
-	rx_id = rxq->rx_tail;
-	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
-	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
-	rx_desc_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
-
-	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
-		rxq->hw_register_set = 1;
-
-	while (nb_rx < nb_pkts) {
-		rx_desc = &rx_desc_ring[rx_id];
-
-		pktlen_gen_bufq_id =
-			rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
-		gen_id = (pktlen_gen_bufq_id &
-			  VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
-			VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
-		if (gen_id != rxq->expected_gen_id)
-			break;
-
-		pkt_len = (pktlen_gen_bufq_id &
-			   VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
-			VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
-		if (pkt_len == 0)
-			PMD_RX_LOG(ERR, "Packet length is 0");
-
-		rx_id++;
-		if (unlikely(rx_id == rxq->nb_rx_desc)) {
-			rx_id = 0;
-			rxq->expected_gen_id ^= 1;
-		}
-
-		bufq_id = (pktlen_gen_bufq_id &
-			   VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M) >>
-			VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S;
-		if (bufq_id == 0) {
-			rxm = rxq->bufq1->sw_ring[rx_id_bufq1];
-			rx_id_bufq1++;
-			if (unlikely(rx_id_bufq1 == rxq->bufq1->nb_rx_desc))
-				rx_id_bufq1 = 0;
-			rxq->bufq1->nb_rx_hold++;
-		} else {
-			rxm = rxq->bufq2->sw_ring[rx_id_bufq2];
-			rx_id_bufq2++;
-			if (unlikely(rx_id_bufq2 == rxq->bufq2->nb_rx_desc))
-				rx_id_bufq2 = 0;
-			rxq->bufq2->nb_rx_hold++;
-		}
-
-		rxm->pkt_len = pkt_len;
-		rxm->data_len = pkt_len;
-		rxm->data_off = RTE_PKTMBUF_HEADROOM;
-		rxm->next = NULL;
-		rxm->nb_segs = 1;
-		rxm->port = rxq->port_id;
-		rxm->ol_flags = 0;
-		rxm->packet_type =
-			ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
-				   VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
-				  VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
-
-		status_err0_qw1 = rx_desc->status_err0_qw1;
-		pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
-		pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
-		if (idpf_timestamp_dynflag > 0 &&
-		    (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
-			/* timestamp */
-			ts_ns = idpf_tstamp_convert_32b_64b(ad,
-				rxq->hw_register_set,
-				rte_le_to_cpu_32(rx_desc->ts_high));
-			rxq->hw_register_set = 0;
-			*RTE_MBUF_DYNFIELD(rxm,
-					   idpf_timestamp_dynfield_offset,
-					   rte_mbuf_timestamp_t *) = ts_ns;
-			rxm->ol_flags |= idpf_timestamp_dynflag;
-		}
-
-		rxm->ol_flags |= pkt_flags;
-
-		rx_pkts[nb_rx++] = rxm;
-	}
-
-	if (nb_rx > 0) {
-		rxq->rx_tail = rx_id;
-		if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
-			rxq->bufq1->rx_next_avail = rx_id_bufq1;
-		if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
-			rxq->bufq2->rx_next_avail = rx_id_bufq2;
-
-		idpf_split_rx_bufq_refill(rxq->bufq1);
-		idpf_split_rx_bufq_refill(rxq->bufq2);
-	}
-
-	return nb_rx;
-}
-
-static inline void
-idpf_split_tx_free(struct idpf_tx_queue *cq)
-{
-	volatile struct idpf_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
-	volatile struct idpf_splitq_tx_compl_desc *txd;
-	uint16_t next = cq->tx_tail;
-	struct idpf_tx_entry *txe;
-	struct idpf_tx_queue *txq;
-	uint16_t gen, qid, q_head;
-	uint16_t nb_desc_clean;
-	uint8_t ctype;
-
-	txd = &compl_ring[next];
-	gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
-		IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S;
-	if (gen != cq->expected_gen_id)
-		return;
-
-	ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
-		IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> IDPF_TXD_COMPLQ_COMPL_TYPE_S;
-	qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
-		IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S;
-	q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
-	txq = cq->txqs[qid - cq->tx_start_qid];
-
-	switch (ctype) {
-	case IDPF_TXD_COMPLT_RE:
-		/* clean to q_head which indicates be fetched txq desc id + 1.
-		 * TODO: need to refine and remove the if condition.
-		 */
-		if (unlikely(q_head % 32)) {
-			PMD_DRV_LOG(ERR, "unexpected desc (head = %u) completion.",
-						q_head);
-			return;
-		}
-		if (txq->last_desc_cleaned > q_head)
-			nb_desc_clean = (txq->nb_tx_desc - txq->last_desc_cleaned) +
-				q_head;
-		else
-			nb_desc_clean = q_head - txq->last_desc_cleaned;
-		txq->nb_free += nb_desc_clean;
-		txq->last_desc_cleaned = q_head;
-		break;
-	case IDPF_TXD_COMPLT_RS:
-		/* q_head indicates sw_id when ctype is 2 */
-		txe = &txq->sw_ring[q_head];
-		if (txe->mbuf != NULL) {
-			rte_pktmbuf_free_seg(txe->mbuf);
-			txe->mbuf = NULL;
-		}
-		break;
-	default:
-		PMD_DRV_LOG(ERR, "unknown completion type.");
-		return;
-	}
-
-	if (++next == cq->nb_tx_desc) {
-		next = 0;
-		cq->expected_gen_id ^= 1;
-	}
-
-	cq->tx_tail = next;
-}
-
-/* Check if the context descriptor is needed for TX offloading */
-static inline uint16_t
-idpf_calc_context_desc(uint64_t flags)
-{
-	if ((flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
-		return 1;
-
-	return 0;
-}
-
-/* set TSO context descriptor
- */
-static inline void
-idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
-			union idpf_tx_offload tx_offload,
-			volatile union idpf_flex_tx_ctx_desc *ctx_desc)
-{
-	uint16_t cmd_dtype;
-	uint32_t tso_len;
-	uint8_t hdr_len;
-
-	if (tx_offload.l4_len == 0) {
-		PMD_TX_LOG(DEBUG, "L4 length set to 0");
-		return;
-	}
-
-	hdr_len = tx_offload.l2_len +
-		tx_offload.l3_len +
-		tx_offload.l4_len;
-	cmd_dtype = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX |
-		IDPF_TX_FLEX_CTX_DESC_CMD_TSO;
-	tso_len = mbuf->pkt_len - hdr_len;
-
-	ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
-	ctx_desc->tso.qw0.hdr_len = hdr_len;
-	ctx_desc->tso.qw0.mss_rt =
-		rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
-				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
-	ctx_desc->tso.qw0.flex_tlen =
-		rte_cpu_to_le_32(tso_len &
-				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
-}
-
-uint16_t
-idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		      uint16_t nb_pkts)
-{
-	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
-	volatile struct idpf_flex_tx_sched_desc *txr;
-	volatile struct idpf_flex_tx_sched_desc *txd;
-	struct idpf_tx_entry *sw_ring;
-	union idpf_tx_offload tx_offload = {0};
-	struct idpf_tx_entry *txe, *txn;
-	uint16_t nb_used, tx_id, sw_id;
-	struct rte_mbuf *tx_pkt;
-	uint16_t nb_to_clean;
-	uint16_t nb_tx = 0;
-	uint64_t ol_flags;
-	uint16_t nb_ctx;
-
-	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
-		return nb_tx;
-
-	txr = txq->desc_ring;
-	sw_ring = txq->sw_ring;
-	tx_id = txq->tx_tail;
-	sw_id = txq->sw_tail;
-	txe = &sw_ring[sw_id];
-
-	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
-		tx_pkt = tx_pkts[nb_tx];
-
-		if (txq->nb_free <= txq->free_thresh) {
-			/* TODO: Need to refine
-			 * 1. free and clean: Better to decide a clean destination instead of
-			 * loop times. And don't free mbuf when RS got immediately, free when
-			 * transmit or according to the clean destination.
-			 * Now, just ignore the RE write back, free mbuf when get RS
-			 * 2. out-of-order rewrite back haven't be supported, SW head and HW head
-			 * need to be separated.
-			 **/
-			nb_to_clean = 2 * txq->rs_thresh;
-			while (nb_to_clean--)
-				idpf_split_tx_free(txq->complq);
-		}
-
-		if (txq->nb_free < tx_pkt->nb_segs)
-			break;
-
-		ol_flags = tx_pkt->ol_flags;
-		tx_offload.l2_len = tx_pkt->l2_len;
-		tx_offload.l3_len = tx_pkt->l3_len;
-		tx_offload.l4_len = tx_pkt->l4_len;
-		tx_offload.tso_segsz = tx_pkt->tso_segsz;
-		/* Calculate the number of context descriptors needed. */
-		nb_ctx = idpf_calc_context_desc(ol_flags);
-		nb_used = tx_pkt->nb_segs + nb_ctx;
-
-		/* context descriptor */
-		if (nb_ctx != 0) {
-			volatile union idpf_flex_tx_ctx_desc *ctx_desc =
-			(volatile union idpf_flex_tx_ctx_desc *)&txr[tx_id];
-
-			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
-				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
-							ctx_desc);
-
-			tx_id++;
-			if (tx_id == txq->nb_tx_desc)
-				tx_id = 0;
-		}
-
-		do {
-			txd = &txr[tx_id];
-			txn = &sw_ring[txe->next_id];
-			txe->mbuf = tx_pkt;
-
-			/* Setup TX descriptor */
-			txd->buf_addr =
-				rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
-			txd->qw1.cmd_dtype =
-				rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE);
-			txd->qw1.rxr_bufsize = tx_pkt->data_len;
-			txd->qw1.compl_tag = sw_id;
-			tx_id++;
-			if (tx_id == txq->nb_tx_desc)
-				tx_id = 0;
-			sw_id = txe->next_id;
-			txe = txn;
-			tx_pkt = tx_pkt->next;
-		} while (tx_pkt);
-
-		/* fill the last descriptor with End of Packet (EOP) bit */
-		txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_EOP;
-
-		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
-			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
-		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
-		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
-
-		if (txq->nb_used >= 32) {
-			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
-			/* Update txq RE bit counters */
-			txq->nb_used = 0;
-		}
-	}
-
-	/* update the tail pointer if any packets were processed */
-	if (likely(nb_tx > 0)) {
-		IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
-		txq->tx_tail = tx_id;
-		txq->sw_tail = sw_id;
-	}
-
-	return nb_tx;
-}
-
-#define IDPF_RX_FLEX_DESC_STATUS0_XSUM_S				\
-	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) |		\
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) |		\
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) |	\
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S))
-
-/* Translate the rx descriptor status and error fields to pkt flags */
-static inline uint64_t
-idpf_rxd_to_pkt_flags(uint16_t status_error)
-{
-	uint64_t flags = 0;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S)) == 0))
-		return flags;
-
-	if (likely((status_error & IDPF_RX_FLEX_DESC_STATUS0_XSUM_S) == 0)) {
-		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
-			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
-		return flags;
-	}
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)) != 0))
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
-
-	return flags;
-}
-
-static inline void
-idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
-		    uint16_t rx_id)
-{
-	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
-
-	if (nb_hold > rxq->rx_free_thresh) {
-		PMD_RX_LOG(DEBUG,
-			   "port_id=%u queue_id=%u rx_tail=%u nb_hold=%u",
-			   rxq->port_id, rxq->queue_id, rx_id, nb_hold);
-		rx_id = (uint16_t)((rx_id == 0) ?
-				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
-		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
-		nb_hold = 0;
-	}
-	rxq->nb_rx_hold = nb_hold;
-}
-
-uint16_t
-idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		       uint16_t nb_pkts)
-{
-	volatile union virtchnl2_rx_desc *rx_ring;
-	volatile union virtchnl2_rx_desc *rxdp;
-	union virtchnl2_rx_desc rxd;
-	struct idpf_rx_queue *rxq;
-	const uint32_t *ptype_tbl;
-	uint16_t rx_id, nb_hold;
-	struct rte_eth_dev *dev;
-	struct idpf_adapter_ext *ad;
-	uint16_t rx_packet_len;
-	struct rte_mbuf *rxm;
-	struct rte_mbuf *nmb;
-	uint16_t rx_status0;
-	uint64_t pkt_flags;
-	uint64_t dma_addr;
-	uint64_t ts_ns;
-	uint16_t nb_rx;
-
-	nb_rx = 0;
-	nb_hold = 0;
-	rxq = rx_queue;
-
-	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
-
-	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
-		return nb_rx;
-
-	rx_id = rxq->rx_tail;
-	rx_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
-
-	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
-		rxq->hw_register_set = 1;
-
-	while (nb_rx < nb_pkts) {
-		rxdp = &rx_ring[rx_id];
-		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
-
-		/* Check the DD bit first */
-		if ((rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)) == 0)
-			break;
-
-		nmb = rte_mbuf_raw_alloc(rxq->mp);
-		if (unlikely(nmb == NULL)) {
-			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed++;
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
-				   "queue_id=%u", rxq->port_id, rxq->queue_id);
-			break;
-		}
-		rxd = *rxdp; /* copy descriptor in ring to temp variable*/
-
-		nb_hold++;
-		rxm = rxq->sw_ring[rx_id];
-		rxq->sw_ring[rx_id] = nmb;
-		rx_id++;
-		if (unlikely(rx_id == rxq->nb_rx_desc))
-			rx_id = 0;
-
-		/* Prefetch next mbuf */
-		rte_prefetch0(rxq->sw_ring[rx_id]);
-
-		/* When next RX descriptor is on a cache line boundary,
-		 * prefetch the next 4 RX descriptors and next 8 pointers
-		 * to mbufs.
-		 */
-		if ((rx_id & 0x3) == 0) {
-			rte_prefetch0(&rx_ring[rx_id]);
-			rte_prefetch0(rxq->sw_ring[rx_id]);
-		}
-		dma_addr =
-			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
-		rxdp->read.hdr_addr = 0;
-		rxdp->read.pkt_addr = dma_addr;
-
-		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
-				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
-
-		rxm->data_off = RTE_PKTMBUF_HEADROOM;
-		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
-		rxm->nb_segs = 1;
-		rxm->next = NULL;
-		rxm->pkt_len = rx_packet_len;
-		rxm->data_len = rx_packet_len;
-		rxm->port = rxq->port_id;
-		rxm->ol_flags = 0;
-		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
-		rxm->packet_type =
-			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
-					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
-
-		rxm->ol_flags |= pkt_flags;
-
-		if (idpf_timestamp_dynflag > 0 &&
-		   (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0) {
-			/* timestamp */
-			ts_ns = idpf_tstamp_convert_32b_64b(ad,
-				rxq->hw_register_set,
-				rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
-			rxq->hw_register_set = 0;
-			*RTE_MBUF_DYNFIELD(rxm,
-					   idpf_timestamp_dynfield_offset,
-					   rte_mbuf_timestamp_t *) = ts_ns;
-			rxm->ol_flags |= idpf_timestamp_dynflag;
-		}
-
-		rx_pkts[nb_rx++] = rxm;
-	}
-	rxq->rx_tail = rx_id;
-
-	idpf_update_rx_tail(rxq, nb_hold, rx_id);
-
-	return nb_rx;
-}
-
-static inline int
-idpf_xmit_cleanup(struct idpf_tx_queue *txq)
-{
-	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
-	struct idpf_tx_entry *sw_ring = txq->sw_ring;
-	uint16_t nb_tx_desc = txq->nb_tx_desc;
-	uint16_t desc_to_clean_to;
-	uint16_t nb_tx_to_clean;
-	uint16_t i;
-
-	volatile struct idpf_flex_tx_desc *txd = txq->tx_ring;
-
-	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
-	if (desc_to_clean_to >= nb_tx_desc)
-		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
-
-	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
-	/* In the writeback Tx desccriptor, the only significant fields are the 4-bit DTYPE */
-	if ((txd[desc_to_clean_to].qw1.cmd_dtype &
-			rte_cpu_to_le_16(IDPF_TXD_QW1_DTYPE_M)) !=
-			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE)) {
-		PMD_TX_LOG(DEBUG, "TX descriptor %4u is not done "
-			   "(port=%d queue=%d)", desc_to_clean_to,
-			   txq->port_id, txq->queue_id);
-		return -1;
-	}
-
-	if (last_desc_cleaned > desc_to_clean_to)
-		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
-					    desc_to_clean_to);
-	else
-		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
-					last_desc_cleaned);
-
-	txd[desc_to_clean_to].qw1.cmd_dtype = 0;
-	txd[desc_to_clean_to].qw1.buf_size = 0;
-	for (i = 0; i < RTE_DIM(txd[desc_to_clean_to].qw1.flex.raw); i++)
-		txd[desc_to_clean_to].qw1.flex.raw[i] = 0;
-
-	txq->last_desc_cleaned = desc_to_clean_to;
-	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
-
-	return 0;
-}
-
-/* TX function */
-uint16_t
-idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		       uint16_t nb_pkts)
-{
-	volatile struct idpf_flex_tx_desc *txd;
-	volatile struct idpf_flex_tx_desc *txr;
-	union idpf_tx_offload tx_offload = {0};
-	struct idpf_tx_entry *txe, *txn;
-	struct idpf_tx_entry *sw_ring;
-	struct idpf_tx_queue *txq;
-	struct rte_mbuf *tx_pkt;
-	struct rte_mbuf *m_seg;
-	uint64_t buf_dma_addr;
-	uint64_t ol_flags;
-	uint16_t tx_last;
-	uint16_t nb_used;
-	uint16_t nb_ctx;
-	uint16_t td_cmd;
-	uint16_t tx_id;
-	uint16_t nb_tx;
-	uint16_t slen;
-
-	nb_tx = 0;
-	txq = tx_queue;
-
-	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
-		return nb_tx;
-
-	sw_ring = txq->sw_ring;
-	txr = txq->tx_ring;
-	tx_id = txq->tx_tail;
-	txe = &sw_ring[tx_id];
-
-	/* Check if the descriptor ring needs to be cleaned. */
-	if (txq->nb_free < txq->free_thresh)
-		(void)idpf_xmit_cleanup(txq);
-
-	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
-		td_cmd = 0;
-
-		tx_pkt = *tx_pkts++;
-		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
-
-		ol_flags = tx_pkt->ol_flags;
-		tx_offload.l2_len = tx_pkt->l2_len;
-		tx_offload.l3_len = tx_pkt->l3_len;
-		tx_offload.l4_len = tx_pkt->l4_len;
-		tx_offload.tso_segsz = tx_pkt->tso_segsz;
-		/* Calculate the number of context descriptors needed. */
-		nb_ctx = idpf_calc_context_desc(ol_flags);
-
-		/* The number of descriptors that must be allocated for
-		 * a packet equals to the number of the segments of that
-		 * packet plus 1 context descriptor if needed.
-		 */
-		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
-		tx_last = (uint16_t)(tx_id + nb_used - 1);
-
-		/* Circular ring */
-		if (tx_last >= txq->nb_tx_desc)
-			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
-
-		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
-			   " tx_first=%u tx_last=%u",
-			   txq->port_id, txq->queue_id, tx_id, tx_last);
-
-		if (nb_used > txq->nb_free) {
-			if (idpf_xmit_cleanup(txq) != 0) {
-				if (nb_tx == 0)
-					return 0;
-				goto end_of_tx;
-			}
-			if (unlikely(nb_used > txq->rs_thresh)) {
-				while (nb_used > txq->nb_free) {
-					if (idpf_xmit_cleanup(txq) != 0) {
-						if (nb_tx == 0)
-							return 0;
-						goto end_of_tx;
-					}
-				}
-			}
-		}
-
-		if (nb_ctx != 0) {
-			/* Setup TX context descriptor if required */
-			volatile union idpf_flex_tx_ctx_desc *ctx_txd =
-				(volatile union idpf_flex_tx_ctx_desc *)
-							&txr[tx_id];
-
-			txn = &sw_ring[txe->next_id];
-			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
-			if (txe->mbuf != NULL) {
-				rte_pktmbuf_free_seg(txe->mbuf);
-				txe->mbuf = NULL;
-			}
-
-			/* TSO enabled */
-			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
-				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
-							ctx_txd);
-
-			txe->last_id = tx_last;
-			tx_id = txe->next_id;
-			txe = txn;
-		}
-
-		m_seg = tx_pkt;
-		do {
-			txd = &txr[tx_id];
-			txn = &sw_ring[txe->next_id];
-
-			if (txe->mbuf != NULL)
-				rte_pktmbuf_free_seg(txe->mbuf);
-			txe->mbuf = m_seg;
-
-			/* Setup TX Descriptor */
-			slen = m_seg->data_len;
-			buf_dma_addr = rte_mbuf_data_iova(m_seg);
-			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
-			txd->qw1.buf_size = slen;
-			txd->qw1.cmd_dtype = rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_DATA <<
-							      IDPF_FLEX_TXD_QW1_DTYPE_S);
-
-			txe->last_id = tx_last;
-			tx_id = txe->next_id;
-			txe = txn;
-			m_seg = m_seg->next;
-		} while (m_seg);
-
-		/* The last packet data descriptor needs End Of Packet (EOP) */
-		td_cmd |= IDPF_TX_FLEX_DESC_CMD_EOP;
-		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
-		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
-
-		if (txq->nb_used >= txq->rs_thresh) {
-			PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
-				   "%4u (port=%d queue=%d)",
-				   tx_last, txq->port_id, txq->queue_id);
-
-			td_cmd |= IDPF_TX_FLEX_DESC_CMD_RS;
-
-			/* Update txq RS bit counters */
-			txq->nb_used = 0;
-		}
-
-		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
-			td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
-
-		txd->qw1.cmd_dtype |= rte_cpu_to_le_16(td_cmd << IDPF_FLEX_TXD_QW1_CMD_S);
-	}
-
-end_of_tx:
-	rte_wmb();
-
-	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
-		   txq->port_id, txq->queue_id, tx_id, nb_tx);
-
-	IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
-	txq->tx_tail = tx_id;
-
-	return nb_tx;
-}
-
-/* TX prep functions */
-uint16_t
-idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
-	       uint16_t nb_pkts)
-{
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-	int ret;
-#endif
-	int i;
-	uint64_t ol_flags;
-	struct rte_mbuf *m;
-
-	for (i = 0; i < nb_pkts; i++) {
-		m = tx_pkts[i];
-		ol_flags = m->ol_flags;
-
-		/* Check condition for nb_segs > IDPF_TX_MAX_MTU_SEG. */
-		if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0) {
-			if (m->nb_segs > IDPF_TX_MAX_MTU_SEG) {
-				rte_errno = EINVAL;
-				return i;
-			}
-		} else if ((m->tso_segsz < IDPF_MIN_TSO_MSS) ||
-			   (m->tso_segsz > IDPF_MAX_TSO_MSS) ||
-			   (m->pkt_len > IDPF_MAX_TSO_FRAME_SIZE)) {
-			/* MSS outside the range are considered malicious */
-			rte_errno = EINVAL;
-			return i;
-		}
-
-		if ((ol_flags & IDPF_TX_OFFLOAD_NOTSUP_MASK) != 0) {
-			rte_errno = ENOTSUP;
-			return i;
-		}
-
-		if (m->pkt_len < IDPF_MIN_FRAME_SIZE) {
-			rte_errno = EINVAL;
-			return i;
-		}
-
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-		ret = rte_validate_tx_offload(m);
-		if (ret != 0) {
-			rte_errno = -ret;
-			return i;
-		}
-#endif
-	}
-
-	return i;
-}
-
 static void __rte_cold
 release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
 {
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 4efbf10295..eab363c3e7 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -8,41 +8,6 @@
 #include <idpf_common_rxtx.h>
 #include "idpf_ethdev.h"
 
-/* MTS */
-#define GLTSYN_CMD_SYNC_0_0	(PF_TIMESYNC_BASE + 0x0)
-#define PF_GLTSYN_SHTIME_0_0	(PF_TIMESYNC_BASE + 0x4)
-#define PF_GLTSYN_SHTIME_L_0	(PF_TIMESYNC_BASE + 0x8)
-#define PF_GLTSYN_SHTIME_H_0	(PF_TIMESYNC_BASE + 0xC)
-#define GLTSYN_ART_L_0		(PF_TIMESYNC_BASE + 0x10)
-#define GLTSYN_ART_H_0		(PF_TIMESYNC_BASE + 0x14)
-#define PF_GLTSYN_SHTIME_0_1	(PF_TIMESYNC_BASE + 0x24)
-#define PF_GLTSYN_SHTIME_L_1	(PF_TIMESYNC_BASE + 0x28)
-#define PF_GLTSYN_SHTIME_H_1	(PF_TIMESYNC_BASE + 0x2C)
-#define PF_GLTSYN_SHTIME_0_2	(PF_TIMESYNC_BASE + 0x44)
-#define PF_GLTSYN_SHTIME_L_2	(PF_TIMESYNC_BASE + 0x48)
-#define PF_GLTSYN_SHTIME_H_2	(PF_TIMESYNC_BASE + 0x4C)
-#define PF_GLTSYN_SHTIME_0_3	(PF_TIMESYNC_BASE + 0x64)
-#define PF_GLTSYN_SHTIME_L_3	(PF_TIMESYNC_BASE + 0x68)
-#define PF_GLTSYN_SHTIME_H_3	(PF_TIMESYNC_BASE + 0x6C)
-
-#define PF_TIMESYNC_BAR4_BASE	0x0E400000
-#define GLTSYN_ENA		(PF_TIMESYNC_BAR4_BASE + 0x90)
-#define GLTSYN_CMD		(PF_TIMESYNC_BAR4_BASE + 0x94)
-#define GLTSYC_TIME_L		(PF_TIMESYNC_BAR4_BASE + 0x104)
-#define GLTSYC_TIME_H		(PF_TIMESYNC_BAR4_BASE + 0x108)
-
-#define GLTSYN_CMD_SYNC_0_4	(PF_TIMESYNC_BAR4_BASE + 0x110)
-#define PF_GLTSYN_SHTIME_L_4	(PF_TIMESYNC_BAR4_BASE + 0x118)
-#define PF_GLTSYN_SHTIME_H_4	(PF_TIMESYNC_BAR4_BASE + 0x11C)
-#define GLTSYN_INCVAL_L		(PF_TIMESYNC_BAR4_BASE + 0x150)
-#define GLTSYN_INCVAL_H		(PF_TIMESYNC_BAR4_BASE + 0x154)
-#define GLTSYN_SHADJ_L		(PF_TIMESYNC_BAR4_BASE + 0x158)
-#define GLTSYN_SHADJ_H		(PF_TIMESYNC_BAR4_BASE + 0x15C)
-
-#define GLTSYN_CMD_SYNC_0_5	(PF_TIMESYNC_BAR4_BASE + 0x130)
-#define PF_GLTSYN_SHTIME_L_5	(PF_TIMESYNC_BAR4_BASE + 0x138)
-#define PF_GLTSYN_SHTIME_H_5	(PF_TIMESYNC_BAR4_BASE + 0x13C)
-
 /* In QLEN must be whole number of 32 descriptors. */
 #define IDPF_ALIGN_RING_DESC	32
 #define IDPF_MIN_RING_DESC	32
@@ -62,44 +27,10 @@
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
-#define IDPF_TX_MAX_MTU_SEG	10
-
-#define IDPF_MIN_TSO_MSS	88
-#define IDPF_MAX_TSO_MSS	9728
-#define IDPF_MAX_TSO_FRAME_SIZE	262143
-#define IDPF_TX_MAX_MTU_SEG     10
-
-#define IDPF_TX_CKSUM_OFFLOAD_MASK (		\
-		RTE_MBUF_F_TX_IP_CKSUM |	\
-		RTE_MBUF_F_TX_L4_MASK |		\
-		RTE_MBUF_F_TX_TCP_SEG)
-
-#define IDPF_TX_OFFLOAD_MASK (			\
-		IDPF_TX_CKSUM_OFFLOAD_MASK |	\
-		RTE_MBUF_F_TX_IPV4 |		\
-		RTE_MBUF_F_TX_IPV6)
-
-#define IDPF_TX_OFFLOAD_NOTSUP_MASK \
-		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
-
-extern uint64_t idpf_timestamp_dynflag;
-
 struct idpf_tx_vec_entry {
 	struct rte_mbuf *mbuf;
 };
 
-/* Offload features */
-union idpf_tx_offload {
-	uint64_t data;
-	struct {
-		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
-		uint64_t l3_len:9; /* L3 (IP) Header Length. */
-		uint64_t l4_len:8; /* L4 Header Length. */
-		uint64_t tso_segsz:16; /* TCP TSO segment size */
-		/* uint64_t unused : 24; */
-	};
-};
-
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
@@ -118,77 +49,14 @@ int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
-uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-				uint16_t nb_pkts);
 uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 				       uint16_t nb_pkts);
-uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-			       uint16_t nb_pkts);
-uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-				uint16_t nb_pkts);
 uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 				       uint16_t nb_pkts);
-uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			       uint16_t nb_pkts);
-uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			uint16_t nb_pkts);
 
 void idpf_stop_queues(struct rte_eth_dev *dev);
 
 void idpf_set_rx_function(struct rte_eth_dev *dev);
 void idpf_set_tx_function(struct rte_eth_dev *dev);
 
-#define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
-/* Helper function to convert a 32b nanoseconds timestamp to 64b. */
-static inline uint64_t
-
-idpf_tstamp_convert_32b_64b(struct idpf_adapter_ext *ad, uint32_t flag,
-			    uint32_t in_timestamp)
-{
-#ifdef RTE_ARCH_X86_64
-	struct idpf_hw *hw = &ad->base.hw;
-	const uint64_t mask = 0xFFFFFFFF;
-	uint32_t hi, lo, lo2, delta;
-	uint64_t ns;
-
-	if (flag != 0) {
-		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
-		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_EXEC_CMD_M |
-			       PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
-		lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
-		hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
-		/*
-		 * On typical system, the delta between lo and lo2 is ~1000ns,
-		 * so 10000 seems a large-enough but not overly-big guard band.
-		 */
-		if (lo > (UINT32_MAX - IDPF_TIMESYNC_REG_WRAP_GUARD_BAND))
-			lo2 = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
-		else
-			lo2 = lo;
-
-		if (lo2 < lo) {
-			lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
-			hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
-		}
-
-		ad->time_hw = ((uint64_t)hi << 32) | lo;
-	}
-
-	delta = (in_timestamp - (uint32_t)(ad->time_hw & mask));
-	if (delta > (mask / 2)) {
-		delta = ((uint32_t)(ad->time_hw & mask) - in_timestamp);
-		ns = ad->time_hw - delta;
-	} else {
-		ns = ad->time_hw + delta;
-	}
-
-	return ns;
-#else /* !RTE_ARCH_X86_64 */
-	RTE_SET_USED(ad);
-	RTE_SET_USED(flag);
-	RTE_SET_USED(in_timestamp);
-	return 0;
-#endif /* RTE_ARCH_X86_64 */
-}
-
 #endif /* _IDPF_RXTX_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index 71a6c59823..ea949635e0 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -38,8 +38,8 @@ idpf_singleq_rearm_common(struct idpf_rx_queue *rxq)
 						dma_addr0);
 			}
 		}
-		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
-			IDPF_RXQ_REARM_THRESH;
+		rte_atomic64_add(&rxq->rx_stats.mbuf_alloc_failed,
+				 IDPF_RXQ_REARM_THRESH);
 		return;
 	}
 	struct rte_mbuf *mb0, *mb1, *mb2, *mb3;
@@ -168,8 +168,8 @@ idpf_singleq_rearm(struct idpf_rx_queue *rxq)
 							 dma_addr0);
 				}
 			}
-			rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
-					IDPF_RXQ_REARM_THRESH;
+			rte_atomic64_add(&rxq->rx_stats.mbuf_alloc_failed,
+					 IDPF_RXQ_REARM_THRESH);
 			return;
 		}
 	}
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v4 14/15] common/idpf: add vec queue setup
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common modle beilei.xing
                     ` (12 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 13/15] common/idpf: add Rx and Tx data path beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-01-17  8:06   ` [PATCH v4 15/15] common/idpf: add avx512 for single queue model beilei.xing
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common modle beilei.xing
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move vector queue setup for single queue model to common module.
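
The setup helper is exported via version.map, so a consuming PMD can wire
it into its Rx queue configuration. Below is a minimal sketch of such a
caller; the wrapper function is hypothetical, only
idpf_singleq_rx_vec_setup() comes from this patch:

	#include <idpf_common_rxtx.h>

	/* Hypothetical PMD-side hook: switch a single-queue model Rx
	 * queue to the vector path. idpf_singleq_rx_vec_setup() installs
	 * the vector mbuf-release ops and precomputes
	 * rxq->mbuf_initializer from a template mbuf.
	 */
	static int
	example_rx_queue_use_vec_path(struct idpf_rx_queue *rxq)
	{
		return idpf_singleq_rx_vec_setup(rxq);
	}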

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 57 ++++++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h |  2 +
 drivers/common/idpf/version.map        |  1 +
 drivers/net/idpf/idpf_rxtx.c           | 57 --------------------------
 drivers/net/idpf/idpf_rxtx.h           |  1 -
 5 files changed, 60 insertions(+), 58 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 459057f20e..bc95fef6bc 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1399,3 +1399,60 @@ idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	return i;
 }
+
+static void __rte_cold
+release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
+{
+	const uint16_t mask = rxq->nb_rx_desc - 1;
+	uint16_t i;
+
+	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	if (rxq->rxrearm_nb == 0) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i] != NULL)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	} else {
+		for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask) {
+			if (rxq->sw_ring[i] != NULL)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	}
+
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static const struct idpf_rxq_ops def_singleq_rx_ops_vec = {
+	.release_mbufs = release_rxq_mbufs_vec,
+};
+
+static inline int
+idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
+{
+	uintptr_t p;
+	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+	mb_def.nb_segs = 1;
+	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+	mb_def.port = rxq->port_id;
+	rte_mbuf_refcnt_set(&mb_def, 1);
+
+	/* prevent compiler reordering: rearm_data covers previous fields */
+	rte_compiler_barrier();
+	p = (uintptr_t)&mb_def.rearm_data;
+	rxq->mbuf_initializer = *(uint64_t *)p;
+	return 0;
+}
+
+int __rte_cold
+idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
+{
+	rxq->ops = &def_singleq_rx_ops_vec;
+	return idpf_singleq_rx_vec_setup_default(rxq);
+}
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 827f791505..74d6081638 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -252,5 +252,7 @@ uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 __rte_internal
 uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
+__rte_internal
+int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 244c74c209..0f3f4aa758 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -32,6 +32,7 @@ INTERNAL {
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
 	idpf_singleq_recv_pkts;
+	idpf_singleq_rx_vec_setup;
 	idpf_singleq_xmit_pkts;
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 74bf207c05..6155531e69 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -743,63 +743,6 @@ idpf_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
-static void __rte_cold
-release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
-{
-	const uint16_t mask = rxq->nb_rx_desc - 1;
-	uint16_t i;
-
-	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
-		return;
-
-	/* free all mbufs that are valid in the ring */
-	if (rxq->rxrearm_nb == 0) {
-		for (i = 0; i < rxq->nb_rx_desc; i++) {
-			if (rxq->sw_ring[i] != NULL)
-				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-		}
-	} else {
-		for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask) {
-			if (rxq->sw_ring[i] != NULL)
-				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-		}
-	}
-
-	rxq->rxrearm_nb = rxq->nb_rx_desc;
-
-	/* set all entries to NULL */
-	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
-}
-
-static const struct idpf_rxq_ops def_singleq_rx_ops_vec = {
-	.release_mbufs = release_rxq_mbufs_vec,
-};
-
-static inline int
-idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
-{
-	uintptr_t p;
-	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
-
-	mb_def.nb_segs = 1;
-	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
-	mb_def.port = rxq->port_id;
-	rte_mbuf_refcnt_set(&mb_def, 1);
-
-	/* prevent compiler reordering: rearm_data covers previous fields */
-	rte_compiler_barrier();
-	p = (uintptr_t)&mb_def.rearm_data;
-	rxq->mbuf_initializer = *(uint64_t *)p;
-	return 0;
-}
-
-int __rte_cold
-idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
-{
-	rxq->ops = &def_singleq_rx_ops_vec;
-	return idpf_singleq_rx_vec_setup_default(rxq);
-}
-
 void
 idpf_set_rx_function(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index eab363c3e7..a985dc2cf5 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -44,7 +44,6 @@ void idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 int idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
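
The vec setup moved above packs nb_segs, port and data_off, together with the refcnt set by rte_mbuf_refcnt_set(), into the 64-bit word addressed by the mbuf's rearm_data marker, so a vector Rx rearm path can reset a freshly allocated mbuf with a single 64-bit store of rxq->mbuf_initializer. A minimal sketch of how such a rearm loop typically consumes the template (the helper name is illustrative, not part of this series):

    #include <stdint.h>
    #include <rte_mbuf.h>

    static inline void
    rearm_with_template(struct rte_mbuf **mbufs, uint16_t n,
                        uint64_t mbuf_initializer)
    {
            uint16_t i;

            for (i = 0; i < n; i++)
                    /* one store resets refcnt/nb_segs/port/data_off */
                    *(uint64_t *)&mbufs[i]->rearm_data = mbuf_initializer;
    }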

* [PATCH v4 15/15] common/idpf: add avx512 for single queue model
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common module beilei.xing
                     ` (13 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 14/15] common/idpf: add vec queue setup beilei.xing
@ 2023-01-17  8:06   ` beilei.xing
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-01-17  8:06 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move avx512 vector path for single queue to common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.h        | 20 +++++++++++++
 .../idpf/idpf_common_rxtx_avx512.c}           |  4 +--
 drivers/common/idpf/meson.build               | 30 +++++++++++++++++++
 drivers/common/idpf/version.map               |  3 ++
 drivers/net/idpf/idpf_rxtx.h                  | 13 --------
 drivers/net/idpf/meson.build                  | 17 -----------
 6 files changed, 55 insertions(+), 32 deletions(-)
 rename drivers/{net/idpf/idpf_rxtx_vec_avx512.c => common/idpf/idpf_common_rxtx_avx512.c} (99%)

diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 74d6081638..6e3ee7de25 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -47,6 +47,12 @@
 #define IDPF_TX_OFFLOAD_NOTSUP_MASK \
 		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
 
+/* used for Vector PMD */
+#define IDPF_VPMD_RX_MAX_BURST		32
+#define IDPF_VPMD_TX_MAX_BURST		32
+#define IDPF_VPMD_DESCS_PER_LOOP	4
+#define IDPF_RXQ_REARM_THRESH		64
+
 /* MTS */
 #define GLTSYN_CMD_SYNC_0_0	(PF_TIMESYNC_BASE + 0x0)
 #define PF_GLTSYN_SHTIME_0_0	(PF_TIMESYNC_BASE + 0x4)
@@ -193,6 +199,10 @@ union idpf_tx_offload {
 	};
 };
 
+struct idpf_tx_vec_entry {
+	struct rte_mbuf *mbuf;
+};
+
 struct idpf_rxq_ops {
 	void (*release_mbufs)(struct idpf_rx_queue *rxq);
 };
@@ -254,5 +264,15 @@ uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
 __rte_internal
 int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
+__rte_internal
+int idpf_singleq_tx_vec_setup_avx512(struct idpf_tx_queue *txq);
+__rte_internal
+uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue,
+				       struct rte_mbuf **rx_pkts,
+				       uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue,
+				       struct rte_mbuf **tx_pkts,
+				       uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
similarity index 99%
rename from drivers/net/idpf/idpf_rxtx_vec_avx512.c
rename to drivers/common/idpf/idpf_common_rxtx_avx512.c
index ea949635e0..6ae0e14d2f 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -2,9 +2,9 @@
  * Copyright(c) 2022 Intel Corporation
  */
 
-#include "idpf_rxtx_vec_common.h"
-
 #include <rte_vect.h>
+#include <idpf_common_device.h>
+#include <idpf_common_rxtx.h>
 
 #ifndef __INTEL_COMPILER
 #pragma GCC diagnostic ignored "-Wcast-qual"
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index 5ee071fdb2..1dafafeb2f 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -9,4 +9,34 @@ sources = files(
     'idpf_common_virtchnl.c',
 )
 
+if arch_subdir == 'x86'
+    idpf_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    idpf_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if idpf_avx512_cpu_support == true or idpf_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+        avx512_args = [cflags, '-mavx512f', '-mavx512bw']
+        if cc.has_argument('-march=skylake-avx512')
+            avx512_args += '-march=skylake-avx512'
+        endif
+        idpf_common_avx512_lib = static_library(
+            'idpf_common_avx512_lib',
+            'idpf_common_rxtx_avx512.c',
+            dependencies: [
+                    static_rte_mbuf,
+            ],
+            include_directories: includes,
+            c_args: avx512_args)
+        objs += idpf_common_avx512_lib.extract_objects('idpf_common_rxtx_avx512.c')
+    endif
+endif
+
 subdir('base')
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 0f3f4aa758..a6b9eefdb5 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -32,8 +32,11 @@ INTERNAL {
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
 	idpf_singleq_recv_pkts;
+	idpf_singleq_recv_pkts_avx512;
 	idpf_singleq_rx_vec_setup;
+	idpf_singleq_tx_vec_setup_avx512;
 	idpf_singleq_xmit_pkts;
+	idpf_singleq_xmit_pkts_avx512;
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
 	idpf_switch_queue;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index a985dc2cf5..3a5084dfd6 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -19,23 +19,14 @@
 #define IDPF_DEFAULT_RX_FREE_THRESH	32
 
 /* used for Vector PMD */
-#define IDPF_VPMD_RX_MAX_BURST	32
-#define IDPF_VPMD_TX_MAX_BURST	32
-#define IDPF_VPMD_DESCS_PER_LOOP	4
-#define IDPF_RXQ_REARM_THRESH	64
 
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
-struct idpf_tx_vec_entry {
-	struct rte_mbuf *mbuf;
-};
-
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
-int idpf_singleq_tx_vec_setup_avx512(struct idpf_tx_queue *txq);
 int idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
@@ -48,10 +39,6 @@ int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
-uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
-				       uint16_t nb_pkts);
-uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
-				       uint16_t nb_pkts);
 
 void idpf_stop_queues(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
index 378925166f..98f8ceb77b 100644
--- a/drivers/net/idpf/meson.build
+++ b/drivers/net/idpf/meson.build
@@ -34,22 +34,5 @@ if arch_subdir == 'x86'
 
     if idpf_avx512_cpu_support == true or idpf_avx512_cc_support == true
         cflags += ['-DCC_AVX512_SUPPORT']
-        avx512_args = [cflags, '-mavx512f', '-mavx512bw']
-        if cc.has_argument('-march=skylake-avx512')
-            avx512_args += '-march=skylake-avx512'
-        endif
-        idpf_avx512_lib = static_library(
-            'idpf_avx512_lib',
-            'idpf_rxtx_vec_avx512.c',
-            dependencies: [
-                    static_rte_common_idpf,
-                    static_rte_ethdev,
-                    static_rte_bus_pci,
-                    static_rte_kvargs,
-                    static_rte_hash,
-            ],
-            include_directories: includes,
-            c_args: avx512_args)
-        objs += idpf_avx512_lib.extract_objects('idpf_rxtx_vec_avx512.c')
     endif
 endif
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
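
In the meson change above, the AVX512 file is built into a separate static library with -mavx512f/-mavx512bw and its objects are folded back into the common library via extract_objects(), while -DCC_AVX512_SUPPORT gates all references at compile time. A minimal sketch, assuming a hypothetical use_avx512 capability check and that the common header is on the include path, of how a burst-function selector such as idpf_set_rx_function() can fall back to the scalar path when the flag is absent:

    #include <stdbool.h>
    #include <rte_ethdev.h>
    #include <idpf_common_rxtx.h>

    /* sketch only: pick the single-queue Rx burst function */
    static void
    select_singleq_rx_burst(struct rte_eth_dev *dev, bool use_avx512)
    {
    #ifdef CC_AVX512_SUPPORT
            if (use_avx512) {
                    dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
                    return;
            }
    #endif
            RTE_SET_USED(use_avx512);
            dev->rx_pkt_burst = idpf_singleq_recv_pkts;
    }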

* RE: [PATCH v4 03/15] common/idpf: add virtual channel functions
  2023-01-17  8:06   ` [PATCH v4 03/15] common/idpf: add virtual channel functions beilei.xing
@ 2023-01-18  4:00     ` Zhang, Qi Z
  2023-01-18  4:10       ` Zhang, Qi Z
  0 siblings, 1 reply; 79+ messages in thread
From: Zhang, Qi Z @ 2023-01-18  4:00 UTC (permalink / raw)
  To: Xing, Beilei, Wu, Jingjing; +Cc: dev, Wu, Wenjun1



> -----Original Message-----
> From: Xing, Beilei <beilei.xing@intel.com>
> Sent: Tuesday, January 17, 2023 4:06 PM
> To: Wu, Jingjing <jingjing.wu@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Wu, Wenjun1 <wenjun1.wu@intel.com>
> Subject: [PATCH v4 03/15] common/idpf: add virtual channel functions
> 
> From: Beilei Xing <beilei.xing@intel.com>
> 
> Move most of the virtual channel functions to idpf common module.
> 
> Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
> Signed-off-by: Beilei Xing <beilei.xing@intel.com>
> ---
>  drivers/common/idpf/base/meson.build       |   2 +-
>  drivers/common/idpf/idpf_common_device.c   |   8 +
>  drivers/common/idpf/idpf_common_device.h   |  61 ++
>  drivers/common/idpf/idpf_common_logs.h     |  23 +
>  drivers/common/idpf/idpf_common_virtchnl.c | 815
> +++++++++++++++++++++
>  drivers/common/idpf/idpf_common_virtchnl.h |  48 ++
>  drivers/common/idpf/meson.build            |   5 +
>  drivers/common/idpf/version.map            |  20 +-
>  drivers/net/idpf/idpf_ethdev.c             |   9 +-
>  drivers/net/idpf/idpf_ethdev.h             |  85 +--
>  drivers/net/idpf/idpf_vchnl.c              | 815 +--------------------
>  11 files changed, 983 insertions(+), 908 deletions(-)
>  create mode 100644 drivers/common/idpf/idpf_common_device.c
>  create mode 100644 drivers/common/idpf/idpf_common_logs.h
>  create mode 100644 drivers/common/idpf/idpf_common_virtchnl.c
>  create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h
> 
> diff --git a/drivers/common/idpf/base/meson.build
> b/drivers/common/idpf/base/meson.build
> index 183587b51a..dc4b93c198 100644
> --- a/drivers/common/idpf/base/meson.build
> +++ b/drivers/common/idpf/base/meson.build
> @@ -1,7 +1,7 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2022 Intel Corporation
> 
> -sources = files(
> +sources += files(
>          'idpf_common.c',
>          'idpf_controlq.c',
>          'idpf_controlq_setup.c',
> diff --git a/drivers/common/idpf/idpf_common_device.c
> b/drivers/common/idpf/idpf_common_device.c
> new file mode 100644
> index 0000000000..5062780362
> --- /dev/null
> +++ b/drivers/common/idpf/idpf_common_device.c
> @@ -0,0 +1,8 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#include <rte_log.h>
> +#include <idpf_common_device.h>
> +
> +RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
> diff --git a/drivers/common/idpf/idpf_common_device.h
> b/drivers/common/idpf/idpf_common_device.h
> index b7fff84b25..a7537281d1 100644
> --- a/drivers/common/idpf/idpf_common_device.h
> +++ b/drivers/common/idpf/idpf_common_device.h
> @@ -7,6 +7,12 @@
> 
>  #include <base/idpf_prototype.h>
>  #include <base/virtchnl2.h>
> +#include <idpf_common_logs.h>
> +
> +#define IDPF_CTLQ_LEN		64
> +#define IDPF_DFLT_MBX_BUF_SIZE	4096
> +
> +#define IDPF_MAX_PKT_TYPE	1024
> 
>  struct idpf_adapter {
>  	struct idpf_hw hw;
> @@ -76,4 +82,59 @@ struct idpf_vport {
>  	bool stopped;
>  };
> 
> +/* Message type read in virtual channel from PF */
> +enum idpf_vc_result {
> +	IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
> +	IDPF_MSG_NON,      /* Read nothing from admin queue */
> +	IDPF_MSG_SYS,      /* Read system msg from admin queue */
> +	IDPF_MSG_CMD,      /* Read async command result */
> +};
> +
> +/* structure used for sending and checking response of virtchnl ops */
> +struct idpf_cmd_info {
> +	uint32_t ops;
> +	uint8_t *in_args;       /* buffer for sending */
> +	uint32_t in_args_size;  /* buffer size for sending */
> +	uint8_t *out_buffer;    /* buffer for response */
> +	uint32_t out_size;      /* buffer size for response */
> +};
> +
> +/* notify current command done. Only call in case execute
> + * _atomic_set_cmd successfully.
> + */
> +static inline void
> +notify_cmd(struct idpf_adapter *adapter, int msg_ret)
> +{
> +	adapter->cmd_retval = msg_ret;
> +	/* Return value may be checked in another thread, need to ensure coherence. */
> +	rte_wmb();
> +	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
> +}
> +
> +/* clear current command. Only call in case execute
> + * _atomic_set_cmd successfully.
> + */
> +static inline void
> +clear_cmd(struct idpf_adapter *adapter)
> +{
> +	/* Return value may be checked in another thread, need to ensure coherence. */
> +	rte_wmb();
> +	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
> +	adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
> +}
> +
> +/* Check there is pending cmd in execution. If none, set new command. */
> +static inline bool
> +atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
> +{
> +	uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
> +	bool ret = __atomic_compare_exchange(&adapter->pend_cmd,
> &op_unk, &ops,
> +					    0, __ATOMIC_ACQUIRE,
> __ATOMIC_ACQUIRE);
> +
> +	if (!ret)
> +		DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
> +
> +	return !ret;
> +}
> +
>  #endif /* _IDPF_COMMON_DEVICE_H_ */
> diff --git a/drivers/common/idpf/idpf_common_logs.h
> b/drivers/common/idpf/idpf_common_logs.h
> new file mode 100644
> index 0000000000..fe36562769
> --- /dev/null
> +++ b/drivers/common/idpf/idpf_common_logs.h
> @@ -0,0 +1,23 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#ifndef _IDPF_COMMON_LOGS_H_
> +#define _IDPF_COMMON_LOGS_H_
> +
> +#include <rte_log.h>
> +
> +extern int idpf_common_logtype;
> +
> +#define DRV_LOG_RAW(level, ...)					\
> +	rte_log(RTE_LOG_ ## level,				\
> +		idpf_common_logtype,				\
> +		RTE_FMT("%s(): "				\
> +			RTE_FMT_HEAD(__VA_ARGS__,) "\n",	\
> +			__func__,				\
> +			RTE_FMT_TAIL(__VA_ARGS__,)))
> +
> +#define DRV_LOG(level, fmt, args...)		\
> +	DRV_LOG_RAW(level, fmt, ## args)
> +
> +#endif /* _IDPF_COMMON_LOGS_H_ */
> diff --git a/drivers/common/idpf/idpf_common_virtchnl.c
> b/drivers/common/idpf/idpf_common_virtchnl.c
> new file mode 100644
> index 0000000000..2e94a95876
> --- /dev/null
> +++ b/drivers/common/idpf/idpf_common_virtchnl.c
> @@ -0,0 +1,815 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#include <idpf_common_virtchnl.h>
> +#include <idpf_common_logs.h>
> +
> +static int
> +idpf_vc_clean(struct idpf_adapter *adapter)
> +{
> +	struct idpf_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
> +	uint16_t num_q_msg = IDPF_CTLQ_LEN;
> +	struct idpf_dma_mem *dma_mem;
> +	int err;
> +	uint32_t i;
> +
> +	for (i = 0; i < 10; i++) {
> +		err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg,
> q_msg);
> +		msleep(20);
> +		if (num_q_msg > 0)
> +			break;
> +	}
> +	if (err != 0)
> +		return err;
> +
> +	/* Empty queue is not an error */
> +	for (i = 0; i < num_q_msg; i++) {
> +		dma_mem = q_msg[i]->ctx.indirect.payload;
> +		if (dma_mem != NULL) {
> +			idpf_free_dma_mem(&adapter->hw, dma_mem);
> +			rte_free(dma_mem);
> +		}
> +		rte_free(q_msg[i]);
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
> +		 uint16_t msg_size, uint8_t *msg)
> +{
> +	struct idpf_ctlq_msg *ctlq_msg;
> +	struct idpf_dma_mem *dma_mem;
> +	int err;
> +
> +	err = idpf_vc_clean(adapter);
> +	if (err != 0)
> +		goto err;
> +
> +	ctlq_msg = rte_zmalloc(NULL, sizeof(struct idpf_ctlq_msg), 0);
> +	if (ctlq_msg == NULL) {
> +		err = -ENOMEM;
> +		goto err;
> +	}
> +
> +	dma_mem = rte_zmalloc(NULL, sizeof(struct idpf_dma_mem), 0);
> +	if (dma_mem == NULL) {
> +		err = -ENOMEM;
> +		goto dma_mem_error;
> +	}
> +
> +	dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
> +	idpf_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
> +	if (dma_mem->va == NULL) {
> +		err = -ENOMEM;
> +		goto dma_alloc_error;
> +	}
> +
> +	memcpy(dma_mem->va, msg, msg_size);
> +
> +	ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_pf;
> +	ctlq_msg->func_id = 0;
> +	ctlq_msg->data_len = msg_size;
> +	ctlq_msg->cookie.mbx.chnl_opcode = op;
> +	ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
> +	ctlq_msg->ctx.indirect.payload = dma_mem;
> +
> +	err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
> +	if (err != 0)
> +		goto send_error;
> +
> +	return 0;
> +
> +send_error:
> +	idpf_free_dma_mem(&adapter->hw, dma_mem);
> +dma_alloc_error:
> +	rte_free(dma_mem);
> +dma_mem_error:
> +	rte_free(ctlq_msg);
> +err:
> +	return err;
> +}
> +
> +static enum idpf_vc_result
> +idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
> +		      uint8_t *buf)
> +{
> +	struct idpf_hw *hw = &adapter->hw;
> +	struct idpf_ctlq_msg ctlq_msg;
> +	struct idpf_dma_mem *dma_mem = NULL;
> +	enum idpf_vc_result result = IDPF_MSG_NON;
> +	uint32_t opcode;
> +	uint16_t pending = 1;
> +	int ret;
> +
> +	ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
> +	if (ret != 0) {
> +		DRV_LOG(DEBUG, "Can't read msg from AQ");
> +		if (ret != -ENOMSG)
> +			result = IDPF_MSG_ERR;
> +		return result;
> +	}
> +
> +	rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
> +
> +	opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
> +	adapter->cmd_retval =
> rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
> +
> +	DRV_LOG(DEBUG, "CQ from CP carries opcode %u, retval %d",
> +		opcode, adapter->cmd_retval);
> +
> +	if (opcode == VIRTCHNL2_OP_EVENT) {
> +		struct virtchnl2_event *ve = ctlq_msg.ctx.indirect.payload->va;
> +
> +		result = IDPF_MSG_SYS;
> +		switch (ve->event) {
> +		case VIRTCHNL2_EVENT_LINK_CHANGE:
> +			/* TBD */
> +			break;
> +		default:
> +			DRV_LOG(ERR, "%s: Unknown event %d from CP",
> +				__func__, ve->event);
> +			break;
> +		}
> +	} else {
> +		/* async reply msg on command issued by pf previously */
> +		result = IDPF_MSG_CMD;
> +		if (opcode != adapter->pend_cmd) {
> +			DRV_LOG(WARNING, "command mismatch,
> expect %u, get %u",
> +				adapter->pend_cmd, opcode);
> +			result = IDPF_MSG_ERR;
> +		}
> +	}
> +
> +	if (ctlq_msg.data_len != 0)
> +		dma_mem = ctlq_msg.ctx.indirect.payload;
> +	else
> +		pending = 0;
> +
> +	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
> +	if (ret != 0 && dma_mem != NULL)
> +		idpf_free_dma_mem(hw, dma_mem);
> +
> +	return result;
> +}
> +
> +#define MAX_TRY_TIMES 200
> +#define ASQ_DELAY_MS  10
> +
> +int
> +idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
> +		  uint8_t *buf)
> +{
> +	int err = 0;
> +	int i = 0;
> +	int ret;
> +
> +	do {
> +		ret = idpf_read_msg_from_cp(adapter, buf_len, buf);
> +		if (ret == IDPF_MSG_CMD)
> +			break;
> +		rte_delay_ms(ASQ_DELAY_MS);
> +	} while (i++ < MAX_TRY_TIMES);
> +	if (i >= MAX_TRY_TIMES ||
> +	    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> +		err = -EBUSY;
> +		DRV_LOG(ERR, "No response or return failure (%d) for
> cmd %d",
> +			adapter->cmd_retval, ops);
> +	}
> +
> +	return err;
> +}
> +
> +int
> +idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
> +{
> +	int err = 0;
> +	int i = 0;
> +	int ret;
> +
> +	if (atomic_set_cmd(adapter, args->ops))
> +		return -EINVAL;
> +
> +	ret = idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args);
> +	if (ret != 0) {
> +		DRV_LOG(ERR, "fail to send cmd %d", args->ops);
> +		clear_cmd(adapter);
> +		return ret;
> +	}
> +
> +	switch (args->ops) {
> +	case VIRTCHNL_OP_VERSION:
> +	case VIRTCHNL2_OP_GET_CAPS:
> +	case VIRTCHNL2_OP_CREATE_VPORT:
> +	case VIRTCHNL2_OP_DESTROY_VPORT:
> +	case VIRTCHNL2_OP_SET_RSS_KEY:
> +	case VIRTCHNL2_OP_SET_RSS_LUT:
> +	case VIRTCHNL2_OP_SET_RSS_HASH:
> +	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
> +	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
> +	case VIRTCHNL2_OP_ENABLE_QUEUES:
> +	case VIRTCHNL2_OP_DISABLE_QUEUES:
> +	case VIRTCHNL2_OP_ENABLE_VPORT:
> +	case VIRTCHNL2_OP_DISABLE_VPORT:
> +	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
> +	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
> +	case VIRTCHNL2_OP_ALLOC_VECTORS:
> +	case VIRTCHNL2_OP_DEALLOC_VECTORS:
> +		/* for init virtchnl ops, need to poll the response */
> +		err = idpf_read_one_msg(adapter, args->ops, args->out_size,
> args->out_buffer);
> +		clear_cmd(adapter);
> +		break;
> +	case VIRTCHNL2_OP_GET_PTYPE_INFO:
> +		/* for multiple response messages,
> +		 * do not handle the response here.
> +		 */
> +		break;
> +	default:
> +		/* For other virtchnl ops in running time,
> +		 * wait for the cmd done flag.
> +		 */
> +		do {
> +			if (adapter->pend_cmd == VIRTCHNL_OP_UNKNOWN)
> +				break;
> +			rte_delay_ms(ASQ_DELAY_MS);
> +			/* If no msg is read, or a sys event is read, continue */
> +		} while (i++ < MAX_TRY_TIMES);
> +		/* If no response is received, clear the command */
> +		if (i >= MAX_TRY_TIMES  ||
> +		    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> +			err = -EBUSY;
> +			DRV_LOG(ERR, "No response or return failure (%d)
> for cmd %d",
> +				adapter->cmd_retval, args->ops);
> +			clear_cmd(adapter);
> +		}
> +		break;
> +	}
> +
> +	return err;
> +}
> +
> +int
> +idpf_vc_check_api_version(struct idpf_adapter *adapter)
> +{
> +	struct virtchnl2_version_info version, *pver;
> +	struct idpf_cmd_info args;
> +	int err;
> +
> +	memset(&version, 0, sizeof(struct virtchnl2_version_info));
> +	version.major = VIRTCHNL2_VERSION_MAJOR_2;
> +	version.minor = VIRTCHNL2_VERSION_MINOR_0;
> +
> +	args.ops = VIRTCHNL_OP_VERSION;
> +	args.in_args = (uint8_t *)&version;
> +	args.in_args_size = sizeof(version);
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0) {
> +		DRV_LOG(ERR,
> +			"Failed to execute command of
> VIRTCHNL_OP_VERSION");
> +		return err;
> +	}
> +
> +	pver = (struct virtchnl2_version_info *)args.out_buffer;
> +	adapter->virtchnl_version = *pver;
> +
> +	if (adapter->virtchnl_version.major !=
> VIRTCHNL2_VERSION_MAJOR_2 ||
> +	    adapter->virtchnl_version.minor !=
> VIRTCHNL2_VERSION_MINOR_0) {
> +		DRV_LOG(ERR, "VIRTCHNL API version mismatch:(%u.%u)-
> (%u.%u)",
> +			adapter->virtchnl_version.major,
> +			adapter->virtchnl_version.minor,
> +			VIRTCHNL2_VERSION_MAJOR_2,
> +			VIRTCHNL2_VERSION_MINOR_0);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +int
> +idpf_vc_get_caps(struct idpf_adapter *adapter)
> +{
> +	struct virtchnl2_get_capabilities caps_msg;
> +	struct idpf_cmd_info args;
> +	int err;
> +
> +	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
> +
> +	caps_msg.csum_caps =
> +		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4          |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP      |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP      |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP     |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP      |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP      |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP     |
> +		VIRTCHNL2_CAP_TX_CSUM_GENERIC          |
> +		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4          |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP      |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP      |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP     |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP      |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP      |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP     |
> +		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
> +
> +	caps_msg.rss_caps =
> +		VIRTCHNL2_CAP_RSS_IPV4_TCP             |
> +		VIRTCHNL2_CAP_RSS_IPV4_UDP             |
> +		VIRTCHNL2_CAP_RSS_IPV4_SCTP            |
> +		VIRTCHNL2_CAP_RSS_IPV4_OTHER           |
> +		VIRTCHNL2_CAP_RSS_IPV6_TCP             |
> +		VIRTCHNL2_CAP_RSS_IPV6_UDP             |
> +		VIRTCHNL2_CAP_RSS_IPV6_SCTP            |
> +		VIRTCHNL2_CAP_RSS_IPV6_OTHER           |
> +		VIRTCHNL2_CAP_RSS_IPV4_AH              |
> +		VIRTCHNL2_CAP_RSS_IPV4_ESP             |
> +		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP          |
> +		VIRTCHNL2_CAP_RSS_IPV6_AH              |
> +		VIRTCHNL2_CAP_RSS_IPV6_ESP             |
> +		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
> +
> +	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR;
> +
> +	args.ops = VIRTCHNL2_OP_GET_CAPS;
> +	args.in_args = (uint8_t *)&caps_msg;
> +	args.in_args_size = sizeof(caps_msg);
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0) {
> +		DRV_LOG(ERR,
> +			"Failed to execute command of
> VIRTCHNL2_OP_GET_CAPS");
> +		return err;
> +	}
> +
> +	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
> +
> +	return 0;
> +}
> +
> +int
> +idpf_vc_create_vport(struct idpf_vport *vport,
> +		     struct virtchnl2_create_vport *vport_req_info)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_create_vport vport_msg;
> +	struct idpf_cmd_info args;
> +	int err = -1;
> +
> +	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
> +	vport_msg.vport_type = vport_req_info->vport_type;
> +	vport_msg.txq_model = vport_req_info->txq_model;
> +	vport_msg.rxq_model = vport_req_info->rxq_model;
> +	vport_msg.num_tx_q = vport_req_info->num_tx_q;
> +	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
> +	vport_msg.num_rx_q = vport_req_info->num_rx_q;
> +	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
> +
> +	memset(&args, 0, sizeof(args));
> +	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
> +	args.in_args = (uint8_t *)&vport_msg;
> +	args.in_args_size = sizeof(vport_msg);
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0) {
> +		DRV_LOG(ERR,
> +			"Failed to execute command of
> VIRTCHNL2_OP_CREATE_VPORT");
> +		return err;
> +	}
> +
> +	rte_memcpy(vport->vport_info, args.out_buffer,
> IDPF_DFLT_MBX_BUF_SIZE);
> +	return 0;
> +}
> +
> +int
> +idpf_vc_destroy_vport(struct idpf_vport *vport)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_vport vc_vport;
> +	struct idpf_cmd_info args;
> +	int err;
> +
> +	vc_vport.vport_id = vport->vport_id;
> +
> +	memset(&args, 0, sizeof(args));
> +	args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
> +	args.in_args = (uint8_t *)&vc_vport;
> +	args.in_args_size = sizeof(vc_vport);
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0)
> +		DRV_LOG(ERR, "Failed to execute command of
> VIRTCHNL2_OP_DESTROY_VPORT");
> +
> +	return err;
> +}
> +
> +int
> +idpf_vc_set_rss_key(struct idpf_vport *vport)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_rss_key *rss_key;
> +	struct idpf_cmd_info args;
> +	int len, err;
> +
> +	len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
> +		(vport->rss_key_size - 1);
> +	rss_key = rte_zmalloc("rss_key", len, 0);
> +	if (rss_key == NULL)
> +		return -ENOMEM;
> +
> +	rss_key->vport_id = vport->vport_id;
> +	rss_key->key_len = vport->rss_key_size;
> +	rte_memcpy(rss_key->key, vport->rss_key,
> +		   sizeof(rss_key->key[0]) * vport->rss_key_size);
> +
> +	memset(&args, 0, sizeof(args));
> +	args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
> +	args.in_args = (uint8_t *)rss_key;
> +	args.in_args_size = len;
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0)
> +		DRV_LOG(ERR, "Failed to execute command of
> VIRTCHNL2_OP_SET_RSS_KEY");
> +
> +	rte_free(rss_key);
> +	return err;
> +}
> +
> +int
> +idpf_vc_set_rss_lut(struct idpf_vport *vport)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_rss_lut *rss_lut;
> +	struct idpf_cmd_info args;
> +	int len, err;
> +
> +	len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
> +		(vport->rss_lut_size - 1);
> +	rss_lut = rte_zmalloc("rss_lut", len, 0);
> +	if (rss_lut == NULL)
> +		return -ENOMEM;
> +
> +	rss_lut->vport_id = vport->vport_id;
> +	rss_lut->lut_entries = vport->rss_lut_size;
> +	rte_memcpy(rss_lut->lut, vport->rss_lut,
> +		   sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
> +
> +	memset(&args, 0, sizeof(args));
> +	args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
> +	args.in_args = (uint8_t *)rss_lut;
> +	args.in_args_size = len;
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0)
> +		DRV_LOG(ERR, "Failed to execute command of
> VIRTCHNL2_OP_SET_RSS_LUT");
> +
> +	rte_free(rss_lut);
> +	return err;
> +}
> +
> +int
> +idpf_vc_set_rss_hash(struct idpf_vport *vport)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_rss_hash rss_hash;
> +	struct idpf_cmd_info args;
> +	int err;
> +
> +	memset(&rss_hash, 0, sizeof(rss_hash));
> +	rss_hash.ptype_groups = vport->rss_hf;
> +	rss_hash.vport_id = vport->vport_id;
> +
> +	memset(&args, 0, sizeof(args));
> +	args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
> +	args.in_args = (uint8_t *)&rss_hash;
> +	args.in_args_size = sizeof(rss_hash);
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0)
> +		DRV_LOG(ERR, "Failed to execute command of
> OP_SET_RSS_HASH");
> +
> +	return err;
> +}
> +
> +int
> +idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq,
> bool map)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_queue_vector_maps *map_info;
> +	struct virtchnl2_queue_vector *vecmap;
> +	struct idpf_cmd_info args;
> +	int len, i, err = 0;
> +
> +	len = sizeof(struct virtchnl2_queue_vector_maps) +
> +		(nb_rxq - 1) * sizeof(struct virtchnl2_queue_vector);
> +
> +	map_info = rte_zmalloc("map_info", len, 0);
> +	if (map_info == NULL)
> +		return -ENOMEM;
> +
> +	map_info->vport_id = vport->vport_id;
> +	map_info->num_qv_maps = nb_rxq;
> +	for (i = 0; i < nb_rxq; i++) {
> +		vecmap = &map_info->qv_maps[i];
> +		vecmap->queue_id = vport->qv_map[i].queue_id;
> +		vecmap->vector_id = vport->qv_map[i].vector_id;
> +		vecmap->itr_idx = VIRTCHNL2_ITR_IDX_0;
> +		vecmap->queue_type = VIRTCHNL2_QUEUE_TYPE_RX;
> +	}
> +
> +	args.ops = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR :
> +		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR;
> +	args.in_args = (uint8_t *)map_info;
> +	args.in_args_size = len;
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0)
> +		DRV_LOG(ERR, "Failed to execute command of
> VIRTCHNL2_OP_%s_QUEUE_VECTOR",
> +			map ? "MAP" : "UNMAP");
> +
> +	rte_free(map_info);
> +	return err;
> +}
> +
> +int
> +idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_alloc_vectors *alloc_vec;
> +	struct idpf_cmd_info args;
> +	int err, len;
> +
> +	len = sizeof(struct virtchnl2_alloc_vectors) +
> +		(num_vectors - 1) * sizeof(struct virtchnl2_vector_chunk);
> +	alloc_vec = rte_zmalloc("alloc_vec", len, 0);
> +	if (alloc_vec == NULL)
> +		return -ENOMEM;
> +
> +	alloc_vec->num_vectors = num_vectors;
> +
> +	args.ops = VIRTCHNL2_OP_ALLOC_VECTORS;
> +	args.in_args = (uint8_t *)alloc_vec;
> +	args.in_args_size = sizeof(struct virtchnl2_alloc_vectors);
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0)
> +		DRV_LOG(ERR, "Failed to execute command
> VIRTCHNL2_OP_ALLOC_VECTORS");
> +
> +	if (vport->recv_vectors == NULL) {
> +		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
> +		if (vport->recv_vectors == NULL) {
> +			rte_free(alloc_vec);
> +			return -ENOMEM;
> +		}
> +	}
> +
> +	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
> +	rte_free(alloc_vec);
> +	return err;
> +}
> +
> +int
> +idpf_vc_dealloc_vectors(struct idpf_vport *vport)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_alloc_vectors *alloc_vec;
> +	struct virtchnl2_vector_chunks *vcs;
> +	struct idpf_cmd_info args;
> +	int err, len;
> +
> +	alloc_vec = vport->recv_vectors;
> +	vcs = &alloc_vec->vchunks;
> +
> +	len = sizeof(struct virtchnl2_vector_chunks) +
> +		(vcs->num_vchunks - 1) * sizeof(struct
> virtchnl2_vector_chunk);
> +
> +	args.ops = VIRTCHNL2_OP_DEALLOC_VECTORS;
> +	args.in_args = (uint8_t *)vcs;
> +	args.in_args_size = len;
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0)
> +		DRV_LOG(ERR, "Failed to execute command
> VIRTCHNL2_OP_DEALLOC_VECTORS");
> +
> +	return err;
> +}
> +
> +static int
> +idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
> +			  uint32_t type, bool on)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_del_ena_dis_queues *queue_select;
> +	struct virtchnl2_queue_chunk *queue_chunk;
> +	struct idpf_cmd_info args;
> +	int err, len;
> +
> +	len = sizeof(struct virtchnl2_del_ena_dis_queues);
> +	queue_select = rte_zmalloc("queue_select", len, 0);
> +	if (queue_select == NULL)
> +		return -ENOMEM;
> +
> +	queue_chunk = queue_select->chunks.chunks;
> +	queue_select->chunks.num_chunks = 1;
> +	queue_select->vport_id = vport->vport_id;
> +
> +	queue_chunk->type = type;
> +	queue_chunk->start_queue_id = qid;
> +	queue_chunk->num_queues = 1;
> +
> +	args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
> +		VIRTCHNL2_OP_DISABLE_QUEUES;
> +	args.in_args = (uint8_t *)queue_select;
> +	args.in_args_size = len;
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0)
> +		DRV_LOG(ERR, "Failed to execute command of
> VIRTCHNL2_OP_%s_QUEUES",
> +			on ? "ENABLE" : "DISABLE");
> +
> +	rte_free(queue_select);
> +	return err;
> +}
> +
> +int
> +idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
> +		  bool rx, bool on)
> +{
> +	uint32_t type;
> +	int err, queue_id;
> +
> +	/* switch txq/rxq */
> +	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX :
> VIRTCHNL2_QUEUE_TYPE_TX;
> +
> +	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
> +		queue_id = vport->chunks_info.rx_start_qid + qid;
> +	else
> +		queue_id = vport->chunks_info.tx_start_qid + qid;
> +	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> +	if (err != 0)
> +		return err;
> +
> +	/* switch tx completion queue */
> +	if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> +		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
> +		queue_id = vport->chunks_info.tx_compl_start_qid + qid;
> +		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> +		if (err != 0)
> +			return err;
> +	}
> +
> +	/* switch rx buffer queue */
> +	if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> +		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
> +		queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
> +		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> +		if (err != 0)
> +			return err;
> +		queue_id++;
> +		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> +		if (err != 0)
> +			return err;
> +	}
> +
> +	return err;
> +}
> +
> +#define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
> +int
> +idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_del_ena_dis_queues *queue_select;
> +	struct virtchnl2_queue_chunk *queue_chunk;
> +	uint32_t type;
> +	struct idpf_cmd_info args;
> +	uint16_t num_chunks;
> +	int err, len;
> +
> +	num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
> +	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
> +		num_chunks++;
> +	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
> +		num_chunks++;
> +
> +	len = sizeof(struct virtchnl2_del_ena_dis_queues) +
> +		sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
> +	queue_select = rte_zmalloc("queue_select", len, 0);
> +	if (queue_select == NULL)
> +		return -ENOMEM;
> +
> +	queue_chunk = queue_select->chunks.chunks;
> +	queue_select->chunks.num_chunks = num_chunks;
> +	queue_select->vport_id = vport->vport_id;
> +
> +	type = VIRTCHNL_QUEUE_TYPE_RX;
> +	queue_chunk[type].type = type;
> +	queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
> +	queue_chunk[type].num_queues = vport->num_rx_q;
> +
> +	type = VIRTCHNL2_QUEUE_TYPE_TX;
> +	queue_chunk[type].type = type;
> +	queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
> +	queue_chunk[type].num_queues = vport->num_tx_q;
> +
> +	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> +		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
> +		queue_chunk[type].type = type;
> +		queue_chunk[type].start_queue_id =
> +			vport->chunks_info.rx_buf_start_qid;
> +		queue_chunk[type].num_queues = vport->num_rx_bufq;
> +	}
> +
> +	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> +		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
> +		queue_chunk[type].type = type;
> +		queue_chunk[type].start_queue_id =
> +			vport->chunks_info.tx_compl_start_qid;
> +		queue_chunk[type].num_queues = vport->num_tx_complq;
> +	}
> +
> +	args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
> +		VIRTCHNL2_OP_DISABLE_QUEUES;
> +	args.in_args = (uint8_t *)queue_select;
> +	args.in_args_size = len;
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0)
> +		DRV_LOG(ERR, "Failed to execute command of
> VIRTCHNL2_OP_%s_QUEUES",
> +			enable ? "ENABLE" : "DISABLE");
> +
> +	rte_free(queue_select);
> +	return err;
> +}
> +
> +int
> +idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_vport vc_vport;
> +	struct idpf_cmd_info args;
> +	int err;
> +
> +	vc_vport.vport_id = vport->vport_id;
> +	args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
> +		VIRTCHNL2_OP_DISABLE_VPORT;
> +	args.in_args = (uint8_t *)&vc_vport;
> +	args.in_args_size = sizeof(vc_vport);
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0) {
> +		DRV_LOG(ERR, "Failed to execute command of
> VIRTCHNL2_OP_%s_VPORT",
> +			enable ? "ENABLE" : "DISABLE");
> +	}
> +
> +	return err;
> +}
> +
> +int
> +idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
> +{
> +	struct virtchnl2_get_ptype_info *ptype_info;
> +	struct idpf_cmd_info args;
> +	int len, err;
> +
> +	len = sizeof(struct virtchnl2_get_ptype_info);
> +	ptype_info = rte_zmalloc("ptype_info", len, 0);
> +	if (ptype_info == NULL)
> +		return -ENOMEM;
> +
> +	ptype_info->start_ptype_id = 0;
> +	ptype_info->num_ptypes = IDPF_MAX_PKT_TYPE;
> +	args.ops = VIRTCHNL2_OP_GET_PTYPE_INFO;
> +	args.in_args = (uint8_t *)ptype_info;
> +	args.in_args_size = len;
> +
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0)
> +		DRV_LOG(ERR, "Failed to execute command of
> VIRTCHNL2_OP_GET_PTYPE_INFO");
> +
> +	rte_free(ptype_info);
> +	return err;
> +}
> diff --git a/drivers/common/idpf/idpf_common_virtchnl.h
> b/drivers/common/idpf/idpf_common_virtchnl.h
> new file mode 100644
> index 0000000000..bbc66d63c4
> --- /dev/null
> +++ b/drivers/common/idpf/idpf_common_virtchnl.h
> @@ -0,0 +1,48 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#ifndef _IDPF_COMMON_VIRTCHNL_H_
> +#define _IDPF_COMMON_VIRTCHNL_H_
> +
> +#include <idpf_common_device.h>
> +
> +__rte_internal
> +int idpf_vc_check_api_version(struct idpf_adapter *adapter);
> +__rte_internal
> +int idpf_vc_get_caps(struct idpf_adapter *adapter);
> +__rte_internal
> +int idpf_vc_create_vport(struct idpf_vport *vport,
> +			 struct virtchnl2_create_vport *vport_info);
> +__rte_internal
> +int idpf_vc_destroy_vport(struct idpf_vport *vport);
> +__rte_internal
> +int idpf_vc_set_rss_key(struct idpf_vport *vport);
> +__rte_internal
> +int idpf_vc_set_rss_lut(struct idpf_vport *vport);
> +__rte_internal
> +int idpf_vc_set_rss_hash(struct idpf_vport *vport);
> +__rte_internal
> +int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
> +		      bool rx, bool on);
> +__rte_internal
> +int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
> +__rte_internal
> +int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
> +__rte_internal
> +int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
> +				 uint16_t nb_rxq, bool map);
> +__rte_internal
> +int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
> +__rte_internal
> +int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
> +__rte_internal
> +int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
> +__rte_internal
> +int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
> +		      uint16_t buf_len, uint8_t *buf);
> +__rte_internal
> +int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
> +			struct idpf_cmd_info *args);
> +
> +#endif /* _IDPF_COMMON_VIRTCHNL_H_ */
> diff --git a/drivers/common/idpf/meson.build
> b/drivers/common/idpf/meson.build
> index 77d997b4a7..d1578641ba 100644
> --- a/drivers/common/idpf/meson.build
> +++ b/drivers/common/idpf/meson.build
> @@ -1,4 +1,9 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2022 Intel Corporation
> 
> +sources = files(
> +    'idpf_common_device.c',
> +    'idpf_common_virtchnl.c',
> +)
> +
>  subdir('base')
> diff --git a/drivers/common/idpf/version.map
> b/drivers/common/idpf/version.map
> index bfb246c752..a2b8780780 100644
> --- a/drivers/common/idpf/version.map
> +++ b/drivers/common/idpf/version.map
> @@ -1,12 +1,28 @@
>  INTERNAL {
>  	global:
> 
> +	idpf_ctlq_clean_sq;
>  	idpf_ctlq_deinit;
>  	idpf_ctlq_init;
> -	idpf_ctlq_clean_sq;
> +	idpf_ctlq_post_rx_buffs;
>  	idpf_ctlq_recv;
>  	idpf_ctlq_send;
> -	idpf_ctlq_post_rx_buffs;
> +	idpf_execute_vc_cmd;
> +	idpf_read_one_msg;
> +	idpf_switch_queue;

I think all the APIs exposed from idpf_common_virtchnl.h can follow the same naming rule "idpf_vc*".
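
For illustration only, applying that convention to the three symbols above could look like the following hypothetical renames (not what this revision contains):

    idpf_execute_vc_cmd  ->  idpf_vc_cmd_execute
    idpf_read_one_msg    ->  idpf_vc_one_msg_read
    idpf_switch_queue    ->  idpf_vc_queue_switch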



^ permalink raw reply	[flat|nested] 79+ messages in thread
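
The pend_cmd handling quoted in this patch is a single-slot command mailbox: atomic_set_cmd() claims the slot with a compare-and-swap, and notify_cmd()/clear_cmd() re-open it after publishing the result behind a write barrier. A self-contained sketch of the same lifecycle, using C11-style release/acquire ordering in place of the explicit barrier (types and names are illustrative, not the driver's):

    #include <stdbool.h>
    #include <stdint.h>

    #define OP_UNKNOWN 0u

    struct cmd_slot {
            uint32_t pend_cmd; /* opcode in flight, or OP_UNKNOWN */
            int retval;        /* posted by the completing thread */
    };

    /* claim the slot; fails if another command is still pending */
    static bool
    cmd_begin(struct cmd_slot *s, uint32_t op)
    {
            uint32_t unk = OP_UNKNOWN;

            return __atomic_compare_exchange_n(&s->pend_cmd, &unk, op,
                                               false, __ATOMIC_ACQUIRE,
                                               __ATOMIC_ACQUIRE);
    }

    /* publish the result, then re-open the slot for the next command */
    static void
    cmd_complete(struct cmd_slot *s, int retval)
    {
            s->retval = retval;
            __atomic_store_n(&s->pend_cmd, OP_UNKNOWN, __ATOMIC_RELEASE);
    }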

* RE: [PATCH v4 03/15] common/idpf: add virtual channel functions
  2023-01-18  4:00     ` Zhang, Qi Z
@ 2023-01-18  4:10       ` Zhang, Qi Z
  0 siblings, 0 replies; 79+ messages in thread
From: Zhang, Qi Z @ 2023-01-18  4:10 UTC (permalink / raw)
  To: Xing, Beilei, Wu, Jingjing; +Cc: dev, Wu, Wenjun1



> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Wednesday, January 18, 2023 12:00 PM
> To: Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Cc: dev@dpdk.org; Wu, Wenjun1 <Wenjun1.Wu@intel.com>
> Subject: RE: [PATCH v4 03/15] common/idpf: add virtual channel functions
> 
> 
> 
> > -----Original Message-----
> > From: Xing, Beilei <beilei.xing@intel.com>
> > Sent: Tuesday, January 17, 2023 4:06 PM
> > To: Wu, Jingjing <jingjing.wu@intel.com>
> > Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> > <beilei.xing@intel.com>; Wu, Wenjun1 <wenjun1.wu@intel.com>
> > Subject: [PATCH v4 03/15] common/idpf: add virtual channel functions
> >
> > From: Beilei Xing <beilei.xing@intel.com>
> >
> > Move most of the virtual channel functions to idpf common module.
> >
> > Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
> > Signed-off-by: Beilei Xing <beilei.xing@intel.com>
> > ---
> >  drivers/common/idpf/base/meson.build       |   2 +-
> >  drivers/common/idpf/idpf_common_device.c   |   8 +
> >  drivers/common/idpf/idpf_common_device.h   |  61 ++
> >  drivers/common/idpf/idpf_common_logs.h     |  23 +
> >  drivers/common/idpf/idpf_common_virtchnl.c | 815
> > +++++++++++++++++++++
> >  drivers/common/idpf/idpf_common_virtchnl.h |  48 ++
> >  drivers/common/idpf/meson.build            |   5 +
> >  drivers/common/idpf/version.map            |  20 +-
> >  drivers/net/idpf/idpf_ethdev.c             |   9 +-
> >  drivers/net/idpf/idpf_ethdev.h             |  85 +--
> >  drivers/net/idpf/idpf_vchnl.c              | 815 +--------------------
> >  11 files changed, 983 insertions(+), 908 deletions(-)
> >  create mode 100644 drivers/common/idpf/idpf_common_device.c
> >  create mode 100644 drivers/common/idpf/idpf_common_logs.h
> >  create mode 100644 drivers/common/idpf/idpf_common_virtchnl.c
> >  create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h
> >
> > diff --git a/drivers/common/idpf/base/meson.build
> > b/drivers/common/idpf/base/meson.build
> > index 183587b51a..dc4b93c198 100644
> > --- a/drivers/common/idpf/base/meson.build
> > +++ b/drivers/common/idpf/base/meson.build
> > @@ -1,7 +1,7 @@
> >  # SPDX-License-Identifier: BSD-3-Clause
> >  # Copyright(c) 2022 Intel Corporation
> >
> > -sources = files(
> > +sources += files(
> >          'idpf_common.c',
> >          'idpf_controlq.c',
> >          'idpf_controlq_setup.c',
> > diff --git a/drivers/common/idpf/idpf_common_device.c
> > b/drivers/common/idpf/idpf_common_device.c
> > new file mode 100644
> > index 0000000000..5062780362
> > --- /dev/null
> > +++ b/drivers/common/idpf/idpf_common_device.c
> > @@ -0,0 +1,8 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2022 Intel Corporation
> > + */
> > +
> > +#include <rte_log.h>
> > +#include <idpf_common_device.h>
> > +
> > +RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
> > diff --git a/drivers/common/idpf/idpf_common_device.h
> > b/drivers/common/idpf/idpf_common_device.h
> > index b7fff84b25..a7537281d1 100644
> > --- a/drivers/common/idpf/idpf_common_device.h
> > +++ b/drivers/common/idpf/idpf_common_device.h
> > @@ -7,6 +7,12 @@
> >
> >  #include <base/idpf_prototype.h>
> >  #include <base/virtchnl2.h>
> > +#include <idpf_common_logs.h>
> > +
> > +#define IDPF_CTLQ_LEN		64
> > +#define IDPF_DFLT_MBX_BUF_SIZE	4096
> > +
> > +#define IDPF_MAX_PKT_TYPE	1024
> >
> >  struct idpf_adapter {
> >  	struct idpf_hw hw;
> > @@ -76,4 +82,59 @@ struct idpf_vport {
> >  	bool stopped;
> >  };
> >
> > +/* Message type read in virtual channel from PF */
> > +enum idpf_vc_result {
> > +	IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
> > +	IDPF_MSG_NON,      /* Read nothing from admin queue */
> > +	IDPF_MSG_SYS,      /* Read system msg from admin queue */
> > +	IDPF_MSG_CMD,      /* Read async command result */
> > +};
> > +
> > +/* structure used for sending and checking response of virtchnl ops */
> > +struct idpf_cmd_info {
> > +	uint32_t ops;
> > +	uint8_t *in_args;       /* buffer for sending */
> > +	uint32_t in_args_size;  /* buffer size for sending */
> > +	uint8_t *out_buffer;    /* buffer for response */
> > +	uint32_t out_size;      /* buffer size for response */
> > +};
> > +
> > +/* notify current command done. Only call in case execute
> > + * _atomic_set_cmd successfully.
> > + */
> > +static inline void
> > +notify_cmd(struct idpf_adapter *adapter, int msg_ret)
> > +{
> > +	adapter->cmd_retval = msg_ret;
> > +	/* Return value may be checked in another thread, need to ensure coherence. */
> > +	rte_wmb();
> > +	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
> > +}
> > +
> > +/* clear current command. Only call in case execute
> > + * _atomic_set_cmd successfully.
> > + */
> > +static inline void
> > +clear_cmd(struct idpf_adapter *adapter)
> > +{
> > +	/* Return value may be checked in another thread, need to ensure coherence. */
> > +	rte_wmb();
> > +	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
> > +	adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
> > +}
> > +
> > +/* Check there is pending cmd in execution. If none, set new command. */
> > +static inline bool
> > +atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
> > +{
> > +	uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
> > +	bool ret = __atomic_compare_exchange(&adapter->pend_cmd,
> > &op_unk, &ops,
> > +					    0, __ATOMIC_ACQUIRE,
> > __ATOMIC_ACQUIRE);
> > +
> > +	if (!ret)
> > +		DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
> > +
> > +	return !ret;
> > +}
> > +
> >  #endif /* _IDPF_COMMON_DEVICE_H_ */
> > diff --git a/drivers/common/idpf/idpf_common_logs.h
> > b/drivers/common/idpf/idpf_common_logs.h
> > new file mode 100644
> > index 0000000000..fe36562769
> > --- /dev/null
> > +++ b/drivers/common/idpf/idpf_common_logs.h
> > @@ -0,0 +1,23 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2022 Intel Corporation
> > + */
> > +
> > +#ifndef _IDPF_COMMON_LOGS_H_
> > +#define _IDPF_COMMON_LOGS_H_
> > +
> > +#include <rte_log.h>
> > +
> > +extern int idpf_common_logtype;
> > +
> > +#define DRV_LOG_RAW(level, ...)					\
> > +	rte_log(RTE_LOG_ ## level,				\
> > +		idpf_common_logtype,				\
> > +		RTE_FMT("%s(): "				\
> > +			RTE_FMT_HEAD(__VA_ARGS__,) "\n",	\
> > +			__func__,				\
> > +			RTE_FMT_TAIL(__VA_ARGS__,)))
> > +
> > +#define DRV_LOG(level, fmt, args...)		\
> > +	DRV_LOG_RAW(level, fmt, ## args)
> > +
> > +#endif /* _IDPF_COMMON_LOGS_H_ */
> > diff --git a/drivers/common/idpf/idpf_common_virtchnl.c
> > b/drivers/common/idpf/idpf_common_virtchnl.c
> > new file mode 100644
> > index 0000000000..2e94a95876
> > --- /dev/null
> > +++ b/drivers/common/idpf/idpf_common_virtchnl.c
> > @@ -0,0 +1,815 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2022 Intel Corporation
> > + */
> > +
> > +#include <idpf_common_virtchnl.h>
> > +#include <idpf_common_logs.h>
> > +
> > +static int
> > +idpf_vc_clean(struct idpf_adapter *adapter)
> > +{
> > +	struct idpf_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
> > +	uint16_t num_q_msg = IDPF_CTLQ_LEN;
> > +	struct idpf_dma_mem *dma_mem;
> > +	int err;
> > +	uint32_t i;
> > +
> > +	for (i = 0; i < 10; i++) {
> > +		err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg,
> > q_msg);
> > +		msleep(20);
> > +		if (num_q_msg > 0)
> > +			break;
> > +	}
> > +	if (err != 0)
> > +		return err;
> > +
> > +	/* Empty queue is not an error */
> > +	for (i = 0; i < num_q_msg; i++) {
> > +		dma_mem = q_msg[i]->ctx.indirect.payload;
> > +		if (dma_mem != NULL) {
> > +			idpf_free_dma_mem(&adapter->hw, dma_mem);
> > +			rte_free(dma_mem);
> > +		}
> > +		rte_free(q_msg[i]);
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int
> > +idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
> > +		 uint16_t msg_size, uint8_t *msg)
> > +{
> > +	struct idpf_ctlq_msg *ctlq_msg;
> > +	struct idpf_dma_mem *dma_mem;
> > +	int err;
> > +
> > +	err = idpf_vc_clean(adapter);
> > +	if (err != 0)
> > +		goto err;
> > +
> > +	ctlq_msg = rte_zmalloc(NULL, sizeof(struct idpf_ctlq_msg), 0);
> > +	if (ctlq_msg == NULL) {
> > +		err = -ENOMEM;
> > +		goto err;
> > +	}
> > +
> > +	dma_mem = rte_zmalloc(NULL, sizeof(struct idpf_dma_mem), 0);
> > +	if (dma_mem == NULL) {
> > +		err = -ENOMEM;
> > +		goto dma_mem_error;
> > +	}
> > +
> > +	dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
> > +	idpf_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
> > +	if (dma_mem->va == NULL) {
> > +		err = -ENOMEM;
> > +		goto dma_alloc_error;
> > +	}
> > +
> > +	memcpy(dma_mem->va, msg, msg_size);
> > +
> > +	ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_pf;
> > +	ctlq_msg->func_id = 0;
> > +	ctlq_msg->data_len = msg_size;
> > +	ctlq_msg->cookie.mbx.chnl_opcode = op;
> > +	ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
> > +	ctlq_msg->ctx.indirect.payload = dma_mem;
> > +
> > +	err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
> > +	if (err != 0)
> > +		goto send_error;
> > +
> > +	return 0;
> > +
> > +send_error:
> > +	idpf_free_dma_mem(&adapter->hw, dma_mem);
> > +dma_alloc_error:
> > +	rte_free(dma_mem);
> > +dma_mem_error:
> > +	rte_free(ctlq_msg);
> > +err:
> > +	return err;
> > +}
> > +
> > +static enum idpf_vc_result
> > +idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
> > +		      uint8_t *buf)
> > +{
> > +	struct idpf_hw *hw = &adapter->hw;
> > +	struct idpf_ctlq_msg ctlq_msg;
> > +	struct idpf_dma_mem *dma_mem = NULL;
> > +	enum idpf_vc_result result = IDPF_MSG_NON;
> > +	uint32_t opcode;
> > +	uint16_t pending = 1;
> > +	int ret;
> > +
> > +	ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
> > +	if (ret != 0) {
> > +		DRV_LOG(DEBUG, "Can't read msg from AQ");
> > +		if (ret != -ENOMSG)
> > +			result = IDPF_MSG_ERR;
> > +		return result;
> > +	}
> > +
> > +	rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
> > +
> > +	opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
> > +	adapter->cmd_retval =
> > rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
> > +
> > +	DRV_LOG(DEBUG, "CQ from CP carries opcode %u, retval %d",
> > +		opcode, adapter->cmd_retval);
> > +
> > +	if (opcode == VIRTCHNL2_OP_EVENT) {
> > +		struct virtchnl2_event *ve = ctlq_msg.ctx.indirect.payload->va;
> > +
> > +		result = IDPF_MSG_SYS;
> > +		switch (ve->event) {
> > +		case VIRTCHNL2_EVENT_LINK_CHANGE:
> > +			/* TBD */
> > +			break;
> > +		default:
> > +			DRV_LOG(ERR, "%s: Unknown event %d from CP",
> > +				__func__, ve->event);
> > +			break;
> > +		}
> > +	} else {
> > +		/* async reply msg on command issued by pf previously */
> > +		result = IDPF_MSG_CMD;
> > +		if (opcode != adapter->pend_cmd) {
> > +			DRV_LOG(WARNING, "command mismatch,
> > expect %u, get %u",
> > +				adapter->pend_cmd, opcode);
> > +			result = IDPF_MSG_ERR;
> > +		}
> > +	}
> > +
> > +	if (ctlq_msg.data_len != 0)
> > +		dma_mem = ctlq_msg.ctx.indirect.payload;
> > +	else
> > +		pending = 0;
> > +
> > +	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
> > +	if (ret != 0 && dma_mem != NULL)
> > +		idpf_free_dma_mem(hw, dma_mem);
> > +
> > +	return result;
> > +}
> > +
> > +#define MAX_TRY_TIMES 200
> > +#define ASQ_DELAY_MS  10
> > +
> > +int
> > +idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
> > +		  uint8_t *buf)
> > +{
> > +	int err = 0;
> > +	int i = 0;
> > +	int ret;
> > +
> > +	do {
> > +		ret = idpf_read_msg_from_cp(adapter, buf_len, buf);
> > +		if (ret == IDPF_MSG_CMD)
> > +			break;
> > +		rte_delay_ms(ASQ_DELAY_MS);
> > +	} while (i++ < MAX_TRY_TIMES);
> > +	if (i >= MAX_TRY_TIMES ||
> > +	    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> > +		err = -EBUSY;
> > +		DRV_LOG(ERR, "No response or return failure (%d) for
> > cmd %d",
> > +			adapter->cmd_retval, ops);
> > +	}
> > +
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
> > +{
> > +	int err = 0;
> > +	int i = 0;
> > +	int ret;
> > +
> > +	if (atomic_set_cmd(adapter, args->ops))
> > +		return -EINVAL;
> > +
> > +	ret = idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args);
> > +	if (ret != 0) {
> > +		DRV_LOG(ERR, "fail to send cmd %d", args->ops);
> > +		clear_cmd(adapter);
> > +		return ret;
> > +	}
> > +
> > +	switch (args->ops) {
> > +	case VIRTCHNL_OP_VERSION:
> > +	case VIRTCHNL2_OP_GET_CAPS:
> > +	case VIRTCHNL2_OP_CREATE_VPORT:
> > +	case VIRTCHNL2_OP_DESTROY_VPORT:
> > +	case VIRTCHNL2_OP_SET_RSS_KEY:
> > +	case VIRTCHNL2_OP_SET_RSS_LUT:
> > +	case VIRTCHNL2_OP_SET_RSS_HASH:
> > +	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
> > +	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
> > +	case VIRTCHNL2_OP_ENABLE_QUEUES:
> > +	case VIRTCHNL2_OP_DISABLE_QUEUES:
> > +	case VIRTCHNL2_OP_ENABLE_VPORT:
> > +	case VIRTCHNL2_OP_DISABLE_VPORT:
> > +	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
> > +	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
> > +	case VIRTCHNL2_OP_ALLOC_VECTORS:
> > +	case VIRTCHNL2_OP_DEALLOC_VECTORS:
> > +		/* for init virtchnl ops, need to poll the response */
> > +		err = idpf_read_one_msg(adapter, args->ops, args->out_size,
> > args->out_buffer);
> > +		clear_cmd(adapter);
> > +		break;
> > +	case VIRTCHNL2_OP_GET_PTYPE_INFO:
> > +		/* for multiple response messages,
> > +		 * do not handle the response here.
> > +		 */
> > +		break;
> > +	default:
> > +		/* For other virtchnl ops in running time,
> > +		 * wait for the cmd done flag.
> > +		 */
> > +		do {
> > +			if (adapter->pend_cmd == VIRTCHNL_OP_UNKNOWN)
> > +				break;
> > +			rte_delay_ms(ASQ_DELAY_MS);
> > +			/* If no msg is read, or a sys event is read, continue */
> > +		} while (i++ < MAX_TRY_TIMES);
> > +		/* If no response is received, clear the command */
> > +		if (i >= MAX_TRY_TIMES  ||
> > +		    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> > +			err = -EBUSY;
> > +			DRV_LOG(ERR, "No response or return failure (%d)
> > for cmd %d",
> > +				adapter->cmd_retval, args->ops);
> > +			clear_cmd(adapter);
> > +		}
> > +		break;
> > +	}
> > +
> > +	return err;
> > +}
> > +
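Note the two completion models in idpf_execute_vc_cmd above: for the init-time opcodes listed in the switch, the calling thread polls the mailbox itself and copies the response out via idpf_read_one_msg; for the remaining run-time opcodes it only spins until pend_cmd is cleared, leaving the response body to be consumed by the event/interrupt handling path.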
> > +int
> > +idpf_vc_check_api_version(struct idpf_adapter *adapter)
> > +{
> > +	struct virtchnl2_version_info version, *pver;
> > +	struct idpf_cmd_info args;
> > +	int err;
> > +
> > +	memset(&version, 0, sizeof(struct virtchnl2_version_info));
> > +	version.major = VIRTCHNL2_VERSION_MAJOR_2;
> > +	version.minor = VIRTCHNL2_VERSION_MINOR_0;
> > +
> > +	args.ops = VIRTCHNL_OP_VERSION;
> > +	args.in_args = (uint8_t *)&version;
> > +	args.in_args_size = sizeof(version);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0) {
> > +		DRV_LOG(ERR,
> > +			"Failed to execute command of
> > VIRTCHNL_OP_VERSION");
> > +		return err;
> > +	}
> > +
> > +	pver = (struct virtchnl2_version_info *)args.out_buffer;
> > +	adapter->virtchnl_version = *pver;
> > +
> > +	if (adapter->virtchnl_version.major != VIRTCHNL2_VERSION_MAJOR_2 ||
> > +	    adapter->virtchnl_version.minor != VIRTCHNL2_VERSION_MINOR_0) {
> > +		DRV_LOG(ERR, "VIRTCHNL API version mismatch:(%u.%u)-(%u.%u)",
> > +			adapter->virtchnl_version.major,
> > +			adapter->virtchnl_version.minor,
> > +			VIRTCHNL2_VERSION_MAJOR_2,
> > +			VIRTCHNL2_VERSION_MINOR_0);
> > +		return -EINVAL;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +int
> > +idpf_vc_get_caps(struct idpf_adapter *adapter)
> > +{
> > +	struct virtchnl2_get_capabilities caps_msg;
> > +	struct idpf_cmd_info args;
> > +	int err;
> > +
> > +	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
> > +
> > +	caps_msg.csum_caps =
> > +		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4          |
> > +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP      |
> > +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP      |
> > +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP     |
> > +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP      |
> > +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP      |
> > +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP     |
> > +		VIRTCHNL2_CAP_TX_CSUM_GENERIC          |
> > +		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4          |
> > +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP      |
> > +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP      |
> > +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP     |
> > +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP      |
> > +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP      |
> > +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP     |
> > +		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
> > +
> > +	caps_msg.rss_caps =
> > +		VIRTCHNL2_CAP_RSS_IPV4_TCP             |
> > +		VIRTCHNL2_CAP_RSS_IPV4_UDP             |
> > +		VIRTCHNL2_CAP_RSS_IPV4_SCTP            |
> > +		VIRTCHNL2_CAP_RSS_IPV4_OTHER           |
> > +		VIRTCHNL2_CAP_RSS_IPV6_TCP             |
> > +		VIRTCHNL2_CAP_RSS_IPV6_UDP             |
> > +		VIRTCHNL2_CAP_RSS_IPV6_SCTP            |
> > +		VIRTCHNL2_CAP_RSS_IPV6_OTHER           |
> > +		VIRTCHNL2_CAP_RSS_IPV4_AH              |
> > +		VIRTCHNL2_CAP_RSS_IPV4_ESP             |
> > +		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP          |
> > +		VIRTCHNL2_CAP_RSS_IPV6_AH              |
> > +		VIRTCHNL2_CAP_RSS_IPV6_ESP             |
> > +		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
> > +
> > +	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR;
> > +
> > +	args.ops = VIRTCHNL2_OP_GET_CAPS;
> > +	args.in_args = (uint8_t *)&caps_msg;
> > +	args.in_args_size = sizeof(caps_msg);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0) {
> > +		DRV_LOG(ERR,
> > +			"Failed to execute command of
> > VIRTCHNL2_OP_GET_CAPS");
> > +		return err;
> > +	}
> > +
> > +	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
> > +
> > +	return 0;
> > +}
> > +
> > +int
> > +idpf_vc_create_vport(struct idpf_vport *vport,
> > +		     struct virtchnl2_create_vport *vport_req_info)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_create_vport vport_msg;
> > +	struct idpf_cmd_info args;
> > +	int err = -1;
> > +
> > +	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
> > +	vport_msg.vport_type = vport_req_info->vport_type;
> > +	vport_msg.txq_model = vport_req_info->txq_model;
> > +	vport_msg.rxq_model = vport_req_info->rxq_model;
> > +	vport_msg.num_tx_q = vport_req_info->num_tx_q;
> > +	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
> > +	vport_msg.num_rx_q = vport_req_info->num_rx_q;
> > +	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
> > +
> > +	memset(&args, 0, sizeof(args));
> > +	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
> > +	args.in_args = (uint8_t *)&vport_msg;
> > +	args.in_args_size = sizeof(vport_msg);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0) {
> > +		DRV_LOG(ERR,
> > +			"Failed to execute command of
> > VIRTCHNL2_OP_CREATE_VPORT");
> > +		return err;
> > +	}
> > +
> > +	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
> > +	return 0;
> > +}
> > +
> > +int
> > +idpf_vc_destroy_vport(struct idpf_vport *vport)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_vport vc_vport;
> > +	struct idpf_cmd_info args;
> > +	int err;
> > +
> > +	vc_vport.vport_id = vport->vport_id;
> > +
> > +	memset(&args, 0, sizeof(args));
> > +	args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
> > +	args.in_args = (uint8_t *)&vc_vport;
> > +	args.in_args_size = sizeof(vc_vport);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of
> > VIRTCHNL2_OP_DESTROY_VPORT");
> > +
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_set_rss_key(struct idpf_vport *vport)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_rss_key *rss_key;
> > +	struct idpf_cmd_info args;
> > +	int len, err;
> > +
> > +	len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
> > +		(vport->rss_key_size - 1);
> > +	rss_key = rte_zmalloc("rss_key", len, 0);
> > +	if (rss_key == NULL)
> > +		return -ENOMEM;
> > +
> > +	rss_key->vport_id = vport->vport_id;
> > +	rss_key->key_len = vport->rss_key_size;
> > +	rte_memcpy(rss_key->key, vport->rss_key,
> > +		   sizeof(rss_key->key[0]) * vport->rss_key_size);
> > +
> > +	memset(&args, 0, sizeof(args));
> > +	args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
> > +	args.in_args = (uint8_t *)rss_key;
> > +	args.in_args_size = len;
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of
> > VIRTCHNL2_OP_SET_RSS_KEY");
> > +
> > +	rte_free(rss_key);
> > +	return err;
> > +}
> > +
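The length arithmetic above is the usual pattern for virtchnl2 messages that end in a one-element trailing array: sizeof(*rss_key) already accounts for one key[0] element, so only (n - 1) extra elements are added. A standalone sketch of the same pattern (msg_with_tail is an illustrative name, not from the patch):

	struct msg_with_tail {
		uint32_t count;
		uint32_t elems[1];	/* one element already counted in sizeof() */
	};

	uint32_t n = 16;
	int len = sizeof(struct msg_with_tail) +
		  (n - 1) * sizeof(uint32_t);	/* room for n elements total */
	struct msg_with_tail *msg = rte_zmalloc("msg", len, 0);
	if (msg != NULL)
		msg->count = n;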
> > +int
> > +idpf_vc_set_rss_lut(struct idpf_vport *vport)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_rss_lut *rss_lut;
> > +	struct idpf_cmd_info args;
> > +	int len, err;
> > +
> > +	len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
> > +		(vport->rss_lut_size - 1);
> > +	rss_lut = rte_zmalloc("rss_lut", len, 0);
> > +	if (rss_lut == NULL)
> > +		return -ENOMEM;
> > +
> > +	rss_lut->vport_id = vport->vport_id;
> > +	rss_lut->lut_entries = vport->rss_lut_size;
> > +	rte_memcpy(rss_lut->lut, vport->rss_lut,
> > +		   sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
> > +
> > +	memset(&args, 0, sizeof(args));
> > +	args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
> > +	args.in_args = (uint8_t *)rss_lut;
> > +	args.in_args_size = len;
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of
> > VIRTCHNL2_OP_SET_RSS_LUT");
> > +
> > +	rte_free(rss_lut);
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_set_rss_hash(struct idpf_vport *vport)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_rss_hash rss_hash;
> > +	struct idpf_cmd_info args;
> > +	int err;
> > +
> > +	memset(&rss_hash, 0, sizeof(rss_hash));
> > +	rss_hash.ptype_groups = vport->rss_hf;
> > +	rss_hash.vport_id = vport->vport_id;
> > +
> > +	memset(&args, 0, sizeof(args));
> > +	args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
> > +	args.in_args = (uint8_t *)&rss_hash;
> > +	args.in_args_size = sizeof(rss_hash);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of
> > OP_SET_RSS_HASH");
> > +
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq,
> > +			     bool map)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_queue_vector_maps *map_info;
> > +	struct virtchnl2_queue_vector *vecmap;
> > +	struct idpf_cmd_info args;
> > +	int len, i, err = 0;
> > +
> > +	len = sizeof(struct virtchnl2_queue_vector_maps) +
> > +		(nb_rxq - 1) * sizeof(struct virtchnl2_queue_vector);
> > +
> > +	map_info = rte_zmalloc("map_info", len, 0);
> > +	if (map_info == NULL)
> > +		return -ENOMEM;
> > +
> > +	map_info->vport_id = vport->vport_id;
> > +	map_info->num_qv_maps = nb_rxq;
> > +	for (i = 0; i < nb_rxq; i++) {
> > +		vecmap = &map_info->qv_maps[i];
> > +		vecmap->queue_id = vport->qv_map[i].queue_id;
> > +		vecmap->vector_id = vport->qv_map[i].vector_id;
> > +		vecmap->itr_idx = VIRTCHNL2_ITR_IDX_0;
> > +		vecmap->queue_type = VIRTCHNL2_QUEUE_TYPE_RX;
> > +	}
> > +
> > +	args.ops = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR :
> > +		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR;
> > +	args.in_args = (uint8_t *)map_info;
> > +	args.in_args_size = len;
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of
> > VIRTCHNL2_OP_%s_QUEUE_VECTOR",
> > +			map ? "MAP" : "UNMAP");
> > +
> > +	rte_free(map_info);
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_alloc_vectors *alloc_vec;
> > +	struct idpf_cmd_info args;
> > +	int err, len;
> > +
> > +	len = sizeof(struct virtchnl2_alloc_vectors) +
> > +		(num_vectors - 1) * sizeof(struct virtchnl2_vector_chunk);
> > +	alloc_vec = rte_zmalloc("alloc_vec", len, 0);
> > +	if (alloc_vec == NULL)
> > +		return -ENOMEM;
> > +
> > +	alloc_vec->num_vectors = num_vectors;
> > +
> > +	args.ops = VIRTCHNL2_OP_ALLOC_VECTORS;
> > +	args.in_args = (uint8_t *)alloc_vec;
> > +	args.in_args_size = sizeof(struct virtchnl2_alloc_vectors);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command
> > VIRTCHNL2_OP_ALLOC_VECTORS");
> > +
> > +	if (vport->recv_vectors == NULL) {
> > +		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
> > +		if (vport->recv_vectors == NULL) {
> > +			rte_free(alloc_vec);
> > +			return -ENOMEM;
> > +		}
> > +	}
> > +
> > +	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
> > +	rte_free(alloc_vec);
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_dealloc_vectors(struct idpf_vport *vport)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_alloc_vectors *alloc_vec;
> > +	struct virtchnl2_vector_chunks *vcs;
> > +	struct idpf_cmd_info args;
> > +	int err, len;
> > +
> > +	alloc_vec = vport->recv_vectors;
> > +	vcs = &alloc_vec->vchunks;
> > +
> > +	len = sizeof(struct virtchnl2_vector_chunks) +
> > +		(vcs->num_vchunks - 1) * sizeof(struct virtchnl2_vector_chunk);
> > +
> > +	args.ops = VIRTCHNL2_OP_DEALLOC_VECTORS;
> > +	args.in_args = (uint8_t *)vcs;
> > +	args.in_args_size = len;
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command
> > VIRTCHNL2_OP_DEALLOC_VECTORS");
> > +
> > +	return err;
> > +}
> > +
> > +static int
> > +idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
> > +			  uint32_t type, bool on)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_del_ena_dis_queues *queue_select;
> > +	struct virtchnl2_queue_chunk *queue_chunk;
> > +	struct idpf_cmd_info args;
> > +	int err, len;
> > +
> > +	len = sizeof(struct virtchnl2_del_ena_dis_queues);
> > +	queue_select = rte_zmalloc("queue_select", len, 0);
> > +	if (queue_select == NULL)
> > +		return -ENOMEM;
> > +
> > +	queue_chunk = queue_select->chunks.chunks;
> > +	queue_select->chunks.num_chunks = 1;
> > +	queue_select->vport_id = vport->vport_id;
> > +
> > +	queue_chunk->type = type;
> > +	queue_chunk->start_queue_id = qid;
> > +	queue_chunk->num_queues = 1;
> > +
> > +	args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
> > +		VIRTCHNL2_OP_DISABLE_QUEUES;
> > +	args.in_args = (uint8_t *)queue_select;
> > +	args.in_args_size = len;
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of
> > VIRTCHNL2_OP_%s_QUEUES",
> > +			on ? "ENABLE" : "DISABLE");
> > +
> > +	rte_free(queue_select);
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
> > +		  bool rx, bool on)
> > +{
> > +	uint32_t type;
> > +	int err, queue_id;
> > +
> > +	/* switch txq/rxq */
> > +	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
> > +
> > +	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
> > +		queue_id = vport->chunks_info.rx_start_qid + qid;
> > +	else
> > +		queue_id = vport->chunks_info.tx_start_qid + qid;
> > +	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> > +	if (err != 0)
> > +		return err;
> > +
> > +	/* switch tx completion queue */
> > +	if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> > +		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
> > +		queue_id = vport->chunks_info.tx_compl_start_qid + qid;
> > +		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> > +		if (err != 0)
> > +			return err;
> > +	}
> > +
> > +	/* switch rx buffer queue */
> > +	if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> > +		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
> > +		queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
> > +		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> > +		if (err != 0)
> > +			return err;
> > +		queue_id++;
> > +		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> > +		if (err != 0)
> > +			return err;
> > +	}
> > +
> > +	return err;
> > +}
> > +
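So in the split queue model a single logical queue switch fans out into several virtchnl commands: a txq is followed by its completion queue, and an rxq by its two buffer queues, which is also why the buffer-queue id above is derived with a 2 * qid stride from rx_buf_start_qid.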
> > +#define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
> > +int
> > +idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_del_ena_dis_queues *queue_select;
> > +	struct virtchnl2_queue_chunk *queue_chunk;
> > +	uint32_t type;
> > +	struct idpf_cmd_info args;
> > +	uint16_t num_chunks;
> > +	int err, len;
> > +
> > +	num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
> > +	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
> > +		num_chunks++;
> > +	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
> > +		num_chunks++;
> > +
> > +	len = sizeof(struct virtchnl2_del_ena_dis_queues) +
> > +		sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
> > +	queue_select = rte_zmalloc("queue_select", len, 0);
> > +	if (queue_select == NULL)
> > +		return -ENOMEM;
> > +
> > +	queue_chunk = queue_select->chunks.chunks;
> > +	queue_select->chunks.num_chunks = num_chunks;
> > +	queue_select->vport_id = vport->vport_id;
> > +
> > +	type = VIRTCHNL2_QUEUE_TYPE_RX;
> > +	queue_chunk[type].type = type;
> > +	queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
> > +	queue_chunk[type].num_queues = vport->num_rx_q;
> > +
> > +	type = VIRTCHNL2_QUEUE_TYPE_TX;
> > +	queue_chunk[type].type = type;
> > +	queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
> > +	queue_chunk[type].num_queues = vport->num_tx_q;
> > +
> > +	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> > +		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
> > +		queue_chunk[type].type = type;
> > +		queue_chunk[type].start_queue_id =
> > +			vport->chunks_info.rx_buf_start_qid;
> > +		queue_chunk[type].num_queues = vport->num_rx_bufq;
> > +	}
> > +
> > +	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> > +		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
> > +		queue_chunk[type].type = type;
> > +		queue_chunk[type].start_queue_id =
> > +			vport->chunks_info.tx_compl_start_qid;
> > +		queue_chunk[type].num_queues = vport->num_tx_complq;
> > +	}
> > +
> > +	args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
> > +		VIRTCHNL2_OP_DISABLE_QUEUES;
> > +	args.in_args = (uint8_t *)queue_select;
> > +	args.in_args_size = len;
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of
> > VIRTCHNL2_OP_%s_QUEUES",
> > +			enable ? "ENABLE" : "DISABLE");
> > +
> > +	rte_free(queue_select);
> > +	return err;
> > +}
> > +
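Note that the queue_chunk[type] indexing above works only because the virtchnl2 queue-type values involved (TX, RX, TX_COMPLETION, RX_BUFFER) are small consecutive integers starting at 0, so the type value doubles as the chunk slot. It also appears fragile: if only the Rx side is in split mode, num_chunks is 3 but the RX_BUFFER slot (type value 3) would be written past the allocated chunks, so this layout may deserve a second look.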
> > +int
> > +idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_vport vc_vport;
> > +	struct idpf_cmd_info args;
> > +	int err;
> > +
> > +	vc_vport.vport_id = vport->vport_id;
> > +	args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
> > +		VIRTCHNL2_OP_DISABLE_VPORT;
> > +	args.in_args = (uint8_t *)&vc_vport;
> > +	args.in_args_size = sizeof(vc_vport);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0) {
> > +		DRV_LOG(ERR, "Failed to execute command of
> > VIRTCHNL2_OP_%s_VPORT",
> > +			enable ? "ENABLE" : "DISABLE");
> > +	}
> > +
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
> > +{
> > +	struct virtchnl2_get_ptype_info *ptype_info;
> > +	struct idpf_cmd_info args;
> > +	int len, err;
> > +
> > +	len = sizeof(struct virtchnl2_get_ptype_info);
> > +	ptype_info = rte_zmalloc("ptype_info", len, 0);
> > +	if (ptype_info == NULL)
> > +		return -ENOMEM;
> > +
> > +	ptype_info->start_ptype_id = 0;
> > +	ptype_info->num_ptypes = IDPF_MAX_PKT_TYPE;
> > +	args.ops = VIRTCHNL2_OP_GET_PTYPE_INFO;
> > +	args.in_args = (uint8_t *)ptype_info;
> > +	args.in_args_size = len;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of
> > VIRTCHNL2_OP_GET_PTYPE_INFO");
> > +
> > +	rte_free(ptype_info);
> > +	return err;
> > +}
> > diff --git a/drivers/common/idpf/idpf_common_virtchnl.h
> > b/drivers/common/idpf/idpf_common_virtchnl.h
> > new file mode 100644
> > index 0000000000..bbc66d63c4
> > --- /dev/null
> > +++ b/drivers/common/idpf/idpf_common_virtchnl.h
> > @@ -0,0 +1,48 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2022 Intel Corporation
> > + */
> > +
> > +#ifndef _IDPF_COMMON_VIRTCHNL_H_
> > +#define _IDPF_COMMON_VIRTCHNL_H_
> > +
> > +#include <idpf_common_device.h>
> > +
> > +__rte_internal
> > +int idpf_vc_check_api_version(struct idpf_adapter *adapter);
> > +__rte_internal
> > +int idpf_vc_get_caps(struct idpf_adapter *adapter);
> > +__rte_internal
> > +int idpf_vc_create_vport(struct idpf_vport *vport,
> > +			 struct virtchnl2_create_vport *vport_info);
> > +__rte_internal
> > +int idpf_vc_destroy_vport(struct idpf_vport *vport);
> > +__rte_internal
> > +int idpf_vc_set_rss_key(struct idpf_vport *vport);
> > +__rte_internal
> > +int idpf_vc_set_rss_lut(struct idpf_vport *vport);
> > +__rte_internal
> > +int idpf_vc_set_rss_hash(struct idpf_vport *vport);
> > +__rte_internal
> > +int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
> > +		      bool rx, bool on);
> > +__rte_internal
> > +int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
> > +__rte_internal
> > +int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
> > +__rte_internal
> > +int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
> > +				 uint16_t nb_rxq, bool map);
> > +__rte_internal
> > +int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
> > +__rte_internal
> > +int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
> > +__rte_internal
> > +int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
> > +__rte_internal
> > +int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
> > +		      uint16_t buf_len, uint8_t *buf);
> > +__rte_internal
> > +int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
> > +			struct idpf_cmd_info *args);
> > +
> > +#endif /* _IDPF_COMMON_VIRTCHNL_H_ */
> > diff --git a/drivers/common/idpf/meson.build
> > b/drivers/common/idpf/meson.build
> > index 77d997b4a7..d1578641ba 100644
> > --- a/drivers/common/idpf/meson.build
> > +++ b/drivers/common/idpf/meson.build
> > @@ -1,4 +1,9 @@
> >  # SPDX-License-Identifier: BSD-3-Clause
> >  # Copyright(c) 2022 Intel Corporation
> >
> > +sources = files(
> > +    'idpf_common_device.c',
> > +    'idpf_common_virtchnl.c',
> > +)
> > +
> >  subdir('base')
> > diff --git a/drivers/common/idpf/version.map
> > b/drivers/common/idpf/version.map
> > index bfb246c752..a2b8780780 100644
> > --- a/drivers/common/idpf/version.map
> > +++ b/drivers/common/idpf/version.map
> > @@ -1,12 +1,28 @@
> >  INTERNAL {
> >  	global:
> >
> > +	idpf_ctlq_clean_sq;
> >  	idpf_ctlq_deinit;
> >  	idpf_ctlq_init;
> > -	idpf_ctlq_clean_sq;
> > +	idpf_ctlq_post_rx_buffs;
> >  	idpf_ctlq_recv;
> >  	idpf_ctlq_send;
> > -	idpf_ctlq_post_rx_buffs;

And do we really need to expose all the ctlq APIs? Ideally, all the APIs in the drivers/common/idpf/base folder should only be consumed inside the idpf common module; we should wrap them at the upper layer.
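As one possible shape for such a wrapper (a sketch only; idpf_vc_ctlq_send is a hypothetical name, not part of this series), the common module could keep the raw idpf_ctlq_* calls internal and export a single send entry point:

	/* hypothetical upper-layer wrapper; idpf_ctlq_send stays internal */
	int
	idpf_vc_ctlq_send(struct idpf_adapter *adapter,
			  struct idpf_ctlq_msg *q_msg, uint16_t num_q_msg)
	{
		/* only the common module touches the control queue directly */
		return idpf_ctlq_send(&adapter->hw, adapter->hw.asq,
				      num_q_msg, q_msg);
	}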

> > +	idpf_execute_vc_cmd;
> > +	idpf_read_one_msg;
> > +	idpf_switch_queue;
> 
> I think all APIs exposed from idpf_common_virtchnl.h can follow the same
> naming rule "idpf_vc*"
> 
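For example (hypothetical renames, just to illustrate the rule): idpf_switch_queue -> idpf_vc_switch_queue, idpf_read_one_msg -> idpf_vc_read_one_msg, idpf_execute_vc_cmd -> idpf_vc_cmd_execute.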



^ permalink raw reply	[flat|nested] 79+ messages in thread

* RE: [PATCH v4 07/15] common/idpf: add irq map/unmap
  2023-01-17  8:06   ` [PATCH v4 07/15] common/idpf: add irq map/unmap beilei.xing
@ 2023-01-31  8:11     ` Wu, Jingjing
  0 siblings, 0 replies; 79+ messages in thread
From: Wu, Jingjing @ 2023-01-31  8:11 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: dev, Zhang, Qi Z

> @@ -247,8 +247,21 @@ idpf_vport_init(struct idpf_vport *vport,
>  		goto err_rss_lut;
>  	}
> 
> +	/* recv_vectors is used for VIRTCHNL2_OP_ALLOC_VECTORS response,
> +	 * reserve maximum size for it now, may need optimization in future.
> +	 */
> +	vport->recv_vectors = rte_zmalloc("recv_vectors", IDPF_DFLT_MBX_BUF_SIZE, 0);
> +	if (vport->recv_vectors == NULL) {
> +		DRV_LOG(ERR, "Failed to allocate ecv_vectors");
ecv-> recv?

> +		ret = -ENOMEM;
> +		goto err_recv_vec;
> +	}
> +
>  	return 0;
> 
> +err_recv_vec:
> +	rte_free(vport->rss_lut);
> +	vport->rss_lut = NULL;
>  err_rss_lut:
>  	vport->dev_data = NULL;
>  	rte_free(vport->rss_key);
> @@ -261,6 +274,8 @@ idpf_vport_init(struct idpf_vport *vport,
>  int
>  idpf_vport_deinit(struct idpf_vport *vport)
>  {
> +	rte_free(vport->recv_vectors);
> +	vport->recv_vectors = NULL;
>  	rte_free(vport->rss_lut);
>  	vport->rss_lut = NULL;
> 
> @@ -298,4 +313,88 @@ idpf_config_rss(struct idpf_vport *vport)
> 
>  	return ret;
>  }
> +
> +int
> +idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_queue_vector *qv_map;
> +	struct idpf_hw *hw = &adapter->hw;
> +	uint32_t dynctl_val, itrn_val;
> +	uint32_t dynctl_reg_start;
> +	uint32_t itrn_reg_start;
> +	uint16_t i;
> +
> +	qv_map = rte_zmalloc("qv_map",
> +			     nb_rx_queues *
> +			     sizeof(struct virtchnl2_queue_vector), 0);
> +	if (qv_map == NULL) {
> +		DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
> +			nb_rx_queues);
> +		goto qv_map_alloc_err;
Use error code -ENOMEM instead of using -1?

> +	}
> +
> +	/* Rx interrupt disabled, Map interrupt only for writeback */
> +
> +	/* The capability flags adapter->caps.other_caps should be
> +	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
> +	 * condition should be updated when the FW can return the
> +	 * correct flag bits.
> +	 */
> +	dynctl_reg_start =
> +		vport->recv_vectors->vchunks.vchunks->dynctl_reg_start;
> +	itrn_reg_start =
> +		vport->recv_vectors->vchunks.vchunks->itrn_reg_start;
> +	dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start);
> +	DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x", dynctl_val);
> +	itrn_val = IDPF_READ_REG(hw, itrn_reg_start);
> +	DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val);
> +	/* Force write-backs by setting WB_ON_ITR bit in DYN_CTL
> +	 * register. WB_ON_ITR and INTENA are mutually exclusive
> +	 * bits. Setting WB_ON_ITR bits means TX and RX Descs
> +	 * are written back based on ITR expiration irrespective
> +	 * of INTENA setting.
> +	 */
> +	/* TBD: need to tune INTERVAL value for better performance. */
> +	itrn_val = (itrn_val == 0) ? IDPF_DFLT_INTERVAL : itrn_val;
> +	dynctl_val = VIRTCHNL2_ITR_IDX_0  <<
> +		     PF_GLINT_DYN_CTL_ITR_INDX_S |
> +		     PF_GLINT_DYN_CTL_WB_ON_ITR_M |
> +		     itrn_val << PF_GLINT_DYN_CTL_INTERVAL_S;
> +	IDPF_WRITE_REG(hw, dynctl_reg_start, dynctl_val);
> +
> +	for (i = 0; i < nb_rx_queues; i++) {
> +		/* map all queues to the same vector */
> +		qv_map[i].queue_id = vport->chunks_info.rx_start_qid + i;
> +		qv_map[i].vector_id =
> +			vport->recv_vectors->vchunks.vchunks->start_vector_id;
> +	}
> +	vport->qv_map = qv_map;
> +
> +	if (idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, true) != 0) {
> +		DRV_LOG(ERR, "config interrupt mapping failed");
> +		goto config_irq_map_err;
> +	}
> +
> +	return 0;
> +
> +config_irq_map_err:
> +	rte_free(vport->qv_map);
> +	vport->qv_map = NULL;
> +
> +qv_map_alloc_err:
> +	return -1;
> +}
> +
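A minimal sketch of the suggested change, propagating -ENOMEM (and the callee's return code) instead of a shared -1 exit; the elided middle is unchanged from the patch:

	qv_map = rte_zmalloc("qv_map",
			     nb_rx_queues *
			     sizeof(struct virtchnl2_queue_vector), 0);
	if (qv_map == NULL) {
		DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
			nb_rx_queues);
		return -ENOMEM;
	}

	/* ... register programming and qv_map setup unchanged ... */

	ret = idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, true);
	if (ret != 0) {
		DRV_LOG(ERR, "config interrupt mapping failed");
		rte_free(vport->qv_map);
		vport->qv_map = NULL;
		return ret;
	}

	return 0;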


^ permalink raw reply	[flat|nested] 79+ messages in thread

* RE: [PATCH v4 09/15] common/idpf: add vport info initialization
  2023-01-17  8:06   ` [PATCH v4 09/15] common/idpf: add vport info initialization beilei.xing
@ 2023-01-31  8:24     ` Wu, Jingjing
  0 siblings, 0 replies; 79+ messages in thread
From: Wu, Jingjing @ 2023-01-31  8:24 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: dev, Zhang, Qi Z

> +int
> +idpf_create_vport_info_init(struct idpf_vport *vport,
> +			    struct virtchnl2_create_vport *vport_info)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +
> +	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
> +	if (adapter->txq_model == 0) {
> +		vport_info->txq_model =
> +			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
Byte order is considered for txq_model; what about the other fields?

> +		vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
> +		vport_info->num_tx_complq =
> +			IDPF_DEFAULT_TXQ_NUM * IDPF_TX_COMPLQ_PER_GRP;
> +	} else {
> +		vport_info->txq_model =
> +			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
> +		vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
> +		vport_info->num_tx_complq = 0;
> +	}
> +	if (adapter->rxq_model == 0) {
> +		vport_info->rxq_model =
> +			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
> +		vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
> +		vport_info->num_rx_bufq =
> +			IDPF_DEFAULT_RXQ_NUM * IDPF_RX_BUFQ_PER_GRP;
> +	} else {
> +		vport_info->rxq_model =
> +			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
> +		vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
> +		vport_info->num_rx_bufq = 0;
> +	}
> +
> +	return 0;
> +}
> +
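For reference, a sketch of what consistent byte-order handling could look like here, applying the same conversion to the count fields as to txq_model (whether each field actually needs conversion depends on the virtchnl2 wire-format definition):

	vport_info->txq_model =
		rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
	vport_info->num_tx_q = rte_cpu_to_le_16(IDPF_DEFAULT_TXQ_NUM);
	vport_info->num_tx_complq =
		rte_cpu_to_le_16(IDPF_DEFAULT_TXQ_NUM * IDPF_TX_COMPLQ_PER_GRP);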

^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 00/15] net/idpf: introduce idpf common modle
  2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common modle beilei.xing
                     ` (14 preceding siblings ...)
  2023-01-17  8:06   ` [PATCH v4 15/15] common/idpf: add avx512 for single queue model beilei.xing
@ 2023-02-02  9:53   ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 01/15] common/idpf: add adapter structure beilei.xing
                       ` (15 more replies)
  15 siblings, 16 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Refactor idpf pmd by introducing idpf common module, which will be also
consumed by a new PMD - CPFL (Control Plane Function Library) PMD.

v2 changes:
 - Refine irq map/unmap functions.
 - Fix cross compile issue.
v3 changes:
 - Embed vport_info field into the vport structure.
 - Refine APIs' name and order in version.map.
 - Refine commit log.
v4 changes:
 - Refine commit log.
v5 changes:
 - Refine version.map.
 - Fix typo.
 - Return error log.

Beilei Xing (15):
  common/idpf: add adapter structure
  common/idpf: add vport structure
  common/idpf: add virtual channel functions
  common/idpf: introduce adapter init and deinit
  common/idpf: add vport init/deinit
  common/idpf: add config RSS
  common/idpf: add irq map/unmap
  common/idpf: support get packet type
  common/idpf: add vport info initialization
  common/idpf: add vector flags in vport
  common/idpf: add rxq and txq struct
  common/idpf: add help functions for queue setup and release
  common/idpf: add Rx and Tx data path
  common/idpf: add vec queue setup
  common/idpf: add avx512 for single queue model

 drivers/common/idpf/base/idpf_controlq_api.h  |    6 -
 drivers/common/idpf/base/meson.build          |    2 +-
 drivers/common/idpf/idpf_common_device.c      |  655 ++++++
 drivers/common/idpf/idpf_common_device.h      |  195 ++
 drivers/common/idpf/idpf_common_logs.h        |   47 +
 drivers/common/idpf/idpf_common_rxtx.c        | 1458 ++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h        |  278 +++
 .../idpf/idpf_common_rxtx_avx512.c}           |   14 +-
 .../idpf/idpf_common_virtchnl.c}              |  889 ++-----
 drivers/common/idpf/idpf_common_virtchnl.h    |   52 +
 drivers/common/idpf/meson.build               |   38 +
 drivers/common/idpf/version.map               |   57 +-
 drivers/net/idpf/idpf_ethdev.c                |  544 +----
 drivers/net/idpf/idpf_ethdev.h                |  194 +-
 drivers/net/idpf/idpf_logs.h                  |   24 -
 drivers/net/idpf/idpf_rxtx.c                  | 2065 +++--------------
 drivers/net/idpf/idpf_rxtx.h                  |  253 +-
 drivers/net/idpf/meson.build                  |   18 -
 18 files changed, 3380 insertions(+), 3409 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_device.c
 create mode 100644 drivers/common/idpf/idpf_common_device.h
 create mode 100644 drivers/common/idpf/idpf_common_logs.h
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.c
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.h
 rename drivers/{net/idpf/idpf_rxtx_vec_avx512.c => common/idpf/idpf_common_rxtx_avx512.c} (98%)
 rename drivers/{net/idpf/idpf_vchnl.c => common/idpf/idpf_common_virtchnl.c} (55%)
 create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h

-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 01/15] common/idpf: add adapter structure
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common modle beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 02/15] common/idpf: add vport structure beilei.xing
                       ` (14 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Add structure idpf_adapter in common module, the structure includes
some basic fields.
Introduce structure idpf_adapter_ext in PMD, this structure includes
extra fields except idpf_adapter.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h | 20 ++++++
 drivers/net/idpf/idpf_ethdev.c           | 91 ++++++++++--------------
 drivers/net/idpf/idpf_ethdev.h           | 25 +++----
 drivers/net/idpf/idpf_rxtx.c             | 16 ++---
 drivers/net/idpf/idpf_rxtx.h             |  4 +-
 drivers/net/idpf/idpf_rxtx_vec_avx512.c  |  3 +-
 drivers/net/idpf/idpf_vchnl.c            | 30 ++++----
 7 files changed, 99 insertions(+), 90 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_device.h
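The split relies on the common struct being embedded as the base member of the PMD-private one, so the two pointers can be converted in both directions. A minimal sketch of the pattern this patch introduces (the function name here is illustrative):

	static void
	example(struct idpf_adapter *base)
	{
		/* common code only sees 'base'; the PMD recovers its wrapper */
		struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(base);

		/* and hands the embedded base back down to common helpers */
		idpf_vc_get_caps(&ad->base);
	}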

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
new file mode 100644
index 0000000000..4f548a7185
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_DEVICE_H_
+#define _IDPF_COMMON_DEVICE_H_
+
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+struct idpf_adapter {
+	struct idpf_hw hw;
+	struct virtchnl2_version_info virtchnl_version;
+	struct virtchnl2_get_capabilities caps;
+	volatile uint32_t pend_cmd; /* pending command not finished */
+	uint32_t cmd_retval; /* return value of the cmd response from cp */
+	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
+};
+
+#endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 3f1b77144c..1b13d081a7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -53,8 +53,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
 
-	dev_info->max_rx_queues = adapter->caps->max_rx_q;
-	dev_info->max_tx_queues = adapter->caps->max_tx_q;
+	dev_info->max_rx_queues = adapter->caps.max_rx_q;
+	dev_info->max_tx_queues = adapter->caps.max_tx_q;
 	dev_info->min_rx_bufsize = IDPF_MIN_BUF_SIZE;
 	dev_info->max_rx_pktlen = vport->max_mtu + IDPF_ETH_OVERHEAD;
 
@@ -147,7 +147,7 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev,
 			 struct virtchnl2_create_vport *vport_info)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(vport->adapter);
 
 	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
 	if (adapter->txq_model == 0) {
@@ -379,7 +379,7 @@ idpf_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
-	if (adapter->caps->rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
 		ret = idpf_init_rss(vport);
 		if (ret != 0) {
 			PMD_INIT_LOG(ERR, "Failed to init rss");
@@ -420,7 +420,7 @@ idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 
 	/* Rx interrupt disabled, Map interrupt only for writeback */
 
-	/* The capability flags adapter->caps->other_caps should be
+	/* The capability flags adapter->caps.other_caps should be
 	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
 	 * condition should be updated when the FW can return the
 	 * correct flag bits.
@@ -518,9 +518,9 @@ static int
 idpf_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	uint16_t num_allocated_vectors =
-		adapter->caps->num_allocated_vectors;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
 	uint16_t req_vecs_num;
 	int ret;
 
@@ -596,7 +596,7 @@ static int
 idpf_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(vport->adapter);
 
 	idpf_dev_stop(dev);
 
@@ -728,7 +728,7 @@ parse_bool(const char *key, const char *value, void *args)
 }
 
 static int
-idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter,
+idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter,
 		   struct idpf_devargs *idpf_args)
 {
 	struct rte_devargs *devargs = pci_dev->device.devargs;
@@ -875,14 +875,14 @@ idpf_init_mbx(struct idpf_hw *hw)
 }
 
 static int
-idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
+idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
-	struct idpf_hw *hw = &adapter->hw;
+	struct idpf_hw *hw = &adapter->base.hw;
 	int ret = 0;
 
 	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
 	hw->hw_addr_len = pci_dev->mem_resource[0].len;
-	hw->back = adapter;
+	hw->back = &adapter->base;
 	hw->vendor_id = pci_dev->id.vendor_id;
 	hw->device_id = pci_dev->id.device_id;
 	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
@@ -902,15 +902,15 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 		goto err;
 	}
 
-	adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
-					IDPF_DFLT_MBX_BUF_SIZE, 0);
-	if (adapter->mbx_resp == NULL) {
+	adapter->base.mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
+					     IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (adapter->base.mbx_resp == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
 		ret = -ENOMEM;
 		goto err_mbx;
 	}
 
-	ret = idpf_vc_check_api_version(adapter);
+	ret = idpf_vc_check_api_version(&adapter->base);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to check api version");
 		goto err_api;
@@ -922,21 +922,13 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 		goto err_api;
 	}
 
-	adapter->caps = rte_zmalloc("idpf_caps",
-				sizeof(struct virtchnl2_get_capabilities), 0);
-	if (adapter->caps == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate idpf_caps memory");
-		ret = -ENOMEM;
-		goto err_api;
-	}
-
-	ret = idpf_vc_get_caps(adapter);
+	ret = idpf_vc_get_caps(&adapter->base);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to get capabilities");
-		goto err_caps;
+		goto err_api;
 	}
 
-	adapter->max_vport_nb = adapter->caps->max_vports;
+	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
 				      adapter->max_vport_nb *
@@ -945,7 +937,7 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 	if (adapter->vports == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
 		ret = -ENOMEM;
-		goto err_vports;
+		goto err_api;
 	}
 
 	adapter->max_rxq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
@@ -962,13 +954,9 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 
 	return ret;
 
-err_vports:
-err_caps:
-	rte_free(adapter->caps);
-	adapter->caps = NULL;
 err_api:
-	rte_free(adapter->mbx_resp);
-	adapter->mbx_resp = NULL;
+	rte_free(adapter->base.mbx_resp);
+	adapter->base.mbx_resp = NULL;
 err_mbx:
 	idpf_ctlq_deinit(hw);
 err:
@@ -995,7 +983,7 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 };
 
 static uint16_t
-idpf_vport_idx_alloc(struct idpf_adapter *ad)
+idpf_vport_idx_alloc(struct idpf_adapter_ext *ad)
 {
 	uint16_t vport_idx;
 	uint16_t i;
@@ -1018,13 +1006,13 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_vport_param *param = init_params;
-	struct idpf_adapter *adapter = param->adapter;
+	struct idpf_adapter_ext *adapter = param->adapter;
 	/* for sending create vport virtchnl msg prepare */
 	struct virtchnl2_create_vport vport_req_info;
 	int ret = 0;
 
 	dev->dev_ops = &idpf_eth_dev_ops;
-	vport->adapter = adapter;
+	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
 
@@ -1085,10 +1073,10 @@ static const struct rte_pci_id pci_id_idpf_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-struct idpf_adapter *
-idpf_find_adapter(struct rte_pci_device *pci_dev)
+struct idpf_adapter_ext *
+idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 {
-	struct idpf_adapter *adapter;
+	struct idpf_adapter_ext *adapter;
 	int found = 0;
 
 	if (pci_dev == NULL)
@@ -1110,17 +1098,14 @@ idpf_find_adapter(struct rte_pci_device *pci_dev)
 }
 
 static void
-idpf_adapter_rel(struct idpf_adapter *adapter)
+idpf_adapter_rel(struct idpf_adapter_ext *adapter)
 {
-	struct idpf_hw *hw = &adapter->hw;
+	struct idpf_hw *hw = &adapter->base.hw;
 
 	idpf_ctlq_deinit(hw);
 
-	rte_free(adapter->caps);
-	adapter->caps = NULL;
-
-	rte_free(adapter->mbx_resp);
-	adapter->mbx_resp = NULL;
+	rte_free(adapter->base.mbx_resp);
+	adapter->base.mbx_resp = NULL;
 
 	rte_free(adapter->vports);
 	adapter->vports = NULL;
@@ -1131,7 +1116,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	       struct rte_pci_device *pci_dev)
 {
 	struct idpf_vport_param vport_param;
-	struct idpf_adapter *adapter;
+	struct idpf_adapter_ext *adapter;
 	struct idpf_devargs devargs;
 	char name[RTE_ETH_NAME_MAX_LEN];
 	int i, retval;
@@ -1143,11 +1128,11 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		idpf_adapter_list_init = true;
 	}
 
-	adapter = idpf_find_adapter(pci_dev);
+	adapter = idpf_find_adapter_ext(pci_dev);
 	if (adapter == NULL) {
 		first_probe = true;
-		adapter = rte_zmalloc("idpf_adapter",
-						sizeof(struct idpf_adapter), 0);
+		adapter = rte_zmalloc("idpf_adapter_ext",
+				      sizeof(struct idpf_adapter_ext), 0);
 		if (adapter == NULL) {
 			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
 			return -ENOMEM;
@@ -1225,7 +1210,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 static int
 idpf_pci_remove(struct rte_pci_device *pci_dev)
 {
-	struct idpf_adapter *adapter = idpf_find_adapter(pci_dev);
+	struct idpf_adapter_ext *adapter = idpf_find_adapter_ext(pci_dev);
 	uint16_t port_id;
 
 	/* Ethdev created can be found RTE_ETH_FOREACH_DEV_OF through rte_device */
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index b0746e5041..e956fa989c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -15,6 +15,7 @@
 
 #include "idpf_logs.h"
 
+#include <idpf_common_device.h>
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
 
@@ -91,7 +92,7 @@ struct idpf_chunks_info {
 };
 
 struct idpf_vport_param {
-	struct idpf_adapter *adapter;
+	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
 	uint16_t idx;       /* index in adapter->vports[]*/
 };
@@ -144,17 +145,11 @@ struct idpf_devargs {
 	uint16_t req_vport_nb;
 };
 
-struct idpf_adapter {
-	TAILQ_ENTRY(idpf_adapter) next;
-	struct idpf_hw hw;
-	char name[IDPF_ADAPTER_NAME_LEN];
-
-	struct virtchnl2_version_info virtchnl_version;
-	struct virtchnl2_get_capabilities *caps;
+struct idpf_adapter_ext {
+	TAILQ_ENTRY(idpf_adapter_ext) next;
+	struct idpf_adapter base;
 
-	volatile uint32_t pend_cmd; /* pending command not finished */
-	uint32_t cmd_retval; /* return value of the cmd response from ipf */
-	uint8_t *mbx_resp; /* buffer to store the mailbox response from ipf */
+	char name[IDPF_ADAPTER_NAME_LEN];
 
 	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
 	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
@@ -182,10 +177,12 @@ struct idpf_adapter {
 	uint64_t time_hw;
 };
 
-TAILQ_HEAD(idpf_adapter_list, idpf_adapter);
+TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
 
 #define IDPF_DEV_TO_PCI(eth_dev)		\
 	RTE_DEV_TO_PCI((eth_dev)->device)
+#define IDPF_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct idpf_adapter_ext, base)
 
 /* structure used for sending and checking response of virtchnl ops */
 struct idpf_cmd_info {
@@ -234,10 +231,10 @@ atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
 	return !ret;
 }
 
-struct idpf_adapter *idpf_find_adapter(struct rte_pci_device *pci_dev);
+struct idpf_adapter_ext *idpf_find_adapter_ext(struct rte_pci_device *pci_dev);
 void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
 int idpf_vc_check_api_version(struct idpf_adapter *adapter);
-int idpf_get_pkt_type(struct idpf_adapter *adapter);
+int idpf_get_pkt_type(struct idpf_adapter_ext *adapter);
 int idpf_vc_get_caps(struct idpf_adapter *adapter);
 int idpf_vc_create_vport(struct idpf_vport *vport,
 			 struct virtchnl2_create_vport *vport_info);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 5aef8ba2b6..4845f2ea0a 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1384,7 +1384,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct idpf_rx_queue *rxq;
 	const uint32_t *ptype_tbl;
 	uint8_t status_err0_qw1;
-	struct idpf_adapter *ad;
+	struct idpf_adapter_ext *ad;
 	struct rte_mbuf *rxm;
 	uint16_t rx_id_bufq1;
 	uint16_t rx_id_bufq2;
@@ -1398,7 +1398,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	nb_rx = 0;
 	rxq = rx_queue;
-	ad = rxq->adapter;
+	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
 
 	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
 		return nb_rx;
@@ -1407,7 +1407,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
 	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
 	rx_desc_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
+	ptype_tbl = ad->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
@@ -1791,7 +1791,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	const uint32_t *ptype_tbl;
 	uint16_t rx_id, nb_hold;
 	struct rte_eth_dev *dev;
-	struct idpf_adapter *ad;
+	struct idpf_adapter_ext *ad;
 	uint16_t rx_packet_len;
 	struct rte_mbuf *rxm;
 	struct rte_mbuf *nmb;
@@ -1805,14 +1805,14 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	nb_hold = 0;
 	rxq = rx_queue;
 
-	ad = rxq->adapter;
+	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
 
 	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
 		return nb_rx;
 
 	rx_id = rxq->rx_tail;
 	rx_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
+	ptype_tbl = ad->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
@@ -2221,7 +2221,7 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter *ad = vport->adapter;
+	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 	struct idpf_rx_queue *rxq;
 	int i;
 
@@ -2275,7 +2275,7 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter *ad = vport->adapter;
+	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 #ifdef CC_AVX512_SUPPORT
 	struct idpf_tx_queue *txq;
 	int i;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 730dc64ebc..047fc03614 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -247,11 +247,11 @@ void idpf_set_tx_function(struct rte_eth_dev *dev);
 /* Helper function to convert a 32b nanoseconds timestamp to 64b. */
 static inline uint64_t
 
-idpf_tstamp_convert_32b_64b(struct idpf_adapter *ad, uint32_t flag,
+idpf_tstamp_convert_32b_64b(struct idpf_adapter_ext *ad, uint32_t flag,
 			    uint32_t in_timestamp)
 {
 #ifdef RTE_ARCH_X86_64
-	struct idpf_hw *hw = &ad->hw;
+	struct idpf_hw *hw = &ad->base.hw;
 	const uint64_t mask = 0xFFFFFFFF;
 	uint32_t hi, lo, lo2, delta;
 	uint64_t ns;
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index fb2b6bb53c..efa7cd2187 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -245,7 +245,8 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
 				   struct rte_mbuf **rx_pkts,
 				   uint16_t nb_pkts)
 {
-	const uint32_t *type_table = rxq->adapter->ptype_tbl;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(rxq->adapter);
+	const uint32_t *type_table = adapter->ptype_tbl;
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
 						    rxq->mbuf_initializer);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 14b34619af..ca481bb915 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -311,13 +311,17 @@ idpf_vc_check_api_version(struct idpf_adapter *adapter)
 }
 
 int __rte_cold
-idpf_get_pkt_type(struct idpf_adapter *adapter)
+idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 {
 	struct virtchnl2_get_ptype_info *ptype_info;
-	uint16_t ptype_recvd = 0, ptype_offset, i, j;
+	struct idpf_adapter *base;
+	uint16_t ptype_offset, i, j;
+	uint16_t ptype_recvd = 0;
 	int ret;
 
-	ret = idpf_vc_query_ptype_info(adapter);
+	base = &adapter->base;
+
+	ret = idpf_vc_query_ptype_info(base);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Fail to query packet type information");
 		return ret;
@@ -328,7 +332,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 			return -ENOMEM;
 
 	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_read_one_msg(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
+		ret = idpf_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
 					IDPF_DFLT_MBX_BUF_SIZE, (u8 *)ptype_info);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR, "Fail to get packet type information");
@@ -515,7 +519,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 
 free_ptype_info:
 	rte_free(ptype_info);
-	clear_cmd(adapter);
+	clear_cmd(base);
 	return ret;
 }
 
@@ -577,7 +581,7 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 		return err;
 	}
 
-	rte_memcpy(adapter->caps, args.out_buffer, sizeof(caps_msg));
+	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
 
 	return 0;
 }
@@ -740,7 +744,8 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 int
 idpf_vc_config_rxqs(struct idpf_vport *vport)
 {
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
 	struct idpf_rx_queue **rxq =
 		(struct idpf_rx_queue **)vport->dev_data->rx_queues;
 	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
@@ -832,10 +837,10 @@ idpf_vc_config_rxqs(struct idpf_vport *vport)
 		args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
 		args.in_args = (uint8_t *)vc_rxqs;
 		args.in_args_size = size;
-		args.out_buffer = adapter->mbx_resp;
+		args.out_buffer = base->mbx_resp;
 		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-		err = idpf_execute_vc_cmd(adapter, &args);
+		err = idpf_execute_vc_cmd(base, &args);
 		rte_free(vc_rxqs);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
@@ -940,7 +945,8 @@ idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
 int
 idpf_vc_config_txqs(struct idpf_vport *vport)
 {
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
 	struct idpf_tx_queue **txq =
 		(struct idpf_tx_queue **)vport->dev_data->tx_queues;
 	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
@@ -1010,10 +1016,10 @@ idpf_vc_config_txqs(struct idpf_vport *vport)
 		args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
 		args.in_args = (uint8_t *)vc_txqs;
 		args.in_args_size = size;
-		args.out_buffer = adapter->mbx_resp;
+		args.out_buffer = base->mbx_resp;
 		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-		err = idpf_execute_vc_cmd(adapter, &args);
+		err = idpf_execute_vc_cmd(base, &args);
 		rte_free(vc_txqs);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 02/15] common/idpf: add vport structure
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common modle beilei.xing
  2023-02-02  9:53     ` [PATCH v5 01/15] common/idpf: add adapter structure beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 03/15] common/idpf: add virtual channel functions beilei.xing
                       ` (13 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Move idpf_vport structure to common module, remove ethdev dependency.
Also remove unused functions.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h |  59 ++++++
 drivers/net/idpf/idpf_ethdev.c           |  10 +-
 drivers/net/idpf/idpf_ethdev.h           |  66 +-----
 drivers/net/idpf/idpf_rxtx.c             |   4 +-
 drivers/net/idpf/idpf_rxtx.h             |   3 +
 drivers/net/idpf/idpf_vchnl.c            | 252 +++--------------------
 6 files changed, 96 insertions(+), 298 deletions(-)
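One detail that makes the ethdev decoupling work: dev_data becomes an opaque void * in the common struct below, and PMD-side code casts it back to struct rte_eth_dev_data where needed, as the idpf_init_rss hunk further down shows.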

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 4f548a7185..b7fff84b25 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -17,4 +17,63 @@ struct idpf_adapter {
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
 };
 
+struct idpf_chunks_info {
+	uint32_t tx_start_qid;
+	uint32_t rx_start_qid;
+	/* Valid only if split queue model */
+	uint32_t tx_compl_start_qid;
+	uint32_t rx_buf_start_qid;
+
+	uint64_t tx_qtail_start;
+	uint32_t tx_qtail_spacing;
+	uint64_t rx_qtail_start;
+	uint32_t rx_qtail_spacing;
+	uint64_t tx_compl_qtail_start;
+	uint32_t tx_compl_qtail_spacing;
+	uint64_t rx_buf_qtail_start;
+	uint32_t rx_buf_qtail_spacing;
+};
+
+struct idpf_vport {
+	struct idpf_adapter *adapter; /* Backreference to associated adapter */
+	struct virtchnl2_create_vport *vport_info; /* virtchnl response info handling */
+	uint16_t sw_idx; /* SW index in adapter->vports[]*/
+	uint16_t vport_id;
+	uint32_t txq_model;
+	uint32_t rxq_model;
+	uint16_t num_tx_q;
+	/* valid only if txq_model is split Q */
+	uint16_t num_tx_complq;
+	uint16_t num_rx_q;
+	/* valid only if rxq_model is split Q */
+	uint16_t num_rx_bufq;
+
+	uint16_t max_mtu;
+	uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+
+	enum virtchnl_rss_algorithm rss_algorithm;
+	uint16_t rss_key_size;
+	uint16_t rss_lut_size;
+
+	void *dev_data; /* Pointer to the device data */
+	uint16_t max_pkt_len; /* Maximum packet length */
+
+	/* RSS info */
+	uint32_t *rss_lut;
+	uint8_t *rss_key;
+	uint64_t rss_hf;
+
+	/* MSIX info*/
+	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
+	uint16_t max_vectors;
+	struct virtchnl2_alloc_vectors *recv_vectors;
+
+	/* Chunk info */
+	struct idpf_chunks_info chunks_info;
+
+	uint16_t devarg_id;
+
+	bool stopped;
+};
+
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 1b13d081a7..72a5c9f39b 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -275,11 +275,13 @@ static int
 idpf_init_rss(struct idpf_vport *vport)
 {
 	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
 	uint16_t i, nb_q, lut_size;
 	int ret = 0;
 
-	rss_conf = &vport->dev_data->dev_conf.rx_adv_conf.rss_conf;
-	nb_q = vport->dev_data->nb_rx_queues;
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
 
 	vport->rss_key = rte_zmalloc("rss_key",
 				     vport->rss_key_size, 0);
@@ -466,7 +468,7 @@ idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 	}
 	vport->qv_map = qv_map;
 
-	if (idpf_vc_config_irq_map_unmap(vport, true) != 0) {
+	if (idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, true) != 0) {
 		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
 		goto config_irq_map_err;
 	}
@@ -582,7 +584,7 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_stop_queues(dev);
 
-	idpf_vc_config_irq_map_unmap(vport, false);
+	idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, false);
 
 	if (vport->recv_vectors != NULL)
 		idpf_vc_dealloc_vectors(vport);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index e956fa989c..8c29019667 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -74,71 +74,12 @@ enum idpf_vc_result {
 	IDPF_MSG_CMD,      /* Read async command result */
 };
 
-struct idpf_chunks_info {
-	uint32_t tx_start_qid;
-	uint32_t rx_start_qid;
-	/* Valid only if split queue model */
-	uint32_t tx_compl_start_qid;
-	uint32_t rx_buf_start_qid;
-
-	uint64_t tx_qtail_start;
-	uint32_t tx_qtail_spacing;
-	uint64_t rx_qtail_start;
-	uint32_t rx_qtail_spacing;
-	uint64_t tx_compl_qtail_start;
-	uint32_t tx_compl_qtail_spacing;
-	uint64_t rx_buf_qtail_start;
-	uint32_t rx_buf_qtail_spacing;
-};
-
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
 	uint16_t idx;       /* index in adapter->vports[]*/
 };
 
-struct idpf_vport {
-	struct idpf_adapter *adapter; /* Backreference to associated adapter */
-	struct virtchnl2_create_vport *vport_info; /* virtchnl response info handling */
-	uint16_t sw_idx; /* SW index in adapter->vports[]*/
-	uint16_t vport_id;
-	uint32_t txq_model;
-	uint32_t rxq_model;
-	uint16_t num_tx_q;
-	/* valid only if txq_model is split Q */
-	uint16_t num_tx_complq;
-	uint16_t num_rx_q;
-	/* valid only if rxq_model is split Q */
-	uint16_t num_rx_bufq;
-
-	uint16_t max_mtu;
-	uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
-
-	enum virtchnl_rss_algorithm rss_algorithm;
-	uint16_t rss_key_size;
-	uint16_t rss_lut_size;
-
-	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
-	uint16_t max_pkt_len; /* Maximum packet length */
-
-	/* RSS info */
-	uint32_t *rss_lut;
-	uint8_t *rss_key;
-	uint64_t rss_hf;
-
-	/* MSIX info*/
-	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
-	uint16_t max_vectors;
-	struct virtchnl2_alloc_vectors *recv_vectors;
-
-	/* Chunk info */
-	struct idpf_chunks_info chunks_info;
-
-	uint16_t devarg_id;
-
-	bool stopped;
-};
-
 /* Struct used when parse driver specific devargs */
 struct idpf_devargs {
 	uint16_t req_vports[IDPF_MAX_VPORT_NUM];
@@ -242,15 +183,12 @@ int idpf_vc_destroy_vport(struct idpf_vport *vport);
 int idpf_vc_set_rss_key(struct idpf_vport *vport);
 int idpf_vc_set_rss_lut(struct idpf_vport *vport);
 int idpf_vc_set_rss_hash(struct idpf_vport *vport);
-int idpf_vc_config_rxqs(struct idpf_vport *vport);
-int idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id);
-int idpf_vc_config_txqs(struct idpf_vport *vport);
-int idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id);
 int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
 		      bool rx, bool on);
 int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
 int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, bool map);
+int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+				 uint16_t nb_rxq, bool map);
 int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
 int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
 int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 4845f2ea0a..918d156e03 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1066,7 +1066,7 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		dev->data->rx_queues[rx_queue_id];
 	int err = 0;
 
-	err = idpf_vc_config_rxq(vport, rx_queue_id);
+	err = idpf_vc_config_rxq(vport, rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
 		return err;
@@ -1117,7 +1117,7 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 		dev->data->tx_queues[tx_queue_id];
 	int err = 0;
 
-	err = idpf_vc_config_txq(vport, tx_queue_id);
+	err = idpf_vc_config_txq(vport, txq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
 		return err;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 047fc03614..9417651b3f 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -243,6 +243,9 @@ void idpf_stop_queues(struct rte_eth_dev *dev);
 void idpf_set_rx_function(struct rte_eth_dev *dev);
 void idpf_set_tx_function(struct rte_eth_dev *dev);
 
+int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
+int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+
 #define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
 /* Helper function to convert a 32b nanoseconds timestamp to 64b. */
 static inline uint64_t
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index ca481bb915..633d3295d3 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -742,121 +742,9 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 
 #define IDPF_RX_BUF_STRIDE		64
 int
-idpf_vc_config_rxqs(struct idpf_vport *vport)
-{
-	struct idpf_adapter *base = vport->adapter;
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
-	struct idpf_rx_queue **rxq =
-		(struct idpf_rx_queue **)vport->dev_data->rx_queues;
-	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
-	struct virtchnl2_rxq_info *rxq_info;
-	struct idpf_cmd_info args;
-	uint16_t total_qs, num_qs;
-	int size, i, j;
-	int err = 0;
-	int k = 0;
-
-	total_qs = vport->num_rx_q + vport->num_rx_bufq;
-	while (total_qs) {
-		if (total_qs > adapter->max_rxq_per_msg) {
-			num_qs = adapter->max_rxq_per_msg;
-			total_qs -= adapter->max_rxq_per_msg;
-		} else {
-			num_qs = total_qs;
-			total_qs = 0;
-		}
-
-		size = sizeof(*vc_rxqs) + (num_qs - 1) *
-			sizeof(struct virtchnl2_rxq_info);
-		vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
-		if (vc_rxqs == NULL) {
-			PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
-			err = -ENOMEM;
-			break;
-		}
-		vc_rxqs->vport_id = vport->vport_id;
-		vc_rxqs->num_qinfo = num_qs;
-		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-			for (i = 0; i < num_qs; i++, k++) {
-				rxq_info = &vc_rxqs->qinfo[i];
-				rxq_info->dma_ring_addr = rxq[k]->rx_ring_phys_addr;
-				rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-				rxq_info->queue_id = rxq[k]->queue_id;
-				rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-				rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
-				rxq_info->max_pkt_size = vport->max_pkt_len;
-
-				rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
-				rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-				rxq_info->ring_len = rxq[k]->nb_rx_desc;
-			}
-		} else {
-			for (i = 0; i < num_qs / 3; i++, k++) {
-				/* Rx queue */
-				rxq_info = &vc_rxqs->qinfo[i * 3];
-				rxq_info->dma_ring_addr =
-					rxq[k]->rx_ring_phys_addr;
-				rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-				rxq_info->queue_id = rxq[k]->queue_id;
-				rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-				rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
-				rxq_info->max_pkt_size = vport->max_pkt_len;
-
-				rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-				rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-				rxq_info->ring_len = rxq[k]->nb_rx_desc;
-				rxq_info->rx_bufq1_id = rxq[k]->bufq1->queue_id;
-				rxq_info->rx_bufq2_id = rxq[k]->bufq2->queue_id;
-				rxq_info->rx_buffer_low_watermark = 64;
-
-				/* Buffer queue */
-				for (j = 1; j <= IDPF_RX_BUFQ_PER_GRP; j++) {
-					struct idpf_rx_queue *bufq = j == 1 ?
-						rxq[k]->bufq1 : rxq[k]->bufq2;
-					rxq_info = &vc_rxqs->qinfo[i * 3 + j];
-					rxq_info->dma_ring_addr =
-						bufq->rx_ring_phys_addr;
-					rxq_info->type =
-						VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-					rxq_info->queue_id = bufq->queue_id;
-					rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-					rxq_info->data_buffer_size = bufq->rx_buf_len;
-					rxq_info->desc_ids =
-						VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-					rxq_info->ring_len = bufq->nb_rx_desc;
-
-					rxq_info->buffer_notif_stride =
-						IDPF_RX_BUF_STRIDE;
-					rxq_info->rx_buffer_low_watermark = 64;
-				}
-			}
-		}
-		memset(&args, 0, sizeof(args));
-		args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
-		args.in_args = (uint8_t *)vc_rxqs;
-		args.in_args_size = size;
-		args.out_buffer = base->mbx_resp;
-		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-		err = idpf_execute_vc_cmd(base, &args);
-		rte_free(vc_rxqs);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
-			break;
-		}
-	}
-
-	return err;
-}
-
-int
-idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
+idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 {
 	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_rx_queue **rxq =
-		(struct idpf_rx_queue **)vport->dev_data->rx_queues;
 	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
 	struct virtchnl2_rxq_info *rxq_info;
 	struct idpf_cmd_info args;
@@ -880,39 +768,38 @@ idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
 	vc_rxqs->num_qinfo = num_qs;
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq[rxq_id]->rx_ring_phys_addr;
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
 		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq[rxq_id]->queue_id;
+		rxq_info->queue_id = rxq->queue_id;
 		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-		rxq_info->data_buffer_size = rxq[rxq_id]->rx_buf_len;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
 		rxq_info->max_pkt_size = vport->max_pkt_len;
 
 		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
 		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
 
-		rxq_info->ring_len = rxq[rxq_id]->nb_rx_desc;
+		rxq_info->ring_len = rxq->nb_rx_desc;
 	}  else {
 		/* Rx queue */
 		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq[rxq_id]->rx_ring_phys_addr;
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
 		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq[rxq_id]->queue_id;
+		rxq_info->queue_id = rxq->queue_id;
 		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		rxq_info->data_buffer_size = rxq[rxq_id]->rx_buf_len;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
 		rxq_info->max_pkt_size = vport->max_pkt_len;
 
 		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
 		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
 
-		rxq_info->ring_len = rxq[rxq_id]->nb_rx_desc;
-		rxq_info->rx_bufq1_id = rxq[rxq_id]->bufq1->queue_id;
-		rxq_info->rx_bufq2_id = rxq[rxq_id]->bufq2->queue_id;
+		rxq_info->ring_len = rxq->nb_rx_desc;
+		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
+		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
 		rxq_info->rx_buffer_low_watermark = 64;
 
 		/* Buffer queue */
 		for (i = 1; i <= IDPF_RX_BUFQ_PER_GRP; i++) {
-			struct idpf_rx_queue *bufq =
-				i == 1 ? rxq[rxq_id]->bufq1 : rxq[rxq_id]->bufq2;
+			struct idpf_rx_queue *bufq = i == 1 ? rxq->bufq1 : rxq->bufq2;
 			rxq_info = &vc_rxqs->qinfo[i];
 			rxq_info->dma_ring_addr = bufq->rx_ring_phys_addr;
 			rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
@@ -943,99 +830,9 @@ idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
 }
 
 int
-idpf_vc_config_txqs(struct idpf_vport *vport)
-{
-	struct idpf_adapter *base = vport->adapter;
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
-	struct idpf_tx_queue **txq =
-		(struct idpf_tx_queue **)vport->dev_data->tx_queues;
-	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
-	struct virtchnl2_txq_info *txq_info;
-	struct idpf_cmd_info args;
-	uint16_t total_qs, num_qs;
-	int size, i;
-	int err = 0;
-	int k = 0;
-
-	total_qs = vport->num_tx_q + vport->num_tx_complq;
-	while (total_qs) {
-		if (total_qs > adapter->max_txq_per_msg) {
-			num_qs = adapter->max_txq_per_msg;
-			total_qs -= adapter->max_txq_per_msg;
-		} else {
-			num_qs = total_qs;
-			total_qs = 0;
-		}
-		size = sizeof(*vc_txqs) + (num_qs - 1) *
-			sizeof(struct virtchnl2_txq_info);
-		vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
-		if (vc_txqs == NULL) {
-			PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
-			err = -ENOMEM;
-			break;
-		}
-		vc_txqs->vport_id = vport->vport_id;
-		vc_txqs->num_qinfo = num_qs;
-		if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-			for (i = 0; i < num_qs; i++, k++) {
-				txq_info = &vc_txqs->qinfo[i];
-				txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
-				txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-				txq_info->queue_id = txq[k]->queue_id;
-				txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-				txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
-				txq_info->ring_len = txq[k]->nb_tx_desc;
-			}
-		} else {
-			for (i = 0; i < num_qs / 2; i++, k++) {
-				/* txq info */
-				txq_info = &vc_txqs->qinfo[2 * i];
-				txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
-				txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-				txq_info->queue_id = txq[k]->queue_id;
-				txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-				txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-				txq_info->ring_len = txq[k]->nb_tx_desc;
-				txq_info->tx_compl_queue_id =
-					txq[k]->complq->queue_id;
-				txq_info->relative_queue_id = txq_info->queue_id;
-
-				/* tx completion queue info */
-				txq_info = &vc_txqs->qinfo[2 * i + 1];
-				txq_info->dma_ring_addr =
-					txq[k]->complq->tx_ring_phys_addr;
-				txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-				txq_info->queue_id = txq[k]->complq->queue_id;
-				txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-				txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-				txq_info->ring_len = txq[k]->complq->nb_tx_desc;
-			}
-		}
-
-		memset(&args, 0, sizeof(args));
-		args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
-		args.in_args = (uint8_t *)vc_txqs;
-		args.in_args_size = size;
-		args.out_buffer = base->mbx_resp;
-		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-		err = idpf_execute_vc_cmd(base, &args);
-		rte_free(vc_txqs);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
-			break;
-		}
-	}
-
-	return err;
-}
-
-int
-idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id)
+idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 {
 	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_tx_queue **txq =
-		(struct idpf_tx_queue **)vport->dev_data->tx_queues;
 	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
 	struct virtchnl2_txq_info *txq_info;
 	struct idpf_cmd_info args;
@@ -1060,32 +857,32 @@ idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id)
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq[txq_id]->tx_ring_phys_addr;
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
 		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq[txq_id]->queue_id;
+		txq_info->queue_id = txq->queue_id;
 		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
 		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
-		txq_info->ring_len = txq[txq_id]->nb_tx_desc;
+		txq_info->ring_len = txq->nb_tx_desc;
 	} else {
 		/* txq info */
 		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq[txq_id]->tx_ring_phys_addr;
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
 		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq[txq_id]->queue_id;
+		txq_info->queue_id = txq->queue_id;
 		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
 		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq[txq_id]->nb_tx_desc;
-		txq_info->tx_compl_queue_id = txq[txq_id]->complq->queue_id;
+		txq_info->ring_len = txq->nb_tx_desc;
+		txq_info->tx_compl_queue_id = txq->complq->queue_id;
 		txq_info->relative_queue_id = txq_info->queue_id;
 
 		/* tx completion queue info */
 		txq_info = &vc_txqs->qinfo[1];
-		txq_info->dma_ring_addr = txq[txq_id]->complq->tx_ring_phys_addr;
+		txq_info->dma_ring_addr = txq->complq->tx_ring_phys_addr;
 		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		txq_info->queue_id = txq[txq_id]->complq->queue_id;
+		txq_info->queue_id = txq->complq->queue_id;
 		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
 		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq[txq_id]->complq->nb_tx_desc;
+		txq_info->ring_len = txq->complq->nb_tx_desc;
 	}
 
 	memset(&args, 0, sizeof(args));
@@ -1104,12 +901,11 @@ idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id)
 }
 
 int
-idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, bool map)
+idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_queue_vector_maps *map_info;
 	struct virtchnl2_queue_vector *vecmap;
-	uint16_t nb_rxq = vport->dev_data->nb_rx_queues;
 	struct idpf_cmd_info args;
 	int len, i, err = 0;
 
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 03/15] common/idpf: add virtual channel functions
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
  2023-02-02  9:53     ` [PATCH v5 01/15] common/idpf: add adapter structure beilei.xing
  2023-02-02  9:53     ` [PATCH v5 02/15] common/idpf: add vport structure beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 04/15] common/idpf: introduce adapter init and deinit beilei.xing
                       ` (12 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Move most of the virtual channel functions to the idpf common module.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/base/idpf_controlq_api.h |   4 -
 drivers/common/idpf/base/meson.build         |   2 +-
 drivers/common/idpf/idpf_common_device.c     |   8 +
 drivers/common/idpf/idpf_common_device.h     |  61 ++
 drivers/common/idpf/idpf_common_logs.h       |  23 +
 drivers/common/idpf/idpf_common_virtchnl.c   | 815 ++++++++++++++++++
 drivers/common/idpf/idpf_common_virtchnl.h   |  48 ++
 drivers/common/idpf/meson.build              |   5 +
 drivers/common/idpf/version.map              |  20 +-
 drivers/net/idpf/idpf_ethdev.c               |   9 +-
 drivers/net/idpf/idpf_ethdev.h               |  85 +-
 drivers/net/idpf/idpf_rxtx.c                 |   8 +-
 drivers/net/idpf/idpf_vchnl.c                | 817 +------------------
 13 files changed, 986 insertions(+), 919 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_device.c
 create mode 100644 drivers/common/idpf/idpf_common_logs.h
 create mode 100644 drivers/common/idpf/idpf_common_virtchnl.c
 create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h

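All of the idpf_vc_* helpers below share one pattern: fill in a struct
idpf_cmd_info, point out_buffer at the adapter's mailbox response buffer,
and hand it to idpf_execute_vc_cmd(), which serializes commands through
the pend_cmd handshake and polls for the reply. Condensed from the
functions in this patch:

    struct idpf_cmd_info args;
    int err;

    memset(&args, 0, sizeof(args));
    args.ops = VIRTCHNL2_OP_DESTROY_VPORT;  /* any virtchnl2 opcode */
    args.in_args = (uint8_t *)&vc_vport;    /* request payload */
    args.in_args_size = sizeof(vc_vport);
    args.out_buffer = adapter->mbx_resp;    /* response lands here */
    args.out_size = IDPF_DFLT_MBX_BUF_SIZE;

    err = idpf_execute_vc_cmd(adapter, &args);
    if (err != 0)
            DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
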
diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 68ac0cfe70..891a0f10f6 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -177,7 +177,6 @@ void idpf_ctlq_remove(struct idpf_hw *hw,
 		      struct idpf_ctlq_info *cq);
 
 /* Sends messages to HW and will also free the buffer*/
-__rte_internal
 int idpf_ctlq_send(struct idpf_hw *hw,
 		   struct idpf_ctlq_info *cq,
 		   u16 num_q_msg,
@@ -186,17 +185,14 @@ int idpf_ctlq_send(struct idpf_hw *hw,
 /* Receives messages and called by interrupt handler/polling
  * initiated by app/process. Also caller is supposed to free the buffers
  */
-__rte_internal
 int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 		   struct idpf_ctlq_msg *q_msg);
 
 /* Reclaims send descriptors on HW write back */
-__rte_internal
 int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 		       struct idpf_ctlq_msg *msg_status[]);
 
 /* Indicate RX buffers are done being processed */
-__rte_internal
 int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw,
 			    struct idpf_ctlq_info *cq,
 			    u16 *buff_count,
diff --git a/drivers/common/idpf/base/meson.build b/drivers/common/idpf/base/meson.build
index 183587b51a..dc4b93c198 100644
--- a/drivers/common/idpf/base/meson.build
+++ b/drivers/common/idpf/base/meson.build
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2022 Intel Corporation
 
-sources = files(
+sources += files(
         'idpf_common.c',
         'idpf_controlq.c',
         'idpf_controlq_setup.c',
diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
new file mode 100644
index 0000000000..5062780362
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_log.h>
+#include <idpf_common_device.h>
+
+RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index b7fff84b25..a7537281d1 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -7,6 +7,12 @@
 
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
+#include <idpf_common_logs.h>
+
+#define IDPF_CTLQ_LEN		64
+#define IDPF_DFLT_MBX_BUF_SIZE	4096
+
+#define IDPF_MAX_PKT_TYPE	1024
 
 struct idpf_adapter {
 	struct idpf_hw hw;
@@ -76,4 +82,59 @@ struct idpf_vport {
 	bool stopped;
 };
 
+/* Message type read in virtual channel from PF */
+enum idpf_vc_result {
+	IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
+	IDPF_MSG_NON,      /* Read nothing from admin queue */
+	IDPF_MSG_SYS,      /* Read system msg from admin queue */
+	IDPF_MSG_CMD,      /* Read async command result */
+};
+
+/* structure used for sending and checking response of virtchnl ops */
+struct idpf_cmd_info {
+	uint32_t ops;
+	uint8_t *in_args;       /* buffer for sending */
+	uint32_t in_args_size;  /* buffer size for sending */
+	uint8_t *out_buffer;    /* buffer for response */
+	uint32_t out_size;      /* buffer size for response */
+};
+
+/* Notify that the current command is done. Only call this after
+ * atomic_set_cmd() has succeeded.
+ */
+static inline void
+notify_cmd(struct idpf_adapter *adapter, int msg_ret)
+{
+	adapter->cmd_retval = msg_ret;
+	/* The return value may be checked in another thread; ensure coherence. */
+	rte_wmb();
+	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
+}
+
+/* Clear the current command. Only call this after
+ * atomic_set_cmd() has succeeded.
+ */
+static inline void
+clear_cmd(struct idpf_adapter *adapter)
+{
+	/* The return value may be checked in another thread; ensure coherence. */
+	rte_wmb();
+	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
+	adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check there is pending cmd in execution. If none, set new command. */
+static inline bool
+atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
+{
+	uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
+	bool ret = __atomic_compare_exchange(&adapter->pend_cmd, &op_unk, &ops,
+					    0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
+
+	if (!ret)
+		DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
+
+	return !ret;
+}
+
 #endif /* _IDPF_COMMON_DEVICE_H_ */
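
These helpers implement a one-command-in-flight protocol: atomic_set_cmd()
claims the slot before a message is sent, notify_cmd() publishes the
return value and releases the slot when a reply is consumed, and
clear_cmd() releases it on error paths. The sending side, as used by
idpf_execute_vc_cmd() below, boils down to:

    if (atomic_set_cmd(adapter, args->ops))  /* another cmd still pending */
            return -EINVAL;

    ret = idpf_send_vc_msg(adapter, args->ops,
                           args->in_args_size, args->in_args);
    if (ret != 0) {
            clear_cmd(adapter);  /* give the slot back on failure */
            return ret;
    }
    /* ... the reply path finishes with notify_cmd(adapter, retval),
     * or clear_cmd(adapter) once the poller has consumed the result.
     */
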
diff --git a/drivers/common/idpf/idpf_common_logs.h b/drivers/common/idpf/idpf_common_logs.h
new file mode 100644
index 0000000000..fe36562769
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_logs.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_LOGS_H_
+#define _IDPF_COMMON_LOGS_H_
+
+#include <rte_log.h>
+
+extern int idpf_common_logtype;
+
+#define DRV_LOG_RAW(level, ...)					\
+	rte_log(RTE_LOG_ ## level,				\
+		idpf_common_logtype,				\
+		RTE_FMT("%s(): "				\
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n",	\
+			__func__,				\
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define DRV_LOG(level, fmt, args...)		\
+	DRV_LOG_RAW(level, fmt, ## args)
+
+#endif /* _IDPF_COMMON_LOGS_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
new file mode 100644
index 0000000000..f2ee586fa0
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -0,0 +1,815 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <idpf_common_virtchnl.h>
+#include <idpf_common_logs.h>
+
+static int
+idpf_vc_clean(struct idpf_adapter *adapter)
+{
+	struct idpf_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
+	uint16_t num_q_msg = IDPF_CTLQ_LEN;
+	struct idpf_dma_mem *dma_mem;
+	int err;
+	uint32_t i;
+
+	for (i = 0; i < 10; i++) {
+		err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
+		msleep(20);
+		if (num_q_msg > 0)
+			break;
+	}
+	if (err != 0)
+		return err;
+
+	/* Empty queue is not an error */
+	for (i = 0; i < num_q_msg; i++) {
+		dma_mem = q_msg[i]->ctx.indirect.payload;
+		if (dma_mem != NULL) {
+			idpf_free_dma_mem(&adapter->hw, dma_mem);
+			rte_free(dma_mem);
+		}
+		rte_free(q_msg[i]);
+	}
+
+	return 0;
+}
+
+static int
+idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
+		 uint16_t msg_size, uint8_t *msg)
+{
+	struct idpf_ctlq_msg *ctlq_msg;
+	struct idpf_dma_mem *dma_mem;
+	int err;
+
+	err = idpf_vc_clean(adapter);
+	if (err != 0)
+		goto err;
+
+	ctlq_msg = rte_zmalloc(NULL, sizeof(struct idpf_ctlq_msg), 0);
+	if (ctlq_msg == NULL) {
+		err = -ENOMEM;
+		goto err;
+	}
+
+	dma_mem = rte_zmalloc(NULL, sizeof(struct idpf_dma_mem), 0);
+	if (dma_mem == NULL) {
+		err = -ENOMEM;
+		goto dma_mem_error;
+	}
+
+	dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
+	idpf_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
+	if (dma_mem->va == NULL) {
+		err = -ENOMEM;
+		goto dma_alloc_error;
+	}
+
+	memcpy(dma_mem->va, msg, msg_size);
+
+	ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_pf;
+	ctlq_msg->func_id = 0;
+	ctlq_msg->data_len = msg_size;
+	ctlq_msg->cookie.mbx.chnl_opcode = op;
+	ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
+	ctlq_msg->ctx.indirect.payload = dma_mem;
+
+	err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
+	if (err != 0)
+		goto send_error;
+
+	return 0;
+
+send_error:
+	idpf_free_dma_mem(&adapter->hw, dma_mem);
+dma_alloc_error:
+	rte_free(dma_mem);
+dma_mem_error:
+	rte_free(ctlq_msg);
+err:
+	return err;
+}
+
+static enum idpf_vc_result
+idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
+		      uint8_t *buf)
+{
+	struct idpf_hw *hw = &adapter->hw;
+	struct idpf_ctlq_msg ctlq_msg;
+	struct idpf_dma_mem *dma_mem = NULL;
+	enum idpf_vc_result result = IDPF_MSG_NON;
+	uint32_t opcode;
+	uint16_t pending = 1;
+	int ret;
+
+	ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+	if (ret != 0) {
+		DRV_LOG(DEBUG, "Can't read msg from AQ");
+		if (ret != -ENOMSG)
+			result = IDPF_MSG_ERR;
+		return result;
+	}
+
+	rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
+
+	opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+	adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+	DRV_LOG(DEBUG, "CQ from CP carries opcode %u, retval %d",
+		opcode, adapter->cmd_retval);
+
+	if (opcode == VIRTCHNL2_OP_EVENT) {
+		struct virtchnl2_event *ve = ctlq_msg.ctx.indirect.payload->va;
+
+		result = IDPF_MSG_SYS;
+		switch (ve->event) {
+		case VIRTCHNL2_EVENT_LINK_CHANGE:
+			/* TBD */
+			break;
+		default:
+			DRV_LOG(ERR, "%s: Unknown event %d from CP",
+				__func__, ve->event);
+			break;
+		}
+	} else {
+		/* async reply msg on command issued by pf previously */
+		result = IDPF_MSG_CMD;
+		if (opcode != adapter->pend_cmd) {
+			DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+				adapter->pend_cmd, opcode);
+			result = IDPF_MSG_ERR;
+		}
+	}
+
+	if (ctlq_msg.data_len != 0)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret != 0 && dma_mem != NULL)
+		idpf_free_dma_mem(hw, dma_mem);
+
+	return result;
+}
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS  10
+
+int
+idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
+		     uint8_t *buf)
+{
+	int err = 0;
+	int i = 0;
+	int ret;
+
+	do {
+		ret = idpf_read_msg_from_cp(adapter, buf_len, buf);
+		if (ret == IDPF_MSG_CMD)
+			break;
+		rte_delay_ms(ASQ_DELAY_MS);
+	} while (i++ < MAX_TRY_TIMES);
+	if (i >= MAX_TRY_TIMES ||
+	    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+		err = -EBUSY;
+		DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+			adapter->cmd_retval, ops);
+	}
+
+	return err;
+}
+
+int
+idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
+{
+	int err = 0;
+	int i = 0;
+	int ret;
+
+	if (atomic_set_cmd(adapter, args->ops))
+		return -EINVAL;
+
+	ret = idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args);
+	if (ret != 0) {
+		DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+		clear_cmd(adapter);
+		return ret;
+	}
+
+	switch (args->ops) {
+	case VIRTCHNL_OP_VERSION:
+	case VIRTCHNL2_OP_GET_CAPS:
+	case VIRTCHNL2_OP_CREATE_VPORT:
+	case VIRTCHNL2_OP_DESTROY_VPORT:
+	case VIRTCHNL2_OP_SET_RSS_KEY:
+	case VIRTCHNL2_OP_SET_RSS_LUT:
+	case VIRTCHNL2_OP_SET_RSS_HASH:
+	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+	case VIRTCHNL2_OP_ENABLE_QUEUES:
+	case VIRTCHNL2_OP_DISABLE_QUEUES:
+	case VIRTCHNL2_OP_ENABLE_VPORT:
+	case VIRTCHNL2_OP_DISABLE_VPORT:
+	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
+	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
+	case VIRTCHNL2_OP_ALLOC_VECTORS:
+	case VIRTCHNL2_OP_DEALLOC_VECTORS:
+		/* for init virtchnl ops, need to poll the response */
+		err = idpf_vc_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
+		clear_cmd(adapter);
+		break;
+	case VIRTCHNL2_OP_GET_PTYPE_INFO:
+		/* This op returns multiple response messages,
+		 * so do not handle the response here.
+		 */
+		break;
+	default:
+		/* For other virtchnl ops issued at runtime,
+		 * wait for the cmd done flag.
+		 */
+		do {
+			if (adapter->pend_cmd == VIRTCHNL2_OP_UNKNOWN)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+			/* If no msg was read, or a sys event was read, continue. */
+		} while (i++ < MAX_TRY_TIMES);
+		/* If no response is received, clear the command. */
+		if (i >= MAX_TRY_TIMES ||
+		    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -EBUSY;
+			DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+				adapter->cmd_retval, args->ops);
+			clear_cmd(adapter);
+		}
+		break;
+	}
+
+	return err;
+}
+
+int
+idpf_vc_check_api_version(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_version_info version, *pver;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&version, 0, sizeof(version));
+	version.major = VIRTCHNL2_VERSION_MAJOR_2;
+	version.minor = VIRTCHNL2_VERSION_MINOR_0;
+
+	args.ops = VIRTCHNL_OP_VERSION;
+	args.in_args = (uint8_t *)&version;
+	args.in_args_size = sizeof(version);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR,
+			"Failed to execute command of VIRTCHNL_OP_VERSION");
+		return err;
+	}
+
+	pver = (struct virtchnl2_version_info *)args.out_buffer;
+	adapter->virtchnl_version = *pver;
+
+	if (adapter->virtchnl_version.major != VIRTCHNL2_VERSION_MAJOR_2 ||
+	    adapter->virtchnl_version.minor != VIRTCHNL2_VERSION_MINOR_0) {
+		DRV_LOG(ERR, "VIRTCHNL API version mismatch:(%u.%u)-(%u.%u)",
+			adapter->virtchnl_version.major,
+			adapter->virtchnl_version.minor,
+			VIRTCHNL2_VERSION_MAJOR_2,
+			VIRTCHNL2_VERSION_MINOR_0);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int
+idpf_vc_get_caps(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_get_capabilities caps_msg;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
+
+	caps_msg.csum_caps =
+		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4          |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP     |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP     |
+		VIRTCHNL2_CAP_TX_CSUM_GENERIC          |
+		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4          |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP     |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP     |
+		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
+
+	caps_msg.rss_caps =
+		VIRTCHNL2_CAP_RSS_IPV4_TCP             |
+		VIRTCHNL2_CAP_RSS_IPV4_UDP             |
+		VIRTCHNL2_CAP_RSS_IPV4_SCTP            |
+		VIRTCHNL2_CAP_RSS_IPV4_OTHER           |
+		VIRTCHNL2_CAP_RSS_IPV6_TCP             |
+		VIRTCHNL2_CAP_RSS_IPV6_UDP             |
+		VIRTCHNL2_CAP_RSS_IPV6_SCTP            |
+		VIRTCHNL2_CAP_RSS_IPV6_OTHER           |
+		VIRTCHNL2_CAP_RSS_IPV4_AH              |
+		VIRTCHNL2_CAP_RSS_IPV4_ESP             |
+		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP          |
+		VIRTCHNL2_CAP_RSS_IPV6_AH              |
+		VIRTCHNL2_CAP_RSS_IPV6_ESP             |
+		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
+
+	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR;
+
+	args.ops = VIRTCHNL2_OP_GET_CAPS;
+	args.in_args = (uint8_t *)&caps_msg;
+	args.in_args_size = sizeof(caps_msg);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR,
+			"Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
+		return err;
+	}
+
+	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
+
+	return 0;
+}
+
+int
+idpf_vc_create_vport(struct idpf_vport *vport,
+		     struct virtchnl2_create_vport *vport_req_info)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_create_vport vport_msg;
+	struct idpf_cmd_info args;
+	int err = -1;
+
+	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
+	vport_msg.vport_type = vport_req_info->vport_type;
+	vport_msg.txq_model = vport_req_info->txq_model;
+	vport_msg.rxq_model = vport_req_info->rxq_model;
+	vport_msg.num_tx_q = vport_req_info->num_tx_q;
+	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
+	vport_msg.num_rx_q = vport_req_info->num_rx_q;
+	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
+	args.in_args = (uint8_t *)&vport_msg;
+	args.in_args_size = sizeof(vport_msg);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR,
+			"Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
+		return err;
+	}
+
+	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+	return 0;
+}
+
+int
+idpf_vc_destroy_vport(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport vc_vport;
+	struct idpf_cmd_info args;
+	int err;
+
+	vc_vport.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
+	args.in_args = (uint8_t *)&vc_vport;
+	args.in_args_size = sizeof(vc_vport);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
+
+	return err;
+}
+
+int
+idpf_vc_set_rss_key(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_key *rss_key;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
+		(vport->rss_key_size - 1);
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (rss_key == NULL)
+		return -ENOMEM;
+
+	rss_key->vport_id = vport->vport_id;
+	rss_key->key_len = vport->rss_key_size;
+	rte_memcpy(rss_key->key, vport->rss_key,
+		   sizeof(rss_key->key[0]) * vport->rss_key_size);
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
+	args.in_args = (uint8_t *)rss_key;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+int
+idpf_vc_set_rss_lut(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_lut *rss_lut;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
+		(vport->rss_lut_size - 1);
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (rss_lut == NULL)
+		return -ENOMEM;
+
+	rss_lut->vport_id = vport->vport_id;
+	rss_lut->lut_entries = vport->rss_lut_size;
+	rte_memcpy(rss_lut->lut, vport->rss_lut,
+		   sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
+	args.in_args = (uint8_t *)rss_lut;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+idpf_vc_set_rss_hash(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_hash rss_hash;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_hash, 0, sizeof(rss_hash));
+	rss_hash.ptype_groups = vport->rss_hf;
+	rss_hash.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
+	args.in_args = (uint8_t *)&rss_hash;
+	args.in_args_size = sizeof(rss_hash);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
+
+	return err;
+}
+
+int
+idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_queue_vector_maps *map_info;
+	struct virtchnl2_queue_vector *vecmap;
+	struct idpf_cmd_info args;
+	int len, i, err = 0;
+
+	len = sizeof(struct virtchnl2_queue_vector_maps) +
+		(nb_rxq - 1) * sizeof(struct virtchnl2_queue_vector);
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (map_info == NULL)
+		return -ENOMEM;
+
+	map_info->vport_id = vport->vport_id;
+	map_info->num_qv_maps = nb_rxq;
+	for (i = 0; i < nb_rxq; i++) {
+		vecmap = &map_info->qv_maps[i];
+		vecmap->queue_id = vport->qv_map[i].queue_id;
+		vecmap->vector_id = vport->qv_map[i].vector_id;
+		vecmap->itr_idx = VIRTCHNL2_ITR_IDX_0;
+		vecmap->queue_type = VIRTCHNL2_QUEUE_TYPE_RX;
+	}
+
+	args.ops = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR :
+		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR;
+	args.in_args = (uint8_t *)map_info;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUE_VECTOR",
+			map ? "MAP" : "UNMAP");
+
+	rte_free(map_info);
+	return err;
+}
+
+int
+idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_alloc_vectors *alloc_vec;
+	struct idpf_cmd_info args;
+	int err, len;
+
+	len = sizeof(struct virtchnl2_alloc_vectors) +
+		(num_vectors - 1) * sizeof(struct virtchnl2_vector_chunk);
+	alloc_vec = rte_zmalloc("alloc_vec", len, 0);
+	if (alloc_vec == NULL)
+		return -ENOMEM;
+
+	alloc_vec->num_vectors = num_vectors;
+
+	args.ops = VIRTCHNL2_OP_ALLOC_VECTORS;
+	args.in_args = (uint8_t *)alloc_vec;
+	args.in_args_size = sizeof(struct virtchnl2_alloc_vectors);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
+
+	if (vport->recv_vectors == NULL) {
+		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
+		if (vport->recv_vectors == NULL) {
+			rte_free(alloc_vec);
+			return -ENOMEM;
+		}
+	}
+
+	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
+	rte_free(alloc_vec);
+	return err;
+}
+
+int
+idpf_vc_dealloc_vectors(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_alloc_vectors *alloc_vec;
+	struct virtchnl2_vector_chunks *vcs;
+	struct idpf_cmd_info args;
+	int err, len;
+
+	alloc_vec = vport->recv_vectors;
+	vcs = &alloc_vec->vchunks;
+
+	len = sizeof(struct virtchnl2_vector_chunks) +
+		(vcs->num_vchunks - 1) * sizeof(struct virtchnl2_vector_chunk);
+
+	args.ops = VIRTCHNL2_OP_DEALLOC_VECTORS;
+	args.in_args = (uint8_t *)vcs;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_DEALLOC_VECTORS");
+
+	return err;
+}
+
+static int
+idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
+			  uint32_t type, bool on)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_del_ena_dis_queues *queue_select;
+	struct virtchnl2_queue_chunk *queue_chunk;
+	struct idpf_cmd_info args;
+	int err, len;
+
+	len = sizeof(struct virtchnl2_del_ena_dis_queues);
+	queue_select = rte_zmalloc("queue_select", len, 0);
+	if (queue_select == NULL)
+		return -ENOMEM;
+
+	queue_chunk = queue_select->chunks.chunks;
+	queue_select->chunks.num_chunks = 1;
+	queue_select->vport_id = vport->vport_id;
+
+	queue_chunk->type = type;
+	queue_chunk->start_queue_id = qid;
+	queue_chunk->num_queues = 1;
+
+	args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
+		VIRTCHNL2_OP_DISABLE_QUEUES;
+	args.in_args = (uint8_t *)queue_select;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+			on ? "ENABLE" : "DISABLE");
+
+	rte_free(queue_select);
+	return err;
+}
+
+int
+idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
+		     bool rx, bool on)
+{
+	uint32_t type;
+	int err, queue_id;
+
+	/* switch txq/rxq */
+	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
+
+	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+		queue_id = vport->chunks_info.rx_start_qid + qid;
+	else
+		queue_id = vport->chunks_info.tx_start_qid + qid;
+	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+	if (err != 0)
+		return err;
+
+	/* switch tx completion queue */
+	if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+		queue_id = vport->chunks_info.tx_compl_start_qid + qid;
+		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+		if (err != 0)
+			return err;
+	}
+
+	/* switch rx buffer queue */
+	if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+		queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
+		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+		if (err != 0)
+			return err;
+		queue_id++;
+		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+		if (err != 0)
+			return err;
+	}
+
+	return err;
+}
+
+#define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
+int
+idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_del_ena_dis_queues *queue_select;
+	struct virtchnl2_queue_chunk *queue_chunk;
+	uint32_t type;
+	struct idpf_cmd_info args;
+	uint16_t num_chunks;
+	int err, len;
+
+	num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+		num_chunks++;
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+		num_chunks++;
+
+	len = sizeof(struct virtchnl2_del_ena_dis_queues) +
+		sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
+	queue_select = rte_zmalloc("queue_select", len, 0);
+	if (queue_select == NULL)
+		return -ENOMEM;
+
+	queue_chunk = queue_select->chunks.chunks;
+	queue_select->chunks.num_chunks = num_chunks;
+	queue_select->vport_id = vport->vport_id;
+
+	type = VIRTCHNL2_QUEUE_TYPE_RX;
+	queue_chunk[type].type = type;
+	queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
+	queue_chunk[type].num_queues = vport->num_rx_q;
+
+	type = VIRTCHNL2_QUEUE_TYPE_TX;
+	queue_chunk[type].type = type;
+	queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
+	queue_chunk[type].num_queues = vport->num_tx_q;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+		queue_chunk[type].type = type;
+		queue_chunk[type].start_queue_id =
+			vport->chunks_info.rx_buf_start_qid;
+		queue_chunk[type].num_queues = vport->num_rx_bufq;
+	}
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+		queue_chunk[type].type = type;
+		queue_chunk[type].start_queue_id =
+			vport->chunks_info.tx_compl_start_qid;
+		queue_chunk[type].num_queues = vport->num_tx_complq;
+	}
+
+	args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
+		VIRTCHNL2_OP_DISABLE_QUEUES;
+	args.in_args = (uint8_t *)queue_select;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+			enable ? "ENABLE" : "DISABLE");
+
+	rte_free(queue_select);
+	return err;
+}
+
+int
+idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport vc_vport;
+	struct idpf_cmd_info args;
+	int err;
+
+	vc_vport.vport_id = vport->vport_id;
+	args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
+		VIRTCHNL2_OP_DISABLE_VPORT;
+	args.in_args = (uint8_t *)&vc_vport;
+	args.in_args_size = sizeof(vc_vport);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
+			enable ? "ENABLE" : "DISABLE");
+	}
+
+	return err;
+}
+
+int
+idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_get_ptype_info *ptype_info;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(struct virtchnl2_get_ptype_info);
+	ptype_info = rte_zmalloc("ptype_info", len, 0);
+	if (ptype_info == NULL)
+		return -ENOMEM;
+
+	ptype_info->start_ptype_id = 0;
+	ptype_info->num_ptypes = IDPF_MAX_PKT_TYPE;
+	args.ops = VIRTCHNL2_OP_GET_PTYPE_INFO;
+	args.in_args = (uint8_t *)ptype_info;
+	args.in_args_size = len;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_PTYPE_INFO");
+
+	rte_free(ptype_info);
+	return err;
+}
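
Note how idpf_vc_switch_queue() translates the PMD-relative queue index
into the absolute queue IDs handed out in chunks_info, and in the
split-queue model also toggles the companion completion or buffer queues.
With hypothetical chunk bases, enabling Rx queue 3 on a split-queue vport
issues three enable commands:

    /* Assume chunks_info.rx_start_qid = 34, rx_buf_start_qid = 162. */
    idpf_vc_switch_queue(vport, 3, true, true);
    /* -> VIRTCHNL2_OP_ENABLE_QUEUES for queue 34 + 3 = 37 (RX), then
     *    for 162 + 2 * 3 = 168 and 169 (the two RX buffer queues).
     */
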
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
new file mode 100644
index 0000000000..e05619f4b4
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_VIRTCHNL_H_
+#define _IDPF_COMMON_VIRTCHNL_H_
+
+#include <idpf_common_device.h>
+
+__rte_internal
+int idpf_vc_check_api_version(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vc_get_caps(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vc_create_vport(struct idpf_vport *vport,
+			 struct virtchnl2_create_vport *vport_info);
+__rte_internal
+int idpf_vc_destroy_vport(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_key(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_lut(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_hash(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
+			 bool rx, bool on);
+__rte_internal
+int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
+__rte_internal
+int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
+__rte_internal
+int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+				 uint16_t nb_rxq, bool map);
+__rte_internal
+int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
+__rte_internal
+int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
+			 uint16_t buf_len, uint8_t *buf);
+__rte_internal
+int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
+			struct idpf_cmd_info *args);
+
+#endif /* _IDPF_COMMON_VIRTCHNL_H_ */
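
Each prototype exported here carries the __rte_internal tag and must also
be listed in the INTERNAL section of version.map (updated below); in
shared-library builds a symbol missing from the map is not visible to the
net/idpf PMD at link time. The pairing, for one representative symbol:

    /* idpf_common_virtchnl.h */
    __rte_internal
    int idpf_vc_get_caps(struct idpf_adapter *adapter);

    /* version.map */
    INTERNAL {
            ...
            idpf_vc_get_caps;
            ...
    };
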
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index 77d997b4a7..d1578641ba 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -1,4 +1,9 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2022 Intel Corporation
 
+sources = files(
+    'idpf_common_device.c',
+    'idpf_common_virtchnl.c',
+)
+
 subdir('base')
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index bfb246c752..9bc0d2a909 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,10 +3,22 @@ INTERNAL {
 
 	idpf_ctlq_deinit;
 	idpf_ctlq_init;
-	idpf_ctlq_clean_sq;
-	idpf_ctlq_recv;
-	idpf_ctlq_send;
-	idpf_ctlq_post_rx_buffs;
+	idpf_execute_vc_cmd;
+	idpf_vc_alloc_vectors;
+	idpf_vc_check_api_version;
+	idpf_vc_config_irq_map_unmap;
+	idpf_vc_create_vport;
+	idpf_vc_dealloc_vectors;
+	idpf_vc_destroy_vport;
+	idpf_vc_ena_dis_queues;
+	idpf_vc_ena_dis_vport;
+	idpf_vc_get_caps;
+	idpf_vc_query_ptype_info;
+	idpf_vc_read_one_msg;
+	idpf_vc_set_rss_hash;
+	idpf_vc_set_rss_key;
+	idpf_vc_set_rss_lut;
+	idpf_vc_switch_queue;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 72a5c9f39b..759fc981d7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -942,13 +942,6 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapt
 		goto err_api;
 	}
 
-	adapter->max_rxq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
-				sizeof(struct virtchnl2_config_rx_queues)) /
-				sizeof(struct virtchnl2_rxq_info);
-	adapter->max_txq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
-				sizeof(struct virtchnl2_config_tx_queues)) /
-				sizeof(struct virtchnl2_txq_info);
-
 	adapter->cur_vports = 0;
 	adapter->cur_vport_nb = 0;
 
@@ -1075,7 +1068,7 @@ static const struct rte_pci_id pci_id_idpf_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-struct idpf_adapter_ext *
+static struct idpf_adapter_ext *
 idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 {
 	struct idpf_adapter_ext *adapter;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 8c29019667..efc540fa32 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -16,6 +16,7 @@
 #include "idpf_logs.h"
 
 #include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
 
@@ -31,8 +32,6 @@
 #define IDPF_RX_BUFQ_PER_GRP	2
 
 #define IDPF_CTLQ_ID		-1
-#define IDPF_CTLQ_LEN		64
-#define IDPF_DFLT_MBX_BUF_SIZE	4096
 
 #define IDPF_DFLT_Q_VEC_NUM	1
 #define IDPF_DFLT_INTERVAL	16
@@ -44,8 +43,6 @@
 
 #define IDPF_NUM_MACADDR_MAX	64
 
-#define IDPF_MAX_PKT_TYPE	1024
-
 #define IDPF_VLAN_TAG_SIZE	4
 #define IDPF_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
@@ -66,14 +63,6 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
-/* Message type read in virtual channel from PF */
-enum idpf_vc_result {
-	IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
-	IDPF_MSG_NON,      /* Read nothing from admin queue */
-	IDPF_MSG_SYS,      /* Read system msg from admin queue */
-	IDPF_MSG_CMD,      /* Read async command result */
-};
-
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
@@ -103,10 +92,6 @@ struct idpf_adapter_ext {
 
 	uint16_t used_vecs_num;
 
-	/* Max config queue number per VC message */
-	uint32_t max_rxq_per_msg;
-	uint32_t max_txq_per_msg;
-
 	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
 
 	bool rx_vec_allowed;
@@ -125,74 +110,6 @@ TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
 #define IDPF_ADAPTER_TO_EXT(p)					\
 	container_of((p), struct idpf_adapter_ext, base)
 
-/* structure used for sending and checking response of virtchnl ops */
-struct idpf_cmd_info {
-	uint32_t ops;
-	uint8_t *in_args;       /* buffer for sending */
-	uint32_t in_args_size;  /* buffer size for sending */
-	uint8_t *out_buffer;    /* buffer for response */
-	uint32_t out_size;      /* buffer size for response */
-};
-
-/* notify current command done. Only call in case execute
- * _atomic_set_cmd successfully.
- */
-static inline void
-notify_cmd(struct idpf_adapter *adapter, int msg_ret)
-{
-	adapter->cmd_retval = msg_ret;
-	/* Return value may be checked in anither thread, need to ensure the coherence. */
-	rte_wmb();
-	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
-}
-
-/* clear current command. Only call in case execute
- * _atomic_set_cmd successfully.
- */
-static inline void
-clear_cmd(struct idpf_adapter *adapter)
-{
-	/* Return value may be checked in anither thread, need to ensure the coherence. */
-	rte_wmb();
-	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
-	adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
-}
-
-/* Check there is pending cmd in execution. If none, set new command. */
-static inline bool
-atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
-{
-	uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
-	bool ret = __atomic_compare_exchange(&adapter->pend_cmd, &op_unk, &ops,
-					    0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
-
-	if (!ret)
-		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
-
-	return !ret;
-}
-
-struct idpf_adapter_ext *idpf_find_adapter_ext(struct rte_pci_device *pci_dev);
-void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
-int idpf_vc_check_api_version(struct idpf_adapter *adapter);
 int idpf_get_pkt_type(struct idpf_adapter_ext *adapter);
-int idpf_vc_get_caps(struct idpf_adapter *adapter);
-int idpf_vc_create_vport(struct idpf_vport *vport,
-			 struct virtchnl2_create_vport *vport_info);
-int idpf_vc_destroy_vport(struct idpf_vport *vport);
-int idpf_vc_set_rss_key(struct idpf_vport *vport);
-int idpf_vc_set_rss_lut(struct idpf_vport *vport);
-int idpf_vc_set_rss_hash(struct idpf_vport *vport);
-int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
-		      bool rx, bool on);
-int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
-int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
-				 uint16_t nb_rxq, bool map);
-int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
-int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
-int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
-int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
-		      uint16_t buf_len, uint8_t *buf);
 
 #endif /* _IDPF_ETHDEV_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 918d156e03..ad3e31208d 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1080,7 +1080,7 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_switch_queue(vport, rx_queue_id, true, true);
+	err = idpf_vc_switch_queue(vport, rx_queue_id, true, true);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -1131,7 +1131,7 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_switch_queue(vport, tx_queue_id, false, true);
+	err = idpf_vc_switch_queue(vport, tx_queue_id, false, true);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -1154,7 +1154,7 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (rx_queue_id >= dev->data->nb_rx_queues)
 		return -EINVAL;
 
-	err = idpf_switch_queue(vport, rx_queue_id, true, false);
+	err = idpf_vc_switch_queue(vport, rx_queue_id, true, false);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -1185,7 +1185,7 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (tx_queue_id >= dev->data->nb_tx_queues)
 		return -EINVAL;
 
-	err = idpf_switch_queue(vport, tx_queue_id, false, false);
+	err = idpf_vc_switch_queue(vport, tx_queue_id, false, false);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 633d3295d3..6f4eb52beb 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -23,293 +23,6 @@
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
 
-static int
-idpf_vc_clean(struct idpf_adapter *adapter)
-{
-	struct idpf_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
-	uint16_t num_q_msg = IDPF_CTLQ_LEN;
-	struct idpf_dma_mem *dma_mem;
-	int err;
-	uint32_t i;
-
-	for (i = 0; i < 10; i++) {
-		err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
-		msleep(20);
-		if (num_q_msg > 0)
-			break;
-	}
-	if (err != 0)
-		return err;
-
-	/* Empty queue is not an error */
-	for (i = 0; i < num_q_msg; i++) {
-		dma_mem = q_msg[i]->ctx.indirect.payload;
-		if (dma_mem != NULL) {
-			idpf_free_dma_mem(&adapter->hw, dma_mem);
-			rte_free(dma_mem);
-		}
-		rte_free(q_msg[i]);
-	}
-
-	return 0;
-}
-
-static int
-idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
-		 uint16_t msg_size, uint8_t *msg)
-{
-	struct idpf_ctlq_msg *ctlq_msg;
-	struct idpf_dma_mem *dma_mem;
-	int err;
-
-	err = idpf_vc_clean(adapter);
-	if (err != 0)
-		goto err;
-
-	ctlq_msg = rte_zmalloc(NULL, sizeof(struct idpf_ctlq_msg), 0);
-	if (ctlq_msg == NULL) {
-		err = -ENOMEM;
-		goto err;
-	}
-
-	dma_mem = rte_zmalloc(NULL, sizeof(struct idpf_dma_mem), 0);
-	if (dma_mem == NULL) {
-		err = -ENOMEM;
-		goto dma_mem_error;
-	}
-
-	dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
-	idpf_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
-	if (dma_mem->va == NULL) {
-		err = -ENOMEM;
-		goto dma_alloc_error;
-	}
-
-	memcpy(dma_mem->va, msg, msg_size);
-
-	ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_pf;
-	ctlq_msg->func_id = 0;
-	ctlq_msg->data_len = msg_size;
-	ctlq_msg->cookie.mbx.chnl_opcode = op;
-	ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
-	ctlq_msg->ctx.indirect.payload = dma_mem;
-
-	err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
-	if (err != 0)
-		goto send_error;
-
-	return 0;
-
-send_error:
-	idpf_free_dma_mem(&adapter->hw, dma_mem);
-dma_alloc_error:
-	rte_free(dma_mem);
-dma_mem_error:
-	rte_free(ctlq_msg);
-err:
-	return err;
-}
-
-static enum idpf_vc_result
-idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
-		      uint8_t *buf)
-{
-	struct idpf_hw *hw = &adapter->hw;
-	struct idpf_ctlq_msg ctlq_msg;
-	struct idpf_dma_mem *dma_mem = NULL;
-	enum idpf_vc_result result = IDPF_MSG_NON;
-	uint32_t opcode;
-	uint16_t pending = 1;
-	int ret;
-
-	ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
-	if (ret != 0) {
-		PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
-		if (ret != -ENOMSG)
-			result = IDPF_MSG_ERR;
-		return result;
-	}
-
-	rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
-
-	opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
-	adapter->cmd_retval =
-		(enum virtchnl_status_code)rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
-
-	PMD_DRV_LOG(DEBUG, "CQ from CP carries opcode %u, retval %d",
-		    opcode, adapter->cmd_retval);
-
-	if (opcode == VIRTCHNL2_OP_EVENT) {
-		struct virtchnl2_event *ve =
-			(struct virtchnl2_event *)ctlq_msg.ctx.indirect.payload->va;
-
-		result = IDPF_MSG_SYS;
-		switch (ve->event) {
-		case VIRTCHNL2_EVENT_LINK_CHANGE:
-			/* TBD */
-			break;
-		default:
-			PMD_DRV_LOG(ERR, "%s: Unknown event %d from CP",
-				    __func__, ve->event);
-			break;
-		}
-	} else {
-		/* async reply msg on command issued by pf previously */
-		result = IDPF_MSG_CMD;
-		if (opcode != adapter->pend_cmd) {
-			PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
-				    adapter->pend_cmd, opcode);
-			result = IDPF_MSG_ERR;
-		}
-	}
-
-	if (ctlq_msg.data_len != 0)
-		dma_mem = ctlq_msg.ctx.indirect.payload;
-	else
-		pending = 0;
-
-	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
-	if (ret != 0 && dma_mem != NULL)
-		idpf_free_dma_mem(hw, dma_mem);
-
-	return result;
-}
-
-#define MAX_TRY_TIMES 200
-#define ASQ_DELAY_MS  10
-
-int
-idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
-		  uint8_t *buf)
-{
-	int err = 0;
-	int i = 0;
-	int ret;
-
-	do {
-		ret = idpf_read_msg_from_cp(adapter, buf_len, buf);
-		if (ret == IDPF_MSG_CMD)
-			break;
-		rte_delay_ms(ASQ_DELAY_MS);
-	} while (i++ < MAX_TRY_TIMES);
-	if (i >= MAX_TRY_TIMES ||
-	    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
-		err = -EBUSY;
-		PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
-			    adapter->cmd_retval, ops);
-	}
-
-	return err;
-}
-
-static int
-idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
-{
-	int err = 0;
-	int i = 0;
-	int ret;
-
-	if (atomic_set_cmd(adapter, args->ops))
-		return -EINVAL;
-
-	ret = idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args);
-	if (ret != 0) {
-		PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
-		clear_cmd(adapter);
-		return ret;
-	}
-
-	switch (args->ops) {
-	case VIRTCHNL_OP_VERSION:
-	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-		/* for init virtchnl ops, need to poll the response */
-		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
-		clear_cmd(adapter);
-		break;
-	case VIRTCHNL2_OP_GET_PTYPE_INFO:
-		/* for multiple response messages,
-		 * do not handle the response here.
-		 */
-		break;
-	default:
-		/* For other virtchnl ops in running time,
-		 * wait for the cmd done flag.
-		 */
-		do {
-			if (adapter->pend_cmd == VIRTCHNL_OP_UNKNOWN)
-				break;
-			rte_delay_ms(ASQ_DELAY_MS);
-			/* If no msg was read, or a sys event was read, continue */
-		} while (i++ < MAX_TRY_TIMES);
-		/* If no response is received, clear the command */
-		if (i >= MAX_TRY_TIMES  ||
-		    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
-			err = -EBUSY;
-			PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
-				    adapter->cmd_retval, args->ops);
-			clear_cmd(adapter);
-		}
-		break;
-	}
-
-	return err;
-}
-
-int
-idpf_vc_check_api_version(struct idpf_adapter *adapter)
-{
-	struct virtchnl2_version_info version, *pver;
-	struct idpf_cmd_info args;
-	int err;
-
-	memset(&version, 0, sizeof(struct virtchnl_version_info));
-	version.major = VIRTCHNL2_VERSION_MAJOR_2;
-	version.minor = VIRTCHNL2_VERSION_MINOR_0;
-
-	args.ops = VIRTCHNL_OP_VERSION;
-	args.in_args = (uint8_t *)&version;
-	args.in_args_size = sizeof(version);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of VIRTCHNL_OP_VERSION");
-		return err;
-	}
-
-	pver = (struct virtchnl2_version_info *)args.out_buffer;
-	adapter->virtchnl_version = *pver;
-
-	if (adapter->virtchnl_version.major != VIRTCHNL2_VERSION_MAJOR_2 ||
-	    adapter->virtchnl_version.minor != VIRTCHNL2_VERSION_MINOR_0) {
-		PMD_INIT_LOG(ERR, "VIRTCHNL API version mismatch:(%u.%u)-(%u.%u)",
-			     adapter->virtchnl_version.major,
-			     adapter->virtchnl_version.minor,
-			     VIRTCHNL2_VERSION_MAJOR_2,
-			     VIRTCHNL2_VERSION_MINOR_0);
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
 int __rte_cold
 idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 {
@@ -332,8 +45,8 @@ idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 			return -ENOMEM;
 
 	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
-					IDPF_DFLT_MBX_BUF_SIZE, (u8 *)ptype_info);
+		ret = idpf_vc_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
+					   IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR, "Fail to get packet type information");
 			goto free_ptype_info;
@@ -349,7 +62,7 @@ idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 			uint32_t proto_hdr = 0;
 
 			ptype = (struct virtchnl2_ptype *)
-					((u8 *)ptype_info + ptype_offset);
+					((uint8_t *)ptype_info + ptype_offset);
 			ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
 			if (ptype_offset > IDPF_DFLT_MBX_BUF_SIZE) {
 				ret = -EINVAL;
@@ -523,223 +236,6 @@ idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 	return ret;
 }
 
-int
-idpf_vc_get_caps(struct idpf_adapter *adapter)
-{
-	struct virtchnl2_get_capabilities caps_msg;
-	struct idpf_cmd_info args;
-	int err;
-
-	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
-
-	caps_msg.csum_caps =
-		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4          |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP     |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP     |
-		VIRTCHNL2_CAP_TX_CSUM_GENERIC          |
-		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4          |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP     |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP     |
-		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
-
-	caps_msg.rss_caps =
-		VIRTCHNL2_CAP_RSS_IPV4_TCP             |
-		VIRTCHNL2_CAP_RSS_IPV4_UDP             |
-		VIRTCHNL2_CAP_RSS_IPV4_SCTP            |
-		VIRTCHNL2_CAP_RSS_IPV4_OTHER           |
-		VIRTCHNL2_CAP_RSS_IPV6_TCP             |
-		VIRTCHNL2_CAP_RSS_IPV6_UDP             |
-		VIRTCHNL2_CAP_RSS_IPV6_SCTP            |
-		VIRTCHNL2_CAP_RSS_IPV6_OTHER           |
-		VIRTCHNL2_CAP_RSS_IPV4_AH              |
-		VIRTCHNL2_CAP_RSS_IPV4_ESP             |
-		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP          |
-		VIRTCHNL2_CAP_RSS_IPV6_AH              |
-		VIRTCHNL2_CAP_RSS_IPV6_ESP             |
-		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
-
-	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR;
-
-	args.ops = VIRTCHNL2_OP_GET_CAPS;
-	args.in_args = (uint8_t *)&caps_msg;
-	args.in_args_size = sizeof(caps_msg);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
-		return err;
-	}
-
-	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
-
-	return 0;
-}
-
-int
-idpf_vc_create_vport(struct idpf_vport *vport,
-		     struct virtchnl2_create_vport *vport_req_info)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_create_vport vport_msg;
-	struct idpf_cmd_info args;
-	int err = -1;
-
-	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
-	vport_msg.vport_type = vport_req_info->vport_type;
-	vport_msg.txq_model = vport_req_info->txq_model;
-	vport_msg.rxq_model = vport_req_info->rxq_model;
-	vport_msg.num_tx_q = vport_req_info->num_tx_q;
-	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
-	vport_msg.num_rx_q = vport_req_info->num_rx_q;
-	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
-	args.in_args = (uint8_t *)&vport_msg;
-	args.in_args_size = sizeof(vport_msg);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
-		return err;
-	}
-
-	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
-	return 0;
-}
-
-int
-idpf_vc_destroy_vport(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_vport vc_vport;
-	struct idpf_cmd_info args;
-	int err;
-
-	vc_vport.vport_id = vport->vport_id;
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
-	args.in_args = (uint8_t *)&vc_vport;
-	args.in_args_size = sizeof(vc_vport);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
-
-	return err;
-}
-
-int
-idpf_vc_set_rss_key(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_rss_key *rss_key;
-	struct idpf_cmd_info args;
-	int len, err;
-
-	len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
-		(vport->rss_key_size - 1);
-	rss_key = rte_zmalloc("rss_key", len, 0);
-	if (rss_key == NULL)
-		return -ENOMEM;
-
-	rss_key->vport_id = vport->vport_id;
-	rss_key->key_len = vport->rss_key_size;
-	rte_memcpy(rss_key->key, vport->rss_key,
-		   sizeof(rss_key->key[0]) * vport->rss_key_size);
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
-	args.in_args = (uint8_t *)rss_key;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
-
-	rte_free(rss_key);
-	return err;
-}
-
-int
-idpf_vc_set_rss_lut(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_rss_lut *rss_lut;
-	struct idpf_cmd_info args;
-	int len, err;
-
-	len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
-		(vport->rss_lut_size - 1);
-	rss_lut = rte_zmalloc("rss_lut", len, 0);
-	if (rss_lut == NULL)
-		return -ENOMEM;
-
-	rss_lut->vport_id = vport->vport_id;
-	rss_lut->lut_entries = vport->rss_lut_size;
-	rte_memcpy(rss_lut->lut, vport->rss_lut,
-		   sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
-	args.in_args = (uint8_t *)rss_lut;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
-
-	rte_free(rss_lut);
-	return err;
-}
-
-int
-idpf_vc_set_rss_hash(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_rss_hash rss_hash;
-	struct idpf_cmd_info args;
-	int err;
-
-	memset(&rss_hash, 0, sizeof(rss_hash));
-	rss_hash.ptype_groups = vport->rss_hf;
-	rss_hash.vport_id = vport->vport_id;
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
-	args.in_args = (uint8_t *)&rss_hash;
-	args.in_args_size = sizeof(rss_hash);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
-
-	return err;
-}
-
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
@@ -899,310 +395,3 @@ idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 
 	return err;
 }
-
-int
-idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_queue_vector_maps *map_info;
-	struct virtchnl2_queue_vector *vecmap;
-	struct idpf_cmd_info args;
-	int len, i, err = 0;
-
-	len = sizeof(struct virtchnl2_queue_vector_maps) +
-		(nb_rxq - 1) * sizeof(struct virtchnl2_queue_vector);
-
-	map_info = rte_zmalloc("map_info", len, 0);
-	if (map_info == NULL)
-		return -ENOMEM;
-
-	map_info->vport_id = vport->vport_id;
-	map_info->num_qv_maps = nb_rxq;
-	for (i = 0; i < nb_rxq; i++) {
-		vecmap = &map_info->qv_maps[i];
-		vecmap->queue_id = vport->qv_map[i].queue_id;
-		vecmap->vector_id = vport->qv_map[i].vector_id;
-		vecmap->itr_idx = VIRTCHNL2_ITR_IDX_0;
-		vecmap->queue_type = VIRTCHNL2_QUEUE_TYPE_RX;
-	}
-
-	args.ops = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR :
-		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR;
-	args.in_args = (u8 *)map_info;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUE_VECTOR",
-			    map ? "MAP" : "UNMAP");
-
-	rte_free(map_info);
-	return err;
-}
-
-int
-idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_alloc_vectors *alloc_vec;
-	struct idpf_cmd_info args;
-	int err, len;
-
-	len = sizeof(struct virtchnl2_alloc_vectors) +
-		(num_vectors - 1) * sizeof(struct virtchnl2_vector_chunk);
-	alloc_vec = rte_zmalloc("alloc_vec", len, 0);
-	if (alloc_vec == NULL)
-		return -ENOMEM;
-
-	alloc_vec->num_vectors = num_vectors;
-
-	args.ops = VIRTCHNL2_OP_ALLOC_VECTORS;
-	args.in_args = (u8 *)alloc_vec;
-	args.in_args_size = sizeof(struct virtchnl2_alloc_vectors);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
-
-	if (vport->recv_vectors == NULL) {
-		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
-		if (vport->recv_vectors == NULL) {
-			rte_free(alloc_vec);
-			return -ENOMEM;
-		}
-	}
-
-	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
-	rte_free(alloc_vec);
-	return err;
-}
-
-int
-idpf_vc_dealloc_vectors(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_alloc_vectors *alloc_vec;
-	struct virtchnl2_vector_chunks *vcs;
-	struct idpf_cmd_info args;
-	int err, len;
-
-	alloc_vec = vport->recv_vectors;
-	vcs = &alloc_vec->vchunks;
-
-	len = sizeof(struct virtchnl2_vector_chunks) +
-		(vcs->num_vchunks - 1) * sizeof(struct virtchnl2_vector_chunk);
-
-	args.ops = VIRTCHNL2_OP_DEALLOC_VECTORS;
-	args.in_args = (u8 *)vcs;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_DEALLOC_VECTORS");
-
-	return err;
-}
-
-static int
-idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
-			  uint32_t type, bool on)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_del_ena_dis_queues *queue_select;
-	struct virtchnl2_queue_chunk *queue_chunk;
-	struct idpf_cmd_info args;
-	int err, len;
-
-	len = sizeof(struct virtchnl2_del_ena_dis_queues);
-	queue_select = rte_zmalloc("queue_select", len, 0);
-	if (queue_select == NULL)
-		return -ENOMEM;
-
-	queue_chunk = queue_select->chunks.chunks;
-	queue_select->chunks.num_chunks = 1;
-	queue_select->vport_id = vport->vport_id;
-
-	queue_chunk->type = type;
-	queue_chunk->start_queue_id = qid;
-	queue_chunk->num_queues = 1;
-
-	args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
-		VIRTCHNL2_OP_DISABLE_QUEUES;
-	args.in_args = (u8 *)queue_select;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
-			    on ? "ENABLE" : "DISABLE");
-
-	rte_free(queue_select);
-	return err;
-}
-
-int
-idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
-		     bool rx, bool on)
-{
-	uint32_t type;
-	int err, queue_id;
-
-	/* switch txq/rxq */
-	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
-
-	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
-		queue_id = vport->chunks_info.rx_start_qid + qid;
-	else
-		queue_id = vport->chunks_info.tx_start_qid + qid;
-	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-	if (err != 0)
-		return err;
-
-	/* switch tx completion queue */
-	if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		queue_id = vport->chunks_info.tx_compl_start_qid + qid;
-		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-		if (err != 0)
-			return err;
-	}
-
-	/* switch rx buffer queue */
-	if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-		queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
-		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-		if (err != 0)
-			return err;
-		queue_id++;
-		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-		if (err != 0)
-			return err;
-	}
-
-	return err;
-}
-
-#define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
-int
-idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_del_ena_dis_queues *queue_select;
-	struct virtchnl2_queue_chunk *queue_chunk;
-	uint32_t type;
-	struct idpf_cmd_info args;
-	uint16_t num_chunks;
-	int err, len;
-
-	num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
-		num_chunks++;
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
-		num_chunks++;
-
-	len = sizeof(struct virtchnl2_del_ena_dis_queues) +
-		sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
-	queue_select = rte_zmalloc("queue_select", len, 0);
-	if (queue_select == NULL)
-		return -ENOMEM;
-
-	queue_chunk = queue_select->chunks.chunks;
-	queue_select->chunks.num_chunks = num_chunks;
-	queue_select->vport_id = vport->vport_id;
-
-	type = VIRTCHNL_QUEUE_TYPE_RX;
-	queue_chunk[type].type = type;
-	queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
-	queue_chunk[type].num_queues = vport->num_rx_q;
-
-	type = VIRTCHNL2_QUEUE_TYPE_TX;
-	queue_chunk[type].type = type;
-	queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
-	queue_chunk[type].num_queues = vport->num_tx_q;
-
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-		queue_chunk[type].type = type;
-		queue_chunk[type].start_queue_id =
-			vport->chunks_info.rx_buf_start_qid;
-		queue_chunk[type].num_queues = vport->num_rx_bufq;
-	}
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		queue_chunk[type].type = type;
-		queue_chunk[type].start_queue_id =
-			vport->chunks_info.tx_compl_start_qid;
-		queue_chunk[type].num_queues = vport->num_tx_complq;
-	}
-
-	args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
-		VIRTCHNL2_OP_DISABLE_QUEUES;
-	args.in_args = (u8 *)queue_select;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
-			    enable ? "ENABLE" : "DISABLE");
-
-	rte_free(queue_select);
-	return err;
-}
-
-int
-idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_vport vc_vport;
-	struct idpf_cmd_info args;
-	int err;
-
-	vc_vport.vport_id = vport->vport_id;
-	args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
-			    VIRTCHNL2_OP_DISABLE_VPORT;
-	args.in_args = (uint8_t *)&vc_vport;
-	args.in_args_size = sizeof(vc_vport);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
-			    enable ? "ENABLE" : "DISABLE");
-	}
-
-	return err;
-}
-
-int
-idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
-{
-	struct virtchnl2_get_ptype_info *ptype_info;
-	struct idpf_cmd_info args;
-	int len, err;
-
-	len = sizeof(struct virtchnl2_get_ptype_info);
-	ptype_info = rte_zmalloc("ptype_info", len, 0);
-	if (ptype_info == NULL)
-		return -ENOMEM;
-
-	ptype_info->start_ptype_id = 0;
-	ptype_info->num_ptypes = IDPF_MAX_PKT_TYPE;
-	args.ops = VIRTCHNL2_OP_GET_PTYPE_INFO;
-	args.in_args = (u8 *)ptype_info;
-	args.in_args_size = len;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_PTYPE_INFO");
-
-	rte_free(ptype_info);
-	return err;
-}
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 04/15] common/idpf: introduce adapter init and deinit
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
                       ` (2 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 03/15] common/idpf: add virtual channel functions beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 05/15] common/idpf: add vport init/deinit beilei.xing
                       ` (11 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Introduce the idpf_adapter_init and idpf_adapter_deinit
functions in the common module, and add the matching
idpf_adapter_ext_init and idpf_adapter_ext_deinit
wrappers in the idpf PMD.
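
For reference, a minimal sketch of the resulting split, condensed from the
diff below (PCI field setup and error unwinding trimmed):

static int
idpf_adapter_ext_init(struct rte_pci_device *pci_dev,
		      struct idpf_adapter_ext *adapter)
{
	struct idpf_adapter *base = &adapter->base;
	int ret;

	/* PMD-specific: fill base->hw with the BAR address and PCI IDs
	 * from pci_dev here.
	 */

	/* common module: PF reset, mailbox init, API check, get caps */
	ret = idpf_adapter_init(base);
	if (ret != 0)
		return ret;

	/* PMD-specific: ptype table setup (vports allocation follows) */
	return idpf_get_pkt_type(adapter);
}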

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/base/idpf_controlq_api.h |   2 -
 drivers/common/idpf/idpf_common_device.c     | 153 ++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h     |   6 +
 drivers/common/idpf/version.map              |   4 +-
 drivers/net/idpf/idpf_ethdev.c               | 158 ++-----------------
 drivers/net/idpf/idpf_ethdev.h               |   2 -
 6 files changed, 178 insertions(+), 147 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 891a0f10f6..32d17baadf 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -161,7 +161,6 @@ enum idpf_mbx_opc {
 /* Will init all required q including default mb.  "q_info" is an array of
  * create_info structs equal to the number of control queues to be created.
  */
-__rte_internal
 int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		   struct idpf_ctlq_create_info *q_info);
 
@@ -199,7 +198,6 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw,
 			    struct idpf_dma_mem **buffs);
 
 /* Will destroy all q including the default mb */
-__rte_internal
 int idpf_ctlq_deinit(struct idpf_hw *hw);
 
 #endif /* _IDPF_CONTROLQ_API_H_ */
diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 5062780362..b2b42443e4 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -4,5 +4,158 @@
 
 #include <rte_log.h>
 #include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+
+static void
+idpf_reset_pf(struct idpf_hw *hw)
+{
+	uint32_t reg;
+
+	reg = IDPF_READ_REG(hw, PFGEN_CTRL);
+	IDPF_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR));
+}
+
+#define IDPF_RESET_WAIT_CNT 100
+static int
+idpf_check_pf_reset_done(struct idpf_hw *hw)
+{
+	uint32_t reg;
+	int i;
+
+	for (i = 0; i < IDPF_RESET_WAIT_CNT; i++) {
+		reg = IDPF_READ_REG(hw, PFGEN_RSTAT);
+		if (reg != 0xFFFFFFFF && (reg & PFGEN_RSTAT_PFR_STATE_M))
+			return 0;
+		rte_delay_ms(1000);
+	}
+
+	DRV_LOG(ERR, "IDPF reset timeout");
+	return -EBUSY;
+}
+
+#define CTLQ_NUM 2
+static int
+idpf_init_mbx(struct idpf_hw *hw)
+{
+	struct idpf_ctlq_create_info ctlq_info[CTLQ_NUM] = {
+		{
+			.type = IDPF_CTLQ_TYPE_MAILBOX_TX,
+			.id = IDPF_CTLQ_ID,
+			.len = IDPF_CTLQ_LEN,
+			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+			.reg = {
+				.head = PF_FW_ATQH,
+				.tail = PF_FW_ATQT,
+				.len = PF_FW_ATQLEN,
+				.bah = PF_FW_ATQBAH,
+				.bal = PF_FW_ATQBAL,
+				.len_mask = PF_FW_ATQLEN_ATQLEN_M,
+				.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
+				.head_mask = PF_FW_ATQH_ATQH_M,
+			}
+		},
+		{
+			.type = IDPF_CTLQ_TYPE_MAILBOX_RX,
+			.id = IDPF_CTLQ_ID,
+			.len = IDPF_CTLQ_LEN,
+			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+			.reg = {
+				.head = PF_FW_ARQH,
+				.tail = PF_FW_ARQT,
+				.len = PF_FW_ARQLEN,
+				.bah = PF_FW_ARQBAH,
+				.bal = PF_FW_ARQBAL,
+				.len_mask = PF_FW_ARQLEN_ARQLEN_M,
+				.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
+				.head_mask = PF_FW_ARQH_ARQH_M,
+			}
+		}
+	};
+	struct idpf_ctlq_info *ctlq;
+	int ret;
+
+	ret = idpf_ctlq_init(hw, CTLQ_NUM, ctlq_info);
+	if (ret != 0)
+		return ret;
+
+	LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
+				 struct idpf_ctlq_info, cq_list) {
+		if (ctlq->q_id == IDPF_CTLQ_ID &&
+		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
+			hw->asq = ctlq;
+		if (ctlq->q_id == IDPF_CTLQ_ID &&
+		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
+			hw->arq = ctlq;
+	}
+
+	if (hw->asq == NULL || hw->arq == NULL) {
+		idpf_ctlq_deinit(hw);
+		ret = -ENOENT;
+	}
+
+	return ret;
+}
+
+int
+idpf_adapter_init(struct idpf_adapter *adapter)
+{
+	struct idpf_hw *hw = &adapter->hw;
+	int ret;
+
+	idpf_reset_pf(hw);
+	ret = idpf_check_pf_reset_done(hw);
+	if (ret != 0) {
+		DRV_LOG(ERR, "IDPF is still resetting");
+		goto err_check_reset;
+	}
+
+	ret = idpf_init_mbx(hw);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to init mailbox");
+		goto err_check_reset;
+	}
+
+	adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
+					IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (adapter->mbx_resp == NULL) {
+		DRV_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
+		ret = -ENOMEM;
+		goto err_mbx_resp;
+	}
+
+	ret = idpf_vc_check_api_version(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to check api version");
+		goto err_check_api;
+	}
+
+	ret = idpf_vc_get_caps(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to get capabilities");
+		goto err_check_api;
+	}
+
+	return 0;
+
+err_check_api:
+	rte_free(adapter->mbx_resp);
+	adapter->mbx_resp = NULL;
+err_mbx_resp:
+	idpf_ctlq_deinit(hw);
+err_check_reset:
+	return ret;
+}
+
+int
+idpf_adapter_deinit(struct idpf_adapter *adapter)
+{
+	struct idpf_hw *hw = &adapter->hw;
+
+	idpf_ctlq_deinit(hw);
+	rte_free(adapter->mbx_resp);
+	adapter->mbx_resp = NULL;
+
+	return 0;
+}
 
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index a7537281d1..e4344ea392 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -9,6 +9,7 @@
 #include <base/virtchnl2.h>
 #include <idpf_common_logs.h>
 
+#define IDPF_CTLQ_ID		-1
 #define IDPF_CTLQ_LEN		64
 #define IDPF_DFLT_MBX_BUF_SIZE	4096
 
@@ -137,4 +138,9 @@ atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
 	return !ret;
 }
 
+__rte_internal
+int idpf_adapter_init(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_adapter_deinit(struct idpf_adapter *adapter);
+
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 9bc0d2a909..8056996e3c 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -1,8 +1,8 @@
 INTERNAL {
 	global:
 
-	idpf_ctlq_deinit;
-	idpf_ctlq_init;
+	idpf_adapter_deinit;
+	idpf_adapter_init;
 	idpf_execute_vc_cmd;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 759fc981d7..c17c7bb472 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -786,148 +786,32 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 	return ret;
 }
 
-static void
-idpf_reset_pf(struct idpf_hw *hw)
-{
-	uint32_t reg;
-
-	reg = IDPF_READ_REG(hw, PFGEN_CTRL);
-	IDPF_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR));
-}
-
-#define IDPF_RESET_WAIT_CNT 100
 static int
-idpf_check_pf_reset_done(struct idpf_hw *hw)
+idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
-	uint32_t reg;
-	int i;
-
-	for (i = 0; i < IDPF_RESET_WAIT_CNT; i++) {
-		reg = IDPF_READ_REG(hw, PFGEN_RSTAT);
-		if (reg != 0xFFFFFFFF && (reg & PFGEN_RSTAT_PFR_STATE_M))
-			return 0;
-		rte_delay_ms(1000);
-	}
-
-	PMD_INIT_LOG(ERR, "IDPF reset timeout");
-	return -EBUSY;
-}
-
-#define CTLQ_NUM 2
-static int
-idpf_init_mbx(struct idpf_hw *hw)
-{
-	struct idpf_ctlq_create_info ctlq_info[CTLQ_NUM] = {
-		{
-			.type = IDPF_CTLQ_TYPE_MAILBOX_TX,
-			.id = IDPF_CTLQ_ID,
-			.len = IDPF_CTLQ_LEN,
-			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
-			.reg = {
-				.head = PF_FW_ATQH,
-				.tail = PF_FW_ATQT,
-				.len = PF_FW_ATQLEN,
-				.bah = PF_FW_ATQBAH,
-				.bal = PF_FW_ATQBAL,
-				.len_mask = PF_FW_ATQLEN_ATQLEN_M,
-				.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
-				.head_mask = PF_FW_ATQH_ATQH_M,
-			}
-		},
-		{
-			.type = IDPF_CTLQ_TYPE_MAILBOX_RX,
-			.id = IDPF_CTLQ_ID,
-			.len = IDPF_CTLQ_LEN,
-			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
-			.reg = {
-				.head = PF_FW_ARQH,
-				.tail = PF_FW_ARQT,
-				.len = PF_FW_ARQLEN,
-				.bah = PF_FW_ARQBAH,
-				.bal = PF_FW_ARQBAL,
-				.len_mask = PF_FW_ARQLEN_ARQLEN_M,
-				.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
-				.head_mask = PF_FW_ARQH_ARQH_M,
-			}
-		}
-	};
-	struct idpf_ctlq_info *ctlq;
-	int ret;
-
-	ret = idpf_ctlq_init(hw, CTLQ_NUM, ctlq_info);
-	if (ret != 0)
-		return ret;
-
-	LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
-				 struct idpf_ctlq_info, cq_list) {
-		if (ctlq->q_id == IDPF_CTLQ_ID &&
-		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
-			hw->asq = ctlq;
-		if (ctlq->q_id == IDPF_CTLQ_ID &&
-		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
-			hw->arq = ctlq;
-	}
-
-	if (hw->asq == NULL || hw->arq == NULL) {
-		idpf_ctlq_deinit(hw);
-		ret = -ENOENT;
-	}
-
-	return ret;
-}
-
-static int
-idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
-{
-	struct idpf_hw *hw = &adapter->base.hw;
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
 	int ret = 0;
 
 	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
 	hw->hw_addr_len = pci_dev->mem_resource[0].len;
-	hw->back = &adapter->base;
+	hw->back = base;
 	hw->vendor_id = pci_dev->id.vendor_id;
 	hw->device_id = pci_dev->id.device_id;
 	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
 
 	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
 
-	idpf_reset_pf(hw);
-	ret = idpf_check_pf_reset_done(hw);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "IDPF is still resetting");
-		goto err;
-	}
-
-	ret = idpf_init_mbx(hw);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to init mailbox");
-		goto err;
-	}
-
-	adapter->base.mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
-					     IDPF_DFLT_MBX_BUF_SIZE, 0);
-	if (adapter->base.mbx_resp == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
-		ret = -ENOMEM;
-		goto err_mbx;
-	}
-
-	ret = idpf_vc_check_api_version(&adapter->base);
+	ret = idpf_adapter_init(base);
 	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to check api version");
-		goto err_api;
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
 	}
 
 	ret = idpf_get_pkt_type(adapter);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to set ptype table");
-		goto err_api;
-	}
-
-	ret = idpf_vc_get_caps(&adapter->base);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to get capabilities");
-		goto err_api;
+		goto err_get_ptype;
 	}
 
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
@@ -939,7 +823,7 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapt
 	if (adapter->vports == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
 		ret = -ENOMEM;
-		goto err_api;
+		goto err_get_ptype;
 	}
 
 	adapter->cur_vports = 0;
@@ -949,12 +833,9 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapt
 
 	return ret;
 
-err_api:
-	rte_free(adapter->base.mbx_resp);
-	adapter->base.mbx_resp = NULL;
-err_mbx:
-	idpf_ctlq_deinit(hw);
-err:
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
 	return ret;
 }
 
@@ -1093,14 +974,9 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 }
 
 static void
-idpf_adapter_rel(struct idpf_adapter_ext *adapter)
+idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
-	struct idpf_hw *hw = &adapter->base.hw;
-
-	idpf_ctlq_deinit(hw);
-
-	rte_free(adapter->base.mbx_resp);
-	adapter->base.mbx_resp = NULL;
+	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
 	adapter->vports = NULL;
@@ -1133,7 +1009,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 			return -ENOMEM;
 		}
 
-		retval = idpf_adapter_init(pci_dev, adapter);
+		retval = idpf_adapter_ext_init(pci_dev, adapter);
 		if (retval != 0) {
 			PMD_INIT_LOG(ERR, "Failed to init adapter.");
 			return retval;
@@ -1196,7 +1072,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		rte_spinlock_lock(&idpf_adapter_lock);
 		TAILQ_REMOVE(&idpf_adapter_list, adapter, next);
 		rte_spinlock_unlock(&idpf_adapter_lock);
-		idpf_adapter_rel(adapter);
+		idpf_adapter_ext_deinit(adapter);
 		rte_free(adapter);
 	}
 	return retval;
@@ -1216,7 +1092,7 @@ idpf_pci_remove(struct rte_pci_device *pci_dev)
 	rte_spinlock_lock(&idpf_adapter_lock);
 	TAILQ_REMOVE(&idpf_adapter_list, adapter, next);
 	rte_spinlock_unlock(&idpf_adapter_lock);
-	idpf_adapter_rel(adapter);
+	idpf_adapter_ext_deinit(adapter);
 	rte_free(adapter);
 
 	return 0;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index efc540fa32..07ffe8e408 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -31,8 +31,6 @@
 #define IDPF_RXQ_PER_GRP	1
 #define IDPF_RX_BUFQ_PER_GRP	2
 
-#define IDPF_CTLQ_ID		-1
-
 #define IDPF_DFLT_Q_VEC_NUM	1
 #define IDPF_DFLT_INTERVAL	16
 
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 05/15] common/idpf: add vport init/deinit
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
                       ` (3 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 04/15] common/idpf: introduce adapter init and deinit beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 06/15] common/idpf: add config RSS beilei.xing
                       ` (10 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Introduce the idpf_vport_init and idpf_vport_deinit functions
in the common module.
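
A minimal usage sketch, mirroring the PMD changes in the diff below: the
separate create/parse/alloc steps collapse into one init call, with a
matching deinit on dev close:

	/* creates the vport via virtchnl, parses the queue chunk info
	 * and allocates the RSS key/LUT buffers
	 */
	ret = idpf_vport_init(vport, &vport_req_info, dev->data);
	if (ret != 0)
		goto err;

	/* on dev close: frees the RSS key/LUT and destroys the vport */
	idpf_vport_deinit(vport);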

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 115 +++++++++++++++++
 drivers/common/idpf/idpf_common_device.h   |  13 +-
 drivers/common/idpf/idpf_common_virtchnl.c |  18 +--
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 138 ++-------------------
 5 files changed, 148 insertions(+), 138 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index b2b42443e4..5628fb5c57 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -158,4 +158,119 @@ idpf_adapter_deinit(struct idpf_adapter *adapter)
 	return 0;
 }
 
+int
+idpf_vport_init(struct idpf_vport *vport,
+		struct virtchnl2_create_vport *create_vport_info,
+		void *dev_data)
+{
+	struct virtchnl2_create_vport *vport_info;
+	int i, type, ret;
+
+	ret = idpf_vc_create_vport(vport, create_vport_info);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to create vport.");
+		goto err_create_vport;
+	}
+
+	vport_info = &(vport->vport_info.info);
+	vport->vport_id = vport_info->vport_id;
+	vport->txq_model = vport_info->txq_model;
+	vport->rxq_model = vport_info->rxq_model;
+	vport->num_tx_q = vport_info->num_tx_q;
+	vport->num_tx_complq = vport_info->num_tx_complq;
+	vport->num_rx_q = vport_info->num_rx_q;
+	vport->num_rx_bufq = vport_info->num_rx_bufq;
+	vport->max_mtu = vport_info->max_mtu;
+	rte_memcpy(vport->default_mac_addr,
+		   vport_info->default_mac_addr, ETH_ALEN);
+	vport->rss_algorithm = vport_info->rss_algorithm;
+	vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+				      vport_info->rss_key_size);
+	vport->rss_lut_size = vport_info->rss_lut_size;
+
+	for (i = 0; i < vport_info->chunks.num_chunks; i++) {
+		type = vport_info->chunks.chunks[i].type;
+		switch (type) {
+		case VIRTCHNL2_QUEUE_TYPE_TX:
+			vport->chunks_info.tx_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.tx_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.tx_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		case VIRTCHNL2_QUEUE_TYPE_RX:
+			vport->chunks_info.rx_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.rx_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.rx_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+			vport->chunks_info.tx_compl_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.tx_compl_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.tx_compl_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+			vport->chunks_info.rx_buf_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.rx_buf_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.rx_buf_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		default:
+			DRV_LOG(ERR, "Unsupported queue type");
+			break;
+		}
+	}
+
+	vport->dev_data = dev_data;
+
+	vport->rss_key = rte_zmalloc("rss_key",
+				     vport->rss_key_size, 0);
+	if (vport->rss_key == NULL) {
+		DRV_LOG(ERR, "Failed to allocate RSS key");
+		ret = -ENOMEM;
+		goto err_rss_key;
+	}
+
+	vport->rss_lut = rte_zmalloc("rss_lut",
+				     sizeof(uint32_t) * vport->rss_lut_size, 0);
+	if (vport->rss_lut == NULL) {
+		DRV_LOG(ERR, "Failed to allocate RSS lut");
+		ret = -ENOMEM;
+		goto err_rss_lut;
+	}
+
+	return 0;
+
+err_rss_lut:
+	vport->dev_data = NULL;
+	rte_free(vport->rss_key);
+	vport->rss_key = NULL;
+err_rss_key:
+	idpf_vc_destroy_vport(vport);
+err_create_vport:
+	return ret;
+}
+int
+idpf_vport_deinit(struct idpf_vport *vport)
+{
+	rte_free(vport->rss_lut);
+	vport->rss_lut = NULL;
+
+	rte_free(vport->rss_key);
+	vport->rss_key = NULL;
+
+	vport->dev_data = NULL;
+
+	idpf_vc_destroy_vport(vport);
+
+	return 0;
+}
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index e4344ea392..14d04268e5 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -9,6 +9,8 @@
 #include <base/virtchnl2.h>
 #include <idpf_common_logs.h>
 
+#define IDPF_RSS_KEY_LEN	52
+
 #define IDPF_CTLQ_ID		-1
 #define IDPF_CTLQ_LEN		64
 #define IDPF_DFLT_MBX_BUF_SIZE	4096
@@ -43,7 +45,10 @@ struct idpf_chunks_info {
 
 struct idpf_vport {
 	struct idpf_adapter *adapter; /* Backreference to associated adapter */
-	struct virtchnl2_create_vport *vport_info; /* virtchnl response info handling */
+	union {
+		struct virtchnl2_create_vport info; /* virtchnl response info handling */
+		uint8_t data[IDPF_DFLT_MBX_BUF_SIZE];
+	} vport_info;
 	uint16_t sw_idx; /* SW index in adapter->vports[]*/
 	uint16_t vport_id;
 	uint32_t txq_model;
@@ -142,5 +147,11 @@ __rte_internal
 int idpf_adapter_init(struct idpf_adapter *adapter);
 __rte_internal
 int idpf_adapter_deinit(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vport_init(struct idpf_vport *vport,
+		    struct virtchnl2_create_vport *vport_req_info,
+		    void *dev_data);
+__rte_internal
+int idpf_vport_deinit(struct idpf_vport *vport);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index f2ee586fa0..cdbf8ca895 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -355,7 +355,7 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 
 int
 idpf_vc_create_vport(struct idpf_vport *vport,
-		     struct virtchnl2_create_vport *vport_req_info)
+		     struct virtchnl2_create_vport *create_vport_info)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_create_vport vport_msg;
@@ -363,13 +363,13 @@ idpf_vc_create_vport(struct idpf_vport *vport,
 	int err = -1;
 
 	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
-	vport_msg.vport_type = vport_req_info->vport_type;
-	vport_msg.txq_model = vport_req_info->txq_model;
-	vport_msg.rxq_model = vport_req_info->rxq_model;
-	vport_msg.num_tx_q = vport_req_info->num_tx_q;
-	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
-	vport_msg.num_rx_q = vport_req_info->num_rx_q;
-	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
+	vport_msg.vport_type = create_vport_info->vport_type;
+	vport_msg.txq_model = create_vport_info->txq_model;
+	vport_msg.rxq_model = create_vport_info->rxq_model;
+	vport_msg.num_tx_q = create_vport_info->num_tx_q;
+	vport_msg.num_tx_complq = create_vport_info->num_tx_complq;
+	vport_msg.num_rx_q = create_vport_info->num_rx_q;
+	vport_msg.num_rx_bufq = create_vport_info->num_rx_bufq;
 
 	memset(&args, 0, sizeof(args));
 	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
@@ -385,7 +385,7 @@ idpf_vc_create_vport(struct idpf_vport *vport,
 		return err;
 	}
 
-	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+	rte_memcpy(&(vport->vport_info.info), args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
 	return 0;
 }
 
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 8056996e3c..c1ae5affa4 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -19,6 +19,8 @@ INTERNAL {
 	idpf_vc_set_rss_key;
 	idpf_vc_set_rss_lut;
 	idpf_vc_switch_queue;
+	idpf_vport_deinit;
+	idpf_vport_init;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index c17c7bb472..7a8fb6fd4a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -178,73 +178,6 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev,
 	return 0;
 }
 
-#define IDPF_RSS_KEY_LEN 52
-
-static int
-idpf_init_vport(struct idpf_vport *vport)
-{
-	struct virtchnl2_create_vport *vport_info = vport->vport_info;
-	int i, type;
-
-	vport->vport_id = vport_info->vport_id;
-	vport->txq_model = vport_info->txq_model;
-	vport->rxq_model = vport_info->rxq_model;
-	vport->num_tx_q = vport_info->num_tx_q;
-	vport->num_tx_complq = vport_info->num_tx_complq;
-	vport->num_rx_q = vport_info->num_rx_q;
-	vport->num_rx_bufq = vport_info->num_rx_bufq;
-	vport->max_mtu = vport_info->max_mtu;
-	rte_memcpy(vport->default_mac_addr,
-		   vport_info->default_mac_addr, ETH_ALEN);
-	vport->rss_algorithm = vport_info->rss_algorithm;
-	vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
-				     vport_info->rss_key_size);
-	vport->rss_lut_size = vport_info->rss_lut_size;
-
-	for (i = 0; i < vport_info->chunks.num_chunks; i++) {
-		type = vport_info->chunks.chunks[i].type;
-		switch (type) {
-		case VIRTCHNL2_QUEUE_TYPE_TX:
-			vport->chunks_info.tx_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.tx_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.tx_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		case VIRTCHNL2_QUEUE_TYPE_RX:
-			vport->chunks_info.rx_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.rx_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.rx_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
-			vport->chunks_info.tx_compl_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.tx_compl_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.tx_compl_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
-			vport->chunks_info.rx_buf_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.rx_buf_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.rx_buf_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		default:
-			PMD_INIT_LOG(ERR, "Unsupported queue type");
-			break;
-		}
-	}
-
-	return 0;
-}
-
 static int
 idpf_config_rss(struct idpf_vport *vport)
 {
@@ -276,63 +209,34 @@ idpf_init_rss(struct idpf_vport *vport)
 {
 	struct rte_eth_rss_conf *rss_conf;
 	struct rte_eth_dev_data *dev_data;
-	uint16_t i, nb_q, lut_size;
+	uint16_t i, nb_q;
 	int ret = 0;
 
 	dev_data = vport->dev_data;
 	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
 	nb_q = dev_data->nb_rx_queues;
 
-	vport->rss_key = rte_zmalloc("rss_key",
-				     vport->rss_key_size, 0);
-	if (vport->rss_key == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
-		ret = -ENOMEM;
-		goto err_alloc_key;
-	}
-
-	lut_size = vport->rss_lut_size;
-	vport->rss_lut = rte_zmalloc("rss_lut",
-				     sizeof(uint32_t) * lut_size, 0);
-	if (vport->rss_lut == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate RSS lut");
-		ret = -ENOMEM;
-		goto err_alloc_lut;
-	}
-
 	if (rss_conf->rss_key == NULL) {
 		for (i = 0; i < vport->rss_key_size; i++)
 			vport->rss_key[i] = (uint8_t)rte_rand();
 	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
 		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
 			     vport->rss_key_size);
-		ret = -EINVAL;
-		goto err_cfg_key;
+		return -EINVAL;
 	} else {
 		rte_memcpy(vport->rss_key, rss_conf->rss_key,
 			   vport->rss_key_size);
 	}
 
-	for (i = 0; i < lut_size; i++)
+	for (i = 0; i < vport->rss_lut_size; i++)
 		vport->rss_lut[i] = i % nb_q;
 
 	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
 
 	ret = idpf_config_rss(vport);
-	if (ret != 0) {
+	if (ret != 0)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS");
-		goto err_cfg_key;
-	}
-
-	return ret;
 
-err_cfg_key:
-	rte_free(vport->rss_lut);
-	vport->rss_lut = NULL;
-err_alloc_lut:
-	rte_free(vport->rss_key);
-	vport->rss_key = NULL;
-err_alloc_key:
 	return ret;
 }
 
@@ -602,13 +506,7 @@ idpf_dev_close(struct rte_eth_dev *dev)
 
 	idpf_dev_stop(dev);
 
-	idpf_vc_destroy_vport(vport);
-
-	rte_free(vport->rss_lut);
-	vport->rss_lut = NULL;
-
-	rte_free(vport->rss_key);
-	vport->rss_key = NULL;
+	idpf_vport_deinit(vport);
 
 	rte_free(vport->recv_vectors);
 	vport->recv_vectors = NULL;
@@ -892,13 +790,6 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
 
-	vport->vport_info = rte_zmalloc(NULL, IDPF_DFLT_MBX_BUF_SIZE, 0);
-	if (vport->vport_info == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate vport_info");
-		ret = -ENOMEM;
-		goto err;
-	}
-
 	memset(&vport_req_info, 0, sizeof(vport_req_info));
 	ret = idpf_init_vport_req_info(dev, &vport_req_info);
 	if (ret != 0) {
@@ -906,19 +797,12 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 		goto err;
 	}
 
-	ret = idpf_vc_create_vport(vport, &vport_req_info);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to create vport.");
-		goto err_create_vport;
-	}
-
-	ret = idpf_init_vport(vport);
+	ret = idpf_vport_init(vport, &vport_req_info, dev->data);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vports.");
-		goto err_init_vport;
+		goto err;
 	}
 
-	vport->dev_data = dev->data;
 	adapter->vports[param->idx] = vport;
 	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
 	adapter->cur_vport_nb++;
@@ -927,7 +811,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	if (dev->data->mac_addrs == NULL) {
 		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
 		ret = -ENOMEM;
-		goto err_init_vport;
+		goto err_mac_addrs;
 	}
 
 	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
@@ -935,11 +819,9 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 
 	return 0;
 
-err_init_vport:
+err_mac_addrs:
 	adapter->vports[param->idx] = NULL;  /* reset */
-	idpf_vc_destroy_vport(vport);
-err_create_vport:
-	rte_free(vport->vport_info);
+	idpf_vport_deinit(vport);
 err:
 	return ret;
 }
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 06/15] common/idpf: add config RSS
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
                       ` (4 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 05/15] common/idpf: add vport init/deinit beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 07/15] common/idpf: add irq map/unmap beilei.xing
                       ` (9 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the RSS configuration helper to the common module.
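
The helper chains the three virtchnl RSS ops (key, LUT, hash) and expects
the caller to have filled vport->rss_key, vport->rss_lut and vport->rss_hf
first, e.g. (condensed from idpf_init_rss in the diff below):

	/* vport->rss_key comes from the dev config or rte_rand() */
	for (i = 0; i < vport->rss_lut_size; i++)
		vport->rss_lut[i] = i % nb_q;	/* default round-robin LUT */
	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;

	ret = idpf_config_rss(vport);		/* key -> LUT -> hash */
	if (ret != 0)
		PMD_INIT_LOG(ERR, "Failed to configure RSS");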

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c | 25 +++++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h |  2 ++
 drivers/common/idpf/version.map          |  1 +
 drivers/net/idpf/idpf_ethdev.c           | 26 ------------------------
 4 files changed, 28 insertions(+), 26 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 5628fb5c57..eee96b5083 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -273,4 +273,29 @@ idpf_vport_deinit(struct idpf_vport *vport)
 
 	return 0;
 }
+int
+idpf_config_rss(struct idpf_vport *vport)
+{
+	int ret;
+
+	ret = idpf_vc_set_rss_key(vport);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+	ret = idpf_vc_set_rss_lut(vport);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to configure RSS lut");
+		return ret;
+	}
+
+	ret = idpf_vc_set_rss_hash(vport);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return ret;
+}
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 14d04268e5..1d3bb06fef 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -153,5 +153,7 @@ int idpf_vport_init(struct idpf_vport *vport,
 		    void *dev_data);
 __rte_internal
 int idpf_vport_deinit(struct idpf_vport *vport);
+__rte_internal
+int idpf_config_rss(struct idpf_vport *vport);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index c1ae5affa4..fd56a9988f 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,6 +3,7 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+	idpf_config_rss;
 	idpf_execute_vc_cmd;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 7a8fb6fd4a..f728318dad 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -178,32 +178,6 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-idpf_config_rss(struct idpf_vport *vport)
-{
-	int ret;
-
-	ret = idpf_vc_set_rss_key(vport);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
-		return ret;
-	}
-
-	ret = idpf_vc_set_rss_lut(vport);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
-		return ret;
-	}
-
-	ret = idpf_vc_set_rss_hash(vport);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
-		return ret;
-	}
-
-	return ret;
-}
-
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 07/15] common/idpf: add irq map/unmap
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
                       ` (5 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 06/15] common/idpf: add config RSS beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 08/15] common/idpf: support get packet type beilei.xing
                       ` (8 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Introduce the idpf_config_irq_map/idpf_config_irq_unmap functions
in the common module, and refine the Rx queue IRQ configuration
function accordingly. Also refine the device start function with IRQ
error handling; vport->stopped is now cleared at the end of the
function, once start has succeeded.
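
A condensed sketch of the resulting start/stop flow, assuming vectors
have already been allocated via idpf_vc_alloc_vectors() (error labels
mirror the diff below):

	ret = idpf_config_irq_map(vport, dev->data->nb_rx_queues);
	if (ret != 0)
		goto err_irq;

	ret = idpf_start_queues(dev);
	if (ret != 0)
		goto err_startq;

	/* ... enable the vport, then mark it running last ... */
	vport->stopped = 0;
	return 0;

err_startq:
	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
err_irq:
	idpf_vc_dealloc_vectors(vport);
	return ret;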

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 102 +++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h   |   6 ++
 drivers/common/idpf/idpf_common_virtchnl.c |   8 --
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +-
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 102 +++------------------
 drivers/net/idpf/idpf_ethdev.h             |   1 -
 7 files changed, 125 insertions(+), 102 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index eee96b5083..04bf4d51dd 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -247,8 +247,21 @@ idpf_vport_init(struct idpf_vport *vport,
 		goto err_rss_lut;
 	}
 
+	/* recv_vectors is used for VIRTCHNL2_OP_ALLOC_VECTORS response,
+	 * reserve maximum size for it now, may need optimization in future.
+	 */
+	vport->recv_vectors = rte_zmalloc("recv_vectors", IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (vport->recv_vectors == NULL) {
+		DRV_LOG(ERR, "Failed to allocate recv_vectors");
+		ret = -ENOMEM;
+		goto err_recv_vec;
+	}
+
 	return 0;
 
+err_recv_vec:
+	rte_free(vport->rss_lut);
+	vport->rss_lut = NULL;
 err_rss_lut:
 	vport->dev_data = NULL;
 	rte_free(vport->rss_key);
@@ -261,6 +274,8 @@ idpf_vport_init(struct idpf_vport *vport,
 int
 idpf_vport_deinit(struct idpf_vport *vport)
 {
+	rte_free(vport->recv_vectors);
+	vport->recv_vectors = NULL;
 	rte_free(vport->rss_lut);
 	vport->rss_lut = NULL;
 
@@ -298,4 +313,91 @@ idpf_config_rss(struct idpf_vport *vport)
 
 	return ret;
 }
+
+int
+idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_queue_vector *qv_map;
+	struct idpf_hw *hw = &adapter->hw;
+	uint32_t dynctl_val, itrn_val;
+	uint32_t dynctl_reg_start;
+	uint32_t itrn_reg_start;
+	uint16_t i;
+	int ret;
+
+	qv_map = rte_zmalloc("qv_map",
+			     nb_rx_queues *
+			     sizeof(struct virtchnl2_queue_vector), 0);
+	if (qv_map == NULL) {
+		DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
+			nb_rx_queues);
+		ret = -ENOMEM;
+		goto qv_map_alloc_err;
+	}
+
+	/* Rx interrupt disabled, Map interrupt only for writeback */
+
+	/* The capability flags adapter->caps.other_caps should be
+	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
+	 * condition should be updated when the FW can return the
+	 * correct flag bits.
+	 */
+	dynctl_reg_start =
+		vport->recv_vectors->vchunks.vchunks->dynctl_reg_start;
+	itrn_reg_start =
+		vport->recv_vectors->vchunks.vchunks->itrn_reg_start;
+	dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start);
+	DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x", dynctl_val);
+	itrn_val = IDPF_READ_REG(hw, itrn_reg_start);
+	DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val);
+	/* Force write-backs by setting WB_ON_ITR bit in DYN_CTL
+	 * register. WB_ON_ITR and INTENA are mutually exclusive
+	 * bits. Setting WB_ON_ITR bits means TX and RX Descs
+	 * are written back based on ITR expiration irrespective
+	 * of INTENA setting.
+	 */
+	/* TBD: need to tune INTERVAL value for better performance. */
+	itrn_val = (itrn_val == 0) ? IDPF_DFLT_INTERVAL : itrn_val;
+	dynctl_val = VIRTCHNL2_ITR_IDX_0  <<
+		     PF_GLINT_DYN_CTL_ITR_INDX_S |
+		     PF_GLINT_DYN_CTL_WB_ON_ITR_M |
+		     itrn_val << PF_GLINT_DYN_CTL_INTERVAL_S;
+	IDPF_WRITE_REG(hw, dynctl_reg_start, dynctl_val);
+
+	for (i = 0; i < nb_rx_queues; i++) {
+		/* map all queues to the same vector */
+		qv_map[i].queue_id = vport->chunks_info.rx_start_qid + i;
+		qv_map[i].vector_id =
+			vport->recv_vectors->vchunks.vchunks->start_vector_id;
+	}
+	vport->qv_map = qv_map;
+
+	ret = idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, true);
+	if (ret != 0) {
+		DRV_LOG(ERR, "config interrupt mapping failed");
+		goto config_irq_map_err;
+	}
+
+	return 0;
+
+config_irq_map_err:
+	rte_free(vport->qv_map);
+	vport->qv_map = NULL;
+
+qv_map_alloc_err:
+	return ret;
+}
+
+int
+idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues)
+{
+	idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, false);
+
+	rte_free(vport->qv_map);
+	vport->qv_map = NULL;
+
+	return 0;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 1d3bb06fef..d45c2b8777 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -17,6 +17,8 @@
 
 #define IDPF_MAX_PKT_TYPE	1024
 
+#define IDPF_DFLT_INTERVAL	16
+
 struct idpf_adapter {
 	struct idpf_hw hw;
 	struct virtchnl2_version_info virtchnl_version;
@@ -155,5 +157,9 @@ __rte_internal
 int idpf_vport_deinit(struct idpf_vport *vport);
 __rte_internal
 int idpf_config_rss(struct idpf_vport *vport);
+__rte_internal
+int idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues);
+__rte_internal
+int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index cdbf8ca895..0ee76b98a7 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -573,14 +573,6 @@ idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
 
-	if (vport->recv_vectors == NULL) {
-		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
-		if (vport->recv_vectors == NULL) {
-			rte_free(alloc_vec);
-			return -ENOMEM;
-		}
-	}
-
 	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
 	rte_free(alloc_vec);
 	return err;
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index e05619f4b4..155527f0b6 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -23,6 +23,9 @@ int idpf_vc_set_rss_lut(struct idpf_vport *vport);
 __rte_internal
 int idpf_vc_set_rss_hash(struct idpf_vport *vport);
 __rte_internal
+int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+				 uint16_t nb_rxq, bool map);
+__rte_internal
 int idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
 			 bool rx, bool on);
 __rte_internal
@@ -30,9 +33,6 @@ int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
 __rte_internal
 int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
 __rte_internal
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
-				 uint16_t nb_rxq, bool map);
-__rte_internal
 int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
 __rte_internal
 int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index fd56a9988f..5dab5787de 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,6 +3,8 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+	idpf_config_irq_map;
+	idpf_config_irq_unmap;
 	idpf_config_rss;
 	idpf_execute_vc_cmd;
 	idpf_vc_alloc_vectors;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index f728318dad..d0799087a5 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -281,84 +281,9 @@ static int
 idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_queue_vector *qv_map;
-	struct idpf_hw *hw = &adapter->hw;
-	uint32_t dynctl_reg_start;
-	uint32_t itrn_reg_start;
-	uint32_t dynctl_val, itrn_val;
-	uint16_t i;
-
-	qv_map = rte_zmalloc("qv_map",
-			dev->data->nb_rx_queues *
-			sizeof(struct virtchnl2_queue_vector), 0);
-	if (qv_map == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
-			    dev->data->nb_rx_queues);
-		goto qv_map_alloc_err;
-	}
-
-	/* Rx interrupt disabled, Map interrupt only for writeback */
-
-	/* The capability flags adapter->caps.other_caps should be
-	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
-	 * condition should be updated when the FW can return the
-	 * correct flag bits.
-	 */
-	dynctl_reg_start =
-		vport->recv_vectors->vchunks.vchunks->dynctl_reg_start;
-	itrn_reg_start =
-		vport->recv_vectors->vchunks.vchunks->itrn_reg_start;
-	dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start);
-	PMD_DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x",
-		    dynctl_val);
-	itrn_val = IDPF_READ_REG(hw, itrn_reg_start);
-	PMD_DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val);
-	/* Force write-backs by setting WB_ON_ITR bit in DYN_CTL
-	 * register. WB_ON_ITR and INTENA are mutually exclusive
-	 * bits. Setting WB_ON_ITR bits means TX and RX Descs
-	 * are written back based on ITR expiration irrespective
-	 * of INTENA setting.
-	 */
-	/* TBD: need to tune INTERVAL value for better performance. */
-	if (itrn_val != 0)
-		IDPF_WRITE_REG(hw,
-			       dynctl_reg_start,
-			       VIRTCHNL2_ITR_IDX_0  <<
-			       PF_GLINT_DYN_CTL_ITR_INDX_S |
-			       PF_GLINT_DYN_CTL_WB_ON_ITR_M |
-			       itrn_val <<
-			       PF_GLINT_DYN_CTL_INTERVAL_S);
-	else
-		IDPF_WRITE_REG(hw,
-			       dynctl_reg_start,
-			       VIRTCHNL2_ITR_IDX_0  <<
-			       PF_GLINT_DYN_CTL_ITR_INDX_S |
-			       PF_GLINT_DYN_CTL_WB_ON_ITR_M |
-			       IDPF_DFLT_INTERVAL <<
-			       PF_GLINT_DYN_CTL_INTERVAL_S);
-
-	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		/* map all queues to the same vector */
-		qv_map[i].queue_id = vport->chunks_info.rx_start_qid + i;
-		qv_map[i].vector_id =
-			vport->recv_vectors->vchunks.vchunks->start_vector_id;
-	}
-	vport->qv_map = qv_map;
-
-	if (idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, true) != 0) {
-		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
-		goto config_irq_map_err;
-	}
-
-	return 0;
-
-config_irq_map_err:
-	rte_free(vport->qv_map);
-	vport->qv_map = NULL;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
 
-qv_map_alloc_err:
-	return -1;
+	return idpf_config_irq_map(vport, nb_rx_queues);
 }
 
 static int
@@ -404,8 +329,6 @@ idpf_dev_start(struct rte_eth_dev *dev)
 	uint16_t req_vecs_num;
 	int ret;
 
-	vport->stopped = 0;
-
 	req_vecs_num = IDPF_DFLT_Q_VEC_NUM;
 	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
 		PMD_DRV_LOG(ERR, "The accumulated request vectors' number should be less than %d",
@@ -424,13 +347,13 @@ idpf_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_config_rx_queues_irqs(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to configure irqs");
-		goto err_vec;
+		goto err_irq;
 	}
 
 	ret = idpf_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
-		goto err_vec;
+		goto err_startq;
 	}
 
 	idpf_set_rx_function(dev);
@@ -442,10 +365,16 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	vport->stopped = 0;
+
 	return 0;
 
 err_vport:
 	idpf_stop_queues(dev);
+err_startq:
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+err_irq:
+	idpf_vc_dealloc_vectors(vport);
 err_vec:
 	return ret;
 }
@@ -462,10 +391,9 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_stop_queues(dev);
 
-	idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, false);
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
 
-	if (vport->recv_vectors != NULL)
-		idpf_vc_dealloc_vectors(vport);
+	idpf_vc_dealloc_vectors(vport);
 
 	vport->stopped = 1;
 
@@ -482,12 +410,6 @@ idpf_dev_close(struct rte_eth_dev *dev)
 
 	idpf_vport_deinit(vport);
 
-	rte_free(vport->recv_vectors);
-	vport->recv_vectors = NULL;
-
-	rte_free(vport->qv_map);
-	vport->qv_map = NULL;
-
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
 	adapter->cur_vport_nb--;
 	dev->data->dev_private = NULL;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 07ffe8e408..55be98a8ed 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -32,7 +32,6 @@
 #define IDPF_RX_BUFQ_PER_GRP	2
 
 #define IDPF_DFLT_Q_VEC_NUM	1
-#define IDPF_DFLT_INTERVAL	16
 
 #define IDPF_MIN_BUF_SIZE	1024
 #define IDPF_MAX_FRAME_SIZE	9728
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 08/15] common/idpf: support get packet type
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
                       ` (6 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 07/15] common/idpf: add irq map/unmap beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 09/15] common/idpf: add vport info initialization beilei.xing
                       ` (7 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the ptype_tbl field to the idpf_adapter structure, and move the
get_pkt_type function to the common module.
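
After this patch the Rx paths look the table up through the common
adapter; a minimal sketch of the consuming side, where ptype_id is an
illustrative placeholder for the 10-bit packet type parsed from the
Rx descriptor:

	const uint32_t *ptype_tbl = rxq->adapter->ptype_tbl;

	/* ptype_id: illustrative, parsed from the Rx descriptor */
	mb->packet_type = ptype_tbl[ptype_id];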

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c | 216 +++++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h |   7 +
 drivers/common/idpf/meson.build          |   2 +
 drivers/net/idpf/idpf_ethdev.c           |   6 -
 drivers/net/idpf/idpf_ethdev.h           |   4 -
 drivers/net/idpf/idpf_rxtx.c             |   4 +-
 drivers/net/idpf/idpf_rxtx.h             |   4 -
 drivers/net/idpf/idpf_rxtx_vec_avx512.c  |   3 +-
 drivers/net/idpf/idpf_vchnl.c            | 213 ----------------------
 9 files changed, 228 insertions(+), 231 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 04bf4d51dd..3f8e25e6a2 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -96,6 +96,216 @@ idpf_init_mbx(struct idpf_hw *hw)
 	return ret;
 }
 
+static int
+idpf_get_pkt_type(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_get_ptype_info *ptype_info;
+	uint16_t ptype_offset, i, j;
+	uint16_t ptype_recvd = 0;
+	int ret;
+
+	ret = idpf_vc_query_ptype_info(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to query packet type information");
+		return ret;
+	}
+
+	ptype_info = rte_zmalloc("ptype_info", IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (ptype_info == NULL)
+		return -ENOMEM;
+
+	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
+		ret = idpf_vc_read_one_msg(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
+					   IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
+		if (ret != 0) {
+			DRV_LOG(ERR, "Failed to get packet type information");
+			goto free_ptype_info;
+		}
+
+		ptype_recvd += ptype_info->num_ptypes;
+		ptype_offset = sizeof(struct virtchnl2_get_ptype_info) -
+						sizeof(struct virtchnl2_ptype);
+
+		for (i = 0; i < rte_cpu_to_le_16(ptype_info->num_ptypes); i++) {
+			bool is_inner = false, is_ip = false;
+			struct virtchnl2_ptype *ptype;
+			uint32_t proto_hdr = 0;
+
+			ptype = (struct virtchnl2_ptype *)
+					((uint8_t *)ptype_info + ptype_offset);
+			ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
+			if (ptype_offset > IDPF_DFLT_MBX_BUF_SIZE) {
+				ret = -EINVAL;
+				goto free_ptype_info;
+			}
+
+			if (rte_cpu_to_le_16(ptype->ptype_id_10) == 0xFFFF)
+				goto free_ptype_info;
+
+			for (j = 0; j < ptype->proto_id_count; j++) {
+				switch (rte_cpu_to_le_16(ptype->proto_id[j])) {
+				case VIRTCHNL2_PROTO_HDR_GRE:
+				case VIRTCHNL2_PROTO_HDR_VXLAN:
+					proto_hdr &= ~RTE_PTYPE_L4_MASK;
+					proto_hdr |= RTE_PTYPE_TUNNEL_GRENAT;
+					is_inner = true;
+					break;
+				case VIRTCHNL2_PROTO_HDR_MAC:
+					if (is_inner) {
+						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
+						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER;
+					} else {
+						proto_hdr &= ~RTE_PTYPE_L2_MASK;
+						proto_hdr |= RTE_PTYPE_L2_ETHER;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_VLAN:
+					if (is_inner) {
+						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
+						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER_VLAN;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_PTP:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_TIMESYNC;
+					break;
+				case VIRTCHNL2_PROTO_HDR_LLDP:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_LLDP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_ARP:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_ARP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_PPPOE:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_PPPOE;
+					break;
+				case VIRTCHNL2_PROTO_HDR_IPV4:
+					if (!is_ip) {
+						proto_hdr |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+						is_ip = true;
+					} else {
+						proto_hdr |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+							     RTE_PTYPE_TUNNEL_IP;
+						is_inner = true;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_IPV6:
+					if (!is_ip) {
+						proto_hdr |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
+						is_ip = true;
+					} else {
+						proto_hdr |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+							     RTE_PTYPE_TUNNEL_IP;
+						is_inner = true;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_IPV4_FRAG:
+				case VIRTCHNL2_PROTO_HDR_IPV6_FRAG:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_FRAG;
+					else
+						proto_hdr |= RTE_PTYPE_L4_FRAG;
+					break;
+				case VIRTCHNL2_PROTO_HDR_UDP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_UDP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_UDP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_TCP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_TCP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_TCP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_SCTP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_SCTP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_SCTP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_ICMP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_ICMP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_ICMPV6:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_ICMP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_L2TPV2:
+				case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL:
+				case VIRTCHNL2_PROTO_HDR_L2TPV3:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_L2TP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_NVGRE:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_NVGRE;
+					break;
+				case VIRTCHNL2_PROTO_HDR_GTPC_TEID:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_GTPC;
+					break;
+				case VIRTCHNL2_PROTO_HDR_GTPU:
+				case VIRTCHNL2_PROTO_HDR_GTPU_UL:
+				case VIRTCHNL2_PROTO_HDR_GTPU_DL:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_GTPU;
+					break;
+				case VIRTCHNL2_PROTO_HDR_PAY:
+				case VIRTCHNL2_PROTO_HDR_IPV6_EH:
+				case VIRTCHNL2_PROTO_HDR_PRE_MAC:
+				case VIRTCHNL2_PROTO_HDR_POST_MAC:
+				case VIRTCHNL2_PROTO_HDR_ETHERTYPE:
+				case VIRTCHNL2_PROTO_HDR_SVLAN:
+				case VIRTCHNL2_PROTO_HDR_CVLAN:
+				case VIRTCHNL2_PROTO_HDR_MPLS:
+				case VIRTCHNL2_PROTO_HDR_MMPLS:
+				case VIRTCHNL2_PROTO_HDR_CTRL:
+				case VIRTCHNL2_PROTO_HDR_ECP:
+				case VIRTCHNL2_PROTO_HDR_EAPOL:
+				case VIRTCHNL2_PROTO_HDR_PPPOD:
+				case VIRTCHNL2_PROTO_HDR_IGMP:
+				case VIRTCHNL2_PROTO_HDR_AH:
+				case VIRTCHNL2_PROTO_HDR_ESP:
+				case VIRTCHNL2_PROTO_HDR_IKE:
+				case VIRTCHNL2_PROTO_HDR_NATT_KEEP:
+				case VIRTCHNL2_PROTO_HDR_GTP:
+				case VIRTCHNL2_PROTO_HDR_GTP_EH:
+				case VIRTCHNL2_PROTO_HDR_GTPCV2:
+				case VIRTCHNL2_PROTO_HDR_ECPRI:
+				case VIRTCHNL2_PROTO_HDR_VRRP:
+				case VIRTCHNL2_PROTO_HDR_OSPF:
+				case VIRTCHNL2_PROTO_HDR_TUN:
+				case VIRTCHNL2_PROTO_HDR_VXLAN_GPE:
+				case VIRTCHNL2_PROTO_HDR_GENEVE:
+				case VIRTCHNL2_PROTO_HDR_NSH:
+				case VIRTCHNL2_PROTO_HDR_QUIC:
+				case VIRTCHNL2_PROTO_HDR_PFCP:
+				case VIRTCHNL2_PROTO_HDR_PFCP_NODE:
+				case VIRTCHNL2_PROTO_HDR_PFCP_SESSION:
+				case VIRTCHNL2_PROTO_HDR_RTP:
+				case VIRTCHNL2_PROTO_HDR_NO_PROTO:
+				default:
+					continue;
+				}
+				adapter->ptype_tbl[ptype->ptype_id_10] = proto_hdr;
+			}
+		}
+	}
+
+free_ptype_info:
+	rte_free(ptype_info);
+	clear_cmd(adapter);
+	return ret;
+}
+
 int
 idpf_adapter_init(struct idpf_adapter *adapter)
 {
@@ -135,6 +345,12 @@ idpf_adapter_init(struct idpf_adapter *adapter)
 		goto err_check_api;
 	}
 
+	ret = idpf_get_pkt_type(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to set ptype table");
+		goto err_check_api;
+	}
+
 	return 0;
 
 err_check_api:
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index d45c2b8777..997f01f3aa 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -5,6 +5,7 @@
 #ifndef _IDPF_COMMON_DEVICE_H_
 #define _IDPF_COMMON_DEVICE_H_
 
+#include <rte_mbuf_ptype.h>
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
 #include <idpf_common_logs.h>
@@ -19,6 +20,10 @@
 
 #define IDPF_DFLT_INTERVAL	16
 
+#define IDPF_GET_PTYPE_SIZE(p)						\
+	(sizeof(struct virtchnl2_ptype) +				\
+	 (((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
+
 struct idpf_adapter {
 	struct idpf_hw hw;
 	struct virtchnl2_version_info virtchnl_version;
@@ -26,6 +31,8 @@ struct idpf_adapter {
 	volatile uint32_t pend_cmd; /* pending command not finished */
 	uint32_t cmd_retval; /* return value of the cmd response from cp */
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
+
+	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
 };
 
 struct idpf_chunks_info {
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index d1578641ba..c6cc7a196b 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2022 Intel Corporation
 
+deps += ['mbuf']
+
 sources = files(
     'idpf_common_device.c',
     'idpf_common_virtchnl.c',
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index d0799087a5..84046f955a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -602,12 +602,6 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
 		goto err_adapter_init;
 	}
 
-	ret = idpf_get_pkt_type(adapter);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to set ptype table");
-		goto err_get_ptype;
-	}
-
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 55be98a8ed..d30807ca41 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -89,8 +89,6 @@ struct idpf_adapter_ext {
 
 	uint16_t used_vecs_num;
 
-	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
-
 	bool rx_vec_allowed;
 	bool tx_vec_allowed;
 	bool rx_use_avx512;
@@ -107,6 +105,4 @@ TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
 #define IDPF_ADAPTER_TO_EXT(p)					\
 	container_of((p), struct idpf_adapter_ext, base)
 
-int idpf_get_pkt_type(struct idpf_adapter_ext *adapter);
-
 #endif /* _IDPF_ETHDEV_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index ad3e31208d..0b10e4248b 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1407,7 +1407,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
 	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
 	rx_desc_ring = rxq->rx_ring;
-	ptype_tbl = ad->ptype_tbl;
+	ptype_tbl = rxq->adapter->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
@@ -1812,7 +1812,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	rx_id = rxq->rx_tail;
 	rx_ring = rxq->rx_ring;
-	ptype_tbl = ad->ptype_tbl;
+	ptype_tbl = rxq->adapter->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 9417651b3f..cac6040943 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -82,10 +82,6 @@
 #define IDPF_TX_OFFLOAD_NOTSUP_MASK \
 		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
 
-#define IDPF_GET_PTYPE_SIZE(p) \
-	(sizeof(struct virtchnl2_ptype) + \
-	(((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
-
 extern uint64_t idpf_timestamp_dynflag;
 
 struct idpf_rx_queue {
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index efa7cd2187..fb2b6bb53c 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -245,8 +245,7 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
 				   struct rte_mbuf **rx_pkts,
 				   uint16_t nb_pkts)
 {
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(rxq->adapter);
-	const uint32_t *type_table = adapter->ptype_tbl;
+	const uint32_t *type_table = rxq->adapter->ptype_tbl;
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
 						    rxq->mbuf_initializer);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 6f4eb52beb..45d05ed108 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -23,219 +23,6 @@
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
 
-int __rte_cold
-idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
-{
-	struct virtchnl2_get_ptype_info *ptype_info;
-	struct idpf_adapter *base;
-	uint16_t ptype_offset, i, j;
-	uint16_t ptype_recvd = 0;
-	int ret;
-
-	base = &adapter->base;
-
-	ret = idpf_vc_query_ptype_info(base);
-	if (ret != 0) {
-		PMD_DRV_LOG(ERR, "Fail to query packet type information");
-		return ret;
-	}
-
-	ptype_info = rte_zmalloc("ptype_info", IDPF_DFLT_MBX_BUF_SIZE, 0);
-		if (ptype_info == NULL)
-			return -ENOMEM;
-
-	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_vc_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
-					   IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
-		if (ret != 0) {
-			PMD_DRV_LOG(ERR, "Fail to get packet type information");
-			goto free_ptype_info;
-		}
-
-		ptype_recvd += ptype_info->num_ptypes;
-		ptype_offset = sizeof(struct virtchnl2_get_ptype_info) -
-						sizeof(struct virtchnl2_ptype);
-
-		for (i = 0; i < rte_cpu_to_le_16(ptype_info->num_ptypes); i++) {
-			bool is_inner = false, is_ip = false;
-			struct virtchnl2_ptype *ptype;
-			uint32_t proto_hdr = 0;
-
-			ptype = (struct virtchnl2_ptype *)
-					((uint8_t *)ptype_info + ptype_offset);
-			ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
-			if (ptype_offset > IDPF_DFLT_MBX_BUF_SIZE) {
-				ret = -EINVAL;
-				goto free_ptype_info;
-			}
-
-			if (rte_cpu_to_le_16(ptype->ptype_id_10) == 0xFFFF)
-				goto free_ptype_info;
-
-			for (j = 0; j < ptype->proto_id_count; j++) {
-				switch (rte_cpu_to_le_16(ptype->proto_id[j])) {
-				case VIRTCHNL2_PROTO_HDR_GRE:
-				case VIRTCHNL2_PROTO_HDR_VXLAN:
-					proto_hdr &= ~RTE_PTYPE_L4_MASK;
-					proto_hdr |= RTE_PTYPE_TUNNEL_GRENAT;
-					is_inner = true;
-					break;
-				case VIRTCHNL2_PROTO_HDR_MAC:
-					if (is_inner) {
-						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
-						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER;
-					} else {
-						proto_hdr &= ~RTE_PTYPE_L2_MASK;
-						proto_hdr |= RTE_PTYPE_L2_ETHER;
-					}
-					break;
-				case VIRTCHNL2_PROTO_HDR_VLAN:
-					if (is_inner) {
-						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
-						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER_VLAN;
-					}
-					break;
-				case VIRTCHNL2_PROTO_HDR_PTP:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_TIMESYNC;
-					break;
-				case VIRTCHNL2_PROTO_HDR_LLDP:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_LLDP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_ARP:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_ARP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_PPPOE:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_PPPOE;
-					break;
-				case VIRTCHNL2_PROTO_HDR_IPV4:
-					if (!is_ip) {
-						proto_hdr |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-						is_ip = true;
-					} else {
-						proto_hdr |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
-							     RTE_PTYPE_TUNNEL_IP;
-						is_inner = true;
-					}
-						break;
-				case VIRTCHNL2_PROTO_HDR_IPV6:
-					if (!is_ip) {
-						proto_hdr |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
-						is_ip = true;
-					} else {
-						proto_hdr |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
-							     RTE_PTYPE_TUNNEL_IP;
-						is_inner = true;
-					}
-					break;
-				case VIRTCHNL2_PROTO_HDR_IPV4_FRAG:
-				case VIRTCHNL2_PROTO_HDR_IPV6_FRAG:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_FRAG;
-					else
-						proto_hdr |= RTE_PTYPE_L4_FRAG;
-					break;
-				case VIRTCHNL2_PROTO_HDR_UDP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_UDP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_UDP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_TCP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_TCP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_TCP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_SCTP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_SCTP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_SCTP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_ICMP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_ICMP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_ICMPV6:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_ICMP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_L2TPV2:
-				case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL:
-				case VIRTCHNL2_PROTO_HDR_L2TPV3:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_L2TP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_NVGRE:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_NVGRE;
-					break;
-				case VIRTCHNL2_PROTO_HDR_GTPC_TEID:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_GTPC;
-					break;
-				case VIRTCHNL2_PROTO_HDR_GTPU:
-				case VIRTCHNL2_PROTO_HDR_GTPU_UL:
-				case VIRTCHNL2_PROTO_HDR_GTPU_DL:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_GTPU;
-					break;
-				case VIRTCHNL2_PROTO_HDR_PAY:
-				case VIRTCHNL2_PROTO_HDR_IPV6_EH:
-				case VIRTCHNL2_PROTO_HDR_PRE_MAC:
-				case VIRTCHNL2_PROTO_HDR_POST_MAC:
-				case VIRTCHNL2_PROTO_HDR_ETHERTYPE:
-				case VIRTCHNL2_PROTO_HDR_SVLAN:
-				case VIRTCHNL2_PROTO_HDR_CVLAN:
-				case VIRTCHNL2_PROTO_HDR_MPLS:
-				case VIRTCHNL2_PROTO_HDR_MMPLS:
-				case VIRTCHNL2_PROTO_HDR_CTRL:
-				case VIRTCHNL2_PROTO_HDR_ECP:
-				case VIRTCHNL2_PROTO_HDR_EAPOL:
-				case VIRTCHNL2_PROTO_HDR_PPPOD:
-				case VIRTCHNL2_PROTO_HDR_IGMP:
-				case VIRTCHNL2_PROTO_HDR_AH:
-				case VIRTCHNL2_PROTO_HDR_ESP:
-				case VIRTCHNL2_PROTO_HDR_IKE:
-				case VIRTCHNL2_PROTO_HDR_NATT_KEEP:
-				case VIRTCHNL2_PROTO_HDR_GTP:
-				case VIRTCHNL2_PROTO_HDR_GTP_EH:
-				case VIRTCHNL2_PROTO_HDR_GTPCV2:
-				case VIRTCHNL2_PROTO_HDR_ECPRI:
-				case VIRTCHNL2_PROTO_HDR_VRRP:
-				case VIRTCHNL2_PROTO_HDR_OSPF:
-				case VIRTCHNL2_PROTO_HDR_TUN:
-				case VIRTCHNL2_PROTO_HDR_VXLAN_GPE:
-				case VIRTCHNL2_PROTO_HDR_GENEVE:
-				case VIRTCHNL2_PROTO_HDR_NSH:
-				case VIRTCHNL2_PROTO_HDR_QUIC:
-				case VIRTCHNL2_PROTO_HDR_PFCP:
-				case VIRTCHNL2_PROTO_HDR_PFCP_NODE:
-				case VIRTCHNL2_PROTO_HDR_PFCP_SESSION:
-				case VIRTCHNL2_PROTO_HDR_RTP:
-				case VIRTCHNL2_PROTO_HDR_NO_PROTO:
-				default:
-					continue;
-				}
-				adapter->ptype_tbl[ptype->ptype_id_10] = proto_hdr;
-			}
-		}
-	}
-
-free_ptype_info:
-	rte_free(ptype_info);
-	clear_cmd(base);
-	return ret;
-}
-
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 09/15] common/idpf: add vport info initialization
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
                       ` (7 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 08/15] common/idpf: support get packet type beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 10/15] common/idpf: add vector flags in vport beilei.xing
                       ` (6 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the queue model fields (txq_model/rxq_model) from the
idpf_adapter_ext structure to the idpf_adapter structure.
Refine some parameter and function names, and move the
idpf_create_vport_info_init function to the common module.
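
A short sketch of the PMD-side call sequence after the move, matching
the ethdev init path in the diff below:

	struct virtchnl2_create_vport create_vport_info;

	memset(&create_vport_info, 0, sizeof(create_vport_info));
	ret = idpf_create_vport_info_init(vport, &create_vport_info);
	if (ret != 0)
		return ret;

	ret = idpf_vport_init(vport, &create_vport_info, dev->data);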

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c | 36 ++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h | 11 ++++++
 drivers/common/idpf/version.map          |  1 +
 drivers/net/idpf/idpf_ethdev.c           | 48 +++---------------------
 drivers/net/idpf/idpf_ethdev.h           |  8 ----
 5 files changed, 54 insertions(+), 50 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 3f8e25e6a2..a9304df6dd 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -616,4 +616,40 @@ idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues)
 	return 0;
 }
 
+int
+idpf_create_vport_info_init(struct idpf_vport *vport,
+			    struct virtchnl2_create_vport *vport_info)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+
+	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+	if (adapter->txq_model == 0) {
+		vport_info->txq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+		vport_info->num_tx_q =
+			rte_cpu_to_le_16(IDPF_DEFAULT_TXQ_NUM);
+		vport_info->num_tx_complq =
+			rte_cpu_to_le_16(IDPF_DEFAULT_TXQ_NUM * IDPF_TX_COMPLQ_PER_GRP);
+	} else {
+		vport_info->txq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+		vport_info->num_tx_q = rte_cpu_to_le_16(IDPF_DEFAULT_TXQ_NUM);
+		vport_info->num_tx_complq = 0;
+	}
+	if (adapter->rxq_model == 0) {
+		vport_info->rxq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+		vport_info->num_rx_q = rte_cpu_to_le_16(IDPF_DEFAULT_RXQ_NUM);
+		vport_info->num_rx_bufq =
+			rte_cpu_to_le_16(IDPF_DEFAULT_RXQ_NUM * IDPF_RX_BUFQ_PER_GRP);
+	} else {
+		vport_info->rxq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+		vport_info->num_rx_q = rte_cpu_to_le_16(IDPF_DEFAULT_RXQ_NUM);
+		vport_info->num_rx_bufq = 0;
+	}
+
+	return 0;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 997f01f3aa..0c73d40e53 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -16,6 +16,11 @@
 #define IDPF_CTLQ_LEN		64
 #define IDPF_DFLT_MBX_BUF_SIZE	4096
 
+#define IDPF_DEFAULT_RXQ_NUM	16
+#define IDPF_RX_BUFQ_PER_GRP	2
+#define IDPF_DEFAULT_TXQ_NUM	16
+#define IDPF_TX_COMPLQ_PER_GRP	1
+
 #define IDPF_MAX_PKT_TYPE	1024
 
 #define IDPF_DFLT_INTERVAL	16
@@ -33,6 +38,9 @@ struct idpf_adapter {
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
 
 	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
+
+	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
+	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
 };
 
 struct idpf_chunks_info {
@@ -168,5 +176,8 @@ __rte_internal
 int idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues);
 __rte_internal
 int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
+__rte_internal
+int idpf_create_vport_info_init(struct idpf_vport *vport,
+				struct virtchnl2_create_vport *vport_info);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 5dab5787de..83338640c4 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -6,6 +6,7 @@ INTERNAL {
 	idpf_config_irq_map;
 	idpf_config_irq_unmap;
 	idpf_config_rss;
+	idpf_create_vport_info_init;
 	idpf_execute_vc_cmd;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 84046f955a..734e97ffc2 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -142,42 +142,6 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
-static int
-idpf_init_vport_req_info(struct rte_eth_dev *dev,
-			 struct virtchnl2_create_vport *vport_info)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(vport->adapter);
-
-	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
-	if (adapter->txq_model == 0) {
-		vport_info->txq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
-		vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
-		vport_info->num_tx_complq =
-			IDPF_DEFAULT_TXQ_NUM * IDPF_TX_COMPLQ_PER_GRP;
-	} else {
-		vport_info->txq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
-		vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
-		vport_info->num_tx_complq = 0;
-	}
-	if (adapter->rxq_model == 0) {
-		vport_info->rxq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
-		vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
-		vport_info->num_rx_bufq =
-			IDPF_DEFAULT_RXQ_NUM * IDPF_RX_BUFQ_PER_GRP;
-	} else {
-		vport_info->rxq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
-		vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
-		vport_info->num_rx_bufq = 0;
-	}
-
-	return 0;
-}
-
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -566,12 +530,12 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 		goto bail;
 
 	ret = rte_kvargs_process(kvlist, IDPF_TX_SINGLE_Q, &parse_bool,
-				 &adapter->txq_model);
+				 &adapter->base.txq_model);
 	if (ret != 0)
 		goto bail;
 
 	ret = rte_kvargs_process(kvlist, IDPF_RX_SINGLE_Q, &parse_bool,
-				 &adapter->rxq_model);
+				 &adapter->base.rxq_model);
 	if (ret != 0)
 		goto bail;
 
@@ -672,7 +636,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	struct idpf_vport_param *param = init_params;
 	struct idpf_adapter_ext *adapter = param->adapter;
 	/* for sending create vport virtchnl msg prepare */
-	struct virtchnl2_create_vport vport_req_info;
+	struct virtchnl2_create_vport create_vport_info;
 	int ret = 0;
 
 	dev->dev_ops = &idpf_eth_dev_ops;
@@ -680,14 +644,14 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
 
-	memset(&vport_req_info, 0, sizeof(vport_req_info));
-	ret = idpf_init_vport_req_info(dev, &vport_req_info);
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_create_vport_info_init(vport, &create_vport_info);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
 		goto err;
 	}
 
-	ret = idpf_vport_init(vport, &vport_req_info, dev->data);
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vports.");
 		goto err;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index d30807ca41..c2a7abb05c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -22,14 +22,9 @@
 
 #define IDPF_MAX_VPORT_NUM	8
 
-#define IDPF_DEFAULT_RXQ_NUM	16
-#define IDPF_DEFAULT_TXQ_NUM	16
-
 #define IDPF_INVALID_VPORT_IDX	0xffff
 #define IDPF_TXQ_PER_GRP	1
-#define IDPF_TX_COMPLQ_PER_GRP	1
 #define IDPF_RXQ_PER_GRP	1
-#define IDPF_RX_BUFQ_PER_GRP	2
 
 #define IDPF_DFLT_Q_VEC_NUM	1
 
@@ -78,9 +73,6 @@ struct idpf_adapter_ext {
 
 	char name[IDPF_ADAPTER_NAME_LEN];
 
-	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
-	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
-
 	struct idpf_vport **vports;
 	uint16_t max_vport_nb;
 
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 10/15] common/idpf: add vector flags in vport
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
                       ` (8 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 09/15] common/idpf: add vport info initialization beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 11/15] common/idpf: add rxq and txq struct beilei.xing
                       ` (5 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the vector path flags from the idpf_adapter_ext structure to the
idpf_vport structure.
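
Data path selection now keys off the per-vport flags rather than the
adapter-wide ones; a condensed sketch of the Rx side:

	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
	else if (vport->rx_vec_allowed && vport->rx_use_avx512)
		dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
	else
		dev->rx_pkt_burst = idpf_singleq_recv_pkts;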

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h |  5 +++++
 drivers/net/idpf/idpf_ethdev.h           |  5 -----
 drivers/net/idpf/idpf_rxtx.c             | 22 ++++++++++------------
 3 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 0c73d40e53..61c47ba5f4 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -103,6 +103,11 @@ struct idpf_vport {
 	uint16_t devarg_id;
 
 	bool stopped;
+
+	bool rx_vec_allowed;
+	bool tx_vec_allowed;
+	bool rx_use_avx512;
+	bool tx_use_avx512;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index c2a7abb05c..bef6199622 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -81,11 +81,6 @@ struct idpf_adapter_ext {
 
 	uint16_t used_vecs_num;
 
-	bool rx_vec_allowed;
-	bool tx_vec_allowed;
-	bool rx_use_avx512;
-	bool tx_use_avx512;
-
 	/* For PTP */
 	uint64_t time_hw;
 };
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 0b10e4248b..068eb8000e 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -2221,25 +2221,24 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 	struct idpf_rx_queue *rxq;
 	int i;
 
 	if (idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
 	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
-		ad->rx_vec_allowed = true;
+		vport->rx_vec_allowed = true;
 
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
-				ad->rx_use_avx512 = true;
+				vport->rx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
 #endif /* CC_AVX512_SUPPORT */
 	} else {
-		ad->rx_vec_allowed = false;
+		vport->rx_vec_allowed = false;
 	}
 #endif /* RTE_ARCH_X86 */
 
@@ -2247,13 +2246,13 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	} else {
-		if (ad->rx_vec_allowed) {
+		if (vport->rx_vec_allowed) {
 			for (i = 0; i < dev->data->nb_tx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
 				(void)idpf_singleq_rx_vec_setup(rxq);
 			}
 #ifdef CC_AVX512_SUPPORT
-			if (ad->rx_use_avx512) {
+			if (vport->rx_use_avx512) {
 				dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
 				return;
 			}
@@ -2275,7 +2274,6 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 #ifdef CC_AVX512_SUPPORT
 	struct idpf_tx_queue *txq;
 	int i;
@@ -2283,18 +2281,18 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 
 	if (idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
 	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
-		ad->tx_vec_allowed = true;
+		vport->tx_vec_allowed = true;
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
-				ad->tx_use_avx512 = true;
+				vport->tx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
 #endif /* CC_AVX512_SUPPORT */
 	} else {
-		ad->tx_vec_allowed = false;
+		vport->tx_vec_allowed = false;
 	}
 #endif /* RTE_ARCH_X86 */
 
@@ -2303,9 +2301,9 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
 #ifdef RTE_ARCH_X86
-		if (ad->tx_vec_allowed) {
+		if (vport->tx_vec_allowed) {
 #ifdef CC_AVX512_SUPPORT
-			if (ad->tx_use_avx512) {
+			if (vport->tx_use_avx512) {
 				for (i = 0; i < dev->data->nb_tx_queues; i++) {
 					txq = dev->data->tx_queues[i];
 					if (txq == NULL)
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 11/15] common/idpf: add rxq and txq struct
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
                       ` (9 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 10/15] common/idpf: add vector flags in vport beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 12/15] common/idpf: add helper functions for queue setup and release beilei.xing
                       ` (4 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Add the idpf_rx_queue and idpf_tx_queue structures to the common
module. Move the idpf_vc_config_rxq and idpf_vc_config_txq functions
there as well.
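
A hedged sketch of per-queue configuration from the PMD once the
virtchnl helpers live in the common module; the loop structure and
error handling are illustrative:

	uint16_t i;
	int ret;

	/* illustrative loops; a real PMD also handles deferred-start queues */
	for (i = 0; i < dev->data->nb_rx_queues; i++) {
		ret = idpf_vc_config_rxq(vport, dev->data->rx_queues[i]);
		if (ret != 0)
			return ret;
	}

	for (i = 0; i < dev->data->nb_tx_queues; i++) {
		ret = idpf_vc_config_txq(vport, dev->data->tx_queues[i]);
		if (ret != 0)
			return ret;
	}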

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   2 +
 drivers/common/idpf/idpf_common_rxtx.h     | 112 +++++++++++++
 drivers/common/idpf/idpf_common_virtchnl.c | 160 ++++++++++++++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  10 +-
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.h             |   2 -
 drivers/net/idpf/idpf_rxtx.h               |  97 +----------
 drivers/net/idpf/idpf_vchnl.c              | 184 ---------------------
 drivers/net/idpf/meson.build               |   1 -
 9 files changed, 284 insertions(+), 286 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.h
 delete mode 100644 drivers/net/idpf/idpf_vchnl.c

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 61c47ba5f4..4895f5f360 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -18,8 +18,10 @@
 
 #define IDPF_DEFAULT_RXQ_NUM	16
 #define IDPF_RX_BUFQ_PER_GRP	2
+#define IDPF_RXQ_PER_GRP	1
 #define IDPF_DEFAULT_TXQ_NUM	16
 #define IDPF_TX_COMPLQ_PER_GRP	1
+#define IDPF_TXQ_PER_GRP	1
 
 #define IDPF_MAX_PKT_TYPE	1024
 
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
new file mode 100644
index 0000000000..a9ed31c08a
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_RXTX_H_
+#define _IDPF_COMMON_RXTX_H_
+
+#include <rte_mbuf_ptype.h>
+#include <rte_mbuf_core.h>
+
+#include "idpf_common_device.h"
+
+struct idpf_rx_stats {
+	uint64_t mbuf_alloc_failed;
+};
+
+struct idpf_rx_queue {
+	struct idpf_adapter *adapter;   /* the adapter this queue belongs to */
+	struct rte_mempool *mp;         /* mbuf pool to populate Rx ring */
+	const struct rte_memzone *mz;   /* memzone for Rx ring */
+	volatile void *rx_ring;
+	struct rte_mbuf **sw_ring;      /* address of SW ring */
+	uint64_t rx_ring_phys_addr;     /* Rx ring DMA address */
+
+	uint16_t nb_rx_desc;            /* ring length */
+	uint16_t rx_tail;               /* current value of tail */
+	volatile uint8_t *qrx_tail;     /* register address of tail */
+	uint16_t rx_free_thresh;        /* max free RX desc to hold */
+	uint16_t nb_rx_hold;            /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
+	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
+
+	/* used for VPMD */
+	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
+	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
+	uint64_t mbuf_initializer; /* value to init mbufs */
+
+	uint16_t rx_nb_avail;
+	uint16_t rx_next_avail;
+
+	uint16_t port_id;       /* device port ID */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+	uint8_t rxdid;
+
+	bool q_set;             /* if rx queue has been configured */
+	bool q_started;         /* if rx queue has been started */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+	const struct idpf_rxq_ops *ops;
+
+	struct idpf_rx_stats rx_stats;
+
+	/* only valid for split queue mode */
+	uint8_t expected_gen_id;
+	struct idpf_rx_queue *bufq1;
+	struct idpf_rx_queue *bufq2;
+
+	uint64_t offloads;
+	uint32_t hw_register_set;
+};
+
+struct idpf_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct idpf_tx_queue {
+	const struct rte_memzone *mz;		/* memzone for Tx ring */
+	volatile struct idpf_flex_tx_desc *tx_ring;	/* Tx ring virtual address */
+	volatile union {
+		struct idpf_flex_tx_sched_desc *desc_ring;
+		struct idpf_splitq_tx_compl_desc *compl_ring;
+	};
+	uint64_t tx_ring_phys_addr;		/* Tx ring DMA address */
+	struct idpf_tx_entry *sw_ring;		/* address array of SW ring */
+
+	uint16_t nb_tx_desc;		/* ring length */
+	uint16_t tx_tail;		/* current value of tail */
+	volatile uint8_t *qtx_tail;	/* register address of tail */
+	/* number of used desc since RS bit set */
+	uint16_t nb_used;
+	uint16_t nb_free;
+	uint16_t last_desc_cleaned;	/* last desc that has been cleaned */
+	uint16_t free_thresh;
+	uint16_t rs_thresh;
+
+	uint16_t port_id;
+	uint16_t queue_id;
+	uint64_t offloads;
+	uint16_t next_dd;	/* next to check DD, for VPMD */
+	uint16_t next_rs;	/* next to set RS, for VPMD */
+
+	bool q_set;		/* if tx queue has been configured */
+	bool q_started;		/* if tx queue has been started */
+	bool tx_deferred_start; /* don't start this queue in dev start */
+	const struct idpf_txq_ops *ops;
+
+	/* only valid for split queue mode */
+	uint16_t sw_nb_desc;
+	uint16_t sw_tail;
+	void **txqs;
+	uint32_t tx_start_qid;
+	uint8_t expected_gen_id;
+	struct idpf_tx_queue *complq;
+};
+
+#endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 0ee76b98a7..4509658c24 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -805,3 +805,163 @@ idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
 	rte_free(ptype_info);
 	return err;
 }
+
+#define IDPF_RX_BUF_STRIDE		64
+int
+idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
+	struct virtchnl2_rxq_info *rxq_info;
+	struct idpf_cmd_info args;
+	uint16_t num_qs;
+	int size, err, i;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+		num_qs = IDPF_RXQ_PER_GRP;
+	else
+		num_qs = IDPF_RXQ_PER_GRP + IDPF_RX_BUFQ_PER_GRP;
+
+	size = sizeof(*vc_rxqs) + (num_qs - 1) *
+		sizeof(struct virtchnl2_rxq_info);
+	vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
+	if (vc_rxqs == NULL) {
+		DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
+		err = -ENOMEM;
+		return err;
+	}
+	vc_rxqs->vport_id = vport->vport_id;
+	vc_rxqs->num_qinfo = num_qs;
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq_info = &vc_rxqs->qinfo[0];
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
+		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+		rxq_info->queue_id = rxq->queue_id;
+		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
+		rxq_info->max_pkt_size = vport->max_pkt_len;
+
+		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
+		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+		rxq_info->ring_len = rxq->nb_rx_desc;
+	}  else {
+		/* Rx queue */
+		rxq_info = &vc_rxqs->qinfo[0];
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
+		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+		rxq_info->queue_id = rxq->queue_id;
+		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
+		rxq_info->max_pkt_size = vport->max_pkt_len;
+
+		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+		rxq_info->ring_len = rxq->nb_rx_desc;
+		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
+		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
+		rxq_info->rx_buffer_low_watermark = 64;
+
+		/* Buffer queue */
+		for (i = 1; i <= IDPF_RX_BUFQ_PER_GRP; i++) {
+			struct idpf_rx_queue *bufq = i == 1 ? rxq->bufq1 : rxq->bufq2;
+			rxq_info = &vc_rxqs->qinfo[i];
+			rxq_info->dma_ring_addr = bufq->rx_ring_phys_addr;
+			rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+			rxq_info->queue_id = bufq->queue_id;
+			rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+			rxq_info->data_buffer_size = bufq->rx_buf_len;
+			rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+			rxq_info->ring_len = bufq->nb_rx_desc;
+
+			rxq_info->buffer_notif_stride = IDPF_RX_BUF_STRIDE;
+			rxq_info->rx_buffer_low_watermark = 64;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
+	args.in_args = (uint8_t *)vc_rxqs;
+	args.in_args_size = size;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	rte_free(vc_rxqs);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
+
+	return err;
+}
+
+int
+idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
+	struct virtchnl2_txq_info *txq_info;
+	struct idpf_cmd_info args;
+	uint16_t num_qs;
+	int size, err;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+		num_qs = IDPF_TXQ_PER_GRP;
+	else
+		num_qs = IDPF_TXQ_PER_GRP + IDPF_TX_COMPLQ_PER_GRP;
+
+	size = sizeof(*vc_txqs) + (num_qs - 1) *
+		sizeof(struct virtchnl2_txq_info);
+	vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
+	if (vc_txqs == NULL) {
+		DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
+		err = -ENOMEM;
+		return err;
+	}
+	vc_txqs->vport_id = vport->vport_id;
+	vc_txqs->num_qinfo = num_qs;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		txq_info = &vc_txqs->qinfo[0];
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
+		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+		txq_info->queue_id = txq->queue_id;
+		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
+		txq_info->ring_len = txq->nb_tx_desc;
+	} else {
+		/* txq info */
+		txq_info = &vc_txqs->qinfo[0];
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
+		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+		txq_info->queue_id = txq->queue_id;
+		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+		txq_info->ring_len = txq->nb_tx_desc;
+		txq_info->tx_compl_queue_id = txq->complq->queue_id;
+		txq_info->relative_queue_id = txq_info->queue_id;
+
+		/* tx completion queue info */
+		txq_info = &vc_txqs->qinfo[1];
+		txq_info->dma_ring_addr = txq->complq->tx_ring_phys_addr;
+		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+		txq_info->queue_id = txq->complq->queue_id;
+		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+		txq_info->ring_len = txq->complq->nb_tx_desc;
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
+	args.in_args = (uint8_t *)vc_txqs;
+	args.in_args_size = size;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	rte_free(vc_txqs);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
+
+	return err;
+}
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 155527f0b6..07755d4923 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -6,6 +6,7 @@
 #define _IDPF_COMMON_VIRTCHNL_H_
 
 #include <idpf_common_device.h>
+#include <idpf_common_rxtx.h>
 
 __rte_internal
 int idpf_vc_check_api_version(struct idpf_adapter *adapter);
@@ -26,6 +27,9 @@ __rte_internal
 int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
 				 uint16_t nb_rxq, bool map);
 __rte_internal
+int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
+			struct idpf_cmd_info *args);
+__rte_internal
 int idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
 			 bool rx, bool on);
 __rte_internal
@@ -42,7 +46,7 @@ __rte_internal
 int idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
 			 uint16_t buf_len, uint8_t *buf);
 __rte_internal
-int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
-			struct idpf_cmd_info *args);
-
+int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
+__rte_internal
+int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
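
A minimal sketch of how a consumer of the common module is expected to
call the relocated helpers when bringing a port up; the wrapper and its
name are assumptions for illustration, only idpf_vc_config_rxq() and
idpf_vc_config_txq() come from this patch:

	/* Hypothetical caller: push every queue's configuration over
	 * virtchnl before starting the port. Assumes the ethdev queue
	 * arrays hold idpf_rx_queue/idpf_tx_queue pointers, as in the
	 * idpf PMD.
	 */
	static int
	example_config_queues(struct rte_eth_dev *dev)
	{
		struct idpf_vport *vport = dev->data->dev_private;
		uint16_t i;
		int err;

		for (i = 0; i < dev->data->nb_rx_queues; i++) {
			err = idpf_vc_config_rxq(vport, dev->data->rx_queues[i]);
			if (err != 0)
				return err;
		}
		for (i = 0; i < dev->data->nb_tx_queues; i++) {
			err = idpf_vc_config_txq(vport, dev->data->tx_queues[i]);
			if (err != 0)
				return err;
		}
		return 0;
	}
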
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 83338640c4..69295270df 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -11,6 +11,8 @@ INTERNAL {
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
+	idpf_vc_config_rxq;
+	idpf_vc_config_txq;
 	idpf_vc_create_vport;
 	idpf_vc_dealloc_vectors;
 	idpf_vc_destroy_vport;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index bef6199622..9b40aa4e56 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -23,8 +23,6 @@
 #define IDPF_MAX_VPORT_NUM	8
 
 #define IDPF_INVALID_VPORT_IDX	0xffff
-#define IDPF_TXQ_PER_GRP	1
-#define IDPF_RXQ_PER_GRP	1
 
 #define IDPF_DFLT_Q_VEC_NUM	1
 
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index cac6040943..b8325f9b96 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -5,6 +5,7 @@
 #ifndef _IDPF_RXTX_H_
 #define _IDPF_RXTX_H_
 
+#include <idpf_common_rxtx.h>
 #include "idpf_ethdev.h"
 
 /* MTS */
@@ -84,103 +85,10 @@
 
 extern uint64_t idpf_timestamp_dynflag;
 
-struct idpf_rx_queue {
-	struct idpf_adapter *adapter;   /* the adapter this queue belongs to */
-	struct rte_mempool *mp;         /* mbuf pool to populate Rx ring */
-	const struct rte_memzone *mz;   /* memzone for Rx ring */
-	volatile void *rx_ring;
-	struct rte_mbuf **sw_ring;      /* address of SW ring */
-	uint64_t rx_ring_phys_addr;     /* Rx ring DMA address */
-
-	uint16_t nb_rx_desc;            /* ring length */
-	uint16_t rx_tail;               /* current value of tail */
-	volatile uint8_t *qrx_tail;     /* register address of tail */
-	uint16_t rx_free_thresh;        /* max free RX desc to hold */
-	uint16_t nb_rx_hold;            /* number of held free RX desc */
-	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
-	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
-	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
-
-	/* used for VPMD */
-	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
-	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
-	uint64_t mbuf_initializer; /* value to init mbufs */
-
-	uint16_t rx_nb_avail;
-	uint16_t rx_next_avail;
-
-	uint16_t port_id;       /* device port ID */
-	uint16_t queue_id;      /* Rx queue index */
-	uint16_t rx_buf_len;    /* The packet buffer size */
-	uint16_t rx_hdr_len;    /* The header buffer size */
-	uint16_t max_pkt_len;   /* Maximum packet length */
-	uint8_t rxdid;
-
-	bool q_set;             /* if rx queue has been configured */
-	bool q_started;         /* if rx queue has been started */
-	bool rx_deferred_start; /* don't start this queue in dev start */
-	const struct idpf_rxq_ops *ops;
-
-	/* only valid for split queue mode */
-	uint8_t expected_gen_id;
-	struct idpf_rx_queue *bufq1;
-	struct idpf_rx_queue *bufq2;
-
-	uint64_t offloads;
-	uint32_t hw_register_set;
-};
-
-struct idpf_tx_entry {
-	struct rte_mbuf *mbuf;
-	uint16_t next_id;
-	uint16_t last_id;
-};
-
 struct idpf_tx_vec_entry {
 	struct rte_mbuf *mbuf;
 };
 
-/* Structure associated with each TX queue. */
-struct idpf_tx_queue {
-	const struct rte_memzone *mz;		/* memzone for Tx ring */
-	volatile struct idpf_flex_tx_desc *tx_ring;	/* Tx ring virtual address */
-	volatile union {
-		struct idpf_flex_tx_sched_desc *desc_ring;
-		struct idpf_splitq_tx_compl_desc *compl_ring;
-	};
-	uint64_t tx_ring_phys_addr;		/* Tx ring DMA address */
-	struct idpf_tx_entry *sw_ring;		/* address array of SW ring */
-
-	uint16_t nb_tx_desc;		/* ring length */
-	uint16_t tx_tail;		/* current value of tail */
-	volatile uint8_t *qtx_tail;	/* register address of tail */
-	/* number of used desc since RS bit set */
-	uint16_t nb_used;
-	uint16_t nb_free;
-	uint16_t last_desc_cleaned;	/* last desc have been cleaned*/
-	uint16_t free_thresh;
-	uint16_t rs_thresh;
-
-	uint16_t port_id;
-	uint16_t queue_id;
-	uint64_t offloads;
-	uint16_t next_dd;	/* next to set RS, for VPMD */
-	uint16_t next_rs;	/* next to check DD,  for VPMD */
-
-	bool q_set;		/* if tx queue has been configured */
-	bool q_started;		/* if tx queue has been started */
-	bool tx_deferred_start; /* don't start this queue in dev start */
-	const struct idpf_txq_ops *ops;
-
-	/* only valid for split queue mode */
-	uint16_t sw_nb_desc;
-	uint16_t sw_tail;
-	void **txqs;
-	uint32_t tx_start_qid;
-	uint8_t expected_gen_id;
-	struct idpf_tx_queue *complq;
-};
-
 /* Offload features */
 union idpf_tx_offload {
 	uint64_t data;
@@ -239,9 +147,6 @@ void idpf_stop_queues(struct rte_eth_dev *dev);
 void idpf_set_rx_function(struct rte_eth_dev *dev);
 void idpf_set_tx_function(struct rte_eth_dev *dev);
 
-int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
-int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
-
 #define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
 /* Helper function to convert a 32b nanoseconds timestamp to 64b. */
 static inline uint64_t
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
deleted file mode 100644
index 45d05ed108..0000000000
--- a/drivers/net/idpf/idpf_vchnl.c
+++ /dev/null
@@ -1,184 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2022 Intel Corporation
- */
-
-#include <stdio.h>
-#include <errno.h>
-#include <stdint.h>
-#include <string.h>
-#include <unistd.h>
-#include <stdarg.h>
-#include <inttypes.h>
-#include <rte_byteorder.h>
-#include <rte_common.h>
-
-#include <rte_debug.h>
-#include <rte_atomic.h>
-#include <rte_eal.h>
-#include <rte_ether.h>
-#include <ethdev_driver.h>
-#include <ethdev_pci.h>
-#include <rte_dev.h>
-
-#include "idpf_ethdev.h"
-#include "idpf_rxtx.h"
-
-#define IDPF_RX_BUF_STRIDE		64
-int
-idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
-	struct virtchnl2_rxq_info *rxq_info;
-	struct idpf_cmd_info args;
-	uint16_t num_qs;
-	int size, err, i;
-
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		num_qs = IDPF_RXQ_PER_GRP;
-	else
-		num_qs = IDPF_RXQ_PER_GRP + IDPF_RX_BUFQ_PER_GRP;
-
-	size = sizeof(*vc_rxqs) + (num_qs - 1) *
-		sizeof(struct virtchnl2_rxq_info);
-	vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
-	if (vc_rxqs == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
-		err = -ENOMEM;
-		return err;
-	}
-	vc_rxqs->vport_id = vport->vport_id;
-	vc_rxqs->num_qinfo = num_qs;
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
-		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq->queue_id;
-		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-		rxq_info->data_buffer_size = rxq->rx_buf_len;
-		rxq_info->max_pkt_size = vport->max_pkt_len;
-
-		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
-		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-		rxq_info->ring_len = rxq->nb_rx_desc;
-	}  else {
-		/* Rx queue */
-		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
-		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq->queue_id;
-		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		rxq_info->data_buffer_size = rxq->rx_buf_len;
-		rxq_info->max_pkt_size = vport->max_pkt_len;
-
-		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-		rxq_info->ring_len = rxq->nb_rx_desc;
-		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
-		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
-		rxq_info->rx_buffer_low_watermark = 64;
-
-		/* Buffer queue */
-		for (i = 1; i <= IDPF_RX_BUFQ_PER_GRP; i++) {
-			struct idpf_rx_queue *bufq = i == 1 ? rxq->bufq1 : rxq->bufq2;
-			rxq_info = &vc_rxqs->qinfo[i];
-			rxq_info->dma_ring_addr = bufq->rx_ring_phys_addr;
-			rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-			rxq_info->queue_id = bufq->queue_id;
-			rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-			rxq_info->data_buffer_size = bufq->rx_buf_len;
-			rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-			rxq_info->ring_len = bufq->nb_rx_desc;
-
-			rxq_info->buffer_notif_stride = IDPF_RX_BUF_STRIDE;
-			rxq_info->rx_buffer_low_watermark = 64;
-		}
-	}
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
-	args.in_args = (uint8_t *)vc_rxqs;
-	args.in_args_size = size;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	rte_free(vc_rxqs);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
-
-	return err;
-}
-
-int
-idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
-	struct virtchnl2_txq_info *txq_info;
-	struct idpf_cmd_info args;
-	uint16_t num_qs;
-	int size, err;
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		num_qs = IDPF_TXQ_PER_GRP;
-	else
-		num_qs = IDPF_TXQ_PER_GRP + IDPF_TX_COMPLQ_PER_GRP;
-
-	size = sizeof(*vc_txqs) + (num_qs - 1) *
-		sizeof(struct virtchnl2_txq_info);
-	vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
-	if (vc_txqs == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
-		err = -ENOMEM;
-		return err;
-	}
-	vc_txqs->vport_id = vport->vport_id;
-	vc_txqs->num_qinfo = num_qs;
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
-		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq->queue_id;
-		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
-		txq_info->ring_len = txq->nb_tx_desc;
-	} else {
-		/* txq info */
-		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
-		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq->queue_id;
-		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq->nb_tx_desc;
-		txq_info->tx_compl_queue_id = txq->complq->queue_id;
-		txq_info->relative_queue_id = txq_info->queue_id;
-
-		/* tx completion queue info */
-		txq_info = &vc_txqs->qinfo[1];
-		txq_info->dma_ring_addr = txq->complq->tx_ring_phys_addr;
-		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		txq_info->queue_id = txq->complq->queue_id;
-		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq->complq->nb_tx_desc;
-	}
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
-	args.in_args = (uint8_t *)vc_txqs;
-	args.in_args_size = size;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	rte_free(vc_txqs);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
-
-	return err;
-}
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
index 650dade0b9..378925166f 100644
--- a/drivers/net/idpf/meson.build
+++ b/drivers/net/idpf/meson.build
@@ -18,7 +18,6 @@ deps += ['common_idpf']
 sources = files(
         'idpf_ethdev.c',
         'idpf_rxtx.c',
-        'idpf_vchnl.c',
 )
 
 if arch_subdir == 'x86'
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 12/15] common/idpf: add helper functions for queue setup and release
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
                       ` (10 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 11/15] common/idpf: add rxq and txq struct beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 13/15] common/idpf: add Rx and Tx data path beilei.xing
                       ` (3 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Refine rxq setup and txq setup.
Move some helper functions for queue setup and queue release
to the common module.

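As a rough sketch of the intended split (illustrative only, not code
from this series): the ethdev layer keeps the rte_eth_dev-facing setup
while validation, reset and release go through the common helpers added
below.

	/* Sketch: PMD-side Rx setup delegating to the common helpers.
	 * idpf_check_rx_thresh(), idpf_rx_queue_release() and
	 * idpf_reset_single_rx_queue() are exported by this patch; the
	 * surrounding control flow is illustrative.
	 */
	if (idpf_check_rx_thresh(nb_desc, rx_free_thresh) != 0)
		return -EINVAL;
	if (dev->data->rx_queues[queue_idx] != NULL) {
		idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
		dev->data->rx_queues[queue_idx] = NULL;
	}
	/* ... allocate the queue structure and descriptor ring, then: */
	idpf_reset_single_rx_queue(rxq);
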
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c  |  414 +++++++++
 drivers/common/idpf/idpf_common_rxtx.h  |   57 ++
 drivers/common/idpf/meson.build         |    1 +
 drivers/common/idpf/version.map         |   15 +
 drivers/net/idpf/idpf_rxtx.c            | 1051 ++++++-----------------
 drivers/net/idpf/idpf_rxtx.h            |    9 -
 drivers/net/idpf/idpf_rxtx_vec_avx512.c |    2 +-
 7 files changed, 773 insertions(+), 776 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.c

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
new file mode 100644
index 0000000000..eeeeedca88
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -0,0 +1,414 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_mbuf_dyn.h>
+#include "idpf_common_rxtx.h"
+
+int
+idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+	/* The following constraints must be satisfied:
+	 * thresh < rxq->nb_rx_desc
+	 */
+	if (thresh >= nb_desc) {
+		DRV_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+			thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int
+idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+		     uint16_t tx_free_thresh)
+{
+	/* TX descriptors will have their RS bit set after tx_rs_thresh
+	 * descriptors have been used. The TX descriptor ring will be cleaned
+	 * after tx_free_thresh descriptors are used or if the number of
+	 * descriptors required to transmit a packet is greater than the
+	 * number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+			"number of TX descriptors (%u) minus 2",
+			tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		DRV_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+			"number of TX descriptors (%u) minus 3.",
+			tx_free_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+			"equal to tx_free_thresh (%u).",
+			tx_rs_thresh, tx_free_thresh);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		DRV_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+			"number of TX descriptors (%u).",
+			tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+void
+idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	uint16_t i;
+
+	if (rxq->sw_ring == NULL)
+		return;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i] != NULL) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+void
+idpf_release_txq_mbufs(struct idpf_tx_queue *txq)
+{
+	uint16_t nb_desc, i;
+
+	if (txq == NULL || txq->sw_ring == NULL) {
+		DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+		return;
+	}
+
+	if (txq->sw_nb_desc != 0) {
+		/* For split queue model, descriptor ring */
+		nb_desc = txq->sw_nb_desc;
+	} else {
+		/* For single queue model */
+		nb_desc = txq->nb_tx_desc;
+	}
+	for (i = 0; i < nb_desc; i++) {
+		if (txq->sw_ring[i].mbuf != NULL) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+void
+idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (rxq == NULL)
+		return;
+
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
+	     i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	rxq->rx_tail = 0;
+	rxq->expected_gen_id = 1;
+}
+
+void
+idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (rxq == NULL)
+		return;
+
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
+	     i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	/* The next descriptor id which can be received. */
+	rxq->rx_next_avail = 0;
+
+	/* The next descriptor id which can be refilled. */
+	rxq->rx_tail = 0;
+	/* The number of descriptors which can be refilled. */
+	rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
+
+	rxq->bufq1 = NULL;
+	rxq->bufq2 = NULL;
+}
+
+void
+idpf_reset_split_rx_queue(struct idpf_rx_queue *rxq)
+{
+	idpf_reset_split_rx_descq(rxq);
+	idpf_reset_split_rx_bufq(rxq->bufq1);
+	idpf_reset_split_rx_bufq(rxq->bufq2);
+}
+
+void
+idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (rxq == NULL)
+		return;
+
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
+	     i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+
+	rte_pktmbuf_free(rxq->pkt_first_seg);
+
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+	rxq->rxrearm_start = 0;
+	rxq->rxrearm_nb = 0;
+}
+
+void
+idpf_reset_split_tx_descq(struct idpf_tx_queue *txq)
+{
+	struct idpf_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (txq == NULL) {
+		DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->desc_ring)[i] = 0;
+
+	txe = txq->sw_ring;
+	prev = (uint16_t)(txq->sw_nb_desc - 1);
+	for (i = 0; i < txq->sw_nb_desc; i++) {
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	/* Use this as next to clean for split desc queue */
+	txq->last_desc_cleaned = 0;
+	txq->sw_tail = 0;
+	txq->nb_free = txq->nb_tx_desc - 1;
+}
+
+void
+idpf_reset_split_tx_complq(struct idpf_tx_queue *cq)
+{
+	uint32_t i, size;
+
+	if (cq == NULL) {
+		DRV_LOG(DEBUG, "Pointer to complq is NULL");
+		return;
+	}
+
+	size = sizeof(struct idpf_splitq_tx_compl_desc) * cq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)cq->compl_ring)[i] = 0;
+
+	cq->tx_tail = 0;
+	cq->expected_gen_id = 1;
+}
+
+void
+idpf_reset_single_tx_queue(struct idpf_tx_queue *txq)
+{
+	struct idpf_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (txq == NULL) {
+		DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct idpf_flex_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].qw1.cmd_dtype =
+			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_free = txq->nb_tx_desc - 1;
+
+	txq->next_dd = txq->rs_thresh - 1;
+	txq->next_rs = txq->rs_thresh - 1;
+}
+
+void
+idpf_rx_queue_release(void *rxq)
+{
+	struct idpf_rx_queue *q = rxq;
+
+	if (q == NULL)
+		return;
+
+	/* Split queue */
+	if (q->bufq1 != NULL && q->bufq2 != NULL) {
+		q->bufq1->ops->release_mbufs(q->bufq1);
+		rte_free(q->bufq1->sw_ring);
+		rte_memzone_free(q->bufq1->mz);
+		rte_free(q->bufq1);
+		q->bufq2->ops->release_mbufs(q->bufq2);
+		rte_free(q->bufq2->sw_ring);
+		rte_memzone_free(q->bufq2->mz);
+		rte_free(q->bufq2);
+		rte_memzone_free(q->mz);
+		rte_free(q);
+		return;
+	}
+
+	/* Single queue */
+	q->ops->release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+idpf_tx_queue_release(void *txq)
+{
+	struct idpf_tx_queue *q = txq;
+
+	if (q == NULL)
+		return;
+
+	if (q->complq) {
+		rte_memzone_free(q->complq->mz);
+		rte_free(q->complq);
+	}
+
+	q->ops->release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+int
+idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(mbuf == NULL)) {
+			DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &((volatile struct virtchnl2_singleq_rx_buf_desc *)(rxq->rx_ring))[i];
+		rxd->pkt_addr = dma_addr;
+		rxd->hdr_addr = 0;
+		rxd->rsvd1 = 0;
+		rxd->rsvd2 = 0;
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	return 0;
+}
+
+int
+idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(mbuf == NULL)) {
+			DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &((volatile struct virtchnl2_splitq_rx_buf_desc *)(rxq->rx_ring))[i];
+		rxd->qword0.buf_id = i;
+		rxd->qword0.rsvd0 = 0;
+		rxd->qword0.rsvd1 = 0;
+		rxd->pkt_addr = dma_addr;
+		rxd->hdr_addr = 0;
+		rxd->rsvd2 = 0;
+
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	rxq->nb_rx_hold = 0;
+	rxq->rx_tail = rxq->nb_rx_desc - 1;
+
+	return 0;
+}
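
To make the threshold rules in idpf_check_tx_thresh() above concrete
(the values are examples, not driver defaults):

	int ret;

	/* 512-entry ring: 32 < 510, 64 < 509, 32 <= 64, 512 % 32 == 0 */
	ret = idpf_check_tx_thresh(512, 32, 64);	/* returns 0 */
	/* 48 <= 64 holds, but 512 % 48 != 0: the divisor check fails */
	ret = idpf_check_tx_thresh(512, 48, 64);	/* returns -EINVAL */
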
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index a9ed31c08a..c5bb7d48af 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -5,11 +5,28 @@
 #ifndef _IDPF_COMMON_RXTX_H_
 #define _IDPF_COMMON_RXTX_H_
 
+#include <rte_mbuf.h>
 #include <rte_mbuf_ptype.h>
 #include <rte_mbuf_core.h>
 
 #include "idpf_common_device.h"
 
+#define IDPF_RX_MAX_BURST		32
+
+#define IDPF_RX_OFFLOAD_IPV4_CKSUM		RTE_BIT64(1)
+#define IDPF_RX_OFFLOAD_UDP_CKSUM		RTE_BIT64(2)
+#define IDPF_RX_OFFLOAD_TCP_CKSUM		RTE_BIT64(3)
+#define IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM	RTE_BIT64(6)
+#define IDPF_RX_OFFLOAD_TIMESTAMP		RTE_BIT64(14)
+
+#define IDPF_TX_OFFLOAD_IPV4_CKSUM       RTE_BIT64(1)
+#define IDPF_TX_OFFLOAD_UDP_CKSUM        RTE_BIT64(2)
+#define IDPF_TX_OFFLOAD_TCP_CKSUM        RTE_BIT64(3)
+#define IDPF_TX_OFFLOAD_SCTP_CKSUM       RTE_BIT64(4)
+#define IDPF_TX_OFFLOAD_TCP_TSO          RTE_BIT64(5)
+#define IDPF_TX_OFFLOAD_MULTI_SEGS       RTE_BIT64(15)
+#define IDPF_TX_OFFLOAD_MBUF_FAST_FREE   RTE_BIT64(16)
+
 struct idpf_rx_stats {
 	uint64_t mbuf_alloc_failed;
 };
@@ -109,4 +126,44 @@ struct idpf_tx_queue {
 	struct idpf_tx_queue *complq;
 };
 
+struct idpf_rxq_ops {
+	void (*release_mbufs)(struct idpf_rx_queue *rxq);
+};
+
+struct idpf_txq_ops {
+	void (*release_mbufs)(struct idpf_tx_queue *txq);
+};
+
+__rte_internal
+int idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh);
+__rte_internal
+int idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+			 uint16_t tx_free_thresh);
+__rte_internal
+void idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_release_txq_mbufs(struct idpf_tx_queue *txq);
+__rte_internal
+void idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_split_rx_queue(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_split_tx_descq(struct idpf_tx_queue *txq);
+__rte_internal
+void idpf_reset_split_tx_complq(struct idpf_tx_queue *cq);
+__rte_internal
+void idpf_reset_single_tx_queue(struct idpf_tx_queue *txq);
+__rte_internal
+void idpf_rx_queue_release(void *rxq);
+__rte_internal
+void idpf_tx_queue_release(void *txq);
+__rte_internal
+int idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq);
+__rte_internal
+int idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq);
+
 #endif /* _IDPF_COMMON_RXTX_H_ */
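
The idpf_rxq_ops/idpf_txq_ops indirection above lets each consumer plug
in its own mbuf-release routine instead of hard-coding one in the common
module; wiring it up looks like the following (mirroring what the
net/idpf changes below do):

	static const struct idpf_rxq_ops def_rxq_ops = {
		.release_mbufs = idpf_release_rxq_mbufs,
	};
	...
	rxq->ops = &def_rxq_ops;
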
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index c6cc7a196b..5ee071fdb2 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -5,6 +5,7 @@ deps += ['mbuf']
 
 sources = files(
     'idpf_common_device.c',
+    'idpf_common_rxtx.c',
     'idpf_common_virtchnl.c',
 )
 
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 69295270df..aa6ebd7c6c 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,11 +3,26 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+	idpf_alloc_single_rxq_mbufs;
+	idpf_alloc_split_rxq_mbufs;
+	idpf_check_rx_thresh;
+	idpf_check_tx_thresh;
 	idpf_config_irq_map;
 	idpf_config_irq_unmap;
 	idpf_config_rss;
 	idpf_create_vport_info_init;
 	idpf_execute_vc_cmd;
+	idpf_release_rxq_mbufs;
+	idpf_release_txq_mbufs;
+	idpf_reset_single_rx_queue;
+	idpf_reset_single_tx_queue;
+	idpf_reset_split_rx_bufq;
+	idpf_reset_split_rx_descq;
+	idpf_reset_split_rx_queue;
+	idpf_reset_split_tx_complq;
+	idpf_reset_split_tx_descq;
+	idpf_rx_queue_release;
+	idpf_tx_queue_release;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 068eb8000e..fb1814d893 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -12,358 +12,141 @@
 
 static int idpf_timestamp_dynfield_offset = -1;
 
-static int
-check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
-{
-	/* The following constraints must be satisfied:
-	 *   thresh < rxq->nb_rx_desc
-	 */
-	if (thresh >= nb_desc) {
-		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
-			     thresh, nb_desc);
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int
-check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
-		uint16_t tx_free_thresh)
+static uint64_t
+idpf_rx_offload_convert(uint64_t offload)
 {
-	/* TX descriptors will have their RS bit set after tx_rs_thresh
-	 * descriptors have been used. The TX descriptor ring will be cleaned
-	 * after tx_free_thresh descriptors are used or if the number of
-	 * descriptors required to transmit a packet is greater than the
-	 * number of free TX descriptors.
-	 *
-	 * The following constraints must be satisfied:
-	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
-	 *  - tx_free_thresh must be less than the size of the ring minus 3.
-	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
-	 *  - tx_rs_thresh must be a divisor of the ring size.
-	 *
-	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
-	 * race condition, hence the maximum threshold constraints. When set
-	 * to zero use default values.
-	 */
-	if (tx_rs_thresh >= (nb_desc - 2)) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
-			     "number of TX descriptors (%u) minus 2",
-			     tx_rs_thresh, nb_desc);
-		return -EINVAL;
-	}
-	if (tx_free_thresh >= (nb_desc - 3)) {
-		PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
-			     "number of TX descriptors (%u) minus 3.",
-			     tx_free_thresh, nb_desc);
-		return -EINVAL;
-	}
-	if (tx_rs_thresh > tx_free_thresh) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
-			     "equal to tx_free_thresh (%u).",
-			     tx_rs_thresh, tx_free_thresh);
-		return -EINVAL;
-	}
-	if ((nb_desc % tx_rs_thresh) != 0) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
-			     "number of TX descriptors (%u).",
-			     tx_rs_thresh, nb_desc);
-		return -EINVAL;
-	}
-
-	return 0;
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
 }
 
-static void
-release_rxq_mbufs(struct idpf_rx_queue *rxq)
+static uint64_t
+idpf_tx_offload_convert(uint64_t offload)
 {
-	uint16_t i;
-
-	if (rxq->sw_ring == NULL)
-		return;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		if (rxq->sw_ring[i] != NULL) {
-			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-			rxq->sw_ring[i] = NULL;
-		}
-	}
-}
-
-static void
-release_txq_mbufs(struct idpf_tx_queue *txq)
-{
-	uint16_t nb_desc, i;
-
-	if (txq == NULL || txq->sw_ring == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
-		return;
-	}
-
-	if (txq->sw_nb_desc != 0) {
-		/* For split queue model, descriptor ring */
-		nb_desc = txq->sw_nb_desc;
-	} else {
-		/* For single queue model */
-		nb_desc = txq->nb_tx_desc;
-	}
-	for (i = 0; i < nb_desc; i++) {
-		if (txq->sw_ring[i].mbuf != NULL) {
-			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
-			txq->sw_ring[i].mbuf = NULL;
-		}
-	}
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
 }
 
 static const struct idpf_rxq_ops def_rxq_ops = {
-	.release_mbufs = release_rxq_mbufs,
+	.release_mbufs = idpf_release_rxq_mbufs,
 };
 
 static const struct idpf_txq_ops def_txq_ops = {
-	.release_mbufs = release_txq_mbufs,
+	.release_mbufs = idpf_release_txq_mbufs,
 };
 
-static void
-reset_split_rx_descq(struct idpf_rx_queue *rxq)
-{
-	uint16_t len;
-	uint32_t i;
-
-	if (rxq == NULL)
-		return;
-
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-
-	for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
-	     i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-	rxq->rx_tail = 0;
-	rxq->expected_gen_id = 1;
-}
-
-static void
-reset_split_rx_bufq(struct idpf_rx_queue *rxq)
-{
-	uint16_t len;
-	uint32_t i;
-
-	if (rxq == NULL)
-		return;
-
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-
-	for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
-	     i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
-
-	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
-		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
-
-	/* The next descriptor id which can be received. */
-	rxq->rx_next_avail = 0;
-
-	/* The next descriptor id which can be refilled. */
-	rxq->rx_tail = 0;
-	/* The number of descriptors which can be refilled. */
-	rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
-
-	rxq->bufq1 = NULL;
-	rxq->bufq2 = NULL;
-}
-
-static void
-idpf_rx_queue_release(void *rxq)
-{
-	struct idpf_rx_queue *q = rxq;
-
-	if (q == NULL)
-		return;
-
-	/* Split queue */
-	if (q->bufq1 != NULL && q->bufq2 != NULL) {
-		q->bufq1->ops->release_mbufs(q->bufq1);
-		rte_free(q->bufq1->sw_ring);
-		rte_memzone_free(q->bufq1->mz);
-		rte_free(q->bufq1);
-		q->bufq2->ops->release_mbufs(q->bufq2);
-		rte_free(q->bufq2->sw_ring);
-		rte_memzone_free(q->bufq2->mz);
-		rte_free(q->bufq2);
-		rte_memzone_free(q->mz);
-		rte_free(q);
-		return;
-	}
-
-	/* Single queue */
-	q->ops->release_mbufs(q);
-	rte_free(q->sw_ring);
-	rte_memzone_free(q->mz);
-	rte_free(q);
-}
-
-static void
-idpf_tx_queue_release(void *txq)
-{
-	struct idpf_tx_queue *q = txq;
-
-	if (q == NULL)
-		return;
-
-	if (q->complq) {
-		rte_memzone_free(q->complq->mz);
-		rte_free(q->complq);
-	}
-
-	q->ops->release_mbufs(q);
-	rte_free(q->sw_ring);
-	rte_memzone_free(q->mz);
-	rte_free(q);
-}
-
-static inline void
-reset_split_rx_queue(struct idpf_rx_queue *rxq)
+static const struct rte_memzone *
+idpf_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
 {
-	reset_split_rx_descq(rxq);
-	reset_split_rx_bufq(rxq->bufq1);
-	reset_split_rx_bufq(rxq->bufq2);
-}
-
-static void
-reset_single_rx_queue(struct idpf_rx_queue *rxq)
-{
-	uint16_t len;
-	uint32_t i;
-
-	if (rxq == NULL)
-		return;
-
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-
-	for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
-	     i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
-
-	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
-		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
-
-	rxq->rx_tail = 0;
-	rxq->nb_rx_hold = 0;
-
-	rte_pktmbuf_free(rxq->pkt_first_seg);
-
-	rxq->pkt_first_seg = NULL;
-	rxq->pkt_last_seg = NULL;
-	rxq->rxrearm_start = 0;
-	rxq->rxrearm_nb = 0;
-}
-
-static void
-reset_split_tx_descq(struct idpf_tx_queue *txq)
-{
-	struct idpf_tx_entry *txe;
-	uint32_t i, size;
-	uint16_t prev;
-
-	if (txq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
-		return;
-	}
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
 
-	size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)txq->desc_ring)[i] = 0;
-
-	txe = txq->sw_ring;
-	prev = (uint16_t)(txq->sw_nb_desc - 1);
-	for (i = 0; i < txq->sw_nb_desc; i++) {
-		txe[i].mbuf = NULL;
-		txe[i].last_id = i;
-		txe[prev].next_id = i;
-		prev = i;
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      IDPF_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Tx ring", sizeof("idpf Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      IDPF_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Rx ring", sizeof("idpf Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Tx compl ring", sizeof("idpf Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Rx buf ring", sizeof("idpf Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
 	}
 
-	txq->tx_tail = 0;
-	txq->nb_used = 0;
-
-	/* Use this as next to clean for split desc queue */
-	txq->last_desc_cleaned = 0;
-	txq->sw_tail = 0;
-	txq->nb_free = txq->nb_tx_desc - 1;
-}
-
-static void
-reset_split_tx_complq(struct idpf_tx_queue *cq)
-{
-	uint32_t i, size;
-
-	if (cq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to complq is NULL");
-		return;
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, IDPF_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
 	}
 
-	size = sizeof(struct idpf_splitq_tx_compl_desc) * cq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)cq->compl_ring)[i] = 0;
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
 
-	cq->tx_tail = 0;
-	cq->expected_gen_id = 1;
+	return mz;
 }
 
 static void
-reset_single_tx_queue(struct idpf_tx_queue *txq)
+idpf_dma_zone_release(const struct rte_memzone *mz)
 {
-	struct idpf_tx_entry *txe;
-	uint32_t i, size;
-	uint16_t prev;
-
-	if (txq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
-		return;
-	}
-
-	txe = txq->sw_ring;
-	size = sizeof(struct idpf_flex_tx_desc) * txq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)txq->tx_ring)[i] = 0;
-
-	prev = (uint16_t)(txq->nb_tx_desc - 1);
-	for (i = 0; i < txq->nb_tx_desc; i++) {
-		txq->tx_ring[i].qw1.cmd_dtype =
-			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE);
-		txe[i].mbuf =  NULL;
-		txe[i].last_id = i;
-		txe[prev].next_id = i;
-		prev = i;
-	}
-
-	txq->tx_tail = 0;
-	txq->nb_used = 0;
-
-	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
-	txq->nb_free = txq->nb_tx_desc - 1;
-
-	txq->next_dd = txq->rs_thresh - 1;
-	txq->next_rs = txq->rs_thresh - 1;
+	rte_memzone_free(mz);
 }
 
 static int
-idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
+idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 			 uint16_t queue_idx, uint16_t rx_free_thresh,
 			 uint16_t nb_desc, unsigned int socket_id,
-			 struct rte_mempool *mp)
+			 struct rte_mempool *mp, uint8_t bufq_id)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
 	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
-	uint32_t ring_size;
+	struct idpf_rx_queue *bufq;
 	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("idpf bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
 
 	bufq->mp = mp;
 	bufq->nb_rx_desc = nb_desc;
@@ -376,8 +159,21 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
 	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
 	bufq->rx_buf_len = len;
 
-	/* Allocate the software ring. */
+	/* Allocate a little more to support bulk allocate. */
 	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = idpf_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
 	bufq->sw_ring =
 		rte_zmalloc_socket("idpf rx bufq sw ring",
 				   sizeof(struct rte_mbuf *) * len,
@@ -385,55 +181,60 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
 				   socket_id);
 	if (bufq->sw_ring == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
-		return -ENOMEM;
-	}
-
-	/* Allocate a liitle more to support bulk allocate. */
-	len = nb_desc + IDPF_RX_MAX_BURST;
-	ring_size = RTE_ALIGN(len *
-			      sizeof(struct virtchnl2_splitq_rx_buf_desc),
-			      IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "rx_buf_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
-		rte_free(bufq->sw_ring);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
 	}
 
-	/* Zero all the descriptors in the ring. */
-	memset(mz->addr, 0, ring_size);
-	bufq->rx_ring_phys_addr = mz->iova;
-	bufq->rx_ring = mz->addr;
-
-	bufq->mz = mz;
-	reset_split_rx_bufq(bufq);
-	bufq->q_set = true;
+	idpf_reset_split_rx_bufq(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
 	bufq->ops = &def_rxq_ops;
+	bufq->q_set = true;
 
-	/* TODO: allow bulk or vec */
+	if (bufq_id == 1) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == 2) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
 
 	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	idpf_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
 }
 
-static int
-idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			  uint16_t nb_desc, unsigned int socket_id,
-			  const struct rte_eth_rxconf *rx_conf,
-			  struct rte_mempool *mp)
+static void
+idpf_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	idpf_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_rx_queue *bufq1, *bufq2;
+	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
 	struct idpf_rx_queue *rxq;
 	uint16_t rx_free_thresh;
-	uint32_t ring_size;
 	uint64_t offloads;
-	uint16_t qid;
+	bool is_splitq;
 	uint16_t len;
 	int ret;
 
@@ -443,7 +244,7 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
 		IDPF_DEFAULT_RX_FREE_THRESH :
 		rx_conf->rx_free_thresh;
-	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+	if (idpf_check_rx_thresh(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Free memory if needed */
@@ -452,16 +253,19 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		dev->data->rx_queues[queue_idx] = NULL;
 	}
 
-	/* Setup Rx description queue */
+	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("idpf rxq",
 				 sizeof(struct idpf_rx_queue),
 				 RTE_CACHE_LINE_SIZE,
 				 socket_id);
 	if (rxq == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
 	}
 
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
 	rxq->mp = mp;
 	rxq->nb_rx_desc = nb_desc;
 	rxq->rx_free_thresh = rx_free_thresh;
@@ -470,343 +274,129 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
 	rxq->rx_hdr_len = 0;
 	rxq->adapter = adapter;
-	rxq->offloads = offloads;
+	rxq->offloads = idpf_rx_offload_convert(offloads);
 
 	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
 	rxq->rx_buf_len = len;
 
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-	ring_size = RTE_ALIGN(len *
-			      sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
-			      IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "rx_cpmpl_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = idpf_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
 	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
 		ret = -ENOMEM;
-		goto free_rxq;
+		goto err_mz_reserve;
 	}
-
-	/* Zero all the descriptors in the ring. */
-	memset(mz->addr, 0, ring_size);
 	rxq->rx_ring_phys_addr = mz->iova;
 	rxq->rx_ring = mz->addr;
-
 	rxq->mz = mz;
-	reset_split_rx_descq(rxq);
 
-	/* TODO: allow bulk or vec */
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("idpf rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
 
-	/* setup Rx buffer queue */
-	bufq1 = rte_zmalloc_socket("idpf bufq1",
-				   sizeof(struct idpf_rx_queue),
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (bufq1 == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 1.");
-		ret = -ENOMEM;
-		goto free_mz;
-	}
-	qid = 2 * queue_idx;
-	ret = idpf_rx_split_bufq_setup(dev, bufq1, qid, rx_free_thresh,
-				       nb_desc, socket_id, mp);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
-		ret = -EINVAL;
-		goto free_bufq1;
-	}
-	rxq->bufq1 = bufq1;
+		idpf_reset_single_rx_queue(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
+	} else {
+		idpf_reset_split_rx_descq(rxq);
 
-	bufq2 = rte_zmalloc_socket("idpf bufq2",
-				   sizeof(struct idpf_rx_queue),
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (bufq2 == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 2.");
-		rte_free(bufq1->sw_ring);
-		rte_memzone_free(bufq1->mz);
-		ret = -ENOMEM;
-		goto free_bufq1;
-	}
-	qid = 2 * queue_idx + 1;
-	ret = idpf_rx_split_bufq_setup(dev, bufq2, qid, rx_free_thresh,
-				       nb_desc, socket_id, mp);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
-		rte_free(bufq1->sw_ring);
-		rte_memzone_free(bufq1->mz);
-		ret = -EINVAL;
-		goto free_bufq2;
+		/* Setup Rx buffer queues */
+		ret = idpf_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = idpf_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
 	}
-	rxq->bufq2 = bufq2;
 
 	rxq->q_set = true;
 	dev->data->rx_queues[queue_idx] = rxq;
 
 	return 0;
 
-free_bufq2:
-	rte_free(bufq2);
-free_bufq1:
-	rte_free(bufq1);
-free_mz:
-	rte_memzone_free(mz);
-free_rxq:
+err_bufq2_setup:
+	idpf_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	idpf_dma_zone_release(mz);
+err_mz_reserve:
 	rte_free(rxq);
-
+err_rxq_alloc:
 	return ret;
 }
 
 static int
-idpf_rx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			   uint16_t nb_desc, unsigned int socket_id,
-			   const struct rte_eth_rxconf *rx_conf,
-			   struct rte_mempool *mp)
+idpf_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
-	struct idpf_rx_queue *rxq;
-	uint16_t rx_free_thresh;
-	uint32_t ring_size;
-	uint64_t offloads;
-	uint16_t len;
-
-	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
-
-	/* Check free threshold */
-	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
-		IDPF_DEFAULT_RX_FREE_THRESH :
-		rx_conf->rx_free_thresh;
-	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
-		return -EINVAL;
-
-	/* Free memory if needed */
-	if (dev->data->rx_queues[queue_idx] != NULL) {
-		idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
-		dev->data->rx_queues[queue_idx] = NULL;
-	}
-
-	/* Setup Rx description queue */
-	rxq = rte_zmalloc_socket("idpf rxq",
-				 sizeof(struct idpf_rx_queue),
-				 RTE_CACHE_LINE_SIZE,
-				 socket_id);
-	if (rxq == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
-		return -ENOMEM;
-	}
-
-	rxq->mp = mp;
-	rxq->nb_rx_desc = nb_desc;
-	rxq->rx_free_thresh = rx_free_thresh;
-	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
-	rxq->port_id = dev->data->port_id;
-	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
-	rxq->rx_hdr_len = 0;
-	rxq->adapter = adapter;
-	rxq->offloads = offloads;
-
-	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
-	rxq->rx_buf_len = len;
-
-	len = nb_desc + IDPF_RX_MAX_BURST;
-	rxq->sw_ring =
-		rte_zmalloc_socket("idpf rxq sw ring",
-				   sizeof(struct rte_mbuf *) * len,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (rxq->sw_ring == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
-		rte_free(rxq);
-		return -ENOMEM;
-	}
-
-	/* Allocate a liitle more to support bulk allocate. */
-	len = nb_desc + IDPF_RX_MAX_BURST;
-	ring_size = RTE_ALIGN(len *
-			      sizeof(struct virtchnl2_singleq_rx_buf_desc),
-			      IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "rx ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
-		rte_free(rxq->sw_ring);
-		rte_free(rxq);
-		return -ENOMEM;
-	}
-
-	/* Zero all the descriptors in the ring. */
-	memset(mz->addr, 0, ring_size);
-	rxq->rx_ring_phys_addr = mz->iova;
-	rxq->rx_ring = mz->addr;
-
-	rxq->mz = mz;
-	reset_single_rx_queue(rxq);
-	rxq->q_set = true;
-	dev->data->rx_queues[queue_idx] = rxq;
-	rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
-			queue_idx * vport->chunks_info.rx_qtail_spacing);
-	rxq->ops = &def_rxq_ops;
-
-	return 0;
-}
-
-int
-idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-		    uint16_t nb_desc, unsigned int socket_id,
-		    const struct rte_eth_rxconf *rx_conf,
-		    struct rte_mempool *mp)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		return idpf_rx_single_queue_setup(dev, queue_idx, nb_desc,
-						  socket_id, rx_conf, mp);
-	else
-		return idpf_rx_split_queue_setup(dev, queue_idx, nb_desc,
-						 socket_id, rx_conf, mp);
-}
-
-static int
-idpf_tx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			  uint16_t nb_desc, unsigned int socket_id,
-			  const struct rte_eth_txconf *tx_conf)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	uint16_t tx_rs_thresh, tx_free_thresh;
-	struct idpf_hw *hw = &adapter->hw;
-	struct idpf_tx_queue *txq, *cq;
-	const struct rte_memzone *mz;
-	uint32_t ring_size;
-	uint64_t offloads;
+	struct idpf_tx_queue *cq;
 	int ret;
 
-	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
-
-	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh != 0) ?
-		tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
-	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh != 0) ?
-		tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
-		return -EINVAL;
-
-	/* Free memory if needed. */
-	if (dev->data->tx_queues[queue_idx] != NULL) {
-		idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
-		dev->data->tx_queues[queue_idx] = NULL;
-	}
-
-	/* Allocate the TX queue data structure. */
-	txq = rte_zmalloc_socket("idpf split txq",
-				 sizeof(struct idpf_tx_queue),
-				 RTE_CACHE_LINE_SIZE,
-				 socket_id);
-	if (txq == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
-		return -ENOMEM;
-	}
-
-	txq->nb_tx_desc = nb_desc;
-	txq->rs_thresh = tx_rs_thresh;
-	txq->free_thresh = tx_free_thresh;
-	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
-	txq->port_id = dev->data->port_id;
-	txq->offloads = offloads;
-	txq->tx_deferred_start = tx_conf->tx_deferred_start;
-
-	/* Allocate software ring */
-	txq->sw_nb_desc = 2 * nb_desc;
-	txq->sw_ring =
-		rte_zmalloc_socket("idpf split tx sw ring",
-				   sizeof(struct idpf_tx_entry) *
-				   txq->sw_nb_desc,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (txq->sw_ring == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
-		ret = -ENOMEM;
-		goto err_txq_sw_ring;
-	}
-
-	/* Allocate TX hardware ring descriptors. */
-	ring_size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
-	ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "split_tx_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
-		ret = -ENOMEM;
-		goto err_txq_mz;
-	}
-	txq->tx_ring_phys_addr = mz->iova;
-	txq->desc_ring = mz->addr;
-
-	txq->mz = mz;
-	reset_split_tx_descq(txq);
-	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
-			queue_idx * vport->chunks_info.tx_qtail_spacing);
-	txq->ops = &def_txq_ops;
-
-	/* Allocate the TX completion queue data structure. */
-	txq->complq = rte_zmalloc_socket("idpf splitq cq",
-					 sizeof(struct idpf_tx_queue),
-					 RTE_CACHE_LINE_SIZE,
-					 socket_id);
-	cq = txq->complq;
+	cq = rte_zmalloc_socket("idpf splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
 	if (cq == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
 		ret = -ENOMEM;
-		goto err_cq;
+		goto err_cq_alloc;
 	}
-	cq->nb_tx_desc = 2 * nb_desc;
+
+	cq->nb_tx_desc = nb_desc;
 	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
 	cq->port_id = dev->data->port_id;
 	cq->txqs = dev->data->tx_queues;
 	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
 
-	ring_size = sizeof(struct idpf_splitq_tx_compl_desc) * cq->nb_tx_desc;
-	ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "tx_split_compl_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
+	mz = idpf_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
 	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX completion queue");
 		ret = -ENOMEM;
-		goto err_cq_mz;
+		goto err_mz_reserve;
 	}
 	cq->tx_ring_phys_addr = mz->iova;
 	cq->compl_ring = mz->addr;
 	cq->mz = mz;
-	reset_split_tx_complq(cq);
+	idpf_reset_split_tx_complq(cq);
 
-	txq->q_set = true;
-	dev->data->tx_queues[queue_idx] = txq;
+	txq->complq = cq;
 
 	return 0;
 
-err_cq_mz:
+err_mz_reserve:
 	rte_free(cq);
-err_cq:
-	rte_memzone_free(txq->mz);
-err_txq_mz:
-	rte_free(txq->sw_ring);
-err_txq_sw_ring:
-	rte_free(txq);
-
+err_cq_alloc:
 	return ret;
 }
 
-static int
-idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			   uint16_t nb_desc, unsigned int socket_id,
-			   const struct rte_eth_txconf *tx_conf)
+int
+idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
@@ -814,8 +404,10 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
 	struct idpf_tx_queue *txq;
-	uint32_t ring_size;
 	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
 
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 
@@ -823,7 +415,7 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
 	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
 		tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+	if (idpf_check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Free memory if needed. */
@@ -839,71 +431,74 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 				 socket_id);
 	if (txq == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_txq_alloc;
 	}
 
-	/* TODO: vlan offload */
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
 
 	txq->nb_tx_desc = nb_desc;
 	txq->rs_thresh = tx_rs_thresh;
 	txq->free_thresh = tx_free_thresh;
 	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
 	txq->port_id = dev->data->port_id;
-	txq->offloads = offloads;
+	txq->offloads = idpf_tx_offload_convert(offloads);
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 
-	/* Allocate software ring */
-	txq->sw_ring =
-		rte_zmalloc_socket("idpf tx sw ring",
-				   sizeof(struct idpf_tx_entry) * nb_desc,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (txq->sw_ring == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
-		rte_free(txq);
-		return -ENOMEM;
-	}
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
 
 	/* Allocate TX hardware ring descriptors. */
-	ring_size = sizeof(struct idpf_flex_tx_desc) * nb_desc;
-	ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
+	mz = idpf_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
 	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
-		rte_free(txq->sw_ring);
-		rte_free(txq);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_mz_reserve;
 	}
-
 	txq->tx_ring_phys_addr = mz->iova;
-	txq->tx_ring = mz->addr;
-
 	txq->mz = mz;
-	reset_single_tx_queue(txq);
-	txq->q_set = true;
-	dev->data->tx_queues[queue_idx] = txq;
+
+	txq->sw_ring = rte_zmalloc_socket("idpf tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		idpf_reset_single_tx_queue(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		idpf_reset_split_tx_descq(txq);
+
+		/* Setup tx completion queue if split model */
+		ret = idpf_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
 	txq->ops = &def_txq_ops;
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
 
 	return 0;
-}
 
-int
-idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-		    uint16_t nb_desc, unsigned int socket_id,
-		    const struct rte_eth_txconf *tx_conf)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		return idpf_tx_single_queue_setup(dev, queue_idx, nb_desc,
-						  socket_id, tx_conf);
-	else
-		return idpf_tx_split_queue_setup(dev, queue_idx, nb_desc,
-						 socket_id, tx_conf);
+err_complq_setup:
+err_sw_ring_alloc:
+	idpf_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
 }
 
 static int
@@ -916,89 +511,13 @@ idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
 							 &idpf_timestamp_dynflag);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR,
-				"Cannot register mbuf field/flag for timestamp");
+				    "Cannot register mbuf field/flag for timestamp");
 			return -EINVAL;
 		}
 	}
 	return 0;
 }
 
-static int
-idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
-{
-	volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
-	struct rte_mbuf *mbuf = NULL;
-	uint64_t dma_addr;
-	uint16_t i;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		mbuf = rte_mbuf_raw_alloc(rxq->mp);
-		if (unlikely(mbuf == NULL)) {
-			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
-			return -ENOMEM;
-		}
-
-		rte_mbuf_refcnt_set(mbuf, 1);
-		mbuf->next = NULL;
-		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
-		mbuf->nb_segs = 1;
-		mbuf->port = rxq->port_id;
-
-		dma_addr =
-			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
-
-		rxd = &((volatile struct virtchnl2_singleq_rx_buf_desc *)(rxq->rx_ring))[i];
-		rxd->pkt_addr = dma_addr;
-		rxd->hdr_addr = 0;
-		rxd->rsvd1 = 0;
-		rxd->rsvd2 = 0;
-		rxq->sw_ring[i] = mbuf;
-	}
-
-	return 0;
-}
-
-static int
-idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
-{
-	volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
-	struct rte_mbuf *mbuf = NULL;
-	uint64_t dma_addr;
-	uint16_t i;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		mbuf = rte_mbuf_raw_alloc(rxq->mp);
-		if (unlikely(mbuf == NULL)) {
-			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
-			return -ENOMEM;
-		}
-
-		rte_mbuf_refcnt_set(mbuf, 1);
-		mbuf->next = NULL;
-		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
-		mbuf->nb_segs = 1;
-		mbuf->port = rxq->port_id;
-
-		dma_addr =
-			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
-
-		rxd = &((volatile struct virtchnl2_splitq_rx_buf_desc *)(rxq->rx_ring))[i];
-		rxd->qword0.buf_id = i;
-		rxd->qword0.rsvd0 = 0;
-		rxd->qword0.rsvd1 = 0;
-		rxd->pkt_addr = dma_addr;
-		rxd->hdr_addr = 0;
-		rxd->rsvd2 = 0;
-
-		rxq->sw_ring[i] = mbuf;
-	}
-
-	rxq->nb_rx_hold = 0;
-	rxq->rx_tail = rxq->nb_rx_desc - 1;
-
-	return 0;
-}
-
 int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -1164,11 +683,11 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	rxq = dev->data->rx_queues[rx_queue_id];
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		rxq->ops->release_mbufs(rxq);
-		reset_single_rx_queue(rxq);
+		idpf_reset_single_rx_queue(rxq);
 	} else {
 		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
 		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
-		reset_split_rx_queue(rxq);
+		idpf_reset_split_rx_queue(rxq);
 	}
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -1195,10 +714,10 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	txq = dev->data->tx_queues[tx_queue_id];
 	txq->ops->release_mbufs(txq);
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		reset_single_tx_queue(txq);
+		idpf_reset_single_tx_queue(txq);
 	} else {
-		reset_split_tx_descq(txq);
-		reset_split_tx_complq(txq->complq);
+		idpf_reset_split_tx_descq(txq);
+		idpf_reset_split_tx_complq(txq->complq);
 	}
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
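For reference, both unified setup paths above are reached through the standard ethdev API, so an application never sees the single/split distinction. A minimal sketch of the caller side, with illustrative values only (512 descriptors; zero thresholds select the IDPF_DEFAULT_* fallbacks shown in the setup code):

#include <rte_ethdev.h>

/* Illustrative only: rte_eth_tx_queue_setup() dispatches to the
 * driver's tx_queue_setup op, i.e. idpf_tx_queue_setup() above.
 */
static int
example_setup_txq(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_txconf txconf = {
		.tx_rs_thresh = 0,   /* 0: use driver default */
		.tx_free_thresh = 0, /* 0: use driver default */
	};

	return rte_eth_tx_queue_setup(port_id, queue_id, 512,
				      rte_eth_dev_socket_id(port_id),
				      &txconf);
}
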
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index b8325f9b96..4efbf10295 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -51,7 +51,6 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define IDPF_RING_BASE_ALIGN	128
 
-#define IDPF_RX_MAX_BURST		32
 #define IDPF_DEFAULT_RX_FREE_THRESH	32
 
 /* used for Vector PMD */
@@ -101,14 +100,6 @@ union idpf_tx_offload {
 	};
 };
 
-struct idpf_rxq_ops {
-	void (*release_mbufs)(struct idpf_rx_queue *rxq);
-};
-
-struct idpf_txq_ops {
-	void (*release_mbufs)(struct idpf_tx_queue *txq);
-};
-
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index fb2b6bb53c..71a6c59823 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -562,7 +562,7 @@ idpf_tx_free_bufs_avx512(struct idpf_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->next_dd - (n - 1);
 
-	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & IDPF_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
 								rte_lcore_id());
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v5 13/15] common/idpf: add Rx and Tx data path
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common modle beilei.xing
                       ` (11 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 12/15] common/idpf: add help functions for queue setup and release beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 14/15] common/idpf: add vec queue setup beilei.xing
                       ` (2 subsequent siblings)
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Mingxia Liu

From: Beilei Xing <beilei.xing@intel.com>

Add a timestamp field to the idpf_adapter structure.
Move the scalar Rx/Tx data paths for both the single queue and split
queue models to the common module.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h |   5 +
 drivers/common/idpf/idpf_common_logs.h   |  24 +
 drivers/common/idpf/idpf_common_rxtx.c   | 987 +++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h   |  89 +-
 drivers/common/idpf/version.map          |   6 +
 drivers/net/idpf/idpf_ethdev.c           |   2 -
 drivers/net/idpf/idpf_ethdev.h           |   4 -
 drivers/net/idpf/idpf_logs.h             |  24 -
 drivers/net/idpf/idpf_rxtx.c             | 937 +--------------------
 drivers/net/idpf/idpf_rxtx.h             | 132 ---
 drivers/net/idpf/idpf_rxtx_vec_avx512.c  |   8 +-
 11 files changed, 1115 insertions(+), 1103 deletions(-)

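For a sense of how a consumer uses these exports: a minimal sketch, not part of this patch (the helper name is hypothetical), of a PMD selecting the scalar burst functions by queue model once they live in the common module:

#include <stdbool.h>
#include <rte_ethdev.h>
#include "idpf_common_rxtx.h"

/* Illustrative only: pick the exported scalar data-path functions
 * according to the negotiated queue model.
 */
static void
example_set_burst_funcs(struct rte_eth_dev *dev, bool splitq)
{
	if (splitq) {
		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
	} else {
		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
		dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
	}
	dev->tx_pkt_prepare = idpf_prep_pkts;
}
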
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 4895f5f360..573852ff75 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -23,6 +23,8 @@
 #define IDPF_TX_COMPLQ_PER_GRP	1
 #define IDPF_TXQ_PER_GRP	1
 
+#define IDPF_MIN_FRAME_SIZE	14
+
 #define IDPF_MAX_PKT_TYPE	1024
 
 #define IDPF_DFLT_INTERVAL	16
@@ -43,6 +45,9 @@ struct idpf_adapter {
 
 	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
 	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
+
+	/* For timestamp */
+	uint64_t time_hw;
 };
 
 struct idpf_chunks_info {
diff --git a/drivers/common/idpf/idpf_common_logs.h b/drivers/common/idpf/idpf_common_logs.h
index fe36562769..63ad2195be 100644
--- a/drivers/common/idpf/idpf_common_logs.h
+++ b/drivers/common/idpf/idpf_common_logs.h
@@ -20,4 +20,28 @@ extern int idpf_common_logtype;
 #define DRV_LOG(level, fmt, args...)		\
 	DRV_LOG_RAW(level, fmt "\n", ## args)
 
+#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
+#define RX_LOG(level, ...) \
+	RTE_LOG(level, \
+		PMD, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+#else
+#define RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_IDPF_DEBUG_TX
+#define TX_LOG(level, ...) \
+	RTE_LOG(level, \
+		PMD, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+#else
+#define TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
 #endif /* _IDPF_COMMON_LOGS_H_ */
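A note on the data path that follows: the split-queue Rx and Tx-completion rings do not poll a DD bit the way the single-queue path does. Instead, hardware toggles a generation (gen) field each time it wraps the ring, and software keeps an expected_gen_id that it flips on its own wrap. A stripped-down sketch of the idea, with hypothetical types (not driver code):

#include <stdbool.h>
#include <stdint.h>

struct example_desc {
	uint16_t gen; /* written by HW together with the descriptor */
};

struct example_ring {
	struct example_desc *ring;
	uint16_t nb_desc;
	uint16_t tail;            /* next descriptor SW will look at */
	uint16_t expected_gen_id; /* flipped by SW on each wrap */
};

/* Returns true if the descriptor at tail belongs to the current
 * generation, i.e. HW has written it since SW last passed by;
 * advances tail and flips the expected gen on a software wrap.
 */
static bool
example_desc_done(struct example_ring *r)
{
	if (r->ring[r->tail].gen != r->expected_gen_id)
		return false;
	if (++r->tail == r->nb_desc) {
		r->tail = 0;
		r->expected_gen_id ^= 1;
	}
	return true;
}

This is the shape of the gen_id checks in idpf_splitq_recv_pkts() and idpf_split_tx_free() below.
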
diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index eeeeedca88..459057f20e 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -3,8 +3,13 @@
  */
 
 #include <rte_mbuf_dyn.h>
+#include <rte_errno.h>
+
 #include "idpf_common_rxtx.h"
 
+int idpf_timestamp_dynfield_offset = -1;
+uint64_t idpf_timestamp_dynflag;
+
 int
 idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
@@ -337,6 +342,23 @@ idpf_tx_queue_release(void *txq)
 	rte_free(q);
 }
 
+int
+idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
+{
+	int err;
+	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+		/* Register mbuf field and flag for Rx timestamp */
+		err = rte_mbuf_dyn_rx_timestamp_register(&idpf_timestamp_dynfield_offset,
+							 &idpf_timestamp_dynflag);
+		if (err != 0) {
+			DRV_LOG(ERR,
+				"Cannot register mbuf field/flag for timestamp");
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+
 int
 idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
 {
@@ -412,3 +434,968 @@ idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
 
 	return 0;
 }
+
+#define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
+/* Helper function to convert a 32b nanoseconds timestamp to 64b. */
+static inline uint64_t
+idpf_tstamp_convert_32b_64b(struct idpf_adapter *ad, uint32_t flag,
+			    uint32_t in_timestamp)
+{
+#ifdef RTE_ARCH_X86_64
+	struct idpf_hw *hw = &ad->hw;
+	const uint64_t mask = 0xFFFFFFFF;
+	uint32_t hi, lo, lo2, delta;
+	uint64_t ns;
+
+	if (flag != 0) {
+		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
+		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_EXEC_CMD_M |
+			       PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
+		lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
+		hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
+		/*
+		 * On a typical system, the delta between lo and lo2 is ~1000 ns,
+		 * so 10000 seems a large enough but not overly big guard band.
+		 */
+		if (lo > (UINT32_MAX - IDPF_TIMESYNC_REG_WRAP_GUARD_BAND))
+			lo2 = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
+		else
+			lo2 = lo;
+
+		if (lo2 < lo) {
+			lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
+			hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
+		}
+
+		ad->time_hw = ((uint64_t)hi << 32) | lo;
+	}
+
+	delta = (in_timestamp - (uint32_t)(ad->time_hw & mask));
+	if (delta > (mask / 2)) {
+		delta = ((uint32_t)(ad->time_hw & mask) - in_timestamp);
+		ns = ad->time_hw - delta;
+	} else {
+		ns = ad->time_hw + delta;
+	}
+
+	return ns;
+#else /* !RTE_ARCH_X86_64 */
+	RTE_SET_USED(ad);
+	RTE_SET_USED(flag);
+	RTE_SET_USED(in_timestamp);
+	return 0;
+#endif /* RTE_ARCH_X86_64 */
+}
+
+#define IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S				\
+	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) |     \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) |     \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) |    \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
+
+static inline uint64_t
+idpf_splitq_rx_csum_offload(uint8_t err)
+{
+	uint64_t flags = 0;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S)) == 0))
+		return flags;
+
+	if (likely((err & IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S) == 0)) {
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)) != 0))
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+	return flags;
+}
+
+#define IDPF_RX_FLEX_DESC_ADV_HASH1_S  0
+#define IDPF_RX_FLEX_DESC_ADV_HASH2_S  16
+#define IDPF_RX_FLEX_DESC_ADV_HASH3_S  24
+
+static inline uint64_t
+idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
+			   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+{
+	uint8_t status_err0_qw0;
+	uint64_t flags = 0;
+
+	status_err0_qw0 = rx_desc->status_err0_qw0;
+
+	if ((status_err0_qw0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) != 0) {
+		flags |= RTE_MBUF_F_RX_RSS_HASH;
+		mb->hash.rss = (rte_le_to_cpu_16(rx_desc->hash1) <<
+				IDPF_RX_FLEX_DESC_ADV_HASH1_S) |
+			((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
+			 IDPF_RX_FLEX_DESC_ADV_HASH2_S) |
+			((uint32_t)(rx_desc->hash3) <<
+			 IDPF_RX_FLEX_DESC_ADV_HASH3_S);
+	}
+
+	return flags;
+}
+
+static void
+idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
+{
+	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
+	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
+	uint16_t nb_refill = rx_bufq->rx_free_thresh;
+	uint16_t nb_desc = rx_bufq->nb_rx_desc;
+	uint16_t next_avail = rx_bufq->rx_tail;
+	struct rte_mbuf *nmb[rx_bufq->rx_free_thresh];
+	uint64_t dma_addr;
+	uint16_t delta;
+	int i;
+
+	if (rx_bufq->nb_rx_hold < rx_bufq->rx_free_thresh)
+		return;
+
+	rx_buf_ring = rx_bufq->rx_ring;
+	delta = nb_desc - next_avail;
+	if (unlikely(delta < nb_refill)) {
+		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta) == 0)) {
+			for (i = 0; i < delta; i++) {
+				rx_buf_desc = &rx_buf_ring[next_avail + i];
+				rx_bufq->sw_ring[next_avail + i] = nmb[i];
+				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+				rx_buf_desc->hdr_addr = 0;
+				rx_buf_desc->pkt_addr = dma_addr;
+			}
+			nb_refill -= delta;
+			next_avail = 0;
+			rx_bufq->nb_rx_hold -= delta;
+		} else {
+			rte_atomic64_add(&rx_bufq->rx_stats.mbuf_alloc_failed,
+					 nb_desc - next_avail);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+			       rx_bufq->port_id, rx_bufq->queue_id);
+			return;
+		}
+	}
+
+	if (nb_desc - next_avail >= nb_refill) {
+		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill) == 0)) {
+			for (i = 0; i < nb_refill; i++) {
+				rx_buf_desc = &rx_buf_ring[next_avail + i];
+				rx_bufq->sw_ring[next_avail + i] = nmb[i];
+				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+				rx_buf_desc->hdr_addr = 0;
+				rx_buf_desc->pkt_addr = dma_addr;
+			}
+			next_avail += nb_refill;
+			rx_bufq->nb_rx_hold -= nb_refill;
+		} else {
+			rte_atomic64_add(&rx_bufq->rx_stats.mbuf_alloc_failed,
+					 nb_desc - next_avail);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+			       rx_bufq->port_id, rx_bufq->queue_id);
+		}
+	}
+
+	IDPF_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
+
+	rx_bufq->rx_tail = next_avail;
+}
+
+uint16_t
+idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		      uint16_t nb_pkts)
+{
+	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
+	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
+	uint16_t pktlen_gen_bufq_id;
+	struct idpf_rx_queue *rxq;
+	const uint32_t *ptype_tbl;
+	uint8_t status_err0_qw1;
+	struct idpf_adapter *ad;
+	struct rte_mbuf *rxm;
+	uint16_t rx_id_bufq1;
+	uint16_t rx_id_bufq2;
+	uint64_t pkt_flags;
+	uint16_t pkt_len;
+	uint16_t bufq_id;
+	uint16_t gen_id;
+	uint16_t rx_id;
+	uint16_t nb_rx;
+	uint64_t ts_ns;
+
+	nb_rx = 0;
+	rxq = rx_queue;
+
+	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
+		return nb_rx;
+	ad = rxq->adapter;
+
+	rx_id = rxq->rx_tail;
+	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
+	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
+	rx_desc_ring = rxq->rx_ring;
+	ptype_tbl = rxq->adapter->ptype_tbl;
+
+	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0)
+		rxq->hw_register_set = 1;
+
+	while (nb_rx < nb_pkts) {
+		rx_desc = &rx_desc_ring[rx_id];
+
+		pktlen_gen_bufq_id =
+			rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
+		gen_id = (pktlen_gen_bufq_id &
+			  VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
+			VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
+		if (gen_id != rxq->expected_gen_id)
+			break;
+
+		pkt_len = (pktlen_gen_bufq_id &
+			   VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
+			VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
+		if (pkt_len == 0)
+			RX_LOG(ERR, "Packet length is 0");
+
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc)) {
+			rx_id = 0;
+			rxq->expected_gen_id ^= 1;
+		}
+
+		bufq_id = (pktlen_gen_bufq_id &
+			   VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M) >>
+			VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S;
+		if (bufq_id == 0) {
+			rxm = rxq->bufq1->sw_ring[rx_id_bufq1];
+			rx_id_bufq1++;
+			if (unlikely(rx_id_bufq1 == rxq->bufq1->nb_rx_desc))
+				rx_id_bufq1 = 0;
+			rxq->bufq1->nb_rx_hold++;
+		} else {
+			rxm = rxq->bufq2->sw_ring[rx_id_bufq2];
+			rx_id_bufq2++;
+			if (unlikely(rx_id_bufq2 == rxq->bufq2->nb_rx_desc))
+				rx_id_bufq2 = 0;
+			rxq->bufq2->nb_rx_hold++;
+		}
+
+		rxm->pkt_len = pkt_len;
+		rxm->data_len = pkt_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rxm->next = NULL;
+		rxm->nb_segs = 1;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		rxm->packet_type =
+			ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
+				   VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
+				  VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
+
+		status_err0_qw1 = rx_desc->status_err0_qw1;
+		pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
+		pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP)) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+							    rxq->hw_register_set,
+							    rte_le_to_cpu_32(rx_desc->ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			rxm->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		rxm->ol_flags |= pkt_flags;
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+
+	if (nb_rx > 0) {
+		rxq->rx_tail = rx_id;
+		if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
+			rxq->bufq1->rx_next_avail = rx_id_bufq1;
+		if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
+			rxq->bufq2->rx_next_avail = rx_id_bufq2;
+
+		idpf_split_rx_bufq_refill(rxq->bufq1);
+		idpf_split_rx_bufq_refill(rxq->bufq2);
+	}
+
+	return nb_rx;
+}
+
+static inline void
+idpf_split_tx_free(struct idpf_tx_queue *cq)
+{
+	volatile struct idpf_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
+	volatile struct idpf_splitq_tx_compl_desc *txd;
+	uint16_t next = cq->tx_tail;
+	struct idpf_tx_entry *txe;
+	struct idpf_tx_queue *txq;
+	uint16_t gen, qid, q_head;
+	uint16_t nb_desc_clean;
+	uint8_t ctype;
+
+	txd = &compl_ring[next];
+	gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+	       IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S;
+	if (gen != cq->expected_gen_id)
+		return;
+
+	ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+		 IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> IDPF_TXD_COMPLQ_COMPL_TYPE_S;
+	qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+	       IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S;
+	q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
+	txq = cq->txqs[qid - cq->tx_start_qid];
+
+	switch (ctype) {
+	case IDPF_TXD_COMPLT_RE:
+		/* Clean up to q_head, which is the last fetched txq desc id + 1.
+		 * TODO: need to refine and remove the if condition.
+		 */
+		if (unlikely(q_head % 32)) {
+			TX_LOG(ERR, "unexpected desc (head = %u) completion.",
+			       q_head);
+			return;
+		}
+		if (txq->last_desc_cleaned > q_head)
+			nb_desc_clean = (txq->nb_tx_desc - txq->last_desc_cleaned) +
+				q_head;
+		else
+			nb_desc_clean = q_head - txq->last_desc_cleaned;
+		txq->nb_free += nb_desc_clean;
+		txq->last_desc_cleaned = q_head;
+		break;
+	case IDPF_TXD_COMPLT_RS:
+		/* q_head indicates the sw_id when ctype is IDPF_TXD_COMPLT_RS */
+		txe = &txq->sw_ring[q_head];
+		if (txe->mbuf != NULL) {
+			rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = NULL;
+		}
+		break;
+	default:
+		TX_LOG(ERR, "unknown completion type.");
+		return;
+	}
+
+	if (++next == cq->nb_tx_desc) {
+		next = 0;
+		cq->expected_gen_id ^= 1;
+	}
+
+	cq->tx_tail = next;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+idpf_calc_context_desc(uint64_t flags)
+{
+	if ((flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+		return 1;
+
+	return 0;
+}
+
+/* Set up the TSO context descriptor. */
+static inline void
+idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
+			union idpf_tx_offload tx_offload,
+			volatile union idpf_flex_tx_ctx_desc *ctx_desc)
+{
+	uint16_t cmd_dtype;
+	uint32_t tso_len;
+	uint8_t hdr_len;
+
+	if (tx_offload.l4_len == 0) {
+		TX_LOG(DEBUG, "L4 length set to 0");
+		return;
+	}
+
+	hdr_len = tx_offload.l2_len +
+		tx_offload.l3_len +
+		tx_offload.l4_len;
+	cmd_dtype = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX |
+		IDPF_TX_FLEX_CTX_DESC_CMD_TSO;
+	tso_len = mbuf->pkt_len - hdr_len;
+
+	ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
+	ctx_desc->tso.qw0.hdr_len = hdr_len;
+	ctx_desc->tso.qw0.mss_rt =
+		rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
+				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
+	ctx_desc->tso.qw0.flex_tlen =
+		rte_cpu_to_le_32(tso_len &
+				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
+}
+
+uint16_t
+idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		      uint16_t nb_pkts)
+{
+	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
+	volatile struct idpf_flex_tx_sched_desc *txr;
+	volatile struct idpf_flex_tx_sched_desc *txd;
+	struct idpf_tx_entry *sw_ring;
+	union idpf_tx_offload tx_offload = {0};
+	struct idpf_tx_entry *txe, *txn;
+	uint16_t nb_used, tx_id, sw_id;
+	struct rte_mbuf *tx_pkt;
+	uint16_t nb_to_clean;
+	uint16_t nb_tx = 0;
+	uint64_t ol_flags;
+	uint16_t nb_ctx;
+
+	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
+		return nb_tx;
+
+	txr = txq->desc_ring;
+	sw_ring = txq->sw_ring;
+	tx_id = txq->tx_tail;
+	sw_id = txq->sw_tail;
+	txe = &sw_ring[sw_id];
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		tx_pkt = tx_pkts[nb_tx];
+
+		if (txq->nb_free <= txq->free_thresh) {
+			/* TODO: Needs refinement:
+			 * 1. Free and clean: better to pick a clean destination
+			 * than a loop count, and don't free the mbuf as soon as
+			 * its RS completion arrives; free it on a later transmit
+			 * or per the clean destination. For now, ignore the RE
+			 * write-backs and free the mbuf on RS.
+			 * 2. Out-of-order write-back is not yet supported; the SW
+			 * head and HW head need to be separated.
+			 */
+			nb_to_clean = 2 * txq->rs_thresh;
+			while (nb_to_clean--)
+				idpf_split_tx_free(txq->complq);
+		}
+
+		if (txq->nb_free < tx_pkt->nb_segs)
+			break;
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = idpf_calc_context_desc(ol_flags);
+		nb_used = tx_pkt->nb_segs + nb_ctx;
+
+		/* context descriptor */
+		if (nb_ctx != 0) {
+			volatile union idpf_flex_tx_ctx_desc *ctx_desc =
+				(volatile union idpf_flex_tx_ctx_desc *)&txr[tx_id];
+
+			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+							ctx_desc);
+
+			tx_id++;
+			if (tx_id == txq->nb_tx_desc)
+				tx_id = 0;
+		}
+
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+			txe->mbuf = tx_pkt;
+
+			/* Setup TX descriptor */
+			txd->buf_addr =
+				rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
+			txd->qw1.cmd_dtype =
+				rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE);
+			txd->qw1.rxr_bufsize = tx_pkt->data_len;
+			txd->qw1.compl_tag = sw_id;
+			tx_id++;
+			if (tx_id == txq->nb_tx_desc)
+				tx_id = 0;
+			sw_id = txe->next_id;
+			txe = txn;
+			tx_pkt = tx_pkt->next;
+		} while (tx_pkt);
+
+		/* fill the last descriptor with End of Packet (EOP) bit */
+		txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_EOP;
+
+		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+
+		if (txq->nb_used >= 32) {
+			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
+			/* Update txq RE bit counters */
+			txq->nb_used = 0;
+		}
+	}
+
+	/* update the tail pointer if any packets were processed */
+	if (likely(nb_tx > 0)) {
+		IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+		txq->tx_tail = tx_id;
+		txq->sw_tail = sw_id;
+	}
+
+	return nb_tx;
+}
+
+#define IDPF_RX_FLEX_DESC_STATUS0_XSUM_S				\
+	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) |		\
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) |		\
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) |	\
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S))
+
+/* Translate the rx descriptor status and error fields to pkt flags */
+static inline uint64_t
+idpf_rxd_to_pkt_flags(uint16_t status_error)
+{
+	uint64_t flags = 0;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S)) == 0))
+		return flags;
+
+	if (likely((status_error & IDPF_RX_FLEX_DESC_STATUS0_XSUM_S) == 0)) {
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)) != 0))
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+	return flags;
+}
+
+static inline void
+idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
+		    uint16_t rx_id)
+{
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+
+	if (nb_hold > rxq->rx_free_thresh) {
+		RX_LOG(DEBUG,
+		       "port_id=%u queue_id=%u rx_tail=%u nb_hold=%u",
+		       rxq->port_id, rxq->queue_id, rx_id, nb_hold);
+		rx_id = (uint16_t)((rx_id == 0) ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+}
+
+uint16_t
+idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	volatile union virtchnl2_rx_desc *rx_ring;
+	volatile union virtchnl2_rx_desc *rxdp;
+	union virtchnl2_rx_desc rxd;
+	struct idpf_rx_queue *rxq;
+	const uint32_t *ptype_tbl;
+	uint16_t rx_id, nb_hold;
+	struct idpf_adapter *ad;
+	uint16_t rx_packet_len;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	uint16_t rx_status0;
+	uint64_t pkt_flags;
+	uint64_t dma_addr;
+	uint64_t ts_ns;
+	uint16_t nb_rx;
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+
+	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	ad = rxq->adapter;
+
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+	ptype_tbl = rxq->adapter->ptype_tbl;
+
+	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0)
+		rxq->hw_register_set = 1;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+		/* Check the DD bit first */
+		if ((rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)) == 0)
+			break;
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(nmb == NULL)) {
+			rte_atomic64_inc(&rxq->rx_stats.mbuf_alloc_failed);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+			       "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+		rxd = *rxdp; /* copy descriptor in ring to temp variable */
+
+		nb_hold++;
+		rxm = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
+				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		rxm->packet_type =
+			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
+					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+		rxm->ol_flags |= pkt_flags;
+
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+					    rxq->hw_register_set,
+					    rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			rxm->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+	return nb_rx;
+}
+
+static inline int
+idpf_xmit_cleanup(struct idpf_tx_queue *txq)
+{
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	struct idpf_tx_entry *sw_ring = txq->sw_ring;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+	uint16_t i;
+
+	volatile struct idpf_flex_tx_desc *txd = txq->tx_ring;
+
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	/* In the writeback Tx descriptor, the only significant field is the 4-bit DTYPE. */
+	if ((txd[desc_to_clean_to].qw1.cmd_dtype &
+	     rte_cpu_to_le_16(IDPF_TXD_QW1_DTYPE_M)) !=
+	    rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE)) {
+		TX_LOG(DEBUG, "TX descriptor %4u is not done "
+		       "(port=%d queue=%d)", desc_to_clean_to,
+		       txq->port_id, txq->queue_id);
+		return -1;
+	}
+
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+					    desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					    last_desc_cleaned);
+
+	txd[desc_to_clean_to].qw1.cmd_dtype = 0;
+	txd[desc_to_clean_to].qw1.buf_size = 0;
+	for (i = 0; i < RTE_DIM(txd[desc_to_clean_to].qw1.flex.raw); i++)
+		txd[desc_to_clean_to].qw1.flex.raw[i] = 0;
+
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* TX function */
+uint16_t
+idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts)
+{
+	volatile struct idpf_flex_tx_desc *txd;
+	volatile struct idpf_flex_tx_desc *txr;
+	union idpf_tx_offload tx_offload = {0};
+	struct idpf_tx_entry *txe, *txn;
+	struct idpf_tx_entry *sw_ring;
+	struct idpf_tx_queue *txq;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint64_t buf_dma_addr;
+	uint64_t ol_flags;
+	uint16_t tx_last;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint16_t td_cmd;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint16_t slen;
+
+	nb_tx = 0;
+	txq = tx_queue;
+
+	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
+		return nb_tx;
+
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_free < txq->free_thresh)
+		(void)idpf_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		td_cmd = 0;
+
+		tx_pkt = *tx_pkts++;
+		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = idpf_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals the number of segments of that packet,
+		 * plus 1 context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+		       " tx_first=%u tx_last=%u",
+		       txq->port_id, txq->queue_id, tx_id, tx_last);
+
+		if (nb_used > txq->nb_free) {
+			if (idpf_xmit_cleanup(txq) != 0) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->rs_thresh)) {
+				while (nb_used > txq->nb_free) {
+					if (idpf_xmit_cleanup(txq) != 0) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		if (nb_ctx != 0) {
+			/* Setup TX context descriptor if required */
+			volatile union idpf_flex_tx_ctx_desc *ctx_txd =
+				(volatile union idpf_flex_tx_ctx_desc *)
+				&txr[tx_id];
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf != NULL) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			/* TSO enabled */
+			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+							ctx_txd);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf != NULL)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->qw1.buf_size = slen;
+			txd->qw1.cmd_dtype = rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_DATA <<
+							      IDPF_FLEX_TXD_QW1_DTYPE_S);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* The last packet data descriptor needs End Of Packet (EOP) */
+		td_cmd |= IDPF_TX_FLEX_DESC_CMD_EOP;
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+		if (txq->nb_used >= txq->rs_thresh) {
+			TX_LOG(DEBUG, "Setting RS bit on TXD id="
+			       "%4u (port=%d queue=%d)",
+			       tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= IDPF_TX_FLEX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_used = 0;
+		}
+
+		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+			td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
+
+		txd->qw1.cmd_dtype |= rte_cpu_to_le_16(td_cmd << IDPF_FLEX_TXD_QW1_CMD_S);
+	}
+
+end_of_tx:
+	rte_wmb();
+
+	TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+	       txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+/* TX prep functions */
+uint16_t
+idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	       uint16_t nb_pkts)
+{
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+	int ret;
+#endif
+	int i;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		/* Check condition for nb_segs > IDPF_TX_MAX_MTU_SEG. */
+		if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0) {
+			if (m->nb_segs > IDPF_TX_MAX_MTU_SEG) {
+				rte_errno = EINVAL;
+				return i;
+			}
+		} else if ((m->tso_segsz < IDPF_MIN_TSO_MSS) ||
+			   (m->tso_segsz > IDPF_MAX_TSO_MSS) ||
+			   (m->pkt_len > IDPF_MAX_TSO_FRAME_SIZE)) {
+			/* An MSS outside the range is considered malicious */
+			rte_errno = EINVAL;
+			return i;
+		}
+
+		if ((ol_flags & IDPF_TX_OFFLOAD_NOTSUP_MASK) != 0) {
+			rte_errno = ENOTSUP;
+			return i;
+		}
+
+		if (m->pkt_len < IDPF_MIN_FRAME_SIZE) {
+			rte_errno = EINVAL;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+#endif
+	}
+
+	return i;
+}
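
The trickiest piece above is the 32b-to-64b timestamp extension: if the new 32-bit value has moved by more than half the 32-bit range relative to the cached reference, it is treated as lying behind the reference (or as a wrap) rather than as a huge jump forward. The arithmetic in isolation, with hypothetical names:

#include <stdint.h>

/* Mirrors the delta logic of idpf_tstamp_convert_32b_64b() without
 * the register reads: extend in_ts against a 64-bit reference time.
 */
static uint64_t
example_extend_32b_ts(uint64_t time_hw, uint32_t in_ts)
{
	const uint64_t mask = 0xFFFFFFFF;
	uint32_t delta = in_ts - (uint32_t)(time_hw & mask);

	if (delta > (mask / 2)) {
		/* in_ts is behind the reference; step backwards */
		delta = (uint32_t)(time_hw & mask) - in_ts;
		return time_hw - delta;
	}
	return time_hw + delta;
}

For example, with time_hw = 0x100000005 and in_ts = 0xFFFFFFF0, the forward delta exceeds half the range, so the function returns 0xFFFFFFFF0 (0x15 ns behind the reference) instead of jumping almost 2^32 ns ahead.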
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index c5bb7d48af..827f791505 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -27,8 +27,63 @@
 #define IDPF_TX_OFFLOAD_MULTI_SEGS       RTE_BIT64(15)
 #define IDPF_TX_OFFLOAD_MBUF_FAST_FREE   RTE_BIT64(16)
 
+#define IDPF_TX_MAX_MTU_SEG	10
+
+#define IDPF_MIN_TSO_MSS	88
+#define IDPF_MAX_TSO_MSS	9728
+#define IDPF_MAX_TSO_FRAME_SIZE	262143
+
+#define IDPF_TX_CKSUM_OFFLOAD_MASK (		\
+		RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_L4_MASK |		\
+		RTE_MBUF_F_TX_TCP_SEG)
+
+#define IDPF_TX_OFFLOAD_MASK (			\
+		IDPF_TX_CKSUM_OFFLOAD_MASK |	\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_IPV6)
+
+#define IDPF_TX_OFFLOAD_NOTSUP_MASK \
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
+
+/* MTS */
+#define GLTSYN_CMD_SYNC_0_0	(PF_TIMESYNC_BASE + 0x0)
+#define PF_GLTSYN_SHTIME_0_0	(PF_TIMESYNC_BASE + 0x4)
+#define PF_GLTSYN_SHTIME_L_0	(PF_TIMESYNC_BASE + 0x8)
+#define PF_GLTSYN_SHTIME_H_0	(PF_TIMESYNC_BASE + 0xC)
+#define GLTSYN_ART_L_0		(PF_TIMESYNC_BASE + 0x10)
+#define GLTSYN_ART_H_0		(PF_TIMESYNC_BASE + 0x14)
+#define PF_GLTSYN_SHTIME_0_1	(PF_TIMESYNC_BASE + 0x24)
+#define PF_GLTSYN_SHTIME_L_1	(PF_TIMESYNC_BASE + 0x28)
+#define PF_GLTSYN_SHTIME_H_1	(PF_TIMESYNC_BASE + 0x2C)
+#define PF_GLTSYN_SHTIME_0_2	(PF_TIMESYNC_BASE + 0x44)
+#define PF_GLTSYN_SHTIME_L_2	(PF_TIMESYNC_BASE + 0x48)
+#define PF_GLTSYN_SHTIME_H_2	(PF_TIMESYNC_BASE + 0x4C)
+#define PF_GLTSYN_SHTIME_0_3	(PF_TIMESYNC_BASE + 0x64)
+#define PF_GLTSYN_SHTIME_L_3	(PF_TIMESYNC_BASE + 0x68)
+#define PF_GLTSYN_SHTIME_H_3	(PF_TIMESYNC_BASE + 0x6C)
+
+#define PF_TIMESYNC_BAR4_BASE	0x0E400000
+#define GLTSYN_ENA		(PF_TIMESYNC_BAR4_BASE + 0x90)
+#define GLTSYN_CMD		(PF_TIMESYNC_BAR4_BASE + 0x94)
+#define GLTSYC_TIME_L		(PF_TIMESYNC_BAR4_BASE + 0x104)
+#define GLTSYC_TIME_H		(PF_TIMESYNC_BAR4_BASE + 0x108)
+
+#define GLTSYN_CMD_SYNC_0_4	(PF_TIMESYNC_BAR4_BASE + 0x110)
+#define PF_GLTSYN_SHTIME_L_4	(PF_TIMESYNC_BAR4_BASE + 0x118)
+#define PF_GLTSYN_SHTIME_H_4	(PF_TIMESYNC_BAR4_BASE + 0x11C)
+#define GLTSYN_INCVAL_L		(PF_TIMESYNC_BAR4_BASE + 0x150)
+#define GLTSYN_INCVAL_H		(PF_TIMESYNC_BAR4_BASE + 0x154)
+#define GLTSYN_SHADJ_L		(PF_TIMESYNC_BAR4_BASE + 0x158)
+#define GLTSYN_SHADJ_H		(PF_TIMESYNC_BAR4_BASE + 0x15C)
+
+#define GLTSYN_CMD_SYNC_0_5	(PF_TIMESYNC_BAR4_BASE + 0x130)
+#define PF_GLTSYN_SHTIME_L_5	(PF_TIMESYNC_BAR4_BASE + 0x138)
+#define PF_GLTSYN_SHTIME_H_5	(PF_TIMESYNC_BAR4_BASE + 0x13C)
+
 struct idpf_rx_stats {
-	uint64_t mbuf_alloc_failed;
+	rte_atomic64_t mbuf_alloc_failed;
 };
 
 struct idpf_rx_queue {
@@ -126,6 +181,18 @@ struct idpf_tx_queue {
 	struct idpf_tx_queue *complq;
 };
 
+/* Offload features */
+union idpf_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		/* uint64_t unused : 24; */
+	};
+};
+
 struct idpf_rxq_ops {
 	void (*release_mbufs)(struct idpf_rx_queue *rxq);
 };
@@ -134,6 +201,9 @@ struct idpf_txq_ops {
 	void (*release_mbufs)(struct idpf_tx_queue *txq);
 };
 
+extern int idpf_timestamp_dynfield_offset;
+extern uint64_t idpf_timestamp_dynflag;
+
 __rte_internal
 int idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh);
 __rte_internal
@@ -162,8 +232,25 @@ void idpf_rx_queue_release(void *rxq);
 __rte_internal
 void idpf_tx_queue_release(void *txq);
 __rte_internal
+int idpf_register_ts_mbuf(struct idpf_rx_queue *rxq);
+__rte_internal
 int idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq);
+__rte_internal
+uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			       uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+				uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+				uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
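The union idpf_tx_offload bit-field packs the same header lengths that rte_mbuf carries; both xmit paths copy them over per packet before deciding whether a TSO context descriptor is required. A minimal sketch of that pattern, assuming only what this header and rte_mbuf.h declare (the helper name is hypothetical):

#include <rte_mbuf.h>
#include "idpf_common_rxtx.h"

/* Illustrative only: gather per-packet offload lengths the way
 * idpf_singleq_xmit_pkts()/idpf_splitq_xmit_pkts() do, and report
 * how many context descriptors the packet needs (1 for TSO).
 */
static uint16_t
example_fill_tx_offload(const struct rte_mbuf *m,
			union idpf_tx_offload *off)
{
	off->l2_len = m->l2_len;
	off->l3_len = m->l3_len;
	off->l4_len = m->l4_len;
	off->tso_segsz = m->tso_segsz;

	return (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0 ? 1 : 0;
}
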
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index aa6ebd7c6c..03aab598b4 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -12,6 +12,8 @@ INTERNAL {
 	idpf_config_rss;
 	idpf_create_vport_info_init;
 	idpf_execute_vc_cmd;
+	idpf_prep_pkts;
+	idpf_register_ts_mbuf;
 	idpf_release_rxq_mbufs;
 	idpf_release_txq_mbufs;
 	idpf_reset_single_rx_queue;
@@ -22,6 +24,10 @@ INTERNAL {
 	idpf_reset_split_tx_complq;
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
+	idpf_singleq_recv_pkts;
+	idpf_singleq_xmit_pkts;
+	idpf_splitq_recv_pkts;
+	idpf_splitq_xmit_pkts;
 	idpf_tx_queue_release;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 734e97ffc2..ee2dec7c7c 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -22,8 +22,6 @@ rte_spinlock_t idpf_adapter_lock;
 struct idpf_adapter_list idpf_adapter_list;
 bool idpf_adapter_list_init;
 
-uint64_t idpf_timestamp_dynflag;
-
 static const char * const idpf_valid_args[] = {
 	IDPF_TX_SINGLE_Q,
 	IDPF_RX_SINGLE_Q,
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 9b40aa4e56..d791d402fb 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -28,7 +28,6 @@
 
 #define IDPF_MIN_BUF_SIZE	1024
 #define IDPF_MAX_FRAME_SIZE	9728
-#define IDPF_MIN_FRAME_SIZE	14
 #define IDPF_DEFAULT_MTU	RTE_ETHER_MTU
 
 #define IDPF_NUM_MACADDR_MAX	64
@@ -78,9 +77,6 @@ struct idpf_adapter_ext {
 	uint16_t cur_vport_nb;
 
 	uint16_t used_vecs_num;
-
-	/* For PTP */
-	uint64_t time_hw;
 };
 
 TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
diff --git a/drivers/net/idpf/idpf_logs.h b/drivers/net/idpf/idpf_logs.h
index d5f778fefe..bf0774b8e4 100644
--- a/drivers/net/idpf/idpf_logs.h
+++ b/drivers/net/idpf/idpf_logs.h
@@ -29,28 +29,4 @@ extern int idpf_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
 
-#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
-#define PMD_RX_LOG(level, ...) \
-	RTE_LOG(level, \
-		PMD, \
-		RTE_FMT("%s(): " \
-			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
-			__func__, \
-			RTE_FMT_TAIL(__VA_ARGS__,)))
-#else
-#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_IDPF_DEBUG_TX
-#define PMD_TX_LOG(level, ...) \
-	RTE_LOG(level, \
-		PMD, \
-		RTE_FMT("%s(): " \
-			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
-			__func__, \
-			RTE_FMT_TAIL(__VA_ARGS__,)))
-#else
-#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
-#endif
-
 #endif /* _IDPF_LOGS_H_ */
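With the PMD-level debug macros removed, call sites that moved into the common module are renamed mechanically; e.g. the refill failure message above changes from

	PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
		   rx_bufq->port_id, rx_bufq->queue_id);

to

	RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
	       rx_bufq->port_id, rx_bufq->queue_id);

both gated by the same RTE_LIBRTE_IDPF_DEBUG_RX compile-time define.
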
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index fb1814d893..1066789386 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -10,8 +10,6 @@
 #include "idpf_rxtx.h"
 #include "idpf_rxtx_vec_common.h"
 
-static int idpf_timestamp_dynfield_offset = -1;
-
 static uint64_t
 idpf_rx_offload_convert(uint64_t offload)
 {
@@ -501,23 +499,6 @@ idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return ret;
 }
 
-static int
-idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
-{
-	int err;
-	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0) {
-		/* Register mbuf field and flag for Rx timestamp */
-		err = rte_mbuf_dyn_rx_timestamp_register(&idpf_timestamp_dynfield_offset,
-							 &idpf_timestamp_dynflag);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR,
-				    "Cannot register mbuf field/flag for timestamp");
-			return -EINVAL;
-		}
-	}
-	return 0;
-}
-
 int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -537,7 +518,7 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	err = idpf_register_ts_mbuf(rxq);
 	if (err != 0) {
-		PMD_DRV_LOG(ERR, "fail to regidter timestamp mbuf %u",
+		PMD_DRV_LOG(ERR, "failed to register timestamp mbuf %u",
 					rx_queue_id);
 		return -EIO;
 	}
@@ -762,922 +743,6 @@ idpf_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
-#define IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S				\
-	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) |     \
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) |     \
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) |    \
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
-
-static inline uint64_t
-idpf_splitq_rx_csum_offload(uint8_t err)
-{
-	uint64_t flags = 0;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S)) == 0))
-		return flags;
-
-	if (likely((err & IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S) == 0)) {
-		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
-			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
-		return flags;
-	}
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)) != 0))
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
-
-	return flags;
-}
-
-#define IDPF_RX_FLEX_DESC_ADV_HASH1_S  0
-#define IDPF_RX_FLEX_DESC_ADV_HASH2_S  16
-#define IDPF_RX_FLEX_DESC_ADV_HASH3_S  24
-
-static inline uint64_t
-idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
-			   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
-{
-	uint8_t status_err0_qw0;
-	uint64_t flags = 0;
-
-	status_err0_qw0 = rx_desc->status_err0_qw0;
-
-	if ((status_err0_qw0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) != 0) {
-		flags |= RTE_MBUF_F_RX_RSS_HASH;
-		mb->hash.rss = (rte_le_to_cpu_16(rx_desc->hash1) <<
-				IDPF_RX_FLEX_DESC_ADV_HASH1_S) |
-			((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
-			 IDPF_RX_FLEX_DESC_ADV_HASH2_S) |
-			((uint32_t)(rx_desc->hash3) <<
-			 IDPF_RX_FLEX_DESC_ADV_HASH3_S);
-	}
-
-	return flags;
-}
-
-static void
-idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
-{
-	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
-	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
-	uint16_t nb_refill = rx_bufq->rx_free_thresh;
-	uint16_t nb_desc = rx_bufq->nb_rx_desc;
-	uint16_t next_avail = rx_bufq->rx_tail;
-	struct rte_mbuf *nmb[rx_bufq->rx_free_thresh];
-	struct rte_eth_dev *dev;
-	uint64_t dma_addr;
-	uint16_t delta;
-	int i;
-
-	if (rx_bufq->nb_rx_hold < rx_bufq->rx_free_thresh)
-		return;
-
-	rx_buf_ring = rx_bufq->rx_ring;
-	delta = nb_desc - next_avail;
-	if (unlikely(delta < nb_refill)) {
-		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta) == 0)) {
-			for (i = 0; i < delta; i++) {
-				rx_buf_desc = &rx_buf_ring[next_avail + i];
-				rx_bufq->sw_ring[next_avail + i] = nmb[i];
-				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
-				rx_buf_desc->hdr_addr = 0;
-				rx_buf_desc->pkt_addr = dma_addr;
-			}
-			nb_refill -= delta;
-			next_avail = 0;
-			rx_bufq->nb_rx_hold -= delta;
-		} else {
-			dev = &rte_eth_devices[rx_bufq->port_id];
-			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-				   rx_bufq->port_id, rx_bufq->queue_id);
-			return;
-		}
-	}
-
-	if (nb_desc - next_avail >= nb_refill) {
-		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill) == 0)) {
-			for (i = 0; i < nb_refill; i++) {
-				rx_buf_desc = &rx_buf_ring[next_avail + i];
-				rx_bufq->sw_ring[next_avail + i] = nmb[i];
-				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
-				rx_buf_desc->hdr_addr = 0;
-				rx_buf_desc->pkt_addr = dma_addr;
-			}
-			next_avail += nb_refill;
-			rx_bufq->nb_rx_hold -= nb_refill;
-		} else {
-			dev = &rte_eth_devices[rx_bufq->port_id];
-			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-				   rx_bufq->port_id, rx_bufq->queue_id);
-		}
-	}
-
-	IDPF_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
-
-	rx_bufq->rx_tail = next_avail;
-}
-
-uint16_t
-idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		      uint16_t nb_pkts)
-{
-	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
-	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
-	uint16_t pktlen_gen_bufq_id;
-	struct idpf_rx_queue *rxq;
-	const uint32_t *ptype_tbl;
-	uint8_t status_err0_qw1;
-	struct idpf_adapter_ext *ad;
-	struct rte_mbuf *rxm;
-	uint16_t rx_id_bufq1;
-	uint16_t rx_id_bufq2;
-	uint64_t pkt_flags;
-	uint16_t pkt_len;
-	uint16_t bufq_id;
-	uint16_t gen_id;
-	uint16_t rx_id;
-	uint16_t nb_rx;
-	uint64_t ts_ns;
-
-	nb_rx = 0;
-	rxq = rx_queue;
-	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
-
-	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
-		return nb_rx;
-
-	rx_id = rxq->rx_tail;
-	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
-	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
-	rx_desc_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
-
-	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
-		rxq->hw_register_set = 1;
-
-	while (nb_rx < nb_pkts) {
-		rx_desc = &rx_desc_ring[rx_id];
-
-		pktlen_gen_bufq_id =
-			rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
-		gen_id = (pktlen_gen_bufq_id &
-			  VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
-			VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
-		if (gen_id != rxq->expected_gen_id)
-			break;
-
-		pkt_len = (pktlen_gen_bufq_id &
-			   VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
-			VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
-		if (pkt_len == 0)
-			PMD_RX_LOG(ERR, "Packet length is 0");
-
-		rx_id++;
-		if (unlikely(rx_id == rxq->nb_rx_desc)) {
-			rx_id = 0;
-			rxq->expected_gen_id ^= 1;
-		}
-
-		bufq_id = (pktlen_gen_bufq_id &
-			   VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M) >>
-			VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S;
-		if (bufq_id == 0) {
-			rxm = rxq->bufq1->sw_ring[rx_id_bufq1];
-			rx_id_bufq1++;
-			if (unlikely(rx_id_bufq1 == rxq->bufq1->nb_rx_desc))
-				rx_id_bufq1 = 0;
-			rxq->bufq1->nb_rx_hold++;
-		} else {
-			rxm = rxq->bufq2->sw_ring[rx_id_bufq2];
-			rx_id_bufq2++;
-			if (unlikely(rx_id_bufq2 == rxq->bufq2->nb_rx_desc))
-				rx_id_bufq2 = 0;
-			rxq->bufq2->nb_rx_hold++;
-		}
-
-		rxm->pkt_len = pkt_len;
-		rxm->data_len = pkt_len;
-		rxm->data_off = RTE_PKTMBUF_HEADROOM;
-		rxm->next = NULL;
-		rxm->nb_segs = 1;
-		rxm->port = rxq->port_id;
-		rxm->ol_flags = 0;
-		rxm->packet_type =
-			ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
-				   VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
-				  VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
-
-		status_err0_qw1 = rx_desc->status_err0_qw1;
-		pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
-		pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
-		if (idpf_timestamp_dynflag > 0 &&
-		    (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
-			/* timestamp */
-			ts_ns = idpf_tstamp_convert_32b_64b(ad,
-				rxq->hw_register_set,
-				rte_le_to_cpu_32(rx_desc->ts_high));
-			rxq->hw_register_set = 0;
-			*RTE_MBUF_DYNFIELD(rxm,
-					   idpf_timestamp_dynfield_offset,
-					   rte_mbuf_timestamp_t *) = ts_ns;
-			rxm->ol_flags |= idpf_timestamp_dynflag;
-		}
-
-		rxm->ol_flags |= pkt_flags;
-
-		rx_pkts[nb_rx++] = rxm;
-	}
-
-	if (nb_rx > 0) {
-		rxq->rx_tail = rx_id;
-		if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
-			rxq->bufq1->rx_next_avail = rx_id_bufq1;
-		if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
-			rxq->bufq2->rx_next_avail = rx_id_bufq2;
-
-		idpf_split_rx_bufq_refill(rxq->bufq1);
-		idpf_split_rx_bufq_refill(rxq->bufq2);
-	}
-
-	return nb_rx;
-}
-
-static inline void
-idpf_split_tx_free(struct idpf_tx_queue *cq)
-{
-	volatile struct idpf_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
-	volatile struct idpf_splitq_tx_compl_desc *txd;
-	uint16_t next = cq->tx_tail;
-	struct idpf_tx_entry *txe;
-	struct idpf_tx_queue *txq;
-	uint16_t gen, qid, q_head;
-	uint16_t nb_desc_clean;
-	uint8_t ctype;
-
-	txd = &compl_ring[next];
-	gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
-		IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S;
-	if (gen != cq->expected_gen_id)
-		return;
-
-	ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
-		IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> IDPF_TXD_COMPLQ_COMPL_TYPE_S;
-	qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
-		IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S;
-	q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
-	txq = cq->txqs[qid - cq->tx_start_qid];
-
-	switch (ctype) {
-	case IDPF_TXD_COMPLT_RE:
-		/* clean up to q_head, which is the last fetched txq desc id + 1.
-		 * TODO: need to refine and remove the if condition.
-		 */
-		if (unlikely(q_head % 32)) {
-			PMD_DRV_LOG(ERR, "unexpected desc (head = %u) completion.",
-						q_head);
-			return;
-		}
-		if (txq->last_desc_cleaned > q_head)
-			nb_desc_clean = (txq->nb_tx_desc - txq->last_desc_cleaned) +
-				q_head;
-		else
-			nb_desc_clean = q_head - txq->last_desc_cleaned;
-		txq->nb_free += nb_desc_clean;
-		txq->last_desc_cleaned = q_head;
-		break;
-	case IDPF_TXD_COMPLT_RS:
-		/* q_head indicates sw_id when ctype is IDPF_TXD_COMPLT_RS */
-		txe = &txq->sw_ring[q_head];
-		if (txe->mbuf != NULL) {
-			rte_pktmbuf_free_seg(txe->mbuf);
-			txe->mbuf = NULL;
-		}
-		break;
-	default:
-		PMD_DRV_LOG(ERR, "unknown completion type.");
-		return;
-	}
-
-	if (++next == cq->nb_tx_desc) {
-		next = 0;
-		cq->expected_gen_id ^= 1;
-	}
-
-	cq->tx_tail = next;
-}
-
-/* Check if the context descriptor is needed for TX offloading */
-static inline uint16_t
-idpf_calc_context_desc(uint64_t flags)
-{
-	if ((flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
-		return 1;
-
-	return 0;
-}
-
-/* Set up the TSO context descriptor. */
-static inline void
-idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
-			union idpf_tx_offload tx_offload,
-			volatile union idpf_flex_tx_ctx_desc *ctx_desc)
-{
-	uint16_t cmd_dtype;
-	uint32_t tso_len;
-	uint8_t hdr_len;
-
-	if (tx_offload.l4_len == 0) {
-		PMD_TX_LOG(DEBUG, "L4 length set to 0");
-		return;
-	}
-
-	hdr_len = tx_offload.l2_len +
-		tx_offload.l3_len +
-		tx_offload.l4_len;
-	cmd_dtype = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX |
-		IDPF_TX_FLEX_CTX_DESC_CMD_TSO;
-	tso_len = mbuf->pkt_len - hdr_len;
-
-	ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
-	ctx_desc->tso.qw0.hdr_len = hdr_len;
-	ctx_desc->tso.qw0.mss_rt =
-		rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
-				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
-	ctx_desc->tso.qw0.flex_tlen =
-		rte_cpu_to_le_32(tso_len &
-				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
-}
-
-uint16_t
-idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		      uint16_t nb_pkts)
-{
-	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
-	volatile struct idpf_flex_tx_sched_desc *txr;
-	volatile struct idpf_flex_tx_sched_desc *txd;
-	struct idpf_tx_entry *sw_ring;
-	union idpf_tx_offload tx_offload = {0};
-	struct idpf_tx_entry *txe, *txn;
-	uint16_t nb_used, tx_id, sw_id;
-	struct rte_mbuf *tx_pkt;
-	uint16_t nb_to_clean;
-	uint16_t nb_tx = 0;
-	uint64_t ol_flags;
-	uint16_t nb_ctx;
-
-	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
-		return nb_tx;
-
-	txr = txq->desc_ring;
-	sw_ring = txq->sw_ring;
-	tx_id = txq->tx_tail;
-	sw_id = txq->sw_tail;
-	txe = &sw_ring[sw_id];
-
-	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
-		tx_pkt = tx_pkts[nb_tx];
-
-		if (txq->nb_free <= txq->free_thresh) {
-			/* TODO: Need to refine
-			 * 1. free and clean: better to choose a clean destination than a
-			 * fixed number of loop iterations, and don't free the mbuf as soon
-			 * as RS is received; free it on a later transmit or per the clean
-			 * destination. For now, ignore the RE write-back and free the mbuf
-			 * when RS arrives.
-			 * 2. out-of-order write-back is not supported yet, so the SW head
-			 * and HW head need to be separated.
-			 */
-			nb_to_clean = 2 * txq->rs_thresh;
-			while (nb_to_clean--)
-				idpf_split_tx_free(txq->complq);
-		}
-
-		if (txq->nb_free < tx_pkt->nb_segs)
-			break;
-
-		ol_flags = tx_pkt->ol_flags;
-		tx_offload.l2_len = tx_pkt->l2_len;
-		tx_offload.l3_len = tx_pkt->l3_len;
-		tx_offload.l4_len = tx_pkt->l4_len;
-		tx_offload.tso_segsz = tx_pkt->tso_segsz;
-		/* Calculate the number of context descriptors needed. */
-		nb_ctx = idpf_calc_context_desc(ol_flags);
-		nb_used = tx_pkt->nb_segs + nb_ctx;
-
-		/* context descriptor */
-		if (nb_ctx != 0) {
-			volatile union idpf_flex_tx_ctx_desc *ctx_desc =
-			(volatile union idpf_flex_tx_ctx_desc *)&txr[tx_id];
-
-			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
-				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
-							ctx_desc);
-
-			tx_id++;
-			if (tx_id == txq->nb_tx_desc)
-				tx_id = 0;
-		}
-
-		do {
-			txd = &txr[tx_id];
-			txn = &sw_ring[txe->next_id];
-			txe->mbuf = tx_pkt;
-
-			/* Setup TX descriptor */
-			txd->buf_addr =
-				rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
-			txd->qw1.cmd_dtype =
-				rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE);
-			txd->qw1.rxr_bufsize = tx_pkt->data_len;
-			txd->qw1.compl_tag = sw_id;
-			tx_id++;
-			if (tx_id == txq->nb_tx_desc)
-				tx_id = 0;
-			sw_id = txe->next_id;
-			txe = txn;
-			tx_pkt = tx_pkt->next;
-		} while (tx_pkt);
-
-		/* fill the last descriptor with End of Packet (EOP) bit */
-		txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_EOP;
-
-		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
-			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
-		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
-		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
-
-		if (txq->nb_used >= 32) {
-			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
-			/* Update txq RE bit counters */
-			txq->nb_used = 0;
-		}
-	}
-
-	/* update the tail pointer if any packets were processed */
-	if (likely(nb_tx > 0)) {
-		IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
-		txq->tx_tail = tx_id;
-		txq->sw_tail = sw_id;
-	}
-
-	return nb_tx;
-}
-
-#define IDPF_RX_FLEX_DESC_STATUS0_XSUM_S				\
-	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) |		\
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) |		\
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) |	\
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S))
-
-/* Translate the rx descriptor status and error fields to pkt flags */
-static inline uint64_t
-idpf_rxd_to_pkt_flags(uint16_t status_error)
-{
-	uint64_t flags = 0;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S)) == 0))
-		return flags;
-
-	if (likely((status_error & IDPF_RX_FLEX_DESC_STATUS0_XSUM_S) == 0)) {
-		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
-			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
-		return flags;
-	}
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)) != 0))
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
-
-	return flags;
-}
-
-static inline void
-idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
-		    uint16_t rx_id)
-{
-	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
-
-	if (nb_hold > rxq->rx_free_thresh) {
-		PMD_RX_LOG(DEBUG,
-			   "port_id=%u queue_id=%u rx_tail=%u nb_hold=%u",
-			   rxq->port_id, rxq->queue_id, rx_id, nb_hold);
-		rx_id = (uint16_t)((rx_id == 0) ?
-				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
-		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
-		nb_hold = 0;
-	}
-	rxq->nb_rx_hold = nb_hold;
-}
-
-uint16_t
-idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		       uint16_t nb_pkts)
-{
-	volatile union virtchnl2_rx_desc *rx_ring;
-	volatile union virtchnl2_rx_desc *rxdp;
-	union virtchnl2_rx_desc rxd;
-	struct idpf_rx_queue *rxq;
-	const uint32_t *ptype_tbl;
-	uint16_t rx_id, nb_hold;
-	struct rte_eth_dev *dev;
-	struct idpf_adapter_ext *ad;
-	uint16_t rx_packet_len;
-	struct rte_mbuf *rxm;
-	struct rte_mbuf *nmb;
-	uint16_t rx_status0;
-	uint64_t pkt_flags;
-	uint64_t dma_addr;
-	uint64_t ts_ns;
-	uint16_t nb_rx;
-
-	nb_rx = 0;
-	nb_hold = 0;
-	rxq = rx_queue;
-
-	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
-
-	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
-		return nb_rx;
-
-	rx_id = rxq->rx_tail;
-	rx_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
-
-	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
-		rxq->hw_register_set = 1;
-
-	while (nb_rx < nb_pkts) {
-		rxdp = &rx_ring[rx_id];
-		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
-
-		/* Check the DD bit first */
-		if ((rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)) == 0)
-			break;
-
-		nmb = rte_mbuf_raw_alloc(rxq->mp);
-		if (unlikely(nmb == NULL)) {
-			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed++;
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
-				   "queue_id=%u", rxq->port_id, rxq->queue_id);
-			break;
-		}
-		rxd = *rxdp; /* copy descriptor in ring to temp variable */
-
-		nb_hold++;
-		rxm = rxq->sw_ring[rx_id];
-		rxq->sw_ring[rx_id] = nmb;
-		rx_id++;
-		if (unlikely(rx_id == rxq->nb_rx_desc))
-			rx_id = 0;
-
-		/* Prefetch next mbuf */
-		rte_prefetch0(rxq->sw_ring[rx_id]);
-
-		/* When next RX descriptor is on a cache line boundary,
-		 * prefetch the next 4 RX descriptors and next 8 pointers
-		 * to mbufs.
-		 */
-		if ((rx_id & 0x3) == 0) {
-			rte_prefetch0(&rx_ring[rx_id]);
-			rte_prefetch0(rxq->sw_ring[rx_id]);
-		}
-		dma_addr =
-			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
-		rxdp->read.hdr_addr = 0;
-		rxdp->read.pkt_addr = dma_addr;
-
-		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
-				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
-
-		rxm->data_off = RTE_PKTMBUF_HEADROOM;
-		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
-		rxm->nb_segs = 1;
-		rxm->next = NULL;
-		rxm->pkt_len = rx_packet_len;
-		rxm->data_len = rx_packet_len;
-		rxm->port = rxq->port_id;
-		rxm->ol_flags = 0;
-		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
-		rxm->packet_type =
-			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
-					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
-
-		rxm->ol_flags |= pkt_flags;
-
-		if (idpf_timestamp_dynflag > 0 &&
-		   (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0) {
-			/* timestamp */
-			ts_ns = idpf_tstamp_convert_32b_64b(ad,
-				rxq->hw_register_set,
-				rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
-			rxq->hw_register_set = 0;
-			*RTE_MBUF_DYNFIELD(rxm,
-					   idpf_timestamp_dynfield_offset,
-					   rte_mbuf_timestamp_t *) = ts_ns;
-			rxm->ol_flags |= idpf_timestamp_dynflag;
-		}
-
-		rx_pkts[nb_rx++] = rxm;
-	}
-	rxq->rx_tail = rx_id;
-
-	idpf_update_rx_tail(rxq, nb_hold, rx_id);
-
-	return nb_rx;
-}
-
-static inline int
-idpf_xmit_cleanup(struct idpf_tx_queue *txq)
-{
-	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
-	struct idpf_tx_entry *sw_ring = txq->sw_ring;
-	uint16_t nb_tx_desc = txq->nb_tx_desc;
-	uint16_t desc_to_clean_to;
-	uint16_t nb_tx_to_clean;
-	uint16_t i;
-
-	volatile struct idpf_flex_tx_desc *txd = txq->tx_ring;
-
-	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
-	if (desc_to_clean_to >= nb_tx_desc)
-		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
-
-	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
-	/* In the writeback Tx descriptor, the only significant field is the 4-bit DTYPE */
-	if ((txd[desc_to_clean_to].qw1.cmd_dtype &
-			rte_cpu_to_le_16(IDPF_TXD_QW1_DTYPE_M)) !=
-			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE)) {
-		PMD_TX_LOG(DEBUG, "TX descriptor %4u is not done "
-			   "(port=%d queue=%d)", desc_to_clean_to,
-			   txq->port_id, txq->queue_id);
-		return -1;
-	}
-
-	if (last_desc_cleaned > desc_to_clean_to)
-		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
-					    desc_to_clean_to);
-	else
-		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
-					last_desc_cleaned);
-
-	txd[desc_to_clean_to].qw1.cmd_dtype = 0;
-	txd[desc_to_clean_to].qw1.buf_size = 0;
-	for (i = 0; i < RTE_DIM(txd[desc_to_clean_to].qw1.flex.raw); i++)
-		txd[desc_to_clean_to].qw1.flex.raw[i] = 0;
-
-	txq->last_desc_cleaned = desc_to_clean_to;
-	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
-
-	return 0;
-}
-
-/* TX function */
-uint16_t
-idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		       uint16_t nb_pkts)
-{
-	volatile struct idpf_flex_tx_desc *txd;
-	volatile struct idpf_flex_tx_desc *txr;
-	union idpf_tx_offload tx_offload = {0};
-	struct idpf_tx_entry *txe, *txn;
-	struct idpf_tx_entry *sw_ring;
-	struct idpf_tx_queue *txq;
-	struct rte_mbuf *tx_pkt;
-	struct rte_mbuf *m_seg;
-	uint64_t buf_dma_addr;
-	uint64_t ol_flags;
-	uint16_t tx_last;
-	uint16_t nb_used;
-	uint16_t nb_ctx;
-	uint16_t td_cmd;
-	uint16_t tx_id;
-	uint16_t nb_tx;
-	uint16_t slen;
-
-	nb_tx = 0;
-	txq = tx_queue;
-
-	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
-		return nb_tx;
-
-	sw_ring = txq->sw_ring;
-	txr = txq->tx_ring;
-	tx_id = txq->tx_tail;
-	txe = &sw_ring[tx_id];
-
-	/* Check if the descriptor ring needs to be cleaned. */
-	if (txq->nb_free < txq->free_thresh)
-		(void)idpf_xmit_cleanup(txq);
-
-	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
-		td_cmd = 0;
-
-		tx_pkt = *tx_pkts++;
-		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
-
-		ol_flags = tx_pkt->ol_flags;
-		tx_offload.l2_len = tx_pkt->l2_len;
-		tx_offload.l3_len = tx_pkt->l3_len;
-		tx_offload.l4_len = tx_pkt->l4_len;
-		tx_offload.tso_segsz = tx_pkt->tso_segsz;
-		/* Calculate the number of context descriptors needed. */
-		nb_ctx = idpf_calc_context_desc(ol_flags);
-
-		/* The number of descriptors that must be allocated for
-		 * a packet equals the number of segments of that
-		 * packet, plus 1 context descriptor if needed.
-		 */
-		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
-		tx_last = (uint16_t)(tx_id + nb_used - 1);
-
-		/* Circular ring */
-		if (tx_last >= txq->nb_tx_desc)
-			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
-
-		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
-			   " tx_first=%u tx_last=%u",
-			   txq->port_id, txq->queue_id, tx_id, tx_last);
-
-		if (nb_used > txq->nb_free) {
-			if (idpf_xmit_cleanup(txq) != 0) {
-				if (nb_tx == 0)
-					return 0;
-				goto end_of_tx;
-			}
-			if (unlikely(nb_used > txq->rs_thresh)) {
-				while (nb_used > txq->nb_free) {
-					if (idpf_xmit_cleanup(txq) != 0) {
-						if (nb_tx == 0)
-							return 0;
-						goto end_of_tx;
-					}
-				}
-			}
-		}
-
-		if (nb_ctx != 0) {
-			/* Setup TX context descriptor if required */
-			volatile union idpf_flex_tx_ctx_desc *ctx_txd =
-				(volatile union idpf_flex_tx_ctx_desc *)
-							&txr[tx_id];
-
-			txn = &sw_ring[txe->next_id];
-			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
-			if (txe->mbuf != NULL) {
-				rte_pktmbuf_free_seg(txe->mbuf);
-				txe->mbuf = NULL;
-			}
-
-			/* TSO enabled */
-			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
-				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
-							ctx_txd);
-
-			txe->last_id = tx_last;
-			tx_id = txe->next_id;
-			txe = txn;
-		}
-
-		m_seg = tx_pkt;
-		do {
-			txd = &txr[tx_id];
-			txn = &sw_ring[txe->next_id];
-
-			if (txe->mbuf != NULL)
-				rte_pktmbuf_free_seg(txe->mbuf);
-			txe->mbuf = m_seg;
-
-			/* Setup TX Descriptor */
-			slen = m_seg->data_len;
-			buf_dma_addr = rte_mbuf_data_iova(m_seg);
-			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
-			txd->qw1.buf_size = slen;
-			txd->qw1.cmd_dtype = rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_DATA <<
-							      IDPF_FLEX_TXD_QW1_DTYPE_S);
-
-			txe->last_id = tx_last;
-			tx_id = txe->next_id;
-			txe = txn;
-			m_seg = m_seg->next;
-		} while (m_seg);
-
-		/* The last packet data descriptor needs End Of Packet (EOP) */
-		td_cmd |= IDPF_TX_FLEX_DESC_CMD_EOP;
-		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
-		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
-
-		if (txq->nb_used >= txq->rs_thresh) {
-			PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
-				   "%4u (port=%d queue=%d)",
-				   tx_last, txq->port_id, txq->queue_id);
-
-			td_cmd |= IDPF_TX_FLEX_DESC_CMD_RS;
-
-			/* Update txq RS bit counters */
-			txq->nb_used = 0;
-		}
-
-		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
-			td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
-
-		txd->qw1.cmd_dtype |= rte_cpu_to_le_16(td_cmd << IDPF_FLEX_TXD_QW1_CMD_S);
-	}
-
-end_of_tx:
-	rte_wmb();
-
-	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
-		   txq->port_id, txq->queue_id, tx_id, nb_tx);
-
-	IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
-	txq->tx_tail = tx_id;
-
-	return nb_tx;
-}
-
-/* TX prep functions */
-uint16_t
-idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
-	       uint16_t nb_pkts)
-{
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-	int ret;
-#endif
-	int i;
-	uint64_t ol_flags;
-	struct rte_mbuf *m;
-
-	for (i = 0; i < nb_pkts; i++) {
-		m = tx_pkts[i];
-		ol_flags = m->ol_flags;
-
-		/* Check condition for nb_segs > IDPF_TX_MAX_MTU_SEG. */
-		if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0) {
-			if (m->nb_segs > IDPF_TX_MAX_MTU_SEG) {
-				rte_errno = EINVAL;
-				return i;
-			}
-		} else if ((m->tso_segsz < IDPF_MIN_TSO_MSS) ||
-			   (m->tso_segsz > IDPF_MAX_TSO_MSS) ||
-			   (m->pkt_len > IDPF_MAX_TSO_FRAME_SIZE)) {
-			/* MSS values outside this range are considered malicious */
-			rte_errno = EINVAL;
-			return i;
-		}
-
-		if ((ol_flags & IDPF_TX_OFFLOAD_NOTSUP_MASK) != 0) {
-			rte_errno = ENOTSUP;
-			return i;
-		}
-
-		if (m->pkt_len < IDPF_MIN_FRAME_SIZE) {
-			rte_errno = EINVAL;
-			return i;
-		}
-
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-		ret = rte_validate_tx_offload(m);
-		if (ret != 0) {
-			rte_errno = -ret;
-			return i;
-		}
-#endif
-	}
-
-	return i;
-}
-
 static void __rte_cold
 release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
 {
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 4efbf10295..eab363c3e7 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -8,41 +8,6 @@
 #include <idpf_common_rxtx.h>
 #include "idpf_ethdev.h"
 
-/* MTS */
-#define GLTSYN_CMD_SYNC_0_0	(PF_TIMESYNC_BASE + 0x0)
-#define PF_GLTSYN_SHTIME_0_0	(PF_TIMESYNC_BASE + 0x4)
-#define PF_GLTSYN_SHTIME_L_0	(PF_TIMESYNC_BASE + 0x8)
-#define PF_GLTSYN_SHTIME_H_0	(PF_TIMESYNC_BASE + 0xC)
-#define GLTSYN_ART_L_0		(PF_TIMESYNC_BASE + 0x10)
-#define GLTSYN_ART_H_0		(PF_TIMESYNC_BASE + 0x14)
-#define PF_GLTSYN_SHTIME_0_1	(PF_TIMESYNC_BASE + 0x24)
-#define PF_GLTSYN_SHTIME_L_1	(PF_TIMESYNC_BASE + 0x28)
-#define PF_GLTSYN_SHTIME_H_1	(PF_TIMESYNC_BASE + 0x2C)
-#define PF_GLTSYN_SHTIME_0_2	(PF_TIMESYNC_BASE + 0x44)
-#define PF_GLTSYN_SHTIME_L_2	(PF_TIMESYNC_BASE + 0x48)
-#define PF_GLTSYN_SHTIME_H_2	(PF_TIMESYNC_BASE + 0x4C)
-#define PF_GLTSYN_SHTIME_0_3	(PF_TIMESYNC_BASE + 0x64)
-#define PF_GLTSYN_SHTIME_L_3	(PF_TIMESYNC_BASE + 0x68)
-#define PF_GLTSYN_SHTIME_H_3	(PF_TIMESYNC_BASE + 0x6C)
-
-#define PF_TIMESYNC_BAR4_BASE	0x0E400000
-#define GLTSYN_ENA		(PF_TIMESYNC_BAR4_BASE + 0x90)
-#define GLTSYN_CMD		(PF_TIMESYNC_BAR4_BASE + 0x94)
-#define GLTSYC_TIME_L		(PF_TIMESYNC_BAR4_BASE + 0x104)
-#define GLTSYC_TIME_H		(PF_TIMESYNC_BAR4_BASE + 0x108)
-
-#define GLTSYN_CMD_SYNC_0_4	(PF_TIMESYNC_BAR4_BASE + 0x110)
-#define PF_GLTSYN_SHTIME_L_4	(PF_TIMESYNC_BAR4_BASE + 0x118)
-#define PF_GLTSYN_SHTIME_H_4	(PF_TIMESYNC_BAR4_BASE + 0x11C)
-#define GLTSYN_INCVAL_L		(PF_TIMESYNC_BAR4_BASE + 0x150)
-#define GLTSYN_INCVAL_H		(PF_TIMESYNC_BAR4_BASE + 0x154)
-#define GLTSYN_SHADJ_L		(PF_TIMESYNC_BAR4_BASE + 0x158)
-#define GLTSYN_SHADJ_H		(PF_TIMESYNC_BAR4_BASE + 0x15C)
-
-#define GLTSYN_CMD_SYNC_0_5	(PF_TIMESYNC_BAR4_BASE + 0x130)
-#define PF_GLTSYN_SHTIME_L_5	(PF_TIMESYNC_BAR4_BASE + 0x138)
-#define PF_GLTSYN_SHTIME_H_5	(PF_TIMESYNC_BAR4_BASE + 0x13C)
-
 /* In QLEN must be whole number of 32 descriptors. */
 #define IDPF_ALIGN_RING_DESC	32
 #define IDPF_MIN_RING_DESC	32
@@ -62,44 +27,10 @@
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
-#define IDPF_TX_MAX_MTU_SEG	10
-
-#define IDPF_MIN_TSO_MSS	88
-#define IDPF_MAX_TSO_MSS	9728
-#define IDPF_MAX_TSO_FRAME_SIZE	262143
-#define IDPF_TX_MAX_MTU_SEG     10
-
-#define IDPF_TX_CKSUM_OFFLOAD_MASK (		\
-		RTE_MBUF_F_TX_IP_CKSUM |	\
-		RTE_MBUF_F_TX_L4_MASK |		\
-		RTE_MBUF_F_TX_TCP_SEG)
-
-#define IDPF_TX_OFFLOAD_MASK (			\
-		IDPF_TX_CKSUM_OFFLOAD_MASK |	\
-		RTE_MBUF_F_TX_IPV4 |		\
-		RTE_MBUF_F_TX_IPV6)
-
-#define IDPF_TX_OFFLOAD_NOTSUP_MASK \
-		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
-
-extern uint64_t idpf_timestamp_dynflag;
-
 struct idpf_tx_vec_entry {
 	struct rte_mbuf *mbuf;
 };
 
-/* Offload features */
-union idpf_tx_offload {
-	uint64_t data;
-	struct {
-		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
-		uint64_t l3_len:9; /* L3 (IP) Header Length. */
-		uint64_t l4_len:8; /* L4 Header Length. */
-		uint64_t tso_segsz:16; /* TCP TSO segment size */
-		/* uint64_t unused : 24; */
-	};
-};
-
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
@@ -118,77 +49,14 @@ int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
-uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-				uint16_t nb_pkts);
 uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 				       uint16_t nb_pkts);
-uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-			       uint16_t nb_pkts);
-uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-				uint16_t nb_pkts);
 uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 				       uint16_t nb_pkts);
-uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			       uint16_t nb_pkts);
-uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			uint16_t nb_pkts);
 
 void idpf_stop_queues(struct rte_eth_dev *dev);
 
 void idpf_set_rx_function(struct rte_eth_dev *dev);
 void idpf_set_tx_function(struct rte_eth_dev *dev);
 
-#define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
-/* Helper function to convert a 32b nanoseconds timestamp to 64b. */
-static inline uint64_t
-idpf_tstamp_convert_32b_64b(struct idpf_adapter_ext *ad, uint32_t flag,
-			    uint32_t in_timestamp)
-{
-#ifdef RTE_ARCH_X86_64
-	struct idpf_hw *hw = &ad->base.hw;
-	const uint64_t mask = 0xFFFFFFFF;
-	uint32_t hi, lo, lo2, delta;
-	uint64_t ns;
-
-	if (flag != 0) {
-		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
-		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_EXEC_CMD_M |
-			       PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
-		lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
-		hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
-		/*
-		 * On a typical system, the delta between lo and lo2 is ~1000ns,
-		 * so 10000 is a large enough guard band without being excessive.
-		 */
-		if (lo > (UINT32_MAX - IDPF_TIMESYNC_REG_WRAP_GUARD_BAND))
-			lo2 = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
-		else
-			lo2 = lo;
-
-		if (lo2 < lo) {
-			lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
-			hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
-		}
-
-		ad->time_hw = ((uint64_t)hi << 32) | lo;
-	}
-
-	delta = (in_timestamp - (uint32_t)(ad->time_hw & mask));
-	if (delta > (mask / 2)) {
-		delta = ((uint32_t)(ad->time_hw & mask) - in_timestamp);
-		ns = ad->time_hw - delta;
-	} else {
-		ns = ad->time_hw + delta;
-	}
-
-	return ns;
-#else /* !RTE_ARCH_X86_64 */
-	RTE_SET_USED(ad);
-	RTE_SET_USED(flag);
-	RTE_SET_USED(in_timestamp);
-	return 0;
-#endif /* RTE_ARCH_X86_64 */
-}
-
 #endif /* _IDPF_RXTX_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index 71a6c59823..ea949635e0 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -38,8 +38,8 @@ idpf_singleq_rearm_common(struct idpf_rx_queue *rxq)
 						dma_addr0);
 			}
 		}
-		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
-			IDPF_RXQ_REARM_THRESH;
+		rte_atomic64_add(&rxq->rx_stats.mbuf_alloc_failed,
+				 IDPF_RXQ_REARM_THRESH);
 		return;
 	}
 	struct rte_mbuf *mb0, *mb1, *mb2, *mb3;
@@ -168,8 +168,8 @@ idpf_singleq_rearm(struct idpf_rx_queue *rxq)
 							 dma_addr0);
 				}
 			}
-			rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
-					IDPF_RXQ_REARM_THRESH;
+			rte_atomic64_add(&rxq->rx_stats.mbuf_alloc_failed,
+					 IDPF_RXQ_REARM_THRESH);
 			return;
 		}
 	}
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
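
Aside on the timestamp helper above: idpf_tstamp_convert_32b_64b() extends a
32-bit rolling timestamp against a cached 64-bit reference by treating any
unsigned 32-bit delta larger than half the range as "the sample is older than
the reference" rather than newer. A minimal standalone sketch of the same
arithmetic (ext64() and the test values are illustrative, not driver code):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Extend the 32-bit timestamp in_ts against the 64-bit reference time_hw. */
static uint64_t
ext64(uint64_t time_hw, uint32_t in_ts)
{
	uint32_t delta = in_ts - (uint32_t)time_hw;

	if (delta > UINT32_MAX / 2)
		return time_hw - ((uint32_t)time_hw - in_ts);
	return time_hw + delta;
}

int
main(void)
{
	uint64_t ref = 0x1FFFFFFF0ULL; /* reference just below a 32b rollover */

	/* a sample taken just after the rollover still extends forward:
	 * prints 0x200000010
	 */
	printf("0x%" PRIx64 "\n", ext64(ref, 0x10));
	return 0;
}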

* [PATCH v5 14/15] common/idpf: add vec queue setup
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
                       ` (12 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 13/15] common/idpf: add Rx and Tx data path beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-02  9:53     ` [PATCH v5 15/15] common/idpf: add avx512 for single queue model beilei.xing
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move vector queue setup for single queue model to common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 57 ++++++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h |  2 +
 drivers/common/idpf/version.map        |  1 +
 drivers/net/idpf/idpf_rxtx.c           | 57 --------------------------
 drivers/net/idpf/idpf_rxtx.h           |  1 -
 5 files changed, 60 insertions(+), 58 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 459057f20e..bc95fef6bc 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1399,3 +1399,60 @@ idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	return i;
 }
+
+static void __rte_cold
+release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
+{
+	const uint16_t mask = rxq->nb_rx_desc - 1;
+	uint16_t i;
+
+	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	if (rxq->rxrearm_nb == 0) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i] != NULL)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	} else {
+		for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask) {
+			if (rxq->sw_ring[i] != NULL)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	}
+
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static const struct idpf_rxq_ops def_singleq_rx_ops_vec = {
+	.release_mbufs = release_rxq_mbufs_vec,
+};
+
+static inline int
+idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
+{
+	uintptr_t p;
+	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+	mb_def.nb_segs = 1;
+	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+	mb_def.port = rxq->port_id;
+	rte_mbuf_refcnt_set(&mb_def, 1);
+
+	/* prevent compiler reordering: rearm_data covers previous fields */
+	rte_compiler_barrier();
+	p = (uintptr_t)&mb_def.rearm_data;
+	rxq->mbuf_initializer = *(uint64_t *)p;
+	return 0;
+}
+
+int __rte_cold
+idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
+{
+	rxq->ops = &def_singleq_rx_ops_vec;
+	return idpf_singleq_rx_vec_setup_default(rxq);
+}
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 827f791505..74d6081638 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -252,5 +252,7 @@ uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 __rte_internal
 uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
+__rte_internal
+int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 03aab598b4..511705e5b0 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -25,6 +25,7 @@ INTERNAL {
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
 	idpf_singleq_recv_pkts;
+	idpf_singleq_rx_vec_setup;
 	idpf_singleq_xmit_pkts;
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 1066789386..c0c622d64b 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -743,63 +743,6 @@ idpf_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
-static void __rte_cold
-release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
-{
-	const uint16_t mask = rxq->nb_rx_desc - 1;
-	uint16_t i;
-
-	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
-		return;
-
-	/* free all mbufs that are valid in the ring */
-	if (rxq->rxrearm_nb == 0) {
-		for (i = 0; i < rxq->nb_rx_desc; i++) {
-			if (rxq->sw_ring[i] != NULL)
-				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-		}
-	} else {
-		for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask) {
-			if (rxq->sw_ring[i] != NULL)
-				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-		}
-	}
-
-	rxq->rxrearm_nb = rxq->nb_rx_desc;
-
-	/* set all entries to NULL */
-	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
-}
-
-static const struct idpf_rxq_ops def_singleq_rx_ops_vec = {
-	.release_mbufs = release_rxq_mbufs_vec,
-};
-
-static inline int
-idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
-{
-	uintptr_t p;
-	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
-
-	mb_def.nb_segs = 1;
-	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
-	mb_def.port = rxq->port_id;
-	rte_mbuf_refcnt_set(&mb_def, 1);
-
-	/* prevent compiler reordering: rearm_data covers previous fields */
-	rte_compiler_barrier();
-	p = (uintptr_t)&mb_def.rearm_data;
-	rxq->mbuf_initializer = *(uint64_t *)p;
-	return 0;
-}
-
-int __rte_cold
-idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
-{
-	rxq->ops = &def_singleq_rx_ops_vec;
-	return idpf_singleq_rx_vec_setup_default(rxq);
-}
-
 void
 idpf_set_rx_function(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index eab363c3e7..a985dc2cf5 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -44,7 +44,6 @@ void idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 int idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
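
A note on the mbuf_initializer trick used by
idpf_singleq_rx_vec_setup_default() above: the rte_mbuf fields covered by the
8-byte rearm_data marker (data_off, refcnt, nb_segs, port) are staged once in
a template mbuf and snapshotted as a single uint64_t, so the rearm loop can
reinitialize each freshly allocated mbuf with one 64-bit store instead of
four separate field writes. A hedged scalar sketch of the consuming side (the
real loop lives in the AVX512 file and is vectorized; rearm_one() is an
illustrative name only):

#include <rte_mbuf.h>
#include <idpf_common_rxtx.h> /* for struct idpf_rx_queue */

/* Scalar equivalent of one rearm step, assuming rxq->mbuf_initializer was
 * filled in exactly as in idpf_singleq_rx_vec_setup_default().
 */
static inline void
rearm_one(const struct idpf_rx_queue *rxq, struct rte_mbuf *mb)
{
	/* one store covers data_off, refcnt, nb_segs and port */
	*(uint64_t *)&mb->rearm_data = rxq->mbuf_initializer;
}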

* [PATCH v5 15/15] common/idpf: add avx512 for single queue model
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
                       ` (13 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 14/15] common/idpf: add vec queue setup beilei.xing
@ 2023-02-02  9:53     ` beilei.xing
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
  15 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-02  9:53 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move avx512 vector path for single queue to common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.h        | 20 +++++++++++++
 .../idpf/idpf_common_rxtx_avx512.c}           |  4 +--
 drivers/common/idpf/meson.build               | 30 +++++++++++++++++++
 drivers/common/idpf/version.map               |  3 ++
 drivers/net/idpf/idpf_rxtx.h                  | 13 --------
 drivers/net/idpf/meson.build                  | 17 -----------
 6 files changed, 55 insertions(+), 32 deletions(-)
 rename drivers/{net/idpf/idpf_rxtx_vec_avx512.c => common/idpf/idpf_common_rxtx_avx512.c} (99%)

diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 74d6081638..6e3ee7de25 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -47,6 +47,12 @@
 #define IDPF_TX_OFFLOAD_NOTSUP_MASK \
 		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
 
+/* used for Vector PMD */
+#define IDPF_VPMD_RX_MAX_BURST		32
+#define IDPF_VPMD_TX_MAX_BURST		32
+#define IDPF_VPMD_DESCS_PER_LOOP	4
+#define IDPF_RXQ_REARM_THRESH		64
+
 /* MTS */
 #define GLTSYN_CMD_SYNC_0_0	(PF_TIMESYNC_BASE + 0x0)
 #define PF_GLTSYN_SHTIME_0_0	(PF_TIMESYNC_BASE + 0x4)
@@ -193,6 +199,10 @@ union idpf_tx_offload {
 	};
 };
 
+struct idpf_tx_vec_entry {
+	struct rte_mbuf *mbuf;
+};
+
 struct idpf_rxq_ops {
 	void (*release_mbufs)(struct idpf_rx_queue *rxq);
 };
@@ -254,5 +264,15 @@ uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
 __rte_internal
 int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
+__rte_internal
+int idpf_singleq_tx_vec_setup_avx512(struct idpf_tx_queue *txq);
+__rte_internal
+uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue,
+				       struct rte_mbuf **rx_pkts,
+				       uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue,
+				       struct rte_mbuf **tx_pkts,
+				       uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
similarity index 99%
rename from drivers/net/idpf/idpf_rxtx_vec_avx512.c
rename to drivers/common/idpf/idpf_common_rxtx_avx512.c
index ea949635e0..6ae0e14d2f 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -2,9 +2,9 @@
  * Copyright(c) 2022 Intel Corporation
  */
 
-#include "idpf_rxtx_vec_common.h"
-
 #include <rte_vect.h>
+#include <idpf_common_device.h>
+#include <idpf_common_rxtx.h>
 
 #ifndef __INTEL_COMPILER
 #pragma GCC diagnostic ignored "-Wcast-qual"
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index 5ee071fdb2..1dafafeb2f 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -9,4 +9,34 @@ sources = files(
     'idpf_common_virtchnl.c',
 )
 
+if arch_subdir == 'x86'
+    idpf_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    idpf_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if idpf_avx512_cpu_support == true or idpf_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+        avx512_args = [cflags, '-mavx512f', '-mavx512bw']
+        if cc.has_argument('-march=skylake-avx512')
+            avx512_args += '-march=skylake-avx512'
+        endif
+        idpf_common_avx512_lib = static_library(
+            'idpf_common_avx512_lib',
+            'idpf_common_rxtx_avx512.c',
+            dependencies: [
+                    static_rte_mbuf,
+            ],
+            include_directories: includes,
+            c_args: avx512_args)
+        objs += idpf_common_avx512_lib.extract_objects('idpf_common_rxtx_avx512.c')
+    endif
+endif
+
 subdir('base')
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 511705e5b0..a0e97de81f 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -25,8 +25,11 @@ INTERNAL {
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
 	idpf_singleq_recv_pkts;
+	idpf_singleq_recv_pkts_avx512;
 	idpf_singleq_rx_vec_setup;
+	idpf_singleq_tx_vec_setup_avx512;
 	idpf_singleq_xmit_pkts;
+	idpf_singleq_xmit_pkts_avx512;
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
 	idpf_tx_queue_release;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index a985dc2cf5..3a5084dfd6 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -19,23 +19,14 @@
 #define IDPF_DEFAULT_RX_FREE_THRESH	32
 
 /* used for Vector PMD */
-#define IDPF_VPMD_RX_MAX_BURST	32
-#define IDPF_VPMD_TX_MAX_BURST	32
-#define IDPF_VPMD_DESCS_PER_LOOP	4
-#define IDPF_RXQ_REARM_THRESH	64
 
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
-struct idpf_tx_vec_entry {
-	struct rte_mbuf *mbuf;
-};
-
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
-int idpf_singleq_tx_vec_setup_avx512(struct idpf_tx_queue *txq);
 int idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
@@ -48,10 +39,6 @@ int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
-uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
-				       uint16_t nb_pkts);
-uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
-				       uint16_t nb_pkts);
 
 void idpf_stop_queues(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
index 378925166f..98f8ceb77b 100644
--- a/drivers/net/idpf/meson.build
+++ b/drivers/net/idpf/meson.build
@@ -34,22 +34,5 @@ if arch_subdir == 'x86'
 
     if idpf_avx512_cpu_support == true or idpf_avx512_cc_support == true
         cflags += ['-DCC_AVX512_SUPPORT']
-        avx512_args = [cflags, '-mavx512f', '-mavx512bw']
-        if cc.has_argument('-march=skylake-avx512')
-            avx512_args += '-march=skylake-avx512'
-        endif
-        idpf_avx512_lib = static_library(
-            'idpf_avx512_lib',
-            'idpf_rxtx_vec_avx512.c',
-            dependencies: [
-                    static_rte_common_idpf,
-                    static_rte_ethdev,
-                    static_rte_bus_pci,
-                    static_rte_kvargs,
-                    static_rte_hash,
-            ],
-            include_directories: includes,
-            c_args: avx512_args)
-        objs += idpf_avx512_lib.extract_objects('idpf_rxtx_vec_avx512.c')
     endif
 endif
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
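
Note that the meson gate above only decides whether the AVX512 object is
compiled into the library (CC_AVX512_SUPPORT); whether it is actually used is
decided at runtime by idpf_set_rx_function()/idpf_set_tx_function() in the
PMD, which are not part of this diff. A hedged sketch of that selection
pattern using the public rte_cpuflags/rte_vect APIs (can_use_avx512() is an
illustrative name):

#include <stdbool.h>
#include <rte_cpuflags.h>
#include <rte_vect.h>

static bool
can_use_avx512(void)
{
#ifdef CC_AVX512_SUPPORT
	/* the object was compiled in; still require the CPU flags and an
	 * EAL --force-max-simd-bitwidth setting that allows 512-bit SIMD
	 */
	return rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
	       rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
	       rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512;
#else
	return false;
#endif
}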

* [PATCH v6 00/19] net/idpf: introduce idpf common module
  2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common module beilei.xing
                       ` (14 preceding siblings ...)
  2023-02-02  9:53     ` [PATCH v5 15/15] common/idpf: add avx512 for single queue model beilei.xing
@ 2023-02-03  9:43     ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 01/19] common/idpf: add adapter structure beilei.xing
                         ` (20 more replies)
  15 siblings, 21 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Refactor the idpf PMD by introducing an idpf common module, which will also
be consumed by a new PMD, the CPFL (Control Plane Function Library) PMD.

v2 changes:
 - Refine irq map/unmap functions.
 - Fix cross compile issue.
v3 changes:
 - Embed vport_info field into the vport structure.
 - Refine APIs' name and order in version.map.
 - Refine commit log.
v4 changes:
 - Refine commit log.
v5 changes:
 - Refine version.map.
 - Fix typo.
 - Return error log.
v6 changes:
 - Refine API name in common module.

Beilei Xing (19):
  common/idpf: add adapter structure
  common/idpf: add vport structure
  common/idpf: add virtual channel functions
  common/idpf: introduce adapter init and deinit
  common/idpf: add vport init/deinit
  common/idpf: add config RSS
  common/idpf: add irq map/unmap
  common/idpf: support get packet type
  common/idpf: add vport info initialization
  common/idpf: add vector flags in vport
  common/idpf: add rxq and txq struct
  common/idpf: add helper functions for queue setup and release
  common/idpf: add Rx and Tx data path
  common/idpf: add vec queue setup
  common/idpf: add avx512 for single queue model
  common/idpf: refine API name for vport functions
  common/idpf: refine API name for queue config module
  common/idpf: refine API name for data path module
  common/idpf: refine API name for virtual channel functions

 drivers/common/idpf/base/idpf_controlq_api.h  |    6 -
 drivers/common/idpf/base/meson.build          |    2 +-
 drivers/common/idpf/idpf_common_device.c      |  655 +++++
 drivers/common/idpf/idpf_common_device.h      |  195 ++
 drivers/common/idpf/idpf_common_logs.h        |   47 +
 drivers/common/idpf/idpf_common_rxtx.c        | 1458 ++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h        |  278 +++
 .../idpf/idpf_common_rxtx_avx512.c}           |   24 +-
 .../idpf/idpf_common_virtchnl.c}              |  945 ++------
 drivers/common/idpf/idpf_common_virtchnl.h    |   52 +
 drivers/common/idpf/meson.build               |   38 +
 drivers/common/idpf/version.map               |   61 +-
 drivers/net/idpf/idpf_ethdev.c                |  552 +----
 drivers/net/idpf/idpf_ethdev.h                |  194 +-
 drivers/net/idpf/idpf_logs.h                  |   24 -
 drivers/net/idpf/idpf_rxtx.c                  | 2107 +++--------------
 drivers/net/idpf/idpf_rxtx.h                  |  253 +-
 drivers/net/idpf/meson.build                  |   18 -
 18 files changed, 3442 insertions(+), 3467 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_device.c
 create mode 100644 drivers/common/idpf/idpf_common_device.h
 create mode 100644 drivers/common/idpf/idpf_common_logs.h
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.c
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.h
 rename drivers/{net/idpf/idpf_rxtx_vec_avx512.c => common/idpf/idpf_common_rxtx_avx512.c} (97%)
 rename drivers/{net/idpf/idpf_vchnl.c => common/idpf/idpf_common_virtchnl.c} (52%)
 create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h

-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v6 01/19] common/idpf: add adapter structure
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 02/19] common/idpf: add vport structure beilei.xing
                         ` (19 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Add struct idpf_adapter to the common module; it holds the basic
shared fields.
Introduce struct idpf_adapter_ext in the PMD; it embeds idpf_adapter
and carries the extra, PMD-specific fields.
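
The key mechanism is recovering the wrapper from the embedded base:
common code passes struct idpf_adapter around, and the PMD converts it
back with the container_of()-based IDPF_ADAPTER_TO_EXT() macro added
below. A minimal sketch of the pattern with made-up names, assuming the
usual offsetof()-based container_of definition:

#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct base { int caps; };

struct ext {
	struct base base; /* embedded; need not be the first member */
	int txq_model;    /* PMD-only state lives out here */
};

/* given only a struct base *, recover the enclosing struct ext */
static inline struct ext *
base_to_ext(struct base *b)
{
	return container_of(b, struct ext, base);
}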

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h | 20 ++++++
 drivers/net/idpf/idpf_ethdev.c           | 91 ++++++++++--------------
 drivers/net/idpf/idpf_ethdev.h           | 25 +++----
 drivers/net/idpf/idpf_rxtx.c             | 16 ++---
 drivers/net/idpf/idpf_rxtx.h             |  4 +-
 drivers/net/idpf/idpf_rxtx_vec_avx512.c  |  3 +-
 drivers/net/idpf/idpf_vchnl.c            | 30 ++++----
 7 files changed, 99 insertions(+), 90 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_device.h

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
new file mode 100644
index 0000000000..4f548a7185
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_DEVICE_H_
+#define _IDPF_COMMON_DEVICE_H_
+
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+struct idpf_adapter {
+	struct idpf_hw hw;
+	struct virtchnl2_version_info virtchnl_version;
+	struct virtchnl2_get_capabilities caps;
+	volatile uint32_t pend_cmd; /* pending command not finished */
+	uint32_t cmd_retval; /* return value of the cmd response from cp */
+	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
+};
+
+#endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 3f1b77144c..1b13d081a7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -53,8 +53,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
 
-	dev_info->max_rx_queues = adapter->caps->max_rx_q;
-	dev_info->max_tx_queues = adapter->caps->max_tx_q;
+	dev_info->max_rx_queues = adapter->caps.max_rx_q;
+	dev_info->max_tx_queues = adapter->caps.max_tx_q;
 	dev_info->min_rx_bufsize = IDPF_MIN_BUF_SIZE;
 	dev_info->max_rx_pktlen = vport->max_mtu + IDPF_ETH_OVERHEAD;
 
@@ -147,7 +147,7 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev,
 			 struct virtchnl2_create_vport *vport_info)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(vport->adapter);
 
 	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
 	if (adapter->txq_model == 0) {
@@ -379,7 +379,7 @@ idpf_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
-	if (adapter->caps->rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
 		ret = idpf_init_rss(vport);
 		if (ret != 0) {
 			PMD_INIT_LOG(ERR, "Failed to init rss");
@@ -420,7 +420,7 @@ idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 
 	/* Rx interrupt disabled, Map interrupt only for writeback */
 
-	/* The capability flags adapter->caps->other_caps should be
+	/* The capability flags adapter->caps.other_caps should be
 	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
 	 * condition should be updated when the FW can return the
 	 * correct flag bits.
@@ -518,9 +518,9 @@ static int
 idpf_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	uint16_t num_allocated_vectors =
-		adapter->caps->num_allocated_vectors;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
 	uint16_t req_vecs_num;
 	int ret;
 
@@ -596,7 +596,7 @@ static int
 idpf_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(vport->adapter);
 
 	idpf_dev_stop(dev);
 
@@ -728,7 +728,7 @@ parse_bool(const char *key, const char *value, void *args)
 }
 
 static int
-idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter,
+idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter,
 		   struct idpf_devargs *idpf_args)
 {
 	struct rte_devargs *devargs = pci_dev->device.devargs;
@@ -875,14 +875,14 @@ idpf_init_mbx(struct idpf_hw *hw)
 }
 
 static int
-idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
+idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
-	struct idpf_hw *hw = &adapter->hw;
+	struct idpf_hw *hw = &adapter->base.hw;
 	int ret = 0;
 
 	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
 	hw->hw_addr_len = pci_dev->mem_resource[0].len;
-	hw->back = adapter;
+	hw->back = &adapter->base;
 	hw->vendor_id = pci_dev->id.vendor_id;
 	hw->device_id = pci_dev->id.device_id;
 	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
@@ -902,15 +902,15 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 		goto err;
 	}
 
-	adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
-					IDPF_DFLT_MBX_BUF_SIZE, 0);
-	if (adapter->mbx_resp == NULL) {
+	adapter->base.mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
+					     IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (adapter->base.mbx_resp == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
 		ret = -ENOMEM;
 		goto err_mbx;
 	}
 
-	ret = idpf_vc_check_api_version(adapter);
+	ret = idpf_vc_check_api_version(&adapter->base);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to check api version");
 		goto err_api;
@@ -922,21 +922,13 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 		goto err_api;
 	}
 
-	adapter->caps = rte_zmalloc("idpf_caps",
-				sizeof(struct virtchnl2_get_capabilities), 0);
-	if (adapter->caps == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate idpf_caps memory");
-		ret = -ENOMEM;
-		goto err_api;
-	}
-
-	ret = idpf_vc_get_caps(adapter);
+	ret = idpf_vc_get_caps(&adapter->base);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to get capabilities");
-		goto err_caps;
+		goto err_api;
 	}
 
-	adapter->max_vport_nb = adapter->caps->max_vports;
+	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
 				      adapter->max_vport_nb *
@@ -945,7 +937,7 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 	if (adapter->vports == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
 		ret = -ENOMEM;
-		goto err_vports;
+		goto err_api;
 	}
 
 	adapter->max_rxq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
@@ -962,13 +954,9 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 
 	return ret;
 
-err_vports:
-err_caps:
-	rte_free(adapter->caps);
-	adapter->caps = NULL;
 err_api:
-	rte_free(adapter->mbx_resp);
-	adapter->mbx_resp = NULL;
+	rte_free(adapter->base.mbx_resp);
+	adapter->base.mbx_resp = NULL;
 err_mbx:
 	idpf_ctlq_deinit(hw);
 err:
@@ -995,7 +983,7 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 };
 
 static uint16_t
-idpf_vport_idx_alloc(struct idpf_adapter *ad)
+idpf_vport_idx_alloc(struct idpf_adapter_ext *ad)
 {
 	uint16_t vport_idx;
 	uint16_t i;
@@ -1018,13 +1006,13 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_vport_param *param = init_params;
-	struct idpf_adapter *adapter = param->adapter;
+	struct idpf_adapter_ext *adapter = param->adapter;
 	/* for sending create vport virtchnl msg prepare */
 	struct virtchnl2_create_vport vport_req_info;
 	int ret = 0;
 
 	dev->dev_ops = &idpf_eth_dev_ops;
-	vport->adapter = adapter;
+	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
 
@@ -1085,10 +1073,10 @@ static const struct rte_pci_id pci_id_idpf_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-struct idpf_adapter *
-idpf_find_adapter(struct rte_pci_device *pci_dev)
+struct idpf_adapter_ext *
+idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 {
-	struct idpf_adapter *adapter;
+	struct idpf_adapter_ext *adapter;
 	int found = 0;
 
 	if (pci_dev == NULL)
@@ -1110,17 +1098,14 @@ idpf_find_adapter(struct rte_pci_device *pci_dev)
 }
 
 static void
-idpf_adapter_rel(struct idpf_adapter *adapter)
+idpf_adapter_rel(struct idpf_adapter_ext *adapter)
 {
-	struct idpf_hw *hw = &adapter->hw;
+	struct idpf_hw *hw = &adapter->base.hw;
 
 	idpf_ctlq_deinit(hw);
 
-	rte_free(adapter->caps);
-	adapter->caps = NULL;
-
-	rte_free(adapter->mbx_resp);
-	adapter->mbx_resp = NULL;
+	rte_free(adapter->base.mbx_resp);
+	adapter->base.mbx_resp = NULL;
 
 	rte_free(adapter->vports);
 	adapter->vports = NULL;
@@ -1131,7 +1116,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	       struct rte_pci_device *pci_dev)
 {
 	struct idpf_vport_param vport_param;
-	struct idpf_adapter *adapter;
+	struct idpf_adapter_ext *adapter;
 	struct idpf_devargs devargs;
 	char name[RTE_ETH_NAME_MAX_LEN];
 	int i, retval;
@@ -1143,11 +1128,11 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		idpf_adapter_list_init = true;
 	}
 
-	adapter = idpf_find_adapter(pci_dev);
+	adapter = idpf_find_adapter_ext(pci_dev);
 	if (adapter == NULL) {
 		first_probe = true;
-		adapter = rte_zmalloc("idpf_adapter",
-						sizeof(struct idpf_adapter), 0);
+		adapter = rte_zmalloc("idpf_adapter_ext",
+				      sizeof(struct idpf_adapter_ext), 0);
 		if (adapter == NULL) {
 			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
 			return -ENOMEM;
@@ -1225,7 +1210,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 static int
 idpf_pci_remove(struct rte_pci_device *pci_dev)
 {
-	struct idpf_adapter *adapter = idpf_find_adapter(pci_dev);
+	struct idpf_adapter_ext *adapter = idpf_find_adapter_ext(pci_dev);
 	uint16_t port_id;
 
 	/* Ethdev created can be found RTE_ETH_FOREACH_DEV_OF through rte_device */
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index b0746e5041..e956fa989c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -15,6 +15,7 @@
 
 #include "idpf_logs.h"
 
+#include <idpf_common_device.h>
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
 
@@ -91,7 +92,7 @@ struct idpf_chunks_info {
 };
 
 struct idpf_vport_param {
-	struct idpf_adapter *adapter;
+	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
 	uint16_t idx;       /* index in adapter->vports[]*/
 };
@@ -144,17 +145,11 @@ struct idpf_devargs {
 	uint16_t req_vport_nb;
 };
 
-struct idpf_adapter {
-	TAILQ_ENTRY(idpf_adapter) next;
-	struct idpf_hw hw;
-	char name[IDPF_ADAPTER_NAME_LEN];
-
-	struct virtchnl2_version_info virtchnl_version;
-	struct virtchnl2_get_capabilities *caps;
+struct idpf_adapter_ext {
+	TAILQ_ENTRY(idpf_adapter_ext) next;
+	struct idpf_adapter base;
 
-	volatile uint32_t pend_cmd; /* pending command not finished */
-	uint32_t cmd_retval; /* return value of the cmd response from ipf */
-	uint8_t *mbx_resp; /* buffer to store the mailbox response from ipf */
+	char name[IDPF_ADAPTER_NAME_LEN];
 
 	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
 	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
@@ -182,10 +177,12 @@ struct idpf_adapter {
 	uint64_t time_hw;
 };
 
-TAILQ_HEAD(idpf_adapter_list, idpf_adapter);
+TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
 
 #define IDPF_DEV_TO_PCI(eth_dev)		\
 	RTE_DEV_TO_PCI((eth_dev)->device)
+#define IDPF_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct idpf_adapter_ext, base)
 
 /* structure used for sending and checking response of virtchnl ops */
 struct idpf_cmd_info {
@@ -234,10 +231,10 @@ atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
 	return !ret;
 }
 
-struct idpf_adapter *idpf_find_adapter(struct rte_pci_device *pci_dev);
+struct idpf_adapter_ext *idpf_find_adapter_ext(struct rte_pci_device *pci_dev);
 void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
 int idpf_vc_check_api_version(struct idpf_adapter *adapter);
-int idpf_get_pkt_type(struct idpf_adapter *adapter);
+int idpf_get_pkt_type(struct idpf_adapter_ext *adapter);
 int idpf_vc_get_caps(struct idpf_adapter *adapter);
 int idpf_vc_create_vport(struct idpf_vport *vport,
 			 struct virtchnl2_create_vport *vport_info);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 5aef8ba2b6..4845f2ea0a 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1384,7 +1384,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct idpf_rx_queue *rxq;
 	const uint32_t *ptype_tbl;
 	uint8_t status_err0_qw1;
-	struct idpf_adapter *ad;
+	struct idpf_adapter_ext *ad;
 	struct rte_mbuf *rxm;
 	uint16_t rx_id_bufq1;
 	uint16_t rx_id_bufq2;
@@ -1398,7 +1398,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	nb_rx = 0;
 	rxq = rx_queue;
-	ad = rxq->adapter;
+	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
 
 	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
 		return nb_rx;
@@ -1407,7 +1407,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
 	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
 	rx_desc_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
+	ptype_tbl = ad->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
@@ -1791,7 +1791,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	const uint32_t *ptype_tbl;
 	uint16_t rx_id, nb_hold;
 	struct rte_eth_dev *dev;
-	struct idpf_adapter *ad;
+	struct idpf_adapter_ext *ad;
 	uint16_t rx_packet_len;
 	struct rte_mbuf *rxm;
 	struct rte_mbuf *nmb;
@@ -1805,14 +1805,14 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	nb_hold = 0;
 	rxq = rx_queue;
 
-	ad = rxq->adapter;
+	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
 
 	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
 		return nb_rx;
 
 	rx_id = rxq->rx_tail;
 	rx_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
+	ptype_tbl = ad->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
@@ -2221,7 +2221,7 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter *ad = vport->adapter;
+	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 	struct idpf_rx_queue *rxq;
 	int i;
 
@@ -2275,7 +2275,7 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter *ad = vport->adapter;
+	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 #ifdef CC_AVX512_SUPPORT
 	struct idpf_tx_queue *txq;
 	int i;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 730dc64ebc..047fc03614 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -247,11 +247,11 @@ void idpf_set_tx_function(struct rte_eth_dev *dev);
 /* Helper function to convert a 32b nanoseconds timestamp to 64b. */
 static inline uint64_t
 
-idpf_tstamp_convert_32b_64b(struct idpf_adapter *ad, uint32_t flag,
+idpf_tstamp_convert_32b_64b(struct idpf_adapter_ext *ad, uint32_t flag,
 			    uint32_t in_timestamp)
 {
 #ifdef RTE_ARCH_X86_64
-	struct idpf_hw *hw = &ad->hw;
+	struct idpf_hw *hw = &ad->base.hw;
 	const uint64_t mask = 0xFFFFFFFF;
 	uint32_t hi, lo, lo2, delta;
 	uint64_t ns;
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index fb2b6bb53c..efa7cd2187 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -245,7 +245,8 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
 				   struct rte_mbuf **rx_pkts,
 				   uint16_t nb_pkts)
 {
-	const uint32_t *type_table = rxq->adapter->ptype_tbl;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(rxq->adapter);
+	const uint32_t *type_table = adapter->ptype_tbl;
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
 						    rxq->mbuf_initializer);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 14b34619af..ca481bb915 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -311,13 +311,17 @@ idpf_vc_check_api_version(struct idpf_adapter *adapter)
 }
 
 int __rte_cold
-idpf_get_pkt_type(struct idpf_adapter *adapter)
+idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 {
 	struct virtchnl2_get_ptype_info *ptype_info;
-	uint16_t ptype_recvd = 0, ptype_offset, i, j;
+	struct idpf_adapter *base;
+	uint16_t ptype_offset, i, j;
+	uint16_t ptype_recvd = 0;
 	int ret;
 
-	ret = idpf_vc_query_ptype_info(adapter);
+	base = &adapter->base;
+
+	ret = idpf_vc_query_ptype_info(base);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Fail to query packet type information");
 		return ret;
@@ -328,7 +332,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 			return -ENOMEM;
 
 	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_read_one_msg(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
+		ret = idpf_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
 					IDPF_DFLT_MBX_BUF_SIZE, (u8 *)ptype_info);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR, "Fail to get packet type information");
@@ -515,7 +519,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 
 free_ptype_info:
 	rte_free(ptype_info);
-	clear_cmd(adapter);
+	clear_cmd(base);
 	return ret;
 }
 
@@ -577,7 +581,7 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 		return err;
 	}
 
-	rte_memcpy(adapter->caps, args.out_buffer, sizeof(caps_msg));
+	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
 
 	return 0;
 }
@@ -740,7 +744,8 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 int
 idpf_vc_config_rxqs(struct idpf_vport *vport)
 {
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
 	struct idpf_rx_queue **rxq =
 		(struct idpf_rx_queue **)vport->dev_data->rx_queues;
 	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
@@ -832,10 +837,10 @@ idpf_vc_config_rxqs(struct idpf_vport *vport)
 		args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
 		args.in_args = (uint8_t *)vc_rxqs;
 		args.in_args_size = size;
-		args.out_buffer = adapter->mbx_resp;
+		args.out_buffer = base->mbx_resp;
 		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-		err = idpf_execute_vc_cmd(adapter, &args);
+		err = idpf_execute_vc_cmd(base, &args);
 		rte_free(vc_rxqs);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
@@ -940,7 +945,8 @@ idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
 int
 idpf_vc_config_txqs(struct idpf_vport *vport)
 {
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
 	struct idpf_tx_queue **txq =
 		(struct idpf_tx_queue **)vport->dev_data->tx_queues;
 	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
@@ -1010,10 +1016,10 @@ idpf_vc_config_txqs(struct idpf_vport *vport)
 		args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
 		args.in_args = (uint8_t *)vc_txqs;
 		args.in_args_size = size;
-		args.out_buffer = adapter->mbx_resp;
+		args.out_buffer = base->mbx_resp;
 		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-		err = idpf_execute_vc_cmd(adapter, &args);
+		err = idpf_execute_vc_cmd(base, &args);
 		rte_free(vc_txqs);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
-- 
2.26.2


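The IDPF_ADAPTER_TO_EXT() macro added above is the classic container_of()
embedding pattern: common code only ever sees the embedded struct
idpf_adapter, and the PMD recovers its private wrapper on demand. A minimal,
compilable sketch of the pattern (members trimmed; the main() is purely
illustrative, not part of the driver):

#include <stddef.h>
#include <stdio.h>

struct idpf_adapter {			/* common, shared state */
	int hw_state;
};

struct idpf_adapter_ext {		/* driver-private wrapper */
	struct idpf_adapter base;
	char name[32];
};

/* Same shape as IDPF_ADAPTER_TO_EXT() in the patch. */
#define ADAPTER_TO_EXT(p) \
	((struct idpf_adapter_ext *)((char *)(p) - \
		offsetof(struct idpf_adapter_ext, base)))

int main(void)
{
	struct idpf_adapter_ext ext = { .base = { .hw_state = 1 }, .name = "idpf-0" };
	struct idpf_adapter *base = &ext.base;	/* what the common module sees */

	printf("%s\n", ADAPTER_TO_EXT(base)->name);	/* prints idpf-0 */
	return 0;
}

Because base is embedded by value, one allocation covers both layers, which
is why idpf_pci_probe() above can rte_zmalloc() a single idpf_adapter_ext.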

* [PATCH v6 02/19] common/idpf: add vport structure
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
  2023-02-03  9:43       ` [PATCH v6 01/19] common/idpf: add adapter structure beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 03/19] common/idpf: add virtual channel functions beilei.xing
                         ` (18 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Move the idpf_vport structure to the common module and remove the ethdev
dependency. Also remove unused functions.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
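One note on the hunks below: dev_data changes from struct rte_eth_dev_data *
to a plain void * when idpf_vport moves into the common module, and that is
what actually severs the ethdev dependency; the ethdev layer casts it back at
the boundary. A hedged, self-contained sketch of the decoupling (the
example_* names are illustrative, not from the patch):

/* Common module side: no ethdev types visible. */
struct example_vport {
	void *dev_data;		/* opaque handle owned by the upper layer */
};

/* Ethdev layer side: knows the concrete type, casts at the boundary. */
struct example_dev_data {
	unsigned int nb_rx_queues;
};

static unsigned int
example_nb_rxq(struct example_vport *vport)
{
	struct example_dev_data *dd = vport->dev_data;

	return dd->nb_rx_queues;
}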
 drivers/common/idpf/idpf_common_device.h |  59 ++++++
 drivers/net/idpf/idpf_ethdev.c           |  10 +-
 drivers/net/idpf/idpf_ethdev.h           |  66 +-----
 drivers/net/idpf/idpf_rxtx.c             |   4 +-
 drivers/net/idpf/idpf_rxtx.h             |   3 +
 drivers/net/idpf/idpf_vchnl.c            | 252 +++--------------------
 6 files changed, 96 insertions(+), 298 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 4f548a7185..b7fff84b25 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -17,4 +17,63 @@ struct idpf_adapter {
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
 };
 
+struct idpf_chunks_info {
+	uint32_t tx_start_qid;
+	uint32_t rx_start_qid;
+	/* Valid only if split queue model */
+	uint32_t tx_compl_start_qid;
+	uint32_t rx_buf_start_qid;
+
+	uint64_t tx_qtail_start;
+	uint32_t tx_qtail_spacing;
+	uint64_t rx_qtail_start;
+	uint32_t rx_qtail_spacing;
+	uint64_t tx_compl_qtail_start;
+	uint32_t tx_compl_qtail_spacing;
+	uint64_t rx_buf_qtail_start;
+	uint32_t rx_buf_qtail_spacing;
+};
+
+struct idpf_vport {
+	struct idpf_adapter *adapter; /* Backreference to associated adapter */
+	struct virtchnl2_create_vport *vport_info; /* virtchnl response info handling */
+	uint16_t sw_idx; /* SW index in adapter->vports[]*/
+	uint16_t vport_id;
+	uint32_t txq_model;
+	uint32_t rxq_model;
+	uint16_t num_tx_q;
+	/* valid only if txq_model is split Q */
+	uint16_t num_tx_complq;
+	uint16_t num_rx_q;
+	/* valid only if rxq_model is split Q */
+	uint16_t num_rx_bufq;
+
+	uint16_t max_mtu;
+	uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+
+	enum virtchnl_rss_algorithm rss_algorithm;
+	uint16_t rss_key_size;
+	uint16_t rss_lut_size;
+
+	void *dev_data; /* Pointer to the device data */
+	uint16_t max_pkt_len; /* Maximum packet length */
+
+	/* RSS info */
+	uint32_t *rss_lut;
+	uint8_t *rss_key;
+	uint64_t rss_hf;
+
+	/* MSIX info*/
+	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
+	uint16_t max_vectors;
+	struct virtchnl2_alloc_vectors *recv_vectors;
+
+	/* Chunk info */
+	struct idpf_chunks_info chunks_info;
+
+	uint16_t devarg_id;
+
+	bool stopped;
+};
+
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 1b13d081a7..72a5c9f39b 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -275,11 +275,13 @@ static int
 idpf_init_rss(struct idpf_vport *vport)
 {
 	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
 	uint16_t i, nb_q, lut_size;
 	int ret = 0;
 
-	rss_conf = &vport->dev_data->dev_conf.rx_adv_conf.rss_conf;
-	nb_q = vport->dev_data->nb_rx_queues;
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
 
 	vport->rss_key = rte_zmalloc("rss_key",
 				     vport->rss_key_size, 0);
@@ -466,7 +468,7 @@ idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 	}
 	vport->qv_map = qv_map;
 
-	if (idpf_vc_config_irq_map_unmap(vport, true) != 0) {
+	if (idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, true) != 0) {
 		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
 		goto config_irq_map_err;
 	}
@@ -582,7 +584,7 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_stop_queues(dev);
 
-	idpf_vc_config_irq_map_unmap(vport, false);
+	idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, false);
 
 	if (vport->recv_vectors != NULL)
 		idpf_vc_dealloc_vectors(vport);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index e956fa989c..8c29019667 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -74,71 +74,12 @@ enum idpf_vc_result {
 	IDPF_MSG_CMD,      /* Read async command result */
 };
 
-struct idpf_chunks_info {
-	uint32_t tx_start_qid;
-	uint32_t rx_start_qid;
-	/* Valid only if split queue model */
-	uint32_t tx_compl_start_qid;
-	uint32_t rx_buf_start_qid;
-
-	uint64_t tx_qtail_start;
-	uint32_t tx_qtail_spacing;
-	uint64_t rx_qtail_start;
-	uint32_t rx_qtail_spacing;
-	uint64_t tx_compl_qtail_start;
-	uint32_t tx_compl_qtail_spacing;
-	uint64_t rx_buf_qtail_start;
-	uint32_t rx_buf_qtail_spacing;
-};
-
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
 	uint16_t idx;       /* index in adapter->vports[]*/
 };
 
-struct idpf_vport {
-	struct idpf_adapter *adapter; /* Backreference to associated adapter */
-	struct virtchnl2_create_vport *vport_info; /* virtchnl response info handling */
-	uint16_t sw_idx; /* SW index in adapter->vports[]*/
-	uint16_t vport_id;
-	uint32_t txq_model;
-	uint32_t rxq_model;
-	uint16_t num_tx_q;
-	/* valid only if txq_model is split Q */
-	uint16_t num_tx_complq;
-	uint16_t num_rx_q;
-	/* valid only if rxq_model is split Q */
-	uint16_t num_rx_bufq;
-
-	uint16_t max_mtu;
-	uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
-
-	enum virtchnl_rss_algorithm rss_algorithm;
-	uint16_t rss_key_size;
-	uint16_t rss_lut_size;
-
-	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
-	uint16_t max_pkt_len; /* Maximum packet length */
-
-	/* RSS info */
-	uint32_t *rss_lut;
-	uint8_t *rss_key;
-	uint64_t rss_hf;
-
-	/* MSIX info*/
-	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
-	uint16_t max_vectors;
-	struct virtchnl2_alloc_vectors *recv_vectors;
-
-	/* Chunk info */
-	struct idpf_chunks_info chunks_info;
-
-	uint16_t devarg_id;
-
-	bool stopped;
-};
-
 /* Struct used when parse driver specific devargs */
 struct idpf_devargs {
 	uint16_t req_vports[IDPF_MAX_VPORT_NUM];
@@ -242,15 +183,12 @@ int idpf_vc_destroy_vport(struct idpf_vport *vport);
 int idpf_vc_set_rss_key(struct idpf_vport *vport);
 int idpf_vc_set_rss_lut(struct idpf_vport *vport);
 int idpf_vc_set_rss_hash(struct idpf_vport *vport);
-int idpf_vc_config_rxqs(struct idpf_vport *vport);
-int idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id);
-int idpf_vc_config_txqs(struct idpf_vport *vport);
-int idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id);
 int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
 		      bool rx, bool on);
 int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
 int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, bool map);
+int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+				 uint16_t nb_rxq, bool map);
 int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
 int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
 int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 4845f2ea0a..918d156e03 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1066,7 +1066,7 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		dev->data->rx_queues[rx_queue_id];
 	int err = 0;
 
-	err = idpf_vc_config_rxq(vport, rx_queue_id);
+	err = idpf_vc_config_rxq(vport, rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
 		return err;
@@ -1117,7 +1117,7 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 		dev->data->tx_queues[tx_queue_id];
 	int err = 0;
 
-	err = idpf_vc_config_txq(vport, tx_queue_id);
+	err = idpf_vc_config_txq(vport, txq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
 		return err;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 047fc03614..9417651b3f 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -243,6 +243,9 @@ void idpf_stop_queues(struct rte_eth_dev *dev);
 void idpf_set_rx_function(struct rte_eth_dev *dev);
 void idpf_set_tx_function(struct rte_eth_dev *dev);
 
+int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
+int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+
 #define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
 /* Helper function to convert a 32b nanoseconds timestamp to 64b. */
 static inline uint64_t
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index ca481bb915..633d3295d3 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -742,121 +742,9 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 
 #define IDPF_RX_BUF_STRIDE		64
 int
-idpf_vc_config_rxqs(struct idpf_vport *vport)
-{
-	struct idpf_adapter *base = vport->adapter;
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
-	struct idpf_rx_queue **rxq =
-		(struct idpf_rx_queue **)vport->dev_data->rx_queues;
-	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
-	struct virtchnl2_rxq_info *rxq_info;
-	struct idpf_cmd_info args;
-	uint16_t total_qs, num_qs;
-	int size, i, j;
-	int err = 0;
-	int k = 0;
-
-	total_qs = vport->num_rx_q + vport->num_rx_bufq;
-	while (total_qs) {
-		if (total_qs > adapter->max_rxq_per_msg) {
-			num_qs = adapter->max_rxq_per_msg;
-			total_qs -= adapter->max_rxq_per_msg;
-		} else {
-			num_qs = total_qs;
-			total_qs = 0;
-		}
-
-		size = sizeof(*vc_rxqs) + (num_qs - 1) *
-			sizeof(struct virtchnl2_rxq_info);
-		vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
-		if (vc_rxqs == NULL) {
-			PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
-			err = -ENOMEM;
-			break;
-		}
-		vc_rxqs->vport_id = vport->vport_id;
-		vc_rxqs->num_qinfo = num_qs;
-		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-			for (i = 0; i < num_qs; i++, k++) {
-				rxq_info = &vc_rxqs->qinfo[i];
-				rxq_info->dma_ring_addr = rxq[k]->rx_ring_phys_addr;
-				rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-				rxq_info->queue_id = rxq[k]->queue_id;
-				rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-				rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
-				rxq_info->max_pkt_size = vport->max_pkt_len;
-
-				rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
-				rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-				rxq_info->ring_len = rxq[k]->nb_rx_desc;
-			}
-		} else {
-			for (i = 0; i < num_qs / 3; i++, k++) {
-				/* Rx queue */
-				rxq_info = &vc_rxqs->qinfo[i * 3];
-				rxq_info->dma_ring_addr =
-					rxq[k]->rx_ring_phys_addr;
-				rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-				rxq_info->queue_id = rxq[k]->queue_id;
-				rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-				rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
-				rxq_info->max_pkt_size = vport->max_pkt_len;
-
-				rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-				rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-				rxq_info->ring_len = rxq[k]->nb_rx_desc;
-				rxq_info->rx_bufq1_id = rxq[k]->bufq1->queue_id;
-				rxq_info->rx_bufq2_id = rxq[k]->bufq2->queue_id;
-				rxq_info->rx_buffer_low_watermark = 64;
-
-				/* Buffer queue */
-				for (j = 1; j <= IDPF_RX_BUFQ_PER_GRP; j++) {
-					struct idpf_rx_queue *bufq = j == 1 ?
-						rxq[k]->bufq1 : rxq[k]->bufq2;
-					rxq_info = &vc_rxqs->qinfo[i * 3 + j];
-					rxq_info->dma_ring_addr =
-						bufq->rx_ring_phys_addr;
-					rxq_info->type =
-						VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-					rxq_info->queue_id = bufq->queue_id;
-					rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-					rxq_info->data_buffer_size = bufq->rx_buf_len;
-					rxq_info->desc_ids =
-						VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-					rxq_info->ring_len = bufq->nb_rx_desc;
-
-					rxq_info->buffer_notif_stride =
-						IDPF_RX_BUF_STRIDE;
-					rxq_info->rx_buffer_low_watermark = 64;
-				}
-			}
-		}
-		memset(&args, 0, sizeof(args));
-		args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
-		args.in_args = (uint8_t *)vc_rxqs;
-		args.in_args_size = size;
-		args.out_buffer = base->mbx_resp;
-		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-		err = idpf_execute_vc_cmd(base, &args);
-		rte_free(vc_rxqs);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
-			break;
-		}
-	}
-
-	return err;
-}
-
-int
-idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
+idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 {
 	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_rx_queue **rxq =
-		(struct idpf_rx_queue **)vport->dev_data->rx_queues;
 	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
 	struct virtchnl2_rxq_info *rxq_info;
 	struct idpf_cmd_info args;
@@ -880,39 +768,38 @@ idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
 	vc_rxqs->num_qinfo = num_qs;
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq[rxq_id]->rx_ring_phys_addr;
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
 		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq[rxq_id]->queue_id;
+		rxq_info->queue_id = rxq->queue_id;
 		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-		rxq_info->data_buffer_size = rxq[rxq_id]->rx_buf_len;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
 		rxq_info->max_pkt_size = vport->max_pkt_len;
 
 		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
 		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
 
-		rxq_info->ring_len = rxq[rxq_id]->nb_rx_desc;
+		rxq_info->ring_len = rxq->nb_rx_desc;
 	}  else {
 		/* Rx queue */
 		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq[rxq_id]->rx_ring_phys_addr;
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
 		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq[rxq_id]->queue_id;
+		rxq_info->queue_id = rxq->queue_id;
 		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		rxq_info->data_buffer_size = rxq[rxq_id]->rx_buf_len;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
 		rxq_info->max_pkt_size = vport->max_pkt_len;
 
 		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
 		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
 
-		rxq_info->ring_len = rxq[rxq_id]->nb_rx_desc;
-		rxq_info->rx_bufq1_id = rxq[rxq_id]->bufq1->queue_id;
-		rxq_info->rx_bufq2_id = rxq[rxq_id]->bufq2->queue_id;
+		rxq_info->ring_len = rxq->nb_rx_desc;
+		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
+		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
 		rxq_info->rx_buffer_low_watermark = 64;
 
 		/* Buffer queue */
 		for (i = 1; i <= IDPF_RX_BUFQ_PER_GRP; i++) {
-			struct idpf_rx_queue *bufq =
-				i == 1 ? rxq[rxq_id]->bufq1 : rxq[rxq_id]->bufq2;
+			struct idpf_rx_queue *bufq = i == 1 ? rxq->bufq1 : rxq->bufq2;
 			rxq_info = &vc_rxqs->qinfo[i];
 			rxq_info->dma_ring_addr = bufq->rx_ring_phys_addr;
 			rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
@@ -943,99 +830,9 @@ idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
 }
 
 int
-idpf_vc_config_txqs(struct idpf_vport *vport)
-{
-	struct idpf_adapter *base = vport->adapter;
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
-	struct idpf_tx_queue **txq =
-		(struct idpf_tx_queue **)vport->dev_data->tx_queues;
-	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
-	struct virtchnl2_txq_info *txq_info;
-	struct idpf_cmd_info args;
-	uint16_t total_qs, num_qs;
-	int size, i;
-	int err = 0;
-	int k = 0;
-
-	total_qs = vport->num_tx_q + vport->num_tx_complq;
-	while (total_qs) {
-		if (total_qs > adapter->max_txq_per_msg) {
-			num_qs = adapter->max_txq_per_msg;
-			total_qs -= adapter->max_txq_per_msg;
-		} else {
-			num_qs = total_qs;
-			total_qs = 0;
-		}
-		size = sizeof(*vc_txqs) + (num_qs - 1) *
-			sizeof(struct virtchnl2_txq_info);
-		vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
-		if (vc_txqs == NULL) {
-			PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
-			err = -ENOMEM;
-			break;
-		}
-		vc_txqs->vport_id = vport->vport_id;
-		vc_txqs->num_qinfo = num_qs;
-		if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-			for (i = 0; i < num_qs; i++, k++) {
-				txq_info = &vc_txqs->qinfo[i];
-				txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
-				txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-				txq_info->queue_id = txq[k]->queue_id;
-				txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-				txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
-				txq_info->ring_len = txq[k]->nb_tx_desc;
-			}
-		} else {
-			for (i = 0; i < num_qs / 2; i++, k++) {
-				/* txq info */
-				txq_info = &vc_txqs->qinfo[2 * i];
-				txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
-				txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-				txq_info->queue_id = txq[k]->queue_id;
-				txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-				txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-				txq_info->ring_len = txq[k]->nb_tx_desc;
-				txq_info->tx_compl_queue_id =
-					txq[k]->complq->queue_id;
-				txq_info->relative_queue_id = txq_info->queue_id;
-
-				/* tx completion queue info */
-				txq_info = &vc_txqs->qinfo[2 * i + 1];
-				txq_info->dma_ring_addr =
-					txq[k]->complq->tx_ring_phys_addr;
-				txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-				txq_info->queue_id = txq[k]->complq->queue_id;
-				txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-				txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-				txq_info->ring_len = txq[k]->complq->nb_tx_desc;
-			}
-		}
-
-		memset(&args, 0, sizeof(args));
-		args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
-		args.in_args = (uint8_t *)vc_txqs;
-		args.in_args_size = size;
-		args.out_buffer = base->mbx_resp;
-		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-		err = idpf_execute_vc_cmd(base, &args);
-		rte_free(vc_txqs);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
-			break;
-		}
-	}
-
-	return err;
-}
-
-int
-idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id)
+idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 {
 	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_tx_queue **txq =
-		(struct idpf_tx_queue **)vport->dev_data->tx_queues;
 	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
 	struct virtchnl2_txq_info *txq_info;
 	struct idpf_cmd_info args;
@@ -1060,32 +857,32 @@ idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id)
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq[txq_id]->tx_ring_phys_addr;
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
 		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq[txq_id]->queue_id;
+		txq_info->queue_id = txq->queue_id;
 		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
 		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
-		txq_info->ring_len = txq[txq_id]->nb_tx_desc;
+		txq_info->ring_len = txq->nb_tx_desc;
 	} else {
 		/* txq info */
 		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq[txq_id]->tx_ring_phys_addr;
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
 		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq[txq_id]->queue_id;
+		txq_info->queue_id = txq->queue_id;
 		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
 		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq[txq_id]->nb_tx_desc;
-		txq_info->tx_compl_queue_id = txq[txq_id]->complq->queue_id;
+		txq_info->ring_len = txq->nb_tx_desc;
+		txq_info->tx_compl_queue_id = txq->complq->queue_id;
 		txq_info->relative_queue_id = txq_info->queue_id;
 
 		/* tx completion queue info */
 		txq_info = &vc_txqs->qinfo[1];
-		txq_info->dma_ring_addr = txq[txq_id]->complq->tx_ring_phys_addr;
+		txq_info->dma_ring_addr = txq->complq->tx_ring_phys_addr;
 		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		txq_info->queue_id = txq[txq_id]->complq->queue_id;
+		txq_info->queue_id = txq->complq->queue_id;
 		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
 		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq[txq_id]->complq->nb_tx_desc;
+		txq_info->ring_len = txq->complq->nb_tx_desc;
 	}
 
 	memset(&args, 0, sizeof(args));
@@ -1104,12 +901,11 @@ idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id)
 }
 
 int
-idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, bool map)
+idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_queue_vector_maps *map_info;
 	struct virtchnl2_queue_vector *vecmap;
-	uint16_t nb_rxq = vport->dev_data->nb_rx_queues;
 	struct idpf_cmd_info args;
 	int len, i, err = 0;
 
-- 
2.26.2


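A second effect of this patch shows in the idpf_rxtx.c hunks: the per-queue
virtchnl helpers now take the queue object rather than an index, so the
common code no longer reaches into vport->dev_data->rx_queues[] itself. A
sketch of the resulting caller shape, assuming the idpf driver headers (it
mirrors the idpf_rx_queue_start() hunk above; error handling trimmed):

static int
start_one_rxq(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
	struct idpf_vport *vport = dev->data->dev_private;
	struct idpf_rx_queue *rxq = dev->data->rx_queues[rx_queue_id];

	/* The helper receives the queue pointer directly now. */
	return idpf_vc_config_rxq(vport, rxq);
}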

* [PATCH v6 03/19] common/idpf: add virtual channel functions
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
  2023-02-03  9:43       ` [PATCH v6 01/19] common/idpf: add adapter structure beilei.xing
  2023-02-03  9:43       ` [PATCH v6 02/19] common/idpf: add vport structure beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 04/19] common/idpf: introduce adapter init and deinit beilei.xing
                         ` (17 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Move most of the virtual channel functions to idpf common module.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
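All of the virtchnl functions moved below share one calling convention: fill
a struct idpf_cmd_info and hand it to idpf_execute_vc_cmd(), which claims the
pend_cmd slot, sends on the control queue, and collects the response. A
hedged sketch of a caller, assuming the common headers in this patch (the
wrapper name send_simple_vc_cmd is illustrative, not part of the patch):

/* Issue one virtchnl op and wait for its response. */
static int
send_simple_vc_cmd(struct idpf_adapter *adapter, uint32_t opcode,
		   uint8_t *req, uint32_t req_size)
{
	struct idpf_cmd_info args;

	memset(&args, 0, sizeof(args));
	args.ops = opcode;			/* e.g. VIRTCHNL2_OP_ENABLE_VPORT */
	args.in_args = req;			/* request payload */
	args.in_args_size = req_size;
	args.out_buffer = adapter->mbx_resp;	/* preallocated response buffer */
	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;

	return idpf_execute_vc_cmd(adapter, &args);
}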
 drivers/common/idpf/base/idpf_controlq_api.h |   4 -
 drivers/common/idpf/base/meson.build         |   2 +-
 drivers/common/idpf/idpf_common_device.c     |   8 +
 drivers/common/idpf/idpf_common_device.h     |  61 ++
 drivers/common/idpf/idpf_common_logs.h       |  23 +
 drivers/common/idpf/idpf_common_virtchnl.c   | 815 ++++++++++++++++++
 drivers/common/idpf/idpf_common_virtchnl.h   |  48 ++
 drivers/common/idpf/meson.build              |   5 +
 drivers/common/idpf/version.map              |  20 +-
 drivers/net/idpf/idpf_ethdev.c               |   9 +-
 drivers/net/idpf/idpf_ethdev.h               |  85 +-
 drivers/net/idpf/idpf_rxtx.c                 |   8 +-
 drivers/net/idpf/idpf_vchnl.c                | 817 +------------------
 13 files changed, 986 insertions(+), 919 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_device.c
 create mode 100644 drivers/common/idpf/idpf_common_logs.h
 create mode 100644 drivers/common/idpf/idpf_common_virtchnl.c
 create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h

diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 68ac0cfe70..891a0f10f6 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -177,7 +177,6 @@ void idpf_ctlq_remove(struct idpf_hw *hw,
 		      struct idpf_ctlq_info *cq);
 
 /* Sends messages to HW and will also free the buffer*/
-__rte_internal
 int idpf_ctlq_send(struct idpf_hw *hw,
 		   struct idpf_ctlq_info *cq,
 		   u16 num_q_msg,
@@ -186,17 +185,14 @@ int idpf_ctlq_send(struct idpf_hw *hw,
 /* Receives messages and called by interrupt handler/polling
  * initiated by app/process. Also caller is supposed to free the buffers
  */
-__rte_internal
 int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 		   struct idpf_ctlq_msg *q_msg);
 
 /* Reclaims send descriptors on HW write back */
-__rte_internal
 int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 		       struct idpf_ctlq_msg *msg_status[]);
 
 /* Indicate RX buffers are done being processed */
-__rte_internal
 int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw,
 			    struct idpf_ctlq_info *cq,
 			    u16 *buff_count,
diff --git a/drivers/common/idpf/base/meson.build b/drivers/common/idpf/base/meson.build
index 183587b51a..dc4b93c198 100644
--- a/drivers/common/idpf/base/meson.build
+++ b/drivers/common/idpf/base/meson.build
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2022 Intel Corporation
 
-sources = files(
+sources += files(
         'idpf_common.c',
         'idpf_controlq.c',
         'idpf_controlq_setup.c',
diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
new file mode 100644
index 0000000000..5062780362
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_log.h>
+#include <idpf_common_device.h>
+
+RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index b7fff84b25..a7537281d1 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -7,6 +7,12 @@
 
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
+#include <idpf_common_logs.h>
+
+#define IDPF_CTLQ_LEN		64
+#define IDPF_DFLT_MBX_BUF_SIZE	4096
+
+#define IDPF_MAX_PKT_TYPE	1024
 
 struct idpf_adapter {
 	struct idpf_hw hw;
@@ -76,4 +82,59 @@ struct idpf_vport {
 	bool stopped;
 };
 
+/* Message type read in virtual channel from PF */
+enum idpf_vc_result {
+	IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
+	IDPF_MSG_NON,      /* Read nothing from admin queue */
+	IDPF_MSG_SYS,      /* Read system msg from admin queue */
+	IDPF_MSG_CMD,      /* Read async command result */
+};
+
+/* structure used for sending and checking response of virtchnl ops */
+struct idpf_cmd_info {
+	uint32_t ops;
+	uint8_t *in_args;       /* buffer for sending */
+	uint32_t in_args_size;  /* buffer size for sending */
+	uint8_t *out_buffer;    /* buffer for response */
+	uint32_t out_size;      /* buffer size for response */
+};
+
+/* Notify that the current command is done. Only call after
+ * atomic_set_cmd() has succeeded.
+ */
+static inline void
+notify_cmd(struct idpf_adapter *adapter, int msg_ret)
+{
+	adapter->cmd_retval = msg_ret;
+	/* Return value may be checked in another thread; need to ensure coherence. */
+	rte_wmb();
+	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
+}
+
+/* Clear the current command. Only call after
+ * atomic_set_cmd() has succeeded.
+ */
+static inline void
+clear_cmd(struct idpf_adapter *adapter)
+{
+	/* Return value may be checked in another thread; need to ensure coherence. */
+	rte_wmb();
+	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
+	adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check there is pending cmd in execution. If none, set new command. */
+static inline bool
+atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
+{
+	uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
+	bool ret = __atomic_compare_exchange(&adapter->pend_cmd, &op_unk, &ops,
+					    0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
+
+	if (!ret)
+		DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
+
+	return !ret;
+}
+
 #endif /* _IDPF_COMMON_DEVICE_H_ */
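The pend_cmd handshake just above is the mailbox's entire synchronization
story: atomic_set_cmd() claims a single in-flight slot with a
compare-exchange against VIRTCHNL2_OP_UNKNOWN, and notify_cmd()/clear_cmd()
release it behind a write barrier. A standalone sketch of the same gate in
C11 atomics (illustrative only; the patch uses the GCC __atomic builtins):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define OP_UNKNOWN	0u

static _Atomic uint32_t pend_cmd = OP_UNKNOWN;

/* Claim the single in-flight command slot; fails if one is pending. */
static bool
claim_cmd(uint32_t op)
{
	uint32_t expected = OP_UNKNOWN;

	return atomic_compare_exchange_strong(&pend_cmd, &expected, op);
}

/* Release the slot once the response has been consumed. */
static void
release_cmd(void)
{
	atomic_store_explicit(&pend_cmd, OP_UNKNOWN, memory_order_release);
}

int main(void)
{
	if (claim_cmd(42))	/* slot was free, command now pending */
		release_cmd();
	return 0;
}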
diff --git a/drivers/common/idpf/idpf_common_logs.h b/drivers/common/idpf/idpf_common_logs.h
new file mode 100644
index 0000000000..fe36562769
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_logs.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_LOGS_H_
+#define _IDPF_COMMON_LOGS_H_
+
+#include <rte_log.h>
+
+extern int idpf_common_logtype;
+
+#define DRV_LOG_RAW(level, ...)					\
+	rte_log(RTE_LOG_ ## level,				\
+		idpf_common_logtype,				\
+		RTE_FMT("%s(): "				\
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n",	\
+			__func__,				\
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define DRV_LOG(level, fmt, args...)		\
+	DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _IDPF_COMMON_LOGS_H_ */
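DRV_LOG above prefixes every message with the calling function by splicing
the caller's format string into a larger one (RTE_FMT_HEAD/RTE_FMT_TAIL), so
rte_log() still receives a single format string. A simplified stand-in
showing the effect, in the same GNU variadic-macro style the patch uses
(printf in place of rte_log):

#include <stdio.h>

/* Simplified DRV_LOG stand-in: prepend "<func>(): ", append a newline. */
#define LOG(fmt, ...) \
	printf("%s(): " fmt "\n", __func__, ##__VA_ARGS__)

static void
demo(void)
{
	LOG("incomplete cmd %d", 42);	/* prints: demo(): incomplete cmd 42 */
}

int main(void)
{
	demo();
	return 0;
}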
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
new file mode 100644
index 0000000000..f86c1abf0f
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -0,0 +1,815 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <idpf_common_virtchnl.h>
+#include <idpf_common_logs.h>
+
+static int
+idpf_vc_clean(struct idpf_adapter *adapter)
+{
+	struct idpf_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
+	uint16_t num_q_msg = IDPF_CTLQ_LEN;
+	struct idpf_dma_mem *dma_mem;
+	int err;
+	uint32_t i;
+
+	for (i = 0; i < 10; i++) {
+		err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
+		msleep(20);
+		if (num_q_msg > 0)
+			break;
+	}
+	if (err != 0)
+		return err;
+
+	/* Empty queue is not an error */
+	for (i = 0; i < num_q_msg; i++) {
+		dma_mem = q_msg[i]->ctx.indirect.payload;
+		if (dma_mem != NULL) {
+			idpf_free_dma_mem(&adapter->hw, dma_mem);
+			rte_free(dma_mem);
+		}
+		rte_free(q_msg[i]);
+	}
+
+	return 0;
+}
+
+static int
+idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
+		 uint16_t msg_size, uint8_t *msg)
+{
+	struct idpf_ctlq_msg *ctlq_msg;
+	struct idpf_dma_mem *dma_mem;
+	int err;
+
+	err = idpf_vc_clean(adapter);
+	if (err != 0)
+		goto err;
+
+	ctlq_msg = rte_zmalloc(NULL, sizeof(struct idpf_ctlq_msg), 0);
+	if (ctlq_msg == NULL) {
+		err = -ENOMEM;
+		goto err;
+	}
+
+	dma_mem = rte_zmalloc(NULL, sizeof(struct idpf_dma_mem), 0);
+	if (dma_mem == NULL) {
+		err = -ENOMEM;
+		goto dma_mem_error;
+	}
+
+	dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
+	idpf_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
+	if (dma_mem->va == NULL) {
+		err = -ENOMEM;
+		goto dma_alloc_error;
+	}
+
+	memcpy(dma_mem->va, msg, msg_size);
+
+	ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_pf;
+	ctlq_msg->func_id = 0;
+	ctlq_msg->data_len = msg_size;
+	ctlq_msg->cookie.mbx.chnl_opcode = op;
+	ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
+	ctlq_msg->ctx.indirect.payload = dma_mem;
+
+	err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
+	if (err != 0)
+		goto send_error;
+
+	return 0;
+
+send_error:
+	idpf_free_dma_mem(&adapter->hw, dma_mem);
+dma_alloc_error:
+	rte_free(dma_mem);
+dma_mem_error:
+	rte_free(ctlq_msg);
+err:
+	return err;
+}
+
+static enum idpf_vc_result
+idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
+		      uint8_t *buf)
+{
+	struct idpf_hw *hw = &adapter->hw;
+	struct idpf_ctlq_msg ctlq_msg;
+	struct idpf_dma_mem *dma_mem = NULL;
+	enum idpf_vc_result result = IDPF_MSG_NON;
+	uint32_t opcode;
+	uint16_t pending = 1;
+	int ret;
+
+	ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+	if (ret != 0) {
+		DRV_LOG(DEBUG, "Can't read msg from AQ");
+		if (ret != -ENOMSG)
+			result = IDPF_MSG_ERR;
+		return result;
+	}
+
+	rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
+
+	opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+	adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+	DRV_LOG(DEBUG, "CQ from CP carries opcode %u, retval %d",
+		opcode, adapter->cmd_retval);
+
+	if (opcode == VIRTCHNL2_OP_EVENT) {
+		struct virtchnl2_event *ve = ctlq_msg.ctx.indirect.payload->va;
+
+		result = IDPF_MSG_SYS;
+		switch (ve->event) {
+		case VIRTCHNL2_EVENT_LINK_CHANGE:
+			/* TBD */
+			break;
+		default:
+			DRV_LOG(ERR, "%s: Unknown event %d from CP",
+				__func__, ve->event);
+			break;
+		}
+	} else {
+		/* async reply msg on command issued by pf previously */
+		result = IDPF_MSG_CMD;
+		if (opcode != adapter->pend_cmd) {
+			DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+				adapter->pend_cmd, opcode);
+			result = IDPF_MSG_ERR;
+		}
+	}
+
+	if (ctlq_msg.data_len != 0)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret != 0 && dma_mem != NULL)
+		idpf_free_dma_mem(hw, dma_mem);
+
+	return result;
+}
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS  10
+
+int
+idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
+		     uint8_t *buf)
+{
+	int err = 0;
+	int i = 0;
+	int ret;
+
+	do {
+		ret = idpf_read_msg_from_cp(adapter, buf_len, buf);
+		if (ret == IDPF_MSG_CMD)
+			break;
+		rte_delay_ms(ASQ_DELAY_MS);
+	} while (i++ < MAX_TRY_TIMES);
+	if (i >= MAX_TRY_TIMES ||
+	    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+		err = -EBUSY;
+		DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+			adapter->cmd_retval, ops);
+	}
+
+	return err;
+}
+
+int
+idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
+{
+	int err = 0;
+	int i = 0;
+	int ret;
+
+	if (atomic_set_cmd(adapter, args->ops))
+		return -EINVAL;
+
+	ret = idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args);
+	if (ret != 0) {
+		DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+		clear_cmd(adapter);
+		return ret;
+	}
+
+	switch (args->ops) {
+	case VIRTCHNL_OP_VERSION:
+	case VIRTCHNL2_OP_GET_CAPS:
+	case VIRTCHNL2_OP_CREATE_VPORT:
+	case VIRTCHNL2_OP_DESTROY_VPORT:
+	case VIRTCHNL2_OP_SET_RSS_KEY:
+	case VIRTCHNL2_OP_SET_RSS_LUT:
+	case VIRTCHNL2_OP_SET_RSS_HASH:
+	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+	case VIRTCHNL2_OP_ENABLE_QUEUES:
+	case VIRTCHNL2_OP_DISABLE_QUEUES:
+	case VIRTCHNL2_OP_ENABLE_VPORT:
+	case VIRTCHNL2_OP_DISABLE_VPORT:
+	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
+	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
+	case VIRTCHNL2_OP_ALLOC_VECTORS:
+	case VIRTCHNL2_OP_DEALLOC_VECTORS:
+		/* for init virtchnl ops, need to poll the response */
+		err = idpf_vc_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
+		clear_cmd(adapter);
+		break;
+	case VIRTCHNL2_OP_GET_PTYPE_INFO:
+		/* the response arrives as multiple messages,
+		 * so do not handle it here.
+		 */
+		break;
+	default:
+		/* For other virtchnl ops in running time,
+		 * wait for the cmd done flag.
+		 */
+		do {
+			if (adapter->pend_cmd == VIRTCHNL_OP_UNKNOWN)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+			/* If no msg was read, or only a sys event was read, continue */
+		} while (i++ < MAX_TRY_TIMES);
+		/* If no response is received, clear the command */
+		if (i >= MAX_TRY_TIMES  ||
+		    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -EBUSY;
+			DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+				adapter->cmd_retval, args->ops);
+			clear_cmd(adapter);
+		}
+		break;
+	}
+
+	return err;
+}
+
+int
+idpf_vc_check_api_version(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_version_info version, *pver;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&version, 0, sizeof(struct virtchnl_version_info));
+	version.major = VIRTCHNL2_VERSION_MAJOR_2;
+	version.minor = VIRTCHNL2_VERSION_MINOR_0;
+
+	args.ops = VIRTCHNL_OP_VERSION;
+	args.in_args = (uint8_t *)&version;
+	args.in_args_size = sizeof(version);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR,
+			"Failed to execute command of VIRTCHNL_OP_VERSION");
+		return err;
+	}
+
+	pver = (struct virtchnl2_version_info *)args.out_buffer;
+	adapter->virtchnl_version = *pver;
+
+	if (adapter->virtchnl_version.major != VIRTCHNL2_VERSION_MAJOR_2 ||
+	    adapter->virtchnl_version.minor != VIRTCHNL2_VERSION_MINOR_0) {
+		DRV_LOG(ERR, "VIRTCHNL API version mismatch:(%u.%u)-(%u.%u)",
+			adapter->virtchnl_version.major,
+			adapter->virtchnl_version.minor,
+			VIRTCHNL2_VERSION_MAJOR_2,
+			VIRTCHNL2_VERSION_MINOR_0);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int
+idpf_vc_get_caps(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_get_capabilities caps_msg;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
+
+	caps_msg.csum_caps =
+		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4          |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP     |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP     |
+		VIRTCHNL2_CAP_TX_CSUM_GENERIC          |
+		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4          |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP     |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP     |
+		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
+
+	caps_msg.rss_caps =
+		VIRTCHNL2_CAP_RSS_IPV4_TCP             |
+		VIRTCHNL2_CAP_RSS_IPV4_UDP             |
+		VIRTCHNL2_CAP_RSS_IPV4_SCTP            |
+		VIRTCHNL2_CAP_RSS_IPV4_OTHER           |
+		VIRTCHNL2_CAP_RSS_IPV6_TCP             |
+		VIRTCHNL2_CAP_RSS_IPV6_UDP             |
+		VIRTCHNL2_CAP_RSS_IPV6_SCTP            |
+		VIRTCHNL2_CAP_RSS_IPV6_OTHER           |
+		VIRTCHNL2_CAP_RSS_IPV4_AH              |
+		VIRTCHNL2_CAP_RSS_IPV4_ESP             |
+		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP          |
+		VIRTCHNL2_CAP_RSS_IPV6_AH              |
+		VIRTCHNL2_CAP_RSS_IPV6_ESP             |
+		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
+
+	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR;
+
+	args.ops = VIRTCHNL2_OP_GET_CAPS;
+	args.in_args = (uint8_t *)&caps_msg;
+	args.in_args_size = sizeof(caps_msg);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR,
+			"Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
+		return err;
+	}
+
+	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
+
+	return 0;
+}
+
+int
+idpf_vc_create_vport(struct idpf_vport *vport,
+		     struct virtchnl2_create_vport *vport_req_info)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_create_vport vport_msg;
+	struct idpf_cmd_info args;
+	int err = -1;
+
+	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
+	vport_msg.vport_type = vport_req_info->vport_type;
+	vport_msg.txq_model = vport_req_info->txq_model;
+	vport_msg.rxq_model = vport_req_info->rxq_model;
+	vport_msg.num_tx_q = vport_req_info->num_tx_q;
+	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
+	vport_msg.num_rx_q = vport_req_info->num_rx_q;
+	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
+	args.in_args = (uint8_t *)&vport_msg;
+	args.in_args_size = sizeof(vport_msg);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR,
+			"Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
+		return err;
+	}
+
+	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+	return 0;
+}
+
+int
+idpf_vc_destroy_vport(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport vc_vport;
+	struct idpf_cmd_info args;
+	int err;
+
+	vc_vport.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
+	args.in_args = (uint8_t *)&vc_vport;
+	args.in_args_size = sizeof(vc_vport);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
+
+	return err;
+}
+
+int
+idpf_vc_set_rss_key(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_key *rss_key;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
+		(vport->rss_key_size - 1);
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (rss_key == NULL)
+		return -ENOMEM;
+
+	rss_key->vport_id = vport->vport_id;
+	rss_key->key_len = vport->rss_key_size;
+	rte_memcpy(rss_key->key, vport->rss_key,
+		   sizeof(rss_key->key[0]) * vport->rss_key_size);
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
+	args.in_args = (uint8_t *)rss_key;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+int
+idpf_vc_set_rss_lut(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_lut *rss_lut;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
+		(vport->rss_lut_size - 1);
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (rss_lut == NULL)
+		return -ENOMEM;
+
+	rss_lut->vport_id = vport->vport_id;
+	rss_lut->lut_entries = vport->rss_lut_size;
+	rte_memcpy(rss_lut->lut, vport->rss_lut,
+		   sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
+	args.in_args = (uint8_t *)rss_lut;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+idpf_vc_set_rss_hash(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_hash rss_hash;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_hash, 0, sizeof(rss_hash));
+	rss_hash.ptype_groups = vport->rss_hf;
+	rss_hash.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
+	args.in_args = (uint8_t *)&rss_hash;
+	args.in_args_size = sizeof(rss_hash);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
+
+	return err;
+}
+
+int
+idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_queue_vector_maps *map_info;
+	struct virtchnl2_queue_vector *vecmap;
+	struct idpf_cmd_info args;
+	int len, i, err = 0;
+
+	len = sizeof(struct virtchnl2_queue_vector_maps) +
+		(nb_rxq - 1) * sizeof(struct virtchnl2_queue_vector);
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (map_info == NULL)
+		return -ENOMEM;
+
+	map_info->vport_id = vport->vport_id;
+	map_info->num_qv_maps = nb_rxq;
+	for (i = 0; i < nb_rxq; i++) {
+		vecmap = &map_info->qv_maps[i];
+		vecmap->queue_id = vport->qv_map[i].queue_id;
+		vecmap->vector_id = vport->qv_map[i].vector_id;
+		vecmap->itr_idx = VIRTCHNL2_ITR_IDX_0;
+		vecmap->queue_type = VIRTCHNL2_QUEUE_TYPE_RX;
+	}
+
+	args.ops = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR :
+		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR;
+	args.in_args = (uint8_t *)map_info;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUE_VECTOR",
+			map ? "MAP" : "UNMAP");
+
+	rte_free(map_info);
+	return err;
+}
+
+int
+idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_alloc_vectors *alloc_vec;
+	struct idpf_cmd_info args;
+	int err, len;
+
+	len = sizeof(struct virtchnl2_alloc_vectors) +
+		(num_vectors - 1) * sizeof(struct virtchnl2_vector_chunk);
+	alloc_vec = rte_zmalloc("alloc_vec", len, 0);
+	if (alloc_vec == NULL)
+		return -ENOMEM;
+
+	alloc_vec->num_vectors = num_vectors;
+
+	args.ops = VIRTCHNL2_OP_ALLOC_VECTORS;
+	args.in_args = (uint8_t *)alloc_vec;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
+
+	if (vport->recv_vectors == NULL) {
+		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
+		if (vport->recv_vectors == NULL) {
+			rte_free(alloc_vec);
+			return -ENOMEM;
+		}
+	}
+
+	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
+	rte_free(alloc_vec);
+	return err;
+}
+
+int
+idpf_vc_dealloc_vectors(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_alloc_vectors *alloc_vec;
+	struct virtchnl2_vector_chunks *vcs;
+	struct idpf_cmd_info args;
+	int err, len;
+
+	alloc_vec = vport->recv_vectors;
+	vcs = &alloc_vec->vchunks;
+
+	len = sizeof(struct virtchnl2_vector_chunks) +
+		(vcs->num_vchunks - 1) * sizeof(struct virtchnl2_vector_chunk);
+
+	args.ops = VIRTCHNL2_OP_DEALLOC_VECTORS;
+	args.in_args = (uint8_t *)vcs;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_DEALLOC_VECTORS");
+
+	return err;
+}
+
+static int
+idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
+			  uint32_t type, bool on)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_del_ena_dis_queues *queue_select;
+	struct virtchnl2_queue_chunk *queue_chunk;
+	struct idpf_cmd_info args;
+	int err, len;
+
+	len = sizeof(struct virtchnl2_del_ena_dis_queues);
+	queue_select = rte_zmalloc("queue_select", len, 0);
+	if (queue_select == NULL)
+		return -ENOMEM;
+
+	queue_chunk = queue_select->chunks.chunks;
+	queue_select->chunks.num_chunks = 1;
+	queue_select->vport_id = vport->vport_id;
+
+	queue_chunk->type = type;
+	queue_chunk->start_queue_id = qid;
+	queue_chunk->num_queues = 1;
+
+	args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
+		VIRTCHNL2_OP_DISABLE_QUEUES;
+	args.in_args = (uint8_t *)queue_select;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+			on ? "ENABLE" : "DISABLE");
+
+	rte_free(queue_select);
+	return err;
+}
+
+int
+idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
+		     bool rx, bool on)
+{
+	uint32_t type;
+	int err, queue_id;
+
+	/* switch txq/rxq */
+	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
+
+	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+		queue_id = vport->chunks_info.rx_start_qid + qid;
+	else
+		queue_id = vport->chunks_info.tx_start_qid + qid;
+	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+	if (err != 0)
+		return err;
+
+	/* switch tx completion queue */
+	if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+		queue_id = vport->chunks_info.tx_compl_start_qid + qid;
+		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+		if (err != 0)
+			return err;
+	}
+
+	/* switch rx buffer queue */
+	if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
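+		/* The split queue model pairs each Rx queue with two buffer
+		 * queues, hence the 2 * qid stride and the second switch below.
+		 */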
+		queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
+		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+		if (err != 0)
+			return err;
+		queue_id++;
+		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+		if (err != 0)
+			return err;
+	}
+
+	return err;
+}
+
+#define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
+int
+idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_del_ena_dis_queues *queue_select;
+	struct virtchnl2_queue_chunk *queue_chunk;
+	uint32_t type;
+	struct idpf_cmd_info args;
+	uint16_t num_chunks;
+	int err, len;
+
+	num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+		num_chunks++;
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+		num_chunks++;
+
+	len = sizeof(struct virtchnl2_del_ena_dis_queues) +
+		sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
+	queue_select = rte_zmalloc("queue_select", len, 0);
+	if (queue_select == NULL)
+		return -ENOMEM;
+
+	queue_chunk = queue_select->chunks.chunks;
+	queue_select->chunks.num_chunks = num_chunks;
+	queue_select->vport_id = vport->vport_id;
+
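+	/* The queue type values (TX 0, RX 1, TX_COMPLETION 2, RX_BUFFER 3)
+	 * double as indexes into the chunk array below.
+	 */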
+	type = VIRTCHNL2_QUEUE_TYPE_RX;
+	queue_chunk[type].type = type;
+	queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
+	queue_chunk[type].num_queues = vport->num_rx_q;
+
+	type = VIRTCHNL2_QUEUE_TYPE_TX;
+	queue_chunk[type].type = type;
+	queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
+	queue_chunk[type].num_queues = vport->num_tx_q;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+		queue_chunk[type].type = type;
+		queue_chunk[type].start_queue_id =
+			vport->chunks_info.rx_buf_start_qid;
+		queue_chunk[type].num_queues = vport->num_rx_bufq;
+	}
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+		queue_chunk[type].type = type;
+		queue_chunk[type].start_queue_id =
+			vport->chunks_info.tx_compl_start_qid;
+		queue_chunk[type].num_queues = vport->num_tx_complq;
+	}
+
+	args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
+		VIRTCHNL2_OP_DISABLE_QUEUES;
+	args.in_args = (uint8_t *)queue_select;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+			enable ? "ENABLE" : "DISABLE");
+
+	rte_free(queue_select);
+	return err;
+}
+
+int
+idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport vc_vport;
+	struct idpf_cmd_info args;
+	int err;
+
+	vc_vport.vport_id = vport->vport_id;
+	args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
+		VIRTCHNL2_OP_DISABLE_VPORT;
+	args.in_args = (uint8_t *)&vc_vport;
+	args.in_args_size = sizeof(vc_vport);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
+			enable ? "ENABLE" : "DISABLE");
+	}
+
+	return err;
+}
+
+int
+idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_get_ptype_info *ptype_info;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(struct virtchnl2_get_ptype_info);
+	ptype_info = rte_zmalloc("ptype_info", len, 0);
+	if (ptype_info == NULL)
+		return -ENOMEM;
+
+	ptype_info->start_ptype_id = 0;
+	ptype_info->num_ptypes = IDPF_MAX_PKT_TYPE;
+	args.ops = VIRTCHNL2_OP_GET_PTYPE_INFO;
+	args.in_args = (uint8_t *)ptype_info;
+	args.in_args_size = len;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_PTYPE_INFO");
+
+	rte_free(ptype_info);
+	return err;
+}
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
new file mode 100644
index 0000000000..e05619f4b4
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_VIRTCHNL_H_
+#define _IDPF_COMMON_VIRTCHNL_H_
+
+#include <idpf_common_device.h>
+
+__rte_internal
+int idpf_vc_check_api_version(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vc_get_caps(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vc_create_vport(struct idpf_vport *vport,
+			 struct virtchnl2_create_vport *vport_info);
+__rte_internal
+int idpf_vc_destroy_vport(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_key(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_lut(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_hash(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
+			 bool rx, bool on);
+__rte_internal
+int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
+__rte_internal
+int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
+__rte_internal
+int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+				 uint16_t nb_rxq, bool map);
+__rte_internal
+int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
+__rte_internal
+int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
+			 uint16_t buf_len, uint8_t *buf);
+__rte_internal
+int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
+			struct idpf_cmd_info *args);
+
+#endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index 77d997b4a7..d1578641ba 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -1,4 +1,9 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2022 Intel Corporation
 
+sources = files(
+    'idpf_common_device.c',
+    'idpf_common_virtchnl.c',
+)
+
 subdir('base')
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index bfb246c752..9bc0d2a909 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,10 +3,22 @@ INTERNAL {
 
 	idpf_ctlq_deinit;
 	idpf_ctlq_init;
-	idpf_ctlq_clean_sq;
-	idpf_ctlq_recv;
-	idpf_ctlq_send;
-	idpf_ctlq_post_rx_buffs;
+	idpf_execute_vc_cmd;
+	idpf_vc_alloc_vectors;
+	idpf_vc_check_api_version;
+	idpf_vc_config_irq_map_unmap;
+	idpf_vc_create_vport;
+	idpf_vc_dealloc_vectors;
+	idpf_vc_destroy_vport;
+	idpf_vc_ena_dis_queues;
+	idpf_vc_ena_dis_vport;
+	idpf_vc_get_caps;
+	idpf_vc_query_ptype_info;
+	idpf_vc_read_one_msg;
+	idpf_vc_set_rss_hash;
+	idpf_vc_set_rss_key;
+	idpf_vc_set_rss_lut;
+	idpf_vc_switch_queue;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 72a5c9f39b..759fc981d7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -942,13 +942,6 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapt
 		goto err_api;
 	}
 
-	adapter->max_rxq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
-				sizeof(struct virtchnl2_config_rx_queues)) /
-				sizeof(struct virtchnl2_rxq_info);
-	adapter->max_txq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
-				sizeof(struct virtchnl2_config_tx_queues)) /
-				sizeof(struct virtchnl2_txq_info);
-
 	adapter->cur_vports = 0;
 	adapter->cur_vport_nb = 0;
 
@@ -1075,7 +1068,7 @@ static const struct rte_pci_id pci_id_idpf_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-struct idpf_adapter_ext *
+static struct idpf_adapter_ext *
 idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 {
 	struct idpf_adapter_ext *adapter;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 8c29019667..efc540fa32 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -16,6 +16,7 @@
 #include "idpf_logs.h"
 
 #include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
 
@@ -31,8 +32,6 @@
 #define IDPF_RX_BUFQ_PER_GRP	2
 
 #define IDPF_CTLQ_ID		-1
-#define IDPF_CTLQ_LEN		64
-#define IDPF_DFLT_MBX_BUF_SIZE	4096
 
 #define IDPF_DFLT_Q_VEC_NUM	1
 #define IDPF_DFLT_INTERVAL	16
@@ -44,8 +43,6 @@
 
 #define IDPF_NUM_MACADDR_MAX	64
 
-#define IDPF_MAX_PKT_TYPE	1024
-
 #define IDPF_VLAN_TAG_SIZE	4
 #define IDPF_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
@@ -66,14 +63,6 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
-/* Message type read in virtual channel from PF */
-enum idpf_vc_result {
-	IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
-	IDPF_MSG_NON,      /* Read nothing from admin queue */
-	IDPF_MSG_SYS,      /* Read system msg from admin queue */
-	IDPF_MSG_CMD,      /* Read async command result */
-};
-
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
@@ -103,10 +92,6 @@ struct idpf_adapter_ext {
 
 	uint16_t used_vecs_num;
 
-	/* Max config queue number per VC message */
-	uint32_t max_rxq_per_msg;
-	uint32_t max_txq_per_msg;
-
 	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
 
 	bool rx_vec_allowed;
@@ -125,74 +110,6 @@ TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
 #define IDPF_ADAPTER_TO_EXT(p)					\
 	container_of((p), struct idpf_adapter_ext, base)
 
-/* structure used for sending and checking response of virtchnl ops */
-struct idpf_cmd_info {
-	uint32_t ops;
-	uint8_t *in_args;       /* buffer for sending */
-	uint32_t in_args_size;  /* buffer size for sending */
-	uint8_t *out_buffer;    /* buffer for response */
-	uint32_t out_size;      /* buffer size for response */
-};
-
-/* notify current command done. Only call in case execute
- * _atomic_set_cmd successfully.
- */
-static inline void
-notify_cmd(struct idpf_adapter *adapter, int msg_ret)
-{
-	adapter->cmd_retval = msg_ret;
-	/* Return value may be checked in anither thread, need to ensure the coherence. */
-	rte_wmb();
-	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
-}
-
-/* clear current command. Only call in case execute
- * _atomic_set_cmd successfully.
- */
-static inline void
-clear_cmd(struct idpf_adapter *adapter)
-{
-	/* Return value may be checked in anither thread, need to ensure the coherence. */
-	rte_wmb();
-	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
-	adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
-}
-
-/* Check there is pending cmd in execution. If none, set new command. */
-static inline bool
-atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
-{
-	uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
-	bool ret = __atomic_compare_exchange(&adapter->pend_cmd, &op_unk, &ops,
-					    0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
-
-	if (!ret)
-		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
-
-	return !ret;
-}
-
-struct idpf_adapter_ext *idpf_find_adapter_ext(struct rte_pci_device *pci_dev);
-void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
-int idpf_vc_check_api_version(struct idpf_adapter *adapter);
 int idpf_get_pkt_type(struct idpf_adapter_ext *adapter);
-int idpf_vc_get_caps(struct idpf_adapter *adapter);
-int idpf_vc_create_vport(struct idpf_vport *vport,
-			 struct virtchnl2_create_vport *vport_info);
-int idpf_vc_destroy_vport(struct idpf_vport *vport);
-int idpf_vc_set_rss_key(struct idpf_vport *vport);
-int idpf_vc_set_rss_lut(struct idpf_vport *vport);
-int idpf_vc_set_rss_hash(struct idpf_vport *vport);
-int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
-		      bool rx, bool on);
-int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
-int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
-				 uint16_t nb_rxq, bool map);
-int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
-int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
-int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
-int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
-		      uint16_t buf_len, uint8_t *buf);
 
 #endif /* _IDPF_ETHDEV_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 918d156e03..ad3e31208d 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1080,7 +1080,7 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_switch_queue(vport, rx_queue_id, true, true);
+	err = idpf_vc_switch_queue(vport, rx_queue_id, true, true);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -1131,7 +1131,7 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_switch_queue(vport, tx_queue_id, false, true);
+	err = idpf_vc_switch_queue(vport, tx_queue_id, false, true);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -1154,7 +1154,7 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (rx_queue_id >= dev->data->nb_rx_queues)
 		return -EINVAL;
 
-	err = idpf_switch_queue(vport, rx_queue_id, true, false);
+	err = idpf_vc_switch_queue(vport, rx_queue_id, true, false);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -1185,7 +1185,7 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (tx_queue_id >= dev->data->nb_tx_queues)
 		return -EINVAL;
 
-	err = idpf_switch_queue(vport, tx_queue_id, false, false);
+	err = idpf_vc_switch_queue(vport, tx_queue_id, false, false);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 633d3295d3..6f4eb52beb 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -23,293 +23,6 @@
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
 
-static int
-idpf_vc_clean(struct idpf_adapter *adapter)
-{
-	struct idpf_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
-	uint16_t num_q_msg = IDPF_CTLQ_LEN;
-	struct idpf_dma_mem *dma_mem;
-	int err;
-	uint32_t i;
-
-	for (i = 0; i < 10; i++) {
-		err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
-		msleep(20);
-		if (num_q_msg > 0)
-			break;
-	}
-	if (err != 0)
-		return err;
-
-	/* Empty queue is not an error */
-	for (i = 0; i < num_q_msg; i++) {
-		dma_mem = q_msg[i]->ctx.indirect.payload;
-		if (dma_mem != NULL) {
-			idpf_free_dma_mem(&adapter->hw, dma_mem);
-			rte_free(dma_mem);
-		}
-		rte_free(q_msg[i]);
-	}
-
-	return 0;
-}
-
-static int
-idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
-		 uint16_t msg_size, uint8_t *msg)
-{
-	struct idpf_ctlq_msg *ctlq_msg;
-	struct idpf_dma_mem *dma_mem;
-	int err;
-
-	err = idpf_vc_clean(adapter);
-	if (err != 0)
-		goto err;
-
-	ctlq_msg = rte_zmalloc(NULL, sizeof(struct idpf_ctlq_msg), 0);
-	if (ctlq_msg == NULL) {
-		err = -ENOMEM;
-		goto err;
-	}
-
-	dma_mem = rte_zmalloc(NULL, sizeof(struct idpf_dma_mem), 0);
-	if (dma_mem == NULL) {
-		err = -ENOMEM;
-		goto dma_mem_error;
-	}
-
-	dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
-	idpf_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
-	if (dma_mem->va == NULL) {
-		err = -ENOMEM;
-		goto dma_alloc_error;
-	}
-
-	memcpy(dma_mem->va, msg, msg_size);
-
-	ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_pf;
-	ctlq_msg->func_id = 0;
-	ctlq_msg->data_len = msg_size;
-	ctlq_msg->cookie.mbx.chnl_opcode = op;
-	ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
-	ctlq_msg->ctx.indirect.payload = dma_mem;
-
-	err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
-	if (err != 0)
-		goto send_error;
-
-	return 0;
-
-send_error:
-	idpf_free_dma_mem(&adapter->hw, dma_mem);
-dma_alloc_error:
-	rte_free(dma_mem);
-dma_mem_error:
-	rte_free(ctlq_msg);
-err:
-	return err;
-}
-
-static enum idpf_vc_result
-idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
-		      uint8_t *buf)
-{
-	struct idpf_hw *hw = &adapter->hw;
-	struct idpf_ctlq_msg ctlq_msg;
-	struct idpf_dma_mem *dma_mem = NULL;
-	enum idpf_vc_result result = IDPF_MSG_NON;
-	uint32_t opcode;
-	uint16_t pending = 1;
-	int ret;
-
-	ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
-	if (ret != 0) {
-		PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
-		if (ret != -ENOMSG)
-			result = IDPF_MSG_ERR;
-		return result;
-	}
-
-	rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
-
-	opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
-	adapter->cmd_retval =
-		(enum virtchnl_status_code)rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
-
-	PMD_DRV_LOG(DEBUG, "CQ from CP carries opcode %u, retval %d",
-		    opcode, adapter->cmd_retval);
-
-	if (opcode == VIRTCHNL2_OP_EVENT) {
-		struct virtchnl2_event *ve =
-			(struct virtchnl2_event *)ctlq_msg.ctx.indirect.payload->va;
-
-		result = IDPF_MSG_SYS;
-		switch (ve->event) {
-		case VIRTCHNL2_EVENT_LINK_CHANGE:
-			/* TBD */
-			break;
-		default:
-			PMD_DRV_LOG(ERR, "%s: Unknown event %d from CP",
-				    __func__, ve->event);
-			break;
-		}
-	} else {
-		/* async reply msg on command issued by pf previously */
-		result = IDPF_MSG_CMD;
-		if (opcode != adapter->pend_cmd) {
-			PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
-				    adapter->pend_cmd, opcode);
-			result = IDPF_MSG_ERR;
-		}
-	}
-
-	if (ctlq_msg.data_len != 0)
-		dma_mem = ctlq_msg.ctx.indirect.payload;
-	else
-		pending = 0;
-
-	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
-	if (ret != 0 && dma_mem != NULL)
-		idpf_free_dma_mem(hw, dma_mem);
-
-	return result;
-}
-
-#define MAX_TRY_TIMES 200
-#define ASQ_DELAY_MS  10
-
-int
-idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
-		  uint8_t *buf)
-{
-	int err = 0;
-	int i = 0;
-	int ret;
-
-	do {
-		ret = idpf_read_msg_from_cp(adapter, buf_len, buf);
-		if (ret == IDPF_MSG_CMD)
-			break;
-		rte_delay_ms(ASQ_DELAY_MS);
-	} while (i++ < MAX_TRY_TIMES);
-	if (i >= MAX_TRY_TIMES ||
-	    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
-		err = -EBUSY;
-		PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
-			    adapter->cmd_retval, ops);
-	}
-
-	return err;
-}
-
-static int
-idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
-{
-	int err = 0;
-	int i = 0;
-	int ret;
-
-	if (atomic_set_cmd(adapter, args->ops))
-		return -EINVAL;
-
-	ret = idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args);
-	if (ret != 0) {
-		PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
-		clear_cmd(adapter);
-		return ret;
-	}
-
-	switch (args->ops) {
-	case VIRTCHNL_OP_VERSION:
-	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-		/* for init virtchnl ops, need to poll the response */
-		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
-		clear_cmd(adapter);
-		break;
-	case VIRTCHNL2_OP_GET_PTYPE_INFO:
-		/* for multuple response message,
-		 * do not handle the response here.
-		 */
-		break;
-	default:
-		/* For other virtchnl ops in running time,
-		 * wait for the cmd done flag.
-		 */
-		do {
-			if (adapter->pend_cmd == VIRTCHNL_OP_UNKNOWN)
-				break;
-			rte_delay_ms(ASQ_DELAY_MS);
-			/* If don't read msg or read sys event, continue */
-		} while (i++ < MAX_TRY_TIMES);
-		/* If there's no response is received, clear command */
-		if (i >= MAX_TRY_TIMES  ||
-		    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
-			err = -EBUSY;
-			PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
-				    adapter->cmd_retval, args->ops);
-			clear_cmd(adapter);
-		}
-		break;
-	}
-
-	return err;
-}
-
-int
-idpf_vc_check_api_version(struct idpf_adapter *adapter)
-{
-	struct virtchnl2_version_info version, *pver;
-	struct idpf_cmd_info args;
-	int err;
-
-	memset(&version, 0, sizeof(struct virtchnl_version_info));
-	version.major = VIRTCHNL2_VERSION_MAJOR_2;
-	version.minor = VIRTCHNL2_VERSION_MINOR_0;
-
-	args.ops = VIRTCHNL_OP_VERSION;
-	args.in_args = (uint8_t *)&version;
-	args.in_args_size = sizeof(version);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of VIRTCHNL_OP_VERSION");
-		return err;
-	}
-
-	pver = (struct virtchnl2_version_info *)args.out_buffer;
-	adapter->virtchnl_version = *pver;
-
-	if (adapter->virtchnl_version.major != VIRTCHNL2_VERSION_MAJOR_2 ||
-	    adapter->virtchnl_version.minor != VIRTCHNL2_VERSION_MINOR_0) {
-		PMD_INIT_LOG(ERR, "VIRTCHNL API version mismatch:(%u.%u)-(%u.%u)",
-			     adapter->virtchnl_version.major,
-			     adapter->virtchnl_version.minor,
-			     VIRTCHNL2_VERSION_MAJOR_2,
-			     VIRTCHNL2_VERSION_MINOR_0);
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
 int __rte_cold
 idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 {
@@ -332,8 +45,8 @@ idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 			return -ENOMEM;
 
 	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
-					IDPF_DFLT_MBX_BUF_SIZE, (u8 *)ptype_info);
+		ret = idpf_vc_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
+					   IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR, "Fail to get packet type information");
 			goto free_ptype_info;
@@ -349,7 +62,7 @@ idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 			uint32_t proto_hdr = 0;
 
 			ptype = (struct virtchnl2_ptype *)
-					((u8 *)ptype_info + ptype_offset);
+					((uint8_t *)ptype_info + ptype_offset);
 			ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
 			if (ptype_offset > IDPF_DFLT_MBX_BUF_SIZE) {
 				ret = -EINVAL;
@@ -523,223 +236,6 @@ idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 	return ret;
 }
 
-int
-idpf_vc_get_caps(struct idpf_adapter *adapter)
-{
-	struct virtchnl2_get_capabilities caps_msg;
-	struct idpf_cmd_info args;
-	int err;
-
-	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
-
-	caps_msg.csum_caps =
-		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4          |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP     |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP     |
-		VIRTCHNL2_CAP_TX_CSUM_GENERIC          |
-		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4          |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP     |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP     |
-		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
-
-	caps_msg.rss_caps =
-		VIRTCHNL2_CAP_RSS_IPV4_TCP             |
-		VIRTCHNL2_CAP_RSS_IPV4_UDP             |
-		VIRTCHNL2_CAP_RSS_IPV4_SCTP            |
-		VIRTCHNL2_CAP_RSS_IPV4_OTHER           |
-		VIRTCHNL2_CAP_RSS_IPV6_TCP             |
-		VIRTCHNL2_CAP_RSS_IPV6_UDP             |
-		VIRTCHNL2_CAP_RSS_IPV6_SCTP            |
-		VIRTCHNL2_CAP_RSS_IPV6_OTHER           |
-		VIRTCHNL2_CAP_RSS_IPV4_AH              |
-		VIRTCHNL2_CAP_RSS_IPV4_ESP             |
-		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP          |
-		VIRTCHNL2_CAP_RSS_IPV6_AH              |
-		VIRTCHNL2_CAP_RSS_IPV6_ESP             |
-		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
-
-	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR;
-
-	args.ops = VIRTCHNL2_OP_GET_CAPS;
-	args.in_args = (uint8_t *)&caps_msg;
-	args.in_args_size = sizeof(caps_msg);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
-		return err;
-	}
-
-	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
-
-	return 0;
-}
-
-int
-idpf_vc_create_vport(struct idpf_vport *vport,
-		     struct virtchnl2_create_vport *vport_req_info)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_create_vport vport_msg;
-	struct idpf_cmd_info args;
-	int err = -1;
-
-	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
-	vport_msg.vport_type = vport_req_info->vport_type;
-	vport_msg.txq_model = vport_req_info->txq_model;
-	vport_msg.rxq_model = vport_req_info->rxq_model;
-	vport_msg.num_tx_q = vport_req_info->num_tx_q;
-	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
-	vport_msg.num_rx_q = vport_req_info->num_rx_q;
-	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
-	args.in_args = (uint8_t *)&vport_msg;
-	args.in_args_size = sizeof(vport_msg);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
-		return err;
-	}
-
-	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
-	return 0;
-}
-
-int
-idpf_vc_destroy_vport(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_vport vc_vport;
-	struct idpf_cmd_info args;
-	int err;
-
-	vc_vport.vport_id = vport->vport_id;
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
-	args.in_args = (uint8_t *)&vc_vport;
-	args.in_args_size = sizeof(vc_vport);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
-
-	return err;
-}
-
-int
-idpf_vc_set_rss_key(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_rss_key *rss_key;
-	struct idpf_cmd_info args;
-	int len, err;
-
-	len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
-		(vport->rss_key_size - 1);
-	rss_key = rte_zmalloc("rss_key", len, 0);
-	if (rss_key == NULL)
-		return -ENOMEM;
-
-	rss_key->vport_id = vport->vport_id;
-	rss_key->key_len = vport->rss_key_size;
-	rte_memcpy(rss_key->key, vport->rss_key,
-		   sizeof(rss_key->key[0]) * vport->rss_key_size);
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
-	args.in_args = (uint8_t *)rss_key;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
-
-	rte_free(rss_key);
-	return err;
-}
-
-int
-idpf_vc_set_rss_lut(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_rss_lut *rss_lut;
-	struct idpf_cmd_info args;
-	int len, err;
-
-	len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
-		(vport->rss_lut_size - 1);
-	rss_lut = rte_zmalloc("rss_lut", len, 0);
-	if (rss_lut == NULL)
-		return -ENOMEM;
-
-	rss_lut->vport_id = vport->vport_id;
-	rss_lut->lut_entries = vport->rss_lut_size;
-	rte_memcpy(rss_lut->lut, vport->rss_lut,
-		   sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
-	args.in_args = (uint8_t *)rss_lut;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
-
-	rte_free(rss_lut);
-	return err;
-}
-
-int
-idpf_vc_set_rss_hash(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_rss_hash rss_hash;
-	struct idpf_cmd_info args;
-	int err;
-
-	memset(&rss_hash, 0, sizeof(rss_hash));
-	rss_hash.ptype_groups = vport->rss_hf;
-	rss_hash.vport_id = vport->vport_id;
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
-	args.in_args = (uint8_t *)&rss_hash;
-	args.in_args_size = sizeof(rss_hash);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
-
-	return err;
-}
-
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
@@ -899,310 +395,3 @@ idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 
 	return err;
 }
-
-int
-idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_queue_vector_maps *map_info;
-	struct virtchnl2_queue_vector *vecmap;
-	struct idpf_cmd_info args;
-	int len, i, err = 0;
-
-	len = sizeof(struct virtchnl2_queue_vector_maps) +
-		(nb_rxq - 1) * sizeof(struct virtchnl2_queue_vector);
-
-	map_info = rte_zmalloc("map_info", len, 0);
-	if (map_info == NULL)
-		return -ENOMEM;
-
-	map_info->vport_id = vport->vport_id;
-	map_info->num_qv_maps = nb_rxq;
-	for (i = 0; i < nb_rxq; i++) {
-		vecmap = &map_info->qv_maps[i];
-		vecmap->queue_id = vport->qv_map[i].queue_id;
-		vecmap->vector_id = vport->qv_map[i].vector_id;
-		vecmap->itr_idx = VIRTCHNL2_ITR_IDX_0;
-		vecmap->queue_type = VIRTCHNL2_QUEUE_TYPE_RX;
-	}
-
-	args.ops = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR :
-		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR;
-	args.in_args = (u8 *)map_info;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUE_VECTOR",
-			    map ? "MAP" : "UNMAP");
-
-	rte_free(map_info);
-	return err;
-}
-
-int
-idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_alloc_vectors *alloc_vec;
-	struct idpf_cmd_info args;
-	int err, len;
-
-	len = sizeof(struct virtchnl2_alloc_vectors) +
-		(num_vectors - 1) * sizeof(struct virtchnl2_vector_chunk);
-	alloc_vec = rte_zmalloc("alloc_vec", len, 0);
-	if (alloc_vec == NULL)
-		return -ENOMEM;
-
-	alloc_vec->num_vectors = num_vectors;
-
-	args.ops = VIRTCHNL2_OP_ALLOC_VECTORS;
-	args.in_args = (u8 *)alloc_vec;
-	args.in_args_size = sizeof(struct virtchnl2_alloc_vectors);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
-
-	if (vport->recv_vectors == NULL) {
-		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
-		if (vport->recv_vectors == NULL) {
-			rte_free(alloc_vec);
-			return -ENOMEM;
-		}
-	}
-
-	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
-	rte_free(alloc_vec);
-	return err;
-}
-
-int
-idpf_vc_dealloc_vectors(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_alloc_vectors *alloc_vec;
-	struct virtchnl2_vector_chunks *vcs;
-	struct idpf_cmd_info args;
-	int err, len;
-
-	alloc_vec = vport->recv_vectors;
-	vcs = &alloc_vec->vchunks;
-
-	len = sizeof(struct virtchnl2_vector_chunks) +
-		(vcs->num_vchunks - 1) * sizeof(struct virtchnl2_vector_chunk);
-
-	args.ops = VIRTCHNL2_OP_DEALLOC_VECTORS;
-	args.in_args = (u8 *)vcs;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_DEALLOC_VECTORS");
-
-	return err;
-}
-
-static int
-idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
-			  uint32_t type, bool on)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_del_ena_dis_queues *queue_select;
-	struct virtchnl2_queue_chunk *queue_chunk;
-	struct idpf_cmd_info args;
-	int err, len;
-
-	len = sizeof(struct virtchnl2_del_ena_dis_queues);
-	queue_select = rte_zmalloc("queue_select", len, 0);
-	if (queue_select == NULL)
-		return -ENOMEM;
-
-	queue_chunk = queue_select->chunks.chunks;
-	queue_select->chunks.num_chunks = 1;
-	queue_select->vport_id = vport->vport_id;
-
-	queue_chunk->type = type;
-	queue_chunk->start_queue_id = qid;
-	queue_chunk->num_queues = 1;
-
-	args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
-		VIRTCHNL2_OP_DISABLE_QUEUES;
-	args.in_args = (u8 *)queue_select;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
-			    on ? "ENABLE" : "DISABLE");
-
-	rte_free(queue_select);
-	return err;
-}
-
-int
-idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
-		     bool rx, bool on)
-{
-	uint32_t type;
-	int err, queue_id;
-
-	/* switch txq/rxq */
-	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
-
-	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
-		queue_id = vport->chunks_info.rx_start_qid + qid;
-	else
-		queue_id = vport->chunks_info.tx_start_qid + qid;
-	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-	if (err != 0)
-		return err;
-
-	/* switch tx completion queue */
-	if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		queue_id = vport->chunks_info.tx_compl_start_qid + qid;
-		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-		if (err != 0)
-			return err;
-	}
-
-	/* switch rx buffer queue */
-	if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-		queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
-		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-		if (err != 0)
-			return err;
-		queue_id++;
-		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-		if (err != 0)
-			return err;
-	}
-
-	return err;
-}
-
-#define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
-int
-idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_del_ena_dis_queues *queue_select;
-	struct virtchnl2_queue_chunk *queue_chunk;
-	uint32_t type;
-	struct idpf_cmd_info args;
-	uint16_t num_chunks;
-	int err, len;
-
-	num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
-		num_chunks++;
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
-		num_chunks++;
-
-	len = sizeof(struct virtchnl2_del_ena_dis_queues) +
-		sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
-	queue_select = rte_zmalloc("queue_select", len, 0);
-	if (queue_select == NULL)
-		return -ENOMEM;
-
-	queue_chunk = queue_select->chunks.chunks;
-	queue_select->chunks.num_chunks = num_chunks;
-	queue_select->vport_id = vport->vport_id;
-
-	type = VIRTCHNL_QUEUE_TYPE_RX;
-	queue_chunk[type].type = type;
-	queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
-	queue_chunk[type].num_queues = vport->num_rx_q;
-
-	type = VIRTCHNL2_QUEUE_TYPE_TX;
-	queue_chunk[type].type = type;
-	queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
-	queue_chunk[type].num_queues = vport->num_tx_q;
-
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-		queue_chunk[type].type = type;
-		queue_chunk[type].start_queue_id =
-			vport->chunks_info.rx_buf_start_qid;
-		queue_chunk[type].num_queues = vport->num_rx_bufq;
-	}
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		queue_chunk[type].type = type;
-		queue_chunk[type].start_queue_id =
-			vport->chunks_info.tx_compl_start_qid;
-		queue_chunk[type].num_queues = vport->num_tx_complq;
-	}
-
-	args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
-		VIRTCHNL2_OP_DISABLE_QUEUES;
-	args.in_args = (u8 *)queue_select;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
-			    enable ? "ENABLE" : "DISABLE");
-
-	rte_free(queue_select);
-	return err;
-}
-
-int
-idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_vport vc_vport;
-	struct idpf_cmd_info args;
-	int err;
-
-	vc_vport.vport_id = vport->vport_id;
-	args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
-			    VIRTCHNL2_OP_DISABLE_VPORT;
-	args.in_args = (uint8_t *)&vc_vport;
-	args.in_args_size = sizeof(vc_vport);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
-			    enable ? "ENABLE" : "DISABLE");
-	}
-
-	return err;
-}
-
-int
-idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
-{
-	struct virtchnl2_get_ptype_info *ptype_info;
-	struct idpf_cmd_info args;
-	int len, err;
-
-	len = sizeof(struct virtchnl2_get_ptype_info);
-	ptype_info = rte_zmalloc("ptype_info", len, 0);
-	if (ptype_info == NULL)
-		return -ENOMEM;
-
-	ptype_info->start_ptype_id = 0;
-	ptype_info->num_ptypes = IDPF_MAX_PKT_TYPE;
-	args.ops = VIRTCHNL2_OP_GET_PTYPE_INFO;
-	args.in_args = (u8 *)ptype_info;
-	args.in_args_size = len;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_PTYPE_INFO");
-
-	rte_free(ptype_info);
-	return err;
-}
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v6 04/19] common/idpf: introduce adapter init and deinit
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
                         ` (2 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 03/19] common/idpf: add virtual channel functions beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 05/19] common/idpf: add vport init/deinit beilei.xing
                         ` (16 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Introduce the idpf_adapter_init and idpf_adapter_deinit
functions in the common module, along with the corresponding
idpf_adapter_ext_init and idpf_adapter_ext_deinit functions
in the idpf PMD.
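
A minimal consumer sketch (idpf_adapter_init/deinit come from this
patch; the pci_dev variable and the surrounding probe logic are
illustrative only):

	struct idpf_adapter base;
	int ret;

	memset(&base, 0, sizeof(base));
	/* The caller maps BAR 0 before handing the adapter over. */
	base.hw.hw_addr = (void *)pci_dev->mem_resource[0].addr;
	base.hw.back = &base;

	ret = idpf_adapter_init(&base);	/* reset, mailbox, API check, caps */
	if (ret != 0)
		return ret;

	/* ... create vports, run traffic ... */

	idpf_adapter_deinit(&base);	/* ctlq deinit and mbx_resp free */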

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/base/idpf_controlq_api.h |   2 -
 drivers/common/idpf/idpf_common_device.c     | 153 ++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h     |   6 +
 drivers/common/idpf/version.map              |   4 +-
 drivers/net/idpf/idpf_ethdev.c               | 158 ++-----------------
 drivers/net/idpf/idpf_ethdev.h               |   2 -
 6 files changed, 178 insertions(+), 147 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 891a0f10f6..32d17baadf 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -161,7 +161,6 @@ enum idpf_mbx_opc {
 /* Will init all required q including default mb.  "q_info" is an array of
  * create_info structs equal to the number of control queues to be created.
  */
-__rte_internal
 int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		   struct idpf_ctlq_create_info *q_info);
 
@@ -199,7 +198,6 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw,
 			    struct idpf_dma_mem **buffs);
 
 /* Will destroy all q including the default mb */
-__rte_internal
 int idpf_ctlq_deinit(struct idpf_hw *hw);
 
 #endif /* _IDPF_CONTROLQ_API_H_ */
diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 5062780362..b2b42443e4 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -4,5 +4,158 @@
 
 #include <rte_log.h>
 #include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+
+static void
+idpf_reset_pf(struct idpf_hw *hw)
+{
+	uint32_t reg;
+
+	reg = IDPF_READ_REG(hw, PFGEN_CTRL);
+	IDPF_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR));
+}
+
+#define IDPF_RESET_WAIT_CNT 100
+static int
+idpf_check_pf_reset_done(struct idpf_hw *hw)
+{
+	uint32_t reg;
+	int i;
+
+	for (i = 0; i < IDPF_RESET_WAIT_CNT; i++) {
+		reg = IDPF_READ_REG(hw, PFGEN_RSTAT);
+		if (reg != 0xFFFFFFFF && (reg & PFGEN_RSTAT_PFR_STATE_M))
+			return 0;
+		rte_delay_ms(1000);
+	}
+
+	DRV_LOG(ERR, "IDPF reset timeout");
+	return -EBUSY;
+}
+
+#define CTLQ_NUM 2
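+/* Default mailbox pair: one Tx (asq) and one Rx (arq) control queue. */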
+static int
+idpf_init_mbx(struct idpf_hw *hw)
+{
+	struct idpf_ctlq_create_info ctlq_info[CTLQ_NUM] = {
+		{
+			.type = IDPF_CTLQ_TYPE_MAILBOX_TX,
+			.id = IDPF_CTLQ_ID,
+			.len = IDPF_CTLQ_LEN,
+			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+			.reg = {
+				.head = PF_FW_ATQH,
+				.tail = PF_FW_ATQT,
+				.len = PF_FW_ATQLEN,
+				.bah = PF_FW_ATQBAH,
+				.bal = PF_FW_ATQBAL,
+				.len_mask = PF_FW_ATQLEN_ATQLEN_M,
+				.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
+				.head_mask = PF_FW_ATQH_ATQH_M,
+			}
+		},
+		{
+			.type = IDPF_CTLQ_TYPE_MAILBOX_RX,
+			.id = IDPF_CTLQ_ID,
+			.len = IDPF_CTLQ_LEN,
+			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+			.reg = {
+				.head = PF_FW_ARQH,
+				.tail = PF_FW_ARQT,
+				.len = PF_FW_ARQLEN,
+				.bah = PF_FW_ARQBAH,
+				.bal = PF_FW_ARQBAL,
+				.len_mask = PF_FW_ARQLEN_ARQLEN_M,
+				.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
+				.head_mask = PF_FW_ARQH_ARQH_M,
+			}
+		}
+	};
+	struct idpf_ctlq_info *ctlq;
+	int ret;
+
+	ret = idpf_ctlq_init(hw, CTLQ_NUM, ctlq_info);
+	if (ret != 0)
+		return ret;
+
+	LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
+				 struct idpf_ctlq_info, cq_list) {
+		if (ctlq->q_id == IDPF_CTLQ_ID &&
+		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
+			hw->asq = ctlq;
+		if (ctlq->q_id == IDPF_CTLQ_ID &&
+		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
+			hw->arq = ctlq;
+	}
+
+	if (hw->asq == NULL || hw->arq == NULL) {
+		idpf_ctlq_deinit(hw);
+		ret = -ENOENT;
+	}
+
+	return ret;
+}
+
+int
+idpf_adapter_init(struct idpf_adapter *adapter)
+{
+	struct idpf_hw *hw = &adapter->hw;
+	int ret;
+
+	idpf_reset_pf(hw);
+	ret = idpf_check_pf_reset_done(hw);
+	if (ret != 0) {
+		DRV_LOG(ERR, "IDPF is still resetting");
+		goto err_check_reset;
+	}
+
+	ret = idpf_init_mbx(hw);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to init mailbox");
+		goto err_check_reset;
+	}
+
+	adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
+					IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (adapter->mbx_resp == NULL) {
+		DRV_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
+		ret = -ENOMEM;
+		goto err_mbx_resp;
+	}
+
+	ret = idpf_vc_check_api_version(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to check api version");
+		goto err_check_api;
+	}
+
+	ret = idpf_vc_get_caps(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to get capabilities");
+		goto err_check_api;
+	}
+
+	return 0;
+
+err_check_api:
+	rte_free(adapter->mbx_resp);
+	adapter->mbx_resp = NULL;
+err_mbx_resp:
+	idpf_ctlq_deinit(hw);
+err_check_reset:
+	return ret;
+}
+
+int
+idpf_adapter_deinit(struct idpf_adapter *adapter)
+{
+	struct idpf_hw *hw = &adapter->hw;
+
+	idpf_ctlq_deinit(hw);
+	rte_free(adapter->mbx_resp);
+	adapter->mbx_resp = NULL;
+
+	return 0;
+}
 
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index a7537281d1..e4344ea392 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -9,6 +9,7 @@
 #include <base/virtchnl2.h>
 #include <idpf_common_logs.h>
 
+#define IDPF_CTLQ_ID		-1
 #define IDPF_CTLQ_LEN		64
 #define IDPF_DFLT_MBX_BUF_SIZE	4096
 
@@ -137,4 +138,9 @@ atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
 	return !ret;
 }
 
+__rte_internal
+int idpf_adapter_init(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_adapter_deinit(struct idpf_adapter *adapter);
+
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 9bc0d2a909..8056996e3c 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -1,8 +1,8 @@
 INTERNAL {
 	global:
 
-	idpf_ctlq_deinit;
-	idpf_ctlq_init;
+	idpf_adapter_deinit;
+	idpf_adapter_init;
 	idpf_execute_vc_cmd;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 759fc981d7..c17c7bb472 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -786,148 +786,32 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 	return ret;
 }
 
-static void
-idpf_reset_pf(struct idpf_hw *hw)
-{
-	uint32_t reg;
-
-	reg = IDPF_READ_REG(hw, PFGEN_CTRL);
-	IDPF_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR));
-}
-
-#define IDPF_RESET_WAIT_CNT 100
 static int
-idpf_check_pf_reset_done(struct idpf_hw *hw)
+idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
-	uint32_t reg;
-	int i;
-
-	for (i = 0; i < IDPF_RESET_WAIT_CNT; i++) {
-		reg = IDPF_READ_REG(hw, PFGEN_RSTAT);
-		if (reg != 0xFFFFFFFF && (reg & PFGEN_RSTAT_PFR_STATE_M))
-			return 0;
-		rte_delay_ms(1000);
-	}
-
-	PMD_INIT_LOG(ERR, "IDPF reset timeout");
-	return -EBUSY;
-}
-
-#define CTLQ_NUM 2
-static int
-idpf_init_mbx(struct idpf_hw *hw)
-{
-	struct idpf_ctlq_create_info ctlq_info[CTLQ_NUM] = {
-		{
-			.type = IDPF_CTLQ_TYPE_MAILBOX_TX,
-			.id = IDPF_CTLQ_ID,
-			.len = IDPF_CTLQ_LEN,
-			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
-			.reg = {
-				.head = PF_FW_ATQH,
-				.tail = PF_FW_ATQT,
-				.len = PF_FW_ATQLEN,
-				.bah = PF_FW_ATQBAH,
-				.bal = PF_FW_ATQBAL,
-				.len_mask = PF_FW_ATQLEN_ATQLEN_M,
-				.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
-				.head_mask = PF_FW_ATQH_ATQH_M,
-			}
-		},
-		{
-			.type = IDPF_CTLQ_TYPE_MAILBOX_RX,
-			.id = IDPF_CTLQ_ID,
-			.len = IDPF_CTLQ_LEN,
-			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
-			.reg = {
-				.head = PF_FW_ARQH,
-				.tail = PF_FW_ARQT,
-				.len = PF_FW_ARQLEN,
-				.bah = PF_FW_ARQBAH,
-				.bal = PF_FW_ARQBAL,
-				.len_mask = PF_FW_ARQLEN_ARQLEN_M,
-				.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
-				.head_mask = PF_FW_ARQH_ARQH_M,
-			}
-		}
-	};
-	struct idpf_ctlq_info *ctlq;
-	int ret;
-
-	ret = idpf_ctlq_init(hw, CTLQ_NUM, ctlq_info);
-	if (ret != 0)
-		return ret;
-
-	LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
-				 struct idpf_ctlq_info, cq_list) {
-		if (ctlq->q_id == IDPF_CTLQ_ID &&
-		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
-			hw->asq = ctlq;
-		if (ctlq->q_id == IDPF_CTLQ_ID &&
-		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
-			hw->arq = ctlq;
-	}
-
-	if (hw->asq == NULL || hw->arq == NULL) {
-		idpf_ctlq_deinit(hw);
-		ret = -ENOENT;
-	}
-
-	return ret;
-}
-
-static int
-idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
-{
-	struct idpf_hw *hw = &adapter->base.hw;
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
 	int ret = 0;
 
 	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
 	hw->hw_addr_len = pci_dev->mem_resource[0].len;
-	hw->back = &adapter->base;
+	hw->back = base;
 	hw->vendor_id = pci_dev->id.vendor_id;
 	hw->device_id = pci_dev->id.device_id;
 	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
 
 	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
 
-	idpf_reset_pf(hw);
-	ret = idpf_check_pf_reset_done(hw);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "IDPF is still resetting");
-		goto err;
-	}
-
-	ret = idpf_init_mbx(hw);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to init mailbox");
-		goto err;
-	}
-
-	adapter->base.mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
-					     IDPF_DFLT_MBX_BUF_SIZE, 0);
-	if (adapter->base.mbx_resp == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
-		ret = -ENOMEM;
-		goto err_mbx;
-	}
-
-	ret = idpf_vc_check_api_version(&adapter->base);
+	ret = idpf_adapter_init(base);
 	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to check api version");
-		goto err_api;
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
 	}
 
 	ret = idpf_get_pkt_type(adapter);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to set ptype table");
-		goto err_api;
-	}
-
-	ret = idpf_vc_get_caps(&adapter->base);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to get capabilities");
-		goto err_api;
+		goto err_get_ptype;
 	}
 
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
@@ -939,7 +823,7 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapt
 	if (adapter->vports == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
 		ret = -ENOMEM;
-		goto err_api;
+		goto err_get_ptype;
 	}
 
 	adapter->cur_vports = 0;
@@ -949,12 +833,9 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapt
 
 	return ret;
 
-err_api:
-	rte_free(adapter->base.mbx_resp);
-	adapter->base.mbx_resp = NULL;
-err_mbx:
-	idpf_ctlq_deinit(hw);
-err:
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
 	return ret;
 }
 
@@ -1093,14 +974,9 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 }
 
 static void
-idpf_adapter_rel(struct idpf_adapter_ext *adapter)
+idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
-	struct idpf_hw *hw = &adapter->base.hw;
-
-	idpf_ctlq_deinit(hw);
-
-	rte_free(adapter->base.mbx_resp);
-	adapter->base.mbx_resp = NULL;
+	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
 	adapter->vports = NULL;
@@ -1133,7 +1009,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 			return -ENOMEM;
 		}
 
-		retval = idpf_adapter_init(pci_dev, adapter);
+		retval = idpf_adapter_ext_init(pci_dev, adapter);
 		if (retval != 0) {
 			PMD_INIT_LOG(ERR, "Failed to init adapter.");
 			return retval;
@@ -1196,7 +1072,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		rte_spinlock_lock(&idpf_adapter_lock);
 		TAILQ_REMOVE(&idpf_adapter_list, adapter, next);
 		rte_spinlock_unlock(&idpf_adapter_lock);
-		idpf_adapter_rel(adapter);
+		idpf_adapter_ext_deinit(adapter);
 		rte_free(adapter);
 	}
 	return retval;
@@ -1216,7 +1092,7 @@ idpf_pci_remove(struct rte_pci_device *pci_dev)
 	rte_spinlock_lock(&idpf_adapter_lock);
 	TAILQ_REMOVE(&idpf_adapter_list, adapter, next);
 	rte_spinlock_unlock(&idpf_adapter_lock);
-	idpf_adapter_rel(adapter);
+	idpf_adapter_ext_deinit(adapter);
 	rte_free(adapter);
 
 	return 0;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index efc540fa32..07ffe8e408 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -31,8 +31,6 @@
 #define IDPF_RXQ_PER_GRP	1
 #define IDPF_RX_BUFQ_PER_GRP	2
 
-#define IDPF_CTLQ_ID		-1
-
 #define IDPF_DFLT_Q_VEC_NUM	1
 #define IDPF_DFLT_INTERVAL	16
 
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v6 05/19] common/idpf: add vport init/deinit
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
                         ` (3 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 04/19] common/idpf: introduce adapter init and deinit beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 06/19] common/idpf: add config RSS beilei.xing
                         ` (15 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Introduce the idpf_vport_init and idpf_vport_deinit functions
in the common module.
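
A minimal caller sketch (the request fields mirror what
idpf_vc_create_vport copies in this patch; the base adapter, the
dev ethdev pointer and the queue counts are illustrative only):

	struct virtchnl2_create_vport vport_req;
	struct idpf_vport vport;
	int ret;

	memset(&vport_req, 0, sizeof(vport_req));
	vport_req.vport_type = VIRTCHNL2_VPORT_TYPE_DEFAULT;
	vport_req.txq_model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
	vport_req.rxq_model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
	vport_req.num_tx_q = 1;
	vport_req.num_rx_q = 1;

	memset(&vport, 0, sizeof(vport));
	vport.adapter = &base;	/* initialized via idpf_adapter_init() */

	ret = idpf_vport_init(&vport, &vport_req, dev->data);
	if (ret != 0)
		return ret;

	/* ... queue setup and traffic ... */

	idpf_vport_deinit(&vport);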

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 115 +++++++++++++++++
 drivers/common/idpf/idpf_common_device.h   |  13 +-
 drivers/common/idpf/idpf_common_virtchnl.c |  18 +--
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 138 ++-------------------
 5 files changed, 148 insertions(+), 138 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index b2b42443e4..5628fb5c57 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -158,4 +158,119 @@ idpf_adapter_deinit(struct idpf_adapter *adapter)
 	return 0;
 }
 
+int
+idpf_vport_init(struct idpf_vport *vport,
+		struct virtchnl2_create_vport *create_vport_info,
+		void *dev_data)
+{
+	struct virtchnl2_create_vport *vport_info;
+	int i, type, ret;
+
+	ret = idpf_vc_create_vport(vport, create_vport_info);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to create vport.");
+		goto err_create_vport;
+	}
+
+	vport_info = &(vport->vport_info.info);
+	vport->vport_id = vport_info->vport_id;
+	vport->txq_model = vport_info->txq_model;
+	vport->rxq_model = vport_info->rxq_model;
+	vport->num_tx_q = vport_info->num_tx_q;
+	vport->num_tx_complq = vport_info->num_tx_complq;
+	vport->num_rx_q = vport_info->num_rx_q;
+	vport->num_rx_bufq = vport_info->num_rx_bufq;
+	vport->max_mtu = vport_info->max_mtu;
+	rte_memcpy(vport->default_mac_addr,
+		   vport_info->default_mac_addr, ETH_ALEN);
+	vport->rss_algorithm = vport_info->rss_algorithm;
+	vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+				      vport_info->rss_key_size);
+	vport->rss_lut_size = vport_info->rss_lut_size;
+
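+	/* Record the queue id ranges and tail register layout the CP
+	 * returned for each queue type; the data path relies on these.
+	 */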
+	for (i = 0; i < vport_info->chunks.num_chunks; i++) {
+		type = vport_info->chunks.chunks[i].type;
+		switch (type) {
+		case VIRTCHNL2_QUEUE_TYPE_TX:
+			vport->chunks_info.tx_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.tx_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.tx_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		case VIRTCHNL2_QUEUE_TYPE_RX:
+			vport->chunks_info.rx_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.rx_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.rx_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+			vport->chunks_info.tx_compl_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.tx_compl_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.tx_compl_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+			vport->chunks_info.rx_buf_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.rx_buf_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.rx_buf_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		default:
+			DRV_LOG(ERR, "Unsupported queue type");
+			break;
+		}
+	}
+
+	vport->dev_data = dev_data;
+
+	vport->rss_key = rte_zmalloc("rss_key",
+				     vport->rss_key_size, 0);
+	if (vport->rss_key == NULL) {
+		DRV_LOG(ERR, "Failed to allocate RSS key");
+		ret = -ENOMEM;
+		goto err_rss_key;
+	}
+
+	vport->rss_lut = rte_zmalloc("rss_lut",
+				     sizeof(uint32_t) * vport->rss_lut_size, 0);
+	if (vport->rss_lut == NULL) {
+		DRV_LOG(ERR, "Failed to allocate RSS lut");
+		ret = -ENOMEM;
+		goto err_rss_lut;
+	}
+
+	return 0;
+
+err_rss_lut:
+	vport->dev_data = NULL;
+	rte_free(vport->rss_key);
+	vport->rss_key = NULL;
+err_rss_key:
+	idpf_vc_destroy_vport(vport);
+err_create_vport:
+	return ret;
+}
+int
+idpf_vport_deinit(struct idpf_vport *vport)
+{
+	rte_free(vport->rss_lut);
+	vport->rss_lut = NULL;
+
+	rte_free(vport->rss_key);
+	vport->rss_key = NULL;
+
+	vport->dev_data = NULL;
+
+	idpf_vc_destroy_vport(vport);
+
+	return 0;
+}
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index e4344ea392..14d04268e5 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -9,6 +9,8 @@
 #include <base/virtchnl2.h>
 #include <idpf_common_logs.h>
 
+#define IDPF_RSS_KEY_LEN	52
+
 #define IDPF_CTLQ_ID		-1
 #define IDPF_CTLQ_LEN		64
 #define IDPF_DFLT_MBX_BUF_SIZE	4096
@@ -43,7 +45,10 @@ struct idpf_chunks_info {
 
 struct idpf_vport {
 	struct idpf_adapter *adapter; /* Backreference to associated adapter */
-	struct virtchnl2_create_vport *vport_info; /* virtchnl response info handling */
+	union {
+		struct virtchnl2_create_vport info; /* virtchnl response info handling */
+		uint8_t data[IDPF_DFLT_MBX_BUF_SIZE];
+	} vport_info;
 	uint16_t sw_idx; /* SW index in adapter->vports[]*/
 	uint16_t vport_id;
 	uint32_t txq_model;
@@ -142,5 +147,11 @@ __rte_internal
 int idpf_adapter_init(struct idpf_adapter *adapter);
 __rte_internal
 int idpf_adapter_deinit(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vport_init(struct idpf_vport *vport,
+		    struct virtchnl2_create_vport *vport_req_info,
+		    void *dev_data);
+__rte_internal
+int idpf_vport_deinit(struct idpf_vport *vport);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index f86c1abf0f..e90aa1604d 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -355,7 +355,7 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 
 int
 idpf_vc_create_vport(struct idpf_vport *vport,
-		     struct virtchnl2_create_vport *vport_req_info)
+		     struct virtchnl2_create_vport *create_vport_info)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_create_vport vport_msg;
@@ -363,13 +363,13 @@ idpf_vc_create_vport(struct idpf_vport *vport,
 	int err = -1;
 
 	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
-	vport_msg.vport_type = vport_req_info->vport_type;
-	vport_msg.txq_model = vport_req_info->txq_model;
-	vport_msg.rxq_model = vport_req_info->rxq_model;
-	vport_msg.num_tx_q = vport_req_info->num_tx_q;
-	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
-	vport_msg.num_rx_q = vport_req_info->num_rx_q;
-	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
+	vport_msg.vport_type = create_vport_info->vport_type;
+	vport_msg.txq_model = create_vport_info->txq_model;
+	vport_msg.rxq_model = create_vport_info->rxq_model;
+	vport_msg.num_tx_q = create_vport_info->num_tx_q;
+	vport_msg.num_tx_complq = create_vport_info->num_tx_complq;
+	vport_msg.num_rx_q = create_vport_info->num_rx_q;
+	vport_msg.num_rx_bufq = create_vport_info->num_rx_bufq;
 
 	memset(&args, 0, sizeof(args));
 	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
@@ -385,7 +385,7 @@ idpf_vc_create_vport(struct idpf_vport *vport,
 		return err;
 	}
 
-	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+	rte_memcpy(&(vport->vport_info.info), args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
 	return 0;
 }
 
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 8056996e3c..c1ae5affa4 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -19,6 +19,8 @@ INTERNAL {
 	idpf_vc_set_rss_key;
 	idpf_vc_set_rss_lut;
 	idpf_vc_switch_queue;
+	idpf_vport_deinit;
+	idpf_vport_init;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index c17c7bb472..7a8fb6fd4a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -178,73 +178,6 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev,
 	return 0;
 }
 
-#define IDPF_RSS_KEY_LEN 52
-
-static int
-idpf_init_vport(struct idpf_vport *vport)
-{
-	struct virtchnl2_create_vport *vport_info = vport->vport_info;
-	int i, type;
-
-	vport->vport_id = vport_info->vport_id;
-	vport->txq_model = vport_info->txq_model;
-	vport->rxq_model = vport_info->rxq_model;
-	vport->num_tx_q = vport_info->num_tx_q;
-	vport->num_tx_complq = vport_info->num_tx_complq;
-	vport->num_rx_q = vport_info->num_rx_q;
-	vport->num_rx_bufq = vport_info->num_rx_bufq;
-	vport->max_mtu = vport_info->max_mtu;
-	rte_memcpy(vport->default_mac_addr,
-		   vport_info->default_mac_addr, ETH_ALEN);
-	vport->rss_algorithm = vport_info->rss_algorithm;
-	vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
-				     vport_info->rss_key_size);
-	vport->rss_lut_size = vport_info->rss_lut_size;
-
-	for (i = 0; i < vport_info->chunks.num_chunks; i++) {
-		type = vport_info->chunks.chunks[i].type;
-		switch (type) {
-		case VIRTCHNL2_QUEUE_TYPE_TX:
-			vport->chunks_info.tx_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.tx_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.tx_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		case VIRTCHNL2_QUEUE_TYPE_RX:
-			vport->chunks_info.rx_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.rx_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.rx_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
-			vport->chunks_info.tx_compl_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.tx_compl_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.tx_compl_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
-			vport->chunks_info.rx_buf_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.rx_buf_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.rx_buf_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		default:
-			PMD_INIT_LOG(ERR, "Unsupported queue type");
-			break;
-		}
-	}
-
-	return 0;
-}
-
 static int
 idpf_config_rss(struct idpf_vport *vport)
 {
@@ -276,63 +209,34 @@ idpf_init_rss(struct idpf_vport *vport)
 {
 	struct rte_eth_rss_conf *rss_conf;
 	struct rte_eth_dev_data *dev_data;
-	uint16_t i, nb_q, lut_size;
+	uint16_t i, nb_q;
 	int ret = 0;
 
 	dev_data = vport->dev_data;
 	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
 	nb_q = dev_data->nb_rx_queues;
 
-	vport->rss_key = rte_zmalloc("rss_key",
-				     vport->rss_key_size, 0);
-	if (vport->rss_key == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
-		ret = -ENOMEM;
-		goto err_alloc_key;
-	}
-
-	lut_size = vport->rss_lut_size;
-	vport->rss_lut = rte_zmalloc("rss_lut",
-				     sizeof(uint32_t) * lut_size, 0);
-	if (vport->rss_lut == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate RSS lut");
-		ret = -ENOMEM;
-		goto err_alloc_lut;
-	}
-
 	if (rss_conf->rss_key == NULL) {
 		for (i = 0; i < vport->rss_key_size; i++)
 			vport->rss_key[i] = (uint8_t)rte_rand();
 	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
 		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
 			     vport->rss_key_size);
-		ret = -EINVAL;
-		goto err_cfg_key;
+		return -EINVAL;
 	} else {
 		rte_memcpy(vport->rss_key, rss_conf->rss_key,
 			   vport->rss_key_size);
 	}
 
-	for (i = 0; i < lut_size; i++)
+	for (i = 0; i < vport->rss_lut_size; i++)
 		vport->rss_lut[i] = i % nb_q;
 
 	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
 
 	ret = idpf_config_rss(vport);
-	if (ret != 0) {
+	if (ret != 0)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS");
-		goto err_cfg_key;
-	}
-
-	return ret;
 
-err_cfg_key:
-	rte_free(vport->rss_lut);
-	vport->rss_lut = NULL;
-err_alloc_lut:
-	rte_free(vport->rss_key);
-	vport->rss_key = NULL;
-err_alloc_key:
 	return ret;
 }
 
@@ -602,13 +506,7 @@ idpf_dev_close(struct rte_eth_dev *dev)
 
 	idpf_dev_stop(dev);
 
-	idpf_vc_destroy_vport(vport);
-
-	rte_free(vport->rss_lut);
-	vport->rss_lut = NULL;
-
-	rte_free(vport->rss_key);
-	vport->rss_key = NULL;
+	idpf_vport_deinit(vport);
 
 	rte_free(vport->recv_vectors);
 	vport->recv_vectors = NULL;
@@ -892,13 +790,6 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
 
-	vport->vport_info = rte_zmalloc(NULL, IDPF_DFLT_MBX_BUF_SIZE, 0);
-	if (vport->vport_info == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate vport_info");
-		ret = -ENOMEM;
-		goto err;
-	}
-
 	memset(&vport_req_info, 0, sizeof(vport_req_info));
 	ret = idpf_init_vport_req_info(dev, &vport_req_info);
 	if (ret != 0) {
@@ -906,19 +797,12 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 		goto err;
 	}
 
-	ret = idpf_vc_create_vport(vport, &vport_req_info);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to create vport.");
-		goto err_create_vport;
-	}
-
-	ret = idpf_init_vport(vport);
+	ret = idpf_vport_init(vport, &vport_req_info, dev->data);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vports.");
-		goto err_init_vport;
+		goto err;
 	}
 
-	vport->dev_data = dev->data;
 	adapter->vports[param->idx] = vport;
 	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
 	adapter->cur_vport_nb++;
@@ -927,7 +811,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	if (dev->data->mac_addrs == NULL) {
 		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
 		ret = -ENOMEM;
-		goto err_init_vport;
+		goto err_mac_addrs;
 	}
 
 	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
@@ -935,11 +819,9 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 
 	return 0;
 
-err_init_vport:
+err_mac_addrs:
 	adapter->vports[param->idx] = NULL;  /* reset */
-	idpf_vc_destroy_vport(vport);
-err_create_vport:
-	rte_free(vport->vport_info);
+	idpf_vport_deinit(vport);
 err:
 	return ret;
 }
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v6 06/19] common/idpf: add config RSS
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common modle beilei.xing
                         ` (4 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 05/19] common/idpf: add vport init/deinit beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 07/19] common/idpf: add irq map/unmap beilei.xing
                         ` (14 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the RSS configuration function to the common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c | 25 +++++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h |  2 ++
 drivers/common/idpf/version.map          |  1 +
 drivers/net/idpf/idpf_ethdev.c           | 26 ------------------------
 4 files changed, 28 insertions(+), 26 deletions(-)
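
A rough sketch of the resulting split of responsibilities: the PMD fills
the buffers that idpf_vport_init() already allocated, then hands the
virtchnl work to the common module (hypothetical condensation of the
PMD's idpf_init_rss(); IDPF_DEFAULT_RSS_HASH_EXPANDED stays PMD-side):

	#include <rte_random.h>
	#include <idpf_common_device.h>

	static int
	example_init_rss(struct idpf_vport *vport, uint16_t nb_rx_queues)
	{
		uint16_t i;

		/* buffers below were allocated by idpf_vport_init() */
		for (i = 0; i < vport->rss_key_size; i++)
			vport->rss_key[i] = (uint8_t)rte_rand();
		for (i = 0; i < vport->rss_lut_size; i++)
			vport->rss_lut[i] = i % nb_rx_queues;
		vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;

		/* pushes key, LUT and hash config over virtchnl */
		return idpf_config_rss(vport);
	}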

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 5628fb5c57..eee96b5083 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -273,4 +273,29 @@ idpf_vport_deinit(struct idpf_vport *vport)
 
 	return 0;
 }
+int
+idpf_config_rss(struct idpf_vport *vport)
+{
+	int ret;
+
+	ret = idpf_vc_set_rss_key(vport);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+	ret = idpf_vc_set_rss_lut(vport);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to configure RSS lut");
+		return ret;
+	}
+
+	ret = idpf_vc_set_rss_hash(vport);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return ret;
+}
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 14d04268e5..1d3bb06fef 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -153,5 +153,7 @@ int idpf_vport_init(struct idpf_vport *vport,
 		    void *dev_data);
 __rte_internal
 int idpf_vport_deinit(struct idpf_vport *vport);
+__rte_internal
+int idpf_config_rss(struct idpf_vport *vport);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index c1ae5affa4..fd56a9988f 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,6 +3,7 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+	idpf_config_rss;
 	idpf_execute_vc_cmd;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 7a8fb6fd4a..f728318dad 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -178,32 +178,6 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-idpf_config_rss(struct idpf_vport *vport)
-{
-	int ret;
-
-	ret = idpf_vc_set_rss_key(vport);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
-		return ret;
-	}
-
-	ret = idpf_vc_set_rss_lut(vport);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
-		return ret;
-	}
-
-	ret = idpf_vc_set_rss_hash(vport);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
-		return ret;
-	}
-
-	return ret;
-}
-
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v6 07/19] common/idpf: add irq map/unmap
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common modle beilei.xing
                         ` (5 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 06/19] common/idpf: add config RSS beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 08/19] common/idpf: support get packet type beilei.xing
                         ` (13 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Introduce the idpf_config_irq_map/idpf_config_irq_unmap functions
in the common module, and refine the Rx queue IRQ configuration
function accordingly.
Refine the device start function with proper IRQ error handling;
in addition, vport->stopped is now initialized only at the end of
the function, once start has succeeded.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 102 +++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h   |   6 ++
 drivers/common/idpf/idpf_common_virtchnl.c |   8 --
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +-
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 102 +++------------------
 drivers/net/idpf/idpf_ethdev.h             |   1 -
 7 files changed, 125 insertions(+), 102 deletions(-)
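
Condensed, the start/stop pairing the PMD ends up with looks roughly as
below (hypothetical fragments mirroring the idpf_dev_start() and
idpf_dev_stop() hunks; IDPF_DFLT_Q_VEC_NUM is the PMD's default):

	static int
	example_start_irqs(struct idpf_vport *vport, struct rte_eth_dev *dev)
	{
		int ret;

		/* allocate vectors first, then map Rx queues onto them */
		ret = idpf_vc_alloc_vectors(vport, IDPF_DFLT_Q_VEC_NUM);
		if (ret != 0)
			return ret;

		ret = idpf_config_irq_map(vport, dev->data->nb_rx_queues);
		if (ret != 0)
			idpf_vc_dealloc_vectors(vport); /* unwind in reverse */
		return ret;
	}

	static void
	example_stop_irqs(struct idpf_vport *vport, struct rte_eth_dev *dev)
	{
		/* strictly the reverse of start */
		idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
		idpf_vc_dealloc_vectors(vport);
	}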

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index eee96b5083..04bf4d51dd 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -247,8 +247,21 @@ idpf_vport_init(struct idpf_vport *vport,
 		goto err_rss_lut;
 	}
 
+	/* recv_vectors is used for VIRTCHNL2_OP_ALLOC_VECTORS response,
+	 * reserve maximum size for it now, may need optimization in future.
+	 */
+	vport->recv_vectors = rte_zmalloc("recv_vectors", IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (vport->recv_vectors == NULL) {
+		DRV_LOG(ERR, "Failed to allocate recv_vectors");
+		ret = -ENOMEM;
+		goto err_recv_vec;
+	}
+
 	return 0;
 
+err_recv_vec:
+	rte_free(vport->rss_lut);
+	vport->rss_lut = NULL;
 err_rss_lut:
 	vport->dev_data = NULL;
 	rte_free(vport->rss_key);
@@ -261,6 +274,8 @@ idpf_vport_init(struct idpf_vport *vport,
 int
 idpf_vport_deinit(struct idpf_vport *vport)
 {
+	rte_free(vport->recv_vectors);
+	vport->recv_vectors = NULL;
 	rte_free(vport->rss_lut);
 	vport->rss_lut = NULL;
 
@@ -298,4 +313,91 @@ idpf_config_rss(struct idpf_vport *vport)
 
 	return ret;
 }
+
+int
+idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_queue_vector *qv_map;
+	struct idpf_hw *hw = &adapter->hw;
+	uint32_t dynctl_val, itrn_val;
+	uint32_t dynctl_reg_start;
+	uint32_t itrn_reg_start;
+	uint16_t i;
+	int ret;
+
+	qv_map = rte_zmalloc("qv_map",
+			     nb_rx_queues *
+			     sizeof(struct virtchnl2_queue_vector), 0);
+	if (qv_map == NULL) {
+		DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
+			nb_rx_queues);
+		ret = -ENOMEM;
+		goto qv_map_alloc_err;
+	}
+
+	/* Rx interrupt disabled, Map interrupt only for writeback */
+
+	/* The capability flags adapter->caps.other_caps should be
+	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
+	 * condition should be updated when the FW can return the
+	 * correct flag bits.
+	 */
+	dynctl_reg_start =
+		vport->recv_vectors->vchunks.vchunks->dynctl_reg_start;
+	itrn_reg_start =
+		vport->recv_vectors->vchunks.vchunks->itrn_reg_start;
+	dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start);
+	DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x", dynctl_val);
+	itrn_val = IDPF_READ_REG(hw, itrn_reg_start);
+	DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val);
+	/* Force write-backs by setting WB_ON_ITR bit in DYN_CTL
+	 * register. WB_ON_ITR and INTENA are mutually exclusive
+	 * bits. Setting WB_ON_ITR bits means TX and RX Descs
+	 * are written back based on ITR expiration irrespective
+	 * of INTENA setting.
+	 */
+	/* TBD: need to tune INTERVAL value for better performance. */
+	itrn_val = (itrn_val == 0) ? IDPF_DFLT_INTERVAL : itrn_val;
+	dynctl_val = VIRTCHNL2_ITR_IDX_0  <<
+		     PF_GLINT_DYN_CTL_ITR_INDX_S |
+		     PF_GLINT_DYN_CTL_WB_ON_ITR_M |
+		     itrn_val << PF_GLINT_DYN_CTL_INTERVAL_S;
+	IDPF_WRITE_REG(hw, dynctl_reg_start, dynctl_val);
+
+	for (i = 0; i < nb_rx_queues; i++) {
+		/* map all queues to the same vector */
+		qv_map[i].queue_id = vport->chunks_info.rx_start_qid + i;
+		qv_map[i].vector_id =
+			vport->recv_vectors->vchunks.vchunks->start_vector_id;
+	}
+	vport->qv_map = qv_map;
+
+	ret = idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, true);
+	if (ret != 0) {
+		DRV_LOG(ERR, "config interrupt mapping failed");
+		goto config_irq_map_err;
+	}
+
+	return 0;
+
+config_irq_map_err:
+	rte_free(vport->qv_map);
+	vport->qv_map = NULL;
+
+qv_map_alloc_err:
+	return ret;
+}
+
+int
+idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues)
+{
+	idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, false);
+
+	rte_free(vport->qv_map);
+	vport->qv_map = NULL;
+
+	return 0;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 1d3bb06fef..d45c2b8777 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -17,6 +17,8 @@
 
 #define IDPF_MAX_PKT_TYPE	1024
 
+#define IDPF_DFLT_INTERVAL	16
+
 struct idpf_adapter {
 	struct idpf_hw hw;
 	struct virtchnl2_version_info virtchnl_version;
@@ -155,5 +157,9 @@ __rte_internal
 int idpf_vport_deinit(struct idpf_vport *vport);
 __rte_internal
 int idpf_config_rss(struct idpf_vport *vport);
+__rte_internal
+int idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues);
+__rte_internal
+int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index e90aa1604d..f659321bdb 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -573,14 +573,6 @@ idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
 
-	if (vport->recv_vectors == NULL) {
-		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
-		if (vport->recv_vectors == NULL) {
-			rte_free(alloc_vec);
-			return -ENOMEM;
-		}
-	}
-
 	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
 	rte_free(alloc_vec);
 	return err;
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index e05619f4b4..155527f0b6 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -23,6 +23,9 @@ int idpf_vc_set_rss_lut(struct idpf_vport *vport);
 __rte_internal
 int idpf_vc_set_rss_hash(struct idpf_vport *vport);
 __rte_internal
+int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+				 uint16_t nb_rxq, bool map);
+__rte_internal
 int idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
 			 bool rx, bool on);
 __rte_internal
@@ -30,9 +33,6 @@ int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
 __rte_internal
 int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
 __rte_internal
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
-				 uint16_t nb_rxq, bool map);
-__rte_internal
 int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
 __rte_internal
 int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index fd56a9988f..5dab5787de 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,6 +3,8 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+	idpf_config_irq_map;
+	idpf_config_irq_unmap;
 	idpf_config_rss;
 	idpf_execute_vc_cmd;
 	idpf_vc_alloc_vectors;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index f728318dad..d0799087a5 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -281,84 +281,9 @@ static int
 idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_queue_vector *qv_map;
-	struct idpf_hw *hw = &adapter->hw;
-	uint32_t dynctl_reg_start;
-	uint32_t itrn_reg_start;
-	uint32_t dynctl_val, itrn_val;
-	uint16_t i;
-
-	qv_map = rte_zmalloc("qv_map",
-			dev->data->nb_rx_queues *
-			sizeof(struct virtchnl2_queue_vector), 0);
-	if (qv_map == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
-			    dev->data->nb_rx_queues);
-		goto qv_map_alloc_err;
-	}
-
-	/* Rx interrupt disabled, Map interrupt only for writeback */
-
-	/* The capability flags adapter->caps.other_caps should be
-	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
-	 * condition should be updated when the FW can return the
-	 * correct flag bits.
-	 */
-	dynctl_reg_start =
-		vport->recv_vectors->vchunks.vchunks->dynctl_reg_start;
-	itrn_reg_start =
-		vport->recv_vectors->vchunks.vchunks->itrn_reg_start;
-	dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start);
-	PMD_DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x",
-		    dynctl_val);
-	itrn_val = IDPF_READ_REG(hw, itrn_reg_start);
-	PMD_DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val);
-	/* Force write-backs by setting WB_ON_ITR bit in DYN_CTL
-	 * register. WB_ON_ITR and INTENA are mutually exclusive
-	 * bits. Setting WB_ON_ITR bits means TX and RX Descs
-	 * are written back based on ITR expiration irrespective
-	 * of INTENA setting.
-	 */
-	/* TBD: need to tune INTERVAL value for better performance. */
-	if (itrn_val != 0)
-		IDPF_WRITE_REG(hw,
-			       dynctl_reg_start,
-			       VIRTCHNL2_ITR_IDX_0  <<
-			       PF_GLINT_DYN_CTL_ITR_INDX_S |
-			       PF_GLINT_DYN_CTL_WB_ON_ITR_M |
-			       itrn_val <<
-			       PF_GLINT_DYN_CTL_INTERVAL_S);
-	else
-		IDPF_WRITE_REG(hw,
-			       dynctl_reg_start,
-			       VIRTCHNL2_ITR_IDX_0  <<
-			       PF_GLINT_DYN_CTL_ITR_INDX_S |
-			       PF_GLINT_DYN_CTL_WB_ON_ITR_M |
-			       IDPF_DFLT_INTERVAL <<
-			       PF_GLINT_DYN_CTL_INTERVAL_S);
-
-	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		/* map all queues to the same vector */
-		qv_map[i].queue_id = vport->chunks_info.rx_start_qid + i;
-		qv_map[i].vector_id =
-			vport->recv_vectors->vchunks.vchunks->start_vector_id;
-	}
-	vport->qv_map = qv_map;
-
-	if (idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, true) != 0) {
-		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
-		goto config_irq_map_err;
-	}
-
-	return 0;
-
-config_irq_map_err:
-	rte_free(vport->qv_map);
-	vport->qv_map = NULL;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
 
-qv_map_alloc_err:
-	return -1;
+	return idpf_config_irq_map(vport, nb_rx_queues);
 }
 
 static int
@@ -404,8 +329,6 @@ idpf_dev_start(struct rte_eth_dev *dev)
 	uint16_t req_vecs_num;
 	int ret;
 
-	vport->stopped = 0;
-
 	req_vecs_num = IDPF_DFLT_Q_VEC_NUM;
 	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
 		PMD_DRV_LOG(ERR, "The accumulated request vectors' number should be less than %d",
@@ -424,13 +347,13 @@ idpf_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_config_rx_queues_irqs(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to configure irqs");
-		goto err_vec;
+		goto err_irq;
 	}
 
 	ret = idpf_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
-		goto err_vec;
+		goto err_startq;
 	}
 
 	idpf_set_rx_function(dev);
@@ -442,10 +365,16 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	vport->stopped = 0;
+
 	return 0;
 
 err_vport:
 	idpf_stop_queues(dev);
+err_startq:
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+err_irq:
+	idpf_vc_dealloc_vectors(vport);
 err_vec:
 	return ret;
 }
@@ -462,10 +391,9 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_stop_queues(dev);
 
-	idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, false);
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
 
-	if (vport->recv_vectors != NULL)
-		idpf_vc_dealloc_vectors(vport);
+	idpf_vc_dealloc_vectors(vport);
 
 	vport->stopped = 1;
 
@@ -482,12 +410,6 @@ idpf_dev_close(struct rte_eth_dev *dev)
 
 	idpf_vport_deinit(vport);
 
-	rte_free(vport->recv_vectors);
-	vport->recv_vectors = NULL;
-
-	rte_free(vport->qv_map);
-	vport->qv_map = NULL;
-
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
 	adapter->cur_vport_nb--;
 	dev->data->dev_private = NULL;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 07ffe8e408..55be98a8ed 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -32,7 +32,6 @@
 #define IDPF_RX_BUFQ_PER_GRP	2
 
 #define IDPF_DFLT_Q_VEC_NUM	1
-#define IDPF_DFLT_INTERVAL	16
 
 #define IDPF_MIN_BUF_SIZE	1024
 #define IDPF_MAX_FRAME_SIZE	9728
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v6 08/19] common/idpf: support get packet type
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common modle beilei.xing
                         ` (6 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 07/19] common/idpf: add irq map/unmap beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 09/19] common/idpf: add vport info initialization beilei.xing
                         ` (12 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move ptype_tbl field to idpf_adapter structure.
Move get_pkt_type to common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c | 216 +++++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h |   7 +
 drivers/common/idpf/meson.build          |   2 +
 drivers/net/idpf/idpf_ethdev.c           |   6 -
 drivers/net/idpf/idpf_ethdev.h           |   4 -
 drivers/net/idpf/idpf_rxtx.c             |   4 +-
 drivers/net/idpf/idpf_rxtx.h             |   4 -
 drivers/net/idpf/idpf_rxtx_vec_avx512.c  |   3 +-
 drivers/net/idpf/idpf_vchnl.c            | 213 ----------------------
 9 files changed, 228 insertions(+), 231 deletions(-)
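
Two details are worth calling out. First, the ptype records returned
over the mailbox are variable-length, so the parser steps through the
buffer with IDPF_GET_PTYPE_SIZE(), which accounts for the flexible
proto_id[] tail (one entry inline, the rest appended). A hypothetical
walk, assuming ptype_info already holds one mailbox buffer (bounds and
terminator checks omitted):

	uint16_t off = sizeof(struct virtchnl2_get_ptype_info) -
		       sizeof(struct virtchnl2_ptype);
	uint16_t i;

	for (i = 0; i < rte_cpu_to_le_16(ptype_info->num_ptypes); i++) {
		struct virtchnl2_ptype *p = (struct virtchnl2_ptype *)
				((uint8_t *)ptype_info + off);

		/* header plus (proto_id_count - 1) extra protocol IDs */
		off += IDPF_GET_PTYPE_SIZE(p);
		/* decode p->proto_id[] into one RTE_PTYPE_* value */
	}

Second, the payoff on the hot path: with the table cached per adapter,
both the scalar and the AVX512 Rx routines resolve the packet type with
a single lookup, mb->packet_type = adapter->ptype_tbl[ptype_id], using
the 10-bit ptype from the Rx descriptor as the index.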

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 04bf4d51dd..3f8e25e6a2 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -96,6 +96,216 @@ idpf_init_mbx(struct idpf_hw *hw)
 	return ret;
 }
 
+static int
+idpf_get_pkt_type(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_get_ptype_info *ptype_info;
+	uint16_t ptype_offset, i, j;
+	uint16_t ptype_recvd = 0;
+	int ret;
+
+	ret = idpf_vc_query_ptype_info(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to query packet type information");
+		return ret;
+	}
+
+	ptype_info = rte_zmalloc("ptype_info", IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (ptype_info == NULL)
+		return -ENOMEM;
+
+	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
+		ret = idpf_vc_read_one_msg(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
+					   IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
+		if (ret != 0) {
+			DRV_LOG(ERR, "Failed to get packet type information");
+			goto free_ptype_info;
+		}
+
+		ptype_recvd += ptype_info->num_ptypes;
+		ptype_offset = sizeof(struct virtchnl2_get_ptype_info) -
+						sizeof(struct virtchnl2_ptype);
+
+		for (i = 0; i < rte_cpu_to_le_16(ptype_info->num_ptypes); i++) {
+			bool is_inner = false, is_ip = false;
+			struct virtchnl2_ptype *ptype;
+			uint32_t proto_hdr = 0;
+
+			ptype = (struct virtchnl2_ptype *)
+					((uint8_t *)ptype_info + ptype_offset);
+			ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
+			if (ptype_offset > IDPF_DFLT_MBX_BUF_SIZE) {
+				ret = -EINVAL;
+				goto free_ptype_info;
+			}
+
+			if (rte_cpu_to_le_16(ptype->ptype_id_10) == 0xFFFF)
+				goto free_ptype_info;
+
+			for (j = 0; j < ptype->proto_id_count; j++) {
+				switch (rte_cpu_to_le_16(ptype->proto_id[j])) {
+				case VIRTCHNL2_PROTO_HDR_GRE:
+				case VIRTCHNL2_PROTO_HDR_VXLAN:
+					proto_hdr &= ~RTE_PTYPE_L4_MASK;
+					proto_hdr |= RTE_PTYPE_TUNNEL_GRENAT;
+					is_inner = true;
+					break;
+				case VIRTCHNL2_PROTO_HDR_MAC:
+					if (is_inner) {
+						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
+						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER;
+					} else {
+						proto_hdr &= ~RTE_PTYPE_L2_MASK;
+						proto_hdr |= RTE_PTYPE_L2_ETHER;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_VLAN:
+					if (is_inner) {
+						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
+						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER_VLAN;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_PTP:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_TIMESYNC;
+					break;
+				case VIRTCHNL2_PROTO_HDR_LLDP:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_LLDP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_ARP:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_ARP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_PPPOE:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_PPPOE;
+					break;
+				case VIRTCHNL2_PROTO_HDR_IPV4:
+					if (!is_ip) {
+						proto_hdr |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+						is_ip = true;
+					} else {
+						proto_hdr |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+							     RTE_PTYPE_TUNNEL_IP;
+						is_inner = true;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_IPV6:
+					if (!is_ip) {
+						proto_hdr |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
+						is_ip = true;
+					} else {
+						proto_hdr |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+							     RTE_PTYPE_TUNNEL_IP;
+						is_inner = true;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_IPV4_FRAG:
+				case VIRTCHNL2_PROTO_HDR_IPV6_FRAG:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_FRAG;
+					else
+						proto_hdr |= RTE_PTYPE_L4_FRAG;
+					break;
+				case VIRTCHNL2_PROTO_HDR_UDP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_UDP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_UDP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_TCP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_TCP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_TCP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_SCTP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_SCTP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_SCTP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_ICMP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_ICMP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_ICMPV6:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_ICMP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_L2TPV2:
+				case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL:
+				case VIRTCHNL2_PROTO_HDR_L2TPV3:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_L2TP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_NVGRE:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_NVGRE;
+					break;
+				case VIRTCHNL2_PROTO_HDR_GTPC_TEID:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_GTPC;
+					break;
+				case VIRTCHNL2_PROTO_HDR_GTPU:
+				case VIRTCHNL2_PROTO_HDR_GTPU_UL:
+				case VIRTCHNL2_PROTO_HDR_GTPU_DL:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_GTPU;
+					break;
+				case VIRTCHNL2_PROTO_HDR_PAY:
+				case VIRTCHNL2_PROTO_HDR_IPV6_EH:
+				case VIRTCHNL2_PROTO_HDR_PRE_MAC:
+				case VIRTCHNL2_PROTO_HDR_POST_MAC:
+				case VIRTCHNL2_PROTO_HDR_ETHERTYPE:
+				case VIRTCHNL2_PROTO_HDR_SVLAN:
+				case VIRTCHNL2_PROTO_HDR_CVLAN:
+				case VIRTCHNL2_PROTO_HDR_MPLS:
+				case VIRTCHNL2_PROTO_HDR_MMPLS:
+				case VIRTCHNL2_PROTO_HDR_CTRL:
+				case VIRTCHNL2_PROTO_HDR_ECP:
+				case VIRTCHNL2_PROTO_HDR_EAPOL:
+				case VIRTCHNL2_PROTO_HDR_PPPOD:
+				case VIRTCHNL2_PROTO_HDR_IGMP:
+				case VIRTCHNL2_PROTO_HDR_AH:
+				case VIRTCHNL2_PROTO_HDR_ESP:
+				case VIRTCHNL2_PROTO_HDR_IKE:
+				case VIRTCHNL2_PROTO_HDR_NATT_KEEP:
+				case VIRTCHNL2_PROTO_HDR_GTP:
+				case VIRTCHNL2_PROTO_HDR_GTP_EH:
+				case VIRTCHNL2_PROTO_HDR_GTPCV2:
+				case VIRTCHNL2_PROTO_HDR_ECPRI:
+				case VIRTCHNL2_PROTO_HDR_VRRP:
+				case VIRTCHNL2_PROTO_HDR_OSPF:
+				case VIRTCHNL2_PROTO_HDR_TUN:
+				case VIRTCHNL2_PROTO_HDR_VXLAN_GPE:
+				case VIRTCHNL2_PROTO_HDR_GENEVE:
+				case VIRTCHNL2_PROTO_HDR_NSH:
+				case VIRTCHNL2_PROTO_HDR_QUIC:
+				case VIRTCHNL2_PROTO_HDR_PFCP:
+				case VIRTCHNL2_PROTO_HDR_PFCP_NODE:
+				case VIRTCHNL2_PROTO_HDR_PFCP_SESSION:
+				case VIRTCHNL2_PROTO_HDR_RTP:
+				case VIRTCHNL2_PROTO_HDR_NO_PROTO:
+				default:
+					continue;
+				}
+				adapter->ptype_tbl[ptype->ptype_id_10] = proto_hdr;
+			}
+		}
+	}
+
+free_ptype_info:
+	rte_free(ptype_info);
+	clear_cmd(adapter);
+	return ret;
+}
+
 int
 idpf_adapter_init(struct idpf_adapter *adapter)
 {
@@ -135,6 +345,12 @@ idpf_adapter_init(struct idpf_adapter *adapter)
 		goto err_check_api;
 	}
 
+	ret = idpf_get_pkt_type(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to set ptype table");
+		goto err_check_api;
+	}
+
 	return 0;
 
 err_check_api:
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index d45c2b8777..997f01f3aa 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -5,6 +5,7 @@
 #ifndef _IDPF_COMMON_DEVICE_H_
 #define _IDPF_COMMON_DEVICE_H_
 
+#include <rte_mbuf_ptype.h>
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
 #include <idpf_common_logs.h>
@@ -19,6 +20,10 @@
 
 #define IDPF_DFLT_INTERVAL	16
 
+#define IDPF_GET_PTYPE_SIZE(p)						\
+	(sizeof(struct virtchnl2_ptype) +				\
+	 (((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
+
 struct idpf_adapter {
 	struct idpf_hw hw;
 	struct virtchnl2_version_info virtchnl_version;
@@ -26,6 +31,8 @@ struct idpf_adapter {
 	volatile uint32_t pend_cmd; /* pending command not finished */
 	uint32_t cmd_retval; /* return value of the cmd response from cp */
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
+
+	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
 };
 
 struct idpf_chunks_info {
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index d1578641ba..c6cc7a196b 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2022 Intel Corporation
 
+deps += ['mbuf']
+
 sources = files(
     'idpf_common_device.c',
     'idpf_common_virtchnl.c',
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index d0799087a5..84046f955a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -602,12 +602,6 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
 		goto err_adapter_init;
 	}
 
-	ret = idpf_get_pkt_type(adapter);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to set ptype table");
-		goto err_get_ptype;
-	}
-
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 55be98a8ed..d30807ca41 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -89,8 +89,6 @@ struct idpf_adapter_ext {
 
 	uint16_t used_vecs_num;
 
-	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
-
 	bool rx_vec_allowed;
 	bool tx_vec_allowed;
 	bool rx_use_avx512;
@@ -107,6 +105,4 @@ TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
 #define IDPF_ADAPTER_TO_EXT(p)					\
 	container_of((p), struct idpf_adapter_ext, base)
 
-int idpf_get_pkt_type(struct idpf_adapter_ext *adapter);
-
 #endif /* _IDPF_ETHDEV_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index ad3e31208d..0b10e4248b 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1407,7 +1407,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
 	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
 	rx_desc_ring = rxq->rx_ring;
-	ptype_tbl = ad->ptype_tbl;
+	ptype_tbl = rxq->adapter->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
@@ -1812,7 +1812,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	rx_id = rxq->rx_tail;
 	rx_ring = rxq->rx_ring;
-	ptype_tbl = ad->ptype_tbl;
+	ptype_tbl = rxq->adapter->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 9417651b3f..cac6040943 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -82,10 +82,6 @@
 #define IDPF_TX_OFFLOAD_NOTSUP_MASK \
 		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
 
-#define IDPF_GET_PTYPE_SIZE(p) \
-	(sizeof(struct virtchnl2_ptype) + \
-	(((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
-
 extern uint64_t idpf_timestamp_dynflag;
 
 struct idpf_rx_queue {
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index efa7cd2187..fb2b6bb53c 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -245,8 +245,7 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
 				   struct rte_mbuf **rx_pkts,
 				   uint16_t nb_pkts)
 {
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(rxq->adapter);
-	const uint32_t *type_table = adapter->ptype_tbl;
+	const uint32_t *type_table = rxq->adapter->ptype_tbl;
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
 						    rxq->mbuf_initializer);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 6f4eb52beb..45d05ed108 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -23,219 +23,6 @@
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
 
-int __rte_cold
-idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
-{
-	struct virtchnl2_get_ptype_info *ptype_info;
-	struct idpf_adapter *base;
-	uint16_t ptype_offset, i, j;
-	uint16_t ptype_recvd = 0;
-	int ret;
-
-	base = &adapter->base;
-
-	ret = idpf_vc_query_ptype_info(base);
-	if (ret != 0) {
-		PMD_DRV_LOG(ERR, "Fail to query packet type information");
-		return ret;
-	}
-
-	ptype_info = rte_zmalloc("ptype_info", IDPF_DFLT_MBX_BUF_SIZE, 0);
-		if (ptype_info == NULL)
-			return -ENOMEM;
-
-	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_vc_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
-					   IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
-		if (ret != 0) {
-			PMD_DRV_LOG(ERR, "Fail to get packet type information");
-			goto free_ptype_info;
-		}
-
-		ptype_recvd += ptype_info->num_ptypes;
-		ptype_offset = sizeof(struct virtchnl2_get_ptype_info) -
-						sizeof(struct virtchnl2_ptype);
-
-		for (i = 0; i < rte_cpu_to_le_16(ptype_info->num_ptypes); i++) {
-			bool is_inner = false, is_ip = false;
-			struct virtchnl2_ptype *ptype;
-			uint32_t proto_hdr = 0;
-
-			ptype = (struct virtchnl2_ptype *)
-					((uint8_t *)ptype_info + ptype_offset);
-			ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
-			if (ptype_offset > IDPF_DFLT_MBX_BUF_SIZE) {
-				ret = -EINVAL;
-				goto free_ptype_info;
-			}
-
-			if (rte_cpu_to_le_16(ptype->ptype_id_10) == 0xFFFF)
-				goto free_ptype_info;
-
-			for (j = 0; j < ptype->proto_id_count; j++) {
-				switch (rte_cpu_to_le_16(ptype->proto_id[j])) {
-				case VIRTCHNL2_PROTO_HDR_GRE:
-				case VIRTCHNL2_PROTO_HDR_VXLAN:
-					proto_hdr &= ~RTE_PTYPE_L4_MASK;
-					proto_hdr |= RTE_PTYPE_TUNNEL_GRENAT;
-					is_inner = true;
-					break;
-				case VIRTCHNL2_PROTO_HDR_MAC:
-					if (is_inner) {
-						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
-						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER;
-					} else {
-						proto_hdr &= ~RTE_PTYPE_L2_MASK;
-						proto_hdr |= RTE_PTYPE_L2_ETHER;
-					}
-					break;
-				case VIRTCHNL2_PROTO_HDR_VLAN:
-					if (is_inner) {
-						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
-						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER_VLAN;
-					}
-					break;
-				case VIRTCHNL2_PROTO_HDR_PTP:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_TIMESYNC;
-					break;
-				case VIRTCHNL2_PROTO_HDR_LLDP:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_LLDP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_ARP:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_ARP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_PPPOE:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_PPPOE;
-					break;
-				case VIRTCHNL2_PROTO_HDR_IPV4:
-					if (!is_ip) {
-						proto_hdr |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-						is_ip = true;
-					} else {
-						proto_hdr |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
-							     RTE_PTYPE_TUNNEL_IP;
-						is_inner = true;
-					}
-						break;
-				case VIRTCHNL2_PROTO_HDR_IPV6:
-					if (!is_ip) {
-						proto_hdr |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
-						is_ip = true;
-					} else {
-						proto_hdr |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
-							     RTE_PTYPE_TUNNEL_IP;
-						is_inner = true;
-					}
-					break;
-				case VIRTCHNL2_PROTO_HDR_IPV4_FRAG:
-				case VIRTCHNL2_PROTO_HDR_IPV6_FRAG:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_FRAG;
-					else
-						proto_hdr |= RTE_PTYPE_L4_FRAG;
-					break;
-				case VIRTCHNL2_PROTO_HDR_UDP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_UDP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_UDP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_TCP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_TCP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_TCP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_SCTP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_SCTP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_SCTP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_ICMP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_ICMP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_ICMPV6:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_ICMP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_L2TPV2:
-				case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL:
-				case VIRTCHNL2_PROTO_HDR_L2TPV3:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_L2TP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_NVGRE:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_NVGRE;
-					break;
-				case VIRTCHNL2_PROTO_HDR_GTPC_TEID:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_GTPC;
-					break;
-				case VIRTCHNL2_PROTO_HDR_GTPU:
-				case VIRTCHNL2_PROTO_HDR_GTPU_UL:
-				case VIRTCHNL2_PROTO_HDR_GTPU_DL:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_GTPU;
-					break;
-				case VIRTCHNL2_PROTO_HDR_PAY:
-				case VIRTCHNL2_PROTO_HDR_IPV6_EH:
-				case VIRTCHNL2_PROTO_HDR_PRE_MAC:
-				case VIRTCHNL2_PROTO_HDR_POST_MAC:
-				case VIRTCHNL2_PROTO_HDR_ETHERTYPE:
-				case VIRTCHNL2_PROTO_HDR_SVLAN:
-				case VIRTCHNL2_PROTO_HDR_CVLAN:
-				case VIRTCHNL2_PROTO_HDR_MPLS:
-				case VIRTCHNL2_PROTO_HDR_MMPLS:
-				case VIRTCHNL2_PROTO_HDR_CTRL:
-				case VIRTCHNL2_PROTO_HDR_ECP:
-				case VIRTCHNL2_PROTO_HDR_EAPOL:
-				case VIRTCHNL2_PROTO_HDR_PPPOD:
-				case VIRTCHNL2_PROTO_HDR_IGMP:
-				case VIRTCHNL2_PROTO_HDR_AH:
-				case VIRTCHNL2_PROTO_HDR_ESP:
-				case VIRTCHNL2_PROTO_HDR_IKE:
-				case VIRTCHNL2_PROTO_HDR_NATT_KEEP:
-				case VIRTCHNL2_PROTO_HDR_GTP:
-				case VIRTCHNL2_PROTO_HDR_GTP_EH:
-				case VIRTCHNL2_PROTO_HDR_GTPCV2:
-				case VIRTCHNL2_PROTO_HDR_ECPRI:
-				case VIRTCHNL2_PROTO_HDR_VRRP:
-				case VIRTCHNL2_PROTO_HDR_OSPF:
-				case VIRTCHNL2_PROTO_HDR_TUN:
-				case VIRTCHNL2_PROTO_HDR_VXLAN_GPE:
-				case VIRTCHNL2_PROTO_HDR_GENEVE:
-				case VIRTCHNL2_PROTO_HDR_NSH:
-				case VIRTCHNL2_PROTO_HDR_QUIC:
-				case VIRTCHNL2_PROTO_HDR_PFCP:
-				case VIRTCHNL2_PROTO_HDR_PFCP_NODE:
-				case VIRTCHNL2_PROTO_HDR_PFCP_SESSION:
-				case VIRTCHNL2_PROTO_HDR_RTP:
-				case VIRTCHNL2_PROTO_HDR_NO_PROTO:
-				default:
-					continue;
-				}
-				adapter->ptype_tbl[ptype->ptype_id_10] = proto_hdr;
-			}
-		}
-	}
-
-free_ptype_info:
-	rte_free(ptype_info);
-	clear_cmd(base);
-	return ret;
-}
-
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v6 09/19] common/idpf: add vport info initialization
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common modle beilei.xing
                         ` (7 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 08/19] common/idpf: support get packet type beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 10/19] common/idpf: add vector flags in vport beilei.xing
                         ` (11 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the queue model fields from the idpf_adapter_ext structure to
the idpf_adapter structure.
Refine some parameter and function names, and move the
idpf_create_vport_info_init function to the common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c | 36 ++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h | 11 ++++++
 drivers/common/idpf/version.map          |  1 +
 drivers/net/idpf/idpf_ethdev.c           | 48 +++---------------------
 drivers/net/idpf/idpf_ethdev.h           |  8 ----
 5 files changed, 54 insertions(+), 50 deletions(-)
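
With the default constants in this patch (IDPF_DEFAULT_TXQ_NUM and
IDPF_DEFAULT_RXQ_NUM both 16), the request resolves to: split Tx model
-> num_tx_q = 16 and num_tx_complq = 16 * IDPF_TX_COMPLQ_PER_GRP = 16;
split Rx model -> num_rx_q = 16 and num_rx_bufq = 16 *
IDPF_RX_BUFQ_PER_GRP = 32; in the single queue model the completion
and buffer queue counts are simply 0.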

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 3f8e25e6a2..a9304df6dd 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -616,4 +616,40 @@ idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues)
 	return 0;
 }
 
+int
+idpf_create_vport_info_init(struct idpf_vport *vport,
+			    struct virtchnl2_create_vport *vport_info)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+
+	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+	if (adapter->txq_model == 0) {
+		vport_info->txq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+		vport_info->num_tx_q =
+			rte_cpu_to_le_16(IDPF_DEFAULT_TXQ_NUM);
+		vport_info->num_tx_complq =
+			rte_cpu_to_le_16(IDPF_DEFAULT_TXQ_NUM * IDPF_TX_COMPLQ_PER_GRP);
+	} else {
+		vport_info->txq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+		vport_info->num_tx_q = rte_cpu_to_le_16(IDPF_DEFAULT_TXQ_NUM);
+		vport_info->num_tx_complq = 0;
+	}
+	if (adapter->rxq_model == 0) {
+		vport_info->rxq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+		vport_info->num_rx_q = rte_cpu_to_le_16(IDPF_DEFAULT_RXQ_NUM);
+		vport_info->num_rx_bufq =
+			rte_cpu_to_le_16(IDPF_DEFAULT_RXQ_NUM * IDPF_RX_BUFQ_PER_GRP);
+	} else {
+		vport_info->rxq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+		vport_info->num_rx_q = rte_cpu_to_le_16(IDPF_DEFAULT_RXQ_NUM);
+		vport_info->num_rx_bufq = 0;
+	}
+
+	return 0;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 997f01f3aa..0c73d40e53 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -16,6 +16,11 @@
 #define IDPF_CTLQ_LEN		64
 #define IDPF_DFLT_MBX_BUF_SIZE	4096
 
+#define IDPF_DEFAULT_RXQ_NUM	16
+#define IDPF_RX_BUFQ_PER_GRP	2
+#define IDPF_DEFAULT_TXQ_NUM	16
+#define IDPF_TX_COMPLQ_PER_GRP	1
+
 #define IDPF_MAX_PKT_TYPE	1024
 
 #define IDPF_DFLT_INTERVAL	16
@@ -33,6 +38,9 @@ struct idpf_adapter {
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
 
 	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
+
+	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
+	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
 };
 
 struct idpf_chunks_info {
@@ -168,5 +176,8 @@ __rte_internal
 int idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues);
 __rte_internal
 int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
+__rte_internal
+int idpf_create_vport_info_init(struct idpf_vport *vport,
+				struct virtchnl2_create_vport *vport_info);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 5dab5787de..83338640c4 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -6,6 +6,7 @@ INTERNAL {
 	idpf_config_irq_map;
 	idpf_config_irq_unmap;
 	idpf_config_rss;
+	idpf_create_vport_info_init;
 	idpf_execute_vc_cmd;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 84046f955a..734e97ffc2 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -142,42 +142,6 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
-static int
-idpf_init_vport_req_info(struct rte_eth_dev *dev,
-			 struct virtchnl2_create_vport *vport_info)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(vport->adapter);
-
-	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
-	if (adapter->txq_model == 0) {
-		vport_info->txq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
-		vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
-		vport_info->num_tx_complq =
-			IDPF_DEFAULT_TXQ_NUM * IDPF_TX_COMPLQ_PER_GRP;
-	} else {
-		vport_info->txq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
-		vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
-		vport_info->num_tx_complq = 0;
-	}
-	if (adapter->rxq_model == 0) {
-		vport_info->rxq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
-		vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
-		vport_info->num_rx_bufq =
-			IDPF_DEFAULT_RXQ_NUM * IDPF_RX_BUFQ_PER_GRP;
-	} else {
-		vport_info->rxq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
-		vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
-		vport_info->num_rx_bufq = 0;
-	}
-
-	return 0;
-}
-
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -566,12 +530,12 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 		goto bail;
 
 	ret = rte_kvargs_process(kvlist, IDPF_TX_SINGLE_Q, &parse_bool,
-				 &adapter->txq_model);
+				 &adapter->base.txq_model);
 	if (ret != 0)
 		goto bail;
 
 	ret = rte_kvargs_process(kvlist, IDPF_RX_SINGLE_Q, &parse_bool,
-				 &adapter->rxq_model);
+				 &adapter->base.rxq_model);
 	if (ret != 0)
 		goto bail;
 
@@ -672,7 +636,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	struct idpf_vport_param *param = init_params;
 	struct idpf_adapter_ext *adapter = param->adapter;
 	/* for sending create vport virtchnl msg prepare */
-	struct virtchnl2_create_vport vport_req_info;
+	struct virtchnl2_create_vport create_vport_info;
 	int ret = 0;
 
 	dev->dev_ops = &idpf_eth_dev_ops;
@@ -680,14 +644,14 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
 
-	memset(&vport_req_info, 0, sizeof(vport_req_info));
-	ret = idpf_init_vport_req_info(dev, &vport_req_info);
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_create_vport_info_init(vport, &create_vport_info);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
 		goto err;
 	}
 
-	ret = idpf_vport_init(vport, &vport_req_info, dev->data);
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vports.");
 		goto err;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index d30807ca41..c2a7abb05c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -22,14 +22,9 @@
 
 #define IDPF_MAX_VPORT_NUM	8
 
-#define IDPF_DEFAULT_RXQ_NUM	16
-#define IDPF_DEFAULT_TXQ_NUM	16
-
 #define IDPF_INVALID_VPORT_IDX	0xffff
 #define IDPF_TXQ_PER_GRP	1
-#define IDPF_TX_COMPLQ_PER_GRP	1
 #define IDPF_RXQ_PER_GRP	1
-#define IDPF_RX_BUFQ_PER_GRP	2
 
 #define IDPF_DFLT_Q_VEC_NUM	1
 
@@ -78,9 +73,6 @@ struct idpf_adapter_ext {
 
 	char name[IDPF_ADAPTER_NAME_LEN];
 
-	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
-	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
-
 	struct idpf_vport **vports;
 	uint16_t max_vport_nb;
 
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v6 10/19] common/idpf: add vector flags in vport
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common modle beilei.xing
                         ` (8 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 09/19] common/idpf: add vport info initialization beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 11/19] common/idpf: add rxq and txq struct beilei.xing
                         ` (10 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the vector flags from the idpf_adapter_ext structure to
the idpf_vport structure.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h |  5 +++++
 drivers/net/idpf/idpf_ethdev.h           |  5 -----
 drivers/net/idpf/idpf_rxtx.c             | 22 ++++++++++------------
 3 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 0c73d40e53..61c47ba5f4 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -103,6 +103,11 @@ struct idpf_vport {
 	uint16_t devarg_id;
 
 	bool stopped;
+
+	bool rx_vec_allowed;
+	bool tx_vec_allowed;
+	bool rx_use_avx512;
+	bool tx_use_avx512;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index c2a7abb05c..bef6199622 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -81,11 +81,6 @@ struct idpf_adapter_ext {
 
 	uint16_t used_vecs_num;
 
-	bool rx_vec_allowed;
-	bool tx_vec_allowed;
-	bool rx_use_avx512;
-	bool tx_use_avx512;
-
 	/* For PTP */
 	uint64_t time_hw;
 };
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 0b10e4248b..068eb8000e 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -2221,25 +2221,24 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 	struct idpf_rx_queue *rxq;
 	int i;
 
 	if (idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
 	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
-		ad->rx_vec_allowed = true;
+		vport->rx_vec_allowed = true;
 
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
-				ad->rx_use_avx512 = true;
+				vport->rx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
 #endif /* CC_AVX512_SUPPORT */
 	} else {
-		ad->rx_vec_allowed = false;
+		vport->rx_vec_allowed = false;
 	}
 #endif /* RTE_ARCH_X86 */
 
@@ -2247,13 +2246,13 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	} else {
-		if (ad->rx_vec_allowed) {
+		if (vport->rx_vec_allowed) {
 			for (i = 0; i < dev->data->nb_tx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
 				(void)idpf_singleq_rx_vec_setup(rxq);
 			}
 #ifdef CC_AVX512_SUPPORT
-			if (ad->rx_use_avx512) {
+			if (vport->rx_use_avx512) {
 				dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
 				return;
 			}
@@ -2275,7 +2274,6 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 #ifdef CC_AVX512_SUPPORT
 	struct idpf_tx_queue *txq;
 	int i;
@@ -2283,18 +2281,18 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 
 	if (idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
 	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
-		ad->tx_vec_allowed = true;
+		vport->tx_vec_allowed = true;
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
-				ad->tx_use_avx512 = true;
+				vport->tx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
 #endif /* CC_AVX512_SUPPORT */
 	} else {
-		ad->tx_vec_allowed = false;
+		vport->tx_vec_allowed = false;
 	}
 #endif /* RTE_ARCH_X86 */
 
@@ -2303,9 +2301,9 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
 #ifdef RTE_ARCH_X86
-		if (ad->tx_vec_allowed) {
+		if (vport->tx_vec_allowed) {
 #ifdef CC_AVX512_SUPPORT
-			if (ad->tx_use_avx512) {
+			if (vport->tx_use_avx512) {
 				for (i = 0; i < dev->data->nb_tx_queues; i++) {
 					txq = dev->data->tx_queues[i];
 					if (txq == NULL)
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
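
Taken together, the four new booleans let each vport pick its data path independently: rx/tx_vec_allowed gates any vectorized burst function, and rx/tx_use_avx512 further narrows the choice to the AVX512 variants. A condensed sketch of the resulting Rx-side decision follows; it is simplified from idpf_set_rx_function() above and deliberately omits the device-level vector check and the CC_AVX512_SUPPORT build guard.

#include <stdbool.h>
#include <rte_vect.h>
#include <rte_cpuflags.h>

/* Simplified per-vport Rx path selection ('vport' is the idpf_vport
 * from the diff above).
 */
static void
idpf_pick_rx_path(struct idpf_vport *vport)
{
	uint16_t simd = rte_vect_get_max_simd_bitwidth();

	vport->rx_vec_allowed = simd >= RTE_VECT_SIMD_128;
	vport->rx_use_avx512 = vport->rx_vec_allowed &&
		simd >= RTE_VECT_SIMD_512 &&
		rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
		rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1;
}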

* [PATCH v6 11/19] common/idpf: add rxq and txq struct
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common model beilei.xing
                         ` (9 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 10/19] common/idpf: add vector flags in vport beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 12/19] common/idpf: add helper functions for queue setup and release beilei.xing
                         ` (9 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Add the idpf_rxq and idpf_txq structures to the common module.
Move the idpf_vc_config_rxq and idpf_vc_config_txq functions
to the common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   2 +
 drivers/common/idpf/idpf_common_rxtx.h     | 112 +++++++++++++
 drivers/common/idpf/idpf_common_virtchnl.c | 160 ++++++++++++++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  10 +-
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.h             |   2 -
 drivers/net/idpf/idpf_rxtx.h               |  97 +----------
 drivers/net/idpf/idpf_vchnl.c              | 184 ---------------------
 drivers/net/idpf/meson.build               |   1 -
 9 files changed, 284 insertions(+), 286 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.h
 delete mode 100644 drivers/net/idpf/idpf_vchnl.c

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 61c47ba5f4..4895f5f360 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -18,8 +18,10 @@
 
 #define IDPF_DEFAULT_RXQ_NUM	16
 #define IDPF_RX_BUFQ_PER_GRP	2
+#define IDPF_RXQ_PER_GRP	1
 #define IDPF_DEFAULT_TXQ_NUM	16
 #define IDPF_TX_COMPLQ_PER_GRP	1
+#define IDPF_TXQ_PER_GRP	1
 
 #define IDPF_MAX_PKT_TYPE	1024
 
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
new file mode 100644
index 0000000000..a9ed31c08a
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_RXTX_H_
+#define _IDPF_COMMON_RXTX_H_
+
+#include <rte_mbuf_ptype.h>
+#include <rte_mbuf_core.h>
+
+#include "idpf_common_device.h"
+
+struct idpf_rx_stats {
+	uint64_t mbuf_alloc_failed;
+};
+
+struct idpf_rx_queue {
+	struct idpf_adapter *adapter;   /* the adapter this queue belongs to */
+	struct rte_mempool *mp;         /* mbuf pool to populate Rx ring */
+	const struct rte_memzone *mz;   /* memzone for Rx ring */
+	volatile void *rx_ring;
+	struct rte_mbuf **sw_ring;      /* address of SW ring */
+	uint64_t rx_ring_phys_addr;     /* Rx ring DMA address */
+
+	uint16_t nb_rx_desc;            /* ring length */
+	uint16_t rx_tail;               /* current value of tail */
+	volatile uint8_t *qrx_tail;     /* register address of tail */
+	uint16_t rx_free_thresh;        /* max free RX desc to hold */
+	uint16_t nb_rx_hold;            /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
+	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
+
+	/* used for VPMD */
+	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
+	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
+	uint64_t mbuf_initializer; /* value to init mbufs */
+
+	uint16_t rx_nb_avail;
+	uint16_t rx_next_avail;
+
+	uint16_t port_id;       /* device port ID */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+	uint8_t rxdid;
+
+	bool q_set;             /* if rx queue has been configured */
+	bool q_started;         /* if rx queue has been started */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+	const struct idpf_rxq_ops *ops;
+
+	struct idpf_rx_stats rx_stats;
+
+	/* only valid for split queue mode */
+	uint8_t expected_gen_id;
+	struct idpf_rx_queue *bufq1;
+	struct idpf_rx_queue *bufq2;
+
+	uint64_t offloads;
+	uint32_t hw_register_set;
+};
+
+struct idpf_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct idpf_tx_queue {
+	const struct rte_memzone *mz;		/* memzone for Tx ring */
+	volatile struct idpf_flex_tx_desc *tx_ring;	/* Tx ring virtual address */
+	volatile union {
+		struct idpf_flex_tx_sched_desc *desc_ring;
+		struct idpf_splitq_tx_compl_desc *compl_ring;
+	};
+	uint64_t tx_ring_phys_addr;		/* Tx ring DMA address */
+	struct idpf_tx_entry *sw_ring;		/* address array of SW ring */
+
+	uint16_t nb_tx_desc;		/* ring length */
+	uint16_t tx_tail;		/* current value of tail */
+	volatile uint8_t *qtx_tail;	/* register address of tail */
+	/* number of used desc since RS bit set */
+	uint16_t nb_used;
+	uint16_t nb_free;
+	uint16_t last_desc_cleaned;	/* last desc that has been cleaned */
+	uint16_t free_thresh;
+	uint16_t rs_thresh;
+
+	uint16_t port_id;
+	uint16_t queue_id;
+	uint64_t offloads;
+	uint16_t next_dd;	/* next desc to check DD, for VPMD */
+	uint16_t next_rs;	/* next desc to set RS, for VPMD */
+
+	bool q_set;		/* if tx queue has been configured */
+	bool q_started;		/* if tx queue has been started */
+	bool tx_deferred_start; /* don't start this queue in dev start */
+	const struct idpf_txq_ops *ops;
+
+	/* only valid for split queue mode */
+	uint16_t sw_nb_desc;
+	uint16_t sw_tail;
+	void **txqs;
+	uint32_t tx_start_qid;
+	uint8_t expected_gen_id;
+	struct idpf_tx_queue *complq;
+};
+
+#endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index f659321bdb..299caa19f1 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -805,3 +805,163 @@ idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
 	rte_free(ptype_info);
 	return err;
 }
+
+#define IDPF_RX_BUF_STRIDE		64
+int
+idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
+	struct virtchnl2_rxq_info *rxq_info;
+	struct idpf_cmd_info args;
+	uint16_t num_qs;
+	int size, err, i;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+		num_qs = IDPF_RXQ_PER_GRP;
+	else
+		num_qs = IDPF_RXQ_PER_GRP + IDPF_RX_BUFQ_PER_GRP;
+
+	size = sizeof(*vc_rxqs) + (num_qs - 1) *
+		sizeof(struct virtchnl2_rxq_info);
+	vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
+	if (vc_rxqs == NULL) {
+		DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
+		err = -ENOMEM;
+		return err;
+	}
+	vc_rxqs->vport_id = vport->vport_id;
+	vc_rxqs->num_qinfo = num_qs;
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq_info = &vc_rxqs->qinfo[0];
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
+		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+		rxq_info->queue_id = rxq->queue_id;
+		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
+		rxq_info->max_pkt_size = vport->max_pkt_len;
+
+		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
+		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+		rxq_info->ring_len = rxq->nb_rx_desc;
+	} else {
+		/* Rx queue */
+		rxq_info = &vc_rxqs->qinfo[0];
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
+		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+		rxq_info->queue_id = rxq->queue_id;
+		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
+		rxq_info->max_pkt_size = vport->max_pkt_len;
+
+		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+		rxq_info->ring_len = rxq->nb_rx_desc;
+		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
+		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
+		rxq_info->rx_buffer_low_watermark = 64;
+
+		/* Buffer queue */
+		for (i = 1; i <= IDPF_RX_BUFQ_PER_GRP; i++) {
+			struct idpf_rx_queue *bufq = i == 1 ? rxq->bufq1 : rxq->bufq2;
+			rxq_info = &vc_rxqs->qinfo[i];
+			rxq_info->dma_ring_addr = bufq->rx_ring_phys_addr;
+			rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+			rxq_info->queue_id = bufq->queue_id;
+			rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+			rxq_info->data_buffer_size = bufq->rx_buf_len;
+			rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+			rxq_info->ring_len = bufq->nb_rx_desc;
+
+			rxq_info->buffer_notif_stride = IDPF_RX_BUF_STRIDE;
+			rxq_info->rx_buffer_low_watermark = 64;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
+	args.in_args = (uint8_t *)vc_rxqs;
+	args.in_args_size = size;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	rte_free(vc_rxqs);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
+
+	return err;
+}
+
+int
+idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
+	struct virtchnl2_txq_info *txq_info;
+	struct idpf_cmd_info args;
+	uint16_t num_qs;
+	int size, err;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+		num_qs = IDPF_TXQ_PER_GRP;
+	else
+		num_qs = IDPF_TXQ_PER_GRP + IDPF_TX_COMPLQ_PER_GRP;
+
+	size = sizeof(*vc_txqs) + (num_qs - 1) *
+		sizeof(struct virtchnl2_txq_info);
+	vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
+	if (vc_txqs == NULL) {
+		DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
+		err = -ENOMEM;
+		return err;
+	}
+	vc_txqs->vport_id = vport->vport_id;
+	vc_txqs->num_qinfo = num_qs;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		txq_info = &vc_txqs->qinfo[0];
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
+		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+		txq_info->queue_id = txq->queue_id;
+		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
+		txq_info->ring_len = txq->nb_tx_desc;
+	} else {
+		/* txq info */
+		txq_info = &vc_txqs->qinfo[0];
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
+		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+		txq_info->queue_id = txq->queue_id;
+		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+		txq_info->ring_len = txq->nb_tx_desc;
+		txq_info->tx_compl_queue_id = txq->complq->queue_id;
+		txq_info->relative_queue_id = txq_info->queue_id;
+
+		/* tx completion queue info */
+		txq_info = &vc_txqs->qinfo[1];
+		txq_info->dma_ring_addr = txq->complq->tx_ring_phys_addr;
+		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+		txq_info->queue_id = txq->complq->queue_id;
+		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+		txq_info->ring_len = txq->complq->nb_tx_desc;
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
+	args.in_args = (uint8_t *)vc_txqs;
+	args.in_args_size = size;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	rte_free(vc_txqs);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
+
+	return err;
+}
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 155527f0b6..07755d4923 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -6,6 +6,7 @@
 #define _IDPF_COMMON_VIRTCHNL_H_
 
 #include <idpf_common_device.h>
+#include <idpf_common_rxtx.h>
 
 __rte_internal
 int idpf_vc_check_api_version(struct idpf_adapter *adapter);
@@ -26,6 +27,9 @@ __rte_internal
 int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
 				 uint16_t nb_rxq, bool map);
 __rte_internal
+int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
+			struct idpf_cmd_info *args);
+__rte_internal
 int idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
 			 bool rx, bool on);
 __rte_internal
@@ -42,7 +46,7 @@ __rte_internal
 int idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
 			 uint16_t buf_len, uint8_t *buf);
 __rte_internal
-int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
-			struct idpf_cmd_info *args);
-
+int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
+__rte_internal
+int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 83338640c4..69295270df 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -11,6 +11,8 @@ INTERNAL {
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
+	idpf_vc_config_rxq;
+	idpf_vc_config_txq;
 	idpf_vc_create_vport;
 	idpf_vc_dealloc_vectors;
 	idpf_vc_destroy_vport;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index bef6199622..9b40aa4e56 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -23,8 +23,6 @@
 #define IDPF_MAX_VPORT_NUM	8
 
 #define IDPF_INVALID_VPORT_IDX	0xffff
-#define IDPF_TXQ_PER_GRP	1
-#define IDPF_RXQ_PER_GRP	1
 
 #define IDPF_DFLT_Q_VEC_NUM	1
 
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index cac6040943..b8325f9b96 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -5,6 +5,7 @@
 #ifndef _IDPF_RXTX_H_
 #define _IDPF_RXTX_H_
 
+#include <idpf_common_rxtx.h>
 #include "idpf_ethdev.h"
 
 /* MTS */
@@ -84,103 +85,10 @@
 
 extern uint64_t idpf_timestamp_dynflag;
 
-struct idpf_rx_queue {
-	struct idpf_adapter *adapter;   /* the adapter this queue belongs to */
-	struct rte_mempool *mp;         /* mbuf pool to populate Rx ring */
-	const struct rte_memzone *mz;   /* memzone for Rx ring */
-	volatile void *rx_ring;
-	struct rte_mbuf **sw_ring;      /* address of SW ring */
-	uint64_t rx_ring_phys_addr;     /* Rx ring DMA address */
-
-	uint16_t nb_rx_desc;            /* ring length */
-	uint16_t rx_tail;               /* current value of tail */
-	volatile uint8_t *qrx_tail;     /* register address of tail */
-	uint16_t rx_free_thresh;        /* max free RX desc to hold */
-	uint16_t nb_rx_hold;            /* number of held free RX desc */
-	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
-	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
-	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
-
-	/* used for VPMD */
-	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
-	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
-	uint64_t mbuf_initializer; /* value to init mbufs */
-
-	uint16_t rx_nb_avail;
-	uint16_t rx_next_avail;
-
-	uint16_t port_id;       /* device port ID */
-	uint16_t queue_id;      /* Rx queue index */
-	uint16_t rx_buf_len;    /* The packet buffer size */
-	uint16_t rx_hdr_len;    /* The header buffer size */
-	uint16_t max_pkt_len;   /* Maximum packet length */
-	uint8_t rxdid;
-
-	bool q_set;             /* if rx queue has been configured */
-	bool q_started;         /* if rx queue has been started */
-	bool rx_deferred_start; /* don't start this queue in dev start */
-	const struct idpf_rxq_ops *ops;
-
-	/* only valid for split queue mode */
-	uint8_t expected_gen_id;
-	struct idpf_rx_queue *bufq1;
-	struct idpf_rx_queue *bufq2;
-
-	uint64_t offloads;
-	uint32_t hw_register_set;
-};
-
-struct idpf_tx_entry {
-	struct rte_mbuf *mbuf;
-	uint16_t next_id;
-	uint16_t last_id;
-};
-
 struct idpf_tx_vec_entry {
 	struct rte_mbuf *mbuf;
 };
 
-/* Structure associated with each TX queue. */
-struct idpf_tx_queue {
-	const struct rte_memzone *mz;		/* memzone for Tx ring */
-	volatile struct idpf_flex_tx_desc *tx_ring;	/* Tx ring virtual address */
-	volatile union {
-		struct idpf_flex_tx_sched_desc *desc_ring;
-		struct idpf_splitq_tx_compl_desc *compl_ring;
-	};
-	uint64_t tx_ring_phys_addr;		/* Tx ring DMA address */
-	struct idpf_tx_entry *sw_ring;		/* address array of SW ring */
-
-	uint16_t nb_tx_desc;		/* ring length */
-	uint16_t tx_tail;		/* current value of tail */
-	volatile uint8_t *qtx_tail;	/* register address of tail */
-	/* number of used desc since RS bit set */
-	uint16_t nb_used;
-	uint16_t nb_free;
-	uint16_t last_desc_cleaned;	/* last desc have been cleaned*/
-	uint16_t free_thresh;
-	uint16_t rs_thresh;
-
-	uint16_t port_id;
-	uint16_t queue_id;
-	uint64_t offloads;
-	uint16_t next_dd;	/* next to set RS, for VPMD */
-	uint16_t next_rs;	/* next to check DD,  for VPMD */
-
-	bool q_set;		/* if tx queue has been configured */
-	bool q_started;		/* if tx queue has been started */
-	bool tx_deferred_start; /* don't start this queue in dev start */
-	const struct idpf_txq_ops *ops;
-
-	/* only valid for split queue mode */
-	uint16_t sw_nb_desc;
-	uint16_t sw_tail;
-	void **txqs;
-	uint32_t tx_start_qid;
-	uint8_t expected_gen_id;
-	struct idpf_tx_queue *complq;
-};
-
 /* Offload features */
 union idpf_tx_offload {
 	uint64_t data;
@@ -239,9 +147,6 @@ void idpf_stop_queues(struct rte_eth_dev *dev);
 void idpf_set_rx_function(struct rte_eth_dev *dev);
 void idpf_set_tx_function(struct rte_eth_dev *dev);
 
-int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
-int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
-
 #define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
 /* Helper function to convert a 32b nanoseconds timestamp to 64b. */
 static inline uint64_t
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
deleted file mode 100644
index 45d05ed108..0000000000
--- a/drivers/net/idpf/idpf_vchnl.c
+++ /dev/null
@@ -1,184 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2022 Intel Corporation
- */
-
-#include <stdio.h>
-#include <errno.h>
-#include <stdint.h>
-#include <string.h>
-#include <unistd.h>
-#include <stdarg.h>
-#include <inttypes.h>
-#include <rte_byteorder.h>
-#include <rte_common.h>
-
-#include <rte_debug.h>
-#include <rte_atomic.h>
-#include <rte_eal.h>
-#include <rte_ether.h>
-#include <ethdev_driver.h>
-#include <ethdev_pci.h>
-#include <rte_dev.h>
-
-#include "idpf_ethdev.h"
-#include "idpf_rxtx.h"
-
-#define IDPF_RX_BUF_STRIDE		64
-int
-idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
-	struct virtchnl2_rxq_info *rxq_info;
-	struct idpf_cmd_info args;
-	uint16_t num_qs;
-	int size, err, i;
-
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		num_qs = IDPF_RXQ_PER_GRP;
-	else
-		num_qs = IDPF_RXQ_PER_GRP + IDPF_RX_BUFQ_PER_GRP;
-
-	size = sizeof(*vc_rxqs) + (num_qs - 1) *
-		sizeof(struct virtchnl2_rxq_info);
-	vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
-	if (vc_rxqs == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
-		err = -ENOMEM;
-		return err;
-	}
-	vc_rxqs->vport_id = vport->vport_id;
-	vc_rxqs->num_qinfo = num_qs;
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
-		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq->queue_id;
-		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-		rxq_info->data_buffer_size = rxq->rx_buf_len;
-		rxq_info->max_pkt_size = vport->max_pkt_len;
-
-		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
-		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-		rxq_info->ring_len = rxq->nb_rx_desc;
-	}  else {
-		/* Rx queue */
-		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
-		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq->queue_id;
-		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		rxq_info->data_buffer_size = rxq->rx_buf_len;
-		rxq_info->max_pkt_size = vport->max_pkt_len;
-
-		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-		rxq_info->ring_len = rxq->nb_rx_desc;
-		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
-		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
-		rxq_info->rx_buffer_low_watermark = 64;
-
-		/* Buffer queue */
-		for (i = 1; i <= IDPF_RX_BUFQ_PER_GRP; i++) {
-			struct idpf_rx_queue *bufq = i == 1 ? rxq->bufq1 : rxq->bufq2;
-			rxq_info = &vc_rxqs->qinfo[i];
-			rxq_info->dma_ring_addr = bufq->rx_ring_phys_addr;
-			rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-			rxq_info->queue_id = bufq->queue_id;
-			rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-			rxq_info->data_buffer_size = bufq->rx_buf_len;
-			rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-			rxq_info->ring_len = bufq->nb_rx_desc;
-
-			rxq_info->buffer_notif_stride = IDPF_RX_BUF_STRIDE;
-			rxq_info->rx_buffer_low_watermark = 64;
-		}
-	}
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
-	args.in_args = (uint8_t *)vc_rxqs;
-	args.in_args_size = size;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	rte_free(vc_rxqs);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
-
-	return err;
-}
-
-int
-idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
-	struct virtchnl2_txq_info *txq_info;
-	struct idpf_cmd_info args;
-	uint16_t num_qs;
-	int size, err;
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		num_qs = IDPF_TXQ_PER_GRP;
-	else
-		num_qs = IDPF_TXQ_PER_GRP + IDPF_TX_COMPLQ_PER_GRP;
-
-	size = sizeof(*vc_txqs) + (num_qs - 1) *
-		sizeof(struct virtchnl2_txq_info);
-	vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
-	if (vc_txqs == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
-		err = -ENOMEM;
-		return err;
-	}
-	vc_txqs->vport_id = vport->vport_id;
-	vc_txqs->num_qinfo = num_qs;
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
-		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq->queue_id;
-		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
-		txq_info->ring_len = txq->nb_tx_desc;
-	} else {
-		/* txq info */
-		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
-		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq->queue_id;
-		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq->nb_tx_desc;
-		txq_info->tx_compl_queue_id = txq->complq->queue_id;
-		txq_info->relative_queue_id = txq_info->queue_id;
-
-		/* tx completion queue info */
-		txq_info = &vc_txqs->qinfo[1];
-		txq_info->dma_ring_addr = txq->complq->tx_ring_phys_addr;
-		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		txq_info->queue_id = txq->complq->queue_id;
-		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq->complq->nb_tx_desc;
-	}
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
-	args.in_args = (uint8_t *)vc_txqs;
-	args.in_args_size = size;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	rte_free(vc_txqs);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
-
-	return err;
-}
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
index 650dade0b9..378925166f 100644
--- a/drivers/net/idpf/meson.build
+++ b/drivers/net/idpf/meson.build
@@ -18,7 +18,6 @@ deps += ['common_idpf']
 sources = files(
         'idpf_ethdev.c',
         'idpf_rxtx.c',
-        'idpf_vchnl.c',
 )
 
 if arch_subdir == 'x86'
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
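
One detail worth calling out in idpf_vc_config_rxq() and idpf_vc_config_txq() above: the virtchnl2 config structures end in a one-element qinfo[] array, so the message buffer is sized as sizeof(*msg) + (num_qs - 1) * sizeof(qinfo), since the first array element is already counted by sizeof(*msg). A generic illustration of that sizing idiom, using hypothetical stand-in types rather than the real virtchnl2 definitions:

#include <stdlib.h>

struct qinfo {			/* stand-in for virtchnl2_rxq_info */
	int queue_id;
};

struct cfg_msg {		/* stand-in for virtchnl2_config_rx_queues */
	int num_qinfo;
	struct qinfo qinfo[1];	/* variable-length tail, C89 style */
};

static struct cfg_msg *
cfg_msg_alloc(int num_qs)
{
	/* One qinfo already lives inside sizeof(struct cfg_msg),
	 * hence the (num_qs - 1).
	 */
	size_t size = sizeof(struct cfg_msg) +
		      (size_t)(num_qs - 1) * sizeof(struct qinfo);
	struct cfg_msg *msg = calloc(1, size);

	if (msg != NULL)
		msg->num_qinfo = num_qs;
	return msg;
}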

* [PATCH v6 12/19] common/idpf: add helper functions for queue setup and release
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common model beilei.xing
                         ` (10 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 11/19] common/idpf: add rxq and txq struct beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 13/19] common/idpf: add Rx and Tx data path beilei.xing
                         ` (8 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Refine rxq setup and txq setup.
Move some helper functions for queue setup and queue release
to the common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c  |  414 +++++++++
 drivers/common/idpf/idpf_common_rxtx.h  |   57 ++
 drivers/common/idpf/meson.build         |    1 +
 drivers/common/idpf/version.map         |   15 +
 drivers/net/idpf/idpf_rxtx.c            | 1051 ++++++-----------------
 drivers/net/idpf/idpf_rxtx.h            |    9 -
 drivers/net/idpf/idpf_rxtx_vec_avx512.c |    2 +-
 7 files changed, 773 insertions(+), 776 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.c

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
new file mode 100644
index 0000000000..eeeeedca88
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -0,0 +1,414 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_mbuf_dyn.h>
+#include "idpf_common_rxtx.h"
+
+int
+idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+	/* The following constraints must be satisfied:
+	 * thresh < rxq->nb_rx_desc
+	 */
+	if (thresh >= nb_desc) {
+		DRV_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+			thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int
+idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+		     uint16_t tx_free_thresh)
+{
+	/* TX descriptors will have their RS bit set after tx_rs_thresh
+	 * descriptors have been used. The TX descriptor ring will be cleaned
+	 * after tx_free_thresh descriptors are used or if the number of
+	 * descriptors required to transmit a packet is greater than the
+	 * number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+			"number of TX descriptors (%u) minus 2",
+			tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		DRV_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+			"number of TX descriptors (%u) minus 3.",
+			tx_free_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+			"equal to tx_free_thresh (%u).",
+			tx_rs_thresh, tx_free_thresh);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		DRV_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+			"number of TX descriptors (%u).",
+			tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+void
+idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	uint16_t i;
+
+	if (rxq->sw_ring == NULL)
+		return;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i] != NULL) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+void
+idpf_release_txq_mbufs(struct idpf_tx_queue *txq)
+{
+	uint16_t nb_desc, i;
+
+	if (txq == NULL || txq->sw_ring == NULL) {
+		DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+		return;
+	}
+
+	if (txq->sw_nb_desc != 0) {
+		/* For split queue model, descriptor ring */
+		nb_desc = txq->sw_nb_desc;
+	} else {
+		/* For single queue model */
+		nb_desc = txq->nb_tx_desc;
+	}
+	for (i = 0; i < nb_desc; i++) {
+		if (txq->sw_ring[i].mbuf != NULL) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+void
+idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (rxq == NULL)
+		return;
+
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
+	     i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	rxq->rx_tail = 0;
+	rxq->expected_gen_id = 1;
+}
+
+void
+idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (rxq == NULL)
+		return;
+
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
+	     i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	/* The next descriptor id which can be received. */
+	rxq->rx_next_avail = 0;
+
+	/* The next descriptor id which can be refilled. */
+	rxq->rx_tail = 0;
+	/* The number of descriptors which can be refilled. */
+	rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
+
+	rxq->bufq1 = NULL;
+	rxq->bufq2 = NULL;
+}
+
+void
+idpf_reset_split_rx_queue(struct idpf_rx_queue *rxq)
+{
+	idpf_reset_split_rx_descq(rxq);
+	idpf_reset_split_rx_bufq(rxq->bufq1);
+	idpf_reset_split_rx_bufq(rxq->bufq2);
+}
+
+void
+idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (rxq == NULL)
+		return;
+
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
+	     i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+
+	rte_pktmbuf_free(rxq->pkt_first_seg);
+
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+	rxq->rxrearm_start = 0;
+	rxq->rxrearm_nb = 0;
+}
+
+void
+idpf_reset_split_tx_descq(struct idpf_tx_queue *txq)
+{
+	struct idpf_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (txq == NULL) {
+		DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->desc_ring)[i] = 0;
+
+	txe = txq->sw_ring;
+	prev = (uint16_t)(txq->sw_nb_desc - 1);
+	for (i = 0; i < txq->sw_nb_desc; i++) {
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	/* Use this as next to clean for split desc queue */
+	txq->last_desc_cleaned = 0;
+	txq->sw_tail = 0;
+	txq->nb_free = txq->nb_tx_desc - 1;
+}
+
+void
+idpf_reset_split_tx_complq(struct idpf_tx_queue *cq)
+{
+	uint32_t i, size;
+
+	if (cq == NULL) {
+		DRV_LOG(DEBUG, "Pointer to complq is NULL");
+		return;
+	}
+
+	size = sizeof(struct idpf_splitq_tx_compl_desc) * cq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)cq->compl_ring)[i] = 0;
+
+	cq->tx_tail = 0;
+	cq->expected_gen_id = 1;
+}
+
+void
+idpf_reset_single_tx_queue(struct idpf_tx_queue *txq)
+{
+	struct idpf_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (txq == NULL) {
+		DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct idpf_flex_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].qw1.cmd_dtype =
+			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_free = txq->nb_tx_desc - 1;
+
+	txq->next_dd = txq->rs_thresh - 1;
+	txq->next_rs = txq->rs_thresh - 1;
+}
+
+void
+idpf_rx_queue_release(void *rxq)
+{
+	struct idpf_rx_queue *q = rxq;
+
+	if (q == NULL)
+		return;
+
+	/* Split queue */
+	if (q->bufq1 != NULL && q->bufq2 != NULL) {
+		q->bufq1->ops->release_mbufs(q->bufq1);
+		rte_free(q->bufq1->sw_ring);
+		rte_memzone_free(q->bufq1->mz);
+		rte_free(q->bufq1);
+		q->bufq2->ops->release_mbufs(q->bufq2);
+		rte_free(q->bufq2->sw_ring);
+		rte_memzone_free(q->bufq2->mz);
+		rte_free(q->bufq2);
+		rte_memzone_free(q->mz);
+		rte_free(q);
+		return;
+	}
+
+	/* Single queue */
+	q->ops->release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+idpf_tx_queue_release(void *txq)
+{
+	struct idpf_tx_queue *q = txq;
+
+	if (q == NULL)
+		return;
+
+	if (q->complq) {
+		rte_memzone_free(q->complq->mz);
+		rte_free(q->complq);
+	}
+
+	q->ops->release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+int
+idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(mbuf == NULL)) {
+			DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &((volatile struct virtchnl2_singleq_rx_buf_desc *)(rxq->rx_ring))[i];
+		rxd->pkt_addr = dma_addr;
+		rxd->hdr_addr = 0;
+		rxd->rsvd1 = 0;
+		rxd->rsvd2 = 0;
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	return 0;
+}
+
+int
+idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(mbuf == NULL)) {
+			DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &((volatile struct virtchnl2_splitq_rx_buf_desc *)(rxq->rx_ring))[i];
+		rxd->qword0.buf_id = i;
+		rxd->qword0.rsvd0 = 0;
+		rxd->qword0.rsvd1 = 0;
+		rxd->pkt_addr = dma_addr;
+		rxd->hdr_addr = 0;
+		rxd->rsvd2 = 0;
+
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	rxq->nb_rx_hold = 0;
+	rxq->rx_tail = rxq->nb_rx_desc - 1;
+
+	return 0;
+}
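
To make the idpf_check_tx_thresh() constraints above concrete: with a 512-descriptor ring, tx_rs_thresh must stay below 510, tx_free_thresh below 509, tx_rs_thresh must not exceed tx_free_thresh, and tx_rs_thresh must divide 512 evenly. A self-contained check of those four rules, with illustrative values only (32 and 64 both pass):

#include <assert.h>
#include <stdint.h>

int
main(void)
{
	uint16_t nb_desc = 512, tx_rs_thresh = 32, tx_free_thresh = 64;

	assert(tx_rs_thresh < nb_desc - 2);	/* ring size minus 2 */
	assert(tx_free_thresh < nb_desc - 3);	/* ring size minus 3 */
	assert(tx_rs_thresh <= tx_free_thresh);	/* RS <= free thresh */
	assert(nb_desc % tx_rs_thresh == 0);	/* divides ring size */
	return 0;
}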
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index a9ed31c08a..c5bb7d48af 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -5,11 +5,28 @@
 #ifndef _IDPF_COMMON_RXTX_H_
 #define _IDPF_COMMON_RXTX_H_
 
+#include <rte_mbuf.h>
 #include <rte_mbuf_ptype.h>
 #include <rte_mbuf_core.h>
 
 #include "idpf_common_device.h"
 
+#define IDPF_RX_MAX_BURST		32
+
+#define IDPF_RX_OFFLOAD_IPV4_CKSUM		RTE_BIT64(1)
+#define IDPF_RX_OFFLOAD_UDP_CKSUM		RTE_BIT64(2)
+#define IDPF_RX_OFFLOAD_TCP_CKSUM		RTE_BIT64(3)
+#define IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM	RTE_BIT64(6)
+#define IDPF_RX_OFFLOAD_TIMESTAMP		RTE_BIT64(14)
+
+#define IDPF_TX_OFFLOAD_IPV4_CKSUM       RTE_BIT64(1)
+#define IDPF_TX_OFFLOAD_UDP_CKSUM        RTE_BIT64(2)
+#define IDPF_TX_OFFLOAD_TCP_CKSUM        RTE_BIT64(3)
+#define IDPF_TX_OFFLOAD_SCTP_CKSUM       RTE_BIT64(4)
+#define IDPF_TX_OFFLOAD_TCP_TSO          RTE_BIT64(5)
+#define IDPF_TX_OFFLOAD_MULTI_SEGS       RTE_BIT64(15)
+#define IDPF_TX_OFFLOAD_MBUF_FAST_FREE   RTE_BIT64(16)
+
 struct idpf_rx_stats {
 	uint64_t mbuf_alloc_failed;
 };
@@ -109,4 +126,44 @@ struct idpf_tx_queue {
 	struct idpf_tx_queue *complq;
 };
 
+struct idpf_rxq_ops {
+	void (*release_mbufs)(struct idpf_rx_queue *rxq);
+};
+
+struct idpf_txq_ops {
+	void (*release_mbufs)(struct idpf_tx_queue *txq);
+};
+
+__rte_internal
+int idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh);
+__rte_internal
+int idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+			 uint16_t tx_free_thresh);
+__rte_internal
+void idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_release_txq_mbufs(struct idpf_tx_queue *txq);
+__rte_internal
+void idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_split_rx_queue(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_split_tx_descq(struct idpf_tx_queue *txq);
+__rte_internal
+void idpf_reset_split_tx_complq(struct idpf_tx_queue *cq);
+__rte_internal
+void idpf_reset_single_tx_queue(struct idpf_tx_queue *txq);
+__rte_internal
+void idpf_rx_queue_release(void *rxq);
+__rte_internal
+void idpf_tx_queue_release(void *txq);
+__rte_internal
+int idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq);
+__rte_internal
+int idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq);
+
 #endif /* _IDPF_COMMON_RXTX_H_ */
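
The idpf_rxq_ops and idpf_txq_ops tables added above are the seam between the common module and its consumers: common release code only ever calls q->ops->release_mbufs(q), and each PMD decides at queue-setup time which callback sits behind that pointer (the net/idpf diff further below wires in the common idpf_release_rxq_mbufs helper). A minimal sketch of the indirection, assuming the declarations from idpf_common_rxtx.h above:

/* PMD side: install the common release helper once at queue setup. */
static const struct idpf_rxq_ops sketch_rxq_ops = {
	.release_mbufs = idpf_release_rxq_mbufs,
};

static void
sketch_rxq_setup(struct idpf_rx_queue *rxq)
{
	rxq->ops = &sketch_rxq_ops;
}

/* Common side: teardown sees only the interface, never the PMD. */
static void
sketch_rxq_teardown(struct idpf_rx_queue *rxq)
{
	rxq->ops->release_mbufs(rxq);
}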
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index c6cc7a196b..5ee071fdb2 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -5,6 +5,7 @@ deps += ['mbuf']
 
 sources = files(
     'idpf_common_device.c',
+    'idpf_common_rxtx.c',
     'idpf_common_virtchnl.c',
 )
 
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 69295270df..aa6ebd7c6c 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,11 +3,26 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+	idpf_alloc_single_rxq_mbufs;
+	idpf_alloc_split_rxq_mbufs;
+	idpf_check_rx_thresh;
+	idpf_check_tx_thresh;
 	idpf_config_irq_map;
 	idpf_config_irq_unmap;
 	idpf_config_rss;
 	idpf_create_vport_info_init;
 	idpf_execute_vc_cmd;
+	idpf_release_rxq_mbufs;
+	idpf_release_txq_mbufs;
+	idpf_reset_single_rx_queue;
+	idpf_reset_single_tx_queue;
+	idpf_reset_split_rx_bufq;
+	idpf_reset_split_rx_descq;
+	idpf_reset_split_rx_queue;
+	idpf_reset_split_tx_complq;
+	idpf_reset_split_tx_descq;
+	idpf_rx_queue_release;
+	idpf_tx_queue_release;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
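
In the net/idpf diff that follows, idpf_rx_split_bufq_setup() trades ad-hoc "free and return" error handling for a goto unwind ladder (err_bufq_id, err_sw_ring_alloc, err_mz_reserve, err_bufq1_alloc), where each label releases exactly what was acquired before the failure point. The generic shape of that idiom, with hypothetical resources in place of the real ring and sw_ring allocations:

#include <errno.h>
#include <stdlib.h>

static int
setup_two_resources(void **out_a, void **out_b)
{
	void *a, *b;
	int ret;

	a = malloc(64);
	if (a == NULL) {
		ret = -ENOMEM;
		goto err_a_alloc;
	}

	b = malloc(64);
	if (b == NULL) {
		ret = -ENOMEM;
		goto err_b_alloc;
	}

	*out_a = a;
	*out_b = b;
	return 0;

err_b_alloc:
	free(a);	/* undo only what had succeeded */
err_a_alloc:
	return ret;
}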
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 068eb8000e..fb1814d893 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -12,358 +12,141 @@
 
 static int idpf_timestamp_dynfield_offset = -1;
 
-static int
-check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
-{
-	/* The following constraints must be satisfied:
-	 *   thresh < rxq->nb_rx_desc
-	 */
-	if (thresh >= nb_desc) {
-		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
-			     thresh, nb_desc);
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int
-check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
-		uint16_t tx_free_thresh)
+static uint64_t
+idpf_rx_offload_convert(uint64_t offload)
 {
-	/* TX descriptors will have their RS bit set after tx_rs_thresh
-	 * descriptors have been used. The TX descriptor ring will be cleaned
-	 * after tx_free_thresh descriptors are used or if the number of
-	 * descriptors required to transmit a packet is greater than the
-	 * number of free TX descriptors.
-	 *
-	 * The following constraints must be satisfied:
-	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
-	 *  - tx_free_thresh must be less than the size of the ring minus 3.
-	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
-	 *  - tx_rs_thresh must be a divisor of the ring size.
-	 *
-	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
-	 * race condition, hence the maximum threshold constraints. When set
-	 * to zero use default values.
-	 */
-	if (tx_rs_thresh >= (nb_desc - 2)) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
-			     "number of TX descriptors (%u) minus 2",
-			     tx_rs_thresh, nb_desc);
-		return -EINVAL;
-	}
-	if (tx_free_thresh >= (nb_desc - 3)) {
-		PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
-			     "number of TX descriptors (%u) minus 3.",
-			     tx_free_thresh, nb_desc);
-		return -EINVAL;
-	}
-	if (tx_rs_thresh > tx_free_thresh) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
-			     "equal to tx_free_thresh (%u).",
-			     tx_rs_thresh, tx_free_thresh);
-		return -EINVAL;
-	}
-	if ((nb_desc % tx_rs_thresh) != 0) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
-			     "number of TX descriptors (%u).",
-			     tx_rs_thresh, nb_desc);
-		return -EINVAL;
-	}
-
-	return 0;
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
 }
 
-static void
-release_rxq_mbufs(struct idpf_rx_queue *rxq)
+static uint64_t
+idpf_tx_offload_convert(uint64_t offload)
 {
-	uint16_t i;
-
-	if (rxq->sw_ring == NULL)
-		return;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		if (rxq->sw_ring[i] != NULL) {
-			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-			rxq->sw_ring[i] = NULL;
-		}
-	}
-}
-
-static void
-release_txq_mbufs(struct idpf_tx_queue *txq)
-{
-	uint16_t nb_desc, i;
-
-	if (txq == NULL || txq->sw_ring == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
-		return;
-	}
-
-	if (txq->sw_nb_desc != 0) {
-		/* For split queue model, descriptor ring */
-		nb_desc = txq->sw_nb_desc;
-	} else {
-		/* For single queue model */
-		nb_desc = txq->nb_tx_desc;
-	}
-	for (i = 0; i < nb_desc; i++) {
-		if (txq->sw_ring[i].mbuf != NULL) {
-			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
-			txq->sw_ring[i].mbuf = NULL;
-		}
-	}
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
 }
 
 static const struct idpf_rxq_ops def_rxq_ops = {
-	.release_mbufs = release_rxq_mbufs,
+	.release_mbufs = idpf_release_rxq_mbufs,
 };
 
 static const struct idpf_txq_ops def_txq_ops = {
-	.release_mbufs = release_txq_mbufs,
+	.release_mbufs = idpf_release_txq_mbufs,
 };
 
-static void
-reset_split_rx_descq(struct idpf_rx_queue *rxq)
-{
-	uint16_t len;
-	uint32_t i;
-
-	if (rxq == NULL)
-		return;
-
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-
-	for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
-	     i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-	rxq->rx_tail = 0;
-	rxq->expected_gen_id = 1;
-}
-
-static void
-reset_split_rx_bufq(struct idpf_rx_queue *rxq)
-{
-	uint16_t len;
-	uint32_t i;
-
-	if (rxq == NULL)
-		return;
-
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-
-	for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
-	     i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
-
-	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
-		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
-
-	/* The next descriptor id which can be received. */
-	rxq->rx_next_avail = 0;
-
-	/* The next descriptor id which can be refilled. */
-	rxq->rx_tail = 0;
-	/* The number of descriptors which can be refilled. */
-	rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
-
-	rxq->bufq1 = NULL;
-	rxq->bufq2 = NULL;
-}
-
-static void
-idpf_rx_queue_release(void *rxq)
-{
-	struct idpf_rx_queue *q = rxq;
-
-	if (q == NULL)
-		return;
-
-	/* Split queue */
-	if (q->bufq1 != NULL && q->bufq2 != NULL) {
-		q->bufq1->ops->release_mbufs(q->bufq1);
-		rte_free(q->bufq1->sw_ring);
-		rte_memzone_free(q->bufq1->mz);
-		rte_free(q->bufq1);
-		q->bufq2->ops->release_mbufs(q->bufq2);
-		rte_free(q->bufq2->sw_ring);
-		rte_memzone_free(q->bufq2->mz);
-		rte_free(q->bufq2);
-		rte_memzone_free(q->mz);
-		rte_free(q);
-		return;
-	}
-
-	/* Single queue */
-	q->ops->release_mbufs(q);
-	rte_free(q->sw_ring);
-	rte_memzone_free(q->mz);
-	rte_free(q);
-}
-
-static void
-idpf_tx_queue_release(void *txq)
-{
-	struct idpf_tx_queue *q = txq;
-
-	if (q == NULL)
-		return;
-
-	if (q->complq) {
-		rte_memzone_free(q->complq->mz);
-		rte_free(q->complq);
-	}
-
-	q->ops->release_mbufs(q);
-	rte_free(q->sw_ring);
-	rte_memzone_free(q->mz);
-	rte_free(q);
-}
-
-static inline void
-reset_split_rx_queue(struct idpf_rx_queue *rxq)
+static const struct rte_memzone *
+idpf_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
 {
-	reset_split_rx_descq(rxq);
-	reset_split_rx_bufq(rxq->bufq1);
-	reset_split_rx_bufq(rxq->bufq2);
-}
-
-static void
-reset_single_rx_queue(struct idpf_rx_queue *rxq)
-{
-	uint16_t len;
-	uint32_t i;
-
-	if (rxq == NULL)
-		return;
-
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-
-	for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
-	     i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
-
-	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
-		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
-
-	rxq->rx_tail = 0;
-	rxq->nb_rx_hold = 0;
-
-	rte_pktmbuf_free(rxq->pkt_first_seg);
-
-	rxq->pkt_first_seg = NULL;
-	rxq->pkt_last_seg = NULL;
-	rxq->rxrearm_start = 0;
-	rxq->rxrearm_nb = 0;
-}
-
-static void
-reset_split_tx_descq(struct idpf_tx_queue *txq)
-{
-	struct idpf_tx_entry *txe;
-	uint32_t i, size;
-	uint16_t prev;
-
-	if (txq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
-		return;
-	}
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
 
-	size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)txq->desc_ring)[i] = 0;
-
-	txe = txq->sw_ring;
-	prev = (uint16_t)(txq->sw_nb_desc - 1);
-	for (i = 0; i < txq->sw_nb_desc; i++) {
-		txe[i].mbuf = NULL;
-		txe[i].last_id = i;
-		txe[prev].next_id = i;
-		prev = i;
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      IDPF_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Tx ring", sizeof("idpf Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      IDPF_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Rx ring", sizeof("idpf Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Tx compl ring", sizeof("idpf Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Rx buf ring", sizeof("idpf Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
 	}
 
-	txq->tx_tail = 0;
-	txq->nb_used = 0;
-
-	/* Use this as next to clean for split desc queue */
-	txq->last_desc_cleaned = 0;
-	txq->sw_tail = 0;
-	txq->nb_free = txq->nb_tx_desc - 1;
-}
-
-static void
-reset_split_tx_complq(struct idpf_tx_queue *cq)
-{
-	uint32_t i, size;
-
-	if (cq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to complq is NULL");
-		return;
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, IDPF_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
 	}
 
-	size = sizeof(struct idpf_splitq_tx_compl_desc) * cq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)cq->compl_ring)[i] = 0;
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
 
-	cq->tx_tail = 0;
-	cq->expected_gen_id = 1;
+	return mz;
 }
 
 static void
-reset_single_tx_queue(struct idpf_tx_queue *txq)
+idpf_dma_zone_release(const struct rte_memzone *mz)
 {
-	struct idpf_tx_entry *txe;
-	uint32_t i, size;
-	uint16_t prev;
-
-	if (txq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
-		return;
-	}
-
-	txe = txq->sw_ring;
-	size = sizeof(struct idpf_flex_tx_desc) * txq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)txq->tx_ring)[i] = 0;
-
-	prev = (uint16_t)(txq->nb_tx_desc - 1);
-	for (i = 0; i < txq->nb_tx_desc; i++) {
-		txq->tx_ring[i].qw1.cmd_dtype =
-			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE);
-		txe[i].mbuf =  NULL;
-		txe[i].last_id = i;
-		txe[prev].next_id = i;
-		prev = i;
-	}
-
-	txq->tx_tail = 0;
-	txq->nb_used = 0;
-
-	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
-	txq->nb_free = txq->nb_tx_desc - 1;
-
-	txq->next_dd = txq->rs_thresh - 1;
-	txq->next_rs = txq->rs_thresh - 1;
+	rte_memzone_free(mz);
 }
 
 static int
-idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
+idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 			 uint16_t queue_idx, uint16_t rx_free_thresh,
 			 uint16_t nb_desc, unsigned int socket_id,
-			 struct rte_mempool *mp)
+			 struct rte_mempool *mp, uint8_t bufq_id)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
 	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
-	uint32_t ring_size;
+	struct idpf_rx_queue *bufq;
 	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("idpf bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
 
 	bufq->mp = mp;
 	bufq->nb_rx_desc = nb_desc;
@@ -376,8 +159,21 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
 	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
 	bufq->rx_buf_len = len;
 
-	/* Allocate the software ring. */
+	/* Allocate a little more to support bulk allocate. */
 	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = idpf_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
 	bufq->sw_ring =
 		rte_zmalloc_socket("idpf rx bufq sw ring",
 				   sizeof(struct rte_mbuf *) * len,
@@ -385,55 +181,60 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
 				   socket_id);
 	if (bufq->sw_ring == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
-		return -ENOMEM;
-	}
-
-	/* Allocate a liitle more to support bulk allocate. */
-	len = nb_desc + IDPF_RX_MAX_BURST;
-	ring_size = RTE_ALIGN(len *
-			      sizeof(struct virtchnl2_splitq_rx_buf_desc),
-			      IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "rx_buf_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
-		rte_free(bufq->sw_ring);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
 	}
 
-	/* Zero all the descriptors in the ring. */
-	memset(mz->addr, 0, ring_size);
-	bufq->rx_ring_phys_addr = mz->iova;
-	bufq->rx_ring = mz->addr;
-
-	bufq->mz = mz;
-	reset_split_rx_bufq(bufq);
-	bufq->q_set = true;
+	idpf_reset_split_rx_bufq(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
 	bufq->ops = &def_rxq_ops;
+	bufq->q_set = true;
 
-	/* TODO: allow bulk or vec */
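+	/* In the split queue model each Rx queue is backed by two
+	 * buffer queues; bufq_id selects which one this call sets up.
+	 */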
+	if (bufq_id == 1) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == 2) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
 
 	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	idpf_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
 }
 
-static int
-idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			  uint16_t nb_desc, unsigned int socket_id,
-			  const struct rte_eth_rxconf *rx_conf,
-			  struct rte_mempool *mp)
+static void
+idpf_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	idpf_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_rx_queue *bufq1, *bufq2;
+	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
 	struct idpf_rx_queue *rxq;
 	uint16_t rx_free_thresh;
-	uint32_t ring_size;
 	uint64_t offloads;
-	uint16_t qid;
+	bool is_splitq;
 	uint16_t len;
 	int ret;
 
@@ -443,7 +244,7 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
 		IDPF_DEFAULT_RX_FREE_THRESH :
 		rx_conf->rx_free_thresh;
-	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+	if (idpf_check_rx_thresh(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Free memory if needed */
@@ -452,16 +253,19 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		dev->data->rx_queues[queue_idx] = NULL;
 	}
 
-	/* Setup Rx description queue */
+	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("idpf rxq",
 				 sizeof(struct idpf_rx_queue),
 				 RTE_CACHE_LINE_SIZE,
 				 socket_id);
 	if (rxq == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
 	}
 
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
 	rxq->mp = mp;
 	rxq->nb_rx_desc = nb_desc;
 	rxq->rx_free_thresh = rx_free_thresh;
@@ -470,343 +274,129 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
 	rxq->rx_hdr_len = 0;
 	rxq->adapter = adapter;
-	rxq->offloads = offloads;
+	rxq->offloads = idpf_rx_offload_convert(offloads);
 
 	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
 	rxq->rx_buf_len = len;
 
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-	ring_size = RTE_ALIGN(len *
-			      sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
-			      IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "rx_cpmpl_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = idpf_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
 	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
 		ret = -ENOMEM;
-		goto free_rxq;
+		goto err_mz_reserve;
 	}
-
-	/* Zero all the descriptors in the ring. */
-	memset(mz->addr, 0, ring_size);
 	rxq->rx_ring_phys_addr = mz->iova;
 	rxq->rx_ring = mz->addr;
-
 	rxq->mz = mz;
-	reset_split_rx_descq(rxq);
 
-	/* TODO: allow bulk or vec */
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("idpf rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
 
-	/* setup Rx buffer queue */
-	bufq1 = rte_zmalloc_socket("idpf bufq1",
-				   sizeof(struct idpf_rx_queue),
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (bufq1 == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 1.");
-		ret = -ENOMEM;
-		goto free_mz;
-	}
-	qid = 2 * queue_idx;
-	ret = idpf_rx_split_bufq_setup(dev, bufq1, qid, rx_free_thresh,
-				       nb_desc, socket_id, mp);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
-		ret = -EINVAL;
-		goto free_bufq1;
-	}
-	rxq->bufq1 = bufq1;
+		idpf_reset_single_rx_queue(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
+	} else {
+		idpf_reset_split_rx_descq(rxq);
 
-	bufq2 = rte_zmalloc_socket("idpf bufq2",
-				   sizeof(struct idpf_rx_queue),
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (bufq2 == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 2.");
-		rte_free(bufq1->sw_ring);
-		rte_memzone_free(bufq1->mz);
-		ret = -ENOMEM;
-		goto free_bufq1;
-	}
-	qid = 2 * queue_idx + 1;
-	ret = idpf_rx_split_bufq_setup(dev, bufq2, qid, rx_free_thresh,
-				       nb_desc, socket_id, mp);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
-		rte_free(bufq1->sw_ring);
-		rte_memzone_free(bufq1->mz);
-		ret = -EINVAL;
-		goto free_bufq2;
+		/* Setup Rx buffer queues */
+		ret = idpf_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = idpf_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
 	}
-	rxq->bufq2 = bufq2;
 
 	rxq->q_set = true;
 	dev->data->rx_queues[queue_idx] = rxq;
 
 	return 0;
 
-free_bufq2:
-	rte_free(bufq2);
-free_bufq1:
-	rte_free(bufq1);
-free_mz:
-	rte_memzone_free(mz);
-free_rxq:
+err_bufq2_setup:
+	idpf_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	idpf_dma_zone_release(mz);
+err_mz_reserve:
 	rte_free(rxq);
-
+err_rxq_alloc:
 	return ret;
 }
 
 static int
-idpf_rx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			   uint16_t nb_desc, unsigned int socket_id,
-			   const struct rte_eth_rxconf *rx_conf,
-			   struct rte_mempool *mp)
+idpf_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
-	struct idpf_rx_queue *rxq;
-	uint16_t rx_free_thresh;
-	uint32_t ring_size;
-	uint64_t offloads;
-	uint16_t len;
-
-	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
-
-	/* Check free threshold */
-	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
-		IDPF_DEFAULT_RX_FREE_THRESH :
-		rx_conf->rx_free_thresh;
-	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
-		return -EINVAL;
-
-	/* Free memory if needed */
-	if (dev->data->rx_queues[queue_idx] != NULL) {
-		idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
-		dev->data->rx_queues[queue_idx] = NULL;
-	}
-
-	/* Setup Rx description queue */
-	rxq = rte_zmalloc_socket("idpf rxq",
-				 sizeof(struct idpf_rx_queue),
-				 RTE_CACHE_LINE_SIZE,
-				 socket_id);
-	if (rxq == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
-		return -ENOMEM;
-	}
-
-	rxq->mp = mp;
-	rxq->nb_rx_desc = nb_desc;
-	rxq->rx_free_thresh = rx_free_thresh;
-	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
-	rxq->port_id = dev->data->port_id;
-	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
-	rxq->rx_hdr_len = 0;
-	rxq->adapter = adapter;
-	rxq->offloads = offloads;
-
-	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
-	rxq->rx_buf_len = len;
-
-	len = nb_desc + IDPF_RX_MAX_BURST;
-	rxq->sw_ring =
-		rte_zmalloc_socket("idpf rxq sw ring",
-				   sizeof(struct rte_mbuf *) * len,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (rxq->sw_ring == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
-		rte_free(rxq);
-		return -ENOMEM;
-	}
-
-	/* Allocate a liitle more to support bulk allocate. */
-	len = nb_desc + IDPF_RX_MAX_BURST;
-	ring_size = RTE_ALIGN(len *
-			      sizeof(struct virtchnl2_singleq_rx_buf_desc),
-			      IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "rx ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
-		rte_free(rxq->sw_ring);
-		rte_free(rxq);
-		return -ENOMEM;
-	}
-
-	/* Zero all the descriptors in the ring. */
-	memset(mz->addr, 0, ring_size);
-	rxq->rx_ring_phys_addr = mz->iova;
-	rxq->rx_ring = mz->addr;
-
-	rxq->mz = mz;
-	reset_single_rx_queue(rxq);
-	rxq->q_set = true;
-	dev->data->rx_queues[queue_idx] = rxq;
-	rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
-			queue_idx * vport->chunks_info.rx_qtail_spacing);
-	rxq->ops = &def_rxq_ops;
-
-	return 0;
-}
-
-int
-idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-		    uint16_t nb_desc, unsigned int socket_id,
-		    const struct rte_eth_rxconf *rx_conf,
-		    struct rte_mempool *mp)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		return idpf_rx_single_queue_setup(dev, queue_idx, nb_desc,
-						  socket_id, rx_conf, mp);
-	else
-		return idpf_rx_split_queue_setup(dev, queue_idx, nb_desc,
-						 socket_id, rx_conf, mp);
-}
-
-static int
-idpf_tx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			  uint16_t nb_desc, unsigned int socket_id,
-			  const struct rte_eth_txconf *tx_conf)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	uint16_t tx_rs_thresh, tx_free_thresh;
-	struct idpf_hw *hw = &adapter->hw;
-	struct idpf_tx_queue *txq, *cq;
-	const struct rte_memzone *mz;
-	uint32_t ring_size;
-	uint64_t offloads;
+	struct idpf_tx_queue *cq;
 	int ret;
 
-	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
-
-	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh != 0) ?
-		tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
-	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh != 0) ?
-		tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
-		return -EINVAL;
-
-	/* Free memory if needed. */
-	if (dev->data->tx_queues[queue_idx] != NULL) {
-		idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
-		dev->data->tx_queues[queue_idx] = NULL;
-	}
-
-	/* Allocate the TX queue data structure. */
-	txq = rte_zmalloc_socket("idpf split txq",
-				 sizeof(struct idpf_tx_queue),
-				 RTE_CACHE_LINE_SIZE,
-				 socket_id);
-	if (txq == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
-		return -ENOMEM;
-	}
-
-	txq->nb_tx_desc = nb_desc;
-	txq->rs_thresh = tx_rs_thresh;
-	txq->free_thresh = tx_free_thresh;
-	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
-	txq->port_id = dev->data->port_id;
-	txq->offloads = offloads;
-	txq->tx_deferred_start = tx_conf->tx_deferred_start;
-
-	/* Allocate software ring */
-	txq->sw_nb_desc = 2 * nb_desc;
-	txq->sw_ring =
-		rte_zmalloc_socket("idpf split tx sw ring",
-				   sizeof(struct idpf_tx_entry) *
-				   txq->sw_nb_desc,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (txq->sw_ring == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
-		ret = -ENOMEM;
-		goto err_txq_sw_ring;
-	}
-
-	/* Allocate TX hardware ring descriptors. */
-	ring_size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
-	ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "split_tx_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
-		ret = -ENOMEM;
-		goto err_txq_mz;
-	}
-	txq->tx_ring_phys_addr = mz->iova;
-	txq->desc_ring = mz->addr;
-
-	txq->mz = mz;
-	reset_split_tx_descq(txq);
-	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
-			queue_idx * vport->chunks_info.tx_qtail_spacing);
-	txq->ops = &def_txq_ops;
-
-	/* Allocate the TX completion queue data structure. */
-	txq->complq = rte_zmalloc_socket("idpf splitq cq",
-					 sizeof(struct idpf_tx_queue),
-					 RTE_CACHE_LINE_SIZE,
-					 socket_id);
-	cq = txq->complq;
+	cq = rte_zmalloc_socket("idpf splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
 	if (cq == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
 		ret = -ENOMEM;
-		goto err_cq;
+		goto err_cq_alloc;
 	}
-	cq->nb_tx_desc = 2 * nb_desc;
+
+	cq->nb_tx_desc = nb_desc;
 	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
 	cq->port_id = dev->data->port_id;
 	cq->txqs = dev->data->tx_queues;
 	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
 
-	ring_size = sizeof(struct idpf_splitq_tx_compl_desc) * cq->nb_tx_desc;
-	ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "tx_split_compl_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
+	mz = idpf_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
 	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX completion queue");
 		ret = -ENOMEM;
-		goto err_cq_mz;
+		goto err_mz_reserve;
 	}
 	cq->tx_ring_phys_addr = mz->iova;
 	cq->compl_ring = mz->addr;
 	cq->mz = mz;
-	reset_split_tx_complq(cq);
+	idpf_reset_split_tx_complq(cq);
 
-	txq->q_set = true;
-	dev->data->tx_queues[queue_idx] = txq;
+	txq->complq = cq;
 
 	return 0;
 
-err_cq_mz:
+err_mz_reserve:
 	rte_free(cq);
-err_cq:
-	rte_memzone_free(txq->mz);
-err_txq_mz:
-	rte_free(txq->sw_ring);
-err_txq_sw_ring:
-	rte_free(txq);
-
+err_cq_alloc:
 	return ret;
 }
 
-static int
-idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			   uint16_t nb_desc, unsigned int socket_id,
-			   const struct rte_eth_txconf *tx_conf)
+int
+idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
@@ -814,8 +404,10 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
 	struct idpf_tx_queue *txq;
-	uint32_t ring_size;
 	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
 
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 
@@ -823,7 +415,7 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
 	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
 		tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+	if (idpf_check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Free memory if needed. */
@@ -839,71 +431,74 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 				 socket_id);
 	if (txq == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_txq_alloc;
 	}
 
-	/* TODO: vlan offload */
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
 
 	txq->nb_tx_desc = nb_desc;
 	txq->rs_thresh = tx_rs_thresh;
 	txq->free_thresh = tx_free_thresh;
 	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
 	txq->port_id = dev->data->port_id;
-	txq->offloads = offloads;
+	txq->offloads = idpf_tx_offload_convert(offloads);
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 
-	/* Allocate software ring */
-	txq->sw_ring =
-		rte_zmalloc_socket("idpf tx sw ring",
-				   sizeof(struct idpf_tx_entry) * nb_desc,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (txq->sw_ring == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
-		rte_free(txq);
-		return -ENOMEM;
-	}
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
 
 	/* Allocate TX hardware ring descriptors. */
-	ring_size = sizeof(struct idpf_flex_tx_desc) * nb_desc;
-	ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
+	mz = idpf_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
 	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
-		rte_free(txq->sw_ring);
-		rte_free(txq);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_mz_reserve;
 	}
-
 	txq->tx_ring_phys_addr = mz->iova;
-	txq->tx_ring = mz->addr;
-
 	txq->mz = mz;
-	reset_single_tx_queue(txq);
-	txq->q_set = true;
-	dev->data->tx_queues[queue_idx] = txq;
+
+	txq->sw_ring = rte_zmalloc_socket("idpf tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		idpf_reset_single_tx_queue(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		idpf_reset_split_tx_descq(txq);
+
+		/* Set up the Tx completion queue for the split queue model */
+		ret = idpf_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
 	txq->ops = &def_txq_ops;
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
 
 	return 0;
-}
 
-int
-idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-		    uint16_t nb_desc, unsigned int socket_id,
-		    const struct rte_eth_txconf *tx_conf)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		return idpf_tx_single_queue_setup(dev, queue_idx, nb_desc,
-						  socket_id, tx_conf);
-	else
-		return idpf_tx_split_queue_setup(dev, queue_idx, nb_desc,
-						 socket_id, tx_conf);
+err_complq_setup:
+err_sw_ring_alloc:
+	idpf_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
 }
 
 static int
@@ -916,89 +511,13 @@ idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
 							 &idpf_timestamp_dynflag);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR,
-				"Cannot register mbuf field/flag for timestamp");
+				    "Cannot register mbuf field/flag for timestamp");
 			return -EINVAL;
 		}
 	}
 	return 0;
 }
 
-static int
-idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
-{
-	volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
-	struct rte_mbuf *mbuf = NULL;
-	uint64_t dma_addr;
-	uint16_t i;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		mbuf = rte_mbuf_raw_alloc(rxq->mp);
-		if (unlikely(mbuf == NULL)) {
-			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
-			return -ENOMEM;
-		}
-
-		rte_mbuf_refcnt_set(mbuf, 1);
-		mbuf->next = NULL;
-		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
-		mbuf->nb_segs = 1;
-		mbuf->port = rxq->port_id;
-
-		dma_addr =
-			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
-
-		rxd = &((volatile struct virtchnl2_singleq_rx_buf_desc *)(rxq->rx_ring))[i];
-		rxd->pkt_addr = dma_addr;
-		rxd->hdr_addr = 0;
-		rxd->rsvd1 = 0;
-		rxd->rsvd2 = 0;
-		rxq->sw_ring[i] = mbuf;
-	}
-
-	return 0;
-}
-
-static int
-idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
-{
-	volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
-	struct rte_mbuf *mbuf = NULL;
-	uint64_t dma_addr;
-	uint16_t i;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		mbuf = rte_mbuf_raw_alloc(rxq->mp);
-		if (unlikely(mbuf == NULL)) {
-			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
-			return -ENOMEM;
-		}
-
-		rte_mbuf_refcnt_set(mbuf, 1);
-		mbuf->next = NULL;
-		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
-		mbuf->nb_segs = 1;
-		mbuf->port = rxq->port_id;
-
-		dma_addr =
-			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
-
-		rxd = &((volatile struct virtchnl2_splitq_rx_buf_desc *)(rxq->rx_ring))[i];
-		rxd->qword0.buf_id = i;
-		rxd->qword0.rsvd0 = 0;
-		rxd->qword0.rsvd1 = 0;
-		rxd->pkt_addr = dma_addr;
-		rxd->hdr_addr = 0;
-		rxd->rsvd2 = 0;
-
-		rxq->sw_ring[i] = mbuf;
-	}
-
-	rxq->nb_rx_hold = 0;
-	rxq->rx_tail = rxq->nb_rx_desc - 1;
-
-	return 0;
-}
-
 int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -1164,11 +683,11 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	rxq = dev->data->rx_queues[rx_queue_id];
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		rxq->ops->release_mbufs(rxq);
-		reset_single_rx_queue(rxq);
+		idpf_reset_single_rx_queue(rxq);
 	} else {
 		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
 		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
-		reset_split_rx_queue(rxq);
+		idpf_reset_split_rx_queue(rxq);
 	}
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -1195,10 +714,10 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	txq = dev->data->tx_queues[tx_queue_id];
 	txq->ops->release_mbufs(txq);
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		reset_single_tx_queue(txq);
+		idpf_reset_single_tx_queue(txq);
 	} else {
-		reset_split_tx_descq(txq);
-		reset_split_tx_complq(txq->complq);
+		idpf_reset_split_tx_descq(txq);
+		idpf_reset_split_tx_complq(txq->complq);
 	}
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index b8325f9b96..4efbf10295 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -51,7 +51,6 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define IDPF_RING_BASE_ALIGN	128
 
-#define IDPF_RX_MAX_BURST		32
 #define IDPF_DEFAULT_RX_FREE_THRESH	32
 
 /* used for Vector PMD */
@@ -101,14 +100,6 @@ union idpf_tx_offload {
 	};
 };
 
-struct idpf_rxq_ops {
-	void (*release_mbufs)(struct idpf_rx_queue *rxq);
-};
-
-struct idpf_txq_ops {
-	void (*release_mbufs)(struct idpf_tx_queue *txq);
-};
-
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index fb2b6bb53c..71a6c59823 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -562,7 +562,7 @@ idpf_tx_free_bufs_avx512(struct idpf_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->next_dd - (n - 1);
 
-	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & IDPF_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
 								rte_lcore_id());
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v6 13/19] common/idpf: add Rx and Tx data path
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common modle beilei.xing
                         ` (11 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 12/19] common/idpf: add help functions for queue setup and release beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 14/19] common/idpf: add vec queue setup beilei.xing
                         ` (7 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Mingxia Liu

From: Beilei Xing <beilei.xing@intel.com>

Add a timestamp field to the idpf_adapter structure.
Move the scalar Rx/Tx data paths for both the single queue and split
queue models to the common module.
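
For reference, the new idpf_tstamp_convert_32b_64b() helper extends
each 32-bit hardware timestamp sample against a 64-bit reference
cached in the adapter (time_hw). A minimal standalone sketch of the
rollover arithmetic, with illustrative names that are not part of the
driver API:

    #include <stdint.h>

    /* If the sample and the cached reference are more than half the
     * 32-bit range apart, the sample is assumed to predate the
     * reference and is subtracted instead of added.
     */
    static inline uint64_t
    tstamp_extend_32b_64b(uint64_t time_hw, uint32_t in_ts)
    {
        const uint64_t mask = 0xFFFFFFFF;
        uint32_t delta = in_ts - (uint32_t)(time_hw & mask);

        if (delta > (mask / 2))
            return time_hw - ((uint32_t)(time_hw & mask) - in_ts);
        return time_hw + delta;
    }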

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h |   5 +
 drivers/common/idpf/idpf_common_logs.h   |  24 +
 drivers/common/idpf/idpf_common_rxtx.c   | 987 +++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h   |  89 +-
 drivers/common/idpf/version.map          |   6 +
 drivers/net/idpf/idpf_ethdev.c           |   2 -
 drivers/net/idpf/idpf_ethdev.h           |   4 -
 drivers/net/idpf/idpf_logs.h             |  24 -
 drivers/net/idpf/idpf_rxtx.c             | 937 +--------------------
 drivers/net/idpf/idpf_rxtx.h             | 132 ---
 drivers/net/idpf/idpf_rxtx_vec_avx512.c  |   8 +-
 11 files changed, 1115 insertions(+), 1103 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 4895f5f360..573852ff75 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -23,6 +23,8 @@
 #define IDPF_TX_COMPLQ_PER_GRP	1
 #define IDPF_TXQ_PER_GRP	1
 
+#define IDPF_MIN_FRAME_SIZE	14
+
 #define IDPF_MAX_PKT_TYPE	1024
 
 #define IDPF_DFLT_INTERVAL	16
@@ -43,6 +45,9 @@ struct idpf_adapter {
 
 	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
 	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
+
+	/* For timestamp */
+	uint64_t time_hw;
 };
 
 struct idpf_chunks_info {
diff --git a/drivers/common/idpf/idpf_common_logs.h b/drivers/common/idpf/idpf_common_logs.h
index fe36562769..63ad2195be 100644
--- a/drivers/common/idpf/idpf_common_logs.h
+++ b/drivers/common/idpf/idpf_common_logs.h
@@ -20,4 +20,28 @@ extern int idpf_common_logtype;
 #define DRV_LOG(level, fmt, args...)		\
 	DRV_LOG_RAW(level, fmt "\n", ## args)
 
+#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
+#define RX_LOG(level, ...) \
+	RTE_LOG(level, \
+		PMD, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+#else
+#define RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_IDPF_DEBUG_TX
+#define TX_LOG(level, ...) \
+	RTE_LOG(level, \
+		PMD, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+#else
+#define TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
 #endif /* _IDPF_COMMON_LOGS_H_ */
diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index eeeeedca88..459057f20e 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -3,8 +3,13 @@
  */
 
 #include <rte_mbuf_dyn.h>
+#include <rte_errno.h>
+
 #include "idpf_common_rxtx.h"
 
+int idpf_timestamp_dynfield_offset = -1;
+uint64_t idpf_timestamp_dynflag;
+
 int
 idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
@@ -337,6 +342,23 @@ idpf_tx_queue_release(void *txq)
 	rte_free(q);
 }
 
+int
+idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
+{
+	int err;
+
+	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+		/* Register mbuf field and flag for Rx timestamp */
+		err = rte_mbuf_dyn_rx_timestamp_register(&idpf_timestamp_dynfield_offset,
+							 &idpf_timestamp_dynflag);
+		if (err != 0) {
+			DRV_LOG(ERR,
+				"Cannot register mbuf field/flag for timestamp");
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+
 int
 idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
 {
@@ -412,3 +434,968 @@ idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
 
 	return 0;
 }
+
+#define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
+/* Helper function to convert a 32b nanoseconds timestamp to 64b. */
+static inline uint64_t
+idpf_tstamp_convert_32b_64b(struct idpf_adapter *ad, uint32_t flag,
+			    uint32_t in_timestamp)
+{
+#ifdef RTE_ARCH_X86_64
+	struct idpf_hw *hw = &ad->hw;
+	const uint64_t mask = 0xFFFFFFFF;
+	uint32_t hi, lo, lo2, delta;
+	uint64_t ns;
+
+	if (flag != 0) {
+		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
+		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_EXEC_CMD_M |
+			       PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
+		lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
+		hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
+		/*
+		 * On a typical system, the delta between lo and lo2 is ~1000ns,
+		 * so 10000 seems a large-enough but not overly-big guard band.
+		 */
+		if (lo > (UINT32_MAX - IDPF_TIMESYNC_REG_WRAP_GUARD_BAND))
+			lo2 = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
+		else
+			lo2 = lo;
+
+		if (lo2 < lo) {
+			lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
+			hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
+		}
+
+		ad->time_hw = ((uint64_t)hi << 32) | lo;
+	}
+
+	delta = (in_timestamp - (uint32_t)(ad->time_hw & mask));
+	if (delta > (mask / 2)) {
+		delta = ((uint32_t)(ad->time_hw & mask) - in_timestamp);
+		ns = ad->time_hw - delta;
+	} else {
+		ns = ad->time_hw + delta;
+	}
+
+	return ns;
+#else /* !RTE_ARCH_X86_64 */
+	RTE_SET_USED(ad);
+	RTE_SET_USED(flag);
+	RTE_SET_USED(in_timestamp);
+	return 0;
+#endif /* RTE_ARCH_X86_64 */
+}
+
+#define IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S				\
+	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) |     \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) |     \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) |    \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
+
+static inline uint64_t
+idpf_splitq_rx_csum_offload(uint8_t err)
+{
+	uint64_t flags = 0;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S)) == 0))
+		return flags;
+
+	if (likely((err & IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S) == 0)) {
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)) != 0))
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+	return flags;
+}
+
+#define IDPF_RX_FLEX_DESC_ADV_HASH1_S  0
+#define IDPF_RX_FLEX_DESC_ADV_HASH2_S  16
+#define IDPF_RX_FLEX_DESC_ADV_HASH3_S  24
+
+static inline uint64_t
+idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
+			   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+{
+	uint8_t status_err0_qw0;
+	uint64_t flags = 0;
+
+	status_err0_qw0 = rx_desc->status_err0_qw0;
+
+	if ((status_err0_qw0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) != 0) {
+		flags |= RTE_MBUF_F_RX_RSS_HASH;
+		mb->hash.rss = (rte_le_to_cpu_16(rx_desc->hash1) <<
+				IDPF_RX_FLEX_DESC_ADV_HASH1_S) |
+			((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
+			 IDPF_RX_FLEX_DESC_ADV_HASH2_S) |
+			((uint32_t)(rx_desc->hash3) <<
+			 IDPF_RX_FLEX_DESC_ADV_HASH3_S);
+	}
+
+	return flags;
+}
+
+static void
+idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
+{
+	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
+	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
+	uint16_t nb_refill = rx_bufq->rx_free_thresh;
+	uint16_t nb_desc = rx_bufq->nb_rx_desc;
+	uint16_t next_avail = rx_bufq->rx_tail;
+	struct rte_mbuf *nmb[rx_bufq->rx_free_thresh];
+	uint64_t dma_addr;
+	uint16_t delta;
+	int i;
+
+	if (rx_bufq->nb_rx_hold < rx_bufq->rx_free_thresh)
+		return;
+
+	rx_buf_ring = rx_bufq->rx_ring;
+	delta = nb_desc - next_avail;
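+	/* Fewer slots remain before the ring wrap than one refill
+	 * burst: fill up to the end of the ring first, then continue
+	 * from index 0 below.
+	 */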
+	if (unlikely(delta < nb_refill)) {
+		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta) == 0)) {
+			for (i = 0; i < delta; i++) {
+				rx_buf_desc = &rx_buf_ring[next_avail + i];
+				rx_bufq->sw_ring[next_avail + i] = nmb[i];
+				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+				rx_buf_desc->hdr_addr = 0;
+				rx_buf_desc->pkt_addr = dma_addr;
+			}
+			nb_refill -= delta;
+			next_avail = 0;
+			rx_bufq->nb_rx_hold -= delta;
+		} else {
+			rte_atomic64_add(&rx_bufq->rx_stats.mbuf_alloc_failed,
+					 nb_desc - next_avail);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+			       rx_bufq->port_id, rx_bufq->queue_id);
+			return;
+		}
+	}
+
+	if (nb_desc - next_avail >= nb_refill) {
+		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill) == 0)) {
+			for (i = 0; i < nb_refill; i++) {
+				rx_buf_desc = &rx_buf_ring[next_avail + i];
+				rx_bufq->sw_ring[next_avail + i] = nmb[i];
+				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+				rx_buf_desc->hdr_addr = 0;
+				rx_buf_desc->pkt_addr = dma_addr;
+			}
+			next_avail += nb_refill;
+			rx_bufq->nb_rx_hold -= nb_refill;
+		} else {
+			rte_atomic64_add(&rx_bufq->rx_stats.mbuf_alloc_failed,
+					 nb_desc - next_avail);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+			       rx_bufq->port_id, rx_bufq->queue_id);
+		}
+	}
+
+	IDPF_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
+
+	rx_bufq->rx_tail = next_avail;
+}
+
+uint16_t
+idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		      uint16_t nb_pkts)
+{
+	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
+	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
+	uint16_t pktlen_gen_bufq_id;
+	struct idpf_rx_queue *rxq;
+	const uint32_t *ptype_tbl;
+	uint8_t status_err0_qw1;
+	struct idpf_adapter *ad;
+	struct rte_mbuf *rxm;
+	uint16_t rx_id_bufq1;
+	uint16_t rx_id_bufq2;
+	uint64_t pkt_flags;
+	uint16_t pkt_len;
+	uint16_t bufq_id;
+	uint16_t gen_id;
+	uint16_t rx_id;
+	uint16_t nb_rx;
+	uint64_t ts_ns;
+
+	nb_rx = 0;
+	rxq = rx_queue;
+
+	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	ad = rxq->adapter;
+
+	rx_id = rxq->rx_tail;
+	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
+	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
+	rx_desc_ring = rxq->rx_ring;
+	ptype_tbl = rxq->adapter->ptype_tbl;
+
+	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0)
+		rxq->hw_register_set = 1;
+
+	while (nb_rx < nb_pkts) {
+		rx_desc = &rx_desc_ring[rx_id];
+
+		pktlen_gen_bufq_id =
+			rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
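+		/* The descriptor's generation bit flips on every ring
+		 * wrap; it is valid only while it matches the
+		 * software-tracked expected_gen_id.
+		 */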
+		gen_id = (pktlen_gen_bufq_id &
+			  VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
+			VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
+		if (gen_id != rxq->expected_gen_id)
+			break;
+
+		pkt_len = (pktlen_gen_bufq_id &
+			   VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
+			VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
+		if (pkt_len == 0)
+			RX_LOG(ERR, "Packet length is 0");
+
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc)) {
+			rx_id = 0;
+			rxq->expected_gen_id ^= 1;
+		}
+
+		bufq_id = (pktlen_gen_bufq_id &
+			   VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M) >>
+			VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S;
+		if (bufq_id == 0) {
+			rxm = rxq->bufq1->sw_ring[rx_id_bufq1];
+			rx_id_bufq1++;
+			if (unlikely(rx_id_bufq1 == rxq->bufq1->nb_rx_desc))
+				rx_id_bufq1 = 0;
+			rxq->bufq1->nb_rx_hold++;
+		} else {
+			rxm = rxq->bufq2->sw_ring[rx_id_bufq2];
+			rx_id_bufq2++;
+			if (unlikely(rx_id_bufq2 == rxq->bufq2->nb_rx_desc))
+				rx_id_bufq2 = 0;
+			rxq->bufq2->nb_rx_hold++;
+		}
+
+		rxm->pkt_len = pkt_len;
+		rxm->data_len = pkt_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rxm->next = NULL;
+		rxm->nb_segs = 1;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		rxm->packet_type =
+			ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
+				   VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
+				  VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
+
+		status_err0_qw1 = rx_desc->status_err0_qw1;
+		pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
+		pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP)) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+							    rxq->hw_register_set,
+							    rte_le_to_cpu_32(rx_desc->ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			rxm->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		rxm->ol_flags |= pkt_flags;
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+
+	if (nb_rx > 0) {
+		rxq->rx_tail = rx_id;
+		if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
+			rxq->bufq1->rx_next_avail = rx_id_bufq1;
+		if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
+			rxq->bufq2->rx_next_avail = rx_id_bufq2;
+
+		idpf_split_rx_bufq_refill(rxq->bufq1);
+		idpf_split_rx_bufq_refill(rxq->bufq2);
+	}
+
+	return nb_rx;
+}
+
+static inline void
+idpf_split_tx_free(struct idpf_tx_queue *cq)
+{
+	volatile struct idpf_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
+	volatile struct idpf_splitq_tx_compl_desc *txd;
+	uint16_t next = cq->tx_tail;
+	struct idpf_tx_entry *txe;
+	struct idpf_tx_queue *txq;
+	uint16_t gen, qid, q_head;
+	uint16_t nb_desc_clean;
+	uint8_t ctype;
+
+	txd = &compl_ring[next];
+	gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+	       IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S;
+	if (gen != cq->expected_gen_id)
+		return;
+
+	ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+		 IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> IDPF_TXD_COMPLQ_COMPL_TYPE_S;
+	qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+	       IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S;
+	q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
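+	/* One completion queue may serve several Tx queues; the qid
+	 * carried in the completion descriptor selects the txq the
+	 * completion belongs to.
+	 */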
+	txq = cq->txqs[qid - cq->tx_start_qid];
+
+	switch (ctype) {
+	case IDPF_TXD_COMPLT_RE:
+		/* Clean up to q_head, which indicates the last fetched txq desc id + 1.
+		 * TODO: need to refine and remove the if condition.
+		 */
+		if (unlikely(q_head % 32)) {
+			TX_LOG(ERR, "unexpected desc (head = %u) completion.",
+			       q_head);
+			return;
+		}
+		if (txq->last_desc_cleaned > q_head)
+			nb_desc_clean = (txq->nb_tx_desc - txq->last_desc_cleaned) +
+				q_head;
+		else
+			nb_desc_clean = q_head - txq->last_desc_cleaned;
+		txq->nb_free += nb_desc_clean;
+		txq->last_desc_cleaned = q_head;
+		break;
+	case IDPF_TXD_COMPLT_RS:
+		/* q_head indicates sw_id when ctype is 2 */
+		txe = &txq->sw_ring[q_head];
+		if (txe->mbuf != NULL) {
+			rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = NULL;
+		}
+		break;
+	default:
+		TX_LOG(ERR, "unknown completion type.");
+		return;
+	}
+
+	if (++next == cq->nb_tx_desc) {
+		next = 0;
+		cq->expected_gen_id ^= 1;
+	}
+
+	cq->tx_tail = next;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+idpf_calc_context_desc(uint64_t flags)
+{
+	if ((flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+		return 1;
+
+	return 0;
+}
+
+/* Set the TSO context descriptor. */
+static inline void
+idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
+			union idpf_tx_offload tx_offload,
+			volatile union idpf_flex_tx_ctx_desc *ctx_desc)
+{
+	uint16_t cmd_dtype;
+	uint32_t tso_len;
+	uint8_t hdr_len;
+
+	if (tx_offload.l4_len == 0) {
+		TX_LOG(DEBUG, "L4 length set to 0");
+		return;
+	}
+
+	hdr_len = tx_offload.l2_len +
+		tx_offload.l3_len +
+		tx_offload.l4_len;
+	cmd_dtype = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX |
+		IDPF_TX_FLEX_CTX_DESC_CMD_TSO;
+	tso_len = mbuf->pkt_len - hdr_len;
+
+	ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
+	ctx_desc->tso.qw0.hdr_len = hdr_len;
+	ctx_desc->tso.qw0.mss_rt =
+		rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
+				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
+	ctx_desc->tso.qw0.flex_tlen =
+		rte_cpu_to_le_32(tso_len &
+				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
+}
+
+uint16_t
+idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		      uint16_t nb_pkts)
+{
+	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
+	volatile struct idpf_flex_tx_sched_desc *txr;
+	volatile struct idpf_flex_tx_sched_desc *txd;
+	struct idpf_tx_entry *sw_ring;
+	union idpf_tx_offload tx_offload = {0};
+	struct idpf_tx_entry *txe, *txn;
+	uint16_t nb_used, tx_id, sw_id;
+	struct rte_mbuf *tx_pkt;
+	uint16_t nb_to_clean;
+	uint16_t nb_tx = 0;
+	uint64_t ol_flags;
+	uint16_t nb_ctx;
+
+	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
+		return nb_tx;
+
+	txr = txq->desc_ring;
+	sw_ring = txq->sw_ring;
+	tx_id = txq->tx_tail;
+	sw_id = txq->sw_tail;
+	txe = &sw_ring[sw_id];
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		tx_pkt = tx_pkts[nb_tx];
+
+		if (txq->nb_free <= txq->free_thresh) {
+			/* TODO: Needs refinement:
+			 * 1. free and clean: better to decide on a clean destination
+			 * instead of a loop count, and don't free the mbuf as soon as
+			 * RS is received; free it on transmit or according to the
+			 * clean destination. For now, ignore the RE write-back and
+			 * free the mbuf when RS is received.
+			 * 2. out-of-order write-back is not yet supported; the SW
+			 * head and HW head need to be tracked separately.
+			 */
+			nb_to_clean = 2 * txq->rs_thresh;
+			while (nb_to_clean--)
+				idpf_split_tx_free(txq->complq);
+		}
+
+		if (txq->nb_free < tx_pkt->nb_segs)
+			break;
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = idpf_calc_context_desc(ol_flags);
+		nb_used = tx_pkt->nb_segs + nb_ctx;
+
+		/* context descriptor */
+		if (nb_ctx != 0) {
+			volatile union idpf_flex_tx_ctx_desc *ctx_desc =
+				(volatile union idpf_flex_tx_ctx_desc *)&txr[tx_id];
+
+			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+							ctx_desc);
+
+			tx_id++;
+			if (tx_id == txq->nb_tx_desc)
+				tx_id = 0;
+		}
+
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+			txe->mbuf = tx_pkt;
+
+			/* Setup TX descriptor */
+			txd->buf_addr =
+				rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
+			txd->qw1.cmd_dtype =
+				rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE);
+			txd->qw1.rxr_bufsize = tx_pkt->data_len;
+			txd->qw1.compl_tag = sw_id;
+			tx_id++;
+			if (tx_id == txq->nb_tx_desc)
+				tx_id = 0;
+			sw_id = txe->next_id;
+			txe = txn;
+			tx_pkt = tx_pkt->next;
+		} while (tx_pkt);
+
+		/* fill the last descriptor with End of Packet (EOP) bit */
+		txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_EOP;
+
+		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+
+		if (txq->nb_used >= 32) {
+			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
+			/* Update txq RE bit counters */
+			txq->nb_used = 0;
+		}
+	}
+
+	/* update the tail pointer if any packets were processed */
+	if (likely(nb_tx > 0)) {
+		IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+		txq->tx_tail = tx_id;
+		txq->sw_tail = sw_id;
+	}
+
+	return nb_tx;
+}
+
+#define IDPF_RX_FLEX_DESC_STATUS0_XSUM_S				\
+	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) |		\
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) |		\
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) |	\
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S))
+
+/* Translate the rx descriptor status and error fields to pkt flags */
+static inline uint64_t
+idpf_rxd_to_pkt_flags(uint16_t status_error)
+{
+	uint64_t flags = 0;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S)) == 0))
+		return flags;
+
+	if (likely((status_error & IDPF_RX_FLEX_DESC_STATUS0_XSUM_S) == 0)) {
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)) != 0))
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+	return flags;
+}
+
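+/* Batch tail register writes: held descriptors accumulate in nb_hold
+ * and the tail register is only written once more than rx_free_thresh
+ * descriptors are pending, reducing MMIO traffic.
+ */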
+static inline void
+idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
+		    uint16_t rx_id)
+{
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+
+	if (nb_hold > rxq->rx_free_thresh) {
+		RX_LOG(DEBUG,
+		       "port_id=%u queue_id=%u rx_tail=%u nb_hold=%u",
+		       rxq->port_id, rxq->queue_id, rx_id, nb_hold);
+		rx_id = (uint16_t)((rx_id == 0) ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+}
+
+uint16_t
+idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	volatile union virtchnl2_rx_desc *rx_ring;
+	volatile union virtchnl2_rx_desc *rxdp;
+	union virtchnl2_rx_desc rxd;
+	struct idpf_rx_queue *rxq;
+	const uint32_t *ptype_tbl;
+	uint16_t rx_id, nb_hold;
+	struct idpf_adapter *ad;
+	uint16_t rx_packet_len;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	uint16_t rx_status0;
+	uint64_t pkt_flags;
+	uint64_t dma_addr;
+	uint64_t ts_ns;
+	uint16_t nb_rx;
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+
+	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	ad = rxq->adapter;
+
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+	ptype_tbl = rxq->adapter->ptype_tbl;
+
+	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0)
+		rxq->hw_register_set = 1;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+		/* Check the DD bit first */
+		if ((rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)) == 0)
+			break;
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(nmb == NULL)) {
+			rte_atomic64_inc(&rxq->rx_stats.mbuf_alloc_failed);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+			       "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+		rxd = *rxdp; /* copy the ring descriptor to a temp variable */
+
+		nb_hold++;
+		rxm = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
+				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		rxm->packet_type =
+			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
+					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+		rxm->ol_flags |= pkt_flags;
+
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+					    rxq->hw_register_set,
+					    rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			rxm->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+	return nb_rx;
+}
+
+static inline int
+idpf_xmit_cleanup(struct idpf_tx_queue *txq)
+{
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	struct idpf_tx_entry *sw_ring = txq->sw_ring;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+	uint16_t i;
+
+	volatile struct idpf_flex_tx_desc *txd = txq->tx_ring;
+
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	/* In the writeback Tx descriptor, the only significant field is the 4-bit DTYPE */
+	if ((txd[desc_to_clean_to].qw1.cmd_dtype &
+	     rte_cpu_to_le_16(IDPF_TXD_QW1_DTYPE_M)) !=
+	    rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE)) {
+		TX_LOG(DEBUG, "TX descriptor %4u is not done "
+		       "(port=%d queue=%d)", desc_to_clean_to,
+		       txq->port_id, txq->queue_id);
+		return -1;
+	}
+
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+					    desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					    last_desc_cleaned);
+
+	txd[desc_to_clean_to].qw1.cmd_dtype = 0;
+	txd[desc_to_clean_to].qw1.buf_size = 0;
+	for (i = 0; i < RTE_DIM(txd[desc_to_clean_to].qw1.flex.raw); i++)
+		txd[desc_to_clean_to].qw1.flex.raw[i] = 0;
+
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* TX function */
+uint16_t
+idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts)
+{
+	volatile struct idpf_flex_tx_desc *txd;
+	volatile struct idpf_flex_tx_desc *txr;
+	union idpf_tx_offload tx_offload = {0};
+	struct idpf_tx_entry *txe, *txn;
+	struct idpf_tx_entry *sw_ring;
+	struct idpf_tx_queue *txq;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint64_t buf_dma_addr;
+	uint64_t ol_flags;
+	uint16_t tx_last;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint16_t td_cmd;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint16_t slen;
+
+	nb_tx = 0;
+	txq = tx_queue;
+
+	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
+		return nb_tx;
+
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_free < txq->free_thresh)
+		(void)idpf_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		td_cmd = 0;
+
+		tx_pkt = *tx_pkts++;
+		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = idpf_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals the number of segments of that
+		 * packet, plus 1 context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+		       " tx_first=%u tx_last=%u",
+		       txq->port_id, txq->queue_id, tx_id, tx_last);
+
+		if (nb_used > txq->nb_free) {
+			if (idpf_xmit_cleanup(txq) != 0) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->rs_thresh)) {
+				while (nb_used > txq->nb_free) {
+					if (idpf_xmit_cleanup(txq) != 0) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		if (nb_ctx != 0) {
+			/* Setup TX context descriptor if required */
+			volatile union idpf_flex_tx_ctx_desc *ctx_txd =
+				(volatile union idpf_flex_tx_ctx_desc *)
+				&txr[tx_id];
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf != NULL) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			/* TSO enabled */
+			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+							ctx_txd);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf != NULL)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->qw1.buf_size = slen;
+			txd->qw1.cmd_dtype = rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_DATA <<
+							      IDPF_FLEX_TXD_QW1_DTYPE_S);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* The last packet data descriptor needs End Of Packet (EOP) */
+		td_cmd |= IDPF_TX_FLEX_DESC_CMD_EOP;
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+		if (txq->nb_used >= txq->rs_thresh) {
+			TX_LOG(DEBUG, "Setting RS bit on TXD id="
+			       "%4u (port=%d queue=%d)",
+			       tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= IDPF_TX_FLEX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_used = 0;
+		}
+
+		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+			td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
+
+		txd->qw1.cmd_dtype |= rte_cpu_to_le_16(td_cmd << IDPF_FLEX_TXD_QW1_CMD_S);
+	}
+
+end_of_tx:
+	rte_wmb();
+
+	TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+	       txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+/* TX prep function */
+uint16_t
+idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	       uint16_t nb_pkts)
+{
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+	int ret;
+#endif
+	int i;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		/* Check condition for nb_segs > IDPF_TX_MAX_MTU_SEG. */
+		if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0) {
+			if (m->nb_segs > IDPF_TX_MAX_MTU_SEG) {
+				rte_errno = EINVAL;
+				return i;
+			}
+		} else if ((m->tso_segsz < IDPF_MIN_TSO_MSS) ||
+			   (m->tso_segsz > IDPF_MAX_TSO_MSS) ||
+			   (m->pkt_len > IDPF_MAX_TSO_FRAME_SIZE)) {
+			/* MSS outside the range is considered malicious */
+			rte_errno = EINVAL;
+			return i;
+		}
+
+		if ((ol_flags & IDPF_TX_OFFLOAD_NOTSUP_MASK) != 0) {
+			rte_errno = ENOTSUP;
+			return i;
+		}
+
+		if (m->pkt_len < IDPF_MIN_FRAME_SIZE) {
+			rte_errno = EINVAL;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+#endif
+	}
+
+	return i;
+}
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index c5bb7d48af..827f791505 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -27,8 +27,63 @@
 #define IDPF_TX_OFFLOAD_MULTI_SEGS       RTE_BIT64(15)
 #define IDPF_TX_OFFLOAD_MBUF_FAST_FREE   RTE_BIT64(16)
 
+#define IDPF_TX_MAX_MTU_SEG	10
+
+#define IDPF_MIN_TSO_MSS	88
+#define IDPF_MAX_TSO_MSS	9728
+#define IDPF_MAX_TSO_FRAME_SIZE	262143
+#define IDPF_TX_MAX_MTU_SEG     10
+
+#define IDPF_TX_CKSUM_OFFLOAD_MASK (		\
+		RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_L4_MASK |		\
+		RTE_MBUF_F_TX_TCP_SEG)
+
+#define IDPF_TX_OFFLOAD_MASK (			\
+		IDPF_TX_CKSUM_OFFLOAD_MASK |	\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_IPV6)
+
+#define IDPF_TX_OFFLOAD_NOTSUP_MASK \
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
+
+/* MTS */
+#define GLTSYN_CMD_SYNC_0_0	(PF_TIMESYNC_BASE + 0x0)
+#define PF_GLTSYN_SHTIME_0_0	(PF_TIMESYNC_BASE + 0x4)
+#define PF_GLTSYN_SHTIME_L_0	(PF_TIMESYNC_BASE + 0x8)
+#define PF_GLTSYN_SHTIME_H_0	(PF_TIMESYNC_BASE + 0xC)
+#define GLTSYN_ART_L_0		(PF_TIMESYNC_BASE + 0x10)
+#define GLTSYN_ART_H_0		(PF_TIMESYNC_BASE + 0x14)
+#define PF_GLTSYN_SHTIME_0_1	(PF_TIMESYNC_BASE + 0x24)
+#define PF_GLTSYN_SHTIME_L_1	(PF_TIMESYNC_BASE + 0x28)
+#define PF_GLTSYN_SHTIME_H_1	(PF_TIMESYNC_BASE + 0x2C)
+#define PF_GLTSYN_SHTIME_0_2	(PF_TIMESYNC_BASE + 0x44)
+#define PF_GLTSYN_SHTIME_L_2	(PF_TIMESYNC_BASE + 0x48)
+#define PF_GLTSYN_SHTIME_H_2	(PF_TIMESYNC_BASE + 0x4C)
+#define PF_GLTSYN_SHTIME_0_3	(PF_TIMESYNC_BASE + 0x64)
+#define PF_GLTSYN_SHTIME_L_3	(PF_TIMESYNC_BASE + 0x68)
+#define PF_GLTSYN_SHTIME_H_3	(PF_TIMESYNC_BASE + 0x6C)
+
+#define PF_TIMESYNC_BAR4_BASE	0x0E400000
+#define GLTSYN_ENA		(PF_TIMESYNC_BAR4_BASE + 0x90)
+#define GLTSYN_CMD		(PF_TIMESYNC_BAR4_BASE + 0x94)
+#define GLTSYC_TIME_L		(PF_TIMESYNC_BAR4_BASE + 0x104)
+#define GLTSYC_TIME_H		(PF_TIMESYNC_BAR4_BASE + 0x108)
+
+#define GLTSYN_CMD_SYNC_0_4	(PF_TIMESYNC_BAR4_BASE + 0x110)
+#define PF_GLTSYN_SHTIME_L_4	(PF_TIMESYNC_BAR4_BASE + 0x118)
+#define PF_GLTSYN_SHTIME_H_4	(PF_TIMESYNC_BAR4_BASE + 0x11C)
+#define GLTSYN_INCVAL_L		(PF_TIMESYNC_BAR4_BASE + 0x150)
+#define GLTSYN_INCVAL_H		(PF_TIMESYNC_BAR4_BASE + 0x154)
+#define GLTSYN_SHADJ_L		(PF_TIMESYNC_BAR4_BASE + 0x158)
+#define GLTSYN_SHADJ_H		(PF_TIMESYNC_BAR4_BASE + 0x15C)
+
+#define GLTSYN_CMD_SYNC_0_5	(PF_TIMESYNC_BAR4_BASE + 0x130)
+#define PF_GLTSYN_SHTIME_L_5	(PF_TIMESYNC_BAR4_BASE + 0x138)
+#define PF_GLTSYN_SHTIME_H_5	(PF_TIMESYNC_BAR4_BASE + 0x13C)
+
 struct idpf_rx_stats {
-	uint64_t mbuf_alloc_failed;
+	rte_atomic64_t mbuf_alloc_failed;
 };
 
 struct idpf_rx_queue {
@@ -126,6 +181,18 @@ struct idpf_tx_queue {
 	struct idpf_tx_queue *complq;
 };
 
+/* Offload features */
+union idpf_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		/* uint64_t unused : 24; */
+	};
+};
+
 struct idpf_rxq_ops {
 	void (*release_mbufs)(struct idpf_rx_queue *rxq);
 };
@@ -134,6 +201,9 @@ struct idpf_txq_ops {
 	void (*release_mbufs)(struct idpf_tx_queue *txq);
 };
 
+extern int idpf_timestamp_dynfield_offset;
+extern uint64_t idpf_timestamp_dynflag;
+
 __rte_internal
 int idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh);
 __rte_internal
@@ -162,8 +232,25 @@ void idpf_rx_queue_release(void *rxq);
 __rte_internal
 void idpf_tx_queue_release(void *txq);
 __rte_internal
+int idpf_register_ts_mbuf(struct idpf_rx_queue *rxq);
+__rte_internal
 int idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq);
+__rte_internal
+uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			       uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+				uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+				uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index aa6ebd7c6c..03aab598b4 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -12,6 +12,8 @@ INTERNAL {
 	idpf_config_rss;
 	idpf_create_vport_info_init;
 	idpf_execute_vc_cmd;
+	idpf_prep_pkts;
+	idpf_register_ts_mbuf;
 	idpf_release_rxq_mbufs;
 	idpf_release_txq_mbufs;
 	idpf_reset_single_rx_queue;
@@ -22,6 +24,10 @@ INTERNAL {
 	idpf_reset_split_tx_complq;
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
+	idpf_singleq_recv_pkts;
+	idpf_singleq_xmit_pkts;
+	idpf_splitq_recv_pkts;
+	idpf_splitq_xmit_pkts;
 	idpf_tx_queue_release;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 734e97ffc2..ee2dec7c7c 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -22,8 +22,6 @@ rte_spinlock_t idpf_adapter_lock;
 struct idpf_adapter_list idpf_adapter_list;
 bool idpf_adapter_list_init;
 
-uint64_t idpf_timestamp_dynflag;
-
 static const char * const idpf_valid_args[] = {
 	IDPF_TX_SINGLE_Q,
 	IDPF_RX_SINGLE_Q,
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 9b40aa4e56..d791d402fb 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -28,7 +28,6 @@
 
 #define IDPF_MIN_BUF_SIZE	1024
 #define IDPF_MAX_FRAME_SIZE	9728
-#define IDPF_MIN_FRAME_SIZE	14
 #define IDPF_DEFAULT_MTU	RTE_ETHER_MTU
 
 #define IDPF_NUM_MACADDR_MAX	64
@@ -78,9 +77,6 @@ struct idpf_adapter_ext {
 	uint16_t cur_vport_nb;
 
 	uint16_t used_vecs_num;
-
-	/* For PTP */
-	uint64_t time_hw;
 };
 
 TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
diff --git a/drivers/net/idpf/idpf_logs.h b/drivers/net/idpf/idpf_logs.h
index d5f778fefe..bf0774b8e4 100644
--- a/drivers/net/idpf/idpf_logs.h
+++ b/drivers/net/idpf/idpf_logs.h
@@ -29,28 +29,4 @@ extern int idpf_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
 
-#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
-#define PMD_RX_LOG(level, ...) \
-	RTE_LOG(level, \
-		PMD, \
-		RTE_FMT("%s(): " \
-			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
-			__func__, \
-			RTE_FMT_TAIL(__VA_ARGS__,)))
-#else
-#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_IDPF_DEBUG_TX
-#define PMD_TX_LOG(level, ...) \
-	RTE_LOG(level, \
-		PMD, \
-		RTE_FMT("%s(): " \
-			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
-			__func__, \
-			RTE_FMT_TAIL(__VA_ARGS__,)))
-#else
-#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
-#endif
-
 #endif /* _IDPF_LOGS_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index fb1814d893..1066789386 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -10,8 +10,6 @@
 #include "idpf_rxtx.h"
 #include "idpf_rxtx_vec_common.h"
 
-static int idpf_timestamp_dynfield_offset = -1;
-
 static uint64_t
 idpf_rx_offload_convert(uint64_t offload)
 {
@@ -501,23 +499,6 @@ idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return ret;
 }
 
-static int
-idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
-{
-	int err;
-	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0) {
-		/* Register mbuf field and flag for Rx timestamp */
-		err = rte_mbuf_dyn_rx_timestamp_register(&idpf_timestamp_dynfield_offset,
-							 &idpf_timestamp_dynflag);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR,
-				    "Cannot register mbuf field/flag for timestamp");
-			return -EINVAL;
-		}
-	}
-	return 0;
-}
-
 int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -537,7 +518,7 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	err = idpf_register_ts_mbuf(rxq);
 	if (err != 0) {
-		PMD_DRV_LOG(ERR, "fail to regidter timestamp mbuf %u",
+		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
 					rx_queue_id);
 		return -EIO;
 	}
@@ -762,922 +743,6 @@ idpf_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
-#define IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S				\
-	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) |     \
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) |     \
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) |    \
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
-
-static inline uint64_t
-idpf_splitq_rx_csum_offload(uint8_t err)
-{
-	uint64_t flags = 0;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S)) == 0))
-		return flags;
-
-	if (likely((err & IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S) == 0)) {
-		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
-			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
-		return flags;
-	}
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)) != 0))
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
-
-	return flags;
-}
-
-#define IDPF_RX_FLEX_DESC_ADV_HASH1_S  0
-#define IDPF_RX_FLEX_DESC_ADV_HASH2_S  16
-#define IDPF_RX_FLEX_DESC_ADV_HASH3_S  24
-
-static inline uint64_t
-idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
-			   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
-{
-	uint8_t status_err0_qw0;
-	uint64_t flags = 0;
-
-	status_err0_qw0 = rx_desc->status_err0_qw0;
-
-	if ((status_err0_qw0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) != 0) {
-		flags |= RTE_MBUF_F_RX_RSS_HASH;
-		mb->hash.rss = (rte_le_to_cpu_16(rx_desc->hash1) <<
-				IDPF_RX_FLEX_DESC_ADV_HASH1_S) |
-			((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
-			 IDPF_RX_FLEX_DESC_ADV_HASH2_S) |
-			((uint32_t)(rx_desc->hash3) <<
-			 IDPF_RX_FLEX_DESC_ADV_HASH3_S);
-	}
-
-	return flags;
-}
-
-static void
-idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
-{
-	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
-	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
-	uint16_t nb_refill = rx_bufq->rx_free_thresh;
-	uint16_t nb_desc = rx_bufq->nb_rx_desc;
-	uint16_t next_avail = rx_bufq->rx_tail;
-	struct rte_mbuf *nmb[rx_bufq->rx_free_thresh];
-	struct rte_eth_dev *dev;
-	uint64_t dma_addr;
-	uint16_t delta;
-	int i;
-
-	if (rx_bufq->nb_rx_hold < rx_bufq->rx_free_thresh)
-		return;
-
-	rx_buf_ring = rx_bufq->rx_ring;
-	delta = nb_desc - next_avail;
-	if (unlikely(delta < nb_refill)) {
-		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta) == 0)) {
-			for (i = 0; i < delta; i++) {
-				rx_buf_desc = &rx_buf_ring[next_avail + i];
-				rx_bufq->sw_ring[next_avail + i] = nmb[i];
-				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
-				rx_buf_desc->hdr_addr = 0;
-				rx_buf_desc->pkt_addr = dma_addr;
-			}
-			nb_refill -= delta;
-			next_avail = 0;
-			rx_bufq->nb_rx_hold -= delta;
-		} else {
-			dev = &rte_eth_devices[rx_bufq->port_id];
-			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-				   rx_bufq->port_id, rx_bufq->queue_id);
-			return;
-		}
-	}
-
-	if (nb_desc - next_avail >= nb_refill) {
-		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill) == 0)) {
-			for (i = 0; i < nb_refill; i++) {
-				rx_buf_desc = &rx_buf_ring[next_avail + i];
-				rx_bufq->sw_ring[next_avail + i] = nmb[i];
-				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
-				rx_buf_desc->hdr_addr = 0;
-				rx_buf_desc->pkt_addr = dma_addr;
-			}
-			next_avail += nb_refill;
-			rx_bufq->nb_rx_hold -= nb_refill;
-		} else {
-			dev = &rte_eth_devices[rx_bufq->port_id];
-			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-				   rx_bufq->port_id, rx_bufq->queue_id);
-		}
-	}
-
-	IDPF_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
-
-	rx_bufq->rx_tail = next_avail;
-}
-
-uint16_t
-idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		      uint16_t nb_pkts)
-{
-	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
-	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
-	uint16_t pktlen_gen_bufq_id;
-	struct idpf_rx_queue *rxq;
-	const uint32_t *ptype_tbl;
-	uint8_t status_err0_qw1;
-	struct idpf_adapter_ext *ad;
-	struct rte_mbuf *rxm;
-	uint16_t rx_id_bufq1;
-	uint16_t rx_id_bufq2;
-	uint64_t pkt_flags;
-	uint16_t pkt_len;
-	uint16_t bufq_id;
-	uint16_t gen_id;
-	uint16_t rx_id;
-	uint16_t nb_rx;
-	uint64_t ts_ns;
-
-	nb_rx = 0;
-	rxq = rx_queue;
-	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
-
-	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
-		return nb_rx;
-
-	rx_id = rxq->rx_tail;
-	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
-	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
-	rx_desc_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
-
-	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
-		rxq->hw_register_set = 1;
-
-	while (nb_rx < nb_pkts) {
-		rx_desc = &rx_desc_ring[rx_id];
-
-		pktlen_gen_bufq_id =
-			rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
-		gen_id = (pktlen_gen_bufq_id &
-			  VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
-			VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
-		if (gen_id != rxq->expected_gen_id)
-			break;
-
-		pkt_len = (pktlen_gen_bufq_id &
-			   VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
-			VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
-		if (pkt_len == 0)
-			PMD_RX_LOG(ERR, "Packet length is 0");
-
-		rx_id++;
-		if (unlikely(rx_id == rxq->nb_rx_desc)) {
-			rx_id = 0;
-			rxq->expected_gen_id ^= 1;
-		}
-
-		bufq_id = (pktlen_gen_bufq_id &
-			   VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M) >>
-			VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S;
-		if (bufq_id == 0) {
-			rxm = rxq->bufq1->sw_ring[rx_id_bufq1];
-			rx_id_bufq1++;
-			if (unlikely(rx_id_bufq1 == rxq->bufq1->nb_rx_desc))
-				rx_id_bufq1 = 0;
-			rxq->bufq1->nb_rx_hold++;
-		} else {
-			rxm = rxq->bufq2->sw_ring[rx_id_bufq2];
-			rx_id_bufq2++;
-			if (unlikely(rx_id_bufq2 == rxq->bufq2->nb_rx_desc))
-				rx_id_bufq2 = 0;
-			rxq->bufq2->nb_rx_hold++;
-		}
-
-		rxm->pkt_len = pkt_len;
-		rxm->data_len = pkt_len;
-		rxm->data_off = RTE_PKTMBUF_HEADROOM;
-		rxm->next = NULL;
-		rxm->nb_segs = 1;
-		rxm->port = rxq->port_id;
-		rxm->ol_flags = 0;
-		rxm->packet_type =
-			ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
-				   VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
-				  VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
-
-		status_err0_qw1 = rx_desc->status_err0_qw1;
-		pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
-		pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
-		if (idpf_timestamp_dynflag > 0 &&
-		    (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
-			/* timestamp */
-			ts_ns = idpf_tstamp_convert_32b_64b(ad,
-				rxq->hw_register_set,
-				rte_le_to_cpu_32(rx_desc->ts_high));
-			rxq->hw_register_set = 0;
-			*RTE_MBUF_DYNFIELD(rxm,
-					   idpf_timestamp_dynfield_offset,
-					   rte_mbuf_timestamp_t *) = ts_ns;
-			rxm->ol_flags |= idpf_timestamp_dynflag;
-		}
-
-		rxm->ol_flags |= pkt_flags;
-
-		rx_pkts[nb_rx++] = rxm;
-	}
-
-	if (nb_rx > 0) {
-		rxq->rx_tail = rx_id;
-		if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
-			rxq->bufq1->rx_next_avail = rx_id_bufq1;
-		if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
-			rxq->bufq2->rx_next_avail = rx_id_bufq2;
-
-		idpf_split_rx_bufq_refill(rxq->bufq1);
-		idpf_split_rx_bufq_refill(rxq->bufq2);
-	}
-
-	return nb_rx;
-}
-
-static inline void
-idpf_split_tx_free(struct idpf_tx_queue *cq)
-{
-	volatile struct idpf_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
-	volatile struct idpf_splitq_tx_compl_desc *txd;
-	uint16_t next = cq->tx_tail;
-	struct idpf_tx_entry *txe;
-	struct idpf_tx_queue *txq;
-	uint16_t gen, qid, q_head;
-	uint16_t nb_desc_clean;
-	uint8_t ctype;
-
-	txd = &compl_ring[next];
-	gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
-		IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S;
-	if (gen != cq->expected_gen_id)
-		return;
-
-	ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
-		IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> IDPF_TXD_COMPLQ_COMPL_TYPE_S;
-	qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
-		IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S;
-	q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
-	txq = cq->txqs[qid - cq->tx_start_qid];
-
-	switch (ctype) {
-	case IDPF_TXD_COMPLT_RE:
-		/* clean to q_head which indicates be fetched txq desc id + 1.
-		 * TODO: need to refine and remove the if condition.
-		 */
-		if (unlikely(q_head % 32)) {
-			PMD_DRV_LOG(ERR, "unexpected desc (head = %u) completion.",
-						q_head);
-			return;
-		}
-		if (txq->last_desc_cleaned > q_head)
-			nb_desc_clean = (txq->nb_tx_desc - txq->last_desc_cleaned) +
-				q_head;
-		else
-			nb_desc_clean = q_head - txq->last_desc_cleaned;
-		txq->nb_free += nb_desc_clean;
-		txq->last_desc_cleaned = q_head;
-		break;
-	case IDPF_TXD_COMPLT_RS:
-		/* q_head indicates sw_id when ctype is 2 */
-		txe = &txq->sw_ring[q_head];
-		if (txe->mbuf != NULL) {
-			rte_pktmbuf_free_seg(txe->mbuf);
-			txe->mbuf = NULL;
-		}
-		break;
-	default:
-		PMD_DRV_LOG(ERR, "unknown completion type.");
-		return;
-	}
-
-	if (++next == cq->nb_tx_desc) {
-		next = 0;
-		cq->expected_gen_id ^= 1;
-	}
-
-	cq->tx_tail = next;
-}
-
-/* Check if the context descriptor is needed for TX offloading */
-static inline uint16_t
-idpf_calc_context_desc(uint64_t flags)
-{
-	if ((flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
-		return 1;
-
-	return 0;
-}
-
-/* set TSO context descriptor
- */
-static inline void
-idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
-			union idpf_tx_offload tx_offload,
-			volatile union idpf_flex_tx_ctx_desc *ctx_desc)
-{
-	uint16_t cmd_dtype;
-	uint32_t tso_len;
-	uint8_t hdr_len;
-
-	if (tx_offload.l4_len == 0) {
-		PMD_TX_LOG(DEBUG, "L4 length set to 0");
-		return;
-	}
-
-	hdr_len = tx_offload.l2_len +
-		tx_offload.l3_len +
-		tx_offload.l4_len;
-	cmd_dtype = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX |
-		IDPF_TX_FLEX_CTX_DESC_CMD_TSO;
-	tso_len = mbuf->pkt_len - hdr_len;
-
-	ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
-	ctx_desc->tso.qw0.hdr_len = hdr_len;
-	ctx_desc->tso.qw0.mss_rt =
-		rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
-				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
-	ctx_desc->tso.qw0.flex_tlen =
-		rte_cpu_to_le_32(tso_len &
-				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
-}
-
-uint16_t
-idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		      uint16_t nb_pkts)
-{
-	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
-	volatile struct idpf_flex_tx_sched_desc *txr;
-	volatile struct idpf_flex_tx_sched_desc *txd;
-	struct idpf_tx_entry *sw_ring;
-	union idpf_tx_offload tx_offload = {0};
-	struct idpf_tx_entry *txe, *txn;
-	uint16_t nb_used, tx_id, sw_id;
-	struct rte_mbuf *tx_pkt;
-	uint16_t nb_to_clean;
-	uint16_t nb_tx = 0;
-	uint64_t ol_flags;
-	uint16_t nb_ctx;
-
-	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
-		return nb_tx;
-
-	txr = txq->desc_ring;
-	sw_ring = txq->sw_ring;
-	tx_id = txq->tx_tail;
-	sw_id = txq->sw_tail;
-	txe = &sw_ring[sw_id];
-
-	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
-		tx_pkt = tx_pkts[nb_tx];
-
-		if (txq->nb_free <= txq->free_thresh) {
-			/* TODO: Need to refine
-			 * 1. free and clean: Better to decide a clean destination instead of
-			 * loop times. And don't free mbuf when RS got immediately, free when
-			 * transmit or according to the clean destination.
-			 * Now, just ignore the RE write back, free mbuf when get RS
-			 * 2. out-of-order rewrite back haven't be supported, SW head and HW head
-			 * need to be separated.
-			 **/
-			nb_to_clean = 2 * txq->rs_thresh;
-			while (nb_to_clean--)
-				idpf_split_tx_free(txq->complq);
-		}
-
-		if (txq->nb_free < tx_pkt->nb_segs)
-			break;
-
-		ol_flags = tx_pkt->ol_flags;
-		tx_offload.l2_len = tx_pkt->l2_len;
-		tx_offload.l3_len = tx_pkt->l3_len;
-		tx_offload.l4_len = tx_pkt->l4_len;
-		tx_offload.tso_segsz = tx_pkt->tso_segsz;
-		/* Calculate the number of context descriptors needed. */
-		nb_ctx = idpf_calc_context_desc(ol_flags);
-		nb_used = tx_pkt->nb_segs + nb_ctx;
-
-		/* context descriptor */
-		if (nb_ctx != 0) {
-			volatile union idpf_flex_tx_ctx_desc *ctx_desc =
-			(volatile union idpf_flex_tx_ctx_desc *)&txr[tx_id];
-
-			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
-				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
-							ctx_desc);
-
-			tx_id++;
-			if (tx_id == txq->nb_tx_desc)
-				tx_id = 0;
-		}
-
-		do {
-			txd = &txr[tx_id];
-			txn = &sw_ring[txe->next_id];
-			txe->mbuf = tx_pkt;
-
-			/* Setup TX descriptor */
-			txd->buf_addr =
-				rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
-			txd->qw1.cmd_dtype =
-				rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE);
-			txd->qw1.rxr_bufsize = tx_pkt->data_len;
-			txd->qw1.compl_tag = sw_id;
-			tx_id++;
-			if (tx_id == txq->nb_tx_desc)
-				tx_id = 0;
-			sw_id = txe->next_id;
-			txe = txn;
-			tx_pkt = tx_pkt->next;
-		} while (tx_pkt);
-
-		/* fill the last descriptor with End of Packet (EOP) bit */
-		txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_EOP;
-
-		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
-			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
-		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
-		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
-
-		if (txq->nb_used >= 32) {
-			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
-			/* Update txq RE bit counters */
-			txq->nb_used = 0;
-		}
-	}
-
-	/* update the tail pointer if any packets were processed */
-	if (likely(nb_tx > 0)) {
-		IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
-		txq->tx_tail = tx_id;
-		txq->sw_tail = sw_id;
-	}
-
-	return nb_tx;
-}
-
-#define IDPF_RX_FLEX_DESC_STATUS0_XSUM_S				\
-	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) |		\
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) |		\
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) |	\
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S))
-
-/* Translate the rx descriptor status and error fields to pkt flags */
-static inline uint64_t
-idpf_rxd_to_pkt_flags(uint16_t status_error)
-{
-	uint64_t flags = 0;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S)) == 0))
-		return flags;
-
-	if (likely((status_error & IDPF_RX_FLEX_DESC_STATUS0_XSUM_S) == 0)) {
-		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
-			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
-		return flags;
-	}
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)) != 0))
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
-
-	return flags;
-}
-
-static inline void
-idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
-		    uint16_t rx_id)
-{
-	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
-
-	if (nb_hold > rxq->rx_free_thresh) {
-		PMD_RX_LOG(DEBUG,
-			   "port_id=%u queue_id=%u rx_tail=%u nb_hold=%u",
-			   rxq->port_id, rxq->queue_id, rx_id, nb_hold);
-		rx_id = (uint16_t)((rx_id == 0) ?
-				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
-		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
-		nb_hold = 0;
-	}
-	rxq->nb_rx_hold = nb_hold;
-}
-
-uint16_t
-idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		       uint16_t nb_pkts)
-{
-	volatile union virtchnl2_rx_desc *rx_ring;
-	volatile union virtchnl2_rx_desc *rxdp;
-	union virtchnl2_rx_desc rxd;
-	struct idpf_rx_queue *rxq;
-	const uint32_t *ptype_tbl;
-	uint16_t rx_id, nb_hold;
-	struct rte_eth_dev *dev;
-	struct idpf_adapter_ext *ad;
-	uint16_t rx_packet_len;
-	struct rte_mbuf *rxm;
-	struct rte_mbuf *nmb;
-	uint16_t rx_status0;
-	uint64_t pkt_flags;
-	uint64_t dma_addr;
-	uint64_t ts_ns;
-	uint16_t nb_rx;
-
-	nb_rx = 0;
-	nb_hold = 0;
-	rxq = rx_queue;
-
-	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
-
-	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
-		return nb_rx;
-
-	rx_id = rxq->rx_tail;
-	rx_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
-
-	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
-		rxq->hw_register_set = 1;
-
-	while (nb_rx < nb_pkts) {
-		rxdp = &rx_ring[rx_id];
-		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
-
-		/* Check the DD bit first */
-		if ((rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)) == 0)
-			break;
-
-		nmb = rte_mbuf_raw_alloc(rxq->mp);
-		if (unlikely(nmb == NULL)) {
-			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed++;
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
-				   "queue_id=%u", rxq->port_id, rxq->queue_id);
-			break;
-		}
-		rxd = *rxdp; /* copy descriptor in ring to temp variable*/
-
-		nb_hold++;
-		rxm = rxq->sw_ring[rx_id];
-		rxq->sw_ring[rx_id] = nmb;
-		rx_id++;
-		if (unlikely(rx_id == rxq->nb_rx_desc))
-			rx_id = 0;
-
-		/* Prefetch next mbuf */
-		rte_prefetch0(rxq->sw_ring[rx_id]);
-
-		/* When next RX descriptor is on a cache line boundary,
-		 * prefetch the next 4 RX descriptors and next 8 pointers
-		 * to mbufs.
-		 */
-		if ((rx_id & 0x3) == 0) {
-			rte_prefetch0(&rx_ring[rx_id]);
-			rte_prefetch0(rxq->sw_ring[rx_id]);
-		}
-		dma_addr =
-			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
-		rxdp->read.hdr_addr = 0;
-		rxdp->read.pkt_addr = dma_addr;
-
-		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
-				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
-
-		rxm->data_off = RTE_PKTMBUF_HEADROOM;
-		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
-		rxm->nb_segs = 1;
-		rxm->next = NULL;
-		rxm->pkt_len = rx_packet_len;
-		rxm->data_len = rx_packet_len;
-		rxm->port = rxq->port_id;
-		rxm->ol_flags = 0;
-		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
-		rxm->packet_type =
-			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
-					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
-
-		rxm->ol_flags |= pkt_flags;
-
-		if (idpf_timestamp_dynflag > 0 &&
-		   (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0) {
-			/* timestamp */
-			ts_ns = idpf_tstamp_convert_32b_64b(ad,
-				rxq->hw_register_set,
-				rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
-			rxq->hw_register_set = 0;
-			*RTE_MBUF_DYNFIELD(rxm,
-					   idpf_timestamp_dynfield_offset,
-					   rte_mbuf_timestamp_t *) = ts_ns;
-			rxm->ol_flags |= idpf_timestamp_dynflag;
-		}
-
-		rx_pkts[nb_rx++] = rxm;
-	}
-	rxq->rx_tail = rx_id;
-
-	idpf_update_rx_tail(rxq, nb_hold, rx_id);
-
-	return nb_rx;
-}
-
-static inline int
-idpf_xmit_cleanup(struct idpf_tx_queue *txq)
-{
-	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
-	struct idpf_tx_entry *sw_ring = txq->sw_ring;
-	uint16_t nb_tx_desc = txq->nb_tx_desc;
-	uint16_t desc_to_clean_to;
-	uint16_t nb_tx_to_clean;
-	uint16_t i;
-
-	volatile struct idpf_flex_tx_desc *txd = txq->tx_ring;
-
-	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
-	if (desc_to_clean_to >= nb_tx_desc)
-		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
-
-	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
-	/* In the writeback Tx descriptor, the only significant field is the 4-bit DTYPE */
-	if ((txd[desc_to_clean_to].qw1.cmd_dtype &
-			rte_cpu_to_le_16(IDPF_TXD_QW1_DTYPE_M)) !=
-			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE)) {
-		PMD_TX_LOG(DEBUG, "TX descriptor %4u is not done "
-			   "(port=%d queue=%d)", desc_to_clean_to,
-			   txq->port_id, txq->queue_id);
-		return -1;
-	}
-
-	if (last_desc_cleaned > desc_to_clean_to)
-		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
-					    desc_to_clean_to);
-	else
-		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
-					last_desc_cleaned);
-
-	txd[desc_to_clean_to].qw1.cmd_dtype = 0;
-	txd[desc_to_clean_to].qw1.buf_size = 0;
-	for (i = 0; i < RTE_DIM(txd[desc_to_clean_to].qw1.flex.raw); i++)
-		txd[desc_to_clean_to].qw1.flex.raw[i] = 0;
-
-	txq->last_desc_cleaned = desc_to_clean_to;
-	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
-
-	return 0;
-}
-
-/* TX function */
-uint16_t
-idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		       uint16_t nb_pkts)
-{
-	volatile struct idpf_flex_tx_desc *txd;
-	volatile struct idpf_flex_tx_desc *txr;
-	union idpf_tx_offload tx_offload = {0};
-	struct idpf_tx_entry *txe, *txn;
-	struct idpf_tx_entry *sw_ring;
-	struct idpf_tx_queue *txq;
-	struct rte_mbuf *tx_pkt;
-	struct rte_mbuf *m_seg;
-	uint64_t buf_dma_addr;
-	uint64_t ol_flags;
-	uint16_t tx_last;
-	uint16_t nb_used;
-	uint16_t nb_ctx;
-	uint16_t td_cmd;
-	uint16_t tx_id;
-	uint16_t nb_tx;
-	uint16_t slen;
-
-	nb_tx = 0;
-	txq = tx_queue;
-
-	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
-		return nb_tx;
-
-	sw_ring = txq->sw_ring;
-	txr = txq->tx_ring;
-	tx_id = txq->tx_tail;
-	txe = &sw_ring[tx_id];
-
-	/* Check if the descriptor ring needs to be cleaned. */
-	if (txq->nb_free < txq->free_thresh)
-		(void)idpf_xmit_cleanup(txq);
-
-	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
-		td_cmd = 0;
-
-		tx_pkt = *tx_pkts++;
-		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
-
-		ol_flags = tx_pkt->ol_flags;
-		tx_offload.l2_len = tx_pkt->l2_len;
-		tx_offload.l3_len = tx_pkt->l3_len;
-		tx_offload.l4_len = tx_pkt->l4_len;
-		tx_offload.tso_segsz = tx_pkt->tso_segsz;
-		/* Calculate the number of context descriptors needed. */
-		nb_ctx = idpf_calc_context_desc(ol_flags);
-
-		/* The number of descriptors that must be allocated for
-		 * a packet equals to the number of the segments of that
-		 * packet plus 1 context descriptor if needed.
-		 */
-		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
-		tx_last = (uint16_t)(tx_id + nb_used - 1);
-
-		/* Circular ring */
-		if (tx_last >= txq->nb_tx_desc)
-			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
-
-		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
-			   " tx_first=%u tx_last=%u",
-			   txq->port_id, txq->queue_id, tx_id, tx_last);
-
-		if (nb_used > txq->nb_free) {
-			if (idpf_xmit_cleanup(txq) != 0) {
-				if (nb_tx == 0)
-					return 0;
-				goto end_of_tx;
-			}
-			if (unlikely(nb_used > txq->rs_thresh)) {
-				while (nb_used > txq->nb_free) {
-					if (idpf_xmit_cleanup(txq) != 0) {
-						if (nb_tx == 0)
-							return 0;
-						goto end_of_tx;
-					}
-				}
-			}
-		}
-
-		if (nb_ctx != 0) {
-			/* Setup TX context descriptor if required */
-			volatile union idpf_flex_tx_ctx_desc *ctx_txd =
-				(volatile union idpf_flex_tx_ctx_desc *)
-							&txr[tx_id];
-
-			txn = &sw_ring[txe->next_id];
-			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
-			if (txe->mbuf != NULL) {
-				rte_pktmbuf_free_seg(txe->mbuf);
-				txe->mbuf = NULL;
-			}
-
-			/* TSO enabled */
-			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
-				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
-							ctx_txd);
-
-			txe->last_id = tx_last;
-			tx_id = txe->next_id;
-			txe = txn;
-		}
-
-		m_seg = tx_pkt;
-		do {
-			txd = &txr[tx_id];
-			txn = &sw_ring[txe->next_id];
-
-			if (txe->mbuf != NULL)
-				rte_pktmbuf_free_seg(txe->mbuf);
-			txe->mbuf = m_seg;
-
-			/* Setup TX Descriptor */
-			slen = m_seg->data_len;
-			buf_dma_addr = rte_mbuf_data_iova(m_seg);
-			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
-			txd->qw1.buf_size = slen;
-			txd->qw1.cmd_dtype = rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_DATA <<
-							      IDPF_FLEX_TXD_QW1_DTYPE_S);
-
-			txe->last_id = tx_last;
-			tx_id = txe->next_id;
-			txe = txn;
-			m_seg = m_seg->next;
-		} while (m_seg);
-
-		/* The last packet data descriptor needs End Of Packet (EOP) */
-		td_cmd |= IDPF_TX_FLEX_DESC_CMD_EOP;
-		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
-		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
-
-		if (txq->nb_used >= txq->rs_thresh) {
-			PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
-				   "%4u (port=%d queue=%d)",
-				   tx_last, txq->port_id, txq->queue_id);
-
-			td_cmd |= IDPF_TX_FLEX_DESC_CMD_RS;
-
-			/* Update txq RS bit counters */
-			txq->nb_used = 0;
-		}
-
-		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
-			td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
-
-		txd->qw1.cmd_dtype |= rte_cpu_to_le_16(td_cmd << IDPF_FLEX_TXD_QW1_CMD_S);
-	}
-
-end_of_tx:
-	rte_wmb();
-
-	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
-		   txq->port_id, txq->queue_id, tx_id, nb_tx);
-
-	IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
-	txq->tx_tail = tx_id;
-
-	return nb_tx;
-}
-
-/* TX prep functions */
-uint16_t
-idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
-	       uint16_t nb_pkts)
-{
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-	int ret;
-#endif
-	int i;
-	uint64_t ol_flags;
-	struct rte_mbuf *m;
-
-	for (i = 0; i < nb_pkts; i++) {
-		m = tx_pkts[i];
-		ol_flags = m->ol_flags;
-
-		/* Check condition for nb_segs > IDPF_TX_MAX_MTU_SEG. */
-		if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0) {
-			if (m->nb_segs > IDPF_TX_MAX_MTU_SEG) {
-				rte_errno = EINVAL;
-				return i;
-			}
-		} else if ((m->tso_segsz < IDPF_MIN_TSO_MSS) ||
-			   (m->tso_segsz > IDPF_MAX_TSO_MSS) ||
-			   (m->pkt_len > IDPF_MAX_TSO_FRAME_SIZE)) {
-			/* MSS outside the range are considered malicious */
-			rte_errno = EINVAL;
-			return i;
-		}
-
-		if ((ol_flags & IDPF_TX_OFFLOAD_NOTSUP_MASK) != 0) {
-			rte_errno = ENOTSUP;
-			return i;
-		}
-
-		if (m->pkt_len < IDPF_MIN_FRAME_SIZE) {
-			rte_errno = EINVAL;
-			return i;
-		}
-
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-		ret = rte_validate_tx_offload(m);
-		if (ret != 0) {
-			rte_errno = -ret;
-			return i;
-		}
-#endif
-	}
-
-	return i;
-}
-
 static void __rte_cold
 release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
 {
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 4efbf10295..eab363c3e7 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -8,41 +8,6 @@
 #include <idpf_common_rxtx.h>
 #include "idpf_ethdev.h"
 
-/* MTS */
-#define GLTSYN_CMD_SYNC_0_0	(PF_TIMESYNC_BASE + 0x0)
-#define PF_GLTSYN_SHTIME_0_0	(PF_TIMESYNC_BASE + 0x4)
-#define PF_GLTSYN_SHTIME_L_0	(PF_TIMESYNC_BASE + 0x8)
-#define PF_GLTSYN_SHTIME_H_0	(PF_TIMESYNC_BASE + 0xC)
-#define GLTSYN_ART_L_0		(PF_TIMESYNC_BASE + 0x10)
-#define GLTSYN_ART_H_0		(PF_TIMESYNC_BASE + 0x14)
-#define PF_GLTSYN_SHTIME_0_1	(PF_TIMESYNC_BASE + 0x24)
-#define PF_GLTSYN_SHTIME_L_1	(PF_TIMESYNC_BASE + 0x28)
-#define PF_GLTSYN_SHTIME_H_1	(PF_TIMESYNC_BASE + 0x2C)
-#define PF_GLTSYN_SHTIME_0_2	(PF_TIMESYNC_BASE + 0x44)
-#define PF_GLTSYN_SHTIME_L_2	(PF_TIMESYNC_BASE + 0x48)
-#define PF_GLTSYN_SHTIME_H_2	(PF_TIMESYNC_BASE + 0x4C)
-#define PF_GLTSYN_SHTIME_0_3	(PF_TIMESYNC_BASE + 0x64)
-#define PF_GLTSYN_SHTIME_L_3	(PF_TIMESYNC_BASE + 0x68)
-#define PF_GLTSYN_SHTIME_H_3	(PF_TIMESYNC_BASE + 0x6C)
-
-#define PF_TIMESYNC_BAR4_BASE	0x0E400000
-#define GLTSYN_ENA		(PF_TIMESYNC_BAR4_BASE + 0x90)
-#define GLTSYN_CMD		(PF_TIMESYNC_BAR4_BASE + 0x94)
-#define GLTSYC_TIME_L		(PF_TIMESYNC_BAR4_BASE + 0x104)
-#define GLTSYC_TIME_H		(PF_TIMESYNC_BAR4_BASE + 0x108)
-
-#define GLTSYN_CMD_SYNC_0_4	(PF_TIMESYNC_BAR4_BASE + 0x110)
-#define PF_GLTSYN_SHTIME_L_4	(PF_TIMESYNC_BAR4_BASE + 0x118)
-#define PF_GLTSYN_SHTIME_H_4	(PF_TIMESYNC_BAR4_BASE + 0x11C)
-#define GLTSYN_INCVAL_L		(PF_TIMESYNC_BAR4_BASE + 0x150)
-#define GLTSYN_INCVAL_H		(PF_TIMESYNC_BAR4_BASE + 0x154)
-#define GLTSYN_SHADJ_L		(PF_TIMESYNC_BAR4_BASE + 0x158)
-#define GLTSYN_SHADJ_H		(PF_TIMESYNC_BAR4_BASE + 0x15C)
-
-#define GLTSYN_CMD_SYNC_0_5	(PF_TIMESYNC_BAR4_BASE + 0x130)
-#define PF_GLTSYN_SHTIME_L_5	(PF_TIMESYNC_BAR4_BASE + 0x138)
-#define PF_GLTSYN_SHTIME_H_5	(PF_TIMESYNC_BAR4_BASE + 0x13C)
-
 /* In QLEN must be whole number of 32 descriptors. */
 #define IDPF_ALIGN_RING_DESC	32
 #define IDPF_MIN_RING_DESC	32
@@ -62,44 +27,10 @@
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
-#define IDPF_TX_MAX_MTU_SEG	10
-
-#define IDPF_MIN_TSO_MSS	88
-#define IDPF_MAX_TSO_MSS	9728
-#define IDPF_MAX_TSO_FRAME_SIZE	262143
-#define IDPF_TX_MAX_MTU_SEG     10
-
-#define IDPF_TX_CKSUM_OFFLOAD_MASK (		\
-		RTE_MBUF_F_TX_IP_CKSUM |	\
-		RTE_MBUF_F_TX_L4_MASK |		\
-		RTE_MBUF_F_TX_TCP_SEG)
-
-#define IDPF_TX_OFFLOAD_MASK (			\
-		IDPF_TX_CKSUM_OFFLOAD_MASK |	\
-		RTE_MBUF_F_TX_IPV4 |		\
-		RTE_MBUF_F_TX_IPV6)
-
-#define IDPF_TX_OFFLOAD_NOTSUP_MASK \
-		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
-
-extern uint64_t idpf_timestamp_dynflag;
-
 struct idpf_tx_vec_entry {
 	struct rte_mbuf *mbuf;
 };
 
-/* Offload features */
-union idpf_tx_offload {
-	uint64_t data;
-	struct {
-		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
-		uint64_t l3_len:9; /* L3 (IP) Header Length. */
-		uint64_t l4_len:8; /* L4 Header Length. */
-		uint64_t tso_segsz:16; /* TCP TSO segment size */
-		/* uint64_t unused : 24; */
-	};
-};
-
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
@@ -118,77 +49,14 @@ int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
-uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-				uint16_t nb_pkts);
 uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 				       uint16_t nb_pkts);
-uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-			       uint16_t nb_pkts);
-uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-				uint16_t nb_pkts);
 uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 				       uint16_t nb_pkts);
-uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			       uint16_t nb_pkts);
-uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			uint16_t nb_pkts);
 
 void idpf_stop_queues(struct rte_eth_dev *dev);
 
 void idpf_set_rx_function(struct rte_eth_dev *dev);
 void idpf_set_tx_function(struct rte_eth_dev *dev);
 
-#define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
-/* Helper function to convert a 32b nanoseconds timestamp to 64b. */
-static inline uint64_t
-
-idpf_tstamp_convert_32b_64b(struct idpf_adapter_ext *ad, uint32_t flag,
-			    uint32_t in_timestamp)
-{
-#ifdef RTE_ARCH_X86_64
-	struct idpf_hw *hw = &ad->base.hw;
-	const uint64_t mask = 0xFFFFFFFF;
-	uint32_t hi, lo, lo2, delta;
-	uint64_t ns;
-
-	if (flag != 0) {
-		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
-		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_EXEC_CMD_M |
-			       PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
-		lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
-		hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
-		/*
-		 * On typical system, the delta between lo and lo2 is ~1000ns,
-		 * so 10000 seems a large-enough but not overly-big guard band.
-		 */
-		if (lo > (UINT32_MAX - IDPF_TIMESYNC_REG_WRAP_GUARD_BAND))
-			lo2 = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
-		else
-			lo2 = lo;
-
-		if (lo2 < lo) {
-			lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
-			hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
-		}
-
-		ad->time_hw = ((uint64_t)hi << 32) | lo;
-	}
-
-	delta = (in_timestamp - (uint32_t)(ad->time_hw & mask));
-	if (delta > (mask / 2)) {
-		delta = ((uint32_t)(ad->time_hw & mask) - in_timestamp);
-		ns = ad->time_hw - delta;
-	} else {
-		ns = ad->time_hw + delta;
-	}
-
-	return ns;
-#else /* !RTE_ARCH_X86_64 */
-	RTE_SET_USED(ad);
-	RTE_SET_USED(flag);
-	RTE_SET_USED(in_timestamp);
-	return 0;
-#endif /* RTE_ARCH_X86_64 */
-}
-
 #endif /* _IDPF_RXTX_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index 71a6c59823..ea949635e0 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -38,8 +38,8 @@ idpf_singleq_rearm_common(struct idpf_rx_queue *rxq)
 						dma_addr0);
 			}
 		}
-		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
-			IDPF_RXQ_REARM_THRESH;
+		rte_atomic64_add(&rxq->rx_stats.mbuf_alloc_failed,
+				 IDPF_RXQ_REARM_THRESH);
 		return;
 	}
 	struct rte_mbuf *mb0, *mb1, *mb2, *mb3;
@@ -168,8 +168,8 @@ idpf_singleq_rearm(struct idpf_rx_queue *rxq)
 							 dma_addr0);
 				}
 			}
-			rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
-					IDPF_RXQ_REARM_THRESH;
+			rte_atomic64_add(&rxq->rx_stats.mbuf_alloc_failed,
+					 IDPF_RXQ_REARM_THRESH);
 			return;
 		}
 	}
-- 
2.26.2

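For reference, the trickiest piece moved by this patch is the 32b-to-64b
timestamp extension in idpf_tstamp_convert_32b_64b(), removed above from
idpf_rxtx.h. The sketch below is a minimal standalone version of the same
arithmetic; it assumes time_hw is a recent 64-bit hardware time sample and
omits the register-read path:

	#include <stdint.h>

	/* Extend a 32-bit descriptor timestamp to 64 bits using a recent
	 * 64-bit HW time sample. A delta larger than half the 32-bit range
	 * means the timestamp lags behind the sample, so subtract instead.
	 */
	static inline uint64_t
	tstamp_32b_to_64b(uint64_t time_hw, uint32_t in_timestamp)
	{
		const uint64_t mask = 0xFFFFFFFF;
		uint32_t delta = in_timestamp - (uint32_t)(time_hw & mask);

		if (delta > (mask / 2)) {
			delta = (uint32_t)(time_hw & mask) - in_timestamp;
			return time_hw - delta;
		}
		return time_hw + delta;
	}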

^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v6 14/19] common/idpf: add vec queue setup
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
                         ` (12 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 13/19] common/idpf: add Rx and Tx data path beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 15/19] common/idpf: add avx512 for single queue model beilei.xing
                         ` (6 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move vector queue setup for single queue model to common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 57 ++++++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h |  2 +
 drivers/common/idpf/version.map        |  1 +
 drivers/net/idpf/idpf_rxtx.c           | 57 --------------------------
 drivers/net/idpf/idpf_rxtx.h           |  1 -
 5 files changed, 60 insertions(+), 58 deletions(-)

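The setup moved here caches a 64-bit "mbuf initializer": a template mbuf is
built once and the machine word covering its rearm_data fields is saved, so
the vector rearm path can later reset each mbuf header with a single 64-bit
store. A standalone sketch of the idea, using a simplified stand-in struct
rather than the real rte_mbuf layout:

	#include <stdint.h>
	#include <string.h>

	/* Simplified stand-in for the rte_mbuf rearm region: these four
	 * 16-bit fields share one naturally aligned 64-bit word.
	 */
	struct rearm_tmpl {
		uint16_t data_off;
		uint16_t refcnt;
		uint16_t nb_segs;
		uint16_t port;
	};

	static uint64_t
	make_mbuf_initializer(uint16_t headroom, uint16_t port_id)
	{
		struct rearm_tmpl t = {
			.data_off = headroom,
			.refcnt = 1,
			.nb_segs = 1,
			.port = port_id,
		};
		uint64_t init;

		/* Cache the whole rearm word; the hot path later does a
		 * single 64-bit store into each mbuf's rearm_data.
		 */
		memcpy(&init, &t, sizeof(init));
		return init;
	}
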
diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 459057f20e..bc95fef6bc 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1399,3 +1399,60 @@ idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	return i;
 }
+
+static void __rte_cold
+release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
+{
+	const uint16_t mask = rxq->nb_rx_desc - 1;
+	uint16_t i;
+
+	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	if (rxq->rxrearm_nb == 0) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i] != NULL)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	} else {
+		for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask) {
+			if (rxq->sw_ring[i] != NULL)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	}
+
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static const struct idpf_rxq_ops def_singleq_rx_ops_vec = {
+	.release_mbufs = release_rxq_mbufs_vec,
+};
+
+static inline int
+idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
+{
+	uintptr_t p;
+	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+	mb_def.nb_segs = 1;
+	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+	mb_def.port = rxq->port_id;
+	rte_mbuf_refcnt_set(&mb_def, 1);
+
+	/* prevent compiler reordering: rearm_data covers previous fields */
+	rte_compiler_barrier();
+	p = (uintptr_t)&mb_def.rearm_data;
+	rxq->mbuf_initializer = *(uint64_t *)p;
+	return 0;
+}
+
+int __rte_cold
+idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
+{
+	rxq->ops = &def_singleq_rx_ops_vec;
+	return idpf_singleq_rx_vec_setup_default(rxq);
+}
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 827f791505..74d6081638 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -252,5 +252,7 @@ uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 __rte_internal
 uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
+__rte_internal
+int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 03aab598b4..511705e5b0 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -25,6 +25,7 @@ INTERNAL {
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
 	idpf_singleq_recv_pkts;
+	idpf_singleq_rx_vec_setup;
 	idpf_singleq_xmit_pkts;
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 1066789386..c0c622d64b 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -743,63 +743,6 @@ idpf_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
-static void __rte_cold
-release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
-{
-	const uint16_t mask = rxq->nb_rx_desc - 1;
-	uint16_t i;
-
-	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
-		return;
-
-	/* free all mbufs that are valid in the ring */
-	if (rxq->rxrearm_nb == 0) {
-		for (i = 0; i < rxq->nb_rx_desc; i++) {
-			if (rxq->sw_ring[i] != NULL)
-				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-		}
-	} else {
-		for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask) {
-			if (rxq->sw_ring[i] != NULL)
-				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-		}
-	}
-
-	rxq->rxrearm_nb = rxq->nb_rx_desc;
-
-	/* set all entries to NULL */
-	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
-}
-
-static const struct idpf_rxq_ops def_singleq_rx_ops_vec = {
-	.release_mbufs = release_rxq_mbufs_vec,
-};
-
-static inline int
-idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
-{
-	uintptr_t p;
-	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
-
-	mb_def.nb_segs = 1;
-	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
-	mb_def.port = rxq->port_id;
-	rte_mbuf_refcnt_set(&mb_def, 1);
-
-	/* prevent compiler reordering: rearm_data covers previous fields */
-	rte_compiler_barrier();
-	p = (uintptr_t)&mb_def.rearm_data;
-	rxq->mbuf_initializer = *(uint64_t *)p;
-	return 0;
-}
-
-int __rte_cold
-idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
-{
-	rxq->ops = &def_singleq_rx_ops_vec;
-	return idpf_singleq_rx_vec_setup_default(rxq);
-}
-
 void
 idpf_set_rx_function(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index eab363c3e7..a985dc2cf5 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -44,7 +44,6 @@ void idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 int idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
-- 
2.26.2

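One detail worth noting in release_rxq_mbufs_vec() above: the partial-ring
walk advances with (i + 1) & mask, which is only correct when nb_rx_desc is
a power of two; the sketch below makes that assumption explicit (the
function name is illustrative):

	#include <rte_mbuf.h>

	/* Walk a ring from 'head' up to but not including 'stop',
	 * wrapping at the end. ring_size must be a power of two.
	 */
	static void
	free_ring_range(struct rte_mbuf **ring, uint16_t ring_size,
			uint16_t head, uint16_t stop)
	{
		const uint16_t mask = ring_size - 1;
		uint16_t i;

		for (i = head; i != stop; i = (i + 1) & mask) {
			if (ring[i] != NULL) {
				rte_pktmbuf_free_seg(ring[i]);
				ring[i] = NULL;
			}
		}
	}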

^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v6 15/19] common/idpf: add avx512 for single queue model
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
                         ` (13 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 14/19] common/idpf: add vec queue setup beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 16/19] common/idpf: refine API name for vport functions beilei.xing
                         ` (5 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move avx512 vector path for single queue to common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.h        | 20 +++++++++++++
 .../idpf/idpf_common_rxtx_avx512.c}           |  4 +--
 drivers/common/idpf/meson.build               | 30 +++++++++++++++++++
 drivers/common/idpf/version.map               |  3 ++
 drivers/net/idpf/idpf_rxtx.h                  | 13 --------
 drivers/net/idpf/meson.build                  | 17 -----------
 6 files changed, 55 insertions(+), 32 deletions(-)
 rename drivers/{net/idpf/idpf_rxtx_vec_avx512.c => common/idpf/idpf_common_rxtx_avx512.c} (99%)

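The meson change compiles the AVX512 object into the common library whenever
the target CPU or the compiler supports AVX512F/BW, and advertises this with
CC_AVX512_SUPPORT. A hedged sketch of how a consumer could pick the fast
path at runtime; example_select_rx_burst() is illustrative only, and the
real driver's selection logic also checks CPU flags:

	#include <rte_ethdev.h>
	#include <rte_vect.h>
	#include <idpf_common_rxtx.h>

	/* Illustrative dispatcher: prefer the AVX512 Rx burst when the
	 * build enabled it and the runtime SIMD policy allows 512-bit ops.
	 */
	static eth_rx_burst_t
	example_select_rx_burst(void)
	{
	#ifdef CC_AVX512_SUPPORT
		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
			return idpf_singleq_recv_pkts_avx512;
	#endif
		return idpf_singleq_recv_pkts;
	}
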
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 74d6081638..6e3ee7de25 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -47,6 +47,12 @@
 #define IDPF_TX_OFFLOAD_NOTSUP_MASK \
 		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
 
+/* used for Vector PMD */
+#define IDPF_VPMD_RX_MAX_BURST		32
+#define IDPF_VPMD_TX_MAX_BURST		32
+#define IDPF_VPMD_DESCS_PER_LOOP	4
+#define IDPF_RXQ_REARM_THRESH		64
+
 /* MTS */
 #define GLTSYN_CMD_SYNC_0_0	(PF_TIMESYNC_BASE + 0x0)
 #define PF_GLTSYN_SHTIME_0_0	(PF_TIMESYNC_BASE + 0x4)
@@ -193,6 +199,10 @@ union idpf_tx_offload {
 	};
 };
 
+struct idpf_tx_vec_entry {
+	struct rte_mbuf *mbuf;
+};
+
 struct idpf_rxq_ops {
 	void (*release_mbufs)(struct idpf_rx_queue *rxq);
 };
@@ -254,5 +264,15 @@ uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
 __rte_internal
 int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
+__rte_internal
+int idpf_singleq_tx_vec_setup_avx512(struct idpf_tx_queue *txq);
+__rte_internal
+uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue,
+				       struct rte_mbuf **rx_pkts,
+				       uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue,
+				       struct rte_mbuf **tx_pkts,
+				       uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
similarity index 99%
rename from drivers/net/idpf/idpf_rxtx_vec_avx512.c
rename to drivers/common/idpf/idpf_common_rxtx_avx512.c
index ea949635e0..6ae0e14d2f 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -2,9 +2,9 @@
  * Copyright(c) 2022 Intel Corporation
  */
 
-#include "idpf_rxtx_vec_common.h"
-
 #include <rte_vect.h>
+#include <idpf_common_device.h>
+#include <idpf_common_rxtx.h>
 
 #ifndef __INTEL_COMPILER
 #pragma GCC diagnostic ignored "-Wcast-qual"
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index 5ee071fdb2..1dafafeb2f 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -9,4 +9,34 @@ sources = files(
     'idpf_common_virtchnl.c',
 )
 
+if arch_subdir == 'x86'
+    idpf_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    idpf_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if idpf_avx512_cpu_support == true or idpf_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+        avx512_args = [cflags, '-mavx512f', '-mavx512bw']
+        if cc.has_argument('-march=skylake-avx512')
+            avx512_args += '-march=skylake-avx512'
+        endif
+        idpf_common_avx512_lib = static_library(
+            'idpf_common_avx512_lib',
+            'idpf_common_rxtx_avx512.c',
+            dependencies: [
+                    static_rte_mbuf,
+            ],
+            include_directories: includes,
+            c_args: avx512_args)
+        objs += idpf_common_avx512_lib.extract_objects('idpf_common_rxtx_avx512.c')
+    endif
+endif
+
 subdir('base')
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 511705e5b0..a0e97de81f 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -25,8 +25,11 @@ INTERNAL {
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
 	idpf_singleq_recv_pkts;
+	idpf_singleq_recv_pkts_avx512;
 	idpf_singleq_rx_vec_setup;
+	idpf_singleq_tx_vec_setup_avx512;
 	idpf_singleq_xmit_pkts;
+	idpf_singleq_xmit_pkts_avx512;
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
 	idpf_tx_queue_release;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index a985dc2cf5..3a5084dfd6 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -19,23 +19,14 @@
 #define IDPF_DEFAULT_RX_FREE_THRESH	32
 
 /* used for Vector PMD */
-#define IDPF_VPMD_RX_MAX_BURST	32
-#define IDPF_VPMD_TX_MAX_BURST	32
-#define IDPF_VPMD_DESCS_PER_LOOP	4
-#define IDPF_RXQ_REARM_THRESH	64
 
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
-struct idpf_tx_vec_entry {
-	struct rte_mbuf *mbuf;
-};
-
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
-int idpf_singleq_tx_vec_setup_avx512(struct idpf_tx_queue *txq);
 int idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
@@ -48,10 +39,6 @@ int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
-uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
-				       uint16_t nb_pkts);
-uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
-				       uint16_t nb_pkts);
 
 void idpf_stop_queues(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
index 378925166f..98f8ceb77b 100644
--- a/drivers/net/idpf/meson.build
+++ b/drivers/net/idpf/meson.build
@@ -34,22 +34,5 @@ if arch_subdir == 'x86'
 
     if idpf_avx512_cpu_support == true or idpf_avx512_cc_support == true
         cflags += ['-DCC_AVX512_SUPPORT']
-        avx512_args = [cflags, '-mavx512f', '-mavx512bw']
-        if cc.has_argument('-march=skylake-avx512')
-            avx512_args += '-march=skylake-avx512'
-        endif
-        idpf_avx512_lib = static_library(
-            'idpf_avx512_lib',
-            'idpf_rxtx_vec_avx512.c',
-            dependencies: [
-                    static_rte_common_idpf,
-                    static_rte_ethdev,
-                    static_rte_bus_pci,
-                    static_rte_kvargs,
-                    static_rte_hash,
-            ],
-            include_directories: includes,
-            c_args: avx512_args)
-        objs += idpf_avx512_lib.extract_objects('idpf_rxtx_vec_avx512.c')
     endif
 endif
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v6 16/19] common/idpf: refine API name for vport functions
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
                         ` (14 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 15/19] common/idpf: add avx512 for single queue model beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 17/19] common/idpf: refine API name for queue config module beilei.xing
                         ` (4 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

This patch refines the API names for all vport-related functions, moving
them to a consistent object-first idpf_vport_ prefix.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c |  8 ++++----
 drivers/common/idpf/idpf_common_device.h | 10 +++++-----
 drivers/common/idpf/version.map          | 14 ++++++++------
 drivers/net/idpf/idpf_ethdev.c           | 10 +++++-----
 4 files changed, 22 insertions(+), 20 deletions(-)
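
For illustration, here is a minimal sketch of a consuming PMD's start path
after the rename. Only the idpf_vport_* calls and their signatures are taken
from this patch; the example_dev_start() wrapper and its error handling are
hypothetical glue, not driver code.

#include <rte_ethdev.h>
#include <idpf_common_device.h>

static int
example_dev_start(struct rte_eth_dev *dev)
{
	struct idpf_vport *vport = dev->data->dev_private;
	uint16_t nb_rxq = dev->data->nb_rx_queues;
	int ret;

	/* was idpf_config_irq_map() */
	ret = idpf_vport_irq_map_config(vport, nb_rxq);
	if (ret != 0)
		return ret;

	/* was idpf_config_rss() */
	ret = idpf_vport_rss_config(vport);
	if (ret != 0) {
		/* was idpf_config_irq_unmap() */
		idpf_vport_irq_unmap_config(vport, nb_rxq);
		return ret;
	}

	return 0;
}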

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index a9304df6dd..f17b7736ae 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -505,7 +505,7 @@ idpf_vport_deinit(struct idpf_vport *vport)
 	return 0;
 }
 int
-idpf_config_rss(struct idpf_vport *vport)
+idpf_vport_rss_config(struct idpf_vport *vport)
 {
 	int ret;
 
@@ -531,7 +531,7 @@ idpf_config_rss(struct idpf_vport *vport)
 }
 
 int
-idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues)
+idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_queue_vector *qv_map;
@@ -606,7 +606,7 @@ idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues)
 }
 
 int
-idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues)
+idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 {
 	idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, false);
 
@@ -617,7 +617,7 @@ idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues)
 }
 
 int
-idpf_create_vport_info_init(struct idpf_vport *vport,
+idpf_vport_info_init(struct idpf_vport *vport,
 			    struct virtchnl2_create_vport *vport_info)
 {
 	struct idpf_adapter *adapter = vport->adapter;
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 573852ff75..09e967dc17 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -183,13 +183,13 @@ int idpf_vport_init(struct idpf_vport *vport,
 __rte_internal
 int idpf_vport_deinit(struct idpf_vport *vport);
 __rte_internal
-int idpf_config_rss(struct idpf_vport *vport);
+int idpf_vport_rss_config(struct idpf_vport *vport);
 __rte_internal
-int idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues);
+int idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues);
 __rte_internal
-int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
+int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues);
 __rte_internal
-int idpf_create_vport_info_init(struct idpf_vport *vport,
-				struct virtchnl2_create_vport *vport_info);
+int idpf_vport_info_init(struct idpf_vport *vport,
+			 struct virtchnl2_create_vport *vport_info);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index a0e97de81f..bd4dae503a 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,14 +3,18 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+
+	idpf_vport_deinit;
+	idpf_vport_info_init;
+	idpf_vport_init;
+	idpf_vport_irq_map_config;
+	idpf_vport_irq_unmap_config;
+	idpf_vport_rss_config;
+
 	idpf_alloc_single_rxq_mbufs;
 	idpf_alloc_split_rxq_mbufs;
 	idpf_check_rx_thresh;
 	idpf_check_tx_thresh;
-	idpf_config_irq_map;
-	idpf_config_irq_unmap;
-	idpf_config_rss;
-	idpf_create_vport_info_init;
 	idpf_execute_vc_cmd;
 	idpf_prep_pkts;
 	idpf_register_ts_mbuf;
@@ -50,8 +54,6 @@ INTERNAL {
 	idpf_vc_set_rss_key;
 	idpf_vc_set_rss_lut;
 	idpf_vc_switch_queue;
-	idpf_vport_deinit;
-	idpf_vport_init;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index ee2dec7c7c..b324c0dc83 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -169,7 +169,7 @@ idpf_init_rss(struct idpf_vport *vport)
 
 	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
 
-	ret = idpf_config_rss(vport);
+	ret = idpf_vport_rss_config(vport);
 	if (ret != 0)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS");
 
@@ -245,7 +245,7 @@ idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 	struct idpf_vport *vport = dev->data->dev_private;
 	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
 
-	return idpf_config_irq_map(vport, nb_rx_queues);
+	return idpf_vport_irq_map_config(vport, nb_rx_queues);
 }
 
 static int
@@ -334,7 +334,7 @@ idpf_dev_start(struct rte_eth_dev *dev)
 err_vport:
 	idpf_stop_queues(dev);
 err_startq:
-	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
 err_irq:
 	idpf_vc_dealloc_vectors(vport);
 err_vec:
@@ -353,7 +353,7 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_stop_queues(dev);
 
-	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
 
 	idpf_vc_dealloc_vectors(vport);
 
@@ -643,7 +643,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->devarg_id = param->devarg_id;
 
 	memset(&create_vport_info, 0, sizeof(create_vport_info));
-	ret = idpf_create_vport_info_init(vport, &create_vport_info);
+	ret = idpf_vport_info_init(vport, &create_vport_info);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
 		goto err;
-- 
2.26.2



* [PATCH v6 17/19] common/idpf: refine API name for queue config module
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
                         ` (15 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 16/19] common/idpf: refine API name for vport functions beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 18/19] common/idpf: refine API name for data path module beilei.xing
                         ` (3 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

This patch refines the API names for the queue configuration functions,
moving them under a consistent idpf_qc_ prefix.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c        | 42 ++++++++--------
 drivers/common/idpf/idpf_common_rxtx.h        | 38 +++++++-------
 drivers/common/idpf/idpf_common_rxtx_avx512.c |  2 +-
 drivers/common/idpf/version.map               | 37 +++++++-------
 drivers/net/idpf/idpf_rxtx.c                  | 50 +++++++++----------
 5 files changed, 85 insertions(+), 84 deletions(-)
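
For illustration, a minimal sketch of how a PMD wires up the renamed
idpf_qc_ helpers. The ops-table assignment mirrors idpf_rxtx.c in this
patch; the example_ names and the trivial wrapper are hypothetical.

#include <idpf_common_rxtx.h>

/* Queue ops table: was idpf_release_rxq_mbufs */
static const struct idpf_rxq_ops example_rxq_ops = {
	.release_mbufs = idpf_qc_rxq_mbufs_release,
};

static int
example_rx_thresh_ok(uint16_t nb_desc, uint16_t rx_free_thresh)
{
	/* was idpf_check_rx_thresh(); returns 0 when the threshold is valid */
	return idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh);
}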

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index bc95fef6bc..0b87aeea73 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -11,7 +11,7 @@ int idpf_timestamp_dynfield_offset = -1;
 uint64_t idpf_timestamp_dynflag;
 
 int
-idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+idpf_qc_rx_thresh_check(uint16_t nb_desc, uint16_t thresh)
 {
 	/* The following constraints must be satisfied:
 	 * thresh < rxq->nb_rx_desc
@@ -26,8 +26,8 @@ idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 }
 
 int
-idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
-		     uint16_t tx_free_thresh)
+idpf_qc_tx_thresh_check(uint16_t nb_desc, uint16_t tx_rs_thresh,
+			uint16_t tx_free_thresh)
 {
 	/* TX descriptors will have their RS bit set after tx_rs_thresh
 	 * descriptors have been used. The TX descriptor ring will be cleaned
@@ -74,7 +74,7 @@ idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
 }
 
 void
-idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq)
+idpf_qc_rxq_mbufs_release(struct idpf_rx_queue *rxq)
 {
 	uint16_t i;
 
@@ -90,7 +90,7 @@ idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq)
 }
 
 void
-idpf_release_txq_mbufs(struct idpf_tx_queue *txq)
+idpf_qc_txq_mbufs_release(struct idpf_tx_queue *txq)
 {
 	uint16_t nb_desc, i;
 
@@ -115,7 +115,7 @@ idpf_release_txq_mbufs(struct idpf_tx_queue *txq)
 }
 
 void
-idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq)
+idpf_qc_split_rx_descq_reset(struct idpf_rx_queue *rxq)
 {
 	uint16_t len;
 	uint32_t i;
@@ -134,7 +134,7 @@ idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq)
 }
 
 void
-idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq)
+idpf_qc_split_rx_bufq_reset(struct idpf_rx_queue *rxq)
 {
 	uint16_t len;
 	uint32_t i;
@@ -166,15 +166,15 @@ idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq)
 }
 
 void
-idpf_reset_split_rx_queue(struct idpf_rx_queue *rxq)
+idpf_qc_split_rx_queue_reset(struct idpf_rx_queue *rxq)
 {
-	idpf_reset_split_rx_descq(rxq);
-	idpf_reset_split_rx_bufq(rxq->bufq1);
-	idpf_reset_split_rx_bufq(rxq->bufq2);
+	idpf_qc_split_rx_descq_reset(rxq);
+	idpf_qc_split_rx_bufq_reset(rxq->bufq1);
+	idpf_qc_split_rx_bufq_reset(rxq->bufq2);
 }
 
 void
-idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq)
+idpf_qc_single_rx_queue_reset(struct idpf_rx_queue *rxq)
 {
 	uint16_t len;
 	uint32_t i;
@@ -205,7 +205,7 @@ idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq)
 }
 
 void
-idpf_reset_split_tx_descq(struct idpf_tx_queue *txq)
+idpf_qc_split_tx_descq_reset(struct idpf_tx_queue *txq)
 {
 	struct idpf_tx_entry *txe;
 	uint32_t i, size;
@@ -239,7 +239,7 @@ idpf_reset_split_tx_descq(struct idpf_tx_queue *txq)
 }
 
 void
-idpf_reset_split_tx_complq(struct idpf_tx_queue *cq)
+idpf_qc_split_tx_complq_reset(struct idpf_tx_queue *cq)
 {
 	uint32_t i, size;
 
@@ -257,7 +257,7 @@ idpf_reset_split_tx_complq(struct idpf_tx_queue *cq)
 }
 
 void
-idpf_reset_single_tx_queue(struct idpf_tx_queue *txq)
+idpf_qc_single_tx_queue_reset(struct idpf_tx_queue *txq)
 {
 	struct idpf_tx_entry *txe;
 	uint32_t i, size;
@@ -294,7 +294,7 @@ idpf_reset_single_tx_queue(struct idpf_tx_queue *txq)
 }
 
 void
-idpf_rx_queue_release(void *rxq)
+idpf_qc_rx_queue_release(void *rxq)
 {
 	struct idpf_rx_queue *q = rxq;
 
@@ -324,7 +324,7 @@ idpf_rx_queue_release(void *rxq)
 }
 
 void
-idpf_tx_queue_release(void *txq)
+idpf_qc_tx_queue_release(void *txq)
 {
 	struct idpf_tx_queue *q = txq;
 
@@ -343,7 +343,7 @@ idpf_tx_queue_release(void *txq)
 }
 
 int
-idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
+idpf_qc_ts_mbuf_register(struct idpf_rx_queue *rxq)
 {
 	int err;
 	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
@@ -360,7 +360,7 @@ idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
 }
 
 int
-idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
+idpf_qc_single_rxq_mbufs_alloc(struct idpf_rx_queue *rxq)
 {
 	volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
 	struct rte_mbuf *mbuf = NULL;
@@ -395,7 +395,7 @@ idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
 }
 
 int
-idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
+idpf_qc_split_rxq_mbufs_alloc(struct idpf_rx_queue *rxq)
 {
 	volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
 	struct rte_mbuf *mbuf = NULL;
@@ -1451,7 +1451,7 @@ idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
 }
 
 int __rte_cold
-idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
+idpf_qc_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
 {
 	rxq->ops = &def_singleq_rx_ops_vec;
 	return idpf_singleq_rx_vec_setup_default(rxq);
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 6e3ee7de25..7966d15f51 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -215,38 +215,38 @@ extern int idpf_timestamp_dynfield_offset;
 extern uint64_t idpf_timestamp_dynflag;
 
 __rte_internal
-int idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh);
+int idpf_qc_rx_thresh_check(uint16_t nb_desc, uint16_t thresh);
 __rte_internal
-int idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
-			 uint16_t tx_free_thresh);
+int idpf_qc_tx_thresh_check(uint16_t nb_desc, uint16_t tx_rs_thresh,
+			    uint16_t tx_free_thresh);
 __rte_internal
-void idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq);
+void idpf_qc_rxq_mbufs_release(struct idpf_rx_queue *rxq);
 __rte_internal
-void idpf_release_txq_mbufs(struct idpf_tx_queue *txq);
+void idpf_qc_txq_mbufs_release(struct idpf_tx_queue *txq);
 __rte_internal
-void idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq);
+void idpf_qc_split_rx_descq_reset(struct idpf_rx_queue *rxq);
 __rte_internal
-void idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq);
+void idpf_qc_split_rx_bufq_reset(struct idpf_rx_queue *rxq);
 __rte_internal
-void idpf_reset_split_rx_queue(struct idpf_rx_queue *rxq);
+void idpf_qc_split_rx_queue_reset(struct idpf_rx_queue *rxq);
 __rte_internal
-void idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq);
+void idpf_qc_single_rx_queue_reset(struct idpf_rx_queue *rxq);
 __rte_internal
-void idpf_reset_split_tx_descq(struct idpf_tx_queue *txq);
+void idpf_qc_split_tx_descq_reset(struct idpf_tx_queue *txq);
 __rte_internal
-void idpf_reset_split_tx_complq(struct idpf_tx_queue *cq);
+void idpf_qc_split_tx_complq_reset(struct idpf_tx_queue *cq);
 __rte_internal
-void idpf_reset_single_tx_queue(struct idpf_tx_queue *txq);
+void idpf_qc_single_tx_queue_reset(struct idpf_tx_queue *txq);
 __rte_internal
-void idpf_rx_queue_release(void *rxq);
+void idpf_qc_rx_queue_release(void *rxq);
 __rte_internal
-void idpf_tx_queue_release(void *txq);
+void idpf_qc_tx_queue_release(void *txq);
 __rte_internal
-int idpf_register_ts_mbuf(struct idpf_rx_queue *rxq);
+int idpf_qc_ts_mbuf_register(struct idpf_rx_queue *rxq);
 __rte_internal
-int idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq);
+int idpf_qc_single_rxq_mbufs_alloc(struct idpf_rx_queue *rxq);
 __rte_internal
-int idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq);
+int idpf_qc_split_rxq_mbufs_alloc(struct idpf_rx_queue *rxq);
 __rte_internal
 uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			       uint16_t nb_pkts);
@@ -263,9 +263,9 @@ __rte_internal
 uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
 __rte_internal
-int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
+int idpf_qc_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 __rte_internal
-int idpf_singleq_tx_vec_setup_avx512(struct idpf_tx_queue *txq);
+int idpf_qc_singleq_tx_vec_avx512_setup(struct idpf_tx_queue *txq);
 __rte_internal
 uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue,
 				       struct rte_mbuf **rx_pkts,
diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
index 6ae0e14d2f..d94e36b521 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -850,7 +850,7 @@ static const struct idpf_txq_ops avx512_singleq_tx_vec_ops = {
 };
 
 int __rte_cold
-idpf_singleq_tx_vec_setup_avx512(struct idpf_tx_queue *txq)
+idpf_qc_singleq_tx_vec_avx512_setup(struct idpf_tx_queue *txq)
 {
 	txq->ops = &avx512_singleq_tx_vec_ops;
 	return 0;
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index bd4dae503a..2ff152a353 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -4,6 +4,25 @@ INTERNAL {
 	idpf_adapter_deinit;
 	idpf_adapter_init;
 
+	idpf_qc_rx_thresh_check;
+	idpf_qc_rx_queue_release;
+	idpf_qc_rxq_mbufs_release;
+	idpf_qc_single_rx_queue_reset;
+	idpf_qc_single_rxq_mbufs_alloc;
+	idpf_qc_single_tx_queue_reset;
+	idpf_qc_singleq_rx_vec_setup;
+	idpf_qc_singleq_tx_vec_avx512_setup;
+	idpf_qc_split_rx_bufq_reset;
+	idpf_qc_split_rx_descq_reset;
+	idpf_qc_split_rx_queue_reset;
+	idpf_qc_split_rxq_mbufs_alloc;
+	idpf_qc_split_tx_complq_reset;
+	idpf_qc_split_tx_descq_reset;
+	idpf_qc_ts_mbuf_register;
+	idpf_qc_tx_queue_release;
+	idpf_qc_tx_thresh_check;
+	idpf_qc_txq_mbufs_release;
+
 	idpf_vport_deinit;
 	idpf_vport_info_init;
 	idpf_vport_init;
@@ -11,32 +30,14 @@ INTERNAL {
 	idpf_vport_irq_unmap_config;
 	idpf_vport_rss_config;
 
-	idpf_alloc_single_rxq_mbufs;
-	idpf_alloc_split_rxq_mbufs;
-	idpf_check_rx_thresh;
-	idpf_check_tx_thresh;
 	idpf_execute_vc_cmd;
 	idpf_prep_pkts;
-	idpf_register_ts_mbuf;
-	idpf_release_rxq_mbufs;
-	idpf_release_txq_mbufs;
-	idpf_reset_single_rx_queue;
-	idpf_reset_single_tx_queue;
-	idpf_reset_split_rx_bufq;
-	idpf_reset_split_rx_descq;
-	idpf_reset_split_rx_queue;
-	idpf_reset_split_tx_complq;
-	idpf_reset_split_tx_descq;
-	idpf_rx_queue_release;
 	idpf_singleq_recv_pkts;
 	idpf_singleq_recv_pkts_avx512;
-	idpf_singleq_rx_vec_setup;
-	idpf_singleq_tx_vec_setup_avx512;
 	idpf_singleq_xmit_pkts;
 	idpf_singleq_xmit_pkts_avx512;
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
-	idpf_tx_queue_release;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index c0c622d64b..ec75d6f69e 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -51,11 +51,11 @@ idpf_tx_offload_convert(uint64_t offload)
 }
 
 static const struct idpf_rxq_ops def_rxq_ops = {
-	.release_mbufs = idpf_release_rxq_mbufs,
+	.release_mbufs = idpf_qc_rxq_mbufs_release,
 };
 
 static const struct idpf_txq_ops def_txq_ops = {
-	.release_mbufs = idpf_release_txq_mbufs,
+	.release_mbufs = idpf_qc_txq_mbufs_release,
 };
 
 static const struct rte_memzone *
@@ -183,7 +183,7 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 		goto err_sw_ring_alloc;
 	}
 
-	idpf_reset_split_rx_bufq(bufq);
+	idpf_qc_split_rx_bufq_reset(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
 	bufq->ops = &def_rxq_ops;
@@ -242,12 +242,12 @@ idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
 		IDPF_DEFAULT_RX_FREE_THRESH :
 		rx_conf->rx_free_thresh;
-	if (idpf_check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Free memory if needed */
 	if (dev->data->rx_queues[queue_idx] != NULL) {
-		idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		idpf_qc_rx_queue_release(dev->data->rx_queues[queue_idx]);
 		dev->data->rx_queues[queue_idx] = NULL;
 	}
 
@@ -300,12 +300,12 @@ idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			goto err_sw_ring_alloc;
 		}
 
-		idpf_reset_single_rx_queue(rxq);
+		idpf_qc_single_rx_queue_reset(rxq);
 		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
 				queue_idx * vport->chunks_info.rx_qtail_spacing);
 		rxq->ops = &def_rxq_ops;
 	} else {
-		idpf_reset_split_rx_descq(rxq);
+		idpf_qc_split_rx_descq_reset(rxq);
 
 		/* Setup Rx buffer queues */
 		ret = idpf_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
@@ -379,7 +379,7 @@ idpf_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 	cq->tx_ring_phys_addr = mz->iova;
 	cq->compl_ring = mz->addr;
 	cq->mz = mz;
-	idpf_reset_split_tx_complq(cq);
+	idpf_qc_split_tx_complq_reset(cq);
 
 	txq->complq = cq;
 
@@ -413,12 +413,12 @@ idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
 	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
 		tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
-	if (idpf_check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Free memory if needed. */
 	if (dev->data->tx_queues[queue_idx] != NULL) {
-		idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		idpf_qc_tx_queue_release(dev->data->tx_queues[queue_idx]);
 		dev->data->tx_queues[queue_idx] = NULL;
 	}
 
@@ -470,10 +470,10 @@ idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	if (!is_splitq) {
 		txq->tx_ring = mz->addr;
-		idpf_reset_single_tx_queue(txq);
+		idpf_qc_single_tx_queue_reset(txq);
 	} else {
 		txq->desc_ring = mz->addr;
-		idpf_reset_split_tx_descq(txq);
+		idpf_qc_split_tx_descq_reset(txq);
 
 		/* Setup tx completion queue if split model */
 		ret = idpf_tx_complq_setup(dev, txq, queue_idx,
@@ -516,7 +516,7 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
-	err = idpf_register_ts_mbuf(rxq);
+	err = idpf_qc_ts_mbuf_register(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
 					rx_queue_id);
@@ -525,7 +525,7 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	if (rxq->bufq1 == NULL) {
 		/* Single queue */
-		err = idpf_alloc_single_rxq_mbufs(rxq);
+		err = idpf_qc_single_rxq_mbufs_alloc(rxq);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
 			return err;
@@ -537,12 +537,12 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
 	} else {
 		/* Split queue */
-		err = idpf_alloc_split_rxq_mbufs(rxq->bufq1);
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
 			return err;
 		}
-		err = idpf_alloc_split_rxq_mbufs(rxq->bufq2);
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
 			return err;
@@ -664,11 +664,11 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	rxq = dev->data->rx_queues[rx_queue_id];
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		rxq->ops->release_mbufs(rxq);
-		idpf_reset_single_rx_queue(rxq);
+		idpf_qc_single_rx_queue_reset(rxq);
 	} else {
 		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
 		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
-		idpf_reset_split_rx_queue(rxq);
+		idpf_qc_split_rx_queue_reset(rxq);
 	}
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -695,10 +695,10 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	txq = dev->data->tx_queues[tx_queue_id];
 	txq->ops->release_mbufs(txq);
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		idpf_reset_single_tx_queue(txq);
+		idpf_qc_single_tx_queue_reset(txq);
 	} else {
-		idpf_reset_split_tx_descq(txq);
-		idpf_reset_split_tx_complq(txq->complq);
+		idpf_qc_split_tx_descq_reset(txq);
+		idpf_qc_split_tx_complq_reset(txq->complq);
 	}
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -708,13 +708,13 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 void
 idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
 {
-	idpf_rx_queue_release(dev->data->rx_queues[qid]);
+	idpf_qc_rx_queue_release(dev->data->rx_queues[qid]);
 }
 
 void
 idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
 {
-	idpf_tx_queue_release(dev->data->tx_queues[qid]);
+	idpf_qc_tx_queue_release(dev->data->tx_queues[qid]);
 }
 
 void
@@ -776,7 +776,7 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 		if (vport->rx_vec_allowed) {
 			for (i = 0; i < dev->data->nb_tx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				(void)idpf_singleq_rx_vec_setup(rxq);
+				(void)idpf_qc_singleq_rx_vec_setup(rxq);
 			}
 #ifdef CC_AVX512_SUPPORT
 			if (vport->rx_use_avx512) {
@@ -835,7 +835,7 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 					txq = dev->data->tx_queues[i];
 					if (txq == NULL)
 						continue;
-					idpf_singleq_tx_vec_setup_avx512(txq);
+					idpf_qc_singleq_tx_vec_avx512_setup(txq);
 				}
 				dev->tx_pkt_burst = idpf_singleq_xmit_pkts_avx512;
 				dev->tx_pkt_prepare = idpf_prep_pkts;
-- 
2.26.2



* [PATCH v6 18/19] common/idpf: refine API name for data path module
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
                         ` (16 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 17/19] common/idpf: refine API name for queue config module beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-03  9:43       ` [PATCH v6 19/19] common/idpf: refine API name for virtual channel functions beilei.xing
                         ` (2 subsequent siblings)
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Refine the API names for all data path functions, moving them under a
consistent idpf_dp_ prefix.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c        | 20 ++++++------
 drivers/common/idpf/idpf_common_rxtx.h        | 32 +++++++++----------
 drivers/common/idpf/idpf_common_rxtx_avx512.c |  8 ++---
 drivers/common/idpf/version.map               | 15 +++++----
 drivers/net/idpf/idpf_rxtx.c                  | 22 ++++++-------
 5 files changed, 49 insertions(+), 48 deletions(-)
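
For illustration, a condensed sketch of Rx burst selection after the
idpf_dp_ rename; this is essentially the non-vectorized branch of
idpf_set_rx_function() from this patch, with the example_ name being
illustrative.

#include <rte_ethdev.h>
#include <idpf_common_device.h>
#include <idpf_common_rxtx.h>

static void
example_set_rx_function(struct rte_eth_dev *dev)
{
	struct idpf_vport *vport = dev->data->dev_private;

	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;	/* was idpf_splitq_recv_pkts */
	else
		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;	/* was idpf_singleq_recv_pkts */
}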

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 0b87aeea73..d6777b2af3 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -618,8 +618,8 @@ idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
 }
 
 uint16_t
-idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		      uint16_t nb_pkts)
+idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
 {
 	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
 	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
@@ -850,8 +850,8 @@ idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
 }
 
 uint16_t
-idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		      uint16_t nb_pkts)
+idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
 {
 	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
 	volatile struct idpf_flex_tx_sched_desc *txr;
@@ -1024,8 +1024,8 @@ idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
 }
 
 uint16_t
-idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		       uint16_t nb_pkts)
+idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts)
 {
 	volatile union virtchnl2_rx_desc *rx_ring;
 	volatile union virtchnl2_rx_desc *rxdp;
@@ -1186,8 +1186,8 @@ idpf_xmit_cleanup(struct idpf_tx_queue *txq)
 
 /* TX function */
 uint16_t
-idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		       uint16_t nb_pkts)
+idpf_dp_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			  uint16_t nb_pkts)
 {
 	volatile struct idpf_flex_tx_desc *txd;
 	volatile struct idpf_flex_tx_desc *txr;
@@ -1350,8 +1350,8 @@ idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 /* TX prep functions */
 uint16_t
-idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
-	       uint16_t nb_pkts)
+idpf_dp_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+		  uint16_t nb_pkts)
 {
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
 	int ret;
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 7966d15f51..37ea8b4b9c 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -248,31 +248,31 @@ int idpf_qc_single_rxq_mbufs_alloc(struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_qc_split_rxq_mbufs_alloc(struct idpf_rx_queue *rxq);
 __rte_internal
-uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-			       uint16_t nb_pkts);
+uint16_t idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+				  uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			       uint16_t nb_pkts);
+uint16_t idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+				  uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-				uint16_t nb_pkts);
+uint16_t idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+				   uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-				uint16_t nb_pkts);
+uint16_t idpf_dp_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+				   uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			uint16_t nb_pkts);
+uint16_t idpf_dp_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			   uint16_t nb_pkts);
 __rte_internal
 int idpf_qc_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_qc_singleq_tx_vec_avx512_setup(struct idpf_tx_queue *txq);
 __rte_internal
-uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue,
-				       struct rte_mbuf **rx_pkts,
-				       uint16_t nb_pkts);
+uint16_t idpf_dp_singleq_recv_pkts_avx512(void *rx_queue,
+					  struct rte_mbuf **rx_pkts,
+					  uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue,
-				       struct rte_mbuf **tx_pkts,
-				       uint16_t nb_pkts);
+uint16_t idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue,
+					  struct rte_mbuf **tx_pkts,
+					  uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
index d94e36b521..8ade27027c 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -533,8 +533,8 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
  * - nb_pkts < IDPF_DESCS_PER_LOOP, just return no packet
  */
 uint16_t
-idpf_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
-			  uint16_t nb_pkts)
+idpf_dp_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
+				 uint16_t nb_pkts)
 {
 	return _idpf_singleq_recv_raw_pkts_avx512(rx_queue, rx_pkts, nb_pkts);
 }
@@ -819,8 +819,8 @@ idpf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
 }
 
 uint16_t
-idpf_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
-			     uint16_t nb_pkts)
+idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
+				 uint16_t nb_pkts)
 {
 	return idpf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts);
 }
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 2ff152a353..e37a40771b 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -4,6 +4,14 @@ INTERNAL {
 	idpf_adapter_deinit;
 	idpf_adapter_init;
 
+	idpf_dp_prep_pkts;
+	idpf_dp_singleq_recv_pkts;
+	idpf_dp_singleq_recv_pkts_avx512;
+	idpf_dp_singleq_xmit_pkts;
+	idpf_dp_singleq_xmit_pkts_avx512;
+	idpf_dp_splitq_recv_pkts;
+	idpf_dp_splitq_xmit_pkts;
+
 	idpf_qc_rx_thresh_check;
 	idpf_qc_rx_queue_release;
 	idpf_qc_rxq_mbufs_release;
@@ -31,13 +39,6 @@ INTERNAL {
 	idpf_vport_rss_config;
 
 	idpf_execute_vc_cmd;
-	idpf_prep_pkts;
-	idpf_singleq_recv_pkts;
-	idpf_singleq_recv_pkts_avx512;
-	idpf_singleq_xmit_pkts;
-	idpf_singleq_xmit_pkts_avx512;
-	idpf_splitq_recv_pkts;
-	idpf_splitq_xmit_pkts;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index ec75d6f69e..41e91b16b6 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -771,7 +771,7 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
 		if (vport->rx_vec_allowed) {
 			for (i = 0; i < dev->data->nb_tx_queues; i++) {
@@ -780,19 +780,19 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 			}
 #ifdef CC_AVX512_SUPPORT
 			if (vport->rx_use_avx512) {
-				dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
+				dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts_avx512;
 				return;
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
 
-		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
 #else
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
-		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	else
-		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 #endif /* RTE_ARCH_X86 */
 }
 
@@ -824,8 +824,8 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 #endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
-		dev->tx_pkt_prepare = idpf_prep_pkts;
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
 #ifdef RTE_ARCH_X86
 		if (vport->tx_vec_allowed) {
@@ -837,14 +837,14 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 						continue;
 					idpf_qc_singleq_tx_vec_avx512_setup(txq);
 				}
-				dev->tx_pkt_burst = idpf_singleq_xmit_pkts_avx512;
-				dev->tx_pkt_prepare = idpf_prep_pkts;
+				dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 				return;
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
 #endif /* RTE_ARCH_X86 */
-		dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
-		dev->tx_pkt_prepare = idpf_prep_pkts;
+		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	}
 }
-- 
2.26.2



* [PATCH v6 19/19] common/idpf: refine API name for virtual channel functions
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
                         ` (17 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 18/19] common/idpf: refine API name for data path module beilei.xing
@ 2023-02-03  9:43       ` beilei.xing
  2023-02-06  2:58       ` [PATCH v6 00/19] net/idpf: introduce idpf common modle Zhang, Qi Z
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
  20 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-03  9:43 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

This patch refines the API names for all virtual channel functions,
reordering them to the object-first idpf_vc_<object>_<action> form.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 24 ++++----
 drivers/common/idpf/idpf_common_virtchnl.c | 70 +++++++++++-----------
 drivers/common/idpf/idpf_common_virtchnl.h | 36 +++++------
 drivers/common/idpf/version.map            | 38 ++++++------
 drivers/net/idpf/idpf_ethdev.c             | 10 ++--
 drivers/net/idpf/idpf_rxtx.c               | 12 ++--
 6 files changed, 95 insertions(+), 95 deletions(-)
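
For reference, the mailbox command pattern behind these wrappers, modeled
on idpf_vc_vport_destroy() as renamed here. Fields not visible in the
hunks below (e.g. vc_vport.vport_id and the memset of args) follow the
upstream code and are shown for illustration only.

#include <string.h>
#include <idpf_common_device.h>
#include <idpf_common_virtchnl.h>

static int
example_vc_vport_destroy(struct idpf_vport *vport)
{
	struct idpf_adapter *adapter = vport->adapter;
	struct virtchnl2_vport vc_vport;
	struct idpf_cmd_info args;
	int err;

	vc_vport.vport_id = vport->vport_id;

	memset(&args, 0, sizeof(args));
	args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
	args.in_args = (uint8_t *)&vc_vport;
	args.in_args_size = sizeof(vc_vport);
	args.out_buffer = adapter->mbx_resp;
	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;

	/* was idpf_execute_vc_cmd() */
	err = idpf_vc_cmd_execute(adapter, &args);
	if (err != 0)
		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");

	return err;
}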

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index f17b7736ae..6c5f10a8ce 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -104,7 +104,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 	uint16_t ptype_recvd = 0;
 	int ret;
 
-	ret = idpf_vc_query_ptype_info(adapter);
+	ret = idpf_vc_ptype_info_query(adapter);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Fail to query packet type information");
 		return ret;
@@ -115,7 +115,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 			return -ENOMEM;
 
 	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_vc_read_one_msg(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
+		ret = idpf_vc_one_msg_read(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
 					   IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
 		if (ret != 0) {
 			DRV_LOG(ERR, "Fail to get packet type information");
@@ -333,13 +333,13 @@ idpf_adapter_init(struct idpf_adapter *adapter)
 		goto err_mbx_resp;
 	}
 
-	ret = idpf_vc_check_api_version(adapter);
+	ret = idpf_vc_api_version_check(adapter);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to check api version");
 		goto err_check_api;
 	}
 
-	ret = idpf_vc_get_caps(adapter);
+	ret = idpf_vc_caps_get(adapter);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to get capabilities");
 		goto err_check_api;
@@ -382,7 +382,7 @@ idpf_vport_init(struct idpf_vport *vport,
 	struct virtchnl2_create_vport *vport_info;
 	int i, type, ret;
 
-	ret = idpf_vc_create_vport(vport, create_vport_info);
+	ret = idpf_vc_vport_create(vport, create_vport_info);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to create vport.");
 		goto err_create_vport;
@@ -483,7 +483,7 @@ idpf_vport_init(struct idpf_vport *vport,
 	rte_free(vport->rss_key);
 	vport->rss_key = NULL;
 err_rss_key:
-	idpf_vc_destroy_vport(vport);
+	idpf_vc_vport_destroy(vport);
 err_create_vport:
 	return ret;
 }
@@ -500,7 +500,7 @@ idpf_vport_deinit(struct idpf_vport *vport)
 
 	vport->dev_data = NULL;
 
-	idpf_vc_destroy_vport(vport);
+	idpf_vc_vport_destroy(vport);
 
 	return 0;
 }
@@ -509,19 +509,19 @@ idpf_vport_rss_config(struct idpf_vport *vport)
 {
 	int ret;
 
-	ret = idpf_vc_set_rss_key(vport);
+	ret = idpf_vc_rss_key_set(vport);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to configure RSS key");
 		return ret;
 	}
 
-	ret = idpf_vc_set_rss_lut(vport);
+	ret = idpf_vc_rss_lut_set(vport);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to configure RSS lut");
 		return ret;
 	}
 
-	ret = idpf_vc_set_rss_hash(vport);
+	ret = idpf_vc_rss_hash_set(vport);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to configure RSS hash");
 		return ret;
@@ -589,7 +589,7 @@ idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 	}
 	vport->qv_map = qv_map;
 
-	ret = idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, true);
+	ret = idpf_vc_irq_map_unmap_config(vport, nb_rx_queues, true);
 	if (ret != 0) {
 		DRV_LOG(ERR, "config interrupt mapping failed");
 		goto config_irq_map_err;
@@ -608,7 +608,7 @@ idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 int
 idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 {
-	idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, false);
+	idpf_vc_irq_map_unmap_config(vport, nb_rx_queues, false);
 
 	rte_free(vport->qv_map);
 	vport->qv_map = NULL;
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 299caa19f1..50e2ade89e 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -159,7 +159,7 @@ idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
 #define ASQ_DELAY_MS  10
 
 int
-idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
+idpf_vc_one_msg_read(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
 		     uint8_t *buf)
 {
 	int err = 0;
@@ -183,7 +183,7 @@ idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_le
 }
 
 int
-idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
+idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 {
 	int err = 0;
 	int i = 0;
@@ -218,7 +218,7 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
 		/* for init virtchnl ops, need to poll the response */
-		err = idpf_vc_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
+		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
 		break;
 	case VIRTCHNL2_OP_GET_PTYPE_INFO:
@@ -251,7 +251,7 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 }
 
 int
-idpf_vc_check_api_version(struct idpf_adapter *adapter)
+idpf_vc_api_version_check(struct idpf_adapter *adapter)
 {
 	struct virtchnl2_version_info version, *pver;
 	struct idpf_cmd_info args;
@@ -267,7 +267,7 @@ idpf_vc_check_api_version(struct idpf_adapter *adapter)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0) {
 		DRV_LOG(ERR,
 			"Failed to execute command of VIRTCHNL_OP_VERSION");
@@ -291,7 +291,7 @@ idpf_vc_check_api_version(struct idpf_adapter *adapter)
 }
 
 int
-idpf_vc_get_caps(struct idpf_adapter *adapter)
+idpf_vc_caps_get(struct idpf_adapter *adapter)
 {
 	struct virtchnl2_get_capabilities caps_msg;
 	struct idpf_cmd_info args;
@@ -341,7 +341,7 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0) {
 		DRV_LOG(ERR,
 			"Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
@@ -354,7 +354,7 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 }
 
 int
-idpf_vc_create_vport(struct idpf_vport *vport,
+idpf_vc_vport_create(struct idpf_vport *vport,
 		     struct virtchnl2_create_vport *create_vport_info)
 {
 	struct idpf_adapter *adapter = vport->adapter;
@@ -378,7 +378,7 @@ idpf_vc_create_vport(struct idpf_vport *vport,
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0) {
 		DRV_LOG(ERR,
 			"Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
@@ -390,7 +390,7 @@ idpf_vc_create_vport(struct idpf_vport *vport,
 }
 
 int
-idpf_vc_destroy_vport(struct idpf_vport *vport)
+idpf_vc_vport_destroy(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_vport vc_vport;
@@ -406,7 +406,7 @@ idpf_vc_destroy_vport(struct idpf_vport *vport)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
 
@@ -414,7 +414,7 @@ idpf_vc_destroy_vport(struct idpf_vport *vport)
 }
 
 int
-idpf_vc_set_rss_key(struct idpf_vport *vport)
+idpf_vc_rss_key_set(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_rss_key *rss_key;
@@ -439,7 +439,7 @@ idpf_vc_set_rss_key(struct idpf_vport *vport)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
 
@@ -448,7 +448,7 @@ idpf_vc_set_rss_key(struct idpf_vport *vport)
 }
 
 int
-idpf_vc_set_rss_lut(struct idpf_vport *vport)
+idpf_vc_rss_lut_set(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_rss_lut *rss_lut;
@@ -473,7 +473,7 @@ idpf_vc_set_rss_lut(struct idpf_vport *vport)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
 
@@ -482,7 +482,7 @@ idpf_vc_set_rss_lut(struct idpf_vport *vport)
 }
 
 int
-idpf_vc_set_rss_hash(struct idpf_vport *vport)
+idpf_vc_rss_hash_set(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_rss_hash rss_hash;
@@ -500,7 +500,7 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
 
@@ -508,7 +508,7 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 }
 
 int
-idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
+idpf_vc_irq_map_unmap_config(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_queue_vector_maps *map_info;
@@ -539,7 +539,7 @@ idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map
 	args.in_args_size = len;
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUE_VECTOR",
 			map ? "MAP" : "UNMAP");
@@ -549,7 +549,7 @@ idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map
 }
 
 int
-idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
+idpf_vc_vectors_alloc(struct idpf_vport *vport, uint16_t num_vectors)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_alloc_vectors *alloc_vec;
@@ -569,7 +569,7 @@ idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
 	args.in_args_size = len;
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
 
@@ -579,7 +579,7 @@ idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
 }
 
 int
-idpf_vc_dealloc_vectors(struct idpf_vport *vport)
+idpf_vc_vectors_dealloc(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_alloc_vectors *alloc_vec;
@@ -598,7 +598,7 @@ idpf_vc_dealloc_vectors(struct idpf_vport *vport)
 	args.in_args_size = len;
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_DEALLOC_VECTORS");
 
@@ -634,7 +634,7 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 	args.in_args_size = len;
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
 			on ? "ENABLE" : "DISABLE");
@@ -644,7 +644,7 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 }
 
 int
-idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
+idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
 		     bool rx, bool on)
 {
 	uint32_t type;
@@ -688,7 +688,7 @@ idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
 
 #define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
 int
-idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
+idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_del_ena_dis_queues *queue_select;
@@ -746,7 +746,7 @@ idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
 	args.in_args_size = len;
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
 			enable ? "ENABLE" : "DISABLE");
@@ -756,7 +756,7 @@ idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
 }
 
 int
-idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
+idpf_vc_vport_ena_dis(struct idpf_vport *vport, bool enable)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_vport vc_vport;
@@ -771,7 +771,7 @@ idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0) {
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
 			enable ? "ENABLE" : "DISABLE");
@@ -781,7 +781,7 @@ idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
 }
 
 int
-idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
+idpf_vc_ptype_info_query(struct idpf_adapter *adapter)
 {
 	struct virtchnl2_get_ptype_info *ptype_info;
 	struct idpf_cmd_info args;
@@ -798,7 +798,7 @@ idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
 	args.in_args = (uint8_t *)ptype_info;
 	args.in_args_size = len;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_PTYPE_INFO");
 
@@ -808,7 +808,7 @@ idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
 
 #define IDPF_RX_BUF_STRIDE		64
 int
-idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
+idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
@@ -887,7 +887,7 @@ idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	rte_free(vc_rxqs);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
@@ -896,7 +896,7 @@ idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 }
 
 int
-idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
+idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
@@ -958,7 +958,7 @@ idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	rte_free(vc_txqs);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 07755d4923..dcd855c08c 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -9,44 +9,44 @@
 #include <idpf_common_rxtx.h>
 
 __rte_internal
-int idpf_vc_check_api_version(struct idpf_adapter *adapter);
+int idpf_vc_api_version_check(struct idpf_adapter *adapter);
 __rte_internal
-int idpf_vc_get_caps(struct idpf_adapter *adapter);
+int idpf_vc_caps_get(struct idpf_adapter *adapter);
 __rte_internal
-int idpf_vc_create_vport(struct idpf_vport *vport,
+int idpf_vc_vport_create(struct idpf_vport *vport,
 			 struct virtchnl2_create_vport *vport_info);
 __rte_internal
-int idpf_vc_destroy_vport(struct idpf_vport *vport);
+int idpf_vc_vport_destroy(struct idpf_vport *vport);
 __rte_internal
-int idpf_vc_set_rss_key(struct idpf_vport *vport);
+int idpf_vc_rss_key_set(struct idpf_vport *vport);
 __rte_internal
-int idpf_vc_set_rss_lut(struct idpf_vport *vport);
+int idpf_vc_rss_lut_set(struct idpf_vport *vport);
 __rte_internal
-int idpf_vc_set_rss_hash(struct idpf_vport *vport);
+int idpf_vc_rss_hash_set(struct idpf_vport *vport);
 __rte_internal
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+int idpf_vc_irq_map_unmap_config(struct idpf_vport *vport,
 				 uint16_t nb_rxq, bool map);
 __rte_internal
-int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
+int idpf_vc_cmd_execute(struct idpf_adapter *adapter,
 			struct idpf_cmd_info *args);
 __rte_internal
-int idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
+int idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
 			 bool rx, bool on);
 __rte_internal
-int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
+int idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable);
 __rte_internal
-int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
+int idpf_vc_vport_ena_dis(struct idpf_vport *vport, bool enable);
 __rte_internal
-int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
+int idpf_vc_vectors_alloc(struct idpf_vport *vport, uint16_t num_vectors);
 __rte_internal
-int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
+int idpf_vc_vectors_dealloc(struct idpf_vport *vport);
 __rte_internal
-int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
+int idpf_vc_ptype_info_query(struct idpf_adapter *adapter);
 __rte_internal
-int idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
+int idpf_vc_one_msg_read(struct idpf_adapter *adapter, uint32_t ops,
 			 uint16_t buf_len, uint8_t *buf);
 __rte_internal
-int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
+int idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
 __rte_internal
-int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index e37a40771b..1c35761611 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -31,6 +31,25 @@ INTERNAL {
 	idpf_qc_tx_thresh_check;
 	idpf_qc_txq_mbufs_release;
 
+	idpf_vc_api_version_check;
+	idpf_vc_caps_get;
+	idpf_vc_cmd_execute;
+	idpf_vc_irq_map_unmap_config;
+	idpf_vc_one_msg_read;
+	idpf_vc_ptype_info_query;
+	idpf_vc_queue_switch;
+	idpf_vc_queues_ena_dis;
+	idpf_vc_rss_hash_set;
+	idpf_vc_rss_key_set;
+	idpf_vc_rss_lut_set;
+	idpf_vc_rxq_config;
+	idpf_vc_txq_config;
+	idpf_vc_vectors_alloc;
+	idpf_vc_vectors_dealloc;
+	idpf_vc_vport_create;
+	idpf_vc_vport_destroy;
+	idpf_vc_vport_ena_dis;
+
 	idpf_vport_deinit;
 	idpf_vport_info_init;
 	idpf_vport_init;
@@ -38,24 +57,5 @@ INTERNAL {
 	idpf_vport_irq_unmap_config;
 	idpf_vport_rss_config;
 
-	idpf_execute_vc_cmd;
-	idpf_vc_alloc_vectors;
-	idpf_vc_check_api_version;
-	idpf_vc_config_irq_map_unmap;
-	idpf_vc_config_rxq;
-	idpf_vc_config_txq;
-	idpf_vc_create_vport;
-	idpf_vc_dealloc_vectors;
-	idpf_vc_destroy_vport;
-	idpf_vc_ena_dis_queues;
-	idpf_vc_ena_dis_vport;
-	idpf_vc_get_caps;
-	idpf_vc_query_ptype_info;
-	idpf_vc_read_one_msg;
-	idpf_vc_set_rss_hash;
-	idpf_vc_set_rss_key;
-	idpf_vc_set_rss_lut;
-	idpf_vc_switch_queue;
-
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index b324c0dc83..33f5e90743 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -299,7 +299,7 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vec;
 	}
 
-	ret = idpf_vc_alloc_vectors(vport, req_vecs_num);
+	ret = idpf_vc_vectors_alloc(vport, req_vecs_num);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to allocate interrupt vectors");
 		goto err_vec;
@@ -321,7 +321,7 @@ idpf_dev_start(struct rte_eth_dev *dev)
 	idpf_set_rx_function(dev);
 	idpf_set_tx_function(dev);
 
-	ret = idpf_vc_ena_dis_vport(vport, true);
+	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
 		goto err_vport;
@@ -336,7 +336,7 @@ idpf_dev_start(struct rte_eth_dev *dev)
 err_startq:
 	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
 err_irq:
-	idpf_vc_dealloc_vectors(vport);
+	idpf_vc_vectors_dealloc(vport);
 err_vec:
 	return ret;
 }
@@ -349,13 +349,13 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 	if (vport->stopped == 1)
 		return 0;
 
-	idpf_vc_ena_dis_vport(vport, false);
+	idpf_vc_vport_ena_dis(vport, false);
 
 	idpf_stop_queues(dev);
 
 	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
 
-	idpf_vc_dealloc_vectors(vport);
+	idpf_vc_vectors_dealloc(vport);
 
 	vport->stopped = 1;
 
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 41e91b16b6..f41783daea 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -566,7 +566,7 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		dev->data->rx_queues[rx_queue_id];
 	int err = 0;
 
-	err = idpf_vc_config_rxq(vport, rxq);
+	err = idpf_vc_rxq_config(vport, rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
 		return err;
@@ -580,7 +580,7 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_switch_queue(vport, rx_queue_id, true, true);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -617,7 +617,7 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 		dev->data->tx_queues[tx_queue_id];
 	int err = 0;
 
-	err = idpf_vc_config_txq(vport, txq);
+	err = idpf_vc_txq_config(vport, txq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
 		return err;
@@ -631,7 +631,7 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_switch_queue(vport, tx_queue_id, false, true);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -654,7 +654,7 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (rx_queue_id >= dev->data->nb_rx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_switch_queue(vport, rx_queue_id, true, false);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -685,7 +685,7 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (tx_queue_id >= dev->data->nb_tx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_switch_queue(vport, tx_queue_id, false, false);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* RE: [PATCH v6 00/19] net/idpf: introduce idpf common module
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
                         ` (18 preceding siblings ...)
  2023-02-03  9:43       ` [PATCH v6 19/19] common/idpf: refine API name for virtual channel functions beilei.xing
@ 2023-02-06  2:58       ` Zhang, Qi Z
  2023-02-06  6:16         ` Xing, Beilei
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
  20 siblings, 1 reply; 79+ messages in thread
From: Zhang, Qi Z @ 2023-02-06  2:58 UTC (permalink / raw)
  To: Xing, Beilei, Wu, Jingjing; +Cc: dev



> -----Original Message-----
> From: Xing, Beilei <beilei.xing@intel.com>
> Sent: Friday, February 3, 2023 5:43 PM
> To: Wu, Jingjing <jingjing.wu@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>
> Subject: [PATCH v6 00/19] net/idpf: introduce idpf common module
> 
> From: Beilei Xing <beilei.xing@intel.com>
> 
> Refactor the idpf PMD by introducing an idpf common module, which will also
> be consumed by a new PMD - CPFL (Control Plane Function Library) PMD.
> 
> v2 changes:
>  - Refine irq map/unmap functions.
>  - Fix cross compile issue.
> v3 changes:
>  - Embed vport_info field into the vport structure.
>  - Refine APIs' name and order in version.map.
>  - Refine commit log.
> v4 changes:
>  - Refine commit log.
> v5 changes:
>  - Refine version.map.
>  - Fix typo.
>  - Return error log.
> v6 changes:
>  - Refine API name in common module.
> 
> Beilei Xing (19):
>   common/idpf: add adapter structure
>   common/idpf: add vport structure
>   common/idpf: add virtual channel functions
>   common/idpf: introduce adapter init and deinit
>   common/idpf: add vport init/deinit
>   common/idpf: add config RSS
>   common/idpf: add irq map/unmap
>   common/idpf: support get packet type
>   common/idpf: add vport info initialization
>   common/idpf: add vector flags in vport
>   common/idpf: add rxq and txq struct
>   common/idpf: add help functions for queue setup and release
>   common/idpf: add Rx and Tx data path
>   common/idpf: add vec queue setup
>   common/idpf: add avx512 for single queue model
>   common/idpf: refine API name for vport functions
>   common/idpf: refine API name for queue config module
>   common/idpf: refine API name for data path module
>   common/idpf: refine API name for virtual channel functions
> 
>  drivers/common/idpf/base/idpf_controlq_api.h  |    6 -
>  drivers/common/idpf/base/meson.build          |    2 +-
>  drivers/common/idpf/idpf_common_device.c      |  655 +++++
>  drivers/common/idpf/idpf_common_device.h      |  195 ++
>  drivers/common/idpf/idpf_common_logs.h        |   47 +
>  drivers/common/idpf/idpf_common_rxtx.c        | 1458 ++++++++++++
>  drivers/common/idpf/idpf_common_rxtx.h        |  278 +++
>  .../idpf/idpf_common_rxtx_avx512.c}           |   24 +-
>  .../idpf/idpf_common_virtchnl.c}              |  945 ++------
>  drivers/common/idpf/idpf_common_virtchnl.h    |   52 +
>  drivers/common/idpf/meson.build               |   38 +
>  drivers/common/idpf/version.map               |   61 +-
>  drivers/net/idpf/idpf_ethdev.c                |  552 +----
>  drivers/net/idpf/idpf_ethdev.h                |  194 +-
>  drivers/net/idpf/idpf_logs.h                  |   24 -
>  drivers/net/idpf/idpf_rxtx.c                  | 2107 +++--------------
>  drivers/net/idpf/idpf_rxtx.h                  |  253 +-
>  drivers/net/idpf/meson.build                  |   18 -
>  18 files changed, 3442 insertions(+), 3467 deletions(-)  create mode 100644
> drivers/common/idpf/idpf_common_device.c
>  create mode 100644 drivers/common/idpf/idpf_common_device.h
>  create mode 100644 drivers/common/idpf/idpf_common_logs.h
>  create mode 100644 drivers/common/idpf/idpf_common_rxtx.c
>  create mode 100644 drivers/common/idpf/idpf_common_rxtx.h
>  rename drivers/{net/idpf/idpf_rxtx_vec_avx512.c =>
> common/idpf/idpf_common_rxtx_avx512.c} (97%)  rename
> drivers/{net/idpf/idpf_vchnl.c => common/idpf/idpf_common_virtchnl.c}
> (52%)  create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h
> 
> --
> 2.26.2	

Overall looks good to me, just a couple of things need to be fixed:

1. Fix the copyright date to 2023.
2. Fix some meson build warnings; you can use devtools/check-meson.py to check for them.


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 00/19] net/idpf: introduce idpf common module
  2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common module beilei.xing
                         ` (19 preceding siblings ...)
  2023-02-06  2:58       ` [PATCH v6 00/19] net/idpf: introduce idpf common module Zhang, Qi Z
@ 2023-02-06  5:45       ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 01/19] common/idpf: add adapter structure beilei.xing
                           ` (19 more replies)
  20 siblings, 20 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:45 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Refactor the idpf PMD by introducing an idpf common module, which will also
be consumed by a new PMD - CPFL (Control Plane Function Library) PMD.

v2 changes:
 - Refine irq map/unmap functions.
 - Fix cross compile issue.
v3 changes:
 - Embed vport_info field into the vport structure.
 - Refine APIs' name and order in version.map.
 - Refine commit log.
v4 changes:
 - Refine commit log.
v5 changes:
 - Refine version.map.
 - Fix typo.
 - Return error log.
v6 changes:
 - Refine API name in common module.
v7 changes:
 - Change new files' copyright date to 2023.
 - Correct format for meson.build.
 - Change rte_atomic usages to compiler atomic built-ins.
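
(A minimal sketch of the last item, assuming the 32-bit pending-command
word used elsewhere in this series; the function and variable names below
are illustrative, not taken from the patches:)

	#include <stdbool.h>
	#include <stdint.h>

	static inline bool
	cmd_mark_pending(uint32_t *pend_cmd, uint32_t ops)
	{
		uint32_t expected = 0;	/* "none pending" sentinel, e.g.
					 * VIRTCHNL2_OP_UNKNOWN in the series
					 */

		/* GCC/Clang built-in replacing the former rte_atomic call:
		 * stores ops and returns true only if *pend_cmd still held
		 * the sentinel, i.e. no other command was in flight.
		 */
		return __atomic_compare_exchange_n(pend_cmd, &expected, ops,
						   false, __ATOMIC_ACQUIRE,
						   __ATOMIC_ACQUIRE);
	}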

Beilei Xing (19):
  common/idpf: add adapter structure
  common/idpf: add vport structure
  common/idpf: add virtual channel functions
  common/idpf: introduce adapter init and deinit
  common/idpf: add vport init/deinit
  common/idpf: add config RSS
  common/idpf: add irq map/unmap
  common/idpf: support get packet type
  common/idpf: add vport info initialization
  common/idpf: add vector flags in vport
  common/idpf: add rxq and txq struct
  common/idpf: add help functions for queue setup and release
  common/idpf: add Rx and Tx data path
  common/idpf: add vec queue setup
  common/idpf: add avx512 for single queue model
  common/idpf: refine API name for vport functions
  common/idpf: refine API name for queue config module
  common/idpf: refine API name for data path module
  common/idpf: refine API name for virtual channel functions

 drivers/common/idpf/base/idpf_controlq_api.h  |    6 -
 drivers/common/idpf/base/meson.build          |    2 +-
 drivers/common/idpf/idpf_common_device.c      |  655 +++++
 drivers/common/idpf/idpf_common_device.h      |  195 ++
 drivers/common/idpf/idpf_common_logs.h        |   47 +
 drivers/common/idpf/idpf_common_rxtx.c        | 1458 ++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h        |  278 +++
 .../idpf/idpf_common_rxtx_avx512.c}           |   26 +-
 .../idpf/idpf_common_virtchnl.c}              |  947 ++------
 drivers/common/idpf/idpf_common_virtchnl.h    |   52 +
 drivers/common/idpf/meson.build               |   35 +
 drivers/common/idpf/version.map               |   61 +-
 drivers/net/idpf/idpf_ethdev.c                |  552 +----
 drivers/net/idpf/idpf_ethdev.h                |  194 +-
 drivers/net/idpf/idpf_logs.h                  |   24 -
 drivers/net/idpf/idpf_rxtx.c                  | 2107 +++--------------
 drivers/net/idpf/idpf_rxtx.h                  |  253 +-
 drivers/net/idpf/meson.build                  |   18 -
 18 files changed, 3441 insertions(+), 3469 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_device.c
 create mode 100644 drivers/common/idpf/idpf_common_device.h
 create mode 100644 drivers/common/idpf/idpf_common_logs.h
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.c
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.h
 rename drivers/{net/idpf/idpf_rxtx_vec_avx512.c => common/idpf/idpf_common_rxtx_avx512.c} (97%)
 rename drivers/{net/idpf/idpf_vchnl.c => common/idpf/idpf_common_virtchnl.c} (51%)
 create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h

-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 01/19] common/idpf: add adapter structure
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 02/19] common/idpf: add vport structure beilei.xing
                           ` (18 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Add structure idpf_adapter to the common module; it holds the basic
fields shared by all consumers.
Introduce structure idpf_adapter_ext in the PMD; it embeds idpf_adapter
and carries the extra, PMD-specific fields.
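
The base structure is embedded as a member of the extended one, so the
PMD can convert a common-module pointer back with container_of(). A
minimal sketch of that pattern (names simplified; only the embedded
`base' member and the container_of() conversion mirror the patch below):

	#include <stdint.h>
	#include <rte_common.h>		/* container_of() */

	struct base_dev {		/* shared part, common module */
		uint32_t pend_cmd;
	};

	struct ext_dev {		/* PMD-private wrapper */
		struct base_dev base;	/* embedded base, not a pointer */
		uint32_t txq_model;	/* PMD-only field */
	};

	/* Recover the wrapper from a pointer to its embedded base;
	 * this is what the IDPF_ADAPTER_TO_EXT() macro in this patch does.
	 */
	static inline struct ext_dev *
	to_ext(struct base_dev *b)
	{
		return container_of(b, struct ext_dev, base);
	}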

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h | 20 ++++++
 drivers/net/idpf/idpf_ethdev.c           | 91 ++++++++++--------------
 drivers/net/idpf/idpf_ethdev.h           | 25 +++----
 drivers/net/idpf/idpf_rxtx.c             | 16 ++---
 drivers/net/idpf/idpf_rxtx.h             |  4 +-
 drivers/net/idpf/idpf_rxtx_vec_avx512.c  |  3 +-
 drivers/net/idpf/idpf_vchnl.c            | 30 ++++----
 7 files changed, 99 insertions(+), 90 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_device.h

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
new file mode 100644
index 0000000000..358e68cb8c
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_DEVICE_H_
+#define _IDPF_COMMON_DEVICE_H_
+
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+struct idpf_adapter {
+	struct idpf_hw hw;
+	struct virtchnl2_version_info virtchnl_version;
+	struct virtchnl2_get_capabilities caps;
+	volatile uint32_t pend_cmd; /* pending command not finished */
+	uint32_t cmd_retval; /* return value of the cmd response from cp */
+	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
+};
+
+#endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 3f1b77144c..1b13d081a7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -53,8 +53,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
 
-	dev_info->max_rx_queues = adapter->caps->max_rx_q;
-	dev_info->max_tx_queues = adapter->caps->max_tx_q;
+	dev_info->max_rx_queues = adapter->caps.max_rx_q;
+	dev_info->max_tx_queues = adapter->caps.max_tx_q;
 	dev_info->min_rx_bufsize = IDPF_MIN_BUF_SIZE;
 	dev_info->max_rx_pktlen = vport->max_mtu + IDPF_ETH_OVERHEAD;
 
@@ -147,7 +147,7 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev,
 			 struct virtchnl2_create_vport *vport_info)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(vport->adapter);
 
 	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
 	if (adapter->txq_model == 0) {
@@ -379,7 +379,7 @@ idpf_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
-	if (adapter->caps->rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
 		ret = idpf_init_rss(vport);
 		if (ret != 0) {
 			PMD_INIT_LOG(ERR, "Failed to init rss");
@@ -420,7 +420,7 @@ idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 
 	/* Rx interrupt disabled, Map interrupt only for writeback */
 
-	/* The capability flags adapter->caps->other_caps should be
+	/* The capability flags adapter->caps.other_caps should be
 	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
 	 * condition should be updated when the FW can return the
 	 * correct flag bits.
@@ -518,9 +518,9 @@ static int
 idpf_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	uint16_t num_allocated_vectors =
-		adapter->caps->num_allocated_vectors;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
 	uint16_t req_vecs_num;
 	int ret;
 
@@ -596,7 +596,7 @@ static int
 idpf_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(vport->adapter);
 
 	idpf_dev_stop(dev);
 
@@ -728,7 +728,7 @@ parse_bool(const char *key, const char *value, void *args)
 }
 
 static int
-idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter,
+idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter,
 		   struct idpf_devargs *idpf_args)
 {
 	struct rte_devargs *devargs = pci_dev->device.devargs;
@@ -875,14 +875,14 @@ idpf_init_mbx(struct idpf_hw *hw)
 }
 
 static int
-idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
+idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
-	struct idpf_hw *hw = &adapter->hw;
+	struct idpf_hw *hw = &adapter->base.hw;
 	int ret = 0;
 
 	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
 	hw->hw_addr_len = pci_dev->mem_resource[0].len;
-	hw->back = adapter;
+	hw->back = &adapter->base;
 	hw->vendor_id = pci_dev->id.vendor_id;
 	hw->device_id = pci_dev->id.device_id;
 	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
@@ -902,15 +902,15 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 		goto err;
 	}
 
-	adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
-					IDPF_DFLT_MBX_BUF_SIZE, 0);
-	if (adapter->mbx_resp == NULL) {
+	adapter->base.mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
+					     IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (adapter->base.mbx_resp == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
 		ret = -ENOMEM;
 		goto err_mbx;
 	}
 
-	ret = idpf_vc_check_api_version(adapter);
+	ret = idpf_vc_check_api_version(&adapter->base);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to check api version");
 		goto err_api;
@@ -922,21 +922,13 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 		goto err_api;
 	}
 
-	adapter->caps = rte_zmalloc("idpf_caps",
-				sizeof(struct virtchnl2_get_capabilities), 0);
-	if (adapter->caps == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate idpf_caps memory");
-		ret = -ENOMEM;
-		goto err_api;
-	}
-
-	ret = idpf_vc_get_caps(adapter);
+	ret = idpf_vc_get_caps(&adapter->base);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to get capabilities");
-		goto err_caps;
+		goto err_api;
 	}
 
-	adapter->max_vport_nb = adapter->caps->max_vports;
+	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
 				      adapter->max_vport_nb *
@@ -945,7 +937,7 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 	if (adapter->vports == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
 		ret = -ENOMEM;
-		goto err_vports;
+		goto err_api;
 	}
 
 	adapter->max_rxq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
@@ -962,13 +954,9 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 
 	return ret;
 
-err_vports:
-err_caps:
-	rte_free(adapter->caps);
-	adapter->caps = NULL;
 err_api:
-	rte_free(adapter->mbx_resp);
-	adapter->mbx_resp = NULL;
+	rte_free(adapter->base.mbx_resp);
+	adapter->base.mbx_resp = NULL;
 err_mbx:
 	idpf_ctlq_deinit(hw);
 err:
@@ -995,7 +983,7 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 };
 
 static uint16_t
-idpf_vport_idx_alloc(struct idpf_adapter *ad)
+idpf_vport_idx_alloc(struct idpf_adapter_ext *ad)
 {
 	uint16_t vport_idx;
 	uint16_t i;
@@ -1018,13 +1006,13 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_vport_param *param = init_params;
-	struct idpf_adapter *adapter = param->adapter;
+	struct idpf_adapter_ext *adapter = param->adapter;
 	/* for sending create vport virtchnl msg prepare */
 	struct virtchnl2_create_vport vport_req_info;
 	int ret = 0;
 
 	dev->dev_ops = &idpf_eth_dev_ops;
-	vport->adapter = adapter;
+	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
 
@@ -1085,10 +1073,10 @@ static const struct rte_pci_id pci_id_idpf_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-struct idpf_adapter *
-idpf_find_adapter(struct rte_pci_device *pci_dev)
+struct idpf_adapter_ext *
+idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 {
-	struct idpf_adapter *adapter;
+	struct idpf_adapter_ext *adapter;
 	int found = 0;
 
 	if (pci_dev == NULL)
@@ -1110,17 +1098,14 @@ idpf_find_adapter(struct rte_pci_device *pci_dev)
 }
 
 static void
-idpf_adapter_rel(struct idpf_adapter *adapter)
+idpf_adapter_rel(struct idpf_adapter_ext *adapter)
 {
-	struct idpf_hw *hw = &adapter->hw;
+	struct idpf_hw *hw = &adapter->base.hw;
 
 	idpf_ctlq_deinit(hw);
 
-	rte_free(adapter->caps);
-	adapter->caps = NULL;
-
-	rte_free(adapter->mbx_resp);
-	adapter->mbx_resp = NULL;
+	rte_free(adapter->base.mbx_resp);
+	adapter->base.mbx_resp = NULL;
 
 	rte_free(adapter->vports);
 	adapter->vports = NULL;
@@ -1131,7 +1116,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	       struct rte_pci_device *pci_dev)
 {
 	struct idpf_vport_param vport_param;
-	struct idpf_adapter *adapter;
+	struct idpf_adapter_ext *adapter;
 	struct idpf_devargs devargs;
 	char name[RTE_ETH_NAME_MAX_LEN];
 	int i, retval;
@@ -1143,11 +1128,11 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		idpf_adapter_list_init = true;
 	}
 
-	adapter = idpf_find_adapter(pci_dev);
+	adapter = idpf_find_adapter_ext(pci_dev);
 	if (adapter == NULL) {
 		first_probe = true;
-		adapter = rte_zmalloc("idpf_adapter",
-						sizeof(struct idpf_adapter), 0);
+		adapter = rte_zmalloc("idpf_adapter_ext",
+				      sizeof(struct idpf_adapter_ext), 0);
 		if (adapter == NULL) {
 			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
 			return -ENOMEM;
@@ -1225,7 +1210,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 static int
 idpf_pci_remove(struct rte_pci_device *pci_dev)
 {
-	struct idpf_adapter *adapter = idpf_find_adapter(pci_dev);
+	struct idpf_adapter_ext *adapter = idpf_find_adapter_ext(pci_dev);
 	uint16_t port_id;
 
 	/* Ethdev created can be found RTE_ETH_FOREACH_DEV_OF through rte_device */
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index b0746e5041..e956fa989c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -15,6 +15,7 @@
 
 #include "idpf_logs.h"
 
+#include <idpf_common_device.h>
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
 
@@ -91,7 +92,7 @@ struct idpf_chunks_info {
 };
 
 struct idpf_vport_param {
-	struct idpf_adapter *adapter;
+	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
 	uint16_t idx;       /* index in adapter->vports[]*/
 };
@@ -144,17 +145,11 @@ struct idpf_devargs {
 	uint16_t req_vport_nb;
 };
 
-struct idpf_adapter {
-	TAILQ_ENTRY(idpf_adapter) next;
-	struct idpf_hw hw;
-	char name[IDPF_ADAPTER_NAME_LEN];
-
-	struct virtchnl2_version_info virtchnl_version;
-	struct virtchnl2_get_capabilities *caps;
+struct idpf_adapter_ext {
+	TAILQ_ENTRY(idpf_adapter_ext) next;
+	struct idpf_adapter base;
 
-	volatile uint32_t pend_cmd; /* pending command not finished */
-	uint32_t cmd_retval; /* return value of the cmd response from ipf */
-	uint8_t *mbx_resp; /* buffer to store the mailbox response from ipf */
+	char name[IDPF_ADAPTER_NAME_LEN];
 
 	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
 	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
@@ -182,10 +177,12 @@ struct idpf_adapter {
 	uint64_t time_hw;
 };
 
-TAILQ_HEAD(idpf_adapter_list, idpf_adapter);
+TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
 
 #define IDPF_DEV_TO_PCI(eth_dev)		\
 	RTE_DEV_TO_PCI((eth_dev)->device)
+#define IDPF_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct idpf_adapter_ext, base)
 
 /* structure used for sending and checking response of virtchnl ops */
 struct idpf_cmd_info {
@@ -234,10 +231,10 @@ atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
 	return !ret;
 }
 
-struct idpf_adapter *idpf_find_adapter(struct rte_pci_device *pci_dev);
+struct idpf_adapter_ext *idpf_find_adapter_ext(struct rte_pci_device *pci_dev);
 void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
 int idpf_vc_check_api_version(struct idpf_adapter *adapter);
-int idpf_get_pkt_type(struct idpf_adapter *adapter);
+int idpf_get_pkt_type(struct idpf_adapter_ext *adapter);
 int idpf_vc_get_caps(struct idpf_adapter *adapter);
 int idpf_vc_create_vport(struct idpf_vport *vport,
 			 struct virtchnl2_create_vport *vport_info);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 5aef8ba2b6..4845f2ea0a 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1384,7 +1384,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct idpf_rx_queue *rxq;
 	const uint32_t *ptype_tbl;
 	uint8_t status_err0_qw1;
-	struct idpf_adapter *ad;
+	struct idpf_adapter_ext *ad;
 	struct rte_mbuf *rxm;
 	uint16_t rx_id_bufq1;
 	uint16_t rx_id_bufq2;
@@ -1398,7 +1398,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	nb_rx = 0;
 	rxq = rx_queue;
-	ad = rxq->adapter;
+	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
 
 	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
 		return nb_rx;
@@ -1407,7 +1407,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
 	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
 	rx_desc_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
+	ptype_tbl = ad->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
@@ -1791,7 +1791,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	const uint32_t *ptype_tbl;
 	uint16_t rx_id, nb_hold;
 	struct rte_eth_dev *dev;
-	struct idpf_adapter *ad;
+	struct idpf_adapter_ext *ad;
 	uint16_t rx_packet_len;
 	struct rte_mbuf *rxm;
 	struct rte_mbuf *nmb;
@@ -1805,14 +1805,14 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	nb_hold = 0;
 	rxq = rx_queue;
 
-	ad = rxq->adapter;
+	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
 
 	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
 		return nb_rx;
 
 	rx_id = rxq->rx_tail;
 	rx_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
+	ptype_tbl = ad->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
@@ -2221,7 +2221,7 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter *ad = vport->adapter;
+	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 	struct idpf_rx_queue *rxq;
 	int i;
 
@@ -2275,7 +2275,7 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter *ad = vport->adapter;
+	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 #ifdef CC_AVX512_SUPPORT
 	struct idpf_tx_queue *txq;
 	int i;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 730dc64ebc..047fc03614 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -247,11 +247,11 @@ void idpf_set_tx_function(struct rte_eth_dev *dev);
 /* Helper function to convert a 32b nanoseconds timestamp to 64b. */
 static inline uint64_t
 
-idpf_tstamp_convert_32b_64b(struct idpf_adapter *ad, uint32_t flag,
+idpf_tstamp_convert_32b_64b(struct idpf_adapter_ext *ad, uint32_t flag,
 			    uint32_t in_timestamp)
 {
 #ifdef RTE_ARCH_X86_64
-	struct idpf_hw *hw = &ad->hw;
+	struct idpf_hw *hw = &ad->base.hw;
 	const uint64_t mask = 0xFFFFFFFF;
 	uint32_t hi, lo, lo2, delta;
 	uint64_t ns;
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index fb2b6bb53c..efa7cd2187 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -245,7 +245,8 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
 				   struct rte_mbuf **rx_pkts,
 				   uint16_t nb_pkts)
 {
-	const uint32_t *type_table = rxq->adapter->ptype_tbl;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(rxq->adapter);
+	const uint32_t *type_table = adapter->ptype_tbl;
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
 						    rxq->mbuf_initializer);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 14b34619af..ca481bb915 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -311,13 +311,17 @@ idpf_vc_check_api_version(struct idpf_adapter *adapter)
 }
 
 int __rte_cold
-idpf_get_pkt_type(struct idpf_adapter *adapter)
+idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 {
 	struct virtchnl2_get_ptype_info *ptype_info;
-	uint16_t ptype_recvd = 0, ptype_offset, i, j;
+	struct idpf_adapter *base;
+	uint16_t ptype_offset, i, j;
+	uint16_t ptype_recvd = 0;
 	int ret;
 
-	ret = idpf_vc_query_ptype_info(adapter);
+	base = &adapter->base;
+
+	ret = idpf_vc_query_ptype_info(base);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Fail to query packet type information");
 		return ret;
@@ -328,7 +332,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 			return -ENOMEM;
 
 	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_read_one_msg(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
+		ret = idpf_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
 					IDPF_DFLT_MBX_BUF_SIZE, (u8 *)ptype_info);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR, "Fail to get packet type information");
@@ -515,7 +519,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 
 free_ptype_info:
 	rte_free(ptype_info);
-	clear_cmd(adapter);
+	clear_cmd(base);
 	return ret;
 }
 
@@ -577,7 +581,7 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 		return err;
 	}
 
-	rte_memcpy(adapter->caps, args.out_buffer, sizeof(caps_msg));
+	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
 
 	return 0;
 }
@@ -740,7 +744,8 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 int
 idpf_vc_config_rxqs(struct idpf_vport *vport)
 {
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
 	struct idpf_rx_queue **rxq =
 		(struct idpf_rx_queue **)vport->dev_data->rx_queues;
 	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
@@ -832,10 +837,10 @@ idpf_vc_config_rxqs(struct idpf_vport *vport)
 		args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
 		args.in_args = (uint8_t *)vc_rxqs;
 		args.in_args_size = size;
-		args.out_buffer = adapter->mbx_resp;
+		args.out_buffer = base->mbx_resp;
 		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-		err = idpf_execute_vc_cmd(adapter, &args);
+		err = idpf_execute_vc_cmd(base, &args);
 		rte_free(vc_rxqs);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
@@ -940,7 +945,8 @@ idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
 int
 idpf_vc_config_txqs(struct idpf_vport *vport)
 {
-	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
 	struct idpf_tx_queue **txq =
 		(struct idpf_tx_queue **)vport->dev_data->tx_queues;
 	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
@@ -1010,10 +1016,10 @@ idpf_vc_config_txqs(struct idpf_vport *vport)
 		args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
 		args.in_args = (uint8_t *)vc_txqs;
 		args.in_args_size = size;
-		args.out_buffer = adapter->mbx_resp;
+		args.out_buffer = base->mbx_resp;
 		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-		err = idpf_execute_vc_cmd(adapter, &args);
+		err = idpf_execute_vc_cmd(base, &args);
 		rte_free(vc_txqs);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 02/19] common/idpf: add vport structure
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
  2023-02-06  5:46         ` [PATCH v7 01/19] common/idpf: add adapter structure beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 03/19] common/idpf: add virtual channel functions beilei.xing
                           ` (17 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Move the idpf_vport structure to the common module and remove its
ethdev dependency. Also remove unused functions.
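
The ethdev decoupling amounts to storing the driver-layer handle behind
a void pointer, so the common module compiles without rte_ethdev.h. A
minimal sketch with illustrative names (only the `void *dev_data' member
mirrors the patch):

	#include <rte_ethdev.h>	/* needed by the PMD side only */

	struct vport {			/* lives in the common module */
		void *dev_data;		/* opaque driver-layer handle */
	};

	/* PMD helper: the only layer that knows the concrete type. */
	static inline struct rte_eth_dev_data *
	vport_dev_data(struct vport *v)
	{
		return (struct rte_eth_dev_data *)v->dev_data;
	}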

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h |  59 ++++++
 drivers/net/idpf/idpf_ethdev.c           |  10 +-
 drivers/net/idpf/idpf_ethdev.h           |  66 +-----
 drivers/net/idpf/idpf_rxtx.c             |   4 +-
 drivers/net/idpf/idpf_rxtx.h             |   3 +
 drivers/net/idpf/idpf_vchnl.c            | 252 +++--------------------
 6 files changed, 96 insertions(+), 298 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 358e68cb8c..8bd02b4fde 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -17,4 +17,63 @@ struct idpf_adapter {
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
 };
 
+struct idpf_chunks_info {
+	uint32_t tx_start_qid;
+	uint32_t rx_start_qid;
+	/* Valid only if split queue model */
+	uint32_t tx_compl_start_qid;
+	uint32_t rx_buf_start_qid;
+
+	uint64_t tx_qtail_start;
+	uint32_t tx_qtail_spacing;
+	uint64_t rx_qtail_start;
+	uint32_t rx_qtail_spacing;
+	uint64_t tx_compl_qtail_start;
+	uint32_t tx_compl_qtail_spacing;
+	uint64_t rx_buf_qtail_start;
+	uint32_t rx_buf_qtail_spacing;
+};
+
+struct idpf_vport {
+	struct idpf_adapter *adapter; /* Backreference to associated adapter */
+	struct virtchnl2_create_vport *vport_info; /* virtchnl response info handling */
+	uint16_t sw_idx; /* SW index in adapter->vports[]*/
+	uint16_t vport_id;
+	uint32_t txq_model;
+	uint32_t rxq_model;
+	uint16_t num_tx_q;
+	/* valid only if txq_model is split Q */
+	uint16_t num_tx_complq;
+	uint16_t num_rx_q;
+	/* valid only if rxq_model is split Q */
+	uint16_t num_rx_bufq;
+
+	uint16_t max_mtu;
+	uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+
+	enum virtchnl_rss_algorithm rss_algorithm;
+	uint16_t rss_key_size;
+	uint16_t rss_lut_size;
+
+	void *dev_data; /* Pointer to the device data */
+	uint16_t max_pkt_len; /* Maximum packet length */
+
+	/* RSS info */
+	uint32_t *rss_lut;
+	uint8_t *rss_key;
+	uint64_t rss_hf;
+
+	/* MSIX info*/
+	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
+	uint16_t max_vectors;
+	struct virtchnl2_alloc_vectors *recv_vectors;
+
+	/* Chunk info */
+	struct idpf_chunks_info chunks_info;
+
+	uint16_t devarg_id;
+
+	bool stopped;
+};
+
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 1b13d081a7..72a5c9f39b 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -275,11 +275,13 @@ static int
 idpf_init_rss(struct idpf_vport *vport)
 {
 	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
 	uint16_t i, nb_q, lut_size;
 	int ret = 0;
 
-	rss_conf = &vport->dev_data->dev_conf.rx_adv_conf.rss_conf;
-	nb_q = vport->dev_data->nb_rx_queues;
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
 
 	vport->rss_key = rte_zmalloc("rss_key",
 				     vport->rss_key_size, 0);
@@ -466,7 +468,7 @@ idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 	}
 	vport->qv_map = qv_map;
 
-	if (idpf_vc_config_irq_map_unmap(vport, true) != 0) {
+	if (idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, true) != 0) {
 		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
 		goto config_irq_map_err;
 	}
@@ -582,7 +584,7 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_stop_queues(dev);
 
-	idpf_vc_config_irq_map_unmap(vport, false);
+	idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, false);
 
 	if (vport->recv_vectors != NULL)
 		idpf_vc_dealloc_vectors(vport);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index e956fa989c..8c29019667 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -74,71 +74,12 @@ enum idpf_vc_result {
 	IDPF_MSG_CMD,      /* Read async command result */
 };
 
-struct idpf_chunks_info {
-	uint32_t tx_start_qid;
-	uint32_t rx_start_qid;
-	/* Valid only if split queue model */
-	uint32_t tx_compl_start_qid;
-	uint32_t rx_buf_start_qid;
-
-	uint64_t tx_qtail_start;
-	uint32_t tx_qtail_spacing;
-	uint64_t rx_qtail_start;
-	uint32_t rx_qtail_spacing;
-	uint64_t tx_compl_qtail_start;
-	uint32_t tx_compl_qtail_spacing;
-	uint64_t rx_buf_qtail_start;
-	uint32_t rx_buf_qtail_spacing;
-};
-
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
 	uint16_t idx;       /* index in adapter->vports[]*/
 };
 
-struct idpf_vport {
-	struct idpf_adapter *adapter; /* Backreference to associated adapter */
-	struct virtchnl2_create_vport *vport_info; /* virtchnl response info handling */
-	uint16_t sw_idx; /* SW index in adapter->vports[]*/
-	uint16_t vport_id;
-	uint32_t txq_model;
-	uint32_t rxq_model;
-	uint16_t num_tx_q;
-	/* valid only if txq_model is split Q */
-	uint16_t num_tx_complq;
-	uint16_t num_rx_q;
-	/* valid only if rxq_model is split Q */
-	uint16_t num_rx_bufq;
-
-	uint16_t max_mtu;
-	uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
-
-	enum virtchnl_rss_algorithm rss_algorithm;
-	uint16_t rss_key_size;
-	uint16_t rss_lut_size;
-
-	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
-	uint16_t max_pkt_len; /* Maximum packet length */
-
-	/* RSS info */
-	uint32_t *rss_lut;
-	uint8_t *rss_key;
-	uint64_t rss_hf;
-
-	/* MSIX info*/
-	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
-	uint16_t max_vectors;
-	struct virtchnl2_alloc_vectors *recv_vectors;
-
-	/* Chunk info */
-	struct idpf_chunks_info chunks_info;
-
-	uint16_t devarg_id;
-
-	bool stopped;
-};
-
 /* Struct used when parse driver specific devargs */
 struct idpf_devargs {
 	uint16_t req_vports[IDPF_MAX_VPORT_NUM];
@@ -242,15 +183,12 @@ int idpf_vc_destroy_vport(struct idpf_vport *vport);
 int idpf_vc_set_rss_key(struct idpf_vport *vport);
 int idpf_vc_set_rss_lut(struct idpf_vport *vport);
 int idpf_vc_set_rss_hash(struct idpf_vport *vport);
-int idpf_vc_config_rxqs(struct idpf_vport *vport);
-int idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id);
-int idpf_vc_config_txqs(struct idpf_vport *vport);
-int idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id);
 int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
 		      bool rx, bool on);
 int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
 int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, bool map);
+int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+				 uint16_t nb_rxq, bool map);
 int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
 int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
 int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 4845f2ea0a..918d156e03 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1066,7 +1066,7 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		dev->data->rx_queues[rx_queue_id];
 	int err = 0;
 
-	err = idpf_vc_config_rxq(vport, rx_queue_id);
+	err = idpf_vc_config_rxq(vport, rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
 		return err;
@@ -1117,7 +1117,7 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 		dev->data->tx_queues[tx_queue_id];
 	int err = 0;
 
-	err = idpf_vc_config_txq(vport, tx_queue_id);
+	err = idpf_vc_config_txq(vport, txq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
 		return err;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 047fc03614..9417651b3f 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -243,6 +243,9 @@ void idpf_stop_queues(struct rte_eth_dev *dev);
 void idpf_set_rx_function(struct rte_eth_dev *dev);
 void idpf_set_tx_function(struct rte_eth_dev *dev);
 
+int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
+int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+
 #define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
 /* Helper function to convert a 32b nanoseconds timestamp to 64b. */
 static inline uint64_t
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index ca481bb915..633d3295d3 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -742,121 +742,9 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 
 #define IDPF_RX_BUF_STRIDE		64
 int
-idpf_vc_config_rxqs(struct idpf_vport *vport)
-{
-	struct idpf_adapter *base = vport->adapter;
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
-	struct idpf_rx_queue **rxq =
-		(struct idpf_rx_queue **)vport->dev_data->rx_queues;
-	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
-	struct virtchnl2_rxq_info *rxq_info;
-	struct idpf_cmd_info args;
-	uint16_t total_qs, num_qs;
-	int size, i, j;
-	int err = 0;
-	int k = 0;
-
-	total_qs = vport->num_rx_q + vport->num_rx_bufq;
-	while (total_qs) {
-		if (total_qs > adapter->max_rxq_per_msg) {
-			num_qs = adapter->max_rxq_per_msg;
-			total_qs -= adapter->max_rxq_per_msg;
-		} else {
-			num_qs = total_qs;
-			total_qs = 0;
-		}
-
-		size = sizeof(*vc_rxqs) + (num_qs - 1) *
-			sizeof(struct virtchnl2_rxq_info);
-		vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
-		if (vc_rxqs == NULL) {
-			PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
-			err = -ENOMEM;
-			break;
-		}
-		vc_rxqs->vport_id = vport->vport_id;
-		vc_rxqs->num_qinfo = num_qs;
-		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-			for (i = 0; i < num_qs; i++, k++) {
-				rxq_info = &vc_rxqs->qinfo[i];
-				rxq_info->dma_ring_addr = rxq[k]->rx_ring_phys_addr;
-				rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-				rxq_info->queue_id = rxq[k]->queue_id;
-				rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-				rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
-				rxq_info->max_pkt_size = vport->max_pkt_len;
-
-				rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
-				rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-				rxq_info->ring_len = rxq[k]->nb_rx_desc;
-			}
-		} else {
-			for (i = 0; i < num_qs / 3; i++, k++) {
-				/* Rx queue */
-				rxq_info = &vc_rxqs->qinfo[i * 3];
-				rxq_info->dma_ring_addr =
-					rxq[k]->rx_ring_phys_addr;
-				rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-				rxq_info->queue_id = rxq[k]->queue_id;
-				rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-				rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
-				rxq_info->max_pkt_size = vport->max_pkt_len;
-
-				rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-				rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-				rxq_info->ring_len = rxq[k]->nb_rx_desc;
-				rxq_info->rx_bufq1_id = rxq[k]->bufq1->queue_id;
-				rxq_info->rx_bufq2_id = rxq[k]->bufq2->queue_id;
-				rxq_info->rx_buffer_low_watermark = 64;
-
-				/* Buffer queue */
-				for (j = 1; j <= IDPF_RX_BUFQ_PER_GRP; j++) {
-					struct idpf_rx_queue *bufq = j == 1 ?
-						rxq[k]->bufq1 : rxq[k]->bufq2;
-					rxq_info = &vc_rxqs->qinfo[i * 3 + j];
-					rxq_info->dma_ring_addr =
-						bufq->rx_ring_phys_addr;
-					rxq_info->type =
-						VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-					rxq_info->queue_id = bufq->queue_id;
-					rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-					rxq_info->data_buffer_size = bufq->rx_buf_len;
-					rxq_info->desc_ids =
-						VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-					rxq_info->ring_len = bufq->nb_rx_desc;
-
-					rxq_info->buffer_notif_stride =
-						IDPF_RX_BUF_STRIDE;
-					rxq_info->rx_buffer_low_watermark = 64;
-				}
-			}
-		}
-		memset(&args, 0, sizeof(args));
-		args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
-		args.in_args = (uint8_t *)vc_rxqs;
-		args.in_args_size = size;
-		args.out_buffer = base->mbx_resp;
-		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-		err = idpf_execute_vc_cmd(base, &args);
-		rte_free(vc_rxqs);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
-			break;
-		}
-	}
-
-	return err;
-}
-
-int
-idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
+idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 {
 	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_rx_queue **rxq =
-		(struct idpf_rx_queue **)vport->dev_data->rx_queues;
 	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
 	struct virtchnl2_rxq_info *rxq_info;
 	struct idpf_cmd_info args;
@@ -880,39 +768,38 @@ idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
 	vc_rxqs->num_qinfo = num_qs;
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq[rxq_id]->rx_ring_phys_addr;
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
 		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq[rxq_id]->queue_id;
+		rxq_info->queue_id = rxq->queue_id;
 		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-		rxq_info->data_buffer_size = rxq[rxq_id]->rx_buf_len;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
 		rxq_info->max_pkt_size = vport->max_pkt_len;
 
 		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
 		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
 
-		rxq_info->ring_len = rxq[rxq_id]->nb_rx_desc;
+		rxq_info->ring_len = rxq->nb_rx_desc;
 	}  else {
 		/* Rx queue */
 		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq[rxq_id]->rx_ring_phys_addr;
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
 		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq[rxq_id]->queue_id;
+		rxq_info->queue_id = rxq->queue_id;
 		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		rxq_info->data_buffer_size = rxq[rxq_id]->rx_buf_len;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
 		rxq_info->max_pkt_size = vport->max_pkt_len;
 
 		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
 		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
 
-		rxq_info->ring_len = rxq[rxq_id]->nb_rx_desc;
-		rxq_info->rx_bufq1_id = rxq[rxq_id]->bufq1->queue_id;
-		rxq_info->rx_bufq2_id = rxq[rxq_id]->bufq2->queue_id;
+		rxq_info->ring_len = rxq->nb_rx_desc;
+		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
+		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
 		rxq_info->rx_buffer_low_watermark = 64;
 
 		/* Buffer queue */
 		for (i = 1; i <= IDPF_RX_BUFQ_PER_GRP; i++) {
-			struct idpf_rx_queue *bufq =
-				i == 1 ? rxq[rxq_id]->bufq1 : rxq[rxq_id]->bufq2;
+			struct idpf_rx_queue *bufq = i == 1 ? rxq->bufq1 : rxq->bufq2;
 			rxq_info = &vc_rxqs->qinfo[i];
 			rxq_info->dma_ring_addr = bufq->rx_ring_phys_addr;
 			rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
@@ -943,99 +830,9 @@ idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id)
 }
 
 int
-idpf_vc_config_txqs(struct idpf_vport *vport)
-{
-	struct idpf_adapter *base = vport->adapter;
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(base);
-	struct idpf_tx_queue **txq =
-		(struct idpf_tx_queue **)vport->dev_data->tx_queues;
-	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
-	struct virtchnl2_txq_info *txq_info;
-	struct idpf_cmd_info args;
-	uint16_t total_qs, num_qs;
-	int size, i;
-	int err = 0;
-	int k = 0;
-
-	total_qs = vport->num_tx_q + vport->num_tx_complq;
-	while (total_qs) {
-		if (total_qs > adapter->max_txq_per_msg) {
-			num_qs = adapter->max_txq_per_msg;
-			total_qs -= adapter->max_txq_per_msg;
-		} else {
-			num_qs = total_qs;
-			total_qs = 0;
-		}
-		size = sizeof(*vc_txqs) + (num_qs - 1) *
-			sizeof(struct virtchnl2_txq_info);
-		vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
-		if (vc_txqs == NULL) {
-			PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
-			err = -ENOMEM;
-			break;
-		}
-		vc_txqs->vport_id = vport->vport_id;
-		vc_txqs->num_qinfo = num_qs;
-		if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-			for (i = 0; i < num_qs; i++, k++) {
-				txq_info = &vc_txqs->qinfo[i];
-				txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
-				txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-				txq_info->queue_id = txq[k]->queue_id;
-				txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-				txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
-				txq_info->ring_len = txq[k]->nb_tx_desc;
-			}
-		} else {
-			for (i = 0; i < num_qs / 2; i++, k++) {
-				/* txq info */
-				txq_info = &vc_txqs->qinfo[2 * i];
-				txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
-				txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-				txq_info->queue_id = txq[k]->queue_id;
-				txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-				txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-				txq_info->ring_len = txq[k]->nb_tx_desc;
-				txq_info->tx_compl_queue_id =
-					txq[k]->complq->queue_id;
-				txq_info->relative_queue_id = txq_info->queue_id;
-
-				/* tx completion queue info */
-				txq_info = &vc_txqs->qinfo[2 * i + 1];
-				txq_info->dma_ring_addr =
-					txq[k]->complq->tx_ring_phys_addr;
-				txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-				txq_info->queue_id = txq[k]->complq->queue_id;
-				txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-				txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-				txq_info->ring_len = txq[k]->complq->nb_tx_desc;
-			}
-		}
-
-		memset(&args, 0, sizeof(args));
-		args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
-		args.in_args = (uint8_t *)vc_txqs;
-		args.in_args_size = size;
-		args.out_buffer = base->mbx_resp;
-		args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-		err = idpf_execute_vc_cmd(base, &args);
-		rte_free(vc_txqs);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
-			break;
-		}
-	}
-
-	return err;
-}
-
-int
-idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id)
+idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 {
 	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_tx_queue **txq =
-		(struct idpf_tx_queue **)vport->dev_data->tx_queues;
 	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
 	struct virtchnl2_txq_info *txq_info;
 	struct idpf_cmd_info args;
@@ -1060,32 +857,32 @@ idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id)
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq[txq_id]->tx_ring_phys_addr;
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
 		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq[txq_id]->queue_id;
+		txq_info->queue_id = txq->queue_id;
 		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
 		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
-		txq_info->ring_len = txq[txq_id]->nb_tx_desc;
+		txq_info->ring_len = txq->nb_tx_desc;
 	} else {
 		/* txq info */
 		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq[txq_id]->tx_ring_phys_addr;
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
 		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq[txq_id]->queue_id;
+		txq_info->queue_id = txq->queue_id;
 		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
 		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq[txq_id]->nb_tx_desc;
-		txq_info->tx_compl_queue_id = txq[txq_id]->complq->queue_id;
+		txq_info->ring_len = txq->nb_tx_desc;
+		txq_info->tx_compl_queue_id = txq->complq->queue_id;
 		txq_info->relative_queue_id = txq_info->queue_id;
 
 		/* tx completion queue info */
 		txq_info = &vc_txqs->qinfo[1];
-		txq_info->dma_ring_addr = txq[txq_id]->complq->tx_ring_phys_addr;
+		txq_info->dma_ring_addr = txq->complq->tx_ring_phys_addr;
 		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		txq_info->queue_id = txq[txq_id]->complq->queue_id;
+		txq_info->queue_id = txq->complq->queue_id;
 		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
 		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq[txq_id]->complq->nb_tx_desc;
+		txq_info->ring_len = txq->complq->nb_tx_desc;
 	}
 
 	memset(&args, 0, sizeof(args));
@@ -1104,12 +901,11 @@ idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id)
 }
 
 int
-idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, bool map)
+idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_queue_vector_maps *map_info;
 	struct virtchnl2_queue_vector *vecmap;
-	uint16_t nb_rxq = vport->dev_data->nb_rx_queues;
 	struct idpf_cmd_info args;
 	int len, i, err = 0;
 
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 03/19] common/idpf: add virtual channel functions
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
  2023-02-06  5:46         ` [PATCH v7 01/19] common/idpf: add adapter structure beilei.xing
  2023-02-06  5:46         ` [PATCH v7 02/19] common/idpf: add vport structure beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 04/19] common/idpf: introduce adapter init and deinit beilei.xing
                           ` (16 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Move most of the virtual channel functions to the idpf common module.
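
All of the moved entry points share one fill-and-execute shape;
condensed below from the callers in this patch (idpf_cmd_info,
idpf_execute_vc_cmd() and the field names are the real ones; the
payload variables are stand-ins):

	struct idpf_cmd_info args;
	int err;

	memset(&args, 0, sizeof(args));
	args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;	/* op to issue */
	args.in_args = (uint8_t *)vc_txqs;		/* request payload */
	args.in_args_size = size;
	args.out_buffer = adapter->mbx_resp;		/* response buffer */
	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;

	err = idpf_execute_vc_cmd(adapter, &args);	/* send and poll reply */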

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/base/idpf_controlq_api.h |   4 -
 drivers/common/idpf/base/meson.build         |   2 +-
 drivers/common/idpf/idpf_common_device.c     |   8 +
 drivers/common/idpf/idpf_common_device.h     |  61 ++
 drivers/common/idpf/idpf_common_logs.h       |  23 +
 drivers/common/idpf/idpf_common_virtchnl.c   | 815 ++++++++++++++++++
 drivers/common/idpf/idpf_common_virtchnl.h   |  48 ++
 drivers/common/idpf/meson.build              |   5 +
 drivers/common/idpf/version.map              |  20 +-
 drivers/net/idpf/idpf_ethdev.c               |   9 +-
 drivers/net/idpf/idpf_ethdev.h               |  85 +-
 drivers/net/idpf/idpf_rxtx.c                 |   8 +-
 drivers/net/idpf/idpf_vchnl.c                | 817 +------------------
 13 files changed, 986 insertions(+), 919 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_device.c
 create mode 100644 drivers/common/idpf/idpf_common_logs.h
 create mode 100644 drivers/common/idpf/idpf_common_virtchnl.c
 create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h

diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 68ac0cfe70..891a0f10f6 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -177,7 +177,6 @@ void idpf_ctlq_remove(struct idpf_hw *hw,
 		      struct idpf_ctlq_info *cq);
 
 /* Sends messages to HW and will also free the buffer*/
-__rte_internal
 int idpf_ctlq_send(struct idpf_hw *hw,
 		   struct idpf_ctlq_info *cq,
 		   u16 num_q_msg,
@@ -186,17 +185,14 @@ int idpf_ctlq_send(struct idpf_hw *hw,
 /* Receives messages and called by interrupt handler/polling
  * initiated by app/process. Also caller is supposed to free the buffers
  */
-__rte_internal
 int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 		   struct idpf_ctlq_msg *q_msg);
 
 /* Reclaims send descriptors on HW write back */
-__rte_internal
 int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 		       struct idpf_ctlq_msg *msg_status[]);
 
 /* Indicate RX buffers are done being processed */
-__rte_internal
 int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw,
 			    struct idpf_ctlq_info *cq,
 			    u16 *buff_count,
diff --git a/drivers/common/idpf/base/meson.build b/drivers/common/idpf/base/meson.build
index 183587b51a..dc4b93c198 100644
--- a/drivers/common/idpf/base/meson.build
+++ b/drivers/common/idpf/base/meson.build
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2022 Intel Corporation
 
-sources = files(
+sources += files(
         'idpf_common.c',
         'idpf_controlq.c',
         'idpf_controlq_setup.c',
diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
new file mode 100644
index 0000000000..197fa03b7f
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <rte_log.h>
+#include <idpf_common_device.h>
+
+RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 8bd02b4fde..e86f8157e7 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -7,6 +7,12 @@
 
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
+#include <idpf_common_logs.h>
+
+#define IDPF_CTLQ_LEN		64
+#define IDPF_DFLT_MBX_BUF_SIZE	4096
+
+#define IDPF_MAX_PKT_TYPE	1024
 
 struct idpf_adapter {
 	struct idpf_hw hw;
@@ -76,4 +82,59 @@ struct idpf_vport {
 	bool stopped;
 };
 
+/* Message type read in virtual channel from PF */
+enum idpf_vc_result {
+	IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
+	IDPF_MSG_NON,      /* Read nothing from admin queue */
+	IDPF_MSG_SYS,      /* Read system msg from admin queue */
+	IDPF_MSG_CMD,      /* Read async command result */
+};
+
+/* structure used for sending and checking response of virtchnl ops */
+struct idpf_cmd_info {
+	uint32_t ops;
+	uint8_t *in_args;       /* buffer for sending */
+	uint32_t in_args_size;  /* buffer size for sending */
+	uint8_t *out_buffer;    /* buffer for response */
+	uint32_t out_size;      /* buffer size for response */
+};
+
+/* Notify that the current command is done. Only call after
+ * atomic_set_cmd has succeeded.
+ */
+static inline void
+notify_cmd(struct idpf_adapter *adapter, int msg_ret)
+{
+	adapter->cmd_retval = msg_ret;
+	/* Return value may be checked in another thread, need to ensure coherence. */
+	rte_wmb();
+	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
+}
+
+/* Clear the current command. Only call after
+ * atomic_set_cmd has succeeded.
+ */
+static inline void
+clear_cmd(struct idpf_adapter *adapter)
+{
+	/* Return value may be checked in another thread, need to ensure coherence. */
+	rte_wmb();
+	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
+	adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check whether there is a pending cmd in execution. If none, set the new command. */
+static inline bool
+atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
+{
+	uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
+	bool ret = __atomic_compare_exchange(&adapter->pend_cmd, &op_unk, &ops,
+					    0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
+
+	if (!ret)
+		DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
+
+	return !ret;
+}
+
 #endif /* _IDPF_COMMON_DEVICE_H_ */
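Side note on the three inline helpers moved into this header: together they
implement a one-slot, lock-free command mailbox. atomic_set_cmd() claims the
slot with an acquire compare-and-swap, and notify_cmd()/clear_cmd() release it
behind a write barrier so cmd_retval is visible to the polling thread. Below
is a minimal standalone sketch of the same protocol, using the GCC/Clang
__atomic builtins the driver relies on; all names in it are illustrative and
not part of the driver:

/* Minimal sketch of the single-pending-command protocol above.
 * OP_UNKNOWN (0) marks the slot as free; names are illustrative. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define OP_UNKNOWN 0u

static uint32_t pend_cmd = OP_UNKNOWN;

static bool
cmd_slot_claim(uint32_t op)
{
	uint32_t unk = OP_UNKNOWN;

	/* Succeeds only if no command is currently outstanding. */
	return __atomic_compare_exchange_n(&pend_cmd, &unk, op, false,
					   __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
}

static void
cmd_slot_release(void)
{
	/* Pairs with the acquire CAS in cmd_slot_claim(). */
	__atomic_store_n(&pend_cmd, OP_UNKNOWN, __ATOMIC_RELEASE);
}

int
main(void)
{
	if (cmd_slot_claim(42)) {
		/* ... send message, poll for the reply ... */
		cmd_slot_release();
		printf("command completed\n");
	}
	return 0;
}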
diff --git a/drivers/common/idpf/idpf_common_logs.h b/drivers/common/idpf/idpf_common_logs.h
new file mode 100644
index 0000000000..4c7978fb49
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_logs.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_LOGS_H_
+#define _IDPF_COMMON_LOGS_H_
+
+#include <rte_log.h>
+
+extern int idpf_common_logtype;
+
+#define DRV_LOG_RAW(level, ...)					\
+	rte_log(RTE_LOG_ ## level,				\
+		idpf_common_logtype,				\
+		RTE_FMT("%s(): "				\
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n",	\
+			__func__,				\
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define DRV_LOG(level, fmt, args...)		\
+	DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _IDPF_COMMON_LOGS_H_ */
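For reference, DRV_LOG is the common-module counterpart of the net PMD's
PMD_DRV_LOG, writing through the idpf_common_logtype registered in
idpf_common_device.c above. A hypothetical caller, assuming
idpf_common_logs.h is included (the helper name is made up for illustration):

/* Hypothetical helper showing DRV_LOG usage; DRV_LOG_RAW prefixes the
 * message with the calling function's name and appends a newline. */
static int
example_send(void)
{
	int err = -1; /* pretend the mailbox send failed */

	if (err != 0)
		DRV_LOG(ERR, "mailbox send failed: %d", err);
	return err;
}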
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
new file mode 100644
index 0000000000..0704a4fea2
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -0,0 +1,815 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <idpf_common_virtchnl.h>
+#include <idpf_common_logs.h>
+
+static int
+idpf_vc_clean(struct idpf_adapter *adapter)
+{
+	struct idpf_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
+	uint16_t num_q_msg = IDPF_CTLQ_LEN;
+	struct idpf_dma_mem *dma_mem;
+	int err;
+	uint32_t i;
+
+	for (i = 0; i < 10; i++) {
+		err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
+		msleep(20);
+		if (num_q_msg > 0)
+			break;
+	}
+	if (err != 0)
+		return err;
+
+	/* Empty queue is not an error */
+	for (i = 0; i < num_q_msg; i++) {
+		dma_mem = q_msg[i]->ctx.indirect.payload;
+		if (dma_mem != NULL) {
+			idpf_free_dma_mem(&adapter->hw, dma_mem);
+			rte_free(dma_mem);
+		}
+		rte_free(q_msg[i]);
+	}
+
+	return 0;
+}
+
+static int
+idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
+		 uint16_t msg_size, uint8_t *msg)
+{
+	struct idpf_ctlq_msg *ctlq_msg;
+	struct idpf_dma_mem *dma_mem;
+	int err;
+
+	err = idpf_vc_clean(adapter);
+	if (err != 0)
+		goto err;
+
+	ctlq_msg = rte_zmalloc(NULL, sizeof(struct idpf_ctlq_msg), 0);
+	if (ctlq_msg == NULL) {
+		err = -ENOMEM;
+		goto err;
+	}
+
+	dma_mem = rte_zmalloc(NULL, sizeof(struct idpf_dma_mem), 0);
+	if (dma_mem == NULL) {
+		err = -ENOMEM;
+		goto dma_mem_error;
+	}
+
+	dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
+	idpf_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
+	if (dma_mem->va == NULL) {
+		err = -ENOMEM;
+		goto dma_alloc_error;
+	}
+
+	memcpy(dma_mem->va, msg, msg_size);
+
+	ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_pf;
+	ctlq_msg->func_id = 0;
+	ctlq_msg->data_len = msg_size;
+	ctlq_msg->cookie.mbx.chnl_opcode = op;
+	ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
+	ctlq_msg->ctx.indirect.payload = dma_mem;
+
+	err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
+	if (err != 0)
+		goto send_error;
+
+	return 0;
+
+send_error:
+	idpf_free_dma_mem(&adapter->hw, dma_mem);
+dma_alloc_error:
+	rte_free(dma_mem);
+dma_mem_error:
+	rte_free(ctlq_msg);
+err:
+	return err;
+}
+
+static enum idpf_vc_result
+idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
+		      uint8_t *buf)
+{
+	struct idpf_hw *hw = &adapter->hw;
+	struct idpf_ctlq_msg ctlq_msg;
+	struct idpf_dma_mem *dma_mem = NULL;
+	enum idpf_vc_result result = IDPF_MSG_NON;
+	uint32_t opcode;
+	uint16_t pending = 1;
+	int ret;
+
+	ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+	if (ret != 0) {
+		DRV_LOG(DEBUG, "Can't read msg from AQ");
+		if (ret != -ENOMSG)
+			result = IDPF_MSG_ERR;
+		return result;
+	}
+
+	rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
+
+	opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+	adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+	DRV_LOG(DEBUG, "CQ from CP carries opcode %u, retval %d",
+		opcode, adapter->cmd_retval);
+
+	if (opcode == VIRTCHNL2_OP_EVENT) {
+		struct virtchnl2_event *ve = ctlq_msg.ctx.indirect.payload->va;
+
+		result = IDPF_MSG_SYS;
+		switch (ve->event) {
+		case VIRTCHNL2_EVENT_LINK_CHANGE:
+			/* TBD */
+			break;
+		default:
+			DRV_LOG(ERR, "%s: Unknown event %d from CP",
+				__func__, ve->event);
+			break;
+		}
+	} else {
+		/* async reply msg for a command previously issued by the driver */
+		result = IDPF_MSG_CMD;
+		if (opcode != adapter->pend_cmd) {
+			DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+				adapter->pend_cmd, opcode);
+			result = IDPF_MSG_ERR;
+		}
+	}
+
+	if (ctlq_msg.data_len != 0)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret != 0 && dma_mem != NULL)
+		idpf_free_dma_mem(hw, dma_mem);
+
+	return result;
+}
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS  10
+
+int
+idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
+		     uint8_t *buf)
+{
+	int err = 0;
+	int i = 0;
+	int ret;
+
+	do {
+		ret = idpf_read_msg_from_cp(adapter, buf_len, buf);
+		if (ret == IDPF_MSG_CMD)
+			break;
+		rte_delay_ms(ASQ_DELAY_MS);
+	} while (i++ < MAX_TRY_TIMES);
+	if (i >= MAX_TRY_TIMES ||
+	    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+		err = -EBUSY;
+		DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+			adapter->cmd_retval, ops);
+	}
+
+	return err;
+}
+
+int
+idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
+{
+	int err = 0;
+	int i = 0;
+	int ret;
+
+	if (atomic_set_cmd(adapter, args->ops))
+		return -EINVAL;
+
+	ret = idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args);
+	if (ret != 0) {
+		DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+		clear_cmd(adapter);
+		return ret;
+	}
+
+	switch (args->ops) {
+	case VIRTCHNL_OP_VERSION:
+	case VIRTCHNL2_OP_GET_CAPS:
+	case VIRTCHNL2_OP_CREATE_VPORT:
+	case VIRTCHNL2_OP_DESTROY_VPORT:
+	case VIRTCHNL2_OP_SET_RSS_KEY:
+	case VIRTCHNL2_OP_SET_RSS_LUT:
+	case VIRTCHNL2_OP_SET_RSS_HASH:
+	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+	case VIRTCHNL2_OP_ENABLE_QUEUES:
+	case VIRTCHNL2_OP_DISABLE_QUEUES:
+	case VIRTCHNL2_OP_ENABLE_VPORT:
+	case VIRTCHNL2_OP_DISABLE_VPORT:
+	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
+	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
+	case VIRTCHNL2_OP_ALLOC_VECTORS:
+	case VIRTCHNL2_OP_DEALLOC_VECTORS:
+		/* for init virtchnl ops, need to poll the response */
+		err = idpf_vc_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
+		clear_cmd(adapter);
+		break;
+	case VIRTCHNL2_OP_GET_PTYPE_INFO:
+		/* for multiple response messages,
+		 * do not handle the response here.
+		 */
+		break;
+	default:
+		/* For other virtchnl ops at runtime,
+		 * wait for the cmd done flag.
+		 */
+		do {
+			if (adapter->pend_cmd == VIRTCHNL2_OP_UNKNOWN)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+			/* If no msg was read or a sys event was read, continue */
+		} while (i++ < MAX_TRY_TIMES);
+		/* If no response is received, clear the command */
+		if (i >= MAX_TRY_TIMES  ||
+		    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -EBUSY;
+			DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+				adapter->cmd_retval, args->ops);
+			clear_cmd(adapter);
+		}
+		break;
+	}
+
+	return err;
+}
+
+int
+idpf_vc_check_api_version(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_version_info version, *pver;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&version, 0, sizeof(struct virtchnl2_version_info));
+	version.major = VIRTCHNL2_VERSION_MAJOR_2;
+	version.minor = VIRTCHNL2_VERSION_MINOR_0;
+
+	args.ops = VIRTCHNL_OP_VERSION;
+	args.in_args = (uint8_t *)&version;
+	args.in_args_size = sizeof(version);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR,
+			"Failed to execute command of VIRTCHNL_OP_VERSION");
+		return err;
+	}
+
+	pver = (struct virtchnl2_version_info *)args.out_buffer;
+	adapter->virtchnl_version = *pver;
+
+	if (adapter->virtchnl_version.major != VIRTCHNL2_VERSION_MAJOR_2 ||
+	    adapter->virtchnl_version.minor != VIRTCHNL2_VERSION_MINOR_0) {
+		DRV_LOG(ERR, "VIRTCHNL API version mismatch:(%u.%u)-(%u.%u)",
+			adapter->virtchnl_version.major,
+			adapter->virtchnl_version.minor,
+			VIRTCHNL2_VERSION_MAJOR_2,
+			VIRTCHNL2_VERSION_MINOR_0);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int
+idpf_vc_get_caps(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_get_capabilities caps_msg;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
+
+	caps_msg.csum_caps =
+		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4          |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP     |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP      |
+		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP     |
+		VIRTCHNL2_CAP_TX_CSUM_GENERIC          |
+		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4          |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP     |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP      |
+		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP     |
+		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
+
+	caps_msg.rss_caps =
+		VIRTCHNL2_CAP_RSS_IPV4_TCP             |
+		VIRTCHNL2_CAP_RSS_IPV4_UDP             |
+		VIRTCHNL2_CAP_RSS_IPV4_SCTP            |
+		VIRTCHNL2_CAP_RSS_IPV4_OTHER           |
+		VIRTCHNL2_CAP_RSS_IPV6_TCP             |
+		VIRTCHNL2_CAP_RSS_IPV6_UDP             |
+		VIRTCHNL2_CAP_RSS_IPV6_SCTP            |
+		VIRTCHNL2_CAP_RSS_IPV6_OTHER           |
+		VIRTCHNL2_CAP_RSS_IPV4_AH              |
+		VIRTCHNL2_CAP_RSS_IPV4_ESP             |
+		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP          |
+		VIRTCHNL2_CAP_RSS_IPV6_AH              |
+		VIRTCHNL2_CAP_RSS_IPV6_ESP             |
+		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
+
+	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR;
+
+	args.ops = VIRTCHNL2_OP_GET_CAPS;
+	args.in_args = (uint8_t *)&caps_msg;
+	args.in_args_size = sizeof(caps_msg);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR,
+			"Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
+		return err;
+	}
+
+	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
+
+	return 0;
+}
+
+int
+idpf_vc_create_vport(struct idpf_vport *vport,
+		     struct virtchnl2_create_vport *vport_req_info)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_create_vport vport_msg;
+	struct idpf_cmd_info args;
+	int err = -1;
+
+	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
+	vport_msg.vport_type = vport_req_info->vport_type;
+	vport_msg.txq_model = vport_req_info->txq_model;
+	vport_msg.rxq_model = vport_req_info->rxq_model;
+	vport_msg.num_tx_q = vport_req_info->num_tx_q;
+	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
+	vport_msg.num_rx_q = vport_req_info->num_rx_q;
+	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
+	args.in_args = (uint8_t *)&vport_msg;
+	args.in_args_size = sizeof(vport_msg);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR,
+			"Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
+		return err;
+	}
+
+	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+	return 0;
+}
+
+int
+idpf_vc_destroy_vport(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport vc_vport;
+	struct idpf_cmd_info args;
+	int err;
+
+	vc_vport.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
+	args.in_args = (uint8_t *)&vc_vport;
+	args.in_args_size = sizeof(vc_vport);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
+
+	return err;
+}
+
+int
+idpf_vc_set_rss_key(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_key *rss_key;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
+		(vport->rss_key_size - 1);
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (rss_key == NULL)
+		return -ENOMEM;
+
+	rss_key->vport_id = vport->vport_id;
+	rss_key->key_len = vport->rss_key_size;
+	rte_memcpy(rss_key->key, vport->rss_key,
+		   sizeof(rss_key->key[0]) * vport->rss_key_size);
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
+	args.in_args = (uint8_t *)rss_key;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+int
+idpf_vc_set_rss_lut(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_lut *rss_lut;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
+		(vport->rss_lut_size - 1);
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (rss_lut == NULL)
+		return -ENOMEM;
+
+	rss_lut->vport_id = vport->vport_id;
+	rss_lut->lut_entries = vport->rss_lut_size;
+	rte_memcpy(rss_lut->lut, vport->rss_lut,
+		   sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
+	args.in_args = (uint8_t *)rss_lut;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+idpf_vc_set_rss_hash(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_hash rss_hash;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_hash, 0, sizeof(rss_hash));
+	rss_hash.ptype_groups = vport->rss_hf;
+	rss_hash.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
+	args.in_args = (uint8_t *)&rss_hash;
+	args.in_args_size = sizeof(rss_hash);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
+
+	return err;
+}
+
+int
+idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_queue_vector_maps *map_info;
+	struct virtchnl2_queue_vector *vecmap;
+	struct idpf_cmd_info args;
+	int len, i, err = 0;
+
+	len = sizeof(struct virtchnl2_queue_vector_maps) +
+		(nb_rxq - 1) * sizeof(struct virtchnl2_queue_vector);
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (map_info == NULL)
+		return -ENOMEM;
+
+	map_info->vport_id = vport->vport_id;
+	map_info->num_qv_maps = nb_rxq;
+	for (i = 0; i < nb_rxq; i++) {
+		vecmap = &map_info->qv_maps[i];
+		vecmap->queue_id = vport->qv_map[i].queue_id;
+		vecmap->vector_id = vport->qv_map[i].vector_id;
+		vecmap->itr_idx = VIRTCHNL2_ITR_IDX_0;
+		vecmap->queue_type = VIRTCHNL2_QUEUE_TYPE_RX;
+	}
+
+	args.ops = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR :
+		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR;
+	args.in_args = (uint8_t *)map_info;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUE_VECTOR",
+			map ? "MAP" : "UNMAP");
+
+	rte_free(map_info);
+	return err;
+}
+
+int
+idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_alloc_vectors *alloc_vec;
+	struct idpf_cmd_info args;
+	int err, len;
+
+	len = sizeof(struct virtchnl2_alloc_vectors) +
+		(num_vectors - 1) * sizeof(struct virtchnl2_vector_chunk);
+	alloc_vec = rte_zmalloc("alloc_vec", len, 0);
+	if (alloc_vec == NULL)
+		return -ENOMEM;
+
+	alloc_vec->num_vectors = num_vectors;
+
+	args.ops = VIRTCHNL2_OP_ALLOC_VECTORS;
+	args.in_args = (uint8_t *)alloc_vec;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
+
+	if (vport->recv_vectors == NULL) {
+		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
+		if (vport->recv_vectors == NULL) {
+			rte_free(alloc_vec);
+			return -ENOMEM;
+		}
+	}
+
+	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
+	rte_free(alloc_vec);
+	return err;
+}
+
+int
+idpf_vc_dealloc_vectors(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_alloc_vectors *alloc_vec;
+	struct virtchnl2_vector_chunks *vcs;
+	struct idpf_cmd_info args;
+	int err, len;
+
+	alloc_vec = vport->recv_vectors;
+	vcs = &alloc_vec->vchunks;
+
+	len = sizeof(struct virtchnl2_vector_chunks) +
+		(vcs->num_vchunks - 1) * sizeof(struct virtchnl2_vector_chunk);
+
+	args.ops = VIRTCHNL2_OP_DEALLOC_VECTORS;
+	args.in_args = (uint8_t *)vcs;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_DEALLOC_VECTORS");
+
+	return err;
+}
+
+static int
+idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
+			  uint32_t type, bool on)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_del_ena_dis_queues *queue_select;
+	struct virtchnl2_queue_chunk *queue_chunk;
+	struct idpf_cmd_info args;
+	int err, len;
+
+	len = sizeof(struct virtchnl2_del_ena_dis_queues);
+	queue_select = rte_zmalloc("queue_select", len, 0);
+	if (queue_select == NULL)
+		return -ENOMEM;
+
+	queue_chunk = queue_select->chunks.chunks;
+	queue_select->chunks.num_chunks = 1;
+	queue_select->vport_id = vport->vport_id;
+
+	queue_chunk->type = type;
+	queue_chunk->start_queue_id = qid;
+	queue_chunk->num_queues = 1;
+
+	args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
+		VIRTCHNL2_OP_DISABLE_QUEUES;
+	args.in_args = (uint8_t *)queue_select;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+			on ? "ENABLE" : "DISABLE");
+
+	rte_free(queue_select);
+	return err;
+}
+
+int
+idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
+		     bool rx, bool on)
+{
+	uint32_t type;
+	int err, queue_id;
+
+	/* switch txq/rxq */
+	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
+
+	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+		queue_id = vport->chunks_info.rx_start_qid + qid;
+	else
+		queue_id = vport->chunks_info.tx_start_qid + qid;
+	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+	if (err != 0)
+		return err;
+
+	/* switch tx completion queue */
+	if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+		queue_id = vport->chunks_info.tx_compl_start_qid + qid;
+		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+		if (err != 0)
+			return err;
+	}
+
+	/* switch rx buffer queue */
+	if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+		queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
+		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+		if (err != 0)
+			return err;
+		queue_id++;
+		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+		if (err != 0)
+			return err;
+	}
+
+	return err;
+}
+
+#define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
+int
+idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_del_ena_dis_queues *queue_select;
+	struct virtchnl2_queue_chunk *queue_chunk;
+	uint32_t type;
+	struct idpf_cmd_info args;
+	uint16_t num_chunks;
+	int err, len;
+
+	num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+		num_chunks++;
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+		num_chunks++;
+
+	len = sizeof(struct virtchnl2_del_ena_dis_queues) +
+		sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
+	queue_select = rte_zmalloc("queue_select", len, 0);
+	if (queue_select == NULL)
+		return -ENOMEM;
+
+	queue_chunk = queue_select->chunks.chunks;
+	queue_select->chunks.num_chunks = num_chunks;
+	queue_select->vport_id = vport->vport_id;
+
+	type = VIRTCHNL2_QUEUE_TYPE_RX;
+	queue_chunk[type].type = type;
+	queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
+	queue_chunk[type].num_queues = vport->num_rx_q;
+
+	type = VIRTCHNL2_QUEUE_TYPE_TX;
+	queue_chunk[type].type = type;
+	queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
+	queue_chunk[type].num_queues = vport->num_tx_q;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+		queue_chunk[type].type = type;
+		queue_chunk[type].start_queue_id =
+			vport->chunks_info.rx_buf_start_qid;
+		queue_chunk[type].num_queues = vport->num_rx_bufq;
+	}
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+		queue_chunk[type].type = type;
+		queue_chunk[type].start_queue_id =
+			vport->chunks_info.tx_compl_start_qid;
+		queue_chunk[type].num_queues = vport->num_tx_complq;
+	}
+
+	args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
+		VIRTCHNL2_OP_DISABLE_QUEUES;
+	args.in_args = (uint8_t *)queue_select;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+			enable ? "ENABLE" : "DISABLE");
+
+	rte_free(queue_select);
+	return err;
+}
+
+int
+idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport vc_vport;
+	struct idpf_cmd_info args;
+	int err;
+
+	vc_vport.vport_id = vport->vport_id;
+	args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
+		VIRTCHNL2_OP_DISABLE_VPORT;
+	args.in_args = (uint8_t *)&vc_vport;
+	args.in_args_size = sizeof(vc_vport);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0) {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
+			enable ? "ENABLE" : "DISABLE");
+	}
+
+	return err;
+}
+
+int
+idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_get_ptype_info *ptype_info;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(struct virtchnl2_get_ptype_info);
+	ptype_info = rte_zmalloc("ptype_info", len, 0);
+	if (ptype_info == NULL)
+		return -ENOMEM;
+
+	ptype_info->start_ptype_id = 0;
+	ptype_info->num_ptypes = IDPF_MAX_PKT_TYPE;
+	args.ops = VIRTCHNL2_OP_GET_PTYPE_INFO;
+	args.in_args = (uint8_t *)ptype_info;
+	args.in_args_size = len;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_PTYPE_INFO");
+
+	rte_free(ptype_info);
+	return err;
+}
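Taken together, these functions give a consuming PMD a complete mailbox-based
control path. A sketch of the expected init-time call order follows; error
handling is abbreviated, the function name is illustrative, and filling in
vport_req_info is PMD-specific:

/* Illustrative init-time sequence for a PMD consuming this module.
 * Assumes vport->adapter has already been set to 'adapter'. */
static int
example_vport_bringup(struct idpf_adapter *adapter, struct idpf_vport *vport,
		      struct virtchnl2_create_vport *vport_req_info)
{
	int err;

	err = idpf_vc_check_api_version(adapter);   /* negotiate virtchnl2 */
	if (err != 0)
		return err;

	err = idpf_vc_get_caps(adapter);            /* cache device caps */
	if (err != 0)
		return err;

	err = idpf_vc_create_vport(vport, vport_req_info);
	if (err != 0)
		return err;

	/* ... configure queues, RSS, interrupts ... */
	return idpf_vc_ena_dis_vport(vport, true);  /* enable the vport */
}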
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
new file mode 100644
index 0000000000..3533eb9b3d
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_VIRTCHNL_H_
+#define _IDPF_COMMON_VIRTCHNL_H_
+
+#include <idpf_common_device.h>
+
+__rte_internal
+int idpf_vc_check_api_version(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vc_get_caps(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vc_create_vport(struct idpf_vport *vport,
+			 struct virtchnl2_create_vport *vport_info);
+__rte_internal
+int idpf_vc_destroy_vport(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_key(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_lut(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_hash(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
+			 bool rx, bool on);
+__rte_internal
+int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
+__rte_internal
+int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
+__rte_internal
+int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+				 uint16_t nb_rxq, bool map);
+__rte_internal
+int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
+__rte_internal
+int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
+			 uint16_t buf_len, uint8_t *buf);
+__rte_internal
+int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
+			struct idpf_cmd_info *args);
+
+#endif /* _IDPF_COMMON_VIRTCHNL_H_ */
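One usage note on idpf_vc_switch_queue(): a single call per logical queue is
sufficient even in the split queue model, because the helper internally also
toggles the paired Tx completion queue or the two Rx buffer queues. A
hypothetical wrapper a PMD might keep (illustrative only):

static int
example_rx_queue_toggle(struct idpf_vport *vport, uint16_t qid, bool on)
{
	/* rx = true selects the Rx queue; buffer queues follow internally. */
	return idpf_vc_switch_queue(vport, qid, true, on);
}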
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index 77d997b4a7..c8a514e02a 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -1,4 +1,9 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2022 Intel Corporation
 
+sources = files(
+        'idpf_common_device.c',
+        'idpf_common_virtchnl.c',
+)
+
 subdir('base')
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index bfb246c752..9bc0d2a909 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,10 +3,22 @@ INTERNAL {
 
 	idpf_ctlq_deinit;
 	idpf_ctlq_init;
-	idpf_ctlq_clean_sq;
-	idpf_ctlq_recv;
-	idpf_ctlq_send;
-	idpf_ctlq_post_rx_buffs;
+	idpf_execute_vc_cmd;
+	idpf_vc_alloc_vectors;
+	idpf_vc_check_api_version;
+	idpf_vc_config_irq_map_unmap;
+	idpf_vc_create_vport;
+	idpf_vc_dealloc_vectors;
+	idpf_vc_destroy_vport;
+	idpf_vc_ena_dis_queues;
+	idpf_vc_ena_dis_vport;
+	idpf_vc_get_caps;
+	idpf_vc_query_ptype_info;
+	idpf_vc_read_one_msg;
+	idpf_vc_set_rss_hash;
+	idpf_vc_set_rss_key;
+	idpf_vc_set_rss_lut;
+	idpf_vc_switch_queue;
 
 	local: *;
 };
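Note that every symbol listed in the INTERNAL block must also carry the
__rte_internal attribute at its declaration (as in idpf_common_virtchnl.h
above); otherwise the build's symbol checks are expected to fail. A
hypothetical future addition would touch both places:

/* Hypothetical new internal API, shown only to illustrate the pattern:
 * the header declaration is tagged __rte_internal, and the symbol name
 * "idpf_vc_get_stats;" is added, sorted, to the INTERNAL block above. */
__rte_internal
int idpf_vc_get_stats(struct idpf_vport *vport);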
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 72a5c9f39b..759fc981d7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -942,13 +942,6 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapt
 		goto err_api;
 	}
 
-	adapter->max_rxq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
-				sizeof(struct virtchnl2_config_rx_queues)) /
-				sizeof(struct virtchnl2_rxq_info);
-	adapter->max_txq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
-				sizeof(struct virtchnl2_config_tx_queues)) /
-				sizeof(struct virtchnl2_txq_info);
-
 	adapter->cur_vports = 0;
 	adapter->cur_vport_nb = 0;
 
@@ -1075,7 +1068,7 @@ static const struct rte_pci_id pci_id_idpf_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
-struct idpf_adapter_ext *
+static struct idpf_adapter_ext *
 idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 {
 	struct idpf_adapter_ext *adapter;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 8c29019667..efc540fa32 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -16,6 +16,7 @@
 #include "idpf_logs.h"
 
 #include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
 
@@ -31,8 +32,6 @@
 #define IDPF_RX_BUFQ_PER_GRP	2
 
 #define IDPF_CTLQ_ID		-1
-#define IDPF_CTLQ_LEN		64
-#define IDPF_DFLT_MBX_BUF_SIZE	4096
 
 #define IDPF_DFLT_Q_VEC_NUM	1
 #define IDPF_DFLT_INTERVAL	16
@@ -44,8 +43,6 @@
 
 #define IDPF_NUM_MACADDR_MAX	64
 
-#define IDPF_MAX_PKT_TYPE	1024
-
 #define IDPF_VLAN_TAG_SIZE	4
 #define IDPF_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
@@ -66,14 +63,6 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
-/* Message type read in virtual channel from PF */
-enum idpf_vc_result {
-	IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
-	IDPF_MSG_NON,      /* Read nothing from admin queue */
-	IDPF_MSG_SYS,      /* Read system msg from admin queue */
-	IDPF_MSG_CMD,      /* Read async command result */
-};
-
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
@@ -103,10 +92,6 @@ struct idpf_adapter_ext {
 
 	uint16_t used_vecs_num;
 
-	/* Max config queue number per VC message */
-	uint32_t max_rxq_per_msg;
-	uint32_t max_txq_per_msg;
-
 	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
 
 	bool rx_vec_allowed;
@@ -125,74 +110,6 @@ TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
 #define IDPF_ADAPTER_TO_EXT(p)					\
 	container_of((p), struct idpf_adapter_ext, base)
 
-/* structure used for sending and checking response of virtchnl ops */
-struct idpf_cmd_info {
-	uint32_t ops;
-	uint8_t *in_args;       /* buffer for sending */
-	uint32_t in_args_size;  /* buffer size for sending */
-	uint8_t *out_buffer;    /* buffer for response */
-	uint32_t out_size;      /* buffer size for response */
-};
-
-/* notify current command done. Only call in case execute
- * _atomic_set_cmd successfully.
- */
-static inline void
-notify_cmd(struct idpf_adapter *adapter, int msg_ret)
-{
-	adapter->cmd_retval = msg_ret;
-	/* Return value may be checked in anither thread, need to ensure the coherence. */
-	rte_wmb();
-	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
-}
-
-/* clear current command. Only call in case execute
- * _atomic_set_cmd successfully.
- */
-static inline void
-clear_cmd(struct idpf_adapter *adapter)
-{
-	/* Return value may be checked in anither thread, need to ensure the coherence. */
-	rte_wmb();
-	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
-	adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
-}
-
-/* Check there is pending cmd in execution. If none, set new command. */
-static inline bool
-atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
-{
-	uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
-	bool ret = __atomic_compare_exchange(&adapter->pend_cmd, &op_unk, &ops,
-					    0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
-
-	if (!ret)
-		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
-
-	return !ret;
-}
-
-struct idpf_adapter_ext *idpf_find_adapter_ext(struct rte_pci_device *pci_dev);
-void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
-int idpf_vc_check_api_version(struct idpf_adapter *adapter);
 int idpf_get_pkt_type(struct idpf_adapter_ext *adapter);
-int idpf_vc_get_caps(struct idpf_adapter *adapter);
-int idpf_vc_create_vport(struct idpf_vport *vport,
-			 struct virtchnl2_create_vport *vport_info);
-int idpf_vc_destroy_vport(struct idpf_vport *vport);
-int idpf_vc_set_rss_key(struct idpf_vport *vport);
-int idpf_vc_set_rss_lut(struct idpf_vport *vport);
-int idpf_vc_set_rss_hash(struct idpf_vport *vport);
-int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
-		      bool rx, bool on);
-int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
-int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
-				 uint16_t nb_rxq, bool map);
-int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
-int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
-int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
-int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
-		      uint16_t buf_len, uint8_t *buf);
 
 #endif /* _IDPF_ETHDEV_H_ */
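Since the common helpers now take a struct idpf_adapter, the PMD recovers its
extended state through the IDPF_ADAPTER_TO_EXT() container_of() macro kept in
this header. A small sketch of the pattern (the function name is
illustrative):

static inline struct idpf_adapter_ext *
example_to_ext(struct idpf_adapter *base)
{
	/* Works because 'base' is embedded inside struct idpf_adapter_ext. */
	return IDPF_ADAPTER_TO_EXT(base);
}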
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 918d156e03..ad3e31208d 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1080,7 +1080,7 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_switch_queue(vport, rx_queue_id, true, true);
+	err = idpf_vc_switch_queue(vport, rx_queue_id, true, true);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -1131,7 +1131,7 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_switch_queue(vport, tx_queue_id, false, true);
+	err = idpf_vc_switch_queue(vport, tx_queue_id, false, true);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -1154,7 +1154,7 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (rx_queue_id >= dev->data->nb_rx_queues)
 		return -EINVAL;
 
-	err = idpf_switch_queue(vport, rx_queue_id, true, false);
+	err = idpf_vc_switch_queue(vport, rx_queue_id, true, false);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -1185,7 +1185,7 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (tx_queue_id >= dev->data->nb_tx_queues)
 		return -EINVAL;
 
-	err = idpf_switch_queue(vport, tx_queue_id, false, false);
+	err = idpf_vc_switch_queue(vport, tx_queue_id, false, false);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 633d3295d3..6f4eb52beb 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -23,293 +23,6 @@
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
 
-static int
-idpf_vc_clean(struct idpf_adapter *adapter)
-{
-	struct idpf_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
-	uint16_t num_q_msg = IDPF_CTLQ_LEN;
-	struct idpf_dma_mem *dma_mem;
-	int err;
-	uint32_t i;
-
-	for (i = 0; i < 10; i++) {
-		err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
-		msleep(20);
-		if (num_q_msg > 0)
-			break;
-	}
-	if (err != 0)
-		return err;
-
-	/* Empty queue is not an error */
-	for (i = 0; i < num_q_msg; i++) {
-		dma_mem = q_msg[i]->ctx.indirect.payload;
-		if (dma_mem != NULL) {
-			idpf_free_dma_mem(&adapter->hw, dma_mem);
-			rte_free(dma_mem);
-		}
-		rte_free(q_msg[i]);
-	}
-
-	return 0;
-}
-
-static int
-idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
-		 uint16_t msg_size, uint8_t *msg)
-{
-	struct idpf_ctlq_msg *ctlq_msg;
-	struct idpf_dma_mem *dma_mem;
-	int err;
-
-	err = idpf_vc_clean(adapter);
-	if (err != 0)
-		goto err;
-
-	ctlq_msg = rte_zmalloc(NULL, sizeof(struct idpf_ctlq_msg), 0);
-	if (ctlq_msg == NULL) {
-		err = -ENOMEM;
-		goto err;
-	}
-
-	dma_mem = rte_zmalloc(NULL, sizeof(struct idpf_dma_mem), 0);
-	if (dma_mem == NULL) {
-		err = -ENOMEM;
-		goto dma_mem_error;
-	}
-
-	dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
-	idpf_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
-	if (dma_mem->va == NULL) {
-		err = -ENOMEM;
-		goto dma_alloc_error;
-	}
-
-	memcpy(dma_mem->va, msg, msg_size);
-
-	ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_pf;
-	ctlq_msg->func_id = 0;
-	ctlq_msg->data_len = msg_size;
-	ctlq_msg->cookie.mbx.chnl_opcode = op;
-	ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
-	ctlq_msg->ctx.indirect.payload = dma_mem;
-
-	err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
-	if (err != 0)
-		goto send_error;
-
-	return 0;
-
-send_error:
-	idpf_free_dma_mem(&adapter->hw, dma_mem);
-dma_alloc_error:
-	rte_free(dma_mem);
-dma_mem_error:
-	rte_free(ctlq_msg);
-err:
-	return err;
-}
-
-static enum idpf_vc_result
-idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
-		      uint8_t *buf)
-{
-	struct idpf_hw *hw = &adapter->hw;
-	struct idpf_ctlq_msg ctlq_msg;
-	struct idpf_dma_mem *dma_mem = NULL;
-	enum idpf_vc_result result = IDPF_MSG_NON;
-	uint32_t opcode;
-	uint16_t pending = 1;
-	int ret;
-
-	ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
-	if (ret != 0) {
-		PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
-		if (ret != -ENOMSG)
-			result = IDPF_MSG_ERR;
-		return result;
-	}
-
-	rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
-
-	opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
-	adapter->cmd_retval =
-		(enum virtchnl_status_code)rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
-
-	PMD_DRV_LOG(DEBUG, "CQ from CP carries opcode %u, retval %d",
-		    opcode, adapter->cmd_retval);
-
-	if (opcode == VIRTCHNL2_OP_EVENT) {
-		struct virtchnl2_event *ve =
-			(struct virtchnl2_event *)ctlq_msg.ctx.indirect.payload->va;
-
-		result = IDPF_MSG_SYS;
-		switch (ve->event) {
-		case VIRTCHNL2_EVENT_LINK_CHANGE:
-			/* TBD */
-			break;
-		default:
-			PMD_DRV_LOG(ERR, "%s: Unknown event %d from CP",
-				    __func__, ve->event);
-			break;
-		}
-	} else {
-		/* async reply msg on command issued by pf previously */
-		result = IDPF_MSG_CMD;
-		if (opcode != adapter->pend_cmd) {
-			PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
-				    adapter->pend_cmd, opcode);
-			result = IDPF_MSG_ERR;
-		}
-	}
-
-	if (ctlq_msg.data_len != 0)
-		dma_mem = ctlq_msg.ctx.indirect.payload;
-	else
-		pending = 0;
-
-	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
-	if (ret != 0 && dma_mem != NULL)
-		idpf_free_dma_mem(hw, dma_mem);
-
-	return result;
-}
-
-#define MAX_TRY_TIMES 200
-#define ASQ_DELAY_MS  10
-
-int
-idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
-		  uint8_t *buf)
-{
-	int err = 0;
-	int i = 0;
-	int ret;
-
-	do {
-		ret = idpf_read_msg_from_cp(adapter, buf_len, buf);
-		if (ret == IDPF_MSG_CMD)
-			break;
-		rte_delay_ms(ASQ_DELAY_MS);
-	} while (i++ < MAX_TRY_TIMES);
-	if (i >= MAX_TRY_TIMES ||
-	    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
-		err = -EBUSY;
-		PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
-			    adapter->cmd_retval, ops);
-	}
-
-	return err;
-}
-
-static int
-idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
-{
-	int err = 0;
-	int i = 0;
-	int ret;
-
-	if (atomic_set_cmd(adapter, args->ops))
-		return -EINVAL;
-
-	ret = idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args);
-	if (ret != 0) {
-		PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
-		clear_cmd(adapter);
-		return ret;
-	}
-
-	switch (args->ops) {
-	case VIRTCHNL_OP_VERSION:
-	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-		/* for init virtchnl ops, need to poll the response */
-		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
-		clear_cmd(adapter);
-		break;
-	case VIRTCHNL2_OP_GET_PTYPE_INFO:
-		/* for multuple response message,
-		 * do not handle the response here.
-		 */
-		break;
-	default:
-		/* For other virtchnl ops in running time,
-		 * wait for the cmd done flag.
-		 */
-		do {
-			if (adapter->pend_cmd == VIRTCHNL_OP_UNKNOWN)
-				break;
-			rte_delay_ms(ASQ_DELAY_MS);
-			/* If don't read msg or read sys event, continue */
-		} while (i++ < MAX_TRY_TIMES);
-		/* If there's no response is received, clear command */
-		if (i >= MAX_TRY_TIMES  ||
-		    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
-			err = -EBUSY;
-			PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
-				    adapter->cmd_retval, args->ops);
-			clear_cmd(adapter);
-		}
-		break;
-	}
-
-	return err;
-}
-
-int
-idpf_vc_check_api_version(struct idpf_adapter *adapter)
-{
-	struct virtchnl2_version_info version, *pver;
-	struct idpf_cmd_info args;
-	int err;
-
-	memset(&version, 0, sizeof(struct virtchnl_version_info));
-	version.major = VIRTCHNL2_VERSION_MAJOR_2;
-	version.minor = VIRTCHNL2_VERSION_MINOR_0;
-
-	args.ops = VIRTCHNL_OP_VERSION;
-	args.in_args = (uint8_t *)&version;
-	args.in_args_size = sizeof(version);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of VIRTCHNL_OP_VERSION");
-		return err;
-	}
-
-	pver = (struct virtchnl2_version_info *)args.out_buffer;
-	adapter->virtchnl_version = *pver;
-
-	if (adapter->virtchnl_version.major != VIRTCHNL2_VERSION_MAJOR_2 ||
-	    adapter->virtchnl_version.minor != VIRTCHNL2_VERSION_MINOR_0) {
-		PMD_INIT_LOG(ERR, "VIRTCHNL API version mismatch:(%u.%u)-(%u.%u)",
-			     adapter->virtchnl_version.major,
-			     adapter->virtchnl_version.minor,
-			     VIRTCHNL2_VERSION_MAJOR_2,
-			     VIRTCHNL2_VERSION_MINOR_0);
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
 int __rte_cold
 idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 {
@@ -332,8 +45,8 @@ idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 			return -ENOMEM;
 
 	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
-					IDPF_DFLT_MBX_BUF_SIZE, (u8 *)ptype_info);
+		ret = idpf_vc_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
+					   IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR, "Fail to get packet type information");
 			goto free_ptype_info;
@@ -349,7 +62,7 @@ idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 			uint32_t proto_hdr = 0;
 
 			ptype = (struct virtchnl2_ptype *)
-					((u8 *)ptype_info + ptype_offset);
+					((uint8_t *)ptype_info + ptype_offset);
 			ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
 			if (ptype_offset > IDPF_DFLT_MBX_BUF_SIZE) {
 				ret = -EINVAL;
@@ -523,223 +236,6 @@ idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
 	return ret;
 }
 
-int
-idpf_vc_get_caps(struct idpf_adapter *adapter)
-{
-	struct virtchnl2_get_capabilities caps_msg;
-	struct idpf_cmd_info args;
-	int err;
-
-	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
-
-	caps_msg.csum_caps =
-		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4          |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP     |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP      |
-		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP     |
-		VIRTCHNL2_CAP_TX_CSUM_GENERIC          |
-		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4          |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP     |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP      |
-		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP     |
-		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
-
-	caps_msg.rss_caps =
-		VIRTCHNL2_CAP_RSS_IPV4_TCP             |
-		VIRTCHNL2_CAP_RSS_IPV4_UDP             |
-		VIRTCHNL2_CAP_RSS_IPV4_SCTP            |
-		VIRTCHNL2_CAP_RSS_IPV4_OTHER           |
-		VIRTCHNL2_CAP_RSS_IPV6_TCP             |
-		VIRTCHNL2_CAP_RSS_IPV6_UDP             |
-		VIRTCHNL2_CAP_RSS_IPV6_SCTP            |
-		VIRTCHNL2_CAP_RSS_IPV6_OTHER           |
-		VIRTCHNL2_CAP_RSS_IPV4_AH              |
-		VIRTCHNL2_CAP_RSS_IPV4_ESP             |
-		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP          |
-		VIRTCHNL2_CAP_RSS_IPV6_AH              |
-		VIRTCHNL2_CAP_RSS_IPV6_ESP             |
-		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
-
-	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR;
-
-	args.ops = VIRTCHNL2_OP_GET_CAPS;
-	args.in_args = (uint8_t *)&caps_msg;
-	args.in_args_size = sizeof(caps_msg);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
-		return err;
-	}
-
-	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
-
-	return 0;
-}
-
-int
-idpf_vc_create_vport(struct idpf_vport *vport,
-		     struct virtchnl2_create_vport *vport_req_info)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_create_vport vport_msg;
-	struct idpf_cmd_info args;
-	int err = -1;
-
-	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
-	vport_msg.vport_type = vport_req_info->vport_type;
-	vport_msg.txq_model = vport_req_info->txq_model;
-	vport_msg.rxq_model = vport_req_info->rxq_model;
-	vport_msg.num_tx_q = vport_req_info->num_tx_q;
-	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
-	vport_msg.num_rx_q = vport_req_info->num_rx_q;
-	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
-	args.in_args = (uint8_t *)&vport_msg;
-	args.in_args_size = sizeof(vport_msg);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR,
-			    "Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
-		return err;
-	}
-
-	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
-	return 0;
-}
-
-int
-idpf_vc_destroy_vport(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_vport vc_vport;
-	struct idpf_cmd_info args;
-	int err;
-
-	vc_vport.vport_id = vport->vport_id;
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
-	args.in_args = (uint8_t *)&vc_vport;
-	args.in_args_size = sizeof(vc_vport);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
-
-	return err;
-}
-
-int
-idpf_vc_set_rss_key(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_rss_key *rss_key;
-	struct idpf_cmd_info args;
-	int len, err;
-
-	len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
-		(vport->rss_key_size - 1);
-	rss_key = rte_zmalloc("rss_key", len, 0);
-	if (rss_key == NULL)
-		return -ENOMEM;
-
-	rss_key->vport_id = vport->vport_id;
-	rss_key->key_len = vport->rss_key_size;
-	rte_memcpy(rss_key->key, vport->rss_key,
-		   sizeof(rss_key->key[0]) * vport->rss_key_size);
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
-	args.in_args = (uint8_t *)rss_key;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
-
-	rte_free(rss_key);
-	return err;
-}
-
-int
-idpf_vc_set_rss_lut(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_rss_lut *rss_lut;
-	struct idpf_cmd_info args;
-	int len, err;
-
-	len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
-		(vport->rss_lut_size - 1);
-	rss_lut = rte_zmalloc("rss_lut", len, 0);
-	if (rss_lut == NULL)
-		return -ENOMEM;
-
-	rss_lut->vport_id = vport->vport_id;
-	rss_lut->lut_entries = vport->rss_lut_size;
-	rte_memcpy(rss_lut->lut, vport->rss_lut,
-		   sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
-	args.in_args = (uint8_t *)rss_lut;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
-
-	rte_free(rss_lut);
-	return err;
-}
-
-int
-idpf_vc_set_rss_hash(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_rss_hash rss_hash;
-	struct idpf_cmd_info args;
-	int err;
-
-	memset(&rss_hash, 0, sizeof(rss_hash));
-	rss_hash.ptype_groups = vport->rss_hf;
-	rss_hash.vport_id = vport->vport_id;
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
-	args.in_args = (uint8_t *)&rss_hash;
-	args.in_args_size = sizeof(rss_hash);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
-
-	return err;
-}
-
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
@@ -899,310 +395,3 @@ idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 
 	return err;
 }
-
-int
-idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_queue_vector_maps *map_info;
-	struct virtchnl2_queue_vector *vecmap;
-	struct idpf_cmd_info args;
-	int len, i, err = 0;
-
-	len = sizeof(struct virtchnl2_queue_vector_maps) +
-		(nb_rxq - 1) * sizeof(struct virtchnl2_queue_vector);
-
-	map_info = rte_zmalloc("map_info", len, 0);
-	if (map_info == NULL)
-		return -ENOMEM;
-
-	map_info->vport_id = vport->vport_id;
-	map_info->num_qv_maps = nb_rxq;
-	for (i = 0; i < nb_rxq; i++) {
-		vecmap = &map_info->qv_maps[i];
-		vecmap->queue_id = vport->qv_map[i].queue_id;
-		vecmap->vector_id = vport->qv_map[i].vector_id;
-		vecmap->itr_idx = VIRTCHNL2_ITR_IDX_0;
-		vecmap->queue_type = VIRTCHNL2_QUEUE_TYPE_RX;
-	}
-
-	args.ops = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR :
-		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR;
-	args.in_args = (u8 *)map_info;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUE_VECTOR",
-			    map ? "MAP" : "UNMAP");
-
-	rte_free(map_info);
-	return err;
-}
-
-int
-idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_alloc_vectors *alloc_vec;
-	struct idpf_cmd_info args;
-	int err, len;
-
-	len = sizeof(struct virtchnl2_alloc_vectors) +
-		(num_vectors - 1) * sizeof(struct virtchnl2_vector_chunk);
-	alloc_vec = rte_zmalloc("alloc_vec", len, 0);
-	if (alloc_vec == NULL)
-		return -ENOMEM;
-
-	alloc_vec->num_vectors = num_vectors;
-
-	args.ops = VIRTCHNL2_OP_ALLOC_VECTORS;
-	args.in_args = (u8 *)alloc_vec;
-	args.in_args_size = sizeof(struct virtchnl2_alloc_vectors);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
-
-	if (vport->recv_vectors == NULL) {
-		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
-		if (vport->recv_vectors == NULL) {
-			rte_free(alloc_vec);
-			return -ENOMEM;
-		}
-	}
-
-	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
-	rte_free(alloc_vec);
-	return err;
-}
-
-int
-idpf_vc_dealloc_vectors(struct idpf_vport *vport)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_alloc_vectors *alloc_vec;
-	struct virtchnl2_vector_chunks *vcs;
-	struct idpf_cmd_info args;
-	int err, len;
-
-	alloc_vec = vport->recv_vectors;
-	vcs = &alloc_vec->vchunks;
-
-	len = sizeof(struct virtchnl2_vector_chunks) +
-		(vcs->num_vchunks - 1) * sizeof(struct virtchnl2_vector_chunk);
-
-	args.ops = VIRTCHNL2_OP_DEALLOC_VECTORS;
-	args.in_args = (u8 *)vcs;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_DEALLOC_VECTORS");
-
-	return err;
-}
-
-static int
-idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
-			  uint32_t type, bool on)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_del_ena_dis_queues *queue_select;
-	struct virtchnl2_queue_chunk *queue_chunk;
-	struct idpf_cmd_info args;
-	int err, len;
-
-	len = sizeof(struct virtchnl2_del_ena_dis_queues);
-	queue_select = rte_zmalloc("queue_select", len, 0);
-	if (queue_select == NULL)
-		return -ENOMEM;
-
-	queue_chunk = queue_select->chunks.chunks;
-	queue_select->chunks.num_chunks = 1;
-	queue_select->vport_id = vport->vport_id;
-
-	queue_chunk->type = type;
-	queue_chunk->start_queue_id = qid;
-	queue_chunk->num_queues = 1;
-
-	args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
-		VIRTCHNL2_OP_DISABLE_QUEUES;
-	args.in_args = (u8 *)queue_select;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
-			    on ? "ENABLE" : "DISABLE");
-
-	rte_free(queue_select);
-	return err;
-}
-
-int
-idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
-		     bool rx, bool on)
-{
-	uint32_t type;
-	int err, queue_id;
-
-	/* switch txq/rxq */
-	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
-
-	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
-		queue_id = vport->chunks_info.rx_start_qid + qid;
-	else
-		queue_id = vport->chunks_info.tx_start_qid + qid;
-	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-	if (err != 0)
-		return err;
-
-	/* switch tx completion queue */
-	if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		queue_id = vport->chunks_info.tx_compl_start_qid + qid;
-		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-		if (err != 0)
-			return err;
-	}
-
-	/* switch rx buffer queue */
-	if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-		queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
-		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-		if (err != 0)
-			return err;
-		queue_id++;
-		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
-		if (err != 0)
-			return err;
-	}
-
-	return err;
-}
-
-#define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
-int
-idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_del_ena_dis_queues *queue_select;
-	struct virtchnl2_queue_chunk *queue_chunk;
-	uint32_t type;
-	struct idpf_cmd_info args;
-	uint16_t num_chunks;
-	int err, len;
-
-	num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
-		num_chunks++;
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
-		num_chunks++;
-
-	len = sizeof(struct virtchnl2_del_ena_dis_queues) +
-		sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
-	queue_select = rte_zmalloc("queue_select", len, 0);
-	if (queue_select == NULL)
-		return -ENOMEM;
-
-	queue_chunk = queue_select->chunks.chunks;
-	queue_select->chunks.num_chunks = num_chunks;
-	queue_select->vport_id = vport->vport_id;
-
-	type = VIRTCHNL_QUEUE_TYPE_RX;
-	queue_chunk[type].type = type;
-	queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
-	queue_chunk[type].num_queues = vport->num_rx_q;
-
-	type = VIRTCHNL2_QUEUE_TYPE_TX;
-	queue_chunk[type].type = type;
-	queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
-	queue_chunk[type].num_queues = vport->num_tx_q;
-
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-		queue_chunk[type].type = type;
-		queue_chunk[type].start_queue_id =
-			vport->chunks_info.rx_buf_start_qid;
-		queue_chunk[type].num_queues = vport->num_rx_bufq;
-	}
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		queue_chunk[type].type = type;
-		queue_chunk[type].start_queue_id =
-			vport->chunks_info.tx_compl_start_qid;
-		queue_chunk[type].num_queues = vport->num_tx_complq;
-	}
-
-	args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
-		VIRTCHNL2_OP_DISABLE_QUEUES;
-	args.in_args = (u8 *)queue_select;
-	args.in_args_size = len;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
-			    enable ? "ENABLE" : "DISABLE");
-
-	rte_free(queue_select);
-	return err;
-}
-
-int
-idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_vport vc_vport;
-	struct idpf_cmd_info args;
-	int err;
-
-	vc_vport.vport_id = vport->vport_id;
-	args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
-			    VIRTCHNL2_OP_DISABLE_VPORT;
-	args.in_args = (uint8_t *)&vc_vport;
-	args.in_args_size = sizeof(vc_vport);
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0) {
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
-			    enable ? "ENABLE" : "DISABLE");
-	}
-
-	return err;
-}
-
-int
-idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
-{
-	struct virtchnl2_get_ptype_info *ptype_info;
-	struct idpf_cmd_info args;
-	int len, err;
-
-	len = sizeof(struct virtchnl2_get_ptype_info);
-	ptype_info = rte_zmalloc("ptype_info", len, 0);
-	if (ptype_info == NULL)
-		return -ENOMEM;
-
-	ptype_info->start_ptype_id = 0;
-	ptype_info->num_ptypes = IDPF_MAX_PKT_TYPE;
-	args.ops = VIRTCHNL2_OP_GET_PTYPE_INFO;
-	args.in_args = (u8 *)ptype_info;
-	args.in_args_size = len;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_PTYPE_INFO");
-
-	rte_free(ptype_info);
-	return err;
-}
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 04/19] common/idpf: introduce adapter init and deinit
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (2 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 03/19] common/idpf: add virtual channel functions beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 05/19] common/idpf: add vport init/deinit beilei.xing
                           ` (15 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Introduce idpf_adapter_init and idpf_adapter_deinit
functions in the common module, and introduce the
corresponding idpf_adapter_ext_init and
idpf_adapter_ext_deinit functions in the idpf PMD.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
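Not part of the patch itself: a minimal sketch, assuming the structures
touched above, of how a consuming PMD drives the new API. The function
names example_probe/example_remove are hypothetical.

/* Hypothetical consumer of the new common API, for illustration only. */
static int
example_probe(struct rte_pci_device *pci_dev, struct idpf_adapter *base)
{
	struct idpf_hw *hw = &base->hw;
	int ret;

	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
	hw->hw_addr_len = pci_dev->mem_resource[0].len;
	hw->back = base;

	/* Resets the PF, sets up the mailbox control queues, checks the
	 * virtchnl API version and retrieves device capabilities.
	 */
	ret = idpf_adapter_init(base);
	if (ret != 0)
		return ret;

	/* ... PMD-specific (ext) initialization goes here ... */
	return 0;
}

static void
example_remove(struct idpf_adapter *base)
{
	/* Deinits the control queues and frees the mailbox buffer. */
	idpf_adapter_deinit(base);
}
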
 drivers/common/idpf/base/idpf_controlq_api.h |   2 -
 drivers/common/idpf/idpf_common_device.c     | 153 ++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h     |   6 +
 drivers/common/idpf/version.map              |   4 +-
 drivers/net/idpf/idpf_ethdev.c               | 158 ++-----------------
 drivers/net/idpf/idpf_ethdev.h               |   2 -
 6 files changed, 178 insertions(+), 147 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 891a0f10f6..32d17baadf 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -161,7 +161,6 @@ enum idpf_mbx_opc {
 /* Will init all required q including default mb.  "q_info" is an array of
  * create_info structs equal to the number of control queues to be created.
  */
-__rte_internal
 int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		   struct idpf_ctlq_create_info *q_info);
 
@@ -199,7 +198,6 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw,
 			    struct idpf_dma_mem **buffs);
 
 /* Will destroy all q including the default mb */
-__rte_internal
 int idpf_ctlq_deinit(struct idpf_hw *hw);
 
 #endif /* _IDPF_CONTROLQ_API_H_ */
diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 197fa03b7f..3ba7ed78f5 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -4,5 +4,158 @@
 
 #include <rte_log.h>
 #include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+
+static void
+idpf_reset_pf(struct idpf_hw *hw)
+{
+	uint32_t reg;
+
+	reg = IDPF_READ_REG(hw, PFGEN_CTRL);
+	IDPF_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR));
+}
+
+#define IDPF_RESET_WAIT_CNT 100
+static int
+idpf_check_pf_reset_done(struct idpf_hw *hw)
+{
+	uint32_t reg;
+	int i;
+
+	for (i = 0; i < IDPF_RESET_WAIT_CNT; i++) {
+		reg = IDPF_READ_REG(hw, PFGEN_RSTAT);
+		if (reg != 0xFFFFFFFF && (reg & PFGEN_RSTAT_PFR_STATE_M))
+			return 0;
+		rte_delay_ms(1000);
+	}
+
+	DRV_LOG(ERR, "IDPF reset timeout");
+	return -EBUSY;
+}
+
+#define CTLQ_NUM 2
+static int
+idpf_init_mbx(struct idpf_hw *hw)
+{
+	struct idpf_ctlq_create_info ctlq_info[CTLQ_NUM] = {
+		{
+			.type = IDPF_CTLQ_TYPE_MAILBOX_TX,
+			.id = IDPF_CTLQ_ID,
+			.len = IDPF_CTLQ_LEN,
+			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+			.reg = {
+				.head = PF_FW_ATQH,
+				.tail = PF_FW_ATQT,
+				.len = PF_FW_ATQLEN,
+				.bah = PF_FW_ATQBAH,
+				.bal = PF_FW_ATQBAL,
+				.len_mask = PF_FW_ATQLEN_ATQLEN_M,
+				.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
+				.head_mask = PF_FW_ATQH_ATQH_M,
+			}
+		},
+		{
+			.type = IDPF_CTLQ_TYPE_MAILBOX_RX,
+			.id = IDPF_CTLQ_ID,
+			.len = IDPF_CTLQ_LEN,
+			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+			.reg = {
+				.head = PF_FW_ARQH,
+				.tail = PF_FW_ARQT,
+				.len = PF_FW_ARQLEN,
+				.bah = PF_FW_ARQBAH,
+				.bal = PF_FW_ARQBAL,
+				.len_mask = PF_FW_ARQLEN_ARQLEN_M,
+				.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
+				.head_mask = PF_FW_ARQH_ARQH_M,
+			}
+		}
+	};
+	struct idpf_ctlq_info *ctlq;
+	int ret;
+
+	ret = idpf_ctlq_init(hw, CTLQ_NUM, ctlq_info);
+	if (ret != 0)
+		return ret;
+
+	LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
+				 struct idpf_ctlq_info, cq_list) {
+		if (ctlq->q_id == IDPF_CTLQ_ID &&
+		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
+			hw->asq = ctlq;
+		if (ctlq->q_id == IDPF_CTLQ_ID &&
+		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
+			hw->arq = ctlq;
+	}
+
+	if (hw->asq == NULL || hw->arq == NULL) {
+		idpf_ctlq_deinit(hw);
+		ret = -ENOENT;
+	}
+
+	return ret;
+}
+
+int
+idpf_adapter_init(struct idpf_adapter *adapter)
+{
+	struct idpf_hw *hw = &adapter->hw;
+	int ret;
+
+	idpf_reset_pf(hw);
+	ret = idpf_check_pf_reset_done(hw);
+	if (ret != 0) {
+		DRV_LOG(ERR, "IDPF is still resetting");
+		goto err_check_reset;
+	}
+
+	ret = idpf_init_mbx(hw);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to init mailbox");
+		goto err_check_reset;
+	}
+
+	adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
+					IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (adapter->mbx_resp == NULL) {
+		DRV_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
+		ret = -ENOMEM;
+		goto err_mbx_resp;
+	}
+
+	ret = idpf_vc_check_api_version(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to check api version");
+		goto err_check_api;
+	}
+
+	ret = idpf_vc_get_caps(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to get capabilities");
+		goto err_check_api;
+	}
+
+	return 0;
+
+err_check_api:
+	rte_free(adapter->mbx_resp);
+	adapter->mbx_resp = NULL;
+err_mbx_resp:
+	idpf_ctlq_deinit(hw);
+err_check_reset:
+	return ret;
+}
+
+int
+idpf_adapter_deinit(struct idpf_adapter *adapter)
+{
+	struct idpf_hw *hw = &adapter->hw;
+
+	idpf_ctlq_deinit(hw);
+	rte_free(adapter->mbx_resp);
+	adapter->mbx_resp = NULL;
+
+	return 0;
+}
 
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index e86f8157e7..003a67cbfd 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -9,6 +9,7 @@
 #include <base/virtchnl2.h>
 #include <idpf_common_logs.h>
 
+#define IDPF_CTLQ_ID		-1
 #define IDPF_CTLQ_LEN		64
 #define IDPF_DFLT_MBX_BUF_SIZE	4096
 
@@ -137,4 +138,9 @@ atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
 	return !ret;
 }
 
+__rte_internal
+int idpf_adapter_init(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_adapter_deinit(struct idpf_adapter *adapter);
+
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 9bc0d2a909..8056996e3c 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -1,8 +1,8 @@
 INTERNAL {
 	global:
 
-	idpf_ctlq_deinit;
-	idpf_ctlq_init;
+	idpf_adapter_deinit;
+	idpf_adapter_init;
 	idpf_execute_vc_cmd;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 759fc981d7..c17c7bb472 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -786,148 +786,32 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 	return ret;
 }
 
-static void
-idpf_reset_pf(struct idpf_hw *hw)
-{
-	uint32_t reg;
-
-	reg = IDPF_READ_REG(hw, PFGEN_CTRL);
-	IDPF_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR));
-}
-
-#define IDPF_RESET_WAIT_CNT 100
 static int
-idpf_check_pf_reset_done(struct idpf_hw *hw)
+idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
-	uint32_t reg;
-	int i;
-
-	for (i = 0; i < IDPF_RESET_WAIT_CNT; i++) {
-		reg = IDPF_READ_REG(hw, PFGEN_RSTAT);
-		if (reg != 0xFFFFFFFF && (reg & PFGEN_RSTAT_PFR_STATE_M))
-			return 0;
-		rte_delay_ms(1000);
-	}
-
-	PMD_INIT_LOG(ERR, "IDPF reset timeout");
-	return -EBUSY;
-}
-
-#define CTLQ_NUM 2
-static int
-idpf_init_mbx(struct idpf_hw *hw)
-{
-	struct idpf_ctlq_create_info ctlq_info[CTLQ_NUM] = {
-		{
-			.type = IDPF_CTLQ_TYPE_MAILBOX_TX,
-			.id = IDPF_CTLQ_ID,
-			.len = IDPF_CTLQ_LEN,
-			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
-			.reg = {
-				.head = PF_FW_ATQH,
-				.tail = PF_FW_ATQT,
-				.len = PF_FW_ATQLEN,
-				.bah = PF_FW_ATQBAH,
-				.bal = PF_FW_ATQBAL,
-				.len_mask = PF_FW_ATQLEN_ATQLEN_M,
-				.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
-				.head_mask = PF_FW_ATQH_ATQH_M,
-			}
-		},
-		{
-			.type = IDPF_CTLQ_TYPE_MAILBOX_RX,
-			.id = IDPF_CTLQ_ID,
-			.len = IDPF_CTLQ_LEN,
-			.buf_size = IDPF_DFLT_MBX_BUF_SIZE,
-			.reg = {
-				.head = PF_FW_ARQH,
-				.tail = PF_FW_ARQT,
-				.len = PF_FW_ARQLEN,
-				.bah = PF_FW_ARQBAH,
-				.bal = PF_FW_ARQBAL,
-				.len_mask = PF_FW_ARQLEN_ARQLEN_M,
-				.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
-				.head_mask = PF_FW_ARQH_ARQH_M,
-			}
-		}
-	};
-	struct idpf_ctlq_info *ctlq;
-	int ret;
-
-	ret = idpf_ctlq_init(hw, CTLQ_NUM, ctlq_info);
-	if (ret != 0)
-		return ret;
-
-	LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
-				 struct idpf_ctlq_info, cq_list) {
-		if (ctlq->q_id == IDPF_CTLQ_ID &&
-		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
-			hw->asq = ctlq;
-		if (ctlq->q_id == IDPF_CTLQ_ID &&
-		    ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
-			hw->arq = ctlq;
-	}
-
-	if (hw->asq == NULL || hw->arq == NULL) {
-		idpf_ctlq_deinit(hw);
-		ret = -ENOENT;
-	}
-
-	return ret;
-}
-
-static int
-idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
-{
-	struct idpf_hw *hw = &adapter->base.hw;
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
 	int ret = 0;
 
 	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
 	hw->hw_addr_len = pci_dev->mem_resource[0].len;
-	hw->back = &adapter->base;
+	hw->back = base;
 	hw->vendor_id = pci_dev->id.vendor_id;
 	hw->device_id = pci_dev->id.device_id;
 	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
 
 	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
 
-	idpf_reset_pf(hw);
-	ret = idpf_check_pf_reset_done(hw);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "IDPF is still resetting");
-		goto err;
-	}
-
-	ret = idpf_init_mbx(hw);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to init mailbox");
-		goto err;
-	}
-
-	adapter->base.mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
-					     IDPF_DFLT_MBX_BUF_SIZE, 0);
-	if (adapter->base.mbx_resp == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
-		ret = -ENOMEM;
-		goto err_mbx;
-	}
-
-	ret = idpf_vc_check_api_version(&adapter->base);
+	ret = idpf_adapter_init(base);
 	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to check api version");
-		goto err_api;
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
 	}
 
 	ret = idpf_get_pkt_type(adapter);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to set ptype table");
-		goto err_api;
-	}
-
-	ret = idpf_vc_get_caps(&adapter->base);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to get capabilities");
-		goto err_api;
+		goto err_get_ptype;
 	}
 
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
@@ -939,7 +823,7 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapt
 	if (adapter->vports == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
 		ret = -ENOMEM;
-		goto err_api;
+		goto err_get_ptype;
 	}
 
 	adapter->cur_vports = 0;
@@ -949,12 +833,9 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapt
 
 	return ret;
 
-err_api:
-	rte_free(adapter->base.mbx_resp);
-	adapter->base.mbx_resp = NULL;
-err_mbx:
-	idpf_ctlq_deinit(hw);
-err:
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
 	return ret;
 }
 
@@ -1093,14 +974,9 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 }
 
 static void
-idpf_adapter_rel(struct idpf_adapter_ext *adapter)
+idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
-	struct idpf_hw *hw = &adapter->base.hw;
-
-	idpf_ctlq_deinit(hw);
-
-	rte_free(adapter->base.mbx_resp);
-	adapter->base.mbx_resp = NULL;
+	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
 	adapter->vports = NULL;
@@ -1133,7 +1009,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 			return -ENOMEM;
 		}
 
-		retval = idpf_adapter_init(pci_dev, adapter);
+		retval = idpf_adapter_ext_init(pci_dev, adapter);
 		if (retval != 0) {
 			PMD_INIT_LOG(ERR, "Failed to init adapter.");
 			return retval;
@@ -1196,7 +1072,7 @@ idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		rte_spinlock_lock(&idpf_adapter_lock);
 		TAILQ_REMOVE(&idpf_adapter_list, adapter, next);
 		rte_spinlock_unlock(&idpf_adapter_lock);
-		idpf_adapter_rel(adapter);
+		idpf_adapter_ext_deinit(adapter);
 		rte_free(adapter);
 	}
 	return retval;
@@ -1216,7 +1092,7 @@ idpf_pci_remove(struct rte_pci_device *pci_dev)
 	rte_spinlock_lock(&idpf_adapter_lock);
 	TAILQ_REMOVE(&idpf_adapter_list, adapter, next);
 	rte_spinlock_unlock(&idpf_adapter_lock);
-	idpf_adapter_rel(adapter);
+	idpf_adapter_ext_deinit(adapter);
 	rte_free(adapter);
 
 	return 0;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index efc540fa32..07ffe8e408 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -31,8 +31,6 @@
 #define IDPF_RXQ_PER_GRP	1
 #define IDPF_RX_BUFQ_PER_GRP	2
 
-#define IDPF_CTLQ_ID		-1
-
 #define IDPF_DFLT_Q_VEC_NUM	1
 #define IDPF_DFLT_INTERVAL	16
 
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 05/19] common/idpf: add vport init/deinit
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (3 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 04/19] common/idpf: introduce adapter init and deinit beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 06/19] common/idpf: add config RSS beilei.xing
                           ` (14 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Wenjun Wu

From: Beilei Xing <beilei.xing@intel.com>

Introduce idpf_vport_init and idpf_vport_deinit functions
in the common module.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
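Not part of the patch itself: an illustrative sketch of the intended
call pattern. example_vport_setup/example_vport_teardown are
hypothetical names; VIRTCHNL2_VPORT_TYPE_DEFAULT is assumed to come
from virtchnl2.h.

/* Hypothetical caller, for illustration only. */
static int
example_vport_setup(struct idpf_vport *vport, struct rte_eth_dev *dev)
{
	struct virtchnl2_create_vport vport_req_info;
	int ret;

	memset(&vport_req_info, 0, sizeof(vport_req_info));
	vport_req_info.vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);

	/* Creates the vport over virtchnl, parses the returned queue
	 * chunks and allocates the RSS key/lut buffers.
	 */
	ret = idpf_vport_init(vport, &vport_req_info, dev->data);
	if (ret != 0)
		return ret;

	/* ... MAC address and queue setup ... */
	return 0;
}

static void
example_vport_teardown(struct idpf_vport *vport)
{
	/* One call now releases everything idpf_vport_init() set up. */
	idpf_vport_deinit(vport);
}
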
 drivers/common/idpf/idpf_common_device.c   | 115 +++++++++++++++++
 drivers/common/idpf/idpf_common_device.h   |  13 +-
 drivers/common/idpf/idpf_common_virtchnl.c |  18 +--
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 138 ++-------------------
 5 files changed, 148 insertions(+), 138 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 3ba7ed78f5..79b7bef015 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -158,4 +158,119 @@ idpf_adapter_deinit(struct idpf_adapter *adapter)
 	return 0;
 }
 
+int
+idpf_vport_init(struct idpf_vport *vport,
+		struct virtchnl2_create_vport *create_vport_info,
+		void *dev_data)
+{
+	struct virtchnl2_create_vport *vport_info;
+	int i, type, ret;
+
+	ret = idpf_vc_create_vport(vport, create_vport_info);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to create vport.");
+		goto err_create_vport;
+	}
+
+	vport_info = &(vport->vport_info.info);
+	vport->vport_id = vport_info->vport_id;
+	vport->txq_model = vport_info->txq_model;
+	vport->rxq_model = vport_info->rxq_model;
+	vport->num_tx_q = vport_info->num_tx_q;
+	vport->num_tx_complq = vport_info->num_tx_complq;
+	vport->num_rx_q = vport_info->num_rx_q;
+	vport->num_rx_bufq = vport_info->num_rx_bufq;
+	vport->max_mtu = vport_info->max_mtu;
+	rte_memcpy(vport->default_mac_addr,
+		   vport_info->default_mac_addr, ETH_ALEN);
+	vport->rss_algorithm = vport_info->rss_algorithm;
+	vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+				      vport_info->rss_key_size);
+	vport->rss_lut_size = vport_info->rss_lut_size;
+
+	for (i = 0; i < vport_info->chunks.num_chunks; i++) {
+		type = vport_info->chunks.chunks[i].type;
+		switch (type) {
+		case VIRTCHNL2_QUEUE_TYPE_TX:
+			vport->chunks_info.tx_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.tx_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.tx_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		case VIRTCHNL2_QUEUE_TYPE_RX:
+			vport->chunks_info.rx_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.rx_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.rx_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+			vport->chunks_info.tx_compl_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.tx_compl_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.tx_compl_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+			vport->chunks_info.rx_buf_start_qid =
+				vport_info->chunks.chunks[i].start_queue_id;
+			vport->chunks_info.rx_buf_qtail_start =
+				vport_info->chunks.chunks[i].qtail_reg_start;
+			vport->chunks_info.rx_buf_qtail_spacing =
+				vport_info->chunks.chunks[i].qtail_reg_spacing;
+			break;
+		default:
+			DRV_LOG(ERR, "Unsupported queue type");
+			break;
+		}
+	}
+
+	vport->dev_data = dev_data;
+
+	vport->rss_key = rte_zmalloc("rss_key",
+				     vport->rss_key_size, 0);
+	if (vport->rss_key == NULL) {
+		DRV_LOG(ERR, "Failed to allocate RSS key");
+		ret = -ENOMEM;
+		goto err_rss_key;
+	}
+
+	vport->rss_lut = rte_zmalloc("rss_lut",
+				     sizeof(uint32_t) * vport->rss_lut_size, 0);
+	if (vport->rss_lut == NULL) {
+		DRV_LOG(ERR, "Failed to allocate RSS lut");
+		ret = -ENOMEM;
+		goto err_rss_lut;
+	}
+
+	return 0;
+
+err_rss_lut:
+	vport->dev_data = NULL;
+	rte_free(vport->rss_key);
+	vport->rss_key = NULL;
+err_rss_key:
+	idpf_vc_destroy_vport(vport);
+err_create_vport:
+	return ret;
+}
+int
+idpf_vport_deinit(struct idpf_vport *vport)
+{
+	rte_free(vport->rss_lut);
+	vport->rss_lut = NULL;
+
+	rte_free(vport->rss_key);
+	vport->rss_key = NULL;
+
+	vport->dev_data = NULL;
+
+	idpf_vc_destroy_vport(vport);
+
+	return 0;
+}
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 003a67cbfd..e9f7ed36d5 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -9,6 +9,8 @@
 #include <base/virtchnl2.h>
 #include <idpf_common_logs.h>
 
+#define IDPF_RSS_KEY_LEN	52
+
 #define IDPF_CTLQ_ID		-1
 #define IDPF_CTLQ_LEN		64
 #define IDPF_DFLT_MBX_BUF_SIZE	4096
@@ -43,7 +45,10 @@ struct idpf_chunks_info {
 
 struct idpf_vport {
 	struct idpf_adapter *adapter; /* Backreference to associated adapter */
-	struct virtchnl2_create_vport *vport_info; /* virtchnl response info handling */
+	union {
+		struct virtchnl2_create_vport info; /* virtchnl response info handling */
+		uint8_t data[IDPF_DFLT_MBX_BUF_SIZE];
+	} vport_info;
 	uint16_t sw_idx; /* SW index in adapter->vports[]*/
 	uint16_t vport_id;
 	uint32_t txq_model;
@@ -142,5 +147,11 @@ __rte_internal
 int idpf_adapter_init(struct idpf_adapter *adapter);
 __rte_internal
 int idpf_adapter_deinit(struct idpf_adapter *adapter);
+__rte_internal
+int idpf_vport_init(struct idpf_vport *vport,
+		    struct virtchnl2_create_vport *vport_req_info,
+		    void *dev_data);
+__rte_internal
+int idpf_vport_deinit(struct idpf_vport *vport);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 0704a4fea2..6cff79833f 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -355,7 +355,7 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 
 int
 idpf_vc_create_vport(struct idpf_vport *vport,
-		     struct virtchnl2_create_vport *vport_req_info)
+		     struct virtchnl2_create_vport *create_vport_info)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_create_vport vport_msg;
@@ -363,13 +363,13 @@ idpf_vc_create_vport(struct idpf_vport *vport,
 	int err = -1;
 
 	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
-	vport_msg.vport_type = vport_req_info->vport_type;
-	vport_msg.txq_model = vport_req_info->txq_model;
-	vport_msg.rxq_model = vport_req_info->rxq_model;
-	vport_msg.num_tx_q = vport_req_info->num_tx_q;
-	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
-	vport_msg.num_rx_q = vport_req_info->num_rx_q;
-	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
+	vport_msg.vport_type = create_vport_info->vport_type;
+	vport_msg.txq_model = create_vport_info->txq_model;
+	vport_msg.rxq_model = create_vport_info->rxq_model;
+	vport_msg.num_tx_q = create_vport_info->num_tx_q;
+	vport_msg.num_tx_complq = create_vport_info->num_tx_complq;
+	vport_msg.num_rx_q = create_vport_info->num_rx_q;
+	vport_msg.num_rx_bufq = create_vport_info->num_rx_bufq;
 
 	memset(&args, 0, sizeof(args));
 	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
@@ -385,7 +385,7 @@ idpf_vc_create_vport(struct idpf_vport *vport,
 		return err;
 	}
 
-	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+	rte_memcpy(&(vport->vport_info.info), args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
 	return 0;
 }
 
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 8056996e3c..c1ae5affa4 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -19,6 +19,8 @@ INTERNAL {
 	idpf_vc_set_rss_key;
 	idpf_vc_set_rss_lut;
 	idpf_vc_switch_queue;
+	idpf_vport_deinit;
+	idpf_vport_init;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index c17c7bb472..7a8fb6fd4a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -178,73 +178,6 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev,
 	return 0;
 }
 
-#define IDPF_RSS_KEY_LEN 52
-
-static int
-idpf_init_vport(struct idpf_vport *vport)
-{
-	struct virtchnl2_create_vport *vport_info = vport->vport_info;
-	int i, type;
-
-	vport->vport_id = vport_info->vport_id;
-	vport->txq_model = vport_info->txq_model;
-	vport->rxq_model = vport_info->rxq_model;
-	vport->num_tx_q = vport_info->num_tx_q;
-	vport->num_tx_complq = vport_info->num_tx_complq;
-	vport->num_rx_q = vport_info->num_rx_q;
-	vport->num_rx_bufq = vport_info->num_rx_bufq;
-	vport->max_mtu = vport_info->max_mtu;
-	rte_memcpy(vport->default_mac_addr,
-		   vport_info->default_mac_addr, ETH_ALEN);
-	vport->rss_algorithm = vport_info->rss_algorithm;
-	vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
-				     vport_info->rss_key_size);
-	vport->rss_lut_size = vport_info->rss_lut_size;
-
-	for (i = 0; i < vport_info->chunks.num_chunks; i++) {
-		type = vport_info->chunks.chunks[i].type;
-		switch (type) {
-		case VIRTCHNL2_QUEUE_TYPE_TX:
-			vport->chunks_info.tx_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.tx_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.tx_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		case VIRTCHNL2_QUEUE_TYPE_RX:
-			vport->chunks_info.rx_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.rx_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.rx_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
-			vport->chunks_info.tx_compl_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.tx_compl_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.tx_compl_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
-			vport->chunks_info.rx_buf_start_qid =
-				vport_info->chunks.chunks[i].start_queue_id;
-			vport->chunks_info.rx_buf_qtail_start =
-				vport_info->chunks.chunks[i].qtail_reg_start;
-			vport->chunks_info.rx_buf_qtail_spacing =
-				vport_info->chunks.chunks[i].qtail_reg_spacing;
-			break;
-		default:
-			PMD_INIT_LOG(ERR, "Unsupported queue type");
-			break;
-		}
-	}
-
-	return 0;
-}
-
 static int
 idpf_config_rss(struct idpf_vport *vport)
 {
@@ -276,63 +209,34 @@ idpf_init_rss(struct idpf_vport *vport)
 {
 	struct rte_eth_rss_conf *rss_conf;
 	struct rte_eth_dev_data *dev_data;
-	uint16_t i, nb_q, lut_size;
+	uint16_t i, nb_q;
 	int ret = 0;
 
 	dev_data = vport->dev_data;
 	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
 	nb_q = dev_data->nb_rx_queues;
 
-	vport->rss_key = rte_zmalloc("rss_key",
-				     vport->rss_key_size, 0);
-	if (vport->rss_key == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
-		ret = -ENOMEM;
-		goto err_alloc_key;
-	}
-
-	lut_size = vport->rss_lut_size;
-	vport->rss_lut = rte_zmalloc("rss_lut",
-				     sizeof(uint32_t) * lut_size, 0);
-	if (vport->rss_lut == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate RSS lut");
-		ret = -ENOMEM;
-		goto err_alloc_lut;
-	}
-
 	if (rss_conf->rss_key == NULL) {
 		for (i = 0; i < vport->rss_key_size; i++)
 			vport->rss_key[i] = (uint8_t)rte_rand();
 	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
 		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
 			     vport->rss_key_size);
-		ret = -EINVAL;
-		goto err_cfg_key;
+		return -EINVAL;
 	} else {
 		rte_memcpy(vport->rss_key, rss_conf->rss_key,
 			   vport->rss_key_size);
 	}
 
-	for (i = 0; i < lut_size; i++)
+	for (i = 0; i < vport->rss_lut_size; i++)
 		vport->rss_lut[i] = i % nb_q;
 
 	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
 
 	ret = idpf_config_rss(vport);
-	if (ret != 0) {
+	if (ret != 0)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS");
-		goto err_cfg_key;
-	}
-
-	return ret;
 
-err_cfg_key:
-	rte_free(vport->rss_lut);
-	vport->rss_lut = NULL;
-err_alloc_lut:
-	rte_free(vport->rss_key);
-	vport->rss_key = NULL;
-err_alloc_key:
 	return ret;
 }
 
@@ -602,13 +506,7 @@ idpf_dev_close(struct rte_eth_dev *dev)
 
 	idpf_dev_stop(dev);
 
-	idpf_vc_destroy_vport(vport);
-
-	rte_free(vport->rss_lut);
-	vport->rss_lut = NULL;
-
-	rte_free(vport->rss_key);
-	vport->rss_key = NULL;
+	idpf_vport_deinit(vport);
 
 	rte_free(vport->recv_vectors);
 	vport->recv_vectors = NULL;
@@ -892,13 +790,6 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
 
-	vport->vport_info = rte_zmalloc(NULL, IDPF_DFLT_MBX_BUF_SIZE, 0);
-	if (vport->vport_info == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate vport_info");
-		ret = -ENOMEM;
-		goto err;
-	}
-
 	memset(&vport_req_info, 0, sizeof(vport_req_info));
 	ret = idpf_init_vport_req_info(dev, &vport_req_info);
 	if (ret != 0) {
@@ -906,19 +797,12 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 		goto err;
 	}
 
-	ret = idpf_vc_create_vport(vport, &vport_req_info);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to create vport.");
-		goto err_create_vport;
-	}
-
-	ret = idpf_init_vport(vport);
+	ret = idpf_vport_init(vport, &vport_req_info, dev->data);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vports.");
-		goto err_init_vport;
+		goto err;
 	}
 
-	vport->dev_data = dev->data;
 	adapter->vports[param->idx] = vport;
 	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
 	adapter->cur_vport_nb++;
@@ -927,7 +811,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	if (dev->data->mac_addrs == NULL) {
 		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
 		ret = -ENOMEM;
-		goto err_init_vport;
+		goto err_mac_addrs;
 	}
 
 	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
@@ -935,11 +819,9 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 
 	return 0;
 
-err_init_vport:
+err_mac_addrs:
 	adapter->vports[param->idx] = NULL;  /* reset */
-	idpf_vc_destroy_vport(vport);
-err_create_vport:
-	rte_free(vport->vport_info);
+	idpf_vport_deinit(vport);
 err:
 	return ret;
 }
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 06/19] common/idpf: add config RSS
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (4 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 05/19] common/idpf: add vport init/deinit beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 07/19] common/idpf: add irq map/unmap beilei.xing
                           ` (13 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the RSS configuration function to the common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
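Not part of the patch itself: a sketch of an ethdev-level RSS init
built on the moved helper. example_init_rss is a hypothetical name;
the rss_key/rss_lut buffers are the ones allocated by idpf_vport_init().

/* Hypothetical caller, for illustration only. */
static int
example_init_rss(struct idpf_vport *vport, uint16_t nb_rx_queues)
{
	uint16_t i;

	/* Random default key and a round-robin lut over the Rx queues. */
	for (i = 0; i < vport->rss_key_size; i++)
		vport->rss_key[i] = (uint8_t)rte_rand();
	for (i = 0; i < vport->rss_lut_size; i++)
		vport->rss_lut[i] = i % nb_rx_queues;

	/* Pushes the key, lut and hash settings to the CP over virtchnl. */
	return idpf_config_rss(vport);
}
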
 drivers/common/idpf/idpf_common_device.c | 25 +++++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h |  2 ++
 drivers/common/idpf/version.map          |  1 +
 drivers/net/idpf/idpf_ethdev.c           | 26 ------------------------
 4 files changed, 28 insertions(+), 26 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 79b7bef015..ae50a741f3 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -273,4 +273,29 @@ idpf_vport_deinit(struct idpf_vport *vport)
 
 	return 0;
 }
+int
+idpf_config_rss(struct idpf_vport *vport)
+{
+	int ret;
+
+	ret = idpf_vc_set_rss_key(vport);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+	ret = idpf_vc_set_rss_lut(vport);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to configure RSS lut");
+		return ret;
+	}
+
+	ret = idpf_vc_set_rss_hash(vport);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return ret;
+}
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index e9f7ed36d5..2db5a1d1f9 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -153,5 +153,7 @@ int idpf_vport_init(struct idpf_vport *vport,
 		    void *dev_data);
 __rte_internal
 int idpf_vport_deinit(struct idpf_vport *vport);
+__rte_internal
+int idpf_config_rss(struct idpf_vport *vport);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index c1ae5affa4..fd56a9988f 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,6 +3,7 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+	idpf_config_rss;
 	idpf_execute_vc_cmd;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 7a8fb6fd4a..f728318dad 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -178,32 +178,6 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-idpf_config_rss(struct idpf_vport *vport)
-{
-	int ret;
-
-	ret = idpf_vc_set_rss_key(vport);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
-		return ret;
-	}
-
-	ret = idpf_vc_set_rss_lut(vport);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
-		return ret;
-	}
-
-	ret = idpf_vc_set_rss_hash(vport);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
-		return ret;
-	}
-
-	return ret;
-}
-
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 07/19] common/idpf: add irq map/unmap
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (5 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 06/19] common/idpf: add config RSS beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 08/19] common/idpf: support get packet type beilei.xing
                           ` (12 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Introduce idpf_config_irq_map and idpf_config_irq_unmap
functions in the common module, and refine the Rx queue IRQ
configuration function accordingly. Also refine the device
start function with IRQ error handling; in addition,
vport->stopped is now initialized only at the end of the
function, once start can no longer fail.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
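Not part of the patch itself: a sketch of the intended pairing of the
new helpers in the ethdev start/stop paths. example_dev_start and
example_dev_stop are hypothetical names.

/* Hypothetical start/stop paths, for illustration only. */
static int
example_dev_start(struct rte_eth_dev *dev)
{
	struct idpf_vport *vport = dev->data->dev_private;
	int ret;

	/* Maps all Rx queues to one vector and forces write-back on ITR. */
	ret = idpf_config_irq_map(vport, dev->data->nb_rx_queues);
	if (ret != 0)
		return ret;

	/* ... start queues and enable the vport, unwinding with
	 * idpf_config_irq_unmap() on failure ...
	 */

	vport->stopped = 0;	/* set last, once start can no longer fail */
	return 0;
}

static int
example_dev_stop(struct rte_eth_dev *dev)
{
	struct idpf_vport *vport = dev->data->dev_private;

	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
	vport->stopped = 1;
	return 0;
}
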
 drivers/common/idpf/idpf_common_device.c   | 102 +++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h   |   6 ++
 drivers/common/idpf/idpf_common_virtchnl.c |   8 --
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +-
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 102 +++------------------
 drivers/net/idpf/idpf_ethdev.h             |   1 -
 7 files changed, 125 insertions(+), 102 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index ae50a741f3..336977891c 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -247,8 +247,21 @@ idpf_vport_init(struct idpf_vport *vport,
 		goto err_rss_lut;
 	}
 
+	/* recv_vectors is used for VIRTCHNL2_OP_ALLOC_VECTORS response,
+	 * reserve maximum size for it now, may need optimization in future.
+	 */
+	vport->recv_vectors = rte_zmalloc("recv_vectors", IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (vport->recv_vectors == NULL) {
+		DRV_LOG(ERR, "Failed to allocate recv_vectors");
+		ret = -ENOMEM;
+		goto err_recv_vec;
+	}
+
 	return 0;
 
+err_recv_vec:
+	rte_free(vport->rss_lut);
+	vport->rss_lut = NULL;
 err_rss_lut:
 	vport->dev_data = NULL;
 	rte_free(vport->rss_key);
@@ -261,6 +274,8 @@ idpf_vport_init(struct idpf_vport *vport,
 int
 idpf_vport_deinit(struct idpf_vport *vport)
 {
+	rte_free(vport->recv_vectors);
+	vport->recv_vectors = NULL;
 	rte_free(vport->rss_lut);
 	vport->rss_lut = NULL;
 
@@ -298,4 +313,91 @@ idpf_config_rss(struct idpf_vport *vport)
 
 	return ret;
 }
+
+int
+idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_queue_vector *qv_map;
+	struct idpf_hw *hw = &adapter->hw;
+	uint32_t dynctl_val, itrn_val;
+	uint32_t dynctl_reg_start;
+	uint32_t itrn_reg_start;
+	uint16_t i;
+	int ret;
+
+	qv_map = rte_zmalloc("qv_map",
+			     nb_rx_queues *
+			     sizeof(struct virtchnl2_queue_vector), 0);
+	if (qv_map == NULL) {
+		DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
+			nb_rx_queues);
+		ret = -ENOMEM;
+		goto qv_map_alloc_err;
+	}
+
+	/* Rx interrupt disabled, Map interrupt only for writeback */
+
+	/* The capability flags adapter->caps.other_caps should be
+	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
+	 * condition should be updated when the FW can return the
+	 * correct flag bits.
+	 */
+	dynctl_reg_start =
+		vport->recv_vectors->vchunks.vchunks->dynctl_reg_start;
+	itrn_reg_start =
+		vport->recv_vectors->vchunks.vchunks->itrn_reg_start;
+	dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start);
+	DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x", dynctl_val);
+	itrn_val = IDPF_READ_REG(hw, itrn_reg_start);
+	DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val);
+	/* Force write-backs by setting WB_ON_ITR bit in DYN_CTL
+	 * register. WB_ON_ITR and INTENA are mutually exclusive
+	 * bits. Setting WB_ON_ITR bits means TX and RX Descs
+	 * are written back based on ITR expiration irrespective
+	 * of INTENA setting.
+	 */
+	/* TBD: need to tune INTERVAL value for better performance. */
+	itrn_val = (itrn_val == 0) ? IDPF_DFLT_INTERVAL : itrn_val;
+	dynctl_val = VIRTCHNL2_ITR_IDX_0  <<
+		     PF_GLINT_DYN_CTL_ITR_INDX_S |
+		     PF_GLINT_DYN_CTL_WB_ON_ITR_M |
+		     itrn_val << PF_GLINT_DYN_CTL_INTERVAL_S;
+	IDPF_WRITE_REG(hw, dynctl_reg_start, dynctl_val);
+
+	for (i = 0; i < nb_rx_queues; i++) {
+		/* map all queues to the same vector */
+		qv_map[i].queue_id = vport->chunks_info.rx_start_qid + i;
+		qv_map[i].vector_id =
+			vport->recv_vectors->vchunks.vchunks->start_vector_id;
+	}
+	vport->qv_map = qv_map;
+
+	ret = idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, true);
+	if (ret != 0) {
+		DRV_LOG(ERR, "config interrupt mapping failed");
+		goto config_irq_map_err;
+	}
+
+	return 0;
+
+config_irq_map_err:
+	rte_free(vport->qv_map);
+	vport->qv_map = NULL;
+
+qv_map_alloc_err:
+	return ret;
+}
+
+int
+idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues)
+{
+	idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, false);
+
+	rte_free(vport->qv_map);
+	vport->qv_map = NULL;
+
+	return 0;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 2db5a1d1f9..a13f8818b9 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -17,6 +17,8 @@
 
 #define IDPF_MAX_PKT_TYPE	1024
 
+#define IDPF_DFLT_INTERVAL	16
+
 struct idpf_adapter {
 	struct idpf_hw hw;
 	struct virtchnl2_version_info virtchnl_version;
@@ -155,5 +157,9 @@ __rte_internal
 int idpf_vport_deinit(struct idpf_vport *vport);
 __rte_internal
 int idpf_config_rss(struct idpf_vport *vport);
+__rte_internal
+int idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues);
+__rte_internal
+int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 6cff79833f..6d637150ff 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -573,14 +573,6 @@ idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
 
-	if (vport->recv_vectors == NULL) {
-		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
-		if (vport->recv_vectors == NULL) {
-			rte_free(alloc_vec);
-			return -ENOMEM;
-		}
-	}
-
 	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
 	rte_free(alloc_vec);
 	return err;
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 3533eb9b3d..a1fef56d3e 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -23,6 +23,9 @@ int idpf_vc_set_rss_lut(struct idpf_vport *vport);
 __rte_internal
 int idpf_vc_set_rss_hash(struct idpf_vport *vport);
 __rte_internal
+int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+				 uint16_t nb_rxq, bool map);
+__rte_internal
 int idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
 			 bool rx, bool on);
 __rte_internal
@@ -30,9 +33,6 @@ int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
 __rte_internal
 int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
 __rte_internal
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
-				 uint16_t nb_rxq, bool map);
-__rte_internal
 int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
 __rte_internal
 int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index fd56a9988f..5dab5787de 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,6 +3,8 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+	idpf_config_irq_map;
+	idpf_config_irq_unmap;
 	idpf_config_rss;
 	idpf_execute_vc_cmd;
 	idpf_vc_alloc_vectors;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index f728318dad..d0799087a5 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -281,84 +281,9 @@ static int
 idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_queue_vector *qv_map;
-	struct idpf_hw *hw = &adapter->hw;
-	uint32_t dynctl_reg_start;
-	uint32_t itrn_reg_start;
-	uint32_t dynctl_val, itrn_val;
-	uint16_t i;
-
-	qv_map = rte_zmalloc("qv_map",
-			dev->data->nb_rx_queues *
-			sizeof(struct virtchnl2_queue_vector), 0);
-	if (qv_map == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
-			    dev->data->nb_rx_queues);
-		goto qv_map_alloc_err;
-	}
-
-	/* Rx interrupt disabled, Map interrupt only for writeback */
-
-	/* The capability flags adapter->caps.other_caps should be
-	 * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
-	 * condition should be updated when the FW can return the
-	 * correct flag bits.
-	 */
-	dynctl_reg_start =
-		vport->recv_vectors->vchunks.vchunks->dynctl_reg_start;
-	itrn_reg_start =
-		vport->recv_vectors->vchunks.vchunks->itrn_reg_start;
-	dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start);
-	PMD_DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x",
-		    dynctl_val);
-	itrn_val = IDPF_READ_REG(hw, itrn_reg_start);
-	PMD_DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val);
-	/* Force write-backs by setting WB_ON_ITR bit in DYN_CTL
-	 * register. WB_ON_ITR and INTENA are mutually exclusive
-	 * bits. Setting WB_ON_ITR bits means TX and RX Descs
-	 * are written back based on ITR expiration irrespective
-	 * of INTENA setting.
-	 */
-	/* TBD: need to tune INTERVAL value for better performance. */
-	if (itrn_val != 0)
-		IDPF_WRITE_REG(hw,
-			       dynctl_reg_start,
-			       VIRTCHNL2_ITR_IDX_0  <<
-			       PF_GLINT_DYN_CTL_ITR_INDX_S |
-			       PF_GLINT_DYN_CTL_WB_ON_ITR_M |
-			       itrn_val <<
-			       PF_GLINT_DYN_CTL_INTERVAL_S);
-	else
-		IDPF_WRITE_REG(hw,
-			       dynctl_reg_start,
-			       VIRTCHNL2_ITR_IDX_0  <<
-			       PF_GLINT_DYN_CTL_ITR_INDX_S |
-			       PF_GLINT_DYN_CTL_WB_ON_ITR_M |
-			       IDPF_DFLT_INTERVAL <<
-			       PF_GLINT_DYN_CTL_INTERVAL_S);
-
-	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		/* map all queues to the same vector */
-		qv_map[i].queue_id = vport->chunks_info.rx_start_qid + i;
-		qv_map[i].vector_id =
-			vport->recv_vectors->vchunks.vchunks->start_vector_id;
-	}
-	vport->qv_map = qv_map;
-
-	if (idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, true) != 0) {
-		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
-		goto config_irq_map_err;
-	}
-
-	return 0;
-
-config_irq_map_err:
-	rte_free(vport->qv_map);
-	vport->qv_map = NULL;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
 
-qv_map_alloc_err:
-	return -1;
+	return idpf_config_irq_map(vport, nb_rx_queues);
 }
 
 static int
@@ -404,8 +329,6 @@ idpf_dev_start(struct rte_eth_dev *dev)
 	uint16_t req_vecs_num;
 	int ret;
 
-	vport->stopped = 0;
-
 	req_vecs_num = IDPF_DFLT_Q_VEC_NUM;
 	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
 		PMD_DRV_LOG(ERR, "The accumulated request vectors' number should be less than %d",
@@ -424,13 +347,13 @@ idpf_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_config_rx_queues_irqs(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to configure irqs");
-		goto err_vec;
+		goto err_irq;
 	}
 
 	ret = idpf_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
-		goto err_vec;
+		goto err_startq;
 	}
 
 	idpf_set_rx_function(dev);
@@ -442,10 +365,16 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	vport->stopped = 0;
+
 	return 0;
 
 err_vport:
 	idpf_stop_queues(dev);
+err_startq:
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+err_irq:
+	idpf_vc_dealloc_vectors(vport);
 err_vec:
 	return ret;
 }
@@ -462,10 +391,9 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_stop_queues(dev);
 
-	idpf_vc_config_irq_map_unmap(vport, dev->data->nb_rx_queues, false);
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
 
-	if (vport->recv_vectors != NULL)
-		idpf_vc_dealloc_vectors(vport);
+	idpf_vc_dealloc_vectors(vport);
 
 	vport->stopped = 1;
 
@@ -482,12 +410,6 @@ idpf_dev_close(struct rte_eth_dev *dev)
 
 	idpf_vport_deinit(vport);
 
-	rte_free(vport->recv_vectors);
-	vport->recv_vectors = NULL;
-
-	rte_free(vport->qv_map);
-	vport->qv_map = NULL;
-
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
 	adapter->cur_vport_nb--;
 	dev->data->dev_private = NULL;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 07ffe8e408..55be98a8ed 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -32,7 +32,6 @@
 #define IDPF_RX_BUFQ_PER_GRP	2
 
 #define IDPF_DFLT_Q_VEC_NUM	1
-#define IDPF_DFLT_INTERVAL	16
 
 #define IDPF_MIN_BUF_SIZE	1024
 #define IDPF_MAX_FRAME_SIZE	9728
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 08/19] common/idpf: support get packet type
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (6 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 07/19] common/idpf: add irq map/unmap beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 09/19] common/idpf: add vport info initialization beilei.xing
                           ` (11 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the ptype_tbl field to the idpf_adapter structure,
and move the get_pkt_type function to the common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
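Not part of the patch itself: a sketch of how an Rx path resolves the
hardware ptype id through the table now owned by struct idpf_adapter.
example_resolve_ptype is a hypothetical name.

/* Hypothetical Rx-path lookup, for illustration only. */
static inline uint32_t
example_resolve_ptype(const struct idpf_adapter *ad, uint16_t ptype_id_10)
{
	/* ptype_tbl is filled once per adapter from the
	 * VIRTCHNL2_OP_GET_PTYPE_INFO response at init time and maps the
	 * 10-bit hardware ptype id to RTE_PTYPE_* flags.
	 */
	return ad->ptype_tbl[ptype_id_10 & (IDPF_MAX_PKT_TYPE - 1)];
}
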
 drivers/common/idpf/idpf_common_device.c | 216 +++++++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h |   7 +
 drivers/common/idpf/meson.build          |   2 +
 drivers/net/idpf/idpf_ethdev.c           |   6 -
 drivers/net/idpf/idpf_ethdev.h           |   4 -
 drivers/net/idpf/idpf_rxtx.c             |   4 +-
 drivers/net/idpf/idpf_rxtx.h             |   4 -
 drivers/net/idpf/idpf_rxtx_vec_avx512.c  |   3 +-
 drivers/net/idpf/idpf_vchnl.c            | 213 ----------------------
 9 files changed, 228 insertions(+), 231 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 336977891c..f62d4d1976 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -96,6 +96,216 @@ idpf_init_mbx(struct idpf_hw *hw)
 	return ret;
 }
 
+static int
+idpf_get_pkt_type(struct idpf_adapter *adapter)
+{
+	struct virtchnl2_get_ptype_info *ptype_info;
+	uint16_t ptype_offset, i, j;
+	uint16_t ptype_recvd = 0;
+	int ret;
+
+	ret = idpf_vc_query_ptype_info(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Fail to query packet type information");
+		return ret;
+	}
+
+	ptype_info = rte_zmalloc("ptype_info", IDPF_DFLT_MBX_BUF_SIZE, 0);
+	if (ptype_info == NULL)
+		return -ENOMEM;
+
+	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
+		ret = idpf_vc_read_one_msg(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
+					   IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
+		if (ret != 0) {
+			DRV_LOG(ERR, "Fail to get packet type information");
+			goto free_ptype_info;
+		}
+
+		ptype_recvd += ptype_info->num_ptypes;
+		ptype_offset = sizeof(struct virtchnl2_get_ptype_info) -
+						sizeof(struct virtchnl2_ptype);
+
+		for (i = 0; i < rte_cpu_to_le_16(ptype_info->num_ptypes); i++) {
+			bool is_inner = false, is_ip = false;
+			struct virtchnl2_ptype *ptype;
+			uint32_t proto_hdr = 0;
+
+			ptype = (struct virtchnl2_ptype *)
+					((uint8_t *)ptype_info + ptype_offset);
+			ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
+			if (ptype_offset > IDPF_DFLT_MBX_BUF_SIZE) {
+				ret = -EINVAL;
+				goto free_ptype_info;
+			}
+
+			if (rte_cpu_to_le_16(ptype->ptype_id_10) == 0xFFFF)
+				goto free_ptype_info;
+
+			for (j = 0; j < ptype->proto_id_count; j++) {
+				switch (rte_cpu_to_le_16(ptype->proto_id[j])) {
+				case VIRTCHNL2_PROTO_HDR_GRE:
+				case VIRTCHNL2_PROTO_HDR_VXLAN:
+					proto_hdr &= ~RTE_PTYPE_L4_MASK;
+					proto_hdr |= RTE_PTYPE_TUNNEL_GRENAT;
+					is_inner = true;
+					break;
+				case VIRTCHNL2_PROTO_HDR_MAC:
+					if (is_inner) {
+						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
+						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER;
+					} else {
+						proto_hdr &= ~RTE_PTYPE_L2_MASK;
+						proto_hdr |= RTE_PTYPE_L2_ETHER;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_VLAN:
+					if (is_inner) {
+						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
+						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER_VLAN;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_PTP:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_TIMESYNC;
+					break;
+				case VIRTCHNL2_PROTO_HDR_LLDP:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_LLDP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_ARP:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_ARP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_PPPOE:
+					proto_hdr &= ~RTE_PTYPE_L2_MASK;
+					proto_hdr |= RTE_PTYPE_L2_ETHER_PPPOE;
+					break;
+				case VIRTCHNL2_PROTO_HDR_IPV4:
+					if (!is_ip) {
+						proto_hdr |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+						is_ip = true;
+					} else {
+						proto_hdr |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+							     RTE_PTYPE_TUNNEL_IP;
+						is_inner = true;
+					}
+						break;
+				case VIRTCHNL2_PROTO_HDR_IPV6:
+					if (!is_ip) {
+						proto_hdr |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
+						is_ip = true;
+					} else {
+						proto_hdr |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+							     RTE_PTYPE_TUNNEL_IP;
+						is_inner = true;
+					}
+					break;
+				case VIRTCHNL2_PROTO_HDR_IPV4_FRAG:
+				case VIRTCHNL2_PROTO_HDR_IPV6_FRAG:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_FRAG;
+					else
+						proto_hdr |= RTE_PTYPE_L4_FRAG;
+					break;
+				case VIRTCHNL2_PROTO_HDR_UDP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_UDP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_UDP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_TCP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_TCP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_TCP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_SCTP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_SCTP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_SCTP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_ICMP:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_ICMP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_ICMPV6:
+					if (is_inner)
+						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
+					else
+						proto_hdr |= RTE_PTYPE_L4_ICMP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_L2TPV2:
+				case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL:
+				case VIRTCHNL2_PROTO_HDR_L2TPV3:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_L2TP;
+					break;
+				case VIRTCHNL2_PROTO_HDR_NVGRE:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_NVGRE;
+					break;
+				case VIRTCHNL2_PROTO_HDR_GTPC_TEID:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_GTPC;
+					break;
+				case VIRTCHNL2_PROTO_HDR_GTPU:
+				case VIRTCHNL2_PROTO_HDR_GTPU_UL:
+				case VIRTCHNL2_PROTO_HDR_GTPU_DL:
+					is_inner = true;
+					proto_hdr |= RTE_PTYPE_TUNNEL_GTPU;
+					break;
+				case VIRTCHNL2_PROTO_HDR_PAY:
+				case VIRTCHNL2_PROTO_HDR_IPV6_EH:
+				case VIRTCHNL2_PROTO_HDR_PRE_MAC:
+				case VIRTCHNL2_PROTO_HDR_POST_MAC:
+				case VIRTCHNL2_PROTO_HDR_ETHERTYPE:
+				case VIRTCHNL2_PROTO_HDR_SVLAN:
+				case VIRTCHNL2_PROTO_HDR_CVLAN:
+				case VIRTCHNL2_PROTO_HDR_MPLS:
+				case VIRTCHNL2_PROTO_HDR_MMPLS:
+				case VIRTCHNL2_PROTO_HDR_CTRL:
+				case VIRTCHNL2_PROTO_HDR_ECP:
+				case VIRTCHNL2_PROTO_HDR_EAPOL:
+				case VIRTCHNL2_PROTO_HDR_PPPOD:
+				case VIRTCHNL2_PROTO_HDR_IGMP:
+				case VIRTCHNL2_PROTO_HDR_AH:
+				case VIRTCHNL2_PROTO_HDR_ESP:
+				case VIRTCHNL2_PROTO_HDR_IKE:
+				case VIRTCHNL2_PROTO_HDR_NATT_KEEP:
+				case VIRTCHNL2_PROTO_HDR_GTP:
+				case VIRTCHNL2_PROTO_HDR_GTP_EH:
+				case VIRTCHNL2_PROTO_HDR_GTPCV2:
+				case VIRTCHNL2_PROTO_HDR_ECPRI:
+				case VIRTCHNL2_PROTO_HDR_VRRP:
+				case VIRTCHNL2_PROTO_HDR_OSPF:
+				case VIRTCHNL2_PROTO_HDR_TUN:
+				case VIRTCHNL2_PROTO_HDR_VXLAN_GPE:
+				case VIRTCHNL2_PROTO_HDR_GENEVE:
+				case VIRTCHNL2_PROTO_HDR_NSH:
+				case VIRTCHNL2_PROTO_HDR_QUIC:
+				case VIRTCHNL2_PROTO_HDR_PFCP:
+				case VIRTCHNL2_PROTO_HDR_PFCP_NODE:
+				case VIRTCHNL2_PROTO_HDR_PFCP_SESSION:
+				case VIRTCHNL2_PROTO_HDR_RTP:
+				case VIRTCHNL2_PROTO_HDR_NO_PROTO:
+				default:
+					continue;
+				}
+				adapter->ptype_tbl[ptype->ptype_id_10] = proto_hdr;
+			}
+		}
+	}
+
+free_ptype_info:
+	rte_free(ptype_info);
+	clear_cmd(adapter);
+	return ret;
+}
+
 int
 idpf_adapter_init(struct idpf_adapter *adapter)
 {
@@ -135,6 +345,12 @@ idpf_adapter_init(struct idpf_adapter *adapter)
 		goto err_check_api;
 	}
 
+	ret = idpf_get_pkt_type(adapter);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Failed to set ptype table");
+		goto err_check_api;
+	}
+
 	return 0;
 
 err_check_api:
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index a13f8818b9..0585ba3a88 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -5,6 +5,7 @@
 #ifndef _IDPF_COMMON_DEVICE_H_
 #define _IDPF_COMMON_DEVICE_H_
 
+#include <rte_mbuf_ptype.h>
 #include <base/idpf_prototype.h>
 #include <base/virtchnl2.h>
 #include <idpf_common_logs.h>
@@ -19,6 +20,10 @@
 
 #define IDPF_DFLT_INTERVAL	16
 
+#define IDPF_GET_PTYPE_SIZE(p)						\
+	(sizeof(struct virtchnl2_ptype) +				\
+	 (((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
+
 struct idpf_adapter {
 	struct idpf_hw hw;
 	struct virtchnl2_version_info virtchnl_version;
@@ -26,6 +31,8 @@ struct idpf_adapter {
 	volatile uint32_t pend_cmd; /* pending command not finished */
 	uint32_t cmd_retval; /* return value of the cmd response from cp */
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
+
+	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
 };
 
 struct idpf_chunks_info {
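
A note on IDPF_GET_PTYPE_SIZE above: struct virtchnl2_ptype already embeds
one proto_id element, so a record carrying N protocol IDs occupies
sizeof(struct virtchnl2_ptype) plus N - 1 extra elements. That is what lets
the parsing loop walk a mailbox buffer record by record; roughly (sketch
only, 'buf' is an illustrative uint8_t * and 'buf_len' an illustrative
length, and the real loop also bounds-checks against
IDPF_DFLT_MBX_BUF_SIZE):

	/* Illustrative walk over variable-length ptype records. */
	uint16_t off = sizeof(struct virtchnl2_get_ptype_info) -
		       sizeof(struct virtchnl2_ptype);

	while (off < buf_len) {
		struct virtchnl2_ptype *p =
			(struct virtchnl2_ptype *)(buf + off);

		off += IDPF_GET_PTYPE_SIZE(p); /* header + (count - 1) ids */
	}
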
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index c8a514e02a..ea1063a7a2 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2022 Intel Corporation
 
+deps += ['mbuf']
+
 sources = files(
         'idpf_common_device.c',
         'idpf_common_virtchnl.c',
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index d0799087a5..84046f955a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -602,12 +602,6 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
 		goto err_adapter_init;
 	}
 
-	ret = idpf_get_pkt_type(adapter);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to set ptype table");
-		goto err_get_ptype;
-	}
-
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 55be98a8ed..d30807ca41 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -89,8 +89,6 @@ struct idpf_adapter_ext {
 
 	uint16_t used_vecs_num;
 
-	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
-
 	bool rx_vec_allowed;
 	bool tx_vec_allowed;
 	bool rx_use_avx512;
@@ -107,6 +105,4 @@ TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
 #define IDPF_ADAPTER_TO_EXT(p)					\
 	container_of((p), struct idpf_adapter_ext, base)
 
-int idpf_get_pkt_type(struct idpf_adapter_ext *adapter);
-
 #endif /* _IDPF_ETHDEV_H_ */
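
The IDPF_ADAPTER_TO_EXT() macro kept in the context above is what makes
this migration possible: struct idpf_adapter is embedded as the 'base'
member of idpf_adapter_ext, so container_of() can recover the PMD-specific
wrapper from a common-module pointer. A small sketch of the round trip:

	/* Sketch of the embedding that IDPF_ADAPTER_TO_EXT() relies on. */
	struct idpf_adapter_ext ad_ext;
	struct idpf_adapter *base = &ad_ext.base;

	/* container_of() walks back from the embedded member, so this
	 * round-trips: ext == &ad_ext.
	 */
	struct idpf_adapter_ext *ext = IDPF_ADAPTER_TO_EXT(base);
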
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index ad3e31208d..0b10e4248b 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1407,7 +1407,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
 	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
 	rx_desc_ring = rxq->rx_ring;
-	ptype_tbl = ad->ptype_tbl;
+	ptype_tbl = rxq->adapter->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
@@ -1812,7 +1812,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	rx_id = rxq->rx_tail;
 	rx_ring = rxq->rx_ring;
-	ptype_tbl = ad->ptype_tbl;
+	ptype_tbl = rxq->adapter->ptype_tbl;
 
 	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
 		rxq->hw_register_set = 1;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 9417651b3f..cac6040943 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -82,10 +82,6 @@
 #define IDPF_TX_OFFLOAD_NOTSUP_MASK \
 		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
 
-#define IDPF_GET_PTYPE_SIZE(p) \
-	(sizeof(struct virtchnl2_ptype) + \
-	(((p)->proto_id_count ? ((p)->proto_id_count - 1) : 0) * sizeof((p)->proto_id[0])))
-
 extern uint64_t idpf_timestamp_dynflag;
 
 struct idpf_rx_queue {
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index efa7cd2187..fb2b6bb53c 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -245,8 +245,7 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
 				   struct rte_mbuf **rx_pkts,
 				   uint16_t nb_pkts)
 {
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(rxq->adapter);
-	const uint32_t *type_table = adapter->ptype_tbl;
+	const uint32_t *type_table = rxq->adapter->ptype_tbl;
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
 						    rxq->mbuf_initializer);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 6f4eb52beb..45d05ed108 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -23,219 +23,6 @@
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
 
-int __rte_cold
-idpf_get_pkt_type(struct idpf_adapter_ext *adapter)
-{
-	struct virtchnl2_get_ptype_info *ptype_info;
-	struct idpf_adapter *base;
-	uint16_t ptype_offset, i, j;
-	uint16_t ptype_recvd = 0;
-	int ret;
-
-	base = &adapter->base;
-
-	ret = idpf_vc_query_ptype_info(base);
-	if (ret != 0) {
-		PMD_DRV_LOG(ERR, "Fail to query packet type information");
-		return ret;
-	}
-
-	ptype_info = rte_zmalloc("ptype_info", IDPF_DFLT_MBX_BUF_SIZE, 0);
-		if (ptype_info == NULL)
-			return -ENOMEM;
-
-	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_vc_read_one_msg(base, VIRTCHNL2_OP_GET_PTYPE_INFO,
-					   IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
-		if (ret != 0) {
-			PMD_DRV_LOG(ERR, "Fail to get packet type information");
-			goto free_ptype_info;
-		}
-
-		ptype_recvd += ptype_info->num_ptypes;
-		ptype_offset = sizeof(struct virtchnl2_get_ptype_info) -
-						sizeof(struct virtchnl2_ptype);
-
-		for (i = 0; i < rte_cpu_to_le_16(ptype_info->num_ptypes); i++) {
-			bool is_inner = false, is_ip = false;
-			struct virtchnl2_ptype *ptype;
-			uint32_t proto_hdr = 0;
-
-			ptype = (struct virtchnl2_ptype *)
-					((uint8_t *)ptype_info + ptype_offset);
-			ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
-			if (ptype_offset > IDPF_DFLT_MBX_BUF_SIZE) {
-				ret = -EINVAL;
-				goto free_ptype_info;
-			}
-
-			if (rte_cpu_to_le_16(ptype->ptype_id_10) == 0xFFFF)
-				goto free_ptype_info;
-
-			for (j = 0; j < ptype->proto_id_count; j++) {
-				switch (rte_cpu_to_le_16(ptype->proto_id[j])) {
-				case VIRTCHNL2_PROTO_HDR_GRE:
-				case VIRTCHNL2_PROTO_HDR_VXLAN:
-					proto_hdr &= ~RTE_PTYPE_L4_MASK;
-					proto_hdr |= RTE_PTYPE_TUNNEL_GRENAT;
-					is_inner = true;
-					break;
-				case VIRTCHNL2_PROTO_HDR_MAC:
-					if (is_inner) {
-						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
-						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER;
-					} else {
-						proto_hdr &= ~RTE_PTYPE_L2_MASK;
-						proto_hdr |= RTE_PTYPE_L2_ETHER;
-					}
-					break;
-				case VIRTCHNL2_PROTO_HDR_VLAN:
-					if (is_inner) {
-						proto_hdr &= ~RTE_PTYPE_INNER_L2_MASK;
-						proto_hdr |= RTE_PTYPE_INNER_L2_ETHER_VLAN;
-					}
-					break;
-				case VIRTCHNL2_PROTO_HDR_PTP:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_TIMESYNC;
-					break;
-				case VIRTCHNL2_PROTO_HDR_LLDP:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_LLDP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_ARP:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_ARP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_PPPOE:
-					proto_hdr &= ~RTE_PTYPE_L2_MASK;
-					proto_hdr |= RTE_PTYPE_L2_ETHER_PPPOE;
-					break;
-				case VIRTCHNL2_PROTO_HDR_IPV4:
-					if (!is_ip) {
-						proto_hdr |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-						is_ip = true;
-					} else {
-						proto_hdr |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
-							     RTE_PTYPE_TUNNEL_IP;
-						is_inner = true;
-					}
-						break;
-				case VIRTCHNL2_PROTO_HDR_IPV6:
-					if (!is_ip) {
-						proto_hdr |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
-						is_ip = true;
-					} else {
-						proto_hdr |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
-							     RTE_PTYPE_TUNNEL_IP;
-						is_inner = true;
-					}
-					break;
-				case VIRTCHNL2_PROTO_HDR_IPV4_FRAG:
-				case VIRTCHNL2_PROTO_HDR_IPV6_FRAG:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_FRAG;
-					else
-						proto_hdr |= RTE_PTYPE_L4_FRAG;
-					break;
-				case VIRTCHNL2_PROTO_HDR_UDP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_UDP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_UDP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_TCP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_TCP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_TCP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_SCTP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_SCTP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_SCTP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_ICMP:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_ICMP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_ICMPV6:
-					if (is_inner)
-						proto_hdr |= RTE_PTYPE_INNER_L4_ICMP;
-					else
-						proto_hdr |= RTE_PTYPE_L4_ICMP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_L2TPV2:
-				case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL:
-				case VIRTCHNL2_PROTO_HDR_L2TPV3:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_L2TP;
-					break;
-				case VIRTCHNL2_PROTO_HDR_NVGRE:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_NVGRE;
-					break;
-				case VIRTCHNL2_PROTO_HDR_GTPC_TEID:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_GTPC;
-					break;
-				case VIRTCHNL2_PROTO_HDR_GTPU:
-				case VIRTCHNL2_PROTO_HDR_GTPU_UL:
-				case VIRTCHNL2_PROTO_HDR_GTPU_DL:
-					is_inner = true;
-					proto_hdr |= RTE_PTYPE_TUNNEL_GTPU;
-					break;
-				case VIRTCHNL2_PROTO_HDR_PAY:
-				case VIRTCHNL2_PROTO_HDR_IPV6_EH:
-				case VIRTCHNL2_PROTO_HDR_PRE_MAC:
-				case VIRTCHNL2_PROTO_HDR_POST_MAC:
-				case VIRTCHNL2_PROTO_HDR_ETHERTYPE:
-				case VIRTCHNL2_PROTO_HDR_SVLAN:
-				case VIRTCHNL2_PROTO_HDR_CVLAN:
-				case VIRTCHNL2_PROTO_HDR_MPLS:
-				case VIRTCHNL2_PROTO_HDR_MMPLS:
-				case VIRTCHNL2_PROTO_HDR_CTRL:
-				case VIRTCHNL2_PROTO_HDR_ECP:
-				case VIRTCHNL2_PROTO_HDR_EAPOL:
-				case VIRTCHNL2_PROTO_HDR_PPPOD:
-				case VIRTCHNL2_PROTO_HDR_IGMP:
-				case VIRTCHNL2_PROTO_HDR_AH:
-				case VIRTCHNL2_PROTO_HDR_ESP:
-				case VIRTCHNL2_PROTO_HDR_IKE:
-				case VIRTCHNL2_PROTO_HDR_NATT_KEEP:
-				case VIRTCHNL2_PROTO_HDR_GTP:
-				case VIRTCHNL2_PROTO_HDR_GTP_EH:
-				case VIRTCHNL2_PROTO_HDR_GTPCV2:
-				case VIRTCHNL2_PROTO_HDR_ECPRI:
-				case VIRTCHNL2_PROTO_HDR_VRRP:
-				case VIRTCHNL2_PROTO_HDR_OSPF:
-				case VIRTCHNL2_PROTO_HDR_TUN:
-				case VIRTCHNL2_PROTO_HDR_VXLAN_GPE:
-				case VIRTCHNL2_PROTO_HDR_GENEVE:
-				case VIRTCHNL2_PROTO_HDR_NSH:
-				case VIRTCHNL2_PROTO_HDR_QUIC:
-				case VIRTCHNL2_PROTO_HDR_PFCP:
-				case VIRTCHNL2_PROTO_HDR_PFCP_NODE:
-				case VIRTCHNL2_PROTO_HDR_PFCP_SESSION:
-				case VIRTCHNL2_PROTO_HDR_RTP:
-				case VIRTCHNL2_PROTO_HDR_NO_PROTO:
-				default:
-					continue;
-				}
-				adapter->ptype_tbl[ptype->ptype_id_10] = proto_hdr;
-			}
-		}
-	}
-
-free_ptype_info:
-	rte_free(ptype_info);
-	clear_cmd(base);
-	return ret;
-}
-
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 09/19] common/idpf: add vport info initialization
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (7 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 08/19] common/idpf: support get packet type beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 10/19] common/idpf: add vector flags in vport beilei.xing
                           ` (10 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move the queue model fields (txq_model/rxq_model) from the
idpf_adapter_ext structure to the idpf_adapter structure.
Refine some parameter and function names, and move the
idpf_create_vport_info_init function to the common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c | 36 ++++++++++++++++++
 drivers/common/idpf/idpf_common_device.h | 11 ++++++
 drivers/common/idpf/version.map          |  1 +
 drivers/net/idpf/idpf_ethdev.c           | 48 +++---------------------
 drivers/net/idpf/idpf_ethdev.h           |  8 ----
 5 files changed, 54 insertions(+), 50 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index f62d4d1976..e8d69c2490 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -616,4 +616,40 @@ idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues)
 	return 0;
 }
 
+int
+idpf_create_vport_info_init(struct idpf_vport *vport,
+			    struct virtchnl2_create_vport *vport_info)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+
+	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+	if (adapter->txq_model == 0) {
+		vport_info->txq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+		vport_info->num_tx_q =
+			rte_cpu_to_le_16(IDPF_DEFAULT_TXQ_NUM);
+		vport_info->num_tx_complq =
+			rte_cpu_to_le_16(IDPF_DEFAULT_TXQ_NUM * IDPF_TX_COMPLQ_PER_GRP);
+	} else {
+		vport_info->txq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+		vport_info->num_tx_q = rte_cpu_to_le_16(IDPF_DEFAULT_TXQ_NUM);
+		vport_info->num_tx_complq = 0;
+	}
+	if (adapter->rxq_model == 0) {
+		vport_info->rxq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+		vport_info->num_rx_q = rte_cpu_to_le_16(IDPF_DEFAULT_RXQ_NUM);
+		vport_info->num_rx_bufq =
+			rte_cpu_to_le_16(IDPF_DEFAULT_RXQ_NUM * IDPF_RX_BUFQ_PER_GRP);
+	} else {
+		vport_info->rxq_model =
+			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+		vport_info->num_rx_q = rte_cpu_to_le_16(IDPF_DEFAULT_RXQ_NUM);
+		vport_info->num_rx_bufq = 0;
+	}
+
+	return 0;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
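
As a worked example of the defaults above: with IDPF_DEFAULT_TXQ_NUM = 16
and IDPF_TX_COMPLQ_PER_GRP = 1, the split queue model (txq_model == 0)
requests 16 Tx queues plus 16 completion queues, and with
IDPF_RX_BUFQ_PER_GRP = 2 the Rx side requests 16 Rx queues plus 32 buffer
queues; the single queue model requests only the 16 plain queues in each
direction. A hedged caller sketch, mirroring the ethdev change later in
this patch ('dev_data' stands in for dev->data):

	/* Sketch of the expected call sequence in a PMD. */
	struct virtchnl2_create_vport info;
	int ret;

	memset(&info, 0, sizeof(info));
	ret = idpf_create_vport_info_init(vport, &info);
	if (ret != 0)
		return ret;
	/* 'info' now carries the queue-model defaults; hand it to the
	 * create-vport virtchnl path.
	 */
	ret = idpf_vport_init(vport, &info, dev_data);
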
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 0585ba3a88..2a6e9d6ee4 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -16,6 +16,11 @@
 #define IDPF_CTLQ_LEN		64
 #define IDPF_DFLT_MBX_BUF_SIZE	4096
 
+#define IDPF_DEFAULT_RXQ_NUM	16
+#define IDPF_RX_BUFQ_PER_GRP	2
+#define IDPF_DEFAULT_TXQ_NUM	16
+#define IDPF_TX_COMPLQ_PER_GRP	1
+
 #define IDPF_MAX_PKT_TYPE	1024
 
 #define IDPF_DFLT_INTERVAL	16
@@ -33,6 +38,9 @@ struct idpf_adapter {
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */
 
 	uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
+
+	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
+	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
 };
 
 struct idpf_chunks_info {
@@ -168,5 +176,8 @@ __rte_internal
 int idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues);
 __rte_internal
 int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
+__rte_internal
+int idpf_create_vport_info_init(struct idpf_vport *vport,
+				struct virtchnl2_create_vport *vport_info);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 5dab5787de..83338640c4 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -6,6 +6,7 @@ INTERNAL {
 	idpf_config_irq_map;
 	idpf_config_irq_unmap;
 	idpf_config_rss;
+	idpf_create_vport_info_init;
 	idpf_execute_vc_cmd;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 84046f955a..734e97ffc2 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -142,42 +142,6 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
-static int
-idpf_init_vport_req_info(struct rte_eth_dev *dev,
-			 struct virtchnl2_create_vport *vport_info)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter_ext *adapter = IDPF_ADAPTER_TO_EXT(vport->adapter);
-
-	vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
-	if (adapter->txq_model == 0) {
-		vport_info->txq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
-		vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
-		vport_info->num_tx_complq =
-			IDPF_DEFAULT_TXQ_NUM * IDPF_TX_COMPLQ_PER_GRP;
-	} else {
-		vport_info->txq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
-		vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
-		vport_info->num_tx_complq = 0;
-	}
-	if (adapter->rxq_model == 0) {
-		vport_info->rxq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
-		vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
-		vport_info->num_rx_bufq =
-			IDPF_DEFAULT_RXQ_NUM * IDPF_RX_BUFQ_PER_GRP;
-	} else {
-		vport_info->rxq_model =
-			rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
-		vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
-		vport_info->num_rx_bufq = 0;
-	}
-
-	return 0;
-}
-
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -566,12 +530,12 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 		goto bail;
 
 	ret = rte_kvargs_process(kvlist, IDPF_TX_SINGLE_Q, &parse_bool,
-				 &adapter->txq_model);
+				 &adapter->base.txq_model);
 	if (ret != 0)
 		goto bail;
 
 	ret = rte_kvargs_process(kvlist, IDPF_RX_SINGLE_Q, &parse_bool,
-				 &adapter->rxq_model);
+				 &adapter->base.rxq_model);
 	if (ret != 0)
 		goto bail;
 
@@ -672,7 +636,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	struct idpf_vport_param *param = init_params;
 	struct idpf_adapter_ext *adapter = param->adapter;
 	/* for sending create vport virtchnl msg prepare */
-	struct virtchnl2_create_vport vport_req_info;
+	struct virtchnl2_create_vport create_vport_info;
 	int ret = 0;
 
 	dev->dev_ops = &idpf_eth_dev_ops;
@@ -680,14 +644,14 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
 
-	memset(&vport_req_info, 0, sizeof(vport_req_info));
-	ret = idpf_init_vport_req_info(dev, &vport_req_info);
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_create_vport_info_init(vport, &create_vport_info);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
 		goto err;
 	}
 
-	ret = idpf_vport_init(vport, &vport_req_info, dev->data);
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vports.");
 		goto err;
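
The parse_bool handler used in the kvargs hunk above is defined elsewhere
in the PMD and is not shown here; a minimal stand-in with the
rte_kvargs_process() handler signature could look like this (hypothetical
sketch; the "tx_single"/"rx_single" key strings behind IDPF_TX_SINGLE_Q
and IDPF_RX_SINGLE_Q are likewise assumed, as they are defined outside
this hunk):

	/* Hypothetical handler sketch: store "0"/"1" into a uint32_t so
	 * that tx_single=1/rx_single=1 devargs select the single queue
	 * model (non-0) and the default 0 keeps the split queue model.
	 */
	static int
	parse_bool_sketch(const char *key, const char *value, void *args)
	{
		uint32_t *flag = args;

		RTE_SET_USED(key);
		if (strcmp(value, "0") == 0)
			*flag = 0;
		else if (strcmp(value, "1") == 0)
			*flag = 1;
		else
			return -EINVAL; /* reject anything else */
		return 0;
	}
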
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index d30807ca41..c2a7abb05c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -22,14 +22,9 @@
 
 #define IDPF_MAX_VPORT_NUM	8
 
-#define IDPF_DEFAULT_RXQ_NUM	16
-#define IDPF_DEFAULT_TXQ_NUM	16
-
 #define IDPF_INVALID_VPORT_IDX	0xffff
 #define IDPF_TXQ_PER_GRP	1
-#define IDPF_TX_COMPLQ_PER_GRP	1
 #define IDPF_RXQ_PER_GRP	1
-#define IDPF_RX_BUFQ_PER_GRP	2
 
 #define IDPF_DFLT_Q_VEC_NUM	1
 
@@ -78,9 +73,6 @@ struct idpf_adapter_ext {
 
 	char name[IDPF_ADAPTER_NAME_LEN];
 
-	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
-	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
-
 	struct idpf_vport **vports;
 	uint16_t max_vport_nb;
 
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 10/19] common/idpf: add vector flags in vport
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (8 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 09/19] common/idpf: add vport info initialization beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 11/19] common/idpf: add rxq and txq struct beilei.xing
                           ` (9 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move vector flags from idpf_adapter_ext structure to
idpf_vport structure.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h |  5 +++++
 drivers/net/idpf/idpf_ethdev.h           |  5 -----
 drivers/net/idpf/idpf_rxtx.c             | 22 ++++++++++------------
 3 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 2a6e9d6ee4..0ffc653436 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -103,6 +103,11 @@ struct idpf_vport {
 	uint16_t devarg_id;
 
 	bool stopped;
+
+	bool rx_vec_allowed;
+	bool tx_vec_allowed;
+	bool rx_use_avx512;
+	bool tx_use_avx512;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index c2a7abb05c..bef6199622 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -81,11 +81,6 @@ struct idpf_adapter_ext {
 
 	uint16_t used_vecs_num;
 
-	bool rx_vec_allowed;
-	bool tx_vec_allowed;
-	bool rx_use_avx512;
-	bool tx_use_avx512;
-
 	/* For PTP */
 	uint64_t time_hw;
 };
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 0b10e4248b..068eb8000e 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -2221,25 +2221,24 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 	struct idpf_rx_queue *rxq;
 	int i;
 
 	if (idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
 	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
-		ad->rx_vec_allowed = true;
+		vport->rx_vec_allowed = true;
 
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
-				ad->rx_use_avx512 = true;
+				vport->rx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
 #endif /* CC_AVX512_SUPPORT */
 	} else {
-		ad->rx_vec_allowed = false;
+		vport->rx_vec_allowed = false;
 	}
 #endif /* RTE_ARCH_X86 */
 
@@ -2247,13 +2246,13 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	} else {
-		if (ad->rx_vec_allowed) {
+		if (vport->rx_vec_allowed) {
 			for (i = 0; i < dev->data->nb_tx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
 				(void)idpf_singleq_rx_vec_setup(rxq);
 			}
 #ifdef CC_AVX512_SUPPORT
-			if (ad->rx_use_avx512) {
+			if (vport->rx_use_avx512) {
 				dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
 				return;
 			}
@@ -2275,7 +2274,6 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 #ifdef RTE_ARCH_X86
-	struct idpf_adapter_ext *ad = IDPF_ADAPTER_TO_EXT(vport->adapter);
 #ifdef CC_AVX512_SUPPORT
 	struct idpf_tx_queue *txq;
 	int i;
@@ -2283,18 +2281,18 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 
 	if (idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
 	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
-		ad->tx_vec_allowed = true;
+		vport->tx_vec_allowed = true;
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
-				ad->tx_use_avx512 = true;
+				vport->tx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
 #endif /* CC_AVX512_SUPPORT */
 	} else {
-		ad->tx_vec_allowed = false;
+		vport->tx_vec_allowed = false;
 	}
 #endif /* RTE_ARCH_X86 */
 
@@ -2303,9 +2301,9 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
 #ifdef RTE_ARCH_X86
-		if (ad->tx_vec_allowed) {
+		if (vport->tx_vec_allowed) {
 #ifdef CC_AVX512_SUPPORT
-			if (ad->tx_use_avx512) {
+			if (vport->tx_use_avx512) {
 				for (i = 0; i < dev->data->nb_tx_queues; i++) {
 					txq = dev->data->tx_queues[i];
 					if (txq == NULL)
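
The selection above follows the usual DPDK vector-path idiom: a
device-level capability check, the EAL's allowed SIMD bitwidth, and
runtime CPU flags must all agree before the AVX512 burst functions are
installed. A condensed restatement of the Tx side (illustrative only;
note the Tx path reuses the Rx device check, exactly as in the hunk
above):

	vport->tx_vec_allowed =
		(idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
		 rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128);
	#ifdef CC_AVX512_SUPPORT
	vport->tx_use_avx512 = vport->tx_vec_allowed &&
		rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512 &&
		rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
		rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1;
	#endif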
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 11/19] common/idpf: add rxq and txq struct
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (9 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 10/19] common/idpf: add vector flags in vport beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 12/19] common/idpf: add help functions for queue setup and release beilei.xing
                           ` (8 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Add the idpf_rxq and idpf_txq structures to the common module.
Move the idpf_vc_config_rxq and idpf_vc_config_txq functions
to the common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   2 +
 drivers/common/idpf/idpf_common_rxtx.h     | 112 +++++++++++++
 drivers/common/idpf/idpf_common_virtchnl.c | 160 ++++++++++++++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  10 +-
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.h             |   2 -
 drivers/net/idpf/idpf_rxtx.h               |  97 +----------
 drivers/net/idpf/idpf_vchnl.c              | 184 ---------------------
 drivers/net/idpf/meson.build               |   1 -
 9 files changed, 284 insertions(+), 286 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.h
 delete mode 100644 drivers/net/idpf/idpf_vchnl.c

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 0ffc653436..629d812748 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -18,8 +18,10 @@
 
 #define IDPF_DEFAULT_RXQ_NUM	16
 #define IDPF_RX_BUFQ_PER_GRP	2
+#define IDPF_RXQ_PER_GRP	1
 #define IDPF_DEFAULT_TXQ_NUM	16
 #define IDPF_TX_COMPLQ_PER_GRP	1
+#define IDPF_TXQ_PER_GRP	1
 
 #define IDPF_MAX_PKT_TYPE	1024
 
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
new file mode 100644
index 0000000000..f3e31aaf2f
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _IDPF_COMMON_RXTX_H_
+#define _IDPF_COMMON_RXTX_H_
+
+#include <rte_mbuf_ptype.h>
+#include <rte_mbuf_core.h>
+
+#include "idpf_common_device.h"
+
+struct idpf_rx_stats {
+	uint64_t mbuf_alloc_failed;
+};
+
+struct idpf_rx_queue {
+	struct idpf_adapter *adapter;   /* the adapter this queue belongs to */
+	struct rte_mempool *mp;         /* mbuf pool to populate Rx ring */
+	const struct rte_memzone *mz;   /* memzone for Rx ring */
+	volatile void *rx_ring;
+	struct rte_mbuf **sw_ring;      /* address of SW ring */
+	uint64_t rx_ring_phys_addr;     /* Rx ring DMA address */
+
+	uint16_t nb_rx_desc;            /* ring length */
+	uint16_t rx_tail;               /* current value of tail */
+	volatile uint8_t *qrx_tail;     /* register address of tail */
+	uint16_t rx_free_thresh;        /* max free RX desc to hold */
+	uint16_t nb_rx_hold;            /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
+	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
+
+	/* used for VPMD */
+	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
+	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
+	uint64_t mbuf_initializer; /* value to init mbufs */
+
+	uint16_t rx_nb_avail;
+	uint16_t rx_next_avail;
+
+	uint16_t port_id;       /* device port ID */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+	uint8_t rxdid;
+
+	bool q_set;             /* if rx queue has been configured */
+	bool q_started;         /* if rx queue has been started */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+	const struct idpf_rxq_ops *ops;
+
+	struct idpf_rx_stats rx_stats;
+
+	/* only valid for split queue mode */
+	uint8_t expected_gen_id;
+	struct idpf_rx_queue *bufq1;
+	struct idpf_rx_queue *bufq2;
+
+	uint64_t offloads;
+	uint32_t hw_register_set;
+};
+
+struct idpf_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct idpf_tx_queue {
+	const struct rte_memzone *mz;		/* memzone for Tx ring */
+	volatile struct idpf_flex_tx_desc *tx_ring;	/* Tx ring virtual address */
+	volatile union {
+		struct idpf_flex_tx_sched_desc *desc_ring;
+		struct idpf_splitq_tx_compl_desc *compl_ring;
+	};
+	uint64_t tx_ring_phys_addr;		/* Tx ring DMA address */
+	struct idpf_tx_entry *sw_ring;		/* address array of SW ring */
+
+	uint16_t nb_tx_desc;		/* ring length */
+	uint16_t tx_tail;		/* current value of tail */
+	volatile uint8_t *qtx_tail;	/* register address of tail */
+	/* number of used desc since RS bit set */
+	uint16_t nb_used;
+	uint16_t nb_free;
+	uint16_t last_desc_cleaned;	/* last desc have been cleaned*/
+	uint16_t free_thresh;
+	uint16_t rs_thresh;
+
+	uint16_t port_id;
+	uint16_t queue_id;
+	uint64_t offloads;
+	uint16_t next_dd;	/* next to set RS, for VPMD */
+	uint16_t next_rs;	/* next to check DD,  for VPMD */
+
+	bool q_set;		/* if tx queue has been configured */
+	bool q_started;		/* if tx queue has been started */
+	bool tx_deferred_start; /* don't start this queue in dev start */
+	const struct idpf_txq_ops *ops;
+
+	/* only valid for split queue mode */
+	uint16_t sw_nb_desc;
+	uint16_t sw_tail;
+	void **txqs;
+	uint32_t tx_start_qid;
+	uint8_t expected_gen_id;
+	struct idpf_tx_queue *complq;
+};
+
+#endif /* _IDPF_COMMON_RXTX_H_ */
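
A note on the split-queue-only fields above: in the split queue model an
Rx queue consumes completed descriptors while bufq1/bufq2 keep hardware
supplied with buffers, and each Tx queue pairs with a completion queue
(complq) that reports finished descriptors out of band. The queue model
can therefore be inferred from those pointers at runtime; a hedged sketch
of the test the release path added in the next patch relies on:

	/* Sketch: distinguishing queue models at runtime. */
	static inline bool
	idpf_rxq_is_split_sketch(const struct idpf_rx_queue *q)
	{
		/* bufq pointers are only populated in split queue mode. */
		return q->bufq1 != NULL && q->bufq2 != NULL;
	}
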
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 6d637150ff..8ccfb5989e 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -805,3 +805,163 @@ idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
 	rte_free(ptype_info);
 	return err;
 }
+
+#define IDPF_RX_BUF_STRIDE		64
+int
+idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
+	struct virtchnl2_rxq_info *rxq_info;
+	struct idpf_cmd_info args;
+	uint16_t num_qs;
+	int size, err, i;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+		num_qs = IDPF_RXQ_PER_GRP;
+	else
+		num_qs = IDPF_RXQ_PER_GRP + IDPF_RX_BUFQ_PER_GRP;
+
+	size = sizeof(*vc_rxqs) + (num_qs - 1) *
+		sizeof(struct virtchnl2_rxq_info);
+	vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
+	if (vc_rxqs == NULL) {
+		DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
+		err = -ENOMEM;
+		return err;
+	}
+	vc_rxqs->vport_id = vport->vport_id;
+	vc_rxqs->num_qinfo = num_qs;
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq_info = &vc_rxqs->qinfo[0];
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
+		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+		rxq_info->queue_id = rxq->queue_id;
+		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
+		rxq_info->max_pkt_size = vport->max_pkt_len;
+
+		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
+		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+		rxq_info->ring_len = rxq->nb_rx_desc;
+	}  else {
+		/* Rx queue */
+		rxq_info = &vc_rxqs->qinfo[0];
+		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
+		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+		rxq_info->queue_id = rxq->queue_id;
+		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+		rxq_info->data_buffer_size = rxq->rx_buf_len;
+		rxq_info->max_pkt_size = vport->max_pkt_len;
+
+		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+		rxq_info->ring_len = rxq->nb_rx_desc;
+		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
+		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
+		rxq_info->rx_buffer_low_watermark = 64;
+
+		/* Buffer queue */
+		for (i = 1; i <= IDPF_RX_BUFQ_PER_GRP; i++) {
+			struct idpf_rx_queue *bufq = i == 1 ? rxq->bufq1 : rxq->bufq2;
+			rxq_info = &vc_rxqs->qinfo[i];
+			rxq_info->dma_ring_addr = bufq->rx_ring_phys_addr;
+			rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+			rxq_info->queue_id = bufq->queue_id;
+			rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+			rxq_info->data_buffer_size = bufq->rx_buf_len;
+			rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+			rxq_info->ring_len = bufq->nb_rx_desc;
+
+			rxq_info->buffer_notif_stride = IDPF_RX_BUF_STRIDE;
+			rxq_info->rx_buffer_low_watermark = 64;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
+	args.in_args = (uint8_t *)vc_rxqs;
+	args.in_args_size = size;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	rte_free(vc_rxqs);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
+
+	return err;
+}
+
+int
+idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
+	struct virtchnl2_txq_info *txq_info;
+	struct idpf_cmd_info args;
+	uint16_t num_qs;
+	int size, err;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+		num_qs = IDPF_TXQ_PER_GRP;
+	else
+		num_qs = IDPF_TXQ_PER_GRP + IDPF_TX_COMPLQ_PER_GRP;
+
+	size = sizeof(*vc_txqs) + (num_qs - 1) *
+		sizeof(struct virtchnl2_txq_info);
+	vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
+	if (vc_txqs == NULL) {
+		DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
+		err = -ENOMEM;
+		return err;
+	}
+	vc_txqs->vport_id = vport->vport_id;
+	vc_txqs->num_qinfo = num_qs;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		txq_info = &vc_txqs->qinfo[0];
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
+		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+		txq_info->queue_id = txq->queue_id;
+		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
+		txq_info->ring_len = txq->nb_tx_desc;
+	} else {
+		/* txq info */
+		txq_info = &vc_txqs->qinfo[0];
+		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
+		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+		txq_info->queue_id = txq->queue_id;
+		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+		txq_info->ring_len = txq->nb_tx_desc;
+		txq_info->tx_compl_queue_id = txq->complq->queue_id;
+		txq_info->relative_queue_id = txq_info->queue_id;
+
+		/* tx completion queue info */
+		txq_info = &vc_txqs->qinfo[1];
+		txq_info->dma_ring_addr = txq->complq->tx_ring_phys_addr;
+		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+		txq_info->queue_id = txq->complq->queue_id;
+		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+		txq_info->ring_len = txq->complq->nb_tx_desc;
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
+	args.in_args = (uint8_t *)vc_txqs;
+	args.in_args_size = size;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	rte_free(vc_txqs);
+	if (err != 0)
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
+
+	return err;
+}
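
Both config functions size their variable-length virtchnl message the
same way: the message struct already embeds qinfo[0], so only num_qs - 1
extra elements are appended. Worked numbers for the split-queue Rx case:

	/* Split-queue Rx group: 1 rxq + 2 buffer queues -> num_qs = 3.
	 * sizeof(*vc_rxqs) already accounts for qinfo[0], so the
	 * allocation adds room for exactly two more virtchnl2_rxq_info
	 * entries.
	 */
	num_qs = IDPF_RXQ_PER_GRP + IDPF_RX_BUFQ_PER_GRP;	/* 1 + 2 */
	size = sizeof(*vc_rxqs) +
		(num_qs - 1) * sizeof(struct virtchnl2_rxq_info);
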
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index a1fef56d3e..bbe31700be 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -6,6 +6,7 @@
 #define _IDPF_COMMON_VIRTCHNL_H_
 
 #include <idpf_common_device.h>
+#include <idpf_common_rxtx.h>
 
 __rte_internal
 int idpf_vc_check_api_version(struct idpf_adapter *adapter);
@@ -26,6 +27,9 @@ __rte_internal
 int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
 				 uint16_t nb_rxq, bool map);
 __rte_internal
+int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
+			struct idpf_cmd_info *args);
+__rte_internal
 int idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
 			 bool rx, bool on);
 __rte_internal
@@ -42,7 +46,7 @@ __rte_internal
 int idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
 			 uint16_t buf_len, uint8_t *buf);
 __rte_internal
-int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
-			struct idpf_cmd_info *args);
-
+int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
+__rte_internal
+int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 83338640c4..69295270df 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -11,6 +11,8 @@ INTERNAL {
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
+	idpf_vc_config_rxq;
+	idpf_vc_config_txq;
 	idpf_vc_create_vport;
 	idpf_vc_dealloc_vectors;
 	idpf_vc_destroy_vport;
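
As elsewhere in this series, exporting a symbol from the common module
takes two coordinated edits: the __rte_internal attribute on the
declaration (the virtchnl header hunk above) and an entry in the INTERNAL
block of version.map; the build's symbol checks are expected to flag a
mismatch between the two. For reference, the declaration side of the
pattern:

	/* Header side of an internal export (matches the hunk above). */
	__rte_internal
	int idpf_vc_config_rxq(struct idpf_vport *vport,
			       struct idpf_rx_queue *rxq);
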
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index bef6199622..9b40aa4e56 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -23,8 +23,6 @@
 #define IDPF_MAX_VPORT_NUM	8
 
 #define IDPF_INVALID_VPORT_IDX	0xffff
-#define IDPF_TXQ_PER_GRP	1
-#define IDPF_RXQ_PER_GRP	1
 
 #define IDPF_DFLT_Q_VEC_NUM	1
 
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index cac6040943..b8325f9b96 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -5,6 +5,7 @@
 #ifndef _IDPF_RXTX_H_
 #define _IDPF_RXTX_H_
 
+#include <idpf_common_rxtx.h>
 #include "idpf_ethdev.h"
 
 /* MTS */
@@ -84,103 +85,10 @@
 
 extern uint64_t idpf_timestamp_dynflag;
 
-struct idpf_rx_queue {
-	struct idpf_adapter *adapter;   /* the adapter this queue belongs to */
-	struct rte_mempool *mp;         /* mbuf pool to populate Rx ring */
-	const struct rte_memzone *mz;   /* memzone for Rx ring */
-	volatile void *rx_ring;
-	struct rte_mbuf **sw_ring;      /* address of SW ring */
-	uint64_t rx_ring_phys_addr;     /* Rx ring DMA address */
-
-	uint16_t nb_rx_desc;            /* ring length */
-	uint16_t rx_tail;               /* current value of tail */
-	volatile uint8_t *qrx_tail;     /* register address of tail */
-	uint16_t rx_free_thresh;        /* max free RX desc to hold */
-	uint16_t nb_rx_hold;            /* number of held free RX desc */
-	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
-	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
-	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
-
-	/* used for VPMD */
-	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
-	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
-	uint64_t mbuf_initializer; /* value to init mbufs */
-
-	uint16_t rx_nb_avail;
-	uint16_t rx_next_avail;
-
-	uint16_t port_id;       /* device port ID */
-	uint16_t queue_id;      /* Rx queue index */
-	uint16_t rx_buf_len;    /* The packet buffer size */
-	uint16_t rx_hdr_len;    /* The header buffer size */
-	uint16_t max_pkt_len;   /* Maximum packet length */
-	uint8_t rxdid;
-
-	bool q_set;             /* if rx queue has been configured */
-	bool q_started;         /* if rx queue has been started */
-	bool rx_deferred_start; /* don't start this queue in dev start */
-	const struct idpf_rxq_ops *ops;
-
-	/* only valid for split queue mode */
-	uint8_t expected_gen_id;
-	struct idpf_rx_queue *bufq1;
-	struct idpf_rx_queue *bufq2;
-
-	uint64_t offloads;
-	uint32_t hw_register_set;
-};
-
-struct idpf_tx_entry {
-	struct rte_mbuf *mbuf;
-	uint16_t next_id;
-	uint16_t last_id;
-};
-
 struct idpf_tx_vec_entry {
 	struct rte_mbuf *mbuf;
 };
 
-/* Structure associated with each TX queue. */
-struct idpf_tx_queue {
-	const struct rte_memzone *mz;		/* memzone for Tx ring */
-	volatile struct idpf_flex_tx_desc *tx_ring;	/* Tx ring virtual address */
-	volatile union {
-		struct idpf_flex_tx_sched_desc *desc_ring;
-		struct idpf_splitq_tx_compl_desc *compl_ring;
-	};
-	uint64_t tx_ring_phys_addr;		/* Tx ring DMA address */
-	struct idpf_tx_entry *sw_ring;		/* address array of SW ring */
-
-	uint16_t nb_tx_desc;		/* ring length */
-	uint16_t tx_tail;		/* current value of tail */
-	volatile uint8_t *qtx_tail;	/* register address of tail */
-	/* number of used desc since RS bit set */
-	uint16_t nb_used;
-	uint16_t nb_free;
-	uint16_t last_desc_cleaned;	/* last desc have been cleaned*/
-	uint16_t free_thresh;
-	uint16_t rs_thresh;
-
-	uint16_t port_id;
-	uint16_t queue_id;
-	uint64_t offloads;
-	uint16_t next_dd;	/* next to set RS, for VPMD */
-	uint16_t next_rs;	/* next to check DD,  for VPMD */
-
-	bool q_set;		/* if tx queue has been configured */
-	bool q_started;		/* if tx queue has been started */
-	bool tx_deferred_start; /* don't start this queue in dev start */
-	const struct idpf_txq_ops *ops;
-
-	/* only valid for split queue mode */
-	uint16_t sw_nb_desc;
-	uint16_t sw_tail;
-	void **txqs;
-	uint32_t tx_start_qid;
-	uint8_t expected_gen_id;
-	struct idpf_tx_queue *complq;
-};
-
 /* Offload features */
 union idpf_tx_offload {
 	uint64_t data;
@@ -239,9 +147,6 @@ void idpf_stop_queues(struct rte_eth_dev *dev);
 void idpf_set_rx_function(struct rte_eth_dev *dev);
 void idpf_set_tx_function(struct rte_eth_dev *dev);
 
-int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
-int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
-
 #define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
 /* Helper function to convert a 32b nanoseconds timestamp to 64b. */
 static inline uint64_t
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
deleted file mode 100644
index 45d05ed108..0000000000
--- a/drivers/net/idpf/idpf_vchnl.c
+++ /dev/null
@@ -1,184 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2022 Intel Corporation
- */
-
-#include <stdio.h>
-#include <errno.h>
-#include <stdint.h>
-#include <string.h>
-#include <unistd.h>
-#include <stdarg.h>
-#include <inttypes.h>
-#include <rte_byteorder.h>
-#include <rte_common.h>
-
-#include <rte_debug.h>
-#include <rte_atomic.h>
-#include <rte_eal.h>
-#include <rte_ether.h>
-#include <ethdev_driver.h>
-#include <ethdev_pci.h>
-#include <rte_dev.h>
-
-#include "idpf_ethdev.h"
-#include "idpf_rxtx.h"
-
-#define IDPF_RX_BUF_STRIDE		64
-int
-idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
-	struct virtchnl2_rxq_info *rxq_info;
-	struct idpf_cmd_info args;
-	uint16_t num_qs;
-	int size, err, i;
-
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		num_qs = IDPF_RXQ_PER_GRP;
-	else
-		num_qs = IDPF_RXQ_PER_GRP + IDPF_RX_BUFQ_PER_GRP;
-
-	size = sizeof(*vc_rxqs) + (num_qs - 1) *
-		sizeof(struct virtchnl2_rxq_info);
-	vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
-	if (vc_rxqs == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
-		err = -ENOMEM;
-		return err;
-	}
-	vc_rxqs->vport_id = vport->vport_id;
-	vc_rxqs->num_qinfo = num_qs;
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
-		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq->queue_id;
-		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-		rxq_info->data_buffer_size = rxq->rx_buf_len;
-		rxq_info->max_pkt_size = vport->max_pkt_len;
-
-		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
-		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-		rxq_info->ring_len = rxq->nb_rx_desc;
-	}  else {
-		/* Rx queue */
-		rxq_info = &vc_rxqs->qinfo[0];
-		rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr;
-		rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
-		rxq_info->queue_id = rxq->queue_id;
-		rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		rxq_info->data_buffer_size = rxq->rx_buf_len;
-		rxq_info->max_pkt_size = vport->max_pkt_len;
-
-		rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-		rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
-
-		rxq_info->ring_len = rxq->nb_rx_desc;
-		rxq_info->rx_bufq1_id = rxq->bufq1->queue_id;
-		rxq_info->rx_bufq2_id = rxq->bufq2->queue_id;
-		rxq_info->rx_buffer_low_watermark = 64;
-
-		/* Buffer queue */
-		for (i = 1; i <= IDPF_RX_BUFQ_PER_GRP; i++) {
-			struct idpf_rx_queue *bufq = i == 1 ? rxq->bufq1 : rxq->bufq2;
-			rxq_info = &vc_rxqs->qinfo[i];
-			rxq_info->dma_ring_addr = bufq->rx_ring_phys_addr;
-			rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
-			rxq_info->queue_id = bufq->queue_id;
-			rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-			rxq_info->data_buffer_size = bufq->rx_buf_len;
-			rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
-			rxq_info->ring_len = bufq->nb_rx_desc;
-
-			rxq_info->buffer_notif_stride = IDPF_RX_BUF_STRIDE;
-			rxq_info->rx_buffer_low_watermark = 64;
-		}
-	}
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
-	args.in_args = (uint8_t *)vc_rxqs;
-	args.in_args_size = size;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	rte_free(vc_rxqs);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
-
-	return err;
-}
-
-int
-idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
-{
-	struct idpf_adapter *adapter = vport->adapter;
-	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
-	struct virtchnl2_txq_info *txq_info;
-	struct idpf_cmd_info args;
-	uint16_t num_qs;
-	int size, err;
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		num_qs = IDPF_TXQ_PER_GRP;
-	else
-		num_qs = IDPF_TXQ_PER_GRP + IDPF_TX_COMPLQ_PER_GRP;
-
-	size = sizeof(*vc_txqs) + (num_qs - 1) *
-		sizeof(struct virtchnl2_txq_info);
-	vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
-	if (vc_txqs == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
-		err = -ENOMEM;
-		return err;
-	}
-	vc_txqs->vport_id = vport->vport_id;
-	vc_txqs->num_qinfo = num_qs;
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
-		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq->queue_id;
-		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
-		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
-		txq_info->ring_len = txq->nb_tx_desc;
-	} else {
-		/* txq info */
-		txq_info = &vc_txqs->qinfo[0];
-		txq_info->dma_ring_addr = txq->tx_ring_phys_addr;
-		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
-		txq_info->queue_id = txq->queue_id;
-		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq->nb_tx_desc;
-		txq_info->tx_compl_queue_id = txq->complq->queue_id;
-		txq_info->relative_queue_id = txq_info->queue_id;
-
-		/* tx completion queue info */
-		txq_info = &vc_txqs->qinfo[1];
-		txq_info->dma_ring_addr = txq->complq->tx_ring_phys_addr;
-		txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
-		txq_info->queue_id = txq->complq->queue_id;
-		txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
-		txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
-		txq_info->ring_len = txq->complq->nb_tx_desc;
-	}
-
-	memset(&args, 0, sizeof(args));
-	args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
-	args.in_args = (uint8_t *)vc_txqs;
-	args.in_args_size = size;
-	args.out_buffer = adapter->mbx_resp;
-	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-
-	err = idpf_execute_vc_cmd(adapter, &args);
-	rte_free(vc_txqs);
-	if (err != 0)
-		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
-
-	return err;
-}
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
index 650dade0b9..378925166f 100644
--- a/drivers/net/idpf/meson.build
+++ b/drivers/net/idpf/meson.build
@@ -18,7 +18,6 @@ deps += ['common_idpf']
 sources = files(
         'idpf_ethdev.c',
         'idpf_rxtx.c',
-        'idpf_vchnl.c',
 )
 
 if arch_subdir == 'x86'
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 12/19] common/idpf: add help functions for queue setup and release
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (10 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 11/19] common/idpf: add rxq and txq struct beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 13/19] common/idpf: add Rx and Tx data path beilei.xing
                           ` (7 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Refine rxq setup and txq setup.
Move some helper functions for queue setup and queue release
to the common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c  |  414 +++++++++
 drivers/common/idpf/idpf_common_rxtx.h  |   57 ++
 drivers/common/idpf/meson.build         |    1 +
 drivers/common/idpf/version.map         |   15 +
 drivers/net/idpf/idpf_rxtx.c            | 1051 ++++++-----------------
 drivers/net/idpf/idpf_rxtx.h            |    9 -
 drivers/net/idpf/idpf_rxtx_vec_avx512.c |    2 +-
 7 files changed, 773 insertions(+), 776 deletions(-)
 create mode 100644 drivers/common/idpf/idpf_common_rxtx.c

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
new file mode 100644
index 0000000000..832d57c518
--- /dev/null
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -0,0 +1,414 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <rte_mbuf_dyn.h>
+#include "idpf_common_rxtx.h"
+
+int
+idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+	/* The following constraints must be satisfied:
+	 * thresh < rxq->nb_rx_desc
+	 */
+	if (thresh >= nb_desc) {
+		DRV_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+			thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int
+idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+		     uint16_t tx_free_thresh)
+{
+	/* TX descriptors will have their RS bit set after tx_rs_thresh
+	 * descriptors have been used. The TX descriptor ring will be cleaned
+	 * after tx_free_thresh descriptors are used or if the number of
+	 * descriptors required to transmit a packet is greater than the
+	 * number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+			"number of TX descriptors (%u) minus 2",
+			tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		DRV_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+			"number of TX descriptors (%u) minus 3.",
+			tx_free_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+			"equal to tx_free_thresh (%u).",
+			tx_rs_thresh, tx_free_thresh);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		DRV_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+			"number of TX descriptors (%u).",
+			tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+void
+idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	uint16_t i;
+
+	if (rxq->sw_ring == NULL)
+		return;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i] != NULL) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+void
+idpf_release_txq_mbufs(struct idpf_tx_queue *txq)
+{
+	uint16_t nb_desc, i;
+
+	if (txq == NULL || txq->sw_ring == NULL) {
+		DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+		return;
+	}
+
+	if (txq->sw_nb_desc != 0) {
+		/* For split queue model, descriptor ring */
+		nb_desc = txq->sw_nb_desc;
+	} else {
+		/* For single queue model */
+		nb_desc = txq->nb_tx_desc;
+	}
+	for (i = 0; i < nb_desc; i++) {
+		if (txq->sw_ring[i].mbuf != NULL) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+void
+idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (rxq == NULL)
+		return;
+
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
+	     i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	rxq->rx_tail = 0;
+	rxq->expected_gen_id = 1;
+}
+
+void
+idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (rxq == NULL)
+		return;
+
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
+	     i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	/* The next descriptor id which can be received. */
+	rxq->rx_next_avail = 0;
+
+	/* The next descriptor id which can be refilled. */
+	rxq->rx_tail = 0;
+	/* The number of descriptors which can be refilled. */
+	rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
+
+	rxq->bufq1 = NULL;
+	rxq->bufq2 = NULL;
+}
+
+void
+idpf_reset_split_rx_queue(struct idpf_rx_queue *rxq)
+{
+	idpf_reset_split_rx_descq(rxq);
+	idpf_reset_split_rx_bufq(rxq->bufq1);
+	idpf_reset_split_rx_bufq(rxq->bufq2);
+}
+
+void
+idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (rxq == NULL)
+		return;
+
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
+	     i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+
+	rte_pktmbuf_free(rxq->pkt_first_seg);
+
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+	rxq->rxrearm_start = 0;
+	rxq->rxrearm_nb = 0;
+}
+
+void
+idpf_reset_split_tx_descq(struct idpf_tx_queue *txq)
+{
+	struct idpf_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (txq == NULL) {
+		DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->desc_ring)[i] = 0;
+
+	txe = txq->sw_ring;
+	prev = (uint16_t)(txq->sw_nb_desc - 1);
+	for (i = 0; i < txq->sw_nb_desc; i++) {
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	/* Use this as next to clean for split desc queue */
+	txq->last_desc_cleaned = 0;
+	txq->sw_tail = 0;
+	txq->nb_free = txq->nb_tx_desc - 1;
+}
+
+void
+idpf_reset_split_tx_complq(struct idpf_tx_queue *cq)
+{
+	uint32_t i, size;
+
+	if (cq == NULL) {
+		DRV_LOG(DEBUG, "Pointer to complq is NULL");
+		return;
+	}
+
+	size = sizeof(struct idpf_splitq_tx_compl_desc) * cq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)cq->compl_ring)[i] = 0;
+
+	cq->tx_tail = 0;
+	cq->expected_gen_id = 1;
+}
+
+void
+idpf_reset_single_tx_queue(struct idpf_tx_queue *txq)
+{
+	struct idpf_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (txq == NULL) {
+		DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct idpf_flex_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].qw1.cmd_dtype =
+			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_free = txq->nb_tx_desc - 1;
+
+	txq->next_dd = txq->rs_thresh - 1;
+	txq->next_rs = txq->rs_thresh - 1;
+}
+
+void
+idpf_rx_queue_release(void *rxq)
+{
+	struct idpf_rx_queue *q = rxq;
+
+	if (q == NULL)
+		return;
+
+	/* Split queue */
+	if (q->bufq1 != NULL && q->bufq2 != NULL) {
+		q->bufq1->ops->release_mbufs(q->bufq1);
+		rte_free(q->bufq1->sw_ring);
+		rte_memzone_free(q->bufq1->mz);
+		rte_free(q->bufq1);
+		q->bufq2->ops->release_mbufs(q->bufq2);
+		rte_free(q->bufq2->sw_ring);
+		rte_memzone_free(q->bufq2->mz);
+		rte_free(q->bufq2);
+		rte_memzone_free(q->mz);
+		rte_free(q);
+		return;
+	}
+
+	/* Single queue */
+	q->ops->release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+idpf_tx_queue_release(void *txq)
+{
+	struct idpf_tx_queue *q = txq;
+
+	if (q == NULL)
+		return;
+
+	if (q->complq) {
+		rte_memzone_free(q->complq->mz);
+		rte_free(q->complq);
+	}
+
+	q->ops->release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+int
+idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(mbuf == NULL)) {
+			DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &((volatile struct virtchnl2_singleq_rx_buf_desc *)(rxq->rx_ring))[i];
+		rxd->pkt_addr = dma_addr;
+		rxd->hdr_addr = 0;
+		rxd->rsvd1 = 0;
+		rxd->rsvd2 = 0;
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	return 0;
+}
+
+int
+idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(mbuf == NULL)) {
+			DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &((volatile struct virtchnl2_splitq_rx_buf_desc *)(rxq->rx_ring))[i];
+		rxd->qword0.buf_id = i;
+		rxd->qword0.rsvd0 = 0;
+		rxd->qword0.rsvd1 = 0;
+		rxd->pkt_addr = dma_addr;
+		rxd->hdr_addr = 0;
+		rxd->rsvd2 = 0;
+
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	rxq->nb_rx_hold = 0;
+	rxq->rx_tail = rxq->nb_rx_desc - 1;
+
+	return 0;
+}
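
The reset helpers above assume the software ring was allocated with
IDPF_RX_MAX_BURST extra entries beyond nb_rx_desc, with the padding
slots pointed at the queue's fake_mbuf. A minimal sketch of that
invariant (names taken from this patch; only the padding step is
shown):

/* Pad the SW ring so a vectorized burst that starts near the ring end
 * never dereferences an uninitialized slot. Assumes the ring holds
 * nb_rx_desc + IDPF_RX_MAX_BURST entries, as the setup paths in this
 * series allocate.
 */
static void
pad_sw_ring(struct rte_mbuf **sw_ring, uint16_t nb_rx_desc,
	    struct rte_mbuf *fake_mbuf)
{
	uint16_t i;

	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
		sw_ring[nb_rx_desc + i] = fake_mbuf;
}
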
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index f3e31aaf2f..874c4848c4 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -5,11 +5,28 @@
 #ifndef _IDPF_COMMON_RXTX_H_
 #define _IDPF_COMMON_RXTX_H_
 
+#include <rte_mbuf.h>
 #include <rte_mbuf_ptype.h>
 #include <rte_mbuf_core.h>
 
 #include "idpf_common_device.h"
 
+#define IDPF_RX_MAX_BURST		32
+
+#define IDPF_RX_OFFLOAD_IPV4_CKSUM		RTE_BIT64(1)
+#define IDPF_RX_OFFLOAD_UDP_CKSUM		RTE_BIT64(2)
+#define IDPF_RX_OFFLOAD_TCP_CKSUM		RTE_BIT64(3)
+#define IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM	RTE_BIT64(6)
+#define IDPF_RX_OFFLOAD_TIMESTAMP		RTE_BIT64(14)
+
+#define IDPF_TX_OFFLOAD_IPV4_CKSUM       RTE_BIT64(1)
+#define IDPF_TX_OFFLOAD_UDP_CKSUM        RTE_BIT64(2)
+#define IDPF_TX_OFFLOAD_TCP_CKSUM        RTE_BIT64(3)
+#define IDPF_TX_OFFLOAD_SCTP_CKSUM       RTE_BIT64(4)
+#define IDPF_TX_OFFLOAD_TCP_TSO          RTE_BIT64(5)
+#define IDPF_TX_OFFLOAD_MULTI_SEGS       RTE_BIT64(15)
+#define IDPF_TX_OFFLOAD_MBUF_FAST_FREE   RTE_BIT64(16)
+
 struct idpf_rx_stats {
 	uint64_t mbuf_alloc_failed;
 };
@@ -109,4 +126,44 @@ struct idpf_tx_queue {
 	struct idpf_tx_queue *complq;
 };
 
+struct idpf_rxq_ops {
+	void (*release_mbufs)(struct idpf_rx_queue *rxq);
+};
+
+struct idpf_txq_ops {
+	void (*release_mbufs)(struct idpf_tx_queue *txq);
+};
+
+__rte_internal
+int idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh);
+__rte_internal
+int idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+			 uint16_t tx_free_thresh);
+__rte_internal
+void idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_release_txq_mbufs(struct idpf_tx_queue *txq);
+__rte_internal
+void idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_split_rx_queue(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq);
+__rte_internal
+void idpf_reset_split_tx_descq(struct idpf_tx_queue *txq);
+__rte_internal
+void idpf_reset_split_tx_complq(struct idpf_tx_queue *cq);
+__rte_internal
+void idpf_reset_single_tx_queue(struct idpf_tx_queue *txq);
+__rte_internal
+void idpf_rx_queue_release(void *rxq);
+__rte_internal
+void idpf_tx_queue_release(void *txq);
+__rte_internal
+int idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq);
+__rte_internal
+int idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq);
+
 #endif /* _IDPF_COMMON_RXTX_H_ */
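
The two ops structures above are the only per-queue hooks for now; a
consumer assigns a vtable whose release_mbufs points at the common
helpers (or at its own variant). A sketch, assuming the common
defaults:

static const struct idpf_rxq_ops sketch_rxq_ops = {
	.release_mbufs = idpf_release_rxq_mbufs,
};

static void
sketch_attach_ops(struct idpf_rx_queue *rxq)
{
	rxq->ops = &sketch_rxq_ops;
	/* On stop/release: rxq->ops->release_mbufs(rxq); */
}
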
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index ea1063a7a2..6735f4af61 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -5,6 +5,7 @@ deps += ['mbuf']
 
 sources = files(
         'idpf_common_device.c',
+        'idpf_common_rxtx.c',
         'idpf_common_virtchnl.c',
 )
 
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 69295270df..aa6ebd7c6c 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,11 +3,26 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+	idpf_alloc_single_rxq_mbufs;
+	idpf_alloc_split_rxq_mbufs;
+	idpf_check_rx_thresh;
+	idpf_check_tx_thresh;
 	idpf_config_irq_map;
 	idpf_config_irq_unmap;
 	idpf_config_rss;
 	idpf_create_vport_info_init;
 	idpf_execute_vc_cmd;
+	idpf_release_rxq_mbufs;
+	idpf_release_txq_mbufs;
+	idpf_reset_single_rx_queue;
+	idpf_reset_single_tx_queue;
+	idpf_reset_split_rx_bufq;
+	idpf_reset_split_rx_descq;
+	idpf_reset_split_rx_queue;
+	idpf_reset_split_tx_complq;
+	idpf_reset_split_tx_descq;
+	idpf_rx_queue_release;
+	idpf_tx_queue_release;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
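
Every symbol listed here must also carry the __rte_internal attribute
on its declaration; both halves are needed for the net PMD to link
against the common module. For example, for one of the new helpers:

/* idpf_common_rxtx.h */
__rte_internal
int idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh);

/* version.map: the name goes into the INTERNAL node, kept sorted:
 * INTERNAL {
 *	...
 *	idpf_check_rx_thresh;
 *	...
 * };
 */
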
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 068eb8000e..fb1814d893 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -12,358 +12,141 @@
 
 static int idpf_timestamp_dynfield_offset = -1;
 
-static int
-check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
-{
-	/* The following constraints must be satisfied:
-	 *   thresh < rxq->nb_rx_desc
-	 */
-	if (thresh >= nb_desc) {
-		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
-			     thresh, nb_desc);
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int
-check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
-		uint16_t tx_free_thresh)
+static uint64_t
+idpf_rx_offload_convert(uint64_t offload)
 {
-	/* TX descriptors will have their RS bit set after tx_rs_thresh
-	 * descriptors have been used. The TX descriptor ring will be cleaned
-	 * after tx_free_thresh descriptors are used or if the number of
-	 * descriptors required to transmit a packet is greater than the
-	 * number of free TX descriptors.
-	 *
-	 * The following constraints must be satisfied:
-	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
-	 *  - tx_free_thresh must be less than the size of the ring minus 3.
-	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
-	 *  - tx_rs_thresh must be a divisor of the ring size.
-	 *
-	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
-	 * race condition, hence the maximum threshold constraints. When set
-	 * to zero use default values.
-	 */
-	if (tx_rs_thresh >= (nb_desc - 2)) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
-			     "number of TX descriptors (%u) minus 2",
-			     tx_rs_thresh, nb_desc);
-		return -EINVAL;
-	}
-	if (tx_free_thresh >= (nb_desc - 3)) {
-		PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
-			     "number of TX descriptors (%u) minus 3.",
-			     tx_free_thresh, nb_desc);
-		return -EINVAL;
-	}
-	if (tx_rs_thresh > tx_free_thresh) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
-			     "equal to tx_free_thresh (%u).",
-			     tx_rs_thresh, tx_free_thresh);
-		return -EINVAL;
-	}
-	if ((nb_desc % tx_rs_thresh) != 0) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
-			     "number of TX descriptors (%u).",
-			     tx_rs_thresh, nb_desc);
-		return -EINVAL;
-	}
-
-	return 0;
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
 }
 
-static void
-release_rxq_mbufs(struct idpf_rx_queue *rxq)
+static uint64_t
+idpf_tx_offload_convert(uint64_t offload)
 {
-	uint16_t i;
-
-	if (rxq->sw_ring == NULL)
-		return;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		if (rxq->sw_ring[i] != NULL) {
-			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-			rxq->sw_ring[i] = NULL;
-		}
-	}
-}
-
-static void
-release_txq_mbufs(struct idpf_tx_queue *txq)
-{
-	uint16_t nb_desc, i;
-
-	if (txq == NULL || txq->sw_ring == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
-		return;
-	}
-
-	if (txq->sw_nb_desc != 0) {
-		/* For split queue model, descriptor ring */
-		nb_desc = txq->sw_nb_desc;
-	} else {
-		/* For single queue model */
-		nb_desc = txq->nb_tx_desc;
-	}
-	for (i = 0; i < nb_desc; i++) {
-		if (txq->sw_ring[i].mbuf != NULL) {
-			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
-			txq->sw_ring[i].mbuf = NULL;
-		}
-	}
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
 }
 
 static const struct idpf_rxq_ops def_rxq_ops = {
-	.release_mbufs = release_rxq_mbufs,
+	.release_mbufs = idpf_release_rxq_mbufs,
 };
 
 static const struct idpf_txq_ops def_txq_ops = {
-	.release_mbufs = release_txq_mbufs,
+	.release_mbufs = idpf_release_txq_mbufs,
 };
 
-static void
-reset_split_rx_descq(struct idpf_rx_queue *rxq)
-{
-	uint16_t len;
-	uint32_t i;
-
-	if (rxq == NULL)
-		return;
-
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-
-	for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
-	     i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-	rxq->rx_tail = 0;
-	rxq->expected_gen_id = 1;
-}
-
-static void
-reset_split_rx_bufq(struct idpf_rx_queue *rxq)
-{
-	uint16_t len;
-	uint32_t i;
-
-	if (rxq == NULL)
-		return;
-
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-
-	for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
-	     i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
-
-	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
-		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
-
-	/* The next descriptor id which can be received. */
-	rxq->rx_next_avail = 0;
-
-	/* The next descriptor id which can be refilled. */
-	rxq->rx_tail = 0;
-	/* The number of descriptors which can be refilled. */
-	rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
-
-	rxq->bufq1 = NULL;
-	rxq->bufq2 = NULL;
-}
-
-static void
-idpf_rx_queue_release(void *rxq)
-{
-	struct idpf_rx_queue *q = rxq;
-
-	if (q == NULL)
-		return;
-
-	/* Split queue */
-	if (q->bufq1 != NULL && q->bufq2 != NULL) {
-		q->bufq1->ops->release_mbufs(q->bufq1);
-		rte_free(q->bufq1->sw_ring);
-		rte_memzone_free(q->bufq1->mz);
-		rte_free(q->bufq1);
-		q->bufq2->ops->release_mbufs(q->bufq2);
-		rte_free(q->bufq2->sw_ring);
-		rte_memzone_free(q->bufq2->mz);
-		rte_free(q->bufq2);
-		rte_memzone_free(q->mz);
-		rte_free(q);
-		return;
-	}
-
-	/* Single queue */
-	q->ops->release_mbufs(q);
-	rte_free(q->sw_ring);
-	rte_memzone_free(q->mz);
-	rte_free(q);
-}
-
-static void
-idpf_tx_queue_release(void *txq)
-{
-	struct idpf_tx_queue *q = txq;
-
-	if (q == NULL)
-		return;
-
-	if (q->complq) {
-		rte_memzone_free(q->complq->mz);
-		rte_free(q->complq);
-	}
-
-	q->ops->release_mbufs(q);
-	rte_free(q->sw_ring);
-	rte_memzone_free(q->mz);
-	rte_free(q);
-}
-
-static inline void
-reset_split_rx_queue(struct idpf_rx_queue *rxq)
+static const struct rte_memzone *
+idpf_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
 {
-	reset_split_rx_descq(rxq);
-	reset_split_rx_bufq(rxq->bufq1);
-	reset_split_rx_bufq(rxq->bufq2);
-}
-
-static void
-reset_single_rx_queue(struct idpf_rx_queue *rxq)
-{
-	uint16_t len;
-	uint32_t i;
-
-	if (rxq == NULL)
-		return;
-
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-
-	for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
-	     i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
-
-	for (i = 0; i < IDPF_RX_MAX_BURST; i++)
-		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
-
-	rxq->rx_tail = 0;
-	rxq->nb_rx_hold = 0;
-
-	rte_pktmbuf_free(rxq->pkt_first_seg);
-
-	rxq->pkt_first_seg = NULL;
-	rxq->pkt_last_seg = NULL;
-	rxq->rxrearm_start = 0;
-	rxq->rxrearm_nb = 0;
-}
-
-static void
-reset_split_tx_descq(struct idpf_tx_queue *txq)
-{
-	struct idpf_tx_entry *txe;
-	uint32_t i, size;
-	uint16_t prev;
-
-	if (txq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
-		return;
-	}
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
 
-	size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)txq->desc_ring)[i] = 0;
-
-	txe = txq->sw_ring;
-	prev = (uint16_t)(txq->sw_nb_desc - 1);
-	for (i = 0; i < txq->sw_nb_desc; i++) {
-		txe[i].mbuf = NULL;
-		txe[i].last_id = i;
-		txe[prev].next_id = i;
-		prev = i;
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      IDPF_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Tx ring", sizeof("idpf Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      IDPF_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Rx ring", sizeof("idpf Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Tx compl ring", sizeof("idpf Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      IDPF_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "idpf Rx buf ring", sizeof("idpf Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
 	}
 
-	txq->tx_tail = 0;
-	txq->nb_used = 0;
-
-	/* Use this as next to clean for split desc queue */
-	txq->last_desc_cleaned = 0;
-	txq->sw_tail = 0;
-	txq->nb_free = txq->nb_tx_desc - 1;
-}
-
-static void
-reset_split_tx_complq(struct idpf_tx_queue *cq)
-{
-	uint32_t i, size;
-
-	if (cq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to complq is NULL");
-		return;
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, IDPF_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
 	}
 
-	size = sizeof(struct idpf_splitq_tx_compl_desc) * cq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)cq->compl_ring)[i] = 0;
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
 
-	cq->tx_tail = 0;
-	cq->expected_gen_id = 1;
+	return mz;
 }
 
 static void
-reset_single_tx_queue(struct idpf_tx_queue *txq)
+idpf_dma_zone_release(const struct rte_memzone *mz)
 {
-	struct idpf_tx_entry *txe;
-	uint32_t i, size;
-	uint16_t prev;
-
-	if (txq == NULL) {
-		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
-		return;
-	}
-
-	txe = txq->sw_ring;
-	size = sizeof(struct idpf_flex_tx_desc) * txq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)txq->tx_ring)[i] = 0;
-
-	prev = (uint16_t)(txq->nb_tx_desc - 1);
-	for (i = 0; i < txq->nb_tx_desc; i++) {
-		txq->tx_ring[i].qw1.cmd_dtype =
-			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE);
-		txe[i].mbuf =  NULL;
-		txe[i].last_id = i;
-		txe[prev].next_id = i;
-		prev = i;
-	}
-
-	txq->tx_tail = 0;
-	txq->nb_used = 0;
-
-	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
-	txq->nb_free = txq->nb_tx_desc - 1;
-
-	txq->next_dd = txq->rs_thresh - 1;
-	txq->next_rs = txq->rs_thresh - 1;
+	rte_memzone_free(mz);
 }
 
 static int
-idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
+idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 			 uint16_t queue_idx, uint16_t rx_free_thresh,
 			 uint16_t nb_desc, unsigned int socket_id,
-			 struct rte_mempool *mp)
+			 struct rte_mempool *mp, uint8_t bufq_id)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
 	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
-	uint32_t ring_size;
+	struct idpf_rx_queue *bufq;
 	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("idpf bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
 
 	bufq->mp = mp;
 	bufq->nb_rx_desc = nb_desc;
@@ -376,8 +159,21 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
 	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
 	bufq->rx_buf_len = len;
 
-	/* Allocate the software ring. */
+	/* Allocate a little more to support bulk allocation. */
 	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = idpf_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
 	bufq->sw_ring =
 		rte_zmalloc_socket("idpf rx bufq sw ring",
 				   sizeof(struct rte_mbuf *) * len,
@@ -385,55 +181,60 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
 				   socket_id);
 	if (bufq->sw_ring == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
-		return -ENOMEM;
-	}
-
-	/* Allocate a liitle more to support bulk allocate. */
-	len = nb_desc + IDPF_RX_MAX_BURST;
-	ring_size = RTE_ALIGN(len *
-			      sizeof(struct virtchnl2_splitq_rx_buf_desc),
-			      IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "rx_buf_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
-		rte_free(bufq->sw_ring);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
 	}
 
-	/* Zero all the descriptors in the ring. */
-	memset(mz->addr, 0, ring_size);
-	bufq->rx_ring_phys_addr = mz->iova;
-	bufq->rx_ring = mz->addr;
-
-	bufq->mz = mz;
-	reset_split_rx_bufq(bufq);
-	bufq->q_set = true;
+	idpf_reset_split_rx_bufq(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
 	bufq->ops = &def_rxq_ops;
+	bufq->q_set = true;
 
-	/* TODO: allow bulk or vec */
+	if (bufq_id == 1) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == 2) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
 
 	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	idpf_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
 }
 
-static int
-idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			  uint16_t nb_desc, unsigned int socket_id,
-			  const struct rte_eth_rxconf *rx_conf,
-			  struct rte_mempool *mp)
+static void
+idpf_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	idpf_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_rx_queue *bufq1, *bufq2;
+	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
 	struct idpf_rx_queue *rxq;
 	uint16_t rx_free_thresh;
-	uint32_t ring_size;
 	uint64_t offloads;
-	uint16_t qid;
+	bool is_splitq;
 	uint16_t len;
 	int ret;
 
@@ -443,7 +244,7 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
 		IDPF_DEFAULT_RX_FREE_THRESH :
 		rx_conf->rx_free_thresh;
-	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+	if (idpf_check_rx_thresh(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Free memory if needed */
@@ -452,16 +253,19 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		dev->data->rx_queues[queue_idx] = NULL;
 	}
 
-	/* Setup Rx description queue */
+	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("idpf rxq",
 				 sizeof(struct idpf_rx_queue),
 				 RTE_CACHE_LINE_SIZE,
 				 socket_id);
 	if (rxq == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
 	}
 
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
 	rxq->mp = mp;
 	rxq->nb_rx_desc = nb_desc;
 	rxq->rx_free_thresh = rx_free_thresh;
@@ -470,343 +274,129 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
 	rxq->rx_hdr_len = 0;
 	rxq->adapter = adapter;
-	rxq->offloads = offloads;
+	rxq->offloads = idpf_rx_offload_convert(offloads);
 
 	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
 	rxq->rx_buf_len = len;
 
-	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
-	ring_size = RTE_ALIGN(len *
-			      sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
-			      IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "rx_cpmpl_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-
+	/* Allocate a little more to support bulk allocation. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = idpf_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
 	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
 		ret = -ENOMEM;
-		goto free_rxq;
+		goto err_mz_reserve;
 	}
-
-	/* Zero all the descriptors in the ring. */
-	memset(mz->addr, 0, ring_size);
 	rxq->rx_ring_phys_addr = mz->iova;
 	rxq->rx_ring = mz->addr;
-
 	rxq->mz = mz;
-	reset_split_rx_descq(rxq);
 
-	/* TODO: allow bulk or vec */
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("idpf rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
 
-	/* setup Rx buffer queue */
-	bufq1 = rte_zmalloc_socket("idpf bufq1",
-				   sizeof(struct idpf_rx_queue),
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (bufq1 == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 1.");
-		ret = -ENOMEM;
-		goto free_mz;
-	}
-	qid = 2 * queue_idx;
-	ret = idpf_rx_split_bufq_setup(dev, bufq1, qid, rx_free_thresh,
-				       nb_desc, socket_id, mp);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
-		ret = -EINVAL;
-		goto free_bufq1;
-	}
-	rxq->bufq1 = bufq1;
+		idpf_reset_single_rx_queue(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
+	} else {
+		idpf_reset_split_rx_descq(rxq);
 
-	bufq2 = rte_zmalloc_socket("idpf bufq2",
-				   sizeof(struct idpf_rx_queue),
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (bufq2 == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 2.");
-		rte_free(bufq1->sw_ring);
-		rte_memzone_free(bufq1->mz);
-		ret = -ENOMEM;
-		goto free_bufq1;
-	}
-	qid = 2 * queue_idx + 1;
-	ret = idpf_rx_split_bufq_setup(dev, bufq2, qid, rx_free_thresh,
-				       nb_desc, socket_id, mp);
-	if (ret != 0) {
-		PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
-		rte_free(bufq1->sw_ring);
-		rte_memzone_free(bufq1->mz);
-		ret = -EINVAL;
-		goto free_bufq2;
+		/* Setup Rx buffer queues */
+		ret = idpf_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = idpf_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
 	}
-	rxq->bufq2 = bufq2;
 
 	rxq->q_set = true;
 	dev->data->rx_queues[queue_idx] = rxq;
 
 	return 0;
 
-free_bufq2:
-	rte_free(bufq2);
-free_bufq1:
-	rte_free(bufq1);
-free_mz:
-	rte_memzone_free(mz);
-free_rxq:
+err_bufq2_setup:
+	idpf_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	idpf_dma_zone_release(mz);
+err_mz_reserve:
 	rte_free(rxq);
-
+err_rxq_alloc:
 	return ret;
 }
 
 static int
-idpf_rx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			   uint16_t nb_desc, unsigned int socket_id,
-			   const struct rte_eth_rxconf *rx_conf,
-			   struct rte_mempool *mp)
+idpf_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
-	struct idpf_rx_queue *rxq;
-	uint16_t rx_free_thresh;
-	uint32_t ring_size;
-	uint64_t offloads;
-	uint16_t len;
-
-	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
-
-	/* Check free threshold */
-	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
-		IDPF_DEFAULT_RX_FREE_THRESH :
-		rx_conf->rx_free_thresh;
-	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
-		return -EINVAL;
-
-	/* Free memory if needed */
-	if (dev->data->rx_queues[queue_idx] != NULL) {
-		idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
-		dev->data->rx_queues[queue_idx] = NULL;
-	}
-
-	/* Setup Rx description queue */
-	rxq = rte_zmalloc_socket("idpf rxq",
-				 sizeof(struct idpf_rx_queue),
-				 RTE_CACHE_LINE_SIZE,
-				 socket_id);
-	if (rxq == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
-		return -ENOMEM;
-	}
-
-	rxq->mp = mp;
-	rxq->nb_rx_desc = nb_desc;
-	rxq->rx_free_thresh = rx_free_thresh;
-	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
-	rxq->port_id = dev->data->port_id;
-	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
-	rxq->rx_hdr_len = 0;
-	rxq->adapter = adapter;
-	rxq->offloads = offloads;
-
-	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
-	rxq->rx_buf_len = len;
-
-	len = nb_desc + IDPF_RX_MAX_BURST;
-	rxq->sw_ring =
-		rte_zmalloc_socket("idpf rxq sw ring",
-				   sizeof(struct rte_mbuf *) * len,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (rxq->sw_ring == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
-		rte_free(rxq);
-		return -ENOMEM;
-	}
-
-	/* Allocate a liitle more to support bulk allocate. */
-	len = nb_desc + IDPF_RX_MAX_BURST;
-	ring_size = RTE_ALIGN(len *
-			      sizeof(struct virtchnl2_singleq_rx_buf_desc),
-			      IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "rx ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
-		rte_free(rxq->sw_ring);
-		rte_free(rxq);
-		return -ENOMEM;
-	}
-
-	/* Zero all the descriptors in the ring. */
-	memset(mz->addr, 0, ring_size);
-	rxq->rx_ring_phys_addr = mz->iova;
-	rxq->rx_ring = mz->addr;
-
-	rxq->mz = mz;
-	reset_single_rx_queue(rxq);
-	rxq->q_set = true;
-	dev->data->rx_queues[queue_idx] = rxq;
-	rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
-			queue_idx * vport->chunks_info.rx_qtail_spacing);
-	rxq->ops = &def_rxq_ops;
-
-	return 0;
-}
-
-int
-idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-		    uint16_t nb_desc, unsigned int socket_id,
-		    const struct rte_eth_rxconf *rx_conf,
-		    struct rte_mempool *mp)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		return idpf_rx_single_queue_setup(dev, queue_idx, nb_desc,
-						  socket_id, rx_conf, mp);
-	else
-		return idpf_rx_split_queue_setup(dev, queue_idx, nb_desc,
-						 socket_id, rx_conf, mp);
-}
-
-static int
-idpf_tx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			  uint16_t nb_desc, unsigned int socket_id,
-			  const struct rte_eth_txconf *tx_conf)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-	struct idpf_adapter *adapter = vport->adapter;
-	uint16_t tx_rs_thresh, tx_free_thresh;
-	struct idpf_hw *hw = &adapter->hw;
-	struct idpf_tx_queue *txq, *cq;
-	const struct rte_memzone *mz;
-	uint32_t ring_size;
-	uint64_t offloads;
+	struct idpf_tx_queue *cq;
 	int ret;
 
-	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
-
-	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh != 0) ?
-		tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
-	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh != 0) ?
-		tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
-		return -EINVAL;
-
-	/* Free memory if needed. */
-	if (dev->data->tx_queues[queue_idx] != NULL) {
-		idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
-		dev->data->tx_queues[queue_idx] = NULL;
-	}
-
-	/* Allocate the TX queue data structure. */
-	txq = rte_zmalloc_socket("idpf split txq",
-				 sizeof(struct idpf_tx_queue),
-				 RTE_CACHE_LINE_SIZE,
-				 socket_id);
-	if (txq == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
-		return -ENOMEM;
-	}
-
-	txq->nb_tx_desc = nb_desc;
-	txq->rs_thresh = tx_rs_thresh;
-	txq->free_thresh = tx_free_thresh;
-	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
-	txq->port_id = dev->data->port_id;
-	txq->offloads = offloads;
-	txq->tx_deferred_start = tx_conf->tx_deferred_start;
-
-	/* Allocate software ring */
-	txq->sw_nb_desc = 2 * nb_desc;
-	txq->sw_ring =
-		rte_zmalloc_socket("idpf split tx sw ring",
-				   sizeof(struct idpf_tx_entry) *
-				   txq->sw_nb_desc,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (txq->sw_ring == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
-		ret = -ENOMEM;
-		goto err_txq_sw_ring;
-	}
-
-	/* Allocate TX hardware ring descriptors. */
-	ring_size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
-	ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "split_tx_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
-	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
-		ret = -ENOMEM;
-		goto err_txq_mz;
-	}
-	txq->tx_ring_phys_addr = mz->iova;
-	txq->desc_ring = mz->addr;
-
-	txq->mz = mz;
-	reset_split_tx_descq(txq);
-	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
-			queue_idx * vport->chunks_info.tx_qtail_spacing);
-	txq->ops = &def_txq_ops;
-
-	/* Allocate the TX completion queue data structure. */
-	txq->complq = rte_zmalloc_socket("idpf splitq cq",
-					 sizeof(struct idpf_tx_queue),
-					 RTE_CACHE_LINE_SIZE,
-					 socket_id);
-	cq = txq->complq;
+	cq = rte_zmalloc_socket("idpf splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
 	if (cq == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
 		ret = -ENOMEM;
-		goto err_cq;
+		goto err_cq_alloc;
 	}
-	cq->nb_tx_desc = 2 * nb_desc;
+
+	cq->nb_tx_desc = nb_desc;
 	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
 	cq->port_id = dev->data->port_id;
 	cq->txqs = dev->data->tx_queues;
 	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
 
-	ring_size = sizeof(struct idpf_splitq_tx_compl_desc) * cq->nb_tx_desc;
-	ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "tx_split_compl_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
+	mz = idpf_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
 	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX completion queue");
 		ret = -ENOMEM;
-		goto err_cq_mz;
+		goto err_mz_reserve;
 	}
 	cq->tx_ring_phys_addr = mz->iova;
 	cq->compl_ring = mz->addr;
 	cq->mz = mz;
-	reset_split_tx_complq(cq);
+	idpf_reset_split_tx_complq(cq);
 
-	txq->q_set = true;
-	dev->data->tx_queues[queue_idx] = txq;
+	txq->complq = cq;
 
 	return 0;
 
-err_cq_mz:
+err_mz_reserve:
 	rte_free(cq);
-err_cq:
-	rte_memzone_free(txq->mz);
-err_txq_mz:
-	rte_free(txq->sw_ring);
-err_txq_sw_ring:
-	rte_free(txq);
-
+err_cq_alloc:
 	return ret;
 }
 
-static int
-idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-			   uint16_t nb_desc, unsigned int socket_id,
-			   const struct rte_eth_txconf *tx_conf)
+int
+idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_adapter *adapter = vport->adapter;
@@ -814,8 +404,10 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	struct idpf_hw *hw = &adapter->hw;
 	const struct rte_memzone *mz;
 	struct idpf_tx_queue *txq;
-	uint32_t ring_size;
 	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
 
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 
@@ -823,7 +415,7 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
 	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
 		tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+	if (idpf_check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Free memory if needed. */
@@ -839,71 +431,74 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 				 socket_id);
 	if (txq == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_txq_alloc;
 	}
 
-	/* TODO: vlan offload */
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
 
 	txq->nb_tx_desc = nb_desc;
 	txq->rs_thresh = tx_rs_thresh;
 	txq->free_thresh = tx_free_thresh;
 	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
 	txq->port_id = dev->data->port_id;
-	txq->offloads = offloads;
+	txq->offloads = idpf_tx_offload_convert(offloads);
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 
-	/* Allocate software ring */
-	txq->sw_ring =
-		rte_zmalloc_socket("idpf tx sw ring",
-				   sizeof(struct idpf_tx_entry) * nb_desc,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (txq->sw_ring == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
-		rte_free(txq);
-		return -ENOMEM;
-	}
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
 
 	/* Allocate TX hardware ring descriptors. */
-	ring_size = sizeof(struct idpf_flex_tx_desc) * nb_desc;
-	ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
-	mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
-				      ring_size, IDPF_RING_BASE_ALIGN,
-				      socket_id);
+	mz = idpf_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
 	if (mz == NULL) {
-		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
-		rte_free(txq->sw_ring);
-		rte_free(txq);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_mz_reserve;
 	}
-
 	txq->tx_ring_phys_addr = mz->iova;
-	txq->tx_ring = mz->addr;
-
 	txq->mz = mz;
-	reset_single_tx_queue(txq);
-	txq->q_set = true;
-	dev->data->tx_queues[queue_idx] = txq;
+
+	txq->sw_ring = rte_zmalloc_socket("idpf tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		idpf_reset_single_tx_queue(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		idpf_reset_split_tx_descq(txq);
+
+		/* Setup tx completion queue if split model */
+		ret = idpf_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
 	txq->ops = &def_txq_ops;
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
 
 	return 0;
-}
 
-int
-idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-		    uint16_t nb_desc, unsigned int socket_id,
-		    const struct rte_eth_txconf *tx_conf)
-{
-	struct idpf_vport *vport = dev->data->dev_private;
-
-	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
-		return idpf_tx_single_queue_setup(dev, queue_idx, nb_desc,
-						  socket_id, tx_conf);
-	else
-		return idpf_tx_split_queue_setup(dev, queue_idx, nb_desc,
-						 socket_id, tx_conf);
+err_complq_setup:
+err_sw_ring_alloc:
+	idpf_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
 }
 
 static int
@@ -916,89 +511,13 @@ idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
 							 &idpf_timestamp_dynflag);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR,
-				"Cannot register mbuf field/flag for timestamp");
+				    "Cannot register mbuf field/flag for timestamp");
 			return -EINVAL;
 		}
 	}
 	return 0;
 }
 
-static int
-idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
-{
-	volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
-	struct rte_mbuf *mbuf = NULL;
-	uint64_t dma_addr;
-	uint16_t i;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		mbuf = rte_mbuf_raw_alloc(rxq->mp);
-		if (unlikely(mbuf == NULL)) {
-			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
-			return -ENOMEM;
-		}
-
-		rte_mbuf_refcnt_set(mbuf, 1);
-		mbuf->next = NULL;
-		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
-		mbuf->nb_segs = 1;
-		mbuf->port = rxq->port_id;
-
-		dma_addr =
-			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
-
-		rxd = &((volatile struct virtchnl2_singleq_rx_buf_desc *)(rxq->rx_ring))[i];
-		rxd->pkt_addr = dma_addr;
-		rxd->hdr_addr = 0;
-		rxd->rsvd1 = 0;
-		rxd->rsvd2 = 0;
-		rxq->sw_ring[i] = mbuf;
-	}
-
-	return 0;
-}
-
-static int
-idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
-{
-	volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
-	struct rte_mbuf *mbuf = NULL;
-	uint64_t dma_addr;
-	uint16_t i;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		mbuf = rte_mbuf_raw_alloc(rxq->mp);
-		if (unlikely(mbuf == NULL)) {
-			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
-			return -ENOMEM;
-		}
-
-		rte_mbuf_refcnt_set(mbuf, 1);
-		mbuf->next = NULL;
-		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
-		mbuf->nb_segs = 1;
-		mbuf->port = rxq->port_id;
-
-		dma_addr =
-			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
-
-		rxd = &((volatile struct virtchnl2_splitq_rx_buf_desc *)(rxq->rx_ring))[i];
-		rxd->qword0.buf_id = i;
-		rxd->qword0.rsvd0 = 0;
-		rxd->qword0.rsvd1 = 0;
-		rxd->pkt_addr = dma_addr;
-		rxd->hdr_addr = 0;
-		rxd->rsvd2 = 0;
-
-		rxq->sw_ring[i] = mbuf;
-	}
-
-	rxq->nb_rx_hold = 0;
-	rxq->rx_tail = rxq->nb_rx_desc - 1;
-
-	return 0;
-}
-
 int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -1164,11 +683,11 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	rxq = dev->data->rx_queues[rx_queue_id];
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		rxq->ops->release_mbufs(rxq);
-		reset_single_rx_queue(rxq);
+		idpf_reset_single_rx_queue(rxq);
 	} else {
 		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
 		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
-		reset_split_rx_queue(rxq);
+		idpf_reset_split_rx_queue(rxq);
 	}
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -1195,10 +714,10 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	txq = dev->data->tx_queues[tx_queue_id];
 	txq->ops->release_mbufs(txq);
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		reset_single_tx_queue(txq);
+		idpf_reset_single_tx_queue(txq);
 	} else {
-		reset_split_tx_descq(txq);
-		reset_split_tx_complq(txq->complq);
+		idpf_reset_split_tx_descq(txq);
+		idpf_reset_split_tx_complq(txq->complq);
 	}
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
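The queue-stop hunks above reduce to one pattern: release the mbufs
still held by the queue, then reset the descriptor state that matches
the queue model. Condensed into a sketch (helper names from this
series; error handling omitted):

static void
sketch_rx_stop(struct idpf_rx_queue *rxq, bool single_model)
{
	if (single_model) {
		rxq->ops->release_mbufs(rxq);
		idpf_reset_single_rx_queue(rxq);
	} else {
		/* Split model: both buffer queues hold mbufs. */
		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
		idpf_reset_split_rx_queue(rxq);
	}
}
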
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index b8325f9b96..4efbf10295 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -51,7 +51,6 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define IDPF_RING_BASE_ALIGN	128
 
-#define IDPF_RX_MAX_BURST		32
 #define IDPF_DEFAULT_RX_FREE_THRESH	32
 
 /* used for Vector PMD */
@@ -101,14 +100,6 @@ union idpf_tx_offload {
 	};
 };
 
-struct idpf_rxq_ops {
-	void (*release_mbufs)(struct idpf_rx_queue *rxq);
-};
-
-struct idpf_txq_ops {
-	void (*release_mbufs)(struct idpf_tx_queue *txq);
-};
-
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index fb2b6bb53c..71a6c59823 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -562,7 +562,7 @@ idpf_tx_free_bufs_avx512(struct idpf_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->next_dd - (n - 1);
 
-	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & IDPF_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
 								rte_lcore_id());
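
The hunk above only swaps the flag name, but it is worth noting what
MBUF_FAST_FREE buys this path: all completed mbufs are guaranteed to
come from one mempool with a reference count of 1, so they can be
returned in bulk with no per-mbuf checks. A scalar sketch of the same
idea (assuming n <= 32, the batch size this vector path works in):

static void
sketch_fast_free(struct idpf_tx_entry *txep, uint16_t n)
{
	struct rte_mempool *mp = txep[0].mbuf->pool;
	void *objs[32];
	uint16_t i;

	for (i = 0; i < n; i++)
		objs[i] = txep[i].mbuf;
	rte_mempool_put_bulk(mp, objs, n);
}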
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* [PATCH v7 13/19] common/idpf: add Rx and Tx data path
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (11 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 12/19] common/idpf: add help functions for queue setup and release beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 14/19] common/idpf: add vec queue setup beilei.xing
                           ` (6 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing, Mingxia Liu

From: Beilei Xing <beilei.xing@intel.com>

Add a timestamp field to the idpf_adapter structure.
Move the scalar Rx/Tx data paths for both the single queue and split
queue models to the common module.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h |   5 +
 drivers/common/idpf/idpf_common_logs.h   |  24 +
 drivers/common/idpf/idpf_common_rxtx.c   | 987 +++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h   |  87 ++
 drivers/common/idpf/version.map          |   6 +
 drivers/net/idpf/idpf_ethdev.c           |   2 -
 drivers/net/idpf/idpf_ethdev.h           |   4 -
 drivers/net/idpf/idpf_logs.h             |  24 -
 drivers/net/idpf/idpf_rxtx.c             | 937 +--------------------
 drivers/net/idpf/idpf_rxtx.h             | 132 ---
 drivers/net/idpf/idpf_rxtx_vec_avx512.c  |   8 +-
 11 files changed, 1114 insertions(+), 1102 deletions(-)
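
With the scalar paths in the common module, a consuming PMD only has
to point its burst callbacks at them. A sketch of the wiring (the
split-queue Rx function is defined in this patch; the single-queue
and Tx counterparts are assumed here to follow the same naming):

static void
sketch_set_burst_funcs(struct rte_eth_dev *dev, bool splitq)
{
	if (splitq) {
		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
	} else {
		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
		dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
	}
}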

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 629d812748..583ca90361 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -23,6 +23,8 @@
 #define IDPF_TX_COMPLQ_PER_GRP	1
 #define IDPF_TXQ_PER_GRP	1
 
+#define IDPF_MIN_FRAME_SIZE	14
+
 #define IDPF_MAX_PKT_TYPE	1024
 
 #define IDPF_DFLT_INTERVAL	16
@@ -43,6 +45,9 @@ struct idpf_adapter {
 
 	uint32_t txq_model; /* 0 - split queue model, non-0 - single queue model */
 	uint32_t rxq_model; /* 0 - split queue model, non-0 - single queue model */
+
+	/* For timestamp */
+	uint64_t time_hw;
 };
 
 struct idpf_chunks_info {
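
The new time_hw member caches the most recent full 64-bit PHC sample;
Rx descriptors carry only 32 timestamp bits, so the hot path extends
them against that cache (idpf_tstamp_convert_32b_64b later in this
patch). The core of the extension, as a standalone sketch:

/* Extend a wrapping 32-bit timestamp against a 64-bit reference.
 * A delta above half the 32-bit range is taken to mean the sample
 * predates the reference.
 */
static uint64_t
sketch_extend_32b_ts(uint64_t time_hw, uint32_t in_ts)
{
	uint32_t ref_lo = (uint32_t)time_hw;
	uint32_t delta = in_ts - ref_lo;

	if (delta > UINT32_MAX / 2)
		return time_hw - (uint32_t)(ref_lo - in_ts);
	return time_hw + delta;
}
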
diff --git a/drivers/common/idpf/idpf_common_logs.h b/drivers/common/idpf/idpf_common_logs.h
index 4c7978fb49..f6be84ceb5 100644
--- a/drivers/common/idpf/idpf_common_logs.h
+++ b/drivers/common/idpf/idpf_common_logs.h
@@ -20,4 +20,28 @@ extern int idpf_common_logtype;
 #define DRV_LOG(level, fmt, args...)		\
 	DRV_LOG_RAW(level, fmt "\n", ## args)
 
+#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
+#define RX_LOG(level, ...) \
+	RTE_LOG(level, \
+		PMD, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+#else
+#define RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_IDPF_DEBUG_TX
+#define TX_LOG(level, ...) \
+	RTE_LOG(level, \
+		PMD, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+#else
+#define TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
 #endif /* _IDPF_COMMON_LOGS_H_ */
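
RX_LOG and TX_LOG compile to empty statements unless the matching
debug macro is defined, so data-path logging costs nothing in a
normal build. Usage is the same either way, e.g.:

/* Emits only when built with RTE_LIBRTE_IDPF_DEBUG_RX defined. */
static inline void
sketch_check_len(uint16_t pkt_len)
{
	if (pkt_len == 0)
		RX_LOG(ERR, "Packet length is 0");
}
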
diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 832d57c518..aea4263d92 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -3,8 +3,13 @@
  */
 
 #include <rte_mbuf_dyn.h>
+#include <rte_errno.h>
+
 #include "idpf_common_rxtx.h"
 
+int idpf_timestamp_dynfield_offset = -1;
+uint64_t idpf_timestamp_dynflag;
+
 int
 idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
@@ -337,6 +342,23 @@ idpf_tx_queue_release(void *txq)
 	rte_free(q);
 }
 
+int
+idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
+{
+	int err;
+	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+		/* Register mbuf field and flag for Rx timestamp */
+		err = rte_mbuf_dyn_rx_timestamp_register(&idpf_timestamp_dynfield_offset,
+							 &idpf_timestamp_dynflag);
+		if (err != 0) {
+			DRV_LOG(ERR,
+				"Cannot register mbuf field/flag for timestamp");
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+
 int
 idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
 {
@@ -412,3 +434,968 @@ idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
 
 	return 0;
 }
+
+#define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
+/* Helper function to convert a 32b nanoseconds timestamp to 64b. */
+static inline uint64_t
+idpf_tstamp_convert_32b_64b(struct idpf_adapter *ad, uint32_t flag,
+			    uint32_t in_timestamp)
+{
+#ifdef RTE_ARCH_X86_64
+	struct idpf_hw *hw = &ad->hw;
+	const uint64_t mask = 0xFFFFFFFF;
+	uint32_t hi, lo, lo2, delta;
+	uint64_t ns;
+
+	if (flag != 0) {
+		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
+		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_EXEC_CMD_M |
+			       PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
+		lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
+		hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
+		/*
+		 * On a typical system, the delta between lo and lo2 is ~1000 ns,
+		 * so 10000 is a large enough, but not overly big, guard band.
+		 */
+		if (lo > (UINT32_MAX - IDPF_TIMESYNC_REG_WRAP_GUARD_BAND))
+			lo2 = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
+		else
+			lo2 = lo;
+
+		if (lo2 < lo) {
+			lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
+			hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
+		}
+
+		ad->time_hw = ((uint64_t)hi << 32) | lo;
+	}
+
+	delta = (in_timestamp - (uint32_t)(ad->time_hw & mask));
+	if (delta > (mask / 2)) {
+		delta = ((uint32_t)(ad->time_hw & mask) - in_timestamp);
+		ns = ad->time_hw - delta;
+	} else {
+		ns = ad->time_hw + delta;
+	}
+
+	return ns;
+#else /* !RTE_ARCH_X86_64 */
+	RTE_SET_USED(ad);
+	RTE_SET_USED(flag);
+	RTE_SET_USED(in_timestamp);
+	return 0;
+#endif /* RTE_ARCH_X86_64 */
+}
+
+#define IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S				\
+	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) |     \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) |     \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) |    \
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
+
+static inline uint64_t
+idpf_splitq_rx_csum_offload(uint8_t err)
+{
+	uint64_t flags = 0;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S)) == 0))
+		return flags;
+
+	if (likely((err & IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S) == 0)) {
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)) != 0))
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+	return flags;
+}
+
+#define IDPF_RX_FLEX_DESC_ADV_HASH1_S  0
+#define IDPF_RX_FLEX_DESC_ADV_HASH2_S  16
+#define IDPF_RX_FLEX_DESC_ADV_HASH3_S  24
+
+static inline uint64_t
+idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
+			   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+{
+	uint8_t status_err0_qw0;
+	uint64_t flags = 0;
+
+	status_err0_qw0 = rx_desc->status_err0_qw0;
+
+	if ((status_err0_qw0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) != 0) {
+		flags |= RTE_MBUF_F_RX_RSS_HASH;
+		mb->hash.rss = (rte_le_to_cpu_16(rx_desc->hash1) <<
+				IDPF_RX_FLEX_DESC_ADV_HASH1_S) |
+			((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
+			 IDPF_RX_FLEX_DESC_ADV_HASH2_S) |
+			((uint32_t)(rx_desc->hash3) <<
+			 IDPF_RX_FLEX_DESC_ADV_HASH3_S);
+	}
+
+	return flags;
+}
+
+static void
+idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
+{
+	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
+	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
+	uint16_t nb_refill = rx_bufq->rx_free_thresh;
+	uint16_t nb_desc = rx_bufq->nb_rx_desc;
+	uint16_t next_avail = rx_bufq->rx_tail;
+	struct rte_mbuf *nmb[rx_bufq->rx_free_thresh];
+	uint64_t dma_addr;
+	uint16_t delta;
+	int i;
+
+	if (rx_bufq->nb_rx_hold < rx_bufq->rx_free_thresh)
+		return;
+
+	rx_buf_ring = rx_bufq->rx_ring;
+	delta = nb_desc - next_avail;
+	if (unlikely(delta < nb_refill)) {
+		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta) == 0)) {
+			for (i = 0; i < delta; i++) {
+				rx_buf_desc = &rx_buf_ring[next_avail + i];
+				rx_bufq->sw_ring[next_avail + i] = nmb[i];
+				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+				rx_buf_desc->hdr_addr = 0;
+				rx_buf_desc->pkt_addr = dma_addr;
+			}
+			nb_refill -= delta;
+			next_avail = 0;
+			rx_bufq->nb_rx_hold -= delta;
+		} else {
+			__atomic_fetch_add(&rx_bufq->rx_stats.mbuf_alloc_failed,
+					   nb_desc - next_avail, __ATOMIC_RELAXED);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+			       rx_bufq->port_id, rx_bufq->queue_id);
+			return;
+		}
+	}
+
+	if (nb_desc - next_avail >= nb_refill) {
+		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill) == 0)) {
+			for (i = 0; i < nb_refill; i++) {
+				rx_buf_desc = &rx_buf_ring[next_avail + i];
+				rx_bufq->sw_ring[next_avail + i] = nmb[i];
+				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+				rx_buf_desc->hdr_addr = 0;
+				rx_buf_desc->pkt_addr = dma_addr;
+			}
+			next_avail += nb_refill;
+			rx_bufq->nb_rx_hold -= nb_refill;
+		} else {
+			__atomic_fetch_add(&rx_bufq->rx_stats.mbuf_alloc_failed,
+					   nb_desc - next_avail, __ATOMIC_RELAXED);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+			       rx_bufq->port_id, rx_bufq->queue_id);
+		}
+	}
+
+	IDPF_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
+
+	rx_bufq->rx_tail = next_avail;
+}
+
+uint16_t
+idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		      uint16_t nb_pkts)
+{
+	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
+	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
+	uint16_t pktlen_gen_bufq_id;
+	struct idpf_rx_queue *rxq;
+	const uint32_t *ptype_tbl;
+	uint8_t status_err0_qw1;
+	struct idpf_adapter *ad;
+	struct rte_mbuf *rxm;
+	uint16_t rx_id_bufq1;
+	uint16_t rx_id_bufq2;
+	uint64_t pkt_flags;
+	uint16_t pkt_len;
+	uint16_t bufq_id;
+	uint16_t gen_id;
+	uint16_t rx_id;
+	uint16_t nb_rx;
+	uint64_t ts_ns;
+
+	nb_rx = 0;
+	rxq = rx_queue;
+	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	ad = rxq->adapter;
+
+	rx_id = rxq->rx_tail;
+	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
+	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
+	rx_desc_ring = rxq->rx_ring;
+	ptype_tbl = rxq->adapter->ptype_tbl;
+
+	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0)
+		rxq->hw_register_set = 1;
+
+	while (nb_rx < nb_pkts) {
+		rx_desc = &rx_desc_ring[rx_id];
+
+		pktlen_gen_bufq_id =
+			rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
+		gen_id = (pktlen_gen_bufq_id &
+			  VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
+			VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
+		if (gen_id != rxq->expected_gen_id)
+			break;
+
+		pkt_len = (pktlen_gen_bufq_id &
+			   VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
+			VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
+		if (pkt_len == 0)
+			RX_LOG(ERR, "Packet length is 0");
+
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc)) {
+			rx_id = 0;
+			rxq->expected_gen_id ^= 1;
+		}
+
+		bufq_id = (pktlen_gen_bufq_id &
+			   VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M) >>
+			VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S;
+		if (bufq_id == 0) {
+			rxm = rxq->bufq1->sw_ring[rx_id_bufq1];
+			rx_id_bufq1++;
+			if (unlikely(rx_id_bufq1 == rxq->bufq1->nb_rx_desc))
+				rx_id_bufq1 = 0;
+			rxq->bufq1->nb_rx_hold++;
+		} else {
+			rxm = rxq->bufq2->sw_ring[rx_id_bufq2];
+			rx_id_bufq2++;
+			if (unlikely(rx_id_bufq2 == rxq->bufq2->nb_rx_desc))
+				rx_id_bufq2 = 0;
+			rxq->bufq2->nb_rx_hold++;
+		}
+
+		rxm->pkt_len = pkt_len;
+		rxm->data_len = pkt_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rxm->next = NULL;
+		rxm->nb_segs = 1;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		rxm->packet_type =
+			ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
+				   VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
+				  VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
+
+		status_err0_qw1 = rx_desc->status_err0_qw1;
+		pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
+		pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+							    rxq->hw_register_set,
+							    rte_le_to_cpu_32(rx_desc->ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			rxm->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		rxm->ol_flags |= pkt_flags;
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+
+	if (nb_rx > 0) {
+		rxq->rx_tail = rx_id;
+		if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
+			rxq->bufq1->rx_next_avail = rx_id_bufq1;
+		if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
+			rxq->bufq2->rx_next_avail = rx_id_bufq2;
+
+		idpf_split_rx_bufq_refill(rxq->bufq1);
+		idpf_split_rx_bufq_refill(rxq->bufq2);
+	}
+
+	return nb_rx;
+}
+
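+/* Consume one Tx completion queue descriptor. An RE completion reports
+ * the queue head (id of the last fetched txq descriptor plus 1) so all
+ * descriptors up to it can be reclaimed; an RS completion identifies a
+ * single sw_ring entry whose mbuf can be freed.
+ */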
+static inline void
+idpf_split_tx_free(struct idpf_tx_queue *cq)
+{
+	volatile struct idpf_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
+	volatile struct idpf_splitq_tx_compl_desc *txd;
+	uint16_t next = cq->tx_tail;
+	struct idpf_tx_entry *txe;
+	struct idpf_tx_queue *txq;
+	uint16_t gen, qid, q_head;
+	uint16_t nb_desc_clean;
+	uint8_t ctype;
+
+	txd = &compl_ring[next];
+	gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+	       IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S;
+	if (gen != cq->expected_gen_id)
+		return;
+
+	ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+		 IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> IDPF_TXD_COMPLQ_COMPL_TYPE_S;
+	qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+	       IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S;
+	q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
+	txq = cq->txqs[qid - cq->tx_start_qid];
+
+	switch (ctype) {
+	case IDPF_TXD_COMPLT_RE:
+		/* Clean up to q_head, which is the id of the last fetched
+		 * txq descriptor plus 1.
+		 * TODO: refine this and remove the if condition below.
+		 */
+		if (unlikely(q_head % 32)) {
+			TX_LOG(ERR, "unexpected desc (head = %u) completion.",
+			       q_head);
+			return;
+		}
+		if (txq->last_desc_cleaned > q_head)
+			nb_desc_clean = (txq->nb_tx_desc - txq->last_desc_cleaned) +
+				q_head;
+		else
+			nb_desc_clean = q_head - txq->last_desc_cleaned;
+		txq->nb_free += nb_desc_clean;
+		txq->last_desc_cleaned = q_head;
+		break;
+	case IDPF_TXD_COMPLT_RS:
+		/* q_head indicates the sw_id when the completion type is RS */
+		txe = &txq->sw_ring[q_head];
+		if (txe->mbuf != NULL) {
+			rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = NULL;
+		}
+		break;
+	default:
+		TX_LOG(ERR, "unknown completion type.");
+		return;
+	}
+
+	if (++next == cq->nb_tx_desc) {
+		next = 0;
+		cq->expected_gen_id ^= 1;
+	}
+
+	cq->tx_tail = next;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+idpf_calc_context_desc(uint64_t flags)
+{
+	if ((flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+		return 1;
+
+	return 0;
+}
+
+/* Set up the TSO context descriptor. */
+static inline void
+idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
+			union idpf_tx_offload tx_offload,
+			volatile union idpf_flex_tx_ctx_desc *ctx_desc)
+{
+	uint16_t cmd_dtype;
+	uint32_t tso_len;
+	uint8_t hdr_len;
+
+	if (tx_offload.l4_len == 0) {
+		TX_LOG(DEBUG, "L4 length set to 0");
+		return;
+	}
+
+	hdr_len = tx_offload.l2_len +
+		tx_offload.l3_len +
+		tx_offload.l4_len;
+	cmd_dtype = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX |
+		IDPF_TX_FLEX_CTX_DESC_CMD_TSO;
+	tso_len = mbuf->pkt_len - hdr_len;
+
+	ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
+	ctx_desc->tso.qw0.hdr_len = hdr_len;
+	ctx_desc->tso.qw0.mss_rt =
+		rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
+				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
+	ctx_desc->tso.qw0.flex_tlen =
+		rte_cpu_to_le_32(tso_len &
+				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
+}
+
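+/* Split-queue model TX function: descriptors use the flow-scheduling
+ * format, and each packet carries its software ring id (compl_tag) so
+ * the matching entry can be found when a completion arrives.
+ */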
+uint16_t
+idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		      uint16_t nb_pkts)
+{
+	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
+	volatile struct idpf_flex_tx_sched_desc *txr;
+	volatile struct idpf_flex_tx_sched_desc *txd;
+	struct idpf_tx_entry *sw_ring;
+	union idpf_tx_offload tx_offload = {0};
+	struct idpf_tx_entry *txe, *txn;
+	uint16_t nb_used, tx_id, sw_id;
+	struct rte_mbuf *tx_pkt;
+	uint16_t nb_to_clean;
+	uint16_t nb_tx = 0;
+	uint64_t ol_flags;
+	uint16_t nb_ctx;
+
+	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
+		return nb_tx;
+
+	txr = txq->desc_ring;
+	sw_ring = txq->sw_ring;
+	tx_id = txq->tx_tail;
+	sw_id = txq->sw_tail;
+	txe = &sw_ring[sw_id];
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		tx_pkt = tx_pkts[nb_tx];
+
+		if (txq->nb_free <= txq->free_thresh) {
+			/* TODO: needs refinement
+			 * 1. Free and clean: better to decide on a clean target
+			 *    than on a loop count, and don't free the mbuf as
+			 *    soon as RS arrives; free it at transmit time or
+			 *    according to the clean target. For now, ignore the
+			 *    RE write-back and free the mbuf when RS is received.
+			 * 2. Out-of-order write-back is not yet supported; the
+			 *    SW head and HW head need to be separated.
+			 */
+			nb_to_clean = 2 * txq->rs_thresh;
+			while (nb_to_clean--)
+				idpf_split_tx_free(txq->complq);
+		}
+
+		if (txq->nb_free < tx_pkt->nb_segs)
+			break;
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = idpf_calc_context_desc(ol_flags);
+		nb_used = tx_pkt->nb_segs + nb_ctx;
+
+		/* context descriptor */
+		if (nb_ctx != 0) {
+			volatile union idpf_flex_tx_ctx_desc *ctx_desc =
+				(volatile union idpf_flex_tx_ctx_desc *)&txr[tx_id];
+
+			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+							ctx_desc);
+
+			tx_id++;
+			if (tx_id == txq->nb_tx_desc)
+				tx_id = 0;
+		}
+
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+			txe->mbuf = tx_pkt;
+
+			/* Setup TX descriptor */
+			txd->buf_addr =
+				rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
+			txd->qw1.cmd_dtype =
+				rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE);
+			txd->qw1.rxr_bufsize = tx_pkt->data_len;
+			txd->qw1.compl_tag = sw_id;
+			tx_id++;
+			if (tx_id == txq->nb_tx_desc)
+				tx_id = 0;
+			sw_id = txe->next_id;
+			txe = txn;
+			tx_pkt = tx_pkt->next;
+		} while (tx_pkt);
+
+		/* fill the last descriptor with End of Packet (EOP) bit */
+		txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_EOP;
+
+		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+
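+		/* Request a descriptor-fetch (RE) completion once every
+		 * 32 descriptors so the completion queue can reclaim them.
+		 */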
+		if (txq->nb_used >= 32) {
+			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
+			/* Update txq RE bit counters */
+			txq->nb_used = 0;
+		}
+	}
+
+	/* update the tail pointer if any packets were processed */
+	if (likely(nb_tx > 0)) {
+		IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+		txq->tx_tail = tx_id;
+		txq->sw_tail = sw_id;
+	}
+
+	return nb_tx;
+}
+
+#define IDPF_RX_FLEX_DESC_STATUS0_XSUM_S				\
+	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) |		\
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) |		\
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) |	\
+	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S))
+
+/* Translate the rx descriptor status and error fields to pkt flags */
+static inline uint64_t
+idpf_rxd_to_pkt_flags(uint16_t status_error)
+{
+	uint64_t flags = 0;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S)) == 0))
+		return flags;
+
+	if (likely((status_error & IDPF_RX_FLEX_DESC_STATUS0_XSUM_S) == 0)) {
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)) != 0))
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)) != 0))
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+	else
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+	return flags;
+}
+
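+/* Write the RX tail register once more than rx_free_thresh descriptors
+ * are being held by the driver.
+ */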
+static inline void
+idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
+		    uint16_t rx_id)
+{
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+
+	if (nb_hold > rxq->rx_free_thresh) {
+		RX_LOG(DEBUG,
+		       "port_id=%u queue_id=%u rx_tail=%u nb_hold=%u",
+		       rxq->port_id, rxq->queue_id, rx_id, nb_hold);
+		rx_id = (uint16_t)((rx_id == 0) ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+}
+
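+/* Single-queue model RX function: each ring slot is recycled in place,
+ * with the received mbuf swapped for a freshly allocated one whose DMA
+ * address is written back into the same descriptor.
+ */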
+uint16_t
+idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	volatile union virtchnl2_rx_desc *rx_ring;
+	volatile union virtchnl2_rx_desc *rxdp;
+	union virtchnl2_rx_desc rxd;
+	struct idpf_rx_queue *rxq;
+	const uint32_t *ptype_tbl;
+	uint16_t rx_id, nb_hold;
+	struct idpf_adapter *ad;
+	uint16_t rx_packet_len;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	uint16_t rx_status0;
+	uint64_t pkt_flags;
+	uint64_t dma_addr;
+	uint64_t ts_ns;
+	uint16_t nb_rx;
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+
+	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	ad = rxq->adapter;
+
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+	ptype_tbl = rxq->adapter->ptype_tbl;
+
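+	/* Read the PTP time registers at most once per burst */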
+	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0)
+		rxq->hw_register_set = 1;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+		/* Check the DD bit first */
+		if ((rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)) == 0)
+			break;
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(nmb == NULL)) {
+			__atomic_fetch_add(&rxq->rx_stats.mbuf_alloc_failed, 1, __ATOMIC_RELAXED);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+			       "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+		rxd = *rxdp; /* copy the ring descriptor to a temp variable */
+
+		nb_hold++;
+		rxm = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		rx_packet_len = (rte_le_to_cpu_16(rxd.flex_nic_wb.pkt_len) &
+				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		rxm->packet_type =
+			ptype_tbl[(uint8_t)(rte_le_to_cpu_16(rxd.flex_nic_wb.ptype_flex_flags0) &
+					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+		rxm->ol_flags |= pkt_flags;
+
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+					    rxq->hw_register_set,
+					    rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			rxm->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+	return nb_rx;
+}
+
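+/* Reclaim a batch of rs_thresh TX descriptors if the descriptor at the
+ * end of the batch has been marked done (DTYPE == DESC_DONE) by hardware.
+ */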
+static inline int
+idpf_xmit_cleanup(struct idpf_tx_queue *txq)
+{
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	struct idpf_tx_entry *sw_ring = txq->sw_ring;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+	uint16_t i;
+
+	volatile struct idpf_flex_tx_desc *txd = txq->tx_ring;
+
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	/* In the write-back Tx descriptor, the only significant fields are the 4-bit DTYPE */
+	if ((txd[desc_to_clean_to].qw1.cmd_dtype &
+	     rte_cpu_to_le_16(IDPF_TXD_QW1_DTYPE_M)) !=
+	    rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE)) {
+		TX_LOG(DEBUG, "TX descriptor %4u is not done "
+		       "(port=%d queue=%d)", desc_to_clean_to,
+		       txq->port_id, txq->queue_id);
+		return -1;
+	}
+
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+					    desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					    last_desc_cleaned);
+
+	txd[desc_to_clean_to].qw1.cmd_dtype = 0;
+	txd[desc_to_clean_to].qw1.buf_size = 0;
+	for (i = 0; i < RTE_DIM(txd[desc_to_clean_to].qw1.flex.raw); i++)
+		txd[desc_to_clean_to].qw1.flex.raw[i] = 0;
+
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Single-queue model TX function */
+uint16_t
+idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts)
+{
+	volatile struct idpf_flex_tx_desc *txd;
+	volatile struct idpf_flex_tx_desc *txr;
+	union idpf_tx_offload tx_offload = {0};
+	struct idpf_tx_entry *txe, *txn;
+	struct idpf_tx_entry *sw_ring;
+	struct idpf_tx_queue *txq;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint64_t buf_dma_addr;
+	uint64_t ol_flags;
+	uint16_t tx_last;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint16_t td_cmd;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint16_t slen;
+
+	nb_tx = 0;
+	txq = tx_queue;
+
+	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
+		return nb_tx;
+
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_free < txq->free_thresh)
+		(void)idpf_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		td_cmd = 0;
+
+		tx_pkt = *tx_pkts++;
+		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = idpf_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals the number of segments of that packet,
+		 * plus 1 context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+		       " tx_first=%u tx_last=%u",
+		       txq->port_id, txq->queue_id, tx_id, tx_last);
+
+		if (nb_used > txq->nb_free) {
+			if (idpf_xmit_cleanup(txq) != 0) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->rs_thresh)) {
+				while (nb_used > txq->nb_free) {
+					if (idpf_xmit_cleanup(txq) != 0) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		if (nb_ctx != 0) {
+			/* Setup TX context descriptor if required */
+			volatile union idpf_flex_tx_ctx_desc *ctx_txd =
+				(volatile union idpf_flex_tx_ctx_desc *)
+				&txr[tx_id];
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf != NULL) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			/* TSO enabled */
+			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+							ctx_txd);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf != NULL)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->qw1.buf_size = slen;
+			txd->qw1.cmd_dtype = rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_DATA <<
+							      IDPF_FLEX_TXD_QW1_DTYPE_S);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* The last packet data descriptor needs End Of Packet (EOP) */
+		td_cmd |= IDPF_TX_FLEX_DESC_CMD_EOP;
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+		if (txq->nb_used >= txq->rs_thresh) {
+			TX_LOG(DEBUG, "Setting RS bit on TXD id="
+			       "%4u (port=%d queue=%d)",
+			       tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= IDPF_TX_FLEX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_used = 0;
+		}
+
+		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+			td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
+
+		txd->qw1.cmd_dtype |= rte_cpu_to_le_16(td_cmd << IDPF_FLEX_TXD_QW1_CMD_S);
+	}
+
+end_of_tx:
+	rte_wmb();
+
+	TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+	       txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+/* TX prep function: validate offloads and mbuf sanity before transmit */
+uint16_t
+idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	       uint16_t nb_pkts)
+{
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+	int ret;
+#endif
+	int i;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		/* Check condition for nb_segs > IDPF_TX_MAX_MTU_SEG. */
+		if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0) {
+			if (m->nb_segs > IDPF_TX_MAX_MTU_SEG) {
+				rte_errno = EINVAL;
+				return i;
+			}
+		} else if ((m->tso_segsz < IDPF_MIN_TSO_MSS) ||
+			   (m->tso_segsz > IDPF_MAX_TSO_MSS) ||
+			   (m->pkt_len > IDPF_MAX_TSO_FRAME_SIZE)) {
+			/* An MSS outside this range is considered malicious */
+			rte_errno = EINVAL;
+			return i;
+		}
+
+		if ((ol_flags & IDPF_TX_OFFLOAD_NOTSUP_MASK) != 0) {
+			rte_errno = ENOTSUP;
+			return i;
+		}
+
+		if (m->pkt_len < IDPF_MIN_FRAME_SIZE) {
+			rte_errno = EINVAL;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+#endif
+	}
+
+	return i;
+}
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 874c4848c4..ef4e4f4a3c 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -27,6 +27,61 @@
 #define IDPF_TX_OFFLOAD_MULTI_SEGS       RTE_BIT64(15)
 #define IDPF_TX_OFFLOAD_MBUF_FAST_FREE   RTE_BIT64(16)
 
+#define IDPF_TX_MAX_MTU_SEG	10
+
+#define IDPF_MIN_TSO_MSS	88
+#define IDPF_MAX_TSO_MSS	9728
+#define IDPF_MAX_TSO_FRAME_SIZE	262143
+
+#define IDPF_TX_CKSUM_OFFLOAD_MASK (		\
+		RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_L4_MASK |		\
+		RTE_MBUF_F_TX_TCP_SEG)
+
+#define IDPF_TX_OFFLOAD_MASK (			\
+		IDPF_TX_CKSUM_OFFLOAD_MASK |	\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_IPV6)
+
+#define IDPF_TX_OFFLOAD_NOTSUP_MASK \
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
+
+/* MTS: time sync (GLTSYN) registers used for Rx timestamping */
+#define GLTSYN_CMD_SYNC_0_0	(PF_TIMESYNC_BASE + 0x0)
+#define PF_GLTSYN_SHTIME_0_0	(PF_TIMESYNC_BASE + 0x4)
+#define PF_GLTSYN_SHTIME_L_0	(PF_TIMESYNC_BASE + 0x8)
+#define PF_GLTSYN_SHTIME_H_0	(PF_TIMESYNC_BASE + 0xC)
+#define GLTSYN_ART_L_0		(PF_TIMESYNC_BASE + 0x10)
+#define GLTSYN_ART_H_0		(PF_TIMESYNC_BASE + 0x14)
+#define PF_GLTSYN_SHTIME_0_1	(PF_TIMESYNC_BASE + 0x24)
+#define PF_GLTSYN_SHTIME_L_1	(PF_TIMESYNC_BASE + 0x28)
+#define PF_GLTSYN_SHTIME_H_1	(PF_TIMESYNC_BASE + 0x2C)
+#define PF_GLTSYN_SHTIME_0_2	(PF_TIMESYNC_BASE + 0x44)
+#define PF_GLTSYN_SHTIME_L_2	(PF_TIMESYNC_BASE + 0x48)
+#define PF_GLTSYN_SHTIME_H_2	(PF_TIMESYNC_BASE + 0x4C)
+#define PF_GLTSYN_SHTIME_0_3	(PF_TIMESYNC_BASE + 0x64)
+#define PF_GLTSYN_SHTIME_L_3	(PF_TIMESYNC_BASE + 0x68)
+#define PF_GLTSYN_SHTIME_H_3	(PF_TIMESYNC_BASE + 0x6C)
+
+#define PF_TIMESYNC_BAR4_BASE	0x0E400000
+#define GLTSYN_ENA		(PF_TIMESYNC_BAR4_BASE + 0x90)
+#define GLTSYN_CMD		(PF_TIMESYNC_BAR4_BASE + 0x94)
+#define GLTSYC_TIME_L		(PF_TIMESYNC_BAR4_BASE + 0x104)
+#define GLTSYC_TIME_H		(PF_TIMESYNC_BAR4_BASE + 0x108)
+
+#define GLTSYN_CMD_SYNC_0_4	(PF_TIMESYNC_BAR4_BASE + 0x110)
+#define PF_GLTSYN_SHTIME_L_4	(PF_TIMESYNC_BAR4_BASE + 0x118)
+#define PF_GLTSYN_SHTIME_H_4	(PF_TIMESYNC_BAR4_BASE + 0x11C)
+#define GLTSYN_INCVAL_L		(PF_TIMESYNC_BAR4_BASE + 0x150)
+#define GLTSYN_INCVAL_H		(PF_TIMESYNC_BAR4_BASE + 0x154)
+#define GLTSYN_SHADJ_L		(PF_TIMESYNC_BAR4_BASE + 0x158)
+#define GLTSYN_SHADJ_H		(PF_TIMESYNC_BAR4_BASE + 0x15C)
+
+#define GLTSYN_CMD_SYNC_0_5	(PF_TIMESYNC_BAR4_BASE + 0x130)
+#define PF_GLTSYN_SHTIME_L_5	(PF_TIMESYNC_BAR4_BASE + 0x138)
+#define PF_GLTSYN_SHTIME_H_5	(PF_TIMESYNC_BAR4_BASE + 0x13C)
+
 struct idpf_rx_stats {
 	uint64_t mbuf_alloc_failed;
 };
@@ -126,6 +181,18 @@ struct idpf_tx_queue {
 	struct idpf_tx_queue *complq;
 };
 
+/* Offload features */
+union idpf_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		/* uint64_t unused : 24; */
+	};
+};
+
 struct idpf_rxq_ops {
 	void (*release_mbufs)(struct idpf_rx_queue *rxq);
 };
@@ -134,6 +201,9 @@ struct idpf_txq_ops {
 	void (*release_mbufs)(struct idpf_tx_queue *txq);
 };
 
+extern int idpf_timestamp_dynfield_offset;
+extern uint64_t idpf_timestamp_dynflag;
+
 __rte_internal
 int idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh);
 __rte_internal
@@ -162,8 +232,25 @@ void idpf_rx_queue_release(void *rxq);
 __rte_internal
 void idpf_tx_queue_release(void *txq);
 __rte_internal
+int idpf_register_ts_mbuf(struct idpf_rx_queue *rxq);
+__rte_internal
 int idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq);
+__rte_internal
+uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			       uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+				uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+				uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index aa6ebd7c6c..03aab598b4 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -12,6 +12,8 @@ INTERNAL {
 	idpf_config_rss;
 	idpf_create_vport_info_init;
 	idpf_execute_vc_cmd;
+	idpf_prep_pkts;
+	idpf_register_ts_mbuf;
 	idpf_release_rxq_mbufs;
 	idpf_release_txq_mbufs;
 	idpf_reset_single_rx_queue;
@@ -22,6 +24,10 @@ INTERNAL {
 	idpf_reset_split_tx_complq;
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
+	idpf_singleq_recv_pkts;
+	idpf_singleq_xmit_pkts;
+	idpf_splitq_recv_pkts;
+	idpf_splitq_xmit_pkts;
 	idpf_tx_queue_release;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 734e97ffc2..ee2dec7c7c 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -22,8 +22,6 @@ rte_spinlock_t idpf_adapter_lock;
 struct idpf_adapter_list idpf_adapter_list;
 bool idpf_adapter_list_init;
 
-uint64_t idpf_timestamp_dynflag;
-
 static const char * const idpf_valid_args[] = {
 	IDPF_TX_SINGLE_Q,
 	IDPF_RX_SINGLE_Q,
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 9b40aa4e56..d791d402fb 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -28,7 +28,6 @@
 
 #define IDPF_MIN_BUF_SIZE	1024
 #define IDPF_MAX_FRAME_SIZE	9728
-#define IDPF_MIN_FRAME_SIZE	14
 #define IDPF_DEFAULT_MTU	RTE_ETHER_MTU
 
 #define IDPF_NUM_MACADDR_MAX	64
@@ -78,9 +77,6 @@ struct idpf_adapter_ext {
 	uint16_t cur_vport_nb;
 
 	uint16_t used_vecs_num;
-
-	/* For PTP */
-	uint64_t time_hw;
 };
 
 TAILQ_HEAD(idpf_adapter_list, idpf_adapter_ext);
diff --git a/drivers/net/idpf/idpf_logs.h b/drivers/net/idpf/idpf_logs.h
index d5f778fefe..bf0774b8e4 100644
--- a/drivers/net/idpf/idpf_logs.h
+++ b/drivers/net/idpf/idpf_logs.h
@@ -29,28 +29,4 @@ extern int idpf_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
 
-#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
-#define PMD_RX_LOG(level, ...) \
-	RTE_LOG(level, \
-		PMD, \
-		RTE_FMT("%s(): " \
-			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
-			__func__, \
-			RTE_FMT_TAIL(__VA_ARGS__,)))
-#else
-#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_IDPF_DEBUG_TX
-#define PMD_TX_LOG(level, ...) \
-	RTE_LOG(level, \
-		PMD, \
-		RTE_FMT("%s(): " \
-			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
-			__func__, \
-			RTE_FMT_TAIL(__VA_ARGS__,)))
-#else
-#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
-#endif
-
 #endif /* _IDPF_LOGS_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index fb1814d893..1066789386 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -10,8 +10,6 @@
 #include "idpf_rxtx.h"
 #include "idpf_rxtx_vec_common.h"
 
-static int idpf_timestamp_dynfield_offset = -1;
-
 static uint64_t
 idpf_rx_offload_convert(uint64_t offload)
 {
@@ -501,23 +499,6 @@ idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return ret;
 }
 
-static int
-idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
-{
-	int err;
-	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0) {
-		/* Register mbuf field and flag for Rx timestamp */
-		err = rte_mbuf_dyn_rx_timestamp_register(&idpf_timestamp_dynfield_offset,
-							 &idpf_timestamp_dynflag);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR,
-				    "Cannot register mbuf field/flag for timestamp");
-			return -EINVAL;
-		}
-	}
-	return 0;
-}
-
 int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -537,7 +518,7 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	err = idpf_register_ts_mbuf(rxq);
 	if (err != 0) {
-		PMD_DRV_LOG(ERR, "fail to regidter timestamp mbuf %u",
+		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
 					rx_queue_id);
 		return -EIO;
 	}
@@ -762,922 +743,6 @@ idpf_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
-#define IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S				\
-	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) |     \
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) |     \
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) |    \
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
-
-static inline uint64_t
-idpf_splitq_rx_csum_offload(uint8_t err)
-{
-	uint64_t flags = 0;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S)) == 0))
-		return flags;
-
-	if (likely((err & IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S) == 0)) {
-		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
-			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
-		return flags;
-	}
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)) != 0))
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
-
-	if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
-
-	return flags;
-}
-
-#define IDPF_RX_FLEX_DESC_ADV_HASH1_S  0
-#define IDPF_RX_FLEX_DESC_ADV_HASH2_S  16
-#define IDPF_RX_FLEX_DESC_ADV_HASH3_S  24
-
-static inline uint64_t
-idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
-			   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
-{
-	uint8_t status_err0_qw0;
-	uint64_t flags = 0;
-
-	status_err0_qw0 = rx_desc->status_err0_qw0;
-
-	if ((status_err0_qw0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) != 0) {
-		flags |= RTE_MBUF_F_RX_RSS_HASH;
-		mb->hash.rss = (rte_le_to_cpu_16(rx_desc->hash1) <<
-				IDPF_RX_FLEX_DESC_ADV_HASH1_S) |
-			((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
-			 IDPF_RX_FLEX_DESC_ADV_HASH2_S) |
-			((uint32_t)(rx_desc->hash3) <<
-			 IDPF_RX_FLEX_DESC_ADV_HASH3_S);
-	}
-
-	return flags;
-}
-
-static void
-idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
-{
-	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
-	volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
-	uint16_t nb_refill = rx_bufq->rx_free_thresh;
-	uint16_t nb_desc = rx_bufq->nb_rx_desc;
-	uint16_t next_avail = rx_bufq->rx_tail;
-	struct rte_mbuf *nmb[rx_bufq->rx_free_thresh];
-	struct rte_eth_dev *dev;
-	uint64_t dma_addr;
-	uint16_t delta;
-	int i;
-
-	if (rx_bufq->nb_rx_hold < rx_bufq->rx_free_thresh)
-		return;
-
-	rx_buf_ring = rx_bufq->rx_ring;
-	delta = nb_desc - next_avail;
-	if (unlikely(delta < nb_refill)) {
-		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta) == 0)) {
-			for (i = 0; i < delta; i++) {
-				rx_buf_desc = &rx_buf_ring[next_avail + i];
-				rx_bufq->sw_ring[next_avail + i] = nmb[i];
-				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
-				rx_buf_desc->hdr_addr = 0;
-				rx_buf_desc->pkt_addr = dma_addr;
-			}
-			nb_refill -= delta;
-			next_avail = 0;
-			rx_bufq->nb_rx_hold -= delta;
-		} else {
-			dev = &rte_eth_devices[rx_bufq->port_id];
-			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-				   rx_bufq->port_id, rx_bufq->queue_id);
-			return;
-		}
-	}
-
-	if (nb_desc - next_avail >= nb_refill) {
-		if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill) == 0)) {
-			for (i = 0; i < nb_refill; i++) {
-				rx_buf_desc = &rx_buf_ring[next_avail + i];
-				rx_bufq->sw_ring[next_avail + i] = nmb[i];
-				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
-				rx_buf_desc->hdr_addr = 0;
-				rx_buf_desc->pkt_addr = dma_addr;
-			}
-			next_avail += nb_refill;
-			rx_bufq->nb_rx_hold -= nb_refill;
-		} else {
-			dev = &rte_eth_devices[rx_bufq->port_id];
-			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
-				   rx_bufq->port_id, rx_bufq->queue_id);
-		}
-	}
-
-	IDPF_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
-
-	rx_bufq->rx_tail = next_avail;
-}
-
-uint16_t
-idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		      uint16_t nb_pkts)
-{
-	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
-	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
-	uint16_t pktlen_gen_bufq_id;
-	struct idpf_rx_queue *rxq;
-	const uint32_t *ptype_tbl;
-	uint8_t status_err0_qw1;
-	struct idpf_adapter_ext *ad;
-	struct rte_mbuf *rxm;
-	uint16_t rx_id_bufq1;
-	uint16_t rx_id_bufq2;
-	uint64_t pkt_flags;
-	uint16_t pkt_len;
-	uint16_t bufq_id;
-	uint16_t gen_id;
-	uint16_t rx_id;
-	uint16_t nb_rx;
-	uint64_t ts_ns;
-
-	nb_rx = 0;
-	rxq = rx_queue;
-	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
-
-	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
-		return nb_rx;
-
-	rx_id = rxq->rx_tail;
-	rx_id_bufq1 = rxq->bufq1->rx_next_avail;
-	rx_id_bufq2 = rxq->bufq2->rx_next_avail;
-	rx_desc_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
-
-	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
-		rxq->hw_register_set = 1;
-
-	while (nb_rx < nb_pkts) {
-		rx_desc = &rx_desc_ring[rx_id];
-
-		pktlen_gen_bufq_id =
-			rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
-		gen_id = (pktlen_gen_bufq_id &
-			  VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
-			VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
-		if (gen_id != rxq->expected_gen_id)
-			break;
-
-		pkt_len = (pktlen_gen_bufq_id &
-			   VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
-			VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
-		if (pkt_len == 0)
-			PMD_RX_LOG(ERR, "Packet length is 0");
-
-		rx_id++;
-		if (unlikely(rx_id == rxq->nb_rx_desc)) {
-			rx_id = 0;
-			rxq->expected_gen_id ^= 1;
-		}
-
-		bufq_id = (pktlen_gen_bufq_id &
-			   VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M) >>
-			VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S;
-		if (bufq_id == 0) {
-			rxm = rxq->bufq1->sw_ring[rx_id_bufq1];
-			rx_id_bufq1++;
-			if (unlikely(rx_id_bufq1 == rxq->bufq1->nb_rx_desc))
-				rx_id_bufq1 = 0;
-			rxq->bufq1->nb_rx_hold++;
-		} else {
-			rxm = rxq->bufq2->sw_ring[rx_id_bufq2];
-			rx_id_bufq2++;
-			if (unlikely(rx_id_bufq2 == rxq->bufq2->nb_rx_desc))
-				rx_id_bufq2 = 0;
-			rxq->bufq2->nb_rx_hold++;
-		}
-
-		rxm->pkt_len = pkt_len;
-		rxm->data_len = pkt_len;
-		rxm->data_off = RTE_PKTMBUF_HEADROOM;
-		rxm->next = NULL;
-		rxm->nb_segs = 1;
-		rxm->port = rxq->port_id;
-		rxm->ol_flags = 0;
-		rxm->packet_type =
-			ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
-				   VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
-				  VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
-
-		status_err0_qw1 = rx_desc->status_err0_qw1;
-		pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
-		pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
-		if (idpf_timestamp_dynflag > 0 &&
-		    (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
-			/* timestamp */
-			ts_ns = idpf_tstamp_convert_32b_64b(ad,
-				rxq->hw_register_set,
-				rte_le_to_cpu_32(rx_desc->ts_high));
-			rxq->hw_register_set = 0;
-			*RTE_MBUF_DYNFIELD(rxm,
-					   idpf_timestamp_dynfield_offset,
-					   rte_mbuf_timestamp_t *) = ts_ns;
-			rxm->ol_flags |= idpf_timestamp_dynflag;
-		}
-
-		rxm->ol_flags |= pkt_flags;
-
-		rx_pkts[nb_rx++] = rxm;
-	}
-
-	if (nb_rx > 0) {
-		rxq->rx_tail = rx_id;
-		if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
-			rxq->bufq1->rx_next_avail = rx_id_bufq1;
-		if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
-			rxq->bufq2->rx_next_avail = rx_id_bufq2;
-
-		idpf_split_rx_bufq_refill(rxq->bufq1);
-		idpf_split_rx_bufq_refill(rxq->bufq2);
-	}
-
-	return nb_rx;
-}
-
-static inline void
-idpf_split_tx_free(struct idpf_tx_queue *cq)
-{
-	volatile struct idpf_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
-	volatile struct idpf_splitq_tx_compl_desc *txd;
-	uint16_t next = cq->tx_tail;
-	struct idpf_tx_entry *txe;
-	struct idpf_tx_queue *txq;
-	uint16_t gen, qid, q_head;
-	uint16_t nb_desc_clean;
-	uint8_t ctype;
-
-	txd = &compl_ring[next];
-	gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
-		IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S;
-	if (gen != cq->expected_gen_id)
-		return;
-
-	ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
-		IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> IDPF_TXD_COMPLQ_COMPL_TYPE_S;
-	qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
-		IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S;
-	q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
-	txq = cq->txqs[qid - cq->tx_start_qid];
-
-	switch (ctype) {
-	case IDPF_TXD_COMPLT_RE:
-		/* clean to q_head which indicates be fetched txq desc id + 1.
-		 * TODO: need to refine and remove the if condition.
-		 */
-		if (unlikely(q_head % 32)) {
-			PMD_DRV_LOG(ERR, "unexpected desc (head = %u) completion.",
-						q_head);
-			return;
-		}
-		if (txq->last_desc_cleaned > q_head)
-			nb_desc_clean = (txq->nb_tx_desc - txq->last_desc_cleaned) +
-				q_head;
-		else
-			nb_desc_clean = q_head - txq->last_desc_cleaned;
-		txq->nb_free += nb_desc_clean;
-		txq->last_desc_cleaned = q_head;
-		break;
-	case IDPF_TXD_COMPLT_RS:
-		/* q_head indicates sw_id when ctype is 2 */
-		txe = &txq->sw_ring[q_head];
-		if (txe->mbuf != NULL) {
-			rte_pktmbuf_free_seg(txe->mbuf);
-			txe->mbuf = NULL;
-		}
-		break;
-	default:
-		PMD_DRV_LOG(ERR, "unknown completion type.");
-		return;
-	}
-
-	if (++next == cq->nb_tx_desc) {
-		next = 0;
-		cq->expected_gen_id ^= 1;
-	}
-
-	cq->tx_tail = next;
-}
-
-/* Check if the context descriptor is needed for TX offloading */
-static inline uint16_t
-idpf_calc_context_desc(uint64_t flags)
-{
-	if ((flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
-		return 1;
-
-	return 0;
-}
-
-/* set TSO context descriptor
- */
-static inline void
-idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
-			union idpf_tx_offload tx_offload,
-			volatile union idpf_flex_tx_ctx_desc *ctx_desc)
-{
-	uint16_t cmd_dtype;
-	uint32_t tso_len;
-	uint8_t hdr_len;
-
-	if (tx_offload.l4_len == 0) {
-		PMD_TX_LOG(DEBUG, "L4 length set to 0");
-		return;
-	}
-
-	hdr_len = tx_offload.l2_len +
-		tx_offload.l3_len +
-		tx_offload.l4_len;
-	cmd_dtype = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX |
-		IDPF_TX_FLEX_CTX_DESC_CMD_TSO;
-	tso_len = mbuf->pkt_len - hdr_len;
-
-	ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
-	ctx_desc->tso.qw0.hdr_len = hdr_len;
-	ctx_desc->tso.qw0.mss_rt =
-		rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
-				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
-	ctx_desc->tso.qw0.flex_tlen =
-		rte_cpu_to_le_32(tso_len &
-				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
-}
-
-uint16_t
-idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		      uint16_t nb_pkts)
-{
-	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
-	volatile struct idpf_flex_tx_sched_desc *txr;
-	volatile struct idpf_flex_tx_sched_desc *txd;
-	struct idpf_tx_entry *sw_ring;
-	union idpf_tx_offload tx_offload = {0};
-	struct idpf_tx_entry *txe, *txn;
-	uint16_t nb_used, tx_id, sw_id;
-	struct rte_mbuf *tx_pkt;
-	uint16_t nb_to_clean;
-	uint16_t nb_tx = 0;
-	uint64_t ol_flags;
-	uint16_t nb_ctx;
-
-	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
-		return nb_tx;
-
-	txr = txq->desc_ring;
-	sw_ring = txq->sw_ring;
-	tx_id = txq->tx_tail;
-	sw_id = txq->sw_tail;
-	txe = &sw_ring[sw_id];
-
-	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
-		tx_pkt = tx_pkts[nb_tx];
-
-		if (txq->nb_free <= txq->free_thresh) {
-			/* TODO: Need to refine
-			 * 1. free and clean: Better to decide a clean destination instead of
-			 * loop times. And don't free mbuf when RS got immediately, free when
-			 * transmit or according to the clean destination.
-			 * Now, just ignore the RE write back, free mbuf when get RS
-			 * 2. out-of-order rewrite back haven't be supported, SW head and HW head
-			 * need to be separated.
-			 **/
-			nb_to_clean = 2 * txq->rs_thresh;
-			while (nb_to_clean--)
-				idpf_split_tx_free(txq->complq);
-		}
-
-		if (txq->nb_free < tx_pkt->nb_segs)
-			break;
-
-		ol_flags = tx_pkt->ol_flags;
-		tx_offload.l2_len = tx_pkt->l2_len;
-		tx_offload.l3_len = tx_pkt->l3_len;
-		tx_offload.l4_len = tx_pkt->l4_len;
-		tx_offload.tso_segsz = tx_pkt->tso_segsz;
-		/* Calculate the number of context descriptors needed. */
-		nb_ctx = idpf_calc_context_desc(ol_flags);
-		nb_used = tx_pkt->nb_segs + nb_ctx;
-
-		/* context descriptor */
-		if (nb_ctx != 0) {
-			volatile union idpf_flex_tx_ctx_desc *ctx_desc =
-			(volatile union idpf_flex_tx_ctx_desc *)&txr[tx_id];
-
-			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
-				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
-							ctx_desc);
-
-			tx_id++;
-			if (tx_id == txq->nb_tx_desc)
-				tx_id = 0;
-		}
-
-		do {
-			txd = &txr[tx_id];
-			txn = &sw_ring[txe->next_id];
-			txe->mbuf = tx_pkt;
-
-			/* Setup TX descriptor */
-			txd->buf_addr =
-				rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
-			txd->qw1.cmd_dtype =
-				rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE);
-			txd->qw1.rxr_bufsize = tx_pkt->data_len;
-			txd->qw1.compl_tag = sw_id;
-			tx_id++;
-			if (tx_id == txq->nb_tx_desc)
-				tx_id = 0;
-			sw_id = txe->next_id;
-			txe = txn;
-			tx_pkt = tx_pkt->next;
-		} while (tx_pkt);
-
-		/* fill the last descriptor with End of Packet (EOP) bit */
-		txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_EOP;
-
-		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
-			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
-		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
-		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
-
-		if (txq->nb_used >= 32) {
-			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
-			/* Update txq RE bit counters */
-			txq->nb_used = 0;
-		}
-	}
-
-	/* update the tail pointer if any packets were processed */
-	if (likely(nb_tx > 0)) {
-		IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
-		txq->tx_tail = tx_id;
-		txq->sw_tail = sw_id;
-	}
-
-	return nb_tx;
-}
-
-#define IDPF_RX_FLEX_DESC_STATUS0_XSUM_S				\
-	(RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) |		\
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) |		\
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) |	\
-	 RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S))
-
-/* Translate the rx descriptor status and error fields to pkt flags */
-static inline uint64_t
-idpf_rxd_to_pkt_flags(uint16_t status_error)
-{
-	uint64_t flags = 0;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S)) == 0))
-		return flags;
-
-	if (likely((status_error & IDPF_RX_FLEX_DESC_STATUS0_XSUM_S) == 0)) {
-		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
-			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
-		return flags;
-	}
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)) != 0))
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
-
-	if (unlikely((status_error & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)) != 0))
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
-	else
-		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
-
-	return flags;
-}
-
-static inline void
-idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
-		    uint16_t rx_id)
-{
-	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
-
-	if (nb_hold > rxq->rx_free_thresh) {
-		PMD_RX_LOG(DEBUG,
-			   "port_id=%u queue_id=%u rx_tail=%u nb_hold=%u",
-			   rxq->port_id, rxq->queue_id, rx_id, nb_hold);
-		rx_id = (uint16_t)((rx_id == 0) ?
-				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
-		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
-		nb_hold = 0;
-	}
-	rxq->nb_rx_hold = nb_hold;
-}
-
-uint16_t
-idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		       uint16_t nb_pkts)
-{
-	volatile union virtchnl2_rx_desc *rx_ring;
-	volatile union virtchnl2_rx_desc *rxdp;
-	union virtchnl2_rx_desc rxd;
-	struct idpf_rx_queue *rxq;
-	const uint32_t *ptype_tbl;
-	uint16_t rx_id, nb_hold;
-	struct rte_eth_dev *dev;
-	struct idpf_adapter_ext *ad;
-	uint16_t rx_packet_len;
-	struct rte_mbuf *rxm;
-	struct rte_mbuf *nmb;
-	uint16_t rx_status0;
-	uint64_t pkt_flags;
-	uint64_t dma_addr;
-	uint64_t ts_ns;
-	uint16_t nb_rx;
-
-	nb_rx = 0;
-	nb_hold = 0;
-	rxq = rx_queue;
-
-	ad = IDPF_ADAPTER_TO_EXT(rxq->adapter);
-
-	if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
-		return nb_rx;
-
-	rx_id = rxq->rx_tail;
-	rx_ring = rxq->rx_ring;
-	ptype_tbl = rxq->adapter->ptype_tbl;
-
-	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
-		rxq->hw_register_set = 1;
-
-	while (nb_rx < nb_pkts) {
-		rxdp = &rx_ring[rx_id];
-		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
-
-		/* Check the DD bit first */
-		if ((rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)) == 0)
-			break;
-
-		nmb = rte_mbuf_raw_alloc(rxq->mp);
-		if (unlikely(nmb == NULL)) {
-			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed++;
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
-				   "queue_id=%u", rxq->port_id, rxq->queue_id);
-			break;
-		}
-		rxd = *rxdp; /* copy descriptor in ring to temp variable*/
-
-		nb_hold++;
-		rxm = rxq->sw_ring[rx_id];
-		rxq->sw_ring[rx_id] = nmb;
-		rx_id++;
-		if (unlikely(rx_id == rxq->nb_rx_desc))
-			rx_id = 0;
-
-		/* Prefetch next mbuf */
-		rte_prefetch0(rxq->sw_ring[rx_id]);
-
-		/* When next RX descriptor is on a cache line boundary,
-		 * prefetch the next 4 RX descriptors and next 8 pointers
-		 * to mbufs.
-		 */
-		if ((rx_id & 0x3) == 0) {
-			rte_prefetch0(&rx_ring[rx_id]);
-			rte_prefetch0(rxq->sw_ring[rx_id]);
-		}
-		dma_addr =
-			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
-		rxdp->read.hdr_addr = 0;
-		rxdp->read.pkt_addr = dma_addr;
-
-		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
-				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
-
-		rxm->data_off = RTE_PKTMBUF_HEADROOM;
-		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
-		rxm->nb_segs = 1;
-		rxm->next = NULL;
-		rxm->pkt_len = rx_packet_len;
-		rxm->data_len = rx_packet_len;
-		rxm->port = rxq->port_id;
-		rxm->ol_flags = 0;
-		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
-		rxm->packet_type =
-			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
-					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
-
-		rxm->ol_flags |= pkt_flags;
-
-		if (idpf_timestamp_dynflag > 0 &&
-		   (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0) {
-			/* timestamp */
-			ts_ns = idpf_tstamp_convert_32b_64b(ad,
-				rxq->hw_register_set,
-				rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
-			rxq->hw_register_set = 0;
-			*RTE_MBUF_DYNFIELD(rxm,
-					   idpf_timestamp_dynfield_offset,
-					   rte_mbuf_timestamp_t *) = ts_ns;
-			rxm->ol_flags |= idpf_timestamp_dynflag;
-		}
-
-		rx_pkts[nb_rx++] = rxm;
-	}
-	rxq->rx_tail = rx_id;
-
-	idpf_update_rx_tail(rxq, nb_hold, rx_id);
-
-	return nb_rx;
-}
-
-static inline int
-idpf_xmit_cleanup(struct idpf_tx_queue *txq)
-{
-	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
-	struct idpf_tx_entry *sw_ring = txq->sw_ring;
-	uint16_t nb_tx_desc = txq->nb_tx_desc;
-	uint16_t desc_to_clean_to;
-	uint16_t nb_tx_to_clean;
-	uint16_t i;
-
-	volatile struct idpf_flex_tx_desc *txd = txq->tx_ring;
-
-	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
-	if (desc_to_clean_to >= nb_tx_desc)
-		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
-
-	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
-	/* In the writeback Tx desccriptor, the only significant fields are the 4-bit DTYPE */
-	if ((txd[desc_to_clean_to].qw1.cmd_dtype &
-			rte_cpu_to_le_16(IDPF_TXD_QW1_DTYPE_M)) !=
-			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE)) {
-		PMD_TX_LOG(DEBUG, "TX descriptor %4u is not done "
-			   "(port=%d queue=%d)", desc_to_clean_to,
-			   txq->port_id, txq->queue_id);
-		return -1;
-	}
-
-	if (last_desc_cleaned > desc_to_clean_to)
-		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
-					    desc_to_clean_to);
-	else
-		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
-					last_desc_cleaned);
-
-	txd[desc_to_clean_to].qw1.cmd_dtype = 0;
-	txd[desc_to_clean_to].qw1.buf_size = 0;
-	for (i = 0; i < RTE_DIM(txd[desc_to_clean_to].qw1.flex.raw); i++)
-		txd[desc_to_clean_to].qw1.flex.raw[i] = 0;
-
-	txq->last_desc_cleaned = desc_to_clean_to;
-	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
-
-	return 0;
-}
-
-/* TX function */
-uint16_t
-idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		       uint16_t nb_pkts)
-{
-	volatile struct idpf_flex_tx_desc *txd;
-	volatile struct idpf_flex_tx_desc *txr;
-	union idpf_tx_offload tx_offload = {0};
-	struct idpf_tx_entry *txe, *txn;
-	struct idpf_tx_entry *sw_ring;
-	struct idpf_tx_queue *txq;
-	struct rte_mbuf *tx_pkt;
-	struct rte_mbuf *m_seg;
-	uint64_t buf_dma_addr;
-	uint64_t ol_flags;
-	uint16_t tx_last;
-	uint16_t nb_used;
-	uint16_t nb_ctx;
-	uint16_t td_cmd;
-	uint16_t tx_id;
-	uint16_t nb_tx;
-	uint16_t slen;
-
-	nb_tx = 0;
-	txq = tx_queue;
-
-	if (unlikely(txq == NULL) || unlikely(!txq->q_started))
-		return nb_tx;
-
-	sw_ring = txq->sw_ring;
-	txr = txq->tx_ring;
-	tx_id = txq->tx_tail;
-	txe = &sw_ring[tx_id];
-
-	/* Check if the descriptor ring needs to be cleaned. */
-	if (txq->nb_free < txq->free_thresh)
-		(void)idpf_xmit_cleanup(txq);
-
-	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
-		td_cmd = 0;
-
-		tx_pkt = *tx_pkts++;
-		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
-
-		ol_flags = tx_pkt->ol_flags;
-		tx_offload.l2_len = tx_pkt->l2_len;
-		tx_offload.l3_len = tx_pkt->l3_len;
-		tx_offload.l4_len = tx_pkt->l4_len;
-		tx_offload.tso_segsz = tx_pkt->tso_segsz;
-		/* Calculate the number of context descriptors needed. */
-		nb_ctx = idpf_calc_context_desc(ol_flags);
-
-		/* The number of descriptors that must be allocated for
-		 * a packet equals to the number of the segments of that
-		 * packet plus 1 context descriptor if needed.
-		 */
-		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
-		tx_last = (uint16_t)(tx_id + nb_used - 1);
-
-		/* Circular ring */
-		if (tx_last >= txq->nb_tx_desc)
-			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
-
-		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
-			   " tx_first=%u tx_last=%u",
-			   txq->port_id, txq->queue_id, tx_id, tx_last);
-
-		if (nb_used > txq->nb_free) {
-			if (idpf_xmit_cleanup(txq) != 0) {
-				if (nb_tx == 0)
-					return 0;
-				goto end_of_tx;
-			}
-			if (unlikely(nb_used > txq->rs_thresh)) {
-				while (nb_used > txq->nb_free) {
-					if (idpf_xmit_cleanup(txq) != 0) {
-						if (nb_tx == 0)
-							return 0;
-						goto end_of_tx;
-					}
-				}
-			}
-		}
-
-		if (nb_ctx != 0) {
-			/* Setup TX context descriptor if required */
-			volatile union idpf_flex_tx_ctx_desc *ctx_txd =
-				(volatile union idpf_flex_tx_ctx_desc *)
-							&txr[tx_id];
-
-			txn = &sw_ring[txe->next_id];
-			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
-			if (txe->mbuf != NULL) {
-				rte_pktmbuf_free_seg(txe->mbuf);
-				txe->mbuf = NULL;
-			}
-
-			/* TSO enabled */
-			if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
-				idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
-							ctx_txd);
-
-			txe->last_id = tx_last;
-			tx_id = txe->next_id;
-			txe = txn;
-		}
-
-		m_seg = tx_pkt;
-		do {
-			txd = &txr[tx_id];
-			txn = &sw_ring[txe->next_id];
-
-			if (txe->mbuf != NULL)
-				rte_pktmbuf_free_seg(txe->mbuf);
-			txe->mbuf = m_seg;
-
-			/* Setup TX Descriptor */
-			slen = m_seg->data_len;
-			buf_dma_addr = rte_mbuf_data_iova(m_seg);
-			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
-			txd->qw1.buf_size = slen;
-			txd->qw1.cmd_dtype = rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_FLEX_DATA <<
-							      IDPF_FLEX_TXD_QW1_DTYPE_S);
-
-			txe->last_id = tx_last;
-			tx_id = txe->next_id;
-			txe = txn;
-			m_seg = m_seg->next;
-		} while (m_seg);
-
-		/* The last packet data descriptor needs End Of Packet (EOP) */
-		td_cmd |= IDPF_TX_FLEX_DESC_CMD_EOP;
-		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
-		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
-
-		if (txq->nb_used >= txq->rs_thresh) {
-			PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
-				   "%4u (port=%d queue=%d)",
-				   tx_last, txq->port_id, txq->queue_id);
-
-			td_cmd |= IDPF_TX_FLEX_DESC_CMD_RS;
-
-			/* Update txq RS bit counters */
-			txq->nb_used = 0;
-		}
-
-		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
-			td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
-
-		txd->qw1.cmd_dtype |= rte_cpu_to_le_16(td_cmd << IDPF_FLEX_TXD_QW1_CMD_S);
-	}
-
-end_of_tx:
-	rte_wmb();
-
-	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
-		   txq->port_id, txq->queue_id, tx_id, nb_tx);
-
-	IDPF_PCI_REG_WRITE(txq->qtx_tail, tx_id);
-	txq->tx_tail = tx_id;
-
-	return nb_tx;
-}
-
-/* TX prep functions */
-uint16_t
-idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
-	       uint16_t nb_pkts)
-{
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-	int ret;
-#endif
-	int i;
-	uint64_t ol_flags;
-	struct rte_mbuf *m;
-
-	for (i = 0; i < nb_pkts; i++) {
-		m = tx_pkts[i];
-		ol_flags = m->ol_flags;
-
-		/* Check condition for nb_segs > IDPF_TX_MAX_MTU_SEG. */
-		if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0) {
-			if (m->nb_segs > IDPF_TX_MAX_MTU_SEG) {
-				rte_errno = EINVAL;
-				return i;
-			}
-		} else if ((m->tso_segsz < IDPF_MIN_TSO_MSS) ||
-			   (m->tso_segsz > IDPF_MAX_TSO_MSS) ||
-			   (m->pkt_len > IDPF_MAX_TSO_FRAME_SIZE)) {
-			/* MSS outside the range are considered malicious */
-			rte_errno = EINVAL;
-			return i;
-		}
-
-		if ((ol_flags & IDPF_TX_OFFLOAD_NOTSUP_MASK) != 0) {
-			rte_errno = ENOTSUP;
-			return i;
-		}
-
-		if (m->pkt_len < IDPF_MIN_FRAME_SIZE) {
-			rte_errno = EINVAL;
-			return i;
-		}
-
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-		ret = rte_validate_tx_offload(m);
-		if (ret != 0) {
-			rte_errno = -ret;
-			return i;
-		}
-#endif
-	}
-
-	return i;
-}
-
 static void __rte_cold
 release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
 {
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 4efbf10295..eab363c3e7 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -8,41 +8,6 @@
 #include <idpf_common_rxtx.h>
 #include "idpf_ethdev.h"
 
-/* MTS */
-#define GLTSYN_CMD_SYNC_0_0	(PF_TIMESYNC_BASE + 0x0)
-#define PF_GLTSYN_SHTIME_0_0	(PF_TIMESYNC_BASE + 0x4)
-#define PF_GLTSYN_SHTIME_L_0	(PF_TIMESYNC_BASE + 0x8)
-#define PF_GLTSYN_SHTIME_H_0	(PF_TIMESYNC_BASE + 0xC)
-#define GLTSYN_ART_L_0		(PF_TIMESYNC_BASE + 0x10)
-#define GLTSYN_ART_H_0		(PF_TIMESYNC_BASE + 0x14)
-#define PF_GLTSYN_SHTIME_0_1	(PF_TIMESYNC_BASE + 0x24)
-#define PF_GLTSYN_SHTIME_L_1	(PF_TIMESYNC_BASE + 0x28)
-#define PF_GLTSYN_SHTIME_H_1	(PF_TIMESYNC_BASE + 0x2C)
-#define PF_GLTSYN_SHTIME_0_2	(PF_TIMESYNC_BASE + 0x44)
-#define PF_GLTSYN_SHTIME_L_2	(PF_TIMESYNC_BASE + 0x48)
-#define PF_GLTSYN_SHTIME_H_2	(PF_TIMESYNC_BASE + 0x4C)
-#define PF_GLTSYN_SHTIME_0_3	(PF_TIMESYNC_BASE + 0x64)
-#define PF_GLTSYN_SHTIME_L_3	(PF_TIMESYNC_BASE + 0x68)
-#define PF_GLTSYN_SHTIME_H_3	(PF_TIMESYNC_BASE + 0x6C)
-
-#define PF_TIMESYNC_BAR4_BASE	0x0E400000
-#define GLTSYN_ENA		(PF_TIMESYNC_BAR4_BASE + 0x90)
-#define GLTSYN_CMD		(PF_TIMESYNC_BAR4_BASE + 0x94)
-#define GLTSYC_TIME_L		(PF_TIMESYNC_BAR4_BASE + 0x104)
-#define GLTSYC_TIME_H		(PF_TIMESYNC_BAR4_BASE + 0x108)
-
-#define GLTSYN_CMD_SYNC_0_4	(PF_TIMESYNC_BAR4_BASE + 0x110)
-#define PF_GLTSYN_SHTIME_L_4	(PF_TIMESYNC_BAR4_BASE + 0x118)
-#define PF_GLTSYN_SHTIME_H_4	(PF_TIMESYNC_BAR4_BASE + 0x11C)
-#define GLTSYN_INCVAL_L		(PF_TIMESYNC_BAR4_BASE + 0x150)
-#define GLTSYN_INCVAL_H		(PF_TIMESYNC_BAR4_BASE + 0x154)
-#define GLTSYN_SHADJ_L		(PF_TIMESYNC_BAR4_BASE + 0x158)
-#define GLTSYN_SHADJ_H		(PF_TIMESYNC_BAR4_BASE + 0x15C)
-
-#define GLTSYN_CMD_SYNC_0_5	(PF_TIMESYNC_BAR4_BASE + 0x130)
-#define PF_GLTSYN_SHTIME_L_5	(PF_TIMESYNC_BAR4_BASE + 0x138)
-#define PF_GLTSYN_SHTIME_H_5	(PF_TIMESYNC_BAR4_BASE + 0x13C)
-
 /* Queue length (QLEN) must be a whole multiple of 32 descriptors. */
 #define IDPF_ALIGN_RING_DESC	32
 #define IDPF_MIN_RING_DESC	32
@@ -62,44 +27,10 @@
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
-#define IDPF_TX_MAX_MTU_SEG	10
-
-#define IDPF_MIN_TSO_MSS	88
-#define IDPF_MAX_TSO_MSS	9728
-#define IDPF_MAX_TSO_FRAME_SIZE	262143
-#define IDPF_TX_MAX_MTU_SEG     10
-
-#define IDPF_TX_CKSUM_OFFLOAD_MASK (		\
-		RTE_MBUF_F_TX_IP_CKSUM |	\
-		RTE_MBUF_F_TX_L4_MASK |		\
-		RTE_MBUF_F_TX_TCP_SEG)
-
-#define IDPF_TX_OFFLOAD_MASK (			\
-		IDPF_TX_CKSUM_OFFLOAD_MASK |	\
-		RTE_MBUF_F_TX_IPV4 |		\
-		RTE_MBUF_F_TX_IPV6)
-
-#define IDPF_TX_OFFLOAD_NOTSUP_MASK \
-		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
-
-extern uint64_t idpf_timestamp_dynflag;
-
 struct idpf_tx_vec_entry {
 	struct rte_mbuf *mbuf;
 };
 
-/* Offload features */
-union idpf_tx_offload {
-	uint64_t data;
-	struct {
-		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
-		uint64_t l3_len:9; /* L3 (IP) Header Length. */
-		uint64_t l4_len:8; /* L4 Header Length. */
-		uint64_t tso_segsz:16; /* TCP TSO segment size */
-		/* uint64_t unused : 24; */
-	};
-};
-
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
@@ -118,77 +49,14 @@ int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
-uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-				uint16_t nb_pkts);
 uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 				       uint16_t nb_pkts);
-uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-			       uint16_t nb_pkts);
-uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-				uint16_t nb_pkts);
 uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 				       uint16_t nb_pkts);
-uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			       uint16_t nb_pkts);
-uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			uint16_t nb_pkts);
 
 void idpf_stop_queues(struct rte_eth_dev *dev);
 
 void idpf_set_rx_function(struct rte_eth_dev *dev);
 void idpf_set_tx_function(struct rte_eth_dev *dev);
 
-#define IDPF_TIMESYNC_REG_WRAP_GUARD_BAND  10000
-/* Helper function to convert a 32b nanoseconds timestamp to 64b. */
-static inline uint64_t
-idpf_tstamp_convert_32b_64b(struct idpf_adapter_ext *ad, uint32_t flag,
-			    uint32_t in_timestamp)
-{
-#ifdef RTE_ARCH_X86_64
-	struct idpf_hw *hw = &ad->base.hw;
-	const uint64_t mask = 0xFFFFFFFF;
-	uint32_t hi, lo, lo2, delta;
-	uint64_t ns;
-
-	if (flag != 0) {
-		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
-		IDPF_WRITE_REG(hw, GLTSYN_CMD_SYNC_0_0, PF_GLTSYN_CMD_SYNC_EXEC_CMD_M |
-			       PF_GLTSYN_CMD_SYNC_SHTIME_EN_M);
-		lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
-		hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
-		/*
-		 * On a typical system, the delta between lo and lo2 is ~1000ns,
-		 * so 10000 seems a large-enough but not overly-big guard band.
-		 */
-		if (lo > (UINT32_MAX - IDPF_TIMESYNC_REG_WRAP_GUARD_BAND))
-			lo2 = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
-		else
-			lo2 = lo;
-
-		if (lo2 < lo) {
-			lo = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_L_0);
-			hi = IDPF_READ_REG(hw, PF_GLTSYN_SHTIME_H_0);
-		}
-
-		ad->time_hw = ((uint64_t)hi << 32) | lo;
-	}
-
-	delta = (in_timestamp - (uint32_t)(ad->time_hw & mask));
-	if (delta > (mask / 2)) {
-		delta = ((uint32_t)(ad->time_hw & mask) - in_timestamp);
-		ns = ad->time_hw - delta;
-	} else {
-		ns = ad->time_hw + delta;
-	}
-
-	return ns;
-#else /* !RTE_ARCH_X86_64 */
-	RTE_SET_USED(ad);
-	RTE_SET_USED(flag);
-	RTE_SET_USED(in_timestamp);
-	return 0;
-#endif /* RTE_ARCH_X86_64 */
-}
-
 #endif /* _IDPF_RXTX_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
index 71a6c59823..b1204b052e 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/net/idpf/idpf_rxtx_vec_avx512.c
@@ -38,8 +38,8 @@ idpf_singleq_rearm_common(struct idpf_rx_queue *rxq)
 						dma_addr0);
 			}
 		}
-		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
-			IDPF_RXQ_REARM_THRESH;
+		__atomic_fetch_add(&rxq->rx_stats.mbuf_alloc_failed,
+				   IDPF_RXQ_REARM_THRESH, __ATOMIC_RELAXED);
 		return;
 	}
 	struct rte_mbuf *mb0, *mb1, *mb2, *mb3;
@@ -168,8 +168,8 @@ idpf_singleq_rearm(struct idpf_rx_queue *rxq)
 							 dma_addr0);
 				}
 			}
-			rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
-					IDPF_RXQ_REARM_THRESH;
+			__atomic_fetch_add(&rxq->rx_stats.mbuf_alloc_failed,
+					   IDPF_RXQ_REARM_THRESH, __ATOMIC_RELAXED);
 			return;
 		}
 	}
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
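
The idpf_tstamp_convert_32b_64b() helper removed above lives on in the common module; its interesting part is the wrap-around arithmetic that extends a 32-bit rolling nanosecond counter to 64 bits. A minimal sketch of just that arithmetic, with the register reads factored out (the function name below is illustrative, not part of the driver):

#include <stdint.h>

/*
 * Sketch of the wrap handling in idpf_tstamp_convert_32b_64b():
 * "time_hw" is a recently sampled full 64-bit reference time and
 * "in_timestamp" a 32-bit hardware timestamp assumed to lie within
 * +/- 2^31 ns of it; the smaller unsigned distance decides whether
 * to add to or subtract from the reference.
 */
static inline uint64_t
tstamp_extend_32b_to_64b(uint64_t time_hw, uint32_t in_timestamp)
{
	const uint64_t mask = 0xFFFFFFFF;
	uint32_t delta = in_timestamp - (uint32_t)(time_hw & mask);

	if (delta > (mask / 2)) {
		/* timestamp is behind the reference: move backwards */
		delta = (uint32_t)(time_hw & mask) - in_timestamp;
		return time_hw - delta;
	}
	/* timestamp is at or ahead of the reference: move forwards */
	return time_hw + delta;
}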

* [PATCH v7 14/19] common/idpf: add vec queue setup
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (12 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 13/19] common/idpf: add Rx and Tx data path beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 15/19] common/idpf: add avx512 for single queue model beilei.xing
                           ` (5 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move vector queue setup for single queue model to common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 57 ++++++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h |  2 +
 drivers/common/idpf/version.map        |  1 +
 drivers/net/idpf/idpf_rxtx.c           | 57 --------------------------
 drivers/net/idpf/idpf_rxtx.h           |  1 -
 5 files changed, 60 insertions(+), 58 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index aea4263d92..9d0e8e35aa 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1399,3 +1399,60 @@ idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	return i;
 }
+
+static void __rte_cold
+release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
+{
+	const uint16_t mask = rxq->nb_rx_desc - 1;
+	uint16_t i;
+
+	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	if (rxq->rxrearm_nb == 0) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i] != NULL)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	} else {
+		for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask) {
+			if (rxq->sw_ring[i] != NULL)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	}
+
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static const struct idpf_rxq_ops def_singleq_rx_ops_vec = {
+	.release_mbufs = release_rxq_mbufs_vec,
+};
+
+static inline int
+idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
+{
+	uintptr_t p;
+	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+	mb_def.nb_segs = 1;
+	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+	mb_def.port = rxq->port_id;
+	rte_mbuf_refcnt_set(&mb_def, 1);
+
+	/* prevent compiler reordering: rearm_data covers previous fields */
+	rte_compiler_barrier();
+	p = (uintptr_t)&mb_def.rearm_data;
+	rxq->mbuf_initializer = *(uint64_t *)p;
+	return 0;
+}
+
+int __rte_cold
+idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
+{
+	rxq->ops = &def_singleq_rx_ops_vec;
+	return idpf_singleq_rx_vec_setup_default(rxq);
+}
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index ef4e4f4a3c..f88ed20cdf 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -252,5 +252,7 @@ uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 __rte_internal
 uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
+__rte_internal
+int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 03aab598b4..511705e5b0 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -25,6 +25,7 @@ INTERNAL {
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
 	idpf_singleq_recv_pkts;
+	idpf_singleq_rx_vec_setup;
 	idpf_singleq_xmit_pkts;
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 1066789386..c0c622d64b 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -743,63 +743,6 @@ idpf_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
-static void __rte_cold
-release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
-{
-	const uint16_t mask = rxq->nb_rx_desc - 1;
-	uint16_t i;
-
-	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
-		return;
-
-	/* free all mbufs that are valid in the ring */
-	if (rxq->rxrearm_nb == 0) {
-		for (i = 0; i < rxq->nb_rx_desc; i++) {
-			if (rxq->sw_ring[i] != NULL)
-				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-		}
-	} else {
-		for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask) {
-			if (rxq->sw_ring[i] != NULL)
-				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
-		}
-	}
-
-	rxq->rxrearm_nb = rxq->nb_rx_desc;
-
-	/* set all entries to NULL */
-	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
-}
-
-static const struct idpf_rxq_ops def_singleq_rx_ops_vec = {
-	.release_mbufs = release_rxq_mbufs_vec,
-};
-
-static inline int
-idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
-{
-	uintptr_t p;
-	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
-
-	mb_def.nb_segs = 1;
-	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
-	mb_def.port = rxq->port_id;
-	rte_mbuf_refcnt_set(&mb_def, 1);
-
-	/* prevent compiler reordering: rearm_data covers previous fields */
-	rte_compiler_barrier();
-	p = (uintptr_t)&mb_def.rearm_data;
-	rxq->mbuf_initializer = *(uint64_t *)p;
-	return 0;
-}
-
-int __rte_cold
-idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
-{
-	rxq->ops = &def_singleq_rx_ops_vec;
-	return idpf_singleq_rx_vec_setup_default(rxq);
-}
-
 void
 idpf_set_rx_function(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index eab363c3e7..a985dc2cf5 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -44,7 +44,6 @@ void idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 int idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
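
The setup moved above builds rxq->mbuf_initializer by snapshotting the 64-bit rearm_data window of a template mbuf. A sketch of how a vector rearm path typically consumes that template (the consumer function below is illustrative; the actual stores live in the AVX512 rearm code):

#include <rte_mbuf.h>

/*
 * Sketch: the 64-bit template captured in
 * idpf_singleq_rx_vec_setup_default() covers data_off, refcnt,
 * nb_segs and port, so a single 8-byte store re-initializes all
 * four fields of a freshly allocated mbuf at once.
 */
static inline void
rearm_mbuf_from_template(struct rte_mbuf *m, uint64_t mbuf_initializer)
{
	*(uint64_t *)&m->rearm_data = mbuf_initializer;
}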

* [PATCH v7 15/19] common/idpf: add avx512 for single queue model
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (13 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 14/19] common/idpf: add vec queue setup beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 16/19] common/idpf: refine API name for vport functions beilei.xing
                           ` (4 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Move avx512 vector path for single queue to common module.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.h        | 20 ++++++++++++++
 .../idpf/idpf_common_rxtx_avx512.c}           |  6 ++---
 drivers/common/idpf/meson.build               | 27 +++++++++++++++++++
 drivers/common/idpf/version.map               |  3 +++
 drivers/net/idpf/idpf_rxtx.h                  | 13 ---------
 drivers/net/idpf/meson.build                  | 17 ------------
 6 files changed, 53 insertions(+), 33 deletions(-)
 rename drivers/{net/idpf/idpf_rxtx_vec_avx512.c => common/idpf/idpf_common_rxtx_avx512.c} (99%)

diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index f88ed20cdf..370571a517 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -47,6 +47,12 @@
 #define IDPF_TX_OFFLOAD_NOTSUP_MASK \
 		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
 
+/* used for Vector PMD */
+#define IDPF_VPMD_RX_MAX_BURST		32
+#define IDPF_VPMD_TX_MAX_BURST		32
+#define IDPF_VPMD_DESCS_PER_LOOP	4
+#define IDPF_RXQ_REARM_THRESH		64
+
 /* MTS */
 #define GLTSYN_CMD_SYNC_0_0	(PF_TIMESYNC_BASE + 0x0)
 #define PF_GLTSYN_SHTIME_0_0	(PF_TIMESYNC_BASE + 0x4)
@@ -193,6 +199,10 @@ union idpf_tx_offload {
 	};
 };
 
+struct idpf_tx_vec_entry {
+	struct rte_mbuf *mbuf;
+};
+
 struct idpf_rxq_ops {
 	void (*release_mbufs)(struct idpf_rx_queue *rxq);
 };
@@ -254,5 +264,15 @@ uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
 __rte_internal
 int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
+__rte_internal
+int idpf_singleq_tx_vec_setup_avx512(struct idpf_tx_queue *txq);
+__rte_internal
+uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue,
+				       struct rte_mbuf **rx_pkts,
+				       uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue,
+				       struct rte_mbuf **tx_pkts,
+				       uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx_vec_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
similarity index 99%
rename from drivers/net/idpf/idpf_rxtx_vec_avx512.c
rename to drivers/common/idpf/idpf_common_rxtx_avx512.c
index b1204b052e..b765c78b34 100644
--- a/drivers/net/idpf/idpf_rxtx_vec_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -1,10 +1,10 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2022 Intel Corporation
+ * Copyright(c) 2023 Intel Corporation
  */
 
-#include "idpf_rxtx_vec_common.h"
-
 #include <rte_vect.h>
+#include <idpf_common_device.h>
+#include <idpf_common_rxtx.h>
 
 #ifndef __INTEL_COMPILER
 #pragma GCC diagnostic ignored "-Wcast-qual"
diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
index 6735f4af61..13df0d9ac3 100644
--- a/drivers/common/idpf/meson.build
+++ b/drivers/common/idpf/meson.build
@@ -9,4 +9,31 @@ sources = files(
         'idpf_common_virtchnl.c',
 )
 
+if arch_subdir == 'x86'
+    idpf_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    idpf_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if idpf_avx512_cpu_support == true or idpf_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+        avx512_args = [cflags, '-mavx512f', '-mavx512bw']
+        if cc.has_argument('-march=skylake-avx512')
+            avx512_args += '-march=skylake-avx512'
+        endif
+        idpf_common_avx512_lib = static_library('idpf_common_avx512_lib',
+                'idpf_common_rxtx_avx512.c',
+                dependencies: [static_rte_mbuf,],
+                include_directories: includes,
+                c_args: avx512_args)
+        objs += idpf_common_avx512_lib.extract_objects('idpf_common_rxtx_avx512.c')
+    endif
+endif
+
 subdir('base')
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 511705e5b0..a0e97de81f 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -25,8 +25,11 @@ INTERNAL {
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
 	idpf_singleq_recv_pkts;
+	idpf_singleq_recv_pkts_avx512;
 	idpf_singleq_rx_vec_setup;
+	idpf_singleq_tx_vec_setup_avx512;
 	idpf_singleq_xmit_pkts;
+	idpf_singleq_xmit_pkts_avx512;
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
 	idpf_tx_queue_release;
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index a985dc2cf5..3a5084dfd6 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -19,23 +19,14 @@
 #define IDPF_DEFAULT_RX_FREE_THRESH	32
 
 /* used for Vector PMD */
-#define IDPF_VPMD_RX_MAX_BURST	32
-#define IDPF_VPMD_TX_MAX_BURST	32
-#define IDPF_VPMD_DESCS_PER_LOOP	4
-#define IDPF_RXQ_REARM_THRESH	64
 
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
-struct idpf_tx_vec_entry {
-	struct rte_mbuf *mbuf;
-};
-
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
-int idpf_singleq_tx_vec_setup_avx512(struct idpf_tx_queue *txq);
 int idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
@@ -48,10 +39,6 @@ int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
-uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
-				       uint16_t nb_pkts);
-uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
-				       uint16_t nb_pkts);
 
 void idpf_stop_queues(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
index 378925166f..98f8ceb77b 100644
--- a/drivers/net/idpf/meson.build
+++ b/drivers/net/idpf/meson.build
@@ -34,22 +34,5 @@ if arch_subdir == 'x86'
 
     if idpf_avx512_cpu_support == true or idpf_avx512_cc_support == true
         cflags += ['-DCC_AVX512_SUPPORT']
-        avx512_args = [cflags, '-mavx512f', '-mavx512bw']
-        if cc.has_argument('-march=skylake-avx512')
-            avx512_args += '-march=skylake-avx512'
-        endif
-        idpf_avx512_lib = static_library(
-            'idpf_avx512_lib',
-            'idpf_rxtx_vec_avx512.c',
-            dependencies: [
-                    static_rte_common_idpf,
-                    static_rte_ethdev,
-                    static_rte_bus_pci,
-                    static_rte_kvargs,
-                    static_rte_hash,
-            ],
-            include_directories: includes,
-            c_args: avx512_args)
-        objs += idpf_avx512_lib.extract_objects('idpf_rxtx_vec_avx512.c')
     endif
 endif
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
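
The build logic added above is the compile-time half of a two-level gate; whether the AVX512 burst functions are actually used is still decided at runtime. A condensed sketch of the combined check, assuming the standard rte_cpuflags/rte_vect APIs (the function name is illustrative):

#include <stdbool.h>
#include <rte_cpuflags.h>
#include <rte_vect.h>

/*
 * Sketch: CC_AVX512_SUPPORT is defined by the meson logic above when
 * the toolchain can emit AVX512; the CPU flags and the configured
 * max SIMD bitwidth then gate the choice at runtime, mirroring the
 * vport->rx_use_avx512 / tx_use_avx512 flags in the net driver.
 */
static bool
example_can_use_avx512(void)
{
#ifdef CC_AVX512_SUPPORT
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) &&
	    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) &&
	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
		return true;
#endif
	return false;
}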

* [PATCH v7 16/19] common/idpf: refine API name for vport functions
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (14 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 15/19] common/idpf: add avx512 for single queue model beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 17/19] common/idpf: refine API name for queue config module beilei.xing
                           ` (3 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

This patch refines the API names of all vport-related functions.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c |  8 ++++----
 drivers/common/idpf/idpf_common_device.h | 10 +++++-----
 drivers/common/idpf/version.map          | 14 ++++++++------
 drivers/net/idpf/idpf_ethdev.c           | 10 +++++-----
 4 files changed, 22 insertions(+), 20 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index e8d69c2490..e67bd616dc 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -505,7 +505,7 @@ idpf_vport_deinit(struct idpf_vport *vport)
 	return 0;
 }
 int
-idpf_config_rss(struct idpf_vport *vport)
+idpf_vport_rss_config(struct idpf_vport *vport)
 {
 	int ret;
 
@@ -531,7 +531,7 @@ idpf_config_rss(struct idpf_vport *vport)
 }
 
 int
-idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues)
+idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_queue_vector *qv_map;
@@ -606,7 +606,7 @@ idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues)
 }
 
 int
-idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues)
+idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 {
 	idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, false);
 
@@ -617,7 +617,7 @@ idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues)
 }
 
 int
-idpf_create_vport_info_init(struct idpf_vport *vport,
+idpf_vport_info_init(struct idpf_vport *vport,
 			    struct virtchnl2_create_vport *vport_info)
 {
 	struct idpf_adapter *adapter = vport->adapter;
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 583ca90361..545117df79 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -183,13 +183,13 @@ int idpf_vport_init(struct idpf_vport *vport,
 __rte_internal
 int idpf_vport_deinit(struct idpf_vport *vport);
 __rte_internal
-int idpf_config_rss(struct idpf_vport *vport);
+int idpf_vport_rss_config(struct idpf_vport *vport);
 __rte_internal
-int idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues);
+int idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues);
 __rte_internal
-int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
+int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues);
 __rte_internal
-int idpf_create_vport_info_init(struct idpf_vport *vport,
-				struct virtchnl2_create_vport *vport_info);
+int idpf_vport_info_init(struct idpf_vport *vport,
+			 struct virtchnl2_create_vport *vport_info);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index a0e97de81f..bd4dae503a 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -3,14 +3,18 @@ INTERNAL {
 
 	idpf_adapter_deinit;
 	idpf_adapter_init;
+
+	idpf_vport_deinit;
+	idpf_vport_info_init;
+	idpf_vport_init;
+	idpf_vport_irq_map_config;
+	idpf_vport_irq_unmap_config;
+	idpf_vport_rss_config;
+
 	idpf_alloc_single_rxq_mbufs;
 	idpf_alloc_split_rxq_mbufs;
 	idpf_check_rx_thresh;
 	idpf_check_tx_thresh;
-	idpf_config_irq_map;
-	idpf_config_irq_unmap;
-	idpf_config_rss;
-	idpf_create_vport_info_init;
 	idpf_execute_vc_cmd;
 	idpf_prep_pkts;
 	idpf_register_ts_mbuf;
@@ -50,8 +54,6 @@ INTERNAL {
 	idpf_vc_set_rss_key;
 	idpf_vc_set_rss_lut;
 	idpf_vc_switch_queue;
-	idpf_vport_deinit;
-	idpf_vport_init;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index ee2dec7c7c..b324c0dc83 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -169,7 +169,7 @@ idpf_init_rss(struct idpf_vport *vport)
 
 	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
 
-	ret = idpf_config_rss(vport);
+	ret = idpf_vport_rss_config(vport);
 	if (ret != 0)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS");
 
@@ -245,7 +245,7 @@ idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
 	struct idpf_vport *vport = dev->data->dev_private;
 	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
 
-	return idpf_config_irq_map(vport, nb_rx_queues);
+	return idpf_vport_irq_map_config(vport, nb_rx_queues);
 }
 
 static int
@@ -334,7 +334,7 @@ idpf_dev_start(struct rte_eth_dev *dev)
 err_vport:
 	idpf_stop_queues(dev);
 err_startq:
-	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
 err_irq:
 	idpf_vc_dealloc_vectors(vport);
 err_vec:
@@ -353,7 +353,7 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_stop_queues(dev);
 
-	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
 
 	idpf_vc_dealloc_vectors(vport);
 
@@ -643,7 +643,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->devarg_id = param->devarg_id;
 
 	memset(&create_vport_info, 0, sizeof(create_vport_info));
-	ret = idpf_create_vport_info_init(vport, &create_vport_info);
+	ret = idpf_vport_info_init(vport, &create_vport_info);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
 		goto err;
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
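
For readers tracking the renames, a short sketch of the order in which a consumer PMD (the idpf ethdev, or the planned CPFL PMD) drives the renamed vport APIs during bring-up; error handling and the idpf_vport_init() call in the middle are elided, and the wrapper function itself is illustrative:

#include <idpf_common_device.h>

/*
 * Sketch of vport bring-up with the renamed APIs, following the call
 * order in idpf_dev_vport_init()/idpf_dev_start():
 *   idpf_vport_info_init()       - fill the create-vport request
 *   (idpf_vport_init() sends it and allocates vport state)
 *   idpf_vport_rss_config()      - program RSS key, LUT and hash
 *   idpf_vport_irq_map_config()  - map queue vectors before start
 * Teardown reverses this via idpf_vport_irq_unmap_config() and
 * idpf_vport_deinit().
 */
static int
example_vport_configure(struct idpf_vport *vport,
			struct virtchnl2_create_vport *info,
			uint16_t nb_rx_queues)
{
	int ret;

	ret = idpf_vport_info_init(vport, info);
	if (ret != 0)
		return ret;
	/* ... idpf_vport_init() would run here ... */
	ret = idpf_vport_rss_config(vport);
	if (ret != 0)
		return ret;
	return idpf_vport_irq_map_config(vport, nb_rx_queues);
}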

* [PATCH v7 17/19] common/idpf: refine API name for queue config module
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (15 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 16/19] common/idpf: refine API name for vport functions beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 18/19] common/idpf: refine API name for data path module beilei.xing
                           ` (2 subsequent siblings)
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

This patch refines the API names of the queue configuration functions.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c        | 42 ++++++++--------
 drivers/common/idpf/idpf_common_rxtx.h        | 38 +++++++-------
 drivers/common/idpf/idpf_common_rxtx_avx512.c |  2 +-
 drivers/common/idpf/version.map               | 37 +++++++-------
 drivers/net/idpf/idpf_rxtx.c                  | 50 +++++++++----------
 5 files changed, 85 insertions(+), 84 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 9d0e8e35aa..86dadf9cd2 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -11,7 +11,7 @@ int idpf_timestamp_dynfield_offset = -1;
 uint64_t idpf_timestamp_dynflag;
 
 int
-idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+idpf_qc_rx_thresh_check(uint16_t nb_desc, uint16_t thresh)
 {
 	/* The following constraints must be satisfied:
 	 * thresh < rxq->nb_rx_desc
@@ -26,8 +26,8 @@ idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 }
 
 int
-idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
-		     uint16_t tx_free_thresh)
+idpf_qc_tx_thresh_check(uint16_t nb_desc, uint16_t tx_rs_thresh,
+			uint16_t tx_free_thresh)
 {
 	/* TX descriptors will have their RS bit set after tx_rs_thresh
 	 * descriptors have been used. The TX descriptor ring will be cleaned
@@ -74,7 +74,7 @@ idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
 }
 
 void
-idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq)
+idpf_qc_rxq_mbufs_release(struct idpf_rx_queue *rxq)
 {
 	uint16_t i;
 
@@ -90,7 +90,7 @@ idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq)
 }
 
 void
-idpf_release_txq_mbufs(struct idpf_tx_queue *txq)
+idpf_qc_txq_mbufs_release(struct idpf_tx_queue *txq)
 {
 	uint16_t nb_desc, i;
 
@@ -115,7 +115,7 @@ idpf_release_txq_mbufs(struct idpf_tx_queue *txq)
 }
 
 void
-idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq)
+idpf_qc_split_rx_descq_reset(struct idpf_rx_queue *rxq)
 {
 	uint16_t len;
 	uint32_t i;
@@ -134,7 +134,7 @@ idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq)
 }
 
 void
-idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq)
+idpf_qc_split_rx_bufq_reset(struct idpf_rx_queue *rxq)
 {
 	uint16_t len;
 	uint32_t i;
@@ -166,15 +166,15 @@ idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq)
 }
 
 void
-idpf_reset_split_rx_queue(struct idpf_rx_queue *rxq)
+idpf_qc_split_rx_queue_reset(struct idpf_rx_queue *rxq)
 {
-	idpf_reset_split_rx_descq(rxq);
-	idpf_reset_split_rx_bufq(rxq->bufq1);
-	idpf_reset_split_rx_bufq(rxq->bufq2);
+	idpf_qc_split_rx_descq_reset(rxq);
+	idpf_qc_split_rx_bufq_reset(rxq->bufq1);
+	idpf_qc_split_rx_bufq_reset(rxq->bufq2);
 }
 
 void
-idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq)
+idpf_qc_single_rx_queue_reset(struct idpf_rx_queue *rxq)
 {
 	uint16_t len;
 	uint32_t i;
@@ -205,7 +205,7 @@ idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq)
 }
 
 void
-idpf_reset_split_tx_descq(struct idpf_tx_queue *txq)
+idpf_qc_split_tx_descq_reset(struct idpf_tx_queue *txq)
 {
 	struct idpf_tx_entry *txe;
 	uint32_t i, size;
@@ -239,7 +239,7 @@ idpf_reset_split_tx_descq(struct idpf_tx_queue *txq)
 }
 
 void
-idpf_reset_split_tx_complq(struct idpf_tx_queue *cq)
+idpf_qc_split_tx_complq_reset(struct idpf_tx_queue *cq)
 {
 	uint32_t i, size;
 
@@ -257,7 +257,7 @@ idpf_reset_split_tx_complq(struct idpf_tx_queue *cq)
 }
 
 void
-idpf_reset_single_tx_queue(struct idpf_tx_queue *txq)
+idpf_qc_single_tx_queue_reset(struct idpf_tx_queue *txq)
 {
 	struct idpf_tx_entry *txe;
 	uint32_t i, size;
@@ -294,7 +294,7 @@ idpf_reset_single_tx_queue(struct idpf_tx_queue *txq)
 }
 
 void
-idpf_rx_queue_release(void *rxq)
+idpf_qc_rx_queue_release(void *rxq)
 {
 	struct idpf_rx_queue *q = rxq;
 
@@ -324,7 +324,7 @@ idpf_rx_queue_release(void *rxq)
 }
 
 void
-idpf_tx_queue_release(void *txq)
+idpf_qc_tx_queue_release(void *txq)
 {
 	struct idpf_tx_queue *q = txq;
 
@@ -343,7 +343,7 @@ idpf_tx_queue_release(void *txq)
 }
 
 int
-idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
+idpf_qc_ts_mbuf_register(struct idpf_rx_queue *rxq)
 {
 	int err;
 	if ((rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
@@ -360,7 +360,7 @@ idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
 }
 
 int
-idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
+idpf_qc_single_rxq_mbufs_alloc(struct idpf_rx_queue *rxq)
 {
 	volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
 	struct rte_mbuf *mbuf = NULL;
@@ -395,7 +395,7 @@ idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
 }
 
 int
-idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
+idpf_qc_split_rxq_mbufs_alloc(struct idpf_rx_queue *rxq)
 {
 	volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
 	struct rte_mbuf *mbuf = NULL;
@@ -1451,7 +1451,7 @@ idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
 }
 
 int __rte_cold
-idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
+idpf_qc_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
 {
 	rxq->ops = &def_singleq_rx_ops_vec;
 	return idpf_singleq_rx_vec_setup_default(rxq);
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 370571a517..08081ad30a 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -215,38 +215,38 @@ extern int idpf_timestamp_dynfield_offset;
 extern uint64_t idpf_timestamp_dynflag;
 
 __rte_internal
-int idpf_check_rx_thresh(uint16_t nb_desc, uint16_t thresh);
+int idpf_qc_rx_thresh_check(uint16_t nb_desc, uint16_t thresh);
 __rte_internal
-int idpf_check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
-			 uint16_t tx_free_thresh);
+int idpf_qc_tx_thresh_check(uint16_t nb_desc, uint16_t tx_rs_thresh,
+			    uint16_t tx_free_thresh);
 __rte_internal
-void idpf_release_rxq_mbufs(struct idpf_rx_queue *rxq);
+void idpf_qc_rxq_mbufs_release(struct idpf_rx_queue *rxq);
 __rte_internal
-void idpf_release_txq_mbufs(struct idpf_tx_queue *txq);
+void idpf_qc_txq_mbufs_release(struct idpf_tx_queue *txq);
 __rte_internal
-void idpf_reset_split_rx_descq(struct idpf_rx_queue *rxq);
+void idpf_qc_split_rx_descq_reset(struct idpf_rx_queue *rxq);
 __rte_internal
-void idpf_reset_split_rx_bufq(struct idpf_rx_queue *rxq);
+void idpf_qc_split_rx_bufq_reset(struct idpf_rx_queue *rxq);
 __rte_internal
-void idpf_reset_split_rx_queue(struct idpf_rx_queue *rxq);
+void idpf_qc_split_rx_queue_reset(struct idpf_rx_queue *rxq);
 __rte_internal
-void idpf_reset_single_rx_queue(struct idpf_rx_queue *rxq);
+void idpf_qc_single_rx_queue_reset(struct idpf_rx_queue *rxq);
 __rte_internal
-void idpf_reset_split_tx_descq(struct idpf_tx_queue *txq);
+void idpf_qc_split_tx_descq_reset(struct idpf_tx_queue *txq);
 __rte_internal
-void idpf_reset_split_tx_complq(struct idpf_tx_queue *cq);
+void idpf_qc_split_tx_complq_reset(struct idpf_tx_queue *cq);
 __rte_internal
-void idpf_reset_single_tx_queue(struct idpf_tx_queue *txq);
+void idpf_qc_single_tx_queue_reset(struct idpf_tx_queue *txq);
 __rte_internal
-void idpf_rx_queue_release(void *rxq);
+void idpf_qc_rx_queue_release(void *rxq);
 __rte_internal
-void idpf_tx_queue_release(void *txq);
+void idpf_qc_tx_queue_release(void *txq);
 __rte_internal
-int idpf_register_ts_mbuf(struct idpf_rx_queue *rxq);
+int idpf_qc_ts_mbuf_register(struct idpf_rx_queue *rxq);
 __rte_internal
-int idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq);
+int idpf_qc_single_rxq_mbufs_alloc(struct idpf_rx_queue *rxq);
 __rte_internal
-int idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq);
+int idpf_qc_split_rxq_mbufs_alloc(struct idpf_rx_queue *rxq);
 __rte_internal
 uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			       uint16_t nb_pkts);
@@ -263,9 +263,9 @@ __rte_internal
 uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
 __rte_internal
-int idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
+int idpf_qc_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 __rte_internal
-int idpf_singleq_tx_vec_setup_avx512(struct idpf_tx_queue *txq);
+int idpf_qc_singleq_tx_vec_avx512_setup(struct idpf_tx_queue *txq);
 __rte_internal
 uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue,
 				       struct rte_mbuf **rx_pkts,
diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
index b765c78b34..9dd63fefab 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -850,7 +850,7 @@ static const struct idpf_txq_ops avx512_singleq_tx_vec_ops = {
 };
 
 int __rte_cold
-idpf_singleq_tx_vec_setup_avx512(struct idpf_tx_queue *txq)
+idpf_qc_singleq_tx_vec_avx512_setup(struct idpf_tx_queue *txq)
 {
 	txq->ops = &avx512_singleq_tx_vec_ops;
 	return 0;
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index bd4dae503a..2ff152a353 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -4,6 +4,25 @@ INTERNAL {
 	idpf_adapter_deinit;
 	idpf_adapter_init;
 
+	idpf_qc_rx_thresh_check;
+	idpf_qc_rx_queue_release;
+	idpf_qc_rxq_mbufs_release;
+	idpf_qc_single_rx_queue_reset;
+	idpf_qc_single_rxq_mbufs_alloc;
+	idpf_qc_single_tx_queue_reset;
+	idpf_qc_singleq_rx_vec_setup;
+	idpf_qc_singleq_tx_vec_avx512_setup;
+	idpf_qc_split_rx_bufq_reset;
+	idpf_qc_split_rx_descq_reset;
+	idpf_qc_split_rx_queue_reset;
+	idpf_qc_split_rxq_mbufs_alloc;
+	idpf_qc_split_tx_complq_reset;
+	idpf_qc_split_tx_descq_reset;
+	idpf_qc_ts_mbuf_register;
+	idpf_qc_tx_queue_release;
+	idpf_qc_tx_thresh_check;
+	idpf_qc_txq_mbufs_release;
+
 	idpf_vport_deinit;
 	idpf_vport_info_init;
 	idpf_vport_init;
@@ -11,32 +30,14 @@ INTERNAL {
 	idpf_vport_irq_unmap_config;
 	idpf_vport_rss_config;
 
-	idpf_alloc_single_rxq_mbufs;
-	idpf_alloc_split_rxq_mbufs;
-	idpf_check_rx_thresh;
-	idpf_check_tx_thresh;
 	idpf_execute_vc_cmd;
 	idpf_prep_pkts;
-	idpf_register_ts_mbuf;
-	idpf_release_rxq_mbufs;
-	idpf_release_txq_mbufs;
-	idpf_reset_single_rx_queue;
-	idpf_reset_single_tx_queue;
-	idpf_reset_split_rx_bufq;
-	idpf_reset_split_rx_descq;
-	idpf_reset_split_rx_queue;
-	idpf_reset_split_tx_complq;
-	idpf_reset_split_tx_descq;
-	idpf_rx_queue_release;
 	idpf_singleq_recv_pkts;
 	idpf_singleq_recv_pkts_avx512;
-	idpf_singleq_rx_vec_setup;
-	idpf_singleq_tx_vec_setup_avx512;
 	idpf_singleq_xmit_pkts;
 	idpf_singleq_xmit_pkts_avx512;
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
-	idpf_tx_queue_release;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index c0c622d64b..ec75d6f69e 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -51,11 +51,11 @@ idpf_tx_offload_convert(uint64_t offload)
 }
 
 static const struct idpf_rxq_ops def_rxq_ops = {
-	.release_mbufs = idpf_release_rxq_mbufs,
+	.release_mbufs = idpf_qc_rxq_mbufs_release,
 };
 
 static const struct idpf_txq_ops def_txq_ops = {
-	.release_mbufs = idpf_release_txq_mbufs,
+	.release_mbufs = idpf_qc_txq_mbufs_release,
 };
 
 static const struct rte_memzone *
@@ -183,7 +183,7 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 		goto err_sw_ring_alloc;
 	}
 
-	idpf_reset_split_rx_bufq(bufq);
+	idpf_qc_split_rx_bufq_reset(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
 	bufq->ops = &def_rxq_ops;
@@ -242,12 +242,12 @@ idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
 		IDPF_DEFAULT_RX_FREE_THRESH :
 		rx_conf->rx_free_thresh;
-	if (idpf_check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Free memory if needed */
 	if (dev->data->rx_queues[queue_idx] != NULL) {
-		idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		idpf_qc_rx_queue_release(dev->data->rx_queues[queue_idx]);
 		dev->data->rx_queues[queue_idx] = NULL;
 	}
 
@@ -300,12 +300,12 @@ idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			goto err_sw_ring_alloc;
 		}
 
-		idpf_reset_single_rx_queue(rxq);
+		idpf_qc_single_rx_queue_reset(rxq);
 		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
 				queue_idx * vport->chunks_info.rx_qtail_spacing);
 		rxq->ops = &def_rxq_ops;
 	} else {
-		idpf_reset_split_rx_descq(rxq);
+		idpf_qc_split_rx_descq_reset(rxq);
 
 		/* Setup Rx buffer queues */
 		ret = idpf_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
@@ -379,7 +379,7 @@ idpf_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 	cq->tx_ring_phys_addr = mz->iova;
 	cq->compl_ring = mz->addr;
 	cq->mz = mz;
-	idpf_reset_split_tx_complq(cq);
+	idpf_qc_split_tx_complq_reset(cq);
 
 	txq->complq = cq;
 
@@ -413,12 +413,12 @@ idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
 	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
 		tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
-	if (idpf_check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Free memory if needed. */
 	if (dev->data->tx_queues[queue_idx] != NULL) {
-		idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		idpf_qc_tx_queue_release(dev->data->tx_queues[queue_idx]);
 		dev->data->tx_queues[queue_idx] = NULL;
 	}
 
@@ -470,10 +470,10 @@ idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	if (!is_splitq) {
 		txq->tx_ring = mz->addr;
-		idpf_reset_single_tx_queue(txq);
+		idpf_qc_single_tx_queue_reset(txq);
 	} else {
 		txq->desc_ring = mz->addr;
-		idpf_reset_split_tx_descq(txq);
+		idpf_qc_split_tx_descq_reset(txq);
 
 		/* Setup tx completion queue if split model */
 		ret = idpf_tx_complq_setup(dev, txq, queue_idx,
@@ -516,7 +516,7 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
-	err = idpf_register_ts_mbuf(rxq);
+	err = idpf_qc_ts_mbuf_register(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
 					rx_queue_id);
@@ -525,7 +525,7 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	if (rxq->bufq1 == NULL) {
 		/* Single queue */
-		err = idpf_alloc_single_rxq_mbufs(rxq);
+		err = idpf_qc_single_rxq_mbufs_alloc(rxq);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
 			return err;
@@ -537,12 +537,12 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
 	} else {
 		/* Split queue */
-		err = idpf_alloc_split_rxq_mbufs(rxq->bufq1);
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
 			return err;
 		}
-		err = idpf_alloc_split_rxq_mbufs(rxq->bufq2);
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
 			return err;
@@ -664,11 +664,11 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	rxq = dev->data->rx_queues[rx_queue_id];
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		rxq->ops->release_mbufs(rxq);
-		idpf_reset_single_rx_queue(rxq);
+		idpf_qc_single_rx_queue_reset(rxq);
 	} else {
 		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
 		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
-		idpf_reset_split_rx_queue(rxq);
+		idpf_qc_split_rx_queue_reset(rxq);
 	}
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -695,10 +695,10 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	txq = dev->data->tx_queues[tx_queue_id];
 	txq->ops->release_mbufs(txq);
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
-		idpf_reset_single_tx_queue(txq);
+		idpf_qc_single_tx_queue_reset(txq);
 	} else {
-		idpf_reset_split_tx_descq(txq);
-		idpf_reset_split_tx_complq(txq->complq);
+		idpf_qc_split_tx_descq_reset(txq);
+		idpf_qc_split_tx_complq_reset(txq->complq);
 	}
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -708,13 +708,13 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 void
 idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
 {
-	idpf_rx_queue_release(dev->data->rx_queues[qid]);
+	idpf_qc_rx_queue_release(dev->data->rx_queues[qid]);
 }
 
 void
 idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
 {
-	idpf_tx_queue_release(dev->data->tx_queues[qid]);
+	idpf_qc_tx_queue_release(dev->data->tx_queues[qid]);
 }
 
 void
@@ -776,7 +776,7 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 		if (vport->rx_vec_allowed) {
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				(void)idpf_singleq_rx_vec_setup(rxq);
+				(void)idpf_qc_singleq_rx_vec_setup(rxq);
 			}
 #ifdef CC_AVX512_SUPPORT
 			if (vport->rx_use_avx512) {
@@ -835,7 +835,7 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 					txq = dev->data->tx_queues[i];
 					if (txq == NULL)
 						continue;
-					idpf_singleq_tx_vec_setup_avx512(txq);
+					idpf_qc_singleq_tx_vec_avx512_setup(txq);
 				}
 				dev->tx_pkt_burst = idpf_singleq_xmit_pkts_avx512;
 				dev->tx_pkt_prepare = idpf_prep_pkts;
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
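
The rename above keeps the small ops-table indirection intact: each queue carries a release_mbufs pointer, so the scalar and vector paths can plug in different mbuf-release routines while the stop/release code stays generic. A minimal sketch of the pattern (the example_* names are illustrative):

#include <idpf_common_rxtx.h>

/*
 * Sketch of the release-ops indirection seen in def_rxq_ops above:
 * the queue's ops table is set once at setup time, and generic
 * stop/release code calls through it without knowing whether the
 * queue uses the scalar or the vector sw_ring layout.
 */
static const struct idpf_rxq_ops example_scalar_rx_ops = {
	.release_mbufs = idpf_qc_rxq_mbufs_release,
};

static void
example_rxq_stop(struct idpf_rx_queue *rxq)
{
	rxq->ops->release_mbufs(rxq);	/* vector queues plug in their own */
}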

* [PATCH v7 18/19] common/idpf: refine API name for data path module
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (16 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 17/19] common/idpf: refine API name for queue config module beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06  5:46         ` [PATCH v7 19/19] common/idpf: refine API name for virtual channel functions beilei.xing
  2023-02-06 13:15         ` [PATCH v7 00/19] net/idpf: introduce idpf common module Zhang, Qi Z
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

Refine the API names of all data path functions.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c        | 20 ++++++------
 drivers/common/idpf/idpf_common_rxtx.h        | 32 +++++++++----------
 drivers/common/idpf/idpf_common_rxtx_avx512.c |  8 ++---
 drivers/common/idpf/version.map               | 15 +++++----
 drivers/net/idpf/idpf_rxtx.c                  | 22 ++++++-------
 5 files changed, 49 insertions(+), 48 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 86dadf9cd2..b1585208ec 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -618,8 +618,8 @@ idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
 }
 
 uint16_t
-idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		      uint16_t nb_pkts)
+idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
 {
 	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
 	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
@@ -850,8 +850,8 @@ idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
 }
 
 uint16_t
-idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		      uint16_t nb_pkts)
+idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
 {
 	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
 	volatile struct idpf_flex_tx_sched_desc *txr;
@@ -1024,8 +1024,8 @@ idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
 }
 
 uint16_t
-idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		       uint16_t nb_pkts)
+idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts)
 {
 	volatile union virtchnl2_rx_desc *rx_ring;
 	volatile union virtchnl2_rx_desc *rxdp;
@@ -1186,8 +1186,8 @@ idpf_xmit_cleanup(struct idpf_tx_queue *txq)
 
 /* TX function */
 uint16_t
-idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		       uint16_t nb_pkts)
+idpf_dp_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			  uint16_t nb_pkts)
 {
 	volatile struct idpf_flex_tx_desc *txd;
 	volatile struct idpf_flex_tx_desc *txr;
@@ -1350,8 +1350,8 @@ idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 /* TX prep functions */
 uint16_t
-idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
-	       uint16_t nb_pkts)
+idpf_dp_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+		  uint16_t nb_pkts)
 {
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
 	int ret;
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 08081ad30a..7fd3e5259d 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -248,31 +248,31 @@ int idpf_qc_single_rxq_mbufs_alloc(struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_qc_split_rxq_mbufs_alloc(struct idpf_rx_queue *rxq);
 __rte_internal
-uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-			       uint16_t nb_pkts);
+uint16_t idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+				  uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			       uint16_t nb_pkts);
+uint16_t idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+				  uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-				uint16_t nb_pkts);
+uint16_t idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+				   uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-				uint16_t nb_pkts);
+uint16_t idpf_dp_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+				   uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			uint16_t nb_pkts);
+uint16_t idpf_dp_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			   uint16_t nb_pkts);
 __rte_internal
 int idpf_qc_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_qc_singleq_tx_vec_avx512_setup(struct idpf_tx_queue *txq);
 __rte_internal
-uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue,
-				       struct rte_mbuf **rx_pkts,
-				       uint16_t nb_pkts);
+uint16_t idpf_dp_singleq_recv_pkts_avx512(void *rx_queue,
+					  struct rte_mbuf **rx_pkts,
+					  uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue,
-				       struct rte_mbuf **tx_pkts,
-				       uint16_t nb_pkts);
+uint16_t idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue,
+					  struct rte_mbuf **tx_pkts,
+					  uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
index 9dd63fefab..f41c577dcf 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -533,8 +533,8 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
  * - nb_pkts < IDPF_DESCS_PER_LOOP, just return no packet
  */
 uint16_t
-idpf_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
-			  uint16_t nb_pkts)
+idpf_dp_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
+				 uint16_t nb_pkts)
 {
 	return _idpf_singleq_recv_raw_pkts_avx512(rx_queue, rx_pkts, nb_pkts);
 }
@@ -819,8 +819,8 @@ idpf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
 }
 
 uint16_t
-idpf_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
-			     uint16_t nb_pkts)
+idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
+				 uint16_t nb_pkts)
 {
 	return idpf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts);
 }
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 2ff152a353..e37a40771b 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -4,6 +4,14 @@ INTERNAL {
 	idpf_adapter_deinit;
 	idpf_adapter_init;
 
+	idpf_dp_prep_pkts;
+	idpf_dp_singleq_recv_pkts;
+	idpf_dp_singleq_recv_pkts_avx512;
+	idpf_dp_singleq_xmit_pkts;
+	idpf_dp_singleq_xmit_pkts_avx512;
+	idpf_dp_splitq_recv_pkts;
+	idpf_dp_splitq_xmit_pkts;
+
 	idpf_qc_rx_thresh_check;
 	idpf_qc_rx_queue_release;
 	idpf_qc_rxq_mbufs_release;
@@ -31,13 +39,6 @@ INTERNAL {
 	idpf_vport_rss_config;
 
 	idpf_execute_vc_cmd;
-	idpf_prep_pkts;
-	idpf_singleq_recv_pkts;
-	idpf_singleq_recv_pkts_avx512;
-	idpf_singleq_xmit_pkts;
-	idpf_singleq_xmit_pkts_avx512;
-	idpf_splitq_recv_pkts;
-	idpf_splitq_xmit_pkts;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index ec75d6f69e..41e91b16b6 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -771,7 +771,7 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
 		if (vport->rx_vec_allowed) {
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -780,19 +780,19 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 			}
 #ifdef CC_AVX512_SUPPORT
 			if (vport->rx_use_avx512) {
-				dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
+				dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts_avx512;
 				return;
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
 
-		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
 #else
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
-		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	else
-		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 #endif /* RTE_ARCH_X86 */
 }
 
@@ -824,8 +824,8 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 #endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
-		dev->tx_pkt_prepare = idpf_prep_pkts;
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
 #ifdef RTE_ARCH_X86
 		if (vport->tx_vec_allowed) {
@@ -837,14 +837,14 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 						continue;
 					idpf_qc_singleq_tx_vec_avx512_setup(txq);
 				}
-				dev->tx_pkt_burst = idpf_singleq_xmit_pkts_avx512;
-				dev->tx_pkt_prepare = idpf_prep_pkts;
+				dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 				return;
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
 #endif /* RTE_ARCH_X86 */
-		dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
-		dev->tx_pkt_prepare = idpf_prep_pkts;
+		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	}
 }
-- 
2.26.2


^ permalink raw reply	[flat|nested] 79+ messages in thread
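
idpf_dp_prep_pkts() (idpf_prep_pkts() before this rename) is where the per-mbuf sanity checks live. A condensed sketch of the checks it applies, using the constants from idpf_common_rxtx.h (the wrapper function below is illustrative):

#include <errno.h>
#include <rte_mbuf.h>
#include <idpf_common_rxtx.h>

/*
 * Sketch of the per-packet validation in idpf_dp_prep_pkts():
 * non-TSO packets are capped at IDPF_TX_MAX_MTU_SEG segments; TSO
 * packets must keep tso_segsz in [IDPF_MIN_TSO_MSS, IDPF_MAX_TSO_MSS]
 * and the frame under IDPF_MAX_TSO_FRAME_SIZE; only offload flags in
 * IDPF_TX_OFFLOAD_MASK are accepted.
 */
static int
example_validate_tx_mbuf(const struct rte_mbuf *m)
{
	if ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0) {
		if (m->nb_segs > IDPF_TX_MAX_MTU_SEG)
			return -EINVAL;
	} else if (m->tso_segsz < IDPF_MIN_TSO_MSS ||
		   m->tso_segsz > IDPF_MAX_TSO_MSS ||
		   m->pkt_len > IDPF_MAX_TSO_FRAME_SIZE) {
		/* out-of-range MSS values are treated as malicious */
		return -EINVAL;
	}
	if ((m->ol_flags & IDPF_TX_OFFLOAD_NOTSUP_MASK) != 0)
		return -ENOTSUP;
	return 0;
}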

* [PATCH v7 19/19] common/idpf: refine API name for virtual channel functions
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (17 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 18/19] common/idpf: refine API name for data path module beilei.xing
@ 2023-02-06  5:46         ` beilei.xing
  2023-02-06 13:15         ` [PATCH v7 00/19] net/idpf: introduce idpf common module Zhang, Qi Z
  19 siblings, 0 replies; 79+ messages in thread
From: beilei.xing @ 2023-02-06  5:46 UTC (permalink / raw)
  To: jingjing.wu; +Cc: dev, qi.z.zhang, Beilei Xing

From: Beilei Xing <beilei.xing@intel.com>

This patch refines the API names of all virtual channel functions.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 24 ++++----
 drivers/common/idpf/idpf_common_virtchnl.c | 70 +++++++++++-----------
 drivers/common/idpf/idpf_common_virtchnl.h | 36 +++++------
 drivers/common/idpf/version.map            | 38 ++++++------
 drivers/net/idpf/idpf_ethdev.c             | 10 ++--
 drivers/net/idpf/idpf_rxtx.c               | 12 ++--
 6 files changed, 95 insertions(+), 95 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index e67bd616dc..48b3e3c0dd 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -104,7 +104,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 	uint16_t ptype_recvd = 0;
 	int ret;
 
-	ret = idpf_vc_query_ptype_info(adapter);
+	ret = idpf_vc_ptype_info_query(adapter);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Fail to query packet type information");
 		return ret;
@@ -115,7 +115,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 			return -ENOMEM;
 
 	while (ptype_recvd < IDPF_MAX_PKT_TYPE) {
-		ret = idpf_vc_read_one_msg(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
+		ret = idpf_vc_one_msg_read(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
 					   IDPF_DFLT_MBX_BUF_SIZE, (uint8_t *)ptype_info);
 		if (ret != 0) {
 			DRV_LOG(ERR, "Fail to get packet type information");
@@ -333,13 +333,13 @@ idpf_adapter_init(struct idpf_adapter *adapter)
 		goto err_mbx_resp;
 	}
 
-	ret = idpf_vc_check_api_version(adapter);
+	ret = idpf_vc_api_version_check(adapter);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to check api version");
 		goto err_check_api;
 	}
 
-	ret = idpf_vc_get_caps(adapter);
+	ret = idpf_vc_caps_get(adapter);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to get capabilities");
 		goto err_check_api;
@@ -382,7 +382,7 @@ idpf_vport_init(struct idpf_vport *vport,
 	struct virtchnl2_create_vport *vport_info;
 	int i, type, ret;
 
-	ret = idpf_vc_create_vport(vport, create_vport_info);
+	ret = idpf_vc_vport_create(vport, create_vport_info);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to create vport.");
 		goto err_create_vport;
@@ -483,7 +483,7 @@ idpf_vport_init(struct idpf_vport *vport,
 	rte_free(vport->rss_key);
 	vport->rss_key = NULL;
 err_rss_key:
-	idpf_vc_destroy_vport(vport);
+	idpf_vc_vport_destroy(vport);
 err_create_vport:
 	return ret;
 }
@@ -500,7 +500,7 @@ idpf_vport_deinit(struct idpf_vport *vport)
 
 	vport->dev_data = NULL;
 
-	idpf_vc_destroy_vport(vport);
+	idpf_vc_vport_destroy(vport);
 
 	return 0;
 }
@@ -509,19 +509,19 @@ idpf_vport_rss_config(struct idpf_vport *vport)
 {
 	int ret;
 
-	ret = idpf_vc_set_rss_key(vport);
+	ret = idpf_vc_rss_key_set(vport);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to configure RSS key");
 		return ret;
 	}
 
-	ret = idpf_vc_set_rss_lut(vport);
+	ret = idpf_vc_rss_lut_set(vport);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to configure RSS lut");
 		return ret;
 	}
 
-	ret = idpf_vc_set_rss_hash(vport);
+	ret = idpf_vc_rss_hash_set(vport);
 	if (ret != 0) {
 		DRV_LOG(ERR, "Failed to configure RSS hash");
 		return ret;
@@ -589,7 +589,7 @@ idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 	}
 	vport->qv_map = qv_map;
 
-	ret = idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, true);
+	ret = idpf_vc_irq_map_unmap_config(vport, nb_rx_queues, true);
 	if (ret != 0) {
 		DRV_LOG(ERR, "config interrupt mapping failed");
 		goto config_irq_map_err;
@@ -608,7 +608,7 @@ idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 int
 idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 {
-	idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, false);
+	idpf_vc_irq_map_unmap_config(vport, nb_rx_queues, false);
 
 	rte_free(vport->qv_map);
 	vport->qv_map = NULL;
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 8ccfb5989e..31fadefbd3 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -159,7 +159,7 @@ idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
 #define ASQ_DELAY_MS  10
 
 int
-idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
+idpf_vc_one_msg_read(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
 		     uint8_t *buf)
 {
 	int err = 0;
@@ -183,7 +183,7 @@ idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_le
 }
 
 int
-idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
+idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 {
 	int err = 0;
 	int i = 0;
@@ -218,7 +218,7 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
 		/* for init virtchnl ops, need to poll the response */
-		err = idpf_vc_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
+		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
 		break;
 	case VIRTCHNL2_OP_GET_PTYPE_INFO:
@@ -251,7 +251,7 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 }
 
 int
-idpf_vc_check_api_version(struct idpf_adapter *adapter)
+idpf_vc_api_version_check(struct idpf_adapter *adapter)
 {
 	struct virtchnl2_version_info version, *pver;
 	struct idpf_cmd_info args;
@@ -267,7 +267,7 @@ idpf_vc_check_api_version(struct idpf_adapter *adapter)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0) {
 		DRV_LOG(ERR,
 			"Failed to execute command of VIRTCHNL_OP_VERSION");
@@ -291,7 +291,7 @@ idpf_vc_check_api_version(struct idpf_adapter *adapter)
 }
 
 int
-idpf_vc_get_caps(struct idpf_adapter *adapter)
+idpf_vc_caps_get(struct idpf_adapter *adapter)
 {
 	struct virtchnl2_get_capabilities caps_msg;
 	struct idpf_cmd_info args;
@@ -341,7 +341,7 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0) {
 		DRV_LOG(ERR,
 			"Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
@@ -354,7 +354,7 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 }
 
 int
-idpf_vc_create_vport(struct idpf_vport *vport,
+idpf_vc_vport_create(struct idpf_vport *vport,
 		     struct virtchnl2_create_vport *create_vport_info)
 {
 	struct idpf_adapter *adapter = vport->adapter;
@@ -378,7 +378,7 @@ idpf_vc_create_vport(struct idpf_vport *vport,
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0) {
 		DRV_LOG(ERR,
 			"Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
@@ -390,7 +390,7 @@ idpf_vc_create_vport(struct idpf_vport *vport,
 }
 
 int
-idpf_vc_destroy_vport(struct idpf_vport *vport)
+idpf_vc_vport_destroy(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_vport vc_vport;
@@ -406,7 +406,7 @@ idpf_vc_destroy_vport(struct idpf_vport *vport)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
 
@@ -414,7 +414,7 @@ idpf_vc_destroy_vport(struct idpf_vport *vport)
 }
 
 int
-idpf_vc_set_rss_key(struct idpf_vport *vport)
+idpf_vc_rss_key_set(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_rss_key *rss_key;
@@ -439,7 +439,7 @@ idpf_vc_set_rss_key(struct idpf_vport *vport)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
 
@@ -448,7 +448,7 @@ idpf_vc_set_rss_key(struct idpf_vport *vport)
 }
 
 int
-idpf_vc_set_rss_lut(struct idpf_vport *vport)
+idpf_vc_rss_lut_set(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_rss_lut *rss_lut;
@@ -473,7 +473,7 @@ idpf_vc_set_rss_lut(struct idpf_vport *vport)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
 
@@ -482,7 +482,7 @@ idpf_vc_set_rss_lut(struct idpf_vport *vport)
 }
 
 int
-idpf_vc_set_rss_hash(struct idpf_vport *vport)
+idpf_vc_rss_hash_set(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_rss_hash rss_hash;
@@ -500,7 +500,7 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
 
@@ -508,7 +508,7 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 }
 
 int
-idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
+idpf_vc_irq_map_unmap_config(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_queue_vector_maps *map_info;
@@ -539,7 +539,7 @@ idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map
 	args.in_args_size = len;
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUE_VECTOR",
 			map ? "MAP" : "UNMAP");
@@ -549,7 +549,7 @@ idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map
 }
 
 int
-idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
+idpf_vc_vectors_alloc(struct idpf_vport *vport, uint16_t num_vectors)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_alloc_vectors *alloc_vec;
@@ -569,7 +569,7 @@ idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
 	args.in_args_size = len;
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
 
@@ -579,7 +579,7 @@ idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
 }
 
 int
-idpf_vc_dealloc_vectors(struct idpf_vport *vport)
+idpf_vc_vectors_dealloc(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_alloc_vectors *alloc_vec;
@@ -598,7 +598,7 @@ idpf_vc_dealloc_vectors(struct idpf_vport *vport)
 	args.in_args_size = len;
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_DEALLOC_VECTORS");
 
@@ -634,7 +634,7 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 	args.in_args_size = len;
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
 			on ? "ENABLE" : "DISABLE");
@@ -644,7 +644,7 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 }
 
 int
-idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
+idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
 		     bool rx, bool on)
 {
 	uint32_t type;
@@ -688,7 +688,7 @@ idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
 
 #define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
 int
-idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
+idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_del_ena_dis_queues *queue_select;
@@ -746,7 +746,7 @@ idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
 	args.in_args_size = len;
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
 			enable ? "ENABLE" : "DISABLE");
@@ -756,7 +756,7 @@ idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
 }
 
 int
-idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
+idpf_vc_vport_ena_dis(struct idpf_vport *vport, bool enable)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_vport vc_vport;
@@ -771,7 +771,7 @@ idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0) {
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
 			enable ? "ENABLE" : "DISABLE");
@@ -781,7 +781,7 @@ idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
 }
 
 int
-idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
+idpf_vc_ptype_info_query(struct idpf_adapter *adapter)
 {
 	struct virtchnl2_get_ptype_info *ptype_info;
 	struct idpf_cmd_info args;
@@ -798,7 +798,7 @@ idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
 	args.in_args = (uint8_t *)ptype_info;
 	args.in_args_size = len;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_PTYPE_INFO");
 
@@ -808,7 +808,7 @@ idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
 
 #define IDPF_RX_BUF_STRIDE		64
 int
-idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
+idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
@@ -887,7 +887,7 @@ idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	rte_free(vc_rxqs);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
@@ -896,7 +896,7 @@ idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 }
 
 int
-idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
+idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 {
 	struct idpf_adapter *adapter = vport->adapter;
 	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
@@ -958,7 +958,7 @@ idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 	args.out_buffer = adapter->mbx_resp;
 	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
 
-	err = idpf_execute_vc_cmd(adapter, &args);
+	err = idpf_vc_cmd_execute(adapter, &args);
 	rte_free(vc_txqs);
 	if (err != 0)
 		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index bbe31700be..c105f02836 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -9,44 +9,44 @@
 #include <idpf_common_rxtx.h>
 
 __rte_internal
-int idpf_vc_check_api_version(struct idpf_adapter *adapter);
+int idpf_vc_api_version_check(struct idpf_adapter *adapter);
 __rte_internal
-int idpf_vc_get_caps(struct idpf_adapter *adapter);
+int idpf_vc_caps_get(struct idpf_adapter *adapter);
 __rte_internal
-int idpf_vc_create_vport(struct idpf_vport *vport,
+int idpf_vc_vport_create(struct idpf_vport *vport,
 			 struct virtchnl2_create_vport *vport_info);
 __rte_internal
-int idpf_vc_destroy_vport(struct idpf_vport *vport);
+int idpf_vc_vport_destroy(struct idpf_vport *vport);
 __rte_internal
-int idpf_vc_set_rss_key(struct idpf_vport *vport);
+int idpf_vc_rss_key_set(struct idpf_vport *vport);
 __rte_internal
-int idpf_vc_set_rss_lut(struct idpf_vport *vport);
+int idpf_vc_rss_lut_set(struct idpf_vport *vport);
 __rte_internal
-int idpf_vc_set_rss_hash(struct idpf_vport *vport);
+int idpf_vc_rss_hash_set(struct idpf_vport *vport);
 __rte_internal
-int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
+int idpf_vc_irq_map_unmap_config(struct idpf_vport *vport,
 				 uint16_t nb_rxq, bool map);
 __rte_internal
-int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
+int idpf_vc_cmd_execute(struct idpf_adapter *adapter,
 			struct idpf_cmd_info *args);
 __rte_internal
-int idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
+int idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
 			 bool rx, bool on);
 __rte_internal
-int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
+int idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable);
 __rte_internal
-int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
+int idpf_vc_vport_ena_dis(struct idpf_vport *vport, bool enable);
 __rte_internal
-int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
+int idpf_vc_vectors_alloc(struct idpf_vport *vport, uint16_t num_vectors);
 __rte_internal
-int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
+int idpf_vc_vectors_dealloc(struct idpf_vport *vport);
 __rte_internal
-int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
+int idpf_vc_ptype_info_query(struct idpf_adapter *adapter);
 __rte_internal
-int idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
+int idpf_vc_one_msg_read(struct idpf_adapter *adapter, uint32_t ops,
 			 uint16_t buf_len, uint8_t *buf);
 __rte_internal
-int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
+int idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
 __rte_internal
-int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index e37a40771b..1c35761611 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -31,6 +31,25 @@ INTERNAL {
 	idpf_qc_tx_thresh_check;
 	idpf_qc_txq_mbufs_release;
 
+	idpf_vc_api_version_check;
+	idpf_vc_caps_get;
+	idpf_vc_cmd_execute;
+	idpf_vc_irq_map_unmap_config;
+	idpf_vc_one_msg_read;
+	idpf_vc_ptype_info_query;
+	idpf_vc_queue_switch;
+	idpf_vc_queues_ena_dis;
+	idpf_vc_rss_hash_set;
+	idpf_vc_rss_key_set;
+	idpf_vc_rss_lut_set;
+	idpf_vc_rxq_config;
+	idpf_vc_txq_config;
+	idpf_vc_vectors_alloc;
+	idpf_vc_vectors_dealloc;
+	idpf_vc_vport_create;
+	idpf_vc_vport_destroy;
+	idpf_vc_vport_ena_dis;
+
 	idpf_vport_deinit;
 	idpf_vport_info_init;
 	idpf_vport_init;
@@ -38,24 +57,5 @@ INTERNAL {
 	idpf_vport_irq_unmap_config;
 	idpf_vport_rss_config;
 
-	idpf_execute_vc_cmd;
-	idpf_vc_alloc_vectors;
-	idpf_vc_check_api_version;
-	idpf_vc_config_irq_map_unmap;
-	idpf_vc_config_rxq;
-	idpf_vc_config_txq;
-	idpf_vc_create_vport;
-	idpf_vc_dealloc_vectors;
-	idpf_vc_destroy_vport;
-	idpf_vc_ena_dis_queues;
-	idpf_vc_ena_dis_vport;
-	idpf_vc_get_caps;
-	idpf_vc_query_ptype_info;
-	idpf_vc_read_one_msg;
-	idpf_vc_set_rss_hash;
-	idpf_vc_set_rss_key;
-	idpf_vc_set_rss_lut;
-	idpf_vc_switch_queue;
-
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index b324c0dc83..33f5e90743 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -299,7 +299,7 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vec;
 	}
 
-	ret = idpf_vc_alloc_vectors(vport, req_vecs_num);
+	ret = idpf_vc_vectors_alloc(vport, req_vecs_num);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to allocate interrupt vectors");
 		goto err_vec;
@@ -321,7 +321,7 @@ idpf_dev_start(struct rte_eth_dev *dev)
 	idpf_set_rx_function(dev);
 	idpf_set_tx_function(dev);
 
-	ret = idpf_vc_ena_dis_vport(vport, true);
+	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
 		goto err_vport;
@@ -336,7 +336,7 @@ idpf_dev_start(struct rte_eth_dev *dev)
 err_startq:
 	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
 err_irq:
-	idpf_vc_dealloc_vectors(vport);
+	idpf_vc_vectors_dealloc(vport);
 err_vec:
 	return ret;
 }
@@ -349,13 +349,13 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 	if (vport->stopped == 1)
 		return 0;
 
-	idpf_vc_ena_dis_vport(vport, false);
+	idpf_vc_vport_ena_dis(vport, false);
 
 	idpf_stop_queues(dev);
 
 	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
 
-	idpf_vc_dealloc_vectors(vport);
+	idpf_vc_vectors_dealloc(vport);
 
 	vport->stopped = 1;
 
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 41e91b16b6..f41783daea 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -566,7 +566,7 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		dev->data->rx_queues[rx_queue_id];
 	int err = 0;
 
-	err = idpf_vc_config_rxq(vport, rxq);
+	err = idpf_vc_rxq_config(vport, rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
 		return err;
@@ -580,7 +580,7 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_switch_queue(vport, rx_queue_id, true, true);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -617,7 +617,7 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 		dev->data->tx_queues[tx_queue_id];
 	int err = 0;
 
-	err = idpf_vc_config_txq(vport, txq);
+	err = idpf_vc_txq_config(vport, txq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
 		return err;
@@ -631,7 +631,7 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_switch_queue(vport, tx_queue_id, false, true);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -654,7 +654,7 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (rx_queue_id >= dev->data->nb_rx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_switch_queue(vport, rx_queue_id, true, false);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -685,7 +685,7 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (tx_queue_id >= dev->data->nb_tx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_switch_queue(vport, tx_queue_id, false, false);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
-- 
2.26.2


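Taken as a set, these renames settle on an object-first convention, idpf_vc_<object>_<action>, which keeps related symbols adjacent in version.map. The sketch below condenses the idpf_dev_start() hunk above into a caller-side view of the renamed API; the req_vecs_num parameter and the trimmed error handling are simplifications, not the driver's exact code.

    /* Condensed start flow under the renamed virtchnl API; the error
     * unwinding is simplified relative to the driver's err_* labels.
     */
    static int
    dev_start_sketch(struct rte_eth_dev *dev, uint16_t req_vecs_num)
    {
            struct idpf_vport *vport = dev->data->dev_private;
            int ret;

            ret = idpf_vc_vectors_alloc(vport, req_vecs_num);
            if (ret != 0)
                    return ret;

            ret = idpf_vport_irq_map_config(vport, dev->data->nb_rx_queues);
            if (ret != 0)
                    goto err_irq;

            ret = idpf_vc_vport_ena_dis(vport, true); /* enable vport */
            if (ret != 0)
                    goto err_vport;

            return 0;

    err_vport:
            idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
    err_irq:
            idpf_vc_vectors_dealloc(vport);
            return ret;
    }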

* RE: [PATCH v6 00/19] net/idpf: introduce idpf common model
  2023-02-06  2:58       ` [PATCH v6 00/19] net/idpf: introduce idpf common model Zhang, Qi Z
@ 2023-02-06  6:16         ` Xing, Beilei
  0 siblings, 0 replies; 79+ messages in thread
From: Xing, Beilei @ 2023-02-06  6:16 UTC (permalink / raw)
  To: Zhang, Qi Z, Wu, Jingjing; +Cc: dev



> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Monday, February 6, 2023 10:59 AM
> To: Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v6 00/19] net/idpf: introduce idpf common model
> 
> 
> 
> > -----Original Message-----
> > From: Xing, Beilei <beilei.xing@intel.com>
> > Sent: Friday, February 3, 2023 5:43 PM
> > To: Wu, Jingjing <jingjing.wu@intel.com>
> > Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> > <beilei.xing@intel.com>
> > Subject: [PATCH v6 00/19] net/idpf: introduce idpf common model
> >
> > From: Beilei Xing <beilei.xing@intel.com>
> >
> > Refactor idpf pmd by introducing idpf common module, which will also
> > be consumed by a new PMD - CPFL (Control Plane Function Library) PMD.
> >
> > v2 changes:
> >  - Refine irq map/unmap functions.
> >  - Fix cross compile issue.
> > v3 changes:
> >  - Embed vport_info field into the vport structure.
> >  - Refine APIs' name and order in version.map.
> >  - Refine commit log.
> > v4 changes:
> >  - Refine commit log.
> > v5 changes:
> >  - Refine version.map.
> >  - Fix typo.
> >  - Return error log.
> > v6 changes:
> >  - Refine API name in common module.
> >
> > Beilei Xing (19):
> >   common/idpf: add adapter structure
> >   common/idpf: add vport structure
> >   common/idpf: add virtual channel functions
> >   common/idpf: introduce adapter init and deinit
> >   common/idpf: add vport init/deinit
> >   common/idpf: add config RSS
> >   common/idpf: add irq map/unmap
> >   common/idpf: support get packet type
> >   common/idpf: add vport info initialization
> >   common/idpf: add vector flags in vport
> >   common/idpf: add rxq and txq struct
> >   common/idpf: add helper functions for queue setup and release
> >   common/idpf: add Rx and Tx data path
> >   common/idpf: add vec queue setup
> >   common/idpf: add avx512 for single queue model
> >   common/idpf: refine API name for vport functions
> >   common/idpf: refine API name for queue config module
> >   common/idpf: refine API name for data path module
> >   common/idpf: refine API name for virtual channel functions
> >
> >  drivers/common/idpf/base/idpf_controlq_api.h  |    6 -
> >  drivers/common/idpf/base/meson.build          |    2 +-
> >  drivers/common/idpf/idpf_common_device.c      |  655 +++++
> >  drivers/common/idpf/idpf_common_device.h      |  195 ++
> >  drivers/common/idpf/idpf_common_logs.h        |   47 +
> >  drivers/common/idpf/idpf_common_rxtx.c        | 1458 ++++++++++++
> >  drivers/common/idpf/idpf_common_rxtx.h        |  278 +++
> >  .../idpf/idpf_common_rxtx_avx512.c}           |   24 +-
> >  .../idpf/idpf_common_virtchnl.c}              |  945 ++------
> >  drivers/common/idpf/idpf_common_virtchnl.h    |   52 +
> >  drivers/common/idpf/meson.build               |   38 +
> >  drivers/common/idpf/version.map               |   61 +-
> >  drivers/net/idpf/idpf_ethdev.c                |  552 +----
> >  drivers/net/idpf/idpf_ethdev.h                |  194 +-
> >  drivers/net/idpf/idpf_logs.h                  |   24 -
> >  drivers/net/idpf/idpf_rxtx.c                  | 2107 +++--------------
> >  drivers/net/idpf/idpf_rxtx.h                  |  253 +-
> >  drivers/net/idpf/meson.build                  |   18 -
> >  18 files changed, 3442 insertions(+), 3467 deletions(-)  create mode
> > 100644 drivers/common/idpf/idpf_common_device.c
> >  create mode 100644 drivers/common/idpf/idpf_common_device.h
> >  create mode 100644 drivers/common/idpf/idpf_common_logs.h
> >  create mode 100644 drivers/common/idpf/idpf_common_rxtx.c
> >  create mode 100644 drivers/common/idpf/idpf_common_rxtx.h
> >  rename drivers/{net/idpf/idpf_rxtx_vec_avx512.c =>
> > common/idpf/idpf_common_rxtx_avx512.c} (97%)  rename
> > drivers/{net/idpf/idpf_vchnl.c => common/idpf/idpf_common_virtchnl.c}
> > (52%)  create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h
> >
> > --
> > 2.26.2
> 
> Overall looks good to me, just a couple of things need fixing:
> 
> 1. Fix the copyright date to 2023 (see the header sketch after this
> message).
> 2. Fix some meson build warnings; you can use devtools/check-meson.py to
> check for them.

Yes, updated in v7.

> 
> 
> 


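The first review point concerns the license banner on the newly created common/idpf files. The merged headers aren't quoted in this thread, but assuming the usual DPDK SPDX convention, the corrected banner would read:

    /* SPDX-License-Identifier: BSD-3-Clause
     * Copyright(c) 2023 Intel Corporation
     */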

* RE: [PATCH v7 00/19] net/idpf: introduce idpf common model
  2023-02-06  5:45       ` [PATCH v7 " beilei.xing
                           ` (18 preceding siblings ...)
  2023-02-06  5:46         ` [PATCH v7 19/19] common/idpf: refine API name for virtual channel functions beilei.xing
@ 2023-02-06 13:15         ` Zhang, Qi Z
  19 siblings, 0 replies; 79+ messages in thread
From: Zhang, Qi Z @ 2023-02-06 13:15 UTC (permalink / raw)
  To: Xing, Beilei, Wu, Jingjing; +Cc: dev



> -----Original Message-----
> From: Xing, Beilei <beilei.xing@intel.com>
> Sent: Monday, February 6, 2023 1:46 PM
> To: Wu, Jingjing <jingjing.wu@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>
> Subject: [PATCH v7 00/19] net/idpf: introduce idpf common model
> 
> From: Beilei Xing <beilei.xing@intel.com>
> 
> Refactor idpf pmd by introducing idpf common module, which will also be
> consumed by a new PMD - CPFL (Control Plane Function Library) PMD.
> 
> v2 changes:
>  - Refine irq map/unmap functions.
>  - Fix cross compile issue.
> v3 changes:
>  - Embed vport_info field into the vport structure.
>  - Refine APIs' name and order in version.map.
>  - Refine commit log.
> v4 changes:
>  - Refine commit log.
> v5 changes:
>  - Refine version.map.
>  - Fix typo.
>  - Return error log.
> v6 changes:
>  - Refine API name in common module.
> v7 changes:
>  - Change new files' copyright date to 2023.
>  - Correct format for meson.build.
>  - Change rte_atomic usages to compiler atomic built-ins (see the
>    sketch after this message).
> 
> Beilei Xing (19):
>   common/idpf: add adapter structure
>   common/idpf: add vport structure
>   common/idpf: add virtual channel functions
>   common/idpf: introduce adapter init and deinit
>   common/idpf: add vport init/deinit
>   common/idpf: add config RSS
>   common/idpf: add irq map/unmap
>   common/idpf: support get packet type
>   common/idpf: add vport info initialization
>   common/idpf: add vector flags in vport
>   common/idpf: add rxq and txq struct
>   common/idpf: add helper functions for queue setup and release
>   common/idpf: add Rx and Tx data path
>   common/idpf: add vec queue setup
>   common/idpf: add avx512 for single queue model
>   common/idpf: refine API name for vport functions
>   common/idpf: refine API name for queue config module
>   common/idpf: refine API name for data path module
>   common/idpf: refine API name for virtual channel functions
> 
>  drivers/common/idpf/base/idpf_controlq_api.h  |    6 -
>  drivers/common/idpf/base/meson.build          |    2 +-
>  drivers/common/idpf/idpf_common_device.c      |  655 +++++
>  drivers/common/idpf/idpf_common_device.h      |  195 ++
>  drivers/common/idpf/idpf_common_logs.h        |   47 +
>  drivers/common/idpf/idpf_common_rxtx.c        | 1458 ++++++++++++
>  drivers/common/idpf/idpf_common_rxtx.h        |  278 +++
>  .../idpf/idpf_common_rxtx_avx512.c}           |   26 +-
>  .../idpf/idpf_common_virtchnl.c}              |  947 ++------
>  drivers/common/idpf/idpf_common_virtchnl.h    |   52 +
>  drivers/common/idpf/meson.build               |   35 +
>  drivers/common/idpf/version.map               |   61 +-
>  drivers/net/idpf/idpf_ethdev.c                |  552 +----
>  drivers/net/idpf/idpf_ethdev.h                |  194 +-
>  drivers/net/idpf/idpf_logs.h                  |   24 -
>  drivers/net/idpf/idpf_rxtx.c                  | 2107 +++--------------
>  drivers/net/idpf/idpf_rxtx.h                  |  253 +-
>  drivers/net/idpf/meson.build                  |   18 -
>  18 files changed, 3441 insertions(+), 3469 deletions(-)  create mode 100644
> drivers/common/idpf/idpf_common_device.c
>  create mode 100644 drivers/common/idpf/idpf_common_device.h
>  create mode 100644 drivers/common/idpf/idpf_common_logs.h
>  create mode 100644 drivers/common/idpf/idpf_common_rxtx.c
>  create mode 100644 drivers/common/idpf/idpf_common_rxtx.h
>  rename drivers/{net/idpf/idpf_rxtx_vec_avx512.c =>
> common/idpf/idpf_common_rxtx_avx512.c} (97%)  rename
> drivers/{net/idpf/idpf_vchnl.c => common/idpf/idpf_common_virtchnl.c}
> (51%)  create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h
> 
> --
> 2.26.2

Acked-by: Qi Zhang <qi.z.zhang@intel.com>

Applied to dpdk-next-net-intel.

Thanks
Qi

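On the third v7 change, moving from rte_atomic helpers to compiler atomic built-ins follows the broader DPDK deprecation of the rte_atomic API. A before/after sketch is below; the pending field and the struct name are made up for illustration, not taken from this series.

    #include <stdint.h>

    struct adapter_sketch {
            uint32_t pending; /* was an rte_atomic32_t in the old style */
    };

    static void
    atomics_sketch(struct adapter_sketch *adapter)
    {
            /* Old style, via the rte_atomic helpers:
             *   rte_atomic32_set(&adapter->pending, 1);
             *   ... rte_atomic32_read(&adapter->pending) ...
             */

            /* v7 style: plain integer field plus compiler built-ins. */
            __atomic_store_n(&adapter->pending, 1, __ATOMIC_RELAXED);
            if (__atomic_load_n(&adapter->pending, __ATOMIC_RELAXED) == 1)
                    __atomic_store_n(&adapter->pending, 0, __ATOMIC_RELAXED);
    }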

end of thread, other threads:[~2023-02-06 13:15 UTC | newest]

Thread overview: 79+ messages
     [not found] <https://patches.dpdk.org/project/dpdk/cover/20230117072626.93796-1-beilei.xing@intel.com/>
2023-01-17  8:06 ` [PATCH v4 00/15] net/idpf: introduce idpf common model beilei.xing
2023-01-17  8:06   ` [PATCH v4 01/15] common/idpf: add adapter structure beilei.xing
2023-01-17  8:06   ` [PATCH v4 02/15] common/idpf: add vport structure beilei.xing
2023-01-17  8:06   ` [PATCH v4 03/15] common/idpf: add virtual channel functions beilei.xing
2023-01-18  4:00     ` Zhang, Qi Z
2023-01-18  4:10       ` Zhang, Qi Z
2023-01-17  8:06   ` [PATCH v4 04/15] common/idpf: introduce adapter init and deinit beilei.xing
2023-01-17  8:06   ` [PATCH v4 05/15] common/idpf: add vport init/deinit beilei.xing
2023-01-17  8:06   ` [PATCH v4 06/15] common/idpf: add config RSS beilei.xing
2023-01-17  8:06   ` [PATCH v4 07/15] common/idpf: add irq map/unmap beilei.xing
2023-01-31  8:11     ` Wu, Jingjing
2023-01-17  8:06   ` [PATCH v4 08/15] common/idpf: support get packet type beilei.xing
2023-01-17  8:06   ` [PATCH v4 09/15] common/idpf: add vport info initialization beilei.xing
2023-01-31  8:24     ` Wu, Jingjing
2023-01-17  8:06   ` [PATCH v4 10/15] common/idpf: add vector flags in vport beilei.xing
2023-01-17  8:06   ` [PATCH v4 11/15] common/idpf: add rxq and txq struct beilei.xing
2023-01-17  8:06   ` [PATCH v4 12/15] common/idpf: add helper functions for queue setup and release beilei.xing
2023-01-17  8:06   ` [PATCH v4 13/15] common/idpf: add Rx and Tx data path beilei.xing
2023-01-17  8:06   ` [PATCH v4 14/15] common/idpf: add vec queue setup beilei.xing
2023-01-17  8:06   ` [PATCH v4 15/15] common/idpf: add avx512 for single queue model beilei.xing
2023-02-02  9:53   ` [PATCH v5 00/15] net/idpf: introduce idpf common model beilei.xing
2023-02-02  9:53     ` [PATCH v5 01/15] common/idpf: add adapter structure beilei.xing
2023-02-02  9:53     ` [PATCH v5 02/15] common/idpf: add vport structure beilei.xing
2023-02-02  9:53     ` [PATCH v5 03/15] common/idpf: add virtual channel functions beilei.xing
2023-02-02  9:53     ` [PATCH v5 04/15] common/idpf: introduce adapter init and deinit beilei.xing
2023-02-02  9:53     ` [PATCH v5 05/15] common/idpf: add vport init/deinit beilei.xing
2023-02-02  9:53     ` [PATCH v5 06/15] common/idpf: add config RSS beilei.xing
2023-02-02  9:53     ` [PATCH v5 07/15] common/idpf: add irq map/unmap beilei.xing
2023-02-02  9:53     ` [PATCH v5 08/15] common/idpf: support get packet type beilei.xing
2023-02-02  9:53     ` [PATCH v5 09/15] common/idpf: add vport info initialization beilei.xing
2023-02-02  9:53     ` [PATCH v5 10/15] common/idpf: add vector flags in vport beilei.xing
2023-02-02  9:53     ` [PATCH v5 11/15] common/idpf: add rxq and txq struct beilei.xing
2023-02-02  9:53     ` [PATCH v5 12/15] common/idpf: add helper functions for queue setup and release beilei.xing
2023-02-02  9:53     ` [PATCH v5 13/15] common/idpf: add Rx and Tx data path beilei.xing
2023-02-02  9:53     ` [PATCH v5 14/15] common/idpf: add vec queue setup beilei.xing
2023-02-02  9:53     ` [PATCH v5 15/15] common/idpf: add avx512 for single queue model beilei.xing
2023-02-03  9:43     ` [PATCH v6 00/19] net/idpf: introduce idpf common model beilei.xing
2023-02-03  9:43       ` [PATCH v6 01/19] common/idpf: add adapter structure beilei.xing
2023-02-03  9:43       ` [PATCH v6 02/19] common/idpf: add vport structure beilei.xing
2023-02-03  9:43       ` [PATCH v6 03/19] common/idpf: add virtual channel functions beilei.xing
2023-02-03  9:43       ` [PATCH v6 04/19] common/idpf: introduce adapter init and deinit beilei.xing
2023-02-03  9:43       ` [PATCH v6 05/19] common/idpf: add vport init/deinit beilei.xing
2023-02-03  9:43       ` [PATCH v6 06/19] common/idpf: add config RSS beilei.xing
2023-02-03  9:43       ` [PATCH v6 07/19] common/idpf: add irq map/unmap beilei.xing
2023-02-03  9:43       ` [PATCH v6 08/19] common/idpf: support get packet type beilei.xing
2023-02-03  9:43       ` [PATCH v6 09/19] common/idpf: add vport info initialization beilei.xing
2023-02-03  9:43       ` [PATCH v6 10/19] common/idpf: add vector flags in vport beilei.xing
2023-02-03  9:43       ` [PATCH v6 11/19] common/idpf: add rxq and txq struct beilei.xing
2023-02-03  9:43       ` [PATCH v6 12/19] common/idpf: add helper functions for queue setup and release beilei.xing
2023-02-03  9:43       ` [PATCH v6 13/19] common/idpf: add Rx and Tx data path beilei.xing
2023-02-03  9:43       ` [PATCH v6 14/19] common/idpf: add vec queue setup beilei.xing
2023-02-03  9:43       ` [PATCH v6 15/19] common/idpf: add avx512 for single queue model beilei.xing
2023-02-03  9:43       ` [PATCH v6 16/19] common/idpf: refine API name for vport functions beilei.xing
2023-02-03  9:43       ` [PATCH v6 17/19] common/idpf: refine API name for queue config module beilei.xing
2023-02-03  9:43       ` [PATCH v6 18/19] common/idpf: refine API name for data path module beilei.xing
2023-02-03  9:43       ` [PATCH v6 19/19] common/idpf: refine API name for virtual channel functions beilei.xing
2023-02-06  2:58       ` [PATCH v6 00/19] net/idpf: introduce idpf common model Zhang, Qi Z
2023-02-06  6:16         ` Xing, Beilei
2023-02-06  5:45       ` [PATCH v7 " beilei.xing
2023-02-06  5:46         ` [PATCH v7 01/19] common/idpf: add adapter structure beilei.xing
2023-02-06  5:46         ` [PATCH v7 02/19] common/idpf: add vport structure beilei.xing
2023-02-06  5:46         ` [PATCH v7 03/19] common/idpf: add virtual channel functions beilei.xing
2023-02-06  5:46         ` [PATCH v7 04/19] common/idpf: introduce adapter init and deinit beilei.xing
2023-02-06  5:46         ` [PATCH v7 05/19] common/idpf: add vport init/deinit beilei.xing
2023-02-06  5:46         ` [PATCH v7 06/19] common/idpf: add config RSS beilei.xing
2023-02-06  5:46         ` [PATCH v7 07/19] common/idpf: add irq map/unmap beilei.xing
2023-02-06  5:46         ` [PATCH v7 08/19] common/idpf: support get packet type beilei.xing
2023-02-06  5:46         ` [PATCH v7 09/19] common/idpf: add vport info initialization beilei.xing
2023-02-06  5:46         ` [PATCH v7 10/19] common/idpf: add vector flags in vport beilei.xing
2023-02-06  5:46         ` [PATCH v7 11/19] common/idpf: add rxq and txq struct beilei.xing
2023-02-06  5:46         ` [PATCH v7 12/19] common/idpf: add helper functions for queue setup and release beilei.xing
2023-02-06  5:46         ` [PATCH v7 13/19] common/idpf: add Rx and Tx data path beilei.xing
2023-02-06  5:46         ` [PATCH v7 14/19] common/idpf: add vec queue setup beilei.xing
2023-02-06  5:46         ` [PATCH v7 15/19] common/idpf: add avx512 for single queue model beilei.xing
2023-02-06  5:46         ` [PATCH v7 16/19] common/idpf: refine API name for vport functions beilei.xing
2023-02-06  5:46         ` [PATCH v7 17/19] common/idpf: refine API name for queue config module beilei.xing
2023-02-06  5:46         ` [PATCH v7 18/19] common/idpf: refine API name for data path module beilei.xing
2023-02-06  5:46         ` [PATCH v7 19/19] common/idpf: refine API name for virtual channel functions beilei.xing
2023-02-06 13:15         ` [PATCH v7 00/19] net/idpf: introduce idpf common model Zhang, Qi Z
